
1NC NG
CO2 DA
Humans are the biggest CO2 emitter—anthropogenic warming outweighs the natural
Borenstein ’12 (Seth Borenstein, Associated Press, May 31, 2012, “Climate Change: Arctic passes 400 parts per million milestone,”
Christian Science Monitor, http://www.csmonitor.com/Science/2012/0531/Climate-change-Arctic-passes-400-parts-per-million-milestone/(page)/2)
The world's air has reached what scientists call a troubling new milestone for carbon dioxide, the main
global warming pollutant. Monitoring stations across the Arctic this spring are measuring more than 400
parts per million of the heat-trapping gas in the atmosphere. The number isn't quite a surprise, because it's been rising at an accelerating
pace. Years ago, it passed the 350 ppm mark that many scientists say is the highest safe level for carbon
dioxide. It now stands globally at 395. So far, only the Arctic has reached that 400 level, but the rest of
the world will follow soon. "The fact that it's 400 is significant," said Jim Butler, global monitoring director at the National Oceanic and
Atmospheric Administration's Earth System Research Lab in Boulder, Colo. "It's just a reminder to everybody that we haven't fixed this and we're still in trouble." Carbon dioxide
is the chief greenhouse gas and stays in the atmosphere for 100 years. Some carbon dioxide is natural, mainly from decomposing
dead plants and animals. Before the Industrial Age, levels were around 275 parts per million. For more than 60 years, readings have been in the 300s, except in urban areas, where levels are skewed. The burning of fossil fuels, such as coal for electricity and oil for gasoline, has caused the overwhelming bulk of the man-made increase in carbon in the air, scientists say. It's been at least 800,000 years — probably more — since Earth saw carbon dioxide levels in the 400s, Butler and other climate scientists said. Until now. Readings are coming in at 400 and higher all over the Arctic. They've been recorded in Alaska, Greenland, Norway, Iceland and even Mongolia. But levels change with the seasons and will drop a bit in the summer, when plants suck up carbon dioxide, NOAA scientists said. So the yearly average for those northern stations likely will be
lower and so will the global number. Globally, the average carbon dioxide level is about 395 parts per
million but will pass the 400 mark within a few years, scientists said. The Arctic is the leading indicator in
global warming, both in carbon dioxide in the air and effects, said Pieter Tans, a senior NOAA scientist. "This is the first
time the entire Arctic is that high," he said. Tans called reaching the 400 number "depressing," and
Butler said it was "a troubling milestone." "It's an important threshold," said Carnegie Institution ecologist Chris Field, a scientist
who helps lead the Nobel Prize-winning Intergovernmental Panel on Climate Change. "It is an indication that we're in a different world." Ronald
Prinn, an atmospheric sciences professor at the Massachusetts Institute of Technology, said 400 is more a psychological milestone than a scientific one. We think in
hundreds, and "we're poking our heads above 400," he said. Tans said the readings show how much the
Earth's atmosphere and its climate are being affected by humans. Global carbon dioxide emissions from
fossil fuels hit a record high of 34.8 billion tons in 2011, up 3.2 percent, the International Energy Agency announced last week. The
agency said it's becoming unlikely that the world can achieve the European goal of limiting global warming to just 2 degrees based on increasing pollution and greenhouse gas levels. "The news
today, that some stations have measured concentrations above 400 ppm in the atmosphere, is further evidence that the world's political leaders — with a few honorable exceptions — are
failing catastrophically to address the climate crisis," former Vice President Al Gore, the highest-profile campaigner against global warming, said in an email. "History will not understand or
forgive them." But political dynamics in the United States mean there's no possibility of significant restrictions on man-made greenhouse gases no matter what the levels are in the air, said
Jerry Taylor, a senior fellow of the libertarian Cato Institute. "These milestones are always worth noting," said economist Myron Ebell at the conservative Competitive Enterprise Institute. "As
carbon dioxide levels have continued to increase, global temperatures flattened out, contrary to the models" used by climate scientists and the United Nations. He contends temperatures
have not risen since 1998, which was unusually hot. Temperature records contradict that claim. Both 2005 and 2010 were warmer than 1998, and the entire decade of 2000 to 2009 was the
warmest on record, according to NOAA.
Food crises are beginning—CO2 emissions are key to preventing billions of deaths
from global food shortages
Idso 11 (Sherwood Idso, former research physicist for the Department of Agriculture, 7/6/11, “Meeting
the Food Needs of a Growing World Population,”
http://www.co2science.org/articles/V14/N27/EDIT.php).
Parry and Hawkesford (2010) introduce their study of the global problem by noting that "food
production needs to increase
50% by 2030 and double by 2050 to meet projected demands," and they note that at the same time the demand for
food is increasing, production is progressively being limited by "non-food uses of crops and cropland," such as the production of biofuels,
stating that in their homeland of the UK, "by 2015 more than a quarter of wheat grain may be destined for bioenergy production," which surely
must strike one as both sad and strange, when they also note that "currently, at
least one billion people are chronically
malnourished and the situation is deteriorating," with more people "hungrier now than at the start of the millennium." So
what to do about it: that is the question the two researchers broach in their review of the sad situation. They begin by describing the all-important process of photosynthesis, by which the earth's plants
"convert light energy into chemical energy, which
is used in the assimilation of atmospheric CO2 and the formation of sugars that fuel growth and
yield," which phenomena make this natural and life-sustaining process, in their words, "a major target for improving crop productivity both
via conventional breeding and biotechnology." Next to a plant's need for carbon dioxide comes its need for water, the availability of
which, in the words of Parry and Hawkesford, "is the major constraint on world crop productivity." And they state that
"since more than 80% of the [world's] available water is used for agricultural production, there is little opportunity to use additional water for
crop production, especially because as populations increase, the demand to use water for other activities also increases." Hence, they rightly
conclude that "a
real and immediate challenge for agriculture is to increase crop production with less
available water." Enlarging upon this challenge, they give an example of a success story: the Australian wheat variety
'Drysdale', which gained its fame "because it uses water more efficiently." This valued characteristic is achieved
"by slightly restricting stomatal aperture and thereby the loss of water from the leaves." They note,
however, that this ability "reduces photosynthetic performance slightly under ideal conditions," but they say it enables plants to
"have access to water later in the growing season thereby increasing total photosynthesis over the life
of the crop." Of course, Drysdale is but one variety of one crop; and the ideal goal would be to get nearly all varieties
of all crops to use water more efficiently. And that goal can actually be reached by doing nothing, by merely halting the efforts
of radical environmentalists to deny earth's carbon-based life forms -- that's all of us and the rest of the earth's plants and animals -- the extra
carbon we and they need to live our lives to the fullest. This is because allowing
the air's CO2 content to rise in response to
the burning of fossil fuels naturally causes the vast majority of earth's plants to progressively reduce
the apertures of their stomata and thereby lower the rate at which water escapes through them to
the air. And the result is even better than that produced by the breeding of Drysdale, because the extra CO2 in the air more than
overcomes the photosynthetic reduction that results from the partial closure of plant stomatal apertures, allowing even more yield to
be produced per unit of water transpired in the process. Yet man can make the situation better still, by breeding and
selecting crop varieties that perform better under higher atmospheric CO2 concentrations than the varieties we currently rely upon, or he can
employ various technological means of altering them to do so. Truly, we can succeed, even where "the United Nations Millennium
Development Goal of substantially reducing the world's hungry by 2015 will not be met," as Parry and Hawkesford accurately inform us. And
this truly seems to us the moral thing to do, when "at least one billion people are chronically malnourished and the situation
is deteriorating," with more people "hungrier now than at the start of the millennium.”
Ice Age
CO2 and GHGs are vital to prevent global cooling and the Ice Age.
Lacis et al., 10 (Andrew A., PhD in Physics from the University of Iowa and NASA scientist, with Gavin
A. Schmidt, NASA scientist @ Goddard Space Flight Center, Sciences and Exploration Directorate, Earth
Sciences Division, David Rind, NASA scientist and PhD, and Reto A. Ruedy, NASA scientist and PhD,
"Atmospheric CO2: Principal Control Knob Governing Earth's Temperature", Science Magazine, October
15, http://www.sciencemag.org/content/330/6002/356.full.pdf)
If the global atmospheric temperatures were to fall to as low as TS = TE, the Clausius-Clapeyron relation would imply that
the sustainable amount of atmospheric water vapor would become less than 10% of the current atmospheric
value. This would result in (radiative) forcing reduced by ~30 W/m2, causing much of the remaining water vapor to precipitate, thus
enhancing the snow/ice albedo to further diminish the absorbed solar radiation. Such a condition
would inevitably lead to runaway glaciation, producing an ice ball Earth. Claims that removing all CO2
from the atmosphere “would lead to a 1°C decrease in global warming” (7), or “by 3.53°C when 40% cloud cover is
assumed” (8) are still being heard. A clear demonstration is needed to show that water vapor and clouds do indeed behave as fast
feedback processes and that their atmospheric distributions are regulated by the sustained radiative forcing due to the noncondensing GHGs.
To this end, we performed a simple climate experiment with the GISS 2° × 2.5° AR5 version of ModelE, using the Q-flux ocean with a mixed-layer depth of 250 m, zeroing out all the noncondensing GHGs and aerosols. The results, summarized in Fig. 2, show unequivocally that the
radiative forcing by noncondensing GHGs is essential to sustain the atmospheric temperatures that
are needed for significant levels of water vapor and cloud feedback. Without this noncondensable
GHG forcing, the physics of this model send the climate of Earth plunging rapidly and irrevocably to an
icebound state, though perhaps not to total ocean freezeover. [Fig. 2 caption: Time evolution of global surface temperature, TOA net flux, column water vapor, planetary albedo, sea ice cover, and cloud cover, after the zeroing out of the noncondensing GHGs. The model used in the experiment is the GISS 2°× 2.5° AR5 version of ModelE, with the Q-flux ocean and a mixed-layer depth of 250 m. Model initial conditions are for a preindustrial atmosphere. Surface temperature and TOA net flux use the left-hand scale.] The scope of the climate impact becomes apparent in
just 10 years. During the first year alone, global mean surface temperature falls by 4.6°C. After 50 years, the global temperature stands at –
21°C, a decrease of 34.8°C. Atmospheric water vapor is at ~10% of the control climate value (22.6 to 2.2 mm). Global cloud cover increases from
its 58% control value to more than 75%, and the global sea ice fraction goes from 4.6% to 46.7%, causing the planetary albedo of Earth to also
increase from ~29% to 41.8%. This
has the effect of reducing the absorbed solar energy to further exacerbate
the global cooling.
After 50 years, a third of the ocean surface still remains ice-free, even though the global surface temperature is
colder than –21°C. At tropical latitudes, incident solar radiation is sufficient to keep the ocean from freezing. Although this thermal oasis within
an otherwise icebound Earth appears to be stable, further calculations with an interactive ocean would be needed to verify the potential for
long-term stability. The surface temperatures in Fig. 3 are only marginally warmer than 1°C within the remaining low-latitude heat island. From
the foregoing, it is clear
that CO2 is the key atmospheric gas that exerts principal control over the strength
of the terrestrial greenhouse effect. Water vapor and clouds are fast-acting feedback effects, and as
such are controlled by the radiative forcings supplied by the noncondensing GHGs. There is telling
evidence that atmospheric CO2 also governs the temperature of Earth on geological time scales,
suggesting the related question of what the geological processes that control atmospheric CO2 are.
The geological evidence of glaciation at tropical latitudes from 650 to 750 million years ago supports
the snowball Earth hypothesis (9), and by inference, that escape from the snowball Earth condition is
also achievable.
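The Clausius-Clapeyron claim quoted above (that cooling from the current surface temperature TS down to the effective emission temperature TE would cut sustainable atmospheric water vapor to less than 10% of its current value) can be sanity-checked with a short back-of-the-envelope calculation. The sketch below is not from Lacis et al.; it assumes the standard August-Roche-Magnus approximation for saturation vapor pressure and textbook values of roughly 15°C for TS and -18°C for TE.

```python
# Hedged illustration only: a rough Clausius-Clapeyron check of the ~10% water-vapor
# figure quoted in the Lacis et al. card. The Magnus approximation and the 15 C / -18 C
# temperature values are assumptions for this sketch, not taken from the paper.
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """August-Roche-Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.1094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

T_S = 15.0    # assumed current global mean surface temperature, deg C (~288 K)
T_E = -18.0   # assumed effective emission temperature, deg C (~255 K)

e_warm = saturation_vapor_pressure_hpa(T_S)
e_cold = saturation_vapor_pressure_hpa(T_E)

print(f"Saturation vapor pressure at {T_S:.0f} C: {e_warm:.2f} hPa")
print(f"Saturation vapor pressure at {T_E:.0f} C: {e_cold:.2f} hPa")
print(f"Cold/warm ratio: {e_cold / e_warm:.1%} (same order as the ~10% figure in the card)")
```

On these assumed values the ratio comes out near 9%, the same order of magnitude as the card's reported drop in column water vapor from 22.6 to 2.2 mm.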
The Ice Age is coming and will collapse civilization – reducing CO2 causes extinction
Deming, 9 (David, geophysicist and associate professor of Arts and Sciences @ the University of
Oklahoma, “The Coming Ice Age,” May 13,
http://www.americanthinker.com/2009/05/the_coming_ice_age.html)
The Great Famine was followed by the Black Death, the greatest disaster ever to hit the human race.
One-third of the human race died; terror and anarchy prevailed. Human civilization as we know it is
only possible in a warm interglacial climate. Short of a catastrophic asteroid impact, the greatest threat
to the human race is the onset of another ice age.¶ The oscillation between ice ages and interglacial
periods is the dominant feature of Earth's climate for the last million years. But the computer models
that predict significant global warming from carbon dioxide cannot reproduce these temperature
changes. This failure to reproduce the most significant aspect of terrestrial climate reveals an
incomplete understanding of the climate system, if not a nearly complete ignorance.¶ Global warming
predictions by meteorologists are based on speculative, untested, and poorly constrained computer
models. But our knowledge of ice ages is based on a wide variety of reliable data, including cores from
the Greenland and Antarctic ice sheets. In this case, it would be perspicacious to listen to the
geologists, not the meteorologists. By reducing our production of carbon dioxide, we risk hastening
the advent of the next ice age . Even more foolhardy and dangerous is the Obama administration's
announcement that they may try to cool the planet through geoengineering. Such a move in the middle
of a cooling trend could provoke the irreversible onset of an ice age. It is not hyperbole to state that
such a climatic change would mean the end of human civilization as we know it.¶ Earth's climate is
controlled by the Sun. In comparison, every other factor is trivial. The coldest part of the Little Ice Age
during the latter half of the seventeenth century was marked by the nearly complete absence of
sunspots. And the Sun now appears to be entering a new period of quiescence. August of 2008 was
the first month since the year 1913 that no sunspots were observed. As I write, the sun remains quiet.
We are in a cooling trend. The areal extent of global sea ice is above the twenty-year mean.¶ We have
heard much of the dangers of global warming due to carbon dioxide. But the potential danger of any
potential anthropogenic warming is trivial compared to the risk of entering a new ice age. Public policy
decisions should be based on a realistic appraisal that takes both climate scenarios into consideration.
Case
Econ
Shale is sustainable – peer reviewed, comprehensive, and contains control variables
UT, 2/28/13, "New, Rigorous Assessment of Shale Gas Reserves Forecasts Reliable Supply from Barnett
Shale Through 2030," 2-28, http://www.utexas.edu/news/2013/02/28/new-rigorous-assessment-of-shale-gas-reserves-forecasts-reliable-supply-from-barnett-shale-through-2030/
AUSTIN, Texas —
A new study, believed to be the most thorough assessment yet of the natural gas
production potential of the Barnett Shale, foresees slowly declining production through the year 2030 and beyond and
total recovery at greater than three times cumulative production to date. This forecast has broad implications for
the future of U.S. energy production and policy. The study, conducted by the Bureau of Economic Geology (BEG) at
The University of Texas at Austin and funded by the Alfred P. Sloan Foundation, integrates engineering,
geology and economics in a numerical model that allows for scenario testing based on many input
parameters. In the base case, the study forecasts a cumulative 44 trillion cubic feet (TCF) of recoverable reserves from the Barnett, with
annual production declining in a predictable curve from the current peak of 2 TCF per year to about 900 billion cubic feet (BCF) per year by
2030. This
forecast falls in between some of the more optimistic and pessimistic predictions of production from the Barnett and
suggests that the formation will continue to be a major contributor to U.S. natural gas production through
2030. The Bureau of Economic Geology will be completing similar studies of three other major U.S. shale gas basins by the end of this year.
The BEG study examines actual production data from more than 16,000 individual wells drilled in the
Barnett play through mid-2011. Other assessments of the Barnett have relied on aggregate views of
average production, offering a “top down” view of production, says Scott Tinker, director of the BEG and co-principal
investigator for the study. The BEG study, in contrast, takes a “bottom up” approach, starting with the
production history of every well and then determining what areas remain to be drilled. The result is a
more accurate and comprehensive view of the basin. The BEG team enhanced the view by identifying and assessing the
potential in 10 production quality tiers and then using those tiers to more accurately forecast future production. The economic feasibility of
production varies tremendously across the basin depending upon production quality tier. The study’s model centers around a base case
assuming average natural gas prices of $4 for a thousand cubic feet but allows for variations in price, volume drained by each well, economic
limit of a well, advances in technology, gas plant processing incentives and many other factors to determine how much natural gas operators
will be able to extract economically. “We
have created a very dynamic and granular model that accounts for the
key geologic, engineering and economic parameters, and this adds significant rigor to the forecasts,” said
Svetlana Ikonnikova, energy economist at the BEG and co-principal investigator of the project. Whereas thickness and porosity affect the
reserves greatly, price is a dominant factor affecting production. While the BEG model shows the correlation between price and production, it
suggests that price sensitivity is not overly dramatic, at least in the early phase of a formation’s development. This is because there are still
many locations to drill in the better rock, explains Tinker, which is cost effective even at lower prices. “Drilling in the better rock won’t last
forever,” says Tinker, “but there are still a few more years of development remaining in the better rock quality areas.” The data in the model
stop at the end of 2010, after approximately 15,000 wells were drilled in the field. In the base case, the assessment forecasts another 13,000
wells would be drilled through 2030. In 2011 and 2012 more than 2900 wells were actually drilled, in line with the forecast, leaving just over
10,000 wells remaining to be drilled through 2030 in the base case. Wells range widely in their ultimate recovery of natural gas, a factor the
study takes into account. A
new method of estimating production for each well, based on the physics of the
system, was integral to the project and should offer a more accurate method of forecasting production
declines in shale gas wells. This method, along with several other components in the work flow, has been submitted in several
manuscripts to peer-reviewed journals. The papers have already undergone a form of professional peer review
built into the BEG research process. Before submitting the papers to journals, the BEG team invited an
independent review panel with members from government, industry and academia to critique their
research. At an open day for academics and industry scientists, 100 attendees were invited to offer
additional feedback. Scientists and engineers from two of the larger producers in the Barnett — Devon Energy and ExxonMobil —
offered critical feedback on the methodology during two in-house corporate review days. Finally, the BEG hired a private consulting firm to
individually critique the draft manuscripts and offer suggestions for improvement of the work. Overall, the
rigorous assessment of
the country’s second most productive shale gas formation reaffirms the transformative, long-term
impact of shale and other unconventional reservoirs of oil and gas on U.S. energy markets. Tinker compares
the expansion of hydrocarbon reserves from shale gas to the expansion of global oil reserves from deep-water exploration that has happened in
the past several decades.
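As a rough way to read the headline numbers in the card (annual production falling from a peak of about 2 TCF per year to roughly 900 BCF per year by 2030), the sketch below works out the average decline rate those two endpoints imply. It assumes a simple constant-percentage (exponential) decline and a 2012 starting year; the BEG model itself is a per-well, tiered economic model, so this is only an illustration of the quoted figures, not a reconstruction of the study's method.

```python
# Hedged illustration only: what the base-case endpoints quoted above imply for an
# average constant-percentage decline. The 2012 start year and the exponential form
# are assumptions for this sketch, not the BEG study's methodology.
peak_tcf_per_year = 2.0    # approximate current peak annual production (TCF/yr), per the card
final_tcf_per_year = 0.9   # forecast annual production in 2030 (TCF/yr), per the card
start_year, end_year = 2012, 2030
years = end_year - start_year

# Average annual decline rate consistent with the two endpoints above
decline_rate = 1.0 - (final_tcf_per_year / peak_tcf_per_year) ** (1.0 / years)
print(f"Implied average annual decline: {decline_rate:.1%}")

# Toy constant-percentage decline curve and its cumulative output over 2012-2030
cumulative = 0.0
for year in range(start_year, end_year + 1):
    production = peak_tcf_per_year * (1.0 - decline_rate) ** (year - start_year)
    cumulative += production
print(f"Cumulative 2012-2030 under this toy curve: ~{cumulative:.0f} TCF")
```

The implied average decline is a bit over 4% per year. The toy cumulative total is not directly comparable to the study's 44 TCF base case, which reflects the study's own accounting of prior production and post-2030 recovery.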
Natural gas causes economic decline
Garvin Jabusch, Chief Investment Officer of Green Alpha Advisors and former Director of Forward
Sustainable Investments, a business unit of Forward Management LLC; previously served as
Vice President of Strategic Services at Morgan Stanley; has an MBA in international management
and finance and a Ph.D. in physical anthropology and archaeology and has written and contributed to
many Environmental Impact Studies. March 2013, “Your Portfolio is Hooked on Fossil Fuels,”
http://www.altenergystocks.com/archives/2013/03/your_portfolio_is_hooked_on_fossil_fuels_1.html
This is not only morally questionable, it’s also likely to lead to disappointing returns. If the goal of
investing is to grow assets, accrue wealth, and prepare for our futures, then it’s key to invest in
companies, industries and sectors that will still be there and growing in that future. Similarly, our
collective macroeconomic goals shouldn’t be to keep the economy ticking along for the next quarter or
current political term, but to keep it healthy so we may thrive for decades if not centuries. Fossil fuels
companies fail on both these fronts: they face an uphill battle trying to grow into the medium and long
term, and, for many reasons, they also hinder our chances of achieving economy-wide long-term
economic growth, which limits your and my chances of positive portfolio returns. We at Green Alpha
believe that fossil fuels have no place in portfolios designed to capitalize on the emerging, sustainable,
green, thriving next economy. We picture and model, rather, a next economy comprised of enterprises
whose technologies, material inputs, and/or practices have not proven deleterious to the environmental
underpinnings of the global economy; and, equally important, those whose businesses have a better
than average probability of keeping economic production running close to capacity (meaning close to
full employment and therefore causing sufficient economic demand to keep economies healthy).
Healthy, innovative economies made up of healthy companies have always proven better for portfolio
performance. Next economy companies are innovation leaders in all areas, not just in the energy
industries; they exist now and will continue to emerge in all economic sectors, providing all products,
goods and services required to have a fully functioning, even thriving global economy. And we believe
that next economy companies will continue to win market share from legacy firms, and that they
therefore provide superior odds of delivering long term competitive returns. There are several key
reasons this should be the case. As a global economy, we can no longer afford to wait for our basic
economic underpinnings to break before we fix them. Too many issues, economy wide, from agriculture
to water to warming, all damaged by fossil fuels, have been ignored and left to degrade. By now it’s
clear that fossil fuels, including natural gas, do not result in us growing a thriving next economy.
Between greenhouse gas emissions, toxic emissions (such as mercury), accidents, spills and
contamination of soil, groundwater and oceans, to say they have proven deleterious to our
environmental-macroeconomic underpinnings is an understatement. We need to make sure the earth’s
basic systems - which global economies rely upon - keep on functioning. And the time to do that is now,
while they’re still working.
Economic collapse inevitable
Li 10 Department of Economics, University of Utah (Minqi, ¶ “The End of the “End of History”: The Structural Crisis of Capitalism and the Fate
of Humanity”, Science & Society, Vol. 74, Symposium: Capitalism and Crisis in the 21st Century, pp. 290-305,
http://www.econ.utah.edu/~mli/Economics%207004/Li_The%20End%20of%20the%20End%20of%20History.pdf )
In 2001, the U. S. stock market bubble started to collapse, after years of “new economy” boom. The Bush
administration took advantage of the psychological shock of 9/11, and undertook a series of “preemptive wars” (first in Afghanistan and then in
Iraq) that ushered in a new era of intensified inter-state conflicts. Towards the end of 2001, Argentina, which was regarded as a neoliberal model country, was hit by a devastating financial crisis. Decades of neoliberalism had not only undermined the living standards of the working classes, but also destroyed the material fortunes of the urban middle classes (which
remained a key social base for neoliberalism in Latin America until the 1990s). After the Argentine crisis, neoliberalism completely lost political
legitimacy in Latin America. This paved the way for the rise of several socialist-oriented governments on the continent. After the 2001 global recession, the global economy actually entered into a mini-golden age. The big semi-peripheral economies, the so-called “BRICs” (Brazil, Russia, India, and China) became the most dynamic sector. The neoliberal global economy was
fueled by the super-exploitation of the massive cheap labor force in the semi-periphery (especially in China). The
strategy worked, to the extent that it generated massive amounts of surplus value that could be shared by the global
capitalist classes. But it also created a massive “realization problem.” That is, as the workers in the “emerging
markets” were deprived of purchasing power, on a global scale, there was a persistent lack of effective demand for
the industrial output produced in China and the rest of the semi-periphery. After 2001, the problem was
addressed through increasingly higher levels of debt-financed consumption in the advanced capitalist countries (especially in
the United States). The neoliberal strategy was economically and ecologically unsustainable. Economically, the
debt-financed consumption in the advanced capitalist countries could not go on indefinitely. Ecologically, the rise of
the BRICs greatly accelerated resource depletion and environmental degradation on a global scale. The
global ecological system is now on the verge of total collapse. The world is now in the midst of a
prolonged period of economic and political instability that could last several decades. In the past, the
capitalist world system had responded to similar crises and managed to undertake successful
restructurings. Is it conceivable that the current crisis will result in a similar restructuring within the system that will bring about a new
global “New Deal”? In three respects, the current world historical conjuncture is fundamentally different from that
of 1945. Back in 1945, the United States was the indisputable hegemonic power. It enjoyed overwhelming industrial, financial, and military
advantages relative to the other big powers and, from the capitalist point of view, its national interests largely coincided with the world
system’s common and long-term interests. [Footnote 4: On the decline of American hegemony, see Arrighi, 2007; Li, 2008, 113-138; Wallerstein, 2006.]
Now,
U. S. hegemony is in irreversible decline. But none of the other big powers is in a position to replace
the United States and function as an effective hegemonic power. Thus, exactly at a time when the global capitalist system is
in deep crisis, the system is also deprived of effective leadership.4 In 1945, the construction of a global “New Deal” involved
primarily accommodating the economic and political demands of the western working classes and the non-western elites (the national
bourgeoisies and the westernized intellectuals). In the current conjuncture, any
new global “New Deal” will have to
incorporate not only the western working classes but also the massive, non-western working classes. Can the capitalist
world system afford such a new “New Deal” if it could not even afford the old one? Most importantly, back in 1945,
the world’s resources remained abundant and cheap, and there was still ample global space for environmental pollution. Now, not only
has resources depletion reached an advanced stage, but the world has also virtually run out of space for any further
environmental pollution.
Collapse now is key to prevent extinction
Barry 8 – President and Founder of Ecological Internet, Ph.D. in Land Resources from U-Wisconsin-Madison (Glen, “Economic Collapse And
Global Ecology”, http://www.countercurrents.org/barry140108.htm)
Humanity and the Earth are faced with an enormous conundrum -- sufficient climate policies enjoy political support
only in times of rapid economic growth. Yet this growth is the primary factor driving greenhouse gas emissions and
other environmental ills. The growth machine has pushed the planet well beyond its ecological carrying
capacity, and unless constrained, can only lead to human extinction and an end to complex life. With
every economic downturn, like the one now looming in the United States, it becomes more difficult
and less likely that policy sufficient to ensure global ecological sustainability will be embraced. This essay
explores the possibility that from a biocentric viewpoint of needs for long-term global ecological, economic and social sustainability; it
would be better for the economic collapse to come now rather than later . Economic growth is a deadly
disease upon the Earth, with capitalism as its most virulent strain. Throw-away consumption and explosive population
growth are made possible by using up fossil fuels and destroying ecosystems. Holiday shopping numbers are
covered by media in the same breath as Arctic ice melt, ignoring their deep connection. Exponential economic growth destroys
ecosystems and pushes the biosphere closer to failure. Humanity has proven itself unwilling and
unable to address climate change and other environmental threats with necessary haste and ambition.
Action on coal, forests, population, renewable energy and emission reductions could be taken now at net benefit to the economy. Yet, the
losers -- primarily fossil fuel industries and their bought oligarchy -- successfully resist futures not dependent upon their deadly products.
Perpetual economic growth, and necessary climate and other ecological policies, are fundamentally
incompatible. Global ecological sustainability depends critically upon establishing a steady state
economy, whereby production is right-sized to not diminish natural capital. Whole industries like coal and natural forest logging will be
eliminated even as new opportunities emerge in solar energy and environmental restoration. This critical transition to both economic
and ecological sustainability is simply not happening on any scale. The challenge is how to carry out necessary
environmental policies even as economic growth ends and consumption plunges. The natural response is going to be liquidation of even more
life-giving ecosystems, and jettisoning of climate policies, to vainly try to maintain high growth and personal consumption. We know that
humanity must reduce greenhouse gas emissions by at least 80% over coming decades. How will this and other
necessary climate mitigation strategies be maintained during years of economic downturns, resource wars, reasonable
demands for equitable consumption, and frankly, the weather being more pleasant in some places? If efforts to reduce emissions and move to
a steady state economy fail; the collapse of ecological, economic and social systems is assured. Bright greens take the continued existence of a
habitable Earth with viable, sustainable populations of all species including humans as the ultimate truth and the meaning of life. Whether this
is possible in a time of economic collapse is crucially dependent upon whether enough ecosystems and resources remain post collapse to allow humanity to recover and reconstitute sustainable, relocalized societies. It may be better for the Earth and humanity's future that economic collapse comes sooner rather than later, while more ecosystems and opportunities to return to nature's fold exist.
Warming
Warming won't cause diseases
Bell 11 - a professor of architecture and holds an endowed professorship in space architecture at the
University of Houston. An internationally recognized commentator on scientific and public policy issues,
Bell has written extensively on climate and energy policy and has been featured in many prominent
national and international newspapers, magazines, and television programs (Larry, “Climate of
Corruption: Politics and Power Behind The Global Warming Hoax”
http://books.google.com/books?id=CS8uzm3cvUC&dq=%22warming+%22+impacts+on+%22coral+reefs%22+exaggerated&lr=&source=gbs_navlinks_s, PZ)
Okay, let’s try examining the threat of global warming causing really nasty tropical diseases to spread, just as An Inconvenient Truth
dramatically warns. That should warrant some fear. Well, maybe not. At least Paul
Reiter, a medical entomologist and
professor at the Pasteur Institute in Paris, doesn’t think so. He is one of the scientists featured in the film The Great
Global Warming Swindle, produced by WAG-TV in Great Britain in response to the Gore movie. Dr. Reiter was also a contributory author of the
IPCC 2001 report who resigned because he regarded the processes to be driven by agenda rather than science. He later threatened to sue the
IPCC if they didn’t remove his name from the report he didn’t wish to be associated with.4’ Professor
Reiter’s career has been
devoted primarily to studying such mosquito-borne diseases as malaria, dengue, yellow fever, and West Nile virus, among others. He takes special issue with any notion that global warming is spreading such illnesses by extending the carriers to formerly colder locales where they didn’t previously exist. In reference to statements in An Inconvenient Truth that the African cities
of Nairobi and Harare were founded above the mosquito line to avoid malaria, and that now the mosquitoes are moving to those higher
altitudes, Dr. Reiter comments, “Gore
is completely wrong here—malaria has been documented at an altitude of
8,200 feet—Nairobi and Harare are at altitudes of about 4,920 feet. The new altitudes of malaria are
lower than those recorded 100 years ago. None of the 30 so-called new diseases Gore references are
attributable to global warming. None.”44 Although few people seem to realize it, malaria was once rampant throughout cold parts of Europe, the US, and Canada, extending into the 20th century. It was one of the major causes of troop
morbidity during the Russian/Finnish War of the 1940s, and an earlier massive epidemic in the 1920s
went up through Siberia and into Archangel on the White Sea near the Arctic Circle. Still, many continue to regard malaria and dengue as top
climate change dangers—far more dangerous than sea level rise.
Warming’s not an existential risk – adaptation, mitigation, geoengineering, and
empirically no runaway.
Muller 12 (Jonatas, writer on ethics and existential risks, 2012, “Analysis of Existential Risks”, http://www.jonatasmuller.com/x-risks.pdf)
A runaway global warming, one in which the temperature rise could be a self-reinforcing process,
has been cited as an existential risk. Predictions show that the Arctic ice could melt completely within a few years, releasing
methane currently trapped in the sea bed (Walter et al. 2007). Methane is a more powerful greenhouse gas than carbon dioxide. Abrupt
methane releases from frozen regions may have been involved in two extinction events on this planet, 55 million years ago in the Paleocene–
Eocene Thermal Maximum, and 251 million years ago in the Permian–Triassic extinction event. The
fact that similar global
warmings have happened before in the history of our planet is a likely indication that the present
global warming would not be of a runaway nature. Theoretical ways exist to reverse global
warmings with technology, which may include capturing greenhouse gases from the atmosphere,
deflecting solar radiation, among other strategies. For instance, organisms such as algae are being bioengineered to
convert atmospheric greenhouse gases into biofuels (Venter 2008). Though they may cause imbalances, these methods
would seem to prevent global warming from being an existential risk in the worst case scenario, but
it may still produce catastrophic results.
No extinction
NIPCC 11 (Nongovernmental International Panel on Climate Change. Surviving the unprecedented climate change of the IPCC. 8 March
2011. http://www.nipccreport.org/articles/2011/mar/8mar2011a5.html)KG
In a paper published in Systematics and Biodiversity, Willis et al. (2010) consider the IPCC (2007) "predicted climatic changes for the next
century" -- i.e., their contentions that "global temperatures will increase by 2-4°C and possibly beyond, sea levels will rise (~1 m ± 0.5 m), and
atmospheric CO2 will increase by up to 1000 ppm" -- noting that it is "widely suggested that the magnitude and rate of these changes will result
in many plants and animals going extinct," citing studies that suggest that "within the next century, over 35% of some biota will have gone
extinct (Thomas et al., 2004; Solomon et al., 2007) and there will be extensive die-back of the tropical rainforest due to climate change (e.g.
Huntingford et al., 2008)." On the other hand, they indicate that some biologists
and climatologists have pointed out that
"many of the predicted increases in climate have happened before, in terms of both magnitude and rate
of change (e.g. Royer, 2008; Zachos et al., 2008), and yet biotic communities have remained remarkably resilient
(Mayle and Power, 2008) and in some cases thrived (Svenning and Condit, 2008)." But they report that those who mention these
things are often "placed in the 'climate-change denier' category," although the purpose for pointing out these facts is simply to present "a
sound scientific basis for understanding biotic responses to the magnitudes and rates of climate change predicted for the future through using
the vast data resource that we can exploit in fossil records." Going on to do just that, Willis
et al. focus on "intervals in time in
the fossil record when atmospheric CO2 concentrations increased up to 1200 ppm, temperatures in mid- to high-latitudes increased by greater than 4°C within 60 years, and sea levels rose by up to 3 m higher
than present," describing studies of past biotic responses that indicate "the scale and impact of the
magnitude and rate of such climate changes on biodiversity." And what emerges from those studies, as they
describe it, "is evidence for rapid community turnover, migrations, development of novel ecosystems and
thresholds from one stable ecosystem state to another." And, most importantly in this regard, they report "there is
very little evidence for broad-scale extinctions due to a warming world." In concluding, the Norwegian, Swedish and
UK researchers say that "based on such evidence we urge some caution in assuming broad-scale extinctions of species
will occur due solely to climate changes of the magnitude and rate predicted for the next century,"
reiterating that "the fossil record indicates remarkable biotic resilience to wide amplitude fluctuations in
climate."
Solvency
Even if there’s enough infrastructure, the plan causes equipment shortages which
cause prices to spike
Ebinger, Senior fellow and Director of the Energy Security Initiative at Brookings, ‘12
(Charles, “Liquid Markets: Assessing the Case for US Exports of Liquefied Natural Gas,” 5-2-12,
http://www.brookings.edu/~/media/events/2012/5/02%20lng%20exports/20120502_lng_exports)
Even if there is sufficient transportation infrastructure to handle increased volumes and new regional bases for natural gas production, there may be limits on the amount of available equipment and qualified petroleum engineers to develop the gas. To date such a shortage of drilling rig availability in the U.S. natural gas sector has not materialized, as figure 3 illustrates. The increased productivity of new drilling rigs has served to ensure that supply has kept pace with demand. For example, in the Haynesville Shale play in Louisiana, the rig count fell from 181 rigs in July 2010 to 110 rigs in October 2011, yet production increased from 4.65 bcf/day to 7.58 bcf/day over the same period.34 A similar trend is occurring in the Barnett Shale in Texas, where production rates have remained flat despite a declining rig count.35 While the supply of drilling rigs remains adequate, the market for other equipment and services used for fracking—particularly high-pressure pumping equipment—is tight and likely to remain so for the near term.36 Tight markets for drilling and completion equipment can result in increases in fracking costs.
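The Haynesville figures in the card (181 rigs and 4.65 bcf/day in July 2010 versus 110 rigs and 7.58 bcf/day in October 2011) imply a large jump in output per active rig, which is the rig-productivity point the card is making. The sketch below just makes that arithmetic explicit; production per active rig is a crude metric (it ignores the lag between drilling and first production and the output of older wells), so treat it as illustration only.

```python
# Hedged arithmetic check of the Haynesville figures quoted in the Ebinger card.
data = {
    "July 2010":    {"rigs": 181, "production_bcf_per_day": 4.65},
    "October 2011": {"rigs": 110, "production_bcf_per_day": 7.58},
}

for period, d in data.items():
    # 1 bcf = 1,000 MMcf, so per-rig output in MMcf/day
    per_rig = d["production_bcf_per_day"] * 1000.0 / d["rigs"]
    print(f"{period}: {d['rigs']} rigs at {d['production_bcf_per_day']} bcf/day "
          f"-> ~{per_rig:.0f} MMcf/day per active rig")
```

On these numbers, per-rig output roughly rises from about 26 to about 69 MMcf/day over the period the card describes.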
Desalinization
K
The affirmative judges the ocean by the way that humankind perceives it, which is anthropocentric and inevitably fails because the ocean is withdrawn.
Morton ‘11 [Timothy, Professor and Rita Shea Guffey Chair in English at Rice
University, Speculations 2, pg. 216-219, http://www.speculationsjournal.org/storage/Morton_Sublime%20Objects_v2.pdf] //JC//
According to OOO, objects
all have four aspects. They withdraw from access by other objects. They
appear to other objects. They are specific entities. And that’s not all: they really exist. Aesthetically, then,
objects are uncanny beasts. If they were pieces of music, they might be some impossible combination
of slapstick sound effects, Sufi singing, Mahler and hardcore techno. If they were literature, they might exist
somewhere between The Commedia Dell’ Arte, The Cloud of Unknowing, War and Peace and Waiting for Godot. Pierrot Lunaire might be a good metaphor for grotesque, frightening, hilarious, sublime objects. The object-oriented sublime doesn’t come from
some beyond, because this beyond turns out to be a kind of optical illusion of correlationism.
There’s nothing underneath the Universe of objects. Or not even nothing, if you prefer thinking it
that way. The sublime resides in particularity, not in some distant beyond. And the sublime is
generalizable to all objects, insofar as they are all what I’ve called strange strangers, that is, alien to
themselves and to one another in an irreducible way.26 Of the two dominant theories of the sublime, we have a
choice between authority and freedom, between exteriority and interiority. But both choices are
correlationist. That is, both theories of the sublime have to do with human subjective access to
objects. On the one hand we have Edmund Burke, for whom the sublime is shock and awe: an
experience of terrifying authority to which you must submit.27 On the other hand, we have Immanuel Kant, for
whom the sublime is an experience of inner freedom based on some kind of temporary cognitive failure. Try counting up to
infinity. You can’t. But that is precisely what infinity is. The power of your mind is revealed in its
failure to sum infinity.28 Both sublimes assume that: (1) the world is specially or uniquely accessible to
humans; (2) the sublime uniquely correlates the world to humans; and (3) what’s important about the
sublime is a reaction in the subject. The Burkean sublime is simply craven cowering in the presence of authority: the law, the
might of a tyrant God, the power of kings, and the threat of execution. No real knowledge of the authority is assumed—
terrified ignorance will do. Burke argues outright that the sublime is always a safe pain, mediated
by the glass panels of the aesthetic. (That’s why horror movies, a truly speculative genre, try to bust through this aesthetic screen
at every opportunity.) What we need is a more speculative sublime that actually tries to become intimate
with the other, and here Kant is at any rate preferable to Burke. Those more sympathetic to Kant might argue that there is some faint
echo of reality in the experience of the sublime. Certainly the aesthetic dimension is a way in which the normal subject–object dichotomy is
suspended in Kant. And the
sublime is as it were the essential subroutine of the aesthetic experience,
allowing us to experience the power of our mind by running up against some external obstacle. Kant
references telescopes and microscopes that expand human perception beyond its limits.29 His marvelous passage on the way one’s mind can
encompass human height and by simple multiplication comprehend the vastness of “Milky Way systems” is sublimely expressive of the human
capacity to think.30 It’s also true that the Kantian sublime inspired the powerful speculations of Schelling, Schopenhauer and Nietzsche, and more work needs to be done teasing out how those philosophers begin to think a reality beyond the human (the work of Grant and Woodard
stands out in particular at present).31 It’s true that in §28 of the Third Critique, Kant does talk about how we experience the ‘dynamical
sublime’ in the terror of vastness, for instance of the ocean or the sky. But this isn’t anything like intimacy with the sky or the ocean. In fact, in
the next sections, Kant explicitly rules out anything like a scientific or even probing analysis of what might exist in the sky. As
soon as we
think of the ocean as a body of water containing fish and whales, rather than as a canvas for our
psyche; as soon as we think of the sky as the real Universe of stars and black holes, we aren’t
experiencing the sublime (§29): Therefore, when we call the sight of the starry sky sublime, we must not
base our judgment upon any concepts of worlds that are inhabited by rational beings, and then
[conceive of] the bright dots that we see occupying the space above us as being these worlds’ suns,
moved in orbits prescribed for them with great purposiveness; but we must base our judgment
regarding merely on how we see it, as a vast vault encompassing everything, and merely under this
presentation may we posit the sublimity that a pure aesthetic judgment attributes to this object . In the
same way, when we judge the sight of the ocean we must not do so on the basis of how we think, it,
enriched with all sorts of knowledge which we possess (but which is not contained in the direct
intuition), e.g., as a vast realm of aquatic creatures, or as the great reservoir supplying the water for
the vapors that impregnate the air with clouds for the benefit of the land, or again as an element
that, while separating continents from one another, yet makes possible the greatest communication
among them; for all such judgments will be teleological. Instead we must be able to view the ocean as
poets do, merely in terms of what manifests itself to the eye—e.g., if we observe it while it is calm, as a
clear mirror of water bounded only by the sky; or, if it is turbulent, as being like an abyss threatening to engulf everything—and yet find it sublime.32 While we may share Kant’s anxiety about teleology, his main point is less
than satisfactory from a speculative realist point of view. We positively shouldn’t speculate when we experience the
sublime. The sublime is precisely the lack of speculation
Ontology comes first. Objects precede our knowledge of them. All other ways of relating to the world are incorrect and anthropocentric.
Bryant 11 (Levi Bryant is Professor of Philosophy at Collin College in the Dallas-Fort Worth metropolitan
area. 2011. “The Democracy of Objects”. http://quod.lib.umich.edu/o/ohp/9750134.0001.001/1:4/.
Yet in all of the heated debates surrounding epistemology that have cast nearly every discipline in turmoil, we
nonetheless seem to miss the point that the question of the object is not an epistemological question,
not a question of how we know the object, but a question of what objects are. The being of objects is an issue
distinct from the question of our knowledge of objects. Here, of course, it seems obvious that in order to discuss the
being of objects we must first know objects. And if this is the case, it follows as a matter of course that epistemology or questions of knowledge
must precede ontology. However, I hope to show in what follows that questions of ontology are both irreducible to questions of epistemology
and that questions of ontology must precede questions of epistemology or questions of our access to objects. What an object is cannot be reduced to our access to objects. And as we will see in what follows, that access is highly limited.
Nonetheless, while our access
to objects is highly limited, we can still say a great deal about the being of objects.¶ However, despite the
limitations of access, we must avoid, at all costs, the thesis that objects are what our access to objects gives us. As Graham Harman
has argued, objects are not the given. Not at all. As such, this book defends a robust realism. Yet, and this is crucial to
everything that follows, the realism defended here is not an epistemological realism, but an ontological realism. Epistemological realism argues
that our representations and language are accurate mirrors of the world as it actually is, regardless of whether or not we exist. It seeks to distinguish between true representations and phantasms.
Ontological realism, by contrast, is not a thesis about our
knowledge of objects, but about the being of objects themselves, whether or not we exist to
represent them. It is the thesis that the world is composed of objects, that these objects are varied
and include entities as diverse as mind, language, cultural and social entities, and objects
independent of humans such as galaxies, stones, quarks, tardigrades and so on. Above all,
ontological realisms refuse to treat objects as constructions of humans. While it is true, I will argue,
that all objects translate one another, the objects that are translated are irreducible to their
translations. As we will see, ontological realism thoroughly refutes epistemological realism or what ordinarily goes by the pejorative title of “naïve realism”. Initially it might sound as if
the distinction between ontological and epistemological realism is a difference that makes no difference but, as I hope to show, this distinction
has far ranging consequences for how we pose a number of questions and theorize a variety of phenomena.¶ One
of the problematic
consequences that follows from the hegemony that epistemology currently enjoys in philosophy is
that it condemns philosophy to a thoroughly anthropocentric reference. Because the ontological question of
substance is elided into the epistemological question of our knowledge of substance, all discussions of substance necessarily contain a human
reference. The subtext or fine print surrounding our discussions of substance always contain reference to an implicit “for-us”. This is true even
of the anti-humanist structuralists and post-structuralists who purport to dispense with the subject in favor of various impersonal and anonymous social forces like language and structure that exceed the intentions of individuals. Here we still remain in the orbit of an anthropocentric universe insofar as society and culture are human phenomena, and all of being is
subordinated to these forces. Being is thereby reduced to what being is for us.¶ By contrast, this book strives to think a subjectless object, or an
object that is for-itself rather than an object that is an opposing pole before or in front of a subject. Put differently, this essay attempts to think
an object for-itself that isn't an object for the gaze of a subject, representation, or a cultural discourse. This, in short, is what the democracy of
objects means. The democracy of objects is not a political thesis to the effect that all objects ought to be treated equally or that all objects ought to participate in human affairs. The democracy of objects is the ontological thesis that all objects, as Ian
Bogost has so nicely put it, equally exist while they do not exist equally. The claim that all objects
equally exist is the claim that no object can be treated as constructed by another object. The claim
that objects do not exist equally is the claim that objects contribute to collectives or assemblages to
a greater and lesser degree. In short, no object such as the subject or culture is the ground of all
others. As such, The Democracy of Objects attempts to
think the being of objects unshackled from the gaze of humans in their being for-themselves.¶ Such a democracy, however, does not entail the exclusion of the human. Rather, what we get is a redrawing of distinctions and a
decentering of the human. The point is not that we should think objects rather than humans. Such a formulation is based on the premise that
humans constitute some special category that is other than objects, that objects are a pole opposed to humans, and therefore the formulation
is based on the premise that objects are correlates or poles opposing or standing-before humans. No, within the framework of onticology—my
name for the ontology that follows—there is only one type of being: objects. As
a consequence, humans are not excluded,
but are rather objects among the various types of objects that exist or populate the world, each with
their own specific powers and capacities.
Ignoring hyperobjects results in billions of deaths.
James 13 (Arran, UK-based philosopher, graduate student of Critical Theory, and psychiatric nurse). “The catastrophic and the post-apocalyptic,” http://syntheticzero.net/2013/08/21/the-catastrophic-and-the-post-apocalyptic/ August 21, 2013)//[AC]
There is a vast onto-cartography at work here that connects species of fish to coolant systems to hydrogen
molecules to legislation on nuclear safety; legislators, parliaments, regulatory bodies, anti-nuclear activists;
ideas like environmentalism; the food supply networks and geographic distribution of production centres; work practices;
capital investments and the wider financial markets as Tepco’s shares fall; and those networks that specifically
effect human beings in the exclusion area. After all, this exclusion zone has seen thousands of families
leave their homes, their jobs, their friends, and the possessions that had been rewarded to them as
recompense for their alienated labour. Consider that some of these people are still paying mortgages on homes they will
probably never be able to return to safely. And there remains one more reactor in the water that has not melted down but possibly will- if not
by human efforts to recover the fuel rods, then by the possibility of another unpredicted earthquake and/or tsunami. I don’t have the space or
the desire to trace the
onto-cartography of this disaster but it is clear that it includes both geological, ecological
and capitalist bodies; indeed, it is clear that the capitalist bodies might be the ones that are ultimately
responsible. According to Christina Consolo,¶ all this collateral damage will continue for decades, if not
centuries, even if things stay exactly the way they are now. But that is unlikely, as bad things happen like
natural disasters and deterioration with time…earthquakes, subsidence, and corrosion, to name a few. Every day that
goes by, the statistical risk increases for this apocalyptic scenario. No one can say or know how this will play out, except
that millions of people will probably die even if things stay exactly as they are, and billions could die if
things get any worse.¶ I raise the spectre of Fukushima as catastrophe and as apocalyptic because it accords to what
Timothy Morton has described as a hyperobject. In 'Zero Landscapes in the time of hyperobjects' Morton states
that¶ Objects are beginning to compel us, from outside the wall. The objects we ignored for centuries,
the objects we created in the process of ignoring other ones: plutonium, global warming. I call them hyperobjects.
Hyperobjects are real objects that are massively distributed in time and space. Good examples would be global
warming and nuclear radiation. Hyperobjects
are so vast, so long lasting, that they defy human time and
spatial scales. They wouldn't fit in a landscape painting. They could never put you in the right mood.¶ The onto-cartography or
“map of entities” that we could trace in relation to Fukushima doesn’t just include all those bodies we have listed already but
also, and most importantly, it includes the radiation itself. Born of the unstable hybridisation of techno-materiality and geomateriality in pursuit of energy to satisfy the logic of the infinite growth of capital, the hyperobject of Fukushima’s radiation was
unleashed and now exists independently of those techno-geo-capitalist assemblages. That this radiation
exists on a huge spatio-temporal scale means that it exists beyond our evolved capacity to think. We evolved to cope with
and to handle a world of mid-sized objects, the very tools and raw materials that helped to build
Fukushima. In the language of transcorporealist thought: the weaving or interpenetration of various autonomous
ontological bodies has led to this body composed of bodies. Just as numerous minerals, cells, exogenous
microorganisms, mitochondria, oxygen, lactic acid, sugars, contact lenses, and so on go up to constitute my body
in their choreographic co-actualisation so too does this process give rise to a similar shift in scale. In
my body the shift is that from the molecular to the “molar” scale but in this case, the shift is from the “molar” to the
hyper-scale. The radiation unleashed by the Fukushima meltdown exists on a geological spatial and temporal
scale that the human animal is not equipped to readily perceive.¶ Such hyperobjects proliferate around us and are
equally hard to detect in our proximal engagement with the various worlds we inhabit. They range from incidents like
Fukushima to the more encompassing threats of the collapse of capital, ecocide and cosmic death
that I mentioned above. The reason I have focussed on Fukushima is to illustrate the point that the catastrophe has already taken
place. In relation to the example of Fukushima the catastrophe occurred two years ago but will be ongoing for
centuries. That I can sit here in all my relative comfort and enjoy the benefits of being a white male in Britain does
not mean that I am any the less existing after the catastrophe. Catastrophes are discrete events that
explode into being, even if such an explosion can seem very slow as they happen on the scale of vast
temporalities. In the last analysis that can’t be carried out, the cosmos itself exists as one huge catastrophe; the moment of the big bang
being the cosmic event, everything else since being the unfolding of that catastrophic actualisation working itself out.
Anthropocentrism is THE original hierarchy that makes racism, sexism, and other “isms” possible—if the future is not to endlessly repeat the horrors of the past, then we
NEED a politics that can respect more than human life – the affirmative's focus on race
only REPLICATES the violence of anthropocentrism – only the alternative solves
Best 7 (Steven, Chair of Philosophy at UT-EP, JCAS 5.2)
While a welcome advance over the anthropocentric conceit that only humans shape human actions, the environmental determinism
approach typically fails to emphasize the crucial role that animals play in human history, as well as how the human exploitation of animals
is a key cause of hierarchy, social conflict, and environmental breakdown. A core thesis of what I call “animal standpoint theory” is that
animals have been key driving and shaping forces of human thought, psychology, moral and social life, and history overall. More
specifically, animal standpoint theory argues that the oppression of human over human has deep roots in the oppression of human over
animal. In this context, Charles Patterson’s recent book, The Eternal Treblinka: Our Treatment of Animals and the Holocaust, articulates
the animal standpoint in a powerful form with revolutionary implications. The main argument of Eternal Treblinka is that the human
domination of animals, such as it emerged some ten thousand years ago with the rise of agricultural society, was the first hierarchical
domination and laid the groundwork for patriarchy, slavery, warfare, genocide, and other systems of violence and power. A key
implication of Patterson's theory is that human liberation is implausible if disconnected from animal liberation, and thus humanism -- a speciesist philosophy that constructs a hierarchical relationship privileging superior humans over inferior animals and reduces animals to
resources for human use -- collapses under the weight of its logical contradictions. Patterson lays out his complex holistic argument in
three parts. In Part I, he demonstrates that animal exploitation and speciesism have direct and profound connections to slavery,
colonialism, racism, and anti-Semitism. In Part II, he shows how these connections exist not only in the realm of ideology – as conceptual
systems of justifying and underpinning domination and hierarchy – but also in systems of technology, such that the tools and techniques
humans devised for the rationalized mass confinement and slaughter of animals were mobilized against human groups for the same ends.
Finally, in the fascinating interviews and narratives of Part III, Patterson describes how personal experience with German Nazism prompted
Jewish survivors to take antithetical paths: whereas most retreated to an insular identity and dogmatic emphasis on the singularity of Nazi evil and
its tragic experience, others recognized the profound similarities between how Nazis treated their human captives and how humanity as a
whole treats other animals, an epiphany that led them to adopt vegetarianism, to become advocates for the animals, and develop a far
broader and more inclusive ethic informed by universal compassion for all suffering and oppressed beings. The Origins of Hierarchy "As
long as men massacre animals, they will kill each other" –Pythagoras It is little understood that the first form of oppression, domination,
and hierarchy involves human domination over animals. Patterson's thesis stands in bold contrast to the Marxist theory that the
domination over nature is fundamental to the domination over other humans. It differs as well from the social ecology position of Murray
Bookchin that domination over humans brings about alienation from the natural world, provokes hierarchical mindsets and institutions,
and is the root of the long-standing western goal to “dominate” nature. In the case of Marxists, anarchists, and so many others, theorists
typically don’t even mention human domination of animals, let alone assign it causal primacy or significance. In Patterson’s model,
however, the human subjugation of animals is the first form of hierarchy and it paves the way for all other systems of domination
such as patriarchy, racism, colonialism, anti-Semitism, and the Holocaust. As he puts it, "the exploitation of animals was the
model and inspiration for the atrocities people committed against each other, slavery and the Holocaust being but two of the more
dramatic examples.” Hierarchy emerged with the rise of agricultural society some ten thousand years ago. In the shift from nomadic
hunting and gathering bands to settled agricultural practices, humans began to establish their dominance over animals through
“domestication.” In animal domestication (often a euphemism disguising coercion and cruelty), humans began to exploit animals for
purposes such as obtaining food, milk, clothing, plowing, and transportation. As they gained increasing control over the lives and labor
power of animals, humans bred them for desired traits and controlled them in various ways, such as castrating males to make them more docile. To conquer, enslave, and claim animals as their own property, humans developed numerous technologies, such as pens, cages, collars, ropes, chains, and branding irons. The domination of animals paved the way for the domination of humans. The sexual subjugation of women, Patterson suggests, was modeled after the domestication of animals, such that men began to control women's reproductive capacity, to enforce repressive sexual norms, and to rape them as they forced breeding in their animals. Not
coincidentally, Patterson argues, slavery emerged in the same region of the Middle East that spawned agriculture, and, in fact,
developed as an extension of animal domestication practices. In areas like Sumer, slaves were managed like livestock, and males were
castrated and forced to work along with females. In the fifteenth century, when Europeans began the colonization of Africa and Spain
introduced the first international slave markets, the metaphors, models, and technologies used to exploit animal slaves were applied
with equal cruelty and force to human slaves. Stealing Africans from their native environment and homeland, breaking up families
who scream in anguish, wrapping chains around slaves’ bodies, shipping them in cramped quarters across continents for weeks or
months with no regard for their needs or suffering, branding their skin with a hot iron to mark them as property, auctioning them as
servants, breeding them for service and labor, exploiting them for profit, beating them in rages of hatred and anger, and killing them
in vast numbers– all these horrors and countless others inflicted on black slaves were developed and perfected centuries earlier
through animal exploitation. As the domestication of animals developed in agricultural society, humans lost the intimate connections
they once had with animals. By the time of Aristotle, certainly, and with the bigoted assistance of medieval theologians such as St.
Augustine and Thomas Aquinas, western humanity had developed an explicitly hierarchical worldview – that came to be known as the
“Great Chain of Being” – used to position humans as the end to which all other beings were mere means. Patterson underscores the
crucial point that the domination of human over human and its exercise through slavery, warfare, and genocide typically begins with the
denigration of victims. But the means and methods of dehumanization are derivative, for speciesism provided the conceptual paradigm
that encouraged, sustained, and justified western brutality toward other peoples. “Throughout the history of our ascent to dominance as
the master species,” Patterson writes, “our victimization of animals has served as the model and foundation for our victimization of each
other. The study of human history reveals the pattern: first, humans exploit and slaughter animals; then, they treat other people like
animals and do the same to them.” Whether the conquerors are European imperialists, American colonialists, or German Nazis, western
aggressors engaged in wordplay before swordplay, vilifying their victims – Africans, Native Americans, Filipinos, Japanese, Vietnamese,
Iraqis, and other unfortunates – with opprobrious terms such as "rats," "pigs," "swine," "monkeys," "beasts," and "filthy animals." Once
perceived as brute beasts or sub-humans occupying a lower evolutionary rung than white westerners, subjugated peoples were
treated accordingly; once characterized as animals, they could be hunted down like animals. The first exiles from the moral community,
animals provided a convenient discard bin for oppressors to dispose the oppressed. The connections are clear: “For a civilization built on
the exploitation and slaughter of animals, the `lower’ and more degraded the human victims are, the easier it is to kill them.” Thus,
colonialism, as Patterson describes, was a “natural extension of human supremacy over the animal kingdom. For just as humans had
subdued animals with their superior intelligence and technologies, so many Europeans believed that the white race had proven its
superiority by bringing the “lower races” under its command. There are important parallels between speciesism and sexism and racism in
the elevation of white male rationality to the touchstone of moral worth. The arguments European colonialists used to legitimate
exploiting Africans – that they were less than human and inferior to white Europeans in ability to reason – are the very same
justifications humans use to trap, hunt, confine, and kill animals. Once western norms of rationality were defined as the essence of humanity and social normality, by first using non-human animals as the measure of alterity, it was a short step to begin viewing odd, different, exotic, and eccentric peoples and types as non- or sub-human. Thus, the same criterion created to exclude animals from humans
was also used to ostracize blacks, women, and numerous other groups from “humanity.”
This Anthropocentric ordering is the foundation of the war machine and drives the
exclusion of populations based on race, ethnicity and gender
Kochi, 2K9 (Tarik, Sussex law school, Species war: Law, Violence and Animals, Law Culture and
Humanities Oct 5.3)
Grotius and Hobbes are sometimes described as setting out a prudential approach, 28 or a natural law of minimal content 29 because in
contrast to Aristotelian or Thomistic legal and political theory their attempt to derive the legitimacy of the state and sovereign order relies less upon a thick conception of the good life and is more focussed upon basic human needs such as survival. In
the context of a response
to religious civil war such an approach made sense in that often thick moral and religious conceptions of
the good life (for example, those held by competing Christian Confessions) often drove conflict and violence. Yet, it would
be a mistake to assume that the categories of “survival,” “preservation of life” and “bare life” are
neutral categories. Rather survival, preservation of life and bare life as expressed by the Westphalian
theoretical tradition already contain distinctions of value
– in particular,
the specific distinction of value
between human and non-human life. "Bare life" in this sense is not "bare" but contains within it a
distinction of value between the worth of human life placed above and beyond the worth of non-human
animal life. In this respect bare life within this tradition contains within it a hidden conception of the good
life. The foundational moment of the modern juridical conception of the law of war already contains
within it the operation of species war. The Westphalian tradition puts itself forward as grounding the
legitimacy of violence upon the preservation of life, however its concern for life is already marked by a
hierarchy of value in which non-human animal life is violently used as the “raw material” for preserving
human life. Grounded upon, but concealing the human-animal distinction, the Westphalian conception of war makes a
double move: it excludes the killing of animals from its definition of “war proper,” and, through
rendering dominant the modern juridical definition of “war proper” the tradition is able to further
institutionalize and normalize a particular conception of the good life. Following from this original distinction of life-value realized through the juridical language of war were other forms of human life whose lives were considered to be of a lesser value under a
European, Christian, “secular” 30 natural law conception of the good life. Underneath
this concern with the preservation of
life in general stood veiled preferences over what particular forms of life (such as racial conceptions of
human life) and ways of living were worthy of preservation, realization and elevation. The business
contracts of early capitalism, 31 the power of white males over women and children, and, especially in
the colonial context, the sanctity of European life over non-European and Christian lives over non-Christian heathens and Muslims, were some of the dominant forms of life preferred for preservation
within the early modern juridical ordering of war.
The affirmative trades off with flat ontology. Any demand for human inclusion is a link
to the criticism
Bryant 11 (Levi Bryant, Professor of Philosophy at Collin College, The Democracy of Objects, http://quod.lib.umich.edu/cgi/p/pod/dodidx/democracy-of-objects.pdf?c=ohp;idno=9750134.0001.001)
Onticology proposes what might be called, drawing on DeLanda's term yet broadening it, a flat ontology. Flat ontology is a complex philosophical concept that bundles together a variety of ontological theses under a single term. First, due to the split characteristic of all objects, flat ontology rejects any ontology of transcendence or presence that privileges one sort of entity as the origin of all others and as fully present to itself. In this regard, onticology proposes an
ontology resonant with Derrida's critique of metaphysics insofar as, in its treatment of beings as withdrawn, it undermines any pretensions to
presence within being.
If this thesis is persuasive, then metaphysics can no longer function as a synonym for "metaphysics of presence", nor substance as a synonym for "presence", but rather an ontology has been formulated that overcomes the primacy of presence. In this section, I articulate this logic in terms of Lacan's graphs of sexuation. Here I believe that those graphs have little to tell us about masculine or feminine sexuality—for reasons I will
outline in what follows—but a great deal to tell us about ontologies of immanence or flat ontologies and ontologies of transcendence. Second,
flat ontology signifies that the world or the universe does not exist. I will develop the argument for this strange claim in what follows, but for the moment it is important to recognize the definite article in this claim. The claim that the world doesn't exist is the claim that there is no super-object
that gathers all other objects together in a single, harmonious unity. Third, following Harman, flat ontology
refuses to privilege the subject-object, human-world relation as either a) a form of metaphysical
relation different in kind from other relations between objects, and that b) refuses to treat the
subject-object relation as implicitly included in every form of object-object relation. To be sure, flat
ontology readily recognizes that humans have unique powers and capacities and that how humans
relate to the world is a topic more than worthy of investigation, yet nothing about this establishes
that humans must be included in every inter-object relation or that how humans relate to objects
differs in kind from how other entities relate to objects. Finally, fourth, flat ontology argues that all
entities are on equal ontological footing and that no entity, whether artificial or natural, symbolic
or physical, possesses greater ontological dignity than other objects. While indeed some objects might influence the collectives to which they belong to a greater extent than others, it doesn't follow from this that these objects are more real than others. Existence, being, is a binary such that something either is or is not.
Flat ontology is key to overcoming human exceptionalism
Bryant '14 (Levi Bryant, Professor of Philosophy at Collin College, Onto-Cartography, pg. 215-217)
The first step in developing such a framework lies in overcoming human exceptionalism. As I argued
in The Democracy of Objects, ontology must be flattened (see Bryant 2011: ch. 6). Rather than
bifurcating being into two domains — the domain of objects and the domain of subjects, the domain
of nature and the domain of culture — we must instead conceive of being as a single flat plane, a
single nature, on which humans are beings among other beings. While humans are certainly
exceptional for us, they are not ontologically exceptional. To be sure, they differ in their powers and
capacities from other beings, but they are not lords or hierarchs over all other beings. They are
beings that dwell among other beings, that act on them and that are acted upon by them. As
extended mind theorists such as Andy Clark have argued — but also the new materialist feminists and
actor-network theorists such as Latour — mind and culture are not special domains that can be
separated from the other non-human entities of the world for special investigation. Rather, we are
intimately bound up with the other entities of the world, coupled and conditioned by them in all
sorts of ways. Above all, we must avoid treating the world as a field given for the contemplative
gaze of humans. A world is something within which we act and engage, not something we passively
contemplate. A flat ontology must therefore be conceived along the lines of Lacan's famous
Borromean knot (see Figure 7.1). A Borromean knot consists of three inter-linked rings of string
fastened together in such a way that if any one ring is severed, the other two fall away. Lacan
indexes each of the three rings to one of his three orders: the real, the symbolic, and the imaginary.
With the Borromean knot, Lacan's work undergoes a fundamental transformation. In his earlier work,
one of the three orders had always been privileged as dominating and overcoding the others. In his
earliest work, the imaginary dominated the real and the symbolic. In the work of his middle period, it
was the symbolic that overcoded the real and the imaginary. In his third phase, it was the real that
overcoded the symbolic and the imaginary. With the Borromean knot, no order overcodes the others.
Rather, they are all now treated as being on equal footing. This is how we need to think about the
order of being. The domain of the real indexes machines. Machines exist in their own right,
regardless of whether anyone registers them or discourses about them. The domain of the symbolic
refers to the plane of expression, or how beings are discoursed about, signified, imbued with
meaning, and so on. Finally, the domain of the imaginary refers to the way in which one machine
encounters another under conditions of structural openness and operational closure. Situated within
the framework of the Borromean knot, we can simultaneously investigate how a machine is
ideologically coded as in the case of Baudrillard's analysis of objects in System of Objects, how a
machine is phenomenologically encountered by another machine, and how a machine is a real,
independent being in its own right that produces effects irreducible to how it is signified or
phenomenologically given.
Our alternative is to reject the question of the affirmative. This movement away from
correlationism is a necessary philosophical move.
Bryant 11 (Levi Bryant, Professor of Philosophy at Collin College in the Dallas-Fort Worth metropolitan area, 2011, The Democracy of Objects, http://quod.lib.umich.edu/o/ohp/9750134.0001.001/1:4/)
It is unlikely that object-oriented ontologists are going to persuade epistemological realists or antirealists that they have found a way of surmounting the epistemological problems that arise out of
the two-world model of being any time soon. Quoting Max Planck, Marshall and Eric McLuhan write, “A new
scientific truth does not triumph by convincing its opponents and making them see the light, but
rather because its opponents die, and a new generation grows up that is familiar with it”.6 This
appears to be how it is in philosophy as well. New innovations in philosophy do not so much refute
their opponents as simply cease being preoccupied by certain questions and problems. In many
respects, object-oriented ontology, following the advice of Richard Rorty, simply tries to step out of
the debate altogether. Object-oriented ontologists have grown weary of a debate that has gone on
for over two centuries, believe that the possible variations of these positions have exhausted
themselves, and want to move on to talking about other things. If this is not good enough for the
epistemology police, we are more than happy to confess our guilt and embrace our alleged lack of
rigor and continue in harboring our illusions that we can speak of a reality independent of humans.
However, such a move of simply moving on is not unheard of in philosophy. No one has yet refuted the solipsist, nor the Berkeleyian subjective idealist, yet neither solipsism nor the extremes of Berkeleyian idealism have ever been central and ongoing debates in philosophy. Philosophers largely just ignore these positions or use them as cautionary examples to be avoided. Why not the same in
the endless debates over access?
Your attempt to persuade institutions through ethical appeal guarantees your politics
fails. Alt is a prerequisite.
Bryant '14 (Levi Bryant, Professor of Philosophy at Collin College, Onto-Cartography, pg. 73)
In light of the concept of thermodynamic politics, we can see the common shortcoming of protest
politics or what might be called semiotic politics. Semiotic politics is semiotic in the sense that it relies
on the use of signs, either attempting to change institutions through communicative persuasion or
engaging in activities of critique as in the case of hermeneutics of suspicion that, through a critique of
ideology, desire, power, and so on, show that relations of domination and oppression are at work in
something we hitherto believed to be just. Semiotic politics is confused in that it is premised on
producing change through ethical persuasion, and thereby assumes that institutional-machines such
as corporations, governments, factories, and so on, are structurally open to the same sorts of
communicative flows as humans. It believes that we can persuade these organizations to change
their operations on ethical grounds. At best, however, these entities are indifferent to such arguments,
while at worst they are completely blind to even the occurrence of such appeals as machines such as corporations are only structurally open to
information events of profit and loss. Persuading
a corporation through ethical appeals is about as effective as explaining calculus to a cat.
Our way of accepting OOO is key to shifting away from anthropocentrism.
Mylius 13 (Ben Mylius, law graduate, March 10, 2013, "Anthrodecentrism: object-oriented ontology and refining the goals of ecocreative writing," http://ecologeur.com/post/45014342168/anthrodecentrism-object-oriented-ontology-and-refining)
‘Ontology is the philosophical study of existence.
Object-oriented ontology puts things at the centre of this study. Its
proponents contend that nothing has special status, but that everything exists equally - plumbers, cotton, bonobos, DVD
players, and sandstone, for example. In contemporary thought, things are usually taken either as the aggregation of ever smaller
bits (scientific naturalism) or as constructions of human behaviour and society (social relativism). OOO steers a path
between the two, drawing attention to things at all scales (from atoms to alpacas, bits to blinis), and pondering their
nature and relations with one another as much with ourselves.' For anyone interested in a more philosophically-oriented explanation, Wikipedia's entry here is unusually helpful as a starting-point; Levi Bryant's 'Manifesto for object-oriented ontology' is even more so, as is his book The Democracy of Objects, available as an ebook here. I find this movement particularly interesting because it represents an attempt to think other than anthropocentrically: to develop a way of seeing and thinking that avoids placing subjects in general, and human subjects in particular, at its centre. This is also where the resonance lies with ecocreative writing, which I see as an attempt in a creative mode to do the same thing. The challenge, as it has always been, is to find the way of theorising this 'alternative to anthropocentrism' in a coherent and non-problematic way. Perhaps the key hurdle for the concept of 'ecocentrism' in object-oriented terms is that it proposes some overarching, unified 'One' (the 'eco') that might replace the 'anthro' at the centre of our thought. My sense is that this might be avoided if we were able to sustain an image of an ecosystem as a process - an assemblage (Deleuze), 'mesh' (Morton) or 'collective' (Latour) - rather than a thing. But the connotations of any kind of 'centrism' (what is at the centre?) make this difficult.
Case
Desal is terrible for the environment and kills millions of microorganisms
CMBB 09 [Located at the world-renowned Scripps Institution of Oceanography, the Center for Marine Biotechnology and Biomedicine
(CMBB) is a campuswide UCSD research division dedicated to the exploration of the novel and diverse resources of the ocean. "The Impacts
of Relying on Desalination for Water”, http://www.scientificamerican.com/article/the-impacts-of-relying-on-desalination/]
Meanwhile, expanding populations in desert areas are putting intense pressure on existing fresh water supplies, forcing communities to turn to
desalinization as the most expedient way to satisfy their collective thirst. But
the process of desalinization burns up many
more fossil fuels than sourcing the equivalent amount of fresh water from fresh water bodies. As such, the very proliferation of desalinization plants around the world – some 13,000 already supply fresh water in 120 nations, primarily in the Middle East, North Africa and Caribbean – is both a reaction to and one of many contributors to
global warming. Beyond the links to climate problems, marine biologists warn that widespread desalinization
could take a heavy toll on ocean biodiversity; as such facilities' intake pipes essentially vacuum up
and inadvertently kill millions of plankton, fish eggs, fish larvae and other microbial organisms that
constitute the base layer of the marine food chain. And, according to Jeffrey Graham of the Scripps Institute of
Oceanography's Center for Marine Biotechnology and Biomedicine, the salty sludge leftover after desalinization – for every gallon of freshwater produced, another gallon of doubly concentrated salt water must be disposed of – can wreak havoc on marine ecosystems if dumped willy-nilly offshore. For some desalinization
operations, says Graham, it is thought that the disappearance of some organisms from discharge areas may
be related to the salty outflow.
Desalination is terrible for the environment and alien objects – this assumes new technology methods, and the Arabian Gulf proves
Dawoud and Al Mulla 12 (Mohamed Dawoud and Mohamed Al Mulla, Professor of Water and Director of Water Resources
Department, Environmental Impacts of Seawater Desalination: Arabian Gulf Case Study, International Journal of Environment and Sustainability
ISSN 1927‐9566 | Vol. 1 No. 3, pp. 22‐37 (2012))
Although desalination of seawater offers a range of human health, socio-economic, and environmental benefits by providing a seemingly
unlimited, constant supply of high quality drinking water without impairing natural freshwater ecosystems, concerns are raised due to
negative impacts (Dawoud, 2006). These are mainly attributed to the concentrate and chemical
discharges, which may impair coastal water quality and affect marine life, and air pollutant
emissions attributed to the energy demand of the processes as shown in Figure (3). The list of potential impacts can be
extended; however, the information available on the marine discharges alone indicates the need for a comprehensive environmental evaluation of all major projects (Lattemann and Hoepner, 2003). In order to avoid an
unruly and unsustainable development of coastal areas, desalination activity furthermore should be
integrated into management plans that regulate the use of water resources and desalination
technology on a regional scale (UNEP/MAP/MEDPOL, 2003). In summary, the potential environmental impacts of
desalination projects need to be evaluated, adverse effects mitigated as far as possible, and the
remaining concerns balanced against the impacts of alternative water supply and water
management options, in order to safeguard a sustainable use of the technology. The potential effects on the marine environment arise from the operation of the power and desalination plant and from the routine discharge of effluents. Water effluents typically cause a localized increase in sea water temperatures, which can directly affect the organisms in the discharge area. Increased temperature can affect water quality processes and result in lower dissolved oxygen concentrations. Furthermore, chlorination of
the cooling water can introduce toxic substances into the water. Additionally, desalination plants can increase the salinity in the receiving
water. The substances of focus for water quality standards and of concern for the ecological assessment can be summarized as follows:¶
Although technological advances have resulted in the development of new and highly efficient
desalination processes, little improvements have been reported in the management and handling of
the major by-product waste of most desalination plants, namely reject brine. The disposal or
management of desalination brine (concentrate) represents major environmental challenges to most
plants, and it is becoming more costly. In spite of the scale of this economical and environmental problem, the options for
brine management for inland plants have been rather limited (Ahmed et al., 2001). These options include: discharge to surface
water or wastewater treatment plants; deep well injection; land disposal; evaporation ponds; and
mechanical/thermal evaporation. Reject brine contains variable concentrations of different
chemicals such as anti-scale additives and inorganic salts that could have negative impacts on soil
and groundwater.¶ By definition, brine is any water stream in a desalination process that has higher salinity than the feed. Reject
brine is the highly concentrated water in the last stage of the desalination process that is usually
discharged as wastewater. Several types of chemicals are used in the desalination process for pre- and post-treatment operations. These include: Sodium hypochlorite (NaOCl) which is used for chlorination to prevent bacterial growth in
the desalination facility; Ferric chloride (FeCl3) or aluminum chloride (AlCl3), which are used as flocculants for the removal of
suspended matter from the water; anti-scale additives such as Sodium hexametaphosphate (NaPO3)6 are used to prevent scale
formation on the pipes and on the membranes; and acids such as sulfuric acid (H2SO4) or hydrochloric acid (HCl) are also used to adjust
the pH of the seawater. Due to the presence of these different chemicals at variable concentrations, reject brine
discharged to the sea has the ability to change the salinity, alkalinity and the temperature averages of the
seawater and can cause change to the marine environment. The characteristics of reject brine depend on the type of feed
water and type of desalination process. They also depend on the percent recovery as well as the chemical additives used (Ahmed et al., 2000).
Typical analyses of reject brine for different desalination plants with different types of feed water are presented in Table (4).
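Analytic note (an illustrative approximation, not from the Dawoud and Al Mulla card): assuming near-complete salt rejection and steady operation, brine salinity ≈ feed salinity / (1 − recovery fraction). At 40 percent recovery the discharged brine is roughly 1.7 times as salty as the intake seawater; at 50 percent recovery it is about twice as salty, which matches the "doubly concentrated" figure in the CMBB evidence above.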
Turn: Ocean desalination destroys the environment and kills marine life
FWW 9 (Food & Water Watch is a nonprofit consumer organization that works to ensure clean water and safe food, February 2009, “Desalination: An Ocean
of Problems”, aps)
Ocean desalination endangers the environment and public health.¶ While numbers do a good job of illustrating the
pure financial cost of desalination, they do not accurately reflect the full expense. Food & Water Watch found that additional costs
borne by the public include damage to the environment, danger to the public health and other
external considerations.¶ Ocean desalination could contribute to global warming.¶ Ironically, while
desalination is supposed to improve water shortages, its emissions could actually hasten the global
warming that will alter precipitation patterns and further strain existing water supplies. The greenhouse
gas pollution from the industrial seawater desalination plants dwarfs emissions from other water supply options such as conservation and
reuse. Seawater desalination in California, for example, could consume nine times as much energy as surface water treatment and 14 times as
much energy as groundwater production.¶ Ocean
desalination threatens fisheries and marine environments.¶
Further, on its way into a plant, the ocean water brings with it billions of fish and other organisms that die in
the machinery. This results in millions of dollars of lost fishing revenue and a great loss of marine
life. Then, only a portion of the ocean water that enters the plant actually reaches the consumer.¶ The remaining water ends up as a highly
concentrated solution that contains both the salt from the ocean and an array of chemicals from the industrial process – which is released right back into the ocean.
Technical failures in desal plants mean the plan doesn't solve
Malik et al. No Date (Anees U. Malik, Saleh A. Al-Fozan, Fahd Al-Muaili, Mohammad Al-Hajri, Journal
of Failure Analysis and Prevention, No Date, “Frequent Failures of Motor Shaft in Seawater Desalination
Plant: Some Case Studies”,
http://www.researchgate.net/publication/257713702_Frequent_Failures_of_Motor_Shaft_in_Seawater
_Desalination_Plant_Some_Case_Studies, aps)
The failure of a shaft from a motor in a pump or a compressor has been a phenomenon of common
occurrence in seawater desalination plants. The origin of the problem in majority of cases is either
the inability of the material to withstand the level of dynamic stresses to which shaft is subjected
during operation and/or inadequacy of the design. The shortcoming in the design may be
responsible for initiating localized corrosion which ultimately leads to failure of the component. The
mode of failure of the shaft could be stress-related failure such as stress corrosion cracking, mechanical
fatigue or corrosion fatigue, and/or localized corrosion such as crevice corrosion. This paper describes
some recent case studies related to shaft failures in seawater desalination plants. The case studies
include shearing of a shaft in brine recycle pump in which a combination of environment, design,
and stresses played important role in failure. In another case, ingress of chloride inside the key slot
was the main cause of the problem. The failure in a high pressure seawater pump in a SWRO plant
occurred due to cracking in the middle of the shaft.
Turn: Desalination increases the costs forced on communities for clean water
McIntyre 8 (Mindy, Los Angeles Times, 4/10/08, “All that water, every drop to drink”, http://www.latimes.com/opinion/la-op-snow-mcintyre10apr10story.html#page=1, aps)
Many people mistakenly consider ocean desalination a harmless way to get water to growing cities
without the effects associated with damming rivers and over-pumping groundwater. The truth is,
desalination is one of the most harmful and expensive water options in California. When compared to other available strategies,
ocean desalination just doesn't pencil out. ¶ Consider that ocean desalination is the most energy intensive way to get water.
That's right -- it requires more energy to desalinate a gallon of ocean water than it does to pump water from Northern California over a mountain range all the way
to Southern California. All
of that energy means more greenhouse gases, which would cause more problems
for our snowpack and groundwater, not to mention other resources.¶ Ocean desalination also
requires that massive amounts of sea water, carrying millions of fish, plankton and other ocean life, must be sucked up and filtered
every day -- with 100% fish mortality. Those who care about the ocean know that these types of diversions can destroy miles of already stressed coastal habitats. In
fact, people
have been working for decades to stop power plants from this kind of water filtration.¶
Ocean desalination also fails the cost test. It is the most expensive source of new water for
California, thanks to the very high energy requirements. Despite the claims that desalination will
get less expensive as time goes on, you do not have to be an economist to understand that $4
gasoline means that all forms of energy will be much more expensive in the future, not cheaper.¶ We
should also be aware that many of these desalination plants would be owned by private companies, including subsidiaries of multinational corporations. That raises
concerns about transparency and accountability.¶ Locally
controlled water conservation, water recycling and brackish
water desalination are all far cheaper than ocean desalination. Coincidentally, these options are also less energy- and
greenhouse-gas intensive, and less environmentally damaging. ¶ Ocean desalination, quite frankly, is the SUV of water. We have
better options. Communities need to decide whether they want their water sources to generate massive
amounts of greenhouse gas, cost a fortune and destroy the environment. I suspect that in most cases, Californians would
Arctic 1NC
Shipping DA
Shipping causes stampedes, threatening a keystone species
Graef ’10 (http://www.care2.com/causes/will-the-walrus-disappear-too.html#ixzz36us6nfCb)
Last year, a sudden stampede to the water at Alaska’s Icy Cape crushed 131 walruses, most of them
youth, ClimateWire reported. As the sea ice continues to recede during summer months, more ships will be
able to sail through their habitat, potentially scaring the older females, who weigh about one ton, and
causing more frantic stampedes. This year the Fish and Wildlife Service has asked boats, planes and hunters to keep a respectful
distance from the pods. The USGS reports “a clear trend of worsening conditions for the subspecies.” The report concluded that there is a
40 percent chance of Pacific walrus being extinct by the end of this century. The federal government is expected to
decide in 2011 if the Pacific walrus needs Endangered Species Act protection.
Walrus is a keystone species, crucial to Arctic biodiversity
Poland ’13 (http://news.msn.com/science-technology/keystone-species-loss-could-cause-ecosystem-to-collapse)
The walrus is also a keystone species that is threatened in the Arctic and elsewhere. Walruses prefer to dine on
mollusks such as clams, but also eat shrimp, crabs, soft corals, sea cucumbers, tube worms and the occasional seal, so the decline of
walruses allows many of their prey to bloom into overpopulation. Since they rely on the ice pack for
their reproductive periods, the reduction of ice separates lactating cows for longer distances from their
calves in order to get to the best feeding grounds.
Arctic Biodiversity is high—It’s key to global biodiversity
Naidoo 2013 (Jayaseelan Naidoo, Chairman, GAIN, "The Scramble for the Arctic and the Dangers of Russia's Race for Oil," 11/6/13, The Huffington Post, http://www.huffingtonpost.com/jayaseelan-naidoo/thescramble-for-the-arctic_b_4223661.html)
The Arctic region covers more than 30 million square kilometers - one sixth of the planet's
landmass. It spans 24 time zones. It is one of Earth's last pristine ecosystems. It is critical to global
biodiversity with hundreds of unique plant and animal species. Scientists concur that the Arctic sea ice
serves as the air conditioner of the planet, regulating the global temperature.
Arctic shipping introduces invasive species, destroying biodiversity
Geiling 14 (Natasha, Arctic Shipping: Good For Invasive Species, Bad For the Rest of Nature,
http://www.smithsonianmag.com/science-nature/global-warmings-unexpected-consequence-invasivespecies-180951573/)
Yes, shipping containers and bulk carriers do currently contribute to the spread of invasive species—it's
something that has been irking marine biologists for a long time. Bulk carriers (and ships generally) have things called ballast tanks, which are
compartments that hold water, in order to weigh a ship down and lower its center of gravity, providing stability. Ships
take in water
from one location and discharge it in another, contributing to concerns about invasive species. The zebra mussel, an invasive
species that has colonized the Great Lakes and caused billions of dollars of economic damage, is believed to have been introduced from the
ballast tank of ships coming from Western European ports.
Shipping is already the primary way that invasive marine
species become introduced—contributing to 69 percent of species introductions to marine areas.¶ But Miller and
Ruiz worry that Arctic shipping—both through the Arctic and from the Arctic—could make this statistic even worse. ¶ "What’s happening now is
that ships move between oceans by going through Panama or Suez, but that means ships from higher latitudes have to divert south into
tropical and subtropical waters, so if you are a cold water species, you’re not likely to do well in those warm waters," Miller explains. "That
could currently be working as a filter, minimizing the high latitude species that are moving from one ocean to another."¶ Moreover, the
Panama Canal is a freshwater canal, so organisms clinging to the hulls of ships passing through have to undergo osmotic shock as saltwater
becomes freshwater and back again. A lot of organisms, Miller explains, can't survive that.¶ These new cold water routes don't have the
advantage of temperature or salinity filters the way traditional shipping routes do. That means that species adapted to live in cold waters in the
Arctic could potentially survive in the cool waters in northern port cities in New York and New Jersey, which facilitated the maritime transport
of nearly $250 billion worth of goods in 2008. And because routes through the Arctic are much shorter than traditional shipping routes, invasive
animals like crabs, barnacles and mussels are more likely to survive the short transit distance riding along inside the ballast tanks and clinging to
the hulls.¶ Invasive species are always cause for apprehension—a Pandora's Box, because no one really knows how they'll
impact a particular ecosystem until it's too late. In an interview with Scientific American in March of 2013, climate scientist Jessica Hellmann, of
the University of Notre Dame, put it this way: "Invasive species are one of those things that once the genie is out of the bottle, it’s hard to put
her back in." There aren't many invasive species from the Arctic that are known, but one that is, the red king crab, has already wreaked havoc
on Norway's waters; a ferocious predator, the red king crab hasn't had much trouble asserting near total dominance over species unfamiliar
with it. "You never know when the next red king crab is going to be in your ballast tank," Miller warns. Invasive species pose two dangers, one
ecological, the other economic. From an ecological standpoint, invasive species threaten
to disrupt systems that have evolved
and adapted to live together over millions of years. "You could have a real breakdown in terms of [the
ecosystems] structure and their function, and in some cases, the diversity and abundance of native species," Miller
explains. But invasive species do more than threaten the ecology of the Arctic—they can threaten the global economy. Many invasive species,
like mussels, can damage infrastructure, such as cooling and water pipes. Seaports are vital to both the United States and the global economy—
ports in the Western hemisphere handle 7.8 billion tons of cargo each year and generate nearly $8.6 trillion of total economic activity,
according to the American Association of Port Authorities. If an invasive species is allowed to gain a foothold in a port, it could completely
disrupt the economic output of that port. The green crab, an invasive species from Europe, for example, has been introduced to New England
coasts and feasts on native oysters and crabs, accounting for nearly $44 million a year in economic losses. If invasive species are able to disrupt
the infrastructure of an American port—from pipes to boats—it could mean damages for the American economy. In recent years, due to
fracking technology, the United States has gone from being an importer of fuel to an exporter, which means that American ports will be hosting
more foreign ships in the coming years—and that means more potential for invasive species to be dispersed. Invasive
species brought
into the Arctic could also disrupt ecosystems, especially because the Arctic has had low exposure to invasions
until now. Potential invasive species could threaten the Arctic's growing economic infrastructure as well, damaging equipment set up to
look for natural gas and other natural resources in the newly-exposed Arctic waters.
Biodiversity Loss Leads to Extinction
Buczynski '10 gender modified* [Beth, writer and editor covering ecosystem sustainability, "UN: Loss Of Biodiversity Could Mean End Of Human Race," Care2, 18/10/10, http://www.care2.com/causes/unhumans-are-rapidly-destroying-the-biodiversity-ne.html]
UN officials gathered at the Convention on Biological Diversity (CBD) in Japan have issued a global warning that the
rapid loss of animal
and plant species that has characterized the past century must end if humans are to survive. Delegates in Nagoya plan to set
a new target for 2020 for curbing species loss, and will discuss boosting medium-term financial help for poor countries to help them protect
their wildlife and habitats (Yahoo Green). “Business as usual is no more an option for [hu]mankind*,” CBD executive
secretary Ahmed Djoghlaf said in his opening statements. “We need a new approach, we need to reconnect with nature and live in harmony
with nature into the future.” The CBD is an international legally-binding treaty with three main goals: conservation of biodiversity; sustainable
use of biodiversity; fair and equitable sharing of the benefits arising from the use of genetic resources. Its overall objective is to encourage
actions which will lead to a sustainable future. As Djoghlaf acknowledged in his opening statements, facing the fact that many countries have ignored their obligation to these goals is imperative if progress is to be made in the future. "Let us have the courage to look in the eyes of our
children and admit that we have failed, individually and collectively, to fulfil the Johannesburg promise made to them by the 110 Heads of State
and Government to substantially reduce the loss of biodiversity by 2010,” Djoghlaf stated. “Let us look in the eyes of our children and admit
that we continue to lose biodiversity at an unprecedented rate, thus mortgaging their future.” Earlier this year, the U.N. warned several
eco-systems including the Amazon rainforest, freshwater lakes and rivers and coral reefs are approaching a “tipping point”
which, if reached, may see them never recover. According to a study by UC Berkeley and Penn State University researchers,
between 15 and 42 percent of the mammals in North America disappeared after humans arrived. Compared to extinction rates demonstrated
in other periods of Earth’s history, this
means that North American species are already halfway to a sixth mass
extinction, similar to the one that eliminated the dinosaurs. The same is true in many other parts of the world. The third
edition of the Global Biodiversity Outlook demonstrates that, today, the rate of loss of biodiversity is up to one thousand times higher than the
background and historical rate of extinction. The Earth’s
6.8 billion humans are effectively living 50 percent beyond
the planet’s biocapacity in 2007, according to a new assessment by the World Wildlife Fund that said by 2030 humans will
effectively need the capacity of two Earths in order to survive.
CP text: The United States Federal Government should expand its programs for Arctic
Ocean exploration and scientific research.
Russia can’t be relied on for scientific data – the US should map the sea unilaterally
Cohen et al., Senior Research Fellow at the Heritage Foundation, 2008 (Ariel, Ph.D. and Senior Research Fellow in Russian and Eurasian Studies and International Energy Security, Lajos Szaszdi, Ph.D. and researcher at the Heritage Foundation, and Jim Dolbow, defense analyst at the U.S. Naval Institute, "The New Cold War: Reviving the U.S.
Presence in the Arctic”, online pdf available for download:
http://www.heritage.org/research/reports/2008/10/executive-summary-the-new-cold-warreviving-the-us-presence-in-the-arctic)
While paying lip service to international law, Russia's ambitious actions hearken back to 19th-century statecraft rather than the 21st-century law-based policy and appear to indicate that
the Kremlin believes that credible displays of power will settle conflicting territorial claims. By
comparison, the West’s posture toward the Arctic has been irresolute and inadequate. This
needs to change. Reestablishing the U.S. Arctic Presence. The United States should not rely on
the findings of other nations that are mapping the Arctic floor. Timely mapping results are
necessary to defending and asserting U.S. rights in bilateral and multilateral fora. The U.S.
needs to increase its efforts to map the floor of the Arctic Ocean to determine the extent of the
U.S. Outer Continental Shelf (OCS) and ascertain the extent of legitimate U.S. claims to territory
beyond its 200-nautical-mile exclusive economic zone. To accomplish this, the U.S. needs to
upgrade its icebreaker fleet. The U.S. should also continue to cooperate and advance its
interests with other Arctic nations through venues such as the recent Arctic Ocean Conference
in Ilulissat, Greenland.
"Cooperation" with Russia won't produce quality clean-up efforts - they pocket
concessions and won't meaningfully engage the US
Kramer and Shevtsova, 2/21/2013 (David Kramer, Director of Freedom House and United States Assistant Secretary of State for Democracy, Human Rights, and Labor from 2008 to 2009; Lilia Shevtsova, Kremlinology expert and senior associate at the Carnegie Endowment for International Peace, "Here We Go Again: Falling for the Russian Trap", online: http://www.the-american-interest.com/articles/2013/02/21/here-we-go-again-fallingfor-the-russian-trap/)
Nearing the end of his second term, George W. Bush sought to salvage Russian-American
relations with a visit to Sochi in April 2008, but then a few months later, Russia’s invasion of
Georgia brought the bilateral relationship to its lowest point in twenty years. President Obama
came to office intent on repairing the relationship and working together with Moscow on a
range of global issues. At the start of his second term, however, despite four years of the reset
policy, Obama, too, faces a very strained relationship with Russia.¶ True, the United States has
made its mistakes. But the current state of Russian-American relations stems mostly from the
Kremlin's creation of imitation democracy and its attempts to exploit the West and anti-Americanism for political survival. The Kremlin's imitation game has complicated American and
Western policies toward Russia and forced the West to pretend, just as the Russian elite does.
The “Let’s Pretend” game allowed both sides to ignore core differences and to find tactical
compromises on a host of issues ranging from the war on terror to nuclear safety. This
concerted imitation has also had strategic consequences, however. It has facilitated the
survival of Russia’s personalized-power system and discredited liberal ideals in the eyes of
Russian society. It has also created a powerful pro-Russia Western lobby that is facilitating the
export of Russia’s corruption to developed countries.¶ Despite numerous U.S. attempts to
avoid irritating the Kremlin, relations between Moscow and Washington always seem to end
up either in mutual suspicion or in full-blown crisis. That is what happened under the Clinton
and George W. Bush Administrations, and that is what happened after Barack Obama’s first
term in office. Each period of disappointment and rupture in relations, which has always been
preceded by a period of optimism, has been followed by another campaign by both Moscow
and Washington to revive relations. Who is behind these campaigns? For a quarter of a century,
it has been the same consolidated cohort of experts in both capitals, most of whom have
serious and established reputations and vast stores of experience. (There are a few new
additions to the cohort, but they walk in lockstep with the old hands.) After every new crisis,
these experts implore politicians on both sides to “think big.” Each time, “big thinking” on the
Western side includes encouragement to avoid issues that would antagonize the Kremlin. Thus
U.S. administrations looked the other way as the Kremlin created a corrupt, authoritarian
regime.
Case
Russia Co-op
Cross-apply the Russia cooperation evidence: Russia can’t be relied on to cooperate in the Arctic; it will just claim the region as its own. Its cooperation won’t be meaningful.
Shipping
Shipping pollution accounts for 60,000 deaths each year
Brahic ’07 (http://www.newscientist.com/article/dn12892-shipping-pollution-kills-60000-every-year.html#.U7ludm2wWAY)
Pollution from ships, in the form of tiny airborne particles, kills at least 60,000 people each year , says a
new study. And unless action is taken quickly to address the problem - such as by switching to cleaner fuels - the death toll will climb,
researchers warn. Premature deaths due to ultra-fine particles spewed out by ships will increase by 40% globally by
2012, the team predicts. Ships release an estimated 1.2 million to 1.6 million metric tons of tiny airborne particles
each year. Less than 10 micrometres in diameter and invisible to the human eye, most come from the combustion of
shipping fuel which releases the ultra-fine soot. This includes various carbon particles, sulphur and nitrogen oxides. Tiny airborne particles
are linked to premature deaths worldwide, and are believed to cause heart and lung failures. The particles get
into the lungs and are small enough to pass through tissues and enter the blood. They can then trigger inflammations which eventually cause
the heart and lungs to fail. There is also some evidence that shipping emissions contain some of the carcinogenic particles found in cigarette
smoke. The smaller the particle, the more dangerous it is. According to one study carried out by Aaron Cohen, currently an adviser to the
World Health Organization European Center for Environment and Health, particles smaller than 2.5 micrometres across are responsible for 0.8
million deaths worldwide each year - or 1.2% of premature deaths (Journal of Toxicology and Environmental Health, part A, vol 68, p 1301).
Particles released by ships fall into this category, with most measuring less than 2.5 microns across. So James Corbett of the University of
Delaware in the US and his colleagues decided to determine what portion of premature deaths can be linked to soot emissions from the
shipping industry.
Shipping causes oil spills which hurt biodiversity; this takes out their oil advantage.
Milkowski ’09 [Stefan, writer living in Fairbanks. He covered business and state politics for the
Fairbanks Daily News-Miner from late 2005 through this summer.
The Environmental Risks of Arctic Shipping, NYT, 29/6/14, http://green.blogs.nytimes.com/2009/06/29/the-environmental-risks-of-arctic-shipping-ready-for-tom-to-publish-mon/]
As the Arctic warms, an expected increase in
shipping threatens to introduce invasive species, harm existing
marine wildlife and lead to damaging oil spills, according to a recent report from the Arctic Council, an intergovernmental
forum of Arctic nations. Seabirds and polar bear and seal pups are particularly sensitive to oil and can quickly die of
hypothermia if it gets into their feathers or fur, according to the report. Whales, as well as walruses and seals, can have a
harder time communicating, foraging and avoiding predators in noisy waters. “Whether it is the release of substances
through emissions to air or discharges to water, accidental release of oil or hazardous cargo, disturbances of wildlife through sound, sight,
collisions or the introduction of invasive alien species, the Arctic
marine environment is especially vulnerable to
potential impacts from marine activity,” the report states. As the climate changes, reductions in sea ice are likely to lengthen the
shipping season, putting migrating animals into more frequent contact with ships. Bowhead and beluga whales share a narrow corridor with
ships in the Bering Strait between Alaska and Russia and could be disturbed. There is also greater risk of introducing invasive species through
ballast water, cargo, or on ships’ hulls. “Introduction of rodent species to islands harboring nesting seabirds, as evidenced in the Aleutian
Islands, can be devastating,” the report states. Shipping between the North Pacific and the North Atlantic is of particular concern, because it
could transport species between areas with similar environmental conditions. The Arctic Marine Shipping Assessment, as the study is called,
was put together by Arctic Council nations, including the United States, and serves as a formal policy document, according to Lawson Brigham,
a University of Alaska Fairbanks professor and retired Coast Guard captain, who chaired the study and presented it last week in Fairbanks. It
recommends that Arctic nations reduce emissions of greenhouse gases and other air pollutants from ships, work to lower the risk of oil spills,
and consider setting aside special areas of the Arctic Ocean for environmental protection, among other things. Mr. Brigham described an Arctic
bustling with activity, where ice-breaking ships with special hulls sail stern-first through heavy ice and a shipping route across the top of the
Earth is not out of the question. “It’s not a question of whether the maritime industry is coming to the Arctic,” he said. It has, he added, already
come.
Oil
Cross-apply Milkowski ’09: shipping causes oil spills, which hurt biodiversity. This takes out their entire advantage because it just feeds into our impact; they can’t solve for oil spills.
OSW 1NC
Windmills K
1NC
Industrial wind turbines engage the world in the same fashion as a coal plant—as a
device, a black box, that discloses its environment as little more than potential energy
Brittan 1 Gordon G. Brittan, Jr., Department of Philosophy, Montana State University. “Wind, energy,
landscape: reconciling nature and technology,” Philosophy & Geography Vol. 4 No. 2
Let me put the point this way. Wind energy (in its most recent embodiment) was introduced in terms of
a trade-off, one environmentally benign but visually intrusive technology being substituted for other
more environmentally malignant but less visually disturbing technologies.22 Aside from their “benign”
and “malignant” features, these technologies share the same general design characteristics. Moreover,
they were and are imposed, grouped, and owned in very much the same sort of way, a point to which I
will return later. In my view, the resistance to wind turbines is not because they are “uglier” than other
forms of energy production, but because they are characteristic of contemporary technology, on a
scale magnified by their large size, the extensive arrays into which they are placed, and the relative
barrenness of their surroundings. II We need to become clearer about the character of contemporary
technology. No one has done more to clarify it, in my view, than Albert Borgmann.23 Borgmann begins
with a distinction between “devices” (those characteristic inventions of our age, among which the
pocket calculator, the CD sound system, and the jet plane might be taken as exemplary) and what
Heidegger calls “things” (not only natural objects, but human artifacts such as the traditional windmills
of Holland). The pattern of contemporary technology is the device paradigm, which is to say that
technology has to do with “devices” as against “things.” Things “engage” us, an engagement which is at
once bodily, social, and demands skill. A device, by way of contrast, disengages and disburdens us. It
makes no demands on skill, and is in this sense disburdening. It is defined in functional terms, i.e., a
device is anything that serves a certain human-determined function. This largely involves the
procurement of a commodity. That is, a device is a means to procure some human end. Since the end
may be obtained in a variety of ways, i.e., since a variety of devices are functionally equivalent, a device
has no intrinsic features. But a device also “conceals,” and in the process disengages. The way in which
the device obtains its ends is literally hidden from view. The more advanced the device, the more hidden
from view it is, sheathed in plastic, stainless steel, or titanium. Moreover, concealment and disburdening
go hand in hand. The concealment of the machinery, the fact that it is distanced from us, insures that it
makes no demands on our faculties. The device is socially disburdening as well in its isolation and
impersonality. To make the analysis of “devices” more precise, an objection to it should be considered.
“Is not … the concealment of the machinery and the lack of engagement with our world,” Borgmann
asks, “due to widespread scientific, economic, and technical illiteracy?”24 That is, why in principle can
we not “go into” contemporary devices, “break through” their apparent concealments? Why should we
not promote electrical engineering, for example, as a general course of study, and in the process come
to know if not also to love contemporary technology? Borgmann initially answers this objection along
three main lines. First, many devices, e.g., the pocket calculator, are in principle irreparable; they are
designed to be thrown away when they fail. In this case, there is no point in “going into” the device.
Second, many devices, e.g., the CD sound system, are in principle carefree; they are designed so as not
to need repair. It is not necessary to go into such devices. Third, many devices, e.g., the jet plane, are in
fact so complex that it is not really possible for anyone but a team of experts to go into them.
Increasingly, this is true of older technologies as well, e.g., automobiles, where “fixing” has become
tantamount to “replacing” their various computerized components. Borgmann contends that even if
technical education made much of the machinery of devices perspicuous, two differences between
devices and “things” would remain. Our engagement with devices would remain “entirely cerebral”
since they resist “appropriation through care, repair, the exercise of skill, and bodily engagement.”
Moreover, the machinery of a device is anonymous. It does not express its creator, “it does not reveal a
region and its particular orientation within nature and culture.”25 On both counts, devices remain
unfamiliar, distanced and distancing. Typing these words, looking at the monitor on which they appear, I
have no real relation to the process or to the machinery involved, no context in which to place them, no
knowledge of their origins or of their development. The only thing that really matters is the product. We
could summarize Borgmann’s position by referring to the familiar theoretical notion of a “black box.” In
a “black box” commodity-producing machinery is concealed insofar as it is both hidden from view or
shielded (literally), and conceptually opaque or incomprehensible (figuratively). Moreover, just those
properties that Borgmann attributes to devices can be attributed equally well to “black boxes.” It is not
possible to get inside them, since they are both sealed and opaque. It is not necessary to get inside them
either, since in principle it is always possible to replace the three-termed function that includes input,
“black box,” and output, with a two-termed function that links input to output directly. All we really care
about is that manipulation of the former alters the latter. Borgmann’s interpretation of technology and
the character of contemporary life can be criticized in a number of different ways.26 Still, the distinction
between “things” and “devices” reveals, I think, the essence of our inability to develop a landscape
aesthetic on which contemporary wind turbines are or might be beautiful, and thereby explains the
widespread resistance to placing them where they might be seen.27 For the fact of the matter is that
contemporary wind turbines are for most of us merely devices. There is therefore no way to go
beyond or beneath their conventionally uncomfortable appearance to the discovery of a latent
mechanical or organic or what-have-you beauty. The attempt to do so is blocked from the outset by the
character of the machine.28 Think about it for a moment: Except for the blades, virtually everything is
shielded, including the towers of many turbines, hidden from view behind the same sort of stainless
steel that sheathes many electronic devices. Moreover, the machinery is located a great distance away
from anyone, save the mechanic who must first don climbing gear to access it and often, for liability
reasons, behind chain-link fences and locked gates. The lack of disclosure goes together with the fact
that the turbines are merely producers of a commodity, electrical energy, and interchangeable in this
respect with any other technology that produces the same commodity at least as cheaply and reliably.
The only important differences between wind turbines and other energy-generating technologies are
not intrinsic to what might be called their “design philosophies.” That is, while they differ with respect to
their inputs, their “fuels,” and with respect to their environmental impacts, the same sort of description
can be given of each. There is, as a result, but a single standard on the basis of which wind turbines are
to be evaluated, efficiency. It is not to be wondered that they are, with only small modifications between
them, so uniform. Many astute commentators would seem to disagree with this judgment. Thus, for
example, Robert Thayer in Gray World, Green Heart, 274: But wind energy’s visibility can also be seen as
an advantage if functional transparency is valued. With wind energy plants, “what you see is what you
get.” When the wind blows, turbines spin and energy is generated. When the wind doesn’t blow, the
turbines are idle. This rather direct expression of function serves to reinforce wind energy’s sense of
landscape appropriateness, clarity, and comprehensibility. In the long run, wind energy will contribute
to a unique sense of place. In fact, however, Thayer reinforces the device-like character of wind
turbines. Only their function is transparent, wind in/electrical energy out; the “black box” where all the
processing takes place remains unopened. This is roughly the same kind of “comprehensibility” that is
involved when we note the correlation between punching numbers into our pocket calculators, hitting
an operation key, and seeing the result as a digital read-out. There are two more things to be said about
Thayer’s position. One is that nothing can be “appropriate” to landscape per se; everything depends on
the type of object and the type of landscape, at least if we think of landscapes, following Leopold, in
something like biological terms, in terms of integration and compensation. It is a matter of context. But
as is typical of “devices” generally, contemporary wind turbines are context-free; they do not relate in
any specific way to the area in which they are placed (typically by someone who does not live in the
area). In particular, Leopold insists on the fact that the appropriateness of objects in landscapes has to
do with their respective histories, the ways in which they evolved, or failed to evolve, together. But
contemporary wind turbines have only a very brief history, and in terms of their basic design
parameters—low solidity, high r.p.m., low torque— differ importantly from the windmills whose history
goes back at least 1300 years. If wind turbines have any sort of context, it is by way of their blades and
the development of airplanes, but it is difficult to see how airplanes fit as appropriate objects or symbols
into a windswept and uninhabited landscape. Of course, “in the long run” wind turbines will contribute
to a sense of place, but not simply in virtue of having been installed somewhere in massive arrays. They
will first have to acquire particular histories.29 It is interesting to note in this respect how unlike other
architectural arrivals on the horizon, such as houses (and traditional windmills), contemporary wind
turbines are. Different styles of architecture developed in different parts of the world in response to
local geological and climatic conditions, to the availability of local materials, to the spiritual and
philosophical patterns of the local culture. As a result, these buildings create a context. In Heidegger’s
wonderful, dark expression, they “gather.” But there is nothing “local” or “gathering” about
contemporary wind turbines; they are everywhere and anonymously the same, whether produced in
Denmark or Japan, placed in India or Spain, alien objects impressed on a region and in no deeper way
connected to it. They have nothing to say to us, nothing to express, no “inside;” they “conceal” rather
than “reveal.” The sense of place that they might eventually engender cannot, therefore, be unique.
In this regard, the German landscape architect Christophe Schwann seems to catch just the right note:
Elements of technical civilization are very often standardized in their outfit. The more of them are placed
into landscape, the less is the landmark effect. Because of standardization, wind generators can be very
annoying in the marshes: formerly people could distinguish every church tower telling the name of the
place. Today, wherever you look you always see the turning triblades. The inflation of standardized
elements like high tension masts and wind generators puts down orientation and contributes to the
landscape standardization caused by industrial agriculture.30 The other comment I want to make about
Thayer’s position is this. Wind turbines are quintessential “devices” in that they preclude engagement.
Or rather, the only way in which the vast majority of people can engage with them is visually (and
occasionally by ear). They cannot climb over and around them, they cannot get inside them, they cannot
tinker with them.31 They cannot even get close to them. There is no larger and non-trivial physical or
biological way in which they can be appropriated or their beauty grasped. The irony, of course, is that
precluded from any other sort of engagement with wind turbines, most people find them visually
objectionable, however they might be willing to countenance their existence as the lesser of evils.
The technological reduction of wind and solar rays to standing reserves of energy is the structural force that allows for environmental destruction and the killing of disposable populations.
Backhaus 9 Gary, Loyola College in Maryland. “Automobility: Global Warming as Symptomatology,”
April 20, 2009. Sustainability. Vol. 1
Taking up Heidegger’s hermeneutic ontology in its reflection on Being allows us to envision global
warming as a symptom, as an appearing, complex phenomenon through a particular way, the
interpretive form of Being to which modern human life has been claimed. We are led to the essence of
which global warming is an appearing symptom, which is other than its correct definition—one of the
goals of Gore’s book is to responsibly inform the average non-scientifically educated person as to the
whatness of global warming, a correct saying of the phenomenon. From a Heideggerian standpoint,
Gore’s shallow analysis is blind to deeper truths that concern more than establishing correct statements
describing the whatness of global warming. In the analysis of a later treatise, “The Question Concerning
Technology”, Heidegger maintains that the essence of technology is not something technological—its
Being is not to be interpreted as itself a being (a technology). He provides what is regarded as the
(standard/accepted) correct definition of technology as a human activity and as a means to an end. By
contrast to the correct definition, Heidegger’s analysis shows that the truth in the
revealing/unconcealment or the essence/Being of modern technology that allows for modern
technological entities to show themselves as such is a “challenging, which puts to nature the
unreasonable demand that it supply energy which can be extracted and stored as such. But does this not
hold true for the old windmill as well? No. Its sails do indeed turn in the wind; they are left entirely to
the wind’s blowing. But the windmill does not unlock energy from the air currents in order to store it
[16]”. The challenging is a setting-in-order, a setting upon nature, such that “the earth now reveals itself
as a coal mining district” and “what the river is now, a water-power supplier, derives from the essence
of the power station [16]”. What is the character of this unconcealment? “Everywhere everything is
ordered to stand by, to be immediately on hand, indeed to stand there just so that it may be on call for a
further ordering. Whatever is ordered about in this way has its own standing. We call it standing reserve
[16]”. And the challenging that claims man to challenge nature in this way Heidegger labels, enframing.
“Enframing means the gathering together of that setting-upon that sets upon man, i.e., challenges him
forth, to reveal the real, in the mode of ordering, as standing-reserve. Enframing means that the way of
revealing that holds sway in the essence of modern technology and that is itself nothing technological
[16]”. Modern physics, which interprets nature as a system of calculable forces is the herald of
enframing. The way of Being through which entities stand in the clearing, as technological
instrumentalities, is enframing and the way of Being of those entities is that of standing reserve.
This very brief discussion of Heidegger is important for two reasons. First, because my conception of
automobility emphasizes the spatial organization of standing reserve, which Heidegger does not treat,
and because automobility entails an empirical manifestation of man’s ordering attitude and behavior in
terms of spatial production, we recognize an already established ontological analysis from which
automobility is to be interpreted. Secondly, we have an exemplar by which we can see what is to be
done to uncover the Being that allows something to appear as that something, which is always other
than the appearing beings. Heidegger’s hermeneutics provides the possibility to claim that the solution
to the technologically induced problem of global warming is not itself something technological, if indeed
we are to open ourselves to other possible interpretational modes of Being such that other kinds of
entities would then be unconcealed. We want to free ourselves up to sustainability as a way of Being by
being open for a new way of interpretation, a new worldview, a new paradigm for living, other than
enframing, by which new kinds of entities other than those of standing reserve will show themselves
from its clearing. 3.3. Redirecting Reflection from Symptom to Source Al Gore is correct in stating that
global warming is caused by the increase of greenhouse gasses trapping infrared radiation, with CO2
being the most prevalent. In the U.S., coal burning power plants and automobiles are the chief
contributors. He also states correctly that methane and nitric oxide are also contributors to global
warming, which reach dangerous levels through industrialized orderings of farm animals, etc. All of
these involve environmental contamination, what Gore would call side-effects of technological,
industrialized society. But if we reflect on the essence of fossil fuel energy, we will be led to the way of
Being that brings the symptom of global warming to unconcealment. Global warming is a symptom of
the spatial productions of automobility manifesting the enframing that challenges nature and
transforms living-spaces of the earth into sites of energy orderings in a dialectical intensification: the
more storage of energy, the more production of auto-mobile spatiality. We want to redirect attention in
order to come to terms with the disease rather than its symptomatic manifestation.
The alternative is an affirmation of communal windmills.
Unlike wind turbines, windmills take account of their environment and do not reduce
nature to a standing reserve.
Brittan 1 Gordon G. Brittan, Jr., Department of Philosophy, Montana State University. “Wind, energy,
landscape: reconciling nature and technology,” Philosophy & Geography Vol. 4 No. 2
What, then, do I propose? A very different sort of wind turbine. A group of us have been working on its
development for the past twenty years, although in fact the idea can be traced back to Crete, where
thousands of windmills have been spinning for generations on the Lesithi Plain.37 In a very schematic
way, let me draw your attention to its main features. The design parameters are traditional—high
solidity, low r.p.m., high torque. The rotor consists of sails, furled when the wind blows hard, unfurled
when it does not. The machinery is exposed and thoroughly accessible, clear and comprehensible. All of
it can be repaired by someone with a rudimentary knowledge of electronics and mechanics, with the
sort of tools used to fix farm machinery. The generators, gear boxes, and brakes are situated at ground
level and the turbine does not require a crane for either its installation or repair, or any sort of tower.38
It is a down-wind machine and tracks easily and freely. In two words, it is a “thing” and not a “device.”39
All of Borgmann’s criteria are satisfied. Sails, of course, have a very long history. They were the first way
in which humans captured the energy of the wind. The context they supply has to do with long voyages
and the hopes and fears that attended them, with naval battles fought and races won. Long central in
human life, they are well-integrated and for this reason, among others, beautiful. Sails also allow for
engagement and skill. Anyone who has ever sailed knows what it is like to feel the power of the wind in
his hands and to take full advantage of it by shaping the sails in the right sort of way and choosing the
best angle of attack. But you do not have to have sailed to use this windmill. All that is necessary is that
you have experienced putting up a sheet to dry in the wind or have tried to fold an umbrella. How
different this experience is from holding up a toy plastic windmill, an experience significant only for
young children. A sail turbine is sensitive to the wind, turning at lower speeds, moved by it alone and
not by gears and motors, furling and unfurling as needs be. Even at top speed, it turns more slowly than
conventional wind turbines (at less than a third their rate) and is never merely a distracting blur. Even in
large arrays, the water-pumping sail machines on the Lesithi Plain have a very pleasing appearance. All
very well, but what about the efficiency and economy of the sail turbine? Whatever intrinsic
characteristics it might have, however beautiful it might be, it still has to perform. We have always been
able to generate power curves comparable to conventional turbines, with this exception, that we begin
to generate electricity at lower wind speeds.40 Our problem up to this point has been the mechanical
reliability of the turbine, principally with respect to the furling device. We think we have at long last
solved this problem. Otherwise, the cost per kilowatt-hour is projected to be somewhere in the vicinity
of $0.03, competitive with other, more conventional forms of generation. The comparatively small size
and relative simplicity of the sail turbine means that it can be locally owned and operated, one machine
at a time. Changes taking place in the American power industry have made this more feasible than ever.
Much of the early resistance to wind energy came from the utilities; in addition to the unreliability of
the turbines then available, wind energy did not very well fit the utilities’ “industrial model,” however
many efforts were made to conform to that model on the part of the wind energy companies
themselves.41 But we have entered a phase in which electrical energy is being de-regulated and decentralized, just the sort of development that Schumacher and others had in mind. It will, I believe, be
more and more possible for owners of small numbers of wind turbines, and of the co-operatives into
which I see themselves forming, to put their power on the grid, particularly since wind-generated
electricity on even the most optimistic projections will never amount to more than ten percent of the
total.
Windmills are an expressed rethinking of the world around us, promoting a shift in
environmental relations away from dominating technology towards cultivating a
sustainable life-world that reveals the interconnectedness of nature and communal
energy production
Klein 9 Lance, B.L.A. Kansas State University, A Phenomenological Interpretation of Biomimicry and its
Potential Value for Sustainable Design, Masters Thesis in Architecture 2009
In this sense, local people might be allowed to shape their built environment in a dynamic, holistic
manner, recognizing the individuality of themselves, wind, nature, and place. By adopting this approach
people rely on and gather technology as a useful tool, which they care for in their everyday lives and
establish a bodily, social, and contextually responsive relationship with technology that does not
dominate or alienate. Such an approach aligns with Relph’s perspective regarding the key role which
people should play in forming relationships towards place—a relationship grounded in an attitude of
care and concern. This attitude is what Grange has called foundational ecology—a rich,
multidimensional engagement of people, nature, and environment in a reciprocal, holistic relationship.
This attitude and approach might allow local people the opportunity to engage more deeply with wind
technology and other forms of biomimicry for ecosystem technology that move beyond a mere
replacement of conventional-dominating technology. That is not to say that the Windjammer can and
should be applied everywhere, but it does recognize the unique quality of winds and the potential of
simple small-scale, technological things to engage nature and people more deeply. In this sense, the
Windjammer exemplifies an approach which can be learned from and applied to each unique place and
ecosystem. A major issue here, which I address in the last chapter, is how local people can truly
understand their place, when so often in today’s world there is a lack of authentic relationship between
residents and the locality that is their home. The critique of Altamont and the Windjammer illustrates,
concretely, how authentic biomimicry of ecosystems engages those ecosystems more deeply and
reveals wholeness among nature, people, and environment. The critique further distinguishes between
a reductivist, efficiency-focused ecosystem technology that does not engage natural ecosystems
because it is instrument-centered and based on an attitude of dividend ecology for our human
preservation from the alternative, which is a holistic engagement with nature recognizing and revealing
the necessary interconnectedness among nature, people, and environment. It is this deeper
connectedness and attitude of care and concern regarding natural and built environments which I have
previously suggested does not exist in much of ecosystem technology. The Windjammer project begins
to describe an originary approach regarding biomimicry of ecosystems in that this approach engages the
unique characteristics and wholeness of the natural ecosystem, while at the same time allowing a
deeper, longer lasting relationship among nature, people, and technology. In contrast, the Altamont
project reminds us that no matter how efficient ecosystem technology is in reducing our consumption of
natural resources, this instrument-centered approach ignores the deeper interconnections and
wholeness of nature and demonstrates how other quantitative, efficiency-focused approaches are only
partial. As such, they are insufficient in achieving a complete sustainability. Instead, a more holistic
sustainability might replace the dividend attitude of fear and focus on efficiency with a more empathetic
attitude that fosters care and concern among people, nature, and environment. In such an originary way
of being, one moves away from replacement ecosystem technology and toward a holistic ecosystem
technology. Such a shift in attitude might move our culture away from technology’s domination of our
lives to a more appropriate role as a useful tool, thus facilitating and constituting a sustainable lifeworld.
In the following chapter, I further examine the wholeness among nature, people, and environment with
regard to process, the third theme to be drawn upon for a full emulation of nature (Benyus, 2008). In
this next chapter, I expand upon the phenomenological notion of place and describe how designers
might engage local people and environments in a process that might instill a more genuine sense of
belonging in the natural and built worlds.
Plankton DA
1NC
Coastal ecosystems are fragile, but beginning to improve
EPA 12 National Coastal Conditions Report IV, March 14 2012,
http://water.epa.gov/type/oceb/assessmonitor/nccr/upload/Final-NCCR-IV-Fact-Sheet-3-14-12.pdf
Summary of the Findings • Overall condition of the Nation’s coastal waters was fair from 2003 to 2006. •
The three indices that showed the poorest conditions throughout the U.S. were coastal habitat
condition, sediment quality, and benthic condition. • Southeastern Alaska and American Samoa received
the highest overall condition scores (5=Good). • The Great Lakes received the lowest overall condition
score (2.2=Fair to poor). • Comparison of the condition scores shows that overall condition in U.S.
coastal waters has improved slightly since NCCR I.1
FOOTNOTE BEGINS
Although the overall condition of U.S. coastal waters was rated as fair in all four reports, the score
increased slightly from 2.0 to 2.3 from NCCR I to NCCR II and III, and increased to 2.5 in NCCR IV (based
on assessments for the conterminous U.S.). When south-central Alaska and Hawaii were added to NCCR
III, the overall condition score increased from 2.3 to 2.8; Alaska has relatively pristine conditions and a
large coastal area which contributed to the increase in score. With the inclusion of southeastern Alaska,
American Samoa, Guam, and the U.S. Virgin Islands in NCCR IV, the score increased from 2.5 to 3.0.
Offshore wind turbines devastate ocean life, especially plankton
Bailey 13 (Helen, Professor at University of Maryland, Center for Environmental Science. Offshore
Wind Energy. http://www.umces.edu/cbl/wind)
The major concern is the impact of the increased noise on marine life. Noise is produced during the
construction and installation of offshore wind farms from increased boat activity in the area and
procedures such as pile-driving. The sound levels from pile-driving, when the turbine is hammered to
the seabed, are particularly high. This is potentially harmful to marine species and has been of greatest
concern to marine mammal species, such as endangered whales. The noise and vibration of construction
and operation of the wind turbines can be damaging to fish and other marine species. The effects of
noise may be immediately fatal, cause injuries, or result in short or longer term avoidance of the area
depending on the frequency and loudness of the sounds. The impact of the offshore wind turbines on
birds and bats: Risk of death from direct collisions with the rotors and the pressure effects of vortices.
There is also a risk of displacement from the area causing changes in migration routes and loss of quality
habitat. Disturbance to the seabed: Construction activities at the wind power site and the installation of
undersea cables to transmit the energy to shore can have direct effects on the seabed and sediments,
which can affect the abundance and diversity of benthic organisms. Disturbance of the seafloor may also
increase turbidity, which could affect plankton in the water column.
Plankton is key to ocean biodiversity
Burkill and Reid 10 Peter, Sir Alister Hardy Foundation for Ocean Science. Chris, University of
Plymouth. “Plankton biodiversity of the North Atlantic: changing patterns revealed by the Continuous
Plankton Recorder Survey,”
https://www.earthobservations.org/documents/cop/bi_geobon/observations/200910_changing_plankt
on_biodiversity_of_the_north_atlantic.pdf
Plankton are the community of tiny drifting creatures that form the life blood of the sea. Although
mostly microscopic in size, this belies their importance. Their abundance and biodiversity fuels marine
food-webs that produce fish, and is a major contributor to oxygen production, carbon sequestration and
global climate regulation. Changes in plankton biodiversity reflect changes in the ocean’s health and the
ecological services provided by the marine ecosystem.
Extinction
Coyne and Hoekstra, 07 (Jerry, professor in the Department of Ecology and Evolution at the
University of Chicago and Hopi, Associate Professor in the Department of Organismic and Evolutionary
Biology at Harvard University , The New Republic, “The Greatest Dying,” 9/24,
http://www.truthout.org/article/jerry-coyne-and-hopi-e-hoekstra-the-greatest-dying)
Aside from the Great Dying, there have been four other mass extinctions, all of which severely pruned life's diversity. Scientists agree that we're now in the midst of
a sixth such episode. This new one, however, is different - and, in many ways, much worse. For, unlike earlier extinctions, this one results from the work of a single
species, Homo sapiens. We are relentlessly taking over the planet, laying it to waste and eliminating most of our fellow species. Moreover, we're doing it much faster
than the mass extinctions that came before. Every
year, up to 30,000 species disappear due to human activity alone. At
this rate, we could lose half of Earth's species in this century. And, unlike with previous extinctions,
there's no hope that biodiversity will ever recover, since the cause of the decimation - us - is here to stay. To scientists, this is an
unparalleled calamity, far more severe than global warming, which is, after all, only one of many threats to biodiversity. Yet global warming gets far more press.
Why? One reason is that, while the increase in temperature is easy to document, the decrease of species is not. Biologists don't know, for example, exactly how
many species exist on Earth. Estimates range widely, from three million to more than 50 million, and that doesn't count microbes, critical (albeit invisible)
components of ecosystems. We're not certain about the rate of extinction, either; how could we be, since the vast majority of species have yet to be described?
We're even less sure how the loss of some species will affect the ecosystems in which they're embedded, since the intricate connection between organisms means
that the loss of a single species can ramify unpredictably. But we do know some things. Tropical rainforests are disappearing at a rate of 2 percent per year.
Populations of most large fish are down to only 10 percent of what they were in 1950. Many primates and all the great apes - our closest relatives - are nearly gone
from the wild. And we know that extinction and global warming act synergistically. Extinction exacerbates global warming: By burning rainforests, we're not only
polluting the atmosphere with carbon dioxide (a major greenhouse gas) but destroying the very plants that can remove this gas from the air. Conversely, global
warming increases extinction, both directly (killing corals) and indirectly (destroying the habitats of Arctic and Antarctic animals). As extinction increases, then, so
does global warming, which in turn causes more extinction - and so on, into a downward spiral of destruction. Why, exactly, should we care? Let's start with the
most celebrated case: the rainforests. Their loss will worsen global warming - raising temperatures, melting icecaps, and flooding coastal cities. And, as the forest
habitat shrinks, so begins the inevitable contact between organisms that have not evolved together, a scenario played out many times, and one that is never good.
Dreadful diseases have successfully jumped species boundaries, with humans as prime recipients. We have gotten AIDS from apes, SARS from civets, and Ebola from
fruit bats. Additional worldwide plagues from unknown microbes are a very real possibility. But it isn't just the destruction of the rainforests that should trouble us .
Healthy ecosystems the world over provide hidden services like waste disposal, nutrient cycling, soil
formation, water purification, and oxygen production. Such services are best rendered by ecosystems that are diverse. Yet, through
both intention and accident, humans have introduced exotic species that turn biodiversity into monoculture. Fast-growing zebra mussels, for example, have
outcompeted more than 15 species of native mussels in North America's Great Lakes and have damaged harbors and water-treatment plants. Native prairies are
becoming dominated by single species (often genetically homogenous) of corn or wheat. Thanks to these developments, soils will erode and become unproductive, which, along with temperature change, will diminish agricultural yields. Meanwhile, with increased pollution and runoff, as well as reduced forest cover, ecosystems
will no longer be able to purify water; and a shortage of clean water spells disaster. In many ways, oceans are the most vulnerable areas of all. As overfishing
eliminates major predators, while polluted and warming waters kill off phytoplankton, the intricate aquatic food web could collapse from both sides. Fish, on which
so many humans depend, will be a fond memory. As phytoplankton vanish, so does the ability of the oceans to absorb carbon dioxide and produce oxygen. (Half of
the oxygen we breathe is made by phytoplankton, with the rest coming from land plants.) Species extinction is also imperiling coral reefs - a major problem since
these reefs have far more than recreational value: They provide tremendous amounts of food for human populations and buffer coastlines against erosion. In fact,
the global value of "hidden" services provided by ecosystems - those services, like waste disposal, that aren't bought and sold in the marketplace - has been
estimated to be as much as $50 trillion per year, roughly equal to the gross domestic product of all countries combined. And that doesn't include tangible goods like
fish and timber.
Life as we know it would be impossible if ecosystems collapsed. Yet that is where we're heading if species
extinction continues at its current pace. Extinction also has a huge impact on medicine. Who really cares if, say, a worm in the remote swamps of French Guiana goes
extinct? Well, those who suffer from cardiovascular disease. The recent discovery of a rare South American leech has led to the isolation of a powerful enzyme that,
unlike other anticoagulants, not only prevents blood from clotting but also dissolves existing clots. And it's not just this one species of worm: Its wriggly relatives
have evolved other biomedically valuable proteins, including antistatin (a potential anticancer agent), decorsin and ornatin (platelet aggregation inhibitors), and
hirudin (another anticoagulant). Plants, too, are pharmaceutical gold mines. The bark of trees, for example, has given us quinine (the first cure for malaria), taxol (a
drug highly effective against ovarian and breast cancer), and aspirin. More than a quarter of the medicines on our pharmacy shelves were originally derived from
plants. The sap of the Madagascar periwinkle contains more than 70 useful alkaloids, including vincristine, a powerful anticancer drug that saved the life of one of
our friends. Of the roughly 250,000 plant species on Earth, fewer than 5 percent have been screened for pharmaceutical properties. Who knows what life-saving
drugs remain to be discovered? Given current extinction rates, it's estimated that we're losing one valuable drug every two years. Our arguments so far have tacitly
assumed that species are worth saving only in proportion to their economic value and their effects on our quality of life, an attitude that is strongly ingrained,
especially in Americans. That is why conservationists always base their case on an economic calculus. But we biologists know in our hearts that there are deeper and
equally compelling reasons to worry about the loss of biodiversity: namely, simple morality and intellectual values that transcend pecuniary interests. What, for
example, gives us the right to destroy other creatures? And what could be more thrilling than looking around us, seeing that we are surrounded by our evolutionary
cousins, and realizing that we all got here by the same simple process of natural selection? To biologists, and potentially everyone else, apprehending the genetic
kinship and common origin of all species is a spiritual experience - not necessarily religious, but spiritual nonetheless, for it stirs the soul. But, whether or not one is
moved by such concerns, it is certain that our future is bleak if we do nothing to stem this sixth extinction.
We are creating a world in which
exotic diseases flourish but natural medicinal cures are lost; a world in which carbon waste accumulates
while food sources dwindle; a world of sweltering heat, failing crops, and impure water. In the end, we
must accept the possibility that we ourselves are not immune to extinction. Or, if we survive, perhaps
only a few of us will remain, scratching out a grubby existence on a devastated planet. Global warming
will seem like a secondary problem when humanity finally faces the consequences of what we have
done to nature: not just another Great Dying, but perhaps the greatest dying of them all.
Disad turns the case—plankton are key to reducing CO2
Burkill and Reid 10 Peter, Sir Alister Hardy Foundation for Ocean Science. Chris, University of
Plymouth. “Plankton biodiversity of the North Atlantic: changing patterns revealed by the Continuous
Plankton Recorder Survey,”
https://www.earthobservations.org/documents/cop/bi_geobon/observations/200910_changing_plankt
on_biodiversity_of_the_north_atlantic.pdf
Secondly, the ocean’s foodweb depends crucially upon plankton, since these simple primary and
secondary producers form the functional base of all marine ecosystems. The totality of the ocean’s
primary production is estimated to be some 48 x 10^15 g C annually (Field et al 1998). This activity
carried out by microscopic phytoplankton is transferred via zooplankton to fuel the global production of
240 million metric tonnes of fish. Of this some 80 million metric tonnes of fish is harvested annually by
fishing activity. Third, but by no means least, plankton play a crucial role with their interaction with our
climate. This interaction is a two-way process. On the one hand, plankton are responsible for some 46%
of the planetary photosynthesis. This process, which involves the assimilation of CO2 and its
transformation into organic material, results in the reduction of ambient CO2. This is the first step in a
series of complex biogeochemical transformations, termed the biological pump, that involves the export
of carbon and other elements from atmosphere and surface waters into the oceans interior. The waters
of the ocean’s interior are out of contact with the atmosphere and therefore the carbon is in a transient
sink so far as climate is concerned. The typical time scale of the turnover of this transient sink is in the
order of a few thousands of years. However, the role of plankton in climate control extends much
further than the reduction of atmospheric carbon dioxide. Some common plankton taxa, typically
coccolithophores and dinoflagellates, produce dimethyl sulfoniopropionate (DMSP). DMSP is converted
to dimethyl sulphide (DMS) a volatile compound that is important in cloud formation over the ocean, via
biogeochemical transformations involving viruses, bacteria, archaea, protozoa and metazoa in the
surface ocean. This remains an active and controversial research field of how plankton communities may
be able to create their own atmospheric weather.
Case
Solvency
It’s extremely difficult to create a working industry from scratch, even with federal
incentives
Cardwell ’14 (Diane Cardwell is a Business Day reporter for The New York Times covering energy. “U.S.
Offshore Wind Farm, Made in Europe.” NYT. Jan 22, 2014. http://goo.gl/Imnmof)
MIDDLEBOROUGH, Mass. — Carl Horstmann strode around the floor of his factory here, passing welders
honing head-high metal tubes as sparks flew. He is one of a dying breed: the owner of Mass Tank, a steel
tank manufacturer in a down-at-the-heels region that was once a hub of the craft.¶ Four years ago,
having heard of plans to build a $2.6 billion wind farm off the shores of Cape Cod,
he saw opportunity. Much of the work, the developers and the politicians promised, would go to
American companies like his, in what would be the dawn of a lucrative offshore wind industry in the
United States.¶ Now, after Mr. Horstmann has spent more than $500,000, much has changed. Cape
Wind, the wind farm’s developer, won a court case over an important approval on Wednesday but is still
caught up in legal and financial wrangling and faces a tenuous future. And even if the project is
completed, most of the investment and jobs for supplying the parts will go not to American companies
like Mass Tank, but to European manufacturers.¶ Mr. Horstmann’s company lost a bid to build support
structures to a German company it had included as a partner, and last month Cape Wind completed
arrangements for other major components, including the giant blades, towers and turbines, to be built
in Denmark.¶ Those deals have provoked a strong reaction from suppliers like Mr. Horstmann, but they
also illustrate the difficulty of creating a new energy industry from scratch, even one that has financial
support from the government.¶ “We’ve seen this in other industries. We don’t have the volume and the
guaranteed market that China, for example, or some of the European countries that keep those jobs in
their countries, can provide to investors,” said Thomas A. Kochan, a professor at the Sloan School of
Management at the Massachusetts Institute of Technology. “It’s a catch-22,” he said, because without a
steady flow of projects, companies would not build plants and “therefore, we don’t get the jobs.”¶ For
Mr. Horstmann, the issue is personal. “As Americans, we are really upset that all this money is going
overseas,” he said at the factory. As a ratepayer to a utility, he added, “I’m going to be getting my
monthly bill and if Cape Wind goes through it’s going to have this premium on it.”¶ Offshore wind farms
are inherently risky ventures, requiring enormous investments not only from developers and financiers
but also from governments and, ultimately, ordinary citizens.¶ And none is riskier than Cape Wind,
whose plans call for 130 turbines slowly spinning on Horseshoe Shoal of Nantucket Sound, supplying 75
percent of the power for Cape Cod, Martha’s Vineyard and Nantucket.¶ The project has been a source of
bitter resistance since it was proposed in 2001, with opponents, who include the billionaire William Koch
as well as local fishermen and business owners, saying it would increase utility rates and spoil the
pristine view.¶ But proponents say that offshore power plants like Cape Wind are worth the gamble
because they deliver cleaner, more efficient electricity and also spur economic development.¶ As
evidence, supporters point to Europe, where billions have gone into helping companies build factories to
make, transport and install the behemoth windmills needed to harness wind and withstand conditions
miles out to sea. That has yielded dozens of offshore farms and roughly 60,000 jobs, according to
industry estimates.¶ But even there — where policies and subsidies have helped create a robust supply
chain — the upside has been fickle. On Germany’s coast, for example, an estimated $1.3 billion went into
revitalizing ports and factories to serve the industry, creating about 10,000 jobs. But demand frequently
drops off when projects stall, at times leaving factories in coastal towns like Cuxhaven, on Germany’s
North Sea, sitting idle with hundreds of workers laid off.¶ In the United States, which has yet to put a
wind farm in the water, the Interior Department is leasing sections of the ocean and the Energy
Department has handed out grants and considered loan guarantees, like one that is pending for Cape
Wind.¶ The potential economic impact of a new offshore wind industry is enormous, supporters say. The
Energy Department estimates that the Atlantic coast could support as many as 70,000 jobs by 2030.¶
Cape Wind was to be the catalyst, leading to the first 1,000 jobs, with equipment from General Electric
and other domestic suppliers.¶ But a major setback came around 2009, when G.E. decided to back away
from the offshore wind business, saying it was still too expensive to compete with land-based wind
power. In response, Cape Wind turned to Vestas and Siemens, dominant players based in Northern
Europe with factories in the United States that make onshore wind machines. In December, Siemens and
Cape Wind completed the contract, in time, executives said, for the project to qualify for a federal tax
credit valued at 30 percent of its cost.¶ Siemens plans to make the giant turbines in Denmark, though it
is arranging for some work to be done with a company based in Maine. Offshore wind development is
not yet far enough along to justify the expense of building a factory in the United States, industry
executives say. Because of their size, the turbines and support structures require different factories and
equipment, and are generally too heavy to transport over normal roads.¶ Aside from Cape Wind, there
are only two projects off the Atlantic coast that could come to fruition soon, both relatively small, with
just five turbines each: a project by Deepwater Wind, which would rise from
the waters near Rhode Island, and one by Fishermen’s Energy, near Atlantic City.¶ “It’s very difficult to
build a new factory on the back of one order,” said Mark Rodgers, Cape Wind’s chief spokesman. He said
that the original estimates of creating 600 to 1,000 jobs still held, even though those included the
manufacturing work as well. “We may have been overly conservative initially in our forecast.”¶ As for
Mass Tank — which had already agreed to lease a derelict building for its factory at the once-thriving
Quincy Shipyard in Quincy, Mass. — it lost out on the Cape Wind bid to Erndtebrücker Eisenwerk, or
EEW, a much more established German company that had sought out the work on its own and had
already developed a relationship with Cape Wind.¶ “There is an inherent risk to be in this industry and
you have to be big enough to withstand it,” said Timothy Mack, head of offshore wind development for
North America for EEW. “Mass Tank never came forward with any legitimate plan of financing.”¶ Mr.
Horstmann said he was well aware of the risks, so he lined up a team, including EEW, to help land the
project. The politicians soon came running, eager to promote the hundreds of jobs the project would
bring.¶ In 2010, Gov. Deval Patrick of Massachusetts, in a tight re-election race at the time, nudged the
deal forward and then joined Mr. Horstmann to announce it at the opening of a plant to test turbine
blades in Boston. “This agreement between Cape Wind, Mass Tank and EEW will create hundreds of
new manufacturing jobs in Massachusetts as we take the lead on offshore wind energy in the United
States,” Governor Patrick said at the time, according to a statement. “This is what our clean energy
future is all about.”¶ By April 2011, under a joint venture named East Coast Offshore Fabricators, or Eco
Fab, the partners submitted their proposal to Cape Wind in the hope of signing a $137 million
contract.¶ But Cape Wind’s president, Jim Gordon, rejected the proposal
as too expensive.¶ “Then things got quiet,” said Randy Kupferberg, Mass Tank’s chief operating officer.¶
Accounts differ over how the deal fell apart. Cape Wind expected Mass Tank to contribute or find
financing before awarding the contract, while Mass Tank needed the contract to raise the roughly $35
million or $40 million that
its plant would cost. Under those circumstances, Mr. Mack of EEW said, there was not a profitable way
to go forward.¶ Despite the disappointment, Mr. Horstmann and his team are pursuing other
possibilities. There is interest in New Jersey, they say, in their participation in a factory planned for the
Fishermen’s project. But their chance to put Mass Tank at the forefront of serving the Atlantic coast
offshore industry may have slipped through their fingers.¶ “We tried to hit a home run with this,” Mr.
Horstmann said. “And we didn’t.”
Climate Change
Offshore Wind projections wrong – carbon dioxide emission displacement is only half
of the original estimate.
Sawer 08 (Patrick Sawer is a senior reporter on The Telegraph. He previously worked as a reporter and
assistant news editor on the London Evening Standard. 10:28AM GMT 20 Dec 2008, "Promoters
overstated the environmental benefit of wind farms,"
http://www.telegraph.co.uk/earth/energy/windpower/3867232/Promoters-overstated-theenvironmental-benefit-of-wind-farms.html)
The British Wind Energy Association (BWEA) has agreed to scale down its calculation for the amount of
harmful carbon dioxide emission that can be eliminated by using wind turbines to generate electricity
instead of burning fossil fuels such as coal or gas.¶ The move is a serious setback for the advocates of
wind power, as it will be regarded as a concession that twice as many wind turbines as previously
calculated will be needed to provide the same degree of reduction in Britain's carbon emissions.¶ A wind farm
industry source admitted: "It's not ideal for us. It's the result of pressure by the anti-wind farm lobby."¶ For several years the BWEA – which lobbies on behalf of wind power firms –
claimed that electricity from wind turbines 'displaces' 860 grams of carbon dioxide emission for every
kilowatt hour of electricity generated.¶ However it has now halved that figure to 430 grams, following discussions with
the Advertising Standards Authority (ASA).¶ Hundreds of wind farms are being planned across the country, adding to the 198 onshore and offshore farms - a total of 2,389 turbines - already in operation. Another 40 farms are
currently under construction.¶ Experts have previously calculated that to help achieve the Government's aim of saving around 200 million tons of CO2 emissions by 2020 - through generating 15 per cent of the country's electricity
from wind power - would require 50,000 wind turbines.¶ But the new figure for carbon displacement means that twice as many
turbines would now be needed to save the same amount of CO2 emissions.¶ While their advocates
regard wind farms as a key part of Britain's fight against climate change, opponents argue they blight the
landscape at great financial cost while bringing little environmental benefit.¶ Dr Mike Hall, an anti-wind farm campaigner
from the Friends of Eden, Lakeland and Lunesdale Scenery group in the Lake District, said: "Every wind farm application says it will lead to a big saving
in the amount of carbon dioxide produced. This has been greatly exaggerated and the reduction in the
carbon displacement figure is a significant admission of this.¶ "As we get cleaner power stations on line, the figure will get even lower. It further backs the
argument that wind farms are one of the most inefficient and expensive ways of lowering carbon emissions."¶ Because
wind farms burn no fuel, they emit no carbon dioxide during regular running. The revised calculation for the amount of carbon emission they save has come about because the BWEA's earlier figure did not take account of recent
improvements to the technology used in conventional, fossil-fuel-burning power stations.¶ The figure of 860 grams dates back to the days of old-style coal-fired power stations. However, since the early 1990s, many of the dirty
coal-fired stations have been replaced by cleaner-burning stations, with a consequent reduction in what the industry calls the "grid average mix" figure for carbon dioxide displacement.¶ As a result, a modern 100MW coal or gas
power station is now calculated to produce half as many tonnes of carbon dioxide as its predecessor would have done.¶ The BWEA's move follows a number of rulings by the ASA against claims made by individual wind farm
promoters about the benefits their schemes would have in reducing carbon emissions.¶ In one key adjudication, the ASA ruled that a claim by Npower Renewables that a wind farm planned for the southern edge of Exmoor
National Park, in Devon, would help prevent the release of 33,000 tonnes of carbon dioxide into the atmosphere was "inaccurate and likely to mislead". This claim was based on the 860-gram figure.¶ The watchdog concluded: "We
told Npower to ensure that future carbon savings claims were based on a more representative and rigorous carbon emissions factor."¶ The ASA has now recommended that the
BWEA and generating companies use the far lower figure of 430 grams.¶ In a letter to its members, the BWEA's
head of onshore, Jan Matthiesen, said: "It was agreed to recommend to all BWEA members to use the single static
figure of 430 g CO2/kWh for the time being. The advantage is that it is well accepted and presents little
risk as it understates the true figure."¶ This is now the figure given on the BWEA's website. The organisation will also be forced to lower its claim for the total amount of carbon dioxide
emission saved by the 2,389 wind turbines currently operating around Britain.¶
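The doubling claim in the card follows directly from the halved displacement figure. A minimal arithmetic sketch, assuming only the numbers quoted above (the 860 g and 430 g CO2/kWh displacement figures and the 200-million-ton / 50,000-turbine planning benchmark); the per-turbine output is backed out from those figures rather than taken from the Telegraph article, so treat it as illustrative.

```python
# Rough check of the Sawer 08 arithmetic: halving the displacement figure
# doubles the number of turbines needed for the same CO2 savings.
OLD_DISPLACEMENT_G_PER_KWH = 860      # BWEA's original claim (from the card)
NEW_DISPLACEMENT_G_PER_KWH = 430      # ASA-recommended figure (from the card)

TARGET_SAVINGS_TONS = 200_000_000     # UK 2020 goal cited in the card
TURBINES_AT_OLD_FIGURE = 50_000       # experts' estimate under the 860 g assumption

grams_target = TARGET_SAVINGS_TONS * 1_000_000   # tons -> grams (metric; the ratio is unaffected)

# Annual generation per turbine implied by the old figure.
kwh_per_turbine = grams_target / OLD_DISPLACEMENT_G_PER_KWH / TURBINES_AT_OLD_FIGURE

# Turbines needed once the displacement figure is halved.
turbines_at_new_figure = grams_target / NEW_DISPLACEMENT_G_PER_KWH / kwh_per_turbine

print(f"Implied output per turbine: {kwh_per_turbine:,.0f} kWh/year")
print(f"Turbines needed at 430 g/kWh: {turbines_at_new_figure:,.0f}")  # roughly 100,000, i.e. double
```

Because the per-turbine output cancels out of the ratio, the factor-of-two conclusion does not depend on the backed-out generation figure.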
Environmental Injustice
The rare earth metals necessary for wind turbines cause massive pollution and
environmental injustice
IER 13 (Institute for Energy Research, “Big Wind’s Dirty Little Secret: Toxic Lakes and Radioactive Waste”,
October 23 2013,
http://instituteforenergyresearch.org/analysis/big-winds-dirty-little-secret-rare-earth-minerals/)
The wind industry promotes itself as better for the environment than traditional energy sources such as
coal and natural gas. For example, the industry claims that wind energy reduces carbon dioxide
emissions that contribute to global warming.¶ ¶ But there are many ways to skin a cat. As IER pointed
out last week, even if wind curbs CO2 emissions, wind installations injure, maim, and kill hundreds of
thousands of birds each year in clear violation of federal law. Any marginal reduction in emissions comes
at the expense of protected bird species, including bald and golden eagles. The truth is, all energy
sources impact the natural environment in some way, and life is full of necessary trade-offs. The further
truth is that affordable, abundant energy has made life for billions of people much better than it ever
was.¶ ¶ Another environmental trade-off concerns the materials necessary to construct wind turbines.
Modern wind turbines depend on rare earth minerals mined primarily from China. Unfortunately, given
federal regulations in the U.S. that restrict rare earth mineral development and China’s poor record of
environmental stewardship, the process of extracting these minerals imposes wretched environmental
and public health impacts on local communities. It’s a story Big Wind doesn’t want you to hear.¶ ¶ Rare
Earth Horrors¶ ¶ Manufacturing wind turbines is a resource-intensive process. A typical wind turbine
contains more than 8,000 different components, many of which are made from steel, cast iron, and
concrete. One such component are magnets made from neodymium and dysprosium, rare earth
minerals mined almost exclusively in China, which controls 95 percent of the world’s supply of rare earth
minerals.¶ ¶ Simon Parry from the Daily Mail traveled to Baotou, China, to see the mines, factories, and
dumping grounds associated with China’s rare-earths industry. What he found was truly haunting:¶ ¶ As
more factories sprang up, the banks grew higher, the lake grew larger and the stench and fumes grew
more overwhelming.¶ ¶ ‘It turned into a mountain that towered over us,’ says Mr Su. ‘Anything we
planted just withered, then our animals started to sicken and die.’¶ ¶ People too began to suffer. Dalahai
villagers say their teeth began to fall out, their hair turned white at unusually young ages, and they
suffered from severe skin and respiratory diseases. Children were born with soft bones and cancer rates
rocketed.¶ ¶ Official studies carried out five years ago in Dalahai village confirmed there were unusually
high rates of cancer along with high rates of osteoporosis and skin and respiratory diseases. The lake’s
radiation levels are ten times higher than in the surrounding countryside, the studies found.¶ ¶ As the
wind industry grows, these horrors will likely only get worse. Growth in the wind industry could raise
demand for neodymium by as much as 700 percent over the next 25 years, while demand for
dysprosium could increase by 2,600 percent, according to a recent MIT study. The more wind turbines
pop up in America, the more people in China are likely to suffer due to China’s policies. Or as the Daily
Mail put it, every turbine we erect contributes to “a vast man-made lake of poison in northern China.”¶ ¶
Big Wind’s Dependence on China’s “Toxic Lakes”¶ ¶ The wind industry requires an astounding amount of
rare earth minerals, primarily neodymium and dysprosium, which are key components of the magnets
used in modern wind turbines. Developed by GE in 1982, neodymium magnets are manufactured in
many shapes and sizes for numerous purposes. One of their most common uses is in the generators of
wind turbines.¶ ¶ Estimates of the exact amount of rare earth minerals in wind turbines vary, but in any
case the numbers are staggering. According to the Bulletin of Atomic Sciences, a 2 megawatt (MW) wind
turbine contains about 800 pounds of neodymium and 130 pounds of dysprosium. The MIT study cited
above estimates that a 2 MW wind turbine contains about 752 pounds of rare earth minerals.¶ ¶ To
quantify this in terms of environmental damages, consider that mining one ton of rare earth minerals
produces about one ton of radioactive waste, according to the Institute for the Analysis of Global
Security. In 2012, the U.S. added a record 13,131 MW of wind generating capacity. That means that
between 4.9 million pounds (using MIT’s estimate) and 6.1 million pounds (using the Bulletin of Atomic
Science’s estimate) of rare earths were used in wind turbines installed in 2012. It also means that
between 4.9 million and 6.1 million pounds of radioactive waste were created to make these wind
turbines.¶ ¶ For perspective, America’s nuclear industry produces between 4.4 million and 5 million
pounds of spent nuclear fuel each year. That means the U.S. wind industry may well have created more
radioactive waste last year than our entire nuclear industry produced in spent fuel. In this sense, the
nuclear industry seems to be doing more with less: nuclear energy comprised about one-fifth of
America’s electrical generation in 2012, while wind accounted for just 3.5 percent of all electricity
generated in the United States.¶ ¶ While nuclear storage remains an important issue for many U.S.
environmentalists, few are paying attention to the wind industry’s less efficient and less transparent use
of radioactive material via rare earth mineral excavation in China. The U.S. nuclear industry employs
numerous safeguards to ensure that spent nuclear fuel is stored safely. In 2010, the Obama
administration withdrew funding for Yucca Mountain, the only permanent storage site for the country’s
nuclear waste authorized by federal law. Lacking a permanent solution, nuclear energy companies have
used specially designed pools at individual reactor sites. On the other
hand, China has cut mining permits and imposed export quotas, but is only now beginning to draft rules
to prevent illegal mining and reduce pollution. America may not have a perfect solution to nuclear
storage, but it sure beats disposing of radioactive material in toxic lakes like near Baotou, China.¶ ¶ Not
only do rare earths create radioactive waste residue, but according to the Chinese Society for Rare
Earths, “one ton of calcined rare earth ore generates 9,600 to 12,000 cubic meters (339,021 to 423,776
cubic feet) of waste gas containing dust concentrate, hydrofluoric acid, sulfur dioxide, and sulfuric acid,
[and] approximately 75 cubic meters (2,649 cubic feet) of acidic wastewater.”¶ ¶ Conclusion¶ ¶ Wind
energy is not nearly as “clean” and “good for the environment” as the wind lobbyists want you to
believe. The wind industry is dependent on rare earth minerals imported from China, the procurement
of which results in staggering environmental damages. As one environmentalist told the Daily Mail,
“There’s not one step of the rare earth mining process that is not disastrous for the environment.”
That the destruction is mostly unseen and far-flung does not make it any less damaging.¶ ¶ All forms of
energy production have some environmental impact. However, it is disingenuous for wind lobbyists to
hide the impacts of their industry while highlighting the impacts of others. From illegal bird deaths to
radioactive waste, wind energy poses serious environmental risks that the wind lobby would prefer you
never know about. This makes it easier for them when arguing for more subsidies, tax credits, mandates
and government supports.
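The card's 4.9 to 6.1 million pound range is straightforward arithmetic on the per-turbine figures it quotes. A minimal sketch, assuming the numbers as given in the card (13,131 MW of capacity added in 2012, a 2 MW reference turbine, 752 lbs of rare earths per turbine on the MIT estimate, and 800 lbs of neodymium plus 130 lbs of dysprosium on the Bulletin estimate); it reproduces the card's calculation rather than verifying the underlying studies.

```python
# Reproducing the IER card's 2012 rare-earth and radioactive-waste estimates.
CAPACITY_ADDED_MW_2012 = 13_131        # U.S. wind capacity added in 2012 (per the card)
TURBINE_SIZE_MW = 2                    # reference turbine size used by both estimates

LBS_PER_TURBINE_MIT = 752              # MIT estimate: rare earths per 2 MW turbine
LBS_PER_TURBINE_BULLETIN = 800 + 130   # Bulletin estimate: neodymium + dysprosium

turbine_equivalents = CAPACITY_ADDED_MW_2012 / TURBINE_SIZE_MW

low_estimate_lbs = turbine_equivalents * LBS_PER_TURBINE_MIT        # ~4.9 million lbs
high_estimate_lbs = turbine_equivalents * LBS_PER_TURBINE_BULLETIN  # ~6.1 million lbs

print(f"Rare earths, MIT basis:      {low_estimate_lbs:,.0f} lbs")
print(f"Rare earths, Bulletin basis: {high_estimate_lbs:,.0f} lbs")

# The card assumes roughly one ton of radioactive waste per ton of rare earths mined,
# so the same pound range doubles as its waste estimate.
```

The nuclear comparison in the card then sets this range against the 4.4 to 5 million pounds of spent fuel it attributes to the U.S. nuclear fleet per year.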
Rubbish 1NC (Policy)
Plastics DA
US plastic industry is on a robust upturn and demand is increasing
American Chemistry Council 14 (American Chemistry Council
http://www.americanchemistry.com/Jobs/EconomicStatistics/Plastics-Statistics/Year-in-Review.pdf)
The United States plastics resins industry continued its growth trend in 2013. According to the American ¶ ¶
Chemistry Council (ACC) Plastics Industry Producers’ Statistics (PIPS) Group, U.S. resin production ¶ ¶ increased 1.5 percent to 107.5 billion
pounds in 2013, up from 105.9 billion pounds in 2012. Total sales ¶ ¶ for the year increased 1.5 percent to 108.7 billion pounds in 2013, up
from 107.0 billion pounds in 2012. ¶ ¶ ¶ ¶ The Economic Environment ¶ ¶ ¶ ¶ The global economic environment has remained challenging,
preventing the U.S. from escaping its ¶ ¶ persistently slow growth pattern. Business and consumer confidence was still wavering in 2013,
putting a ¶ ¶ damper on investment growth and hiring as well as consumer spending. There was general weakness in ¶ ¶ manufacturing, cuts
in U.S. federal spending, periods of uncertainty, and softness in demand at home ¶ ¶ and abroad. As a result, the U.S. economy grew only 1.9
percent in 2013. However, improvements are ¶ ¶ emerging and the fundamentals are in place for moderate growth in the coming year. The
U.S. ¶ ¶ employment situation is steadily improving and recovery continues in the housing market. As incomes ¶ ¶ and real earnings rise and
household assets strengthen, American consumers will be better positioned to ¶ ¶ spend. Domestic
demand will build, driving
the U.S. economy and in turn, bolstering global growth. ¶ ¶ Indeed, after just 2 percent growth in 2013, global economic
growth is set to expand and this will translate ¶ ¶ to acceleration in foreign demand for North American plastics. ¶ ¶ ¶ ¶ North American
manufacturing—and the chemical and plastics industries in particular—has the stage set ¶ ¶ for robust
performance. North American producers, with access to abundant supplies of competitively ¶ ¶ priced energy and feedstock, are
presented with renewed opportunity. The U.S. manufacturing sector, ¶ ¶ which represents the primary customer base for resins, is pulling out
of a soft patch. Manufacturing ¶ ¶ growth slowed in 2013 largely due to the federal government sequester and to weakness in major export ¶
¶ markets. However, the surge in unconventional oil and gas development is creating both demand side ¶ ¶ (e.g., pipe mills, oilfield machinery)
and supply-side (e.g., chemicals, fertilizers, direct iron reduction) ¶ ¶ opportunities. Indeed, the
enhanced competitive position
with regard to feedstock costs will support U.S. chemical industry production going forward, with particular
strength in plastic resins. ¶ ¶ ¶ ¶ Trends in Customer Industries ¶ ¶ ¶ ¶ Although the demand for plastics is ultimately tied to overall
economic growth, plastic resins are used in a ¶ ¶ variety of end-use markets. A discussion of performance in some of the most important enduse markets ¶ ¶ for resins follows. ¶ ¶ ¶ ¶ Packaging is the largest market for plastic resins and historically, packaging resin use has been ¶ ¶
correlated with “real” retail sales, i.e., retail sales adjusted for inflation. According to data from the Bureau ¶ ¶ of the Census and Bureau of
Labor Statistics, real retail sales grew 2.8 percent in 2013, following a ¶ ¶ similar 2.9 percent gain in 2012. Consumer spending appeared to be
accelerating towards the end of the ¶ ¶ year and this trend is expected to continue as the job market recovers and household wealth advances.
¶ ¶ According to Statistics Canada, the Canadian retail sector increased 2.5 percent in 2013 after a 2.5 ¶ ¶ percent gain in 2012. As a result,
output of the North American retail sector experienced a 2.8 percent ¶ ¶ gain in 2013. Packaging industry output for the region expanded in
2013 after having contracted in 2012.
¶ ¶ Building and construction represents an important market for plastic resins. The housing market ¶ ¶ continues to recover in the U.S. and
housing starts were up 19 percent in 2013. While starts are still off ¶ ¶ 55 percent from their 2005 peak of 2.07 million units, they have grown
consistently for the last four years. ¶ ¶ In the U.S., housing starts increased from 783,000 units in 2012 to 931,000 units in 2013. Residential ¶ ¶
projects were the drivers for private construction spending. Public and private non-residential construction ¶ ¶ spending both declined in 2013.
In Canada, housing starts fell 13 percent from 215,000 units in 2012 to ¶ ¶ 188,000 units in 2013. The Canadian construction industry grew only
marginally in 2013, reflecting a ¶ ¶ decline in residential construction offset by growth in non-residential projects. Overall North American ¶ ¶
construction activity grew by 4.0 percent in 2013. ¶ ¶ ¶ ¶ Transportation is another significant market for plastic resins. Light vehicle sales
continued to strengthen ¶ ¶ in the U.S., rising from 14.4 million units in 2012 to 15.5 million units in 2013. Improvements in American ¶ ¶
incomes and the ability to take on more debt combined with pent-up demand will encourage continuation ¶ ¶ of a positive trend in vehicle
sales. Canadian light vehicle sales also increased (from 1.72 million units in ¶ ¶ 2012 to 1.78 million units in 2013). According to the U.S. Federal
Reserve Board, production of motor ¶ ¶ vehicles and parts in the U.S. increased 6.8 percent in 2013. This increase follows strong growth in the
¶ ¶ several years since the Great Recession (production increased 17.4 percent in 2012, 9.0 percent in 2011 ¶ ¶ and 32.7 percent in 2010). In
Canada, production of motor vehicles and parts fell in 2013 after a gain in ¶ ¶ 2012. Overall North American production of motor vehicles and
parts increased 5.6 percent in 2013. ¶ ¶ ¶ ¶ Another important plastics market is that for electrical and electronics, much of which is centered
in ¶ ¶ appliances. In the U.S. and in Canada, the appliance industry’s output volume grew 7.1 percent, marking ¶ ¶ the first positive year-overyear comparison since 2004. This is a good sign and reflects recovery in the ¶ ¶ housing market that was finally taking hold. Appliance
production had moderated in recent years and, ¶ ¶ although it’s tied to the health of housing, it also reflected some appliance production that
has shifted to ¶ ¶ low-cost manufacturing countries. Much of the production that has left the U.S. has gone to Mexico and ¶ ¶ resin suppliers in
the U.S. and Canada serve this nearby market. Production of both electronic products ¶ ¶ and other electrical equipment increased in 2013,
extending the trend of positive growth to four years. In ¶ ¶ 2013, production of computers and electronic products in North America rose 2.9
percent while ¶ ¶ production of other electrical equipment rose 4.6 percent. ¶ ¶ ¶ ¶ Furniture and furnishings represent a key market for
plastics. The North American furniture industry, ¶ ¶ also tied to the health of the housing market, grew 4.4 percent in 2013, marking the third
year of ¶ ¶ consecutive growth. In the U.S., production in the furniture industry increased 4.6 percent, and in Canada, ¶ ¶ output grew 2.6
percent. North American production of carpeting and other textile furnishings contracted ¶ ¶ in 2013 after a small gain in 2012. The trend in
these markets should accelerate with the improvements in ¶ ¶ the housing market though this connection will likely be more pronounced in
furniture production. ¶ ¶ ¶ ¶ Industrial machinery represents another important market, one aided by increased business investment ¶ ¶
needed to enhance competitiveness, and to expand capacity, both in North America and in rapidly ¶ ¶ growing emerging markets. North
American production of industrial machinery rose 2.8 percent in 2013. ¶ ¶ Growth in this market has been hampered as businesses continue to
face uncertainty affecting their ¶ ¶ capital investment decisions. ¶ ¶ ¶ ¶ The previous discussion examines the primary end-use markets which
ultimately drive demand. The ¶ ¶ plastics products industry (NAICS 3261) is the key immediate customer industry for plastic resins. In ¶ ¶ turn,
this industry supplies these important end-use markets. During 2013, North American plastics ¶ ¶ products production rose 5.9 percent,
reflecting improving demand among the end-use markets and the ¶ ¶ competiveness of American producers. There have been improvements
in trade flows as well. Following a ¶ ¶ contraction in 2009, North American trade in plastic products recovered and has grown steadily. Both
¶ ¶ imports and exports of plastic products and other finished goods incorporating plastics resins continued
to ¶ ¶ expand in 2013 and the pace of growth in exports has surpassed that of imports. North American exports ¶ ¶ of
plastics products grew to $15.5 billion in 2013. ¶ ¶ ¶ ¶ The economic outlook for the North American resins industry is quite
optimistic. For the most part, ¶ ¶ demand from domestic customer industries is strengthening and foreign
demand is expected to improve ¶ ¶ as well. This positive outlook is driven by the emergence of the U.S. as the venue for chemicals
¶ ¶ investment. With the development of shale gas and the surge in natural gas liquids supply, the U.S. ¶ ¶ moved from being a high-cost
producer of key petrochemicals and resins to among the lowest-cost producers globally. This shift boosted export ¶ ¶ demand and drove significant flows of new capital investment
toward the U.S. As of early 2014, nearly ¶ ¶ 149 projects have been announced with investments totaling more than $100 billion through 2023.
Aff crushes plastics industry—Its profitability depends on consumers remaining
ignorant about the impacts of ocean waste
Boyle 2011 [7/31 Lisa, environmental lawyer, “Plastic And The Great Recycling Swindle”
http://www.huffingtonpost.com/lisa-kaas-boyle/plastics-industry-markets_b_912503.html]
Every day, disposable plastics (bottles, bags, packaging, utensils, etc.) are thrown away in huge quantities after one use,
but they will last virtually forever. Globally we make 300 million tons of plastic waste each year. Disposable plastics are the largest component
of ocean pollution. While Fresh Kills Landfill in New York was once known as the planet's largest man-made structure, with a volume greater
than the Great Wall of China and a height exceeding the Statue of Liberty, our oceans are now known to contain the world's largest dumps.
These unintended landfills in our seas may cover millions of square miles and are composed of plastic waste fragments, circling the natural
vortexes of the oceans like plastic confetti being flushed in giant toilets.
Plastics are made from petroleum; there is less and less available, and we are going to tragic lengths to get at it as evidenced in oil spills around
the globe with loss of life and habitat. Should we be risking life and limb for single use-bags and plastic bottles that can easily be replaced with
sustainable alternatives? Should
we be risking our food chain as plastic fragments become more plentiful than
plankton in our oceans? Should we be exposing our fetuses, babies and children to the endocrine disrupting chemicals that leach out
of plastic food containers into our food and drink? These questions and their answers are exactly what the plastics
lobby wants you to avoid.
Plastic Industry Tactics: Aggression and Distraction
The Plastics Industry has been forced into a new position in order to preserve its global market. It is no
longer enough to pitch affordability and convenience of their products when consumers are concerned about being poisoned by
the chemicals in plastics and are tired of seeing more plastic bags than flowers on the roadside.
Every legislative restriction on plastics defeated by the industry and every consumer mollified into
believing that using disposable plastics is a sustainable practice means the continuation of enormous
global profits for industry. The petrochemical BPA, a hardening agent used in plastics that was developed first as a synthetic estrogen,
alone generates 6 billion dollars in sales for the American petrochemical industry. As preeminent endocrine researcher Dr. Frederick Vom Saal
observed: "If information [about toxics in plastic] had been known at the time that this chemical was first put into commerce, it would not have
been put into commerce.... but because it already is in commerce, and chemical industries
have a huge stake in maintaining
their market share using this chemical, how do they now respond to evidence that it really is not a chemical that you would want your
baby to be exposed to? [The industry] is still in the attack phase."
Plastics key to US economic growth—the Green Chemistry Industry revolution proves
Bienkowski ‘12 (Brian Staff Writer for Environmental Health News
http://www.environmentalhealthnews.org/ehs/news/2012/chemical-plastics-industry-drives-economy July 24, 2012)
CHICAGO, Ill. – The chemical and plastics industry is a leading force in economic growth that is helping U.S. cities bounce
back from the recession, according to a new study commissioned by the U.S. Conference of Mayors. The report paints a rosy picture of
economic growth and credits the manufacturing of plastics and chemicals with spurring a surge in jobs,
exports and research in many cities across the country. Behind the industry’s role as a growing economic force are rock-bottom natural gas
prices, largely due to technologies allowing extractors to tap into new reserves. Natural gas fuels most U.S. chemical processes. Chemical
companies are investing money into places as diverse as the Gulf of Mexico and Pittsburgh – wherever the gas is, according to the study
conducted by IHS Global Insight, a Colorado-based industry analytics company that focuses on energy issues. The report cites
2 to 4
percent job growth in the chemical and plastics industry in some large cities including Minneapolis, Los Angeles,
San Diego, Dallas and Milwaukee from 2010 to 2011. Smaller metro areas such as Warren, Mich., Spokane, Wash., Greeley, Colo.,
Gadsden, Ala., Janesville, Wis. and Alexandria, La., have seen more than 10 percent growth in the industry's employment. However, some
major cities, including Chicago, New York and Philadelphia, had a small decrease, the study says. Robert Atkinson, president of the Information
Technology and Innovation Foundation, a non-partisan economic think tank in Washington, D.C., said the chemical industry historically has
been strong in the U.S. compared to other industries and that this growth could continue to boost the economy. “Chemicals are a stable
industry ... partly because you have higher fixed costs, you don’t just walk away from a chemical plant,” said Atkinson, who did not participate
in the study. The report, which was prepared for a mayors' conference held last week in Philadelphia, predicts the U.S. economy will continue
to improve through the end of 2012, anticipating job growth of 1.4 percent and unemployment to fall to 8 percent.
The metropolitan Chicago area has the highest employment in chemical and plastics
with 43,346 jobs, just above the Houston area at 42,834. Twenty-eight metropolitan areas have more than 10,000 people working in the
industry and 206 metro areas have more than 1,000, according to the report. Tracey Easthope, environmental health director at the Ecology
Center in Ann Arbor, Mich., said she hopes this trend will carry over into “green chemistry,” the design of chemicals and industrial processes
that are non-toxic and environmentally sound. “While green chemistry is growing, it’s still a relatively small proportion,” Easthope said. “As we
keep increasing domestic manufacturing of chemicals, it’s important that both the chemicals and production be more sustainable.” The report
did not mention how much of the growth was “green,” but it is typically a drop in the bucket. According to a 2011 report by Pike Research,
green chemistry was a $2.8 billion industry, compared with the $4-trillion global chemical industry. In the U.S. alone, the chemical industry is a
$760 billion enterprise, according to the American Chemistry Council. The report, however, predicted the green
chemical industry
would grow to $98.5 billion by 2020. Manufacturers of chemicals and plastics have been under fire recently, with scientists
linking many high-volume synthetic compounds – including flame retardants, plasticizers such as bisphenol A and phthalates, pesticides and
Teflon ingredients -- to a variety of health threats. The chemical industry has been able to grow in recent years because natural gas prices have
dropped dramatically. According to the U.S. Energy Information Administration, natural gas was $1.89 per thousand cubic feet (not including
transportation costs) in April 2012, down from $10.79 in July 2008. The combination of horizontal drilling and hydraulic fracturing, known as
fracking, led to the price drop, as shale gas production grew 48 percent from 2006 to 2010, according to U.S. Department of Energy estimates.
And when the prices dropped, out came the businesses that rely on cheap energy. “Four or five years ago manufacturers were bemoaning the
high prices of natural gas in the U.S., and they were going elsewhere,” Atkinson said. “Now you’re hearing something a lot different as the low
natural gas prices are driving their ability to be productive.” At the same time, concerns over the environmental safety of fracking are being
raised across the nation. From worries about water pollution in western Pennsylvania to
cancer rate concerns in north Texas, many communities have expressed unease with the nascent practice of injecting chemicals into the ground
near drinking water. Multiple towns have enacted moratoriums on fracking pending more research. Industry officials say the practice is not a
threat to drinking water and that natural gas burns much cleaner than coal. The cheap natural gas also could reduce incentives for
manufacturers to find replacements for fossil fuels. Cutting energy use and switching to renewable resources when available are two of the
principles of green chemistry. But Easthope cited low natural gas prices as an opportunity for companies to develop environmentally
sustainable chemicals and plastics. “This is a big chance to ramp up innovation,” she said. She said coupled with the lower costs of handling
hazardous materials, this could make green chemicals more competitive. Atkinson said the low natural gas prices are here to stay. “This is a
long-term, structural change in our energy supply,” he said. “With these new technologies like horizontal drilling, they’re bringing online a lot
more natural gas than we ever thought was available…These are not artificially low prices.” But Atkinson said it’s going to take more than just
low energy prices to keep the chemical and plastics industry driving growth. “It’s
innovation that’s going to sustain growth,
and there’s a fair amount in those industries right now,” he said. He said it's important to keep putting money into
research, and to use the low energy costs to constantly reinvent the industry. Atkinson pointed to plastics that conduct
electricity as a recent example of the industry pushing forward. “It’s not like they’re just cranking out a bunch of
plastic bottles," he said.
Economic decline triggers nuclear war
Harris and Burrows 9
(Mathew, PhD European History at Cambridge, counselor in the National Intelligence Council (NIC) and Jennifer,
member of the NIC’s Long Range Analysis Unit “Revisiting the Future: Geopolitical Effects of the Financial Crisis”
http://www.ciaonet.org/journals/twq/v32i2/f_0016178_13952.pdf)
Increased Potential for Global Conflict Of course, the report encompasses more than economics and indeed believes the future is likely to be
the result of a number of intersecting and interlocking forces. With so many possible permutations of outcomes, each with ample
opportunity for unintended consequences, there is a growing sense of insecurity. Even so, history may be more instructive than ever.
While we continue to believe that the
Great Depression is not likely to be repeated, the lessons to be drawn from that period
include the harmful effects on fledgling democracies and multiethnic societies (think Central Europe in 1920s
and 1930s) and on the sustainability of multilateral institutions (think League of Nations in the same period). There is no reason to think that
this would not be true in the twenty-first as much as in the twentieth century. For that reason, the ways in which the
potential for
greater conflict could grow would seem to be even more apt in a constantly volatile economic environment as they
would be if change would be steadier. In surveying those risks, the report stressed the likelihood that terrorism and nonproliferation will
remain priorities even as resource issues move up on the international agenda. Terrorism’s appeal will decline if economic growth continues in
the Middle East and youth unemployment is reduced. For those terrorist groups that remain active in 2025, however, the diffusion of
technologies and scientific knowledge will place some of the world’s most dangerous capabilities within their reach. Terrorist groups in
2025 will likely be a combination of descendants of long established groups inheriting organizational structures, command and control
processes, and training procedures necessary to conduct sophisticated attacks and newly emergent collections of the angry and
disenfranchised that become
self-radicalized, particularly in the absence of economic outlets that would become narrower in an
economic downturn. The most dangerous casualty of any economically-induced drawdown of U.S.
military presence would almost certainly be the Middle East. Although Iran’s acquisition of nuclear weapons is not inevitable,
worries about a nuclear-armed Iran could lead states in the region to develop new security arrangements with
external powers, acquire additional weapons, and consider pursuing their own nuclear ambitions. It is not clear that the type of
stable deterrent relationship that existed between the great powers for most of the Cold War would emerge naturally in the Middle East with a
nuclear Iran. Episodes of low intensity conflict and terrorism taking place under a nuclear umbrella could
lead to an unintended
escalation and broader conflict if clear red lines between those states involved are not well established. The close proximity of
potential nuclear rivals combined with underdeveloped surveillance capabilities and mobile dual-capable Iranian missile systems also
will produce inherent difficulties in achieving reliable indications and warning of an impending nuclear attack. The lack
of strategic depth in neighboring states like Israel, short warning and missile flight times, and uncertainty of Iranian intentions may
place more focus on preemption rather than defense, potentially leading to escalating crises. 36 Types of conflict
that the world continues to experience, such as over resources, could reemerge, particularly if protectionism grows
and there is a resort to neo-mercantilist practices. Perceptions of renewed energy scarcity will drive countries to take actions to assure their
future access to energy supplies. In the worst case, this could result in interstate conflicts if government leaders deem assured access to energy
resources, for example, to be essential for maintaining domestic stability and the survival of their regime. Even actions short of war, however,
will have important geopolitical implications. Maritime security concerns are providing a rationale for naval buildups and modernization efforts,
such as China’s and India’s development of blue water naval capabilities. If the fiscal stimulus focus for these countries indeed turns inward,
one of the most obvious funding targets may be military. Buildup of regional naval capabilities could lead to increased tensions, rivalries, and
counterbalancing moves, but it also will create opportunities for multinational cooperation in protecting critical sea lanes. With water also
becoming scarcer in Asia and the Middle East, cooperation to manage changing water resources is likely to be increasingly difficult both within
and between states in a more dog-eat-dog world.
T
1. Interpretation: ocean exploration is search for the purpose of discovery and
excludes survey and at-sea research
NOAA Science Advisory Board 12 Panel Chair is Jesse Ausubel, Director, Program for the Human
Environment, The Rockefeller University, Member President’s Panel on Ocean Exploration (2000),
member of Ocean Exploration Advisory Working Group to NOAA’s Science Advisory Board. “Ocean
Exploration’s Second Decade,”
http://www.sab.noaa.gov/Working_Groups/docs/OER%20review%20report%20from%20SAB_FiNAL_5%
20updated%2003_26_13.pdf
The present Panel affirms the brief definition of exploration of the 2000 Panel: Exploration is the
systematic search and investigation for the initial purpose of discovery and the more elaborated
definition of the US Navy: Systematic examination for the purposes of discovery;
cataloging/documenting what one finds; boldly going where no one has gone before; providing an initial
knowledge base for hypothesis-based science and for exploitation.
The Panel affirms that Ocean Exploration is distinct from comprehensive surveys (such as those carried out by
NAVOCEANO and NOAA Corps) and at-sea research (sponsored by National Science Foundation, Office of Naval Research, and other
agencies), including hypothesis-driven investigations aimed at the ocean bottom, artifacts, water column, and
marine life.
2. Violation: Monitoring is continual surveying and only magnifies the link
Naylor 13 Anna S.R., Masters of Marine Management from Dalhousie University. Integrated Ocean
management: Making local global: the role of monitoring in reaching national and international
commitments. August 2013
http://dalspace.library.dal.ca/bitstream/handle/10222/37034/Naylor,%20A%20%20Graduate_Project2013.pdf?sequence=1
Monitoring, in the broadest sense, is defined as the routine measurement of chosen indicators to understand
the condition and trends of the various components of an ecosystem (Bisbal, 2001). It is an important part of any
policy as it allows for two parts. First, it allows for a community or government to monitor the changing state and resiliency of
the relevant coastal and marine systems. This includes both the biophysical components as well as the
human dimensions (Kearney et al., 2007). Second, it also allows managers to assess the extent to which said policy is working in practice
at the various levels (local or national). To be able to properly monitor ocean and coastal policies, objectives and goals need to be clearly
defined so developing and utilizing appropriate indicators can be used to track changes over time.
3. Voting Issue:
A. Limits—their aff opens the potential for monitoring any activity, quality, and
variable of the ocean, which makes research impossible and splinters any predictable
literature base. A strict interpretation of the mechanism “exploration” is crucial since
the geographic breadth of the topic is already huge.
B. Ground—exploration beyond discovery artificially inflates aff ground by allowing
them to claim spotlighting and monitoring advantages that are not germane to ocean
discovery.
Framework
1. Interpretation: your decision should respond to the question posed by the
resolution: Is a substantial increase in non-military development and/or exploration of
the Earth’s oceans by the United States better than the status quo or a competitive
option?
A. The resolution calls for debate on hypothetical government action
Ericson, 3 (Jon M., Dean Emeritus of the College of Liberal Arts – California Polytechnic U., et al., The
Debater’s Guide, Third Edition, p. 4)
The Proposition of Policy: Urging Future Action In
policy propositions, each topic contains certain key elements,
although they have slightly different functions from comparable elements of value-oriented propositions. 1. An agent doing the
acting ---“The United States” in “The United States should adopt a policy of free trade.” Like the object of evaluation in a
proposition of value, the agent is the subject of the sentence. 2. The verb should—the first part of a verb phrase that
urges action. 3. An action verb to follow should in the should-verb combination. For example, should adopt here means to put a
program or policy into action through governmental means. 4. A specification of directions or a limitation
of the action desired. The phrase free trade, for example, gives direction and limits to the topic, which would, for
example, eliminate consideration of increasing tariffs, discussing diplomatic recognition, or discussing interstate commerce. Propositions
of policy deal with future action. Nothing has yet occurred. The entire debate is about whether
something ought to occur. What you agree to do, then, when you accept the affirmative side in such a
debate is to offer sufficient and compelling reasons for an audience to perform the future action that
you propose.
B. The word “Resolved” before the colon reflects a legislative forum
Army Officer School ‘04(5-12, “# 12, Punctuation – The Colon and Semicolon”,
http://usawocc.army.mil/IMI/wg12.htm)
The colon introduces the following: a.
A list, but only after "as follows," "the following," or a noun for which the list is an appositive:
Each scout will carry the following: (colon) meals for three days, a survival knife, and his sleeping bag. The company had four new officers:
(colon) Bill Smith, Frank Tucker, Peter Fillmore, and Oliver Lewis. b. A long quotation (one or more paragraphs): In The Killer Angels Michael
Shaara wrote: (colon) You may find it a different story from the one you learned in school. There have been many versions of that battle
[Gettysburg] and that war [the Civil War]. (The quote continues for two more paragraphs.) c. A formal quotation or question: The President
declared: (colon) "The only thing we have to fear is fear itself." The question is: (colon) what can we do about it? d. A second independent
clause which explains the first: Potter's motive is clear: (colon) he wants the assignment. e. After the introduction of a business letter: Dear Sirs:
(colon) Dear Madam: (colon) f. The details following an announcement For sale: (colon) large lakeside cabin with dock g. A
formal
resolution, after the word "resolved:" Resolved: (colon) That this council petition the mayor.
2. Violation: They claim solvency off of their ontological critique of nature-culture
dualism in waste disposal practices, not through statutory action.
3. Vote Negative—
A. Decision Making
Debate over a controversial point of action creates argumentative stasis—that’s key
to avoid a devolution of debate into competing truth claims, which destroys the
decision-making benefits of the activity
Steinberg and Freeley ‘13
David, Director of Debate at U Miami, Former President of CEDA, officer, American Forensic Association
and National Communication Association. Lecturer in Communication studies and rhetoric. Advisor to
Miami Urban Debate League, Masters in Communication, and Austin, JD, Suffolk University, attorney
who focuses on criminal, personal injury and civil rights law, Argumentation and Debate:
Critical Thinking for Reasoned Decision Making, Thirteenth Edition
Debate is a means of settling differences, so there must be a controversy, a difference of opinion or a conflict of
interest before there can be a debate. If everyone is in agreement on a fact or value or policy, there is no need or opportunity for
debate; the matter can be settled by unanimous consent. Thus, for example, it would be pointless to attempt to debate
"Resolved: That two plus two equals four,” because there is simply no controversy about this statement. Controversy is
an essential prerequisite of debate. Where there is no clash of ideas, proposals, interests, or expressed positions of issues, there
is no debate. Controversy invites decisive choice between competing positions. Debate cannot produce
effective decisions without clear identification of a question or questions to be answered. For example,
general argument may occur about the broad topic of illegal immigration. How many illegal immigrants live in the United
States? What is the impact of illegal immigration and immigrants on our economy? What is their impact on our communities? Do they commit
crimes? Do they take jobs from American workers? Do they pay taxes? Do they require social services? Is it a problem that some do not speak
English? Is it the responsibility of employers to discourage illegal immigration by not hiring undocumented workers? Should they have the
opportunity to gain citizenship? Does illegal immigration pose a security threat to our country? Do illegal immigrants do work that American
workers are unwilling to do? Are their rights as workers and as human beings at risk due to their status? Are they abused by employers, law
enforcement, housing, and businesses? How are their families impacted by their status? What is the moral and philosophical obligation of a
nation state to maintain its borders? Should we build a wall on the Mexican border, establish a national identification card, or enforce existing
laws against employers? Should we invite immigrants to become U.S. citizens? Surely you can think of many more concerns to be addressed by
a conversation about the topic area of illegal immigration. Participation
in this “debate” is likely to be emotional and
intense. However, it is not likely to be productive or useful without focus on a particular question and
identification of a line demarcating sides in the controversy. To be discussed and resolved effectively, controversies
are best understood when seated clearly such that all parties to the debate share an understanding about the
objective of the debate. This enables focus on substantive and objectively identifiable issues facilitating
comparison of competing argumentation leading to effective decisions. Vague understanding results in unfocused
deliberation and poor decisions, general feelings of tension without opportunity for resolution, frustration,
and emotional distress, as evidenced by the failure of the U.S. Congress to make substantial progress on the immigration debate. Of
course, arguments may be presented without disagreement. For example, claims are presented and supported within speeches, editorials, and
advertisements even without opposing or refutational response. Argumentation occurs in a range of settings from informal to formal, and may
not call upon an audience or judge to make a forced choice among competing claims. Informal discourse occurs as conversation or panel
discussion without demanding a decision about a dichotomous or yes/no question. However, by
definition, debate requires
"reasoned judgment on a proposition. The proposition is a statement about which competing advocates
will offer alternative (pro or con) argumentation calling upon their audience or adjudicator to decide. The
proposition provides focus for the discourse and guides the decision process. Even when a decision will
be made through a process of compromise, it is important to identify the beginning positions of competing
advocates to begin negotiation and movement toward a center, or consensus position. It is frustrating and usually
unproductive to attempt to make a decision when deciders are unclear as to what the decision is about.
The proposition may be implicit in some applied debates (“Vote for me!”); however, when a vote or consequential decision is called for (as in
the courtroom or in applied parliamentary debate) it is essential that the proposition be explicitly expressed (“the defendant is guilty!”). In
academic debate, the proposition provides essential guidance for the preparation of the debaters prior
to the debate, the case building and discourse presented during the debate, and the decision to be made
by the debate judge after the debate. Someone disturbed by the problem of a growing underclass of poorly educated,
socially disenfranchised youths might observe, “Public schools are doing a terrible job! They are overcrowded, and
many teachers are poorly qualified in their subject areas. Even the best teachers can do little more than struggle to maintain order in their
classrooms." That
same concerned citizen, facing a complex range of issues, might arrive at an unhelpful decision,
such as "We ought to do something about this” or, worse, “It’s too complicated a problem to deal with." Groups of concerned
citizens worried about the state of public education could join together to express their frustrations, anger, disillusionment, and emotions
regarding the schools, but without a focus for their discussions, they could easily agree about the sorry state of education without finding
points of clarity or potential solutions. A
gripe session would follow. But if a precise question is posed—such as “What
can be done to improve public education?”—then a more profitable area of discussion is opened up simply by
placing a focus on the search for a concrete solution step. One or more judgments can be phrased in the form of debate
propositions, motions for parliamentary debate, or bills for legislative assemblies. The statements "Resolved: That the federal
government should implement a program of charter schools in at-risk communities” and “Resolved; That the
state of Florida should adopt a school voucher program" more clearly identify specific ways of dealing with educational
problems in a manageable form, suitable for debate. They provide specific policies to be investigated and aid discussants in
identifying points of difference. This focus contributes to better and more informed decision making with the
potential for better results. In academic debate, it provides better depth of argumentation and enhanced
opportunity for reaping the educational benefits of participation. In the next section, we will consider the challenge of
framing the proposition for debate, and its role in the debate. To have a productive debate, which facilitates effective
decision making by directing and placing limits on the decision to be made, the basis for argument
should be clearly defined. If we merely talk about a topic, such as “homelessness,” or “abortion,” or
“crime,” or “global warming,” we are likely to have an interesting discussion but not to establish a profitable
basis for argument. For example, the statement “Resolved: That the pen is mightier than the sword” is debatable, yet by itself fails to
provide much basis for clear argumentation. If we take this statement to mean that the written word is more effective than physical force for
some purposes, we can identify a problem area: the comparative effectiveness of writing or physical force for a specific purpose, perhaps
promoting positive social change. (Note that “loose” propositions, such as the example above, may be defined by their advocates in such a way
as to facilitate a clear contrast of competing sides; through definitions and debate they “become” clearly understood statements even though
they may not begin as such. There are formats for debate that often begin with this sort of proposition. However, in
any debate, at
some point, effective and meaningful discussion relies on identification of a clearly stated or understood
proposition.) Back to the example of the written word versus physical force. Although we now have a general subject, we have not
yet stated a problem. It is still too broad, too loosely worded to promote well-organized argument. What sort of writing are we
concerned with—poems, novels, government documents, website development, advertising, cyber-warfare, disinformation, or what? What
does it mean to be “mightier" in this context? What kind of physical force is being compared—fists, dueling swords, bazookas, nuclear
weapons, or what? A more specific question might be, “Would a mutual defense treaty or a visit by our fleet be more effective in assuring
Laurania of our support in a certain crisis?” The basis for argument could be phrased in a debate proposition such as “Resolved: That the United
States should enter into a mutual defense treaty with Laurania.” Negative advocates might oppose this proposition by arguing that fleet
maneuvers would be a better solution. This
is not to say that debates should completely avoid creative
interpretation of the controversy by advocates, or that good debates cannot occur over competing interpretations of the
controversy; in fact, these sorts of debates may be very engaging. The point is that debate is best facilitated by the
guidance provided by focus on a particular point of difference, which will be outlined in the following discussion.
Learning about policy is key to being informed citizens; without it, we never learn
about the political process and don’t take responsibility for the possible bad outcomes
of our actions. Simulating policy solves all their offense, allowing people a safe space
to test new ideas
Joyner, Professor of International Law at Georgetown, 1999 [Christopher C., “Teaching International
Law,” 5 ILSA J Int'l & Comp L 377, l/n]
Use of the debate can be an effective pedagogical tool for education in the social sciences. Debates,
like other role-playing
simulations, help students understand different perspectives on a policy issue by adopting a perspective
as their own. But, unlike other simulation games, debates do not require that a student participate directly in order to realize the benefit
of the game. Instead of developing policy alternatives and experiencing the consequences of different choices in a traditional role-playing
game, debates present the alternatives and consequences in a formal, rhetorical fashion before a judgmental audience. Having the class
audience serve as jury helps each student develop a well-thought-out opinion on the issue by providing contrasting facts and views and
enabling audience members to pose challenges to each debating team. These debates
ask undergraduate students to examine the
international legal implications of various United States foreign policy actions. Their chief tasks are to assess the aims of the
policy in question, determine their relevance to United States national interests, ascertain what legal principles are involved, and conclude how
the United States policy in question squares with relevant principles of international law. Debate questions
are formulated as
resolutions, along the lines of: "Resolved: The United States should deny most-favored-nation status to China on human rights grounds;" or
"Resolved: The United States should resort to military force to ensure inspection of Iraq's possible nuclear, chemical and biological weapons
facilities;" or "Resolved: The United States' invasion of Grenada in 1983 was a lawful use of force;" or "Resolved: The United States should kill
Saddam Hussein." In
addressing both sides of these legal propositions, the student debaters must consult the
vast literature of international law, especially the nearly 100 professional law-school-sponsored international law journals now being
published in the United States. This literature furnishes an incredibly rich body of legal analysis that often treats topics affecting United States
foreign policy, as well as other more esoteric international legal subjects. Although most of these journals are accessible in good law schools,
they are largely unknown to the political science community specializing in international relations, much less to the average undergraduate. By
assessing the role of international law in United States foreign policy- making, students realize that United States actions do not always
measure up to international legal expectations; that at times, international legal strictures get compromised for the sake of perceived national
interests, and that concepts and principles of international law, like domestic law, can be interpreted and twisted in order to justify United
States policy in various international circumstances. In this way, the debate format gives students the benefits ascribed to simulations and
other action learning techniques, in that it makes them become actively engaged with their subjects, and not be mere passive consumers.
Rather than spectators, students become legal advocates, observing, reacting to, and structuring political and legal perceptions to fit the merits
of their case. The debate exercises carry several specific educational objectives. First, students on each team must
work together to refine a cogent argument that compellingly asserts their legal position on a foreign policy issue confronting the United States.
In this way, they gain
greater insight into the real-world legal dilemmas faced by policy makers. Second, as they
work with other members of their team, they realize the complexities of applying and implementing international
law, and the difficulty of bridging the gaps between United States policy and international legal principles, either by reworking the former or
creatively reinterpreting the latter. Finally, research for the debates forces students to become familiarized with
contemporary issues on the United States foreign policy agenda and the role that international law plays in formulating and
executing these policies. n8 The debate thus becomes an excellent vehicle for pushing students beyond stale
arguments over principles into the real world of policy analysis, political critique, and legal defense.
B. Extra-Topicality: Allowing them to claim solvency or advantages off of ontological
arguments is extra-topical and is an independent voting issue for fairness. It allows
them to shift their advocacy in the 2AC and moots predictable negative ground.
1NC Rubbish (Performance)
Historical Materialism
Vital materialism mystifies the commodity as having agency of its own, which
replicates commodity fetishization – instead, everything is a product of human
labor
Dochterman 10 (Zen Dochterman, a student pursuing his PhD in Comparative Literature from UCLA.
Zen’s thoughts arose in response to my UCLA paper a couple of weeks ago,
http://larvalsubjects.wordpress.com/2010/12/17/guest-post-object-oriented-marxism/)
Tiqqun may have hit upon this problem in their statement that the commodity is “objectivized being-for-itself presented as something external
to man” and therefore the social fetish resides not in “crystallized labor” but rather in “crystallized being-for-itself” (On the Economy
Considered as Black Magic #34). At
the same time that the commodity alone among objects appears as self-sufficient, singular in its being, it can enter into relations of absolute equivalence with all other
commodities. The orange we see in the supermarket appears to be the self-sufficient orange — extracted from its past on the tree, from
the hand that picked it, from the dirt on which it fell. The orange that falls on the ground is not — in strictly Marxist terms — the same orange
that we look at in the supermarket. In the first instance, the orange has no proper “being-for-itself,” (while it is still one object in OOO terms) as
it exists in relation with other objects — the tree, the grass, the sunlight and water that nourished it. By the same token, an object given as gift
or grown so as to feed people would not be a commodity and therefore have no being-for-itself, because it does not yet have an abstract
character but is circulated in relation to concrete social needs. The
commodity becomes a commodity only when the sets
of object relations that determine it are extracted from this level of “need” or direct human interaction
— and become subsumed by its infinite exchangability with other objects. This is what paradoxically
makes it appear as a being-for-itself at the very moment that it becomes one being that can be replaced
by any other. For the commodity “it is only to realize its essence as a pure, immediate, and abstract
presence that it must be made to look like a singularity,” meaning that its apparent phenomenological
singularity is the after-effect of its infinite exchangeability (ibid. 33). Yet the singularization of the object
into a sort of phenomenological self-sufficiency and being-for-itself (which we can differentiate from its
ontological individuation) covers over the abstract character as an exchange value.
Capitalism is the primary cause of the exploitation of ocean ecologies; turns the entire case
Clausen and Clark ‘5
Rebecca Clausen and Brett Clark. University of Oregon. 2005. “The Metabolic Rift and Marine Ecology”.
Organization & Environment 18:4.
In the early 1800s, scientists used metabolism to describe the material exchanges within a body. Justus von Liebig, the great German chemist,
applied the term on a wider basis, referring to metabolic processes at the cellular level of life, as well as concerning the process of exchange for
organisms as a whole. By the 1850s and 1860s, the term took on greater significance in relation to the depletion of soil nutrients. Agricultural
chemists and agronomists in Britain, France, Germany, and the United States noted how the transfer of food and fiber from the country to the
cities resulted in the loss of necessary soil nutrients such as nitrogen, phosphorus, and potassium (Foster, 2000, pp. 157-159). Whereas
traditional agriculture tended to involve the return of essential nutrients to the soil, capitalist agriculture, which involved the division between
town and country given the concentration of land ownership, transported nutrients to city centers where they accumulated as waste. Liebig
(1859) described the intensive methods of British agriculture as a system of robbery, which was opposed to a rational agriculture. The
expansion of this form of agriculture led to the exhaustion of the soil, as it continually depleted the soil of its nutrients. With further
concentration of ownership, more intensive methods of production were used, including the application of artificial fertilizers. The attempts to
solve the rift in soil nutrients created additional rifts, as other resources were exploited to produce artificial fertilizers, and failed to resolve the
primary problem driving the exhaustion of the soil: an economic system premised on the escalating accumulation of capital.
Marx found that Liebig’s analysis complemented his critique of political economy. Marx (1964), quite aware of human dependence on nature,
noted that nature provides the means and material that sustain human life. There is a necessary “metabolic
interaction” between humans and the earth (Marx, 1976, pp. 637-638). Thus, Marx employed the concept of metabolism to refer to “the
complex, dynamic interchange between human beings and nature” (Foster, 2000, p. 158). Through
labor, humans interact with
nature, exchanging organic matter. Humans confront the nature-imposed conditions and processes of the physical world, while
influencing these conditions at the same time. The state of nature was, in part, “bound up with the social relations of the time” (Marx, 1971,
pp. 162-163). Although transformation was part of the human-nature relationship, Marx (1991) noted that humans needed to organize their
lives in a manner that allowed for a social metabolism that followed the prescribed “natural laws of life itself” (pp. 949-950). But the operation
of the capitalist system, in its relentless drive to accumulate capital, violates these principles, as it creates “irreparable rifts” in the metabolic
interaction between humans and the earth. In agriculture, large-scale production, long-distance trade, and synthetic inputs only intensify the
rift. The
pursuit of profit sacrificed reinvestment in the land, causing the degradation of nature through
the depletion of soil nutrients and the accumulation of waste in cities. Marx explained that under capitalist
operations, humans are unable to maintain the conditions necessary for the recycling of nutrients,
because the pursuit of profit imposes an order where the basic conditions of sustainability are seen as
an obstacle to be overcome, in order for the economic system to increase its scale of operations. As
capital attempts to overcome natural barriers (including those that it creates), it continues to contribute to a
metabolic rift and to create new ones. The introduction of artificial fertilizers has contributed to the incorporation of
large quantities of oil into agricultural production, which leads to an increase in carbon dioxide emissions and the pollution of
waterways (Foster & Magdoff, 2000). Con- stant inputs are required to sustain and expand capitalist production. Ecological problems remain a
pressing concern, increasing in severity as time passes.
The metabolic rift
has become a powerful conceptual tool for analyzing human interactions with nature
and ecological degradation. Foster (1999, 2000) illustrates how Marx’s conception of the metabolic rift under capitalism illuminates
social-natural relations and degradation in a number of ways: (a) the decline in soil fertility as a result of disrupting the
soil nutrient cycle; (b) scientific and technological developments, under capitalist relations, increase the
exploitation of nature, intensifying the degradation of soil; (c) capitalist operations lead to the
accumulation of waste, which becomes a pollution problem; and (d) capital’s attempts to surmount
environmental problems fail to resolve the immediate metabolic rift, thus contributing to further
environmental problems.
We extend the theory of the metabolic rift to marine ecosystems. A metabolic rift is a rupture in the metabolic processes of a system. Natural
cycles, such as the reproduction rate of fish or the energy transfer through trophic levels, are interrupted.
We situate the human-marine relationship within the period of global capitalism, which is the primary force organizing the social metabolism
with nature. By historically contextualizing this metabolic rift, we can highlight the oceanic crisis in the making. Marine
ecosystems are
experiencing the same exploitive disconnect recognized between soil ecology and capitalist agriculture
due to aquaculture’s intensification of production and concentration of wastes. Intensification and concentration
of fisheries production creates a quantitative increase in the rate of biomass depletion and aquatic pollution.
However, the qualitative changes to the conditions of the marine ecosystem resulting from aquaculture’s productive reorganization
may present an even greater challenge to the stability of human-ocean interactions and the resiliency of
the oceans themselves. The qualitative changes taking place extend beyond ecosystem disruption and include the interaction between
humans and the ocean. As capitalist production in the aquatic realm expands, the alienation of humans from
nature increases. Through the application of metabolic rift analysis, we can gain a greater understanding
of the dynamic relationships involved in the oceanic crisis. To begin the analysis, a review of marine ecological processes
is required.
Case
Biosimplicity Turn
Mass extinctions prevent complete extinctions
Boulter ‘5
Michael Boulter, professor of palaeobiology at the Natural History Museum and the University of East London.
Launched Fossil Record 2, editor Palaeontological Association, secretary International Organisation of
Palaeobotany and UK representative at the International Union of Biological Sciences, 15-1-05 "Extinction:
Evolution and the End of Man" p. 182-184.
The system of life on Earth behaves in a similar way for all its measurable variables, whether they are communities or ecosystems. For sand
grains, substitute species or genes. For avalanches, substitute extinctions. Power laws tell us that large avalanches or large extinctions are
much less common than small ones. The controlling factors for the sand piles are weight and angle of the sides of the pile; for mammals
they are space and food within the ecosystem. We can kick the sand pile with our feet, and we can reduce the space and the food by
changing the environment. But what would happen to the life-Earth system without these external changes? Could it be like a pile without
avalanches, eventually collapsing into a mess of white noise? The answer lies in our theory of exponential diversification within macroevolution; the curve ever rising towards the vertical when the Fossil Record 2 Family data are plotted (see figure 3,c). The situation starts to
become critical when numbers rise above a comfortable quantity, whether the system is a pile of sand, cars on a motorway or large
mammals in America. If
there were no mass extinctions, that exponential curve really could have risen to
the truly vertical. It could have happened long ago, and it could happen again if there were no
extinctions holding it back from the vertical. If that were so, all life on planet Earth would cease. It
would need to start again from scratch. But that would be impossible. For example, when a teacher cleans the
blackboard of the lesson's writing, specks of chalk dust are reflected in the rays of sunlight pouring through the windows. There's no way
the dust can be put back into the writing on the board, let alone into the stick of chalk. Could this one-way process, entropy, be like evolution facing the exponential? There is no going backwards, only forwards. To stay still is impossible. As with the chalk dust and the universe
leading towards higher entropy, it may be true to say that
within the history of life there is forever the unrelenting
trend to less order and at the same time towards greater complexity, until bits of the system reach a
critical edge.
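[Analytic note, not part of the Boulter evidence: the power-law claim in the card can be written out as a worked formula. The exponent is a placeholder chosen for illustration, not a value Boulter reports.]
N(s) \propto s^{-\alpha}, \quad \alpha > 0
Here N(s) is the relative frequency of extinction (or avalanche) events of size s. Because the exponent is negative, doubling the event size divides its expected frequency by a factor of 2^{\alpha}; that is the precise sense in which large extinctions are much less common than small ones, even as the diversification curve keeps rising between them.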
Change is more healthy than enforcing rigid diversity
NPR ‘7
Donald J. Dodds, M.S.P.E., President of North Pacific Research, “The Myth of Biodiversity”, 05/30/07,
northpacificresearch.com/downloads/The_myth_of_biodiversity.doc.
Humans are working against nature when they try to prevent extinctions and freeze biodiversity. Examine the curve
in figure one, at no time since the origin of life has biodiversity been constant. If this principle has worked for 550
million years on this planet, and science is supposed to find truth in nature, by what twisted reasoning can fixing biodiversity be considered
science? Let alone good for the environment. Environmentalists are now killing species that they arbitrarily term invasive, which are in reality
simply better adapted to the current environment. Consider the Barred Owl: a superior species is being killed in the name of biodiversity
because the Barred Owl is trying to replace a less environmentally adapted species, the Spotted Owl. This is more harmful to the ecosystem
because it impedes the normal flow of evolution based on the idea that biodiversity must remain constant. Human
scientists have
decided to take evolution out of the hands of Mother Nature and give it to the EPA. Now there is a good
example of brilliance. We all know what is wrong with lawyers and politicians, but scientists are supposed to be trustworthy. Unfortunately,
they are all too often only people who think they know more than anybody else. Abraham Lincoln said, “Those who know not, and know not
that they know not, are fools; shun them.” Civilization has fallen into the hands of fools. What is suggested by geologic history is that the world
has more biodiversity than it ever had and that it may be overdue for another major extinction. Unfortunately, today many
scientists
have too narrow a view. They are highly specialized. They have no time for geologic history. This appears
to be a problem of inadequate education, not ignorance. What is abundantly clear is that artificially
enforcing rigid biodiversity works against the laws of nature, and will cause irreparable damage to the
evolution of life on this planet and maybe beyond. The world and the human species may be better served if we stop trying
to prevent change, and begin trying to understand change and positioning the human species so that it survives the inevitable change of
evolution. If
history is to be believed, the planet has 3 times more biodiversity than it had 65 million years
ago. Trying to sustain that level is futile and may be dangerous. The next major extinction, change in
biodiversity, is as inevitable as climate change. We cannot stop either from occurring, but we can
position the human species to survive those changes.
Adaptations actually improve ecosystem resiliency as a whole by creating complex
mosaics of habitats
Bosselman ‘1
Fred, Professor of energy and environmental law, 10/04, “What Lawmakers Can Learn From Large-Scale Ecology”
http://www.law.fsu.edu/faculty/2001-2002workshops/lessons.pdf
Ecologists who study microbes point out that we
humans are ourselves heterogeneous habitats. As biological technology
increasingly allows more effective study of microorganisms, we have become aware of the variety and complex
relationships of the microscopic species within our own bodies; like other animals, we harbor a
heterogeneous mix of living creatures, some of which are essential to our survival while others can be harmful. Many biologists
have observed a long-range trend toward greater complexity in organisms and heterogeneity in their interrelationships. The same
processes that create non-equilibrium time dynamics may also create heterogeneous physical structure;
it seems likely that the more that ecological systems change over time, the more these changes will
result in complex mosaics of habitat in various stages of change.
Solvency
1. Exploring trash fails—there are no alternatives for waste disposal in status quo
Burdokovska et al ‘12
Valentina Burdokovska. With Heng Chen and Minodora David. 2012. “Possible Methods for Preventing Plastic
Waste From Entering the Marine Environment”. Roskilde University. Pages 26-28.
There are three main advantages of plastic. First, it is relatively light compared to most metals. This property is a result of the organic
compounds it contains, which are made of light elements such as carbon, hydrogen, oxygen, nitrogen, etc. Second, the plastic can be easily
processed. Plastic exhibits plasticity, which means it can be deformed at high temperatures or under pressure, and keep its shape when
cooled or released from pressure. This makes it possible for plastic to be processed into a product of a particular shape by extrusion or
injection. Third, the plastic will not decompose or rust. However, this also brings a serious problem for human beings. The large amount of
plastic waste cannot be naturally decomposed and as a result can cause severe environmental pollution. Since it is hard to treat this problem,
once plastic has been released into the environment, the focus has to be set on preventing irresponsible
disposal at its source. The pyramid of waste is the most obvious concept that should be applied when dealing with waste
management issues. Reducing the amount of plastics¶ The first step in preventing plastics from polluting the environment is to reduce
the amount of plastic produced. Even though it is hard to accomplish this, since plastics play an important role in our consumer lives, the
levels of hazardous compounds usually found in plastics should be decreased or replaced by less harmful substitutes. Similar to how China
applied the law for taxing the use of plastic bags and how the USA banned plastic bags in several cities, strategies like these should be more
widely practiced.¶ Another way of reducing is by redesigning products so that they require less resources and/or energy in the manufacturing
phase. As an example, in the USA the 2-liter plastic soft drink bottle was reduced from weighing 68 grams in 1977 to 51 grams today,
representing a 25% reduction per bottle. This saved more than 93,400 tons of packaging each year (WM, 2012). Reusing plastic bags and other
plastics that have a reusing potential Reusing also plays an important role in waste management. Reusing can be most efficiently
practiced at an industrial level by investments in new technologies and by creating industrial symbiosis networks between companies. Also,
reuse could be done on an individual basis. Plastic bags, for example, could be reused until they become unusable. As for other plastics, they
should at least be designed in such a way that once their initial purpose is exceeded they could be reused with a different function. This is
where concepts such as cradle-to-cradle and biomimicry could radically change the way in which we perceive plastic products. For example,
perhaps a plastic container used to transport food items could be later reused for storing miscellaneous objects. Besides shape and function,
the design of the plastic product should also have a desirable appearance that could persuade the consumer to reuse it. Recycling and better
options for sorting When it comes to plastic, the most obvious solution is recycling. As presented in the Country profiles section of
the report, all the key countries which are contributing to the existing plastic waste in the North Pacific Ocean are using this method of waste
treatment to some extent. Of course, all these countries should continue to increase the amount of recycled plastic waste. Recycling is
important for every country as it helps in creating new job opportunities, offers an environmentally friendly way of treating waste and most
importantly, it transforms waste into materials which can be reused by industry (EEA, 2011). Countries in the same situation as China
should try to redirect more of their plastic waste from landfills to recycling stations. To further improve recycling, better
practices of sorting plastic products should be applied and/or researched. Manual sorting is a common way of separating different plastics. The
problems associated with this method are those related to efficiency and human health (Stenmark, 2005). To avoid endangering people,
automated sorting should be prioritized.¶ When it comes to the latter, there are multiple techniques that could be applied, such as:¶
Spectroscopy based methods (identifies the plastics by their light absorbance peaks); ¶ Density based methods (identifies plastics by their
densities); ¶ Sorting by using the differences in melting points; ¶ Selective dissolution (using chemicals for separating particles of different plastic
types by altering their hydrophobicity); (Stenmark, 2005) When it comes to choosing the right method, each country should look for the BAT
which suits their current technological and economic level of development. It is most likely that a combination of several of these methods will
lead to maximum recycling efficiency. Energy recovery If reduction,
reuse and recycling cannot be applied, recovery is a
better alternative than landfilling. By incinerating waste, energy can be extracted and thus, to some extent, compensate
for the initial energy input in the production of plastic. Incineration might not be an accessible option for a lot of countries
since it requires high investments. On the other hand, every country should consider investing in incineration plants since it reduces the
amount of waste landfilled. Currently, Switzerland and Japan are competing with Denmark for the highest amount of waste burned per capita
(Kleis et al., 2007). In Denmark, thanks to the strong support by the government, combustion facilities are capable of selling their heat all year
round and the system is working very efficiently. In order to better promote recycling plastics, higher taxes should be applied on landfilling and
waste incineration. There is an ongoing competition between incineration plants and recycling facilities because plastics give higher energy
yields when burned compared to other materials (EEA, 2011). In some countries (i.e. USA) the local public seems to be skeptical about waste
combustion plants in terms of possible air pollution and increased traffic surrounding the facility (EPA, 2012). To avoid such problems, people
should be informed about the benefits of incineration plants, as nowadays incinerators are subject to strict environmental regulations.
Technological improvements and replacement of materials¶ To make waste treatment even more effective, countries should constantly invest
in technological improvements which could include better sorting methods, more efficient energy recovery plants and so forth.¶ When it comes
to plastic packaging, all the non-degradable compounds should be gradually eliminated and replaced with ones that can safely decompose if
they end up in the environment. Better international guidelines (i.e. EU requirements for waste management)
Waste management guidelines
should be established at an international level and all the countries should agree to and respect the terms. This task must be undertaken by worldwide organizations (i.e. NOAA, EEA, EPA), which would have to collaborate in creating universally available and realistic guidelines that take
into consideration different development stages of countries.¶ As an example, the European Union has already implemented directives on
waste handling. While the European countries have been improving their waste management systems for years, there are still many countries
which do not have well developed ways of dealing with their waste. These differences should be leveled out and the countries in difficulty
should be offered counseling and/or economic support.¶ Even though we have not taken into consideration the shipping industry, as it does not
seem to be regulated by any particular legislation (except in harbors), it is important that strict rules are applied in this sector. Capacity
building¶ In our context, the countries that surround the North Pacific have different economies and the main waste treatment solutions differ
for each of them. Therefore, capacity
building is important for the countries that are less developed and which
do not know how to efficiently use their available resources. By being aware of their capacity for waste handling,
different countries could focus more on education and gaining competence in implementing new
technologies. Awareness and collection campaigns¶ In order to make the 3R strategy work, a certain level of public awareness is required.
Often, people need to be reminded of environmental issues so they do not end up neglecting the importance of reusing products or recycling
waste. By constantly promoting awareness campaigns, environmental agencies in collaboration with NGOs could turn people’s attention
towards these matters. Other types of campaigns such as recollecting campaigns in which producers take responsibility for their products at the
end of their life should also be encouraged. Possibility to recycle¶ It
is not enough to spread information. The possibility to
recycle should also be offered to the consumers. This can be done by establishing sorting places with
selective containers in which people could dispose of their waste.
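[Analytic note, not part of the Burdokovska et al. evidence: the card lists density-based methods among the automated sorting options. The sketch below, in Python, is a minimal illustration of that idea; the density bands and the candidate_resins helper are assumptions added for clarity, not figures or code from the report.]

# Illustrative sketch of density-based plastic sorting (approximate handbook
# values in g/cm3; these bands are assumptions for illustration only).
TYPICAL_DENSITY_G_PER_CM3 = {
    "PP":   (0.89, 0.92),   # polypropylene
    "LDPE": (0.91, 0.94),   # low-density polyethylene
    "HDPE": (0.94, 0.97),   # high-density polyethylene
    "PS":   (1.03, 1.07),   # polystyrene
    "PET":  (1.35, 1.40),   # polyethylene terephthalate
    "PVC":  (1.35, 1.45),   # polyvinyl chloride (overlaps PET)
}

def candidate_resins(measured_density):
    """Return every resin whose typical density band contains the measurement."""
    return [name for name, (low, high) in TYPICAL_DENSITY_G_PER_CM3.items()
            if low <= measured_density <= high]

if __name__ == "__main__":
    print(candidate_resins(0.95))   # ['HDPE'] -- floats in fresh water (density < 1.0)
    print(candidate_resins(1.38))   # ['PET', 'PVC'] -- density alone cannot separate these,
                                    # which is why the card also lists spectroscopy and
                                    # selective dissolution as complementary methods.

The same kind of simple check confirms the card's lightweighting figure for the 2-liter bottle: (68 - 51) / 68 is approximately 0.25, i.e. the stated 25% reduction per bottle.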
2. No impact to ocean pollution—their authors are exaggerating
Floyd ‘11
Mark Floyd. Press for Oregon State University. January 4 2011. “Oceanic “garbage patch” not nearly as big as
portrayed in media”. Oregon State University. http://oregonstate.edu/ua/ncs/archives/2011/jan/oceanic%E2%80%9Cgarbage-patch%E2%80%9D-not-nearly-big-portrayed-media.
There is a lot of plastic trash floating in the Pacific Ocean, but claims
that the “Great Garbage Patch” between California and
Japan is twice the size of Texas are grossly exaggerated, according to an analysis by an Oregon State University
scientist. Further claims that the oceans are filled with more plastic than plankton, and that the patch has
been growing tenfold each decade since the 1950s are equally misleading, pointed out Angelicque “Angel” White, an
assistant professor of oceanography at Oregon State. “There is no doubt that the amount of plastic in the world’s oceans is troubling, but this
kind of exaggeration undermines the credibility of scientists,” White said. “We have data that allow us to make reasonable
estimates; we don’t need the hyperbole. Given the observed concentration of plastic in the North Pacific, it is simply inaccurate
to state that plastic outweighs plankton, or that we have observed an exponential increase in plastic.” White has
pored over published literature and participated in one of the few expeditions solely aimed at
understanding the abundance of plastic debris and the associated impact of plastic on microbial communities. That
expedition was part of research funded by the National Science Foundation through C-MORE, the Center for
Microbial Oceanography: Research and Education. The studies have shown that if you look at the actual area of the plastic itself, rather than
the entire North Pacific subtropical gyre, the
hypothetically “cohesive” plastic patch is actually less than 1 percent
of the geographic size of Texas. “The amount of plastic out there isn’t trivial,” White said. “But using the highest concentrations ever
reported by scientists produces a patch that is a small fraction of the state of Texas, not twice the size.” Another way to look at it, White said, is
to compare the amount of plastic found to the amount of water in which it was found. “If
we were to filter the surface area of
the ocean equivalent to a football field in waters having the highest concentration (of plastic) ever
recorded,” she said, “the amount of plastic recovered would not even extend to the 1-inch line.” Recent
research by scientists at the Woods Hole Oceanographic Institution found that the amount of plastic, at
least in the Atlantic Ocean, hasn’t increased since the mid-1980s – despite greater production and consumption of materials
made from plastic, she pointed out. “Are we doing a better job of preventing plastics from getting into the ocean?” White said. “Is more plastic
sinking out of the surface waters? Or is it being more efficiently broken down? We just don’t know. But the
data on hand simply do
not suggest that ‘plastic patches’ have increased in size. This is certainly an unexpected conclusion, but it may in part
reflect the high spatial and temporal variability of plastic concentrations in the ocean and the limited number of samples that have been
collected.” The hyperbole about plastic patches saturating the media rankles White, who says such exaggeration
can drive a wedge
between the public and the scientific community. One recent claim that the garbage patch is as deep as the Golden Gate
Bridge is tall is completely unfounded, she said. “Most plastics either sink or float,” White pointed out. “Plastic isn’t likely to be evenly
distributed through the top 100 feet of the water column.”
1NC vs Spectre
K AFF 1NC (OOO)
The focus on discourse/culture/language erases the material and turns it into merely a
mirror of the human. This makes emancipation impossible, turning the case.
Bryant ’14 Levi Bryant is Professor of Philosophy at Collin College Onto-Cartography
pg. 1-4
This book attempts a defense and renewal of materialism. This is a defense and renewal needed in the face of critics and defenders alike. On
the side of the critics, materialism
must be defended against obscurantists that seek to argue that materialism
is reductive, mechanistic, and that there is something about human beings, culture, thought, and society
that somehow is other than the material. However, it is perhaps the defenders of materialism that are today the greater threat.
Among Continental critical and social and political theorists, we are again and again told that their
positions are "materialist," only to see the materiality of matter up and disappear in their analyses. In
these discourses and theoretical orientations, the term "materialism" has become so watered down that it's come to denote little more than
"history" and "practice." It is certainly true that matter evolves and develops and therefore has a history, and practices such as building houses
engage with matter. Unfortunately,
under the contemporary materialism, following from a highly selective
reading of Marx, "history" has largely come to mean discursive history, and practice has come to mean
discursive practices. History became a history of discourses, how we talk about the world, the norms
and laws by which societies are organized, and practices came to signify the discursive practices —
through the agency of the signifier, performance, narrative, and ideology — that form subjectivities.
Such a theory of society was, of course, convenient for humanities scholars who wanted to believe that
the things they work with — texts — make up the most fundamental fabric of worlds and who wanted
to believe that what they do and investigate is the most important of all things. Material factors such as
the amount of calories a person gets a day, their geographical location (e.g., whether or not they're
located in a remote region of Alaska), the rate at which information can be transferred through a
particular medium, the effects of doing data entry for twelve hours a day, whether or not people have
children, the waste output of travel, computing, how homes are heated, the way in which roads are laid
out, whether or not roads are even present, the morphogenetic effects of particular diets, and many
things besides completely fell off the radar. With the "materialist" turn in theory, matter somehow
completely evaporated and we were instead left with nothing but language, culture, and discursivity.
The term materialism became so empty that Žižek could write, "[M]aterialism means that the reality I see is never 'whole' not because a large
part of it eludes me, but because it contains a stain, a blind spot, which indicates my inclusion in it" (Žižek 2006: 17). This is a peculiar
proposition indeed. What need does matter have to be witnessed by anyone? What does a blind spot have to do with
matter? Why is there no talk here of "stuff", "physicality", or material agencies? It would seem that among the defenders, materialism has
become a terme d'art which has little to do with anything material. Materialism
has come to mean simply that something
is historical, socially constructed, involves cultural practices, and is contingent. It has nothing to do with
processes that take place in the heart of stars, suffering from cancer, or transforming fossil fuels into
greenhouse gases. We wonder where the materialism in materialism is. We might attribute this to a mere difference in intellectual
histor- iCal lineages — those descended from the Greek atomist Democritus on the one side and the critical theorists hailing from historical
materialism on the other — but unfortunately, this
perversion of materialism, this reduction to the cultural and
discursive, has very real analytic and political effects. At the analytic level, it has had the effect of
rendering physical agencies invisible. This arose, in part, from the influence of Marx's analysis — who was not himself guilty of
what is today called "historical materialism" of commodity fetishism, which showed how we relate to things under capitalism is, in reality, a
relation between people or social relations (Marx 1990: 165). Marx was right. When a person buys a shirt, they are not merely buying a thing, but are
rather participating in an entire network of social relations involving production, distribution, and consumption. However, somehow
contrary to Marx's own views this thesis became the claim that things aren't real, or that they are
merely crystallizations (Marx 1990: 128) of the social and cultural. Based on this elementary schema of
critical theory, the critical gesture became the demonstration that what we take to be a power of things
is, in reality, a disguised instance of the economic, linguistic, or cultural. Everything became an
alienated mirror of humans and the task became demonstrating that what we found in things was
something that we put there. To speak of the powers of things themselves, to speak of them as producing effects beyond their
status as vehicles for social relations, became the height of naiveté. The analytic and political consequences of this were
disastrous. Analytically we could only understand one half of how power and domination function. The
historical materialists, critical theorists, structuralists, and post-structuralists taught us to discern how fashion exercises power and reinforces
certain odious social relations by functioning as a vehicle for certain meanings, symbolic capital, and so on. Yet this is only part of the story. As
Jane Bennett puts it, things
have their power as well (see Bennett 2010). Unfortunately, discursivist orientations of
social and political theory could not explain how things like turnstiles in subways, mountain ranges, and
ocean currents also organize social relations and perpetuate forms of domination because they had
already decided that things are only vehicles or carriers of social significations and relations. Because
things had been erased, it became nearly impossible to investigate the efficacy of things in contributing
to the form social relations take. An entire domain of power became invisible, and as a result we lost
all sorts of opportunities for strategic intervention in producing emancipatory change. The sole strategy
for producing change became first revealing how we had discursively constructed some phenomenon,
then revealing how it was contingent, and then showing why it was untenable. The idea of removing
"turnstiles" as one way of producing change and emancipation wasn't even on the radar. This was a
curious anti-dialectical gesture that somehow failed to simultaneously recognize the way in which nonhuman, non-signifying agencies structure social relations as much as the discursive.
Your attempt to persuade institutions through ethical appeal guarantees your politics
fails. Alt is a prerequisite.
Bryant ’14 Levi Bryant is Professor of Philosophy at Collin College Onto-Cartography
pg. 73
In light of the concept of thermodynamic politics, we can see the common shortcoming of protest
politics or what might be called semiotic politics. Semiotic politics is semiotic in the sense that it relies on
the use of signs, either attempting to change institutions through communicative persuasion or
engaging in activities of critique as in the case of hermeneutics of suspicion that, through a critique of
ideology, desire, power, and so on, show that relations of domination and oppression are at work in
something we hitherto believed to be just. Semiotic politics is confused in that it is premised on
producing change through ethical persuasion, and thereby assumes that institutional-machines such
as corporations, governments, factories, and so on, are structurally open to the same sorts of
communicative flows as humans. It believes that we can persuade these organizations to change their
operations on ethical grounds. At best, however, these entities are indifferent to such arguments, while at
worst they are completely blind to even the occurrence of such appeals as machines such as corporations are only structurally open to
information events of profit and loss. Persuading
a corporation through ethical appeals is about as effective as explaining calculus to a cat.
Ignoring hyperobjects results in billions of deaths.
James 13 (Arran, UK-based philosopher, graduate student of Critical Theory, and psychiatric nurse). “The catastrophic and the post-apocalyptic,” http://syntheticzero.net/2013/08/21/the-catastrophic-and-the-post-apocalyptic/ August 21, 2013)//[AC]
There is a vast onto-cartography at work here that connects species of fish to coolant systems to hydrogen
molecules to legislation on nuclear safety; legislators, parliaments, regulatory bodies, anti-nuclear activists; ideas
like environmentalism; the food supply networks and geographic distribution of production centres; work practices; capital
investments and the wider financial markets as Tepco’s shares fall; and those networks that specifically effect
human beings in the exclusion area. After all, this exclusion zone has seen thousands of families leave their
homes, their jobs, their friends, and the possessions that had been rewarded to them as recompense for
their alienated labour. Consider that some of these people are still paying mortgages on homes they will probably never be able to
return to safely. And there remains one more reactor in the water that has not melted down but possibly will- if not by human efforts to
recover the fuel rods, then by the possibility of another unpredicted earthquake and/or tsunami. I don’t have the space or the desire to trace
the onto-cartography of this disaster but it is clear that it includes both geological, ecological and capitalist
bodies; indeed, it is clear that the capitalist bodies might be the ones that are ultimately responsible. According to
Christina Consolo,¶ all this collateral damage will continue for decades, if not centuries, even if things stay
exactly the way they are now. But that is unlikely, as bad things happen like natural disasters and deterioration with
time…earthquakes, subsidence, and corrosion, to name a few. Every day that goes by, the statistical risk increases
for this apocalyptic scenario. No one can say or know how this will play out, except that millions of people will probably
die even if things stay exactly as they are, and billions could die if things get any worse (here).¶ I raise the spectre of
Fukushima as catastrophe and as apocalyptic because it accords to what Timothy Morton has described as a
hyperobject. In ‘Zero Landscapes in the time of hyperobjects’ Morton states that¶ Objects are beginning to compel
us, from outside the wall. The objects we ignored for centuries, the objects we created in the process of ignoring other
ones: plutonium, global warming. I call them hyperobjects. Hyperobjects are real objects that are massively
distributed in time and space. Good examples would be global warming and nuclear radiation. Hyperobjects are so vast,
so long lasting, that they defy human time and spatial scales. They wouldn’t fit in a landscape painting. They could never
put you in the right mood.¶ The ontocartography or “map of entities” that we could trace in relation to Fukushima
doesn’t just include all those bodies we have listed already but also, and most importantly, it includes the radiation itself. Born of
the unstable hybridisation of techno-materiality and geo-materiality in pursuit of energy to satisfy the logic of the infinite growth of capital,
the hyperobject of Fukushima’s radiation was unleashed and now exists independently of those techno-geocapitalist assemblages. That this radiation exists on a huge spatio-temporal scale means that it exists beyond our
evolved capacity to think. We evolved to cope with and to handle a world of mid-sized objects, the very tools and
raw materials that helped to build Fukushima. In the language of transcorporealist thought: the weaving or
interpenetration of various autonomous ontological bodies has led to this body composed of bodies. Just as
numerous minerals, cells, exogenous microorganisms, mitochondria, oxygen, lactic acid, sugars, contact lenses, and so
on go up to constitute my body in their choreographic co-actualisation, so too does this process give rise
to a similar shift in scale. In my body the shift is that from the molecular to the “molar” scale but in this case, the shift is
from the “molar” to the hyper-scale. The radiation unleashed by the Fukushima meltdown exists on a geological
spatial and temporal scale that the human animal is not equipped to readily perceive.¶ Such hyperobjects
proliferate around us and are equally hard to detect in our proximal engagement with the various worlds we inhabit. They range from
incidents like Fukushima to the more encompassing threats of the collapse of capital, ecocide and
cosmic death that I mentioned above. The reason I have focussed on Fukushima is to illustrate the point that the catastrophe has
already taken place. In relation to the example of Fukushima the catastrophe occurred two years ago but will be
ongoing for centuries. That I can sit here in all my relative comfort and enjoy the benefits of being a white male in
Britain does not mean that I am any the less existing after the catastrophe. Catastrophes are discrete
events that explode into being, even if such an explosion can seem very slow as they happen on the
scale of vast temporalities. In the last analysis that can’t be carried out, the cosmos itself exists as one huge catastrophe; the moment
of the big bang being the cosmic event, everything else since being the unfolding of that catastrophic actualisation working itself out.
The alternative is to adopt a methodology of alien phenomenology to relate to
objects. By developing existential compassion, we can produce more satisfying
social assemblages.
Bryant ’14 Levi Bryant is Professor of Philosophy at Collin College Onto-Cartography
pg. 70-71
This blindness to the alien and narcissistic primacy to the imaginary has massive deleterious ethical and
political consequences. Alien phenomenology, by contrast, opens the possibility of more
compassionate ways of relating to aliens, helping us to better attend to their needs, thereby creating
the possibility of better ways of living together. Let us take the amusing example of Cesar Millan of the
television show The Dog Whisperer. Millan is famous for his ability to effectively deal with problem
dogs, recommending ways of changing their behavior and solving problems such as excessive barking
or soiling the house. What is Millan's secret? Millan's secret is that he's an exemplary alien phenomenologist. Millan attempts to think like a dog rather than a human. When Millan approaches a problem
dog, he doesn't approach that dog as a problem for humans, but instead approaches the dog's
environment and owners as a problem for the dog. Based on his knowledge of dog phenomenology, of
what it is like to be a dog and how dogs relate to the environment about them as well as their fellow
pack members — which, for the dog, includes its owners — Millan explores the way in which this environment as
well as pack relations lead to the problematic behavior of the dog. He then makes suggestions as to how
the environment might be changed or pack relations restructured — i.e., how the behavior of the
human pack members might be changed — so as to create a more satisfying environment for the dog in
which the problematic behavior will change. In this way, Millan is able to produce an ecology or set of
social relations that is more satisfying for both the human owners or fellow pack members and the dog.
By contrast, we can imagine a dog trainer that only adopts the human point of view, holding that it is the
dog alone that is the problem, recommending that the dog be beaten or disciplined with an electric
collar, thereby producing a depressed and broken dog that lives a life of submission and bondage. A
great deal of human cruelty arises from the failure to practice alien phenomenology. We can see this
in cases of colonial exploitation, oppression, and genocide where colonial invaders are unable to
imagine the culture of the others they encounter, instead measuring them by their own culture,
values, and concept of the human, thereby justifying the destruction of their culture as inferior and in
many instances the genocide of these peoples. We see it in the way that people with disabilities, those
who suffer from war trauma, and the mentally ill are measured by an idealized concept of what we
believe the human ought to be, rather than evaluating people in terms of their own capacities and aims.
We see it in phenomena of sexism, where our legal system is constructed around the implicit
assumption of men as the default figure of what the human is, ignoring the specificities of what it means
to be a woman. Finally, we see it in the way we relate to animals, treating them only in terms of our own
use and how they advance our aims or pose problems for us, rather than entering the world of animals
as Grandin or Millan do, striving to attend to what animals might need. The point here isn't that we
should adopt some sort of moral masochism where we should always bow to the aims of others and
deny our own aims. The point is that through the practice of alien phenomenology, we might develop
ways of living that are both more compassionate for our others and that might develop more satisfying
social assemblages for all machines involved.
1NC — Topicality
Our interpretation is that the resolution should define the division of affirmative and
negative ground.
The affirmative violates this interpretation because they do not advocate that the
United States federal government substantially increase its ocean exploration and/or
development.
Although this statement is in the PLAN TEXT – it is a speculative METAPHOR. They do
not endorse actual action by the USFG.
OECD 87 — Organisation for Economic Co-operation and Development Council, 1987 (“United States,”
The Control and Management of Government Expenditure, p. 179)
1. Political and organisational structure of government
The United States of America is a federal republic consisting of 50 states. States have their own
constitutions and within each State there are at least two additional levels of government, generally
designated as counties and cities, towns or villages. The relationships between different levels of
government are complex and varied (see Section B for more information).
The Federal Government is composed of three branches: the legislative branch, the executive branch,
and the judicial branch. Budgetary decisionmaking is shared primarily by the legislative and executive
branches. The general structure of these two branches relative to budget formulation and execution is
as follows.
Second, “its” implies ownership. Exploration or development of the ocean isn’t topical
unless it is “owned by” the USFG.
Gaertner-Johnston 6 — Lynn Gaertner-Johnston, founder of Syntax Training—a company that
provides business writing training and consulting, holds a Master’s Degree in Communication from the
University of Notre Dame, 2006 (“Its? It's? Or Its'?,” Business Writing—a blog, May 30th, Available Online
at http://www.businesswritingblog.com/business_writing/2006/05/its_its_or_its_.html, Accessed 07-04-2014)
A friend of mine asked me to write about how to choose the correct form of its, and I am happy to
comply. Those three little letters cause a lot of confusion, but once you master a couple of basic rules,
the choice becomes simple. Here goes:
Its' is never correct. Your grammar and spellchecker should flag it for you. Always change it to one of the
forms below.
It's is the contraction (abbreviated form) of "it is" and "it has." It's has no other meanings--only "it is"
and "it has."
Its is the form to use in all other instances when you want a form of i-t-s but you are not sure which one.
Its is a possessive form; that is, it shows ownership the same way Javier's or Santosh's does.
Example: The radio station has lost its license.
The tricky part of the its question is this: If we write "Javier's license" with an apostrophe, why do we
write "its license" without an apostrophe?
Here is the explanation: Its is like hers, his, ours, theirs, and yours. These are all pronouns. Possessive
pronouns do not have apostrophes. That is because their spelling already indicates a possessive. For
example, the possessive form of she is hers. The possessive form of we is ours. Because we change the
spelling, there is no need to add an apostrophe to show possession. Its follows that pattern.
Third, this requires that exploration or development be carried out by a federal
agency. Statutory language is clear.
CFR 6 — Code of Federal Regulations, last updated in 2006 (“Coastal Zone Management Act Federal
Consistency Regulations,” Title 15 › Subtitle B › Chapter IX › Subchapter B › Part 930 › Subpart C › Section
930.31, Available Online at http://www.law.cornell.edu/cfr/text/15/930.31, Accessed 07-04-2014)
§ 930.31 Federal agency activity.
(a) The term “Federal agency activity” means any functions performed by or on behalf of a Federal
agency in the exercise of its statutory responsibilities. The term “Federal agency activity” includes a
range of activities where a Federal agency makes a proposal for action initiating an activity or series of
activities when coastal effects are reasonably foreseeable, e.g., a Federal agency's proposal to physically
alter coastal resources, a plan that is used to direct future agency actions, a proposed rulemaking that
alters uses of the coastal zone. “Federal agency activity” does not include the issuance of a federal
license or permit to an applicant or person (see subparts D and E of this part) or the granting of federal
assistance to an applicant agency (see subpart F of this part).
(b) The term federal “development project” means a Federal agency activity involving the planning,
construction, modification, or removal of public works, facilities, or other structures, and includes the
acquisition, use, or disposal of any coastal use or resource.
(c) The Federal agency activity category is a residual category for federal actions that are not covered
under subparts D, E, or F of this part.
(d) A general permit proposed by a Federal agency is subject to this subpart if the general permit does
not involve case-by-case or individual issuance of a license or permit by a Federal agency. When
proposing a general permit, a Federal agency shall provide a consistency determination to the relevant
management programs and request that the State agency(ies) provide the Federal agency with review,
and if necessary, conditions, based on specific enforceable policies, that would permit the State agency
to concur with the Federal agency's consistency determination. State agency concurrence shall remove
the need for the State agency to review individual uses of the general permit for consistency with the
enforceable policies of management programs. Federal agencies shall, pursuant to the consistent to the
maximum extent practicable standard in § 930.32, incorporate State conditions into the general permit.
If the State agency's conditions are not incorporated into the general permit or a State agency objects to
the general permit, then the Federal agency shall notify potential users of the general permit that the
general permit is not available for use in that State unless an applicant under subpart D of this part or a
person under subpart E of this part, who wants to use the general permit in that State provides the State
agency with a consistency certification under subpart D of this part and the State agency concurs. When
subpart D or E of this part applies, all provisions of the relevant subpart apply.
(e) The terms “Federal agency activity” and “Federal development project” also include modifications of
any such activity or development project which affect any coastal use or resource, provided that, in the
case of modifications of an activity or development project which the State agency has previously
reviewed, the effect on any coastal use or resource is substantially different than those previously
reviewed by the State agency.