Case - Open Evidence Project

#WeWinCyberwar2.0 (KQ)
Notes
Brought to you by KWei and Amy from the SWS heg lab.
Email me at ghskwei@gmail.com for help/with questions.
The thing about backdoor Affs is that all of their evidence talks about past attacks. Press
them on why their scenario is different and on how those past attacks prove that, empirically,
there is no impact to break-ins through backdoors.
Also, a lot of their ev about mandating backdoors is in the context of future legislation, not the
squo.
Also, their internal links are totally fabricated.
Links to networks, neolib, and the gender privacy K can be found in the generics.
Links
Some links I don’t have time to cut but that I think will have good args/cards:
Going dark terrorism links: http://judiciary.house.gov/_files/hearings/printers/112th/11259_64581.PDF
Front doors CP: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes
Military DA i/l ev: https://cyberwar.nl/d/20130200_Offensive-Cyber-Capabilities-are-Needed-Because-of-Deterrence_Jarno-Limnell.pdf
http://www.inss.org.il/uploadImages/systemFiles/MASA4-3Engc_Cilluffo.pdf
Military DA Iran impact:
http://www.sobiad.org/ejournals/journal_ijss/arhieves/2012_1/sanghamitra_nath.pdf
Military DA Syria impact: http://nationalinterest.org/commentary/syria-preparing-the-cyberthreat-8997
T
T-Domestic
1NC
NSA spies on foreign corporations through backdoors
NYT 14
(David E. Sanger and Nicole Perlroth. "N.S.A. Breached Chinese Servers Seen as Security Threat," New York Times. 3-22-2014.
http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html//ghs-kw)
WASHINGTON — American officials have long considered Huawei, the Chinese telecommunications giant, a
security threat, blocking it from business deals in the United States for fear that the company would create “back doors” in its
equipment that could allow the Chinese military or Beijing-backed hackers to steal corporate and government secrets. But even as
the United States made a public case about the dangers of buying from Huawei, classified documents show that the National Security Agency was creating its own back doors — directly into Huawei’s networks. The agency
pried its way into the servers in Huawei’s sealed headquarters in Shenzhen, China’s industrial heart,
according to N.S.A. documents provided by the former contractor Edward J. Snowden. It obtained information about
the workings of the giant routers and complex digital switches that Huawei boasts connect a
third of the world’s population, and monitored communications of the company’s top
executives. One of the goals of the operation, code-named “Shotgiant,” was to find any links
between Huawei and the People’s Liberation Army, one 2010 document made clear. But the plans went further:
to exploit Huawei’s technology so that when the company sold equipment to other countries — including both allies and nations
that avoid buying American products — the N.S.A. could roam through their computer and telephone networks to conduct
surveillance and, if ordered by the president, offensive cyberoperations.
NSA targets foreign systems with backdoors
Zetter 13
(Kim Zetter. "NSA Laughs at PCs, Prefers Hacking Routers and Switches," WIRED. 9-4-2013. http://www.wired.com/2013/09/nsa-router-hacking///ghs-kw)
THE NSA RUNS a massive, full-time hacking operation targeting foreign systems, the latest leaks
from Edward Snowden show. But unlike conventional cybercriminals, the agency is less interested in hacking PCs and
Macs. Instead, America’s spooks have their eyes on the internet routers and switches that form the basic
infrastructure of the net, and are largely overlooked as security vulnerabilities. Under a $652-million program codenamed
“Genie,” U.S. intel agencies have hacked into foreign computers and networks to monitor
communications crossing them and to establish control over them, according to a secret black budget
document leaked to the Washington Post. U.S. intelligence agencies conducted 231 offensive cyber operations in 2011 to penetrate
the computer networks of targets abroad. This included not only installing covert “implants” in foreign desktop computers but also
on routers and firewalls — tens of thousands of machines every year in all. According to the Post, the government planned to
expand the program to cover millions of additional foreign machines in the future and preferred hacking routers to individual PCs
because it gave agencies access to data from entire networks of computers instead of just individual machines. Most of the hacks
targeted the systems and communications of top adversaries like China, Russia, Iran and North Korea and included activities around
nuclear proliferation. The NSA’s focus on routers highlights an often-overlooked attack vector with huge advantages for the intruder,
says Marc Maiffret, chief technology officer at security firm Beyond Trust. Hacking routers is an ideal way for an intelligence or
military agency to maintain a persistent hold on network traffic because the systems aren’t updated with new software very often or
patched in the way that Windows and Linux systems are. “No one updates their routers,” he says. “If you think people are bad about
patching Windows and Linux (which they are) then they are … horrible about updating their networking gear because it is too
critical, and usually they don’t have redundancy to be able to do it properly.” He also notes that routers don’t have security software
that can help detect a breach. “The challenge [with desktop systems] is that while antivirus don’t work well on your desktop, they at
least do something [to detect attacks],” he says. “But you don’t even have an integrity check for the most part on routers and other
such devices like IP cameras.” Hijacking routers and switches could allow the NSA to do more than just eavesdrop on all the
communications crossing that equipment. It would also let them bring down networks or prevent certain communication, such as
military orders, from getting through, though the Post story doesn’t report any such activities. With control of routers, the NSA could
re-route traffic to a different location, or intelligence agencies could alter it for disinformation campaigns, such as planting
information that would have a detrimental political effect or altering orders to re-route troops or supplies in a military operation.
According to the budget document, the CIA’s Tailored Access Programs and NSA’s software engineers possess “templates” for breaking into common brands and models of routers, switches and
firewalls. The article doesn’t say it, but this would likely involve pre-written scripts or backdoor tools and root kits for
attacking known but unpatched vulnerabilities in these systems, as well as for attacking zero-day vulnerabilities that are yet
unknown to the vendor and customers. “[Router software is] just an operating system and can be hacked
just as Windows or Linux would be hacked,” Maiffret says. “They’ve tried to harden them a little bit more [than
these other systems], but for folks at a place like the NSA or any other major government intelligence
agency, it’s pretty standard fare of having a ready-to-go backdoor for your [off-the-shelf] Cisco or Juniper
models.”
T-Surveillance
1NC
Backdoors are also used for cyberwarfare—not surveillance
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington
Post, most recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting
lecturer at Princeton’s Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic,
and Middle East correspondent, Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on
temporary assignment in 2013 and 2014 to anchor The Post's coverage of the NSA disclosures after receiving an archive of
classified documents from Edward Snowden. Ellen Nakashima is a national security reporter for The Washington Post. She
focuses on issues relating to intelligence, technology and civil liberties. She previously served as a Southeast Asia correspondent
for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a biography of Gore, and has also covered
federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy agencies mounted 231 offensive
cyber-operations in 2011, documents show," Washington Post. 8-30-2013. https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documents-show/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
Sometimes an implant’s purpose is to create a back door for future access. “You pry open the
window somewhere and leave it so when you come back the owner doesn’t know it’s
unlocked, but you can get back in when you want to,” said one intelligence official, who was speaking generally about the topic
and was not privy to the budget. The official spoke on the condition of anonymity to discuss sensitive technology. Under U.S. cyberdoctrine, these operations
are known as “exploitation,” not “attack,” but they are essential precursors both to attack and
defense. By the end of this year, GENIE is projected to control at least 85,000 implants in strategically
chosen machines around the world. That is quadruple the number — 21,252 — available in 2008, according to the U.S. intelligence budget. The
NSA appears to be planning a rapid expansion of those numbers, which were limited until recently by the need for human
operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of the 68,975 machines with active implants in
2011. For GENIE’s next phase, according to an authoritative reference document, the NSA has brought online an automated system,
code-named TURBINE, that is capable of managing “potentially millions of implants” for intelligence gathering “and active
attack.”
T-Surveillance (ST)
1NC
Undermining encryption standards includes commercial fines against illegal
exports
Goodwin Procter 14
(Goodwin Procter, law firm. “Software Companies Now on Notice That Encryption Exports May Be Treated More Seriously:
$750,000 Fine Against Intel Subsidiary,” Client Alert, 10-15-2014.
http://www.goodwinprocter.com/Publications/Newsletters/Client-Alert/2014/1015_Software-Companies-Now-on-Notice-That-Encryption-Exports-May-Be-Treated-More-Seriously.aspx//ghs-kw)
On October 8, 2014, the
Department of Commerce’s Bureau of Industry and Security (BIS) announced
the issuance of a $750,000 penalty against Wind River Systems, an Intel subsidiary, for the unlawful
exportation of encryption software products to foreign government end-users and to
organizations on the BIS Entity List. Wind River Systems exported its software to China, Hong
Kong, Russia, Israel, South Africa, and South Korea. BIS significantly mitigated what would have
been a much larger fine because the company voluntarily disclosed the violations. We believe this to
be the first penalty BIS has ever issued for the unlicensed export of encryption software that did not also involve comprehensively
sanctioned countries (e.g., Cuba, Iran, North Korea, Sudan or Syria). This suggests a fundamental change in BIS’s treatment of
violations of the encryption regulations. Historically, BIS has resolved voluntarily disclosed violations of the encryption regulations
with a warning letter but no material consequence, and has shown itself unlikely to pursue such violations that were not disclosed.
This fine dramatically increases the compliance stakes for software companies — a message that BIS
seemed intent upon making in its announcement. Encryption is ubiquitous in software products. Companies
making these products should reexamine their product classifications, export eligibility, and
internal policies and procedures regarding the export of software that uses or leverages
encryption (even open source or third-party encryption libraries), particularly where a potential transaction on
the horizon — e.g., an acquisition, financing, or initial public offering — will increase the
likelihood that violations of these laws will be identified. If you would like additional information about the
issues addressed in this Client Alert, please contact Rich Matheny, who chairs Goodwin Procter’s National Security & Foreign Trade
Regulation Practice, or the Goodwin Procter attorney with whom you typically consult.
CPs
Foreign Backdoors CP
CX
In the world of the AFF does the government no longer have access to backdoors? So we don’t
use or possess backdoors in the world of the AFF, right?
1NC
(KQ) Counterplan: the United States federal government should ban the
creation of backdoors as outlined in the Secure Data Act of 2015, but should not
ban surveillance through backdoors, and should mandate clandestine corporate
disclosure of foreign-government-mandated backdoors to the United States
federal government.
(CT) Counterplan: The United States federal government should not mandate
the creation of surveillance backdoors in products or request private keys, and
should terminate current backdoors created either by government mandates or
government-requested keys, but should not cease the use of backdoors.
Backdoors are inevitable—we’ll use backdoors created by foreign governments
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Thoughts on Encryption and Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015.
http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
Still another approach is to let other governments do the dirty work. The computer scientists'
report cites the possibility of other sovereigns adopting their own extraordinary access regimes
as a reason for the U.S. to go slow: Building in exceptional access would be risky enough even if
only one law enforcement agency in the world had it. But this is not only a US issue. The UK
government promises legislation this fall to compel communications service providers, including
US-based corporations, to grant access to UK law enforcement agencies, and other countries
would certainly follow suit. China has already intimated that it may require exceptional access. If
a British-based developer deploys a messaging application used by citizens of China, must it
provide exceptional access to Chinese law enforcement? Which countries have sufficient respect for the rule of
law to participate in an international exceptional access framework? How would such determinations be made? How would timely
approvals be given for the millions of new products with communications capabilities? And how would this new surveillance
ecosystem be funded and supervised? The US and UK governments have fought long and hard to keep the governance of the
Internet open, in the face of demands from authoritarian countries that it be brought under state control. Does not the push for
exceptional access represent a breathtaking policy reversal? I am certain that the
computer scientists are correct that
foreign governments will move in this direction, but I think they are misreading the consequences of this. China
and Britain will do this irrespective of what the United States does, and that fact may well
create potential opportunity for the U.S. After all, if China and Britain are going to force U.S.
companies to think through the problem of how to provide extraordinary access without
compromising general security, perhaps the need to do business in those countries will
provide much of the incentive to think through the hard problems of how to do it. Perhaps
countries far less solicitous than ours of the plight of technology companies or the privacy
interests of their users will force the research that Comey can only hypothesize. Will Apple then take the
view that it can offer phones to users in China which can be decrypted for Chinese authorities
when they require it but that it's technically impossible to do so in the United States?
2NC O/V
Counterplan solves 100% of the case—we mandate that the USFG publicly stop
creating backdoors and instead use backdoors that are inevitably mandated by
foreign nations for surveillance—solves perception and doesn’t link to the net
benefit—that’s Wittes
2NC Backdoors Inev
India has backdoors
Ragan 12
(Steve Ragan. Steve Ragan is a security reporter and contributor for SecurityWeek. Prior to joining the journalism world in 2005,
he spent 15 years as a freelance IT contractor focused on endpoint security and security training. "Hackers Expose India's
Backdoor Intercept Program," No Publication. 1-9-2012. http://www.securityweek.com/hackers-expose-indias-backdoor-intercept-program//ghs-kw)
Symantec confirmed with SecurityWeek on Friday that hackers did access source code from Symantec Endpoint
Protection 11.0 and Symantec Antivirus 10.2. According to a Symantec spokesperson, “SEP 11 was four years ago to be exact.” In
addition, Symantec Antivirus 10.2 has been discontinued, though the company continues to service it. “We’re taking this extremely
seriously and are erring on the side of caution to develop a long-range plan to take care of customers still using those products,”
Cris Paden, Senior Manager of Corporate Communications at Symantec told SecurityWeek. Over the weekend, the story expanded.
The Lords of Dharmaraja released a purported memo outlining the intercept program known as RINOA, which earns its
name from the vendors involved - RIM, Nokia, and Apple. The memo said the vendors provided India
with backdoors into their technology in order for them to maintain a presence in the local
market space. India’s Ministry of Defense has “an agreement with all major device vendors” to
provide the country with the source code and information needed for their SUR (surveillance)
platform, the memo explains. These backdoors allowed the military to conduct surveillance
(RINOA SUR) against the US-China Economic and Security Review Commission. Personnel from Indian Naval Military Intelligence
were dispatched to the People’s Republic of China to undertake Telecommunications Surveillance (TESUR) using the RINOA
backdoors and CYCADA-based technologies.
China has backdoors in 80% of global communications
Protalinski 12
(Emil Protalinski. Reporter for CNet and ZDNet. "Former Pentagon analyst: China has backdoors to 80% of telecoms," ZDNet. 7-14-2012. http://www.zdnet.com/article/former-pentagon-analyst-china-has-backdoors-to-80-of-telecoms///ghs-kw)
The Chinese government reportedly has "pervasive access" to some 80 percent of the world's
communications, thanks to backdoors it has ordered to be installed in devices made by Huawei and ZTE Corporation.
That's according to sources cited by Michael Maloof, a former senior security policy analyst in
the Office of the Secretary of Defense, who now writes for WND: In 2000, Huawei was virtually unknown outside
China, but by 2009 it had grown to be one of the largest, second only to Ericsson. As a consequence, sources say that any
information traversing "any" Huawei equipped network isn't safe unless it has military encryption. One source
warned, "even then, there is no doubt that the Chinese are working very hard to decipher anything
encrypted that they intercept." Sources add that most corporate telecommunications networks use "pretty light
encryption" on their virtual private networks, or VPNs. I found about Maloof's report via this week's edition of The CyberJungle
podcast. Here's my rough transcription of what he says, at about 18 minutes and 30 seconds: The
Chinese government
and the People's Liberation Army are so much into cyberwarfare now that they have looked at
not just Huawei but also ZTE Corporation as providing through the equipment that they install in about 145 countries
around in the world, and in 45 of the top 50 telecom centers around the world, the potential for
backdooring into data. Proprietary information could be not only spied upon but also could be altered and in some cases
could be sabotaged. That's coming from technical experts who know Huawei, they know the company and they know the Chinese.
Since that story came out I've done a subsequent one in which sources tell me that it's
giving Chinese access to
approximately 80 percent of the world telecoms and it's working on the other 20 percent
now.
China is mandating backdoors
Mozur 1/28
(Paul Mozur. Reporter for the NYT. "New Rules in China Upset Western Tech Companies," New York Times. 1-28-2015.
http://www.nytimes.com/2015/01/29/technology/in-china-new-cybersecurity-rules-perturb-western-tech-companies.html//ghs-kw)
HONG KONG — The Chinese
government has adopted new regulations requiring companies that sell
computer equipment to Chinese banks to turn over secret source code, submit to invasive audits and build so-called
back doors into hardware and software, according to a copy of the rules obtained by foreign technology companies
that do billions of dollars’ worth of business in China. The new rules, laid out in a 22-page document approved at the end of last
year, are the first in a series of policies expected to be unveiled in the coming months that Beijing says
are intended to strengthen cybersecurity in critical Chinese industries. As copies have spread in the past month, the regulations have
heightened concern among foreign companies that the authorities are trying to force them out of one of the largest and fastest-growing markets. In a letter sent Wednesday to a top-level Communist Party committee on cybersecurity, led by President Xi
Jinping, foreign business groups objected to the new policies and complained that they amounted to protectionism. The groups,
which include the U.S. Chamber of Commerce, called for “urgent discussion and dialogue” about what they said was a “growing
trend” toward policies that cite cybersecurity in requiring companies to use only technology products and services that are
developed and controlled by Chinese companies. The letter is the latest salvo in an intensifying tit-for-tat between China and the
United States over online security and technology policy. While the United States has accused Chinese military personnel of hacking
and stealing from American companies, China has pointed to recent disclosures of United States snooping in foreign countries as a
reason to get rid of American technology as quickly as possible. Although it is unclear to what extent the new rules result from
security concerns, and to what extent they are cover for building up the Chinese tech industry, the Chinese regulations go far
beyond measures taken by most other countries, lending some credibility to industry claims that they are protectionist. Beijing also
has long used the Internet to keep tabs on its citizens and ensure the Communist Party’s hold on power. Chinese
companies
must also follow the new regulations, though they will find it easier since for most, their core customers are in China.
China’s Internet filters have increasingly created a world with two Internets, a Chinese one and a global one. The new policies could
further split the tech world, forcing hardware and software makers to sell either to China or the United States, or to create
significantly different products for the two countries. While
the Obama administration will almost certainly
complain that the new rules are protectionist in nature, the Chinese will be able to make a case that they
differ only in degree from Washington’s own requirements.
2NC AT Perm do Both
Permutation links to the net benefit—the AFF stops use of backdoors, that was
1AC cross-ex
2NC AT Perm do the CP
The counterplan bans the creation of backdoors but not the use of them—
that’s different from the plan—that was cross-ex
The permutation is severance—that’s a voting issue:
1. NEG ground—makes the AFF a shifting target which makes it impossible
to garner offense—stop copying k AFFs, vote NEG to be Dave Strauss
2. Kills advocacy skills—they never have to defend implementation of an
advocacy
Cyberterror Advantage CP
1NC
Counterplan: the United States federal government should substantially
increase its support for renewable energy technologies and grid
decentralization.
Grid decentralization and renewables solve terror attacks
Lawson 11
(Lawson, Sean. Sean Lawson is an assistant professor in the Department of Communication at the University of Utah. He holds a
PhD in Science and Technology Studies from Rensselaer Polytechnic Institute, a MA in Arab Studies from Georgetown University,
and a BA in History from California State University, Stanislaus. “BEYOND CYBER-DOOM: Cyberattack Scenarios and the Evidence
of History,” Mercatus Center at George Mason University. Working Paper No. 11-01, January 2011.
http://mercatus.org/sites/default/files/publication/beyond-cyber-doom-cyber-attack-scenarios-evidence-history_1.pdf//ghs-kw)
Cybersecurity policy should promote decentralization and self-organization in efforts to prevent,
defend against, and respond to cyberattacks. Disaster researchers have shown that victims are often
themselves the first responders and that centralized, hierarchical, bureaucratic responses can
hamper their ability to respond in the decentralized, self-organized manner that has often
proved to be more effective (Quarantelli, 2008: 895–896). One way that officials often stand in the way of decentralized
self-organization is by hoarding information (Clarke & Chess, 2009: 1000–1001). Similarly, over the last 50 years, U.S. military
doctrine increasingly has identified decentralization, self-organization, and information sharing as the keys
to effectively operating in ever-more complex conflicts that move at an ever-faster pace and
over ever-greater geographical distances (LeMay & Smith, 1968; Romjue, 1984; Cebrowski & Garstka, 1998;
Hammond, 2001). In the case of preventing or defending against cyberattacks on critical infrastructure, we must recognize that most
cyber and physical infrastructures are owned by private actors. Thus, a
centralized, military-led effort to protect the
fortress at every point will not work. A combination of incentives, regulations, and public-private partnerships will be necessary. This will be complex, messy, and difficult. But a cyberattack, should it occur,
will be equally complex, messy, and difficult, occurring instantaneously over global distances via a medium that is almost
incomprehensible in its complex interconnections and interdependencies. The
owners and operators of our critical
infrastructures are on the front lines and will be the first responders. They must be empowered
to act. Similarly, if the worst should occur, average citizens must be empowered to act in a
decentralized, self-organized way to help themselves and others. In the case of critical
infrastructures like the electrical grid, this could include the promotion of alternative energy
generation and distribution methods. In this way, “Instead of being passive consumers, [citizens] can become
actors in the energy network. Instead of waiting for blackouts, they can organize alternatives
and become less vulnerable to either terror or natural catastrophe” (Nye, 2010: 203)
2NC O/V
Counterplan solves all of their grid and cyber-terrorism impacts—we mandate
the USFG provide incentives, regulations, and P3s for widespread adoption of
alt energy and grid decentralization—this means each building has its own
microgrid, which allows for local, decentralized responses to cyberterror attacks
and solves their impact—that’s Lawson
2NC CP>AFF
Only the CP solves—a centralized grid results in inevitable failures and kills the
economy
Warner 10
(Guy Warner. Guy Warner is a leading economist and the founder and CEO of Pareto Energy. "Moving U.S. energy policy to a
decentralized grid," Grist. 6-4-2010. http://grist.org/article/2010-06-03-moving-u-s-energy-policy-to-a-decentralized-grid-rethinking-our///ghs-kw)
And, while the development of renewable energy technology has sped up rapidly in recent years, the
technology to deliver
this energy to the places where it is most needed is decades behind. America’s current
electricity transmission and distribution grid was built more than a century ago. Relying on the grid
to relay power from wind farms in the Midwest to cities on the east and west coast is simply not feasible. Our dated
infrastructure cannot handle the existing load — power outages and disruptions currently cost
the nation an estimated $164 billion each year. Wind and solar power produce intermittent power, which, in small
doses, has little impact on grid operations. As we introduce increasingly larger amounts of intermittent
power, our transmission system will require significant upgrades and perhaps even a total grid infrastructure
redesign, which could take decades and cost billions. With 9,200 power plants that link homes and business via 164,000 miles of
lines, a national retrofit is both cost-prohibitive and improbable. One
solution to this challenge is the
development of microgrids. Also known as distributed generation, microgrids produce energy closer to
the user rather than transmitting it from remote power plants. Power is generated and stored
locally and works in parallel with the main grid, providing power as needed and utilizing the
main grid at other times. Microgrids offer a decentralized power source that can be introduced
incrementally in modules now without having to deal with the years of delay realistically associated with building
central generation facilities (e.g. nuclear) and their associated transmission and distribution system add-ons. There is also a
significant difference in the up-front capital costs that are ultimately assigned the consumer. Introducing generation capacity into a
microgrid as needed is far less capital intensive, and some might argue more economical, than building a new nuclear plant at a cost
of $5-12 billion dollars.
Technological advancements in connectivity mean that microgrids can now be
developed for high energy use building clusters, such as trading floors and hospitals, relieving
stress on the macrogrid, and providing more reliable power. In fact, microgrids can be viewed as
the ultimate smart grid, providing local power that meets local needs and utilizing energy
sources, including renewables, that best fit the location and use profile. For example, on the East Coast,
feasibility studies are underway to retrofit obsolete paper mills into biomass fuel generators utilizing left over pulp wood. Pulp
wood, the waste left over from logging, can be easily pelletized, is inexpensive to produce, easy to transport, and has a minimal net
carbon output. Wood pellets are also easily adaptable to automated combustion systems, making them a valuable domestic
resource that can supplement and replace our use of fossil fuels, particularly in microgrids which can be designed to provide heating
and cooling from these biomass products.
2NC Terror Solvency
Decentralization solves terror threats
Verclas 12
(Verclas, Kirsten. Kirsten Verclas works as International Program Officer at the National Association of Regulatory Utility
Commissioners (NARUC) in Washington, DC. She holds a BA in International Relations with a Minor in Economics from Franklin
and Marshall College and an MA in International Relations with a concentration in Security Studies from The Elliott School at The
George Washington University. She also earned an MS in Energy Policy and Climate from Johns Hopkins University in August
2013. "The Decentralization of the Electricity Grid – Mitigating Risk in the Energy Sector,” American Institute for Contemporary
German Studies at Johns Hopkins University. 4-27-2012. http://www.aicgs.org/publication/the-decentralization-of-the-electricity-grid-mitigating-risk-in-the-energy-sector///ghs-kw)
A decentralized electricity grid has many environmental and security benefits. Microgrids in
combination with distributed energy generation provide a system of small power generation and storage
systems, which are located in a community or in individual houses. These small power generators produce
on average about 10 kW (for individual homes) to 2 MW (for communities) of electricity. While connected to and able to feed excess
energy into the grid, these
generators are simultaneously independent from the grid in that they can
provide power even when power from the main grid is not available. Safety benefits from a
decentralized grid are immense, as it has built-in redundancies. These redundancies are
needed should the main grid become inoperable due to a natural disaster or terrorist attack.
Communities or individual houses can then rely on microgrids with distributed electricity generation for
their power supply. Furthermore, having less centralized electricity generation and fewer main
critical transmission lines reduces targets for terrorist attacks and natural disasters. Fewer people would then
be impacted by subsequent power outages. Additionally, “decentralized power reduces the obstacles to
disaster recovery by allowing the focus to shift first to critical infrastructure and then to flow
outward to less integrated outlets.”[10] Thus critical facilities such as hospitals or police stations
would be the first to have electricity restored, while non-essential infrastructure would have
energy restored at a later date. Power outages are not only dangerous for critical infrastructure, they also cost money to
business and the economy overall. EPRI “reported that power outages and quality disturbances cost American businesses $119
billion per year.”[11] Decentralized
grids are also more energy efficient than centralized electricity
grids because “as electricity streams through a power line a small fraction of it is lost to various
factors. The longer the distance the greater the loss.”[12] Savings that are realized by having shorter
transmission lines could be used to install the renewable energy sources close to homes and
communities. The decrease of transmission costs and the increase in efficiency would cause
lower electricity usage overall. A decrease in the need to generate electricity would also increase energy security—fewer
imports of energy would be needed. The U.S. especially has been concerned with energy dependence in the last decades;
decentralized electricity generation could be one of the policies to address this issue.
Decentralization solves cyberattacks
Kiger 13
(Patrick J. Kiger. "Will Renewable Energy Make Blackouts Into a Thing of the
Past?," National Geographic Channel. 10-2-2013.
http://channel.nationalgeographic.com/american-blackout/articles/will-renewable-energy-make-blackouts-into-a-thing-of-the-past///ghs-kw)
The difference is that Germany’s grid of the future, unlike the present U.S. system, won’t rely on big power plants and long
transmission lines. Instead, Germany is creating a
decentralized “smart” grid—essentially, a system composed
of many small, potentially self-sufficient grids, that will obtain much of their power at the local
level from renewable energy sources, such as solar panels, wind turbines and biomass
generators. And the system will be equipped with sophisticated information and communications
technology (ICT) that will enable it to make the most efficient use of its energy resources. Some
might scoff at the idea that a nation could depend entirely upon renewable energy for its electrical needs, because both sunshine
and wind tend to be variable, intermittent producers of electricity. But the Germans plan to get around that problem by using
“linked renewables”—that is, by combining multiple sources of renewable energy, which has the effect of smoothing out the peaks
and valleys of the supply. As Kurt Rohrig, the deputy director of Germany’s Fraunhofer Institute for Wind Energy and Energy System
Technology, explained in a recent article on Scientific American’s website:
"Each source of energy—be it wind, sun
or bio-gas—has its strengths and weaknesses. If we manage to skillfully combine the different
characteristics of the regenerative energies, we can ensure the power supply for Germany." A
decentralized “smart” grid powered by local renewable energy might help protect the U.S.
against a catastrophic blackout as well, proponents say. “A more diversified supply with more
distributed generation inherently helps reduce vulnerability,” Mike Jacobs, a senior energy analyst at the
Union of Concerned Scientists, noted in a recent blog post on the organization’s website. According to the U.S. Department of
Energy’s SmartGrid.gov website, such
a system would have the ability to bank surplus electricity from
wind turbines and solar panels in numerous storage locations around the system. Utility
operators could tap into those reserves if electricity generation ebbed. Additionally, in the event of a
large-scale disruption, a smart grid would have the ability to switch areas over to power generated by
utility customers themselves, such as solar panels that neighborhood residents have installed on
their roofs. By combining these "distributed generation" resources, a community could keep its
health center, police department, traffic lights, phone system, and grocery store operating
during emergencies, DOE’s website notes. "There are lots of resources that contribute to grid
resiliency and flexibility," Allison Clements, an official with the Natural Resource Defense Council, wrote in a recent blog
post on the NRDC website. "Happily, they are the same resources that are critical to achieving a clean
energy, low carbon future." Joel Gordes, electrical power research director for the U.S. Cyber Consequences Unit, a
private-sector organization that investigates terrorist threats against the electrical grid and other targets,
also thinks that such a decentralized grid "could carry benefits not only for protecting us to a
certain degree from cyber-attacks but also providing power during any number of natural hazards." But Gordes does
offer a caveat—such a system might also offer more potential points of entry for hackers to plant malware and disrupt the entire
grid. Unless that vulnerability is addressed, he warned in an e-mail, "full deployment of [smart grid] technology could end up to be
disastrous."
Patent Reform Advantage CP
Notes
Specify reform + look at law reviews
Read the 500 bil card in the 1NC
Cut different versions w/ different mechanisms
1NC Comprehensive Reform
Counterplan: the United States federal government should comprehensively
reform its patent system for the purpose of eliminating non-practicing entities.
Patent trolls cost the economy half a trillion and counting—larger internal link
to tech and the economy
Lee 11
(Timothy B. Lee. Timothy B. Lee covers tech policy for Ars, with a particular focus on patent and copyright law, privacy, free
speech, and open government. While earning his CS master's degree at Princeton, Lee was the co-author of RECAP, a Firefox
plugin that helps users liberate public documents from the federal judiciary's paywall. Before grad school, he spent time at the
Cato Institute, where he is an adjunct scholar. He has written for both online and traditional publications, including Slate, Reason,
Wired.com, and the New York Times. When not screwing around on the Internet, he can be seen rock climbing, ballroom dancing,
and playing soccer. He lives in Philadelphia. He has a blog at Forbes and you can follow him on Twitter. "Study: patent trolls have
cost innovators half a trillion dollars," Ars Technica. xx-xx-xxxx. http://arstechnica.com/tech-policy/2011/09/study-patent-trolls-have-cost-innovators-half-a-trillion-bucks///ghs-kw)
By now, the story of patent
trolls has become well-known: a small company with no products of its own
threatens lawsuits against larger companies who inadvertently infringe its portfolio of broad
patents. The scenario has become so common that we don't even try to cover all the cases here at Ars. If we did, we'd have little
time to write about much else. But anecdotal evidence is one thing. Data is another. Three Boston University
researchers have produced a rigorous empirical estimate of the cost of patent trolling. And the
number is breath-taking: patent trolls ("non-practicing entity" is the clinical term) have cost publicly
traded defendants $500 billion since 1990. And the problem has become most severe in recent years. In the last
four years, the costs have averaged $83 billion per year. The study says this is more than a quarter
of US industrial research and development spending during those years. Two of the study's authors,
James Bessen and Mike Meurer, wrote Patent Failure, an empirical study of the patent system that has been widely read and cited
since its publication in 2008. They were joined for this paper by a colleague, Jennifer Ford. It's hard to measure the costs of litigation
directly. The
most obvious costs for defendants are legal fees and payouts to plaintiffs, but these
are not necessarily the largest costs. Often, indirect costs like employee distraction, legal
uncertainty, and the need to redesign or drop key products are even more significant. The trio use a
clever method known as a stock market event study to estimate these costs. The theory is simple: a company's stock price
represents the stock market's best estimation of the company's value. If the company's stock drops by, say, two percent in the days
after a lawsuit is filed, then the market thinks the lawsuit will cost the company two percent of its market capitalization. Of course,
this wouldn't be a very rigorous technique if they were looking at a single lawsuit. Any number of factors could have affected the
firm's stock price that same week. Maybe the company released a bad earnings report the next day. But with
a large sample
of companies, these random factors should mostly cancel each other out, leaving the market's
rough estimate of how much patent lawsuits cost their targets. The authors used a database of
1,630 patent troll lawsuits compiled by Patent Freedom. Because many of the lawsuits had multiple defendants, there
was a total of 4,114 plaintiff-defendant pairs. The median defendant over all of these pairs lost $20.4 million in
market capitalization, while the mean loss was $122 million.
2NC Solvency
Comprehensive reform solves—fee shifting, pleading and discovery standards, demand letter reform, customer stays, and fee recovery
Hatch 15
(Senator Orrin Hatch. "Senator Hatch: It’s Time to Kill Patent Trolls for Good," WIRED. 3-16-2015. http://www.wired.com/2015/03/opinion-must-finally-legislate-patent-trolls-existence///ghs-kw)
There is broad agreement—among both big and small businesses—that any serious solution
must include:
• Fee shifting, which will require patent trolls to pay legal fees when their suits are unsuccessful;
• Heightened pleading and discovery standards, which will raise the bar on litigation procedure, making it increasingly difficult for trolls to file frivolous lawsuits;
• Demand letter reforms, which will require those sending demand letters to be more specific and transparent;
• Stays of customer suits, which will allow a manufacturer’s case to move forward first, without binding the end user to the result of that case;
• A mechanism to enable recovery of fees, which will prevent insolvent plaintiffs from litigating and dashing.
Some critics argue that these proposals will help only large technology companies and might
even hurt startups and small businesses. In my discussions with stakeholders, however, I have
repeatedly been told that a multi-pronged approach that tackles each of these issues is needed
to effectively combat patent trolls across all levels of industry. These stakeholder discussions
have included representatives from the hotel, restaurant, retail, real estate, financial services,
and high-tech industries, as well as start-up and small business owners.
Enacting legislation on any topic is a major undertaking, and the added complexities inherent in
patent law make passing patent reforms especially challenging. Crucially, we will probably have
only one chance to do so for a long while, so whatever we do must work. We must not pass any
bill that fails to provide an effective deterrent against patent trolls at all stages of litigation.
It is my belief that any viable legislation must ensure that those who successfully defend against
abusive patent litigation and are awarded fees will actually get paid. Even when a patent troll is
a shell company with no assets, there are usually other parties with an interest in the litigation
who do have assets. These parties, however, often keep themselves beyond the jurisdiction of
the courts. They reap benefits if the plaintiff forces a settlement, but are protected from any
liability if they lose.
Right now, that’s a win-win situation for these parties, and a lose-lose situation for America’s
innovators.
Because Congress cannot force parties outside a court’s jurisdiction to join in a case, we must
instead incentivize interested parties to do the right thing and pay court-ordered fee awards.
This is why we must pass legislation that includes a recovery provision. Fee shifting without
recovery is like writing a check on an empty account. It’s purporting to convey something that
isn’t there. Only fee shifting coupled with a recovery provision will stop patent trolls from
litigating-and-dashing.
There is no question that American ingenuity fuels our economy. We must ensure that our
patent system is strong and vibrant and helps to protect our country’s premier position in
innovation.
Reform solves patent trolling
Roberts 14
(Jeff John Roberts. Jeff reports on legal issues that impact the future of the tech industry, such as privacy, net neutrality and
intellectual property. He previously worked as a reporter for Reuters in Paris and New York, and his free-lance work includes clips
for the Economist, the New York Times and the Globe & Mail. A frequent guest on media outlets like NPR and Fox, Jeff is also a
lawyer, having passed the bar in New York and Ontario. "Patent reform is likely in 2015. Here’s what it could look like," No
Publication. 11-19-2014. https://gigaom.com/2014/11/19/patent-reform-is-likely-in-2015-heres-what-it-could-look-like///ghs-kw)
As patent scholar Dennis Crouch notes, the question is how far the new law will go. In particular, real
reform will depend
on changing the economic asymmetries in patent litigation that allow trolls to flourish, and
that lead troll victims to simply pay up rather than engage in costly litigation. Here are some
measures we are likely to see under the Goodlatte bill, according to Crouch and legal sources like IAM and Law.com (subscription
required): Fee-shifting: Right now, trolls typically have nothing to lose by filing a lawsuit since they
are shell companies with no assets. New fee-shifting measures, however, could put them on the
hook for their victims’ legal fees. Discovery limits: Currently, trolls can exploit the discovery
process — in which each side must offer up documents and depositions — by drowning their
targets in expensive and time-consuming requests. Limiting the scope of discovery could take
that tactic off the table. Heightened pleading requirements: Right now, patent trolls don’t have
to specify how exactly a company is infringing their technology, but can simply serve cookie-cutter complaints that list the patents and the defendant. Pleading reform would force the trolls
to explain what exactly they are suing over, and give defendants a better opportunity to assess
the case. Identity requirements: This reform proposal is known as “real party of interest” and
would make it harder for those filing patent lawsuits (often lawyers working with private equity
firms) to hide behind shell companies, and require them instead to identify themselves. Crouch
also notes the possibility of expanded “post-grant” review, which gives defendants a fast and
cheaper tool to invalidate bad patents at the Patent Office rather than in federal court.
2NC O/V
The status quo patent system is hopelessly broken and allows patent trolls to
game the system by obtaining broad patents on practices as generic as selling
objects on the internet—those firms sue innovators and startups who “violate”
their patents, costing the US economy half a trillion and stifling innovation—
that’s Lee
The counterplan eliminates patent trolls through the set of comprehensive
reforms described below—solves their innovation arguments and is
independently a bigger internal link to innovation and the economy
Patent reform is key to prevent patent trolling that stifles innovation and reduces
R&D by half
Bessen 14
(James Bessen. Bessen is a Lecturer in Law at the Boston University School of
Law. Bessen was also a Fellow at the Berkman Center for Internet and Society.
"The Evidence Is In: Patent Trolls Do Hurt Innovation," Harvard Business
Review. November 2014. https://hbr.org/2014/07/the-evidence-is-in-patent-trolls-do-hurt-innovation//ghs-kw)
Over the last two years, much has been written about patent
trolls, firms that make their money asserting
patents against other companies, but do not make a useful product of their own. Both the
White House and Congressional leaders have called for patent reform to fix the underlying
problems that give rise to patent troll lawsuits. Not so fast, say Stephen Haber and Ross Levine in a Wall Street
Journal Op-Ed (“The Myth of the Wicked Patent Troll”). We shouldn’t reform the patent system, they say, because there is no
evidence that trolls are hindering innovation; these calls are being driven just by a few large companies who don’t want to pay
inventors. But there is evidence of significant harm. The White House and the Congressional Research Service both cited many
research studies suggesting that patent
litigation harms innovation. And three new empirical studies
provide strong confirmation that patent litigation is reducing venture capital investment in
startups and is reducing R&D spending, especially in small firms. Haber and Levine admit that patent
litigation is surging. There were six times as many patent lawsuits last year as in the 1980s. The
number of firms sued by patent trolls grew nine-fold over the last decade; now a majority of
patent lawsuits are filed by trolls. Haber and Levine argue that this is not a problem: “it might instead reflect a healthy,
dynamic economy.” They cite papers finding that patent trolls tend to file suits in innovative industries and that during the
nineteenth century, new technologies such as the telegraph were sometimes followed by lawsuits. But this does not mean that the
explosion in patent litigation is somehow “normal.” It’s true that plaintiffs, including patent trolls, tend to file lawsuits in dynamic,
innovative industries. But that’s just because they “follow the money.” Patent trolls tend to sue cash rich companies, and innovative
new technologies generate cash. The economic burden of today’s patent lawsuits is, in fact, historically unprecedented. Research
shows that patent trolls cost defendant firms $29 billion per year in direct out-of-pocket costs;
in aggregate, patent litigation destroys over $60 billion in firm wealth each year. While mean
damages in a patent lawsuit ran around $50,000 (in today’s dollars) at the time the telegraph, mean damages today run about $21
million. Even taking into account the much larger size of the economy today, the economic impact of patent litigation today is an
order of magnitude larger than it was in the age of the telegraph. Moreover, these
costs fall disproportionately on
innovative firms: the more R&D a firm performs, the more likely it is to be sued for patent
infringement, all else equal. And, although this fact alone does not prove that this litigation reduces firms’ innovation,
other evidence suggests that this is exactly what happens. A researcher at MIT found, for example, that medical
imaging
businesses sued by a patent troll reduced revenues and innovations relative to comparable
companies that were not sued. But the biggest impact is on small startup firms — contrary to Haber and
Levine, most patent trolls target firms selling less than $100 million a year. One survey of software startups found
that 41% reported “significant operational impacts” from patent troll lawsuits, causing them to
exit business lines or change strategy. Another survey of venture capitalists found that 74% had
companies that experienced “significant impacts” from patent demands. Three recent
econometric studies confirm these negative effects. Catherine Tucker of MIT analyzed venture capital investing
relative to patent lawsuits in different industries and different regions of the country. Controlling for the influence of other factors,
she estimates that lawsuits
from frequent litigators (largely patent trolls) were responsible for a
decline of $22 billion in venture investing over a five-year period. That represents a 14% decline.
Roger Smeets of Rutgers looked at R&D spending by small firms, comparing firms that were hit by extensive lawsuits to a carefully
chosen comparable sample. The comparison sample allowed him to isolate the effect of patent lawsuits from other factors that
might also influence R&D spending. Prior to the lawsuit,
firms devoted 20% of their operating expenditures
to R&D; during the years after the lawsuit, after controlling for other factors, they reduced that spending by 3% to
5% of operating expenditures, representing about a 19% reduction in relative R&D spending. And researchers from
Harvard and the University of Texas recently examined R&D spending of publicly listed firms that had been sued by patent trolls.
They compared firms where the suit was dismissed, representing a clear win for the defendant, to those where the suit was settled
or went to final adjudication (typically much more costly). As in the previous paper, this comparison helped them isolate the effect
of lawsuits from other factors. They found that when lawsuits were not dismissed, firms reduced their
R&D spending
by $211 million and reduced their patenting significantly in subsequent years. The reduction in
R&D spending represents a 48% decline. Importantly, these studies are initial releases of works in progress; the
researchers will refine their estimates of harm over the coming months. Perhaps some of the estimates may shrink a bit.
Nevertheless, across
a significant number of studies using different methodologies and performed
by different researchers, a consistent picture is emerging about the effects of patent litigation: it
costs innovators money; many innovators and venture capitalists report that it significantly
impacts their businesses; innovators respond by investing less in R&D and venture capitalists
respond by investing less in startups. Haber and Levine might not like the results of this research. But the weight of the
evidence from these many studies cannot be ignored; patent trolls do, indeed, cause harm. It’s time for Congress to do
something about it.
2NC Comprehensive Reform
Comprehensive reform solves patent trolling
Downes 7/6
(Larry Downes. Larry Downes is an author and project director at the Georgetown Center for Business and Public Policy. His new
book, with Paul Nunes, is “Big Bang Disruption: Strategy in the Age of Devastating Innovation.” Previous books include the bestselling “Unleashing the Killer App: Digital Strategies for Market Dominance.” "What would 'real' patent reform look like?," CNET.
7-6-2015. http://www.cnet.com/news/what-does-real-patent-reform-look-like///ghs-kw)
And a new report (PDF) from technology think tank Lincoln Labs argues that reversing
the damage to the innovation
economy caused by years of overly generous patent policies requires far stronger medicine than
Congress is considering or the courts seem willing to swallow on their own. The bills making their way through Congress, for
example, focus almost entirely on curbing abuses by companies that buy up often overly broad patents and then, rather than
produce goods, simply sue manufacturers and users they argue are infringing their patents. These nonpracticing
entities,
referred to derisively as patent trolls, are widely seen as a serious drag on innovation, particularly in
fast-evolving technology industries. Trolling behavior, according to studies from Stanford Law
School professor and patent expert Mark Lemley, does little to nothing to promote the
Constitutional goal of patents to encourage innovation by granting inventors temporary
monopolies during which they can recover their investment. The House of Representatives passed antitrolling
legislation in 2013, but a Senate version was killed by then-Majority Leader Harry Reid (D-Nev.) in May 2014. "Patent trolls,"
said Gary Shapiro, president and CEO of the Consumer Electronics Association, "bleed $1.5 billion a week from the US
economy -- that's almost $120 billion since the House passed a patent reform bill in December of 2013." A call for 'real' patent
reform The Lincoln Labs report agrees with these and other criticisms of patent trolling, but argues for more
fundamental changes to the system, or what the report calls "real" patent reform. The report, authored by
former Republican Congressional staffer Derek Khanna, urges a complete overhaul of the process by which the
Patent Office reviews applications, as well as the elimination of patents for software, business
methods, and a special class of patents for design elements -- a category that figured prominently in the
smartphone wars. Khanna claims that the Patent Office has demonstrated an "abject failure" to enforce fundamental legal
requirements that patents only be granted for inventions that are novel, nonobvious and useful. To
reverse that trend, the
report calls on Congress to change incentives for patent examiners that today weigh the scales
in favor of approval, add a requirement for two examiners to review the most problematic
categories of patents, and allow crowdsourced contributions to Patent Office databases of
"prior art" to help filter out nonnovel inventions. Khanna estimates these reforms alone "would
knock out a large number of software patents, perhaps 75-90%, where the economic argument
for patents is exceedingly difficult to sustain." The report also calls for the elimination of design
patents, which offer protection for ornamental features of manufactured products, such as the
original design of the Coca-Cola bottle.
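Side note on the Shapiro quote above: the weekly and cumulative figures line up, assuming the clock starts when the House passed its patent bill in early December 2013 and runs to this article's July 2015 date, roughly 80 weeks (the week count is my estimate, not stated in the card):
\[
\$1.5\ \text{billion/week} \times 80\ \text{weeks} \approx \$120\ \text{billion}
\]
Useful if the AFF tries to characterize the $120 billion number as pulled from thin air.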
Reg-Neg CP
1NC Shell
Text: the United States federal government should enter into a process of
negotiated rulemaking over _______<insert plan>______________ and
implement the results of negotiation.
The CP is plan minus—it doesn’t mandate the plan, just that a regulatory
negotiations committee is created to discuss the plan
And, it competes—reg neg is not normal means
USDA 6/6
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair
marketing of U.S. agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated
June 6th 2014. http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434//ghs-kw)
How reg-neg
differs from “traditional” notice-and-comment rulemaking The “traditional”
notice-and-comment rulemaking provided in the Administrative Procedure Act (APA) requires an agency
planning to adopt a rule on a particular subject to publish a proposed rule (NPRM) in the Federal Register
and to offer the public an opportunity to comment. The APA does not specify who is to draft the
proposed rule nor any particular procedure to govern the drafting process. Ordinarily, agency staff
performs this function, with discretion to determine how much opportunity is allowed for public input. Typically, there is no
opportunity for interchange of views among potentially affected parties, even where an
agency chooses to conduct a hearing. The “traditional” notice-and-comment rulemaking can be very adversarial. The
dynamics encourage parties to take extreme positions in their written and oral statements – in both pre-proposal contacts as well as
in comments on any published proposed rule as well as withholding of information that might be viewed as damaging. This
adversarial atmosphere may contribute to the expense and delay associated with regulatory proceedings, as parties try to position
themselves for the expected litigation. What is lacking is an opportunity for the parties to exchange views, share information, and
focus on finding constructive, creative solutions to problems. In
negotiated rulemaking, the agency, with the
assistance of one or more neutral advisors known as “convenors,” assembles a committee of
representatives of all affected interests to negotiate a proposed rule. Sometimes the law itself will
specify which interests are to be included on the committee. Once assembled, the next goal is for members to receive training in interest-based problem-solving and consensus decision making. They then must make sure that all views are heard and that each committee member agrees to a set of ground rules for the negotiated rulemaking process. The ultimate goal is to reach consensus on a text that all parties can accept. The agency is represented at the table by an official who is sufficiently senior to be able to speak authoritatively on its behalf. Negotiating sessions are chaired by a neutral mediator or facilitator skilled in assisting in the resolution of multiparty disputes. The Checklist—Advantages as well as Misperceptions. The advantages of negotiated rulemaking include: a “reality check” … a cooperative relationship … [fewer] “end runs” against th[e rule] … and participation is voluntary, for the agency and for others.
<Insert specific solvency advocate>
Reg neg solves—empirics prove
Knaster 10
(Alana Knaster is the Deputy Director of the Resource Management Agency. She was Senior Executive in the Monterey County
Planning Department for five years with responsibility for planning, building, and code enforcement programs. Prior to joining
Monterey County, Alana was the President of the Mediation Institute, a national non-profit firm specializing in the resolution of
complex land use planning and environmental disputes. Many of the disputes that she successfully mediated, involved dozens of
stakeholder groups including government agencies, major corporations and public interest groups. She served in that capacity for
15 years. Alana was Mayor of the City of Hidden Hills, California from 1981-88 and represented her City on a number of regional
planning agencies and commissions. She also has been on the faculty of Pepperdine University Law School since 1989, teaching
courses in environmental and public policy mediation. Knaster, A. “Resolving Conflicts Over Climate Change Solutions: Making the
Case for Mediation,” Pepperdine Dispute Resolution Law Journal, Vol 10, No 3, 2010. 465-501. http://law.pepperdine.edu/disputeresolution-law-journal/issues/volume-ten/Knaster%20Article.pdf//ghs-kw)
Federal and international dispute resolution process models. There are also models in U.S. and Canadian
legislation supporting the use of consensus-based processes. These processes have been
successfully applied to resolve dozens of disputes that involved multiple stakeholder interests,
on technically and politically complex environmental and public policy issues. For example, the
Negotiated Rulemaking Act of 1990 was enacted by Congress to formalize a process for negotiating contentious new
regulations.118 The Act provides a process called “reg neg” by which representatives of interest groups that
could be substantially affected by the provisions of a regulation, and agency staff negotiate the
provisions.119 The meetings are open to the public; however, the process does enable negotiators to hold private
interest group caucuses. If a consensus is reached on the provisions of the rule, the Agency commits to
publish the consensus rule in the Federal Register for public comment.120 The participants in the
reg neg agree that as long as the final regulation is consistent with what they have jointly
recommended, they will not challenge it in court. The assumption is that parties will support a product
that they negotiated.121 Reg neg has been utilized by numerous federal agencies to negotiate
rules pertaining to a diverse range of topics including safe drinking water, fugitive gasoline
emissions, eligibility for educational loans, and passenger safety.122 In 1991, in Canada, an initiative was
launched by the National Task Force on Consensus and Sustainability to develop a guidance document that would govern how
federal, provincial, and municipal governments would address resource management disputes. The document that was negotiated,
“Building Consensus for a Sustainable Future: Guiding Principles,” was adopted by consensus in 1994.123 The document outlined
principles for building a consensus and process steps. The ten principles included provisions regarding inclusivity of the process (this
was particularly important in Canada with respect to inclusion of Aboriginal peoples), voluntary participation, accountability to
constituencies, respect for diverse interests, and commitment to any agreement adopted.124 The
consensus principles
were subsequently utilized to resolve disputes over issues that included sustainable forest
management, siting of solid waste facilities, impacts of pulp mill expansion, and economic
diversification based on sustainable wildlife resources.125 The reg neg and Consensus for Sustainable
Future model represent codified mediated negotiation processes that have withstood the test of
legal challenge and have been strongly endorsed by the groups that have participated in these
processes.
1NC Ptix NB
Doesn’t link to politics—empirics prove
USDA 6/6
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair
marketing of U.S. agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated
June 6th 2014 @ http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434)
History In 1990,
Congress endorsed use by federal agencies of an alternative procedure known as "negotiated rulemaking," also
called "regulatory negotiation," or "reg-neg." It has been used by agencies to bring interested parties into the rule-drafting process at an early stage, under circumstances that foster cooperative efforts to achieve
solutions to regulatory problems. Where successful, negotiated rulemaking can lead to better, more acceptable rules, based on a clearer understanding of the concerns of all those affected.
Negotiated rules may be easier to enforce and less likely to be challenged in litigation. The
results of reg-neg usage by the federal government, which began in the early 1980s, are impressive: such large-scale
regulators as the Environmental Protection Agency, Nuclear Regulatory Commission, Federal Aviation Administration, and the Occupational Safety and Health
Administration used the process on many occasions. Building on these positive experiences, several states, including Massachusetts, New York, and California, have
also begun using the procedure for a wide range of rules. The very first negotiated rule-making was convened by the Federal Mediation
and Conciliation Service (FMCS) working with the Department of Transportation, the Federal Aviation Administration, airline pilots and other interested
groups to deal with regulations concerning flight and duty time for pilots. The negotiated rulemaking was a success and a draft rule was agreed upon that became the final rule. Since that first reg-neg, FMCS
has assisted in both the convening and facilitating stages in many such procedures at the Departments of Labor, Health and Human Services
(HRSA), Interior, Housing and Urban Development, and the EPA, as well as state-level processes, and other forms of consensus-based decision-making programs such as public policy dialogues, hearings, focus
groups, and meetings.
1NC Fism NB
Failure to use reg neg results in a federalism crisis—REAL ID proves
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School,
cum laude. Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and
federalism. She has presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth
Circuit Judicial Conference, the U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training
and Research. She has advised National Sea Grant multilevel governance studies involving Chesapeake Bay and consulted with
multiple institutions on developing sustainability programs. She has appeared in the Chicago Tribune, the London Financial
Times, the PBS Newshour and Christian Science Monitor’s “Patchwork Nation” project, and on National Public Radio. She is the
author of many scholarly works, including Federalism and the Tug of War Within (Oxford, 2012). Professor Ryan is a graduate of
Harvard Law School, where she was an editor of the Harvard Law Review and a Hewlett Fellow at the Harvard Negotiation
Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for the Ninth Circuit before
practicing environmental, land use, and local government law in San Francisco. She began her academic career at the College of
William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured
throughout Asia. Ryan, E. Boston College Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
b. A Cautionary Tale: The REAL ID Act The value
of negotiated rulemaking to federalism bargaining may be
best understood in relief against the failure of alternatives in federalism-sensitive contexts.
Particularly informative are the strikingly different state responses to the two approaches Congress has recently taken in tightening
national security through identification reform--one requiring
regulations through negotiated rulemaking, and
the other through traditional notice and comment. After the 9/11 terrorist attacks, Congress ordered the Department of
Homeland Security (DHS) to establish rules regarding valid identification for federal purposes (such as boarding an aircraft or
accessing federal buildings). n291 Recognizing the implications for state-issued driver's licenses and ID cards, Congress required DHS
to use negotiated
rulemaking to forge consensus among the states about how best to proceed. n292 States leery of
the staggering costs associated with proposed reforms participated actively in the process. n293 However,
the subsequent REAL ID Act of 2005 repealed the ongoing negotiated rulemaking and required DHS to
prescribe top-down federal requirements for state-issued licenses. n294 The resulting DHS rules have been bitterly
opposed by the majority of state governors, legislatures, and motor vehicle administrations, n295
prompting a virtual state rebellion that cuts across the red-state/blue-state political divide. n296
No state met the December 2009 deadline initially contemplated by the statute, and over half have enacted or
considered legislation prohibiting compliance with the Act, defunding its implementation, or
calling for its repeal. n297 In the face of this unprecedented state hostility, DHS has extended compliance
deadlines even for those that did not request extensions, and bills have been introduced in both houses of
Congress to repeal the Act. n298 Efforts to repeal what is increasingly referred to as a "failed" policy have
won endorsements from organizations across the political spectrum. n299 Even the Executive
Director of the ACLU, for whom federalism concerns have not historically ranked highly, opined in USA Today that the REAL ID Act
violates the Tenth Amendment. n300
US federalism will be modelled globally—solves human rights, free trade, war,
and economic growth
Calabresi 95
(Steven G. Calabresi is a Professor of Law at Northwestern University and is a graduate of the Yale Law School (1983) and of Yale
College (1980). Professor Calabresi was a Scholar in Residence at Harvard Law School from 2003 to 2005, and he has been a
Visiting Professor of Political Science at Brown University since 2010. Professor Calabresi was also a Visiting Professor at Yale Law
School in the Fall of 2013. Professor Calabresi served as a Law Clerk to Justice Antonin Scalia of the United States Supreme Court,
and he also clerked for U.S. Court of Appeals Judges Robert H. Bork and Ralph K. Winter. From 1985 to 1990, he served in the
Reagan and first Bush Administrations working both in the West Wing of the Reagan White House and before that in the U.S.
Department of Justice. In 1982, Professor Calabresi co-founded The Federalist Society for Law & Public Policy Studies, a national
organization of lawyers and law students, and he currently serves as the Chairman of the Society’s Board of Directors – a position
he has held since 1986. Since joining the Northwestern Faculty in 1990, he has published more than sixty articles and comments
in every prominent law review in the country. He is the author with Christopher S. Yoo of The Unitary Executive: Presidential
Power from Washington to Bush (Yale University Press 2008); and he is also a co-author with Professors Michael McConnell,
Michael Stokes Paulsen, and Samuel Bray of The Constitution of the United States (2nd ed. Foundation Press 2013), a
constitutional law casebook. Professor Calabresi has taught Constitutional Law I and II; Federal Jurisdiction; Comparative Law;
Comparative Constitutional Law; Administrative Law; Antitrust; a seminar on Privatization; and several other seminars on topics
in constitutional law. Calabresi, S. G. “Government of Limited and Enumerated Powers: In Defense of United States v. Lopez, A
Symposium: Reflections on United States v. Lopez,” Michigan Law Review, Vol 94, No 3, December 1995. //ghs-kw)
We have seen that a desire for both international and devolutionary federalism has swept across the world in recent years. To a significant extent, this is due to global fascination with and emulation of our own American federalism success story. The global trend toward federalism is an enormously positive development that greatly increases the likelihood of future peace, free trade, economic growth, respect for social and cultural diversity, and protection of individual human rights. It depends for its success on the willingness of sovereign nations to strike federalism deals in the belief that those deals will be kept.233 The U.S. Supreme Court can do its part to encourage the future striking of such deals by enforcing vigorously our own American federalism deal. Lopez could be a first step in that process, if only the Justices and the legal academy would wake up to the importance of what is at stake.
Federalism solves economic growth
Bruekner 05
(Jan K. Bruekner is a Professor of Economics at the University of California, Irvine. He is a member of the Institute of
Transportation Studies, Institute for Mathematical Behavioral Sciences, and a former editor of the Journal of Urban Economics.
Bruekner, J. K. “Fiscal Federalism and Economic Growth,” CESifo Working Paper No. 1601, November 2005. https://www.cesifogroup.de/portal/page/portal/96843357AA7E0D9FE04400144FAFBA7C//ghs-kw)
The analysis in this paper suggests that faster
economic growth may constitute an additional benefit of fiscal
federalism beyond those already well recognized. This result, which matches the conjecture of Oates (1993) and
the expectations of most empirical researchers who have studied the issue, arises from an unexpected
source: a greater incentive to save when public-good levels are tailored under federalism to
suit the differing demands of young and old consumers. This effect grows out of a novel
interaction between the rules of public-good provision which apply cross-sectionally at a given
time and involve the young and old consumers of different generations, and the savings decision
of a given generation, which is intertemporal in nature. This cross-sectional/intertemporal interaction yields the link
between federalism and economic growth. While it is encouraging that the paper’s results match recent empirical
findings showing a positive growth impact from fiscal decentralization, additional theoretical work
exploring other possible sources of such a link is clearly needed. The present results emerge from a model based on very minimal
assumptions, but exploration of richer models may also be fruitful.
US economic growth solves war, collapse ensures instability
National Intelligence Council, ’12 (December, “Global Trends 2030: Alternative Worlds”
http://www.dni.gov/files/documents/GlobalTrends_2030.pdf)
Big Stakes for the International System. The optimistic scenario of a reinvigorated US economy would increase the prospects that the growing global and regional challenges would be addressed. A stronger US economy dependent on trade in services and cutting-edge technologies would be a boost for the world economy, laying the basis for stronger multilateral cooperation. Washington would have a stronger interest in world trade, potentially leading a process of World Trade Organization reform that streamlines new negotiations and strengthens the rules governing the international trading system. The US would be in a better position to boost support for a more democratic Middle East and prevent the slide of failing states. The US could act as balancer ensuring regional stability, for example, in Asia where the rise of multiple powers—particularly India and China—could spark increased rivalries. However, a reinvigorated US would not necessarily be a panacea. Terrorism, proliferation, regional conflicts, and other ongoing threats to the international order will be affected by the presence or absence of strong US leadership but are also driven by their own dynamics. The US impact is much more clear-cut in the negative case in which the US fails to rebound and is in sharp economic decline. In that scenario, a large and dangerous global power vacuum would be created and in a relatively short space of time. With a weak US, the potential would increase for the European economy to unravel. The European Union might remain, but as an empty shell around a fragmented continent. Progress on trade reform as well as financial and monetary system reform would probably suffer. A weaker and less secure international community would reduce its aid efforts, leaving impoverished or crisis-stricken countries to fend for themselves, multiplying the chances of grievance and peripheral conflicts. In this scenario, the US would be more likely to lose influence to regional hegemons—China and India in Asia and Russia in Eurasia. The Middle East would be riven by numerous rivalries which could erupt into open conflict, potentially sparking oil-price shocks. This would be a world reminiscent of the 1930s when Britain was losing its grip on its global leadership role.
2NC O/V
The counterplan convenes a regulatory negotiation committee to discuss the implementation of the plan. Stakeholders decide how and whether the plan is implemented; the government then implements that decision. This solves better than the AFF:
1. Collaboration—reg neg facilitates government-civilian cooperation, results
in greater satisfaction with regulations and better compliance after
implementation—social psychology and empirics prove
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of
Toronto, and a Master of Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as
Counselor for Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of
regulation and institutional design, and a leading thinker on collaborative and contractual approaches to governance. After
leaving the White House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural
reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United States,
the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the
American College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation,
Policy Analysis, and Public Choice at American University. She holds a PhD in Political Science from the University of North
Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy
Benefit,” N.Y.U. Environmental Journal, Volume 9, 2000.
http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf/)
D. Compliance The compliance implications of consensus-based processes remain a matter of speculation.360 No one has yet
produced empirical data on the relationship between negotiated rulemaking and compliance, let alone data comparing the
compliance implications of negotiated and conventional rules.361 However, the Phase II results introduce interesting new
findings into the debate. The
data shows reg-neg participants to be significantly more likely than
conventional rulemaking participants to report the perception that others will be able to
comply with the final rule.362 Perceiving that others will comply might induce more compliance among competitors,
along the lines of game theoretic models, at least until evidence of defection emerges.363 Moreover, to the extent
that compliance failures are at least partly due to technical and information deficits—rather
than to mere political resistance—it seems plausible that reports of the learning effect and
more horizontal sharing of information might help to improve compliance in the long run.364
The claim that reg-neg could improve compliance is consistent with social psychology
studies showing that in both legal and organizational settings, “fair procedures lead to
greater compliance with the rules and decisions with which they are associated.”365 Similarly,
negotiated rulemaking might facilitate compliance by bringing to the surface some of the
contentious issues earlier in the rulemaking process, where they might be solved collectively rather than
dictated by the agency. Although speculative, these hypotheses seem to fit better with Kerwin and Langbein’s data than do the
rather negative expectations about compliance. Higher
satisfaction could well translate into better long-term compliance, even if litigation rates remained the same. Consistent with our contention that
process matters, we expect it to matter to compliance as well. In any event, empirical studies of compliance
should no longer be so difficult to produce. A number of negotiated rules are now several years old, with some
in the advanced stages of implementation. A study of compliance might compare numbers of enforcement actions for
negotiated as compared to conventional rules, measured by notices of violation, or penalties, for example.366 It might
investigate as well whether compliance methods differ between the two types of rules:
perhaps the enforcement of negotiated rules occurs more cooperatively, or informally, than
enforcement of conventional rules. Possibly, relationships struck during the negotiated
rulemaking make a difference at the compliance stage.367 To date, the effects of how the rule is
developed on eventual compliance remain a matter of speculation, even though it is ultimately an empirical issue on which
both theory and empirical evidence must be brought to bear.
And, we’ll win new net benefits here that ALL turn the aff
a. Delays—the CP’s regulatory negotiation means that rules won’t be challenged
during the regulation creation process—empirics prove the CP solves faster
than the AFF
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan.
Philip J. Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the
University of Missouri. He has been involved in the design of many of the major developments of administrative law in the
past 40 years. He is the author of more than 50 papers and books on administrative law and has been a visiting professor or
guest lecturer internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the
Western Cape (Cape Town). He has consulted on environmental mediation and public participation in rulemaking in China,
including a project sponsored by the Supreme Peoples Court. He has received multiple awards for his achievements in
administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference of the United
States.Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Properly understood, therefore, the
average length of EPA’s negotiated rulemakings — the time it
took EPA to fulfill its goal — was 751 days or 32% faster than traditional rulemaking. This
knocks a full year off the average time it takes EPA to develop rule by the traditional
method. And, note these are highly complex and controversial rules and that one of them
survived Presidential intervention. Thus, the dynamics surrounding these rules are by no
mean “average.” This means that reg neg’s actual performance is much better than that.
Interestingly and consistently, the average time for all of EPA’s reg negs when viewed in context is virtually identical to that of
the sample drawn by Kerwin and Furlong77 — differing by less than a month. Furthermore, if all of the reg negs that were
conducted by all the agencies that were included in Coglianese’s table78 were analyzed along the same lines as discussed
here,79 the average time for all negotiated rulemakings drops to less than 685 days.80 No
Substantive Review of Rules Based on Reg Neg Consensus. Coglianese argues that negotiated rules are actually subjected to a
higher incidence of judicial review than are rules developed by traditional methods, at least those issued by EPA.81 But, like his
analysis of the time it takes to develop rules, Coglianese fails to look at either what happened in the negotiated rulemaking
itself or the nature of any challenge. For example, he makes much of the fact that the Grand Canyon visibility rule was
challenged by interests that were not a party to the negotiations;82 yet, he also points out that this rule was not developed
under the Negotiated Rulemaking Act83 which explicitly establishes procedures that are designed to ensure that each interest
can be represented. This challenge demonstrates the value of convening negotiations.84 And, it is significantly misleading to
include it when discussing the judicial review of negotiated rules since the process of reg neg was not followed. As for
Reformulated Gasoline, the rule as issued by EPA did not reflect the consensus but rather was modified by EPA under the
direction of President Bush.85 There were, indeed, a number of challenges to the application of the rule,86 but amazingly little
to the rule itself given its history. Indeed, after the proposal was changed, many members of the committee continued to meet
in an effort to put Humpty Dumpty back together again, which they largely did; the
fact that the rule had been
negotiated not only resulted in a much better rule,87 it enabled the rule to withstand in large
part a massive assault. Coglianese also somehow attributes a challenge within the World Trade Organization to a
shortcoming of reg neg even though such issues were explicitly outside the purview of the committee; to criticize reg neg here
is like saying surgery is not effective when the patient refused to undergo it. While the Underground Injection rule was
challenged, the committee never reached an agreement88 and, moreover, the convening report made clear that there were
very strong disagreements over the interpretation of the governing statute that would likely have to be resolved by a Court of
Appeals. Coglianese also asserts that the Equipment Leaks rule was the subject of review; it was, but only because the Clean
Air Act requires parties to file challenges in a very short period, and a challenger therefore filed a defensive challenge while it
worked out some minor details over the regulation. Those negotiations were successful and the challenge was withdrawn. The
Chemical Manufacturers Association, the challenger, had no intention of a substantive challenge.89 Moreover, a challenge to
other parts of the HON should not be ascribed to the Equipment Leaks part of the rule. The agreement in the Asbestos in
Schools negotiation explicitly contemplated judicial review — strange, but true — and hence it came as no surprise and as no
violation of the agreement. As for the Wood Furniture Rule, the challenges were withdrawn after informal negotiations in
which EPA agreed to propose amendments to the rule.90 Similarly, the challenge to EPA’s Disinfectant By-Products Rule91 was
withdrawn. In short, the rules that have emerged from negotiated rulemaking have been remarkably resistant to substantive
challenges. And, indeed, this far into the development of the process, the standard of review and the extent to which an
agreement may be binding on either a signatory or someone whom a party purports to represent are still unknown — the
speculation of many an administrative law class.92 Thus, here too, Coglianese
paints a substantially misleading
picture by failing to distinguish substantive challenges to rules that are based on a
consensus from either challenges to issues that were not the subject of negotiations or were
filed while some details were worked out. Properly understood, reg negs have been
phenomenally successful in warding off substantive review.
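The timing claims in the Harter card are internally consistent, and the back-solve is worth having ready. This is my arithmetic, assuming the 32% figure is measured against the traditional-rulemaking average:
\[
T_{\text{traditional}} \approx \frac{751\ \text{days}}{1 - 0.32} \approx 1104\ \text{days}, \qquad 1104 - 751 \approx 353\ \text{days} \approx 1\ \text{year}
\]
So "32% faster" and "knocks a full year off" are the same claim stated two ways; don't present them as independent warrants.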
b. More democratic—reg neg encourages private sector participation—means
that regulations aren’t unilaterally created by the USFG—CP results in a level
playing field for the entirety of the private sector
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a
Master of Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy
and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and institutional
design, and a leading thinker on collaborative and contractual approaches to governance. Laura Langbein is the Professor of
Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice at American University. She holds a PhD in
Political Science from the University of North Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, R. I.
“Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9, 2000.
http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
2. Negotiated
Rulemaking Is Fairer to Regulated Parties than Conventional Rulemaking To test
whether reg neg was fairer to regulated parties, Kerwin and Langbein asked respondents whether EPA solicited their
participation and whether they believed anyone was left out of the process. They also examined how
much the parties learned in each process, and whether they experienced resource or information disparities. Negotiated rule participants were
significantly more likely to say that the
EPA encouraged their participation than conventional rule participants (65%
versus 33% respectively). Although a higher proportion of conventional rulemaking participants reported that a party that should have been
represented in the rulemaking was omitted, the difference is not statistically significant. Specifically, "a majority of both negotiated and
conventional rule participants believed that the parties who should have been involved were involved (66% versus 52% respectively)." In
addition, as reported above, participants in regulatory negotiations reported significantly more learning than their conventional rulemaking
counterparts. Indeed, the disparity between the two types of participants in terms of their reports about learning was one of the study's most
striking results. At the same time, the resource disadvantage of poorer, smaller groups was no greater in negotiated rulemaking than in
conventional rulemaking. So, while
smaller groups did report suffering from a lack of resources during
regulatory negotiation, they reported the same in conventional rulemakings; no disparity
existed between the two processes on this score. Finally, the data suggest that the agency is equally
responsive to the parties in both negotiated and conventional rulemakings. This result, together with
the finding that participants in regulatory negotiations perceived disproportionate influence to be about evenly distributed, suggests that reg neg
is at least as fair to the parties as conventional rulemaking. Indeed, because
participant learning was so much
greater in regulatory negotiation, the process may in fact be more fair.
2NC Solves Better
Reg neg is better for complex rules
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto,
and a Master of Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for
Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and
institutional design, and a leading thinker on collaborative and contractual approaches to governance. After leaving the White
House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural reform at the Department
of the Interior. She has been appointed to the Administrative Conference of the United States, the government think tank for
improving the effectiveness and efficiency of federal agencies, and is a member of the American College of Environmental
Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice at
American University. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from Oberlin
College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
4. Complex Rules Are More Likely To Be Settled Through Negotiated Rulemaking Recall that theorists
disagree over whether complex or simple issues are best suited for negotiation. The data suggest that negotiated and conventional
rules differ in systematic ways, indicating that EPA officials do not select just any rule for negotiation. When asked how the issues for
rulemaking were established, reg neg participants reported more often than their counterparts that the participants established at
least some of them (44% versus 0%). Conventional rulemaking
participants more often admitted to being
uninformed of the process for establishing issues (17% versus 0%) or offered that regulated entities set the issues
(11% to 0%). A majority of both groups reported that the EPA or the governing legislation established at least some of the issues.
Kerwin and Langbein found that the types
of issues indeed appeared to differ between negotiated and
conventional rules. When asked about the type of issues to be decided, 52% of participants in conventional groups
identified issues regarding the standard, including its level, timing, or measurement (compared to
31% of negotiated rule participants), while 58% of the negotiating group identified compliance and
implementation issues (compared to 39% of participants in the conventional group). More reg neg participants
(53%) also cited compliance issues as causing the greatest conflict, compared to 32% of conventional
participants. Conventional participants more often reported that the rulemaking failed to resolve all of the
issues (30% versus 14%), but also more often reported that they encountered no "surprise" issues (74% versus 44%). Participants
perceived negotiated rules to be more complex, with more issues and more sides per issue than conventional rules. Kerwin and
Langbein learned in interviews that reg
neg participants tended to develop a more detailed view about
the issues to be decided than did their conventional counterparts. The researchers interpreted this disparity
in reported detail as a perception of complexity. To measure it they computed a complexity score: the more issues and the more
sides to each issue that respondents in a rulemaking could identify, relative to the number of respondents, the more nuanced or
complex the rulemaking. Using this calculation, the rules ranged in complexity from 1.9 to 5.0, with a mean complexity score of 3.6.
The mean complexity score for reg negs (4.1) was significantly higher than the score (2.5) for conventional rulemaking. Reg neg
participants also presented a clearer understanding of the issues to be decided than did conventional participants. To test clarity,
Kerwin and Langbein developed a measure that would reflect the striking variation among respondents in the number of different
issues and different sides they perceived in their rulemaking. Some respondents could identify very few separate issues and sides
(e.g., "the level of the standard is the single issue and the sides are business, environmentalists, and EPA"), while others detected as
many as four different issues, with three sides on some and two on others. Kerwin and Langbein's measurement was in units of
issue/sides, representing a combination of the two variables, the recognition of which they were measuring; the mentions ranged
from 3 to 10 issue/sides, with a mean of 7.9. Negotiated rulemaking participants mentioned an average of 8.9 issue/sides, compared
to an average of 6 issue/sides mentioned by their conventional counterparts, a statistically significant difference. To illustrate the
difference between complexity and clarity: If a party identified the compliance standard as the sole issue, but failed to identify a
number of sub-issues, they would be classified as having a clear understanding but not a complex one. Similarly, if the party
identified two sides (business vs. environment) without recognizing distinctions among business participants or within an
environmental coalition, they would also be classified as clear but not complex in their understanding. The
differences in
complexity might be explained by the higher reported rates of learning by reg neg participants,
rather than by differences in the types of rules processed by reg neg versus conventional
rulemaking. Kerwin and Langbein found that complexity and clarity were both positively and
significantly correlated with learning by respondents, but the association between learning and
complexity/clarity disappeared when the type of rulemaking was held constant. However, when the amount learned was held
constant, the association between complexity/clarity and the type of rulemaking remained positive and significant. This signifies that
the association between learning and complexity/clarity was due to the negotiation process. In
other words, the differences in complexity/clarity are not attributable to higher learning but rather to
differences between the processes. The evidence is consistent with the hypothesis that issues selected for
regulatory negotiation are different from and more complicated than those chosen for
conventional rulemaking. The data associating reg negs with complexity, together with the
finding that more issues settle in reg negs, are consistent with the proposition that issues with
more (and more diverse) sub-issues and sides settle more easily than simple issues.
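For clarity on how the complexity and clarity scores in this card work: one plausible formalization of the measures as the card describes them (my reconstruction; the card does not reproduce the study's exact formulas) is that both scores normalize mention counts by the number of respondents:
\[
\text{score}_r = \frac{1}{N_r}\sum_{i=1}^{N_r}\left(\text{issues}_i + \text{sides}_i\right)
\]
where \(N_r\) is the number of respondents for rule \(r\), complexity counts the distinct issues and sides identified in the rulemaking, and clarity counts each respondent's issue/side mentions. On this reading, reg negs averaged 4.1 versus 2.5 on complexity and 8.9 versus 6 issue/sides per respondent on clarity; those are the comparative stats to extend.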
Reg neg is better than conventional rulemaking
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto,
and a Master of Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for
Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and
institutional design, and a leading thinker on collaborative and contractual approaches to governance. After leaving the White
House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural reform at the Department
of the Interior. She has been appointed to the Administrative Conference of the United States, the government think tank for
improving the effectiveness and efficiency of federal agencies, and is a member of the American College of Environmental
Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice and
American College. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from Oberlin
College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
In this article, we present an original analysis and summary of new empirical evidence from Neil Kerwin and Laura Langbein's two-phase study of Environmental Protection Agency (EPA) negotiated rulemakings. n5 Their qualitative and quantitative data
reveal more about reg neg than any empirical study to date; although not published in a law review article until now, they
unquestionably bear upon the ongoing debate among legal scholars over the desirability of negotiating rules. Most importantly, this
is the first study to compare participant attitudes toward negotiated rulemaking with attitudes toward conventional rulemaking. The
findings of the studies tend, on balance, to undermine arguments made by the critics of regulatory negotiation and to bolster the
claims of proponents. Kerwin and Langbein found that, according to participants in the study, reg
neg generates more
learning, better quality rules, and higher satisfaction compared to conventional rulemaking. n6
At the same time, stakeholder influence on the agency remains about the same using either
approach. n7 Based on the results, we recommend more frequent use of regulatory negotiation,
accompanied by further comparative and empirical study, for the purposes of establishing
regulatory standards and resolving implementation and compliance issues. This
recommendation contradicts the prevailing view that the process is best used sparingly, n8 and
even then, only for narrow questions of implementation. n9
Reg negs solve better
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme Peoples Court. He has received multiple awards for his achievements in administrative law. He is listed
in Who's Who in America and is a member of the Administrative Conference of the United States.Harter, P. J. “Assessing the
Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
The Primary Objective of Negotiated Rulemaking Is To Create Better and More Widely Accepted Rules. Coglianese argues throughout
his article that the primary benefits of negotiated rules were seen by its advocates as being the reduction in time and in the
incidence of litigation.93 While, both benefits have been realized, neither was seen by those who established it as the predominant
factor in its use. For example, Peter Schuck wrote an important early article in which he described the
benefits of
negotiated solutions over those imposed by a hierarchy.94 Schuck emphasized a number of shortcomings of the
adjudicatory nature of hybrid rulemaking and many benefits of direct negotiations among the affected parties. The tenor of his
thinking is reflected by his statement, “a bargained solution depends for its legitimacy not upon
its objective rationality, inherent justice, or the moral capital of the institution that fashioned it,
but upon the simple fact that it was reached by consent of the parties affected.”95 And, “it
encourages diversity, stimulates the parties to develop relevant information about facts and values, provides a counter-weight to
concentrations of power, and advances participation by those the decisions affect.”96 Nowhere in his long list of benefits was either
speed or reduced litigation, except by implication of the acceptability of the results. My own article that developed the
recommendations97 on which the ACUS Recommendation,98 the Negotiated Rulemaking Act, and the practice itself are based
describes the anticipated benefits of negotiated rulemaking: Negotiating
has many advantages over the
adversarial process. The parties participate directly and immediately in the decision. They share
in its development and concur in it, rather than “participate” by submitting information that the
decisionmaker considers in reaching the decision. Frequently, those who participate in the
negotiations are closer to the ultimate decisionmaking authority of the interest they represent
than traditional intermediaries that represent the interest in an adversarial proceeding. Thus,
participants in negotiations can make substantive decisions, rather than acting as experts in the
decisionmaking process. In addition, negotiation can be a less expensive means of
decisionmaking because it reduces the need to engage in defensive research in anticipation of
arguments made by adversaries. Undoubtedly the prime benefit of direct negotiations is that it
enables the participants to focus squarely on their respective interests.99 The article quotes John
Dunlop, a true pioneer in using negotiations among the affected interests in the public sphere,100 as saying “In our society, a rule
that is developed with the involvement of the parties who are affected is more likely to be accepted and to be effective in
accomplishing its intended purposes.”101 Reducing
time and litigation exposure was not emphasized, if even mentioned directly. To be sure, the Congressional findings that precede the Negotiated Rulemaking Act mention the
savings of time and litigation, but they are largely the by-product of far more significant
benefits:102 (2) Agencies currently use rulemaking procedures that may discourage the affected
parties from meeting and communicating with each other, and may cause parties with different
interests to assume conflicting and antagonistic positions and to engage in expensive and time-consuming litigation over agency rules. (3) Adversarial rulemaking deprives the affected parties
and the public of the benefits of face-to-face negotiations and cooperation in developing and
reaching agreement on a rule. It also deprives them of the benefits of shared information,
knowledge, expertise, and technical abilities possessed by the affected parties. (4) Negotiated
rulemaking, in which the parties who will be significantly affected by a rule participate directly in
the development of the rule, can provide significant advantages over adversarial rulemaking. (5)
Negotiated rulemaking can increase the acceptability and improve the substance of rules,
making it less likely that the affected parties will resist enforcement or challenge such rules in
court. It may also shorten the amount of time needed to issue final rules. Thus, those who were
present at the creation of reg neg sought neither expedition nor a shield against litigation.
Rather, they saw direct negotiations among the parties — a form of representational democracy not explicitly
recognized in the Administrative Procedure Act — as resulting in rules that are substantively “better” and
more widely accepted. Those benefits were seen as flowing from the participation of those
affected who bring with them a practical insight and expertise that can result in rules that are
better informed, more tailored to achieving the actual regulatory goal and hence more effective,
and able to be enforced.
Reg negs are the best type of negotiations
Hsu 02
(Shi-Ling Hsu is the Larson Professor of Law at the Florida State University College of Law. Professor Hsu has a B.S. in Electrical
Engineering from Columbia University, and a J.D. from Columbia Law School. He also has an M.S. in Ecology and a Ph.D. in
Agricultural and Resource Economics, both from the University of California, Davis. Professor Hsu has taught in the areas of
environmental and natural resource law, law and economics, quantitative methods, and property. Prior to his current
appointment, Professor Hsu was a Professor of Law and Associate Dean for Special Projects at the University Of British Columbia
Faculty Of Law. He has also served as an Associate Professor at the George Washington University Law School, a Senior Attorney
and Economist for the Environmental Law Institute in Washington D.C, and a Deputy City Attorney for the City and County of San
Francisco. “A Game Theoretic Approach to Regulatory Negotiation: A Framework for Empirical Analysis,” Harvard Environmental
Law Review, Vol 26, No 2, February 2002. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=282962//ghs-kw)
There are reasons to be optimistic about what regulatory negotiations can produce in even a
troubled administrative state. Jody Freeman noted that one important finding from the Kerwin and Langbein studies
was that parties involved in negotiated rulemaking were able to use the face-to-face contact as a
learning experience.49 Barton Thompson has noted in his article on common-pool resources problems50 that one reason
that resource users resist collective action solutions is that it is evidently human nature to blame others for the
existence of resource shortages. That in turn leads to an extreme reluctance by resource users
to agree to a collective action solution if it involves even the most minimal personal sacrifices. Thompson suggests
that the one hope for curing resource users of such self-serving myopia is face-to-face contact and
the exchange of views. The vitriol surrounding some environmental regulatory issues suggests that there is a similar
human reaction occurring with respect to some resource conflicts.51 Solutions to environmental
problems and resource conflicts on which regulated parties and environmental organizations hold such strong and disparate views
may require
face-to-face contact to defuse some of the tension and remove some of the demonization
that has arisen in these conflicts. Reinvention, with the emphasis on negotiations and face-to-face contact,
provides such an opportunity. 52 Farber has argued for making the best of this trend towards regulatory negotiation
characterizing negotiated rulemaking and reinvention. 53 Faced with the reality that some negotiation will inevitably
take place because of the slippage inherent in our system of regulation, Farber argues that the best
model for allowing it to go forward is a bilateral one. A system of bilateral negotiation would
clearly be superior to a system of self-regulation, as such a system would inevitably descend into a tragedy of the commons.54 But a
system of bilateral negotiation between agencies and regulated parties would even be superior
to a system of multilateral negotiation, due to the transaction costs of assembling all of the
affected stakeholders in a multilateral effort, and the difficulties of reaching a consensus among a
large number of parties. Moreover, multilateral negotiation gives rise to the troubling idea that there should be joint governance
among the parties. Since environmental organizations lack the resources to participate in post-negotiation governance, there is a
heightened danger of regulatory capture by the better-financed regulated parties.55 The
correct balance between
regulatory flexibility and accountability, argues Farber, is to allow bilateral negotiation but with built-in checks to ensure that the negotiation process is not captured by regulated parties. Built-in checks
would include transparency, so that environmental organizations can monitor regulatory bargains, and the availability of citizen
suits, so that environmental organizations could remedy regulatory bargains that exceed the dictates of the underlying statute.
Environmental organizations would thus play the role of the watchdog, rather than the active participant in negotiations. The finding
of Kerwin and Langbein that resource constraints sometimes caused environmental organizations, especially smaller local ones, to
skip negotiated rulemakings would seem to support this conclusion. 56 A
much more efficient use of limited
resources would require that the environmental organization attempt to play a deterrent role in
monitoring negotiated rulemakings.
2NC Cybersecurity Solvency
Reg neg solves cybersecurity
Sales 13
(Sales, Nathan Alexander. Assistant Professor of Law, George Mason University School of Law. “REGULATING CYBERSECURITY,”
Northwestern University Law Review. 2013.
http://www.rwu.edu/sites/default/files/downloads/cyberconference/cyber_threats_and_cyber_realities_readings.pdf//ghs-kw)
An alternative would be a form of “enforced self-regulation”324 in which private companies
develop the new cybersecurity protocols in tandem with the government.325 These
requirements would not be handed down by administrative agencies, but rather would be
developed through a collaborative partnership in which both regulators and regulated would
play a role. In particular, firms might prepare sets of industrywide security standards. (The National
Industrial Recovery Act, famously invalidated by the Supreme Court in 1935, contained such a mechanism,326 and today the energy sector develops
reliability standards in the same way.327) Or agencies
could sponsor something like a negotiated rulemaking in which
regulators, firms, and other stakeholders forge a consensus on new security protocols. 328 In either
case, agencies then would ensure compliance through standard administrative techniques like
audits, investigations, and enforcement actions.329 This approach would achieve all four of the
benefits of private action mentioned above: It avoids (some) problems with information asymmetries, takes
advantage of distributed private sector knowledge about vulnerabilities and threats,
accommodates rapid technological change, and promotes innovation. On the other hand, allowing firms to
help set the standards that will be enforced against them may increase the risk of regulatory capture – the danger that agencies will come to promote
the interests of the companies they regulate instead of the public’s interests.330 The risk of capture is always present in regulatory action, but it is
probably even more acute when regulated entities are expressly invited to the decisionmaking table.331
2NC Encryption Advocate
Here’s a solvency advocate
DMCA 05
(Digital Millennium Copyright Act, Supplement in 2005. https://books.google.com/books?id=nL0s81xgVwC&pg=PA481&lpg=PA481&dq=encryption+AND+(+%22regulatory+negotiation%22+OR+%22negotiated+rulemaking%22)&sou
rce=bl&ots=w9mrCaTJs4&sig=1mVsh_Kzk1p26dmT9_DjozgVQI&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMIxtPG5YH9xgIVwx0eCh2uEgMJ#v=onepage&q&f=false//ghs-kw)
Some encryption supporters advocate use of advisory committee and negotiated rulemaking
procedures to achieve consensus around an encryption standard. See Motorola Comments at 10-11;
Veridian Reply Comments at 20-23.
Reg negs are key to wireless technology innovation
Chamberlain 09
(Chamberlain, Inc. Comments before the Federal Communications Commission. 11-05-2009.
https://webcache.googleusercontent.com/search?q=cache:dfYcw45dQZsJ:apps.fcc.gov/ecfs/document/view%3Bjsessionid%3DS
QnySfcTVd22hL6ZYShTpQYGY1X27xB14p3CS1y01XW15LQjS1jj!1613185479!153728702%3Fid%3D7020245982+&cd=2&hl=en&ct=clnk&gl=us//ghs-kw)
Chamberlain supports solutions that will balance the needs of stakeholders in both the licensed and
unlicensed bands. Chamberlain and other manufacturers of unlicensed devices such as Panasonic
are also uniquely able to provide valuable contributions from the perspective of unlicensed
operators with a long history of innovation in the unlicensed bands. Moreover, as the Commission has
recognized in recent proceedings, alternative mechanisms for gathering data and evaluating
options may assist the Commission in reaching a superior result.19 For these reasons,
Chamberlain would support a negotiated rulemaking process, the use of workshops -both large and small- or
any other alternative process that ensures the widest level of participation from stakeholders across the
wireless market.
2NC Privacy Solvency
Reg neg is key to privacy
Rubinstein 09
(Rubinstein, Ira S. Adjunct Professor of Law and Senior Fellow, Information Law Institute, New York University School of Law.
“PRIVACY AND REGULATORY INNOVATION: MOVING BEYOND VOLUNTARY CODES,” Workshop for Federal Privacy Regulation,
NYU School of Law. 10/2/2009. https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtablescomment-project-no.p095416-544506-00103/544506-00103.pdf//ghs-kw)
Whatever its shortcomings, and despite its many critics, self-regulation is
a recurrent theme in the US approach
to online privacy and perhaps a permanent part of the regulatory landscape. This Article's goal has been to
consider new strategies for overcoming observed weaknesses in self-regulatory privacy programs. It began by examining the FTC's
intermittent embrace of self-regulation, and found that the Commission's most recent foray into self-regulatory guidelines for online
behavioral advertising is not very different from earlier efforts, which ended in frustration and a call for legislation. It also reviewed
briefly the more theoretical arguments of privacy scholars for and against self-regulation, but concluded that the market-oriented
views of those who favor open information flows clashed with the highly critical views of those who detect a market failure and
worry about the damaging consequences of profiling and surveillance not only to individuals, but to society and to democratic self-determination. These views seem irreconcilable and do not pave the way for any applied solutions. Next, this Article presented three
case studies of mandated self-regulation. This included overviews of the NAI Principles and the SHA, as well as a more empirical
analysis of the CARU safe harbor program. An assessment of these case studies against five criteria (completeness, free rider
problems, oversight and enforcement, transparency, and formation of norms) concluded that self-regulation
undergirded
by law—in other words, a statutory safe harbor—is a more effective and efficient instrument
than any self-regulatory guidelines in which industry is chiefly responsible for developing
principles and/or enforcing them. In a nutshell, well-designed safe harbors enable policy makers to
imagine new forms of self-regulation that "build on its strengths … while compensating for its
weaknesses."268 This embrace of statutory safe harbors led to a discussion of how to improve them by importing second-generation strategies from environmental law. Rather than summarizing these strategies and how they translate into the privacy
domain, this Article concludes with a set of specific recommendations based on the ideas discussed in Part III.C. If Congress enacts
comprehensive privacy legislation based on FIPPs, the first recommendation is that the new law include a safe harbor
program, which should echo the COPPA safe harbor to the extent of encouraging groups to submit self-regulatory guidelines and, if
approved by the FTC, treat compliance with these guidelines as deemed compliance with statutory requirements. The FTC should be
granted APA rulemaking powers to implement necessary rules including a safe harbor rule. Congress
should also consider whether to mandate a negotiated rulemaking for an OBA safe harbor or for safe harbor programs more generally. In any
case, FTC should give serious thought to using the negotiated rulemaking process in developing a safe
harbor program or approving specific guidelines. In addition, the safe harbor program should be overhauled to reflect second-generation strategies. Specifically, the statute should articulate default requirements but allow FTC more discretion in determining
whether proposed industry guidelines achieve desired outcomes, without firms having to match detailed regulatory requirements
on a point by point basis.
2NC Fism NB
Reg negs are better and solves federalism—plan fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School,
cum laude. Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and
federalism. She has presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth
Circuit Judicial Conference, the U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training
and Research. She has advised National Sea Grant multilevel governance studies involving Chesapeake Bay and consulted with
multiple institutions on developing sustainability programs. She has appeared in the Chicago Tribune, the London Financial
Times, the PBS Newshour and Christian Science Monitor’s “Patchwork Nation” project, and on National Public Radio. She is the
author of many scholarly works, including Federalism and the Tug of War Within (Oxford, 2012). Professor Ryan is a graduate of
Harvard Law School, where she was an editor of the Harvard Law Review and a Hewlett Fellow at the Harvard Negotiation
Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for the Ninth Circuit before
practicing environmental, land use, and local government law in San Francisco. She began her academic career at the College of
William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured
throughout Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
1. Negotiated Rulemaking Although the most conventional of the less familiar forms, "negotiated rulemaking" between
federal agencies and state stakeholders is a sparingly used tool that holds
promise for facilitating sound
administrative policymaking in disputed federalism contexts, such as those implicating
environmental law, national security, and consumer safety. Under the Administrative Procedure Act, the
traditional "notice and comment" administrative rulemaking pro-cess allows for a limited degree
of participation by state stakeholders who comment on a federal agency's proposed rule. The
agency publishes the proposal in the Federal Register, invites public comments critiquing the draft, and then uses its discretion to
revise or defend the rule in response to comments. n256 Even this iterative process constitutes a modest negotiation, but it leaves
participants so frequently unsatisfied that many agencies began to informally use more extensive negotiated rulemaking in the
1970s. n257 In 1990, Congress passed the Negotiated Rulemaking Act, amending the Administrative Procedure Act to allow a more
dynamic [*52] and inclusive rulemaking process, n258 and a subsequent Executive Order required all federal agencies to consider
negotiated rulemaking when developing regulations. n259 Negotiated rulemaking allows stakeholders much more influence over
unfolding regulatory decisions. Under
notice and comment, public participation is limited to criticism of
well-formed rules in which the agency is already substantially invested. n260 By contrast,
stakeholders in negotiated rulemaking collectively design a proposed rule that takes into
account their respective interests and expertise from the beginning. n261 The concept, outline, and/or text
of a rule is hammered out by an advisory committee of carefully balanced representation from the agency, the regulated public,
community groups and NGOs, and state and local governments. n262 A professional intermediary leads the effort to ensure that all
stakeholders are appropriately involved and to help interpret problem-solving opportunities. n263 Any consensus reached by the
group becomes the basis of the proposed rule, which is still subject to public comment through the normal notice-and-comment
procedures. n264 If the group does not reach consensus, then the agency proceeds through the usual notice-and-comment process.
n265 The negotiated rulemaking process, a tailored version of interest group bargaining within established legislative constraints,
can yield important benefits. n266 The
process is usually more subjectively satisfying [*53] for all
stakeholders, including the government agency representatives. n267 More cooperative
relationships are established between the regulated parties and the agencies, facilitating future
implementation and enforcement of new rules. n268 Final regulations include fewer technical
errors and are clearer to stakeholders, so that less time, money and effort is expended on
enforcement. n269 Getting a proposed rule out for public comment takes more time under negotiated rulemaking than
standard notice and comment, but thereafter, negotiated rules receive fewer and more moderate public
comment, and are less frequently challenged in court by regulated entities. n270 Ultimately, then,
final regulations can be implemented more quickly following their debut in the Federal Register,
and with greater compliance from stakeholders. n271 The process also confers valuable learning
benefits on participants, who come to better understand the concerns of other stakeholders,
grow invested in the consensus they help create, and ultimately campaign for the success of
the regulations within their own constituencies. n272 Negotiated rulemaking offers additional procedural
benefits because it ensures that agency personnel will be unambiguously informed about the full
federalism implications of a proposed rule by the impacted state interests. Federal agencies are already required by
executive order to prepare a federalism impact statement for rulemaking with federalism implications, n273 but the quality of
state-federal communication within negotiated rulemaking enhances the likelihood that federal
agencies will appreciate and understand the full extent of state [*54] concerns. Just as the consensus-building process invests participating stakeholders with respect for the competing concerns of other stakeholders, it invests
participating agency personnel with respect for the federalism concerns of state stakeholders.
n274 State-side federalism bargainers interviewed for this project consistently reported that
they always prefer negotiated rulemaking to notice and comment--even if their ultimate impact
remains small--because the products of fully informed federal consultation are always
preferable to the alternative. n275
Reg negs solve federalism—traditional rulemaking fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School,
cum laude. Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and
federalism. She has presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth
Circuit Judicial Conference, the U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training
and Research. She has advised National Sea Grant multilevel governance studies involving Chesapeake Bay and consulted with
multiple institutions on developing sustainability programs. She has appeared in the Chicago Tribune, the London Financial
Times, the PBS Newshour and Christian Science Monitor’s “Patchwork Nation” project, and on National Public Radio. She is the
author of many scholarly works, including Federalism and the Tug of War Within (Oxford, 2012). Professor Ryan is a graduate of
Harvard Law School, where she was an editor of the Harvard Law Review and a Hewlett Fellow at the Harvard Negotiation
Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for the Ninth Circuit before
practicing environmental, land use, and local government law in San Francisco. She began her academic career at the College of
William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured
throughout Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
Unsurprisingly, bargaining
in which the normative leverage of federalism values heavily influences
the exchange offers the most reliable interpretive tools, smoothing out leverage imbalances
and focusing bargainers' interlinking interests. n619 Negotiations in which participants are motivated by
shared regard for checks, localism, accountability, and synergy naturally foster constitutional process and hedge against nonconsensual dealings. All
federalism bargaining trades on the normative values of federalism to some
degree, and any given negotiation may feature it more or less prominently based on the factual particulars. n620 Yet the
taxonomy reveals several forms in which federalism values predominate by design, and which may prove especially valuable in
fraught federalism contexts: negotiated rulemaking, policymaking laboratory negotiations, and iterative federalism. n621 These examples indicate the potential for purposeful federalism engineering to reinforce procedural regard for state and federal roles within
the American system. (1) Negotiated
Rulemaking between state and federal actors improves upon
traditional administrative rulemaking in fostering participation, localism, and synergy by
incorporating genuine state input into federal regulatory planning. n622 Most negotiated
rulemaking also uses professional intermediaries to ensure that all stakeholders are
appropriately engaged and to facilitate the search for outcomes that meet parties' dovetailing
interests. n623 For example, after discovering that extreme local variability precluded a uniform federal program, Phase II
stormwater negotiators invited municipal dischargers to design individually [*123] tailored programs within general federal limits.
n624 Considering
the massive number of municipalities involved, the fact that the rule faced legal
challenge from only a handful of Texas municipalities testifies to the strength of the consensus
through which it was created. By contrast, the iterative exchange within standard notice-and-comment rulemaking--also an example of federalism bargaining--can frustrate state participation by
denying participants meaningful opportunities for consultation, collaborative problem-solving, and real-time accountability. The contrast between notice-and-comment and
negotiated rulemaking, exemplified by the two phases of REAL ID rulemaking, demonstrates the difference between more and less successful instances of federalism bargaining. n625 Moreover, the difficulty of
asserting state consent to the products of the REAL ID notice-and-comment rulemaking (given the outright rebellion that
followed) limits its interpretive potential. Negotiated rulemakings take longer than other forms of
administrative rulemaking, but are more likely to succeed over time. Regulatory matters best suited for state-federal
negotiated rulemaking include those in which a decisive federal rule is needed to overcome spillover effects, holdouts, and other
collective action problems, but unique and diverse state expertise is needed for the creation of wise policy. Matters
in
contexts of overlap least suited for negotiated rulemaking include those in which the need for
immediate policy overcomes the need for broad participation--but even these leave open possibilities for
incremental rulemaking, in which the initial federal rule includes mechanisms for periodic reevaluation with local input.
2NC Fism NB Heg Impact
Fast growth promotes US leadership and solves great power war
Khalilzad 11 – PhD, Former Professor of Political Science @ Columbia, Former ambassador to
Iraq and Afghanistan
(Zalmay Khalilzad was the United States ambassador to Afghanistan, Iraq, and the United
Nations during the presidency of George W. Bush and the director of policy planning at the
Defense Department from 1990 to 1992. "The Economy and National Security" Feb 8
http://www.nationalreview.com/articles/259024/economy-and-national-security-zalmaykhalilzad)//BB
Today, economic and fiscal trends pose the most severe long-term threat to the United States' position as global leader. While the United States suffers from fiscal imbalances and low economic growth, the economies of rival powers are developing rapidly. The continuation of these two trends could lead to a shift from American primacy toward a multi-polar global system, leading in turn to increased geopolitical rivalry and even war among the great powers.
The current recession is the result of a deep financial crisis, not a mere fluctuation in the business cycle. Recovery is likely to be protracted.
The crisis was preceded by the buildup over two decades of enormous amounts of debt throughout the U.S. economy — ultimately totaling almost 350 percent of GDP — and the development of credit-fueled asset bubbles, particularly in the housing sector.
When the bubbles burst, huge amounts of wealth were destroyed, and unemployment rose to over 10 percent. The decline of tax revenues and massive countercyclical spending put the U.S. government on an unsustainable fiscal path. Publicly held national
debt rose from 38 to over 60 percent of GDP in three years.
Without faster economic growth and actions to reduce deficits, publicly held national debt is projected to reach dangerous proportions. If interest rates were to rise significantly, annual interest payments — which already are larger than the defense budget — would crowd out other spending or require substantial tax increases that would undercut economic growth. Even worse, if unanticipated events trigger what economists call a “sudden stop” in credit markets for U.S. debt, the United States would be unable to roll over its outstanding obligations, precipitating a sovereign-debt crisis that would almost certainly compel a radical retrenchment of the United States internationally.
Such scenarios would reshape the international order. It was the economic devastation of Britain and France during World War II, as well as the rise of other powers, that led both countries to relinquish their empires. In the late 1960s, British leaders concluded that they lacked the economic capacity to maintain a presence “east of Suez.” Soviet economic weakness, which crystallized under Gorbachev, contributed to their decisions to withdraw from Afghanistan, abandon Communist regimes in Eastern Europe, and allow the Soviet Union to fragment. If the U.S. debt problem goes critical, the United States would be compelled to retrench, reducing its military spending and shedding international commitments.
We face this domestic challenge while other major powers are experiencing rapid economic growth. Even though countries such as China, India, and Brazil have profound political, social, demographic, and economic problems, their economies are growing faster than ours, and this could alter the global distribution of power. These trends could in the long term produce a multi-polar world. If U.S. policymakers fail to act and other powers continue to grow, it is not a question of whether but when a new international order will emerge. The closing of the gap between the United States and its rivals could intensify geopolitical competition among major powers, increase incentives for local powers to play major powers against one another, and undercut our will to preclude or respond to international crises because of the higher risk of escalation.
The stakes are high. In modern history, the longest period of peace among the great powers has been the era of U.S. leadership. By contrast, multi-polar systems have been unstable, with their competitive dynamics resulting in frequent crises and major wars among the great powers. Failures of multi-polar international systems produced both world wars. American retrenchment could have devastating consequences. Without an American security blanket, regional powers could rearm in an attempt to balance against emerging threats. Under this scenario, there would be a heightened possibility of arms races, miscalculation, or other crises spiraling into all-out conflict. Alternatively, in seeking to accommodate the stronger powers, weaker powers may shift their geopolitical posture away from the United States. Either way, hostile states would be emboldened to make aggressive moves in their regions.
Slow growth leads to hegemonic wars – relative gap is key
Goldstein 7 - Professor of Global Politics and International Relations @ University of
Pennsylvania,
(Avery Goldstein, “Power transitions, institutions, and China's rise in East Asia: Theoretical
expectations and evidence,” Journal of Strategic Studies, Volume 30, Issue 4 & 5, August, EBSCO)
Two closely related, though distinct, theoretical arguments focus explicitly on the consequences for international politics of a shift in
power between a dominant state and a rising power. In War and Change in World Politics, Robert Gilpin
suggested
that peace prevails when a dominant state’s capabilities enable it to ‘govern’ an international order that it has shaped. Over time,
however, as economic and technological diffusion proceeds during eras of peace and development, other
states are empowered. Moreover, the burdens of international governance drain and distract the reigning hegemon,
and challengers eventually emerge who seek to rewrite the rules of governance. As the power
advantage of the erstwhile hegemon ebbs, it may become desperate enough to resort to the ultima
ratio of international politics, force, to forestall the increasingly urgent demands of a rising challenger. Or as
the power of the challenger rises, it may be tempted to press its case with threats to use force. It is
the rise and fall of the great powers that creates the circumstances under which major wars, what Gilpin
labels ‘hegemonic wars’, break out.13 Gilpin’s argument logically encourages pessimism about the implications of a rising
China. It leads to the expectation that international trade, investment, and technology transfer will result in a steady diffusion of
American economic power, benefiting the rapidly developing states of the world, including China. As the
US simultaneously scurries to put out the many brushfires that threaten its far-flung global interests (i.e., the classic problem of
overextension), it will be unable to devote sufficient resources to maintain or restore its former advantage
over emerging competitors like China. While the erosion of the once clear American advantage plays itself
out, the US will find it ever more difficult to preserve the order in Asia that it created during its
era of preponderance. The expectation is an increase in the likelihood for the use of force –
either by a Chinese challenger able to field a stronger military in support of its demands for greater influence over
international arrangements in Asia, or by a besieged American hegemon desperate to head off further
decline. Among the trends that alarm those who would look at Asia through the lens of Gilpin’s theory are China’s
expanding share of world trade and wealth (much of it resulting from the gains made possible by the international
economic order a dominant US established); its acquisition of technology in key sectors that have both civilian and
military applications (e.g., information, communications, and electronics linked with the ‘revolution in
military affairs’); and an expanding military burden for the US (as it copes with the challenges of its global war on terrorism and
especially its struggle in Iraq) that limits the resources it can devote to preserving its interests in East Asia.14 Although similar to
Gilpin’s work insofar as it emphasizes the importance of shifts in the capabilities of a dominant state and a rising challenger, the
power-transition theory A. F. K. Organski and Jacek Kugler present in The War Ledger focuses more closely on the allegedly
dangerous phenomenon of ‘crossover’– the point at which a dissatisfied challenger is about to overtake the established leading
state.15 In such cases, when
the power gap narrows, the dominant state becomes increasingly
desperate to forestall, and the challenger becomes increasingly determined to realize, the transition to a new international order whose contours it will define. Though suggesting why a rising China may ultimately present grave dangers for international peace when its
capabilities make it a peer competitor of America, Organski and Kugler’s power-transition theory is less clear about the
dangers while a potential challenger still lags far behind and faces a difficult struggle to catch up. This clarification is important in
thinking about the theory’s relevance to interpreting China’s rise because a broad consensus prevails among analysts that Chinese
military capabilities are at a minimum two decades from putting it in a league with the US in Asia.16 Their theory, then, points
with alarm to trends in China’s growing wealth and power relative to the United States,
but especially looks ahead to what it sees as the period of maximum danger – that time when a
dissatisfied China could be in a position to overtake the US on dimensions believed crucial for
assessing power. Reports beginning in the mid-1990s that offered extrapolations suggesting China’s growth
would give it the world’s largest gross domestic product (GDP aggregate, not per capita) sometime in the
first few decades of the twenty-first century fed these sorts of concerns about a potentially dangerous challenge to
American leadership in Asia.17 The huge gap between Chinese and American military capabilities (especially in terms of
technological sophistication) has so far discouraged prediction of comparably disquieting trends on this dimension, but inklings of
similar concerns may be reflected in occasionally alarmist reports about purchases of advanced Russian air and naval equipment, as
well as concern that Chinese espionage may have undermined the American advantage in nuclear and missile technology, and
speculation about the potential military purposes of China’s manned space program.18 Moreover, because a
dominant
state may react to the prospect of a crossover and believe that it is wiser to embrace the logic of
preventive war and act early to delay a transition while the task is more manageable, Organski and
Kugler’s power-transition theory also provides grounds for concern about the period prior to the
possible crossover.19
2NC Ptix NB
Reg negs are bipartisan
Copeland 06
(Curtis W. Copeland, PhD, was formerly a specialist in American government at the Congressional Research Service (CRS) within
the U.S. Library of Congress. Copeland received his PhD degree in political science from the University of North Texas. His primary
area of expertise is federal rulemaking and regulatory policy. Before coming to CRS in January 2004, Dr. Copeland worked at the
U.S. General Accounting Office (GAO, now the Government Accountability Office) for 23 years on a variety of issues, including
federal personnel policy, pay equity, ethics, procurement policy, management reform, the Office of Management and Budget
(OMB), and, since the mid-1990s, multiple aspects of the federal rulemaking process. At CRS, he wrote reports and testified
before Congress on such issues as federal rulemaking, regulatory reform, the Congressional Review Act, negotiated rulemaking,
the Paperwork Reduction Act, the Regulatory Flexibility Act, OMB’s Office of Information and Regulatory Affairs, Executive Order
13422, midnight rulemaking, peer review, and risk assessment. He has also written and testified on federal personnel policies, the
federal workforce, GAO’s pay-for-performance system, and efforts to oversee the implementation of the Troubled Asset Relief
Program. From 2004 until 2007, Dr. Copeland headed the Executive Branch Operations section within CRS’s Government and
Finance Division. Copeland, C. W. “Negotiated Rulemaking,” Congressional Research Service, September 18, 2006.
http://crs.wikileaks-press.org/RL32452.pdf//ghs-kw)
Negotiated rulemaking (sometimes referred to as regulatory negotiation or “reg-neg”) is a supplement to the traditional
APA rulemaking process in which agency representatives and representatives of affected parties work together to develop what can
ultimately become the text of a proposed rule.1 In this approach, negotiators try
to reach consensus by evaluating
their priorities and making tradeoffs, with the end result being a draft rule that is mutually
acceptable. Negotiated rulemaking has been encouraged (although not usually required) by both congressional and
executive branch actions, and has received bipartisan support as a way to involve affected parties in
rulemaking before agencies have developed their proposals. Some questions have been raised, however,
regarding whether the approach actually speeds rulemaking or reduces litigation.
Reg neg solves controversy—no link to ptix
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme Peoples Court. He has received multiple awards for his achievements in administrative law. He is listed
in Who's Who in America and is a member of the Administrative Conference of the United States.Harter, P. J. “Assessing the
Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Recent Agency Use of Reg Neg. And, indeed, in the past few years agencies
have used reg neg to develop some of
their most contentious rules. For example, the Federal Aviation Administration and the National
Park Service used a variant of the process to write the regulations and policies governing
sightseeing flights over national parks; the issue had been sufficiently controversial that the
President had to intervene and direct the two agencies to develop rules “for the management of sightseeing aircraft in the
National Parks where it is deemed necessary to reduce or prevent the adverse effects of such aircraft.”22 The Department of
Transportation used it to write a regulation governing the delivery of propane and other compressed gases when the regulation
became ensnared in litigation and Congressional action.23 The Occupational Safety and Health Administration used it to address the
erection of steel structures, an issue that had been on its docket for more than a decade with two abortive attempts at rulemaking
when OSHA turned to reg neg.24 The Forest
Service has just published a notice of intent to establish a reg
neg committee to develop policies governing the use of fixed anchors for rock climbing in
designated wilderness areas administered by the Forest Service.25 This
issue has become extremely
controversial.26 Negotiated rulemaking has proven enormously successful in developing
agreements in highly polarized situations and has enabled the parties to address the best, most
effective or efficient way of solving a regulatory controversy. Agencies have therefore turned to
it to help resolve particularly difficult, contentious issues that have eluded closure by means of
traditional rulemaking procedures.
2NC CP Solves Ptix Link
The counterplan breaks down adversarialism, is seen as legitimate, and is key
to effective regulation
Mee ‘97
(Siobhan, JD, an attorney in the complex and class action litigation group,
focuses her practice on a broad range of commercial litigation, “Negotiated
Rulemaking and Combined Sewer Overflows (CSOs): Consensus Saves
Ossification?,” Fall 1997, 25 B.C. Envtl. Aff. L. Rev. 213, pg Lexis//Um-Ef)
Benefits that accrue to negotiated rulemaking participants correspond to the criticisms of traditional rulemaking. n132 In
particular, proponents of negotiated rulemaking claim that it increases public participation, n133 fosters nonadversarial
relationships, n134 and reduces long-term regulatory costs. n135 Traditionally, agencies have limited
the avenues for public participation in the rulemaking process to reaction and criticism, releasing rules for the
public's comment after they have been developed [*229] internally. n136 In contrast, negotiated rulemaking elicits wider
involvement at the early stages of production. n137 Input from non-agency and non-governmental
actors, who may possess the most relevant knowledge and who will be most affected by the
rule, is a prerequisite to effective regulation. n138 Increased participation also leads to what Professor
Harter considers the overarching benefit of negotiations: greater legitimacy. n139 Whereas traditional rulemaking lends itself
to adversarialism, n140 negotiated rulemaking is designed to foster cooperation and
accommodation. n141 Rather than clinging to extreme positions, parties prioritize the underlying issues and seek
trade-offs to maximize their overall interests. n142 Participants, including the agency, discover and
address one another's concerns directly. n143 The give-and-take of this process provides an opportunity for parties with differing viewpoints to
test data and arguments directly. n144 The resultant exploration of different approaches is more likely than the
usual notice and comment process to generate creative solutions and avoid ossification. n145 [*230]
Whether or not it results in a rule, negotiated rulemaking establishes valuable links between groups that
otherwise would only communicate in an adversarial context. n146 Rather than trying to outsmart one another, former
competitors become part of a team which must consider the needs of each member. n147 Working relationships developed during negotiations give participants an
understanding of the other side. n148 As one negotiator reflected, in "working with the opposition you find they're not quite the ogres you thought they were, and they don't
hate you as much as you thought." n149 The chance to iron out what are often long-standing disagreements can
only improve future interactions. n150
2NC AT Perm do Both
Perm do both links to the net benefit—it does the entirety of the AFF, which
_____________
2NC AT Perm do the CP
CP is plan minus since it only mandates the creation of a reg neg committee—
it does the plan only if the committee decides to do so—that means
that the CP is uncertain. Perm severs the certainty of the plan:
Substantially means certain and real
Words and Phrases 1964 (40 W&P 759) (this edition of W&P is out of print; the page number
no longer matches up to the current edition and I was unable to find the card in the new edition.
However, this card is also available on google books, Judicial and statutory definitions of words
and phrases, Volume 8, p. 7329)
The words “outward, open, actual, visible, substantial, and exclusive,” in connection with a change of possession, mean substantially the
same thing. They mean not concealed; not hidden; exposed to view; free from concealment, dissimulation, reserve, or disguise; in full existence;
denoting that which not merely can be, but is opposed to potential, apparent, constructive, and imaginary;
veritable; genuine; certain; absolute; real at present time, as a matter of fact, not merely nominal; opposed to form; actually existing;
true; not including admitting, or pertaining to any others; undivided; sole; opposed to inclusive. Bass v. Pease, 79 Ill. App. 308, 318.
Should means must—it’s certain
Supreme Court of Oklahoma 94
(Kelsey v. Dollarsaver Food Warehouse of Durant, Supreme Court of Oklahoma,
1994.
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#
marker3fn14//ghs-kw)
The turgid phrase - "should be and the same hereby is" - is a tautological absurdity. This is so because
"should" is synonymous with ought or must
and is in itself sufficient to effect an in praesenti ruling - one that is couched in "a present indicative synonymous with ought." See infra note 15. 3 Carter v. Carter, Okl., 783 P.2d
969, 970 (1989); Horizons, Inc. v. Keo Leasing Co., Okl., 681 P.2d 757, 759 (1984); Amarex, Inc. v. Baker, Okl., 655 P.2d 1040, 1043 (1983); Knell v. Burnes, Okl., 645 P.2d 471, 473
(1982); Prock v. District Court of Pittsburgh County, Okl., 630 P.2d 772, 775 (1981); Harry v. Hertzler, 185 Okl. 151, 90 P.2d 656, 659 (1939); Ginn v. Knight, 106 Okl. 4, 232 P. 936,
937 (1925). 4 "Recordable" means that by force of 12 O.S. 1991 § 24 an instrument meeting that section's criteria must be entered on or "recorded" in the court's journal. The
clerk may "enter" only that which is "on file." The pertinent terms of 12 O.S. 1991 § 24 are: "Upon the journal record required to be kept by the clerk of the district court in civil
cases . . . shall be entered copies of the following instruments on file: 1. All items of process by which the court acquired jurisdiction of the person of each defendant in the case;
and 2. All instruments filed in the case that bear the signature of the and judge and specify clearly the relief granted or order made." [Emphasis added.] 5 See 12 O.S. 1991 §
1116 which states in pertinent part: "Every direction of a court or judge made or entered in writing, and not included in a judgment is an order." [Emphasis added.] 6 The
pertinent terms of 12 O.S. 1993 § 696.3 , effective October 1, 1993, are: "A. Judgments, decrees and appealable orders that are filed with the clerk of the court shall contain: 1. A
caption setting forth the name of the court, the names and designation of the parties, the file number of the case and the title of the instrument; 2. A statement of the
disposition of the action, proceeding, or motion, including a statement of the relief awarded to a party or parties and the liabilities and obligations imposed on the other party or
parties; 3. The signature and title of the court; . . ." 7 The court holds that the May 18 memorial's recital that "the Court finds that the motions should be overruled" is a "finding"
and not a ruling. In its pure form, a finding is generally not effective as an order or judgment. See, e.g., Tillman v. Tillman, 199 Okl. 130, 184 P.2d 784 (1947), cited in the court's
opinion. 8 When ruling upon a motion for judgment n.o.v. the court must take into account all the evidence favorable to the party against whom the motion is directed and
disregard all conflicting evidence favorable to the movant. If the court should conclude the motion is sustainable, it must hold, as a matter of law, that there is an entire absence
of proof tending to show a right to recover. See Austin v. Wilkerson, Inc., Okl., 519 P.2d 899, 903 (1974). 9 See Bullard v. Grisham Const. Co., Okl., 660 P.2d 1045, 1047 (1983),
where this court reviewed a trial judge's "findings of fact", perceived as a basis for his ruling on a motion for judgment n.o.v. (in the face of a defendant's reliance on plaintiff's
contributory negligence). These judicial findings were held impermissible as an invasion of the providence of the jury and proscribed by OKLA. CONST. ART, 23, § 6 . Id. at 1048.
10 Everyday courthouse parlance does not always distinguish between a judge's "finding", which denotes nisi prius resolution of fact issues, and "ruling" or "conclusion of law".
The latter resolves disputed issues of law. In practice usage members of the bench and bar often confuse what the judge "finds" with what that official "concludes", i.e., resolves
as a legal matter. 11 See Fowler v. Thomsen, 68 Neb. 578, 94 N.W. 810, 811-12 (1903), where the court determined a ruling that "[1] find from the bill of particulars that there is
due the plaintiff the sum of . . ." was a judgment and not a finding. In reaching its conclusion the court reasoned that "[e]ffect must be given to the entire in the docket according
to the manifest intention of the justice in making them." Id., 94 N.W. at 811. 12 When the language of a judgment is susceptible of two interpretations, that which makes it
correct and valid is preferred to one that would render it erroneous. Hale v. Independent Powder Co., 46 Okl. 135, 148 P. 715, 716 (1915); Sharp v. McColm, 79 Kan. 772, 101 P.
659, 662 (1909); Clay v. Hildebrand, 34 Kan. 694, 9 P. 466, 470 (1886); see also 1 A.C. FREEMAN LAW OF JUDGMENTS § 76 (5th ed. 1925). 13 "Should" not only is used as a
"present indicative" synonymous with ought but also is the past tense of "shall" with various shades of meaning not always easy to analyze. See 57 C.J. Shall § 9, Judgments §
121 (1932). O. JESPERSEN, GROWTH AND STRUCTURE OF THE ENGLISH LANGUAGE (1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a more
detailed explanation, see the Partridge quotation infra note 15. Certain contexts mandate a construction of the term
"should" as more than merely indicating preference or desirability. Brown, supra at 1080-81 (jury instructions stating that
jurors "should" reduce the amount of damages in proportion to the amount of contributory negligence of the plaintiff was held to imply an obligation and to be more than
advisory); Carrigan v. California Horse Racing Board, 60 Wash. App. 79, 802 P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a party "should devote a
section of the brief to the request for the fee or expenses" was interpreted to mean that a party is under an obligation to include
the requested segment); State v. Rack, 318 S.W.2d 211, 215 (Mo. 1958) ("should" would mean the same as "shall" or
"must" when used in an instruction to the jury which tells the triers they "should disregard false testimony").
2NC AT Theory
Counterinterp: process CPs are legitimate if we have a solvency advocate
AND, process CPs good:
1. Key to education—we need to be able to debate the desirability of the
plan’s regulatory process; testing all angles of the AFF is key to
determine the best policy option
2. Key to neg ground—it’s the only CP we can run against regulatory AFFs
3. Predictability and fairness—there’s a huge lit base, and a solvency advocate
ensures it’s predictable
Applegate 98
(John S. Applegate holds a law degree from Harvard Law School and a bachelor’s degree in English from Haverford
College. Nationally recognized for his work in environmental risk assessment and policy analysis, Applegate has written
books and articles on the regulation of toxic substances, defense nuclear waste, public participation in environmental
decisions, and international environmental law. He serves on the National Academy of Sciences Nuclear and Radiation
Studies Board. In addition, he is an award-winning teacher, known for his ability to present complex information with
an engaging style and wry wit. Before coming to IU, Applegate was the James B. Helmer, Jr. Professor of Law at the
University of Cincinnati College of Law. He also was a visiting professor at the Vanderbilt University School of Law.
From 1983 to 1987, Applegate practiced environmental law in Washington, D.C., with the law firm of Covington &
Burling. He clerked for the late Judge Edward S. Smith of the U.S. Court of Appeals for the Federal Circuit. John S.
Applegate was named Indiana University’s first vice president for planning and policy in July 2008. In March 2010, his
portfolio was expanded and his title changed to vice president for university regional affairs, planning, and policy. In
February 2011, he became executive vice president for regional affairs, planning, and policy. As Executive Vice
President for University Academic Affairs since 2013, his office ensures coordination of university academic matters,
strategic plans, external academic relations, enterprise systems, and the academic policies that enable the university
to most effectively bring its vast intellectual resources to bear in serving the citizens of the state and nation. The
regional affairs mission of OEVPUAA is to lead the development of a shared identity and mission for all of IU's regional
campuses that complements each campus's individual identity and mission. In addition, Executive Vice President
Applegate is responsible for public safety functions across the university, including police, emergency management,
and environmental health and safety. In appointing him in 2008, President McRobbie noted that "John Applegate has
proven himself to be very effective at many administrative and academic initiatives that require a great deal of
analysis and coordination within the university and with external agencies, including the Indiana Commission for
Higher Education. His experience and understanding of both academia and the law make him almost uniquely suited to
take on these responsibilities.” In 2006, John Applegate was appointed Indiana University’s first Presidential Fellow, a
role in which he served both President Emeritus Adam Herbert and current President Michael McRobbie. A
distinguished environmental law scholar, Applegate joined the IU faculty in 1998. He is the Walter W. Foskett
Professor of Law at the Indiana University Maurer School of Law in Bloomington and also served as the school’s
executive associate dean for academic affairs from 2002-2009. Applegate, J. S. “Beyond the Usual Suspects: The Use of
Citizen Advisory Boards in Environmental Decisionmaking,” Indiana Law Journal, Volume 73, Issue 3, July 1, 1998.
http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1939&context=ilj//ghs-kw)
There is substantial literature on negotiated rulemaking. The interested reader might
begin with the Negotiated Rulemaking Act of 1990, 5 U.S.C. §§ 561-570 (1994 & Supp. II 1996),
Freeman, supra note 53, Philip J. Harter, Negotiating Regulations: A Cure for Malaise, 71 GEO.
L.J. 1 (1982), Henry E. Perritt, Jr., Negotiated Rulemaking Before Federal Agencies:
Evaluation of the Recommendations by the Administrative Conference of the United
States, 74 GEO. L.J. 1625 (1986), Lawrence Susskind & Gerard McMahon, The Theory and
Practice of Negotiated Rulemaking, 3 YALE J. ON REG. 133 (1985), and an excellent, just-published issue on regulatory negotiation, Twenty-Eighth Annual Administrative Law
Issue, 46 DUKE L.J. 1255 (1997).
4. Decision making skills—reg neg is uniquely key to decision making skills
Fiorino 88
(Daniel J. Fiorino holds a PhD & MA in Political Science from Johns Hopkins University and a BA in Political Science &
Minor in Economics from Youngstown State University. Daniel J. Fiorino is the Director of the Center for Environmental
Policy and Executive in Residence in the School of Public Affairs at American University. As a faculty member in the
Department of Public Administration and Policy, he teaches courses on environmental policy, energy and climate
change, environmental sustainability, and public management. Dan is the author or co-author of four books and some
three dozen articles and book chapters in his field. According to Google Scholar, his work has been cited some 2300
times in the professional literature. His book, The New Environmental Regulation, won the Brownlow Award of the
National Academy of Public Administration (NAPA) for “excellence in public administration literature” in 2007.
Altogether his publications have received nine national and international awards from the American Society for Public
Administration, Policy Studies Organization, Academy of Management, and NAPA. His most recent refereed journal
articles were on the role of sustainability in Public Administration Review (2010); explanations for differences in
national environmental performance in Policy Sciences (2011); and technology innovation in renewable energy in
Policy Studies Journal (2013). In 2009 he was a Public Policy Scholar at the Woodrow Wilson International Center for
Scholars. He also serves as an advisor on environmental and sustainability issues for MDB, Inc., a Washington, DC
consulting firm. Dan joined American University in 2009 after a career at the U.S. Environmental Protection Agency
(EPA). Among his positions at EPA were the Associate Director of the Office of Policy Analysis, Director of the Waste
and Chemicals Policy Division, Senior Advisor to the Assistant Administrator for Policy, and the Director of the National
Environmental Performance Track. The Performance Track program was selected as one of the top 50 innovations in
American government 2006 and recognized by Administrator Christine Todd Whitman with an EPA Silver Medal in
2002. In 1993, he received EPA’s Lee M. Thomas Award for Management Excellence. He has appeared on or been
quoted in several media outlets: the Daily Beast, Newsweek, Christian Science Monitor, Australian Broadcasting
Corporation, Agence France-Presse, and CCTV, on such topics as air quality, climate change, the BP Horizon Oil Spill,
carbon trading, EPA, and U.S. environmental and energy politics. He currently is co-director of a project on “Conceptual
Innovations in Environmental Policy” with James Meadowcroft of Carleton University, funded by the Canada Research
Council on Social Sciences and the Humanities. He is a member of the Partnership on Technology and the Environment
with the Heinz Center, Environmental Defense Fund, Nicholas Institute, EPA, and the Wharton School. He is conducting
research on the role of sustainability in policy analysis and the effects of regulatory policy design and implementation
on technology innovation. In 2013, he created the William K. Reilly Fund for Environmental Governance and
Leadership within the Center for Environmental Policy, working with associates of Mr. Reilly and several corporate and
other sponsors. He is a Fellow of the National Academy of Public Administration. Dan is co-editor, with Robert Durant,
of the Routledge series on “Environmental Sustainability and Public Administration.” He is often is invited to speak to
business and academic audiences, most recently as the keynote speaker at a Tel Aviv University conference on
environmental regulation in May 2013. In the summer of 2013 he will present lectures and take part in several events
as the Sir Frank Holmes Visiting Fellow at Victoria University in New Zealand. Fiorino, D. J. “Regulatory Negotiations as
a Policy Process,” Public Administration Review, Vol 48, No 4, pp 764-772, July-August 1988.
http://www.jstor.org/discover/10.2307/975600?uid=3739728&uid=2&uid=4&uid=3739256&sid=21104541489843//gh
s-kw)
Thus, in its premises, objectives, and techniques, regulatory
negotiation reflects the trend toward
alternative dispute settlement. However, because regulatory negotiation is prospective and
general in its application rather than limited to a specific dispute, it also reflects another theme in American
public policy making. That theme is pluralism, or what Robert Reich has described in the context of administrative
rulemaking as “interest-group mediation” (Reich 1985, pp. 1619-1620).[20] Reich's analysis sheds light on negotiation
as a form of regulatory policy making, especially its contrasts with more analytical policy
models. Reich proposes interest-group mediation and net-benefit maximization as the
two visions that dominate administrative policy making. The first descends from
pluralist political science and was more influential in the 1960s and early 1970s. The second descends
from decision theory and micro-economics, and it was more influential in the late 1970s and early 1980s.
In the first, the administrator is a referee who brings affected interests into the policy process to reconcile their demands
and preferences. In the net-benefit model, the administrator is an analyst who defines policy options, quantifies the likely
consequences of each, compares them to a given set of objectives, and then selects the option offering the greatest net
benefit or social utility. Under
the interest-group model, objectives emerge from the
bargaining among influential groups, and a good decision is one to which the parties will
agree. Under the net-benefit model, objectives are articulated in advance as external
guides to the policy process. A good decision is one that meets the criterion of economic
efficiency, defined ideally as a state in which no one party can improve its position
without worsening that of another.[21]
5. Policy education—reg negs are a key part of the policy process
Spector 99,
(Bertram I. Spector, Senior Technical Director at Management Systems International (MSI) and Executive Director of
the Center for Negotiation Analysis. Ph.D. in Political Science from New York University, May, 1999, Negotiated
Rulemaking: A Participative Approach to Consensus-Building for Regulatory Development and Implementation,
Technical Notes: A Publication of USAID’s Implementing Policy Change Project, http://www.negotiations.org/Tn10%20-%20Negotiated%20Rulemaking.pdf) AJ
Why use negotiated rulemaking? What are the implications for policy reform, the implementation of policy
changes, and conflict between stakeholders and government? First, the process generates an
environment for dialogue that facilitates the reality testing of regulations before they
are implemented. It enables policy reforms to be discussed in an open forum by
stakeholders and for tradeoffs to be made that expedite compliance among those who
are directly impacted by the reforms. Second, negotiated rulemaking is a process of
empowerment. It encourages the participation and enfranchisement of parties that have a stake in reform. It
provides voice to interests, concerns and priorities that otherwise might not be heard or
considered in devising new policy. Third, it is a process that promotes creative but
pragmatic solutions. By encouraging a holistic examination of the policy area, negotiated rulemaking
asks the participants to assess the multiple issues and subissues involved, set priorities
among them, and make compromises. Such rethinking often yields novel and
unorthodox answers. Fourth, negotiated rulemaking offers an efficient mechanism for
policy implementation. Experience shows that it results in earlier implementation;
higher compliance rates; reduced time, money and effort spent on enforcement;
increased cooperation between the regulator and regulated parties; and reduced
litigation over the regulations. Regulatory negotiations can yield both better solutions
and more efficient compliance.
6. At worst, reject the argument, not the team
2NC AT Agency Responsiveness
No difference in agency responsiveness
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to a Doctor of Juridical Science from Harvard University. She served as Counselor for
Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and
institutional design, and a leading thinker on collaborative and contractual approaches to governance. After leaving the White
House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural reform at the Department
of the Interior. She has been appointed to the Administrative Conference of the United States, the government think tank for
improving the effectiveness and efficiency of federal agencies, and is a member of the American College of Environmental
Lawyers. Laura I. Langbein is Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice at American University. She holds a PhD in Political Science from the University of North Carolina and a BA in Government from Oberlin College. Freeman, J. Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
3. Negotiated Rulemaking Does Not Abrogate the Agency's Responsibility to Execute Delegated Authority Overall, the evidence from
Phase II is generally inconsistent with the theoretical but empirically untested claim that EPA has failed to retain its responsibility for
writing rules in negotiated settings. Recall that theorists disagree over whether reg neg will increase agency responsiveness. Most
scholars assume that EPA retains more authority in conventional rulemaking, and that participants exert commensurately less
influence over conventional as opposed to negotiated rules. To test this hypothesis, Kerwin and Langbein asked participants about
disproportionate influence and about agency responsiveness to the respondent personally, as well as agency responsiveness to the
public in general. The results suggest that the
agency is equally responsive to participants in conventional
and negotiated rulemaking, consistent with the hypothesis that the agency listens to the
affected parties regardless of the method of rule development. Further, when asked what they disliked
about the process, less than 10% of both negotiated and conventional participants volunteered "disproportionate influence." When
asked whether any party had disproportionate influence during rule development, 44% of conventional respondents answered
"yes," compared to 48% of reg neg respondents. In addition, EPA
was as likely to be viewed as having
disproportionate influence in negotiated as conventional rules (25% versus 32% respectively). It follows that
roughly equal proportions of participants in negotiated and conventional rules viewed other participants, and especially EPA, as
having disproportionate influence. Kerwin and Langbein asked those who reported disproportionate influence what about the rule
led them to believe that lopsided influence existed. In response, negotiated
rulemaking participants were
significantly more likely to see excessive influence by one party in the process rather than in the
rule itself, as compared to conventional participants (55% versus 13% respectively). However, when asked what
it was about the process that fostered disproportionate influence, conventional rule participants were twice as likely as negotiated
rule participants to point to the central role of EPA (63% versus 30% respectively). By contrast, negotiated rule participants pointed
to other participants who were particularly vocal and active during the negotiation sessions (26% of negotiated rule respondents
versus no conventional respondents). When asked about agency responsiveness, negotiated rule participants were significantly
more likely than conventional rule participants to view both general participation, and their personal participation, as having a
"major" impact on the proposed rule. By contrast, conventional participants were more likely to see "major" differences between
the proposed and final rule and to believe that public participation and their own participation had a "moderate" or "major" impact
on that change. These results conform to the researchers' expectations: negotiated
rules are designed so that public
participation should have its greatest impact on the proposed rule; conventional rules are
structured so that public participation should have its greatest impact on the final rule. Given these
differences in how the two processes are designed, Kerwin and Langbein sought to measure agency responsiveness
overall, rather than at the two separate moments of access. Although the differences were not statistically
significant, the results suggest that conventional participants perceived their public and personal contribution to rulemaking to
have had slightly more impact than negotiated rule participants perceived their contribution to have had. Still, given the
absence of statistical significance, we agree with the researchers that it is safer to conclude that the agency is
equally responsive to both conventional and negotiated rule participants.
2NC AT Cost
Reg negs are more cost effective
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme People's Court. He has received multiple awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference of the United States. Harter, P. J. “Assessing the
Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Negotiated Rulemaking Has Fulfilled its Goals. If “better rules” were the aspirations for negotiated rulemaking, the
question remains as to whether the process has lived up to the expectations. From my own personal experience, the rules that
emerge from negotiated rulemaking tend to be both more stringent and yet more cost
effective to implement. That somewhat paradoxical result comes precisely from the practical
orientation of the committee: it can figure out what information is needed to make a
reasonable, responsible decision and then what actions will best achieve the goal; it can,
therefore, avoid common regulatory mistakes that are costly but do not contribute substantially
to accomplishing the task. The only formal evaluation of negotiated rulemaking that has been conducted supports these
observations. After his early article analyzing the time required for negotiated rulemaking, Neil Kerwin undertook an evaluation of
negotiated rulemaking at the Environmental Protection Agency with Dr. Laura Langbein.103 Kerwin
and Langbein
conducted a study of negotiated rulemaking by examining what actually occurs in a reg neg versus the development
of rules by conventional means. To establish the requisite comparison, they “collected data on litigation, data from the comments
on proposed rules, and data from systematic, open-ended interviews with participants in 8 negotiated rules . . . and in 6
‘comparable’ conventional rules.”104 They interviewed 51 participants of conventional rulemaking and 101 from various negotiated
rulemaking committees.105 Kerwin
and Langbein’s important work provides the only rigorous,
empirical evaluation that compares a number of factors of conventional and negotiated
rulemaking. Their overall conclusion is: Our research contains strong but qualified support for the continued
use of negotiated rulemaking. The strong support comes in the form of positive assessments
provided by participants in negotiated rulemaking compared to assessments offered by those
involved in conventional form of regulation development. Further, there is no evidence that
negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no
more (or less) subject to litigation than conventional rules. It is also true that negotiated rulemaking at the EPA
is used largely to develop rules that entail particularly complex issues regarding the implementation and enforcement of legal
obligations rather than those that set the substantive standards themselves. However, participants’
assessments of the
resulting rules are more positive when the issues to be decided entail those of establishing
rather than enforcing the standard. Further, participants’ assessments are also more positive
when the issues to be decided are relatively more complex. Our research would support a recommendation
that negotiated rulemaking continue to be applied to complex issues, and more widely applied to include those entailing the
standard itself.106 Their findings are particularly powerful when comparing individual attributes of negotiated and conventional
rules. Table 3 contains a summary of those comparisons. Importantly, negotiated
rules were viewed more favorably
in every criteria, and significantly so in several dimensions that are often contentious in
regulatory debates — • the economic efficiency of the rule and its cost effectiveness • the quality of the scientific evidence
and the incorporation of appropriate technology, and • while “personal experience” is not usually considered in dialogues over regulatory procedure, Kerwin and Langbein's findings here too favor negotiated rules. Conclusion. The
benefits envisioned by the
proponents of negotiated rulemaking have indeed been realized. That is demonstrated both
by Coglianese’s own methodology when properly understood and by the only careful and
comprehensive comparative study. Reg neg has proven to be an enormously powerful tool in
addressing highly complex, politicized rules. These are the very kind that stall agencies when
using traditional or conventional procedures.107 Properly understood and used appropriately,
negotiated rulemaking does indeed fulfill its expectations.
Reg negs are cheaper
Langbein and Kerwin 00
(Laura I. Langbein is a quantitative methodologist and professor of public administration and policy at American University in
Washington, D.C. She teaches quantitative methods, program evaluation, policy analysis, and public choice. Her articles have
appeared in journals on politics, economics, policy analysis and public administration. Langbein received a BA in government from
Oberlin College in 1965 and a PhD in political science from the University of North Carolina at Chapel Hill in 1972. She has taught
at American University since 1973: until 1978 as an assistant professor in the School of Government and Public Administration;
from 1978 to 1983 as an associate professor in the School of Government and Public Administration; and since 1983 as a
professor in the School of Public Affairs. She is also a private consultant on statistics, research design, survey research, and
program evaluation and an accomplished clarinetist. Cornelius Martin “Neil” Kerwin (born April 10, 1949) is an American
educator in public administration and president of American University. A 1971 undergraduate alumnus of American University,
Kerwin continued his education with a Master of Arts degree in political science from the University of Rhode Island in 1973. In
1975, Kerwin returned to his alma mater and joined the faculty of the American University School of Public Affairs, then the
School of Government and Public Administration. Kerwin completed his doctorate in political science from Johns Hopkins
University in 1978 and continued to teach until 1989, when he became the dean of the school. Langbein, L. I. Kerwin, C. M.
“Regulatory Negotiation versus Conventional Rule Making: Claims, Counterclaims, and Empirical Evidence,” Journal of Public
Administration Research and Theory, July 2000. http://jpart.oxfordjournals.org/content/10/3/599.full.pdf//ghs-kw)
Our research contains strong but qualified support for the continued use of negotiated
rule making. The strong
support comes in the form of positive assessments provided by participants in negotiated rule
making compared to assessments offered by those involved in conventional forms of regulation
development. There is no evidence that negotiated rules comprise an abrogation of agency
authority, and negotiated rules appear no more (or less) subject to litigation than conventional rules. It is also true that
negotiated rule making at the EPA is used largely to develop rules that entail particularly complex
issues regarding the implementation and enforcement of legal obligations rather than rules that set
substantive standards. However, participants' assessments of the resulting rules are more positive when the issues to be decided
entail those of establishing rather than enforcing the standard. Participants' assessments are also more positive when the issues to
be decided are relatively less complex. But even when these and other variables are controlled, reg neg participants' overall
assessments are significantly more positive than those of participants in conventional rule making. In
short, the process itself seems to affect participants' views of the rule making, independent of
differences between the types of rules chosen for conventional and negotiated rule making, and
independent of differences among the participants, including differences in their views of the
economic net benefits of the particular rule. This finding is consistent with theoretical expectations regarding the
importance of participation and the importance of face-to-face communication to increase the likelihood of Pareto-improving social
outcomes. With respect to participation, previous research indicates that compliance
with a law or regulation and
support for policy choice are more likely to be forthcoming not only when it is economically
rational but also when the process by which the decision is made is viewed as fair (Tyler 1990;
Kunreuther et al. 1993; Frey and Oberholzer-Gee 1996). While we did not ask respondents explicitly to rate the fairness of the rulemaking process in which they participated, evidence presented
in this study shows that reg neg participants
rated the overall process (with and without statistical controls in exhibits 9 and 1 respectively) and the ability of
EPA equitably to implement the rule (exhibit 1) significantly higher than conventional rule-making
participants did. Further, while conventional rule-making participants were more likely to say that there was no party with
disproportionate influence during the development of the rule, reg neg participants volunteered significantly more positive
comments and significantly fewer negative comments about the process overall. In general, reg
neg appears more likely
than conventional rule making to leave participants with a warm glow about the decision-making process. While the regression results show that the costs and benefits of the rule being promulgated figure
prominently into the respondents' overall assessment of the final rule, process matters too. Participants care not
only about how rules and policies affect them economically, they also care about how the
authorities who make and implement rules and policies treat them (and others). In fact, one reg neg
respondent, the owner of a small shop that manufactured wood burning stoves, remarked
about the woodstoves rule, which would put him out of business, that he felt satisfied even as
he participated in his own "wake." It remains for further research to show whether this warm glow affects long term
compliance and whether it extends to affected parties who were not direct participants in the negotiation process. It is unclear from
our research whether greater satisfaction with negotiated rules implies that negotiated rules are Pareto-superior to conventionally
written rules.13 Becker's (1983) theory of political competition among interest groups implies that in the absence of transactions
costs, groups that bear large costs and opposing groups that reap large benefits have directly proportional and equal incentives to
lobby. Politicians who seek to maximize net political support respond by balancing costs and benefits at the margin, and the
resulting equilibrium will be no worse than market failure would be. Transactions costs, however, are not zero, and they may not be
equal for interests on each side of an issue. For example, in many environmental policy issues, the benefits are dispersed and occur
in the future, while some, but not all, costs are concentrated and occur now. The consequence is that transactions costs
are
different for beneficiaries than for losers. If reg neg reduces transactions costs compared to conventional rule
making, or if reg neg reduces the imbalance in transactions costs between winners and losers, or among different kinds of winners
and losers, then it
might be reasonable to expect negotiated rules to be Pareto-superior to
conventionally written rules. Reg neg may reduce transactions costs in two ways. First,
participation in writing the proposed rule (which sets the agenda that determines the final rule) is direct, at least
for the participants. In conventional rule making, each interest has a repeated, bilateral relation with the rule-making
agency; the rule-making agency proposes the rule (and thereby controls the agenda for the final rule), and affected
interests respond separately to what is in the agency proposal. In negotiated rule making, each interest (including the agency) is in a
repeated N-person set of mutual relations; the negotiating group drafts the proposed rule, thereby setting the agenda for the final
rule. Since
the agency probably knows less about each group's costs and benefits than the group
knows about its own costs and benefits, the rule that emerges from direct negotiation should be
a more accurate reflection of net benefits than one that is written by the agency (even though the
agency tries to be responsive to the affected parties). In effect, reg neg can be expected to better establish a
core relationship of trust, reputation, and reciprocity that Ostrom (1998) argues is central to
improving net social benefits. Reg neg may reduce transactions costs not only by entailing
repeated mutual rather than bilateral relations, but also by face to face communication. Ostrom
(1998, 13) argues that face-to-face communication reduces transactions costs by making it easier to
assess trustworthiness and by lowering the decision costs of reaching a "contingent agreement,"
in which "individuals agree to contribute x resources to a common effort so long as at least y
others also contribute." In fact, our survey results show that reg neg participants are significantly more
likely than conventional rule-making participants to believe that others will comply with the final
rule (exhibit 1). In the absence of outside assessments that compare net social benefits of the conventional and negotiated rules in
this study,15 the hypothesis that reg neg is Pareto superior to conventional rule making remains an untested speculation.
Nonetheless, it seems to be a plausible hypothesis based on recent theories regarding the importance of institutions that foster
participation in helping to effect Pareto-preferred social outcomes.
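Note on the Becker warrant above (our illustration, not the authors'): the transactions-cost asymmetry is easiest to see with stylized numbers. Suppose a rule imposes $100 million in compliance costs on ten firms today but yields $150 million in health benefits spread thinly over millions of people across decades. Each firm stands to lose $10 million now and can cheaply coordinate with nine peers; each beneficiary gains a few dollars far in the future and has little reason to organize at all. Lobbying effort then tracks the concentration of stakes rather than net social value, which is exactly the imbalance the card says reg neg can reduce by putting both sides at the same table.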
2NC AT Consensus
Negotiating parties fear the alternative, which is worse than reg neg
Perritt 86
(Professor Perritt earned his B.S. in engineering from MIT in 1966, a master's degree in management from MIT's Sloan School in
1970, and a J.D. from Georgetown University Law Center in 1975. Henry H. Perritt, Jr., is a professor of law at IIT Chicago-Kent
College of Law. He served as Chicago-Kent's dean from 1997 to 2002 and was the Democratic candidate for the U.S. House of
Representatives in the Tenth District of Illinois in 2002. Throughout his academic career, Professor Perritt has made it possible for
groups of law and engineering students to work together to build a rule of law, promote the free press, assist in economic
development, and provide refugee aid through "Project Bosnia," "Operation Kosovo" and "Destination Democracy." Professor
Perritt is the author of more than 75 law review articles and 17 books on international relations and law, technology and law,
employment law, and entertainment law, including Digital Communications Law, one of the leading treatises on Internet law;
Employee Dismissal Law and Practice, one of the leading treatises on employment-at-will; and two books on Kosovo: Kosovo
Liberation Army: The Inside Story of an Insurgency, published by the University of Illinois Press, and The Road to Independence
for Kosovo: A Chronicle of the Ahtisaari Plan, published by Cambridge University Press. He is active in the entertainment field, as
well, writing several law review articles on the future of the popular music industry and of video entertainment. He also wrote a
50-song musical about Kosovo, You Took Away My Flag, which was performed in Chicago in 2009 and 2010. A screenplay for a
movie about the same story and characters has a trailer online and is being shopped to filmmakers. His two new plays, Airline
Miles and Giving Ground, are scheduled for performances in Chicago in 2012. His novel, Arian, was published by Amazon.com in
2012. He has two other novels in the works. He served on President Clinton's Transition Team, working on telecommunications
issues, and drafted principles for electronic dissemination of public information, which formed the core of the Electronic Freedom
of Information Act Amendments adopted by Congress in 1996. During the Ford administration, he served on the White House
staff and as deputy under secretary of labor. Professor Perritt served on the Computer Science and Telecommunications Policy
Board of the National Research Council, and on a National Research Council committee on "Global Networks and Local Values."
He was a member of the interprofessional team that evaluated the FBI's Carnivore system. He is a member of the bars of Virginia
(inactive), Pennsylvania (inactive), the District of Columbia, Maryland, Illinois and the United States Supreme Court. He is a
member of the Council on Foreign Relations and served on the board of directors of the Chicago Council on Foreign Relations, on
the Lifetime Membership Committee of the Council on Foreign Relations, and as secretary of the Section on Labor and
Employment Law of the American Bar Association. He is vice-president and a member of the board of directors of The Artistic
Home theatre company, and is president of Mass. Iota-Tau Association, the alumni corporation for the SAE fraternity chapter at
MIT. Perritt, H. H. “Negotiated Rulemaking Before Federal Agencies: Evaluation of Recommendations By the Administrative
Conference of the United States,” Georgetown Law Journal, Volume 74, August 1986.
http://www.kentlaw.edu/perritt/publications/74_GEO._L.J._1625.htm//ghs-kw)
The negotiations moved slowly until the FAA submitted a draft rule to the participants. This
reinforced the view that the FAA would move unilaterally. It also reminded the parties that there
would be things in a unilaterally promulgated rule that they would not like--thus reminding
them that their BATNAs were worse than what was being considered at the negotiating table.
Participation by the Vice President's Office, the Office of the Secretary of Transportation, and the OMB at the initial session
discouraged participants from thinking they could influence the contents of the rule outside the negotiation process. One attempt to communicate with the Administrator while the negotiations were underway was rebuffed. [FN263] The participants tacitly agreed that it would not be feasible to develop a 'total package' to which the participants formally could agree. Instead, their objectives were to narrow differences, explore alternative ways of achieving objectives at less disruption to operational exigencies, and educate the FAA on practical issues. The mediator had an acute sense that the negotiation process should stop before agreement began to erode. Accordingly, he forbore to force explicit agreement on difficult issues, took few votes, and adjourned the negotiations when things began to unravel. In addition, the FAA, the mediator, and participants were tolerant of the political need of participants to adhere to positions formally, even though signals were given that participants could live with something else. Agency participation in the negotiating sessions was crucial to the usefulness of this type of process. Because the agency was there, it could form its own impressions of what a party's real position was, despite
adherence to formal positions. In addition, it
was easy for the agency to proceed with a consensus standard
because it had an evolving sense of the consensus. Without agency participation, a more formal step would have
been necessary to communicate negotiating group views to the agency. Taking this formal step could have proven difficult or
impossible because it would have necessitated more formal participant agreement. In addition, the
presence of an outside
contractor who served as drafter was of some assistance. The drafter, a former FAA employee, assisted
informally in resolving internal FAA disagreements over the proposed rule after negotiations
were adjourned.
Reg neg produces participant satisfaction and reduces conflict—consensus will
happen
Langbein and Kerwin 00
(Laura I. Langbein is a quantitative methodologist and professor of public administration and policy at American University in
Washington, D.C. She teaches quantitative methods, program evaluation, policy analysis, and public choice. Her articles have
appeared in journals on politics, economics, policy analysis and public administration. Langbein received a BA in government from
Oberlin College in 1965 and a PhD in political science from the University of North Carolina at Chapel Hill in 1972. She has taught
at American University since 1973: until 1978 as an assistant professor in the School of Government and Public Administration;
from 1978 to 1983 as an associate professor in the School of Government and Public Administration; and since 1983 as a
professor in the School of Public Affairs. She is also a private consultant on statistics, research design, survey research, and
program evaluation and an accomplished clarinetist. Cornelius Martin “Neil” Kerwin (born April 10, 1949) is an American
educator in public administration and president of American University. A 1971 undergraduate alumnus of American University,
Kerwin continued his education with a Master of Arts degree in political science from the University of Rhode Island in 1973. In
1975, Kerwin returned to his alma mater and joined the faculty of the American University School of Public Affairs, then the
School of Government and Public Administration. Kerwin completed his doctorate in political science from Johns Hopkins
University in 1978 and continued to teach until 1989, when he became the dean of the school. Langbein, L. I. Kerwin, C. M.
“Regulatory Negotiation versus Conventional Rule Making: Claims, Counterclaims, and Empirical Evidence,” Journal of Public
Administration Research and Theory, July 2000. http://jpart.oxfordjournals.org/content/10/3/599.full.pdf//ghs-kw)
Our research contains strong but qualified support for the continued use of negotiated
rule making. The strong
support comes in the form of positive assessments provided by participants in negotiated rule
making compared to assessments offered by those involved in conventional forms of regulation
development. There is no evidence that negotiated rules comprise an abrogation of agency
authority, and negotiated rules appear no more (or less) subject to litigation than conventional rules. It is also true that
negotiated rule making at the EPA is used largely to develop rules that entail particularly complex
issues regarding the implementation and enforcement of legal obligations rather than rules that set
substantive standards. However, participants' assessments of the resulting rules are more positive when the issues to be decided
entail those of establishing rather than enforcing the standard. Participants' assessments are also more positive when the issues to
be decided are relatively less complex. But even when these and other variables are controlled, reg neg participants' overall
assessments are significantly more positive than those of participants in conventional rule making. In
short, the process itself seems to affect participants' views of the rule making, independent of
differences between the types of rules chosen for conventional and negotiated rule making, and
independent of differences among the participants, including differences in their views of the
economic net benefits of the particular rule. This finding is consistent with theoretical expectations regarding the
importance of participation and the importance of face-to-face communication to increase the likelihood of Pareto-improving social
outcomes. With respect to participation, previous research indicates that compliance
with a law or regulation and
support for policy choice are more likely to be forthcoming not only when it is economically
rational but also when the process by which the decision is made is viewed as fair (Tyler 1990;
Kunreuther et al. 1993; Frey and Oberholzer-Gee 1996). While we did not ask respondents explicitly to rate the fairness of the rulemaking process in which they participated, evidence presented
in this study shows that reg neg participants
rated the overall process (with and without statistical controls in exhibits 9 and 1 respectively) and the ability of
EPA equitably to implement the rule (exhibit 1) significantly higher than conventional rule-making
participants did. Further, while conventional rule-making participants were more likely to say that there was no party with
disproportionate influence during the development of the rule, reg neg participants volunteered significantly more positive
comments and significantly fewer negative comments about the process overall. In general, reg
neg appears more likely
than conventional rule making to leave participants with a warm glow about the decision-making process. While the regression results show that the costs and benefits of the rule being promulgated figure
prominently into the respondents' overall assessment of the final rule, process matters too. Participants care not
only about how rules and policies affect them economically, they also care about how the
authorities who make and implement rules and policies treat them (and others). In fact, one reg neg
respondent, the owner of a small shop that manufactured wood burning stoves, remarked
about the woodstoves rule, which would put him out of business, that he felt satisfied even as
he participated in his own "wake." It remains for further research to show whether this warm glow affects long term
compliance and whether it extends to affected parties who were not direct participants in the negotiation process. It is unclear from
our research whether greater satisfaction with negotiated rules implies that negotiated rules are Pareto-superior to conventionally
written rules.13 Becker's (1983) theory of political competition among interest groups implies that in the absence of transactions
costs, groups that bear large costs and opposing groups that reap large benefits have directly proportional and equal incentives to
lobby. Politicians who seek to maximize net political support respond by balancing costs and benefits at the margin, and the
resulting equilibrium will be no worse than market failure would be. Transactions costs, however, are not zero, and they may not be
equal for interests on each side of an issue. For example, in many environmental policy issues, the benefits are dispersed and occur
in the future, while some, but not all, costs are concentrated and occur now. The consequence is that transactions costs
are
different for beneficiaries than for losers. If reg neg reduces transactions costs compared to conventional rule
making, or if reg neg reduces the imbalance in transactions costs between winners and losers, or among different kinds of winners
and losers, then it
might be reasonable to expect negotiated rules to be Pareto-superior to
conventionally written rules. Reg neg may reduce transactions costs in two ways. First,
participation in writing the proposed rule (which sets the agenda that determines the final rule) is direct, at least
for the participants. In conventional rule making, each interest has a repeated, bilateral relation with the rule-making
agency; the rule-making agency proposes the rule (and thereby controls the agenda for the final rule), and affected
interests respond separately to what is in the agency proposal. In negotiated rule making, each interest (including the agency) is in a
repeated N-person set of mutual relations; the negotiating group drafts the proposed rule, thereby setting the agenda for the final
rule. Since
the agency probably knows less about each group's costs and benefits than the group
knows about its own costs and benefits, the rule that emerges from direct negotiation should be
a more accurate reflection of net benefits than one that is written by the agency (even though the
agency tries to be responsive to the affected parties). In effect, reg neg can be expected to better establish a
core relationship of trust, reputation, and reciprocity that Ostrom (1998) argues is central to
improving net social benefits. Reg neg may reduce transactions costs not only by entailing
repeated mutual rather than bilateral relations, but also by face to face communication. Ostrom
(1998, 13) argues that face-to-face communication reduces transactions costs by making it easier to
assess trustworthiness and by lowering the decision costs of reaching a "contingent agreement,"
in which "individuals agree to contribute x resources to a common effort so long as at least y
others also contribute." In fact, our survey results show that reg neg participants are significantly more
likely than conventional rule-making participants to believe that others will comply with the final
rule (exhibit 1). In the absence of outside assessments that compare net social benefits of the conventional and negotiated rules in
this study,15 the hypothesis that reg neg is Pareto superior to conventional rule making remains an untested speculation.
Nonetheless, it seems to be a plausible hypothesis based on recent theories regarding the importance of institutions that foster
participation in helping to effect Pareto-preferred social outcomes.
A consensus will be reached—parties have incentives to cooperate and
compromise
Harter 09
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme People's Court. He has received multiple awards for his achievements in administrative law. He is listed
in Who's Who in America and is a member of the Administrative Conference of the United States. Harter, P. J. “Collaboration: The
Future of Governance,” Journal of Dispute Resolution, Volume 2009, Issue 2, Article 7. 2009.
http://scholarship.law.missouri.edu/cgi/viewcontent.cgi?article=1581&context=jdr//ghs-kw)
Consensus is often misunderstood. It is typically used, derisively, to mean a group decision that is the consequence of a "group think" that resulted from little or no exploration of the issues, with neither general inquiry, discussion, nor deliberation. A common example would be the boss's saying, "Do we all agree? . . . Good, we have a consensus!" In this context, consensus is the acquiescence to an accepted point of view. It is, as is often alleged, the lowest common denominator that is developed precisely to avoid controversy as opposed to generating a better answer. It is a decision resulting from the lack of diversity. It is in fact actually a cascade that may be more extreme than the views of any member! Thus, the question legitimately is, if this is the understanding of the term, would you want it if you could get it, or would the result be too diluted? A number of articles posit, with neither understanding nor research, that it always results in the least common denominator. Done right, however, consensus is exactly the opposite: it is the wisdom of crowds. It builds on the insights and experiences of diversity. And it is a vital element of collaborative governance in terms of actually reaching agreement and in terms of the quality of the resulting agreement. That undoubtedly sounds counterintuitive, especially for the difficult, complex, controversial matters that are customarily the subject of direct negotiations among governments and their constituents. Indeed, you often hear that it can't be done. One would expect that the controversy would make consensus unlikely or that if concurrence were obtained, it would likely be so watered down—that least common denominator again—that it would not be worth much. But, interestingly, it has exactly the opposite effect. Consensus can mean many things so it is important to understand what consensus is for these purposes. The default definition of consensus in the Negotiated Rulemaking Act is the "unanimous concurrence among the interests represented on [the] . . . committee." Thus, each interest has a veto over the decision, and any party may block a final agreement by withholding concurrence. Consensus has a significant impact on how the negotiations actually function:
• It makes it "safe" to come to the table. If the committee were to make decisions by voting, even if a supermajority were required, a party might fear being outvoted. In that case, it would logically continue to build power to achieve its will outside the negotiations. Instead, it has the power inside the room to prevent something from happening that it cannot live with. Thus, at least for the duration of the negotiations, the party can focus on the substance of the policy and not build political might.
• The committee is converted from a group of disparate, often antagonistic, interests into one with a common purpose: reaching a mutually acceptable agreement. During a policy negotiation such as this, you can actually feel the committee snap together into a coherent whole when the members realize that.
• It forces the parties to deal with each other which prevents "rolling" someone: "OK, I have the votes, so shut up and let's vote." Rolling someone in a negotiation is a very good way to create an opponent, to you and to any resulting agreement. Having to actually listen to each other also creates a friction of ideas that results in better decisions—instead of a cascade, it generates the "wisdom of crowds."
• It enables the parties to make sophisticated proposals in which they agree to do something, but only if other parties agree to do something in return. These "if but only if" offers cannot be made in a voting situation for fear that the offeror would not obtain the necessary quid pro quo.
• It also enables the parties to develop and present information they might otherwise be reluctant to share for fear of its being misused or used against them. A veto prevents that.
• If a party cannot control the decision, it will logically amass as much factual information as possible in order to limit the discretion available to the one making the decision; the theory is that if you win on the facts, the range of choices as to what to do on the policy is considerably narrowed. Thus, records are stuffed with data that may well be irrelevant to the outcome or on which the parties largely agree. If the decision is made by consensus, the parties do control the outcome, and as a result, they can concentrate on making the final decision. The question for the committee then becomes, how much information do we need to make a responsible resolution? The committee may not need to resolve many of the underlying facts before a policy choice is clear. Interestingly, therefore, the use of consensus can significantly reduce the amount of defensive (or probably more accurately, offensive) record-building that customarily attends adversarial processes.
• It forces the parties to look at the agreement as a whole—consensus is reached only on the entire package, not its individual elements. The very essence of negotiation is that different parties value issues differently. What is important to one party is not so important to another, and that makes for trades that maximize overall value. The resulting agreement can be analogized to buying a house: something is always wrong with any house you would consider buying (price, location, kitchen needs repair, etc.), but you cannot buy only part of a house or move it to another location; the choice must be made as to which house—the entire thing—you will purchase.
• It also means that the resulting decision will not stray from the statutory mandate. That is because one of the parties to the negotiation is very likely to benefit from an adherence to the statutory requirements and would not concur in a decision that did not implement it.
• Finally, if all of the parties represented concur in the outcome, the likelihood of a successful challenge is greatly reduced so that the decision has a rare degree of finality.
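Note on the “if but only if” point (our example, not Harter's): under a unanimity rule, an industry representative can safely offer to accept a tighter standard if, but only if, the agency commits to a longer compliance schedule, because either side can veto a package that drops its half of the trade. Under majority voting the same offer is risky: once the concession is on the table, a coalition could lock in the tighter standard and vote down the schedule, so the offer never gets made in the first place.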
2NC AT Speed
Reg neg is better—solves faster
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme People's Court. He has received multiple awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference of the United States. Harter, P. J. “Assessing the
Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Properly understood, therefore, the
average length of EPA’s negotiated rulemakings — the time it took
EPA to fulfill its goal — was 751 days or 32% faster than traditional rulemaking. This knocks a
full year off the average time it takes EPA to develop a rule by the traditional method. And, note these are highly complex and controversial rules and that one of them survived Presidential intervention. Thus, the dynamics surrounding these rules are by no means “average.” This means
that reg neg’s actual performance is much better than that. Interestingly and consistently, the average time
for all of EPA’s reg negs when viewed in context is virtually identical to that of the sample drawn by Kerwin and Furlong77 —
differing by less than a month. Furthermore, if all of the reg negs that were conducted by all the agencies that were included in
Coglianese’s table78 were analyzed along the same lines as discussed here,79 the average
time for all negotiated
rulemakings drops to less than 685 days.80 No Substantive Review of Rules Based on Reg Neg Consensus. Coglianese
argues that negotiated rules are actually subjected to a higher incident of judicial review than are rules developed by traditional
methods, at least those issued by EPA.81 But, like his analysis of the time it takes to develop rules, Coglianese fails to look at either
what happened in the negotiated rulemaking itself or the nature of any challenge. For example, he makes much of the fact that the
Grand Canyon visibility rule was challenged by interests that were not a party to the negotiations;82 yet, he also points out that this
rule was not developed under the Negotiated Rulemaking Act83 which explicitly establishes procedures that are designed to ensure
that each interest can be represented. This challenge demonstrates the value of convening negotiations.84 And, it is significantly
misleading to include it when discussing the judicial review of negotiated rules since the process of reg neg was not followed. As for
Reformulated Gasoline, the rule as issued by EPA did not reflect the consensus but rather was modified by EPA under the direction
of President Bush.85 There were, indeed, a number of challenges to the application of the rule,86 but amazingly little to the rule
itself given its history. Indeed, after the proposal was changed, many members of the committee continued to meet in an effort to
put Humpty Dumpty back together again, which they largely did; the
fact that the rule had been negotiated not
only resulted in a much better rule,87 it enabled the rule to withstand in large part a massive
assault. Coglianese also somehow attributes a challenge within the World Trade Organization to a shortcoming of reg neg even
though such issues were explicitly outside the purview of the committee; to criticize reg neg here is like saying surgery is not
effective when the patient refused to undergo it. While the Underground Injection rule was challenged, the committee never
reached an agreement88 and, moreover, the convening report made clear that there were very strong disagreements over the
interpretation of the governing statute that would likely have to be resolved by a Court of Appeals. Coglianese also asserts that the
Equipment Leaks rule was the subject of review; it was, but only because the Clean Air Act requires parties to file challenges in a very
short period, and a challenger therefore filed a defensive challenge while it worked out some minor details over the regulation.
Those negotiations were successful and the challenge was withdrawn. The Chemical Manufacturers Association, the challenger, had
no intention of a substantive challenge.89 Moreover, a challenge to other parts of the HON should not be ascribed to the Equipment
Leaks part of the rule. The agreement in the Asbestos in Schools negotiation explicitly contemplated judicial review — strange, but
true — and hence it came as no surprise and as no violation of the agreement. As for the Wood Furniture Rule, the challenges were
withdrawn after informal negotiations in which EPA agreed to propose amendments to the rule.90 Similarly, the challenge to EPA’s
Disinfectant By-Products Rule91 was withdrawn. In short, the rules that have emerged from negotiated rulemaking have been
remarkably resistant to substantive challenges. And, indeed, this far into the development of the process, the standard of review
and the extent to which an agreement may be binding on either a signatory or someone whom a party purports to represent are still
unknown — the speculation of many an administrative law class.92 Thus, here too, Coglianese
paints a substantially
misleading picture by failing to distinguish substantive challenges to rules that are based on a
consensus from either challenges to issues that were not the subject of negotiations or were
filed while some details were worked out. Properly understood, reg negs have been
phenomenally successful in warding off substantive review.
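Note on Harter's numbers (our arithmetic, reading “32% faster” as 32% less elapsed time): a 751-day average that is 32% faster than conventional rulemaking implies a conventional baseline of roughly 751 / (1 − 0.32) ≈ 1,104 days. The difference, about 353 days, is the “full year” the card says reg neg knocks off EPA's average; the 685-day all-agency figure would widen that gap to over 400 days.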
Reg negs solve faster and better—Coglianese’s results concluded neg when
properly interpreted
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J.
Harter is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of
Missouri. He has been involved in the design of many of the major developments of administrative law in the past 40 years. He is
the author of more than 50 papers and books on administrative law and has been a visiting professor or guest lecturer
internationally, including at the University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape
Town). He has consulted on environmental mediation and public participation in rulemaking in China, including a project
sponsored by the Supreme People's Court. He has received multiple awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference of the United States. Harter, P. J. “Assessing the
Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Negotiated Rulemaking Has Fulfilled its Goals. If “better rules” were the aspirations for negotiated rulemaking, the
question remains as to whether the process has lived up to the expectations. From my own personal experience, the rules that
emerge from negotiated rulemaking tend to be both more stringent and yet more cost
effective to implement. That somewhat paradoxical result comes precisely from the practical
orientation of the committee: it can figure out what information is needed to make a
reasonable, responsible decision and then what actions will best achieve the goal; it can,
therefore, avoid common regulatory mistakes that are costly but do not contribute substantially
to accomplishing the task. The only formal evaluation of negotiated rulemaking that has been conducted supports these
observations. After his early article analyzing the time required for negotiated rulemaking, Neil Kerwin undertook an evaluation of
negotiated rulemaking at the Environmental Protection Agency with Dr. Laura Langbein.103 Kerwin
and Langbein
conducted a study of negotiated rulemaking by examining what actually occurs in a reg neg versus the development
of rules by conventional means. To establish the requisite comparison, they “collected data on litigation, data from the comments
on proposed rules, and data from systematic, open-ended interviews with participants in 8 negotiated rules . . . and in 6
‘comparable’ conventional rules.”104 They interviewed 51 participants of conventional rulemaking and 101 from various negotiated
rulemaking committees.105 Kerwin
and Langbein’s important work provides the only rigorous,
empirical evaluation that compares a number of factors of conventional and negotiated
rulemaking. Their overall conclusion is: Our research contains strong but qualified support for the continued
use of negotiated rulemaking. The strong support comes in the form of positive assessments
provided by participants in negotiated rulemaking compared to assessments offered by those
involved in conventional form of regulation development. Further, there is no evidence that
negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no
more (or less) subject to litigation than conventional rules. It is also true that negotiated rulemaking at the EPA
is used largely to develop rules that entail particularly complex issues regarding the implementation and enforcement of legal
obligations rather than those that set the substantive standards themselves. However, participants’
assessments of the
resulting rules are more positive when the issues to be decided entail those of establishing
rather than enforcing the standard. Further, participants’ assessments are also more positive
when the issues to be decided are relatively more complex. Our research would support a recommendation
that negotiated rulemaking continue to be applied to complex issues, and more widely applied to include those entailing the
standard itself.106 Their findings are particularly powerful when comparing individual attributes of negotiated and conventional
rules. Table 3 contains a summary of those comparisons. Importantly, negotiated
rules were viewed more favorably
on every criterion, and significantly so in several dimensions that are often contentious in regulatory debates —
• the economic efficiency of the rule and its cost effectiveness,
• the quality of the scientific evidence and the incorporation of appropriate technology, and
• although “personal experience” is not usually considered in dialogues over regulatory procedure, Kerwin and Langbein’s findings here too favor negotiated rules.
Conclusion. The
benefits envisioned by the
proponents of negotiated rulemaking have indeed been realized. That is demonstrated both
by Coglianese’s own methodology when properly understood and by the only careful and
comprehensive comparative study. Reg neg has proven to be an enormously powerful tool in
addressing highly complex, politicized rules. These are the very kind that stall agencies when
using traditional or conventional procedures.107 Properly understood and used appropriately,
negotiated rulemaking does indeed fulfill its expectations.
2NC AT Transparency
The process is transparent
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto,
and a Master of Laws and a Doctor of Juridical Science from Harvard University. She served as Counselor for
Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and
institutional design, and a leading thinker on collaborative and contractual approaches to governance. After leaving the White
House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural reform at the Department
of the Interior. She has been appointed to the Administrative Conference of the United States, the government think tank for
improving the effectiveness and efficiency of federal agencies, and is a member of the American College of Environmental
Lawyers. Laura I. Langbein is Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice at
American University. She holds a PhD in Political Science from the University of North Carolina and a BA in Government from Oberlin
College. Freeman, J. and Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
Defenders of reg neg retorted that negotiated
rules were far from secret deals. The Negotiated
Rulemaking Act of 1990 (“NRA”) requires federal agencies to provide notice of regulatory
negotiations in the Federal Register,50 to formally charter reg neg committees,51 and to
observe the transparency and accountability requirements52 of the Federal Advisory Committee
Act.53 Any individual or organization that might be “significantly affected” by a proposed rule can apply for membership in a reg
neg committee,54 and even if the agency rejects their application, they remain free to attend as spectators.55 Most significantly, the
NRA requires that the agency submit negotiated rules to traditional notice and comment.56
2NC AT Undemocratic
The process is equal and fair
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and
environmental law. She holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto,
and a Master of Laws and a Doctor of Juridical Science from Harvard University. She served as Counselor for
Energy and Climate Change in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and
institutional design, and a leading thinker on collaborative and contractual approaches to governance. After leaving the White
House, she advised the National Commission on the Deepwater Horizon oil spill on topics of structural reform at the Department
of the Interior. She has been appointed to the Administrative Conference of the United States, the government think tank for
improving the effectiveness and efficiency of federal agencies, and is a member of the American College of Environmental
Lawyers. Laura I. Langbein is Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and Public Choice at
American University. She holds a PhD in Political Science from the University of North Carolina and a BA in Government from Oberlin
College. Freeman, J. and Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
On balance, the combined results of Phase I and II of the study suggest that reg
neg is superior to conventional
rulemaking on virtually all of the measures that were considered. Strikingly, the process engenders a significant
learning effect, especially compared to conventional rulemaking; participants report, moreover, that this
learning has long-term value not confined to a particular rulemaking. Most significantly, the
negotiation of rules appears to enhance the legitimacy of outcomes. Kerwin and Langbein's data indicate
that process matters to perceptions of legitimacy. Moreover, as we have seen, reg neg participant reports of
higher satisfaction could not be explained by their assessments of the outcome alone. Instead, higher satisfaction seems to arise in
part from a combination of process and substance variables. This suggests a link between procedure and satisfaction, which is
consistent with the mounting evidence in social psychology that "satisfaction is one of the principal consequences of procedural
fairness." This potential for procedure to enhance satisfaction may prove especially salutary precisely when participants do not
favor outcomes. As Tyler and Lind have suggested, "hedonic glee" over positive outcomes may "obliterate" procedural effects;
perceptions of procedural fairness may matter more, however, "when outcomes are negative (and) organizations have the greatest
need to render decisions more palatable, to blunt discontent, and to give losers reasons to stay committed to the organization." At
a minimum, the data call into question—and sometimes flatly contradict—most of the theoretical
criticisms of reg neg that have surfaced in the scholarly literature over the last twenty years.
There is no evidence that negotiated rulemaking abrogates an agency's responsibility to
implement legislation. Nor does it appear to exacerbate power imbalances or increase the risk
of capture. When asked whether any party seemed to have disproportionate influence during
the development of the rule, about the same proportion of reg neg and conventional
participants said yes. Parties perceived their influence to be about the same for conventional
and negotiated rules, undermining the hypothesis that reg neg exacerbates capture.
Commissions CP
1NC
Counterplan: The United States Congress should establish an independent
commission empowered to submit to Congress recommendations regarding
domestic federal government surveillance. Congress will have 60 days to pass
legislation overriding the commission's recommendations by a two-thirds majority; if Congress
does not vote within that period, the recommendations become
law. The Commission should recommend to Congress that _____<insert the
plan>_______
Commission solves the plan
RWB 13
(Reporters Without Borders is a non-profit organization with consultative status at the UN and UNESCO. “US congress urged to create
commission to investigate mass snooping,” RWB, 06-10-2013. https://en.rsf.org/united-states-us-congress-urged-to-create-10-06-2013,44748.html//ghs-kw)
Reporters Without Borders calls on the US Congress to create a commission of enquiry into the links
between US intelligence agencies and nine leading Internet sector companies that are alleged
to have given them access to their servers. The commission should also identify all the
countries and organizations that have contributed to the mass digital surveillance machinery
that – according to reports in the Washington Post and Guardian newspapers in the past few days – the US authorities
have created. According to these reports, the telephone company Verizon hands over the details of the phone
calls of millions of US and foreign citizens every day to the National Security Agency (NSA), while nine
Internet majors – including Microsoft, Yahoo, Facebook, Google and Apple – have given the
FBI and NSA direct access to their users’ data under a secret programme called Prism. US intelligence
agencies are reportedly able to access all of the emails, audio and video files, instant messaging
conversations and connection data transiting through these companies’ servers. According to The
Guardian, Government Communication Headquarters (GCHQ), the NSA’s British equivalent, also has access to data collected under
Prism. The proposed congressional commission should
evaluate the degree to which the collected data
violates privacy and therefore also freedom of expression and information. The commission’s
findings must not be classified as defence secrets. These issues – protection of privacy and freedom of
expression – are matters of public interest.
2NC O/V
Counterplan solves 100% of the case—Congress creates an independent
commission composed of experts to debate the merits of the plan, and the
commission recommends to Congress that it passes the plan—Congress must
pass legislation specifically blocking those recommendations within 60 days or
the commission’s recommendations become law
AND, that solves the AFF—commissions are empowered to debate Internet
backdoors and submit recommendations—that’s RWB
2NC Solvency
Empirics prove commissions solve
FT 10
(Andrews, Edmund. “Deficit Panel Faces Obstacles in Poisonous Political Atmosphere,” Fiscal Times. 02-18-2010.
http://www.thefiscaltimes.com/Articles/2010/02/18/Fiscal-Commission-Faces-Big-Obstacles?page=0%2C1//ghs-kw)
Supporters of a bipartisan deficit commission note that at
least two previous presidential commissions succeeded
at breaking through intractable political problems when Congress was paralyzed. The 1983
Greenspan commission, headed by Alan Greenspan, who later became chairman of the Federal Reserve, reached an
historic agreement to gradually raise Social Security taxes and gradually increase the
minimum age at which workers qualify for Social Security retirement benefits. Those
recommendations passed both the House and Senate, and averted a potentially catastrophic
financial crisis with Social Security.
2NC Solves Better
CP solves better—technical complexity
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional
Commissions: Overview, Structure, and Legislative Considerations,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Obtaining Expertise Congress
may choose to establish a commission when legislators and their staffs do
not currently have sufficient knowledge or expertise in a complex policy area.22 By assembling experts
with backgrounds in particular policy areas to focus on a specific mission, legislators might
efficiently obtain insight into complex public policy problems.23
2NC Cybersecurity Solvency
Commissions are key—solves legitimacy and perception
Abrahams and Bryen 14
(Rebecca Abrahams and Dr. Stephen Bryen, CCO and Chairman of Ziklag Systems, respectively. "Investigating Heartbleed,"
Huffington Post. 04-11-2014. http://www.huffingtonpost.com/rebecca-abrahams/investigating-heartbleed_b_5134404.html//ghs-kw)
But who can investigate the matter? This is a non-trivial question because the government is no
longer trustworthy. Congress could set up an independent commission to investigate
compromises to computer security. It should be staffed by experts in cryptography and by
national security specialists. The Commission, if empowered, should also make
recommendations on a way forward for internet security. What is needed is a system that is
accountable, where the participants are reliable, and where there is security from interference
of any kind. Right now, no one can, or should, trust the Internet.
2NC Politics NB
No link to politics—commissions result in bipartisanship and bypass
Congressional politics
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional
Commissions: Overview, Structure, and Legislative Considerations,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Overcoming Political Complexity Complex policy issues may also create institutional problems because they do not fall neatly within
the jurisdiction of any particular committee in Congress.26 By virtue of their ad hoc status, commissions may circumvent such issues.
Similarly, a
commission may allow particular legislation or policy solutions to bypass the traditional
development process in Congress, potentially removing some of the impediments inherent in a
decentralized legislature.27 Consensus Building Legislators seeking policy changes may be confronted
by an array of political interests, some in favor of proposed changes and some against. When
these interests clash, the resulting legislation may encounter gridlock in the highly structured
political institution of the modern Congress.28 By creating a commission, Congress can place
policy debates in a potentially more flexible environment, where congressional and public
attention can be developed over time.29 Reducing Partisanship Solutions to policy problems produced
within the normal legislative process may also suffer politically from charges of partisanship.30
Similar charges may be made against investigations conducted by Congress.31 The non-partisan or bipartisan
character of most congressional commissions may make their findings and recommendations
less susceptible to such charges and more politically acceptable to diverse viewpoints. The
bipartisan or nonpartisan arrangement can potentially give their recommendations strong
credibility, both in Congress and among the public, even when dealing with divisive issues of
public policy.32 Commissions may also give political factions space to negotiate compromises in
good faith, bypassing the short-term tactical political maneuvers that accompany public
negotiations.33 Similarly, because commission members are not elected, they may be better suited
to suggesting unpopular, but necessary, policy solutions.34 Solving Collective Action Problems A
commission may allow legislators to solve collective action problems, situations in which all
legislators individually seek to protect the interests of their own district, despite widespread
agreement that the collective result of such interests is something none of them prefer.
Legislators can use a commission to jointly “tie their hands” in such circumstances, allowing
general consensus about a particular policy solution to avoid being impeded by individual
concerns about the effect or implementation of the solution.35 For example, in 1988 Congress
established the Base Closure and Realignment Commission (BRAC) as a politically and
geographically neutral body to make independent decisions about closures of military bases.36
The list of bases slated for closure by the commission was required to be either accepted or rejected as a
whole by Congress, bypassing internal congressional politics over which individual bases would be closed,
and protecting individual Members from political charges that they didn’t “save” their district’s base.37
CP avoids the focus link to politics
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional
Commissions: Overview, Structure, and Legislative Considerations,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Overcoming Issue Complexity Complex
policy issues may cause time management challenges for
Congress. Legislators often keep busy schedules and may not have time to deal with intricate or
technical policy problems, particularly if the issues require consistent attention over a period of
time.24 A commission can devote itself to a particular issue full-time, and can focus on an
individual problem without distraction.25
No link to politics—commissions create bipartisan negotiations
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
The third major reason for Congress to delegate to a commission is the strategy of distancing itself
from a politically risky decision. These instances generally occur when Congress faces redistributive policy problems,
such as Social Security, military base closures, Medicare, and welfare. Such problems are the most difficult because
legislators must take a clear policy position on something that has greater costs to their districts than
benefits, or that shifts resources visibly from one group to another. Institutionally, Congress has to make national policy that has a
collective benefit, but the self-interest of lawmakers often gets in the way. Members
realize that their individual
interests, based on constituents’ demands, may be at odds with the national interest, and this
can lead to possible electoral repercussions. 55 Even when pursuing policies that are in the
interests of the country as a whole, legislators do not want to be blamed for causing losses to
their constituents. In such an event, the split characteristics of the institution come into direct
conflict. Many on Capitol Hill endorse a commission for effectively resolving a policy problem rather than the other machinery
available to Congress. A commission finds remedies when the normal decision making process has stalled. A long-time Senate staff
director said of the proposed Second National Blue Ribbon Commission to Eliminate Waste in Government: “At
their most
effective, these panels allow Congress to realize purposes most members cannot find the
confidence to do unless otherwise done behind the words of the commission.” 56 When an issue
imposes concentrated costs on individual districts yet provides dispersed benefits to the nation, Congress responds by masking
legislators’ individual contributions and delegates responsibility to a commission for making unpleasant decisions. 57 Members
avoid blame and promote good policy by saying something is out of their hands. This method
allows legislators— especially those aiming for reelection— to vote for the general benefit of
something without ever having to support a plan that directly imposes large and traceable geographic
costs on their constituents. The avoidance or share-the-blame route was much of the way Congress
and the president finally dealt with the problem of financially shoring up Social Security in the
late 1980s. One senior staff assistant to a western Republican representative observed that the creation of the Social Security
Commission was largely for avoidance: “There are sacred cows and then there is Social Security. Neither party or any politician
wants to cut this. Regardless of what you say or do about it, in the end, you defer. Everyone backs away from this.” Similarly, a
legislative director to a southern Democratic representative summarized: “So many people are getting older and when you take a
look at who turns out, who registers, people over sixty-five have the highest turnout and they vote like clockwork.” The Commission
on Executive, Legislative, and Judicial Salaries, later referred to as the Quadrennial Commission (1967), is another example.
Lawmakers delegated to a commission the power to set pay for themselves and other top federal officials, whose pay they linked to
their own, to help them avoid blame. Increasing their own pay is a decision few politicians willingly endorse. Because
the
proposal made by the commission would take effect unless Congress voted to oppose it, the
use of the commission helped insulate legislators from political hazards. 58 That is, because it was the
commission that granted pay raises, legislators could tell their constituents that they would have voted against the increase if given
the chance. Members could get the pay raise and also the credit for opposing it. Redistribution is the most visible public policy type
because it involves the most conspicuous, long run allocations of values and resources. Most divisive socioeconomic issues—
affirmative action, medical care for the aged, aid to depressed geographic areas, public housing, and the elimination of identifiable
governmental actions— involve debates over equality or inequality and degrees of redistribution. These
are “political hot
potatoes, in which a commission is a good means of putting a fire wall between you [the
lawmaker] and that hot potato,” the chief of staff to a midwestern Democratic representative acknowledged. Base
closing took on a redistributive character as federal expenditures outpaced revenues. It was marked not only by extreme conflict but
also by techniques to mask or sugarcoat the redistributions or make them more palatable. The
Base Closure Commission
(1991) was created with an important provision that allowed for silent congressional approval of
its recommendations. Congress required the commission to submit its reports of proposed closures to the secretary of
defense. The president had fifteen days to approve or disapprove the list in its entirety. If approved, the list of recommended base
closures became final unless both houses of Congress adopted a joint resolution of disapproval within forty-five days. Congress had
to consider and vote on the recommendations en bloc rather than one by one, thereby giving the appearance of spreading the
misery equally to affected clienteles. A former staff aide for the Senate Armed Services Committee who was active in the creation of
the Base Closure Commission contended, “There was simply no political will by Congress. The then-secretary of
defense started the process [base closing] with an in-house commission [within the Defense Department].
Eventually, however, Congress used the commission idea as a ‘scheme’ for a way out of a ‘box.’” CONCLUSION
Many congressional scholars attribute delegation principally to electoral considerations. 59 For example, in the delegation of
legislative authority to standing committees, legislators, keen on maximizing their reelection prospects, request assignments to
committees whose jurisdictions coincide with the interests of key groups in their districts. Delegation of legislative functions to the
president, to nonelected officials in the federal bureaucracy, or to ad hoc commissions also grows out of electoral motives. Here,
delegation fosters the avoidance of blame. 60 Mindful that most policies entail both costs and
benefits, and apprehensive that those suffering the costs will hold them responsible, members
of Congress often find that the most attractive option is to let someone else make the tough
choices. Others see congressional delegation as unavoidable (and even desirable) in light of basic structural flaws in the design of
Congress. 61 They argue that Congress is incapable of crafting policies that address the full complexity of modern-day problems. 62
Another charge is that congressional
action can be stymied at several junctures in the legislative
policymaking process. Congress is decentralized, having few mechanisms for integrating or
coordinating its policy decisions; it is an institution of bargaining, consensus-seeking, and
compromise. The logic of delegation is broad: to fashion solutions to tough problems, to broker
disputes, to build consensus, and to keep fragile coalitions together. The commission co-opts
the most publicly ideological and privately pragmatic, the liberal left and the conservative right.
Leaders of both parties or their designated representatives can negotiate a deal without the media,
the public, or interest groups present. When deliberations are private, parties can make offers without being
denounced either by their opponents or by affected groups. Removing external contact reduces the
opportunity to use an offer from the other side to curry favor with constituents.
2NC Commissions Popular
Commissions give political cover—result in compromise
Fiscal Seminar 9
(The Fiscal Seminar is a group of scholars who meet on a regular basis, under the auspices of The Brookings Institution and The
Heritage Foundation, to discuss federal budget and fiscal policy issues. The members of the Fiscal Seminar acknowledge the
contributions of Paul Cullinan, a former colleague and Brookings scholar, in the development of this paper, and the editorial
assistance of Emily Monea. “THE POTENTIAL ROLE OF ENTITLEMENT OR BUDGET COMMISSIONS IN ADDRESSING LONG-TERM
BUDGET PROBLEMS,” The Fiscal Seminar. 06-2009.)
In contrast, the
Greenspan Commission provided a forum for developing a political compromise on
a set of politically unsavory changes. In this case, the political parties shared a deep concern about the
impending insolvency of the Social Security system but feared the exposure of promoting their own solutions.
The commission created political cover for the serious background negotiations that resulted in
the ultimate compromise. The structure of the commission reflected these concerns and was
composed of fifteen members, with the President, the Senate Majority Leader, and the Speaker
of the House each appointing five members to the panel.
2NC AT Perm do the CP
Permutation is severance:
1. Severance: CP’s mechanism is distinct—delegates to the commission and
isn’t Congressional action
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the
University of California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining
the National War College, Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the
House Intelligence Committee's Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled
Appropriations, Defense and Trade matters for the congressman. Before that, he was an Analyst in American National
Government at the Congressional Research Service, an Associate Professor of Political Science at Florida International
University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to
Senator Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most
recently the Guide to Political Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has
also written more than two dozen chapters and articles on the legislative process. Discharging Congress : Government
by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July 2015. Ghs-kw.)
So why
and when does Congress formulate policy by commissions rather than by the
normal legislative process? Lawmakers have historically delegated authority to others who
could accomplish ends they could not. Does this form of congressional delegation thus reflect the
particularities of an issue area? Or does it mirror deeper structural reasons such as legislative organization, time, or
manageability? In the end, what is the impact on representation versus the effectiveness of delegating discretionary
authority to temporary entities composed largely of unelected officials, or are both attainable together?
2. Severs resolved: resolved means to enact by law—not the counterplan
mandate
Words and Phrases 64 vol 37A
Definition of the word “resolve,” given by Webster is “to express an opinion or determination by resolution
or vote; as ‘it was resolved by the legislature.’” It is of similar force to the word “enact,” which is defined by
Bouvier as meaning “to establish by law”.
3. Severs should: Should requires immediate action
Summers 94 (Justice – Oklahoma Supreme Court, “Kelsey v. Dollarsaver Food
Warehouse of Durant”, 1994 OK 123, 11-8,
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#marker3f
n13)
¶4 The legal question to be resolved by the court is whether the word "should"13 in the May
18 order connotes futurity or may be deemed a ruling in praesenti.14 The answer to this query is not to be
divined from rules of grammar;15 it must be governed by the age-old practice culture of legal professionals and its
immemorial language usage. To determine if the omission (from the critical May 18 entry) of the turgid phrase, "and the
same hereby is", (1) makes it an in futuro ruling - i.e., an expression of what the judge will or would do at a later stage - or
(2) constitutes an in praesenti resolution of a disputed law issue, the trial judge's intent must be garnered from the
four corners of the entire record.16 [CONTINUES – TO FOOTNOTE] 13 "Should" not only is used as a "present indicative"
synonymous with ought but also is the past tense of "shall" with various shades of meaning not always easy to analyze.
See 57 C.J. Shall § 9, Judgments § 121 (1932). O. JESPERSEN, GROWTH AND STRUCTURE OF THE ENGLISH LANGUAGE
(1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a more detailed explanation, see the
Partridge quotation infra note 15. Certain
contexts mandate a construction of the term "should"
as more than merely indicating preference or desirability. Brown, supra at 1080-81 (jury instructions
stating that jurors "should" reduce the amount of damages in proportion to the amount of contributory negligence of
the plaintiff was held to imply an obligation and to be more than advisory); Carrigan v. California
Horse Racing Board, 60 Wash. App. 79, 802 P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a
party "should devote a section of the brief to the request for the fee or expenses" was interpreted to mean that a
party is under an obligation to include the requested segment); State v. Rack, 318 S.W.2d 211, 215
(Mo. 1958) ("should" would mean the same as "shall" or "must" when used in an instruction to the jury which tells the
triers they "should disregard false testimony"). 14 In
praesenti means literally "at the present time." BLACK'S LAW DICTIONARY 792 (6th Ed. 1990). In legal parlance the
phrase denotes that which in law is presently or immediately effective, as opposed to something that will or would become
effective in the future [in futuro]. See Van Wyck v. Knevals, 106 U.S. 360, 365, 1 S.Ct. 336, 337, 27 L.Ed. 201 (1882).
4. Severs should again: should is mandatory
Summers 94 (Justice – Oklahoma Supreme Court, “Kelsey v. Dollarsaver Food
Warehouse of Durant”, 1994 OK 123, 11-8,
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#marker3f
n13)
¶4
The legal question to be resolved by the court is whether the word "should"13 in the May 18 order connotes futurity
or may be deemed a ruling in praesenti.14 The answer to this query is not to be divined from rules of grammar;15 it must
be governed by the age-old practice culture of legal professionals and its immemorial language usage. To determine if the
omission (from the critical May 18 entry) of the turgid phrase, "and the same hereby is", (1) makes it an in futuro ruling - i.e., an expression of what the judge will or would do at a later stage - or (2) constitutes an in praesenti resolution of a
disputed law issue, the trial judge's intent must be garnered from the four corners of the entire record.16 [CONTINUES –
TO FOOTNOTE] 13 "Should" not only is used as a "present indicative" synonymous with ought but also is the past tense of
"shall" with various shades of meaning not always easy to analyze. See 57 C.J. Shall § 9, Judgments § 121 (1932). O.
JESPERSEN, GROWTH AND STRUCTURE OF THE ENGLISH LANGUAGE (1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143,
144 P. 1075, 1080-81 (1914). For a more detailed explanation, see the Partridge quotation infra note 15. Certain
contexts mandate a construction of the term "should" as more than merely indicating
preference or desirability. Brown, supra at 1080-81 (jury instructions stating that jurors "should" reduce the
amount of damages in proportion to the amount of contributory negligence of the plaintiff was held to imply an
obligation and to be more than advisory); Carrigan v. California Horse Racing Board, 60 Wash. App. 79, 802
P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a party "should devote a section of the brief to
the request for the fee or expenses" was interpreted to mean that a party is under an obligation to include the requested
segment); State v. Rack, 318 S.W.2d 211, 215 (Mo. 1958) ("should" would mean
the same as "shall" or
"must" when used in an instruction to the jury which tells the triers they "should disregard false testimony"). 14 In
praesenti means literally "at the present time." BLACK'S LAW DICTIONARY 792 (6th Ed. 1990). In legal parlance the phrase
denotes that which in law is presently or immediately effective, as opposed to something that will or would become
effective in the future [in futuro]. See Van Wyck v. Knevals, 106 U.S. 360, 365, 1 S.Ct. 336, 337, 27 L.Ed. 201 (1882).
Severance is a reason to reject the team:
1. Neg ground—makes the AFF a shifting target and allows them to spike
out of offense
2. Unpredictable—kills clash which destroys advocacy skills and education
2NC AT Perm do Both
Permutation do both links to politics:
1. Congressional debates—CP means Congress doesn’t debate the
substance of the plan, only the commission report—the perm forces
Congress to debate the plan, triggering the link over partisan inclinations
and electoral pressures—that's the politics net benefit ev
2. Time crunch—perm forces the plan now, doesn’t give the commission
time to generate political support and links to politics
Biggs 09
(Biggs, Andrew G. Andrew G. Biggs is a resident scholar at the American Enterprise Institute, where his work focuses
on Social Security and pensions. From 2003 through 2008, he served at the Social Security Administration, as Associate
Commissioner for Retirement Policy, Deputy Commissioner for Policy, and ultimately the principal Deputy
Commissioner of the agency. During 2005, he worked at the White House National Economic Council on Social Security
reform, and in 2001 was on the staff of the President's Commission to Strengthen Social Security. He blogs on Social
Security-related issues at Notes on Social Security Reform. “Rumors Of Obama Social Security Reform Commission,”
Frum Forum. 02-17-2009. http://www.frumforum.com/rumors-of-obama-social-security-reform-commission///ghs-kw)
One problem with President Bush’s 2001 Commission was that it didn’t represent the
reasonable spectrum of beliefs on Social Security reform. This didn’t make it a dishonest commission; like
President Roosevelt’s Committee on Economic Security, it was designed to put flesh on the bones laid out by the
President. In this case, the Commission was tasked with designing a reform plan that included personal accounts and
excluded tax increases. That said, a
commission only builds political capital toward enacting
reform if it’s seen as building a consensus through a process in which all views have
been heard. In both the 2001 Commission and the later 2005 reform drive, Democrats
didn’t feel they were part of the process. They clearly will be a central part of the process this time, but
the goal will now be to include Republicans. Just as Republicans shouldn’t reflexively oppose any Obama administration
reform plans for political reasons, so Democrats shouldn’t seek to exclude Republicans from the process. Second, a
reform task force should include a variety of different players, including members of
government, both legislative and executive, representatives of outside interest groups,
and experts who can provide technical advice and help ensure the integrity of the
reforms decided upon. The 2001 Bush Commission didn’t include any sitting Members of Congress and only a
small fraction of commissioners had the technical expertise needed to make the plans the best they could be. A broader
group would be helpful. Third, any
task force or commission needs time. The 2001 Commission
ran roughly from May through December of that year and had to conduct a number of
public hearings. This was simply too much to do in too little time, and as a result the
plans were fairly bare bones. There is plenty else on the policy agenda at the moment,
so there’s no reason not to give a working group a year or more to put things together.
2NC AT Theory
Counterinterp: process CPs are legitimate if we have a solvency advocate
AND, process CPs good:
1. Key to neg ground—agent CPs are the only generics we have on this
topic
2. Policy education—commissions are key to understanding the policy
process
Schwalbe, 03
(Schwalbe, Steve. PhD in Public Policy from Auburn University; former professor at the Air War College and Colonel in the USAF. “Independent
Commissions: Their History, Utilization and Effectiveness”)
FIFTH BRANCH Many analysts characterize commissions as an unofficial, separate branch of
government, much like the news media. Campbell referred to commissions as the “fifth arm of
government,” after the media, the often-referred-to fourth arm.17 However, the media and independent commissions have as many similarities as
differences. They are similar in that neither is mentioned in the Constitution. Both conduct oversight functions. Both serve to educate and inform the public.
Both allow elites to participate in shaping government policy. On the other hand, the media and independent commissions are dissimilar in many ways. Where
the news media responds to market forces, and hence will likely operate in perpetuity, independent commissions respond to a federal requirement to resolve a
difficult problem. Therefore, they exist for a relatively short period of time, expiring once a final report is published and disseminated. Where the media’s
primary functions are reporting and analyzing the news, a commission’s primary responsibilities can range from
developing a recommended solution to a difficult problem to regulating an entire
department of the executive branch. The media receives its funding primarily from advertisers, where commissions receive their
funding from Congress, the President, or from private sources. The news media deal with issues foreign and domestic, while independent commissions generally
focus on domestic issues. PURPOSE
Commissions serve numerous purposes in the U.S. Government.
Campbell cited three primary reasons for the establishment of federal independent commissions. First, they are established to provide expertise the Congress
does not have among its own elected officials or their staffs. Next, he noted that the second most frequently cited reason by members of Congress for
establishing a commission was to reduce the workload in Congress. Finally, they are formed to provide a convenient scapegoat to deflect the wrath of the
electorate; i.e., “blame avoidance.”18 Fisher found three advantages of regulatory commissions. First, commission members bring essential expert insights to a
commission because the regulated industries are normally “complex and highly technical.” Second, appointing commissioners for extended terms of full-time
work allows commissioners to become very familiar with the technical aspects of an industry, through periodic contacts that Congress would not be able to
accomplish. As a result of their tenure, varied membership, and shared responsibility, commissioners would be resistant to external pressures. Finally, regulatory
commissions provide policy continuity essential to the stability of a regulated industry.19 What the taxpayers are primarily looking for from independent
commissions are non- partisan solutions to current problems. A good example of establishing a commission to find non-partisan solutions is Congress regulating
its own ethical behavior. University of Florida Professor Beth Rosenson researched this issue and concluded that authorizing an ethics commission may be
“based on the fear of electoral retaliation if legislators do not take aggressive action to regulate their own ethics.”20 Campbell noted that commissions perform
several other functions besides providing recommendations to the President and Congress. The most common reason provided by analysts is that members of
Congress generally want to avoid making difficult decisions that may adversely affect their chances for reelection. As he noted, “Incentives to avoid blame lead
members of Congress to adopt a distinctive set of political strategies, such as ‘passing the buck’ or ‘deflection’….”21 Another technique legislators use to avoid
incurring the wrath of the voters is to schedule any controversial independent commissions for after the next election. Establishing a commission to research
the issue and come up with recommendations after a preset period of time is an effective way to do that. The most clear-cut example demonstrating this
technique is the timing of the BRAC commissions in the 1990s — all three made their base closure recommendations in non-election years (1991, 1993, and
1995). Even the next BRAC commission, established by the National Defense Authorization Act for Fiscal Year 2002, is not required to submit its base closure
recommendations until 2005. Congress certainly is not the most efficient organization in the U.S.; hence, there are times when an independent commission is
the more efficient and effective way to go. Lawmakers are almost always short on time and information, which makes the option of delegating authority to a
commission very appealing. Oftentimes, the expertise and necessary information is very costly for Congress to acquire. Commissions are generally the most
inexpensive way for Congress to solve complex problems. From 1993-1997, Campbell found that 92 congressional offices introduced legislation that included
proposals to establish ad hoc commissions.22 There are numerous other reasons for establishing independent commissions. They are created as a symbolic
response to a crisis or to satisfy the electorate at home. They have served as trial balloons to test the political waters, or to make political gains with the voters.
They can be created to gain public or political consensus. Often, when Congress has exhausted all its other options, a commission serves as an option of last
resort.23 Commissions are a relatively impartial way to help resolve problems between the executive and legislative branches of government, especially during
periods of congressional gridlock. Wolanin also noted that commissions are “particularly useful for problems and in circumstances marked by federal executive
branch incapacity.” Federal bureaucracies suffer from many of the same shortcomings attributed to Congress when considering commissions. They often lack the
expertise, information, and time to conduct the research and make recommendations to resolve internal problems. They can be afflicted by groupthink, not
being able to think outside the box, or by not being able to see the big picture. Commissions offer a non-partisan, neutral
option to address bureaucratic policy problems.24 Defense Secretary Donald Rumsfeld has decided to implement the
recommendations of the congressionally-chartered Commission on Space, which he
chaired prior to being appointed Secretary of Defense!25 One of the more important functions of independent commissions is educating and persuading.
Due to the high visibility of most appointed commissioners, a policy issue will automatically tend to gain public attention. According to Wolanin, the prestige and
visibility of commissions give them the capability to focus attention on a problem, and to see that thinking about it permeates more rapidly. A recent example of
a high-visibility commission chair appointment was Henry Kissinger, selected to chair the commission to look into the perceived intelligence failure regarding the
September 11, 2001 terrorist attack on the U.S.26 Wolanin cited four educational impacts of commissions: 1) educating the general public; 2) educating
government officials; 3) serving as intellectual milestones; and, 4) educating the commission members themselves. Regarding education of the general public, he
stated that, “Commissions have helped to place broad new issues on the national agenda, to elevate them to a level of legitimate and pressing matters about
which government should take affirmative action.” Regarding educating government officials, he noted that, “The educational impact of
commissions within government…make it safer for congressmen and federal executives
to openly discuss or advocate a proposal that has been sanctioned by such an ‘august group’.” Commission
reports have often been so influential that they serve as milestones in affected fields. Such reports have
become source material for analysts, commentators, and even students, particularly when commission reports are widely published and disseminated. Finally, by
serving on a commission, members also learn much about the issue, and about the process of analyzing a problem and coming up with viable recommendations.
Commissioners also learn from one another.27
3. Predictability—commissions are widely used and predictable, and the
solvency advocate checks abuse
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the
University of California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining
the National War College, Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the
House Intelligence Committee's Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled
Appropriations, Defense and Trade matters for the congressman. Before that, he was an Analyst in American National
Government at the Congressional Research Service, an Associate Professor of Political Science at Florida International
University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to
Senator Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most
recently the Guide to Political Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has
also written more than two dozen chapters and articles on the legislative process. Discharging Congress : Government
by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July 2015. Ghs-kw.)
Ad hoc commissions as instruments of government have a long history. They are used
by almost all units and levels of government for almost every conceivable task. Ironically,
the use which Congress makes of commissions— preparing the groundwork for
legislation, bringing public issues into the spotlight, whipping legislation into shape, and
giving priority to the consideration of complex, technical, and critical developments—
receives relatively little attention from political scientists. As noted in earlier chapters, following the logic of rational
choice theory, individual decisions to delegate are occasioned by imperfect information; legislators who want to
develop effective policies, but who lack the necessary expertise, often
delegate fact-finding and policy
development. Others contend that some commissions are set up to shift blame in order to maximize benefits and
minimize losses.
4. At worst, reject the argument, not the team
2NC AT Certainty
Counterplan solves your certainty args—expertise
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
By delegating some of its policymaking authority to “expertise commissions,” Congress creates
institutions that reduce uncertainty. Tremendous gains accrue as a result of delegating tasks to
other organizations with a comparative advantage in performing them. Commissions are especially
adaptable devices for addressing problems that do not fall neatly within committees’ jurisdictional boundaries. They can
complement and supplement the regular committees. In the 1990s, it became apparent that committees were
ailing— beset by mounting workloads, duplication and jurisdictional battles, and conflicts between program and funding panels. But
relevant expertise can be mobilized by a commission that brings specialized information to its
tasks, especially if commission members and staff are selected on the basis of education, their
training, and their experience in the area which cross-cut the responsibilities of several standing
committees.
2NC AT Commissions Bad
No disads—commissions are inevitable due to Congressional structure
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
Others see congressional delegation as unavoidable (and even desirable) in light of basic structural
flaws in the design of Congress. 61 They argue that Congress is incapable of crafting policies that
address the full complexity of modern-day problems. 62 Another charge is that congressional action
can be stymied at several junctures in the legislative policymaking process. Congress is
decentralized, having few mechanisms for integrating or coordinating its policy decisions; it is an
institution of bargaining, consensus-seeking, and compromise. The logic of delegation is broad: to fashion
solutions to tough problems, to broker disputes, to build consensus, and to keep fragile
coalitions together. The commission co-opts the most publicly ideological and privately pragmatic, the liberal left and the
conservative right. Leaders of both parties or their designated representatives can negotiate a deal without the media, the public, or
interest groups present. When deliberations are private, parties can make offers without being denounced either by their opponents
or by affected groups. Removing external contact reduces the opportunity to use an offer from the other side to curry favor with
constituents.
2NC AT Congress Doesn’t Pass Recommendations
Recommendations are passed—either bipartisan or perceived as non-partisan
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional
Commissions: Overview, Structure, and Legislative Considerations,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Reducing Partisanship Solutions to policy problems produced within the normal legislative process
may also suffer politically from charges of partisanship.30 Similar charges may be made against
investigations conducted by Congress.31 The non-partisan or bipartisan character of most
congressional commissions may make their findings and recommendations less susceptible to
such charges and more politically acceptable to diverse viewpoints. The bipartisan or
nonpartisan arrangement can potentially give their recommendations strong credibility, both
in Congress and among the public, even when dealing with divisive issues of public policy.32
Commissions may also give political factions space to negotiate compromises in good faith,
bypassing the short-term tactical political maneuvers that accompany public negotiations.33
Similarly, because commission members are not elected, they may be better suited to
suggesting unpopular, but necessary, policy solutions.34
Recommendations are passed—BRAC Commission proves
Fiscal Seminar 9
(The Fiscal Seminar is a group of scholars who meet on a regular basis, under the auspices of The Brookings Institution and The
Heritage Foundation, to discuss federal budget and fiscal policy issues. The members of the Fiscal Seminar acknowledge the
contributions of Paul Cullinan, a former colleague and Brookings scholar, in the development of this paper, and the editorial
assistance of Emily Monea. “THE POTENTIAL ROLE OF ENTITLEMENT OR BUDGET COMMISSIONS IN ADDRESSING LONG-TERM
BUDGET PROBLEMS,” The Fiscal Seminar. 06-2009.)
On the other hand, the
success of BRAC seems to have resulted more from the defined structure and
process of the commission.5 Under BRAC, a package of recommendations originated with the
Department of Defense, was modified by the BRAC commission, and was then reviewed by the President. Congress
then had to consider the package as a whole with no amendments allowed; if it failed to pass a resolution
of disapproval, the recommendations would be implemented as if they had been enacted in law.
Not one of the five sets of BRAC recommendations has been rejected by the Congress.6
2NC AT No Authority
Commissions have broad authority
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, coauthor, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
Congressional commissions have reached the point where they can
take over various fact-finding functions
formerly performed by Congress itself. Once the facts have been found by a commission, it is
possible for Congress to subject those facts to the scrutiny of cross-examination and debate. And
if the findings stand up under such scrutiny, there remains for Congress the major task of determining the policy to be adopted with
reference to the known factual situation. Once it was clear, for example, that the acquired immune deficiency syndrome (AIDS)
yielded an extraordinary range of newfound political and practical difficulties, the need for legislative action was readily apparent.
The question that remained was one of policy: how to prevent the spread of AIDS. Should it be by accelerated research? By public
education? By facilitating housing support for people living with AIDS? Or by implementing a program of AIDS counseling and
testing? The AIDS Commission could help Congress answer such questions.
2NC AT Perception
CP solves your perception arguments
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional
Commissions: Overview, Structure, and Legislative Considerations,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Raising Visibility: By
establishing a commission, Congress can often provide a highly visible forum
for important issues that might otherwise receive scant attention from the public.38 Commissions
often are composed of notable public figures, allowing personal prestige to be transferred to
policy solutions.39 Meetings and press releases from a commission may receive significantly
more attention in the media than corresponding information coming directly from members
of congressional committees. Upon completion of a commission’s work product, public attention may be
temporarily focused on a topic that otherwise would receive scant attention, thus increasing the
probability of congressional action within the policy area.40
Private Sector CP
1NC
Counterplan: the private sector should implement and enforce default
encryption standards at a level equivalent to those announced by Apple in
2014.
Apple’s new standards are unhackable even by Apple—eliminates backdoors
Green 10/4
(Green, Matthew D. Matthew D. Green is an Assistant Research Professor at the Johns Hopkins Information Security Institute. He
completed his PhD in 2008. His research includes techniques for privacy-enhanced information storage, anonymous payment
systems, and bilinear map-based cryptography. "A Few Thoughts on Cryptographic Engineering: Why can't Apple decrypt your
iPhone?” 10-4-2014. http://blog.cryptographyengineering.com/2014/10/why-cant-apple-decrypt-your-iphone.html//ghs-kw)
In the rest of this post I'm going to talk about how these protections may work and how Apple
can realistically claim not
to possess a back door. One caveat: I should probably point out that Apple isn't known for showing up at parties and
bragging about their technology -- so while a fair amount of this is based on published information provided by Apple, some of it is
speculation. I'll try to be clear where one ends and the other begins. Password-based encryption 101 Normal password-based file
encryption systems take in a password from a user, then apply a key derivation function (KDF) that converts a password (and some
salt) into an encryption key. This approach doesn't require any specialized hardware, so it can be securely implemented purely in
software provided that (1) the software is honest and well-written, and (2) the chosen password is strong, i.e., hard to guess. The
problem here is that nobody ever chooses strong passwords. In fact, since most passwords are terrible, it's usually possible for an
attacker to break the encryption by working through a 'dictionary' of likely passwords and testing to see if any decrypt the data. To
make this really efficient, password crackers often use special-purpose hardware that takes advantage of parallelization (using
FPGAs or GPUs) to massively speed up the process. Thus a common defense against cracking is to use a 'slow' key derivation
function like PBKDF2 or scrypt. Each of these algorithms is designed to be deliberately resource-intensive, which does slow down
normal login attempts -- but hits crackers much harder. Unfortunately, modern cracking rigs can defeat these KDFs by simply
throwing more hardware at the problem. There are some approaches to dealing with this -- this is the approach of memory-hard
KDFs like scrypt -- but this is not the direction that Apple has gone. How Apple's encryption works: Apple doesn't use scrypt. Their approach is to add a 256-bit device-unique secret key called a UID to the mix, and
to store that key in hardware where it's hard to extract from the phone. Apple claims that it does
not record these keys nor can it access them. On recent devices (with A7 chips), this key and the mixing
process are protected within a cryptographic co-processor called the Secure Enclave. The Apple
Key Derivation function 'tangles' the password with the UID key by running both through
PBKDF2-AES -- with an iteration count tuned to require about 80ms on the device itself.** The
result is the 'passcode key'. That key is then used as an anchor to secure much of the data on
the phone. Overview of Apple key derivation and encryption (iOS Security Guide, p.10). Since only the device itself
knows UID -- and the UID can't be removed from the Secure Enclave -- this means all password
cracking attempts have to run on the device itself. That rules out the use of FPGA or ASICs to
crack passwords. Of course Apple could write a custom firmware that attempts to crack the
keys on the device but even in the best case such cracking could be pretty time-consuming,
thanks to the 80ms PBKDF2 timing. (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character
password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as
little as half an hour. Choose a good passphrase!) So one view of Apple's process is that it depends on the user picking a strong
password. A different view is that it also depends on the attacker's inability to obtain the UID. Let's explore this a bit more. Securing
the Secure Enclave: The
Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple
devices this key lived in the application processor itself. Secure Enclave provides an extra level of protection
that holds even if the software on the application processor is compromised -- e.g., jailbroken. One
worrying thing about this approach is that, according to Apple's documentation, Apple controls the signing keys that sign the Secure
Enclave firmware. So using these keys, they might be able to write a special "UID extracting" firmware update that would undo the
protections described above, and potentially allow crackers to run their attacks on specialized hardware. Which leads to the
following question: How does Apple avoid holding a backdoor signing key that allows them to extract the UID from the Secure Enclave? It seems to me that there are a few possible ways forward here. Option #1: No software can extract the UID. Apple's documentation even claims that this is the case; that software can only see the
output of encrypting something with UID, not the UID itself. The problem with this explanation is that it isn't
really clear that this guarantee covers malicious Secure Enclave firmware written and signed by Apple. Update 10/4: Comex and
others (who have forgotten more about iPhone internals than I've ever known) confirm that #1 is the right answer. The
UID
appears to be connected to the AES circuitry by a dedicated path, so software can set it as a key,
but never extract it. Moreover this appears to be the same for both the Secure Enclave and
older pre-A7 chips. So ignore options 2-4 below.
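To make the mechanics in the Green evidence concrete, here is a minimal Python sketch of the key derivation the card describes, followed by the brute-force arithmetic it cites. This is an illustration under stated assumptions, not Apple's implementation: the UID value, the use of the UID as a PBKDF2 salt, and the iteration count below are all invented for the example, and on real devices the mixing runs inside the Secure Enclave's dedicated AES hardware, where software never sees the UID.

```python
# Minimal sketch of device-tangled key derivation (illustrative only; Apple's
# real scheme runs PBKDF2-AES inside the Secure Enclave, and DEVICE_UID below
# is a made-up placeholder, not a real key).
import hashlib

DEVICE_UID = bytes(31) + b"\x01"  # hypothetical 256-bit device-unique secret
ITERATIONS = 50_000               # assumed count tuned to ~80ms per derivation

def derive_passcode_key(passcode: str) -> bytes:
    """Tangle the user passcode with the device UID so that every guess
    must be computed on the device that holds the UID."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, ITERATIONS)

# The card's arithmetic: at ~80ms per on-device guess, a random 6-character
# password over lowercase letters and digits takes about 5.5 years to exhaust.
keyspace = 36 ** 6                         # 2,176,782,336 candidates
seconds = keyspace * 0.080                 # 80ms per PBKDF2 run
print(f"{seconds / (365.25 * 24 * 3600):.1f} years")  # -> 5.5 years
```

The same arithmetic explains the card's PIN caveat: a 4-digit PIN has only 10,000 candidates, which clears in under 15 minutes at the same rate, so the scheme's strength still depends on the user choosing a strong passcode.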
2NC O/V
The counterplan solves 100% of the case—private corporations will institute strong encryption standards on all their products and store decryption keys only on the individual devices themselves, retaining no separate means of decryption—this means nobody but the owner of the device can decrypt the information—that's Green
AND, solves backdoors—companies are technologically incapable of providing
backdoors in the world of the CP—solves the AFF—that’s Green
AT Perception
Other companies follow – solves their credibility internal links
Whittaker 14
(Zack Whittaker. "Apple doubles-down on security, shuts out law enforcement from accessing iPhones, iPads," ZDNet. 9-18-2014.
http://www.zdnet.com/article/apple-doubles-down-on-security-shuts-out-law-enforcement-from-accessing-iphonesipads///ghs-kw)
The new encryption methods prevent even Apple from accessing even the relatively small
amount of data it holds on users. "Unlike our competitors, Apple cannot bypass your passcode and
therefore cannot access this data," the company said in its new privacy policy, updated Wednesday. "So it's not
technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running
iOS 8." There are some caveats, however. For the iCloud data it stores, Apple still has the ability (and the legal responsibility) to turn
over data it stores on its own servers, or third-party servers it uses to support the service. iCloud data can include photos, emails,
music, documents, and contacts. In the wake of the Edward Snowden disclosures, Apple
has set itself apart from the
rest of the crowd by bolstering its encryption efforts in such a way that makes it impossible for it
to decrypt the data. Apple chief executive Tim Cook said in a recent interview with PBS' Charlie Rose that if the government
"laid a subpoena" at its doors, Apple "can't provide" the data. He said, bluntly: "We don't have a key. The
door is closed." Although the iPhone and iPad maker was late to the transparency report party, the company has rocketed up
the ranks of the civil liberties table. The Electronic Frontier Foundation's annual reports for 2012 and 2013 showed Apple as having
poor privacy practices around user data, gaining just one star out of five each year. In 2014, Apple scored the full five stars — a
massive turnaround from two years prior. In the meantime, Yahoo
is bolstering encryption between its
datacenters, and recently turned on encryption-by-default on its email service. Microsoft is
also encrypting its network traffic amid reports of the National Security Agency's datacenter
tapping program. And Google is working hard to crack down on government spies cracking into
its networks and cables. Privacy and security are, and have been for a while, the pinnacle of tech
credibility. And Apple just scored about a billion points on that scale, leaving most of its Silicon Valley
partners in the dust.
AT Links to Terror
No link to their disads—other sources of data
NYT 14
(David E. Sanger and Brian X. Chen. "Signaling Post-Snowden Era, New iPhone Locks Out N.S.A. ," New York Times. 9-26-2014.
http://www.nytimes.com/2014/09/27/technology/iphone-locks-out-the-nsa-signaling-a-post-snowden-era-.html?_r=0//ghs-kw)
Mr. Zdziarski said that concerns that Apple's new encryption would hinder law enforcement seemed
overblown. He said there were still plenty of ways for the police to get customer data for
investigations. In the example of a kidnapping victim, the police can still request information on call
records and geolocation information from phone carriers like AT&T and Verizon Wireless.
“Eliminating the iPhone as one source I don’t think is going to wreck a lot of cases,” he said. “There is
such a mountain of other evidence from call logs, email logs, iCloud, Gmail logs. They’re tapping
the whole Internet.”
XO CP
1NC
XOs solve the Secure Data Act
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and
Innovation Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and
technology policies to create economic opportunities and improve quality of life in the United States and around the world.
Founded in 2006, ITIF is a 501(c) 3 nonprofit, non-partisan organization that documents the beneficial role technology plays in our
lives and provides pragmatic ideas for improving technology-driven productivity, boosting competitiveness, and meeting today’s
global challenges through innovation. Daniel Castro is the vice president of the Information Technology and Innovation
Foundation. His research interests include health IT, data privacy, e-commerce, e-government, electronic voting, information
security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst at the Government Accountability Office (GAO)
where he audited IT security and management controls at various government agencies. He has a B.S. in Foreign Service from
Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University. Alan
McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF, Mr. McQuinn
was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission in
the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at
Austin. “Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015.
http://www2.itif.org/2015-beyond-usa-freedom-act.pdf//ghs-kw)
Second, the U.S. government should draw a clear line in the sand and declare that the policy of the U.S. government is to strengthen, not weaken, information security. The U.S. Congress should pass legislation, such as the
Secure Data Act introduced by Sen. Wyden (D-OR), banning any government efforts to introduce backdoors in software or weaken encryption.43 In
the short term, President Obama, or his successor, should sign an executive order formalizing this
policy as well. In addition, when U.S. government agencies discover vulnerabilities in software or hardware products, they should responsibly notify these companies in a timely manner so that the companies can fix these flaws. The best way to protect U.S. citizens from digital threats is to promote strong cybersecurity practices in the private sector.
Zero-Days Adv CP
1NC
Counterplan: the United States federal government should legalize and regulate
the zero-day exploit market.
Regulation is key to stop zero days from falling into enemy hands
Gallagher 13
(Ryan Gallagher. "The Secretive Hacker Market for Software Flaws," Slate Magazine. 1-16-2013.
http://www.slate.com/articles/technology/future_tense/2013/01/zero_day_exploits_should_the_hacker_gray_market_be_regu
lated.html//ghs-kw)
Behind computer screens from France to Fort Worth, Texas, elite hackers
hunt for security vulnerabilities worth
thousands of dollars on a secretive unregulated marketplace. Using sophisticated techniques to detect
weaknesses in widely used programs like Google Chrome, Java, and Flash, they spend hours crafting “zero-day
exploits”—complex codes custom-made to target a software flaw that has not been publicly
disclosed, so they can bypass anti-virus or firewall detection to help infiltrate a computer
system. Like most technologies, the exploits have a dual use. They can be used as part of research efforts to help strengthen
computers against intrusion. But they can also be weaponized and deployed aggressively for everything from
government spying and corporate espionage to flat-out fraud. Now, as cyberwar escalates across the globe, there are fears
that the burgeoning trade in finding and selling exploits is spiralling out of control—spurring calls
for new laws to rein in the murky trade. Some legitimate companies operate in a legal gray zone within the
zero-day market, selling exploits to governments and law enforcement agencies in countries across
the world. Authorities can use them covertly in surveillance operations or as part of
cybersecurity or espionage missions. But because sales are unregulated, there are concerns
that some gray market companies are supplying to rogue foreign regimes that may use
exploits as part of malicious targeted attacks against other countries or opponents. There is
also an anarchic black market that exists on invite-only Web forums, where exploits are sold to a
variety of actors—often for criminal purposes. The importance of zero-day exploits, particularly to
governments, has become increasingly apparent in recent years. Undisclosed vulnerabilities in Windows played a
crucial role in how Iranian computers were infiltrated for surveillance and sabotage when the
country’s nuclear program was attacked by the Stuxnet virus (an assault reportedly launched by the United
States and Israel). Last year, at least eight zero days in programs like Flash and Internet Explorer were discovered and linked to a
Chinese hacker group dubbed the “Elderwood gang,” which targeted more than 1,000 computers belonging to corporations and
human rights groups as part of a shady intelligence-gathering effort allegedly sponsored by China. The most lucrative zero days can
be worth hundreds of thousands of dollars in both the black and gray markets. Documents released by Anonymous in 2011 revealed
Atlanta-based security firm Endgame Systems offering to sell 25 exploits for $2.5 million. Emails published alongside the documents
showed the firm was trying to keep “a very low profile” due to “feedback we've received from our government clients.” (In keeping
with that policy, Endgame didn’t respond to questions for this story.) But not everyone working in the business of selling software
exploits is trying to fly under the radar—and some have decided to blow the whistle on what they see as dangerous and
irresponsible behavior within their secretive profession. Adriel Desautels, for one, has chosen to speak out. The 36-year-old “exploit
broker” from Boston runs a company called Netragard, which buys and sells zero days to organizations in the public and private
sectors. (He won’t name names, citing confidentiality agreements.) The lowest-priced exploit that Desautels says he has sold
commanded $16,000; the highest, more than $250,000. Unlike
other companies and sole traders operating in
the zero-day trade, Desautels has adopted a policy to sell his exploits only domestically within
the United States, rigorously vetting all those he deals with. If he didn’t have this principle, he
says, he could sell to anyone he wanted—even Iran or China—because the field is
unregulated. And that’s exactly why he is concerned. “As technology advances, the effect that zero-day
exploits will have is going to become more physical and more real,” he says. “The software
becomes a weapon. And if you don’t have controls and regulations around weapons, you’re
really open to introducing chaos and problems.” Desautels says he knows of “greedy and
irresponsible” people who “will sell to anybody,” to the extent that some exploits might be sold
by the same hacker or broker to two separate governments not on friendly terms. This can
feasibly lead to these countries unwittingly targeting each other’s computer networks with the
same exploit, purchased from the same seller. “If I take a gun and ship it overseas to some guy
in the Middle East and he uses it to go after American troops—it’s the same concept,” he says. The
position Desautels has taken casts him as something of an outsider within his trade. France’s
Vupen, one of the foremost gray-market zero-day sellers, takes a starkly different approach.
Vupen develops and sells exploits to law enforcement and intelligence agencies across the world
to help them intercept communications and conduct “offensive cyber security missions,” using
what it describes as “extremely sophisticated codes” that “bypass all modern security
protections and exploit mitigation technologies.” Vupen’s latest financial accounts show it reported revenue of
about $1.2 million in 2011, an overwhelming majority of which (86 percent) was generated from exports outside France. Vupen says
it will sell exploits to a list of more than 60 countries that are members or partners of NATO, provided these countries are not
subject to any export sanctions. (This means Iran, North Korea, and Zimbabwe are blacklisted—but the likes of Kazakhstan, Bahrain,
Morocco, and Russia are, in theory at least, prospective customers, as they are not subject to any sanctions at this time.) “As a
European company, we exclusively work with our allies and partners to help them protect their democracies and citizens against
threats and criminals,” says Chaouki Bekrar, Vupen’s CEO, in an email. He adds that even if a given country is not on a sanctions list,
it doesn’t mean Vupen will automatically work with it, though he declines to name specific countries or continents where his firm
does or does not have customers. Vupen’s
policy of selling to a broad range of countries has attracted
much controversy, sparking furious debate around zero-day sales, ethics, and the law. Chris
Soghoian of the ACLU—a prominent privacy and security researcher who regularly spars with Vupen CEO Bekrar on Twitter—has
accused Vupen of being “modern-day merchants of death” selling “the bullets for cyberwar.”
“Just as the engines on an airplane enable the military to deliver a bomb that kills people, so too
can a zero day be used to deliver a cyberweapon that causes physical harm or loss of life,”
Soghoian says in an email. He is astounded that governments are “sitting on flaws” by purchasing zero-day
exploits and keeping them secret. This ultimately entails “exposing their own citizens to espionage,” he says, because it means
that the government knows about software vulnerabilities but is not telling the public about them. Some claim, however, that the
zero-day issue is being overblown and politicized. “You don’t need a zero day to compromise the workstation of an executive, let
alone an activist,” says Wim Remes, a security expert who manages information security for Ernst & Young. Others argue that the
U.S. government in particular needs to purchase exploits to keep pace with what adversaries like China and Iran are doing. “If we’re
going to have a military to defend ourselves, why would you disarm our military?” says Robert Graham at the Atlanta-based firm
Errata Security. “If the government can’t buy exploits on the open market, they will just develop them themselves,” Graham says. He
also fears that regulation of zero-day sales could lead to a crackdown on legitimate coding work. “Plus, digital arms don’t exist—it’s
an analogy. They don’t kill people. Bad things really don’t happen with them.” * * * So are zero days really a danger? The
overwhelming majority of compromises of computer systems happen because users failed to update software and patch
vulnerabilities that are already known about. However, there are a handful of cases in which undisclosed vulnerabilities—that is,
zero days—have been used to target organizations or individuals. It
was a zero day, for instance, that was recently
used by malicious hackers to compromise Microsoft’s Hotmail and steal emails and details of the
victims' contacts. Last year, it was reported that a zero day was used to target a flaw in Internet Explorer
and hijack Gmail accounts. Noted “offensive security” companies such as Italy’s Hacking Team and the England-based Gamma Group are among those to make use of zero-day exploits to help law enforcement agencies install advanced spyware
on target computers—and both of these companies have been accused of supplying their technologies
to countries
with an authoritarian bent. Tracking and communications interception can have serious real-world consequences for dissidents in places like Iran, Syria, or the United Arab Emirates. In the
wrong hands, it seems clear, zero days could do damage. This potential has been recognized in Europe, where
Dutch politician Marietje Schaake has been crusading for groundbreaking new laws to curb the trade in
what she calls “digital weapons.” Speaking on the phone from Strasbourg, France*, Schaake tells me she’s concerned
about security exploits, particularly where they are being sold with the intent to help enable access to computers or mobile devices
not authorized by the owner. She adds that she is
considering pressing for the European Commission, the EU’s executive body, to bring in a whole new regulatory framework that would encompass the trade in zero days, perhaps by looking at incentives for companies or hackers to report vulnerabilities that they find. Such a move would likely be welcomed by the handful of organizations already working to encourage hackers and security researchers to responsibly disclose vulnerabilities they find instead of selling them on the black or gray markets. The
Zero Day Initiative, based in Austin, Texas, has a team of about 2,700 researchers globally who submit vulnerabilities that are then
passed on to software developers so they can be fixed. ZDI, operated by Hewlett-Packard, runs competitions in which hackers can
compete for a pot of more than $100,000 in prize funds if they expose flaws. “We believe our program is focused on the greater
good,” says Brian Gorenc, a senior security researcher who works with the ZDI.
DAs
Terror
1NC - Generic
Terror risk is high—maintaining current surveillance is key
Inserra, 6/8 (David Inserra is a Research Associate for Homeland Security and Cyber Security in the
Douglas and Sarah Allison Center for Foreign and National Security Policy of the Kathryn and Shelby
Cullom Davis Institute for National Security and Foreign Policy, at The Heritage Foundation, 6-8-2015,
"69th Islamist Terrorist Plot: Ongoing Spike in Terrorism Should Force Congress to Finally Confront the
Terrorist Threat," Heritage Foundation, http://www.heritage.org/research/reports/2015/06/69thislamist-terrorist-plot-ongoing-spike-in-terrorism-should-force-congress-to-finally-confront-the-terroristthreat)
On June 2 in Boston, Usaamah Abdullah Rahim drew a knife and attacked police officers and FBI agents,
who then shot and killed him. Rahim was being watched by Boston’s Joint Terrorism Task Force as he had
been plotting to behead police officers as part of violent jihad. A conspirator, David Wright or Dawud
Sharif Abdul Khaliq, was arrested shortly thereafter for helping Rahim to plan this attack. This plot marks
the 69th publicly known Islamist terrorist plot or attack against the U.S. homeland since 9/11, and is part
of a recent spike in terrorist activity. The U.S. must redouble its efforts to stop terrorists before they
strike, through the use of properly applied intelligence tools. The Plot According to the criminal complaint filed
against Wright, Rahim had originally planned to behead an individual outside the state of Massachusetts,[1] which, according to
news reports citing anonymous government officials, was Pamela Geller, the organizer of the “draw Mohammed” cartoon contest in
Garland, Texas.[2] To this end, Rahim had purchased multiple knives, each over 1 foot long, from Amazon.com. The FBI was
listening in on the calls between Rahim and Wright and recorded multiple conversations regarding how
these weapons would be used to behead someone. Rahim then changed his plan early on the morning of June 2. He
planned to go “on vacation right here in Massachusetts…. I’m just going to, ah, go after them, those boys in blue. Cause, ah, it’s the
easiest target.”[3] Rahim and Wright had used the phrase “going on vacation” repeatedly in their conversations as a euphemism for
violent jihad. During this conversation, Rahim told Wright that he planned to attack a police officer on June 2 or June 3. Wright then
offered advice on preparing a will and destroying any incriminating evidence. Based on this threat, Boston police officers and FBI
agents approached Rahim to question him, which prompted him to pull out one of his knives. After being told to drop his weapon,
Rahim responded with “you drop yours” and moved toward the officers, who then shot and killed him. While Rahim’s brother,
Ibrahim, initially claimed that Rahim was shot in the back, video surveillance was shown to community leaders and civil rights
groups, who have confirmed that Rahim was not shot in the back.[4] Terrorism Not Going Away: This 69th Islamist plot is also
the seventh in this calendar year. Details on how exactly Rahim was radicalized are still forthcoming, but according to
anonymous officials, online propaganda from ISIS and other radical Islamist groups are the source.[5] That
would make this attack the 58th homegrown terrorist plot and continue the recent trend of ISIS playing
an important role in radicalizing individuals in the United States. It is also the sixth plot or attack targeting law
enforcement in the U.S., with a recent uptick in plots aimed at police. While the debate over the PATRIOT Act and the USA FREEDOM
Act is taking a break, the terrorists are not. The result of the debate has been the reduction of U.S. intelligence and counterterrorism capabilities, meaning that the U.S. has to do even more with less when it comes to connecting the dots on terrorist plots.[6] Other legitimate intelligence tools and capabilities must be leaned on now even more. Protecting the Homeland: To
keep the U.S. safe, Congress must take a hard look at the U.S. counterterrorism enterprise and determine other measures that are
needed to improve it. Congress should: Emphasize community outreach. Federal grant funds should be used to create robust
community-outreach capabilities in higher-risk urban areas. These funds must not be used for political pork, or so broadly that they
no longer target those communities at greatest risk. Such capabilities are key to building trust within these communities, and if the
United States is to thwart lone-wolf terrorist attacks, it must place effective community outreach operations at the tip of the spear.
Prioritize local cyber capabilities. Building cyber-investigation capabilities in the higher-risk urban areas must become a primary
focus of Department of Homeland Security grants. With so much terrorism-related activity occurring on the Internet, local law
enforcement must have the constitutional ability to monitor and track violent extremist activity on the Web when reasonable
suspicion exists to do so. Push the FBI toward being more effectively driven by intelligence. While the FBI has made high-level
changes to its mission and organizational structure, the bureau is still working on integrating intelligence and law enforcement
activities. Full integration will require overcoming inter-agency cultural barriers and providing FBI intelligence personnel with
resources, opportunities, and the stature they need to become a more effective and integral part of the FBI. Maintain essential
counterterrorism tools. Support for important investigative tools is essential to maintaining the security of
the U.S. and combating terrorist threats. Legitimate government surveillance programs are also a vital
component of U.S. national security and should be allowed to continue. The need for effective
counterterrorism operations does not relieve the government of its obligation to follow the law and
respect individual privacy and liberty. In the American system, the government must do both equally well.
Clear-Eyed Vigilance The recent spike in terrorist plots and attacks should finally awaken policymakers—all
Americans, for that matter—to the seriousness of the terrorist threat. Neither fearmongering nor willful
blindness serves the United States. Congress must recognize and acknowledge the nature and the scope
of the Islamist terrorist threat, and take the appropriate action to confront it.
Backdoors are key to stop terrorism and child predators
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Thoughts on Encryption and Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015.
http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning
that the FBI was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of
the renewed encryption debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me
start by breaking the encryption debate into two
distinct sets of questions: One is the conceptual question
of whether a world of end-to-end strong encryption is an attractive idea. The other is whether—
assuming it is not an attractive idea and that one wants to ensure that authorities retain the ability to intercept decrypted signal—
an extraordinary access scheme is technically possible without eroding other essential security
and privacy objectives. These questions often get mashed together, both because tech companies are keen to market
themselves as the defenders of their users' privacy interests and because of the libertarian ethos of the tech community more
generally. But the
questions are not the same, and it's worth considering them separately. Consider
the conceptual question first. Would it be a good idea to have a world-wide communications
infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could snap our
fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from
the FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an
internet as secure as possible from everyone except government investigators exercising their legal authorities with the
understanding that other countries may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially close call. The belief in principle in creating a giant
world-wide network on which surveillance is technically impossible is really an argument for
the creation of the world's largest ungoverned space. I understand why techno-anarchists find
this idea so appealing. I can't imagine for a moment, however, why anyone else would. Consider
the comparable argument in physical space: the creation of a city in which authorities are
entirely dependent on citizen reporting of bad conduct but have no direct visibility onto what
happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really
suck is not controversial when you're talking about Yemen or Somalia. I see nothing more
attractive about the creation of a worldwide architecture in which it is technically impossible to
intercept and read ISIS communications with followers or to follow child predators into
chatrooms where they go after kids. The trouble is that this conceptual position does not answer the entirety of the
policy question before us. The reason is that the case against preserving some form of law enforcement access to decrypted signal is
not only a conceptual embrace of the technological obsolescence of surveillance. It
is also a series of arguments about
the costs—including the security costs—of maintaining the capacity to decrypt captured signal.
Terrorists will use bioweapons—guarantees extinction
Cooper 13 (Joshua, 1/23/13, University of South Carolina, “Bioterrorism and the Fermi Paradox,”
http://people.math.sc.edu/cooper/fermi.pdf, 7/15/15, SM)
We may conclude that, when a civilization reaches its space-faring age, it will more or less at the same moment (1) contain many individuals who seek to cause large-scale destruction, and (2) acquire the capacity to tinker with its own genetic chemistry. This is a perfect recipe for bioterrorism, and, given the many very natural pathways for its development and the overwhelming evidence that precisely this course has been taken by humanity, it is hard to see how bioterrorism does not provide a neat, if profoundly unsettling, solution to Fermi’s paradox. One might object that, if omnicidal individuals are successful in releasing highly virulent and deadly genetic malware into the wild, they are still unlikely to succeed in killing everyone. However, even if every such mass death event results only in a high (i.e., not total) kill rate and there is a large gap between each such event (so that individuals can build up the requisite scientific infrastructure again), extinction would be inevitable regardless. Some of the engineered bioweapons will be more successful than others; the inter-apocalyptic eras will vary in length; and post-apocalyptic environments may be so war-torn, disease-stricken, and impoverished of genetic variation that they may culminate in true extinction events even if the initial cataclysm ‘only’ results in 90% death rates, since they may cause the effective population size to dip below the so-called “minimum viable population.” This author ran a Monte Carlo simulation using as (admittedly very crude and poorly informed, though arguably conservative) estimates the following Earth-like parameters: bioterrorism event mean death rate 50% and standard deviation 25% (beta distribution), initial population 10^10, minimum viable population 4000, individual omnicidal act probability 10^-7 per annum, and population growth rate 2% per annum. One thousand trials yielded an average post-space-age time until extinction of less than 8000 years. This is essentially instantaneous on a cosmological scale, and varying the parameters by quite a bit does nothing to make the survival period comparable with the age of the universe.
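Since the Cooper paper's simulation code is not published, the sketch below only illustrates the shape of the Monte Carlo it describes. The starting population, growth rate, kill-rate distribution, and minimum viable population are taken from the card; the annual event probability is an invented free parameter (the card's per-individual 10^-7 figure does not map cleanly onto a whole-civilization event rate), so the output is qualitative rather than a reproduction of the paper's sub-8,000-year average.

```python
# Illustrative Monte Carlo in the spirit of Cooper's described simulation.
# EVENT_PROB is an assumed free parameter; the other figures are from the card.
import random

INITIAL_POP = 10**10   # stated initial population
MIN_VIABLE = 4_000     # stated minimum viable population
GROWTH = 0.02          # stated 2% annual population growth
EVENT_PROB = 0.03      # assumed chance of a bioterrorism event in a given year

def years_until_extinction(max_years: int = 100_000) -> int:
    pop = float(INITIAL_POP)
    for year in range(1, max_years + 1):
        pop *= 1 + GROWTH
        if random.random() < EVENT_PROB:
            # Kill rate ~ Beta(1.5, 1.5): mean 0.5, std dev 0.25, matching the
            # card's "mean death rate 50% and standard deviation 25%".
            pop *= 1 - random.betavariate(1.5, 1.5)
        if pop < MIN_VIABLE:
            return year  # below the viability floor: extinction
    return max_years     # survived the whole simulated horizon

trials = [years_until_extinction() for _ in range(1000)]
print(f"mean time to extinction: {sum(trials) / len(trials):,.0f} years")
```

Raising EVENT_PROB shortens survival sharply, which is the card's structural point: repeated partial kill-offs combined with a minimum-viable-population floor make eventual extinction the default outcome rather than a tail risk.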
1NC - ISIS Version
ISIS will emerge as a serious threat to the US
Morell 15 (Michael Morell is the former deputy director of the CIA and has twice served as acting
director. He is the author of The Great War of Our Time: The CIA's Fight Against Terrorism — From al
Qa'ida to ISIS. May 14, 2015 Time Magazine ISIS Is a Danger on U.S. Soil
http://time.com/3858354/isis-is-a-danger-on-u-s-soil/)
The terrorist group poses a gathering threat. In the aftermath of the attempted terrorist attack on May 4 in Garland, Texas–for
which ISIS claimed responsibility–we find ourselves again considering the question of whether or not
ISIS is a real
threat. The answer is yes. A very serious one. Extremists inspired by Osama bin Laden’s ideology consider
themselves to be at war with the U.S.; they want to attack us. It is important to never forget that–no matter how
long it has been since 9/11. ISIS is just the latest manifestation of bin Laden’s design. The group has grown faster than any
terrorist group we can remember, and the threat it poses to us is as wide-ranging as any we have seen.
What ISIS has that al-Qaeda doesn’t is a Madison Avenue level of sophisticated messaging and social media. ISIS has a multilingual
propaganda arm known as al-Hayat, which uses GoPros and cameras mounted on drones to make videos that appeal to its followers.
And ISIS uses just about every tool in the platform box–from Twitter to YouTube to Instagram–to great effect, attracting fighters and
funding. Digital media are one of the group’s most significant strengths; they have helped ISIS become an organization that poses
four significant threats to the U.S. First, it is a threat to the stability of the entire Middle East. ISIS is putting the territorial integrity of
both Iraq and Syria at risk. And a further collapse of either or both of these states could easily spread throughout the region,
bringing with it sectarian and religious strife, humanitarian crises and the violent redrawing of borders, all in a part of the world that
remains critical to U.S. national interests. ISIS now controls more territory–in Iraq and Syria–than any other terrorist group anywhere
in the world. When al-Qaeda in Iraq joined the fight in Syria, the group changed its name to ISIS. ISIS added Syrians and foreign
fighters to its ranks, built its supply of arms and money and gained significant battlefield experience fighting Bashar Assad’s regime.
Together with the security vacuum in Iraq and Nouri al-Maliki’s alienation of the Sunnis, this culminated in ISIS’s successful blitzkrieg
across western Iraq in the spring and summer of 2014, when it seized large amounts of territory. ISIS is not the first extremist group
to take and hold territory. Al-Shabab in Somalia did so a number of years ago and still holds territory there, al-Qaeda in the Islamic
Maghreb did so in Mali in 2012, and al-Qaeda in Yemen did so there at roughly the same time. I fully expect extremist groups to
attempt to take–and sometimes be successful in taking–territory in the years ahead. But no other group has taken so much territory
so quickly as ISIS has. Second, ISIS is attracting young men and women to travel to Syria and Iraq to join its cause. At this writing, at
least 20,000 foreign nationals from roughly 90 countries have gone to Syria and Iraq to join the fight. Most have joined ISIS. This flow
of foreigners has outstripped the flow of such fighters into Iraq during the war there a decade ago. And there are more foreign
fighters in Syria and Iraq today than there were in Afghanistan in the 1980s working to drive the Soviet Union out of that country.
These foreign nationals are getting experience on the battlefield, and they are becoming increasingly radicalized to ISIS’s cause.
There is a particular subset of these fighters to worry about. Somewhere between 3,500 and 5,000 jihadist wannabes
have traveled to Syria and Iraq from Western Europe, Canada, Australia and the U.S. They all have easy access to the U.S.
homeland, which presents two major concerns: that these fighters will leave the Middle East and either
conduct an attack on their own or conduct an attack at the direction of the ISIS leadership. The former has
already happened in Europe. It has not happened yet in the U.S.–but it will. In spring 2014, Mehdi Nemmouche, a
young Frenchman who went to fight in Syria, returned to Europe and shot three people at the Jewish Museum of Belgium in
Brussels. The third threat is that ISIS is building a following among other extremist groups around the world. The allied exaltation is
happening at a faster pace than al-Qaeda ever enjoyed. It has occurred in Algeria, Libya, Egypt and Afghanistan. More will follow.
These groups, which are already dangerous, will become even more so. They will increasingly target ISIS’s enemies (including us),
and they will increasingly take on ISIS’s brutality. We saw the targeting play out in early 2015 when an ISIS-associated group in Libya
killed an American in an attack on a hotel in Tripoli frequented by diplomats and international businesspeople. And we saw the
extreme violence play out just a few weeks after that when another ISIS-affiliated group in Libya beheaded 21 Egyptian Coptic
Christians. And fourth, perhaps most insidiously, ISIS’s message is radicalizing young men and women around the globe who have
never traveled to Syria or Iraq but who want to commit an attack to demonstrate their solidarity with ISIS. These are the so-called
lone wolves. Even before May 4, such an ISIS-inspired attack had already occurred in the U.S.: an individual with sympathies for ISIS
attacked two New York City police officers with a hatchet. Al-Qaeda has inspired such U.S. attacks–the Fort Hood shootings in late
2009 that killed 13 and the Boston Marathon bombing in spring 2013 that killed five and injured nearly 300. The attempted attack in
Texas is just the latest of these. We can expect more of these kinds of attacks in the U.S. Attacks by ISIS-inspired individuals are
occurring at a rapid pace around the world–roughly 10 since ISIS took control of so much territory. Two such attacks have occurred
in Canada, including the October 2014 attack on the Parliament building. And another occurred in Sydney, in December 2014. Many
planning such attacks–in Australia, Western Europe and the U.S.–have been arrested before they could carry out their terrorist
plans. Today an ISIS-directed attack in the U.S. would be relatively unsophisticated (small-scale), but over
time ISIS’s capabilities will grow. This is what a long-term safe haven in Iraq and Syria would give ISIS, and it is exactly what
the group is planning to do. They have announced their intentions–just like bin Laden did in the years prior to 9/11.
Backdoors are key to stop ISIS recruitment
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Jim Comey, ISIS, and "Going Dark"," Lawfare. 7-23-2015. http://www.lawfareblog.com/jim-comey-isis-and-going-dark//ghs-kw)
I had a lengthy conversation with FBI Director Jim Comey today about the
nexus of our domestic ISIS problem and
what the FBI calls the "going dark" issue. CNN the other day reported on some remarks Comey made on the subject,
remarks that have not gotten enough attention but reflect a problem at the front of his mind these days: FBI Director James Comey
said Thursday his agency does not yet have the capabilities to limit ISIS attempts to recruit Americans
through social media. It is becoming increasingly apparent that Americans are gravitating toward the
militant organization by engaging with ISIS online, Comey said, but he told reporters that "we don't have the
capability we need" to keep the "troubled minds" at home. "Our job is to find needles in a nationwide haystack,
needles that are increasingly invisible to us because of end-to-end encryption," Comey said.
"This is the 'going dark' problem in high definition." Comey said ISIS is increasingly communicating
with Americans via mobile apps that are difficult for the FBI to decrypt. He also explained that he had to
balance the desire to intercept the communication with broader privacy concerns. "It is a really, really hard problem, but the
collision that's going on between important privacy concerns and public safety is significant enough that we have to figure out a way
to solve it," Comey said. Let's unpack this. As has been widely reported, the FBI has been busy recently dealing with ISIS threats.
There have been a bunch of arrests, both because ISIS
has gotten extremely good at inducing self-radicalization in disaffected souls worldwide using Twitter and because of the convergence of Ramadan and the
run-up to the July 4 holiday. As has also been widely reported, the FBI is concerned about the effect of end-to-end
encryption on its ability to conduct counterterrorism operations and other law enforcement
functions. The concern is two-fold: It's about data at rest on devices, data that is now being
encrypted in a fashion that can't easily be cracked when those devices are lawfully seized. And
it's also about data in transit between devices, data encrypted such that when captured with a
lawful court-ordered wiretap, the signal intercepted is undecipherable. Comey raised his concerns on
both subjects at a speech at Brookings last year and has talked about them periodically since then: What was not clear to me until
today, however, was the
extent to which the ISIS concerns and the "going dark" concerns have
converged. In his Brookings speech, Comey did not focus on counterterrorism in the examples he gave of the going dark
problem. In the remarks quoted by CNN, and in his conversation with me today, however, he made clear that the landscape is
changing fast. Initial recruitment may take place on Twitter, but the promising ISIS candidate
quickly gets moved onto messaging platforms that are encrypted end to end. As a practical matter, that
means there are people in the United States whom authorities reasonably believe to be in
contact with ISIS for whom surveillance is lawful and appropriate but for whom useful signals
interception is not technically feasible. That's a pretty scary thought. I don't know what the right answer is to this
problem, which involves a particularly complex mix of legitimate cybersecurity, investigative, and privacy questions. I do think the
problem is a very different one if the costs of impaired law enforcement access to signal is
enhanced ISIS ability to communicate with its recruits than if we're dealing primarily with more
routine crimes, even serious ones.
ISIS is a threat to the grid
Landsbaum 14
(Mark, 9/5/2014, OC Register, “Mark Landsbaum: Attack on power grid could bring dark days,”
http://www.ocregister.com/articles/emp-633883-power-attack.html, 7/15/15, SM)
It could be worse.
Terrorists pose an “imminent” threat to the U.S. electrical grid, which could leave the
good ol’ USA looking like 19th century USA for a lot longer than three days. Don’t take my word for it. Ask Peter Pry, former CIA officer and one-time House Armed Services Committee staffer, who served on a congressional commission investigating such eventualities. “There is an imminent threat from ISIS to the national electric grid and not just to a single U.S. city,” Pry warns. He points to a leaked U.S. Federal Energy Regulatory Commission report in March that said a coordinated terrorist attack on just nine of the nation’s 55,000 electrical power substations could cause coast-to-coast blackouts for up to 18 months. Consider what you’ll have to worry about then. If you were uncomfortable watching looting and riots on TV last month in Ferguson, Mo., as police stood by, project such unseemly behavior nationwide. For 18 months. It’s likely phones won’t be reliable, so you won’t have to watch police stand idly by. Chances are, police won’t show up. Worse, your odds of needing them will be excruciatingly more likely if terrorists attack the power grid using an electromagnetic pulse (EMP) burst of energy to knock out electronic devices. “The Congressional EMP Commission, on which I served, did an extensive study of this,” Pry says. “We discovered to our own revulsion that critical systems in this country are distressingly unprotected. We calculated that, based on current realities, in the first year after a full-scale EMP event, we could expect about two-thirds of the national population – 200 million Americans – to perish from starvation and disease, as well as anarchy in the streets.” Skeptical? Consider who is capable of engineering such measures before dismissing the likelihood. In his 2013 book, “A Nation Forsaken,” Michael Maloof reported that the 2008 EMP Commission considered whether a hostile nation or terrorist group could attack with a high-altitude EMP weapon and determined, “any number of adversaries possess both the ballistic missiles and nuclear weapons capabilities,” and could attack within 15 years. That was six years ago. “North Korea, Pakistan, India, China and Russia are all in the position to launch an EMP attack against the United States now,” Maloof wrote last year. Maybe you’ll rest more comfortably knowing the House intelligence authorization bill passed in May told the intelligence community to report to Congress within six months, “on the threat posed by man-made electromagnetic pulse weapons to United States interests through 2025, including threats from foreign countries and foreign nonstate actors.” Or, maybe that’s not so comforting. In 2004 and again in 2008, separate congressional commissions gave detailed, horrific reports on such threats. Now, Congress wants another report. In his book, Maloof quotes Clay Wilson of the Congressional Research Service, who said, “Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyberwarfare or cyberterrorism to disrupt communications and other parts of the U.S. critical infrastructure.” What would an EMP attack look like? “Within an instant,” Maloof writes, “we will have no idea what’s happening all around us, because we will have no news. There will be no radio, no TV, no cell signal. No newspaper delivered. “Products won’t flow into the nearby Wal-Mart. The big trucks will be stuck on the interstates. Gas stations won’t be able to pump the fuel they do have. Some police officers and firefighters will show up for work, but most will stay home to protect their own families. Power lines will get knocked down in windstorms, but nobody will care. They’ll all be fried anyway. Crops will wither in the fields until scavenged – since the big picking machines will all be idled, and there will be no way to get the crop to market anyway. “Nothing
that’s been invented in the last 50 years – based
on computer chips, microelectronics or digital technology – will work. And it will get worse.”
Cyberterror leads to nuclear exchanges – traditional defense doesn’t apply
Fritz 9 (Jason, Master in International Relations from Bond, BS from St. Cloud), “Hacking
Nuclear Command and Control,” International Commission on Nuclear Non-proliferation and
Disarmament, 2009, pnnd.org)//duncan
This paper will analyse the threat of cyber terrorism in regard to nuclear weapons. Specifically, this
research will use open source knowledge to identify the structure of nuclear command and control centres, how those structures
might be compromised through computer network operations, and how doing so would fit within established cyber terrorists’
capabilities, strategies, and tactics. If
access to command and control centres is obtained, terrorists could
fake or actually cause one nuclear-armed state to attack another, thus provoking a nuclear
response from another nuclear power. This may be an easier alternative for terrorist groups
than building or acquiring a nuclear weapon or dirty bomb themselves. This would also act as a
force equaliser, and provide terrorists with the asymmetric benefits of high speed, removal of
geographical distance, and a relatively low cost. Continuing difficulties in developing computer
tracking technologies which could trace the identity of intruders, and difficulties in establishing
an internationally agreed upon legal framework to guide responses to computer network operations, point
towards an inherent weakness in using computer networks to manage nuclear weaponry. This is
particularly relevant to reducing the hair-trigger posture of existing nuclear arsenals. All
computers which are connected to the internet are susceptible to infiltration and remote
control. Computers which operate on a closed network may also be compromised by various hacker methods, such as privilege
escalation, roaming notebooks, wireless access points, embedded exploits in software and hardware, and maintenance entry points.
For example, e-mail spoofing targeted at individuals who have access to a closed network, could lead to the installation of a virus on
an open network. This virus could then be carelessly transported on removable data storage between the open and closed network.
Information found on the internet may also reveal how to access these closed networks directly. Efforts
by militaries to
place increasing reliance on computer networks, including experimental technology such as
autonomous systems, and their desire to have multiple launch options, such as nuclear triad
capability, enables multiple entry points for terrorists. For example, if a terrestrial command centre is
impenetrable, perhaps isolating one nuclear armed submarine would prove an easier task. There is evidence to suggest multiple
attempts have been made by hackers to compromise the extremely low radio frequency once used by the US Navy to send nuclear
launch approval to submerged submarines. Additionally, the alleged Soviet system known as Perimetr was designed to automatically
launch nuclear weapons if it was unable to establish communications with Soviet leadership. This was intended as a retaliatory
response in the event that nuclear weapons had decapitated Soviet leadership; however it did not account for the possibility of
cyber terrorists blocking communications through computer network operations in an attempt to engage the system. ¶ Should
a
warhead be launched, damage could be further enhanced through additional computer network
operations. By using proxies, multi-layered attacks could be engineered. Terrorists could remotely commandeer
computers in China and use them to launch a US nuclear attack against Russia. Thus Russia would believe it was under attack from
the US and the US would believe China was responsible. Further, emergency
response communications could be
disrupted, transportation could be shut down, and disinformation, such as misdirection, could
be planted, thereby hindering the disaster relief effort and maximizing destruction. Disruptions
in communication and the use of disinformation could also be used to provoke uninformed
responses. For example, a nuclear strike between India and Pakistan could be coordinated with Distributed Denial of Service
attacks against key networks, so they would have further difficulty in identifying what happened and be forced to respond quickly.
Terrorists could also knock out communications between these states so they cannot discuss the situation. Alternatively, amidst
the confusion of a traditional large-scale terrorist attack, claims of responsibility and
declarations of war could be falsified in an attempt to instigate a hasty military response. These
false claims could be posted directly on Presidential, military, and government websites. E-mails could also be sent to the media and
foreign governments using the IP addresses and e-mail accounts of government officials. A sophisticated and all encompassing
combination of traditional terrorism and cyber terrorism could be enough to launch nuclear weapons on its own, without the need
for compromising command and control centres directly.
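(KQ) Tech note for this card: the removable-media vector Fritz describes (a spoofed e-mail installs a virus on the open network, a USB stick carries it across the air gap) is why operators of closed networks screen media before anything crosses. A minimal Python sketch of that screening step, under invented assumptions: the mount point and the hash set are placeholders, and real deployments pull signatures from a threat-intelligence feed.

import hashlib
from pathlib import Path

# Hypothetical known-malware signature set; the digest below is a placeholder only.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # stand-in; real feeds supply actual malware hashes
}

def scan_removable_media(mount_point: str) -> list[Path]:
    """Return every file on the drive whose SHA-256 digest matches a known-bad hash."""
    flagged = []
    for f in Path(mount_point).rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                flagged.append(f)
    return flagged

if __name__ == "__main__":
    hits = scan_removable_media("/media/usb0")  # hypothetical mount point
    print(f"{len(hits)} known-bad file(s) found")

The debate-relevant limit: hash screening only catches malware that is already catalogued, so the novel, targeted implant in Fritz's scenario walks straight through – useful against "air gaps solve" presses.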
2NC UQ - ISIS
ISIS is mobilizing now and ready to take action.
DeSoto 5/7 (Randy DeSoto, May 7, 2015, http://www.westernjournalism.com/isis-claims-to-have-71-trained-soldiers-in-targeted-u-s-states/. Randy DeSoto is a writer for Western Journalism, which consistently ranks in the top 5 most popular conservative online news outlets in the country)
Purported ISIS jihadists issued threats against the United States Tuesday, indicating the group has trained
soldiers positioned throughout the country, ready to attack “any target we desire.” The online post singles out
controversial blogger Pamela Geller, one of the organizers of the “Draw the Prophet” Muhammad cartoon contest in Garland,
Texas, calling for her death to “heal the hearts of our brothers and disperse the ones behind her.” ISIS also claimed
responsibility for the shooting, which marked the first time the terror group claimed responsibility for an
attack on U.S. soil, according to the New York Daily News. “The attack by the Islamic State in America is only the beginning
of our efforts to establish a wiliyah [authority or governance] in the heart of our enemy,” the ISIS post reads. As for Geller, the
jihadists state: “To those who protect her: this will be your only warning of housing this woman and her circus show. Everyone who
houses her events, gives her a platform to spill her filth are legitimate targets. We have been watching closely who was present at
this event and the shooter of our brothers.” ISIS further claims to have known that
the Muhammad cartoon
contest venue would be heavily guarded, but conducted the attack to demonstrate the willingness of its
followers to die for the “Sake of Allah.” The FBI and the Department of Homeland Security, in fact, issued a bulletin on April
20 indicating the event would be a likely terror target. ISIS drew its message to a close with an ominous threat: We
have 71 trained soldiers in 15 different states ready at our word to attack any target we desire. Out of the
71 trained soldiers 23 have signed up for missions like Sunday, We are increasing in number bithnillah [if God
wills]. Of the 15 states, 5 we will name… Virginia, Maryland, Illinois, California, and Michigan …The next six
months will be interesting. Fox News reports that “the U.S. intelligence community was assessing the threat and
trying to determine if the source is directly related to ISIS leadership or an opportunist such as a low-level
militant seeking to further capitalize on the Garland incident.” Former Navy Seal Rob O’Neill told Fox News he
believes the ISIS threat is credible, and the U.S. must be prepared. He added that the incident in Garland “is a prime example of the
difference between a gun free zone and Texas. They showed up at Charlie Hebdo, and it was a massacre. If these two guys had
gotten into that building it would have been Charlie Hebdo times ten. But these two guys showed up because they were offended by
something protected by the First Amendment, and were quickly introduced to the Second Amendment.” Geller issued a
statement regarding the ISIS posting: “This threat illustrates the savagery and barbarism of the Islamic State. They want me dead
for violating Sharia blasphemy laws. What remains to be seen is whether the free world will finally wake up and stand for the
freedom of speech, or instead kowtow to this evil and continue to denounce me.”
ISIS will attack – three reasons – its capabilities are growing, an attack would be
good propaganda, and it basically hates all things America
Rogan 15 (Tom, panelist on The McLaughlin Group and holds the Tony Blankley Chair at the Steamboat
Institute, “Why ISIS Will Attack America,” National Review, 3-24-15,
http://www.nationalreview.com/article/415866/why-isis-will-attack-america-tom-rogan)//MJ
There is no good in you if they are secure and happy while you have a pulsing vein. Erupt volcanoes of jihad everywhere. Light the
earth with fire upon all the [apostate rulers], their soldiers and supporters. — ISIS leader Abu Bakr al-Baghdadi, November 2014.
Those words weren’t idle. The Islamic State (ISIS) is still advancing, across continents and cultures. It’s attacking
Shia Muslims in Yemen, gunning down Western tourists in Tunisia, beheading Christians in Libya, and
murdering or enslaving all who do not yield in Iraq and Syria. Its black banner seen as undaunted by the international
coalition against it, new recruits still flock to its service. The Islamic State’s rise is, in other words, not over, and it is likely to
end up involving an attack on America. Three reasons why such an attempt is inevitable: ISIS’S STRATEGY PRACTICALLY
DEMANDS IT Imbued with existential hatred against the United States, the group doesn’t just oppose American power, it opposes
America’s identity. Where the United States is a secular democracy that binds law to individual freedom, the Islamic State is a
totalitarian empire determined to sweep freedom from the earth. As an ideological and physical necessity, ISIS must
ultimately conquer America. Incidentally, this kind of total-war strategy explains why counterterrorism experts are rightly
concerned about nuclear proliferation. The Islamic State’s strategy is also energized by its desire to replace al-
Qaeda as Salafi jihadism’s global figurehead. While al-Qaeda in the Arabian Peninsula (AQAP) and ISIS had a short flirtation
last year, ISIS has now signaled its intent to usurp al-Qaeda’s power in its home territory. Attacks by ISIS last week against
Shia mosques in the Yemeni capital of Sana’a were, at least in part, designed to suck recruits, financial donors, and prestige away
from AQAP. But to truly displace al-Qaeda, ISIS knows it must furnish a new 9/11. ITS CAPABILITIES ARE
GROWING Today, ISIS has thousands of European citizens in its ranks. Educated at the online University of Edward Snowden, ISIS
operations officers have cut back intelligence services’ ability to monitor and disrupt their
communications. With EU intelligence services stretched beyond breaking point, ISIS has the means and
confidence to attempt attacks against the West. EU passports are powerful weapons: ISIS could attack — as al-Qaeda
has repeatedly — U.S. targets around the world. AN ATTACK ON THE U.S. IS PRICELESS PROPAGANDA For transnational
Salafi jihadists like al-Qaeda and ISIS, a successful blow against the U.S. allows them to claim the mantle of a
global force and strengthens the narrative that they’re on a holy mission. Holiness is especially important: ISIS
knows that to recruit new fanatics and deter its enemies, it must offer an abiding narrative of strength and divine purpose. With
the group’s leaders styling themselves as Mohammed’s heirs, Allah’s chosen warriors on earth, attacking
the infidel United States would reinforce ISIS’s narrative. Of course, attacking America wouldn’t actually serve the
Islamic State’s long-term objectives. Quite the opposite: Any atrocity would fuel a popular American resolve to crush the group with
expediency. (Make no mistake, it would be crushed.) The problem, however, is that, until then, America is in the bull’s eye.
2NC Cyber - ISIS
ISIS is a threat to the grid
Landsbaum 14
(Mark, 9/5/2014, OC Register, “Mark Landsbaum: Attack on power grid could bring dark days,”
http://www.ocregister.com/articles/emp-633883-power-attack.html, 7/15/15, SM)
It could be worse.
Terrorists pose an “imminent” threat to the U.S. electrical grid, which could leave the good ol’ USA looking like 19th century USA for a lot longer than three days. ¶ Don’t take my word for it. Ask Peter Pry, former CIA officer and one-time House Armed Services Committee staffer, who served on a congressional commission investigating such
eventualities. ¶ “There is an imminent threat from ISIS to the national electric grid and not just to a single U.S. city,” Pry warns. He points to a leaked U.S. Federal Energy Regulatory Commission report in March that said a coordinated terrorist attack on just nine of the nation’s 55,000 electrical power substations could cause coast-to-coast blackouts for up to 18 months. ¶ Consider what you’ll have to worry about then. If you were uncomfortable watching looting and riots on TV last month in Ferguson, Mo., as police stood by, project such unseemly behavior nationwide. For 18 months. ¶ It’s likely phones won’t be reliable, so you won’t have to watch police stand idly by. Chances are, police won’t show up. Worse, your odds of needing them will be excruciatingly more likely if terrorists attack the power grid using an electromagnetic pulse (EMP) burst of energy to knock out electronic devices. ¶ “The Congressional EMP Commission, on which I served, did an extensive study of this,” Pry says. “We discovered to our own revulsion that critical systems in this country are distressingly unprotected. We calculated that, based on current realities, in the first year after a full-scale EMP event, we could expect about two-thirds of the national population – 200 million Americans – to perish from starvation and disease, as well as anarchy in the streets.” ¶ Skeptical? Consider who is capable of engineering such measures before dismissing the likelihood. ¶ In his 2013 book, “A Nation Forsaken,” Michael Maloof reported that the 2008 EMP Commission considered whether a hostile nation or terrorist group could attack with a high-altitude EMP weapon and determined, “any number of adversaries possess both the ballistic missiles and nuclear weapons capabilities,” and could attack within 15 years. ¶ That was six years ago. “North Korea, Pakistan, India, China and Russia are all in the position to launch an EMP attack against the United States now,” Maloof wrote last year. ¶ Maybe you’ll rest more comfortably knowing the House intelligence authorization bill passed in May told the intelligence community to report to Congress within six months, “on the threat posed by man-made electromagnetic pulse weapons to United States interests through 2025, including threats from foreign countries and foreign nonstate actors.” ¶ Or, maybe that’s not so comforting. In 2004 and again in 2008, separate congressional commissions gave detailed, horrific reports on such threats. Now, Congress wants another report. ¶ In his book, Maloof quotes Clay Wilson of the Congressional Research Service, who said, “Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyberwarfare or cyberterrorism to disrupt communications and other parts of the U.S. critical infrastructure.” ¶ What would an EMP attack look like? “Within an instant,” Maloof writes, “we will have no idea what’s happening all around us, because we will have no news. There will be no radio, no TV, no cell signal. No newspaper delivered. ¶ “Products won’t flow into the nearby Wal-Mart. The big trucks will be stuck on the interstates. Gas stations won’t be able to pump the fuel they do have. Some police officers and firefighters will show up for work, but most will stay home to protect their own families. Power lines will get knocked down in windstorms, but nobody will care. They’ll all be fried anyway. Crops will wither in the fields until scavenged – since the big picking machines will all be idled, and there will be no way to get the crop to market anyway. ¶ “Nothing that’s been invented in the last 50 years – based on computer chips, microelectronics or digital technology – will work. And it will get worse.”
2NC Links
Backdoors are key to preventing terrorism
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army
Judge Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military
assignments, including as the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States,
Chief of International Law for U.S. Army Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers
of 'Going Dark': Why Congress Must Require a Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
The risks related to “going dark” are real. When the President of the United States,60 the Prime Minister of the
United Kingdom,61 and the Director of the FBI62 all publically express deep concerns about how this phenomenon will endanger
their respective nations, it is difficult to ignore. Today, encryption
technologies are making it increasingly
easy for individual users to prevent even lawful government access to potentially vital
information related to crimes or other national security threats. This evolution of individual
encryption capabilities represents a fundamental distortion of the balance between government
surveillance authority and individual liberty central to the Fourth Amendment. And balance is the operative word.
The right of The People to be secure against unreasonable government intrusions into those places and things protected by the
Fourth Amendment must be vehemently protected. Reasonable searches, however, should not only be permitted, but they
should be mandated where necessary. Congress has the authority to ensure that such searches
are possible. While some argue that this could cause American manufacturers to suffer, saddled as they will appear to be by the
“Snowden Effect,” the rules will apply equally to any manufacturer that wishes to do business in the United States. Considering that
the United States economy is the largest in the world, it is highly unlikely that foreign manufacturers will forego access to our market
in order to avoid having to create CALEA-like solutions to allow for lawful access to encrypted data. Just as foreign cellular telephone
providers, such as T-Mobile, are active in the United States, so too will foreign device manufacturers and other communications
services adjust their technology to comply with our laws and regulations. This will put American and foreign companies on an equal
playing field while encouraging ingenuity and competition. Most importantly, “the
right of the people to be secure in
their persons, houses, papers, and effects” will be protected not only “against unreasonable searches and seizures,” but
also against attacks by criminals and terrorists. And is not this, in essence, the primary purpose of government?
Backdoors are key to security—terror turns the case
Goldsmith 13
(Jack Goldsmith. Jack Goldsmith, a contributing editor, teaches at Harvard Law School and is a member of the Hoover Institution
Task Force on National Security and Law. "We Need an Invasive NSA," New Republic. 10-10-2013.
http://www.newrepublic.com/article/115002/invasive-nsa-will-protect-us-cyber-attacks//ghs-kw)
Ever since stories about the National Security Agency’s (NSA) electronic intelligence-gathering capabilities began tumbling out last
June, The
New York Times has published more than a dozen editorials excoriating the “national surveillance state.” It
wants the NSA to end the “mass warehousing of everyone’s data” and the use of “back doors” to break encrypted
communications. A major element of the Times’ critique is that the NSA’s domestic sweeps are not justified by the terrorist threat
they aim to prevent. At the end of August, in the midst of the Times’ assault on the NSA, the newspaper suffered what it described
as a “malicious external attack” on its domain name registrar at the hands of the Syrian Electronic Army, a group of hackers who
support Syrian President Bashar Al Assad. The paper’s website was down for several hours and, for some people, much longer. “In
terms of the sophistication of the attack, this is a big deal,” said Marc Frons, the Times’ chief information officer. Ten months earlier,
hackers stole the corporate passwords for every employee at the Times, accessed the computers of 53 employees, and breached the
e-mail accounts of two reporters who cover China. “We brought in the FBI, and the FBI said this had all the hallmarks of hacking by
the Chinese military,” Frons said at the time. He also acknowledged that the hackers were in the Times system on election night in
2012 and could have “wreaked havoc” on its coverage if they wanted. Such cyber-intrusions
threaten corporate
America and the U.S. government every day. “Relentless assaults on America’s computer
networks by China and other foreign governments, hackers and criminals have created an
urgent need for safeguards to protect these vital systems,” the Times editorial page noted last year while
supporting legislation encouraging the private sector to share cybersecurity information with the government. It cited General
Keith Alexander, the director of the NSA, who had noted a 17-fold increase in cyber-intrusions on critical
infrastructure from 2009 to 2011 and who described the losses in the United States from cybertheft as “the greatest transfer of wealth in history.” If a “catastrophic cyber-attack occurs,” the
Times concluded, “Americans will be justified in asking why their lawmakers ... failed to protect
them.” When catastrophe strikes, the public will adjust its tolerance for intrusive government
measures. The Times editorial board is quite right about the seriousness of the cyber- threat
and the federal government’s responsibility to redress it. What it does not appear to realize is
the connection between the domestic NSA surveillance it detests and the governmental
assistance with cybersecurity it cherishes. To keep our computer and telecommunication
networks secure, the government will eventually need to monitor and collect intelligence on
those networks using techniques similar to ones the Times and many others find
reprehensible when done for counterterrorism ends. The fate of domestic surveillance is today
being fought around the topic of whether it is needed to stop Al Qaeda from blowing things up.
But the fight tomorrow, and the more important fight, will be about whether it is necessary to
protect our ways of life embedded in computer networks. Anyone anywhere with a connection to the Internet
can engage in cyber-operations within the United States. Most truly harmful cyber-operations, however, require
group effort and significant skill. The attacking group or nation must have clever hackers,
significant computing power, and the sophisticated software—known as “malware”—that enables the
monitoring, exfiltration, or destruction of information inside a computer. The supply of all of these
resources has been growing fast for many years—in governmental labs devoted to developing these tools and on sprawling black
markets on the Internet. Telecommunication
networks are the channels through which malware
typically travels, often anonymized or encrypted, and buried in the billions of communications that traverse the globe each
day. The targets are the communications networks themselves as well as the computers they
connect—things like the Times’ servers, the computer systems that monitor nuclear plants, classified documents on computers in
the Pentagon, the nasdaq exchange, your local bank, and your social-network providers. To keep these computers and
networks secure, the government needs powerful intelligence capabilities abroad so that it can
learn about planned cyber-intrusions. It also needs to raise defenses at home. An important first step is
to correct the market failures that plague cybersecurity. Through law or regulation, the government must improve incentives for
individuals to use security software, for private firms to harden their defenses and share information with one another, and for
Internet service providers to crack down on the botnets—networks of compromised zombie computers—that underlie many cyberattacks. More, too, must be done to prevent insider threats like Edward Snowden’s, and to control the stealth introduction of
vulnerabilities during the manufacture of computer components—vulnerabilities that can later be used as windows for cyberattacks. And yet that’s still not enough. The
U.S. government can fully monitor air, space, and sea for
potential attacks from abroad. But it has limited access to the channels of cyber-attack and
cyber-theft, because they are owned by private telecommunication firms, and because Congress strictly
limits government access to private communications. “I can’t defend the country until I’m into all the networks,” General Alexander
reportedly told senior government officials a few months ago. For Alexander, being
in the network means having
government computers scan the content and metadata of Internet communications in the
United States and store some of these communications for extended periods. Such access, he
thinks, will give the government a fighting chance to find the needle of known malware in the haystack of
communications so that it can block or degrade the attack or exploitation. It will also allow it to
discern patterns of malicious activity in the swarm of communications, even when it doesn’t
possess the malware’s signature. And it will better enable the government to trace back an
attack’s trajectory so that it can discover the identity and geographical origin of the threat.
Alexander’s domestic cybersecurity plans look like pumped-up versions of the NSA’s counterterrorism-related homeland surveillance
that has sparked so much controversy in recent months. That is why so many people in Washington think that Alexander’s vision has
“virtually no chance of moving forward,” as the Times recently reported. “Whatever trust was there is now gone,” a senior
intelligence official told Times. There are two reasons to think that these predictions are wrong and that the
government, with
extensive assistance from the NSA, will one day intimately monitor private networks. The first is that the
cybersecurity threat is more pervasive and severe than the terrorism threat and is somewhat easier to see. If the
Times’ website goes down a few more times and for longer periods, and if the next penetration
of its computer systems causes large intellectual property losses or a compromise in its
reporting, even the editorial page would rethink the proper balance of privacy and security. The
point generalizes: As cyber-theft and cyber-attacks continue to spread (and they will), and
especially when they result in a catastrophic disaster (like a banking compromise that
destroys market confidence, or a successful attack on an electrical grid), the public will
demand government action to remedy the problem and will adjust its tolerance for intrusive
government measures. At that point, the nation’s willingness to adopt some version of Alexander’s vision will depend on
the possibility of credible restraints on the NSA’s activities and credible ways for the public to monitor, debate, and approve what
the NSA is doing over time. Which leads to the second reason why skeptics about enhanced government involvement in the network might be wrong. The public mistrusts the NSA not just because of what it does, but also because of its extraordinary secrecy. To obtain the credibility it needs to secure permission from the American people to protect our networks, the NSA and the intelligence community must fundamentally recalibrate their attitude toward disclosure and scrutiny. There are signs that this is happening—and that, despite the undoubted damage he inflicted on our national security in other respects, we have Edward Snowden to thank. “Before the
unauthorized disclosures, we were always conservative about discussing specifics of our collection programs, based on the truism
that the more adversaries know about what we’re doing, the more they can avoid our surveillance,” testified Director of National
Intelligence James Clapper last month. “But the disclosures, for better or worse, have lowered the threshold for discussing these
matters in public.” In the last few weeks, the
NSA has done the unthinkable in releasing dozens of
documents that implicitly confirm general elements of its collection capabilities. These revelations
are bewildering to most people in the intelligence community and no doubt hurt some elements of collection. But they are
justified by the countervailing need for public debate about, and public confidence in, NSA activities that
had run ahead of what the public expected. And they suggest that secrecy about collection capacities is one value, but not the only
or even the most important one. They also show that not all revelations of NSA capabilities are equally harmful. Disclosure that it
sweeps up metadata is less damaging to its mission than disclosure of the fine-grained details about how it collects and analyzes that
metadata.
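(KQ) Tech note: Goldsmith's "needle of known malware in the haystack of communications" is, mechanically, signature matching over captured traffic. A toy Python sketch under invented assumptions – the byte patterns and sample payloads below are made up, and real intrusion-detection systems use far richer rule languages than raw substring checks.

# Invented signature set mapping byte patterns to descriptions; illustrative only.
SIGNATURES = {
    b"\x4d\x5a\x90\x00": "possible PE executable header in payload",
    b"cmd.exe /c": "embedded Windows shell command",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return a description for every known signature found in a captured payload."""
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

# Two hypothetical captured payloads: one benign, one carrying a known pattern.
for pkt in [b"GET /index.html HTTP/1.1", b"...cmd.exe /c del C:\\backup..."]:
    print(pkt[:24], "->", match_signatures(pkt) or "clean")

Note the card's stronger claim – spotting "patterns of malicious activity ... even when it doesn't possess the malware's signature" – is anomaly detection rather than matching, which is exactly what requires the broad network visibility Alexander wants.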
2NC AT Encryption =/= Backdoors
All our encryption args still apply
Sasso 14
(Brendan Sasso. technology correspondent for National Journal, previously covered technology policy issues for The Hill and was
a researcher and contributing writer for the 2012 edition of the Almanac of American Politics. "The NSA Isn't Just Spying on Us,
It's Also Undermining Internet Security," nationaljournal. 4-29-2014. http://www.nationaljournal.com/daily/the-nsa-isn-t-just-spying-on-us-it-s-also-undermining-internet-security-20140429//ghs-kw)
According to the leaked documents, the
NSA inserted a so-called back door into at least one encryption
standard that was developed by the National Institute of Standards and Technology. The NSA
could use that back door to spy on suspected terrorists, but the vulnerability was also available to any other
hacker who discovered it.
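(KQ) Tech note on why this card matters: if whoever designed a random-number generator can recover its internal state, every "secret" key it emits is reproducible. A simplified Python analogy, with invented values – a seeded Mersenne Twister stands in for the back-doored generator and is not the actual NIST construction at issue.

import random

STATE_KNOWN_TO_ATTACKER = 1337  # with a back door, the attacker holds this secret

# Victim generates what it believes is an unpredictable 128-bit session key.
victim = random.Random(STATE_KNOWN_TO_ATTACKER)
victim_key = victim.getrandbits(128)

# Anyone who knows the internal state regenerates the identical "random" key.
attacker = random.Random(STATE_KNOWN_TO_ATTACKER)
recovered_key = attacker.getrandbits(128)

assert victim_key == recovered_key
print(f"recovered 128-bit key: {recovered_key:#034x}")

This is the mechanic behind the card's last clause: the same predictability the NSA could exploit is "available to any other hacker who discovered it."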
2NC Turns Backdoors
Cyberattacks turn the case—public pressures for backdoors
Goldsmith 13
(Jack Goldsmith. Jack Goldsmith, a contributing editor, teaches at Harvard Law School and is a member of the Hoover Institution
Task Force on National Security and Law. "We Need an Invasive NSA," New Republic. 10-10-2013.
http://www.newrepublic.com/article/115002/invasive-nsa-will-protect-us-cyber-attacks//ghs-kw)
There are two reasons to think that these predictions are wrong and that the government, with extensive assistance from the
NSA, will one day intimately monitor private networks. The first is that the cybersecurity threat is more
pervasive and severe than the terrorism threat and is somewhat easier to see. If the Times’ website goes down a few
more times and for longer periods, and if the next penetration of its computer systems causes
large intellectual property losses or a compromise in its reporting, even the editorial page would
rethink the proper balance of privacy and security. The point generalizes: As cyber-theft and cyberattacks continue to spread (and they will), and especially when they result in a catastrophic
disaster (like a banking compromise that destroys market confidence, or a successful attack on an electrical
grid), the public will demand government action to remedy the problem and will adjust its
tolerance for intrusive government measures.
Ptix
1NC
Backdoors are popular now—national security concerns
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Thoughts on Encryption and Going Dark: Part I," Lawfare. 7-23-2015. http://www.lawfareblog.com/thoughts-encryption-andgoing-dark-part-i//ghs-kw)
In other words, I think Comey and Yates inevitably are asking for legislation, at least in the longer term. The
administration has decided not to seek it now, so the conversation is taking place at a somewhat higher level of abstraction than it
would if there were a specific legislative proposal on the table. But
the current discussion should be understood
as an effort to begin building a legislative coalition for some sort of mandate that internet
platform companies retain (or build) the ability to permit, with appropriate legal process, the capture and
delivery to law enforcement and intelligence authorities of decrypted versions of the signals
they carry. This coalition does not exist yet, particularly not in the House of Representatives. But yesterday's
hearings were striking in showing how successful Comey has been in the early phases of building
it. A lot of members are clearly concerned already. That concern will likely grow if Comey is
correct about the speed with which major investigative tools are weakening in their utility. And
it could become a powerful force in the event an attack swings the pendulum away from civil
libertarian orthodoxy.
2NC
(KQ) 1AC Macri 14 evidence magnifies the link to politics: “The U.S. Senate
voted down consideration of a bill on Tuesday that would have reigned in the
NSA’s powers to conduct domestic surveillance, upping the legal hurdles for
certain types of spying Rogers repeated Thursday he was largely uninterested
in.”
Even if backdoors are unpopular now, that will inevitably change
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Thoughts on Encryption and Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015.
http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
There's a final, non-legal factor that may push companies to work this problem as energetically
as they are now moving toward end-to-end encryption: politics. We are at a very particular
moment in the cryptography debate, a moment in which law enforcement sees a major problem
as having arrived but the tech companies see that problem as part of the solution to the
problems the Snowden revelations created for them. That is, we have an end-to-end encryption issue, in
significant part, because companies are trying to assure customers worldwide that they have their backs privacy-wise and are not
simply tools of NSA. I think those politics
are likely to change. If Comey is right and we start seeing law
enforcement and intelligence agencies blind in investigating and preventing horrible crimes and
significant threats, the pressure on the companies is going to shift. And it may shift fast and
hard. Whereas the companies now feel intense pressure to assure customers that their data is
safe from NSA, the kidnapped kid with the encrypted iPhone is going to generate a very
different sort of political response. In extraordinary circumstances, extraordinary access may
well seem reasonable. And people will wonder why it doesn't exist.
Military DA
1NC
Cyber-deterrence is strong now but keeping our capabilities in line with other
powers’ is key to maintain stability
Healey 14
(Healey, Jason. Jason Healey is a Nonresident Senior Fellow for the Cyber Statecraft Initiative of the Atlantic Council and Senior
Research Scholar at Columbia University's School of International and Public Affairs, focusing on international cooperation,
competition, and conflict in cyberspace. From 2011 to 2015, he worked as the Director of the Council's Cyber Statecraft Initiative.
Starting his career in the United States Air Force, Mr. Healey earned two Meritorious Service Medals for his early work in cyber
operations at Headquarters Air Force at the Pentagon and as a plankholder (founding member) of the Joint Task Force –
Computer Network Defense, the world's first joint cyber warfighting unit. He has degrees from the United States Air Force
Academy (political science), Johns Hopkins University (liberal arts), and James Madison University (information security).
"Commentary: Cyber Deterrence Is Working," Defense News. 7-30-2014.
http://archive.defensenews.com/article/20140730/DEFFEAT05/307300017/Commentary-Cyber-Deterrence-Working//ghs-kw)
Despite the mainstream view of cyberwar professionals and theorists, cyber
deterrence is not only possible but
has been working for decades. Cyberwar professionals are in the midst of a decades-old debate on how America
could deter adversaries from attacking us in cyberspace. In 2010, then-Deputy Defense Secretary Bill Lynn
summed up the prevailing view that “Cold War deterrence models do not apply to cyberspace” because of low barriers to entry and
the anonymity of Internet attacks. Cyber attacks, unlike intercontinental missiles, don’t have a return address. But this view is too
narrow and technical. The
history of how nations have actually fought (or not fought) conflicts in
cyberspace makes it clear deterrence is not only theoretically possible, but is actually keeping an
upper threshold to cyber hostilities. The hidden hand of deterrence is most obvious in the
discussion of “a digital Pearl Harbor.” In 2012, then-Defense Secretary Leon Panetta described
his worries of such a bolt-from-the-blue attack that could cripple the United States or its
military. Though his phrase raised eyebrows among cyber professionals, there was broad agreement with the
basic implication: The United States is strategically vulnerable and potential adversaries have
both the means for strategic attack and the will to do it. But worrying about a digital Pearl
Harbor actually dates not to 2012 but to testimony by Winn Schwartau to Congress in 1991. So
cyber experts have been handwringing about a digital Pearl Harbor for more than 20 of the 70
years since the actual Pearl Harbor. Waiting for Blow To Come? Clearly there is a different dynamic than
recognized by conventional wisdom. For over two decades, the United States has had its throat bared to
the cyber capabilities of potential adversaries (and presumably their throats are as bared to our
capabilities), yet the blow has never come. There is no solid evidence anyone has ever been killed by any cyber
attack; no massive power outages, no disruptions of hospitals or faking of hospital records, no tampering of dams causing a
catastrophic flood. The
Internet is a fierce domain and conflicts are common between nations. But
deterrence — or at least restraint — has kept a lid on the worst. Consider: ■ Large nations have
never launched strategically significant disruptive cyber attacks against other large nations. China,
Russia and the United States seem to have plans to do so not as surprise attacks from a clear sky, but as part of a major (perhaps
even existential) international security crisis — not unlike the original Pearl Harbor. Cyber
attacks between equals have
always stayed below the threshold of death and destruction. ■ Larger nations do seem to be
willing to launch significant cyber assaults against rivals but only during larger crises and below
the threshold of death and destruction, such as Russian attacks against Estonia and Georgia or China egging on
patriotic hackers to disrupt computers in dust-ups with Japan, Vietnam or the Philippines. The United States and Israel have perhaps
come closest to the threshold with the Stuxnet attacks but even here, the attacks were against a very limited target (Iranian
programs to enrich uranium) and hardly out of the blue. ■ Nations seem almost completely unrestrained using cyber espionage to
further their security (and sometimes commercial) objectives and only slightly more restrained using low levels of cyber force for
small-scale disruption, such as Chinese or Russian disruption of dissidents’ websites or British disruption of chat rooms used by
Anonymous to coordinate protest attacks. In
a discussion about any other kind of military power, such as
nuclear weapons, we would have no problem using the word deterrence to describe nations’
reluctance to unleash capabilities against one another. Indeed, a comparison with nuclear
deterrence is extremely relevant, but not necessarily the one that Cold Warriors have
recognized. Setting a Ceiling Nuclear weapons did not make all wars unthinkable, as some early
postwar thinkers had hoped. Instead, they provided a ceiling under which the superpowers
fought all kinds of wars, regular and irregular. The United States and Soviet Union, and their allies and proxies,
engaged in lethal, intense conflicts from Korea to Vietnam and through proxies in Africa, Asia and Latin America. Nuclear
warheads did not stop these wars, but did set an upper threshold neither side proved willing to
exceed. Likewise, the most cyber capable nations (including America, China and Russia) have been more
than willing to engage in irregular cyber conflicts, but have stayed well under the threshold of
strategic cyber warfare, creating a de facto norm. Nations have proved just as unwilling to
launch a strategic attack in cyberspace as they are in the air, land, sea or space. The new norm is
the same as the old norm. This norm of strategic restraint is a blessing but still is no help to deter cyber crime or the
irregular conflicts that have long occurred under the threshold. Cyber espionage and lesser state-sponsored cyber disruption seem
to be increasing markedly in the last few years.
Backdoors are key to cyberoffensive capabilities
Schneier 13
(Schneier. Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School and a program fellow at the
New America Foundation's Open Technology Institute. He is an American cryptographer, computer security and privacy specialist, and
writer. He is the author of several books on general security topics, computer security and cryptography. He is also a contributing
writer for The Guardian news organization. "US Offensive Cyberwar Policy.” 06-21-2013.
https://www.schneier.com/blog/archives/2013/06/us_offensive_cy.html//ghs-kw)
Cyberattacks have the potential to be both immediate and devastating. They can disrupt
communications systems, disable national infrastructure, or, as in the case of Stuxnet, destroy nuclear reactors;
but only if they've been created and targeted beforehand. Before launching cyberattacks against
another country, we have to go through several steps. We have to study the details of the computer
systems they're running and determine the vulnerabilities of those systems. If we can't find exploitable
vulnerabilities, we need to create them: leaving "back doors," in hacker speak. Then we have to build
new cyberweapons designed specifically to attack those systems. Sometimes we have to embed the hostile
code in those networks -- these are called "logic bombs" -- to be unleashed in the future. And we have to keep
penetrating those foreign networks, because computer systems always change and we need to
ensure that the cyberweapons are still effective. Like our nuclear arsenal during the Cold War, our
cyberweapons arsenal must be pretargeted and ready to launch. That's what Obama directed the US Cyber
Command to do. We can see glimpses of how effective we are in Snowden's allegations that the NSA
is currently penetrating foreign networks around the world: "We hack network backbones -- like
huge Internet routers, basically -- that give us access to the communications of hundreds of
thousands of computers without having to hack every single one."
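(KQ) Tech note: Schneier's first step – "study the details of the computer systems they're running" – is reconnaissance, and its most benign form is banner grabbing: asking a server what software it will admit to running. A minimal Python sketch; the hostname is a stand-in, and obviously only point this at systems you own.

import socket

def grab_http_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
    """Send a HEAD request and return the raw response headers (look for 'Server:')."""
    request = b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(grab_http_banner("example.com"))  # stand-in target

Everything after this step in Schneier's chain – creating vulnerabilities, implanting back doors and logic bombs – is the classified part, which is what makes the capability hard for the Aff to claim they can preserve while banning backdoors.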
Loss of cyber-offensive capabilities incentivizes China to take Taiwan—turns
heg and the economy
Hjortdal 11
(Magnus Hjortdal received his BSc and MSc in Political Science, with a specialization in IR, from the University of Copenhagen. He
was an Assistant Lecturer at the University of Copenhagen, a Research Fellow at the Royal Danish Defence College, and is now the
Head of the Ministry of Foreign Affairs in Denmark. “China's Use of Cyber Warfare: Espionage Meets Strategic Deterrence,”
Journal of Strategic Security, Vol. 4 No. 2, Summer 2011: Strategic Security in the Cyber Age, Article 2, pp 1-24.
http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1101&context=jss//ghs-kw)
China's military strategy mentions cyber capabilities as an area that the People's Liberation
Army (PLA) should invest in and use on a large scale.13 The U.S. Secretary of Defense, Robert Gates, has also
declared that China's development in the cyber area increasingly concerns him,14 and that there has
been a decade-long trend of cyber attacks emanating from China.15 Virtually all digital and electronic military systems can be
attacked via cyberspace. Therefore, it
is essential for a state to develop capabilities in this area if it wishes
to challenge the present American hegemony. The interesting question then is whether China is developing
capabilities in cyberspace in order to deter the United States.16 China's military strategists describe cyber
capabilities as a powerful asymmetric opportunity in a deterrence strategy.19 Analysts consider
that an "important theme in Chinese writings on computer-network operations (CNO) is the use
of computer-network attack (CNA) as the spearpoint of deterrence."20 CNA increases the
enemy's costs to become too great to engage in warfare in the first place, which Chinese
analysts judge to be essential for deterrence.21 This could, for example, leave China with the
potential ability to deter the United States from intervening in a scenario concerning Taiwan.
CNO is viewed as a focal point for the People's Liberation Army, but it is not clear how the actual capacity functions or precisely what
conditions it works under.22 If
a state with superpower potential (here China) is to create an
opportunity to ascend militarily and politically in the international system, it would require an
asymmetric deterrence capability such as that described here.23 It is said that the "most
significant computer network attack is characterized as a pre-emption weapon to be used under
the rubric of the rising Chinese strategy of […] gaining mastery before the enemy has struck."24
Therefore, China, like other states seeking a similar capacity, has recruited massively within the hacker milieu inside China.25
Increasing resources in the PLA are being allocated to develop assets in relation to cyberspace.26 The improvements are visible: The
PLA has established "information warfare" capabilities,27 with a special focus on cyber warfare that, according to their doctrine, can
be used in peacetime.28 Strategists from the PLA advocate the use of virus and hacker attacks that can paralyze and surprise its
enemies.29 Aggressive and Widespread Cyber Attacks from China and the International Response China's
use of
asymmetric capabilities, especially cyber warfare, could pose a serious threat to the American
economy.30 Research and development in cyber espionage figure prominently in the 12th Five-Year Plan (2011–2015) that is
being drafted by both the Chinese central government and the PLA.31 Analysts say that China could well have the most
extensive and aggressive cyber warfare capability in the world, and that this is being driven by
China's desire for "global-power status."32 These observations do not come out of the blue, but are a consequence of
the fact that authoritative Chinese writings on the subject present cyber warfare as an obvious asymmetric instrument for balancing
overwhelming (mainly U.S.) power, especially in case of open conflict, but also as a deterrent.33
Escalates to nuclear war and turns the economy
Landay 2k
(Jonathan S. Landay, National Security and Intelligence Correspondent, 2000. “Top Administration Officials Warn Stakes for U.S. Are High in Asian Conflicts,” Knight Ridder/Tribune News Service, March 10, p. Lexis//ghs-kw)
Few if any experts think China
and Taiwan, North Korea and South Korea, or India and Pakistan are
spoiling to fight. But even a minor miscalculation by any of them could destabilize Asia, jolt the global
economy and even start a nuclear war. India, Pakistan and China all have nuclear weapons, and
North Korea may have a few, too. Asia lacks the kinds of organizations, negotiations and
diplomatic relationships that helped keep an uneasy peace for five decades in Cold War
Europe. “Nowhere else on Earth are the stakes as high and relationships so fragile,” said Bates Gill,
director of northeast Asian policy studies at the Brookings Institution, a Washington think tank. “We see the convergence
of great power interest overlaid with lingering confrontations with no institutionalized security
mechanism in place. There are elements for potential disaster.” In an effort to cool the region’s tempers,
President Clinton, Defense Secretary William S. Cohen and National Security Adviser Samuel R. Berger all will hopscotch Asia’s
capitals this month. For
America, the stakes could hardly be higher. There are 100,000 U.S. troops in
Asia committed to defending Taiwan, Japan and South Korea, and the United States would
instantly become embroiled if Beijing moved against Taiwan or North Korea attacked South Korea. While
Washington has no defense commitments to either India or Pakistan, a conflict between the two could end the global
taboo against using nuclear weapons and demolish the already shaky international
nonproliferation regime. In addition, globalization has made a stable Asia – with its massive markets, cheap labor, exports and resources – indispensable to the U.S. economy. Numerous U.S.
firms and millions of American jobs depend on trade with Asia that totaled $600 billion last year,
according to the Commerce Department.
2NC UQ
Cyber-capabilities strong now but it’s close
NBC 13
(NBC citing Scott Borg, CEO of the US Cyber Consequences Unit, an independent, non-profit research institute. Borg has lectured
at Harvard, Yale, Columbia, London, and other leading universities. "Expert: US in cyberwar arms race with China, Russia," NBC
News. 02-20-2013. http://investigations.nbcnews.com/_news/2013/02/20/17022378-expert-us-in-cyberwar-arms-race-with-china-russia//ghs-kw)
The United States is locked in a tight race with China and Russia to build destructive
cyberweapons capable of seriously damaging other nations’ critical infrastructure, according to a
leading expert on hostilities waged via the Internet. Scott Borg, CEO of the U.S. Cyber Consequences Unit, a nonprofit institute that
advises the U.S. government and businesses on cybersecurity, said all
three nations have built arsenals of
sophisticated computer viruses, worms, Trojan horses and other tools that place them atop the
rest of the world in the ability to inflict serious damage on one another, or lesser powers.
Ranked just below the Big Three, he said, are four U.S. allies: Great Britain, Germany, Israel and perhaps
Taiwan. But in testament to the uncertain risk/reward ratio in cyberwarfare, Iran has used attacks on its nuclear program to
bolster its offensive capabilities and is now developing its own "cyberarmy," Borg said. Borg offered his assessment of the current
state of cyberwar capabilities Tuesday in the wake of a report by the American computer security company Mandiant linking hacking
attacks and cyber espionage against the U.S. to a sophisticated Chinese group known as “Peoples Liberation Army Unit 61398.” In
today’s brave new interconnected world, hackers
who can defeat security defenses are capable of
disrupting an array of critical services, including delivery of water, electricity and heat, or bringing transportation to a
grinding halt. U.S. senators last year received a closed-door briefing at which experts demonstrated how a power company
employee could take down the New York City electrical grid by clicking on a single email attachment, the New York Times reported.
U.S. officials rarely discuss offensive capability when discussing cyberwar, though several privately told NBC News recently that the
U.S. could "shut down" the electrical grid of a smaller nation -- Iran, for example – if it chose to do so.
Borg echoed that assessment, saying the U.S. cyberwarriors, who work within the National Security Agency, are “very
good across the board. … There is a formidable capability.” “Stuxnet and Flame (malware used
to disrupt and gather intelligence on Iran's nuclear program) are demonstrations of that,” he
said. “… (The U.S.) could shut down most critical infrastructure in potential adversaries relatively
quickly.”
Cyber-deterrence works now
Healey 14
(Healey, Jason. Jason Healey is a Nonresident Senior Fellow for the Cyber Statecraft Initiative of the Atlantic Council and Senior
Research Scholar at Columbia University's School of International and Public Affairs, focusing on international cooperation,
competition, and conflict in cyberspace. From 2011 to 2015, he worked as the Director of the Council's Cyber Statecraft Initiative.
Starting his career in the United States Air Force, Mr. Healey earned two Meritorious Service Medals for his early work in cyber
operations at Headquarters Air Force at the Pentagon and as a plankholder (founding member) of the Joint Task Force –
Computer Network Defense, the world's first joint cyber warfighting unit. He has degrees from the United States Air Force
Academy (political science), Johns Hopkins University (liberal arts), and James Madison University (information security).
"Commentary: Cyber Deterrence Is Working," Defense News. 7-30-2014.
http://archive.defensenews.com/article/20140730/DEFFEAT05/307300017/Commentary-Cyber-Deterrence-Working//ghs-kw)
Nations have been unwilling to take advantage of each other’s vulnerable infrastructures perhaps
because, as Joe Nye notes in his book, “The Future of Power,” “interstate deterrence through entanglement
and denial still exist” for cyber conflicts. The most capable cyber nations rely heavily on the same Internet
infrastructure and global standards (though using significant local infrastructure), so attacks above a certain threshold are not
obviously in any nation’s self-interest. In
addition, both deterrence by denial and deterrence by
punishment are in force. Despite their vulnerabilities, nations may still be able to mount effective-
enough defenses to deny any benefits to the adversary. Taking down a cyber target is
spectacularly easy and well within the capability of the proverbial “two-teenagers-in-a-basement.” But keeping a target down over time in the face of determined defenses is very
hard, demanding intelligence, battle damage assessment and the ability to keep restriking
targets over time. These capabilities are still largely the province of the great cyber powers,
meaning it can be trivially easy to determine the likely attacker. During all of the most disruptive
cyber conflicts (such as Estonia, Georgia or Stuxnet) there was quick consensus on the “obvious choice” of
which nation or nations were behind the assault. If any of those attacks had caused large
numbers of deaths or truly strategic disruption, hiding behind Internet anonymity (“It wasn’t us and
you can’t prove otherwise”) would ring flat and invite a retaliatory strike.
2NC Link - Backdoors
Backdoors and surveillance are key to winning the cyber arms race
Spiegel 15
(Spiegel Online, Hamburg, Germany. "The Digital Arms Race: NSA Preps America for Future Battle," SPIEGEL ONLINE. 1-17-2015.
http://www.spiegel.de/international/world/new-snowden-docs-indicate-scope-of-nsa-preparations-for-cyber-battle-a-1013409.html//ghs-kw)
Potential interns are also told that research into third party computers might include plans to "remotely degrade or destroy
opponent computers, routers, servers and network enabled devices by attacking the hardware." Using a program called
Passionatepolka, for example, they may be asked to "remotely brick network cards." With
programs like Berserkr they
would implant "persistent backdoors" and "parasitic drivers". Using another piece of software called Barnfire, they
would "erase the BIOS on a brand of servers that act as a backbone to many rival governments." An intern's tasks might also include
remotely destroying the functionality of hard drives. Ultimately, the goal of the internship program was "developing an attacker's
mindset." The internship listing is eight years old, but the attacker's mindset has since become a kind of doctrine for the NSA's data
spies. And the intelligence service isn't just trying to achieve mass surveillance of Internet communication, either. The digital spies of
the Five Eyes alliance -- comprised of the United States, Britain, Canada, Australia and New Zealand -- want more. The Birth of D
Weapons According to top secret documents from the archive of NSA whistleblower Edward Snowden seen exclusively by SPIEGEL,
they are
planning for wars of the future in which the Internet will play a critical role, with the aim
of being able to use the net to paralyze computer networks and, by doing so, potentially all the
infrastructure they control, including power and water supplies, factories, airports or the flow of
money. During the 20th century, scientists developed so-called ABC weapons -- atomic, biological and chemical. It took decades
before their deployment could be regulated and, at least partly, outlawed. New digital weapons have now been
developed for the war on the Internet. But there are almost no international conventions or
supervisory authorities for these D weapons, and the only law that applies is the survival of the
fittest. Canadian media theorist Marshall McLuhan foresaw these developments decades ago. In 1970, he wrote, "World War III is
a guerrilla information war with no division between military and civilian participation." That's precisely the reality that spies are
preparing for today. The US Army, Navy, Marines and Air Force have already established their own cyber forces, but it is the
NSA,
also officially a military agency, that is taking the lead. It's no coincidence that the director of the NSA also serves as the head
of the US Cyber Command. The country's leading data spy, Admiral Michael Rogers, is also its chief cyber warrior and his close to
40,000 employees are responsible for both digital spying and destructive network attacks. Surveillance only 'Phase 0' From
a
military perspective, surveillance of the Internet is merely "Phase 0" in the US digital war
strategy. Internal NSA documents indicate that it is the prerequisite for everything that follows.
They show that the aim of the surveillance is to detect vulnerabilities in enemy systems. Once
"stealthy implants" have been placed to infiltrate enemy systems, thus allowing "permanent
accesses," then Phase Three has been achieved -- a phase headed by the word "dominate" in
the documents. This enables them to "control/destroy critical systems & networks at will
through pre-positioned accesses (laid in Phase 0)." Critical infrastructure is considered by the
agency to be anything that is important in keeping a society running: energy, communications
and transportation. The internal documents state that the ultimate goal is "real time controlled
escalation". One NSA presentation proclaims that "the next major conflict will start in cyberspace." To that
end, the US government is currently undertaking a massive effort to digitally arm itself for network
warfare. For the 2013 secret intelligence budget, the NSA projected it would need around $1 billion in order to increase the strength
of its computer network attack operations. The budget included an increase of some $32 million for "unconventional solutions"
alone.
Back doors are key to cyber-warfare
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington
Post, most recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting
lecturer at Princeton’s Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic,
and Middle East correspondent, Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on
temporary assignment in 2013 and 2014 to anchor The Post's coverage of the NSA disclosures after receiving an archive of
classified documents from Edward Snowden. Ellen Nakashima is a national security reporter for The Washington Post. She
focuses on issues relating to intelligence, technology and civil liberties. She previously served as a Southeast Asia correspondent
for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a biography of Gore, and has also covered
federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy agencies mounted 231 offensive
cyber-operations in 2011, documents show," Washington Post. 8-30-2013. https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documents-show/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
“The policy debate has moved so that offensive options are more prominent now,” said former deputy
defense secretary William J. Lynn III, who has not seen the budget document and was speaking generally. “I think there’s more of a case made now that
offensive cyberoptions can be an important element in deterring certain adversaries.” Of
the 231 offensive operations
conducted in 2011, the budget said, nearly three-quarters were against top-priority targets, which former
officials say includes adversaries such as Iran, Russia, China and North Korea and activities such as
nuclear proliferation. The document provided few other details about the operations. Stuxnet, a computer worm reportedly developed by
the United States and Israel that destroyed Iranian nuclear centrifuges in attacks in 2009 and 2010, is often cited as the most dramatic use of a
cyberweapon. Experts said no other known cyberattacks carried out by the United States match the physical damage inflicted in that case. U.S. agencies
define offensive cyber-operations as activities intended “to manipulate, disrupt, deny, degrade, or destroy information resident in computers or
computer networks, or the computers and networks themselves,” according to a presidential directive issued in October 2012. Most offensive
operations have immediate effects only on data or the proper functioning of an adversary’s machine: slowing its network connection, filling its screen
with static or scrambling the results of basic calculations. Any of those could have powerful effects if they caused an adversary to botch the timing of an
attack, lose control of a computer or miscalculate locations. U.S. intelligence services are making routine use around the world of government-built
malware that differs little in function from the “advanced persistent threats” that U.S. officials attribute to China. The principal difference, U.S. officials
told The Post, is that China steals U.S. corporate secrets for financial gain. “The Department of Defense does engage” in computer network exploitation,
according to an e-mailed statement from an NSA spokesman, whose agency is part of the Defense Department. “The department does ***not***
engage in economic espionage in any domain, including cyber.” ‘Millions of implants’ The
administration’s cyber-operations
sometimes involve what one budget document calls “field operations” abroad, commonly with
the help of CIA operatives or clandestine military forces, “to physically place hardware implants
or software modifications.” Much more often, an implant is coded entirely in software by an NSA group called Tailored Access
Operations (TAO). As its name suggests, TAO builds attack tools that are custom-fitted to their targets. The
NSA unit’s software engineers would rather tap into networks than individual computers
because there are usually many devices on each network. Tailored Access Operations has
software templates to break into common brands and models of “routers, switches and firewalls
from multiple product vendor lines,” according to one document describing its work. The implants that TAO
creates are intended to persist through software and equipment upgrades, to copy stored data,
“harvest” communications and tunnel into other connected networks. This year TAO is working on implants
that “can identify select voice conversations of interest within a target network and exfiltrate select cuts,” or excerpts, according to one budget
document. In
some cases, a single compromised device opens the door to hundreds or thousands
of others. Sometimes an implant’s purpose is to create a back door for future access. “You pry
open the window somewhere and leave it so when you come back the owner doesn’t know
it’s unlocked, but you can get back in when you want to,” said one intelligence official, who was speaking generally
about the topic and was not privy to the budget. The official spoke on the condition of anonymity to discuss sensitive technology. Under U.S.
cyberdoctrine, these
operations are known as “exploitation,” not “attack,” but they are essential precursors both to attack
and defense. By the end of this year, GENIE is projected to control at least 85,000 implants in
strategically chosen machines around the world. That is quadruple the number — 21,252 — available in 2008, according to
the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were limited until recently by the need for
human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of the 68,975
machines with active implants in 2011. For
GENIE’s next phase, according to an authoritative reference
document, the NSA has brought online an automated system, code-named TURBINE, that is
capable of managing “potentially millions of implants” for intelligence gathering “and active
attack.” ‘The ROC’ When it comes time to fight the cyberwar against the best of the NSA’s global competitors, the TAO calls in its elite operators,
who work at the agency’s Fort Meade headquarters and in regional operations centers in Georgia, Texas, Colorado and Hawaii. The NSA’s
organizational chart has the main office as S321. Nearly everyone calls it “the ROC,” pronounced “rock”: the Remote Operations Center. “To the NSA as
a whole, the ROC is where the hackers live,” said a former operator from another section who has worked closely with the exploitation teams. “It’s
basically the one-stop shop for any kind of active operation that’s not defensive.” Once the hackers find a hole in an adversary’s defense,
“[t]argeted systems are compromised electronically, typically providing access to system
functions as well as data. System logs and processes are modified to cloak the intrusion,
facilitate future access, and accomplish other operational goals,” according to a 570-page budget blueprint for
what the government calls its Consolidated Cryptologic Program, which includes the NSA. Teams from the FBI, the CIA and U.S. Cyber Command work
alongside the ROC, with overlapping missions and legal authorities. So do the operators from the NSA’s National Threat Operations Center, whose
mission is focused primarily on cyberdefense. That was Snowden’s job as a Booz Allen Hamilton contractor, and it required him to learn the NSA’s best
hacking techniques. According to one key document, the
ROC teams give Cyber Command “specific target related
technical and operational material (identification/recognition), tools and techniques that allow
the employment of U.S. national and tactical specific computer network attack mechanisms.” The
intelligence community’s cybermissions include defense of military and other classified computer
networks against foreign attack, a task that absorbs roughly one-third of a total cyber operations budget of $1.02 billion in fiscal
2013, according to the Cryptologic Program budget. The ROC’s breaking-and-entering mission, supported by the GENIE infrastructure, spends nearly
twice as much: $651.7 million. Most
GENIE operations aim for “exploitation” of foreign systems, a term
defined in the intelligence budget summary as “surreptitious virtual or physical access to create
and sustain a presence inside targeted systems or facilities.” The document adds: “System logs
and processes are modified to cloak the intrusion, facilitate future access, and accomplish other
operational goals.” The NSA designs most of its own implants, but it devoted $25.1 million this year to “additional covert purchases of
software vulnerabilities” from private malware vendors, a growing gray-market industry based largely in Europe.
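Analytic note (ours, not the card's): the card's own figures explain why TURBINE matters. Using only numbers quoted above, GENIE's 1,870 operators made full use of 8,448 of the 68,975 machines with active implants in 2011:

$$\text{utilization} = \frac{8{,}448}{68{,}975} \approx 12\%$$

With human operators, roughly one implant in eight could actually be tasked at any given time. The projected jump to "potentially millions of implants" is therefore only credible with automated tasking, which is the internal link this card gives you for scale.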
2NC Link – Exports
Backdoors are inserted in US products and exported globally—Schneier
indicates backdoors in networks are key to cyber-operations
Greenwald 14
(Glenn Greenwald. Glenn Greenwald is an ex-constitutional lawyer and a contributor for the Guardian, NYT, LAT, and The
Intercept. He received his BA from George Washington University and a JD from NYU. "Glenn Greenwald: how the NSA tampers
with US-made internet routers," Guardian. 5-12-2014. http://www.theguardian.com/books/2014/may/12/glenn-greenwald-nsatampers-us-internet-routers-snowden//ghs-kw)
But while American companies were being warned away from supposedly untrustworthy Chinese routers, foreign organisations
would have been well advised to beware of American-made ones. A June 2010 report from the head of the NSA's Access and Target
Development department is shockingly explicit. The
NSA routinely receives – or intercepts – routers, servers
and other computer network devices being exported from the US before they are delivered to
the international customers. The agency then implants backdoor surveillance tools, repackages
the devices with a factory seal and sends them on. The NSA thus gains access to entire
networks and all their users. The document gleefully observes that some "SIGINT tradecraft … is very
hands-on (literally!)". Eventually, the implanted device connects back to the NSA. The report
continues: "In one recent case, after several months a beacon implanted through supply-chain
interdiction called back to the NSA covert infrastructure. This call back provided us access to
further exploit the device and survey the network." It is quite possible that Chinese firms are implanting
surveillance mechanisms in their network devices. But the US is certainly doing the same.
Routers are key—they give us access to thousands of connected devices
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington
Post, most recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting
lecturer at Princeton’s Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic,
and Middle East correspondent, Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on
temporary assignment in 2013 and 2014 to anchor The Post's coverage of the NSA disclosures after receiving an archive of
classified documents from Edward Snowden. Ellen Nakashima is a national security reporter for The Washington Post. She
focuses on issues relating to intelligence, technology and civil liberties. She previously served as a Southeast Asia correspondent
for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a biography of Gore, and has also covered
federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy agencies mounted 231 offensive
cyber-operations in 2011, documents show," Washington Post. 8-30-2013. https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documents-show/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
“The policy debate has moved so that offensive options are more prominent now,” said former deputy
defense secretary William J. Lynn III, who has not seen the budget document and was speaking generally. “I think there’s more of a case made now that
offensive cyberoptions can be an important element in deterring certain adversaries.” Of
the 231 offensive operations
conducted in 2011, the budget said, nearly three-quarters were against top-priority targets, which former
officials say includes adversaries such as Iran, Russia, China and North Korea and activities such as
nuclear proliferation. The document provided few other details about the operations. Stuxnet, a computer worm reportedly developed by
the United States and Israel that destroyed Iranian nuclear centrifuges in attacks in 2009 and 2010, is often cited as the most dramatic use of a
cyberweapon. Experts said no other known cyberattacks carried out by the United States match the physical damage inflicted in that case. U.S. agencies
define offensive cyber-operations as activities intended “to manipulate, disrupt, deny, degrade, or destroy information resident in computers or
computer networks, or the computers and networks themselves,” according to a presidential directive issued in October 2012. Most offensive
operations have immediate effects only on data or the proper functioning of an adversary’s machine: slowing its network connection, filling its screen
with static or scrambling the results of basic calculations. Any of those could have powerful effects if they caused an adversary to botch the timing of an
attack, lose control of a computer or miscalculate locations. U.S. intelligence services are making routine use around the world of government-built
malware that differs little in function from the “advanced persistent threats” that U.S. officials attribute to China. The principal difference, U.S. officials
told The Post, is that China steals U.S. corporate secrets for financial gain. “The Department of Defense does engage” in computer network exploitation,
according to an e-mailed statement from an NSA spokesman, whose agency is part of the Defense Department. “The department does ***not***
engage in economic espionage in any domain, including cyber.” ‘Millions of implants’ The
administration’s cyber-operations
sometimes involve what one budget document calls “field operations” abroad, commonly with
the help of CIA operatives or clandestine military forces, “to physically place hardware implants
or software modifications.” Much more often, an implant is coded entirely in software by an NSA group called Tailored Access
Operations (TAO). As its name suggests, TAO builds attack tools that are custom-fitted to their targets. The
NSA unit’s software engineers would rather tap into networks than individual computers
because there are usually many devices on each network. Tailored Access Operations has
software templates to break into common brands and models of “routers, switches and firewalls
from multiple product vendor lines,” according to one document describing its work. The implants that TAO
creates are intended to persist through software and equipment upgrades, to copy stored data,
“harvest” communications and tunnel into other connected networks. This year TAO is working on implants
that “can identify select voice conversations of interest within a target network and exfiltrate select cuts,” or excerpts, according to one budget
document. In
some cases, a single compromised device opens the door to hundreds or thousands
of others. Sometimes an implant’s purpose is to create a back door for future access. “You pry
open the window somewhere and leave it so when you come back the owner doesn’t know
it’s unlocked, but you can get back in when you want to,” said one intelligence official, who was speaking generally
about the topic and was not privy to the budget. The official spoke on the condition of anonymity to discuss sensitive technology. Under U.S.
cyberdoctrine, these
operations are known as “exploitation,” not “attack,” but they are essential precursors both to attack
and defense. By the end of this year, GENIE is projected to control at least 85,000 implants in
strategically chosen machines around the world. That is quadruple the number — 21,252 — available in 2008, according to
the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were limited until recently by the need for
human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of the 68,975
machines with active implants in 2011. For
GENIE’s next phase, according to an authoritative reference
document, the NSA has brought online an automated system, code-named TURBINE, that is
capable of managing “potentially millions of implants” for intelligence gathering “and active
attack.” ‘The ROC’ When it comes time to fight the cyberwar against the best of the NSA’s global competitors, the TAO calls in its elite operators,
who work at the agency’s Fort Meade headquarters and in regional operations centers in Georgia, Texas, Colorado and Hawaii. The NSA’s
organizational chart has the main office as S321. Nearly everyone calls it “the ROC,” pronounced “rock”: the Remote Operations Center. “To the NSA as
a whole, the ROC is where the hackers live,” said a former operator from another section who has worked closely with the exploitation teams. “It’s
basically the one-stop shop for any kind of active operation that’s not defensive.” Once the hackers find a hole in an adversary’s defense,
“[t]argeted systems are compromised electronically, typically providing access to system
functions as well as data. System logs and processes are modified to cloak the intrusion,
facilitate future access, and accomplish other operational goals,” according to a 570-page budget blueprint for
what the government calls its Consolidated Cryptologic Program, which includes the NSA. Teams from the FBI, the CIA and U.S. Cyber Command work
alongside the ROC, with overlapping missions and legal authorities. So do the operators from the NSA’s National Threat Operations Center, whose
mission is focused primarily on cyberdefense. That was Snowden’s job as a Booz Allen Hamilton contractor, and it required him to learn the NSA’s best
hacking techniques. According to one key document, the
ROC teams give Cyber Command “specific target related
technical and operational material (identification/recognition), tools and techniques that allow
the employment of U.S. national and tactical specific computer network attack mechanisms.” The
intelligence community’s cybermissions include defense of military and other classified computer
networks against foreign attack, a task that absorbs roughly one-third of a total cyber operations budget of $1.02 billion in fiscal
2013, according to the Cryptologic Program budget. The ROC’s breaking-and-entering mission, supported by the GENIE infrastructure, spends nearly
twice as much: $651.7 million. Most
GENIE operations aim for “exploitation” of foreign systems, a term
defined in the intelligence budget summary as “surreptitious virtual or physical access to create
and sustain a presence inside targeted systems or facilities.” The document adds: “System logs
and processes are modified to cloak the intrusion, facilitate future access, and accomplish other
operational goals.” The NSA designs most of its own implants, but it devoted $25.1 million this year to “additional covert purchases of
software vulnerabilities” from private malware vendors, a growing gray-market industry based largely in Europe.
2NC Link – Zero Days
Zero-days are key to the cyber-arsenal
Cushing 14
(Cushing, Seychelle. Cushing received her MA with Distinction in Political Science and her BA in Political Science from Simon
Fraser University. She is the Manager of Strategic Initiatives and Special Projects at the Office of the Vice-President, Research.
“Leveraging Information as Power: America’s Pursuit of Cyber Security,” Simon Fraser University. 11-28-2014.
http://summit.sfu.ca/system/files/iritems1/14703/etd8726_SCushing.pdf//ghs-kw)
Nuclear or conventional weapons, once developed, can remain dormant yet functional until needed. In comparison, the zero-days used in cyber weapons require the US to constantly discover new vulnerabilities to maintain a deployable cyber arsenal. Holding a specific zero-day does not guarantee that the vulnerability will remain unpatched for a prolonged period of time by the targeted state.59 Complicating this is the fact that undetected vulnerabilities, once acquired, are rarely used immediately given the time and resources it takes to construct a
cyber attack.60 In the time between acquisition and use, a patch for the vulnerability may be released, whether through routine
patches or a specific identification of a security hole, rendering the vulnerability obsolete. To minimize this, America
deploys
several zero-days at once in a cyber attack to increase the odds that at least one (or more) of
the vulnerabilities remains open to provide system access.6 2.4. One Attack, Multiple Vulnerabilities
Multiple backdoor entry points are preferable given that America cannot be absolutely certain
of what vulnerabilities the target system will contain62 despite extensive pre-launch cyber attack testing63 and
customization.64 A successful cyber attack needs a minimum of one undetected vulnerability to gain
access to the target system. Each successive zero-day that works adds to the strength and
sophistication of a cyber assault.65 As one vulnerability is patched, America can still rely on the other undetected
vulnerabilities to continue its cyber strike. Incorporating multiple undetected vulnerabilities into a cyber
attack reduces the need to create new cyber attacks after each zero-day fails. Stuxnet, a joint US-Israel operation, was a cyber attack designed to disrupt Iran’s progress on its nuclear weapons
program.66 The attack was designed to alter the code of Natanz’s computers and industrial control systems to induce “chronic
fatigue,” rather than destruction, of the nuclear centrifuges.67 The precision of Stuxnet ensured that all other control systems were
ignored except for those regulating the centrifuges.68 What
is notable about Stuxnet is its use of four zero-day
exploits (of which one was allegedly purchased)69 in the attack.70 That is, to target one system,
Stuxnet entered through four different backdoors. A target state aware of a specific vulnerability in its system will
enact a patch upon detection and likely assume that the problem is fixed. Exploiting multiple vulnerabilities creates
variations in how the attack is executed given that different backdoors alter how the attack
enters the target system.71 One patch does not stop the cyber attack. The use of multiple zero-days thus
capitalizes on a state’s limited awareness of the vulnerabilities in its system. Each phase of Stuxnet was different from its previous
phase which created confusion among the Iranians. Launched in 2009, Stuxnet was not discovered by the Iranians until 2010.72 Yet
even upon the initial discovery of the attack, who the attacker was remained unclear. The
failures in the Natanz centrifuges were first attributed to insider error73 and later to China74
before finally discovering the true culprits.75 The use of multiple undetected vulnerabilities helped to obscure the US and Israel as
the actual attackers.76 The
Stuxnet case helps illustrate the efficacy of zero-day attacks as a means
of attaining political goals. Although Stuxnet did not produce immediate results in terminating Iran’s nuclear program, it
helped buy time for the Americans to consider other options against Iran. A nuclear Iran would not only threaten American security
but possibly open a third conflict for America77 in the Middle East given Israel’s proclivity to strike a nuclear Iran first. Stuxnet
allowed the United States to delay Iran’s nuclear program without resorting to kinetic action.78
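Analytic note (ours, not Cushing's): a toy probability sketch of why stacking zero-days works. Assume, purely for illustration (the card gives no such figure), that each exploit independently has probability p of being patched before the attack lands. The attack is locked out only if all n exploits are closed:

$$P(\text{locked out}) = p^{n}, \qquad \text{e.g. } p = 0.5,\ n = 4 \;\Rightarrow\; 0.5^{4} \approx 6\%$$

On those hypothetical numbers, a four-exploit package like Stuxnet's gets shut out about one time in sixteen, versus one in two for a single zero-day. That is the warrant behind "one patch does not stop the cyber attack."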
Zero-days are key to effective cyber-war offensive capabilities
Gjelten 13
(Gjelten, Tom. TOM GJELTEN is a correspondent for NPR. Over the years, he has reported extensively from Europe and Latin
America, including Cuba. He was reporting live from the Pentagon when it was attacked on September 11, 2001. Subsequently, he
covered the war in Afghanistan and the Iraq invasion as NPR's lead Pentagon correspondent. Gjelten also covered the first Gulf War
and the wars in Croatia and Bosnia, Nicaragua, El Salvador, Guatemala, and Colombia. From Berlin (1990–1994), he covered
Europe’s political and economic transition after the fall of the Berlin Wall. Gjelten’s series From Marx to Markets, documenting
Eastern Europe’s transition to a market economy, earned him an Overseas Press Club award for the Best Business or
Economic Reporting in Radio or TV. His reporting from Bosnia earned him a second Overseas Press Club Award, a George Polk
Award, and a Robert F Kennedy Journalism Award. Gjelten’s books include Sarajevo Daily: A City and Its Newspaper Under Siege,
which the New York Times called “a chilling portrayal of a city’s slow murder.” His 2008 book, Bacardi and the Long Fight for
Cuba: The Biography of a Cause, was selected as a New York Times Notable Nonfiction Book. "First Strike: US Cyber Warriors Seize
the Offensive," World Affairs Journal. January/February 2013. http://www.worldaffairsjournal.org/article/first-strike-us-cyber-warriors-seize-offensive//ghs-kw)
That was then. Much
of the cyber talk around the Pentagon these days is about offensive operations.
It is no longer enough for cyber troops to be deployed along network perimeters, desperately
trying to block the constant attempts by adversaries to penetrate front lines. The US military’s
geek warriors are now prepared to go on the attack, armed with potent cyberweapons that can
break into enemy computers with pinpoint precision. The new emphasis is evident in a program launched in October
2012 by the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s experimental research arm. DARPA funding enabled the invention of
the Internet, stealth aircraft, GPS, and voice-recognition software, and the new program, dubbed Plan X, is equally ambitious. DARPA
managers said the Plan X goal was “to create revolutionary technologies for understanding,
planning, and managing cyberwarfare.” The US Air Force was also signaling its readiness to go into cyber attack mode,
announcing in August that it was looking for ideas on how “to destroy, deny, degrade, disrupt, deceive, corrupt, or usurp the adversaries [sic] ability to
use the cyberspace domain for his advantage.” The new interest in attacking enemies rather than simply defending against them has even spread to
the business community. Like their military counterparts, cybersecurity experts in the private sector have become increasingly frustrated by their
inability to stop intruders from penetrating critical computer networks to steal valuable data or even sabotage network operations. The
new
idea is to pursue the perpetrators back into their own networks. “We’re following a failed
security strategy in cyber,” says Steven Chabinsky, formerly the head of the FBI’s cyber intelligence section and now chief risk officer at
CrowdStrike, a startup company that promotes aggressive action against its clients’ cyber adversaries. “There’s no way that we are
going to win the cybersecurity effort on defense. We have to go on offense.” The growing interest in
offensive operations is bringing changes in the cybersecurity industry. Expertise in patching security flaws in one’s own computer network is out;
expertise in finding those flaws in the other guy’s network is in. Among
the “hot jobs” listed on the career page at the
National Security Agency are openings for computer scientists who specialize in “vulnerability
discovery.” Demand is growing in both government and industry circles for technologists with the skills to develop ever more sophisticated cyber
tools, including malicious software—malware—with such destructive potential as to qualify as cyberweapons when implanted in an enemy’s network.
“Offense is the biggest growth sector in the cyber industry right now,” says Jeffrey Carr, a cybersecurity analyst
and author of Inside Cyber Warfare. But have we given sufficient thought to what we are doing? Offensive operations in the cyber domain raise a host
of legal, ethical, and political issues, and governments, courts, and business groups have barely begun to consider them. The move to offensive
operations in cyberspace was actually under way even as Pentagon officials were still insisting their strategy was defensive. We just didn’t know it. The
big revelation came in June 2012, when New York Times reporter David Sanger reported that the United States and Israel were behind the
development of the Stuxnet worm, which had been used to damage computer systems controlling Iran’s nuclear enrichment facilities. Sanger,
citing members of President Obama’s national security team, said the attacks were code-named Olympic Games and constituted
“America’s first sustained use of cyberweapons.” The highly sophisticated Stuxnet worm delivered computer instructions
that caused some Iranian centrifuges to spin uncontrollably and self-destruct. According to Sanger, the secret cyber attacks had begun during the
presidency of George W. Bush but were accelerated on the orders of Obama. The publication of such a highly classified operation provoked a firestorm
of controversy, but government officials who took part in discussions of Stuxnet have not denied the accuracy of Sanger’s reporting. “He nailed it,” one
participant told me. In
the aftermath of the Stuxnet revelations, discussions about cyber war became
more realistic and less theoretical. Here was a cyberweapon that had been designed and used
for the same purpose and with the same effect as a kinetic weapon: like a missile or a bomb, it caused physical
destruction. Security experts had been warning that a US adversary could use a cyberweapon to destroy power plants, water treatment facilities, or
other critical infrastructure assets here in the United States, but the
Stuxnet story showed how the American military
itself could use an offensive cyberweapon against an enemy. The advantages of such a strike
were obvious. A cyberweapon could take down computer networks and even destroy physical
equipment without the civilian casualties that a bombing mission would entail. Used preemptively, it could
keep a conflict from evolving in a more lethal direction. The targeted country would have a hard time determining
where the cyber attack came from. In fact, the news that the United States had actually developed and used an offensive
cyberweapon gave new significance to hints US officials had quietly dropped on previous occasions about the enticing potential of such tools. In
remarks at the Brookings Institution in April 2009, for example, the then Air Force chief of staff, General Norton Schwartz, suggested that
cyberweapons could be used to attack an enemy’s air defense system. “Traditionally,”
Schwartz said, “we take down
integrated air defenses via kinetic means. But if it were possible to interrupt radar systems or
surface to air missile systems via cyber, that would be another very powerful tool in the tool kit
allowing us to accomplish air missions.” He added, “We will develop that—have [that]—capability.” A
full two years before the Pentagon rolled out its “defensive” cyber strategy, Schwartz was clearly suggesting an offensive application. The Pentagon’s
reluctance in 2011 to be more transparent about its interest in offensive cyber capabilities may simply have reflected sensitivity to an ongoing dispute
within the Obama administration. Howard Schmidt, the White House Cybersecurity Coordinator at the time the Department of Defense strategy was
released, was steadfastly opposed to any use of the term “cyber war” and had no patience for those who seemed eager to get into such a conflict. But
his was a losing battle. Pentagon
planners had already classified cyberspace officially as a fifth “domain”
of warfare, alongside land, air, sea, and space. As the 2011 cyber strategy noted, that designation “allows DoD to organize,
train, and equip for cyberspace as we do in air, land, maritime, and space to support national security interests.” That statement by itself contradicted
any notion that the Pentagon’s interest in cyber was mainly defensive. Once
the US military accepts the challenge to fight
in a new domain, it aims for superiority in that domain over all its rivals, in both offensive and
defensive realms. Cyber is no exception. The US Air Force budget request for 2013 included $4 billion in proposed spending to
achieve “cyberspace superiority,” according to Air Force Secretary Michael Donley. It is hard to imagine the US military settling for any less, given the
importance of electronic assets in its capabilities. Even small unit commanders go into combat equipped with laptops and video links. “We’re no longer
just hurling mass and energy at our opponents in warfare,” says John Arquilla, professor of defense analysis at the Naval Postgraduate School. “Now
we’re using information, and the more you have, the less of the older kind of weapons you need.” Access to data networks has given warfighters a huge
advantage in intelligence, communication, and coordination. But their dependence on those networks also creates vulnerabilities, particularly when
engaged with an enemy that has cyber capabilities of his own. “Our adversaries are probing every possible entry point into the network, looking for
that one possible weak spot,” said General William Shelton, head of the Air Force Space Command, speaking at a CyberFutures Conference in 2012. “If
we don’t do this right, these new data links could become one of those spots.” Achieving “cyber superiority” in a twenty-first-century battle space is
analogous to the establishment of air superiority in a traditional bombing campaign. Before strike missions begin against a set of targets, air
commanders want to be sure the enemy’s air defense system has been suppressed. Radar sites, antiaircraft missile batteries, enemy aircraft, and
command-and-control facilities need to be destroyed before other targets are hit. Similarly, when an information-dependent combat operation is
planned against an opposing military, the operational commanders may first want to attack the enemy’s computer systems to defeat his ability to
penetrate and disrupt the US military’s information and communication networks. Indeed, operations like this have already been carried out. A former
ground commander in Afghanistan, Marine Lieutenant General Richard Mills, has acknowledged using cyber attacks against his opponent while
directing international forces in southwest Afghanistan in 2010. “I was able to use my cyber operations against my adversary with great impact,” Mills
said, in comments before a military conference in August 2012. “I was able to get inside his nets, infect his command-and-control, and in fact defend
myself against his almost constant incursions to get inside my wire, to affect my operations.” Mills was describing offensive cyber actions. This is cyber
war, waged on a relatively small scale and at the tactical level, but cyber war nonetheless. And, as DARPA’s Plan X reveals, the US military is currently
engaged in much larger scale cyber war planning. DARPA managers want contractors to come up with ideas for mapping the digital battlefield so that
commanders could know where and how an enemy has arrayed his computer networks, much as they are now able to map the location of enemy
tanks, ships, and aircraft. Such visualizations would enable cyber war commanders to identify the computer targets they want to destroy and then
assess the “battle damage” afterwards. Plan X would also support the development of new cyber war architecture. The DARPA managers envision
operating systems and platforms with “mission scripts” built in, so that a cyber attack, once initiated, can proceed on its own in a manner “similar to
the auto-pilot function in modern aircraft.” None of this technology exists yet, but neither did the Internet or GPS when DARPA researchers first
dreamed of it. As with those innovations, the
government role is to fund and facilitate, but much of the experimental and
research work would be done in the private sector. A computer worm with a destructive code like the one Stuxnet carried can probably be designed
only with state sponsorship, in a research lab with resources like those at the NSA. But private contractors are in a position to provide many of the tools
needed for offensive cyber activity, including the software
bugs that can be exploited to provide a “back door”
into a computer’s operating system. Ideally, the security flaw or vulnerability that can be
exploited for this purpose will be one of which the network operator is totally unaware. Some
hackers specialize in finding these vulnerabilities, and as the interest in offensive cyber
operations has grown, so has the demand for their services. The world-famous hacker conference known as Defcon
attracts a wide and interesting assortment of people each year to Las Vegas: creative but often antisocial hackers who identify themselves only by their
screen names, hackers who have gone legit as computer security experts, law enforcement types, government spies, and a few curious academics and
journalists. One can learn what’s hot in the hacker world just by hanging out there. In August 2012, several attendees were seated in the Defcon cafe
when a heavy-set young man in jeans, a t-shirt, and a scraggly beard strolled casually up and dropped several homemade calling cards on the table. He
then moved to the next table and tossed down a few more, all without saying a word. There was no company logo or brand name on the card, just this
message: “Paying top dollar for 0-day and offensive technologies . . . ” The card identified the buyer as “zer0daybroker” and listed an e-mail address. A
“zero-day” is the most valuable of computer vulnerabilities, one unknown to anyone but the
researcher who finds it. Hackers prize zero-days because no one knows to have prepared a
defense against them. The growing demand for these tools has given rise to brokers like Zer0day, who identified himself in a subsequent email exchange as “Zer0 Day Haxor” but provided no other identifying information. As a broker, he probably did not intend to hack into a computer
network himself but only to act as an intermediary, connecting sellers who have discovered system vulnerabilities with buyers who want to make use of
the tools and are willing to pay a high price for them. In the past, the main market for these vulnerabilities was software firms themselves who wanted
to know about flaws in their products so that they could write patches to fix them. Big companies like Google and Microsoft employ “penetration
testers” whose job it is to find and report vulnerabilities that would allow someone to hack into their systems. In some cases, such companies have paid
a bounty to freelance cyber researchers who discover a vulnerability and alert the company engineers. But the
rise in offensive cyber
operations has transformed the vulnerability market, and hackers these days are more inclined
to sell zero-days to the highest bidder. In most cases, these are governments. The market for
back-door exploits has been boosted in large part by the burgeoning demand from militaries
eager to develop their cyber warfighting capabilities. The designers of the Stuxnet code cleared a path into Iranian
computers through the use of four or five separate zero-day vulnerabilities, an achievement that impressed security researchers around the world. The
next Stuxnet would require the use of additional vulnerabilities. “If
the president asks the US military to launch a cyber
operation in Iran tomorrow, it’s not the time to start looking for exploits,” says Christopher
Soghoian, a Washington-based cybersecurity researcher. “They need to have the exploits
ready to go. And you may not know what kind of computer your target uses until you get
there. You need a whole arsenal [of vulnerabilities] ready to go in order to cover every
possible configuration you may meet.” Not surprisingly, the National Security Agency—buying through defense
contractors—may well be the biggest customer in the vulnerability market, largely because it pays
handsomely. The US military’s dominant presence in the market means that other possible
purchasers cannot match the military’s price. “Instead of telling Google or Mozilla about a flaw
and getting a bounty for two thousand dollars, researchers will sell it to a defense contractor
like Raytheon or SAIC and get a hundred thousand for it,” says Soghoian, now the principal technologist in the Speech,
Privacy and Technology Project at the American Civil Liberties Union and a prominent critic of the zero-day market. “Those companies will then turn
around and sell the vulnerability upstream to the NSA or another defense agency. They will outbid Google every time.”
2NC China
Cyber capabilities are key to deterrence and defending against China
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior
Fellow at RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National
Security Policy. Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University.
Martin Libicki received his PhD in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his
BSc in Mathematics from MIT. He is a Professor at the RAND Graduate School and a Senior Management Scientist at RAND.
“Waging Cyber War the American Way,” Survival: Global Politics and Strategy. August–September 2015. Vol. 57, No. 4, pp. 7-28.
07-22-2015. http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-august-september-2015-c6ba/57-4-02-gompert-and-libicki-eab1//ghs-kw)
At the same time, the
United States regards cyber war during armed conflict with a cyber-capable
enemy as probable, if not inevitable. It both assumes that the computer systems on which its own
forces rely to deploy, receive support and strike will be attacked, and intends to attack the
computer systems that enable opposing forces to operate as well. Thus, the United States has said
that it can and would conduct cyber war to ‘support operational and contingency plans’ – a euphemism for attacking computer
systems that enable enemy war fighting. US military doctrine now regards ‘non-kinetic’ (that is, cyber)
measures as an integral aspect of US joint offensive operations.8 Even so, the stated purposes of the US military
regarding cyber war stress protecting the ability of conventional military forces to function as they should, as well as avoiding and preventing
escalation, especially to non-military targets. Apart from its preparedness to conduct counter-military cyber operations during wartime, the United
States has been reticent about using its offensive capabilities. While it has not excluded conducting cyber operations to coerce hostile states or nonstate actors, it has yet to brandish such a threat.9 Broadly
speaking, US policy is to rely on the threat of retaliation
to deter a form of warfare it is keen to avoid. Chinese criticism that the US retaliatory policy and capabilities ‘will up the ante
on the Internet arms race’ is disingenuous in that China has been energetic in forming and using capabilities for cyber operations.10 Notwithstanding the defensive bias in US attitudes toward cyber war, the dual missions of deterrence and preparedness for offensive
operations during an armed conflict warrant maintaining superb, if not superior, offensive capabilities. Moreover, the case can be made – and we have
made it – that the
United States should have superiority in offensive capabilities in order to control
escalation.11 The combination of significant capabilities and declared reluctance to wage cyber war raises a question that is not answered by any
US official public statements: when it comes to offence, what are US missions, desired effects, target sets and restraints – in short, what is US policy?
To be clear, we do not take issue with the basic US stance of being at once wary and capable of
cyber war. Nor do we think that the United States should advertise exactly when and how it
would conduct offensive cyber war. However, the very fact that the United States maintains options for offensive operations
implies the need for some articulation of policy. After all, the United States was broadly averse to the use of nuclear weapons during the Cold War, yet
it elaborated a declaratory policy governing such use to inform adversaries, friends and world opinion, as well as to forge domestic consensus. Indeed,
if the United States wants to discourage and limit cyber war internationally, while keeping its options open, it must offer an example. For that matter,
the American people deserve to know what national policy on cyber war is, lest they assume it is purely defensive – or just too esoteric to comprehend.
Whether to set a normative example, warn potential adversaries or foster national consensus, US policy on waging cyber war should be coherent. At
the same time, it must encompass three distinguishable offensive missions: wartime counter-military operations, which the United States intends to
conduct; retaliatory missions, which the US must have the will and ability to conduct for reasons of deterrence; and coercive missions against hostile
states, which could substitute for armed attack.12 Four cases serve to highlight the relevant issues and to inform the elaboration of an overall policy to
guide US conduct of offensive cyber war. The first involves wartime counter-military cyber operations against a cyber-capable opponent, which may
also be waging cyber war; the second involves retaliation against a cyber-capable opponent for attacking US systems other than counter-military ones;
the third involves coercion of a ‘cyber-weak’ opponent with little or no means to retaliate against US cyber attack; and the fourth involves coercion of a
‘cyber-strong’ opponent with substantial means to retaliate against US cyber attack. Of these, the first and fourth imply a willingness to initiate cyber
war. Counter-military cyber war during wartime Just as cyber war is war, armed hostilities will presumably include cyber war if the belligerents are both
capable of and vulnerable to it. The reason for such certainty is that impairing opposing military forces’ use of computer systems is operationally
compelling. Forces with requisite technologies and skills benefit enormously from data communications and computation for command and control,
intelligence, surveillance and reconnaissance (ISR), targeting, navigation, weapon guidance, battle assessment and logistics management, among other
key functions. If the performance of forces is dramatically enhanced by such systems, it follows that degrading them can provide important military
advantages. Moreover, allowing an enemy to use cyber war without reciprocating could mean military defeat. Thus, the
United States and
other advanced states are acquiring capabilities not only to use and protect computer systems,
but also to disrupt those used by enemies. The intention to wage cyber war is now prevalent in
Chinese planning for war with the United States – and vice versa. Chinese military planners have
long made known their belief that, because computer systems are essential for effective US
military operations, they must be targeted. Chinese cyber capabilities may not (yet) pose a
threat to US command, control, communications, computers, intelligence, surveillance and
reconnaissance (C4ISR) networks, which are well partitioned and protected. However, the
networks that enable logistical support for US forces are inviting targets. Meant to disable US military
operations, Chinese use of cyber war during an armed conflict would not be contingent on US cyber
operations. Indeed, it could come early, first or even as a precursor of armed hostilities. For its
part, the US military is increasingly aware not only that sophisticated adversaries like China can
be expected to use cyber war to degrade the performance of US forces, but also that US forces
must integrate cyber war into their capabilities and operations. Being more dependent on computer networks to
enhance military performance than are its adversaries, including China, US forces have more to lose than to gain from the outbreak of cyber war during
an armed conflict. This being so, would it make sense for the United States to wait and see if the enemy resorts to cyber war before doing so itself?
Given US conventional military superiority, it can be assumed that any adversary that can use
cyber war against US forces will do so. Moreover, waiting for the other side to launch a cyber attack could be disadvantageous
insofar as US forces would be the first to suffer degraded performance. Thus, rather than waiting, there will be pressure for the United States to
commence cyber attacks early, and perhaps first. Moreover, leading US military officers have strongly implied that cyber war would have a role in
attacking enemy anti-access and area-denial (A2AD) capabilities irrespective of the enemy’s use of cyber war.13 If the United States is prepared to
conduct offensive cyber operations against a highly advanced opponent such as China, it stands to reason that it would do likewise against lesser
opponents. In sum, offensive cyber war is becoming part and parcel of the US war-fighting doctrine. The
nature of US counter-military cyber attacks during wartime should derive from the mission of gaining, or denying the
opponent, operational advantage. Primary targets of the United States should mirror those of a cyber-capable adversary: ISR,
command and control, navigation and guidance, transport and logistics support. Because this mission is not coercive or strategic in nature, economic
and other civilian networks should not be targeted. However, to the extent that networks that enable military operations may be multipurpose,
avoidance of non-military harm cannot be assured. There are no sharp ‘firebreaks’ in cyber war.14
China would initiate preemptive cyber strikes on the US
Freedberg 13
(Freedberg, Sydney J. Sydney J. Freedberg Jr. is the deputy editor for Breaking Defense. He graduated summa cum laude from
Harvard with an AB in History and holds an MA in Security Studies from Georgetown University and a MPhil in European Studies
from Cambridge University. During his 13 years at National Journal magazine, he wrote his first story about what became known
as "homeland security" in 1998, his first story about "military transformation" in 1999, and his first story on "asymmetrical
warfare" in 2000. Since 2004 he has conducted in-depth interviews with more than 200 veterans of Afghanistan and Iraq about
their experiences, insights, and lessons-learned, writing stories that won awards from the association of Military Reporters &
Editors in 2008 and 2009, as well as an honorable mention in 2010. "China’s Fear Of US May Tempt Them To Preempt:
Sinologists," Breaking Defense. 10-1-2013. http://breakingdefense.com/2013/10/chinas-fear-of-us-may-tempt-them-to-preempt-sinologists/2///ghs-kw)
WASHINGTON: Because
China believes it is much weaker than the United States, they are more
likely to launch a massive preemptive strike in a crisis. Here’s the other bad news: The current US concept
for high-tech warfare, known as Air-Sea Battle, might escalate the conflict even further towards a “limited” nuclear
war, says one of the top American experts on the Chinese military. [This is one in an occasional series on the crucial strategic
relationship and the military capabilities of the US, its allies and China.] What US analysts call an “anti-access/area denial” strategy is
what China calls “counter-intervention” and “active defense,” and the
Chinese approach is born of a deep sense of
vulnerability that dates back 200 years, China analyst Larry Wortzel said at the Institute of World Politics: “The
People’s Liberation Army still sees themselves as an inferior force to the American military, and
that’s who they think their most likely enemy is.” That’s fine as long as it deters China from attacking its
neighbors. But if deterrence fails, the Chinese are likely to go big or go home. Chinese military
history from the Korean War in 1950 to the Chinese invasion of Vietnam in 1979 to more recent,
albeit vigorous but non-violent, grabs for the disputed Scarborough Shoal suggests a preference
for a sudden use of overwhelming force at a crucial point, what Clausewitz would call the enemy’s “center of
gravity.” “What they do is very heavily built on preemption,” Wortzel said. “The problem with striking the
enemy’s center of gravity is, for the United States, they see it as being in Japan, Hawaii, and the
West Coast….That’s very escalatory.” (Students of the American military will nod sagely, of course, as we remind
everyone that President George Bush made preemption a centerpiece of American strategy after the terror attacks of 2001.)
Wortzel argued that the current version of US Air-Sea Battle concept is also likely to lead to escalation. “China’s dependent on these
ballistic missiles and anti-ship missiles and satellite links,” he said. Since those are almost all land-based, any attack on them
“involves striking the Chinese mainland, which is pretty escalatory.” “You don’t know how they’re going to react,” he said. “They do
have nuclear missiles. They actually think we’re more allergic to nuclear missiles landing on our soil than they are on their soil. They
think they can withstand a limited nuclear attack, or even a big nuclear attack, and retaliate.” What War Would Look Like So
how
would China’s preemptive attack unfold? First would come weeks of escalating rhetoric and
cyberattacks. There’s no evidence the Chinese favor a “bolt out of the blue” without giving the
adversary what they believe is a chance to back down, agreed retired Rear Adm. Michael McDevitt and Dennis
Blasko, former Army defense attache in Beijing, speaking on a recent Wilson Center panel on Chinese strategy where they agreed on
almost nothing else. That’s not much comfort, though, considering that Imperial Japan showed clear signs they might attack and still
caught the US flat-footed at Pearl Harbor. When
the blow does fall, the experts believe it would be sudden.
Stuxnet-style viruses, electronic jamming, and Israeli-designed Harpy radar-seeking cruise missiles (similar to the American
HARM but slower and longer-ranged) would try to blind every land-based and shipborne radar. Long-range
anti-aircraft missiles like the Russian-built S-300 would go for every plane currently in the air within 125 miles of China’s coast, a
radius that covers all of Taiwan and some of Japan. Salvos of ballistic missiles would strike every airfield within 1,250 miles. That’s
enough range to hit the four US airbases in Japan and South Korea – which are, after all, static targets you can look up on Google
Maps – to destroy aircraft on the ground, crater the runways, and scatter the airfield with unexploded cluster bomblets to defeat
repair attempts. Long-range cruise missiles launched from shore, ships, and submarines then go after naval vessels. And if the
Chinese get really good and really lucky, they just might get a solid enough fix on a US Navy aircraft carrier to lob a precision-guided
ballistic missile at it. But would this work? Maybe. “This is fundamentally terra incognita,” Heritage Foundation research fellow Dean
Cheng told me. There has been no direct conventional clash between major powers since Korea in the 1950s, no large-scale use of
anti-ship missiles since the Falklands in 1982, and no war ever where both sides possessed today’s space, cyber, electronic warfare,
and precision-guided missile capabilities. Perhaps the least obvious but most critical uncertainty in a Pacific war would be invisible.
“I don’t think we’ve seen electronic warfare on a scale that we’d see in a US-China
confrontation,” said Cheng. “I doubt very much they are behind us when it comes to electronic
warfare, [and] the Chinese are training every day on cyber: all those pings, all those attacks, all
those attempts to penetrate.” While the US has invested heavily in jamming and spoofing over the last decade, much of
the focus has been on how to disable insurgents’ roadside bombs, not on how to counter a high-tech nation-state. China,
however, has focused its electronic warfare and cyber attack efforts on the United States.
Conceptually, China may well be ahead of us in linking the two. (F-35 supporters may well disagree with this
conclusion.) Traditional radar jammers, for example, can also be used to insert viruses into the highly computerized AESA radars
(active electronically scanned array) that are increasingly common in the US military. “Where there has
been a fundamental difference, and perhaps the Chinese are better than we are at this, is the Chinese
seem to have kept cyber and electronic warfare as a single integrated thing,” Cheng said. “We are only
now coming round to the idea that electronic warfare is linked to computer network operations.” In a battle for the electromagnetic
spectrum, Cheng said, the worst case
“is that you thought your jammers, your sensors, everything was
working great, and the next thing you know missiles are penetrating [your defenses], planes are
being shot out of the sky.”
China/Taiwan war goes nuclear
Glaser 11
(Charles, Professor of Political Science and International Affairs at the Elliott School of International Affairs at George Washington
University, Director of the Institute for Security and Conflict Studies, “Will China’s Rise lead to War? ,” Foreign Affairs March/April
2011, http://web.clas.ufl.edu/users/zselden/coursereading2011/Glaser.pdf)
THE PROSPECTS for avoiding intense military competition and war may be good, but growth in China's power may nevertheless
require some changes in U.S. foreign policy that Washington will find disagreeable--particularly regarding Taiwan. Although it lost
control of Taiwan during the Chinese Civil War more than six decades ago, China still considers Taiwan to be part of its
homeland, and unification remains a key political goal for Beijing. China has made clear that it will use force
if Taiwan declares independence, and much of China's conventional military buildup has been
dedicated to increasing its ability to coerce Taiwan and reducing the United States' ability to intervene.
Because China places such high value on Taiwan and because the United States and China--whatever they might formally agree
to--have such different attitudes regarding the legitimacy of the status quo, the issue poses special dangers and
challenges for the U.S.-Chinese relationship, placing it in a different category than Japan or South Korea. A
crisis over Taiwan could fairly easily escalate to nuclear war, because each step along the way might well seem
rational to the actors involved. Current U.S. policy is designed to reduce the probability that Taiwan will declare
independence and to make clear that the United States will not come to Taiwan's aid if it does. Nevertheless, the United
States would find itself under pressure to protect Taiwan against any sort of attack, no matter how it originated. Given the
different interests and perceptions of the various parties and the limited control Washington has over Taipei's behavior, a
crisis could unfold in which the United States found itself following events rather than leading them. Such dangers have been
around for decades, but ongoing improvements in China's military capabilities may make Beijing more willing to escalate a Taiwan
crisis. In addition to its improved conventional capabilities, China is modernizing its nuclear forces to increase their ability to
survive and retaliate following a large-scale U.S. attack. Standard deterrence theory holds that Washington's
current ability to destroy most or all of China's nuclear force enhances its bargaining position. China's nuclear
modernization might remove that check on Chinese action, leading Beijing to behave more
boldly in future crises than it has in past ones. A U.S. attempt to preserve its ability to defend Taiwan,
meanwhile, could fuel a conventional and nuclear arms race. Enhancements to U.S. offensive targeting
capabilities and strategic ballistic missile defenses might be interpreted by China as a signal of malign U.S. motives, leading to
further Chinese military efforts and a general poisoning of U.S.-Chinese relations.
2NC Cyber-Deterrence
Cyber-offensive strengths are key to cyber-deterrence and minimizing damage
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior
Fellow at RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National
Security Policy. Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University.
Martin Libicki received his PhD in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his
BSc in Mathematics from MIT. He is a Professor at the RAND Graduate School and a Senior Management Scientist at RAND.
“Waging Cyber War the American Way,” Survival: Global Politics and Strategy. August–September 2015. Vol. 57, No. 4, pp. 7-28.
07-22-2015. http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-augustseptember-2015-c6ba/57-4-02-gompert-and-libicki-eab1//ghs-kw)
Even with effective C2, there is
a danger that US counter-military cyber operations will infect and
damage systems other than those targeted, including civilian systems, because of the technical
difficulties of controlling effects, especially for systems that support multiple services. As we have
previously noted in these pages, ‘an attack that uses a replicable agent, such as a virus or worm, has substantial potential to spread,
perhaps uncontrollably’.19 The dangers of collateral damage on non-combatants imply not only the possibility of violating the laws
of war (as they might apply to cyber war), but also of provoking escalation. While the United States would like there to be strong
technical and C2 safeguards against unwanted effects and thus escalation, it is not clear that there are. It follows that US
doctrine concerning the conduct of wartime counter-military offensive operations must account
for these risks. This presents a dilemma, for dedicated military systems tend to be harder to access and
disrupt than multipurpose or civilian ones. China’s military, for example, is known for its
attention to communications security, aided by its reliance on short-range and land-based (for
example, fibre-optical) transmission of C4ISR. Yet, to attack less secure multipurpose systems on
which the Chinese military depends for logistics is to risk collateral damage and heighten the risk
of escalation. Faced with this dilemma, US policy should be to exercise care in attacking military networks that
also support civilian services. The better its offensive cyber-war capabilities, the more able the United
States will be to disrupt critical enemy military systems and avoid indiscriminate effects.
Moreover, US offensive strength could deter enemy escalation. As we have argued before, US
superiority in counter-military cyber war would have the dual advantage of delivering
operational benefits by degrading enemy forces and averting a more expansive cyber war than
intended. While the United States should avoid the spread of cyber war beyond military
systems, it should develop and maintain an unmatched capability to conduct counter-military
cyber war. This would give it operational advantages and escalation dominance. Such
capabilities might enable the United States to disrupt enemy C4ISR systems used for the control
and operation of nuclear forces. However, to attack such systems would risk causing the enemy to perceive that the
United States was either engaged in a non-nuclear-disarming first strike or preparing for a nuclear-disarming first strike. Avoiding
such a misperception requires the avoidance of such systems, even if they also support enemy non-nuclear C4ISR (as China’s may
do). In sum, US
policy should be to create, maintain and be ready to use superior cyber-war
capabilities for counter-military operations during armed conflict. Such an approach would deny
even the most capable of adversaries, China included, an advantage by resorting to cyber war in
an armed conflict. The paramount goal of the United States should be to retain its military
advantage in the age of cyber war – a tall order, but a crucial one for US interests.
2NC Russia
Deterrence solves cyber-war and Russian aggression
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior
Fellow at RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National
Security Policy. Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University.
Martin Libicki received his PhD in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his
BSc in Mathematics from MIT. He is a Professor at the RAND Graduate School and a Senior Management Scientist at RAND.
“Waging Cyber War the American Way,” Survival: Global Politics and Strategy. August–September 2015. Vol. 57, No. 4, pp. 7-28.
07-22-2015. http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-augustseptember-2015-c6ba/57-4-02-gompert-and-libicki-eab1//ghs-kw)
Retaliation While
the United States should be ready to conduct cyber attacks against military forces
in an armed conflict, it should in general otherwise try to avoid and prevent cyber war. (Possible
exceptions to this posture of avoidance are taken up later in the cases concerning coercion.) In keeping with its commitment to an
‘open, secure, interoperable and reliable internet that enables prosperity, public safety, and the free flow of commerce and ideas’,
the United States should seek to minimise the danger of unrestricted cyber war, in which critical
economic, governmental and societal systems and services are disrupted.20 Given how difficult it is to
protect such systems, the United States must rely to a heavy extent on deterrence and thus the
threat of retaliation. To this end, the US Defense Department has stated that a would-be attacker could ‘suffer
unacceptable costs’ if it launches a cyber attack on the United States.21 While such a warning is worth
issuing, it raises the question of how these ‘unacceptable costs’ could be defined and levied. Short of disclosing specific targets and
methods, which we do not advocate, the United States could strengthen both the deterrence it seeks and the norms it favours by
indicating what actions might constitute retaliation. This is especially important because the most vulnerable targets of cyber
retaliation are computer networks that serve civilian life, starting with the internet. By definition, cyber
retaliation that
extends beyond military capabilities, as required for strong deterrence, might be considered
indiscriminate. Whether it is also disproportionate depends in part on the enemy attack that precipitated it. We can posit, for
purposes of analysis, that an enemy attack would be aimed at causing severe disruptions of such economic and societal functions as
financial services, power-grid management, transport systems, telecommunications services, media and government services, along
with the expected military and intelligence functions. In considering how the United States should retaliate, the distinction between
the population and the state of the attacker is useful. The United States would hold the latter, not the former, culpable, and thus the
rightful object of retaliation. This would suggest targeting propaganda and other societal-control systems; government financial
systems; state access to banks; political and economic elites on which the state depends; industries on which the state depends,
especially state-owned enterprises; and internal security forces and functions. To judge how effective such a retaliation strategy
could be, consider
the case of Russia. The Russian state is both sprawling and centralised: within
Russia’s economy and society, it is pervasive, heavy-handed and exploitative; power is
concentrated in the Kremlin; and elites of all sorts are beholden to it. Although the Russian state
is well entrenched and not vulnerable to being overthrown, it is porous and exposed, especially
in cyberspace. Even if the computer systems of the innermost circle of Russian state decision-making may be inaccessible, there are many important systems that are not. Insofar as those who
control the Russian state are more concerned about their own well-being than that of the ‘masses’, targeting their apparatus would
cause acute apprehension. Of course, the more important a computer system is to the state, the less accessible it is likely to be. Still,
even if Russia were to launch indiscriminate cyber attacks on the US economy and society, the
United States might get more bang for its bytes by retaliating against systems that support
Russian state power. Of course, US cyber targeting could also include the systems on which Russian
leaders rely to direct military and other security forces, which are the ultimate means of state power and
control. Likewise, Russian military and intelligence systems would be fair game for retaliation. At the
same time, it would be vital to observe the stricture against disabling nuclear C2 systems, lest the Kremlin perceive that a US
strategic strike of some sort was in the works. With this exception, the
Russian state’s cyber vulnerabilities should
be exploited as much as possible. The United States could thus not only meet the standard of
‘unacceptable costs’ on which deterrence depends, but also gain escalation control by giving
Russia’s leaders a sense of their vulnerability. In addition to preventing further escalation, this US targeting
strategy would meet, more or less, normative standards of discrimination and proportionality.
And the cyberthreat is real – multiple countries and terrorists are acquiring
capabilities – increasing the risk of nuclear war and collapsing
agriculture and the power grid
Habiger, 2k10
(Eugene – Retired Air Force General, Cyberwarfare and Cyberterrorism, The Cyber Security Institute, p. 11-19)
However, there are reasons to believe that what is going on now amounts to a fundamental shift as opposed to business as usual. Today’s network exploitation or information operation trespasses
possess a number of characteristics that suggest that the line between espionage and conflict has been, or is close to being, crossed. (What that suggests for the proper response is a different matter.)
First, the number of cyberattacks we are facing is growing significantly. Andrew Palowitch, a
former CIA official now consulting with the US Strategic Command (STRATCOM), which oversees the Defense Department’s Joint Task Force-Global Network Operations,
recently told a meeting of experts that the Defense Department has experienced almost 80,000
computer attacks, and some number of these assaults have actually “reduced” the military’s
“operational capabilities.”20 Second, the nature of these attacks is starting to shift from penetration
attempts aimed at gathering intelligence (cyber spying) to offensive efforts aimed at taking down systems (cyberattacks). Palowitch put this in stark terms last November,
“We are currently in a cyberwar and war is going on today.”21 Third, these recent attacks need to be taken in a broader strategic context. Both Russia and China have
stepped up their offensive efforts and taken a much more aggressive cyberwarfare
posture. The Chinese have developed an openly discussed cyberwar strategy aimed at achieving electronic dominance over the U.S. and its allies by 2050. In 2007 the Department of Defense
reported that for the first time China has developed first strike viruses, marking a major shift from
prior investments in defensive measures.22 And in the intervening period China has launched a series of offensive cyber operations against U.S.
government and private sector networks and infrastructure. In 2007, Gen. James Cartwright, the former head of STRATCOM and now the Vice Chairman of the Joint Chiefs of Staff, told the US-China
Economic and Security Review Commission that China’s ability to launch “denial of service” attacks to overwhelm an IT system is of particular concern. 23
Russia also has already
begun to wage offensive cyberwar. At the outset of the recent hostilities with Georgia, Russian assets launched a series of cyberattacks against the
Georgian government and its critical infrastructure systems, including media, banking and transportation sites.24 In 2007, cyberattacks that many experts attribute, directly or indirectly, to Russia shut
down the Estonia government’s IT systems. Fourth, the current geopolitical context must also be factored into any effort to gauge the degree of threat of cyberwar. The start of the new Obama
Administration has begun to help reduce tensions between the United States and other nations. And, the new administration has taken initial steps to improve bilateral relations specifically with both
China and Russia. However, it must be said that over the last few years the posture of both the Chinese and Russian governments toward America has clearly become more assertive, and at times even
aggressive. Some commentators have talked about the prospects of a cyber Pearl Harbor, and the pattern of
Chinese and Russian behavior to date gives reason for concern along these lines: both nations have
offensive cyberwarfare strategies in place; both nations have taken the cyber equivalent of
building up their forces; both nations now regularly probe our cyber defenses looking for gaps to be exploited; both
nations have begun taking actions that cross the line from cyberespionage to cyberaggression; and, our bilateral relations with both
nations are increasingly fractious and complicated by areas of marked, direct competition. Clearly, there are sharp
differences between current U.S. relations with these two nations and relations between the US and Japan just prior to World War II. However, from a strategic defense perspective, there are enough
warning signs to warrant preparation. In addition to the threat of cyberwar, the limited resources required to carry out even a large scale cyberattack also
makes likely the potential for a significant cyberterror attack against the United States. However, the lack of a
long list of specific incidences of cyberterrorism should provide no comfort. There is strong evidence to suggest that al Qaeda
has the ability to conduct cyberterror attacks against the United States and its allies. Al Qaeda and other terrorist organizations are
extremely active in cyberspace, using these technologies to communicate among themselves and others, carry out logistics, recruit members, and wage information warfare. For example, al Qaeda
leaders used email to communicate with the 9-11 terrorists and the 9-11 terrorists used the Internet to make travel plans and book flights. Osama bin Laden and other al Qaeda members routinely post
videos and other messages to online sites to communicate. Moreover, there is evidence of efforts that al Qaeda and other
terrorist organizations are actively developing cyberterrorism capabilities and seeking to carry
out cyberterrorist attacks. For example, the Washington Post has reported that “U.S. investigators have found evidence in the logs that mark a browser's path through the Internet that al Qaeda
operators spent time on sites that offer software and programming instructions for the digital switches that run power, water, transport and communications grids. In some interrogations . . . al Qaeda
prisoners have described intentions, in general terms, to use those tools.”25 Similarly, a 2002 CIA report on the cyberterror threat to a member of the Senate stated that al Qaeda and Hezbollah have
become "more adept at using the internet and computer technologies.”26 The FBI has issued bulletins stating that, “U. S. law enforcement and intelligence agencies have received indications that Al
Qaeda members have sought information on Supervisory Control And Data Acquisition (SCADA) systems available on multiple SCADA-related web sites.”27 In addition a number of jihadist websites,
such as 7hj.7hj.com, teach computer attack and hacking skills in the service of Islam.28 While al Qaeda may lack the cyber-attack capability of nations like Russia and China, there is every reason to
believe its operatives, and those of its ilk, are as capable as the cyber criminals and hackers who routinely effect great harm on the world’s digital infrastructure generally and American assets
specifically. In fact, perhaps, the most troubling indication of the level of the cyberterrorist threat is the countless, serious non­terrorist cyberattacks routinely carried out by criminals, hackers,
disgruntled insiders, crime syndicates and the like. If run-of-the-mill criminals and hackers can threaten power grids, hack vital military networks, steal vast sums of money, take down a city’s traffic
lights, compromise the Federal Aviation Administration’s air traffic control systems, among other attacks, it is overwhelmingly likely that terrorists can carry out similar, if not more malicious attacks.
Moreover, even if the world’s terrorists are unable to breed these skills, they can certainly buy them. There are untold numbers of cybermercenaries around the world—sophisticated hackers with
advanced training who would be willing to offer their services for the right price. Finally, given the nature of our understanding of cyber threats, there is always the possibility that we have already been
the victim of a cyberterrorist attack, or such an attack has already been set but not yet effectuated, and we don’t know it yet. Instead, a well-designed cyberattack has the
capacity to cause widespread chaos, sow societal unrest, undermine national governments, spread paralyzing fear and
anxiety, and create a state of utter turmoil, all without taking a single life. A sophisticated cyberattack
could throw a nation’s banking and finance system into chaos causing markets to crash,
prompting runs on banks, degrading confidence in markets, perhaps even putting the nation’s currency in play and making the government look helpless
and hapless. In today’s difficult economy, imagine how Americans would react if vast sums of money were
taken from their accounts and their supporting financial records were destroyed. A truly nefarious cyberattacker could carry out an attack in such a way (akin to Robin Hood) as to engender
populist support and deepen rifts within our society, thereby making efforts to restore the system all the more difficult. A modestly advanced enemy
could use a cyberattack to shut down (if not physically damage) one or more regional power grids. An entire region could be cast into
total darkness, power-dependent systems could be shutdown. An attack on one or more regional power grids could also cause cascading
effects that could jeopardize our entire national grid. When word leaks that the
blackout was caused by a cyberattack, the specter of a foreign enemy capable of sending the entire nation into darkness
would only increase the fear, turmoil and unrest. While the finance and energy sectors are considered prime targets for a
cyberattack, an attack on any of the 17 delineated critical infrastructure sectors could have a major impact on the United States. For example, our healthcare system is already technologically driven and
the Obama Administration’s e-health efforts will only increase that dependency. A cyberattack on the U.S. e-health infrastructure could send our healthcare system into chaos and put countless lives
at risk. Imagine if emergency room physicians and surgeons were suddenly no longer able to access vital patient information.
A cyberattack on our nation’s water
systems could likewise cause widespread disruption. An attack on the control systems for one or more dams could put entire
communities at risk of being inundated, and could create ripple effects across the water, agriculture, and
energy sectors. Similar water control system attacks could be used to at least temporarily deny water to
otherwise arid regions, impacting everything from the quality of life in these areas to agriculture. In 2007, the U.S. Cyber Consequences Unit determined
that the destruction from a single wave of cyberattacks on critical infrastructures could exceed $700 billion, which would be the rough equivalent of 50 Katrina-esque hurricanes hitting the United States
all at the same time.29 Similarly, one IT security source has estimated that the impact of a single day cyberwar attack that focused on and disrupted U.S. credit and debit card transactions would be
approximately $35 billion.30 Another way to gauge the potential for harm is in comparison to other similar non-cyberattack infrastructure failures. For example, the August 2003 regional power grid
blackout is estimated to have cost the U.S. economy up to $10 billion, or roughly .1 percent of the nation’s GDP. 31 That said, a cyberattack of the exact same magnitude would most certainly have a
much larger impact. The origin of the 2003 blackout was almost immediately disclosed as an atypical system failure having nothing to do with terrorism. This made the event both less threatening and
likely a single time occurrence. Had it been disclosed that the event was the result of an attack that could readily be repeated the impacts would likely have grown substantially, if not exponentially.
Additionally, a cyberattack could also be used to disrupt our nation’s defenses or distract our national leaders in advance of a more traditional conventional or strategic attack. Many military leaders
actually believe that such a disruptive cyber pre-offensive is the most effective use of offensive cyber capabilities. This is, in fact, the way Russia utilized cyberattackers—whether government assets,
government-directed/coordinated assets, or allied cyber irregulars—in advance of the invasion of Georgia. Widespread distributed denial of service (DDOS) attacks were launched on the Georgian
governments IT systems. Roughly a day later Russian armor rolled into Georgian territory. The cyberattacks were used to prepare the battlefield; they denied the Georgian government a critical
communications tool isolating it from its citizens and degrading its command and control capabilities precisely at the time of attack. In this way, these attacks were the functional equivalent of
conventional air and/or missile strikes on a nation’s communications infrastructure.32 One interesting element of the Georgian cyberattacks has been generally overlooked: On July 20th, weeks before
the August cyberattack, the website of Georgian President Mikheil Saakashvili was overwhelmed by a more narrowly focused, but technologically similar DDOS attack.33 This should be particularly
chilling to American national security experts as our systems undergo the same sorts of focused, probing attacks on a constant basis. The ability of an enemy to use a cyberattack to counter our offensive
capabilities or soften our defenses for a wider offensive against the United States is much more than mere speculation. In fact, in Iraq it is already happening. Iraq insurgents are now using off-the-shelf
software (costing just $26) to hack U.S. drones (costing $4.5 million each), allowing them to intercept the video feed from these drones.34 By hacking these drones the insurgents
have succeeded in greatly reducing one of our most valuable sources of real-time
intelligence and situational awareness. If our enemies in Iraq are capable of such an effective cyberattack against one of our more sophisticated systems, consider what a more
technologically advanced enemy could do. At the strategic level, in 2008, as the United States Central Command was leading wars in both Iraq and Afghanistan, a cyber intruder compromised the
security of the Command and sat within its IT systems, monitoring everything the Command was doing. 35 This time the attacker simply gathered vast amounts of intelligence. However, it is clear that
the attacker could have used this access to wage cyberwar—altering information,
disrupting the flow of information, destroying information, taking down systems—
against the United States forces already at war. Similarly, during 2003 as the United States prepared for and began the War in Iraq, the IT networks of the Department of Defense were hacked 294
times.36 By August of 2004, with America at war, these ongoing attacks compelled then-Deputy Secretary of Defense Paul Wolfowitz to
write in a memo that, "Recent exploits have
reduced operational capabilities on our
networks."37 This wasn’t the first time that our national security IT infrastructure was penetrated immediately in advance of a U.S. military option.38 In February of 1998 the Solar Sunrise
attacks systematically compromised a series of Department of Defense networks. What is often overlooked is that these attacks occurred during the ramp up period ahead of potential military action
against Iraq. The attackers were able to obtain vast amounts of sensitive information—information that would have certainly been of value to an enemy’s military leaders. There is no way to prove that
these actions were purposefully launched with the specific intent to distract American military assets or degrade our capabilities. However, such ambiguities—the inability to specifically attribute actions
and motives to actors—are the very nature of cyberspace. Perhaps, these repeated patterns of behavior were mere coincidence, or perhaps they weren’t. The potential that an enemy might use a
cyberattack to soften physical defenses, increase the gravity of harms from kinetic attacks, or both, significantly increases the potential harms from a cyberattack. Consider the gravity of the threat and
risk if an enemy, rightly or wrongly, believed that it could use a cyberattack to degrade our strategic weapons capabilities. Such an enemy might be
convinced that it could win a war—conventional or even nuclear—against the United States. The
effect of this would be to undermine our deterrence-based defenses, making us significantly
more at risk of a major war.
And we control probability and magnitude—it causes extinction
Bostrom, 2k2
(Nick Bostrom, Ph.D. and Professor of Philosophy at Oxford University, March 2002, Journal of Evolution
and Technology, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards)
A much greater existential risk emerged with the build-up of nuclear arsenals in the
US and the USSR. An all-out nuclear war was a possibility with both a substantial
probability and with consequences that might have been persistent enough to qualify as global and
terminal. There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or
permanently destroy human civilization. Russia and the US retain large nuclear arsenals that could be used
in a future confrontation, either accidentally or deliberately. There is also a risk that other states may one day build up large
nuclear arsenals. Note however that a smaller nuclear exchange, between India and Pakistan for instance, is not an existential risk,
since it would not destroy or thwart humankind’s potential permanently. Such a war might however be a local terminal risk for the cities most likely to be targeted. Unfortunately, we shall see that
nuclear Armageddon and comet or asteroid strikes are mere preludes to the existential risks that we will encounter in the 21st century.
2NC T/ Case
Cyber-deterrence turns terrorism, war, prolif, and human rights
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior
Fellow at RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National
Security Policy. Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University.
Martin Libicki received his PhD in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his
BSc in Mathematics from MIT. He is a Professor at the RAND Graduate School and a Senior Management Scientist at RAND.
“Waging Cyber War the American Way,” Survival: Global Politics and Strategy. August–September 2015. Vol. 57, No. 4, pp. 7-28.
07-22-2015. http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-augustseptember-2015-c6ba/57-4-02-gompert-and-libicki-eab1//ghs-kw)
Given that retaliation and counter-military cyber war require copious offensive capabilities,
questions arise about whether these means could and should also be used to coerce hostile
states into complying with US demands without requiring the use of armed force. Examples
include pressuring a state to cease international aggression, intimidating behaviour or support
for terrorists; or to abandon acquisition of weapons of mass destruction; or to end domestic
human-rights violations. If, as some argue, it is getting harder, costlier and riskier for the United
States to use conventional military force for such ends, threatening or conducting cyber war
may seem to be an attractive alternative.25 Of course, equating cyber war with war suggests that conducting or
threatening it to impose America’s will is an idea not to be treated lightly. Whereas counter-military cyber war presupposes a state
of armed conflict, and retaliation presupposes that the United States has suffered a cyber attack, coercion (as meant here)
presupposes neither a state of armed conflict nor an enemy attack. This means, in essence, the
United States would threaten to start a cyber war outside of an armed conflict – something US policy
has yet to address. While the United States has intimated that it would conduct cyber war during an armed conflict and would
retaliate if deterrence failed, it is silent about using or threatening cyber war as an instrument of coercion. Such reticence fits with
the general US aversion to this form of warfare, as well as a possible preference to carry out cyber attacks without attribution or
admission. Notwithstanding US reticence, the
use of cyber war for coercion can be more attractive than the
use of conventional force: it can be conducted without regard to geography, without
threatening death and physical destruction, and with no risk of American casualties. While the
United States has other non-military options, such as economic sanctions and supporting regime opponents, none
is a substitute for cyber war. Moreover, in the case of an adversary with little or no ability to
return fire in cyberspace, the United States might have an even greater asymmetric advantage
than it does with its conventional military capabilities.
China Tech DA
CX Questions
Customers are shifting to foreign products now – why does the plan reverse
that trend?
1NC
NSA spying shifts tech dominance to China but it’s fragile—reversing the trend
now kills China
Li and McElveen 13
(Cheng Li; Ryan Mcelveen. Cheng Li received a M.A. in Asian studies from the University of California, Berkeley and a Ph.D. in
political science from Princeton University. He is director of the John L. Thornton China Center and a senior fellow in the Foreign
Policy program at Brookings. He is also a director of the Nationsal Committee on U.S.-China Relations. Li focuses on the
transformation of political leaders, generational change and technological development in China. "NSA Revelations Have
Irreparably Hurt U.S. Corporations in China," Brookings Institution. 12-12-2013.
http://www.brookings.edu/research/opinions/2013/12/12-nsa-revelations-hurt-corporations-china-li-mcelveen//ghs-kw)
For the Obama administration, Snowden’s timing could not have been worse. The
first story about the NSA appeared in The
Guardian on June 5. When Obama and Xi met in California two days later, the United States had
lost all credibility on the cyber security issue. Instead of providing Obama with the perfect opportunity to confront China
about its years of intellectual property theft from U.S. firms, the Sunnylands meeting forced Obama to resort to a defensive posture. Reflecting on how
the tables had turned, the media reported that President Xi chose to stay off-site at a nearby Hyatt hotel out of fear of eavesdropping. After the
Sunnylands summit, the Chinese government turned to official media to launch a public campaign
against U.S. technology firms operating in China through its “de-Cisco” (qu Sike hua) movement. By
targeting Cisco, the U.S. networking company that had helped many local Chinese governments develop and improve their IT
infrastructures beginning in the mid-1990s, the Chinese government struck at the very core of U.S.-China
technological and economic collaboration. The movement began with the publication of an issue of China Economic
Weekly titled “He’s Watching You” that singled out eight U.S. firms as “guardian warriors” who had infiltrated
the Chinese market: Apple, Cisco, Google, IBM, Intel, Microsoft, Oracle and Qualcomm. Cisco,
however, was designated as the “most horrible” of these warriors because of its pervasive reach into China’s financial and governmental sectors. For
these U.S. technology firms, China is a vital source of business that represents a fast-growing
slice of the global technology market. After the Chinese official media began disparaging the
“guardian warriors” in June, the sales of those companies have fallen precipitously. With the release
of its third quarter earnings in November, Cisco reported that orders from China fell 18 percent from the same period a
year earlier and projected that overall revenue would fall 8 to 10 percent as a result, according to Reuters. IBM reported that its
revenue from the Chinese market fell 22 percent, which resulted in a 4 percent drop in overall profit. Similarly,
Microsoft has said that China had become its weakest market. However, smaller U.S. technology firms working in
China have not seen the same slowdown in business. Juniper Networks, a networking rival to Cisco, and EMC Corp, a storage system maker, both saw
increased business in the third quarter. As the
Chinese continue to shun the “guardian warriors,” they may turn to similar but
smaller U.S. firms until domestic Chinese firms are ready to assume their role. In the meantime, trying to completely “de-Cisco” would be too costly for China, as Cisco’s network infrastructure has become too deeply embedded around the country.
Chinese technology firms have greatly benefited in the aftermath of the Snowden revelations. For
example, the share price of China National Software has increased 250 percent since June. In addition, the Chinese government
continues to push for faster development of its technology industry, in which it has invested since the early
1990s, by funding the development of supercomputers and satellite navigation systems. Still, China’s
current investment in cyber security cannot compare with that of the United States. The U.S. government spends $6.5 billion annually on cyber
security, whereas China spends $400 million, according to NetentSec CEO Yuan Shengang. But that will not be the case for long. The
Chinese
government’s investment in both cyber espionage and cyber security will continue to increase,
and that investment will overwhelmingly benefit Chinese technology corporations. China’s
reliance on the eight American “guardian warrior” corporations will diminish as its domestic
firms develop commensurate capabilities. Bolstering China’s cyber capabilities may emerge as one of the goals of China’s
National Security Committee, which was formed after the Third Plenary Meeting of the 18th Party Congress in November. Modeled on the U.S. National
Security Council and led by President Xi Jinping, the committee was established to centralize coordination and quicken response time, although it is not
yet clear how much of its efforts will be focused domestically or internationally. The Third Plenum also brought further reform and opening of China’s
economy, including encouraging more competition in the private sector. The Chinese leadership continues to solicit foreign investment, as evidenced
by the newly established Shanghai Free Trade Zone. However, there
is no doubt that investments by foreign
technology companies are less welcome than investments from other sectors because of the
Snowden revelations.
The AFF reclaims US tech leadership from China
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and
Innovation Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and
technology policies to create economic opportunities and improve quality of life in the United States and around the world.
Founded in 2006, ITIF is a 501(c) 3 nonprofit, non-partisan organization that documents the beneficial role technology plays in our
lives and provides pragmatic ideas for improving technology-driven productivity, boosting competitiveness, and meeting today’s
global challenges through innovation. Daniel Castro is the vice president of the Information Technology and Innovation
Foundation. His research interests include health IT, data privacy, e-commerce, e-government, electronic voting, information
security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst at the Government Accountability Office (GAO)
where he audited IT security and management controls at various government agencies. He has a B.S. in Foreign Service from
Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University. Alan
McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF, Mr. McQuinn
was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission in
the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at
Austin. “Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015.
http://www2.itif.org/2015-beyond-usa-freedom-act.pdf//ghs-kw)
CONCLUSION When historians write about this period in U.S. history it
could very well be that one of the themes will be
how the United States lost its global technology leadership to other nations. And clearly one of the
factors they would point to is the long-standing privileging of U.S. national security interests over U.S.
industrial and commercial interests when it comes to U.S. foreign policy. This has occurred over the last few years as the U.S.
government has done relatively little to address the rising commercial challenge to U.S. technology companies, all the while
putting intelligence gathering first and foremost. Indeed, policy decisions by the U.S. intelligence
community have reverberated throughout the global economy. If the U.S. tech industry is to
remain the leader in the global marketplace, then the U.S. government will need to set a new
course that balances economic interests with national security interests. The cost of inaction is not only short-term
economic losses for U.S. companies, but a wave of protectionist policies that will systematically weaken U.S.
technology competitiveness in years to come, with impacts on economic growth, jobs, trade balance, and national
security through a weakened industrial base. Only by taking decisive steps to reform its digital surveillance
activities will the U.S. government enable its tech industry to effectively compete in the global
market.
Growth is slowing now—innovation and tech are key to sustain CCP legitimacy
Ebner 14
(Julia Ebner. Julia Ebner received her MSc in International Relations and Affairs and her MSc in Political Economy, Development
Economics, and Natural Resources from Peking University. She was a researcher at the European Institute of Asia Studies.
"Entrepreneurs: China’s Next Growth Engine?," Diplomat. 8-7-2014. http://thediplomat.com/2014/08/entrepreneurs-chinasnext-growth-engine///ghs-kw)
Should China want to remain an international economic superpower, it will
need to substitute its current growth
model – one largely based on abundant, cheap labor – with a different comparative advantage that can lay
the foundation for a new, more sustainable growth strategy. Chinese policymakers are hoping now that an
emerging entrepreneurship may fit that bill, with start-ups and family-run enterprises
potentially becoming a major driver of sustainable growth and thus replacing the country’s
current economic model. In 2014, international conferences on private entrepreneurship and
innovation were organized all across China: The China Council for the Promotion of International Trade organized its
first annual Global Innovation Economic Congress, while numerous innovation-related conferences were held at
well-known Chinese universities such as Tsinghua University, Jilin University and Wuhan
University. New Growth Model Needed Although China still ranks among the fastest growing economies in the world, the
country’s growth rates have decreased notably over the past few years. From the 1990s until the 2008 financial
crisis, China’s GDP growth was consistently in the double digits with only a brief interruption following the
Asian financial crisis of 1997. Despite a relatively quick recovery after the global financial crisis, declining export rates resulting from
the economic distress of China’s main trading partners have left their mark on the Chinese economy. Today’s GDP
growth of 7.8 percent is just half the level recorded immediately before the 2008 crisis, according to the latest data
provided by the World Bank. This recent slowdown in China’s economic growth has naturally been a source of
concern for the government. A continuation of the country’s phenomenal economic growth is
needed to maintain both social stability and the Communist Party’s legitimacy. Sustainable
economic growth has thus been identified as one of China’s key challenges for the coming
decade. That challenge is complicated by demographic trends, which are set to have a strongly negative impact on the Chinese
economy within the next decade. Researchers anticipate that as a consequence of the country’s one-child policy, introduced in
1977, China will soon experience a sharp decline of its working-age population, leading to a substantial labor force bottleneck. A
labor shortage is likely to mean climbing wages, threatening China’s cheap labor edge. The challenge is well described in a recent
article published by the International Monetary Fund. Replacing the Cheap Labor Strategy Entrepreneurship
is widely
recognized as an important engine for economic growth: It contributes positively to economic
development by fuelling job markets through the creation of new employment opportunities, by
stimulating technological change through increased levels of innovation, and by enhancing the
market environment through an intensification of market competition. Entrepreneurship and
innovation have the potential to halt the contraction in China’s economic growth and to
replace the country’s unsustainable comparative advantage of cheap labor over the long term.
As former Chinese President Hu Jintao stressed in 2006, if China can transform its current growth strategy into
one based on innovation and entrepreneurship, it could sustain its growth rates and secure a key
role in the international world order. Indeed, increasing levels of entrepreneurship in the Chinese
private sector are likely to lead to technological innovation and productivity increases. This
could prove particularly useful in offsetting the workforce bottleneck created by demographic trends.
Greater innovation would also make China more competitive and less dependent on the knowledge and
technology of traditional Western trading partners such as the EU and the U.S.
Economic growth is key to prevent CCP collapse and lashout
Friedberg 10, Professor of Politics and International Affairs – Princeton, Asia Expert – CFR
(Aaron, “Implications of the Financial Crisis for the US-China Rivalry,” Survival, Volume 52, Issue
4, August, p. 31 – 54)
Despite its magnitude, Beijing's stimulus programme was insufficient to forestall a sizeable spike in unemployment. The regime
acknowledges that upwards of 20 million migrant workers lost their jobs in the first year of the crisis, with many
returning to their villages, and 7m recent college graduates are reportedly on the streets in search of work.9 Not surprisingly, tough times have been accompanied
by increased social turmoil. Even before the crisis hit, the number of so-called 'mass incidents' (such as riots or strikes) reported each
year in China had been rising. Perhaps because it feared that the steep upward trend might be unnerving to foreign investors, Beijing stopped publishing aggregate, national statistics in 2005.10
Nevertheless, there is ample, if fragmentary, evidence that things got worse as the economy slowed. In Beijing, for example,
salary cuts, layoffs, factory closures and the failure of business owners to pay back wages
resulted in an almost 100% increase in the number of labour disputes brought before the courts.11 Since the early days of the
current crisis, the regime has clearly been bracing itself for trouble. Thus, at the start of 2009, an official news-agency story candidly warned Chinese readers that the country was, 'without a doubt … entering a
peak period of mass incidents'.12 In anticipation of an expected increase in unrest, the regime for the first time summoned all 3,080 county-level
police chiefs to the capital to learn the latest riot-control tactics, and over 200 intermediate and lower-level judges were also called in for special training.13
At least for the moment, the Chinese Communist Party (CCP) appears to be weathering the storm. But if in the next several
years the economy slumps again or simply fails to return to its previous pace, Beijing's troubles will mount. The regime probably
has enough repressive capacity to cope with a good deal more turbulence than it has thus far encountered, but a
protracted crisis could eventually pose a challenge to the solidarity of the party's leadership and thus
to its continued grip on political power. Sinologist Minxin Pei points out that the greatest danger to CCP rule comes not
from below but from above. Rising societal discontent 'might be sufficient to tempt some members of the elite
to exploit the situation to their own political advantage' using 'populist appeals to weaken their
rivals and, in the process, open[ing] up divisions within the party's seemingly unified upper ranks'.14 If this happens, all bets will be off and a very wide range of outcomes, from a democratic transition to a
bloody civil war, will suddenly become plausible. Precisely because it is aware of this danger, the regime has been very careful to keep whatever differences exist over how
to deal with the current crisis within bounds and out of view. If there are significant rifts they could become apparent in the run-up to the pending change in leadership scheduled for 2012. Short of
causing the regime to unravel, a sustained economic crisis could induce it to abandon its current, cautious policy of
avoiding conflict with other countries while patiently accumulating all the elements of 'comprehensive national power'. If they believe that
their backs are to the wall, China's leaders might even be tempted to lash out, perhaps provoking a
confrontation with a foreign power in the hopes of rallying domestic support and deflecting public attention from their day-to-day troubles. Beijing might also choose to implement a policy of 'military Keynesianism', further accelerating its already ambitious plans
for military construction in the hopes of pumping up aggregate demand and resuscitating a sagging domestic economy.15 In sum, despite its impressive initial performance,
Beijing is by no means on solid ground. The reverberations from the 2008-09 financial crisis may yet shake the regime
to its foundations, and could induce it to behave in unexpected, and perhaps unexpectedly aggressive, ways.
Chinese lashout goes nuclear
Epoch Times 4
(The Epoch Times, Renxing San, 8/4/2004, 8/4, http://english.epochtimes.com/news/5-8-4/30931.html//ghs-kw)
Since the Party’s life is “above all else,” it would not be surprising if the CCP resorts to the use of
biological, chemical, and nuclear weapons in its attempt to extend its life. The CCP, which disregards
human life, would not hesitate to kill two hundred million Americans, along with seven or eight
hundred million Chinese, to achieve its ends. These speeches let the public see the CCP for what it really is. With evil
filling its every cell the CCP intends to wage a war against humankind in its desperate attempt to cling
to life. That is the main theme of the speeches. This theme is murderous and utterly evil. In China we have seen beggars who
coerced people to give them money by threatening to stab themselves with knives or pierce their throats with long nails. But we
have never, until now, seen such a gangster who would use biological, chemical, and nuclear weapons to threaten the world, that all
will die together with him. This bloody confession has confirmed the CCP’s nature: that of a monstrous murderer who has killed 80
million Chinese people and who now plans to hold one billion people hostage and gamble with their lives.
2NC O/V
Disad outweighs and turns the AFF—NSA backdoors are causing foreign
customers to switch to Chinese tech now but the plan reverses that by closing
backdoors and reclaiming US tech leadership. That kills Chinese growth and
results in a loss of CCP legitimacy, which causes CCP lashout and extinction:
<insert o/w and t/ args>
2NC UQ
Extend uniqueness—the perception of NSA backdoors incentivizes the Chinese
government and foreign customers to shift to Chinese tech, boosting China's
tech sector—US companies' foreign sales have been falling fast—that's Li and
McElveen
NSA spying boosts Chinese tech firms
Kan 13
(Kan, Michael. Michael Kan covers IT, telecommunications, and the Internet in China for the IDG News Service. "NSA spying
scandal accelerating China's push to favor local tech vendors," PCWorld. 12-3-2013.
http://www.pcworld.com/article/2068900/nsa-spying-scandal-accelerating-chinas-push-to-favor-local-tech-vendors.html//ghskw)
While China’s demand for electronics continues to soar, the
tech services market may be shrinking for U.S. enterprise
vendors. Security concerns over U.S. secret surveillance are giving the Chinese government
and local companies more reason to trust domestic vendors, according to industry experts. The country
has always tried to support its homegrown tech industry, but lately it is increasingly favoring local brands over
foreign competition. Starting this year, the nation’s government tenders have required IT suppliers to
source more products from local Chinese firms, said an executive at a U.S.-based storage supplier that sells to China.
In some cases, the tenders have required 50 percent or more of the equipment to come from
domestic brands, said the executive, who requested anonymity. Recent leaks by former U.S. National Security Agency
contractor, Edward Snowden, about the U.S.’s secret spying program aren’t helping the matter. “I think in general China wants
to favor local brands; they feel their technology is getting better,” the executive said. “Snowden has just caused this
to accelerate incrementally.” Last month, other U.S. enterprise vendors including Cisco and Qualcomm said the
U.S. spying scandal has put strains on their China business. Cisco reported its revenue from the country
fell 18 percent year-over-year in the last fiscal quarter. The Chinese government has yet to release an official document telling
companies to stay away from U.S. vendors, said the manager of a large data center, who has knowledge of such developments. But
state-owned telecom operators have already stopped orders for certain U.S. equipment to
power their networks, he added. Instead, the operators are relying on Chinese vendors such as
Huawei Technologies, to supply their telecommunications equipment. “It will be hard for certain networking
equipment made in the U.S. to enter the Chinese market,” the manager said. “It’s hard for them (U.S.
vendors) to get approval, to get certification from the related government departments.” Other
companies, especially banks, are concerned that buying enterprise gear from U.S. vendors may
lead to scrutiny from the central government, said Bryan Wang, an analyst with Forrester Research. ”The NSA
issue has been having an impact, but it hasn’t been black and white,” he added. In the future, China could
create new regulations on where certain state industries should source their technology from,
a possibility some CIOs are considering when making IT purchases, Wang said. The obstacles facing
U.S. enterprise vendors come at a time when China’s own homegrown companies are expanding
in the enterprise market. Huawei Technologies, a major vendor for networking equipment, this August came out with a
new networking switch that will put the company in closer competition with Cisco. Lenovo and ZTE are also targeting
the enterprise market with products targeted at government, and closing the technology gap
with their foreign rivals, Wang said. ”Overall in the longer-term, the environment is positive for local
vendors. We definitely see them taking market share from multinational firms in China,” he added.
Chinese vendors are also expanding outside the country and targeting the U.S. market. But last
year Huawei and ZTE saw a push back from U.S. lawmakers concerned with the two companies’ alleged ties to the Chinese
government. A Congressional panel eventually advised that U.S. firms buy networking gear from other vendors, calling Huawei and
ZTE a security threat.
Europe is shifting to China now
Ranger 15
(Steve Ranger. "Rise of China tech, internet surveillance revelations form
background to CeBIT show," ZDNet. 3-17-2015.
http://www.zdnet.com/article/rise-of-china-tech-internet-surveillance-revelations-form-background-to-cebit-show///ghs-kw)
As well as showcasing new devices, from tablets to robotic sculptors and drones, this year's CeBIT
technology show in
Hannover reflects a gradual but important shift taking place in the European technology world.
Whereas in previous years US companies would have taken centre stage, this year the emphasis is on China, both
as a creator of technology and as a huge potential market. "German business values China,
not just as our most important trade partner outside of Europe, but also as a partner in
developing sophisticated technologies," said Angela Merkel as she opened the show. "Especially in the
digital economy, German and Chinese companies have core strengths ... and that's why cooperation is a natural choice," she said.
Chinese vice premier Ma Kai also attended the show, which featured a keynote from Alibaba founder Jack Ma. China is CeBIT's
'partner country' this year, with over 600 Chinese companies - including Huawei, Xiaomi, ZTE, and Neusoft - presenting their
innovations at the show. The
UK is also keen on further developing a historically close relationship: the
China-Britain Business Council is in Hannover to help UK firms set up meetings with Chinese
companies, and to provide support and advice to UK companies interested in doing business in
China. "China is mounting the biggest CeBIT partner country showcase ever. Attendees will clearly see that Chinese companies are
up there with the biggest and best of the global IT industry," said a spokesman for CeBIT. Some of this activity is a result of
the increasingly sophisticated output of Chinese tech companies who are looking for new
markets for their products. Firms that have found it hard to make headway in the US, such as
Huawei, have been focusing their efforts on Europe instead. European tech companies are
equally keen to access the rapidly growing Chinese market. Revelations about mass interception
of communications by the US National Security Agency (including allegations that spies had even tapped Angela
Merkel's phone) have not helped US-European relations, either. So it's perhaps significant that an interview with NSA
contractor-turned-whistleblower Edward Snowden is closing the Hannover show.
2NC UQ: US Failing Now
US tech falling behind other countries
Kevin Ashton 06/2015 [the co-founder and former executive director of the MIT Auto-ID
Center, who coined the term “Internet of Things.” His book “How to Fly a Horse: The Secret History
of Creation, Invention, and Discovery” was published by Doubleday earlier this year] "America
last?," The Agenda, http://www.politico.com/agenda/story/2015/06/kevin-ashton-internet-of-things-in-the-us-000102
And, while they were not mentioning it, some key indicators began swinging away from the U.S. In
2005, China’s high-tech
exports exceeded America’s for the first time. In 2009, just after Wen Jiabao spoke about the Internet of Things,
Germany’s high-tech exports exceeded America’s as well. Today, Germany produces five times
more high tech per capita than the United States. Singapore and Korea’s high-tech exporters are also far more
productive than America’s and, according to the most recent data, are close to pushing the U.S. down to fifth place in
the world’s high-tech economy. And, as the most recent data are for 2013, that may have happened already. This decline
will surprise many Americans, including many American policymakers and pundits, who assume
U.S. leadership simply transfers from one tech revolution to the next. After all, that next revolution, the
Internet of Things, was born in America, so perhaps it seems natural that America will lead. Many U.S. commentators spin a myth
that America is No. 1 in high tech, then extend it to claims that Europe is lagging because of excessive government regulation, and
hints that Asians are not innovators and entrepreneurs, but mere imitators with cheap labor. This is jingoistic nonsense that could
not be more wrong. Not only does Germany, a leader of the European Union, lead the U.S. in high tech, but EU member states fund
CERN, the European Organization for Nuclear Research, which invented the World Wide Web and built the Large Hadron Collider,
likely to be a source of several centuries of high-tech innovation. (U.S. government intervention killed America’s equivalent particle
physics program, the Superconducting Super Collider, in 1993 — an early symptom of declining federal investment in basic
research.) Asia, the alleged imitator, is anything but. Apple’s
iPhone, for example, so often held up as the
epitome of American innovation, looked a lot like a Korean phone, the LG KE850, which was
revealed and released before Apple’s product. Most of the technology in the iPhone was invented in, and is exported by,
Asian countries.
2NC Link
Extend the link—the AFF stops the creation of backdoors and restores the
perception that US tech is safe, which means the US regains customers and tech
leadership from China—that’s Castro and McQuinn
If the US loses its tech dominance, Chinese and Indian innovation will quickly
replace it
Fannin 13 (Rebecca Fannin, 7-12-2013, Forbes magazine contributor, "China Still Likely To Take Over Tech
Leadership If And When Silicon Valley Slips," Forbes,
http://www.forbes.com/sites/rebeccafannin/2013/07/12/china-still-likely-to-take-over-tech-leadership-if-and-when-silicon-valley-slips)
Will Silicon Valley continue to maintain its market-leading position for technology innovation?
It’s a question that’s often
pondered and debated, especially in the Valley, which has the most to lose if the emerging
markets of China or India take over leadership.¶ KPMG took a look at this question and other
trends in its annual Technology Innovation Survey, and found that the center of gravity may not
be shifting quite so fast to the East as once predicted. The KPMG survey of 811 technology
executives globally found that one-third believe the Valley will likely lose its tech trophy to an
overseas market within just four years. That percentage might seem high, but it compares with nearly half (44 percent) in last
year’s survey. It’s a notable improvement for the Valley, as the U.S. economy and tech sector pick up. ¶ Which country will lead in disruptive
breakthroughs? Here, the U.S. again solidifies its long-standing reputation as the world’s tech giant while China has slipped in stature from a year ago,
according to the survey. In last year’s poll, the U.S. and China were tied for the top spot. But today, some 37 percent predict that the U.S. shows the
most promise for tech disruptions, little surprise considering Google’s strong showing in the survey as top company innovator in the
world with its Google glass and driver-less cars. Meanwhile, about one-quarter pick China,
which is progressing from a
reputation for just copying to also innovating or micro-innovating. India, with a heritage of
leadership in outsourcing, a large talent pool of engineers, ample mentoring from networking
groups such as TiE, and a vibrant mobile communications market, ranked right behind the U.S.
and China two years in a row.¶ Even though China’s rank slid in this year’s tech innovation
survey, its Silicon Dragon tech economy is still regarded as the leading challenger and most likely
to replace the Valley, fueled by the market’s huge, fast-growing and towering brands such as
Tencent, Baidu and Alibaba, and a growing footprint overseas. KPMG partner Egidio
Zarrella notes that China is innovating at an “impressive speed,” driven by domestic
consumption for local brands that are unique to the market. “China will innovate for China’s
sake,” he observes, adding that with improved research and development capabilities, China will
bridge the gap in expanding globally.¶ For another appraisal of China’s tech innovation prowess, see Forbes post detailing how
Mary Meeker’s annual trends report singles out the market’s merits, including the fact that China leads the world for the most Internet and mobile
communications users and has a tech-savvy consumer class that embraces new technologies. ¶ Besides China, it’s India that shines in the KPMG survey.
India scores as the second-most likely country to topple the U.S. for tech leadership. And,
significantly, this emerging tiger nation ranks first on an index that measures each country’s
confidence in its own tech innovation abilities. Based on ten factors, India rates highest on
talent, mentoring, and customer adoption of new technologies.¶ The U.S. came in third on the
confidence index, while Israel’s Silicon Wadi ranked second. Israel was deemed strong in disruptive technologies,
talent and technology infrastructure. The U.S. was judged strongest in tech infrastructure, access to alliances and partnerships, talent, and technology
breakthroughs, and weakest in educational system and government incentives. Those weaknesses for the U.S. are points that should be underscored in
America’s tech clusters and in the nation’s capital as future tech leadership unfolds.¶
A second part of the comprehensive
survey covering tech sectors pinpointed cloud computing and mobile communications as hardly
a fad but here to stay at least for the next three years as the most disruptive technologies. Both
were highlighted in the 2012 report as well. In a change from last year, however, big data and
biometrics (face, voice and hand gestures that are digitally read) were identified as top sectors
that will see big breakthroughs. It’s brave new tech world.
2NC Perception Link
The AFF restores trust in internet tech
Danielle Kehl et al 14, Senior Policy Analyst at New America’s Open Technology Institute.
Kevin Bankston is a Policy Director at OTI, Robyn Greene is a Policy Counsel at OTI, Robert
Morgus is a Research Associate at OTI, “Surveillance Costs: The NSA’s Impact on the Economy,
Internet Freedom & Cybersecurity”, July 2014, pg 40-1
The U.S. government should not require or request that new surveillance capabilities or security vulnerabilities be
built into communications technologies and services, even if these are intended only to facilitate lawful surveillance. There is a great
deal of evidence that backdoors fundamentally weaken the security of hardware and software, regardless of whether only the
NSA purportedly knows about said vulnerabilities, as some of the documents suggest. A policy statement from the Internet
Engineering Task Force in 2000 emphasized that “adding a requirement for wiretapping will make affected protocol designs
considerably more complex. Experience has shown that complexity almost inevitably jeopardizes the security of communications.”
355 More recently, a May 2013 paper from the Center for Democracy and Technology on the risks of wiretap modifications to
endpoints concludes that “deployment of an intercept capability in… communications services, systems and applications poses
serious security risks.” 356 The authors add that “on balance mandating that endpoint software vendors build intercept functionality
into their products will be much more costly to personal, economic and governmental security overall than the risks associated with
not being able to wiretap all communications.” 357 While NSA programs such as SIGINT Enabling—much like proposals from
domestic law enforcement agencies to update the Communications Assistance for Law Enforcement Act (CALEA) to require digital
wiretapping capabilities in modern Internet-based communications services 358 —may aim to promote national security and law
enforcement by ensuring that federal agencies have the ability to intercept Internet communications, they do so at a huge cost to
online security overall. Because of the associated security risks, the U.S. government should not mandate or request the creation of
surveillance backdoors in products, whether through legislation, court order, or the leveraging of industry relationships to convince
companies to voluntarily insert vulnerabilities. As Bellovin et al. explain, complying with these types of requirements would also
hinder innovation and impose a “tax” on software development in addition to creating a whole new class of vulnerabilities in
hardware and software that undermines the overall security of the products. 359 An amendment offered to the NDAA for Fiscal
Year 2015 (H.R. 4435) by Representatives Zoe Lofgren (D-CA) and Rush Holt (D-NJ) would have prohibited inserting these kinds of
vulnerabilities outright. 360 The Lofgren-Holt proposal aimed to prevent “the funding of any intelligence agency, intelligence
program, or intelligence related activity that mandates or requests that a device manufacturer, software developer, or standards
organization build in a backdoor to circumvent the encryption or privacy protections of its products, unless there is statutory
authority to make such a mandate or request.” 361 Although that measure was not adopted as part of the NDAA, a similar
amendment sponsored by Lofgren along with Representatives Jim Sensenbrenner (R-WI) and Thomas Massie (R-KY), did make it into
the House-approved version of the NDAA—with the support of Internet companies and privacy organizations 362 —passing on an
overwhelming vote of 293 to 123. 363 Like Representative Grayson’s amendment on NSA’s consultations with NIST around
encryption, it remains to be seen whether this amendment will end up in the final appropriations bill that the President signs.
Nonetheless, these legislative efforts are a heartening sign and are consistent with recommendations from the President’s Review
Group that the U.S. government should not attempt to deliberately weaken the security of commercial encryption products. Such
mandated vulnerabilities, whether required under statute or by court order or inserted simply by request, unduly threaten
innovation in secure Internet technologies while introducing security flaws that may be exploited by a variety of bad actors. A
clear policy against such vulnerability mandates is necessary to restore international trust in
U.S. companies and technologies.
Policies such as the Secure Data Act are perceived as strengthening security
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and
Innovation Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and
technology policies to create economic opportunities and improve quality of life in the United States and around the world.
Founded in 2006, ITIF is a 501(c) 3 nonprofit, non-partisan organization that documents the beneficial role technology plays in our
lives and provides pragmatic ideas for improving technology-driven productivity, boosting competitiveness, and meeting today’s
global challenges through innovation. Daniel Castro is the vice president of the Information Technology and Innovation
Foundation. His research interests include health IT, data privacy, e-commerce, e-government, electronic voting, information
security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst at the Government Accountability Office (GAO)
where he audited IT security and management controls at various government agencies. He has a B.S. in Foreign Service from
Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University. Alan
McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF, Mr. McQuinn
was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission in
the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at
Austin. “Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015.
http://www2.itif.org/2015-beyond-usa-freedom-act.pdf//ghs-kw)
Second, the
U.S. government should draw a clear line in the sand and declare that the policy of
the U.S. government is to strengthen not weaken information security. The U.S. Congress
should pass legislation, such as the Secure Data Act introduced by Sen. Wyden (D-OR),
banning any government efforts to introduce backdoors in software or weaken encryption. 43 In
the short term, President Obama, or his successor, should sign an executive order formalizing this policy as well. In addition, when U.S. government
agencies discover vulnerabilities in software or hardware products, they should responsibly notify these companies in a timely manner so that the
companies can fix these flaws. The best way to protect U.S. citizens from digital threats is to promote strong cybersecurity practices in the private
sector.
2NC Chinese Markets Link
Domestic markets are key to Chinese tech—plan steals Chinese market share
Lohr 12/2
(Steve Lohr. "In 2015, Technology Shifts Accelerate and China Rules, IDC
Predicts," NYT. 12-2-2014. http://bits.blogs.nytimes.com/2014/12/02/in-2015technology-shifts-accelerate-and-china-rules-idc-predicts///ghs-kw)
Beyond the detail, a couple of larger themes stand out. First is China. Most of the reporting and commentary recently on the
Chinese economy has been about its slowing growth and challenges. “In
information technology, it’s just the opposite,”
Frank Gens, IDC’s chief analyst, said in an interview. “China has a roaring domestic market in technology.” In 2015,
IDC estimates that nearly 500 million smartphones will be sold in China, three times the number
sold in the United States and about one third of global sales. Roughly 85 percent of the
smartphones sold in China will be made by its domestic producers like Lenovo, Xiaomi, Huawei,
ZTE and Coolpad. The rising prowess of China’s homegrown smartphone makers will make it tougher on outsiders, as
Samsung’s slowing growth and profits recently reflect. More than 680 million people in China will be online
next year, or 2.5 times the number in the United States. And the China numbers are poised to
grow further, helped by its national initiative, the Broadband China Project, intended to give 95 percent of the country’s urban
population access to high-speed broadband networks. In all, China’s spending on information and
communications technology will be more than $465 billion in 2015, a growth rate of 11 percent.
The expansion of the China tech market will account for 43 percent of tech-sector growth
worldwide.
The Chinese market is key to Chinese tech growth
Mozur 1/28
(Paul Mozur. Reporter for the NYT. "New Rules in China Upset Western Tech Companies," New York Times. 1-28-2015.
http://www.nytimes.com/2015/01/29/technology/in-china-new-cybersecurity-rules-perturb-western-tech-companies.html//ghs-kw)
Mr. Yao said 90 percent of high-end servers
and mainframes in China were still produced by
multinationals. Still, Chinese companies are catching up at the lower end. “For all enterprise
hardware, local brands represented 21.3 percent revenue share in 2010 in P.R.C. market and
we expect in 2014 that number will reach 43.1 percent,” he said, using the abbreviation for the People’s
Republic of China. “That’s a huge jump.”
Chinese tech is key to the global industry
Lohr 12/2
(Steve Lohr. "In 2015, Technology Shifts Accelerate and China Rules, IDC
Predicts," NYT. 12-2-2014. http://bits.blogs.nytimes.com/2014/12/02/in-2015technology-shifts-accelerate-and-china-rules-idc-predicts///ghs-kw)
Beyond the detail, a couple of larger themes stand out. First is China. Most of the reporting and commentary recently on the
Chinese economy has been about its slowing growth and challenges. “In
information technology, it’s just the opposite,”
Frank Gens, IDC’s chief analyst, said in an interview. “China
has a roaring domestic market in technology.” In 2015,
IDC estimates that nearly 500 million smartphones will be sold in China, three times the number
sold in the United States and about one third of global sales. Roughly 85 percent of the
smartphones sold in China will be made by its domestic producers like Lenovo, Xiaomi, Huawei,
ZTE and Coolpad. The rising prowess of China’s homegrown smartphone makers will make it tougher on outsiders, as
Samsung’s slowing growth and profits recently reflect. More than 680 million people in China will be online
next year, or 2.5 times the number in the United States. And the China numbers are poised to
grow further, helped by its national initiative, the Broadband China Project, intended to give 95 percent of the country’s urban
population access to high-speed broadband networks. In all, China’s spending on information and
communications technology will be more than $465 billion in 2015, a growth rate of 11
percent. The expansion of the China tech market will account for 43 percent of tech-sector
growth worldwide.
2NC Tech K2 China Growth
Tech is key to Chinese growth
Xinhua 7/24
(Xinhua. Major Chinese news agency. "Industrial profits decline while high-tech sector shines in China｜WCT," 7-24-2015.
http://www.wantchinatimes.com/news-subclass-cnt.aspx?id=20150328000036&cid=1102//ghs-kw)
Driven by the country's restructuring efforts amid the economic "new normal" of slow but quality growth, China's
high-tech
industry flourished with the value-added output of the high-tech sector growing 12.3% year-on-year in 2014. The high-tech industry accounted for 10.6% of the country's overall industrial
value-added output in 2014, which rose 7% from 2013 to 22.8 trillion yuan (US$3.71 trillion). The fast expansion
of the high-tech and modern service industries shows China's economy is advancing to the
"middle and high end," said Xie Hongguang, deputy chief of the NBS. China should work toward greater investment in "soft
infrastructure"–like innovation–instead of "hard infrastructure" to climb the global value chain, said Zhang Monan, an expert with
the China Center for International Economic Exchanges. Indeed, boosting
innovation has been put at the top of
the government's agenda as China has pledged to boost the implementation of the "Made in China 2025" strategy, which
will upgrade the manufacturing sector and help the country achieve a medium-high level of economic growth.
China transitioning to tech-based economy
van Wyk 10
(Barry van Wyk, The Beijing Axis. “Upstart: China’s emergence in technology and innovation.” Published May 27, 2010; last updated June 3, 2010)
Significant progress has already been achieved with the MLP, and it is not hard to identify signs of China’s
rapidly improving innovative abilities. GERD increased to 1.54 per cent in 2008 from 0.57 per cent in 1995. Occurring
at a time when its GDP was growing exceptionally fast, China’s GERD now ranks behind only the US and Japan.
The number of triadic patents (granted in all three of the major patent offices in the US, Japan and Europe) granted to China
remains relatively small, reaching 433 in 2005 (compared to 652 for Sweden and 3,158 for Korea), yet Chinese patent
applications are increasing rapidly. Chinese patent applications to the World Intellectual Property Office (WIPO), for
example, increased by 44 per cent in 2005 and by a further 57 per cent in 2006. From a total of about 20,000 in 1998,
China’s output of scientific papers has increased fourfold to about 112,000 as of 2008, moving China to
second place in the global rankings, behind only the US. In the period 2004 to 2008, China produced about
400,000 papers, with the major focus areas being material science, chemistry, physics, mathematics and engineering, but
new fields like biological and medical science also gaining prominence.
China transitioning now
Segal and Wilson 06
(Adam Segal, Ira A. Lipman Senior Fellow for Counterterrorism and National Security Studies, and Ernest J. Wilson III. “Trends in China’s Transition toward a Knowledge Economy,” Asian Survey, January/February 2006.
http://www.cfr.org/publication/9924/trends_in_chinas_transition_toward_a_knowledge_economy.html)
During the past decade, China has arguably placed more importance on reforming and modernizing its information
and communication technology (ICT) sector than any other developing country in the world. Under former
Premier Zhu Rongji, the Chinese leadership was strongly committed to making ICT central to its national goals—from
transforming Chinese society at home to pursuing its ambitions as a world economic and political power. In one of his final
speeches, delivered at the first session of the 10th National People’s Congress in 2003, Zhu implored his successors to
“energetically promote information technology (IT) applications and use IT to propel and accelerate
industrialization” so that the Chinese Communist Party (CCP) can continue to build a “well-off society.”1
2NC Global Econ I/L
China economic crash goes global—outweighs a US crash and disproves resiliency
empirics
Pesek 14
(Writer for Bloomberg, an edited economic publication “What to Fear If China Crashes,” Bloomberg View,
http://www.bloombergview.com/articles/2014-07-16/what-to-fear-if-china-crashes)
Few moments in modern financial history were scarier than the week of Sept. 15, 2008, when first
Lehman Brothers and then American International Group collapsed. Who could forget the cratering stock
markets, panicky bailout negotiations, rampant foreclosures, depressing job losses and decimated retirement accounts -- not to
mention the discouraging recovery since then? Yet
a Chinese crash might make 2008 look like a garden
party. As the risks of one increase, it's worth exploring how it might look. After all, China is now the world's biggest
trading nation, the second-biggest economy and holder of some $4 trillion of foreign-currency
reserves. If China does experience a true credit crisis, it would be felt around the world. "The example
of how the global financial crisis began in one poorly-understood financial market and spread dramatically from there illustrates the
capacity for misjudging contagion risk," Adam Slater wrote in a July 14 Oxford Economics report. Lehman
and AIG,
remember, were just two financial firms out of dozens. Opaque dealings and off-balance-sheet investment
vehicles made it virtually impossible even for the managers of those companies to understand their vulnerabilities -- and those of
the broader financial system. The
term "shadow banking system" soon became shorthand for potential
instability and contagion risk in world markets. Well, China is that and more. China surpassed Japan in
2011 in gross domestic product and it's gaining on the U.S. Some World Bank researchers even think China is already on the verge of
becoming No. 1 (I'm skeptical). China's world-trade weighting has doubled in the last decade. But the real explosion has been in the
financial sector. Since 2008, Chinese stock valuations surged from $1.8 trillion to $3.8 trillion and bank-balance sheets and the
money supply jumped accordingly. China's broad measure of money has surged by an incredible $12.5 trillion since 2008 to roughly
match the U.S.'s monetary stock. This enormous money buildup fed untold amounts of private-sector debt along with public-sector
institutions. Its scale, speed and opacity are fueling genuine concerns about a bad-loan meltdown in an economy that's 2 1/2 times
bigger than Germany's. If that happens, at a minimum it would torch China's property markets and could take down systemically
important parts of Hong Kong's banking system. The reverberations probably wouldn't stop there, however, and would hit resource-dependent Australia, batter trade-driven economies Japan, Singapore, South Korea and Taiwan and whack prices of everything from
oil and steel to gold and corn. "China’s
importance for the world economy and the rapid growth of its financial
system, mean that there are widespread concerns that a
financial crisis in China would also turn into a
global crisis," says London-based Slater. "A bad asset problem on this scale would dwarf that seen in the major emerging
financial crises seen in Russia and Argentina in 1998 and 2001, and also be more severe than the Japanese bad loan problem of the
1990s." Such risks belie President Xi Jinping's insistence that China's financial reform process is a domestic affair, subject neither to
input nor scrutiny by the rest of the world. That's not the case. Just like the Chinese pollution that darkens Asian skies and
contributes to climate change, China's financial vulnerability is a global problem. U.S. President Barack Obama made that clear
enough in a May interview with National Public Radio. “We welcome China’s peaceful rise," he said. “In many ways, it would be a
bigger national security problem for us if China started falling apart at the seams.” China's ascent obviously preoccupies the White
House as it thwarts U.S. foreign-policy objectives, taunts Japan and other nations with territorial claims in the Pacific and casts
aspersions on America's moral leadership. But China's frailty has to be on the minds of U.S. policy makers, too. The
potential
for things careening out of control in China is real. What worries bears such as Patrick Chovanec of Silvercrest
Asset Management in New York, is China’s unaltered obsession with building the equivalent of new “Manhattans” almost overnight
even as the nation's financial system shows signs of buckling. As policy makers in Beijing generate even more credit to keep bubbles
from bursting, the shadow banking system continues to grow. The longer China delays its reckoning, the worse it might be for China -- and perhaps the rest of us.
CCP collapse causes the second Great Depression
BHANDARI. 10.
Maya. Head of Emerging Markets Analysis, Lombard Street Research. “If the Chinese Bubble
Bursts…” THE INTERNATIONAL ECONOMY. http://www.internationaleconomy.com/TIE_F10_ChinaBubbleSymp.pdf
The latest
financial crisis proved the central role of China in driving global economic outcomes.
China is the chief overseas surplus country corresponding to the U.S. deficit, and it was excess ex ante Chinese
savings which prompted ex post U.S. dis-saving. The massive ensuing build-up of debt triggered a Great
Recession almost as bad as the Great Depression. This causal direction, from excess saving to excess spending, is
confirmed by low global real interest rates through much of the Goldilocks period. Had over-borrowing been the cause
rather than effect, then real interest rates would have been bid up to attract the required capital. A
prospective hard landing in China might thus be expected to have serious global implications. The
Chinese economy did slow sharply over the last eighteen months, but only briefly, as large-scale behind-the-scenes stimulus meant
that it quickly returned to overheating. Given its 9-10 percent "trend" growth rate, and 30 percent import ratio, China is nearly
twice as powerful a global growth locomotive as the United States, based on its implied import gain. So while the surrounding
export hubs, whose growth prospects are a "second derivative" of what transpires in China, would suffer most directly
from Chinese slowing, the knock to global growth would be significant. Voracious Chinese demand
has also been a crucial driver of global commodity prices, particularly metals and oil, so they too
may face a hard landing if Chinese demand dries up.
CCP collapse deals a massive deflationary shock to the world.
ZHAO. 10.
Chen. Chief Global Strategist and Managing Editor for Global Investment Strategy, BCA Research
Group. “If the Chinese Bubble Bursts…” THE INTERNATIONAL ECONOMY.
http://www.international-economy.com/TIE_F10_ChinaBubbleSymp.pdf
At the onset, I believe the odds of a China asset bubble bursting are very low. It is difficult to argue that Chinese asset markets, particularly real estate, are indeed already in a "bubble." Property prices in tier two and tier three cities are actually quite cheap, but for purposes of discussion, there is always the danger that asset values could get massively inflated over the next few years. If so, a crash would be inevitable. In fact, China experienced a devastating real estate meltdown and "growth recession" in 1993-94, when then-premier Zhu Rongji initiated a credit crackdown to rein in spreading inflation and real estate speculation. Property prices in major cities dropped by over 40 percent and private sector GDP growth dropped to 3 percent from double-digit levels. Nonperforming loans soared to 30 percent of total banking sector assets. It took more than seven years for the government to clean up the financial mess and recapitalize the banking system. If another episode of a bursting asset bubble were to happen in China, the damage to the banking sector could be rather severe. History has repeatedly shown that credit inflation begets asset bubbles and, almost by definition, a bursting asset bubble always leads to a banking crisis and severe credit contraction. In China's case, bank credit is the lifeline for large state-owned companies, and a credit crunch could choke off growth of these enterprises quickly. The big difference between today's situation and the early 1990s, however, is that the Chinese authorities have accumulated vast reserves. China also runs a huge current account surplus. In the early 1990s, China's reserves had dwindled to almost nothing and the current account was in massive deficit, as a real estate meltdown led to a collapse in the Chinese currency in 1992-93. In other words, Beijing today has a lot of resources at its disposal to stimulate the economy or to recapitalize the banking system, whenever necessary. Therefore, the impact of a bursting bubble on growth could be very sharp and even severe, but it would be short-lived because of support from public sector spending. A bursting China bubble would also be felt acutely in commodity prices. The commodity story has been built around the China story. Naturally, a bursting China bubble would deal a devastating blow to the commodities as well as commodity producers such as Latin America, Australia, and Canada, among others. Asia as a whole, and Japan in particular, would also be acutely affected by a "growth recession" in China. The economic integration between China and the rest of Asia is well-documented, but it is important to note that there has been virtually no domestic spending in Japan in recent years and the country's economic growth has been leveraged almost entirely on exports to China. A bursting China bubble could seriously impair Japan's economic and asset market performance. Finally, a bursting China bubble would be a massive deflationary shock to the world economy. With China in growth recession, global saving excesses could surge and world aggregate demand would be vastly deficient. Bond yields could move to new lows and stocks would drop, probably precipitously—in short, investors would face very bleak and frightening prospects.
2NC US Econ I/L
Chinese growth turns the case --- strong Chinese technological power forms
linkages with US companies --- drives growth of US companies
NRC 10 National Research Council “The Dragon and the Elephant: Understanding the Development of Innovation Capacity
in China and India: Summary of a Conference” www.nap.edu/openbook.php?record_id=12873&page=13
Wadhwa found in his surveys that companies go offshore for reasons of “cost and where the markets are.”
Meanwhile, Asian immigrants are driving enterprise growth in the United States. Twenty-five percent of technology and
engineering firms launched in the last decade and 52% of Silicon Valley startups had immigrant founders. Indian immigrants
accounted for one-quarter of these. Among America’s new immigrant entrepreneurs, more than 74 percent have a master’s
or a PhD degree. Yet the backlog of U.S. immigration applications puts this stream of talent in limbo. One
million skilled immigrants are waiting for the annual quota of 120,000 visas, with caps of 8,400 per country. This is causing
a “reverse brain drain” from the United States back to countries of origin, the majority to India and China.
This endangers U.S. innovation and economic growth. There is a high likelihood, however, that returning skilled
talent will create new linkages to U.S. companies, as they are doing within General Electric, IBM, and
other companies. Jai Menon of IBM Corporation began his survey of IBM’s view of global talent recruitment by suggesting
that IBM pursues growth of its operations as a global entity. There are 372,000 IBMers in 172 countries; 123,000 of these
are in the Asia-Pacific region. Eighty percent of the firm’s R&D activity is still based in the United States. IBM supports open
standards development and networked business models to facilitate global collaboration. Three factors drive the firm’s
decisions on staff placement and location of recruitment -- economics, skills and environment. IBM India has grown its staff
tenfold in five years; its $6 billion investment in three years represents a tripling of resources in people, infrastructure and
capital. Increasingly, as Vivek Wadhwa suggested, people get degrees in the United States and return to India for their first
jobs. IBM follows a comparable approach in China, with 10,000+ IBM employees involved in R&D,
services and sales. In 2006, for the first time the number of service workers overtook the number of agricultural laborers
worldwide. Thus the needs of a service economy comprise an issue looming for world leaders.
CCP collapse hurts US economy
Karabell 13
(Zachary. American author, historian, money manager and economist. Karabell is President of River Twice Research, where he
analyzes economic and political trends. He is also a Senior Advisor for Business for Social Responsibility. Previously, he was
Executive Vice President, Head of Marketing and Chief Economist at Fred Alger Management, a New York-based investment firm,
and President of Fred Alger and Company, as well as Portfolio Manager of the China-US Growth Fund, which won both a Lipper
Award for top performance and a 5-star designation from Morningstar, Inc.. He was also Executive Vice President of Alger's
Spectra Funds, a no-load family of mutual funds that launched the $30 million Spectra Green Fund, which was based on the idea
that profit and sustainability are linked. At Alger, he oversaw the creation, launch and marketing of several funds, led corporate
strategy for acquisitions, and represented the firm at public forums and in the media. Educated at Columbia, Oxford, and Harvard,
where he received his Ph.D., he is the author of several books. “The U.S. can’t afford a Chinese economic collapse.” Reuters.
http://blogs.reuters.com/edgy-optimist/2013/03/07/the-u-s-cant-afford-a-chinese-economic-collapse/)
Is China about to collapse? That question has been front and center in the past weeks as the country completes its leadership
transition and after the exposure of its various real estate bubbles during a widely watched 60 Minutes exposé this past weekend.
Concerns about soaring property prices throughout China are hardly new, but they have been given added weight by the
government itself. Recognizing that a rapid implosion of the property market would disrupt economic growth, the central
government recently announced far-reaching measures designed to dent the rampant speculation. Higher down payments, limiting
the purchases of investment properties, and a capital gains tax on real estate transactions designed to make flipping properties less
lucrative were included. These measures, in conjunction with the new government’s announcing more modest growth targets of 7.5
percent a year, sent Chinese equities plunging and led to a slew of commentary in the United States saying China would be the next
shoe to drop in the global system. Yet there is more here than simple alarm over the viability of China’s economic growth. There is
the not-so-veiled undercurrent of rooting against China. It is difficult to find someone who explicitly wants it to collapse, but the
tone of much of the discourse suggests bloodlust. Given that China largely escaped the crises that so afflicted the United States and
the eurozone, the desire to see it stumble may be understandable. No one really likes a global winner if that winner isn’t you. The
need to see China fail verges on jingoism. Americans distrust the Chinese model, find that its business practices verge on the
immoral and illegal, that its reporting and accounting standards are sub-par at best and that its system is one of crony capitalism run
by crony communists. On Wall Street, the presumption usually seems to be that any Chinese company is a ponzi scheme
masquerading as a viable business. In various conversations and debates, I have rarely heard China’s economic model mentioned
without disdain. Take, as just one example, Gordon Chang in Forbes: “Beijing’s technocrats can postpone a reckoning, but they have
not repealed the laws of economics. There will be a crash.” The consequences
of a Chinese collapse, however, would
be severe for the United States and for the world. There could be no major Chinese contraction
without a concomitant contraction in the United States. That would mean sharply curtailed Chinese
purchases of U.S. Treasury bonds, far less revenue for companies like General Motors, Nike, KFC and Apple
that have robust business in China (Apple made $6.83 billion in the fourth quarter of 2012, up from $4.08 billion a year
prior), and far fewer Chinese imports of high-end goods from American and Asian companies. It would also
mean a collapse of Chinese imports of materials such as copper, which would in turn harm economic
growth in emerging countries that continue to be a prime market for American, Asian and
European goods. China is now the world’s second-largest economy, and property booms have been one
aspect of its growth. Individual Chinese cannot invest outside of the country, and the limited options of China’s stock exchanges and
almost nonexistent bond market mean that if you are middle class and want to do more than keep your money in cash or low-yielding bank accounts, you buy either luxury goods or apartments. That has meant a series of property bubbles over the past
decade and a series of measures by state and local officials to contain them. These recent measures are hardly the first, and they are
not likely to be the last. The past 10 years have seen wild swings in property prices, and as recently as 2011 the government took
steps to cool them; the number of transactions plummeted and prices slumped in hot markets like Shanghai as much as 30, 40 and
even 50 percent. You could go back year by year in the 2000s and see similar bubbles forming and popping, as the government
reacted to sharp run-ups with restrictions and then eased them when the pendulum threatened to swing too far. China has had a
series of property bubbles and a series of property busts. It has also had massive urbanization that in time has absorbed the excess
supply generated by massive development. Today much of that supply is priced far above what workers flooding into China’s cities
can afford. But that has always been true, and that housing has in time been purchased and used by Chinese families who are
moving up the income spectrum, much as U.S. suburbs evolved in the second half of the 20th century. More to the point, all
property bubbles are not created equal. The housing bubbles in the United States and Spain, for instance, would never have been so
disruptive without the massive amount of debt and the financial instruments and derivatives based on them. A bursting housing
bubble absent those would have been a hit to growth but not a systemic crisis. In China, most buyers pay cash, and there is no
derivative market around mortgages (at most there’s a small shadow market). Yes, there are all sorts of unofficial transactions with
high-interest loans, but even there, the consequences of busts are not the same as they were in the United States and Europe in
recent years. Two issues converge whenever China is discussed in the United States: fear of the next global crisis, and distrust and
dislike of the country. Concern is fine; we should always be attentive to possible risks. But China’s property bubbles are an unlikely
risk, because of the absence of derivatives and because the central government is clearly alert to the market’s behavior. Suspicion
and antipathy, however, are not constructive. They speak to the ongoing difficulty China poses to Americans’ sense of global
economic dominance and to the belief in the superiority of free-market capitalism to China’s state-managed capitalism. The U.S.
system may prove to be more resilient over time; it has certainly proven successful to date. Its
success does not require
China’s failure, nor will China’s success invalidate the American model. For our own self-interest
we should be rooting for their efforts, and not jingoistically wishing for them to fail.
2NC Impact UQ
Latest data show Chinese economy is growing now—ignore stock market claims
which don’t accurately reflect economic fundamentals
Miller and Charney 7/15
(Miller, Leland R. and Charney, Craig. Mr. Miller is president and Mr. Charney is research director of China Beige Book
International, a private economic survey. “China’s Economy Is Recovering,” Wall Street Journal, 7/15/2015.
http://www.wsj.com/articles/chinas-economy-is-recovering-1436979092//ghs-kw)
China released second-quarter statistics Wednesday that showed the economy growing at 7%, the
same real rate as the first quarter but with stronger nominal growth. That result, higher than
expected and coming just after a stock-market panic, surprised some commentators and even aroused suspicion that
the government cooked the numbers for political reasons. While official data is indeed unreliable, our firm's latest
research confirms that the Chinese economy is improving after several disappointing quarters
-- just not for the reasons given by Beijing. The China Beige Book (CBB), a private survey of more than 2,000 Chinese
firms each quarter, frequently anticipates the official story. We documented the 2012 property rebound, the
2013 interbank credit crunch and the 2014 slowdown in capital expenditure before any of them showed up in official statistics. The
modest but broad-based improvement in the Chinese economy that we tracked in the second quarter may
seem at odds with the headlines of carnage in the country's financial markets. But stock prices in
China have almost nothing to do with the economy's fundamentals. Our data show sales revenue,
capital expenditure, new domestic orders, hiring, wages and profits were all better in the
second quarter, making the improvement unmistakable -- albeit not outstanding in any one category. In the
labor market, both employment and wage growth strengthened, and prospects for hiring look
stable. This is not new: Our data have shown the labor market remarkably steady over the past year, despite the economy's
overall deceleration. Inflation data are also a reason for optimism. Along with wages, input costs and
sales prices grew faster in the second quarter. The rate is still slower than a year ago, but at least this is a break
from the previously unstoppable tide of price deterioration. While it is just one quarter, our data suggest deflation may have
peaked. With the explosive stock market run-up occupying all but the final weeks of the quarter, it might seem reasonable to
conclude that this rally was the impetus behind the better results. Not so. Of all our indicators, capital expenditure should have
responded most positively to a boom in equities prices, but the uptick was barely noticeable. The
strength of the second-quarter performance is instead found in widespread expanding sales volumes, which firms were
able to accomplish without sacrificing profit margins. The fact that stronger sales, rather than
greater investment, was the driving force this quarter is itself an encouraging sign in light of
China's longstanding problem of excess investment and inadequate consumption. These gains
also track across sectors, highlighted by a welcome resurgence in both property and retail.
Property saw its strongest results in 18 months, buoyed by stronger commercial and
residential realty as well as transportation construction. Six of our eight regions were better
than last quarter, led by the Southwest and North. The results were also an improvement over the second quarter of last year,
if somewhat less so, with residential construction the sector's major remaining black eye. Retailers, meanwhile, reported a
second consecutive quarter of improvement, both on-quarter and on-year, with growth
accelerating. For the first time in 18 months, the retail sector also had faster growth than
manufacturing, underscoring the danger of treating manufacturing as the bellwether for the economy.
China’s economy is stabilizing now but it’s fragile
AFP and Reuters 7/15
(Agence France-Presse and Reuters on Deutsche Welle. "China beats expectations on economic growth," DW. 07-15-2015.
http://www.dw.com/en/china-beats-expectations-on-economic-growth/a-18584453//ghs-kw)
Slowing growth in key areas like foreign trade, state investment and domestic demand had prompted economists to predict
a year-on-year GDP increase of just under 7 percent for the April-June quarter. The figure, released by the National
Bureau of Statistics (NBS) on Wednesday, matched first-quarter growth in China exactly. The government has officially
set 7 percent as its target for GDP growth this year. "We are aware that the domestic and
external economic conditions are still complicated, the global economic recovery is slow and
tortuous and the foundation for the stabilization of China's economy needs to be further
consolidated," NBS spokesman Sheng Laiyun told reporters. However, "the major indicators of the second
quarter showed that the growth was stabilized and ready to pick up, the economy developed
with positive changes and the vitality of the economic development was strengthened," Sheng
added. Industrial output, including production at factories, workshops and mines also rose by 6.8 percent in June
compared to 6.1 percent in May, the NBS said. The robust growth comes
despite a difficult economic year for China. Figures released on Monday showed a dip in foreign trade in the
first half of the year - with exports up slightly but imports well down. Public investment, for years the driver of double-digit
percentage growth in China, is down as the government seeks to rely more on consumer demand - itself slow to pick up. In recent
weeks, the Shanghai stock market has been falling sharply, albeit after a huge boom in months leading up to the crash.
Surveys prove China is experiencing growth now
Reuters 6/23
(Reuters. "China’s Economy Appears to Be Stabilizing, Reports Show," International New York Times. 6-23-2015.
http://www.nytimes.com/2015/06/24/business/international/chinas-economy-appears-to-be-stabilizing-reports-show.html//ghs-kw)
SHANGHAI — China’s
factory activity showed signs of stabilizing in June, with two nongovernment
surveys suggesting that the economy might be regaining some momentum, while many analysts expected
further policy support to ensure a more sure-footed recovery. The preliminary purchasing managers index for China published by
HSBC and compiled by Markit, a data analysis firm, edged up to 49.6 in June. It was the survey’s highest level in three months but
still below the 50 mark, which would have pointed to an expansion. The final reading for May was 49.2. “The pickup
in new
orders” — which returned to positive territory at 50.3 in June — “was driven by a strong rise in the
new export orders subcomponent, suggesting that foreign demand may finally be turning a
corner,” Capital Economics analysts wrote in a research note. “Today’s P.M.I. reading reinforces our view that the economy has
started to find its footing.” But companies stepped up layoffs, the survey showed, shedding jobs at the fastest pace in more than six
years. Annabel Fiddes, an economist at Markit, said: “Manufacturers continued to cut staff. This suggests companies have relatively
muted growth expectations.” She said that she expected Beijing to “step up their efforts to stimulate growth and job creation.” A
much rosier
picture was painted by a separate survey, a quarterly report by China Beige Book International, a data
analysis firm, describing a “broad-based recovery” in the second quarter, led primarily by China’s
interior provinces. “Among major sectors, two developments stand out: a welcome resurgence
in retail — which saw rising revenue growth despite a slip in prices — and a broad-based
rebound in property,” said the report’s authors, Leland Miller and Craig Charney. Manufacturing, services, real
estate, agriculture and mining all had year-on-year and quarterly gains, they said.
2NC US Heg I/L
Chinese growth is key to US hegemony
Yiwei 07 (Wang Yiwei, Center for American Studies @ Fudan University, “China's Rise: An Unlikely Pillar of US Hegemony,” Harvard International Review, Volume 29, Issue 1, Spring 2007, pp. 60-63.)
China’s rise is taking place in this context. That is to say, Chinese development is merely one facet of Asian and developing
states’ economic progress in general. Historically, the United States has provided the dominant development paradigm for the
world. But today, China has come up with development strategies that are different from that of any other nation-state in
history and are a consequence of the global migration of industry along comparative advantage lines. Presently, the
movement of light industry and consumer goods production from advanced industrialized countries to China is nearly
complete, but heavy industry is only beginning to move. Developed countries’ dependence on China will be far
more pronounced following this movement. As global production migrates to China and other
developing countries, a feedback loop will emerge and indeed is already beginning to emerge. Where
globalization was once an engine fueled by Western muscle and steered by Western policy, there is
now more gas in the tank but there are also more hands on the steering wheel. In the past, developing
countries were often in a position only to respond to globalization, but now, developed countries must respond as well.
Previously the United States believed that globalization was synonymous with Americanization, but today’s world has
witnessed a United States that is feeling the influence of the world as well. In the past, a sneeze on Wall Street was followed
by a downturn in world markets. But in February 2007, Chinese stocks fell sharply and Wall Street responded with its steepest
decline in several years. In this way, the whirlpool of globalization is no longer spinning in one direction. Rather, it is
generating feedback mechanisms and is widening into an ellipse with two focal points: one located in the United States, the
historical leader of the developed world, and one in China, the strongest country in the new developing world power bloc.
Combating Regionalization It is important to extend the discussion beyond platitudes regarding “US decline” or the “rise of
China” and the invective-laden debate over threats and security issues that arises from these. We must step out of a narrowly
national mindset and reconsider what Chinese development means for the United States. One of the consequences of
globalization has been that countries such as China, which depend on exporting to US markets, have
accumulated large dollar reserves. This has been unavoidable for these countries, as they must purchase dollars in
order to keep the dollar strong and thus avoid massive losses. Thus, the United States is bound to bear a trade
deficit, and moreover, this deficit is inextricably tied to the dollar’s hegemony in today’s markets. The
artificially high dollar and the US economy at large depend in a very real sense on China’s investment in
the dollar. Low US inflation and interest rates similarly depend on the thousands of “Made in China”
labels distributed across the United States. As Paul Krugman wrote in The New York Times, the situation is
comparable to one in which “the American sells the house but the money to buy the house comes from China.” Former US
treasury secretary Lawrence Summers even affirms that China and the United States may be in a kind of imprudent “balance
of financial terror.” Today, the US trade deficit with China is US$200 billion. China holds over US$1 trillion in foreign exchange
reserves and US$350 billion in US bonds. Together, the Chinese and US economies account for half of global economic
growth. Thus, a fantastic situation has arisen: China’s rise is actually supporting US hegemony. Taking US hegemony and
Western preeminence as the starting point, many have concluded that the rise of China presents a
threat. The premise of this logic is that the international system predicated on US hegemony and Western preeminence
would be destabilized by the rise of a second major power. But this view is inconsistent with the phenomenon of
one-way globalization. The so-called process of one-way globalization can more truly be called
Westernization. Today’s globalization is still in large part driven by the West, inasmuch as it is tinged by
Western unilateralism and entails the dissemination of essentially Western standards and ideology. For example, Coca
Cola has become a Chinese cultural icon, Louis Vuitton stores crowd high-end shopping districts in Shanghai, and, as
gender equality progresses, Chinese women look to Western women for inspiration. In contrast, Haier, the
best-known Chinese brand in the United States, is still relatively unknown, and Wang Fei, who is widely regarded in China as
the pop star who was able to make it in the United States, has less name-recognition there than a first-round American Idol
cut.
2NC Growth Impacts
Chinese growth prevents global economic collapse, war over Taiwan and CCP
collapse
Lewis ‘08 [Dan, Research Director – Economic Research Council, “The Nightmare of a Chinese
Economic Collapse,” World Finance, 5/13,
http://www.worldfinance.com/news/home/finalbell/article117.html]
In 2001, Gordon Chang authored a global bestseller "The Coming Collapse of China." To suggest that the world’s largest nation of 1.3
billion people is on the brink of collapse is understandably for many, a deeply unnerving theme. And many seasoned “China Hands”
rejected Chang’s thesis outright. In a very real sense, they were of course right. China’s
expansion has continued over
the last six years without a hitch. After notching up a staggering 10.7 percent growth last year, it is now the 4th largest
economy in the world with a nominal GDP of $2.68trn. Yet there are two Chinas that concern us here; the 800 million who live in the
cities, coastal and southern regions and the 500 million who live in the countryside and are mainly engaged in agriculture. The latter
– which we in the West hear very little about – are still very poor and much less happy. Their poverty and misery do not necessarily
spell an impending cataclysm – after all, that is how they have always been. But it does illustrate the inequity of Chinese
monetary policy. For many years, the Chinese yuan has been held at an artificially low value to boost manufacturing exports. This has
clearly worked for one side of the economy, but not for the purchasing power of consumers and the rural poor, some of who are
getting even poorer. The central reason for this has been the inability of Chinese monetary policy to adequately support both
Chinas. Meanwhile,
rural unrest in China is on the rise – fuelled not only by an accelerating income
gap with the coastal cities, but by an oft-reported appropriation of their land for little or no
compensation by the state. According to Professor David B. Smith, one of the City’s most accurate and respected
economists in recent years, potentially far more serious though is the impact that Chinese monetary policy could have on many
Western nations such as the UK. Quite simply, China’s undervalued currency has enabled Western governments to maintain
artificially strong currencies, reduce inflation and keep interest rates lower than they might otherwise be. We should therefore be
very worried about how vulnerable Western economic growth is to an upward revaluation of the Chinese yuan. Should that
revaluation happen to appease China’s rural poor, at a stroke, the dollar, sterling and the euro would quickly depreciate, rates in
those currencies would have to rise substantially and the yield on government bonds would follow suit. This would add greatly to
the debt servicing cost of budget deficits in the USA, the UK and much of euro land. A reduction in demand for imported Chinese
goods would quickly entail a decline in China’s economic growth rate. That is alarming. It
has been calculated that to
keep China’s society stable – i.e. to manage the transition from a rural to an urban society without
devastating unemployment – the minimum growth rate is 7.2 percent. Anything less than that
and unemployment will rise and the massive shift in population from the country to the cities
becomes unsustainable. This is when real discontent with communist party rule becomes vocal
and hard to ignore. It doesn’t end there. That will at best bring a global recession. The crucial
point is that communist authoritarian states have at least had some success in keeping a lid on
ethnic tensions – so far. But when multi-ethnic communist countries fall apart from economic
stress and the implosion of central power, history suggests that they don’t become successful
democracies overnight. Far from it. There’s a very real chance that China might go the way of
Yugoslavia or the Soviet Union – chaos, civil unrest and internecine war. In the very worst case
scenario, a Chinese government might seek to maintain national cohesion by going to war with
Taiwan – whom America is pledged to defend.
Chinese economic growth prevents global nuclear war
Kaminski 7 (Antoni Z., Professor – Institute of Political Studies, “World Order: The Mechanics
of Threats (Central European Perspective)”, Polish Quarterly of International Affairs, 1, p. 58)
As already argued, the economic advance of China has taken place with relatively few corresponding changes in the political system,
although the operation of political and economic institutions has seen some major changes. Still, tools are missing that would allow
the establishment of political and legal foundations for the modem economy, or they are too weak. The tools are efficient public
administration, the rule of law, clearly defined ownership rights, efficient banking system, etc. For these reasons, many experts fear
an economic crisis in China. Considering the importance of the state for the development of the global economy, the crisis
would have serious global repercussions. Its political ramifications could be no less dramatic owing to the special
position the military occupies in the Chinese political system, and the existence of many potential vexed issues in East Asia (disputes
over islands in the China Sea and the Pacific). A potential hotbed
of conflict is also Taiwan's status. Economic
recession and the related destabilization of internal policies could lead to a political, or even military, crisis. The
likelihood of the global escalation of the conflict is high, as the interests of Russia, China, Japan,
Australia and, first and foremost, the US clash in the region.
China’s economic rise is good --- they’re on the brink of collapse --- causes CCP
instability and lashout --- also tubes the global economy, US primacy, and Sino
relations
Mead 9 Walter Russell Mead, Henry A. Kissinger Senior Fellow in U.S. Foreign Policy at the
Council on Foreign Relations, “Only Makes You Stronger,” The New Republic, 2/4/9,
http://www.tnr.com/story_print.html?id=571cbbb9-2887-4d81-8542-92e83915f5f8
The greatest danger both to U.S.-China relations and to American power itself is probably not that
China will rise too far, too fast; it is that the current crisis might end China's growth miracle. In the
worst-case scenario, the turmoil in the international economy will plunge China into a major economic
downturn. The Chinese financial system will implode as loans to both state and private enterprises go bad.
Millions or even tens of millions of Chinese will be unemployed in a country without an
effective social safety net. The collapse of asset bubbles in the stock and property markets will wipe out
the savings of a generation of the Chinese middle class. The political consequences could include
dangerous unrest--and a bitter climate of anti-foreign feeling that blames others for China's woes.
(Think of Weimar Germany, when both Nazi and communist politicians blamed the West for Germany's economic
travails.) Worse, instability could lead to a vicious cycle, as nervous investors moved their money out of the country,
further slowing growth and, in turn, fomenting ever-greater bitterness. Thanks to a generation of rapid
economic growth, China has so far been able to manage the stresses and conflicts of modernization
and change; nobody knows what will happen if the growth stops.
Growth decline threatens CCP rule---they’ll start diversionary wars in response
Shirk 7 Susan L. Shirk is an expert on Chinese politics and former Deputy Assistant Secretary of
State during the Clinton administration. She was in the Bureau of East Asia and Pacific Affairs
(People's Republic of China, Taiwan, Hong Kong and Mongolia). She is currently a professor at
the Graduate School of International Relations and Pacific Studies at the University of California,
San Diego. She is also a Senior Director of Albright Stonebridge Group, a global strategy firm,
where she assists clients with issues related to East Asia. “China: Fragile Superpower,” Book
By sustaining high rates of economic growth, China’s leaders create new jobs and limit the
number of unemployed workers who might go to the barricades. Binding the public to the Party through nationalism also
helps preempt opposition. The trick is to find a foreign policy approach that can achieve both these vital objectives simultaneously. How long can it last? Viewed objectively,
China’s communist regime looks surprisingly resilient. It may be capable of surviving for years to come so long as the economy continues to grow and create jobs. Survey
research in Beijing shows widespread support (over 80 percent) for the political system as a whole linked to sentiments of nationalism and acceptance of the CCP’s argument
about “stability first.”97 Without making any fundamental changes in the CCP-dominated political system—leaders from time to time have toyed with reform ideas such as local
elections but in each instance have backed away for fear of losing control—the Party has bought itself time. As scholar Pei Minxin notes, the ability of communist regimes to use
their patronage and coercion to hold on to power gives them little incentive to give up any of that power by introducing gradual democratization from above. Typically, only
when communist systems implode do their political fundamentals change.98 As China’s leaders well know, the greatest political risk lying ahead
of them is the possibility of an economic crash that throws millions of workers out of their jobs or
sends millions of depositors to withdraw their savings from the shaky banking system. A massive environmental or public health disaster also
could trigger regime collapse, especially if people’s lives are endangered by a media cover-up imposed by Party authorities. Nationwide
rebellion becomes a real possibility when large numbers of people are upset about the same issue at the same
time. Another dangerous scenario is a domestic or international crisis in which the CCP leaders feel
compelled to lash out against Japan, Taiwan, or the United States because from their point of view not
lashing out might endanger Party rule.
Chinese Growth Key to Military Restraint on Taiwan- Decline of Economic
Influence Causes China to Resort to Military Aggression
Lampton, ’3 (David, Director Chinese Studies, Nixon Center, FDCH, 3/18)
The Chinese realize that power has different faces--military, economic, and normative
(ideological) power. Right now, China is finding that in the era of globalization, economic power
(and potential economic power) is the form of power it has in greatest abundance and which it
can use most effectively. As long as economic influence continues to be effective for Beijing, as
it now seems to be in dealing with Taiwan, for example, China is unlikely to resort to military
intimidation as its chief foreign policy instrument.
Decline causes lashout- nationalists target the US and Taiwan
Friedberg, professor of IR at Princeton, 2011 (July/August, Aaron L., professor of politics and international
affairs at the Woodrow Wilson School at Princeton University, Hegemony with Chinese Characteristics, The National Interest, lexis)
Such fears of
aggression are heightened by an awareness that anxiety over a lack of legitimacy at
home can cause nondemocratic governments to try to deflect popular frustration and
discontent toward external enemies. Some Western observers worry, for example, that if China’s economy
falters its rulers will try to blame foreigners and even manufacture crises with Taiwan, Japan or
the United States in order to rally their people and redirect the population’s anger. Whatever
Beijing’s intent, such confrontations could easily spiral out of control. Democratic leaders are hardly immune
to the temptation of foreign adventures. However, because the stakes for them are so much lower (being voted out of office rather
than being overthrown and imprisoned, or worse), they are less likely to take extreme risks to retain their hold on power.
2NC China-India War Impact
Economic collapse will crush party legitimacy and ignite social instability
Li 9 (Cheng, Dir. of Research, John L. Thornton China Center, “China’s Team of Rivals,” Brookings Foundation Article
series, March, http://www.brookings.edu/articles/2009/03_china_li.aspx)
The two dozen senior politicians who walk the halls of Zhongnanhai, the compound of the Chinese Communist Party’s leadership in
Beijing, are worried. What
was inconceivable a year ago now threatens their rule: an economy in
freefall. Exports, critical to China’s searing economic growth, have plunged. Thousands of
factories and businesses, especially those in the prosperous coastal regions, have closed. In the
last six months of 2008, 10 million workers, plus 1 million new college graduates, joined the
already gigantic ranks of the country’s unemployed. During the same period, the Chinese stock
market lost 65 percent of its value, equivalent to $3 trillion. The crisis, President Hu Jintao said
recently, “is a test of our ability to control a complex situation, and also a test of our party’s
governing ability.” With this rapid downturn, the Chinese Communist Party suddenly looks vulnerable.
Since Deng Xiaoping initiated economic reforms three decades ago, the party’s legitimacy has relied upon its ability
to keep the economy running at breakneck pace. If China is no longer able to maintain a high
growth rate or provide jobs for its ever growing labor force, massive public dissatisfaction and
social unrest could erupt. No one realizes this possibility more than the handful of people who steer China’s massive
economy. Double-digit growth has sheltered them through a SARS epidemic, massive earthquakes, and contamination scandals.
Now, the
crucial question is whether they are equipped to handle an economic crisis of this magnitude—and
survive the political challenges it will bring. This year marks the 60th anniversary of the People’s Republic, and the
ruling party is no longer led by one strongman, like Mao Zedong or Deng Xiaoping. Instead, the Politburo and
its Standing Committee, China’s most powerful body, are run by two informal coalitions that compete
against each other for power, influence, and control over policy. Competition in the Communist Party is, of course, nothing new. But
the jockeying today is no longer a zero-sum game in which a winner takes all. It is worth remembering
that when Jiang Zemin handed the reins to his successor, Hu Jintao, in 2002, it marked the first time in the republic’s history that the
transfer of power didn’t involve bloodshed or purges. What’s more, Hu was not a protégé of Jiang’s; they belonged to competing
factions. To borrow a phrase popular in Washington these days, post-Deng
China has been run by a team of
rivals. This internal competition was enshrined as party practice a little more than a year ago. In October 2007, President Hu
surprised many China watchers by abandoning the party’s normally straightforward succession procedure and
designating not one but two heirs apparent. The Central Committee named Xi Jinping and Li Keqiang—two very
different leaders in their early 50s—to the nine-member Politburo Standing Committee, where the rulers of China are
groomed. The future roles of these two men, who will essentially share power after the next party congress meets in 2012, have
since been refined: Xi will be the candidate to succeed the president, and Li will succeed Premier Wen Jiabao. The
two rising stars share little in terms of family background, political association, leadership skills, and policy orientation. But they are
each heavily involved in shaping economic policy—and they
are expected to lead the two competing coalitions
that will be relied upon to craft China’s political and economic trajectory in the next decade and
beyond.
Regime collapse causes China-India war
Cohen ’02 (Stephen, Senior Fellow – Brookings Institution, “Nuclear Weapons and Nuclear War in South Asia: An Unknowable Future”, May,
http://www.brookings.edu/dybdocroot/views/speeches/cohens20020501.pdf)
A similar argument may be made with respect to China. China is a country that has had its share of upheavals in the past. While
there is no expectation today of renewed internal turmoil, it is important to remember that closed authoritarian societies are
subject to deep crisis in moments of sudden change. The breakup of the Soviet Union and Yugoslavia, and the turmoil that has
ravaged many members of the former communist bloc are examples of what could happen to China. A severe economic crisis,
rebellions in Tibet and Xinjiang, a reborn democracy movement and a party torn by factions could be the ingredients of an unstable
situation. A vulnerable Chinese leadership determined to bolster its shaky position by an aggressive policy toward India or the
United States or both might become involved in a major crisis with India, perhaps engage in nuclear saber-rattling. That would
encourage India to adopt a stronger nuclear posture, possibly with American assistance.
Causes nuclear use
Jonathan S. Landay, National Security and Intelligence Correspondent, 2K [“Top Administration Officials Warn Stakes for U.S.
Are High in Asian Conflicts”, Knight Ridder/Tribune News Service, March 10, p. Lexis]
Few if any experts think China and Taiwan, North Korea and South Korea, or India and Pakistan
are spoiling to fight. But even a minor miscalculation by any of them could destabilize Asia, jolt
the global economy and even start a nuclear war. India, Pakistan and China all have nuclear
weapons, and North Korea may have a few, too. Asia lacks the kinds of organizations,
negotiations and diplomatic relationships that helped keep an uneasy peace for five decades in
Cold War Europe. “Nowhere else on Earth are the stakes as high and relationships so fragile,” said Bates Gill, director of
northeast Asian policy studies at the Brookings Institution, a Washington think tank. “We see the convergence of great power
interest overlaid with lingering confrontations with no institutionalized security mechanism in place. There are elements for
potential disaster.” In an effort to cool the region’s tempers, President Clinton, Defense Secretary William S. Cohen and National
Security Adviser Samuel R. Berger all will hopscotch Asia’s capitals this month. For America, the stakes could hardly be higher. There
are 100,000 U.S. troops in Asia committed to defending Taiwan, Japan and South Korea, and the United States would instantly
become embroiled if Beijing moved against Taiwan or North Korea attacked South Korea. While
Washington has no
defense commitments to either India or Pakistan, a conflict between the two could end the
global taboo against using nuclear weapons and demolish the already shaky international
nonproliferation regime. In addition, globalization has made a stable Asia — with its massive markets, cheap labor, exports
and resources — indispensable to the U.S. economy. Numerous U.S. firms and millions of American jobs depend on trade with Asia
that totaled $600 billion last year, according to the Commerce Department.
2NC Bioweapons Impact
The CCP would lash out for power, and they would use bioweapons
Renxin 05 (San Renxin, Journalist, 8-3-2005, “CCP Gambles Insanely to Avoid Death,” Epoch Times,
www.theepochtimes.com/news/5-8-3/30931.html)
Since the Party’s life is “above all else,” it would not be surprising if the CCP resorts to the use of
biological, chemical, and nuclear weapons in its attempt to postpone its life. The CCP, that disregards
human life, would not hesitate to kill two hundred million Americans, coupled with seven or eight
hundred million Chinese, to achieve its ends. The “speech,” free of all disguises, lets the public see the CCP for what
it really is: with evil filling its every cell, the CCP intends to fight all of mankind in its desperate attempt to
cling to life. And that is the theme of the “speech.” The theme is murderous and utterly evil. We did witness in China beggars
who demanded money from people by threatening to stab themselves with knives or prick their throats on long nails. But we have
never, until now, seen a rogue who blackmails the world to die with it by wielding biological, chemical, and nuclear weapons.
Anyhow, the bloody confession affirmed the CCP’s bloodiness: a monstrous murderer, who has killed 80 million Chinese people, now
plans to hold one billion people hostage and gamble with their lives. As the CCP is known to be a clique with a closed system, it is
extraordinary for it to reveal its top secret on its own. One might ask: what is the CCP’s purpose to make public its gambling plan on
its deathbed? The answer is: the “speech” would have the effect of killing three birds with one stone. Its intentions are the
following: Expressing the CCP’s resolve that it “not be buried by either heaven or earth” (direct quote from the “speech”). But then,
isn’t the CCP opposed to the universe if it claims not to be buried by heaven and earth? Feeling the urgent need to harden its image
as a soft egg in the face of the Nine Commentaries. Preparing publicity for its final battle with mankind by threatening war and
trumpeting violence. So, strictly speaking, what the CCP has leaked out is more of an attempt to clutch at straws to save its life
rather than to launch a trial balloon. Of course, the way the “speech” was presented had been carefully prepared. It did not have a
usual opening or ending, and the audience, time, place, and background related to the “speech” were all kept unidentified. One may
speculate or imagine as one may, but never verify. The aim was obviously to create a mysterious setting. In short, the “speech” came
out as something one finds difficult to tell whether it is false or true.
Outweighs and causes extinction
Ochs 02 Past president of the Aberdeen Proving Ground Superfund Citizens Coalition, member of the Depleted Uranium Task
Force of the Military Toxics Project, and member of the Chemical Weapons Working Group [Richard Ochs, June 9, 2002, “Biological
Weapons Must Be Abolished Immediately,” http://www.freefromterror.net/other_articles/abolish.html]
Of all the weapons of mass destruction, the genetically
engineered biological weapons, many without a
known cure or vaccine, are an extreme danger to the continued survival of life on earth. Any
perceived military value or deterrence pales in comparison to the great risk these weapons pose just sitting in vials in laboratories.
While a “nuclear winter,” resulting from a massive exchange of nuclear weapons, could also kill off most of life on earth and
severely compromise the health of future generations, they are
easier to control. Biological weapons, on the
other hand, can get out of control very easily, as the recent anthrax attacks have demonstrated. There is no way to
guarantee the security of these doomsday weapons because very tiny amounts can be stolen or accidentally released and then
grow or be grown to horrendous proportions. The Black Death of the Middle Ages would be small in comparison to
the potential damage bioweapons could cause. Abolition of chemical weapons is less of a priority because, while they can also kill
millions of people outright, their persistence in the environment would be less than nuclear or biological agents or more localized.
Hence, chemical weapons would have a lesser effect on future generations of innocent people and the natural environment. Like the
Holocaust, once a localized chemical extermination is over, it is over. With nuclear and biological weapons, the killing will probably
never end. Radioactive elements last tens of thousands of years and will keep causing cancers virtually forever. Potentially worse
than that, bio-engineered agents
by the hundreds with no known cure could wreak even greater
calamity on the human race than could persistent radiation. AIDS and ebola viruses are just a small example of recently
emerging plagues with no known cure or vaccine. Can we imagine hundreds of such plagues? HUMAN EXTINCTION IS NOW
POSSIBLE. Ironically, the Bush administration has just changed the U.S. nuclear doctrine to allow nuclear retaliation against
threats upon allies by conventional weapons. The past doctrine allowed such use only as a last resort when our nation’s survival was
at stake. Will the new policy also allow easier use of US bioweapons? How slippery is this slope?
2NC AT Collapse Good
Reject their collapse good arguments—they’re racist and incoherent—Chinese
collapse decimates the U.S. for several reasons
Karabell, 13—PhD @ Harvard, President of River Twice Research
Zachary, “The U.S. can’t afford a Chinese economic collapse,” The Edgy Optimist, a Reuters blog
run by Karabell, March 7, http://blogs.reuters.com/edgy-optimist/2013/03/07/the-u-s-cant-afford-a-chinese-economic-collapse/ --BR
Is China about to collapse? That question has been front and center in the past weeks as the country
completes its leadership transition and after the exposure of its various real estate bubbles during a widely watched 60 Minutes
exposé this past weekend. Concerns about soaring property prices throughout China are
hardly new, but they have
been given added weight by the government itself. Recognizing that a rapid implosion of the property market
would disrupt economic growth, the central government recently announced far-reaching measures designed to dent the rampant
speculation. Higher down payments, limiting the purchases of investment properties, and a capital gains tax on real estate
transactions designed to make flipping properties less lucrative were included. These measures, in conjunction with the new
government’s announcing more modest growth targets of 7.5 percent a year, sent Chinese equities plunging and led to a slew of
commentary in the United States saying China would be the next shoe to drop in the global system. Yet there
is more here
than simple alarm over the viability of China’s economic growth. There is the not-so-veiled
undercurrent of rooting against China. It is difficult to find someone who explicitly wants it to
collapse, but the tone of much of the discourse suggests bloodlust. Given that China largely
escaped the crises that so afflicted the United States and the eurozone, the desire to see it
stumble may be understandable. No one really likes a global winner if that winner isn’t you. The
need to see China fail verges on jingoism. Americans distrust the Chinese model, find that its
business practices verge on the immoral and illegal, that its reporting and accounting standards
are sub-par at best and that its system is one of crony capitalism run by crony communists. On
Wall Street, the presumption usually seems to be that any Chinese company is a ponzi scheme
masquerading as a viable business. In various conversations and debates, I have rarely heard
China’s economic model mentioned without disdain. Take, as just one example, Gordon Chang
in Forbes: “Beijing’s technocrats can postpone a reckoning, but they have not repealed the laws
of economics. There will be a crash.” The consequences of a Chinese collapse, however, would
be severe for the United States and for the world. There could be no major Chinese contraction
without a concomitant contraction in the United States. That would mean sharply curtailed
Chinese purchases of U.S. Treasury bonds, far less revenue for companies like General Motors,
Nike, KFC and Apple that have robust business in China (Apple made $6.83 billion in the fourth quarter of 2012,
up from $4.08 billion a year prior), and far fewer Chinese imports of high-end goods from American and
Asian companies. It would also mean a collapse of Chinese imports of materials such as copper,
which would in turn harm economic growth in emerging countries that continue to be a prime
market for American, Asian and European goods. China is now the world’s second-largest economy, and property
booms have been one aspect of its growth. Individual Chinese cannot invest outside of the country, and the limited options of
China’s stock exchanges and almost nonexistent bond market mean that if you are middle class and want to do more than keep your
money in cash or low-yielding bank accounts, you buy either luxury goods or apartments. That has meant a series of property
bubbles over the past decade and a series of measures by state and local officials to contain them. These recent measures are hardly
the first, and they are not likely to be the last. The past 10 years have seen wild swings in property prices, and as recently as 2011
the government took steps to cool them; the number of transactions plummeted and prices slumped in hot markets like Shanghai as
much as 30, 40 and even 50 percent. You could go back year by year in the 2000s and see similar bubbles forming and popping, as
the government reacted to sharp run-ups with restrictions and then eased them when the pendulum threatened to swing too far.
China has had a series of property bubbles and a series of property busts. It has also had massive urbanization that in time has
absorbed the excess supply generated by massive development. Today much of that supply is priced far above what workers
flooding into China’s cities can afford. But that has always been true, and that housing has in time been purchased and used by
Chinese families who are moving up the income spectrum, much as U.S. suburbs evolved in the second half of the 20th century.
More to the point, all property bubbles are not created equal. The housing bubbles in the United States and Spain, for instance,
would never have been so disruptive without the massive amount of debt and the financial instruments and derivatives based on
them. A bursting housing bubble absent those would have been a hit to growth but not a systemic crisis. In China, most buyers pay
cash, and there is no derivative market around mortgages (at most there’s a small shadow market). Yes, there are all sorts of
unofficial transactions with high-interest loans, but even there, the consequences of busts are not the same as they were in the
United States and Europe in recent years. Two
issues converge whenever China is discussed in the United
States: fear of the next global crisis, and distrust and dislike of the country. Concern is fine; we
should always be attentive to possible risks. But China’s property bubbles are an unlikely risk, because of the
absence of derivatives and because the central government is clearly alert to the market’s behavior. Suspicion and
antipathy, however, are not constructive. They speak to the ongoing difficulty China poses to
Americans’ sense of global economic dominance and to the belief in the superiority of free-market capitalism to China’s state-managed capitalism. The U.S. system may prove to be more
resilient over time; it has certainly proven successful to date. Its success does not require
China’s failure, nor will China’s success invalidate the American model. For our own self-interest
we should be rooting for their efforts, and not jingoistically wishing for them to fail.
2NC AT Collapse Inevitable
Status quo isn’t sufficient to trigger collapse because the US is lagging behind
Forbes, 7/9/2014
US Finance/Economics News Report Service
(“John Kerry In Beijing: Four Good Reasons Why The Chinese View American Leaders As Empty
Suits”, http://www.forbes.com/sites/eamonnfingleton/2014/07/09/john-kerry-in-beijing-four-good-reasons-why-the-chinese-treat-american-leaders-as-jackasses/)
2. American policymakers have procrastinated in meeting the Chinese challenge because they
have constantly – for more than a decade now – been misled by siren American voices
predicting an imminent Chinese financial collapse. China is a big economy and large financial
collapses are not inconceivable. But even the most disastrous such collapse would be unlikely to
stop the Chinese export drive in its tracks. American policymakers have failed to pay sufficient
attention to the central objective of Chinese policy, which is to take over from the United States,
Japan and Germany as the world’s premier source of advanced manufactured products.
Consensus exists and the best markers point to a slow decline, and the worst
markers make sense in the context of China
Huang, 2/11, a senior associate in the Carnegie Asia Program, where his research focuses on
China’s economic development and its impact on Asia and the global economy (Yukon, “Do Not
Fear a Chinese Property Bubble”, Carnegie Endowment for International Peace,
http://carnegieendowment.org/2014/02/11/do-not-fear-chinese-property-bubble/h0oz)
Yet when
analysts drill into the balance sheets of borrowers and banks, they find little evidence
of impending disaster. Government debt ratios are not high by global standards and are backed by
valuable assets at the local level. Household debt is a fraction of what it is in the west, and it is supported by
savings and rising incomes. The profits and cash positions of most firms for which data are available
have not deteriorated significantly while sovereign guarantees cushion the more vulnerable state enterprises. The
consensus, therefore, is that China’s debt situation has weakened but is manageable.¶ Why are the
views from detailed sector analysis so different from the red flags signalled by the broader macro debt indicators? The answer lies in
the role that land values play in shaping these trends.¶ Take the two most pressing concerns: rising debt levels as a share of gross
domestic product and weakening links between credit expansion and GDP growth. The first relates to the surge in the ratio of total
credit to GDP by about 50-60 percentage points over the past five years, which is viewed as a strong predictor of an impending
crash. Fitch, a rating agency, is among those who see this as the fallout from irresponsible shadow-banking which is being channelled
into property development, creating a bubble. The second concern is that the “credit impulse” to growth has diminished, meaning
that more and more credit is needed to generate the same amount of GDP, which reduces prospects for future deleveraging.¶
Linking these two concerns is the price of land including related mark-ups levied by officials and developers. But its significance is
not well understood because China’s property market emerged only in the late 1990s, when the decision was made to privatise
housing. A functioning resale market only began to form around the middle of the last decade. That is why the large stimulus
programme in response to the Asia financial crisis more than a decade ago did not manifest itself in a property price surge, whereas
the 2008-9 stimulus did.¶ Over the past decade, no other factor has been as important as rising property values in influencing
growth patterns and perceptions of financial risks. The weakening impact of credit on growth is largely explained by the divergence
between fixed asset investment (FAI) and gross fixed capital formation (GFCF). Both are measures of investment. FAI measures
investment in physical assets including land while GFCF measures investment in new equipment and structures, excluding the value
of land and existing assets. This latter feeds directly into GDP, while only a portion of FAI shows up in GDP accounts.¶ Until recently,
the difference between the two measures did not matter in interpreting economic trends: both were increasing at the same rate
and reached about 35 per cent of GDP by 2002-03. Since then, however, they have diverged and GFCF now stands at 45 per cent of
GDP while the share of FAI has jumped to 70 per cent.¶ Overall credit levels have increased in line with the rapid growth in FAI rather
than the more modest growth in GFCF. Most of the difference between the ratios is explained by rising asset prices. Thus a large
share of the surge in credit is financing property related transactions which explains why the growth impact of credit has declined.¶
Is the increase in property and underlying land prices sustainable, or is it a bubble? Part of the
explanation is unique to China. Land in China is an asset whose market value went largely unrecognised when it was totally
controlled by the State. Once a private property market was created, the process of discovering land’s intrinsic value began, but
establishing such values takes time in a rapidly changing economy.¶ The Wharton/NUS/Tsinghua Land Price
Index indicates
that from 2004-2012, land prices have increased approximately fourfold nationally, with more dramatic
increases in major cities such as Beijing balanced by modest rises in secondary cities. Although this may seem excessive, such growth
rates are similar to what happened in Russia after it privatised its housing stock. Once the economy stabilised, housing prices in
Moscow increased six fold in just six years.¶ Could
investors have overshot the mark in China? Possibly, but
the land values should be high given China’s large population, its shortage of plots that are suitable for
construction and its rapid economic growth. Nationally, the ratio of incomes to housing prices has improved and is now comparable
to the levels found in Australia, Taiwan and the UK. In Beijing and Shanghai prices are similar to or lower than Delhi, Singapore and
Hong Kong.¶ Much of
the recent surge in the credit to GDP ratio is actually evidence of financial
deepening rather than financial instability as China moves toward more market-based asset
values. If so, the higher credit ratios are fully consistent with the less alarming impressions that
come from scrutiny of sector specific financial indicators.
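Analytic note: the FAI/GFCF reasoning above is simple arithmetic and can be sanity-checked in a few lines. The Python sketch below is ours, not the author’s; it uses the card’s round numbers (FAI around 70 percent of GDP, GFCF around 45 percent) and the normalization of GDP to 100 is an assumption for illustration only.

# Illustrative check of the Huang card's credit arithmetic (assumed round numbers).
gdp = 100.0          # normalize GDP to 100 so the card's percentages apply directly
fai = 0.70 * gdp     # fixed-asset investment, which includes land and existing assets
gfcf = 0.45 * gdp    # gross fixed capital formation, which feeds directly into GDP

# If credit grows in line with FAI while only GFCF generates output, the gap
# is credit financing land and existing-asset transactions rather than new GDP.
asset_share = (fai - gfcf) / fai
print(f"Share of investment credit tied to land/existing assets: {asset_share:.0%}")
# Prints roughly 36%, which is why credit-to-GDP ratios climb and the measured
# "credit impulse" weakens without implying the underlying borrowers are unsound.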
2NC AT Stocks
China’s stock market is loosely tied to its economy—structural factors are fine
and stock declines don’t accurately reflect growth
Rapoza 7/9
(Kenneth Rapoza. Contributing Editor at Forbes. "Don't Mistake China's Stock Market For China's Economy," Forbes. 7-9-2015.
http://www.forbes.com/sites/kenrapoza/2015/07/09/dont-mistake-chinas-stock-market-for-chinas-economy///ghs-kw)
China’s A-share market is rebounding, but whether or not it has hit bottom is beside the point. What
matters is this: the equity market in China is a more or less a gambling den dominated by retail
investors who make their investment decisions based on what they read in investor
newsletters. It’s a herd mentality. And more importantly, their trading habits do not reflect
economic fundamentals. “The country’s stock market plays a smaller role in its economy than
the U.S. stock market does in ours, and has fewer linkages to the rest of the economy,” says Bill
Adams, PNC Financial’s senior international economist in Pittsburgh. The fact that the two are unhinged limits the
potential for China’s equity correction — or a bubble — to trigger widespread economic
distress. The recent 25% decline in the Deutsche X-Trackers China A-Shares (ASHR) fund, partially
recuperated on Thursday, is not a signal of an impending Chinese recession. PNC’s baseline forecast
for Chinese real GDP growth in 2015 remains unchanged at 6.8% despite the correction, a
correction which has been heralded by the bears as the beginning of the end for China’s capitalist experiment. China’s
economy, like its market, is transforming. China is moving away from being a low-cost producer and
exporter, to becoming a consumer driven society. It wants to professionalize its financial
services sector, and build a green-tech economy to help eliminate its pollution problems. It’s
slowly opening its capital account and taking steps to reforming its financial markets. There will be
errors and surprises, and anyone who thinks otherwise will be disappointed. Over the last four weeks, the Chinese
government misplayed its hand when it decided to use tools for the economy — mainly an
interest rate reduction and reserve ratio requirement cuts for banks in an effort to provide the market with
more liquidity. It worked for a little while, and recent moves to change rules on margin, and even utilize a circuit-breaker mechanism
to temporarily delist fast-tanking companies from the mainland stock market, might have worked if the Greece crisis didn’t pull the
plug on global risk. The timing was terrible. And it pushed people into panic selling, turning China into the
biggest financial market headline this side of Athens. For better or for worse, Beijing now has no choice but to go all-in to defend
equities, some investors told FORBES. But China’s
real economy is doing much better than the Shanghai
and Shenzhen exchanges suggest. According to China Beige Book, the Chinese economy actually
recovered last quarter. Markets are focusing on equities and PMI indicators from the state and
HSBC as a gauge, but it should become clear in the coming weeks that China’s stock market is
not a reflection of the fundamentals. The Good, The Bad and the Ugly To get a more detailed picture of what is driving
China’s growth slowdown, it is necessary to look at a broader array of economic and financial indicators. The epicenter of China’s
problems are the industrial and property sectors. Shares of the Shanghai Construction Group, one of the largest developers listed on
the Shanghai stock exchange, is down 42.6% in the past four weeks, two times worse than the Shanghai Composite Index. China
Railway Group is down 33%, also an underperformer. Growth in real industrial output has declined from 14% in mid-2011 to 5.9% in
April, growth in fixed-asset investment declined 50% over the same period and electricity consumption by primary and secondary
industries is in decline. China’s trade with the outside world is also falling, though this data does not always match up with other
countries’ trade figures. Real estate is in decline as Beijing has put the brakes on its housing bubble. Only the east coast cities are still
seeing price increases, but construction is not booming in Shanghai anymore. The two main components of that have prevented a
deeper downturn in activity are private spending on services, particularly financial services, and government-led increases in
transportation infrastructure like road and rail. Retail sales, especially e-commerce sales that have benefited the likes of Alibaba and
Tencent, both of which have outperformed the index, have been growing faster than the overall economy. Electricity consumption
in the services sector is expanding strongly. Growth in household incomes is outpacing GDP growth. “China has begun the necessary
rebalancing towards a more sustainable, consumption-led growth model,” says Jeremy Lawson, chief economist at Standard Life
Investments in the U.K. He warns that “it’s still too early to claim success.” Since 2011, developed markets led by the S&P 500 have
performed better than China, but for one reason and one reason only: The central banks of Europe, the U.K., Japan and of course
the U.S. have bought up assets in unprecedented volumes using printed money, or outright buying securities like the Fed’s purchase
of bonds and mortgage backed securities. Why bemoan China’s state intervention when central bank intervention has been what
kept southern Europe afloat, and the U.S. stock market on fire since March 2009? Companies
in China are still making
money. “I think people have no clue on China,” says Jan Dehn, head of research at Ashmore in London, a $70 billion
emerging market fund manager with money at work in mainland China securities. “They don’t see the big picture. And
they forget it is still an emerging market. The Chinese make mistakes and will continue to make
mistakes like all governments. However, they will learn from their mistakes. The magnitude of
most problems are not such that they lead to systematic meltdown. Each time the market
freaks out, value — often deep value — starts to emerge. Long term, these volatile episodes
are mere blips. They will not change the course of internationalization and maturing of the market,” Dehn told FORBES.
China is still building markets. It has a large environmental problem that will bode well for green tech firms like BYD. Its
middle class is not shrinking. Its billionaires are growing in numbers. They are reforming all the
time. And in the long term, China is going to win. Markets are impatient and love a good drama. But investing is not a soap opera.
It’s not Keeping up with the Kardashians you’re buying, you’re buying the world’s No. 2 economy, the biggest commodity consumer
in the world, and home to 1.4 billion people, many of which have been steadily earning more than ever. China’s transition will cause
temporary weakness in growth and volatility, maybe even crazy volatility. But you have to break eggs to make an omelette, says
Dehn. Why
The Stock Market Correction Won’t Hurt China The Chinese equity correction is
healthy and unlikely to have major adverse real economy consequences for several reasons:
First, China’s A-shares are still up 79% over the past 12 months. A reversal of fortunes was a
shoo-in to occur. Second, Chinese banks are basically not involved in providing leverage and
show no signs of stress. The total leverage in Chinese financial markets is about four trillion yuan
($600 billion). Stock market leverage is concentrated in the informal sector – with trust funds and brokerages accounting for a
little over half of the leverage. Margin financing via brokerages is down from 2.4 trillion yuan to 1.9 trillion yuan and let’s not forget
that Chinese GDP is about 70 trillion yuan. Third,
there is very little evidence that the moves in the stock
market will have a major impact on the real economy and consumption via portfolio loss. Stocks
comprise only 15% of total wealth. Official sector institutions are large holders of stocks and
their spending is under control of the government. As for the retail investor, they spend far less of
their wealth than other countries. China has a 49% savings rate. Even if they lost half of it, they
would be saving more than Americans, the highly indebted consumer society the world loves to
love. During the rally over the past twelve months, the stock market bubble did not trigger a boost in
consumption indicating that higher equity gains didn’t impact spending habits too much. The
Chinese stock market is only 5% of total social financing in China. Stock markets only finance 2%
of Chinese fixed asset investment. Only 1% of company loans have been put up with stocks as
collateral, so the impact on corporate activity is going to be limited. “The rapid rally and the violent
correction illustrate the challenges of capital account liberalization, the need for a long-term institutional investor base, index
inclusion and deeper financial markets, including foreign institutional investors,” Dehn says. The A-shares correction is likely to
encourage deeper financial reforms, not a reversal.
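Analytic note: the containment claim in this card comes down to a few ratios. The Python sketch below is ours and only rechecks the card’s own figures (4 trillion yuan of total leverage, 1.9 trillion of brokerage margin, roughly 70 trillion yuan of GDP); nothing in it is new data.

# Quick arithmetic check of the Rapoza card's leverage figures (all from the card).
total_leverage_tn = 4.0   # total stock-market leverage, trillion yuan
broker_margin_tn = 1.9    # margin financing via brokerages, trillion yuan
gdp_tn = 70.0             # "Chinese GDP is about 70 trillion yuan"

print(f"Total leverage as a share of GDP: {total_leverage_tn / gdp_tn:.1%}")   # ~5.7%
print(f"Brokerage margin as a share of GDP: {broker_margin_tn / gdp_tn:.1%}")  # ~2.7%
# With stocks at ~15% of household wealth, ~5% of total social financing, and a
# 49% savings rate, even a steep equity drawdown is a small balance-sheet hit,
# which is the card's reason the correction stays contained.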
Plan Flaw
1NCs
1NC CT
Counterplan text: The United States federal government should neither
mandate the creation of surveillance backdoors in products nor request
encryption keys, and should terminate current backdoors created either by
government mandate or by government-requested keys.
Three arguments here:
1. A. Mandate means “to make required”
Merriam-Webster’s Dictionary of Law 96
(Merriam-Webster’s Dictionary of Law, 1996, http://dictionary.findlaw.com/definition/mandate.html//ghs-kw)
mandate n [Latin mandatum, from neuter of mandatus, past participle of mandare to entrust, enjoin, probably
irregularly from manus hand + -dere to put] 1 a : a formal communication from a reviewing court notifying the court
below of its judgment and directing the lower court to act accordingly b : mandamus 2 in the civil law of Louisiana : an act
by which a person gives another person the power to transact for him or her one or several affairs 3 a : an authoritative
command : a clear authorization or direction [the mandate of the full faith and credit clause “National Law Journal”] b : the
authorization to act given by a constituency to its elected representative vt man·dat·ed man·dat·ing : to make
mandatory or required [the Pennsylvania Constitution mandates a criminal defendant’s right to confrontation “National
Law Journal”]
B. Circumvention: companies including those under PRISM agree to
provide data because the government pays them
Timberg and Gellman 13
(Timberg, Craig and Gellman, Barton. Reporters for the Washington Post, citing government budgets and internal
documents. “NSA paying U.S. companies for access to communications networks,” Washington Post. 8/29/2013.
https://www.washingtonpost.com/world/national-security/nsa-paying-us-companies-for-access-to-communicationsnetworks/2013/08/29/5641a4b6-10c2-11e3-bdf6-e4fc677d94a1_story.html//ghs-kw)
The National Security Agency is paying hundreds of millions of dollars a year to U.S.
companies for clandestine access to their communications networks, filtering vast traffic flows for
foreign targets in a process that also sweeps in large volumes of American telephone calls, e-mails and instant messages. The bulk of
the spending, detailed in a multi-volume intelligence budget obtained by The Washington Post, goes to participants in a
Corporate Partner Access Project for major U.S. telecommunications providers. The documents open an important
window into surveillance operations on U.S. territory that have been the subject of debate since they were revealed by The Post and
Britain’s Guardian newspaper in June. New details of the corporate-partner project, which falls under the NSA’s Special Source Operations,
confirm that the
agency taps into “high volume circuit and packet-switched networks,”
according to the spending blueprint for fiscal 2013. The program was expected to cost $278 million in the current fiscal year, down nearly
one-third from its peak of $394 million in 2011. Voluntary cooperation from the “backbone” providers of global communications dates to
the 1970s under the cover name BLARNEY, according to documents provided by former NSA contractor Edward Snowden. These
relationships long predate the PRISM program disclosed in June, under which American technology companies hand over customer data
after receiving orders from the Foreign Intelligence Surveillance Court. In briefing slides, the NSA described BLARNEY and three other
corporate projects — OAKSTAR, FAIRVIEW and STORMBREW — under the heading of “passive” or “upstream” collection. They capture
data as they move across fiber-optic cables and the gateways that direct global communications traffic.
The documents offer a rare view of a secret surveillance economy in which government
officials set financial terms for programs capable of peering into the lives of almost anyone who uses a phone, computer or other device
connected to the Internet. Although the companies are required to comply with lawful surveillance orders, privacy advocates say the
multimillion-dollar payments could create a profit motive to offer more than the
required assistance. “It turns surveillance into a revenue stream, and that’s not the way it’s
supposed to work,” said Marc Rotenberg, executive director of the Electronic Privacy Information Center, a Washington-based research
and advocacy group. “The fact that the government is paying money to telephone companies to turn over information that they are
compelled to turn over is very troubling.” Verizon, AT&T and other major telecommunications companies declined to comment for this
article, although several industry officials noted that government surveillance laws explicitly call for companies to receive reasonable
reimbursement for their costs. Previous news reports have made clear that companies
frequently seek such
payments, but never before has their overall scale been disclosed. The budget documents do not list individual companies, although
they do break down spending among several NSA programs, listed by their code names. There is no record in the documents obtained by
The Post of money set aside to pay technology companies that provide information to the NSA’s PRISM program. That program is the
source of 91 percent of the 250 million Internet communications collected through Section 702 of the FISA Amendments Act, which
authorizes PRISM and the upstream programs, according to a 2011 opinion and order by the Foreign Intelligence Surveillance Court.
Several of the companies
that provide information to PRISM, including Apple, Facebook and Google, say they
take no payments from the government when they comply with national security requests. Others say they do take
payments in some circumstances. The Guardian reported last week that the NSA had covered “millions of
dollars” in costs that some technology companies incurred to comply with government demands for information.
Telecommunications companies generally do charge to comply with surveillance requests, which come from state, local
and federal law enforcement officials as well as intelligence agencies. Former telecommunications executive Paul Kouroupas, a security
officer who worked at Global Crossing for 12 years, said that some companies
welcome the revenue and enter
into contracts in which the government makes higher payments than otherwise available to firms
receiving reimbursement for complying with surveillance orders. These contractual payments, he said, could cover
the cost of buying and installing new equipment, along with a reasonable profit. These voluntary agreements
simplify the government’s access to surveillance, he said. “It certainly lubricates the
[surveillance] infrastructure,” Kouroupas said. He declined to say whether Global Crossing, which operated a fiber-optic
network spanning several continents and was bought by Level 3 Communications in 2011, had such a contract. A spokesman for Level 3
Communications declined to comment.
2. Plan flaw: the plan mandates that we stop surveilling through backdoors, request
public encryption keys, and close existing backdoors—that guts solvency
because the government can still create new backdoors with those encryption keys
3. Presumption: we don’t mandate back doors in the status quo, all their ev
is in the context of a bill that would require backdoors in the future, so
the AFF does nothing
1NC KQ
The Secure Data Act of 2015 states that no agency may mandate backdoors
Secure Data Act of 2015
(Wyden, Ron. Senator, D-OR. S. 135, known as the Secure Data Act of 2015, introduced in Congress 1/8/2015.
https://www.congress.gov/bill/114th-congress/senate-bill/135/text//ghs-kw)
SEC. 2. PROHIBITION ON DATA SECURITY VULNERABILITY MANDATES. (a) In General.—Except as provided in subsection (b), no
agency may mandate that a manufacturer, developer, or seller of covered products design or
alter the security functions in its product or service to allow the surveillance of any user of such
product or service, or to allow the physical search of such product, by any agency.
Mandate means “to make required”
Merriam-Webster’s Dictionary of Law 96
(Merriam-Webster’s Dictionary of Law, 1996, http://dictionary.findlaw.com/definition/mandate.html//ghs-kw)
mandate n [Latin mandatum , from neuter of mandatus , past participle of mandare to entrust, enjoin, probably irregularly from
manus hand + -dere to put] 1 a : a formal communication from a reviewing court notifying the court below of its judgment and
directing the lower court to act accordingly b : mandamus 2 in the civil law of Louisiana : an act by which a person gives another
person the power to transact for him or her one or several affairs 3 a : an authoritative command : a clear authorization or direction
[the ~ of the full faith and credit clause "National Law Journal"] b : the authorization to act given by a constituency to its elected
representative vt man·dat·ed man·dat·ing : to make mandatory or required [the Pennsylvania Constitution ~s a criminal
defendant's right to confrontation "National Law Journal"]
Circumvention: companies including those under PRISM agree to provide data
because the government pays them
Timberg and Gellman 13
(Timberg, Craig and Gellman, Barton. Reporters for the Washington Post, citing government budgets and internal documents.
“NSA paying U.S. companies for access to communications networks,” Washington Post. 8/29/2013.
https://www.washingtonpost.com/world/national-security/nsa-paying-us-companies-for-access-to-communicationsnetworks/2013/08/29/5641a4b6-10c2-11e3-bdf6-e4fc677d94a1_story.html//ghs-kw)
The National Security Agency is paying hundreds of millions of dollars a year to U.S. companies
for clandestine access to their communications networks, filtering vast traffic flows for foreign targets in a process that
also sweeps in large volumes of American telephone calls, e-mails and instant messages. The bulk of the spending, detailed in a multi-volume intelligence budget obtained by The Washington Post, goes to participants in a Corporate Partner Access Project for major U.S.
telecommunications providers. The documents open an important window into surveillance operations on U.S. territory that have
been the subject of debate since they were revealed by The Post and Britain’s Guardian newspaper in June. New details of the corporate-partner
project, which falls under the NSA’s Special Source Operations, confirm that the
agency taps into “high volume circuit and
packet-switched networks,” according to the spending blueprint for fiscal 2013. The program was expected to cost $278 million in the
current fiscal year, down nearly one-third from its peak of $394 million in 2011. Voluntary cooperation from the “backbone” providers of global
communications dates to the 1970s under the cover name BLARNEY, according to documents provided by former NSA contractor Edward Snowden.
These relationships long predate the PRISM program disclosed in June, under which American technology companies hand over customer data after
receiving orders from the Foreign Intelligence Surveillance Court. In briefing slides, the NSA described BLARNEY and three other corporate projects —
OAKSTAR, FAIRVIEW and STORMBREW — under the heading of “passive” or “upstream” collection. They capture data as they move across fiber-optic
cables and the gateways that direct global communications traffic. The documents offer a
rare view of a secret surveillance economy in which government officials set financial terms for programs capable of peering into the lives of almost
anyone who uses a phone, computer or other device connected to the Internet. Although the companies are required to comply with lawful
surveillance orders, privacy advocates say the multimillion-dollar
payments could create a profit motive to offer
more than the required assistance. “It turns surveillance into a revenue stream, and that’s not the way
it’s supposed to work,” said Marc Rotenberg, executive director of the Electronic Privacy Information Center, a Washington-based research and
advocacy group. “The fact that the government is paying money to telephone companies to turn over information that they are compelled to turn over
is very troubling.” Verizon, AT&T and other major telecommunications companies declined to comment for this article, although several industry
officials noted that government surveillance laws explicitly call for companies to receive reasonable reimbursement for their costs. Previous news
reports have made clear that companies
frequently seek such payments, but never before has their overall scale been
disclosed. The budget documents do not list individual companies, although they do break down spending among several NSA programs, listed by their
code names. There is no record in the documents obtained by The Post of money set aside to pay technology companies that provide information to
the NSA’s PRISM program. That program is the source of 91 percent of the 250 million Internet communications collected through Section 702 of the
FISA Amendments Act, which authorizes PRISM and the upstream programs, according to a 2011 opinion and order by the Foreign Intelligence
Surveillance Court. Several of the companies
that provide information to PRISM, including Apple, Facebook and Google, say
they take no payments from the government when they comply with national security requests. Others say they do take payments in
some circumstances. The Guardian reported last week that the NSA had covered “millions of dollars” in costs that some
technology companies incurred to comply with government demands for information. Telecommunications companies generally do
charge to comply with surveillance requests, which come from state, local and federal law enforcement officials as well as intelligence agencies. Former
telecommunications executive Paul Kouroupas, a security officer who worked at Global Crossing for 12 years, said that some companies
welcome the revenue and enter into contracts in which the government makes higher
payments than otherwise available to firms receiving reimbursement for complying with surveillance orders. These contractual
payments, he said, could cover the cost of buying and installing new equipment, along with a reasonable profit. These
voluntary agreements simplify the government's access to surveillance, he said. "It certainly
lubricates the [surveillance] infrastructure," Kouroupas said. He declined to say whether Global Crossing, which operated a
fiber-optic network spanning several continents and was bought by Level 3 Communications in 2011, had such a contract. A spokesman for Level 3
Communications declined to comment.
2NC
2NC Mandate
Mandate is an order or requirement
The People's Law Dictionary 02
(Hill, Gerald and Kathleen. Gerald Hill holds a J.D. from Hastings College of the Law of the University of California. He was
Executive Director of the California Governor's Housing Commission, has drafted legislation, taught at Golden Gate University Law
School, served as an arbitrator and pro tem judge, edited and co-authored Housing in California, was an elected trustee of a
public hospital, and has testified before Congressional committees. Kathleen Hill holds an M.A. in political psychology from
California State University, Sonoma. She was also a Fellow in Public Affairs with the prestigious Coro Foundation, earned a
Certificat from the Sorbonne in Paris, France, headed the Peace Corps Speakers' Bureau in Washington, D.C., worked in the White
House for President Kennedy, and was Executive Coordinator of the 25th Anniversary of the United Nations. Kathleen has served
on a Grand Jury, chaired two city commissions and has developed programs for the Institute of Governmental Studies of the
University of California. The People’s Law Dictionary, 2002. http://dictionary.law.com/Default.aspx?selected=1204//ghs-kw)
mandate n. 1) any mandatory order or requirement under statute, regulation, or by a public
agency. 2) order of an appeals court to a lower court (usually the original trial court in the case) to comply with an appeals court's
ruling, such as holding a new trial, dismissing the case or releasing a prisoner whose conviction has been overturned. 3) same as the
writ of mandamus, which orders a public official or public body to comply with the law.
2NC Circumvention
NSA enters into mutually agreed upon contracts for back doors
Reuters 13
(Menn, Joseph. “Exclusive: Secret contract tied NSA and security industry pioneer,” Reuters. 12/20/2013.
http://www.reuters.com/article/2013/12/21/us-usa-security-rsa-idUSBRE9BJ1C220131221//ghs-kw)
As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the
U.S.
National Security Agency arranged a secret $10 million contract with RSA, one of the most
influential firms in the computer security industry, Reuters has learned. Documents leaked by former NSA
contractor Edward Snowden show that the NSA created and promulgated a flawed formula for
generating random numbers to create a "back door" in encryption products, the New York Times
reported in September. Reuters later reported that RSA became the most important distributor of that
formula by rolling it into a software tool called Bsafe that is used to enhance security in personal computers and
many other products. Undisclosed until now was that RSA received $10 million in a deal that set the
NSA formula as the preferred, or default, method for number generation in the BSafe software,
according to two sources familiar with the contract. Although that sum might seem paltry, it represented more than a
third of the revenue that the relevant division at RSA had taken in during the entire previous
year, securities filings show. The earlier disclosures of RSA's entanglement with the NSA already had shocked some in the close-knit world of computer security experts. The company had a long history of championing privacy and security,
and it played a leading role in blocking a 1990s effort by the NSA to require a special chip to
enable spying on a wide range of computer and communications products. RSA, now a subsidiary of
computer storage giant EMC Corp, urged customers to stop using the NSA formula after the Snowden disclosures revealed its
weakness. RSA and EMC declined to answer questions for this story, but RSA said in a statement: "RSA always acts in the best
interest of its customers and under no circumstances does RSA design or enable any back doors in our products. Decisions about the
features and functionality of RSA products are our own." The NSA declined to comment. The RSA deal shows one way the NSA
carried out what Snowden's documents describe as a key strategy for enhancing surveillance: the systematic erosion of security
tools. NSA documents released in recent months called for using "commercial relationships" to advance that
goal, but did not name any security companies as collaborators. The NSA came under attack this week in a landmark report from a
White House panel appointed to review U.S. surveillance policy. The panel noted that "encryption is an essential basis for trust on
the Internet," and called for a halt to any NSA efforts to undermine it. Most of the dozen current and former RSA employees
interviewed said that the company erred in agreeing to such a contract, and many cited RSA's corporate evolution away
from pure cryptography products as one of the reasons it occurred.
Case
Cyber Adv
Notes
30-second explainer: cyberattacks are inevitable, backdoors can be exploited by cyberattackers and
the grid is vulnerable, grid attacks mean grid collapse, which kills the econ (that's the Harris and
Burrows impact ev), means nuclear power plant meltdown and extinction through radioactive
contamination, and also causes nuclear retaliation b/c hackers hack nuke terminals
CX Questions
Macri says China and "one or two others" had already broken into the U.S. power
grid, and that "criminal groups, which are often Russian-speaking, have already been using state-developed
cyber tools," so what makes the next cyberattack different? Macri is also post-Snowden,
so the ev assumes backdoors.
Backdoors apply to commercial products—where’s the ev that grids are affected?
NYT ev says “the United States has a first-rate cyberoffense capacity…The Wall Street Journal
reported in April 2009 that the United States’ electrical grid had been penetrated by cyberspies
(reportedly from China, Russia and other countries), who left behind software that could be
used to sabotage the system in the future,” why are status quo measures insufficient?
Cappiello is in the context of Japan and the grid being knocked out by "an earthquake or
tornado," but cyberattacks don't destroy physical infrastructure the way natural disasters do, so why
can't we restore power to prevent meltdowns, and why don't back-up systems solve?
Fritz is in the context of cyber terrorists hacking into nuclear control centers and terminals—
those don’t use commercial tech which is what backdoors apply to, what’s the internal link
story?
1NC No Solvency
No internal link—absent ev that the NSA puts backdoors in electric grids you
should give them 0% risk because they don’t have an advantage
NYT 13
(Editorial Board. "Close the N.S.A.’s Back Doors," New York Times. 9-21-2013.
http://www.nytimes.com/2013/09/22/opinion/sunday/close-the-nsas-backdoors.html//ghs-kw)
In 2006, a federal agency, the
National Institute of Standards and Technology, helped build an
international encryption system to help countries and industries fend off computer hacking and
theft. Unbeknown to the many users of the system, a different government arm, the National Security Agency,
secretly inserted a “back door” into the system that allowed federal spies to crack open any data
that was encoded using its technology. Documents leaked by Edward Snowden, the former N.S.A. contractor, make clear that the
agency has never met an encryption system that it has not tried to penetrate. And it frequently tries to take the easy way out.
Because modern cryptography can be so hard to break, even using the brute force of the agency’s powerful supercomputers, the
agency prefers to collaborate with big software companies and cipher authors, getting hidden
access built right into their systems. The New York Times, The Guardian and ProPublica recently reported that the
agency now has access to the codes that protect commerce and banking systems, trade
secrets and medical records, and everyone’s e-mail and Internet chat messages, including
virtual private networks. In some cases, the agency pressured companies to give it access; as The Guardian reported
earlier this year, Microsoft provided access to Hotmail, Outlook.com, SkyDrive and Skype. According to
some of the Snowden documents given to Der Spiegel, the N.S.A. also has access to the encryption protecting
data on iPhones, Android and BlackBerry phones.
Secure Data Act doesn’t solve the grid—here’s the text
Secure Data Act of 2015
(Wyden, Ron. Senator, D-OR. S. 135, known as the Secure Data Act of 2015, introduced in Congress 1/8/2015.
https://www.congress.gov/bill/114th-congress/senate-bill/135/text//ghs-kw)
(a) In General.—Except as provided in subsection (b), no
agency may mandate that a manufacturer, developer,
or seller of covered products design or alter the security functions in its product or service to
allow the surveillance of any user of such product or service, or to allow the physical search of
such product, by any agency. (b) Exception.—Subsection (a) shall not apply to mandates authorized under the
Communications Assistance for Law Enforcement Act (47 U.S.C. 1001 et seq.). (c) Definitions.—In this section— (1) the term
“agency” has the meaning given the term in section 3502 of title 44, United States Code; and (2) the
term “covered
product” means any computer hardware, computer software, or electronic device that is
made available to the general public.
1NC No Cyber
Their evidence concedes that there’s no impact to a cyberattack – empirically
proven
Macri, 14
(Giuseppe, staff writer for the Daily Caller, citing NSA head Michael Rogers, “NSA Chief: US Will
Suffer A Catastrophic Cyberattack In The Next Ten Years,”
http://dailycaller.com/2014/11/21/nsa-chief-us-will-suffer-a-catastrophic-cyberattack-in-thenext-ten-years/, BC)
National Security Agency and U.S. Cyber Command head Adm. Michael Rogers warned lawmakers during a congressional briefing this week that the U.S. would suffer a severe cyberattack against critical infrastructure like power or fuel grids in the not-too-distant future.¶
"I fully expect that during my time as a commander, we are going to be tasked with defending critical infrastructure in the United States," Rogers said while citing findings from an October Pew Research Center report. "It's only a matter of the when, not the if, that we're going to see something dramatic… I bet it happens before 2025."¶
Rogers told the House Intelligence Committee Thursday he expected the attack to occur during his tenure as head of the NSA and the U.S. military's cyber-war branch, and that it would likely come from state-sponsored hackers with ties to China, Russia or several other countries, many of whom have already successfully breached the systems of critical U.S. industries.¶
"There are multiple nation-states that have the capability and have been on the systems," Rogers told the committee, adding that many were engaged in "reconnaissance" activities to surveil "specific schematics of most of our control systems."¶
"There shouldn't be any doubt in our minds that there are nation-states and groups out there that have the capability… to shut down, forestall our ability to operate our basic infrastructure, whether it's generating power across this nation, whether it's moving water and fuel," Rogers said, warning China and "one or two others" had already broken into the U.S. power grid.¶
Rogers also predicted that in the coming years, cyber criminals previously engaged in stealing bank, credit card and other financial data would start to be co-opted by nation-states to act as "surrogates," obscuring countries' fingerprints in the infiltration and theft of information valuable to planning attacks.¶
The admiral added that such criminal groups, which are often Russian-speaking, have already been using state-developed cyber tools.
Their impacts are all hype—no cyberattack
Walt 10 – Stephen M. Walt is the Robert and Renée Belfer Professor of international
relations at Harvard University. "Is the cyber threat overblown?" March 30, 2010.
walt.foreignpolicy.com/posts/2010/03/30/is_the_cyber_threat_overblown
Am I the only person -- well, besides Glenn Greenwald and Kevin Poulson -- who thinks the "cyber-warfare" business may be overblown? It's clear the U.S. national security establishment is paying a lot more attention to the issue, and colleagues of mine -- including some pretty serious and level-headed people -- are increasingly worried by the danger of some sort of "cyber-Katrina." I don't dismiss it entirely, but this sure looks to me like a classic opportunity for threat-inflation.¶
Mind you, I'm not saying that there aren't a lot of shenanigans going on in cyber-space, or that various forms of cyber-warfare don't have military potential. So I'm not arguing for complete head-in-the-sand complacency. But here's what makes me worry that the threat is being overstated.¶
First, the whole issue is highly esoteric -- you really need to know a great deal about computer networks, software, encryption, etc., to know how serious the danger might be. Unfortunately, details about a number of the alleged incidents that are being invoked to demonstrate the risk of a "cyber-Katrina," or a cyber-9/11, remain classified, which makes it hard for us lay-persons to gauge just how serious the problem really was or is. Moreover, even when we hear about computers being penetrated by hackers, or parts of the internet crashing, etc., it's hard to know how much valuable information was stolen or how much actual damage was done. And as with other specialized areas of technology and/or military affairs, a lot of the experts have a clear vested interest in hyping the threat, so as to create greater demand for their services. Plus, we already seem to have politicians leaping on the issue as a way to grab some pork for their states.¶
Second, there are lots of different problems being lumped under a single banner, whether the label is "cyber-terror" or "cyber-war." One issue is the use of various computer tools to degrade an enemy's military capabilities (e.g., by disrupting communications nets, spoofing sensors, etc.). A second issue is the alleged threat that bad guys would penetrate computer networks and shut down power grids, air traffic control, traffic lights, and other important elements of infrastructure, the way that internet terrorists (led by a disgruntled computer expert) did in the movie Live Free and Die Hard. A third problem is web-based criminal activity, including identity theft or simple fraud (e.g., those emails we all get from someone in Nigeria announcing that they have millions to give us once we send them some account information). A fourth potential threat is "cyber-espionage"; i.e., clever foreign hackers penetrate Pentagon or defense contractors' computers and download valuable classified information. And then there are annoying activities like viruses, denial-of-service attacks, and other things that affect the stability of web-based activities and disrupt commerce (and my ability to send posts into FP).¶
This sounds like a rich menu of potential trouble, and putting the phrase "cyber" in front of almost any noun makes it sound trendy and a bit more frightening. But notice too that these are all somewhat different problems of quite different importance, and the appropriate response to each is likely to be different too. Some issues -- such as the danger of cyber-espionage -- may not require elaborate technical fixes but simply more rigorous security procedures to isolate classified material from the web. Other problems may not require big federal programs to address, in part because both individuals and the private sector have incentives to protect themselves (e.g., via firewalls or by backing up critical data). And as Greenwald warns, there may be real costs to civil liberties if concerns about vague cyber dangers lead us to grant the NSA or some other government agency greater control over the Internet.¶
Third, this is another issue that cries out for some comparative cost-benefit analysis. Is the danger that some malign hacker crashes a power grid greater than the likelihood that a blizzard would do the same thing? Is the risk of cyber-espionage greater than the potential danger from more traditional forms of spying? Without a comparative assessment of different risks and the costs of mitigating each one, we will allocate resources on the basis of hype rather than analysis. In short, my fear is not that we won't take reasonable precautions against a potential set of dangers; my concern is that we will spend tens of billions of dollars protecting ourselves against a set of threats that are not as dangerous as we are currently being told they are.
Their own evidence concedes that a cyber attack is exaggerated and unlikely –
also empirically proven to have no impact
Reuters 15
(Carolyn Cohn, reporter, 7-8-15, “Cyber attack on U.S. power grid could cost economy $1 trillion:
report,” http://www.reuters.com/article/2015/07/08/us-cyberattack-power-surveyidUSKCN0PI0XS20150708, BC)
A cyber attack which shuts down parts of the United States' power grid could cost as much as $1 trillion to the U.S. economy, according to a report published on Wednesday.¶
Company executives are worried about security breaches, but recent surveys suggest they are not convinced about the value or effectiveness of cyber insurance.¶
The report from the University of Cambridge Centre for Risk Studies and the Lloyd's of London insurance market outlines a scenario of an electricity blackout that leaves 93 million people in New York City and Washington DC without power.¶
The scenario, developed by Cambridge, is technologically possible and is assessed to be within the once-in-200-year probability for which insurers should be prepared, the report said.¶
The hypothetical attack causes a rise in mortality rates as health and safety systems fail, a drop in trade as ports shut down and disruption to transport and infrastructure.¶
"The total impact to the U.S. economy is estimated at $243 billion, rising to more than $1 trillion in the most extreme version of the scenario," the report said. The losses come from damage to infrastructure and business supply chains, and are estimated over a five-year time period.¶
The extreme scenario is built on the greatest loss of power, with 100 generators taken offline, and would lead to insurance industry losses of more than $70 billion, the report added.¶
There have been 15 suspected cyber attacks on the U.S. electricity grid since 2000, the report said, citing U.S. energy department data.¶
The U.S. Industrial Control System Cyber Emergency Response Team said that 32 percent of its responses last year to cyber security threats to critical infrastructure occurred in the energy sector.¶
"The evidence of major attacks during 2014 suggests that attackers were often able to exploit vulnerabilities faster than defenders could remedy them," Tom Bolt, director of performance management at Lloyd's, said in the report.
2NC No Cyber
No impact to attack and alt causes to cyberattack vulnerability – their evidence
NYT 10
(Michiko Kakutani, Pulitzer Prize winning book reviewer, citing Richard Clarke, former National
Coordinator for Security, Infrastructure Protection, and Counter-terrorism for the United States,
4-27-10, http://www.nytimes.com/2010/04/27/books/27book.html?pagewanted=all, BC)
Blackouts hit New York, Los Angeles, Washington and more than 100 other American cities. Subways crash. Trains derail. Airplanes fall from the sky.¶
Gas pipelines explode. Chemical plants release clouds of toxic chlorine. Banks lose all their data. Weather and communication satellites spin out of their orbits. And the Pentagon's classified networks grind to a halt, blinding the greatest military power in the world.¶
This might sound like a takeoff on the 2007 Bruce Willis "Die Hard" movie, in which a group of cyberterrorists attempts to stage what it calls a "fire sale": a systematic shutdown of the nation's vital communication and utilities infrastructure. According to the former counterterrorism czar Richard A. Clarke, however, it's a scenario that could happen in real life — and it could all go down in 15 minutes. While the United States has a first-rate cyberoffense capacity, he says, its lack of a credible defense system, combined with the country's heavy reliance on technology, makes it highly susceptible to a devastating cyberattack.¶
"The United States is currently far more vulnerable to cyberwar than Russia or China," he writes. "The U.S. is more at risk from cyberwar than are minor states like North Korea. We may even be at risk some day from nations or non-state actors lacking cyberwar capabilities, but who can hire teams of highly capable hackers."¶
Lest this sound like the augury of an alarmist, the reader might recall that Mr. Clarke, counterterrorism chief in both the Bill Clinton and George W. Bush administrations, repeatedly warned his superiors about the need for an aggressive plan to combat al Qaeda — with only a pallid response before 9/11. He recounted this campaign in his controversial 2004 book, "Against All Enemies."¶
Once again, there is a lack of coordination between the various arms of the military and various committees in Congress over how to handle a potential attack. Once again, government agencies and private companies in charge of civilian infrastructure are ill prepared to handle a possible disaster.¶
In these pages Mr. Clarke uses his insider's knowledge of national security policy to create a harrowing — and persuasive — picture of the cyberthreat the United States faces today. Mr. Clarke is hardly a lone wolf on the subject: Mike McConnell, the former director of national intelligence, told a Senate committee in February that "if we were in a cyberwar today, the United States would lose."¶
And last November, Steven Chabinsky, deputy assistant director for the Federal Bureau of Investigation's cyber division, noted that the F.B.I. was looking into Qaeda sympathizers who want to develop their hacking skills and appear to want to target the United States' infrastructure.¶
Mr. Clarke — who wrote this book with Robert K. Knake, an international affairs fellow at the Council on Foreign Relations — argues that because the United States military relies so heavily upon databases and new technology, it is "highly vulnerable to cyberattack." And while the newly established Cyber Command, along with the Department of Homeland Security, is supposed to defend the federal government, he writes, "the rest of us are on our own":¶
"There is no federal agency that has the mission to defend the banking system, the transportation networks or the power grid from cyberattack." In fact, The Wall Street Journal reported in April 2009 that the United States' electrical grid had been penetrated by cyberspies (reportedly from China, Russia and other countries), who left behind software that could be used to sabotage the system in the future.
No cyber impact
Healey 3/20 Jason, Director of the Cyber Statecraft Initiative at the Atlantic Council, "No,
Cyberwarfare Isn't as Dangerous as Nuclear War", 2013,
www.usnews.com/opinion/blogs/world-report/2013/03/20/cyber-attacks-not-yet-anexistential-threat-to-the-us
America does not face an existential cyberthreat today, despite recent warnings. Our
cybervulnerabilities are undoubtedly grave and the threats we face are severe but far from comparable to
nuclear war. ¶ The most recent alarms come in a Defense Science Board report on how to make military cybersystems more
resilient against advanced threats (in short, Russia or China). It warned that the "cyber threat is serious, with potential consequences
similar in some ways to the nuclear threat of the Cold War." Such fears were also expressed by Adm. Mike Mullen, then chairman of
the Joint Chiefs of Staff, in 2011. He called cyber "The single biggest existential threat that's out there" because "cyber actually more
than theoretically, can attack our infrastructure, our financial systems."¶ While
it is true that cyber attacks might do
these things, it is also true they have not only never happened but are far more difficult to
accomplish than mainstream thinking believes. The consequences from cyber threats may be similar
in some ways to nuclear, as the Science Board concluded, but mostly, they are incredibly dissimilar. ¶ Eighty
years ago, the generals of the U.S. Army Air Corps were sure that their bombers would easily topple other countries and cause their
populations to panic, claims which did not stand up to reality. A
study of the 25-year history of cyber conflict, by
the Atlantic Council and Cyber Conflict Studies Association, has
shown a similar dynamic where the impact of
disruptive cyberattacks has been consistently overestimated. ¶ Rather than theorizing about future
cyberwars or extrapolating from today's concerns, the history of cyberconflicts that have actually been fought shows that cyber
incidents have so far tended to have effects that are either widespread but fleeting or persistent but narrowly focused. No
attacks, so far, have been both widespread and persistent. There have been no authenticated
cases of anyone dying from a cyber attack. Any widespread disruptions, even the 2007 disruption against
Estonia, have been short-lived causing no significant GDP loss. ¶ Moreover, as with conflict in other domains, cyberattacks
can take down many targets but keeping them down over time in the face of determined defenses has so far been out of the range
of all but the most dangerous adversaries such as Russia and China. Of course, if the United States is in a conflict with those nations,
cyber will be the least important of the existential threats policymakers should be worrying about. Plutonium
trumps bytes
in a shooting war.¶ This is not all good news. Policymakers have recognized the problems since at least 1998 with little
significant progress. Worse, the threats and vulnerabilities are getting steadily more worrying. Still, experts have been
warning of a cyber Pearl Harbor for 20 of the 70 years since the actual Pearl Harbor. ¶ The transfer
of U.S. trade secrets through Chinese cyber espionage could someday accumulate into an existential
threat. But it doesn't seem so just yet, with only handwaving estimates of annual losses of 0.1 to 0.5 percent
to the total U.S. GDP of around $15 trillion. That's bad, but it doesn't add up to an existential crisis or "economic
cyberwar."
No impact to cyberterror
Green 2 – editor of The Washington Monthly (Joshua, 11/11, The Myth of
Cyberterrorism, http://www.washingtonmonthly.com/features/2001/0211.green.html, AG)
There's just one problem: There
is no such thing as cyberterrorism--no instance of anyone ever
having been killed by a terrorist (or anyone else) using a computer. Nor is there compelling
evidence that al Qaeda or any other terrorist organization has resorted to computers for any sort of
serious destructive activity. What's more, outside of a Tom Clancy novel, computer security specialists believe it
is virtually impossible to use the Internet to inflict death on a large scale, and many scoff at the
notion that terrorists would bother trying. "I don't lie awake at night worrying about cyberattacks ruining my life," says Dorothy
Denning, a computer science professor at Georgetown University and one of the country's
foremost cybersecurity experts. "Not only does [cyberterrorism] not rank alongside chemical, biological, or
nuclear weapons, but it is not anywhere near as serious as other potential physical threats like car bombs
or suicide bombers." Which is not to say that cybersecurity isn't a serious problem--it's just not one that involves terrorists. Interviews
with terrorism and computer security experts, and current and former government and military officials, yielded near unanimous
agreement that the real danger is from the criminals and other hackers who did $15 billion in damage to the global economy last year
using viruses, worms, and other readily available tools. That figure is sure to balloon if more isn't done to protect vulnerable computer
systems, the vast majority of which are in the private sector. Yet when it comes to imposing the tough measures on business necessary
to protect against the real cyberthreats, the Bush administration has balked. Crushing BlackBerrys When ordinary
people imagine cyberterrorism, they
tend to think along Hollywood plot lines, doomsday scenarios in which
terrorists hijack nuclear weapons, airliners, or military computers from halfway around the world. Given the colorful
history of federal boondoggles--billion-dollar weapons systems that misfire, $600 toilet seats--that's an understandable concern. But,
with few exceptions, it's not one that applies to preparedness for a cyberattack. "The government is miles ahead of the private sector
when it comes to cybersecurity," says Michael Cheek, director of intelligence for iDefense, a Virginia-based computer security
company with government and private-sector clients. "Particularly the most sensitive military systems." Serious effort and plain good
fortune have combined to bring this about. Take nuclear weapons. The biggest fallacy about their vulnerability, promoted in action
thrillers like WarGames, is that they're designed for remote operation. "[The movie] is premised on the assumption that there's a
modem bank hanging on the side of the computer that controls the missiles," says Martin Libicki, a defense analyst at the RAND
Corporation. "I assure you, there isn't." Rather, nuclear weapons and other sensitive military systems enjoy the most basic form of
Internet security: they're "air-gapped," meaning that they're not physically connected to the Internet and are therefore inaccessible to
outside hackers. (Nuclear weapons also contain "permissive action links," mechanisms to prevent weapons from being armed without
inputting codes carried by the president.) A retired military official was somewhat indignant at the mere suggestion: "As a general
principle, we've been looking at this thing for 20 years. What cave have you been living in if you haven't considered this [threat]?"
When it comes to cyberthreats, the
Defense Department has been particularly vigilant to protect key
systems by isolating them from the Net and even from the Pentagon's internal network. All new software must be
submitted to the National Security Agency for security testing. "Terrorists could not gain control of our
spacecraft, nuclear weapons, or any other type of high-consequence asset," says Air Force Chief
Information Officer John Gilligan. For more than a year, Pentagon CIO John Stenbit has enforced a moratorium on new wireless
networks, which are often easy to hack into, as well as common wireless devices such as PDAs, BlackBerrys, and even wireless or
infrared copiers and faxes. The September 11 hijackings led to an outcry that airliners are particularly susceptible to cyberterrorism.
Earlier this year, for instance, Sen. Charles Schumer (D-N.Y.) described "the absolute havoc and devastation that would result if
cyberterrorists suddenly shut down our air traffic control system, with thousands of planes in mid-flight." In fact, cybersecurity experts
give some of their highest marks to the FAA, which reasonably separates its administrative and air traffic control systems and strictly
air-gaps the latter. And there's a reason the 9/11 hijackers used box-cutters instead of keyboards: It's
impossible to hijack a
plane remotely, which eliminates the possibility of a high-tech 9/11 scenario in which planes are
used as weapons. Another source of concern is terrorist infiltration of our intelligence agencies. But here, too, the risk is slim.
The CIA's classified computers are also air-gapped, as is the FBI's entire computer system. "They've
been paranoid about this forever," says Libicki, adding that paranoia is a sound governing principle when it comes to cybersecurity.
Such concerns are manifesting themselves in broader policy terms as well. One notable characteristic of last year's Quadrennial
Defense Review was how strongly it focused on protecting information systems.
Cyberattacks impossible – empirics and defenses solve
Rid ‘12 (Thomas Rid, reader in war studies at King's College London, is author of
"Cyber War Will Not Take Place" and co-author of "Cyber-Weapons.", March/April 2012,
“Think Again: Cyberwar”,
http://www.foreignpolicy.com/articles/2012/02/27/cyberwar?page=full)
"Cyberwar Is Already Upon Us." No way. "Cyberwar
is coming!" John Arquilla and David Ronfeldt predicted in a celebrated
Rand paper back in 1993. Since then, it seems to have arrived -- at least by the account of the U.S. military establishment,
which is busy competing over who should get what share of the fight. Cyberspace is "a domain in which the Air Force flies and
fights," Air Force Secretary Michael Wynne claimed in 2006. By 2012, William J. Lynn III, the deputy defense secretary at the time,
was writing that cyberwar is "just as critical to military operations as land, sea, air, and space ." In January,
the Defense Department vowed to equip the U.S. armed forces for "conducting a combined arms campaign across all domains -- land,
air, maritime, space, and cyberspace." Meanwhile, growing piles of books and articles explore the threats of cyberwarfare,
cyberterrorism, and how to survive them. Time for a reality check: Cyberwar is still more hype than hazard.
Consider the definition of an act of war: It has to be potentially violent, it has to be purposeful, and it has to be
political. The cyberattacks we've seen so far, from Estonia to the Stuxnet virus, simply don't meet these
criteria. Take the dubious story of a Soviet pipeline explosion back in 1982, much cited by cyberwar's true believers as the most
destructive cyberattack ever. The account goes like this: In June 1982, a Siberian pipeline that the CIA had virtually booby-trapped
with a so-called "logic bomb" exploded in a monumental fireball that could be seen from space. The U.S. Air Force estimated
the explosion at 3 kilotons, equivalent to a small nuclear device. Targeting a Soviet pipeline linking gas fields in Siberia to European
markets, the operation sabotaged the pipeline's control systems with software from a Canadian firm that the CIA had doctored with
malicious code. No one died, according to Thomas Reed, a U.S. National Security Council aide at the time who revealed the
incident in his 2004 book, At the Abyss; the
only harm came to the Soviet economy. But did it really happen? After
Reed's account came out, Vasily Pchelintsev, a former KGB head of the Tyumen region, where the alleged
explosion supposedly took place, denied the story. There are also no media reports from 1982 that confirm such an
explosion, though accidents and pipeline explosions in the Soviet Union were regularly reported in the early 1980s. Something likely
did happen, but Reed's book is the only public mention of the incident and his account relied on a single document. Even after the CIA
declassified a redacted version of Reed's source, a note on the so-called Farewell Dossier that describes the effort to provide the Soviet
Union with defective technology, the agency did not confirm that such an explosion occurred. The available evidence on the Siberian
pipeline blast is so thin that it shouldn't be counted as a proven case of a successful cyberattack. Most other commonly cited cases of
cyberwar are even less remarkable. Take the attacks on Estonia in April 2007, which came in response to the controversial relocation
of a Soviet war memorial, the Bronze Soldier. The well-wired country found itself at the receiving end of a massive distributed denial-of-service attack that emanated from up to 85,000 hijacked computers and lasted three weeks. The attacks reached a peak on May 9,
when 58 Estonian websites were attacked at once and the online services of Estonia's largest bank were taken down. "What's the
difference between a blockade of harbors or airports of sovereign states and the blockade of government institutions and newspaper
websites?" asked Estonian Prime Minister Andrus Ansip. Despite his analogies, the attack was no act of war. It was certainly a
nuisance and an emotional strike on the country, but the bank's actual network was not even penetrated; it went down for 90 minutes
one day and two hours the next. The attack was not violent, it wasn't purposefully aimed at changing Estonia's behavior, and no
political entity took credit for it. The same is true for the vast majority of cyberattacks on record. Indeed, there is
no known
cyberattack that has caused the loss of human life. No cyberoffense has ever injured a person or
damaged a building. And if an act is not at least potentially violent, it's not an act of war. Separating war
from physical violence makes it a metaphorical notion; it would mean that there is no way to distinguish between World War II, say,
and the "wars" on obesity and cancer. Yet those ailments, unlike past examples of cyber "war," actually do kill people. "A Digital
Pearl Harbor Is Only a Matter of Time." Keep waiting. U.S. Defense Secretary Leon Panetta delivered a stark
warning last summer: "We could face a cyberattack that could be the equivalent of Pearl Harbor." Such alarmist predictions
have been ricocheting inside the Beltway for the past two decades, and some scaremongers have
even upped the ante by raising the alarm about a cyber 9/11. In his 2010 book, Cyber War, former White House
counterterrorism czar Richard Clarke invokes the specter of nationwide power blackouts, planes falling out of the sky, trains derailing,
refineries burning, pipelines exploding, poisonous gas clouds wafting, and satellites spinning out of orbit -- events that would make
the 2001 attacks pale in comparison. But the empirical record is less hair-raising, even by the standards of
the most drastic example available. Gen. Keith Alexander, head of U.S. Cyber Command (established in 2010
and now boasting a budget of more than $3 billion), shared his worst fears in an April 2011 speech at the University of Rhode Island:
"What I'm concerned about are destructive attacks," Alexander said, "those that are coming." He then invoked a remarkable accident at
Russia's Sayano-Shushenskaya hydroelectric plant to highlight the kind of damage a cyberattack might be able to cause. Shortly after
midnight on Aug. 17, 2009, a 900-ton turbine was ripped out of its seat by a so-called "water hammer," a sudden surge in water
pressure that then caused a transformer explosion. The turbine's unusually high vibrations had worn down the bolts that kept its cover
in place, and an offline sensor failed to detect the malfunction. Seventy-five people died in the accident, energy prices in Russia rose,
and rebuilding the plant is slated to cost $1.3 billion. Tough luck for the Russians, but here's what the head of Cyber Command didn't
say: The ill-fated turbine had been malfunctioning for some time, and the plant's management was notoriously poor. On top of that, the
key event that ultimately triggered the catastrophe seems to have been a fire at Bratsk power station, about 500 miles away. Because
the energy supply from Bratsk dropped, authorities remotely increased the burden on the Sayano-Shushenskaya plant. The sudden
spike overwhelmed the turbine, which was two months shy of reaching the end of its 30-year life cycle, sparking the catastrophe. If
anything, the Sayano-Shushenskaya incident highlights how difficult a devastating attack would be to
mount. The plant's washout was an accident at the end of a complicated and unique chain of
events. Anticipating such vulnerabilities in advance is extraordinarily difficult even for insiders;
creating comparable coincidences from cyberspace would be a daunting challenge at best for
outsiders. If this is the most drastic incident Cyber Command can conjure up, perhaps it's time for everyone to take a deep breath.
"Cyberattacks Are Becoming Easier." Just the opposite. U.S. Director of National Intelligence James R.
Clapper warned last year that the volume of malicious software on American networks had more
than tripled since 2009 and that more than 60,000 pieces of malware are now discovered every day. The United States, he
said, is undergoing "a phenomenon known as 'convergence,' which amplifies the opportunity for
disruptive cyberattacks, including against physical infrastructures." ("Digital convergence" is a snazzy term for a simple thing:
more and more devices able to talk to each other, and formerly separate industries and activities able to work together.) Just
because there's more malware, however, doesn't mean that attacks are becoming easier. In fact,
potentially damaging or life-threatening cyberattacks should be more difficult to pull off. Why?
Sensitive systems generally have built-in redundancy and safety systems, meaning an
attacker's likely objective will not be to shut down a system, since merely forcing the shutdown
of one control system, say a power plant, could trigger a backup and cause operators to start looking
for the bug. To work as an effective weapon, malware would have to influence an active process - but not bring it to a screeching halt. If the malicious activity extends over a lengthy period, it
has to remain stealthy. That's a more difficult trick than hitting the virtual off-button. Take Stuxnet,
the worm that sabotaged Iran's nuclear program in 2010. It didn't just crudely shut down the centrifuges at the
Natanz nuclear facility; rather, the worm subtly manipulated the system. Stuxnet stealthily infiltrated the
plant's networks, then hopped onto the protected control systems, intercepted input values from sensors, recorded these data, and then
provided the legitimate controller code with pre-recorded fake input signals, according to researchers who have studied the worm. Its
objective was not just to fool operators in a control room, but also to circumvent digital safety and monitoring systems so it could
secretly manipulate the actual processes. Building
and deploying Stuxnet required extremely detailed
intelligence about the systems it was supposed to compromise, and the same will be true for other
dangerous cyberweapons. Yes, "convergence," standardization, and sloppy defense of control-systems software could increase the risk of generic attacks, but the same trend has also caused
defenses against the most coveted targets to improve steadily and has made reprogramming
highly specific installations on legacy systems more complex, not less.
1NC Cyber Inev
Cybersecurity vulnerabilities are inevitable
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army
Judge Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military
assignments, including as the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States,
Chief of International Law for U.S. Army Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers
of 'Going Dark': Why Congress Must Require a Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
Like CALEA, a statutory obligation along the lines proposed herein will inevitably trigger criticisms and generate concerns. One
obvious criticism
is that the creation of an escrow key or the maintenance of a duplicate key by a
manufacturer would introduce an unacceptable risk of compromise for the device. This argument
presupposes that the risk is significant, that the costs of its exploitation are large, and that the
benefit is not worth the risk. Yet manufacturers, product developers, service providers and
users constantly introduce such risks. Nearly every feature or bit of code added to a device
introduces a risk, some greater than others. The vulnerabilities that have been introduced to
computers by software such as Flash, ActiveX controls, Java, and web browsers are well
documented.51 The ubiquitous SQL database, while extremely effective at helping web designers
create effective data driven websites, is notorious for its vulnerability to SQL injection attacks.52
The adding of microphones to electronic devices opened the door to aural interceptions. Similarly,
the introduction of cameras has resulted in unauthorized video surveillance of users. Consumers
accept all of these risks, however, since we, as individual users and as a society, have concluded
that they are worth the cost. Some will inevitably argue that no new possible vulnerabilities
should be introduced into devices to allow the government to execute reasonable, and therefore lawful,
searches for unique and otherwise unavailable evidence. However, this argument implicitly
asserts that there is no, or insignificant, value to society of such a feature. And herein lies the
Achilles heel to opponents of mandated front-door access: the conclusion is entirely at odds with the
inherent balance between individual liberty and collective security central to the Fourth
Amendment itself. Nor should lawmakers be deluded into believing that the currently existing
vulnerabilities that we live with on a daily basis are less significant in scope than the possibility
of obtaining complete access to the encrypted contents of a device. Various malware variants
that are so widespread as to be almost omnipresent in our online community achieve just such
access through what would seem like minor cracks in the defense of systems.53 One example is
the Zeus malware strain, which has been tied to the unlawful online theft of hundreds of
millions of dollars from U.S. companies and citizens and gives its operator complete access to
and control over any computer it infects.54 It can be installed on a machine through the simple mistake of viewing an
infected website or email, or clicking on an otherwise innocuous link.55 The malware is designed to not only bypass
malware detection software, but to deactivate the software's ability to detect it.56 Zeus and the many
other variants of malware that are freely available to purchasers on dark-net websites and forums are responsible for the theft of
funds from countless online bank accounts (the credentials having been stolen by the malware’s key-logger features), the theft of
credit card information, and innumerable personal identifiers.57
2NC Cyber Inev
Security issues are inevitable
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings
Institution. He is the author of several books and a member of the Hoover Institution's Task Force on National Security and Law.
"Thoughts on Encryption and Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015.
http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning
that the FBI was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of
the renewed encryption debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me
start by breaking the encryption debate into two
distinct sets of questions: One is the conceptual question
of whether a world of end-to-end strong encryption is an attractive idea. The other is whether—
assuming it is not an attractive idea and that one wants to ensure that authorities retain the ability to intercept decrypted signal—
an extraordinary access scheme is technically possible without eroding other essential security
and privacy objectives. These questions often get mashed together, both because tech companies are keen to market
themselves as the defenders of their users' privacy interests and because of the libertarian ethos of the tech community more
generally. But the
questions are not the same, and it's worth considering them separately. Consider
the conceptual question first. Would it be a good idea to have a world-wide communications
infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could snap our
fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from
the FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an
internet as secure as possible from everyone except government investigators exercising their legal authorities with the
understanding that other countries may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially close call. The belief in principle in creating a giant
world-wide network on which surveillance is technically impossible is really an argument for
the creation of the world's largest ungoverned space. I understand why techno-anarchists find
this idea so appealing. I can't imagine for a moment, however, why anyone else would. Consider
the comparable argument in physical space: the creation of a city in which authorities are
entirely dependent on citizen reporting of bad conduct but have no direct visibility onto what
happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really
suck is not controversial when you're talking about Yemen or Somalia. I see nothing more
attractive about the creation of a worldwide architecture in which it is technically impossible to
intercept and read ISIS communications with followers or to follow child predators into
chatrooms where they go after kids. The trouble is that this conceptual position does not answer the entirety of the
policy question before us. The reason is that the case against preserving some form of law enforcement access to decrypted signal is
not only a conceptual embrace of the technological obsolescence of surveillance. It
is also a series of arguments about
the costs—including the security costs—of maintaining the capacity to decrypt captured signal.
Consider the report issued this past week by a group of computer security experts (including Lawfare contributing editors Bruce
Schneier and Susan Landau), entitled "Keys Under Doormats: Mandating Insecurity By Requiring Government Access to All Data and
Communications." The report does not make an in-principle argument or a conceptual argument against extraordinary access. It
argues, rather, that the effort to build such a system risks eroding cybersecurity in ways far more important than the problems it
would solve. The authors, to summarize, make three claims in support of the broad claim that any exceptional access system would
"pose . . . grave security risks [and] imperil innovation." What
are those "grave security risks"? "[P]roviding
exceptional access to communications would force a U-turn from the best practices now being
deployed to make the Internet more secure. These practices include forward secrecy—where
decryption keys are deleted immediately after use, so that stealing the encryption key used by a communications server would not
compromise earlier or later communications. A related technique, authenticated encryption, uses the same temporary key to
guarantee confidentiality and to verify that the message has not been forged or tampered with." "[B]uilding
in exceptional
access would substantially increase system complexity" and "complexity is the enemy of
security." Adding code to systems increases that system's attack surface, and a certain number of additional vulnerabilities come
with every marginal increase in system complexity. So by requiring a potentially complicated new system to be developed and
implemented, we'd be effectively guaranteeing more vulnerabilities for malicious actors to hit. "[E]xceptional
access
would create concentrated targets that could attract bad actors." If we require tech companies to retain
some means of accessing user communications, those keys have to be stored somewhere, and that storage then becomes an unusually
high-stakes target for malicious attack. Their theft then compromises, as did the OPM hack, large numbers of users. The strong
implication of the report is that these issues are not resolvable, though the report never quite says that. But at a
minimum, the authors raise a series of important questions about whether such a system would, in practice, create an insecure
internet in general—rather than one whose general security has the technical capacity to make security exceptions to comply with
the law. There is some reason, in my view, to suspect that the
picture may not be quite as stark as the computer
scientists make it seem. After all, the big tech companies increase the complexity of their software
products all the time, and they generally regard the increased attack surface of the software
they create as a result as a mitigatable problem. Similarly, there are lots of high-value intelligence
targets that we have to secure and would have big security implications if we could not do so
successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without tools in the
cybersecurity department. The real question, in my view, is whether a system of the sort Comey imagines could be
built in a fashion in which the security gain it would provide would exceed the heightened security
risks the extraordinary access would involve. As Herb Lin puts it in his excellent, and admirably brief, Senate
testimony the other day, this is ultimately a question without an answer in the absence of a lot of new research. "One side says [the]
access [Comey is seeking] inevitably weakens the security of a system and will eventually be compromised by a bad guy; the other
side says it doesn’t weaken security and won’t be compromised. Neither side can prove its case, and we see a theological clash of
absolutes." Only when someone actually does the research and development and tries actually to produce a system that meets
Comey's criteria are we going to find out whether it's doable or not. And therein lies the rub, and the real meat of the policy
problem, in my view: Who's going to do this research? Who's going to conduct the sustained investment in trying to imagine a
system that secures communications except from government when and only government has a warrant to intercept those
communications? The assumption of the computer scientists in their report is that the burden of that research lies with the
government. "Absent a concrete technical proposal," they write, "and without answers to the questions raised in this report,
legislators should reject out of hand any proposal to return to the failed cryptography control policy of the 1990s." Indeed, their
most central recommendation is that the burden of development is on Comey. "Our strong recommendation is that anyone
proposing regulations should first present concrete technical requirements, which industry, academics, and the public can analyze
for technical weaknesses and for hidden costs." In his testimony, Herb supports this call, though he acknowledges that it is not the
inevitable route: “the government has not yet provided any specifics, arguing that private vendors should do it. At the same time, the
vendors won’t do it, because [their] customers aren’t demanding such features. Indeed, many customers would see such features as
a reason to avoid a given vendor. Without specifics, there will be no progress. I believe the government is afraid that any specific
proposal will be subject to enormous criticism—and that’s true—but the government is the party that wants . . . access, and rather
than running away from such criticism, it should embrace any resulting criticism as an opportunity to improve upon its initial
designs." Herb might also have mentioned that lots of people in the academic tech community who would be natural candidates to
help develop such an access system are much more interested in developing encryption systems to keep the feds out than to—
under any circumstances—let them in. The tech community has spent a lot more time and energy arguing against the plausibility
and desirability of implementing what Comey is seeking than it has spent in trying to develop systems that deliver it while
mitigating the risks such a system might pose. For both industry and the tech communities, more broadly, this is government's
problem, not their problem. Yet reviving the Clipper Chip model—in which government develops a fully-formed system and then
puts it out publicly for the community to shoot down—is clearly not what Comey has in mind. He is talking in very different
language: the language of performance requirements. He wants to leave the development task to Silicon Valley to figure out how to
implement government's requirements. He wants
to describe what he needs—decrypted signal when he
has a warrant—and leave the companies to figure out how to deliver it while still providing
secure communications in other circumstances to their customers. The advantage to this
approach is that it potentially lets a thousand flowers bloom. Each company might do it
differently. They would compete to provide the most security consistent with the
performance standard. They could learn from each other. And government would not be in
the position of developing and promoting specific algorithms. It wouldn't even need to know
how the task was being done.
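For reference, the “authenticated encryption” practice the Keys Under Doormats authors describe is straightforward to show. A minimal sketch using AES-GCM from the third-party Python cryptography package (assumed installed); the key, nonce, and message are invented for illustration.

```python
# A minimal sketch of the authenticated encryption the report describes,
# using AES-GCM from the third-party "cryptography" package (assumed
# installed); the key, nonce, and message are invented for illustration.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # temporary key; deleting it after
aesgcm = AESGCM(key)                        # use is the forward-secrecy idea
nonce = os.urandom(12)

ciphertext = aesgcm.encrypt(nonce, b"meet at noon", None)

# Flip one bit: the authentication tag no longer verifies, so the forgery
# is rejected instead of being silently decrypted.
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    print("tampering detected")

print(aesgcm.decrypt(nonce, ciphertext, None))  # b'meet at noon'
```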
1NC Meltdown =/= Extinction
No impact to or risk of nuclear meltdowns – their evidence
Cappiello 3/29/11 – national environmental reporter for The Associated Press, master’s
degrees in earth and environmental science and journalism from Columbia University (Dina,
“Long Blackouts Pose Risk To U.S. Nuclear Reactors” Huffington Post,
http://www.huffingtonpost.com/2011/03/29/blackout-risk-us-nuclearreactors_n_841869.html)//IS
A 2003 federal analysis looking at how to estimate the risk of containment failure said that should
power be knocked out
by an earthquake or tornado it "would be unlikely that power will be recovered in the time frame to
prevent core meltdown." In Japan, it was a one-two punch: first the earthquake, then the tsunami. Tokyo Electric Power
Co., the operator of the crippled plant, found other ways to cool the reactor core and so far avert a full-scale meltdown without
electricity. "Clearly the coping duration is an issue on the table now," said Biff Bradley, director of risk assessment for the Nuclear
Energy Institute. "The industry and the Nuclear Regulatory Commission will have to go back in light of what we just observed and
rethink station blackout duration." David Lochbaum, a former plant engineer and nuclear safety director at the advocacy group
Union of Concerned Scientists, put it another way: "Japan
shows what happens when you play beat-the-clock
and lose." Lochbaum plans to use the Japan disaster to press lawmakers and the nuclear power industry to do more when it
comes to coping with prolonged blackouts, such as having temporary generators on site that can recharge batteries. A complete
loss of electrical power, generally speaking, poses a major problem for a nuclear power plant
because the reactor core must be kept cool, and back-up cooling systems – mostly pumps that
replenish the core with water – require massive amounts of power to work. Without the
electrical grid, or diesel generators, batteries can be used for a time, but they will not last long
with the power demands. And when the batteries die, the systems that control and monitor the
plant can also go dark, making it difficult to ascertain water levels and the condition of the core.
One variable not considered in the NRC risk assessments of severe blackouts was cooling water
in spent fuel pools, where rods once used in the reactor are placed. With limited resources, the
commission decided to focus its analysis on the reactor fuel, which has the potential to release
more radiation. An analysis of individual plant risks released in 2003 by the NRC shows that for
39 of the 104 nuclear reactors, the risk of core damage from a blackout was greater than 1 in
100,000. At 45 other plants the risk is greater than 1 in 1 million, the threshold NRC is using to determine
which severe accidents should be evaluated in its latest analysis. The Beaver Valley Power Station, Unit 1, in Pennsylvania had the
greatest risk of core melt – 6.5 in 100,000, according to the analysis. But that risk may have been reduced in subsequent years as
NRC regulations required plants to do more to cope with blackouts. Todd Schneider, a spokesman for FirstEnergy Nuclear Operating
Co., which runs Beaver Valley, told the AP that batteries on site would last less than a week. In 1988, eight years after labeling
blackouts "an unresolved safety issue," the NRC required nuclear power plants to improve the reliability of their diesel generators,
have more backup generators on site, and better train personnel to restore power. These steps would allow them to keep the core
cool for four to eight hours if they lost all electrical power. By contrast, the newest generation of nuclear power plant, which is still
awaiting approval, can last 72 hours without taking any action, and a minimum of seven days if water is supplied by other means to
cooling pools. Despite the added safety measures, a 1997 report found that blackouts – the loss of on-site and off-site electrical
power – remained "a dominant contributor to the risk of core melt at some plants." The events of Sept. 11, 2001, further solidified
that nuclear reactors might have to keep the core cool for a longer period without power. After 9/11, the commission issued
regulations requiring that plants
have portable power supplies for relief valves and be able to
manually operate an emergency reactor cooling system when batteries go out. The NRC says
these steps, and others, have reduced the risk of core melt from station blackouts from the
current fleet of nuclear plants. For instance, preliminary results of the latest analysis of the risks to the Peach Bottom
plant show that any release caused by a blackout there would be far less rapid and would release
less radiation than previously thought, even without any actions being taken. With more time,
people can be evacuated. The NRC says improved computer models, coupled with up-to-date information about the plant,
resulted in the rosier outlook. "When you simplify, you always err towards the worst possible circumstance," Scott Burnell, a
spokesman for the Nuclear Regulatory Commission, said of the earlier studies. The
latest work shows that "even in
situations where everything is broken and you can't do anything else, these events take a long
time to play out," he said. "Even when you get to releasing into environment, much less of it is
released than actually thought." Exelon Corp., the operator of the Peach Bottom plant, referred all detailed questions
about its preparedness and the risk analysis back to the NRC. In a news release issued earlier this month, the company, which
operates 10 nuclear power plants, said "all
Exelon nuclear plants are able to safely shut down and keep the
fuel cooled even without electricity from the grid." Other people, looking at the crisis unfolding in Japan, aren't so
sure. In the worst-case scenario, the NRC's 1990 risk assessment predicted that a core melt at Peach Bottom could
begin in one hour if electrical power on- and off-site were lost, the diesel generators – the main
back-up source of power for the pumps that keep the core cool with water – failed to work and
other mitigating steps weren't taken. "It is not a question that those things are definitely
effective in this kind of scenario," said Richard Denning, a professor of nuclear engineering at Ohio State University,
referring to the steps NRC has taken to prevent incidents. Denning had done work as a contractor on severe accident analyses for
the NRC since 1975. He retired from Battelle Memorial Institute in 1995. "They certainly could have made all the difference in this
particular case," he said, referring to Japan. "That's assuming you have stored these things in a place that would not have been
swept away by tsunami."
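For a rough sense of scale on the NRC figures the card quotes, the per-reactor annual risks compound over a fleet and a license term as sketched below; treating reactor-years as independent is an assumption, and 1 in 100,000 is the card’s threshold for those 39 reactors, not an exact per-plant estimate.

```python
# Rough sense of scale for the card's NRC figures, assuming (simplistically)
# that reactor-years are independent; 1-in-100,000 is the card's threshold
# for the 39 reactors, not an exact per-plant estimate.
p_per_year = 1e-5   # blackout core-damage risk per reactor-year
reactors = 39       # reactors above that threshold in the 2003 NRC analysis
years = 40          # a typical operating license term

p_any = 1 - (1 - p_per_year) ** (reactors * years)
print(f"chance of at least one such core melt over {years} years: {p_any:.1%}")
# ~1.5%
```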
Coal plants disprove the impact – they emit way more radiation than a global
meltdown
Worstall 13 – Forbes Contributor focusing on business and technology (Tim Worstall,
8/10/13, “The Fukushima Radiation Leak Is Equal to 76 Million Bananas,”
http://www.forbes.com/sites/timworstall/2013/08/10/the-fukushima-radiation-leak-isequal-to-76-million-bananas/)//twonily
Not that Greenpeace is ever going to say anything other than that nuclear power is the work of the very devil of course. And
the headlines do indeed seem alarming: Radioactive Fukushima groundwater rises above barrier – Up to 40 trillion
becquerels released into Pacific ocean so far – Storage for radioactive water running out. Or: Tepco admitted on Friday that a
cumulative 20 trillion to 40 trillion becquerels of radioactive tritium may have leaked into the
sea since the disaster. Most of us haven’t a clue what that means of course. We don’t instinctively understand what a
becquerel is in the same way that we do pound, pint or gallons, and certainly trillions of anything sounds hideous. But don’t
forget that trillions of picogrammes of dihydrogen monoxide is also the major ingredient in a glass of beer. So what we
really want to know is whether 20 trillion becquerels of radiation is actually an important number. To which the answer is
no, it isn’t. This is actually around and about (perhaps a little over) the amount of radiation the plant was allowed to dump
into the environment before the disaster. Now there are indeed those who insist that any amount of radiation kills us all
stone dead while we sleep in our beds but I’m afraid that this is incorrect. We’re all exposed to radiation all the time and
we all seem to survive long enough to be killed by something else so radiation isn’t as dangerous as all that. At which point
we can offer a comparison. Something to try and give us a sense of perspective about whether 20
trillion nasties of radiation is something to get all concerned about or not. That comparison being that the radiation leakage
from Fukushima appears to be about the same as that from 76 million bananas. Which is a lot of bananas I agree, but again we
can put that into some sort of perspective. Let’s start from the beginning with the banana equivalent dose, the BED. Bananas
contain potassium, some portion of potassium is always radioactive, thus bananas contain some radioactivity. This gets into
the human body as we digest the lovely fruit (OK, bananas are an herb but still…): Since a typical banana contains about half a
gram of potassium, it will have an activity of roughly 15 Bq. Excellent, we now have a unit that we can grasp, one that the
human mind can use to give a sense of proportion to these claims about radioactivity. We know that bananas are good for us
on balance, thus this amount of radioactivity isn’t all that much of a burden on us. We also have that
claim of 20 trillion becquerels of radiation having been dumped into the Pacific Ocean in the past couple of years. 20 trillion
divided by two years by 365 days by 24 hours gives us an hourly rate of 1,141,552,511 becquerels per hour. Divide that by our
15 Bq per banana and we can see that the radiation spillage from Fukushima is running at 76 million bananas per hour. Which
is, as I say above, a lot of bananas. But it’s not actually that many bananas. World production of them is some 145 million
tonnes a year. There’s a thousand kilos in a tonne, say a banana is 100 grammes (sounds about right, four bananas to the
pound, ten to the kilo) or 1.45 trillion bananas a year eaten around the world. Divide again by 365 and 24 to get the hourly
consumption rate and we get 165 million bananas consumed per hour. We can do this slightly differently and say that the 1.45
trillion bananas consumed each year have those 15 Bq giving us around 22 trillion Bq each year. The Fukushima leak is 20
trillion Bq over two years: thus our two calculations agree. The current leak is
just under half that exposure
that we all get from the global consumption of bananas. Except even that’s overstating it. For the
banana consumption does indeed get into our bodies: the Fukushima leak is getting into the Pacific Ocean where it’s obviously
far less dangerous. And don’t forget that all that radiation in the bananas ends up in the oceans as well, given that we do in fact
urinate it out and no, it’s not something that the sewage treatment plants particularly keep out of the rivers. There are some
who are viewing this radiation leak very differently: Arnold Gundersen, Fairewinds Associates: [...] we are contaminating the
Pacific Ocean which is extraordinarily serious. Evgeny Sukhoi: Is there anything that can be done with that, I mean with the
ocean? Gundersen: Frankly, I don’t believe so. I think we will continue to release radioactive material into the ocean for 20 or
30 years at least. They have to pump the water out of the areas surrounding the nuclear reactor. But frankly, this water is the
most radioactive water I’ve ever experienced. I have to admit that I simply don’t agree. I’m not actually arguing that radiation
is good for us but I really don’t think that half the radiation of the world’s banana crop being diluted into the Pacific Ocean is all
that much to worry about. And why
we really shouldn’t worry about it
all that much. The radiation that fossil
fuel plants spew into the environment each year is around 0.1 EBq. That’s ExaBecquerel, or 10 to the power of 18.
Fukushima is pumping out 10 trillion becquerels a year at present. Or 10 TBq, or 10 times 10 to the power of 12. Or, if
you prefer, one ten thousandth of the amount that the world’s coal plants are doing . Or
even, given that there are only about 2,500 coal plants in the world, Fukushima is, in this disaster,
pumping out around one quarter of the radiation that a coal plant does in normal
operation . You can worry about it if you want but it’s not something that’s likely to have any real
measurable effect on anyone or anything.
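Worstall’s arithmetic checks out; a short sketch reproducing it, with every input taken from the card’s own figures:

```python
# Reproduces the card's arithmetic; all inputs are the card's own figures.
LEAK_BQ = 20e12             # 20 trillion Bq leaked over two years
BQ_PER_BANANA = 15          # ~15 Bq per banana (the "banana equivalent dose")
HOURS = 2 * 365 * 24        # two years, in hours

leak_per_hour = LEAK_BQ / HOURS
print(leak_per_hour)                     # ~1.14e9 Bq/hour, as the card says
print(leak_per_hour / BQ_PER_BANANA)     # ~76 million bananas per hour

# World banana consumption: 145 million tonnes/year at ~100 g per banana
bananas_per_year = 145e6 * 1000 / 0.1
print(bananas_per_year * BQ_PER_BANANA)  # ~2.2e13 Bq/year vs 1e13/year leaked

# Coal comparison: ~0.1 EBq/year from fossil plants vs ~10 TBq/year leaking
print(0.1e18 / 10e12)                    # coal fleet emits ~10,000x more
```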
2NC Meltdown =/= Extinction
No impact – empirics
Marder, 11 – staff writer (Jenny, “Mechanics of a Nuclear Meltdown Explained,” PBS,
3/15/2011, http://www.pbs.org/newshour/rundown/mechanics-of-a-meltdownexplained/) //RGP
After a powerful explosion on Tuesday, Japanese
workers are still struggling to regain control of an
earthquake and tsunami-damaged nuclear power plant amid worsening fears of a full
meltdown. Which raises the questions: What exactly is a nuclear meltdown? And what is a partial meltdown? “This term
‘meltdown’ is being bandied about, and I think people think that you get the fuel hot and things start
melting and become liquid,” said Charles Ferguson, physicist and president of the Federation of American Scientists.
“But there are different steps along the way.” Inside the core of the boiling water reactors at Japan’s Fukushima
Dai-ichi facility are thousands of zirconium metal fuel rods, each stacked with ceramic pellets the size of pencil erasers. These
pellets contain uranium dioxide. Under normal circumstances, energy is generated by harnessing the heat produced through
an atom-splitting process called nuclear fission. As uranium atoms split, they produce heat, while creating what’s known as
fission products. These are radioactive fragments, such as barium, iodine and Cesium-137. In a working nuclear reactor, water
gets pumped into the reactor’s heated core, boils, turns into steam and powers a turbine, generating electricity. “Basically,
each uranium atom splits into two parts, and you get a whole soup of elements in the middle
of the periodic table,” said Arjun Makhijani, a nuclear engineer and president of the Institute for Energy and
Environmental Research. A reactor is like a pressure cooker. It contains boiling water and steam, and as temperature rises, so
does pressure, since the steam can’t escape. In the
event of a cooling failure, water gets injected to cool
the fuel rods, and pressure builds. This superheated core must be cooled with water to prevent overheating and
an excessive buildup of steam, which can cause an explosion. In Japan, they’ve been relieving pressure by
releasing steam through pressure valves. But it’s a trade-off, as there’s no way to do this
without also releasing some radioactive material. A nuclear meltdown is an accident resulting from severe
heating and a lack of sufficient cooling at the reactor core, and it occurs in different stages. As the core heats, the zirconium
metal reacts with steam to become zirconium oxide. This oxidation process releases additional heat, further increasing the
temperature inside the core. High temperatures cause the zirconium coating that covers the surface of the fuel rods to blister
and balloon. In time, that ultra-hot zirconium metal starts to melt. Exposed parts of the fuel rods eventually become liquid,
sink down into the coolant and solidify. And that’s just the beginning of a potentially catastrophic event. “This can clog and
prevent the flow of more coolant,” Ferguson said. “And that can become a vicious cycle. Partial melting can solidify and block
cooling channels, leading to more melting and higher temperatures if adequate cooling isn’t present.” A full meltdown would
involve all of the fuel in that core melting and a mass of molten material falling and settling at the bottom of the reactor vessel.
If the vessel is ruptured, the material could flow into the larger containment building surrounding it. That containment
is shielded by protective layers of steel and concrete. “But if that containment is ruptured, then potentially
a lot of material could go into the environment,” Ferguson said. Meltdown can also occur in the pools containing spent fuel
rods. Used fuel rods are removed from the reactor and submerged in what’s called a spent fuel pool, which cools and shields
the radioactive material. Overheating of
the spent fuel pools could cause the water containing and
cooling the rods to evaporate. Without coolant, the fuel rods become highly vulnerable to catching fire and
spontaneously combusting, releasing dangerous levels of radiation into the atmosphere. “Water not only provides
cooling, but it provides shielding,” said Robert Alvarez, a nuclear expert and a senior scholar at the Institute for
Policy Studies. “[Radiation] dose rates coming off from spent fuel at distances of 50 to 100 yards could be life-threatening.”
Since spent fuel is less radioactive than fuel in the reactor core, these pools are easier to control, said Peter Caracappa, a
professor and radiation safety officer at Rensselaer Polytechnic Institute. But they’re also less contained. “If material is
released, it has a greater potential to spread because there’s no primary containment,” he said. Most of the problems with the
backup generators were caused by the tsunami flooding them. But Makhijani suspects that unseen damage from the
earthquake may be adding another challenge. “I think because the earthquake was so severe, there’s probably a lot of damage
becoming apparent now,” he said. “Valves might have become displaced, and there may be cracked pipes. We can’t know,
because there’s no way to suspect. Yesterday, they had trouble releasing a valve. And they’ve had trouble maintaining coolant
inside, which means leaks.”
No extinction – empirics – reactors leak literally all the time
Nichols 13 – columnist @ Veterans Today (Bob Nichols, 4/6/13, “All Nuclear Reactors Leak
All of the Time,” http://www.veteranstoday.com/2013/04/06/all-reactors-leak-all-thetime/)//twonily
(San Francisco) Reportedly Americans widely believe in God and lead the world in the percentage of citizens in prison and on
parole. That is actual reality from an imaginary character in a TV show. The Gallup Poll also says it is true and has been for
years. Most Americans
believe that nuke reactors are safe and quite sound, too. Wonder why they do
that? Most people at one time in their lives watched as steam escapes from a pressure cooker
and accept it as real and true. A reactor is very much the same thing. The “cooks,” called “Operators,” even
take the lid off from time to time too. A nuclear reactor is just an expensive, overly complicated way to
heat water to make steam. Of course all reactors leak! All nuclear reactors also actually manufacture more
than 1,946 dangerous and known radioactive metals, gases and aerosols. Many isotopes, such as radioactive
hydrogen, simply cannot be contained. So, they barely even try. It is mostly just a show for the rubes.[1]
Even explosions don’t cause leaks – empirics
Bellona News 11 – (9/12/11, “Breaking: Explosion rocks French nuclear facility; no
radiation leaks apparent,” http://bellona.org/news/nuclear-issues/accidents-andincidents/2011-09-breaking-explosion-rocks-french-nuclear-facility-no-radiation-leaksapparent)//twonily
There is no immediate evidence of a radioactive leak after a blast at the southern French
nuclear facility of Marcoule near Nimes which killed one person and injured four others, one seriously,
French media have reported and safety officials have confirmed. There was no risk of a radioactive leak
after the blast, caused by a fire near a furnace in the Centraco radioactive waste storage site, said officials according to
various media reports. The plant’s owner, national electricity provider EDF, said it had been “an industrial
accident, not a nuclear accident.” “For the time being nothing has made it outside,” said one
spokesman for France’s Atomic Energy Commission who spoke anonymously to the BBC. The Centraco treatment centre,
which has been operational since February of 1999, belongs to a subsidiary of EDF. It produces MOX fuel, which recycles
plutonium from nuclear weapons. “[Marcoule] is French version of Sellafield. It is difficult to evaluate right now how serious
the situation is based on the information we have at the moment. But it can develop further,” said Bellona nuclear physicist
Nils Bøhmer. The local Midi Libre newspaper, on its web site, said an oven exploded at the plant, killing one person and
seriously injuring another. No
radiation leak was reported, the report said, adding that no quarantine or
evacuation orders were issued for neighboring towns. A security perimeter has been set up because of the
risk of leakage. The explosion hit the site at 11:45 local time. The EDF spokesman said the furnace affected had
been burning contaminated waste, including fuels, tools and clothing, which had been used in nuclear
energy production. “The fire caused by the explosion was under control,” he told the BBC. The International Atomic Energy
Agency (IAEA) said it was in touch with the French authorities to learn more about the nature of the explosion. IAEA Director
General Yukiya Amano said the organisation’s incident centre had been “immediately activated,” Reuters reports. A statement
issued by the Nuclear Safety Authority also said there have been no radiation leaks outside of the plant. Staff at the plant
reacted to the accident according to planned procedures, it said. France’s Nuclear Safety Authority, however, is not noted for
its transparency. Operational since 1956, the Marcoule plant is a major site involved with the decommissioning of nuclear
facilities, and operates a pressurised water reactor used to produce tritium. The site has also been used since 1995 by
French nuclear giant Areva to produce MOX fuel at the site’s MELOX factory, which recycles plutonium from nuclear weapons.
Part of the process involves firing superheated plutonium and uranium pellets in an oven. The Marcoule plant is located in the
Gard department in Languedoc-Roussillon region, near France’s Mediterranean coast. Marcoule: Sellafield’s French brother Its
first major role upon opening was weapons production as France sought a place among nuclear nations. Its reactors
generated the first plutonium for France’s first nuclear weapons test in 1960. Its reactor producing
tritium as fuel for hydrogen bombs, as well as other weapons-related reactors, sprang up as the arms race gained international
traction. The site also houses an experimental Phenix fast-breeder reactor which since 1995 has combined fissile uranium and
plutonium into mixed oxide or MOX fuel that can be used in civilian nuclear power stations.
1NC Econ =/= War
International norms maintain economic stability
***Zero empirical data supports their theory – the only financial crisis of the new liberal order produced zero uptick in
violence and no challenges to the US-led order that checks inter-state violence – they have no theoretical foundation for
proving causality
Barnett, 9 – senior managing director of Enterra Solutions LLC (Thomas, The New Rules: Security
Remains Stable Amid Financial Crisis, 25 August 2009, http://www.aprodex.com/the-new-rules-security-remains-stable-amid-financial-crisis-398-bl.aspx)
When the global financial crisis struck roughly a year ago, the blogosphere was ablaze with all sorts of scary
predictions of, and commentary regarding, ensuing conflict and wars -- a rerun of the Great Depression leading to world
war, as it were. Now, as global economic news brightens and recovery -- surprisingly led by China and emerging markets -- is the talk
of the day, it's interesting to look back over the past year and realize how globalization's
first truly worldwide
recession has had virtually no impact whatsoever on the international security landscape. None of the more
than three-dozen ongoing conflicts listed by GlobalSecurity.org can be clearly attributed to the global
recession. Indeed, the last new entry (civil conflict between Hamas and Fatah in the Palestine) predates the
economic crisis by a year, and three quarters of the chronic struggles began in the last century. Ditto for the 15 low-intensity conflicts listed by Wikipedia (where the latest entry is the Mexican "drug war" begun in 2006). Certainly, the Russia-Georgia conflict last August was specifically timed, but by most accounts the opening ceremony of the Beijing Olympics was the
most important external trigger (followed by the U.S. presidential campaign) for that sudden spike in an almost two-decade long
struggle between Georgia and its two breakaway regions. Looking over the various databases, then, we
see a most familiar
picture: the usual mix of civil conflicts, insurgencies, and liberation-themed terrorist movements.
Besides the recent Russia-Georgia dust-up, the only two potential state-on-state wars (North v. South Korea, Israel v.
Iran) are both tied to one side acquiring a nuclear weapon capacity -- a process wholly unrelated to global economic
trends. And with the United States effectively tied down by its two ongoing major interventions (Iraq and Afghanistan-bleeding-into-Pakistan), our involvement elsewhere around the planet has been quite modest, both leading up to and
following the onset of the economic crisis (e.g., the usual counter-drug efforts in Latin America, the usual military exercises
with allies across Asia, mixing it up with pirates off Somalia's coast). Everywhere else we find serious instability we pretty much let it
burn, occasionally pressing the Chinese -- unsuccessfully -- to do something. Our new Africa Command, for example, hasn't led us to
anything beyond advising and training local forces. So, to sum up: •No significant uptick in mass violence or unrest
(remember the smattering of urban riots last year in places like Greece, Moldova and Latvia?); •The usual frequency maintained in
civil conflicts (in all the usual places); •Not a single state-on-state war directly caused (and no great-power-on-great-power crises
even triggered); •No great improvement or disruption in great-power cooperation regarding the emergence of
new nuclear powers (despite all that diplomacy); •A modest scaling back of international policing efforts by the system's
acknowledged Leviathan power (inevitable given the strain); and •No
serious efforts by any rising great power to
challenge that Leviathan or supplant its role. (The worst things we can cite are Moscow's occasional deployments of
strategic assets to the Western hemisphere and its weak efforts to outbid the United States on basing rights in Kyrgyzstan; but the
best include China and India stepping up their aid and investments in Afghanistan and Iraq.) Sure, we've finally seen global defense
spending surpass the previous world record set in the late 1980s, but even that's likely to wane given the stress on public budgets
created by all this unprecedented "stimulus" spending. If anything, the friendly
cooperation on such stimulus
packaging was the most notable great-power dynamic caused by the crisis. Can we say that the world has
suffered a distinct shift to political radicalism as a result of the economic crisis? Indeed, no. The world's major economies
remain governed by center-left or center-right political factions that remain decidedly friendly to both
markets and trade. In the short run, there were attempts across the board to insulate economies from immediate damage (in
effect, as much protectionism as allowed under current trade rules), but there was no great slide into "trade wars." Instead, the
World Trade Organization is functioning as it was designed to function, and regional efforts toward free-trade agreements have not
slowed. Can we say Islamic radicalism was inflamed by the economic crisis? If it was, that shift was clearly overwhelmed by the
Islamic world's growing disenchantment with the brutality displayed by violent extremist groups such as al-Qaida. And looking
forward, austere economic times are just as likely to breed connecting evangelicalism as disconnecting fundamentalism. At the end
of the day, the economic crisis did not prove to be sufficiently frightening to provoke major economies into establishing global
regulatory schemes, even as it has sparked a spirited -- and much needed, as I argued last week -- discussion of the continuing
viability of the U.S. dollar as the world's primary reserve currency. Naturally, plenty of experts and pundits have attached great
significance to this debate, seeing in it the beginning of "economic warfare" and the like between "fading" America and "rising"
China. And yet, in a world of globally integrated production chains and interconnected financial markets, such "diverging interests"
hardly constitute signposts for wars up ahead. Frankly, I don't welcome a world in which America's fiscal profligacy goes
undisciplined, so bring it on -- please! Add it all up and it's fair to say that this global financial crisis has proven the
great resilience of America's post-World War II international liberal trade order.
2NC Econ =/= War
Aggregate data proves interstate violence doesn’t result from economic decline
Drezner, 12 --- The Fletcher School of Law and Diplomacy at Tufts University (October 2012,
Daniel W., “The Irony of Global Economic Governance: The System Worked,”
www.globaleconomicgovernance.org/wp-content/uploads/IR-Colloquium-MT12-Week-5_TheIrony-of-Global-Economic-Governance.pdf)
The final outcome addresses a
dog that hasn’t barked: the effect of the Great Recession on cross-border
conflict and violence. During the initial stages of the crisis, multiple analysts asserted that the financial
crisis would lead states to increase their use of force as a tool for staying in power.37 Whether
through greater internal repression, diversionary wars, arms races, or a ratcheting up of great power conflict, there were
genuine concerns that the global economic downturn would lead to an increase in conflict.
Violence in the Middle East, border disputes in the South China Sea, and even the disruptions of the Occupy movement fuel
impressions of a surge in global public disorder.
The aggregate data suggests otherwise, however. The Institute for Economics and Peace has
constructed a “Global Peace Index” annually since 2007. A key conclusion they draw from the
2012 report is that “The average level of peacefulness in 2012 is approximately the same as it
was in 2007.”38 Interstate violence in particular has declined since the start of the financial crisis – as
have military expenditures in most sampled countries. Other studies confirm that the Great Recession has not
triggered any increase in violent conflict; the secular decline in violence that started with the end of the Cold War has
not been reversed.39 Rogers Brubaker concludes, “the crisis has not to date generated the surge in
protectionist nationalism or ethnic exclusion that might have been expected.”40
None of these data suggest that the global economy is operating swimmingly. Growth remains unbalanced and fragile, and has
clearly slowed in 2012. Transnational capital flows remain depressed compared to pre-crisis levels, primarily due to a drying up of
cross-border interbank lending in Europe. Currency volatility remains an ongoing concern. Compared to the aftermath of other
postwar recessions, growth in output, investment, and employment in the developed world have all lagged behind. But the Great
Recession is not like other postwar recessions in either scope or kind; expecting a standard “V”-shaped recovery was unreasonable.
One financial analyst characterized the post-2008 global economy as in a state of “contained
depression.”41 The key word is “contained,” however. Given the severity, reach and depth of the 2008
financial crisis, the proper comparison is with Great Depression. And by that standard, the
outcome variables look impressive. As Carmen Reinhart and Kenneth Rogoff concluded in This Time is Different: “that
its macroeconomic outcome has been only the most severe global recession since World War II – and not even worse – must be
regarded as fortunate.”42
Most rigorous historical analysis proves
Miller, 2K – economist, adjunct professor in the University of Ottawa’s Faculty of
Administration, consultant on international development issues, former Executive Director and
Senior Economist at the World Bank, (Morris, “Poverty as a cause of wars?”, Winter,
Interdisciplinary Science Reviews, Vol. 25, Iss. 4, p. Proquest)
Perhaps one should ask, as some scholars do, whether it is not poverty as such but some
dramatic event or sequence of such events leading to the exacerbation of poverty that is the
factor that contributes in a significant way to the denouement of war. This calls for
addressing the question: do wars spring from a popular reaction to an economic crisis that
exacerbates poverty and/or from a heightened awareness of the poor of the wide and
growing disparities in wealth and incomes that diminishes their tolerance to poverty? It
seems reasonable to believe that a powerful "shock" factor might act as a catalyst for a
violent reaction on the part of the people or on the part of the political leadership. The
leadership, finding this sudden adverse economic and social impact destabilizing, would
possibly be tempted to seek a diversion by finding or, if need be, fabricating an enemy and
setting in train the process leading to war. There would not appear to be any merit in this
hypothesis according to a study undertaken by Minxin Pei and Ariel Adesnik of the Carnegie
Endowment for International Peace. After studying 93 episodes of economic crisis in 22
countries in Latin America and Asia in the years since World War II they concluded that Much
of the conventional wisdom about the political impact of economic crises may be wrong
…..The severity of economic crisis - as measured in terms of inflation and negative growth –
bore no relationship to the collapse of regimes….(or, in democratic states, rarely) to an
outbreak of violence…In the cases of dictatorships and semi-democracies, the ruling elites
responded to crises by increasing repression (thereby using one form of violence to abort
another.)
Heg Adv
Notes
It is important to note here the difference between COMMERCIAL tech innovation and MILITARY
tech innovation. You can read the military args w/o taking out the tech tradeoff DA, but not the
commercial tech high args.
30 second explainer: backdoors kill tech b/c foreign customers leave, tech innovation is k2
maintain US primacy b/c we need to have tech before everyone else, heg k2 solve war
CX Questions
1NC No Tech Damage
Surveillance doesn’t harm US tech and the tech sector is high—their ev is
speculation and only we have hard data
Insider Surveillance 14
(Insider Surveillance. Insider Surveillance is the most widely read source of information on surveillance technologies for law
enforcement, government agencies, military intelligence, communications companies and technology leaders who together
safeguard national security and protect the public from criminals and terrorists. The publication reflects the expertise of the
intelligence, law enforcement and public policy communities on surveillance and is followed by members in over 130 nations —
from Washington, D.C. to London, Paris, Beijing, Moscow, Rome, Madrid, Berlin, Tokyo, Lahore, Delhi, Abu Dhabi, Rio de Janeiro,
Mexico City, Seoul and thousands of places in between. "De-Bunking the Myth of U.S. Tech Sales Lost Due to NSA," . 9-24-2014.
https://insidersurveillance.com/de-bunking-myth-of-u-s-tech-sales-lost-due-nsa///ghs-kw)
Flashback to October 2013. “The sky is falling! The sky is falling!” Customers worldwide are
furious about NSA spying. That means imminent doom for the U.S. tech industry. Offshore sales
will plummet as buyers drop U.S. tech products/services and buy local instead. The end is nigh!
News flash for Chicken Little: The sky’s still up there. It’s shining bright over a U.S. tech market that
in the past year has experienced almost unprecedented growth — largely thanks to foreign
sales. As to impending Armageddon for the tech sector, to date no one has positively identified a single nickel
of tech industry revenue or profit lost due to foreign customers’ purported anger over the NSA.
On the contrary, the U.S. technology and aligned sectors in defense have enjoyed a banner year. A few points to consider: U.S. tech
stocks are near an all-time high. The Morgan Stanley High-Technology Index 35, which includes Amazon, Apple, Google, Microsoft
and Netflix — among the most vociferous Internet and cloud companies blaming NSA for lost profits — today stands 23.4% higher
than its 52-week low one year ago when anti-surveillance furor reached its peak. In recent weeks the index has stood as high as 25%
above the October 2013 low point. Not too shabby for a sector supposedly on the ropes. Foreign
sales lead the march to
U.S. tech profits. According to an AP story posted after 2Q2014 earnings: “Technology trendsetters Apple Inc.,
Google Inc., Facebook Inc. and Netflix Inc. all mined foreign countries to produce earnings or
revenue that exceeded analysts’ projections in their latest quarters.” In the second quarter, Google
generated 58% of its revenue outside the U.S. Facebook continued to draw 55% of revenue from
overseas. Netflix added 1.1 million new foreign subscribers — double the number won in the U.S. and Canada during the second
quarter. Apple reported soaring sales of its iPhone in China, Russia, India and Brazil, offsetting
tepid growth in the U.S. Net net, the industry’s biggest gains came in the very markets that tech
leaders last year cited as being at risk. U.S. defense contractors fare best offshore. Faced with
dwindling U.S. Defense Department purchases — the U.S. hasn’t purchased a single new F-16 in the last 10 years — defense
suppliers’ decision to pursue foreign buyers has fueled a bonanza. Sales to Israel, Oman and Iraq
keep Lockheed Martin’s plant humming in F-16 production. Over at Sikorsky Aircraft, makers of
the Black Hawk helicopter, the company late last year reported a 30-year contract with Taiwan
valued at over US$1.0 billion. International sales at Boeing’s defense division comprise 24% of
the company’s $US33 billion in defense sales. To be sure, the defense market is a tough one. However, when
U.S. sales are lost it’s not because a foreign buyer was angry over NSA and decided to buy
weapons systems in-country. More often the answer is far simpler: competition from a major
non-U.S. player. Example: Turkey’s decision to “dis” Raytheon’s bid for a long range air defense system was a simple dollars
and cents matter: China, not exactly a bastion of human rights, won the contract. Russian and European companies were also among
the losers. No
one uttered a “peep peep” about the NSA. Defense executives don’t sit around
fretting about foreign sales supposedly lost due to U.S. spying. Their real worry is China, an
increasingly aggressive player in the defense systems market. The story of the U.S. tech and
defense industries’ rampage of profits over the last year — much and sometimes most of it
driven by foreign buyers — is borne out by the numbers: more sales, higher revenues and
equities prices. Those are all hard numbers, too, not guess work. The same can’t be said of the
tech leaders who don sackcloth and ashes when bewailing the imagined impact of the “NSA
scandal” on offshore sales — while growing rich in the same markets. Where, one might well ask, is the
documentation supporting the doom-mongers’ forecasts? Let’s travel back in time. The Open Technology Institute
Paper Beginning with a meeting of some 30 tech industry leaders with President Barack Obama last December, the cascade of
warnings gained mass. Soon after, pollsters and financial analysts chimed in, pointing to public surveys showing widespread global
anger over the NSA — and threats that foreign buyers would keep their tech wallets at home. The poster child of the complaints:
cloud computing. Tech
companies expressed grave concern that foreign customers would cease to
use U.S. cloud companies, many of which operate offshore data centers that would seem easy
targets for the NSA. As proof that this trend already had wings, analysts pointed to a Swiss cloud company — Artmotion —
which in June 2013 touted a sudden 45% surge in revenue, supposedly due to non-U.S. customers exiting more “vulnerable” services
provided by American companies, in favor of Artmotion. [More about Artmotion in a moment.] Similar charges dribbled into the
media during the first half of 2014. But the
crowning “blow,” if one wants to call it such, came in late July
with the publication of “Surveillance Costs: The NSA’s Impact on the Economy, Internet Freedom
and Cybersecurity,” a 35-page policy paper by the Open Technology Institute (OTI). Why the seven-month delay? One would presume that OTI wanted to take sufficient time to amass evidence of the disastrous impact of the NSA on
U.S. technology and the economy. The question is: Did they succeed? Frankly, the
end result of all OTI’s effort is
lame at best, scurrilous at worst. In the policy’s paper’s discussion of “Direct Economic Costs to
American Companies,” the authors quote widely from TIME Magazine, The Washington Post and
Congressional testimony on how “NSA Spying Could Cost U.S. Tech Giants Billions.” Cloud computing
is presented as the immediate victim. The OTI paper cites the example of Dropbox, Amazon Web Services
(AWS) and Microsoft Azure suffering severe losses in foreign cloud sales to Switzerland’s
Artmotion — due to foreign anger over the NSA. The source: An article published in The
International Business Times, “Companies Turn to Switzerland for Cloud Storage,” on July 4,
2013 — three weeks after the first NSA revelations by Edward Snowden. Describing Artmotion as
“Switzerland’s biggest offshore hosting company,” the article quotes the company’s CEO Mateo Meier claiming a 45% jump in
revenue during that period. Aspects
of the original article and the policy paper show how easily
speculation is presented as fact by sloppy authors eager to make a point without bothering to
check their facts: Nowhere in the International Business Times story is any evidence produced
showing AWS, Dropbox or Azure losing business. Nor is any concrete number on losses
presented. The closest the reporter can come is to aver: “However now services like Dropbox,
AWS and Azure are seen as potentially insecure. . ..” Seen by whom? The IBT doesn’t say. The OTI
policy paper cites the IBT article as the source for an assertion that “companies like Dropbox and Amazon were beginning to lose
business to overseas business.” Remember: the IBT didn’t cite any losses by these companies — it merely said they were “seen” [by
unnamed sources] “as potentially insecure.” It’s anybody’s guess whether Artmotion is Switzerland’s “biggest” (or “smallest”)
offshore hosting company. Artmotion is a privately held company. It does not provide any public data on finances, numbers of
employees or clients, or any other information that could be used to determine the company’s size. A 45% revenue gain in three
weeks would defy the odds for a large enterprise, so — to borrow the practice of speculating from our subject — it is most likely that
Artmotion is a smaller entrepreneurial venture led by a CEO who had the savvy to capitalize on the NSA scandal. Large
enterprise customers, who took years to trust the idea of handing over their data to third
party cloud providers, are notoriously slow to embrace change. The likelihood of FTSE1000
companies shifting cloud service providers — in three weeks! — is preposterous. Even Mom and
Pop cloud customers would scarcely be apt to change their minds and shift all their cloud-stored
data that quickly. Even assuming that the overnight 45% revenue boost claim is true, where is the proof tying this cash surge
to non-U.S. customers defecting from Amazon, Dropbox or Google to Artmotion? Answer: There is no proof. It’s pure
hearsay. If we’re picking on Artmotion overmuch, it’s for good cause. This case study is the most substantial “proof” in the entire
OTI paper. From there it degenerates into even more dubious assessments by analysts and industry “think tanks.” Of these, one of
the better studies is by the International Technology and Innovation Foundation (ITIF), generally hailed as a non-partisan group.
Published in August 2013, the report honestly states that at that early date, “the data are still thin — clearly this is a developing story
and perceptions will likely evolve.” But even ITIF resorts to “maybes” versus facts. Example: a projection that U.S. cloud computing
companies might lose US $21.5 billion by 2016, presuming that 10% of foreign customers flee, or up to US$ 35 billion assuming a
20% attrition rate. The basis for these assumptions: a survey by yet another think tank, The Cloud Security Alliance, which found 10%
of non-U.S. respondents saying they had cancelled a project with a U.S. cloud communications provider. And so it goes with the OTI
study. The authors
leap from speculation to fact, or quote studies based on assumptions by one
group that hinge on conclusions of yet another organization. All sources tend to be very “early
days,” when emotions on the NSA ran high. If the current numbers exist bearing out the case for NSA spying
damaging U.S. tech companies’ foreign sales, then why doesn’t OTI quote them? Instead, the farther one progresses into the OTI
policy paper, the more infatuated its authors become with wildly exaggerated projections of tech industry losses. Within a few
paragraphs of ITIF’s claims of cloud losses reaching $US 35 billion, we find a truly astounding quote from Forrester Research. Not to
be outdone by a mere think tank, the famous industry analyst group forecasts U.S. cloud company losses of “$US 180 billion” by
2016. That’s a good trick for an industry whose total growth was projected to reach just $US 210 billion — also by the year 2016 and
also by Forrester, just a few months earlier.
2NC No Tech Damage
Companies won’t leave the US—market is too large
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army
Judge Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military
assignments, including as the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States,
Chief of International Law for U.S. Army Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers
of 'Going Dark': Why Congress Must Require a Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
The risks related to “going dark” are real. When the President of the United States,60 the Prime Minister of the United Kingdom,61
and the Director of the FBI62 all publically express deep concerns about how this phenomenon will endanger their respective
nations, it is difficult to ignore. Today, encryption technologies are making it increasingly easy for individual users to prevent
even lawful government access to potentially vital information related to crimes or other national security threats. This evolution of
individual encryption capabilities represents a fundamental distortion of the balance between government surveillance authority
and individual liberty central to the Fourth Amendment. And balance is the operative word. The right of The People to be secure
against unreasonable government intrusions into those places and things protected by the Fourth Amendment must be vehemently
protected. Reasonable searches, however, should not only be permitted, but they should be mandated where necessary.
Congress has the authority to ensure that such searches are possible. While some argue that this
could cause American manufacturers to suffer, saddled as they will appear to be by the
“Snowden Effect,” the rules will apply equally to any manufacturer that wishes to do business
in the United States. Considering that the United States economy is the largest in the world, it
is highly unlikely that foreign manufacturers will forego access to our market in order to avoid
having to create CALEA-like solutions to allow for lawful access to encrypted data. Just as
foreign cellular telephone providers, such as T-Mobile, are active in the United States, so too will
foreign device manufacturers and other communications services adjust their technology to
comply with our laws and regulations. This will put American and foreign companies on an equal
playing field while encouraging ingenuity and competition. Most importantly, “the right of the people to be
secure in their persons, houses, papers, and effects” will be protected not only “against unreasonable searches and seizures,” but
also against attacks by criminals and terrorists. And is not this, in essence, the primary purpose of government?
1NC Tech High
US tech leadership is strong now, despite Asia’s rise in science—their ev
Segal 4
(Adam, director of the Program on Digital and Cyberspace Policy at the Council on Foreign
Relations (CFR), An expert on security issues, technology development, November/December
2004 Issue, “Is America Losing Its Edge,” https://www.foreignaffairs.com/articles/united-states/2004-11-01/america-losing-its-edge, BC)
The United States' global
primacy depends in large part on its ability to develop new technologies and
industries faster than anyone else. For the last five decades, U.S. scientific innovation and technological
entrepreneurship have ensured the country's economic prosperity and military power. It was
Americans who invented and commercialized the semiconductor, the personal computer, and
the Internet; other countries merely followed the U.S. lead.
Today, however, this technological edge-so long
taken for granted-may be slipping, and the most serious challenge is coming from Asia. Through competitive
tax policies, increased investment in research and development (R&D), and preferential
policies for science and technology (S&T) personnel, Asian governments are improving the
quality of their science and ensuring the exploitation of future innovations. The percentage of
patents issued to and science journal articles published by scientists in China, Singapore, South
Korea, and Taiwan is rising. Indian companies are quickly becoming the second-largest producers of application services in
the world, developing, supplying, and managing database and other types of software for clients around the world. South Korea
has rapidly eaten away at the U.S. advantage in the manufacture of computer chips and
telecommunications software. And even China has made impressive gains in advanced
technologies such as lasers, biotechnology, and advanced materials used in semiconductors,
aerospace, and many other types of manufacturing.
Although the United States’ technical
dominance remains solid, the globalization of research and development is exerting considerable pressures on the
American system. Indeed, as the United States is learning, globalization cuts both ways: it is both a potent catalyst of U.S.
technological innovation and a significant threat to it. The
United States will never be able to prevent rivals from developing
new technologies; it can remain dominant only by continuing to innovate faster than everyone else.
But this won't be easy; to keep its privileged position in the world, the United States must get better at
fostering technological entrepreneurship at home.
2NC Tech High
Unilateralism fails and US strong now
Richard N. Haass 13, President of the Council on Foreign Relations, 4/30/13, “The World
Without America,” http://www.project-syndicate.org/commentary/repairing-the-roots-of-american-power-by-richard-n--haass
Let me posit a radical idea: The most critical threat facing the United States now and for the foreseeable future is not a rising
China, a reckless North Korea, a nuclear Iran, modern terrorism, or climate change. Although all of these constitute potential or actual threats, the
biggest challenges facing the US are its burgeoning debt, crumbling infrastructure, second-rate primary
and secondary schools, outdated immigration system, and slow economic growth – in short, the
domestic foundations of American power. Readers in other countries may be tempted to react to this judgment with a dose of schadenfreude, finding more than
a little satisfaction in America’s difficulties. Such a response should not be surprising. The US and those representing it have been guilty of hubris (the US may often be the indispensable nation, but it would be
better if others pointed this out), and examples of inconsistency between America’s practices and its principles understandably provoke charges of hypocrisy. When America does not adhere to the principles that
it preaches to others, it breeds resentment. But, like most temptations, the urge to gloat at America’s imperfections and struggles ought to be resisted. People around the globe should be careful what they wish for.
America’s failure to deal with its internal challenges would come at a steep price. Indeed, the rest of the world’s
stake in American success is nearly as large as that of the US itself. Part of the reason is economic. The US economy still accounts for about one-quarter of global output. If US growth
accelerates, America’s capacity to consume other countries’ goods and services will increase, thereby
boosting growth around the world. At a time when Europe is drifting and Asia is slowing, only the US (or,
more broadly, North America) has the potential to drive global economic recovery. The US remains a unique
source of innovation. Most of the world’s citizens communicate with mobile devices based on technology developed in Silicon Valley; likewise, the Internet was made in America. More
recently, new technologies developed in the US greatly increase the ability to extract oil and
natural gas from underground formations. This technology is now making its way around the
globe, allowing other societies to increase their energy production and decrease both their
reliance on costly imports and their carbon emissions. The US is also an invaluable source of
ideas. Its world-class universities educate a significant percentage of future world leaders. More
fundamentally, the US has long been a leading example of what market economies and democratic politics
can accomplish. People and governments around the world are far more likely to become more open if
the American model is perceived to be succeeding. Finally, the world faces many serious challenges, ranging
from the need to halt the spread of weapons of mass destruction, fight climate change, and maintain a functioning
world economic order that promotes trade and investment to regulating practices in cyberspace,
improving global health, and preventing armed conflicts. These problems will not simply go
away or sort themselves out. While Adam Smith’s “invisible hand” may ensure the success of free markets, it is powerless in the
world of geopolitics. Order requires the visible hand of leadership to formulate and realize global
responses to global challenges. Don’t get me wrong: None of this is meant to suggest that the US can deal effectively with the world’s problems on its own.
Unilateralism rarely works. It is not just that the US lacks the means; the very nature of
contemporary global problems suggests that only collective responses stand a good chance of
succeeding. But multilateralism is much easier to advocate than to design and implement. Right now there is
only one candidate for this role: the US. No other country has the necessary combination of
capability and outlook. This brings me back to the argument that the US must put its house in order – economically, physically,
socially, and politically – if it is to have the resources needed to promote order in the world. Everyone should hope that it does:
The alternative to a world led by the US is not a world led by China, Europe, Russia, Japan, India, or any other
country, but rather a world that is not led at all. Such a world would almost certainly be characterized by chronic crisis and
conflict. That would be bad not just for Americans, but for the vast majority of the planet’s inhabitants.
Tech sector is growing
Grisham 2/10 (Preston Grisham, “United States Tech Industry Employs 6.5 Million in 2014”,
February 10th, 2015, https://www.comptia.org/about-us/newsroom/press-releases/2015/02/10/united-states-tech-industry-employs-6.5-million-in-2014)
Washington, D.C., February 10, 2015 – The U.S. tech industry added 129,600 net jobs between
2013 and 2014, for a total of nearly 6.5 million jobs in the U.S., according to Cyberstates 2015:
The Definitive State-by-State Analysis of the U.S. Tech Industry published by CompTIA. The
report represents a comprehensive look at tech employment, wages, and other key economic
factors nationally and state-by-state, covering all 50 states, the District of Columbia, and Puerto
Rico. This year’s edition shows that tech industry jobs account for 5.7 percent of the entire
private sector workforce. Tech industry employment grew at the same rate as the overall
private sector, 2 percent, between 2013 and 2014.
Growth was led by the IT services sector which
added 63,300 jobs between 2013 and 2014 and the R&D, testing, and engineering services
sector that added 50,700 jobs.
“The U.S. tech industry continues to make significant
contributions to our economy,” said Todd Thibodeaux, president and CEO, CompTIA. “The tech
industry accounts for 7.1 percent of the overall U.S. GDP and 11.4 percent of the total U.S.
private sector payroll. With annual average wages that are more than double that of the private
sector, we should be doing all we can to encourage the growth and vitality of our nation’s tech
industry.”
Tech spending increasing now despite projections
Seitz 1/30/15
(Patrick, 1/30/15, Investor’s Business Daily, “Software apps to continue dominating cloud sales,”
http://news.investors.com/technology-click/013015-736967-software-as-a-service-gets-lions-share-of-public-cloud-revenue.htm, 7/13/15, SM)
Public cloud computing services are a bright spot in the otherwise stagnant corporate information technology market, and software-as-a-service (SaaS) vendors are seen benefiting disproportionately in the years ahead.
Public cloud spending reached $67 billion in 2014 and is expected to hit $113 billion in 2018, Technology Business Research said in a report Wednesday.
"While the vast majority of IT companies remain plagued by low-single-digit revenue growth rates at best, investments in public cloud from software-centric vendors such as Microsoft and SAP are moving the corporate needle," TBR analyst Jillian Mirandi said in a statement.
Microsoft (NASDAQ:MSFT) is pushing the cloud development platform Azure and migrating Office customers to the cloud-based Office 365. SAP (NYSE:SAP) got a late start to the public cloud but has acquired SuccessFactors and Ariba to accelerate its efforts.
The second half of 2014 was marked by partnerships and integration of services from different vendors in the software-as-a-service sector. SaaS vendors like Salesforce.com (NYSE:CRM) and Workday (NYSE:WDAY) have also added cloud-based analytics applications, which have increased their appeal to business users, Mirandi said.
Software-as-a-service accounted for 62% of public cloud spending last year, and the percentage will decline only modestly in the years ahead. Technology Business Research estimates that SaaS will be 59.5% of public cloud spending in 2018.
Infrastructure-as-a-service (IaaS) is the second-largest category of public cloud spending, at 28.5% in 2014, but climbing to 30.5% in 2018. IaaS vendors include Amazon.com's (NASDAQ:AMZN) Amazon Web Services, Microsoft and Google (NASDAQ:GOOGL).
Platform-as-a-service (PaaS) is the third category, accounting for 9.5% of spending last year and projected to be 10% in 2018, TBR says. PaaS vendors include Google, Microsoft and Salesforce.com.
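For scale, TBR’s share percentages convert into per-segment dollar figures as follows (an illustrative computation; the inputs are the card’s numbers, nothing more):

# Convert TBR's market-share percentages into dollar estimates per segment.
spend_2014, spend_2018 = 67.0, 113.0  # total public cloud spend ($bn), per TBR
shares_2014 = {"SaaS": 0.62, "IaaS": 0.285, "PaaS": 0.095}
shares_2018 = {"SaaS": 0.595, "IaaS": 0.305, "PaaS": 0.10}

for segment in shares_2014:
    dollars_2014 = shares_2014[segment] * spend_2014
    dollars_2018 = shares_2018[segment] * spend_2018
    print(f"{segment}: ${dollars_2014:.1f}bn (2014) -> ${dollars_2018:.1f}bn (2018)")

Even SaaS, whose share shrinks from 62% to 59.5%, grows from roughly $41.5bn to $67.2bn in absolute terms -- every segment rises, which is the uniqueness claim this card supports.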
Tech industry spending high now
Columbus 14
(Louis, 2/24/14, Forbes, “The Best Cloud Computing Companies And CEOs To Work For In 2014,”
http://www.forbes.com/sites/louiscolumbus/2014/02/24/the-best-cloud-computing-companies-and-ceos-to-work-for-in-2014/, 7/17/15, SM)
IT decision makers’ spending on security technologies will increase 46% in 2015, with cloud computing increasing 42% and business analytics investments up 38%. Enterprise investments in storage will increase 36%, and for wireless & mobile, 35%.
Cloud computing initiatives are the most important project for the majority of IT departments today (16%) and are expected to cause the most disruption in the future. IDC predicts the majority of cloud computing’s disruption will be focused on improving service and generating new revenue streams.
These and other key take-aways are from recent IDG Enterprise research titled Computerworld Forecast Study 2015. The goal of the study was to determine IT priorities for 2015 in areas such as spending, staffing and technology. Computerworld spoke with 194 respondents, 55% of whom are in executive IT roles, 19% from mid-level IT, 16% in IT professional roles and 7% in business management. You can find the results and methodology of the study here.
Additional key take-aways from the study include:
Enterprises are predicting they will increase their spending on
security technologies by 46%, cloud computing by 42% with the greatest
growth in enterprises with over 1,000
employees (52%), 38% in business analytics, 36% for storage solutions and 35% for wireless & mobile. The following graphic
provides an overview of the top five tech spending increases in 2015:
Tech spending is through the roof now
Holland 1/26 (Simon Holland, “Marketing technology industry set for explosive revenue
gains”, 1/26/15 http://www.marketingtechnews.net/news/2015/jan/26/marketing-technology-industry-set-explosive-revenue-gains/)
Companies investing in marketing technology will continue to raise their budgets, with global vendor revenue forecasted to touch $32.2 billion by 2018.
The projections, part of an IDC webinar on the marketing software revolution, reveal a compound annual growth rate (CAGR) of 12.4% and total spend of $130 billion across the five-year stretch between 2014 and 2018.
Customer relationship management software is a sizable growth sector of marketing, with projections from IDC’s software tracker predicting CRM application revenue will reach $31.7 billion by 2018, a CAGR of 6.9%.
A MaaS revival
Most marketing solutions are available in the cloud, but some large businesses are acquiring these point solutions, investing in them and then turning them into a marketing as a service platform.
The MaaS, an industry segment bundling a tech platform, creative services and the IT services to run it, is making a comeback after economic uncertainty stunted investment in this area for so many years.
IDC’s view on marketing as a service platforms is that it will blend global media and marketing tech expenditure. There may have been little or no budget being attributed to this type of product in 2014, but IDC has forecasted increases in the run up to 2018.
Getting the investment in early can set a company up for a similar or larger return later down the road, a fact demonstrated by IDC that puts spend from digital marketing leaders at $14 million while achievers and contenders set aside $4.2 million and $3.1 million respectively.
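As a sanity check on the headline numbers: a 12.4% CAGR ending at $32.2 billion in 2018 implies a base of roughly $18 billion (a minimal sketch; the five-year window from a 2013 base is an assumption, since the card does not state the base year):

# Verify the CAGR arithmetic implied by the IDC figures quoted above.
revenue_2018 = 32.2   # forecast global vendor revenue in 2018 ($bn)
cagr = 0.124          # compound annual growth rate reported by IDC
years = 5             # assumed compounding window (2013 base -> 2018)

implied_base = revenue_2018 / (1 + cagr) ** years
print(f"Implied base-year revenue: ${implied_base:.1f}bn")   # ~$17.9bn

cumulative = sum(implied_base * (1 + cagr) ** t for t in range(1, years + 1))
print(f"Cumulative 2014-2018 spend: ${cumulative:.0f}bn")    # ~$129bn

The cumulative spend landing near the card’s quoted $130 billion suggests the figures are internally consistent under that assumption.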
The tech sector is growing now—employment
Snyder 2/5 (Bill Snyder, “The best jobs are in tech, and so is the job growth”, February 5th,
2015, http://www.infoworld.com/article/2879051/it-careers/the-best-jobs-are-in-tech-and-so-is-the-job-growth.html)
In 2014, IT employment grew by 2.4 percent. Although that doesn’t sound like much, it represents more than 100,000 jobs. If the projections by CompTIA and others hold up, the economy will add even more this year.
Tech dominates the best jobs in America
A separate report by Glassdoor, a large job board that includes employee-written reviews of companies and top managers, singled out 25 of the “best jobs in America,” and 10 of those were in IT. Judged by a combination of factors -- including earnings potential, career opportunities, and the number of current job listings -- the highest-rated tech job was software engineer, with an average base salary of $98,074.
In the last three months, employers have posted 104,828 openings for software engineers and developers on the Glassdoor job site, though many are no longer current. (Glassdoor combines the titles of software developers and software engineers, so we don't know how many of those positions were just for engineers.)
The highest-paid tech occupation listed on Glassdoor is solutions architect, with an average base pay of $121,657.
Looked at more broadly, the hottest tech occupation in the United States last year was Web developer, for which available jobs grew by 4 percent to a total of 235,043 jobs -- a substantial chunk of the 4.88 million employed tech workers, according to the U.S. Bureau of Labor Statistics.
As for tech support, jobs in that occupation increased by 2.5 percent to 853,256, which is a bit more than overall tech job growth of 2.4 percent.
Taken together, the two new reports provide more evidence that we can expect at least another year of buoyant employment prospects in IT -- and give rough guidelines of the skills you need to get a great job and the potential employers you might contact.
Hiring across the economy
Most striking is the shift in employer attitudes over the last year or two, says Tim Herbert, CompTIA’s vice president of research. “There’s less concern about the bottom dropping out,” he said. Even worst-case estimates by employers are not at all bad, he adds.
The survey found that 43 percent of the companies say they are understaffed, and 68 percent say they expect filling those positions will be “challenging or very challenging.” If that’s the case, supply and demand should push salaries even higher.
One of the most positive trends in last year’s employment picture is the broad wave of IT hiring stretching across different sectors of the economy. Companies that posted the largest number of online ads for IT-related jobs were Accenture, Deloitte, Oracle, General Dynamics, Amazon.com, JP Morgan, United Health, and Best Buy, according to Burning Glass Technologies Labor Insights, which tracks online advertising.
“Information technology now pervades the entire economy,” says CompTIA’s Herbert. What’s more, technologies like cloud computing and software as a service are cheap enough and stable enough for small and medium-sized businesses to adopt, which in turn creates even more job opportunities, he notes.
1NC Tech Alt Cause
Alt cause – loss of foreign investment is because the NSA surveils foreign
suspects; the Aff can only resolve domestic surveillance
Benner 14
(Katie, 12/19/14, BloombergView, “Microsoft and Google in a Post-Snowden World,” Katie Benner is a
columnist @ BloombergView reporting on companies, culture, and technology,
http://www.bloombergview.com/articles/2014-12-19/microsoft-and-google-in-a-post-snowden-world,
7/13/15, SM)
His documents revealed myriad NSA spy programs that hoovered up information on foreign suspects as
well as U.S. citizens. The agency had also pressured telecom companies like Verizon and Internet giants like Google to feed
customer data into the government's vast surveillance operation. As the Snowden revelations showed, the U.S. government was
also actively exploiting corporate security flaws to take whatever it wanted from those companies.
In the wake of all of that, tech
firms immediately tried to distance themselves from the NSA, even as the Snowden revelations tarnished their
reputations with corporate clients, consumers and governments worldwide. Companies warned that fallout from the Snowden
revelations would hurt their future earnings and, anecdotally, it seemed that global customers started to look for alternatives to U.S.
tech suppliers.
1NC Military High
Status quo solves military innovation—new programs
WashPo 11/16
(Robert Burns. "Hagel announces DOD plan to maintain U.S. military’s superiority," Washington Post. 11-16-2014.
https://www.washingtonpost.com/politics/hagel-announces-innovation-initiative-to-fend-off-risks-to-us-military-superiority/2014/11/16/e2257a42-6db5-11e4-8808-afaa1e3a33ef_story.html//ghs-kw)
Hagel announced a “defense innovation initiative” that he likened to historic and successful
campaigns during the Cold War to offset the military advantages of U.S. adversaries. He
described a “game-changing” strategy to sharpen American’s military edge in the face of budget
impasses on Capitol Hill. “We must change the way we innovate, operate and do business,” he told
a defense forum at the Ronald Reagan Presidential Library. In a memo to Pentagon leaders in which he outlined the initiative, Hagel
said the United States must not lose its commanding edge in military technology. “While we have been engaged in two large landmass wars over the last 13 years, potential adversaries have been modernizing their militaries, developing and proliferating
disruptive capabilities across the spectrum of conflict. This represents a clear and growing challenge to our military power,” he
wrote. Speaking just a short walk from Reagan’s tomb, Hagel invoked the late president’s legacy as a rebuilder of U.S. military
strength in the 1980s and cited Reagan’s famous call for the Soviets to tear down the Berlin Wall, which epitomized a divided Europe
and a world at risk of a new global war. “America and its allies prevailed over a determined Soviet adversary by coming together as a
nation — over decades and across party lines — to make long-term, strategic investments, including in innovation and reform of our
nation’s military,” he said. Those investments “ultimately helped force the Soviet military and Soviet regime to fold its hand.” In
separate remarks to the defense forum, the vice chairman of the Joint Chiefs of Staff, Adm. James A. Winnefeld Jr., said Russia and
China began reasserting themselves on the world stage to capitalize on America’s “distraction” in the long wars in Iraq and
Afghanistan. “In protecting our allies against potential mischief from these powers, we’ve always counted on our overmatch in
capability and capacity to offset the challenges of distance and initiative,” Winnefeld said. “That overmatch is now in jeopardy.”
Hagel, a Republican who served two terms in Congress as a senator from Nebraska, said the United States can no longer count on
outspending its rivals and potential adversaries. But long-standing overseas alliances and America’s reputation for dependability
require, he said, that the military be able to project power abroad — an expensive capability that he said is now at risk. “If this
capability is eroded or lost, we will see a world far more dangerous and unstable — far more threatening to America and our citizens
here at home than we have seen since World War II,” he said. Hagel said the United States cannot afford to relax or assume that the
military superiority it developed during the Cold War will automatically persist. “We
are not waiting for change to
come to us — we are taking the initiative, getting ahead of the changes we know are coming
and making the long-term investments we need for the future,” he said. Hagel said he is launching a
long-range research and development program to find and field breakthroughs in key
technology, including robotics, miniaturization and advanced manufacturing techniques such as 3-D
printing. He said the Pentagon will call on the private sector and on academia for help. “This program
will look toward the next decade and beyond,” he said. “In the near-term, it will invite some of
the brightest minds from inside and outside government to start with a clean sheet of paper and
assess what technologies and systems DOD ought to develop over the next three to five years.”
1NC Military Alt Cause
Alt cause—military budget cuts
Morrison 14
(Charles Morrison. Morrison received his MA in Security Studies from Georgetown University and his BA in IR from Tufts
University. "Technological superiority no longer sufficient for US military dominance," AEI. 8-5-2014.
http://www.aei.org/publication/technological-superiority-no-longer-sufficient-for-us-military-dominance///ghs-kw)
The panel outlines three disturbing trends that are helping to close this capability gap. For one, advanced military technologies—
many of which used to be America-only capabilities—are now proliferating to potential competitors. For instance, at least 75 states
are currently pursuing unmanned systems. Secondly, states
like China and Russia have focused their military
investments on countering US systems and exploiting their weaknesses. And finally, the US
government’s relative share of research and development spending has declined. As a result of
these developments, the report argues that the US “must now plan for battlefields that are more lethal, conflict that unfolds
more rapidly, and greatly restricted operational depth making sanctuary far more difficult to create and maintain.” If that wasn’t
depressing enough, the panel also warns that even
if the US is able to maintain a technological edge through
increased investment, “capability is not always a substitute for capacity.” In other words, if the US
military keeps shrinking, no amount of innovation or advanced technology will make up for
real losses in combat power. Yet, at the same time, without “significant investments” to maintain US
technological superiority, the Pentagon’s ability to meet national objectives will be greatly at risk. Fortunately,
policymakers can eliminate the false choice between capability and capacity now facing the Pentagon. As the panel recommends,
Congress and the President can immediately overturn the 2011 Budget Control Act and restore defense
spending to the plan set forth by Robert Gates in 2012. By itself, this step would be insufficient to rebuild American military
strength; however, without higher budgets, the Pentagon will increasingly face devastating
tradeoffs that will end up costing American lives. While restoring Pentagon spending to 2012 levels will not be easy, the NDP
makes clear that experts from both political parties now agree that higher defense budgets are a
national imperative. With higher funding levels, the Pentagon could get serious about military
modernization and begin to invest in the kind of 21st century military arsenal that raises the bar
for conflict and ensures the men and women of the American military never face a fair fight.
1NC Heg
Economic decline and the internal domestic conflict following it have made American
soft and hard power unsustainable and ineffective at current levels
Cedras, Undergraduate at the Department of Politics and International
Relations at the University of Reading, 14 (Jamina, 7-3-14, Academia, “Is US power in
decline?” , https://www.academia.edu/7516584/Is_US_power_in_decline)
A state’s ability to exercise both soft and hard power stems primarily from its intrastate stability and domestic capabilities. Given
that contemporary declinist rhetoric has centered on the economic rise of China, it is fundamental to note that the rise of one
country's power does not necessarily translate to the fall of another in absolute terms. The
decline of American power
in absolute terms may thus be made apparent through an internal examination of the United
States. Economically, politically, socially and environmentally, the US now faces an array of
complications that have burdened its role as the global hegemony. Granted these complications are nothing
new, they are worrisome given the interdependent nature of economies today. At present, the economic foundation of
US hegemony stems from the dollar’s role as the international system’s reserve currency (Layne,
2012: 418). However, given the US’s economic downturn post 2008 and the federal government’s
credit rating downgrade to AA+ in 2011, fears of dollar devaluation are not unwarranted. The
American budget and trade deficits have ballooned and discredited the US’s debt repayment
capabilities. Although it is essential to note that no country is without fault, surely it is not irrational to expect a
global hegemony to help solve an economic crisis and not initiate one. The US deficit shows no
signs of reduction within the foreseeable future and imperial overstretch has been costly for the
US national budget; a decline in US power is obvious from this perspective. The political rigidity
surrounding the legislative branch of government has also had a magnitude of implications for
the US economy. At present the US is exhibiting the greatest division over foreign policy between the Democrats and
Republicans since the Second World War (Trubowitz, 2012: 157). As Timothy Ash asserts: “The erosion of American
power is happening faster than most of us predicted - while the politicians behave like rutting stags
with locked antlers" (Timothy Garton Ash, 2013). One only needs to consider the 2013 Government shutdown and its
subsequent costs as confirmation of this. In addition to these monetary costs, the continual trend of rising
income inequality and the deep rooted concerns surrounding the education and healthcare
market alike have raised intrinsic equity concerns that may pose threats to future political
stability in the US, damaging both America’s soft and hard power.
2NC Heg
U.S. hegemony is unsustainable – credibility is declining rapidly while costs rise
– a shift toward “restraint” is key
Dizikes 7/09/14 – MIT News Office Analyst
Peter, “Time to rethink foreign policy?”, MIT News, http://newsoffice.mit.edu/2014/rethink-foreign-policy-0709//AS
The ongoing turmoil in
Iraq has prompted calls for a renewal of U.S. military action in that country, as
well as criticism from those who want to avoid further military commitment there. Among the dissenters: Barry Posen, an
MIT political scientist who has become an increasingly vocal critic of what he sees as excessive
hawkishness in U.S. foreign policy. Posen believes that U.S. long-term strategy relies too heavily on a
bipartisan commitment to military activism in order to pursue the goal of spreading liberal
democracy — what he calls the “liberal hegemony project” that dominates Washington. After
years of war in Iraq and Afghanistan without lasting stability to show for it, Posen says, it is time to use U.S. military
power more judiciously, with a narrower range of goals. Liberal hegemony “has performed
poorly in securing the United States over the last two decades, and given ongoing changes in the world it
will perform less and less well,” Posen writes in a new book, “Restraint: A New Foundation for U.S. Grand Strategy,”
published this month by Cornell University Press. “The strategy has been costly, wasteful, and
counterproductive.” Iraq and Afghanistan have been problematic not because of bad luck or bad decisions, Posen asserts, but
because such interventions are extremely unlikely to create sustainably peaceful polities of the sort
that foreign-policy activists envisioned. “I think they’re mistakes that are inherent to the [liberal
hegemony] project,” contends Posen, the Ford International Professor of Political Science and director of MIT’s Security
Studies Program.
A three-part grand strategy
In Posen’s view, the U.S. has three main international tasks that
should inform the country’s grand strategy in foreign affairs and military deployment. First, Posen
thinks it is almost inevitable that the U.S. and other large countries will seek a geopolitical balance of
power, a policy he regards as having centuries of precedent. “Other states will balance against
the largest state in the system,” he says. In this view, the U.S. does need to maintain an active and
well-honed military. But a theme of “Restraint” is that U.S. foreign policy starts from a position of strength, not
weakness: As an economically powerful, nuclear-equipped country screened by vast oceans, Posen believes, the U.S. naturally
has an extremely strong hand in international affairs, which it only weakens with wars such as
the one in Iraq. “It’s very hard for anybody to generate enough power to really affect us, but it’s an experiment that we’ve
never wanted to run historically, and I don’t really want to run it,” says Posen — who therefore thinks the U.S. should, when
necessary, act to block the rise of a hegemonic power in Eurasia. Second, Posen believes the
U.S. should be active in
limiting the proliferation of nuclear weapons, and in tracing their locations. Eliminating nuclear weapons altogether,
he thinks, is unrealistic. However, he says, “Our mission should be to do the best we can in terms of sanctions, technology control,
diplomacy, to keep proliferation slow, and to ensure, as best we can, that it’s states who have nuclear weapons, and not groups who
may not be deterrable.” Third, in a related point, Posen contends that the
U.S. needs to be active in limiting the
capabilities of terrorist groups, using “a mix of defensive and offensive means,” as he writes in the
book. At its most severe, that risk involves terrorists obtaining nuclear arms. Posen recommends a mix of intelligence and
counterterrorism activities as starting points for a sustained effort to reduce the potency of terror groups. But
can a policy
shift occur? “Restraint” has received praise from other foreign-policy scholars. Andrew Bacevich, a
political scientist at Boston University, calls it a “splendid achievement,” and says that Posen
“illuminates the path back toward good sense and sobriety.” Richard K. Betts, of Columbia University, calls it “a
realistic alternative to American overstretch.” Still, Posen acknowledges that calls for more selective use of U.S. force face an uphill
battle in Washington. “The vast tracts of the American foreign policy debate are populated with people of both parties who actually
agree on most things,” Posen says. “They all agree on the liberal hegemony project.” He wrote the book, he says, in part to see if it
were possible to craft an alternative approach in the realm of grand-strategy thinking, and then
to see how much traction such a view would receive. “A coherent alternative … is a tool of
change,” Posen says. “Even if you can’t win, you force the other side to think, explain, defend, and hold them to account.” Finally,
Posen thinks popular opinion has turned against military interventions in a way that was not the case a decade ago, when the Iraq
war was more widely regarded as a success. “Presently
public opinion is strikingly disenchanted with this
grand strategy,” Posen says. “There is a body politic out there that is much less hospitable to the
liberal hegemony project than it’s been.” Posen ascribes this less to a generalized war-weariness among the
American people than to an increasing lack of public confidence in the idea that these wars have created tangible gains. Too many
claims of success, he
says, have created a new “credibility gap,” using a phrase that originated during
the Vietnam War. “We treated [the public] to tale after tale of success, victory, progress, and
none of it seems to add up, after more than a decade,” Posen says. “This is interminable, and it’s not
credible. I think we’re back in the days of the credibility gap.”
History disproves effective deterrence
Kober ‘10 (Stanley Kober, Research Fellow in foreign policy studies at the Cato Institute, “The
Deterrence Illusion” http://www.cato.org/pub_display.php?pub_id=11898, June 13, 2010)
The world at the beginning of the 21st century bears an eerie — and disquieting — resemblance to Europe at the beginning of the
last century. That was also an era of globalisation. New technologies for transportation and communication were transforming the
world. Europeans had lived so long in peace that war seemed irrational. And they were right, up to a point. The first world war was
the product of a mode of rational thinking that went badly off course. The peace
of Europe was based on security
assurances. Germany was the protector of Austria-Hungary, and Russia was the protector of Serbia. The prospect of
escalation was supposed to prevent war, and it did — until, finally, it didn't. The Russians, who
should have been deterred — they had suffered a terrible defeat at the hands of Japan just a few
years before — decided they had to come to the support of their fellow Slavs. As countries
honoured their commitments, a system that was designed to prevent war instead widened it. We have also
been living in an age of globalisation, especially since the end of the cold war, but it too is increasingly being challenged. And just
like the situation at the beginning of the last century, deterrence is not working. Much is made, for example, of the
North Atlantic Treaty Organisation (NATO) invoking Article V — the famous "three musketeers" pledge that an attack on one
member is to be considered as an attack on all — following the terrorist attacks of September 11. But the United
States is the
most powerful member of NATO by far. Indeed, in 2001, it was widely considered to be a hegemon, a hyperpower.
Other countries wanted to be in NATO because they felt an American guarantee would provide security. And yet it was the US
that was attacked. This failure of deterrence has not received the attention it deserves. It is, after all,
not unique. The North Vietnamese were not deterred by the American guarantee to South
Vietnam. Similarly, Hezbollah was not deterred in Lebanon in the 1980s, and American forces
were assaulted in Somalia. What has been going wrong? The successful deterrence of the
superpowers during the cold war led to the belief that if such powerful countries could be deterred, then
lesser powers should fall into line when confronted with an overwhelmingly powerful adversary. It is plausible, but it
may be too rational. For all their ideological differences, the US and the Soviet Union observed red
lines during the cold war. There were crises — Berlin, Cuba, to name a couple — but these did not touch on emotional
issues or vital interests, so that compromise and retreat were possible. Indeed, what we may have missed in the west is
the importance of retreat in Soviet ideology. "Victory is impossible unless [the revolutionary
parties] have learned both how to attack and how to retreat properly," Lenin wrote in Left-Wing
Communism: An Infantile Disorder. When the Soviets retreated, the US took the credit.
Deterrence worked. But what if retreat was part of the plan all along? What if, in other words, the
Soviet Union was the exception rather than the rule? That question is more urgent because, in the post-cold war
world, the US has expanded its security guarantees, even as its enemies show they are not impressed. The Iraqi
insurgents
were not intimidated by President Bush's challenge to "bring 'em on". The Taliban have made an
extraordinary comeback from oblivion and show no respect for American power. North Korea is
demonstrating increasing belligerence. And yet the US keeps emphasising security through alliances.
"We believe that there are certain commitments, as we saw in a bipartisan basis to NATO, that need to be embedded in the DNA of
American foreign policy," secretary of state Hillary Clinton affirmed in introducing the new National Security Strategy. But that was
the reason the US was in Vietnam. It had a bipartisan commitment to South Vietnam under the Southeast Asia Treaty Organisation,
reaffirmed through the Tonkin Gulf Resolution, which passed Congress with only two dissenting votes. It didn't work, and found its
commitments were not embedded in its DNA. Americans turned against the war, Secretary Clinton among them. The great
great
powers could not guarantee peace in Europe a century ago, and the US could not guarantee it in
Asia a half-century ago.
No potential conflicts for hotspots to escalate
Fettweis ‘11 (Christopher J. Fettweis, Department of Political Science, Tulane University, Free
Riding or Restraint? Examining European Grand Strategy, Comparative Strategy, 30:316–332,
EBSCO, September 26, 2011)
Assertions that without the combination of U.S. capabilities, presence and commitments
instability would return to Europe and the Pacific Rim are usually rendered in rather vague language. If the
United States were to decrease its commitments abroad, argued Robert Art, “the world will become a more
dangerous place and, sooner or later, that will redound to America’s detriment.”53 From where would this danger
arise? Who precisely would do the fighting, and over what issues? Without the United States, would Europe
really descend into Hobbesian anarchy? Would the Japanese attack mainland China again, to see if they could fare better this time
around? Would the Germans and French have another go at it? In other words, where
exactly is hegemony keeping
the peace? With one exception, these questions are rarely addressed. That exception is in the Pacific Rim. Some analysts fear
that a de facto surrender of U.S. hegemony would lead to a rise of Chinese influence. Bradley Thayer worries that Chinese would
become “the language of diplomacy, trade and commerce, transportation and navigation, the internet, world sport, and global
culture,” and that Beijing would come to “dominate science and technology, in all its forms” to the extent that soon the world would
witness a Chinese astronaut who not only travels to the Moon, but “plants the communist flag on Mars, and perhaps other planets in
the future.”54 Indeed China is the only other major power that has increased its military spending since the end of the Cold War,
even if it still is only about 2 percent of its GDP. Such levels of effort do not suggest a desire to compete with, much less supplant, the
United States. The
much-ballyhooed, decade-long military buildup has brought Chinese spending up
to somewhere between one-tenth and one-fifth of the U.S. level. It is hardly clear that a restrained United
States would invite Chinese regional, much less global, political expansion. Fortunately one need not
ponder for too long the horrible specter of a red flag on Venus, since on the planet Earth, where war is no longer the dominant form
of conflict resolution, the threats posed by even a rising China would not be terribly dire. The dangers
contained in the terrestrial security environment are less severe than ever before. Believers in the pacifying power of hegemony
ought to keep in mind a rather basic tenet: When it comes to policymaking, specific
threats are more significant than
vague, unnamed dangers. Without specific risks, it is just as plausible to interpret U.S. presence
as redundant, as overseeing a peace that has already arrived. Strategy should not be based upon
vague images emerging from the dark reaches of the neoconservative imagination.
Overestimating Our Importance
One of the most basic insights of cognitive psychology provides the final reason to doubt the
power of hegemonic stability: Rarely are our actions as consequential upon their behavior as we perceive
them to be. A great deal of experimental evidence exists to support the notion that people (and therefore states) tend to
overrate the degree to which their behavior is responsible for the actions of others. Robert Jervis has
argued that two processes account for this overestimation, both of which would seem to be especially relevant in the U.S. case.55
First, believing that we are responsible for their actions gratifies our national ego (which is not small to begin with; the United States
is exceptional in its exceptionalism). The hubris of the United States, long appreciated and noted, has only grown with the collapse of
the Soviet Union.56 U.S. policymakers famously have comparatively little knowledge of—or interest in—events that occur outside of
their own borders. If
there is any state vulnerable to the overestimation of its importance due to the
fundamental misunderstanding of the motivation of others, it would have to be the United States.
Second, policymakers in the United States are far more familiar with our actions than they are with the decisionmaking processes of our allies. Try as we might, it is not possible to fully understand the threats, challenges,
and opportunities that our allies see from their perspective. The European great powers have domestic politics
as complex as ours, and they also have competent, capable strategists to chart their way forward. They react to many international
forces, of which U.S. behavior is only one. Therefore, for any actor trying to make sense of the action of others, Jervis notes, “in
the absence of strong evidence to the contrary, the
most obvious and parsimonious explanation is that he
was responsible.”57 It is natural, therefore, for U.S. policymakers and strategists to believe that the
behavior of our allies (and rivals) is shaped largely by what Washington does. Presumably Americans are at least as
susceptible to the overestimation of their ability as any other people, and perhaps more so. At the very least, political psychologists
tell us, we
are probably not as important to them as we think. The importance of U.S. hegemony in
contributing to international stability is therefore almost certainly overrated. In the end, one can never be sure
why our major allies have not gone to, and do not even plan for, war. Like deterrence, the hegemonic stability theory
rests on faith; it can only be falsified, never proven. It does not seem likely, however, that hegemony could fully
account for twenty years of strategic decisions made in allied capitals if the international system were not already a remarkably
peaceful place. Perhaps these states have no intention of fighting one another to begin with, and our commitments are redundant.
European great powers may well have chosen strategic restraint because they feel that their security is all but assured, with or
without the United States.
No US lashout
MacDonald ’11 (Paul K. MacDonald, Assistant Professor of Political Science at Williams College,
and Joseph M. Parent, Assistant Professor of Political Science at the University of Miami,
“Graceful Decline?: The Surprising Success of Great Power Retrenchment,” International
Security, Vol. 35, No. 4, p. 7-44, Spring 2011)
With regard to militarized disputes, declining great powers demonstrate more caution and
restraint in the use of force: they were involved in an average of 1.7 fewer militarized disputes in
the five years following ordinal change compared with other great powers over similar periods.67
Declining great powers also initiated fewer militarized disputes, and their disputes tended to
escalate to lower levels of hostility than the baseline category (see figure 2).68 These findings
suggest the need for a fundamental revision to the pessimist's argument regarding the war
proneness of declining powers.69 Far from being more likely to lash out aggressively, declining
states refrain from initiating and escalating military disputes. Nor do declining great powers
appear more vulnerable to external predation than other great powers. This may be because external
predators have great difficulty assessing the vulnerability of potential victims, or because
retrenchment allows vulnerable powers to effectively recover from decline and still deter
potential challengers.
Solvency
Notes
CX Questions
Segal ev says “Through competitive tax policies, increased investment in research and
development (R&D), and preferential policies for science and technology (S&T) personnel, Asian
governments are improving the quality of their science and ensuring the exploitation of future
innovations…the United States’ technical dominance remains solid.” If so, why is US tech leadership in danger?
1NC Circumvention
Circumvention – their evidence concedes NSA will force companies to build
backdoors
Trevor Timm 15, Trevor Timm is a Guardian US columnist and executive director of the
Freedom of the Press Foundation, a non-profit that supports and defends journalism dedicated
to transparency and accountability. 3-4-2015, "Building backdoors into encryption isn't only bad
for China, Mr President," Guardian,
http://www.theguardian.com/commentisfree/2015/mar/04/backdoors-encryption-china-apple-google-nsa)//GV
Want to know why forcing tech companies to build backdoors into encryption is a terrible idea? Look no further than President
Obama’s stark criticism of China’s plan to do exactly that on Tuesday. If only he would tell the FBI and NSA the same thing. In a
stunningly short-sighted move, the
FBI - and more recently the NSA - have been pushing for a new US
law that would force tech companies like Apple and Google to hand over the encryption keys
or build backdoors into their products and tools so the government would always have access to our communications. It was
only a matter of time before other governments jumped on the bandwagon, and China wasted no time in demanding the same from
tech companies a few weeks ago. As President Obama himself described to Reuters, China has proposed an expansive new “anti-terrorism” bill that “would essentially force all foreign companies, including US companies, to turn over to the Chinese government
mechanisms where they can snoop and keep track of all the users of those services.” Obama continued: “Those kinds of restrictive
practices I think would ironically hurt the Chinese economy over the long term because I don’t think there is any US or European
firm, any international firm, that could credibly get away with that wholesale turning over of data, personal data, over to a
government.” Bravo! Of course these are the exact arguments for why it would be a disaster for US government to force tech
companies to do the same. (Somehow Obama left that part out.) As Yahoo’s top security executive Alex Stamos told NSA director
Mike Rogers in a public confrontation last week, building backdoors into encryption is like “drilling a hole into a windshield.” Even if
it’s technically possible to produce the flaw - and we, for some reason, trust the US government never to abuse it - other countries
will inevitably demand access for themselves. Companies
will no longer be in a position to say no, and even
if they did, intelligence services would find the backdoor unilaterally - or just steal the keys
outright. For an example on how this works, look no further than last week’s Snowden revelation that the UK’s
intelligence service and the NSA stole the encryption keys for millions of Sim cards used by
many of the world’s most popular cell phone providers. It’s happened many times before too. Security expert
Bruce Schneier has documented with numerous examples, “Back-door access built for the good guys is routinely used by the bad
guys.” Stamos repeatedly (and commendably) pushed the NSA director for an answer on what happens when China or Russia also
demand backdoors from tech companies, but Rogers didn’t have an answer prepared at all. He just kept repeating “I think we can
work through this”. As Stamos insinuated, maybe Rogers should ask his own staff why we actually can’t work through this, because
virtually every technologist agrees backdoors just cannot be secure in practice. (If you want to further understand the details behind
the encryption vs. backdoor debate and how what the NSA director is asking for is quite literally impossible, read this excellent piece
by surveillance expert Julian Sanchez.) It’s downright bizarre that the US government has been warning of the grave cybersecurity
risks the country faces while, at the very same time, arguing that we should pass a law that would weaken cybersecurity and put
every single citizen at more risk of having their private information stolen by criminals, foreign governments, and our own. Forcing
backdoors will also be disastrous for the US economy as it would be for China’s. US tech companies - which already have suffered
billions of dollars of losses overseas because of consumer distrust over their relationships with the NSA - would lose all credibility
with users around the world if the FBI and NSA succeed with their plan. The White House is supposedly coming out with an official
policy on encryption sometime this month, according to the New York Times – but the President can save himself a lot of time and
just apply his comments about China to the US government. If he knows backdoors in encryption are bad for cybersecurity, privacy,
and the economy, why is there even a debate?
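To see concretely why “handing over the encryption keys” is the failure mode every technologist in this card warns about, consider a toy key-escrow scheme (a minimal illustrative sketch in Python using the third-party cryptography package; the escrow database and function names are hypothetical, not any vendor’s or agency’s actual design):

from cryptography.fernet import Fernet

# Each user encrypts with their own key -- but a "front door" mandate means a
# copy of every key also lands in a central escrow database.
escrow_db = {}

def provision_user(user_id):
    key = Fernet.generate_key()
    escrow_db[user_id] = key  # the mandated backdoor: a second copy of the key
    return Fernet(key)

alice = provision_user("alice")
token = alice.encrypt(b"private message")

# Whoever obtains escrow_db -- by court order, rogue insider, or outright theft
# (cf. the Sim card key heist described above) -- decrypts every user's traffic
# without ever touching their devices.
stolen = dict(escrow_db)
print(Fernet(stolen["alice"]).decrypt(token))  # b'private message'

The escrow store is a single point of failure whose security must hold against every adversary indefinitely, which is the substance of the “drilling a hole into a windshield” analogy.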
2NC Circumvention
Circumvention – Secure Data Act (specified in your solvency evidence/plan text)
provides exceptions for CALEA
Secure Data Act of 2015
(Wyden, Ron. Senator, D-OR. S. 135, known as the Secure Data Act of 2015, introduced in Congress 1/8/2015.
https://www.congress.gov/bill/114th-congress/senate-bill/135/text//ghs-kw)
(a) In General.—Except as provided in subsection (b), no agency may mandate that a manufacturer, developer, or seller of covered
products design or alter the security functions in its product or service to allow the surveillance of any user of such product or
service, or to allow the physical search of such product, by any agency. (b)
Exception.—Subsection (a) shall not
apply to mandates authorized under the Communications Assistance for Law Enforcement Act
(47 U.S.C. 1001 et seq.). (c) Definitions.—In this section— (1) the term “agency” has the meaning given the term in section
3502 of title 44, United States Code; and (2) the term “covered product” means any computer hardware, computer software, or
electronic device that is made available to the general public.
CALEA mandates backdoors
EFF 13
(Electronic Frontier Foundation. "The Government Wants A Backdoor Into Your
Online Communications," Electronic Frontier Foundation. 5-22-2013.
https://www.eff.org/deeplinks/2013/05/caleatwo//ghs-kw)
According to the New York Times, President Obama is "on the verge of backing" a proposal by the FBI to introduce legislation
dramatically expanding the reach of the
Communications Assistance for Law Enforcement Act, or CALEA.
CALEA forces telephone companies to provide backdoors to the government so that it can spy
on users after obtaining court approval, and was expanded in 2006 to reach Internet technologies like
VoIP. The new proposal reportedly allows the FBI to listen in on any conversation online, regardless of the technology used, by
mandating engineers build "backdoors" into communications software. We urge EFF supporters to tell the administration now to
stop this proposal, provisionally called CALEA II.
Loopholes exist for the FBI and NSA
Cushing 14
Tim Cushing, Techdirt contributor, 12-5-14, "Ron Wyden Introduces Legislation Aimed At
Preventing FBI-Mandated Backdoors In Cellphones And Computers," Techdirt,
https://www.techdirt.com/articles/20141204/16220529333/ron-wyden-introduces-legislation-aimed-preventing-fbi-mandated-backdoors-cellphones-computers.shtml//SRawal
Here's the
actual wording of the backdoor ban [pdf link], which has a couple of loopholes in it. (a) IN GENERAL.—
Except as provided in subsection (b), no agency may mandate that a manufacturer, developer, or seller of covered products design or alter the security
functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any
agency. Subsection
(b) presents the first loophole, naming the very act that Comey is pursuing to have
amended in his agency's favor. (b) EXCEPTION.—Subsection (a) shall not apply to mandates authorized
under the Communications Assistance for Law Enforcement Act (47 U.S.C. 1001 et seq.). Comey wants to
alter CALEA or, failing that, get a few legislators to run some sort of encryption-targeting legislation
up the Congressional flagpole for him. Wyden's bill won't thwart these efforts and it does leave the NSA free to
continue with its pre-existing homebrewed backdoor efforts -- the kind that don't require
mandates because they're performed off-site without the manufacturer's knowledge.
They will still have access – the government can still influence companies
Newman 14
Lily Hay Newman, 12-5-2014, "Senator Proposes Bill to Prohibit Government-Mandated
Backdoors in Smartphones," Slate Magazine,
http://www.slate.com/blogs/future_tense/2014/12/05/senator_wyden_proposes_secure_data_act_to_keep_government_agencies_from.html//SRawal
It's worth noting, though, that the Secure Data Act doesn't actually prohibit backdoors—it just
prohibits agencies from mandating them. There are a lot of other types of pressure government
groups could still use to influence the creation of backdoors, even if they couldn't flat-out
demand them. Here's the wording in the bill: "No agency may mandate that a manufacturer,
developer, or seller of covered products design or alter the security functions in its product or
service to allow the surveillance of any user of such product or service, or to allow the physical
search of such product, by any agency."
1NC No Solvency
No impact to backdoors, and there are already solutions to backdoors – their
evidence
Cohn 14
(Cindy Cohn, Legal Director of the Electronic Frontier Foundation, 9-26-14, “Nine Epic Failures of Regulating
Cryptography,” https://www.eff.org/deeplinks/2014/09/nine-epic-failures-regulating-cryptography, BC)
For those who weren't following digital civil liberties issues in 1995, or for those who have forgotten, here's a refresher list of why forcing companies to break their own privacy and security measures by installing a back door was a bad idea 15 years ago:

It will create security risks. Don't take our word for it. Computer security expert Steven Bellovin has explained some of the problems. First, it's hard to secure communications properly even between two parties. Cryptography with a back door adds a third party, requiring a more complex protocol, and as Bellovin puts it: "Many previous attempts to add such features have resulted in new, easily exploited security flaws rather than better law enforcement access." It doesn't end there. Bellovin notes:

Complexity in the protocols isn't the only problem; protocols require computer programs to implement them, and more complex code generally creates more exploitable bugs. In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called 'lawful intercept' mechanisms in the switch — that is, the features designed to permit the police to wiretap calls easily — were abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister's. This attack would not have been possible if the vendor hadn't written the lawful intercept code.

More recently, as security researcher Susan Landau explains, "an IBM researcher found that a Cisco wiretapping architecture designed to accommodate law-enforcement requirements — a system already in use by major carriers — had numerous security holes in its design. This would have made it easy to break into the communications network and surreptitiously wiretap private communications."

The same is true for Google, which had its "compliance" technologies hacked by China.

This isn't just a problem for you and me and millions of companies that need secure communications. What will the government itself use for secure communications? The FBI and other government agencies currently use many commercial products — the same ones they want to force to have a back door. How will the FBI stop people from un-backdooring their deployments? Or does the government plan to stop using commercial communications technologies altogether?

It won't stop the bad guys. Users who want strong encryption will be able to get it — from Germany, Finland, Israel, and many other places in the world where it's offered for sale and for free. In 1996, the National Research Council did a study called "Cryptography's Role in Securing the Information Society," nicknamed CRISIS. Here's what they said:

Products using unescrowed encryption are in use today by millions of users, and such products are available from many difficult-to-censor Internet sites abroad. Users could pre-encrypt their data, using whatever means were available, before their data were accepted by an escrowed encryption device or system. Users could store their data on remote computers, accessible through the click of a mouse but otherwise unknown to anyone but the data owner; such practices could occur quite legally even with a ban on the use of unescrowed encryption. Knowledge of strong encryption techniques is available from official U.S. government publications and other sources worldwide, and experts understanding how to use such knowledge might well be in high demand from criminal elements. — CRISIS Report at 303

None of that has changed. And of course, more encryption technology is more readily available today than it was in 1996. So unless the government wants to mandate that you are forbidden to run anything that is not U.S. government approved on your devices, they won't stop bad guys from getting access to strong encryption.

It will harm innovation. In order to ensure that no "untappable" technology exists, we'll likely see a technology mandate and a draconian regulatory framework. The implications of this for America's leadership in innovation are dire. Could Mark Zuckerberg have built Facebook in his dorm room if he'd had to build in surveillance capabilities before launch in order to avoid government fines? Would Skype have ever happened if it had been forced to include an artificial bottleneck to allow government easy access to all of your peer-to-peer communications? This has especially serious implications for the open source community and small innovators. Some open source developers have already taken a stand against building back doors into software.

It will harm US business. If, thanks to this proposal, US businesses cannot innovate and cannot offer truly secure products, we're just handing business over to foreign companies who don't have such limitations. Nokia, Siemens, and Ericsson would all be happy to take a heaping share of the communications technology business from US companies. And it's not just telecom carriers and VOIP providers at risk. Many game consoles that people can use to play over the Internet, such as the Xbox, allow gamers to chat with each other while they play. They'd have to be tappable, too.
Backdoor reform is key to solve, not abolishment – their evidence
Burger et al 14
(Eric Burger, Research Professor of Computer Science at Georgetown; L. Jean Camp, Associate Professor at the Indiana University School of Informatics and Computing; Dan Lubar, Emerging Standards Consultant at RelayServices; Jon M. Peha, Carnegie Mellon University; Terry Davis, MicroSystems Automation Group, “Risking It All: Unlocking the Backdoor to the Nation’s Cybersecurity,” IEEE-USA, 7/20/2014, pg. 1-5, Social Science Research Network, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2468604)//duncan
This paper addresses government policies that can influence commercial practices to weaken security in products and services sold on the commercial market. The debate on information surveillance for national security must include consideration of the potential cybersecurity risks and economic implications of the information collection strategies employed. As IEEE-USA, we write to comment on current discussions with respect to weakening standards or altering commercial products and services for intelligence or law enforcement. Any policy that seeks to weaken technology sold on the commercial market has many serious downsides, even if it temporarily advances the intelligence and law enforcement missions of facilitating legal and authorized government surveillance.

Specifically, we define and address the risks of installing backdoors in commercial products, introducing malware and spyware into products, and weakening standards. We illustrate that these are practices that harm America’s cybersecurity posture and put the resilience of American cyberinfrastructure at risk. We write as a technical society to clarify the potential harm should these strategies be adopted. Whether or not these strategies ever have been used in practice is outside the scope of this paper.

Individual computer users, large corporations and government agencies all depend on security features built into information technology products and services they buy on the commercial market. If the security features of these widely available products and services are weak, everyone is in greater danger. There recently have been allegations that U.S. government agencies (and some private entities) have engaged in a number of activities deliberately intended to weaken mass-market, widely used technology. Weakening commercial products and services does have the benefit that it becomes easier for U.S. intelligence agencies to conduct surveillance on targets that use the weakened technology, and more information is available for law enforcement purposes. On the surface, these motivations would appear reasonable. However, such strategies also inevitably make it easier for foreign powers, criminals and terrorists to infiltrate these systems for their own purposes. Moreover, everyone who uses backdoor technologies may be vulnerable, not just the handful of surveillance targets for U.S. intelligence agencies. It is the opinion of IEEE-USA’s Committee on Communications Policy that no entity should act to reduce the security of a product or service sold on the commercial market without first conducting a careful and methodical risk assessment. A complete risk assessment would consider the interests of the large swath of users of the technology who are not the intended targets of government surveillance.

A methodical risk assessment would give proper weight to the asymmetric nature of cyberthreats, given that technology is equally advanced and ubiquitous in the United States and in the locales of many of our adversaries. Vulnerable products should be corrected, as needed, based on this assessment. The next section briefly describes some of the government policies and technical strategies that might have the undesired side effect of reducing security. The following section discusses why the effect of these practices may be a decrease, not an increase, in security.

Government policies can greatly affect the security of commercial products, either positively or negatively. There are a number of methods by which a government might affect security negatively as a means of facilitating legal government surveillance. One inexpensive method is to exploit pre-existing weaknesses that are already present in commercial software, while keeping these weaknesses a secret. Another method is to motivate the designer of a computer or communications system to make those systems easier for government agencies to access. Motivation may come from direct mandate or financial incentives. There are many ways that a designer can facilitate government access once so motivated. For example, the system may be equipped with a “backdoor.” The company that creates it — and, presumably, the government agency that requests it — would “know” the backdoor, but not the product’s (or service’s) purchasers. The hope is that the government agency will use this feature when it is given authority to do so, but no one else will. However, creating a backdoor introduces the risk that other parties will find the vulnerability, especially when capable adversaries, who are actively seeking security vulnerabilities, know how to leverage such weaknesses.

History illustrates that secret backdoors do not remain secret, and that the more widespread a backdoor, the more dangerous its existence. The 1988 Morris worm, the first widespread Internet attack, used a number of backdoors to infect systems and spread widely. The backdoors in that case were a set of secrets then known only by a small, highly technical community. A single, putatively innocent error resulted in a large-scale attack that disabled many systems. In recent years, Barracuda had a completely undocumented backdoor that allowed high levels of access from the Internet addresses assigned to Barracuda. However, when it was publicized, as almost inevitably happens, it became extremely unsafe, and Barracuda’s customers rejected it.

One example of how attackers can subvert backdoors placed into systems for benign reasons occurred in the network of the largest commercial cellular operator in Greece. Switches deployed in the system came equipped with built-in wiretapping features, intended only for authorized law enforcement agencies. Some unknown attacker was able to install software and made use of these embedded wiretapping features to surreptitiously and illegally eavesdrop on calls from many cell phones, including phones belonging to the Prime Minister of Greece, a hundred high-ranking Greek dignitaries, and an employee of the U.S. Embassy in Greece, before the security breach finally was discovered. In essence, a backdoor created to fight crime was used to commit crime.
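Analyst note: the Barracuda example describes a reproducible anti-pattern: an undocumented account whose only protection is a check that the connection comes from IP ranges the vendor controls. The sketch below is a hypothetical reconstruction of that pattern, not Barracuda's actual code; the address range, account name, and function name are all invented for illustration.
```python
# Hypothetical reconstruction of an IP-allowlist "support backdoor",
# the anti-pattern described in the Barracuda example. NOT vendor code;
# all names and addresses below are invented for illustration.
import ipaddress

# Hard-coded into the shipped firmware image, so anyone who unpacks the
# image can read both the allowlist and the hidden account name.
VENDOR_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # fictional range
BACKDOOR_USER = "remote_support"                        # fictional account

def allow_login(username: str, source_ip: str) -> bool:
    """Grant access if the hidden account connects 'from the vendor'."""
    if username != BACKDOOR_USER:
        return False
    addr = ipaddress.ip_address(source_ip)
    # The only gate is the packet's claimed source address, which an
    # attacker on the path can spoof, and which trusts every host in
    # the vendor's range, compromised or not.
    return any(addr in net for net in VENDOR_NETS)

# Once publicized, the "secret" is just a constant in the firmware:
print(allow_login("remote_support", "203.0.113.45"))  # True - no password
```
This is the card's point in miniature: the backdoor's security rests entirely on the secret staying secret, and firmware secrets do not stay secret.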
Aff is insufficient and doesn’t solve – their author
Kehl et al 14 (Danielle Kehl is a Policy Analyst at New America’s Open Technology Institute (OTI). Kevin
Bankston is the Policy Director at OTI, Robyn Greene is a Policy Counsel at OTI, and Robert Morgus is a
Research Associate at OTI, “New America’s Open Technology Institute Policy Paper, Surveillance Costs:
The NSA’s Impact on the Economy, Internet Freedom & Cybersecurity,” July 2014// rck)
The U.S. government has already taken some limited steps to mitigate this damage and begin the slow, difficult process of rebuilding trust in the United States as a responsible steward of the Internet. But the reform efforts to date have been relatively narrow, focusing primarily on the surveillance programs’ impact on the rights of U.S. citizens. Based on our findings, we recommend that the U.S. government take the following steps to address the broader concern that the NSA’s programs are impacting our economy, our foreign relations, and our cybersecurity:
Strengthen privacy protections for both Americans and non-Americans, within the United States and extraterritorially.
Provide for increased transparency around government surveillance, both from the government and companies.
Recommit to the Internet Freedom agenda in a way that directly addresses issues raised by NSA surveillance, including moving toward international human-rights based standards on surveillance.
Begin the process of restoring trust in cryptography standards through the National Institute of Standards and Technology.
Ensure that the U.S. government does not undermine cybersecurity by inserting surveillance backdoors into hardware or software products.
Help to eliminate security vulnerabilities in software, rather than stockpile them.
Develop clear policies about whether, when, and under what legal standards it is permissible for the government to secretly install malware on a computer or in a network.
Separate the offensive and defensive functions of the NSA in order to minimize conflicts of interest.
2NC No Solvency
Aff is insufficient and won’t solve loss of foreign investors
Enderle 6/12
(Rob, 6/12/15, CIO, “US surveillance programs are killing the tech industry,” Rob is the president and
principal analyst of the Enderle Group, he has worked for IBM, Dell, Microsoft, Siemens, and Intel, MBA @
California State University, Long Beach, http://www.cio.com/article/2934887/privacy/u-s-surveillance-programs-are-killing-the-tech-industry.html, 7/13/15, SM)
The Information Technology & Innovation Foundation, ranked as the most authoritative science and technology think tank in the
U.S. (second in the world behind Max Planck Institutes of Germany), has just released its latest report on the impact of the existence
and disclosure of the broad NSA national and international spying programs.∂ It was initially reported that the revenue loss range
would be between $21.5 billion and $35 billion, mostly affecting U.S. cloud service providers. However, they have gone back and
researched the impact and found it to be both far larger and far broader than originally estimated. In fact, it appears the surveillance
programs could cause a number of U.S. technology firms to fail outright or to be forced into bankruptcy as they reorganize for
survival. The damage has also since spread to domestic aerospace and telephony service providers.∂ The programs identified in the
report are PRISM; the program authorized by the FISA Amendments act, which allowed search without the need for a warrant
domestically and abroad, and Bullrun; the program designed to compromise encryption technology worldwide.∂ The report ends in
the following recommendations:∂ Increase transparency about U.S. surveillance activities both
at home and
abroad.∂ Strengthen information security by opposing any government efforts to introduce backdoors in software or weaken
encryption.∂ Strengthen U.S. mutual legal assistance treaties (MLATs).∂ Work to establish international legal
standards for government access to data.∂ Complete trade agreements like the Trans Pacific Partnership that ban
digital protectionism, and pressure nations that seek to erect protectionist barriers to abandon those
efforts.∂ The 2014 survey indicates that 25 percent of companies in the UK and Canada plan to pull data
out of the U.S. Of those responding, 82 percent indicated they now look at national laws as the major deciding factor with regard
to where they put their data.∂ Software-as-a-Service (SaaS) company Birst indicated that its European customers are refusing to host
information in the U.S. for fear of spying.∂ Salesforce, another SaaS company, revealed that its German insurance client pulled out of
using the firm. In fact, Salesforce faced major short-term sales losses and suffered a $124 million deficit in the fiscal quarter after the
NSA revelations according to the report.∂ Cisco, the U.S. firm that leads the networking market, reported that sales was interrupted
in Brazil, China and Russia as a result of the belief that the U.S. had placed backdoors in its networking products. Cisco’s CEO, John
Chambers, tied his revenue shortfall to the NSA disclosure.∂ Servint, a U.S. Web Hosting company, reported losing half of its
international clients as a result of the NSA Disclosure.∂ Qualcomm, IBM, Microsoft and Hewlett-Packard have all reported significant
adverse revenue impact in China from the NSA disclosure.∂ A variety of U.S. companies including Cisco, McAfee/Intel, Apple and
Citrix Systems were all dropped from the approved list for the Chinese government as a result of the NSA disclosure.∂ But it isn’t
even just tech companies that have lost significant customers and revenues.∂ Boeing lost a major defense contract to Saab AB to
replace Brazil’s aging fighter jets due to the disclosure.∂ Verizon was dropped by a large number German government facilities for
fear Verizon would open them up to wiretapping and other surveillance.