Reg-Neg CP - Open Evidence Project

#WeWinCyberwar2.0 (ST)
Notes
Brought to you by KWei and Amy from the SWS heg lab.
Email me at ghskwei@gmail.com for help/with questions.
The thing about backdoor Affs is that all of their evidence will talk about past attacks. Press them on why
their scenario is different and how these past attacks prove that, empirically, there is no impact to break-ins through backdoors.
Also, a lot of their ev about mandating backdoors is in the context of future legislation, not the squo.
Also, their internal links are totally fabricated.
Links to networks, neolib, and the gender privacy K can be found in the generics.
Links
Some links I don’t have time to cut but that I think will have good args/cards:
Going dark terrorism links: http://judiciary.house.gov/_files/hearings/printers/112th/112-59_64581.PDF
Front doors CP: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes
Military DA i/l ev: https://cyberwar.nl/d/20130200_Offensive-Cyber-Capabilities-are-Needed-Becauseof-Deterrence_Jarno-Limnell.pdf
http://www.inss.org.il/uploadImages/systemFiles/MASA4-3Engc_Cilluffo.pdf
Military DA Iran impact:
http://www.sobiad.org/ejournals/journal_ijss/arhieves/2012_1/sanghamitra_nath.pdf
Military DA Syria impact: http://nationalinterest.org/commentary/syria-preparing-the-cyber-threat8997
T
T-Domestic
1NC
NSA spies on foreign corporations through backdoors
NYT 14
(David E. Sanger and Nicole Perlroth. "N.S.A. Breached Chinese Servers Seen as Security Threat," New York Times. 3-22-2014.
http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html//ghs-kw)
WASHINGTON — American officials have long considered Huawei, the Chinese telecommunications giant, a security threat,
blocking it from business deals in the United States for fear that the company would create “back doors” in its equipment that could allow the
Chinese military or Beijing-backed hackers to steal corporate and government secrets. But even as the United States made a public case about
the dangers of buying from Huawei, classified documents show that the
National Security Agency was creating its own back
doors — directly into Huawei’s networks. The agency pried its way into the servers in Huawei’s sealed
headquarters in Shenzhen, China’s industrial heart, according to N.S.A. documents provided by the former contractor Edward J.
Snowden. It obtained information about the workings of the giant routers and complex digital switches
that Huawei boasts connect a third of the world’s population, and monitored communications of the
company’s top executives. One of the goals of the operation, code-named “Shotgiant,” was to find any
links between Huawei and the People’s Liberation Army, one 2010 document made clear. But the plans went further: to
exploit Huawei’s technology so that when the company sold equipment to other countries — including both allies and nations that avoid buying
American products — the N.S.A. could roam through their computer and telephone networks to conduct surveillance and, if ordered by the
president, offensive cyberoperations.
NSA targets foreign systems with backdoors
Zetter 13
(Kim Zetter. "NSA Laughs at PCs, Prefers Hacking Routers and Switches," WIRED. 9-4-2013. http://www.wired.com/2013/09/nsa-routerhacking///ghs-kw)
THE NSA RUNS a massive, full-time hacking operation targeting foreign systems, the latest leaks from Edward
Snowden show. But unlike conventional cybercriminals, the agency is less interested in hacking PCs and Macs. Instead, America’s
spooks have their eyes on the internet routers and switches that form the basic infrastructure of the net, and are
largely overlooked as security vulnerabilities. Under a $652-million program codenamed “Genie,” U.S. intel agencies have hacked
into foreign computers and networks to monitor communications crossing them and to establish control
over them, according to a secret black budget document leaked to the Washington Post. U.S. intelligence agencies conducted 231 offensive
cyber operations in 2011 to penetrate the computer networks of targets abroad. This included not only installing covert “implants” in foreign
desktop computers but also on routers and firewalls — tens of thousands of machines every year in all. According to the Post, the government
planned to expand the program to cover millions of additional foreign machines in the future and preferred hacking routers to individual PCs
because it gave agencies access to data from entire networks of computers instead of just individual machines. Most of the hacks targeted the
systems and communications of top adversaries like China, Russia, Iran and North Korea and included activities around nuclear proliferation.
The NSA’s focus on routers highlights an often-overlooked attack vector with huge advantages for the intruder, says Marc Maiffret, chief
technology officer at security firm Beyond Trust. Hacking routers is an ideal way for an intelligence or military agency to maintain a persistent
hold on network traffic because the systems aren’t updated with new software very often or patched in the way that Windows and Linux
systems are. “No one updates their routers,” he says. “If you think people are bad about patching Windows and Linux (which they are) then
they are … horrible about updating their networking gear because it is too critical, and usually they don’t have redundancy to be able to do it
properly.” He also notes that routers don’t have security software that can help detect a breach. “The challenge [with desktop systems] is that
while antivirus don’t work well on your desktop, they at least do something [to detect attacks],” he says. “But you don’t even have an integrity
check for the most part on routers and other such devices like IP cameras.” Hijacking routers and switches could allow the NSA to do more than
just eavesdrop on all the communications crossing that equipment. It would also let them bring down networks or prevent certain
communication, such as military orders, from getting through, though the Post story doesn’t report any such activities. With control of routers,
the NSA could re-route traffic to a different location, or intelligence agencies could alter it for disinformation campaigns, such as planting
information that would have a detrimental political effect or altering orders to re-route troops or supplies in a military operation. According to
the budget document, the
CIA’s Tailored Access Programs and NSA’s software engineers possess “templates”
for breaking into common brands and models of routers, switches and firewalls. The article doesn’t say it, but
this would likely involve pre-written scripts or backdoor tools and root kits for attacking known but unpatched vulnerabilities in
these systems, as well as for attacking zero-day vulnerabilities that are yet unknown to the vendor and customers. “[Router software is]
just an operating system and can be hacked just as Windows or Linux would be hacked,” Maiffret says.
“They’ve tried to harden them a little bit more [than these other systems], but for folks at a place like the NSA or any other
major government intelligence agency, it’s pretty standard fare of having a ready-to-go backdoor for
your [off-the-shelf] Cisco or Juniper models.”
T-Surveillance
1NC
Backdoors are also used for cyberwarfare—not surveillance
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington Post, most
recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting lecturer at Princeton’s
Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic, and Middle East correspondent,
Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on temporary assignment in 2013 and 2014 to anchor
The Post's coverage of the NSA disclosures after receiving an archive of classified documents from Edward Snowden. Ellen Nakashima is a
national security reporter for The Washington Post. She focuses on issues relating to intelligence, technology and civil liberties. She
previously served as a Southeast Asia correspondent for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a
biography of Gore, and has also covered federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy
agencies mounted 231 offensive cyber-operations in 2011, documents show," Washington Post. 8-30-2013.
https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documentsshow/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
Sometimes an implant’s purpose is to create a back door for future access. “You pry open the window somewhere and leave it so when you come back the owner doesn’t know it’s unlocked, but you can get back in when you want to,” said one intelligence official, who was speaking generally about the topic and was not privy to the budget. The official spoke on the condition of anonymity to discuss sensitive technology. Under U.S. cyberdoctrine, these operations are known as “exploitation,” not “attack,” but they are essential precursors both to attack and defense. By the end of this year, GENIE is projected to control at least 85,000 implants in strategically chosen machines around the world. That is quadruple the number — 21,252 — available in 2008, according to the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were limited until recently by the need for human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of the 68,975 machines with active implants in 2011. For GENIE’s next phase, according to an authoritative reference document, the NSA has brought online an automated system, code-named TURBINE, that is capable of managing “potentially millions of implants” for intelligence gathering “and active attack.”
T-Surveillance (ST)
1NC
Undermining encryption standards includes commercial fines against illegal exports
Goodwin and Procter 14
(Goodwin Procter, law firm. “Software Companies Now on Notice That Encryption Exports May Be Treated More Seriously: $750,000
Fine Against Intel Subsidiary,” Client Alert, 10-15-2014. http://www.goodwinprocter.com/Publications/Newsletters/ClientAlert/2014/1015_Software-Companies-Now-on-Notice-That-Encryption-Exports-May-Be-Treated-More-Seriously.aspx//ghs-kw)
On October 8, 2014, the
Department of Commerce’s Bureau of Industry and Security (BIS) announced the
issuance of a $750,000 penalty against Wind River Systems, an Intel subsidiary, for the unlawful exportation of
encryption software products to foreign government end-users and to organizations on the BIS Entity
List. Wind River Systems exported its software to China, Hong Kong, Russia, Israel, South Africa, and
South Korea. BIS significantly mitigated what would have been a much larger fine because the company
voluntarily disclosed the violations. We believe this to be the first penalty BIS has ever issued for the unlicensed export of
encryption software that did not also involve comprehensively sanctioned countries (e.g., Cuba, Iran, North Korea, Sudan or Syria). This
suggests a fundamental change in BIS’s treatment of violations of the encryption regulations. Historically, BIS has resolved voluntarily disclosed
violations of the encryption regulations with a warning letter but no material consequence, and has shown itself unlikely to pursue such
violations that were not disclosed. This
fine dramatically increases the compliance stakes for software companies
— a message that BIS seemed intent upon making in its announcement. Encryption is ubiquitous in software products.
Companies making these products should reexamine their product classifications, export eligibility, and
internal policies and procedures regarding the export of software that uses or leverages encryption (even
open source or third-party encryption libraries), particularly where a potential transaction on the horizon — e.g., an
acquisition, financing, or initial public offering — will increase the likelihood that violations of these laws
will be identified. If you would like additional information about the issues addressed in this Client Alert, please contact Rich Matheny,
who chairs Goodwin Procter’s National Security & Foreign Trade Regulation Practice, or the Goodwin Procter attorney with whom you typically
consult.
CPs
Foreign Backdoors CP
CX
In the world of the AFF does the government no longer have access to backdoors? So we don’t use or
possess backdoors in the world of the AFF, right?
1NC
(KQ) Counterplan: the United States federal government should ban the creation of
backdoors as outlined in the Secure Data Act of 2015 but should not ban surveillance
conducted through backdoors, and should mandate clandestine corporate disclosure of
foreign-government-mandated backdoors to the United States federal government.
(CT) Counterplan: The United States federal government should not mandate the
creation of surveillance backdoors in products or request privacy keys, and should
terminate current backdoors created either by government mandates or government
requested keys but should not cease the use of backdoors.
Backdoors are inevitable—we’ll use backdoors created by foreign governments
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-partii-debate-merits//ghs-kw)
Still another approach is to let other governments do the dirty work. The computer scientists' report
cites the possibility of other sovereigns adopting their own extraordinary access regimes as a reason for
the U.S. to go slow: Building in exceptional access would be risky enough even if only one law
enforcement agency in the world had it. But this is not only a US issue. The UK government promises
legislation this fall to compel communications service providers, including US-based corporations, to
grant access to UK law enforcement agencies, and other countries would certainly follow suit. China has
already intimated that it may require exceptional access. If a British-based developer deploys a
messaging application used by citizens of China, must it provide exceptional access to Chinese law
enforcement? Which countries have sufficient respect for the rule of law to participate in an international exceptional access framework?
How would such determinations be made? How would timely approvals be given for the millions of new products with communications
capabilities? And how would this new surveillance ecosystem be funded and supervised? The US and UK governments have fought long and
hard to keep the governance of the Internet open, in the face of demands from authoritarian countries that it be brought under state control.
Does not the push for exceptional access represent a breathtaking policy reversal? I am certain that the
computer scientists are
correct that foreign governments will move in this direction, but I think they are misreading the consequences of this.
China and Britain will do this irrespective of what the United States does, and that fact may well
create potential opportunity for the U.S. After all, if China and Britain are going to force U.S.
companies to think through the problem of how to provide extraordinary access without
compromising general security, perhaps the need to do business in those countries will provide much
of the incentive to think through the hard problems of how to do it. Perhaps countries far less
solicitous than ours of the plight of technology companies or the privacy interests of their users will
force the research that Comey can only hypothesize. Will Apple then take the view that it can offer phones to
users in China which can be decrypted for Chinese authorities when they require it but that it's
technically impossible to do so in the United States?
2NC O/V
Counterplan solves 100% of the case—we mandate the USFG publicly stop creating
backdoors but instead use backdoors that are inevitably mandated by foreign nations
for surveillance—solves perception and doesn’t link to the net benefit—that’s Wittes
2NC Backdoors Inev
India has backdoors
Ragan 12
(Steve Ragan. Steve Ragan is a security reporter and contributor for SecurityWeek. Prior to joining the journalism world in 2005, he spent 15
years as a freelance IT contractor focused on endpoint security and security training. "Hackers Expose India's Backdoor Intercept Program,"
No Publication. 1-9-2012. http://www.securityweek.com/hackers-expose-indias-backdoor-intercept-program//ghs-kw)
Symantec confirmed with SecurityWeek on Friday that hackers did access source code from Symantec Endpoint Protection 11.0 and
Symantec Antivirus 10.2. According to a Symantec spokesperson, “SEP 11 was four years ago to be exact.” In addition, Symantec Antivirus 10.2
has been discontinued, though the company continues to service it. “We’re taking this extremely seriously and are erring on the side of caution
to develop a long-range plan to take care of customers still using those products,” Cris Paden, Senior Manager of Corporate Communications
at Symantec told SecurityWeek. Over the weekend, the story expanded. The Lords of Dharmaraja released a purported memo outlining the
intercept program known as RINOA, which earns
its name from the vendors involved - RIM, Nokia, and Apple. The
memo said the vendors provided India with backdoors into their technology in order for them to maintain
a presence in the local market space. India’s Ministry of Defense has “an agreement with all major
device vendors” to provide the country with the source code and information needed for their SUR
(surveillance) platform, the memo explains. These backdoors allowed the military to conduct
surveillance (RINOA SUR) against the US-China Economic and Security Review Commission. Personnel from Indian Naval Military
Intelligence were dispatched to the People’s Republic of China to undertake Telecommunications Surveillance (TESUR) using the RINOA
backdoors and CYCADA-based technologies.
China has backdoors in 80% of global communications
Protalinski 12
(Emil Protalinski. Reporter for CNet and ZDNet. "Former Pentagon analyst: China has backdoors to 80% of telecoms," ZDNet. 7-14-2012.
http://www.zdnet.com/article/former-pentagon-analyst-china-has-backdoors-to-80-of-telecoms///ghs-kw)
The Chinese government reportedly has "pervasive access" to some 80 percent of the world's
communications, thanks to backdoors it has ordered to be installed in devices made by Huawei and ZTE Corporation. That's
according to sources cited by Michael Maloof, a former senior security policy analyst in the Office of the
Secretary of Defense, who now writes for WND: In 2000, Huawei was virtually unknown outside China, but by 2009 it had grown to be
one of the largest, second only to Ericsson. As a consequence, sources say that any information traversing "any" Huawei equipped
network isn't safe unless it has military encryption. One source warned, "even then, there is no doubt that the
Chinese are working very hard to decipher anything encrypted that they intercept." Sources add that most
corporate telecommunications networks use "pretty light encryption" on their virtual private networks, or VPNs. I found out about Maloof's report
via this week's edition of The CyberJungle podcast. Here's my rough transcription of what he says, at about 18 minutes and 30 seconds: The
Chinese government and the People's Liberation Army are so much into cyberwarfare now that they
have looked at not just Huawei but also ZTE Corporation as providing through the equipment that they install in about 145 countries
around in the world, and in 45 of the top 50 telecom centers around the world, the potential for
backdooring into data. Proprietary information could be not only spied upon but also could be altered and in some cases could be
sabotaged. That's coming from technical experts who know Huawei, they know the company and they know the Chinese. Since that story came
out I've done a subsequent one in which sources tell me that it's
giving Chinese access to approximately 80 percent of
the world telecoms and it's working on the other 20 percent now.
China is mandating backdoors
Mozur 1/28
(Paul Mozur. Reporter for the NYT. "New Rules in China Upset Western Tech Companies," New York Times. 1-28-2015.
http://www.nytimes.com/2015/01/29/technology/in-china-new-cybersecurity-rules-perturb-western-tech-companies.html//ghs-kw)
HONG KONG — The Chinese
government has adopted new regulations requiring companies that sell
computer equipment to Chinese banks to turn over secret source code, submit to invasive audits and build so-called back
doors into hardware and software, according to a copy of the rules obtained by foreign technology companies that do billions of
dollars’ worth of business in China. The new rules, laid out in a 22-page document approved at the end of last year, are the first in a
series of policies expected to be unveiled in the coming months that Beijing says are intended to strengthen
cybersecurity in critical Chinese industries. As copies have spread in the past month, the regulations have heightened concern among foreign
companies that the authorities are trying to force them out of one of the largest and fastest-growing markets. In a letter sent Wednesday to a
top-level Communist Party committee on cybersecurity, led by President Xi Jinping, foreign business groups objected to the new policies and
complained that they amounted to protectionism. The groups, which include the U.S. Chamber of Commerce, called for “urgent discussion and
dialogue” about what they said was a “growing trend” toward policies that cite cybersecurity in requiring companies to use only technology
products and services that are developed and controlled by Chinese companies. The letter is the latest salvo in an intensifying tit-for-tat
between China and the United States over online security and technology policy. While the United States has accused Chinese military
personnel of hacking and stealing from American companies, China has pointed to recent disclosures of United States snooping in foreign
countries as a reason to get rid of American technology as quickly as possible. Although it is unclear to what extent the new rules result from
security concerns, and to what extent they are cover for building up the Chinese tech industry, the Chinese regulations go far beyond measures
taken by most other countries, lending some credibility to industry claims that they are protectionist. Beijing also has long used the Internet to
keep tabs on its citizens and ensure the Communist Party’s hold on power. Chinese
companies must also follow the new
regulations, though they will find it easier since for most, their core customers are in China. China’s Internet filters have increasingly
created a world with two Internets, a Chinese one and a global one. The new policies could further split the tech world, forcing hardware and
software makers to sell either to China or the United States, or to create significantly different products for the two countries. While
the
Obama administration will almost certainly complain that the new rules are protectionist in nature, the Chinese will
be able to make a case that they differ only in degree from Washington’s own requirements.
2NC AT Perm do Both
Permutation links to the net benefit—the AFF stops use of backdoors, that was 1AC
cross-ex
2NC AT Perm do the CP
The counterplan bans the creation of backdoors but not the use of them—that’s
different from the plan—that was cross-ex
The permutation is severance—that’s a voting issue:
1. NEG ground—makes the AFF a shifting target which makes it impossible to
garner offense—stop copying k AFFs, vote NEG to be Dave Strauss
2. Kills advocacy skills—they never have to defend implementation of an advocacy
Cyberterror Advantage CP
1NC
Counterplan: the United States federal government should substantially increase its
support for renewable energy technologies and grid decentralization.
Grid decentralization and renewables solve terror attacks
Lawson 11
(Lawson, Sean. Sean Lawson is an assistant professor in the Department of Communication at the University of Utah. He holds a PhD in
Science and Technology Studies from Rensselaer Polytechnic Institute, a MA in Arab Studies from Georgetown University, and a BA in
History from California State University, Stanislaus. “BEYOND CYBER-DOOM: Cyberattack Scenarios and the Evidence of History,” Mercatus
Center at George Mason University. Working Paper No. 11-01, January 2011. http://mercatus.org/sites/default/files/publication/beyondcyber-doom-cyber-attack-scenarios-evidence-history_1.pdf//ghs-kw)
Cybersecurity policy should promote decentralization and self-organization in efforts to prevent, defend
against, and respond to cyberattacks. Disaster researchers have shown that victims are often themselves the first
responders and that centralized, hierarchical, bureaucratic responses can hamper their ability to
respond in the decentralized, self-organized manner that has often proved to be more effective
(Quarantelli, 2008: 895–896). One way that officials often stand in the way of decentralized self-organization is by hoarding information (Clarke
& Chess, 2009: 1000–1001). Similarly, over the last 50 years, U.S.
military doctrine increasingly has identified
decentralization, self-organization, and information sharing as the keys to effectively operating in ever-more
complex conflicts that move at an ever-faster pace and over ever-greater geographical distances (LeMay &
Smith, 1968; Romjue, 1984; Cebrowski & Garstka, 1998; Hammond, 2001). In the case of preventing or defending against cyberattacks on
critical infrastructure, we must recognize that most cyber and physical infrastructures are owned by private actors. Thus, a
centralized,
military-led effort to protect the fortress at every point will not work. A combination of incentives,
regulations, and public-private partnerships will be necessary. This will be complex, messy, and difficult. But a
cyberattack, should it occur, will be equally complex, messy, and difficult, occurring instantaneously over global distances via a medium that is
almost incomprehensible in its complex interconnections and interdependencies. The
owners and operators of our critical
infrastructures are on the front lines and will be the first responders. They must be empowered to act.
Similarly, if the worst should occur, average citizens must be empowered to act in a decentralized, self-organized way to help themselves and others. In the case of critical infrastructures like the electrical
grid, this could include the promotion of alternative energy generation and distribution methods. In this
way, “Instead of being passive consumers, [citizens] can become actors in the energy network. Instead of waiting for
blackouts, they can organize alternatives and become less vulnerable to either terror or natural catastrophe”
(Nye, 2010: 203)
2NC O/V
Counterplan solves all of their grid and cyber-terrorism impacts—we mandate the
USFG provide incentives, regulations, and P3s for widespread adoption of alt energy
and grid decentralization—this means each building has its own microgrid, which
allows for local, decentralized responses to cyberterror attacks and solves their
impact—that’s Lawson
2NC CP>AFF
Only the CP solves—a centralized grid results in inevitable failures and kills the
economy
Warner 10
(Guy Warner. Guy Warner is a leading economist and the founder and CEO of Pareto Energy. "Moving U.S. energy policy to a decentralized
grid," Grist. 6-4-2010. http://grist.org/article/2010-06-03-moving-u-s-energy-policy-to-a-decentralized-grid-rethinking-our///ghs-kw)
And, while the development of renewable energy technology has sped up rapidly in recent years, the
technology to deliver this
energy to the places where it is most needed is decades behind. America’s current electricity
transmission and distribution grid was built more than a century ago. Relying on the grid to relay power from wind
farms in the Midwest to cities on the east and west coast is simply not feasible. Our dated infrastructure cannot handle the
existing load — power outages and disruptions currently cost the nation an estimated $164 billion each
year. Wind and solar power produce intermittent power, which, in small doses, has little impact on grid operations. As we introduce
increasingly larger amounts of intermittent power, our transmission system will require significant
upgrades and perhaps even a total grid infrastructure redesign, which could take decades and cost billions. With 9,200 power plants that
link homes and business via 164,000 miles of lines, a national retrofit is both cost-prohibitive and improbable. One solution to this
challenge is the development of microgrids. Also known as distributed generation, microgrids produce energy
closer to the user rather than transmitting it from remote power plants. Power is generated and stored
locally and works in parallel with the main grid, providing power as needed and utilizing the main grid at
other times. Microgrids offer a decentralized power source that can be introduced incrementally in
modules now without having to deal with the years of delay realistically associated with building central generation facilities (e.g. nuclear)
and their associated transmission and distribution system add-ons. There is also a significant difference in the up-front capital costs that are
ultimately assigned the consumer. Introducing generation capacity into a microgrid as needed is far less capital intensive, and some might
argue more economical, than building a new nuclear plant at a cost of $5-12 billion dollars.
Technological advancements in
connectivity mean that microgrids can now be developed for high energy use building clusters, such as
trading floors and hospitals, relieving stress on the macrogrid, and providing more reliable power. In fact,
microgrids can be viewed as the ultimate smart grid, providing local power that meets local needs and
utilizing energy sources, including renewables, that best fit the location and use profile. For example, on the
East Coast, feasibility studies are underway to retrofit obsolete paper mills into biomass fuel generators utilizing left over pulp wood. Pulp
wood, the waste left over from logging, can be easily pelletized, is inexpensive to produce, easy to transport, and has a minimal net carbon
output. Wood pellets are also easily adaptable to automated combustion systems, making them a valuable domestic resource that can
supplement and replace our use of fossil fuels, particularly in microgrids which can be designed to provide heating and cooling from these
biomass products.
2NC Terror Solvency
Decentralization solves terror threats
Verclas 12
(Verclas, Kristen. Kirsten Verclas works as International Program Officer at the National Association of Regulatory Utility Commissioners
(NARUC) in Washington, DC. She holds a BA in International Relations with a Minor in Economics from Franklin and Marshall College and an
MA in International Relations with a concentration in Security Studies from The Elliott School at The George Washington University. She also
earned an MS in Energy Policy and Climate from Johns Hopkins University in August 2013. "The Decentralization of the Electricity Grid –
Mitigating Risk in the Energy Sector ,” American Institute for Contemporary German Studies at John Hopkins University. 4-27-2012.
http://www.aicgs.org/publication/the-decentralization-of-the-electricity-grid-mitigating-risk-in-the-energy-sector///ghs-kw)
A decentralized electricity grid has many environmental and security benefits. Microgrids in combination with
distributed energy generation provide a system of small power generation and storage systems, which are
located in a community or in individual houses. These small power generators produce on average about 10 kW (for individual
homes) to 2 MW (for communities) of electricity. While connected to and able to feed excess energy into the grid, these generators are
simultaneously independent from the grid in that they can provide power even when power from the
main grid is not available. Safety benefits from a decentralized grid are immense, as it has built-in
redundancies. These redundancies are needed should the main grid become inoperable due to a natural
disaster or terrorist attack. Communities or individual houses can then rely on microgrids with distributed
electricity generation for their power supply. Furthermore, having less centralized electricity generation
and fewer main critical transmission lines reduces targets for terrorist attacks and natural disasters. Fewer people
would then be impacted by subsequent power outages. Additionally, “decentralized power reduces the obstacles to
disaster recovery by allowing the focus to shift first to critical infrastructure and then to flow outward to
less integrated outlets.”[10] Thus critical facilities such as hospitals or police stations would be the first to
have electricity restored, while non-essential infrastructure would have energy restored at a later date.
Power outages are not only dangerous for critical infrastructure, they also cost money to business and the economy overall. EPRI “reported that
power outages and quality disturbances cost American businesses $119 billion per year.”[11] Decentralized
grids are also more
energy efficient than centralized electricity grids because “as electricity streams through a power line a
small fraction of it is lost to various factors. The longer the distance the greater the loss.”[12] Savings that
are realized by having shorter transmission lines could be used to install the renewable energy sources close to
homes and communities. The decrease of transmission costs and the increase in efficiency would cause
lower electricity usage overall. A decrease in the need to generate electricity would also increase energy security—fewer imports of
energy would be needed. The U.S. especially has been concerned with energy dependence in the last decades; decentralized electricity
generation could be one of the policies to address this issue.
Decentralization solves cyberattacks
Kiger 13
(Patrick J. Kiger. "Will Renewable Energy Make Blackouts Into a Thing of the Past?,"
National Geographic Channel. 10-2-2013.
http://channel.nationalgeographic.com/american-blackout/articles/will-renewableenergy-make-blackouts-into-a-thing-of-the-past///ghs-kw)
The difference is that Germany’s grid of the future, unlike the present U.S. system, won’t rely on big power plants and long transmission lines.
Instead, Germany is creating a
decentralized “smart” grid—essentially, a system composed of many small,
potentially self-sufficient grids, that will obtain much of their power at the local level from renewable
energy sources, such as solar panels, wind turbines and biomass generators. And the system will be
equipped with sophisticated information and communications technology (ICT) that will enable it to
make the most efficient use of its energy resources. Some might scoff at the idea that a nation could depend entirely upon
renewable energy for its electrical needs, because both sunshine and wind tend to be variable, intermittent producers of electricity. But the
Germans plan to get around that problem by using “linked renewables”—that is, by combining multiple sources of renewable energy, which has
the effect of smoothing out the peaks and valleys of the supply. As Kurt Rohrig, the deputy director of Germany’s Fraunhofer Institute for Wind
Energy and Energy System Technology, explained in a recent article on Scientific American’s website:
"Each source of energy—be it
wind, sun or bio-gas—has its strengths and weaknesses. If we manage to skillfully combine the different
characteristics of the regenerative energies, we can ensure the power supply for Germany." A decentralized
“smart” grid powered by local renewable energy might help protect the U.S. against a catastrophic
blackout as well, proponents say. “A more diversified supply with more distributed generation inherently
helps reduce vulnerability,” Mike Jacobs, a senior energy analyst at the Union of Concerned Scientists, noted in a recent blog post on
the organization’s website. According to the U.S. Department of Energy’s SmartGrid.gov website, such a system would have the
ability to bank surplus electricity from wind turbines and solar panels in numerous storage locations
around the system. Utility operators could tap into those reserves if electricity generation ebbed.
Additionally, in the event of a large-scale disruption, a smart grid would have the ability to switch areas over to
power generated by utility customers themselves, such as solar panels that neighborhood residents
have installed on their roofs. By combining these "distributed generation" resources, a community could
keep its health center, police department, traffic lights, phone system, and grocery store operating
during emergencies, DOE’s website notes. "There are lots of resources that contribute to grid resiliency and
flexibility," Allison Clements, an official with the Natural Resource Defense Council, wrote in a recent blog post on the NRDC website.
"Happily, they are the same resources that are critical to achieving a clean energy, low carbon future."
Joel Gordes, electrical power research director for the U.S. Cyber Consequences Unit, a private-sector organization that investigates
terrorist threats against the electrical grid and other targets, also thinks that such a decentralized grid
"could carry benefits not only for protecting us to a certain degree from cyber-attacks but also providing power
during any number of natural hazards." But Gordes does offer a caveat—such a system might also offer more potential points of entry for
hackers to plant malware and disrupt the entire grid. Unless that vulnerability is addressed, he warned in an e-mail, "full deployment of [smart
grid] technology could end up to be disastrous."
Patent Reform Advantage CP
Notes
Specify reform + look at law reviews
Read the 500 bil card in the 1NC
Cut different versions w/ different mechanisms
1NC Comprehensive Reform
Counterplan: the United States federal government should comprehensively reform
its patent system for the purpose of eliminating non-practicing entities.
Patent trolls cost the economy half a trillion and counting—larger internal link to tech
and the economy
Lee 11
(Timothy B. Lee. Timothy B. Lee covers tech policy for Ars, with a particular focus on patent and copyright law, privacy, free speech, and
open government. While earning his CS master's degree at Princeton, Lee was the co-author of RECAP, a Firefox plugin that helps users
liberate public documents from the federal judiciary's paywall. Before grad school, he spent time at the Cato Institute, where he is an
adjunct scholar. He has written for both online and traditional publications, including Slate, Reason, Wired.com, and the New York Times.
When not screwing around on the Internet, he can be seen rock climbing, ballroom dancing, and playing soccer. He lives in Philadelphia. He
has a blog at Forbes and you can follow him on Twitter. "Study: patent trolls have cost innovators half a trillion dollars," Ars Technica. xx-xxxxxx. http://arstechnica.com/tech-policy/2011/09/study-patent-trolls-have-cost-innovators-half-a-trillion-bucks///ghs-kw)
By now, the story of patent
trolls has become well-known: a small company with no products of its own threatens
lawsuits against larger companies who inadvertently infringe its portfolio of broad patents. The scenario has
become so common that we don't even try to cover all the cases here at Ars. If we did, we'd have little time to write about much else. But
anecdotal evidence is one thing. Data is another. Three
Boston University researchers have produced a rigorous
empirical estimate of the cost of patent trolling. And the number is breath-taking: patent trolls ("non-practicing
entity" is the clinical term) have cost publicly traded defendants $500 billion since 1990. And the problem has
become most severe in recent years. In the last four years, the costs have averaged $83 billion per year. The study says
this is more than a quarter of US industrial research and development spending during those years.
Two of the study's authors, James Bessen and Mike Meurer, wrote Patent Failure, an empirical study of the patent system that has been widely
read and cited since its publication in 2008. They were joined for this paper by a colleague, Jennifer Ford. It's hard to measure the costs of
litigation directly. The
most obvious costs for defendants are legal fees and payouts to plaintiffs, but these
are not necessarily the largest costs. Often, indirect costs like employee distraction, legal uncertainty, and
the need to redesign or drop key products are even more significant. The trio use a clever method known as a stock
market event study to estimate these costs. The theory is simple: a company's stock price represents the stock market's best estimation of the
company's value. If the company's stock drops by, say, two percent in the days after a lawsuit is filed, then the market thinks the lawsuit will
cost the company two percent of its market capitalization. Of course, this wouldn't be a very rigorous technique if they were looking at a single
lawsuit. Any number of factors could have affected the firm's stock price that same week. Maybe the company released a bad earnings report
the next day. But with
a large sample of companies, these random factors should mostly cancel each other out,
leaving the market's rough estimate of how much patent lawsuits cost their targets. The authors used a
database of 1,630 patent troll lawsuits compiled by Patent Freedom. Because many of the lawsuits had multiple defendants,
there was a total of 4,114 plaintiff-defendant pairs. The median defendant over all of these pairs lost $20.4 million in market
capitalization, while the mean loss was $122 million.
2NC Solvency
Multi-pronged reform with fee recovery solves patent trolls
Hatch 15
(Senator Orrin Hatch. "Senator Hatch: It’s Time to Kill Patent Trolls for Good," WIRED. 3-16-2015.
http://www.wired.com/2015/03/opinion-must-finally-legislate-patent-trolls-existence///ghs-kw)
There is broad agreement—among both big and small businesses—that any serious solution must
include:
• Fee shifting, which will require patent trolls to pay legal fees when their suits are unsuccessful;
• Heightened pleading and discovery standards, which will raise the bar on litigation procedure, making it increasingly difficult for trolls to file frivolous lawsuits;
• Demand letter reforms, which will require those sending demand letters to be more specific and transparent;
• Stays of customer suits, which will allow a manufacturer’s case to move forward first, without binding the end user to the result of that case;
• A mechanism to enable recovery of fees, which will prevent insolvent plaintiffs from litigating and dashing.
Some critics argue that these proposals will help only large technology companies and might even hurt
startups and small businesses. In my discussions with stakeholders, however, I have repeatedly been
told that a multi-pronged approach that tackles each of these issues is needed to effectively combat
patent trolls across all levels of industry. These stakeholder discussions have included representatives
from the hotel, restaurant, retail, real estate, financial services, and high-tech industries, as well as startup and small business owners.
Enacting legislation on any topic is a major undertaking, and the added complexities inherent in patent
law make passing patent reforms especially challenging. Crucially, we will probably have only one
chance to do so for a long while, so whatever we do must work. We must not pass any bill that fails to
provide an effective deterrent against patent trolls at all stages of litigation.
It is my belief that any viable legislation must ensure that those who successfully defend against abusive
patent litigation and are awarded fees will actually get paid. Even when a patent troll is a shell company
with no assets, there are usually other parties with an interest in the litigation who do have assets.
These parties, however, often keep themselves beyond the jurisdiction of the courts. They reap benefits
if the plaintiff forces a settlement, but are protected from any liability if they lose.
Right now, that’s a win-win situation for these parties, and a lose-lose situation for America’s
innovators.
Because Congress cannot force parties outside a court’s jurisdiction to join in a case, we must instead
incentivize interested parties to do the right thing and pay court-ordered fee awards. This is why we
must pass legislation that includes a recovery provision. Fee shifting without recovery is like writing a
check on an empty account. It’s purporting to convey something that isn’t there. Only fee shifting
coupled with a recovery provision will stop patent trolls from litigating-and-dashing.
There is no question that American ingenuity fuels our economy. We must ensure that our patent
system is strong and vibrant and helps to protect our country’s premier position in innovation.
Reform solves patent trolling
Roberts 14
(Jeff John Roberts. Jeff reports on legal issues that impact the future of the tech industry, such as privacy, net neutrality and intellectual
property. He previously worked as a reporter for Reuters in Paris and New York, and his free-lance work includes clips for the Economist, the
New York Times and the Globe & Mail. A frequent guest on media outlets like NPR and Fox, Jeff is also a lawyer, having passed the bar in
New York and Ontario. "Patent reform is likely in 2015. Here’s what it could look like," No Publication. 11-19-2014.
https://gigaom.com/2014/11/19/patent-reform-is-likely-in-2015-heres-what-it-could-look-like///ghs-kw)
As patent scholar Dennis Crouch notes, the question is how far the new law will go. In particular, real
reform will depend on changing the economic asymmetries in patent litigation that allow trolls to flourish, and that lead troll
victims to simply pay up rather than engage in costly litigation. Here are some measures we are likely to see under
the Goodlatte bill, according to Crouch and legal sources like IAM and Law.com (subscription required): Fee-shifting: Right now,
trolls typically have nothing to lose by filing a lawsuit since they are shell companies with no assets. New
fee-shifting measures, however, could put them on the hook for their victims’ legal fees. Discovery
limits: Currently, trolls can exploit the discovery process — in which each side must offer up documents
and depositions — by drowning their targets in expensive and time-consuming requests. Limiting the
scope of discovery could take that tactic off the table. Heightened pleading requirements: Right now,
patent trolls don’t have to specify how exactly a company is infringing their technology, but can simply
serve cookie-cutter complaints that list the patents and the defendant. Pleading reform would force the
trolls to explain what exactly they are suing over, and give defendants a better opportunity to assess the
case. Identity requirements: This reform proposal is known as “real party of interest” and would make it
harder for those filing patent lawsuits (often lawyers working with private equity firms) to hide behind
shell companies, and require them instead to identify themselves. Crouch also notes the possibility of
expanded “post-grant” review, which gives defendants a fast and cheaper tool to invalidate bad patents
at the Patent Office rather than in federal court.
2NC O/V
The status quo patent system is hopelessly broken and allows patent trolls to game
the system by obtaining broad patents on concepts as generic as selling objects on the
internet—those firms sue innovators and startups who “violate” their patents, costing
the US economy half a trillion dollars and stifling innovation—that’s Lee
The counterplan eliminates patent trolls through a set of comprehensive reforms we’ll
describe below—solves their innovation arguments and independently is a bigger
internal link to innovation and the economy
Patent reform is key to prevent patent trolling that stifles innovation and reduces R&D
by half
Bessen 14
(James Bessen. Bessen is a Lecturer in Law at the Boston University School of Law.
Bessen was also a Fellow at the Berkman Center for Internet and Society. "The
Evidence Is In: Patent Trolls Do Hurt Innovation," Harvard Business Review. November
2014. https://hbr.org/2014/07/the-evidence-is-in-patent-trolls-do-hurtinnovation//ghs-kw)
Over the last two years, much has been written about patent
trolls, firms that make their money asserting patents
against other companies, but do not make a useful product of their own. Both the White House and
Congressional leaders have called for patent reform to fix the underlying problems that give rise to
patent troll lawsuits. Not so fast, say Stephen Haber and Ross Levine in a Wall Street Journal Op-Ed (“The Myth of the Wicked Patent
Troll”). We shouldn’t reform the patent system, they say, because there is no evidence that trolls are hindering innovation; these calls are being
driven just by a few large companies who don’t want to pay inventors. But there is evidence of significant harm. The White House and the
Congressional Research Service both cited many research studies suggesting that patent
litigation harms innovation. And three
new empirical studies provide strong confirmation that patent litigation is reducing venture capital
investment in startups and is reducing R&D spending, especially in small firms. Haber and Levine admit that
patent litigation is surging. There were six times as many patent lawsuits last year than in the 1980s. The
number of firms sued by patent trolls grew nine-fold over the last decade; now a majority of patent
lawsuits are filed by trolls. Haber and Levine argue that this is not a problem: “it might instead reflect a healthy, dynamic economy.”
They cite papers finding that patent trolls tend to file suits in innovative industries and that during the nineteenth century, new technologies
such as the telegraph were sometimes followed by lawsuits. But this does not mean that the explosion in patent litigation is somehow
“normal.” It’s true that plaintiffs, including patent trolls, tend to file lawsuits in dynamic, innovative industries. But that’s just because they
“follow the money.” Patent trolls tend to sue cash rich companies, and innovative new technologies generate cash. The economic burden of
today’s patent lawsuits is, in fact, historically unprecedented. Research
shows that patent trolls cost defendant firms
$29 billion per year in direct out-of-pocket costs; in aggregate, patent litigation destroys over $60
billion in firm wealth each year. While mean damages in a patent lawsuit ran around $50,000 (in today’s dollars) at the time the
telegraph, mean damages today run about $21 million. Even taking into account the much larger size of the economy today, the economic
impact of patent litigation today is an order of magnitude larger than it was in the age of the telegraph. Moreover, these
costs fall
disproportionately on innovative firms: the more R&D a firm performs, the more likely it is to be sued
for patent infringement, all else equal. And, although this fact alone does not prove that this litigation reduces firms’
innovation, other evidence suggests that this is exactly what happens. A researcher at MIT found, for example, that medical imaging
businesses sued by a patent troll reduced revenues and innovations relative to comparable companies
that were not sued. But the biggest impact is on small startup firms — contrary to Haber and Levine, most patent trolls
target firms selling less than $100 million a year. One
survey of software startups found that 41% reported
“significant operational impacts” from patent troll lawsuits, causing them to exit business lines or change
strategy. Another survey of venture capitalists found that 74% had companies that experienced “significant
impacts” from patent demands. Three recent econometric studies confirm these negative effects.
Catherine Tucker of MIT analyzed venture capital investing relative to patent lawsuits in different industries and different regions of the
country. Controlling for the influence of other factors, she estimates that lawsuits
from frequent litigators (largely patent
trolls) were responsible for a decline of $22 billion in venture investing over a five-year period. That
represents a 14% decline. Roger Smeets of Rutgers looked at R&D spending by small firms, comparing firms that were hit by extensive
lawsuits to a carefully chosen comparable sample. The comparison sample allowed him to isolate the effect of patent lawsuits from other
factors that might also influence R&D spending. Prior to the lawsuit, firms
devoted 20% of their operating
expenditures to R&D; during the years after the lawsuit, after controlling for other factors, they reduced that spending
by 3% to 5% of operating expenditures, representing about a 19% reduction in relative R&D spending. And researchers from
Harvard and the University of Texas recently examined R&D spending of publicly listed firms that had been sued by patent trolls. They
compared firms where the suit was dismissed, representing a clear win for the defendant, to those where the suit was settled or went to final
adjudication (typically much more costly). As in the previous paper, this comparison helped them isolate the effect of lawsuits from other
factors. They found that when lawsuits were not dismissed, firms
reduced their R&D spending by $211 million and
reduced their patenting significantly in subsequent years. The reduction in R&D spending represents a
48% decline. Importantly, these studies are initial releases of works in progress; the researchers will refine their estimates of harm over
the coming months. Perhaps some of the estimates may shrink a bit. Nevertheless, across a significant number of studies using
different methodologies and performed by different researchers, a consistent picture is emerging about
the effects of patent litigation: it costs innovators money; many innovators and venture capitalists
report that it significantly impacts their businesses; innovators respond by investing less in R&D and
venture capitalists respond by investing less in startups. Haber and Levine might not like the results of this research. But
the weight of the evidence from these many studies cannot be ignored; patent trolls do, indeed, cause harm. It’s time for
Congress to do something about it.
2NC Comprehensive Reform
Comprehensive reform solves patent trolling
Downes 7/6
(Larry Downes. Larry Downes is an author and project director at the Georgetown Center for Business and Public Policy. His new book, with
Paul Nunes, is “Big Bang Disruption: Strategy in the Age of Devastating Innovation.” Previous books include the best-selling “Unleashing the
Killer App: Digital Strategies for Market Dominance.” "What would 'real' patent reform look like?," CNET. 7-6-2015.
http://www.cnet.com/news/what-does-real-patent-reform-look-like///ghs-kw)
And a new report (PDF) from technology think tank Lincoln Labs argues that reversing
the damage to the innovation
economy caused by years of overly generous patent policies requires far stronger medicine than Congress is
considering or the courts seem willing to swallow on their own. The bills making their way through Congress, for example, focus almost entirely
on curbing abuses by companies that buy up often overly broad patents and then, rather than produce goods, simply sue manufacturers and
users they argue are infringing their patents. These nonpracticing
entities, referred to derisively as patent trolls, are
widely seen as a serious drag on innovation, particularly in fast-evolving technology industries. Trolling
behavior, according to studies from Stanford Law School professor and patent expert Mark Lemley, does
little to nothing to promote the Constitutional goal of patents to encourage innovation by granting
inventors temporary monopolies during which they can recover their investment. The House of Representatives
passed antitrolling legislation in 2013, but a Senate version was killed by then-Majority Leader Harry Reid (D-Nev.) in May 2014. "Patent
trolls," said Gary Shapiro, president and CEO of the Consumer Electronics Association, "bleed $1.5 billion a week from the US
economy -- that's almost $120 billion since the House passed a patent reform bill in December of 2013." A call for 'real' patent reform The
Lincoln Labs report agrees with these and other criticisms of patent trolling, but argues for more fundamental changes to
the system, or what the report calls "real" patent reform. The report, authored by former Republican Congressional staffer
Derek Khanna, urges a complete overhaul of the process by which the Patent Office reviews applications, as
well as the elimination of patents for software, business methods, and a special class of patents for
design elements -- a category that figured prominently in the smartphone wars. Khanna claims that the Patent Office has demonstrated
an "abject failure" to enforce fundamental legal requirements that patents only be granted for inventions that are novel, nonobvious and
useful. To
reverse that trend, the report calls on Congress to change incentives for patent examiners that
today weigh the scales in favor of approval, add a requirement for two examiners to review the most
problematic categories of patents, and allow crowdsourced contributions to Patent Office databases of
"prior art" to help filter out nonnovel inventions. Khanna estimates these reforms alone "would knock
out a large number of software patents, perhaps 75-90%, where the economic argument for patents is
exceedingly difficult to sustain." The report also calls for the elimination of design patents, which offer
protection for ornamental features of manufactured products, such as the original design of the Coca-Cola bottle.
Reg-Neg CP
1NC Shell
Text: the United States federal government should enter into a process of negotiated
rulemaking over _______<insert plan>______________ and implement the results of
negotiation.
The CP is plan minus—it doesn’t mandate the plan, just that a regulatory negotiations
committee is created to discuss the plan
And, it competes—reg neg is not normal means
USDA 06
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair marketing of U.S.
agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated June 6th 2014.
http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434) //ghs-kw)
How reg-neg
differs from “traditional” notice-and-comment rulemaking The “traditional” notice-and-comment rulemaking provided in the Administrative Procedure Act (APA) requires an agency planning to adopt a rule
on a particular subject to publish a proposed rule (NPRM) in the Federal Register and to offer the public an
opportunity to comment. The APA does not specify who is to draft the proposed rule nor any particular
procedure to govern the drafting process. Ordinarily, agency staff performs this function, with discretion to determine how
much opportunity is allowed for public input. Typically, there is no opportunity for interchange of views among
potentially affected parties, even where an agency chooses to conduct a hearing. The “traditional” notice-and-comment rulemaking can be very adversarial. The dynamics encourage parties to take extreme positions in their written and oral statements –
in both pre-proposal contacts and in comments on any published proposed rule – as well as to withhold information that might be
viewed as damaging. This adversarial atmosphere may contribute to the expense and delay associated with regulatory proceedings, as parties
try to position themselves for the expected litigation. What is lacking is an opportunity for the parties to exchange views, share information,
and focus on finding constructive, creative solutions to problems. In
negotiated rulemaking, the agency, with the
assistance of one or more neutral advisors known as “convenors,” assembles a committee of
representatives of all affected interests to negotiate a proposed rule. Sometimes the law itself will specify which
interests are to be included on the committee. Once assembled, the next goal is for members to receive training in interest-based problem-solving and consensus-decision making. They then must make sure that all views are heard and that each committee
member agrees to a set of ground rules for the negotiated rulemaking process. The ultimate goal is to reach
consensus on a text that all parties can accept. The agency is represented at the table by an official who is sufficiently senior
to be able to speak authoritatively on its behalf. Negotiating sessions are chaired by a neutral mediator or facilitator
skilled in assisting in the resolution of multiparty disputes. The Checklist—Advantages as well as Misperceptions Among the advantages, reg neg can reduce post-issuance contentiousness and litigation; among the misperceptions, reg neg does not relieve the agency from any of its obligations, does not eliminate the agency’s obligation to produce any economic analysis, paperwork or other required analyses, and does not require parties to set aside their legal or political rights as a condition of participation.
<Insert specific solvency advocate>
Reg neg solves—empirics prove
Knaster 10
(Alana Knaster is the Deputy Director of the Resource Management Agency. She was Senior Executive in the Monterey County Planning
Department for five years with responsibility for planning, building, and code enforcement programs. Prior to joining Monterey County,
Alana was the President of the Mediation Institute, a national non-profit firm specializing in the resolution of complex land use planning and
environmental disputes. Many of the disputes that she successfully mediated, involved dozens of stakeholder groups including government
agencies, major corporations and public interest groups. She served in that capacity for 15 years. Alana was Mayor of the City of Hidden
Hills, California from 1981-88 and represented her City on a number of regional planning agencies and commissions. She also has been on
the faculty of Pepperdine University Law School since 1989, teaching courses in environmental and public policy mediation. Knaster, A.
“Resolving Conflicts Over Climate Change Solutions: Making the Case for Mediation,” Pepperdine Dispute Resolution Law Journal, Vol 10, No
3, 2010. 465-501. http://law.pepperdine.edu/dispute-resolution-law-journal/issues/volume-ten/Knaster%20Article.pdf//ghs-kw)
Federal and international dispute resolution process models. There are also models in U.S. and Canadian
legislation supporting the use of consensus-based processes. These processes have been successfully
applied to resolve dozens of disputes that involved multiple stakeholder interests, on technically and
politically complex environmental and public policy issues. For example, the Negotiated Rulemaking Act of
1990 was enacted by Congress to formalize a process for negotiating contentious new regulations.118 The Act provides a process called “reg
neg” by which representatives of interest groups that could be substantially affected by the provisions
of a regulation, and agency staff negotiate the provisions.119 The meetings are open to the public; however,
the process does enable negotiators to hold private interest group caucuses. If a consensus is reached on the provisions of
the rule, the Agency commits to publish the consensus rule in the Federal Register for public
comment.120 The participants in the reg neg agree that as long as the final regulation is consistent with
what they have jointly recommended, they will not challenge it in court. The assumption is that parties will
support a product that they negotiated.121 Reg neg has been utilized by numerous federal agencies to
negotiate rules pertaining to a diverse range of topics including safe drinking water, fugitive gasoline
emissions, eligibility for educational loans, and passenger safety.122 In 1991, in Canada, an initiative was launched by
the National Task Force on Consensus and Sustainability to develop a guidance document that would govern how federal, provincial, and
municipal governments would address resource management disputes. The document that was negotiated, “Building Consensus for a
Sustainable Future: Guiding Principles,” was adopted by consensus in 1994.123 The document outlined principles for building a consensus and
process steps. The ten principles included provisions regarding inclusivity of the process (this was particularly important in Canada with respect
to inclusion of Aboriginal peoples), voluntary participation, accountability to constituencies, respect for diverse interests, and commitment to
any agreement adopted.124 The
consensus principles were subsequently utilized to resolve disputes over issues
that included sustainable forest management, siting of solid waste facilities, impacts of pulp mill
expansion, and economic diversification based on sustainable wildlife resources.125 The reg neg and
Consensus for Sustainable Future model represent codified mediated negotiation processes that have withstood
the test of legal challenge and have been strongly endorsed by the groups that have participated in
these processes.
1NC Ptix NB
Doesn’t link to politics—empirics prove
USDA 6/6
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair marketing of U.S.
agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated June 6th 2014 @
http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434)
History In 1990,
Congress endorsed use by federal agencies of an alternative procedure known as "negotiated rulemaking," also called "regulatory
negotiation," or "reg-neg." It has been used by agencies to bring interested parties into the rule-drafting process at an early stage, under circumstances that foster cooperative efforts to achieve solutions to regulatory problems.
Where successful, negotiated rulemaking can lead to better, more acceptable rules, based on a clearer understanding of the concerns of all those affected. Negotiated rules may be easier to
enforce and less likely to be challenged in litigation. The results of reg-neg usage by the federal
government, which began in the early 1980s, are impressive: large-scale regulators such as the Environmental Protection Agency, Nuclear Regulatory
Commission, Federal Aviation Administration, and the Occupational Safety and Health Administration used the process on many occasions. Building on these
positive experiences, several states, including Massachusetts, New York, and California, have also begun using the procedure for a wide range of rules. The very first negotiated rule-making was
convened by the Federal Mediation and Conciliation Service (FMCS) working with the Department of Transportation, the Federal
Aviation Administration, airline pilots and other interested groups to deal with regulations concerning flight and duty time for pilots. The negotiated rulemaking was a success and a draft rule was agreed upon that became the final
rule. Since that first reg-neg, FMCS has assisted in both the convening and facilitating stages in many such procedures at the Departments of Labor,
Health and Human Services (HRSA), Interior, Housing and Urban Development, and the
EPA, as well as state-level processes, and other forms of consensus-based decision-making programs such as public policy dialogues,
hearings, focus groups, and meetings.
1NC Fism NB
Failure to use reg neg results in a federalism crisis—REAL ID proves
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
b. A Cautionary Tale: The REAL ID Act The value
of negotiated rulemaking to federalism bargaining may be best
understood in relief against the failure of alternatives in federalism-sensitive [*57] contexts. Particularly
informative are the strikingly different state responses to the two approaches Congress has recently taken in tightening national security
through identification reform--one requiring
regulations through negotiated rulemaking, and the other through
traditional notice and comment. After the 9/11 terrorist attacks, Congress ordered the Department of Homeland Security (DHS) to
establish rules regarding valid identification for federal purposes (such as boarding an aircraft or accessing federal buildings). n291 Recognizing
the implications for state-issued driver's licenses and ID cards, Congress required DHS to use negotiated
rulemaking to forge
consensus among the states about how best to proceed. n292 States leery of the staggering costs associated with proposed
reforms participated actively in the process. n293 However, the subsequent REAL ID Act of 2005 repealed
the ongoing negotiated rulemaking and required DHS to prescribe top-down federal requirements for state-issued licenses. n294
The resulting DHS rules have been bitterly opposed by the majority of state governors, legislatures, and
motor vehicle administrations, n295 prompting a virtual state rebellion that cuts across the red-state/blue-state political divide. n296 No state met the December 2009 deadline initially contemplated by the statute,
and over half have enacted or considered legislation prohibiting compliance with the Act, defunding its
implementation, or calling for its repeal. n297 In the face of this unprecedented state hostility, DHS has
extended compliance deadlines even for those that did not request extensions, and bills have been introduced in both houses
of Congress to repeal the Act. n298 Efforts to repeal what is increasingly referred to as a "failed" policy have won
endorsements [*58] from organizations across the political spectrum. n299 Even the Executive Director of the ACLU, for
whom federalism concerns have not historically ranked highly, opined in USA Today that the REAL ID Act violates the Tenth Amendment. n300
US federalism will be modelled globally—solves human rights, free trade, war, and
economic growth
Calabresi 95
(Steven G. Calabresi is a Professor of Law at Northwestern University and is a graduate of the Yale Law School (1983) and of Yale College
(1980). Professor Calabresi was a Scholar in Residence at Harvard Law School from 2003 to 2005, and he has been a Visiting Professor of
Political Science at Brown University since 2010. Professor Calabresi was also a Visiting Professor at Yale Law School in the Fall of 2013.
Professor Calabresi served as a Law Clerk to Justice Antonin Scalia of the United States Supreme Court, and he also clerked for U.S. Court of
Appeals Judges Robert H. Bork and Ralph K. Winter. From 1985 to 1990, he served in the Reagan and first Bush Administrations working
both in the West Wing of the Reagan White House and before that in the U.S. Department of Justice. In 1982, Professor Calabresi cofounded The Federalist Society for Law & Public Policy Studies, a national organization of lawyers and law students, and he currently serves
as the Chairman of the Society’s Board of Directors – a position he has held since 1986. Since joining the Northwestern Faculty in 1990, he
has published more than sixty articles and comments in every prominent law review in the country. He is the author with Christopher S. Yoo
of The Unitary Executive: Presidential Power from Washington to Bush (Yale University Press 2008); and he is also a co-author with
Professors Michael McConnell, Michael Stokes Paulsen, and Samuel Bray of The Constitution of the United States (2nd ed. Foundation Press
2013), a constitutional law casebook. Professor Calabresi has taught Constitutional Law I and II; Federal Jurisdiction; Comparative Law;
Comparative Constitutional Law; Administrative Law; Antitrust; a seminar on Privatization; and several other seminars on topics in
constitutional law. Calabresi, S. G. “Government of Limited and Enumerated Powers: In Defense of United States v. Lopez, A Symposium:
Reflections on United States v. Lopez,” Michigan Law Review, Vol 92, No 3, December 1995. Ghs-kw)
We have seen that a
desire for both international and devolutionary federalism has swept across the world in recent
years. To a significant extent, this is due to global fascination with and emulation of our own American
federalism success story. The global trend toward federalism is an enormously positive development that greatly increases the
likelihood of future peace, free trade, economic growth, respect for social and cultural diversity, and
protection of individual human rights. It depends for its success on the willingness of sovereign nations
to strike federalism deals in the belief that those deals will be kept.233 The U.S. Supreme Court can do its part to
encourage the future striking of such deals by enforcing vigorously our own American federalism deal.
Lopez could be a first step in that process, if only the Justices and the legal academy would wake up to the importance of what is at stake.
Federalism solves economic growth
Bruekner 05
(Jan K. Bruekner is a Professor of Economics University of California, Irvine. He is a Member member of the Institute of Transportation
Studies, Institute for Mathematical Behavioral Sciences, and a former editor of the Journal of Urban Economics. Bruekner, J. K. “Fiscal
Federalism and Economic Growth,” CESifo Working Paper No. 1601, November 2005. https://www.cesifogroup.de/portal/page/portal/96843357AA7E0D9FE04400144FAFBA7C//ghs-kw)
The analysis in this paper suggests that faster
economic growth may constitute an additional benefit of fiscal
federalism beyond those already well recognized. This result, which matches the conjecture of Oates (1993) and the
expectations of most empirical researchers who have studied the issue, arises from an unexpected source: a
greater incentive to save when public-good levels are tailored under federalism to suit the differing
demands of young and old consumers. This effect grows out of a novel interaction between the rules of
public-good provision which apply cross-sectionally at a given time and involve the young and old
consumers of different generations, and the savings decision of a given generation, which is intertemporal in
nature. This cross-sectional/intertemporal interaction yields the link between federalism and economic growth. While it is encouraging that
the paper’s results match recent empirical findings showing a positive growth impact from fiscal
decentralization, additional theoretical work exploring other possible sources of such a link is clearly needed. The present results emerge
from a model based on very minimal assumptions, but exploration of richer models may also be fruitful.
US economic growth solves war, collapse ensures instability
National Intelligence Council, ’12 (December, “Global Trends 2030: Alternative Worlds”
http://www.dni.gov/files/documents/GlobalTrends_2030.pdf)
Big Stakes for the International System The optimistic scenario of a reinvigorated US economy would increase the prospects that the growing
global and regional challenges would be addressed. A stronger US economy dependent on trade in services and cutting-edge
technologies would be a boost for the world economy, laying the basis for stronger multilateral cooperation.
Washington would have a stronger interest in world trade, potentially leading a process of World Trade Organization reform
that streamlines new negotiations and strengthens the rules governing the international trading system. The US would be in a
better position to boost support for a more democratic Middle East and prevent the slide of failing
states. The US could act as balancer ensuring regional stability, for example, in Asia where the rise of multiple
powers—particularly India and China—could spark increased rivalries. However, a reinvigorated US would not necessarily be a
panacea. Terrorism, proliferation, regional conflicts, and other ongoing threats to the international order will be affected by the presence or absence of strong US leadership but are also driven
by their own dynamics. The US impact is much more clear-cut in the negative case in which the US fails to rebound
and is in sharp economic decline. In that scenario, a large and dangerous global power vacuum would be
created and in a relatively short space of time. With a weak US, the potential would increase for the
European economy to unravel. The European Union might remain, but as an empty shell around a fragmented continent. Progress on trade reform as well as financial
and monetary system reform would probably suffer. A weaker and less secure international community would reduce its aid
efforts, leaving impoverished or crisis-stricken countries to fend for themselves, multiplying the chances of grievance and peripheral
conflicts. In this scenario, the US would be more likely to lose influence to regional hegemons—China and India
in Asia and Russia in Eurasia. The Middle East would be riven by numerous rivalries which could erupt
into open conflict, potentially sparking oil-price shocks. This would be a world reminiscent of the 1930s
when Britain was losing its grip on its global leadership role.
2NC O/V
The counterplan convenes a regulatory negotiation committee to discuss the implementation of the plan. Stakeholders decide how and whether the plan is implemented, and the agency then implements that decision. This solves better than the AFF:
1. Collaboration—reg neg facilitates government-civilian cooperation, results in
greater satisfaction with regulations and better compliance after
implementation—social psychology and empirics prove
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental
law. She holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of
Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change
in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker
on collaborative and contractual approaches to governance. After leaving the White House, she advised the National Commission on
the Deepwater Horizon oil spill on topics of structural reform at the Department of the Interior. She has been appointed to the
Administrative Conference of the United States, the government think tank for improving the effectiveness and efficiency of federal
agencies, and is a member of the American College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative
Methods, Program Evaluation, Policy Analysis, and Public Choice and American College. She holds a PhD in Political Science from the
University of North Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the
Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9, 2000.
http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf/)
D. Compliance The compliance implications of consensus-based processes remain a matter of speculation.360 No one has yet produced
empirical data on the relationship between negotiated rulemaking and compliance, let alone data comparing the compliance implications
of negotiated and conventional rules.361 However, the Phase II results introduce interesting new findings into the debate. The
data
shows reg-neg participants to be significantly more likely than conventional rulemaking participants
to report the perception that others will be able to comply with the final rule.362 Perceiving that others will
comply might induce more compliance among competitors, along the lines of game theoretic models, at least until evidence of defection
emerges.363 Moreover,
to the extent that compliance failures are at least partly due to technical and
information deficits—rather than to mere political resistance—it seems plausible that reports of the
learning effect and more horizontal sharing of information might help to improve compliance in the
long run.364 The claim that reg-neg could improve compliance is consistent with social psychology
studies showing that in both legal and organizational settings, “fair procedures lead to greater
compliance with the rules and decisions with which they are associated.”365 Similarly, negotiated
rulemaking might facilitate compliance by bringing to the surface some of the contentious issues
earlier in the rulemaking process, where they might be solved collectively rather than dictated by the agency. Although
speculative, these hypotheses seem to fit better with Kerwin and Langbein’s data than do the rather negative expectations about
compliance. Higher
satisfaction could well translate into better long-term compliance, even if litigation
rates remained the same. Consistent with our contention that process matters, we expect it to matter to compliance as well. In
any event, empirical studies of compliance should no longer be so difficult to produce. A number of
negotiated rules are now several years old, with some in the advanced stages of implementation. A study of compliance might compare
numbers of enforcement actions for negotiated as compared to conventional rules, measured by notices of violation, or penalties, for
example.366 It might
investigate as well whether compliance methods differ between the two types of
rules: perhaps the enforcement of negotiated rules occurs more cooperatively, or informally, than
enforcement of conventional rules. Possibly, relationships struck during the negotiated rulemaking
make a difference at the compliance stage.367 To date, the effects of how the rule is developed on eventual compliance
remain a matter of speculation, even though it is ultimately an empirical issue on which both theory and empirical evidence must be
brought to bear.
And, we’ll win new net benefits here that ALL turn the aff
a. Delays—cp’s regulatory negotiation means that rules won’t be challenged during
the regulation creation process—empirics prove the CP solves faster than the AFF
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter
is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has
been involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more
than 50 papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the
University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on
environmental mediation and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court.
He has received multiple awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of
the Administrative Conference of the United States. Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated
Rulemaking,” December 1999. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Properly understood, therefore, the
average length of EPA’s negotiated rulemakings — the time it took EPA
to fulfill its goal — was 751 days or 32% faster than traditional rulemaking. This knocks a full year
off the average time it takes EPA to develop a rule by the traditional method. And, note these are
highly complex and controversial rules and that one of them survived Presidential intervention.
Thus, the dynamics surrounding these rules are by no means “average.” This means that reg neg’s
actual performance is much better than that. Interestingly and consistently, the average time for all of EPA’s reg negs
when viewed in context is virtually identical to that of the sample drawn by Kerwin and Furlong77 — differing by less than a month.
Furthermore, if all of the reg negs that were conducted by all the agencies that were included in Coglianese’s table78 were analyzed along
the same lines as discussed here,79 the
average time for all negotiated rulemakings drops to less than 685
days.80 No Substantive Review of Rules Based on Reg Neg Consensus. Coglianese argues that negotiated rules are actually subjected to
a higher incident of judicial review than are rules developed by traditional methods, at least those issued by EPA.81 But, like his analysis of
the time it takes to develop rules, Coglianese fails to look at either what happened in the negotiated rulemaking itself or the nature of any
challenge. For example, he makes much of the fact that the Grand Canyon visibility rule was challenged by interests that were not a party
to the negotiations;82 yet, he also points out that this rule was not developed under the Negotiated Rulemaking Act83 which explicitly
establishes procedures that are designed to ensure that each interest can be represented. This challenge demonstrates the value of
convening negotiations.84 And, it is significantly misleading to include it when discussing the judicial review of negotiated rules since the
process of reg neg was not followed. As for Reformulated Gasoline, the rule as issued by EPA did not reflect the consensus but rather was
modified by EPA under the direction of President Bush.85 There were, indeed, a number of challenges to the application of the rule,86 but
amazingly little to the rule itself given its history. Indeed, after the proposal was changed, many members of the committee continued to
meet in an effort to put Humpty Dumpty back together again, which they largely did; the
fact that the rule had been
negotiated not only resulted in a much better rule,87 it enabled the rule to withstand in large part a
massive assault. Coglianese also somehow attributes a challenge within the World Trade Organization to a shortcoming of reg neg
even though such issues were explicitly outside the purview of the committee; to criticize reg neg here is like saying surgery is not
effective when the patient refused to undergo it. While the Underground Injection rule was challenged, the committee never reached an
agreement88 and, moreover, the convening report made clear that there were very strong disagreements over the interpretation of the
governing statute that would likely have to be resolved by a Court of Appeals. Coglianese also asserts that the Equipment Leaks rule was
the subject of review; it was, but only because the Clean Air Act requires parties to file challenges in a very short period, and a challenger
therefore filed a defensive challenge while it worked out some minor details over the regulation. Those negotiations were successful and
the challenge was withdrawn. The Chemical Manufacturers Association, the challenger, had no intention of a substantive challenge.89
Moreover, a challenge to other parts of the HON should not be ascribed to the Equipment Leaks part of the rule. The agreement in the
Asbestos in Schools negotiation explicitly contemplated judicial review — strange, but true — and hence it came as no surprise and as no
violation of the agreement. As for the Wood Furniture Rule, the challenges were withdrawn after informal negotiations in which EPA
agreed to propose amendments to the rule.90 Similarly, the challenge to EPA’s Disinfectant By-Products Rule91 was withdrawn. In short,
the rules that have emerged from negotiated rulemaking have been remarkably resistant to substantive challenges. And, indeed, this far
into the development of the process, the standard of review and the extent to which an agreement may be binding on either a signatory
or someone whom a party purports to represent are still unknown — the speculation of many an administrative law class.92 Thus, here
too, Coglianese
paints a substantially misleading picture by failing to distinguish substantive
challenges to rules that are based on a consensus from either challenges to issues that were not the
subject of negotiations or were filed while some details were worked out. Properly understood, reg
negs have been phenomenally successful in warding off substantive review.
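A quick check on the timing math at the top of this Harter card (illustrative only — the 751-day average and the 32% figure are from the card; the back-calculation of the traditional baseline is my own):

```python
# Back-of-the-envelope check of Harter's timing claim (illustrative, not from the source).
reg_neg_days = 751   # average length of EPA's negotiated rulemakings, per the card
speedup = 0.32       # "32% faster than traditional rulemaking"

traditional_days = reg_neg_days / (1 - speedup)   # implied average for traditional rulemaking
days_saved = traditional_days - reg_neg_days
print(f"Implied traditional average: {traditional_days:.0f} days; time saved: {days_saved:.0f} days (~1 year)")
```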
B. More democratic—reg neg encourages private sector participation—means that
regulations aren’t unilaterally created by the USFG—CP results in a fair playing field
for the entirety of the private sector
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental
law. Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition
to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama
White House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on
collaborative and contractual approaches to governance. Laura Langbein is the Professor of Quantitative Methods, Program
Evaluation, Policy Analysis, and Public Choice and American College. She holds a PhD in Political Science from the University of North
Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,”
N.Y.U. Environmental Journal, Volume 9, 2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
2. Negotiated
Rulemaking Is Fairer to Regulated Parties than Conventional Rulemaking To test whether reg
neg was fairer to regulated parties, Kerwin and Langbein asked respondents whether EPA solicited their participation and
whether they believed anyone was left out of the process. They also examined how much the parties learned in each
process, and whether they experienced resource or information disparities. Negotiated rule participants were significantly more likely to say that the EPA
encouraged their participation than conventional rule participants (65% versus 33% respectively). Although a higher proportion of
conventional rulemaking participants reported that a party that should have been represented in the rulemaking was omitted, the difference is not
statistically significant. Specifically, "a majority of both negotiated and conventional rule participants believed that the parties who should have been involved
were involved (66% versus 52% respectively)." In addition, as reported above, participants in regulatory negotiations reported significantly more learning than
their conventional rulemaking counterparts. Indeed, the disparity between the two types of participants in terms of their reports about learning was one of
the study's most striking results. At the same time, the resource disadvantage of poorer, smaller groups was no greater in negotiated rulemaking than in
conventional rulemaking. So, while
smaller groups did report suffering from a lack of resources during
regulatory negotiation, they reported the same in conventional rulemakings; no disparity existed
between the two processes on this score. Finally, the data suggest that the agency is equally responsive to the
parties in both negotiated and conventional rulemakings. This result, together with the finding that participants in regulatory
negotiations perceived disproportionate influence to be about evenly distributed, suggests that reg neg is at least as fair to the parties as conventional
rulemaking. Indeed, because
participant learning was so much greater in regulatory negotiation, the
process may in fact be more fair.
2NC Solves Better
Reg neg is better for complex rules
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice and American College. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from
Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
4. Complex Rules Are More Likely To Be Settled Through Negotiated Rulemaking Recall that theorists disagree
over whether complex or simple issues are best suited for negotiation. The data suggest that negotiated and conventional rules differ in
systematic ways, indicating that EPA officials do not select just any rule for negotiation. When asked how the issues for rulemaking were
established, reg neg participants reported more often than their counterparts that the participants established at least some of them (44%
versus 0%). Conventional
rulemaking participants more often admitted to being uninformed of the process
for establishing issues (17% versus 0%) or offered that regulated entities set the issues (11% to 0%). A majority of both groups reported
that the EPA or the governing legislation established at least some of the issues. Kerwin and Langbein found that the types of issues
indeed appeared to differ between negotiated and conventional rules. When asked about the type of issues to be
decided, 52% of participants in conventional groups identified issues regarding the standard, including its level,
timing, or measurement (compared to 31% of negotiated rule participants), while 58% of the negotiating group identified
compliance and implementation issues (compared to 39% of participants in the conventional group). More reg neg
participants (53%) also cited compliance issues as causing the greatest conflict, compared to 32% of conventional
participants. Conventional participants more often reported that the rulemaking failed to resolve all of the issues (30%
versus 14%), but also more often reported that they encountered no "surprise" issues (74% versus 44%). Participants perceived negotiated
rules to be more complex, with more issues and more sides per issue than conventional rules. Kerwin and Langbein learned in interviews that
reg neg participants tended to develop a more detailed view about the issues to be decided than did
their conventional counterparts. The researchers interpreted this disparity in reported detail as a perception of complexity. To
measure it they computed a complexity score: the more issues and the more sides to each issue that respondents in a rulemaking could
identify, relative to the number of respondents, the more nuanced or complex the rulemaking. Using this calculation, the rules ranged in complexity
from 1.9 to 5.0, with a mean complexity score of 3.6. The mean complexity score for reg negs (4.1) was significantly higher than the
score (2.5) for conventional rulemaking. Reg neg participants also presented a clearer understanding of the issues to be decided than did
conventional participants. To test clarity, Kerwin and Langbein developed a measure that would reflect the striking variation among
respondents in the number of different issues and different sides they perceived in their rulemaking. Some respondents could identify very few
separate issues and sides (e.g., "the level of the standard is the single issue and the sides are business, environmentalists, and EPA"), while
others detected as many as four different issues, with three sides on some and two on others. Kerwin and Langbein's measurement was in units
of issue/sides, representing a combination of the two variables, the recognition of which they were measuring; the mentions ranged from 3 to
10 issue/sides, with a mean of 7.9. Negotiated rulemaking participants mentioned an average of 8.9 issue/sides, compared to an average of
6 issue/sides mentioned by their conventional counterparts, a statistically significant difference. To illustrate the difference between complexity
and clarity: If a party identified the compliance standard as the sole issue, but failed to identify a number of sub-issues, they would be classified
as having a clear understanding but not a complex one. similarly, if the party identified two sides (business vs. environment) without
recognizing distinctions among business participants or within an environmental coalition, they would also be classified as clear but not
complex in their understanding. The
differences in complexity might be explained by the higher reported rates of
learning by reg neg participants, rather than by differences in the types of rules processed by reg neg
versus conventional rulemaking. Kerwin and Langbein found that complexity and clarity were both positively
and significantly correlated with learning by respondents, but the association between learning and complexity/clarity
disappeared when the type of rulemaking was held constant. However, when the amount learned was held constant, the association between
complexity/clarity and the type of rulemaking remained positive and significant. This signifies that the
association between learning
and complexity/clarity was due to the negotiation process. In other words, the differences in
complexity/clarity are not attributable to higher learning but rather to differences between the processes. The
evidence is consistent with the hypothesis that issues selected for regulatory negotiation are different from and more
complicated than those chosen for conventional rulemaking. The data associating reg negs with
complexity, together with the finding that more issues settle in reg negs, are consistent with the
proposition that issues with more (and more diverse) sub-issues and sides settle more easily than
simple issues.
Reg neg is better than conventional rulemaking
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice and American College. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from
Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
In this article, we present an original analysis and summary of new empirical evidence from Neil Kerwin and Laura Langbein's two-phase study
of Environmental Protection Agency (EPA) negotiated rulemakings. n5 Their qualitative and (*62) quantitative data reveal more about reg neg
than any empirical study to date; although not published in a law review article until now, they unquestionably bear upon the ongoing debate
among legal scholars over the desirability of negotiating rules. Most importantly, this is the first study to compare participant attitudes toward
negotiated rulemaking with attitudes toward conventional rulemaking. The findings of the studies tend, on balance, to undermine arguments
made by the critics of regulatory negotiation and to bolster the claims of proponents. Kerwin and Langbein found that, according to participants
in the study, reg
neg generates more learning, better quality rules, and higher satisfaction compared to
conventional rulemaking. n6 At the same time, stakeholder influence on the agency remains about the
same using either approach. n7 Based on the results, we recommend more frequent use of regulatory
negotiation, accompanied by further comparative and empirical study, for the purposes of establishing
regulatory standards and resolving implementation and compliance issues. This recommendation
contradicts the prevailing view that the process is best used sparingly, n8 and even then, only for narrow
questions of implementation. n9
Reg negs solve better
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States. Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
The Primary Objective of Negotiated Rulemaking Is To Create Better and More Widely Accepted Rules. Coglianese argues throughout his article
that the primary benefits of negotiated rules were seen by its advocates as being the reduction in time and in the incidence of litigation.93
While both benefits have been realized, neither was seen by those who established it as the predominant factor in its use. For example, Peter
Schuck wrote an important early article in which he described the
benefits of negotiated solutions over those imposed by
a hierarchy.94 Schuck emphasized a number of shortcomings of the adjudicatory nature of hybrid rulemaking and many benefits of direct
negotiations among the affected parties. The tenor of his thinking is reflected by his statement, “a bargained
solution depends for its legitimacy not upon its objective rationality, inherent justice, or the moral
capital of the institution that fashioned it, but upon the simple fact that it was reached by consent of the
parties affected.”95 And, “it encourages diversity, stimulates the parties to develop relevant information about facts and values, provides
a counter-weight to concentrations of power, and advances participation by those the decisions affect.”96 Nowhere in his long list of benefits
was either speed or reduced litigation, except by implication of the acceptability of the results. My own article that developed the
recommendations97 on which the ACUS Recommendation,98 the Negotiated Rulemaking Act, and the practice itself are based describes the
anticipated benefits of negotiated rulemaking: Negotiating
has many advantages over the adversarial process. The
parties participate directly and immediately in the decision. They share in its development and concur in
it, rather than “participate” by submitting information that the decisionmaker considers in reaching the
decision. Frequently, those who participate in the negotiations are closer to the ultimate decisionmaking
authority of the interest they represent than traditional intermediaries that represent the interest in an
adversarial proceeding. Thus, participants in negotiations can make substantive decisions, rather than
acting as experts in the decisionmaking process. In addition, negotiation can be a less expensive means
of decisionmaking because it reduces the need to engage in defensive research in anticipation of
arguments made by adversaries. Undoubtedly the prime benefit of direct negotiations is that it enables
the participants to focus squarely on their respective interests.99 The article quotes John Dunlop, a true pioneer in
using negotiations among the affected interests in the public sphere,100 as saying “In our society, a rule that is developed with the involvement
of the parties who are affected is more likely to be accepted and to be effective in accomplishing its intended purposes.”101 Reducing
time and litigation exposure was not emphasized, if even mentioned directly. To be sure, the Congressional findings
that precede the Negotiated Rulemaking Act mention the savings of time and litigation, but they are largely the byproduct of far more significant benefits:102 (2) Agencies currently use rulemaking procedures that may
discourage the affected parties from meeting and communicating with each other, and may cause
parties with different interest to assume conflicting and antagonistic positions and to engage in
expensive and time-consuming litigation over agency rules. (3) Adversarial rulemaking deprives the
affected parties and the public of the benefits of face-to-face negotiations and cooperation in
developing and reaching agreement on a rule. It also deprives them of the benefits of shared
information, knowledge, expertise, and technical abilities possessed by the affected parties. (4)
Negotiated rulemaking, in which the parties who will be significantly affected by a rule participate
directly in the development of the rule, can provide significant advantages over adversarial rulemaking.
(5) Negotiated rulemaking can increase the acceptability and improve the substance of rules, making it
less likely that the affected parties will resist enforcement or challenge such rules in court. It may also
shorten the amount of time needed to issue final rules. Thus, those who were present at the creation
of reg neg sought neither expedition nor a shield against litigation. Rather, they saw direct
negotiations among the parties — a form of representational democracy not explicitly recognized in the Administrative Procedure
Act — as resulting in rules that are substantively “better” and more widely accepted. Those benefits
were seen as flowing from the participation of those affected who bring with them a practical insight
and expertise that can result in rules that are better informed, more tailored to achieving the actual
regulatory goal and hence more effective, and able to be enforced.
Reg negs are the best type of negotiations
Hsu 02
(Shi-Ling Hsu is the Larson Professor of Law at the Florida State University College of Law. Professor Hsu has a B.S. in Electrical Engineering
from Columbia University, and a J.D. from Columbia Law School. He also has an M.S. in Ecology and a Ph.D. in Agricultural and Resource
Economics, both from the University of California, Davis. Professor Hsu has taught in the areas of environmental and natural resource law,
law and economics, quantitative methods, and property. Prior to his current appointment, Professor Hsu was a Professor of Law and
Associate Dean for Special Projects at the University Of British Columbia Faculty Of Law. He has also served as an Associate Professor at the
George Washington University Law School, a Senior Attorney and Economist for the Environmental Law Institute in Washington D.C, and a
Deputy City Attorney for the City and County of San Francisco. “A Game Theoretic Approach to Regulatory Negotiation: A Framework for
Empirical Analysis,” Harvard Environmental Law Review, Vol 26, No 2, February 2002.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=282962//ghs-kw)
There are reasons to be optimistic about what regulatory negotiations can produce in even a troubled
administrative state. Jody Freeman noted that one important finding from the Kerwin and Langbein studies was that parties
involved in negotiated rulemaking were able to use the face-to-face contact as a learning experience.49
Barton Thompson has noted in his article on common-pool resources problems50 that one reason that resource users resist collective action
solutions is that it
is evidently human nature to blame others for the existence of resource shortages. That in
turn leads to an extreme reluctance by resource users to agree to a collective action solution if it involves
even the most minimal personal sacrifices. Thompson suggests that the one hope for curing resource users of such self-serving myopia is face-to-face contact and the exchange of views. The vitriol surrounding some environmental
regulatory issues suggests that there is a similar human reaction occurring with respect to some resource
conflicts.51 Solutions to environmental problems and resource conflicts on which regulated parties and environmental organizations
hold such strong and disparate views may require face-to-face contact to defuse some of the tension and remove some of
the demonization that has arisen in the these conflicts. Reinvention, with the emphasis on negotiations and face-to-face
contact, provides such an opportunity. 52 Farber has argued for making the best of this trend towards regulatory negotiation
characterizing negotiated rulemaking and reinvention. 53 Faced with the reality that some negotiation will inevitably take place
because of the slippage inherent in our system of regulation, Farber argues that the best model for allowing it
to go forward is a bilateral one. A system of bilateral negotiation would clearly be superior to a system
of self-regulation, as such a Farber has argued for making the best of this trend towards regulatory negotiation characterizing negotiated
rulemaking and reinvention. A system of bilateral negotiation would clearly be superior to a system of self-regulation, as such a system would
inevitably descend into a tragedy of the commons.54 But a
system of bilateral negotiation between agencies and
regulated parties would even be superior to a system of multilateral negotiation, due to the transaction
costs of assembling all of the affected stakeholders in a multilateral effort, and the difficulties of
reaching a consensus among a large number of parties. Moreover, multilateral negotiation gives rise to the troubling idea that there
should be joint governance among the parties. Since environmental organizations lack the resources to participate in post-negotiation
governance, there is a heightened danger of regulatory capture by the better-financed regulated parties.55 The
correct balance
between regulatory flexibility and accountability, argues Farber, is to allow bilateral negotiation but with
built-in checks to ensure that the negotiation process is not captured by regulated parties. Built-in checks
would include transparency, so that environmental organizations can monitor regulatory bargains, and the availability of citizen suits, so that
environmental organizations could remedy regulatory bargains that exceed the dictates of the underlying statute. Environmental organizations
would thus play the role of the watchdog, rather than the active participant in negotiations. The finding of Kerwin and Langbein that resource
constraints sometimes caused environmental organizations, especially smaller local ones, to skip negotiated rulemakings would seem to
support this conclusion. 56 A
much more efficient use of limited resources would require that the
environmental organization attempt to play a deterrent role in monitoring negotiated rulemakings.
2NC Cybersecurity Solvency
Reg neg solves cybersecurity
Sales 13
(Sales, Nathan Alexander. Assistant Professor of Law, George Mason University School of Law. “REGULATING CYBERSECURITY,”
Northwestern University Law Review. 2013.
http://www.rwu.edu/sites/default/files/downloads/cyberconference/cyber_threats_and_cyber_realities_readings.pdf//ghs-kw)
An alternative would be a form of “enforced self-regulation”324 in which private companies develop the
new cybersecurity protocols in tandem with the government.325 These requirements would not be
handed down by administrative agencies, but rather would be developed through a collaborative
partnership in which both regulators and regulated would play a role. In particular, firms might prepare
sets of industrywide security standards. (The National Industrial Recovery Act, famously invalidated by the Supreme Court in 1935,
contained such a mechanism,326 and today the energy sector develops reliability standards in the same way.327) Or agencies could sponsor
something like a negotiated rulemaking in which regulators, firms, and other stakeholders forge a consensus
on new security protocols.328 In either case, agencies then would ensure compliance through standard
administrative techniques like audits, investigations, and enforcement actions.329 This approach would
achieve all four of the benefits of private action mentioned above: It avoids (some) problems with information asymmetries,
takes advantage of distributed private sector knowledge about vulnerabilities and threats,
accommodates rapid technological change, and promotes innovation. On the other hand, allowing firms to help set the
standards that will be enforced against them may increase the risk of regulatory capture – the danger that agencies will come to promote the interests of the
companies they regulate instead of the public’s interests.330 The risk of capture is always present in regulatory action, but it is probably even more acute when
regulated entities are expressly invited to the decisionmaking table.331
2NC Encryption Advocate
Here’s a solvency advocate
DMCA 05
(Digital Millennium Copyright Act, Supplement in 2005. https://books.google.com/books?id=nL0s81xgVwC&pg=PA481&lpg=PA481&dq=encryption+AND+(+%22regulatory+negotiation%22+OR+%22negotiated+rulemaking%22)&source=bl&ots
=w9mrCaTJs4&sig=1mVsh_Kzk1p26dmT9_DjozgVQI&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMIxtPG5YH9xgIVwx0eCh2uEgMJ#v=onepage&q&f=false//ghs-kw)
Some encryption supporters advocate use of advisory committee and negotiated rulemaking procedures to
achieve consensus around an encryption standard. See Motorola Comments at 10-11; Veridian Reply Comments at 20-23.
Reg negs are key to wireless technology innovation
Chamberlain 09
(Chamberlain, Inc. Comments before the Federal Communications Commission. 11-05-2009.
https://webcache.googleusercontent.com/search?q=cache:dfYcw45dQZsJ:apps.fcc.gov/ecfs/document/view%3Bjsessionid%3DSQnySfcTVd
22hL6ZYShTpQYGY1X27xB14p3CS1y01XW15LQjS1jj!-1613185479!153728702%3Fid%3D7020245982+&cd=2&hl=en&ct=clnk&gl=us//ghs-kw)
Chamberlain supports solutions that will balance the needs of stakeholders in both the licensed and unlicensed
bands. Chamberlain and other manufacturers of unlicensed devices such as Panasonic are also uniquely
able to provide valuable contributions from the perspective of unlicensed operators with a long history
of innovation in the unlicensed bands. Moreover, as the Commission has recognized in recent proceedings,
alternative mechanisms for gathering data and evaluating options may assist the Commission in
reaching a superior result.19 For these reasons, Chamberlain would support a negotiated rulemaking
process, the use of workshops -both large and small- or any other alternative process that ensures the widest level of
participation from stakeholders across the wireless market.
2NC Privacy Solvency
Reg neg is key to privacy
Rubinstein 09
(Rubinstein, Ira S. Adjunct Professor of Law and Senior Fellow, Information Law Institute, New York University School of Law. “PRIVACY AND
REGULATORY INNOVATION: MOVING BEYOND VOLUNTARY CODES,” Workshop for Federal Privacy Regulation, NYU School of Law.
10/2/2009. https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416544506-00103/544506-00103.pdf//ghs-kw)
Whatever its shortcomings, and despite its many critics, self-regulation is a recurrent theme in the US approach to online
privacy and perhaps a permanent part of the regulatory landscape. This Article's goal has been to consider new
strategies for overcoming observed weaknesses in self-regulatory privacy programs. It began by examining the FTC's intermittent embrace of
self-regulation, and found that the Commission's most recent foray into self-regulatory guidelines for online behavioral advertising is not very
different from earlier efforts, which ended in frustration and a call for legislation. It also reviewed briefly the more theoretical arguments of
privacy scholars for and against self-regulation, but concluded that the market oriented views of those who favor open information flows
clashed with the highly critical views of those who detect a market failure and worry about the damaging consequences of profiling and
surveillance not only to individuals, but to society and to democratic self-determination. These views seem irreconcilable and do not pave the
way for any applied solutions. Next, this Article presented three case studies of mandated self-regulation. This included overviews of the NAI
Principles and the SHA, as well as a more empirical analysis of the CARU safe harbor program. An assessment of these case studies against five
criteria (completeness, free rider problems, oversight and enforcement, transparency, and formation of norms) concluded that self-
regulation undergirded by law—in other words, a statutory safe harbor—is a more effective and
efficient instrument than any self-regulatory guidelines in which industry is chiefly responsible for
developing principles and/or enforcing them. In a nutshell, well-designed safe harbors enable policy makers
to imagine new forms of self-regulation that "build on its strengths … while compensating for its
weaknesses."268 This embrace of statutory safe harbors led to a discussion of how to improve them by importing second-generation
strategies from environmental law. Rather than summarizing these strategies and how they translate into the privacy domain, this Article
concludes with a set of specific recommendations based on the ideas discussed in Part III.C. If Congress enacts comprehensive privacy
legislation based on FIPPs, the first recommendation is that the new law include a safe harbor program, which should echo the COPPA safe
harbor to the extent of encouraging groups to submit self-regulatory guidelines and, if approved by the FTC, treat compliance with these
guidelines as deemed compliance with statutory requirements. The FTC should be granted APA rulemaking powers to implement necessary
rules including a safe harbor rule. Congress
should also consider whether to mandate a negotiated rulemaking for an OBA
safe harbor or for safe harbor programs more generally. In any case, FTC should give serious thought to using the
negotiated rulemaking process in developing a safe harbor program or approving specific guidelines. In addition, the safe harbor
program should be overhauled to reflect second-generation strategies. Specifically, the statute should articulate default requirements but allow
FTC more discretion in determining whether proposed industry guidelines achieve desired outcomes, without firms having to match detailed
regulatory requirements on a point by point basis.
2NC Fism NB
Reg negs are better and solves federalism—plan fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
1. Negotiated Rulemaking Although the most conventional of the less familiar forms, "negotiated rulemaking" between federal
agencies and state stakeholders is a sparingly used tool that holds
promise for facilitating sound administrative
policymaking in disputed federalism contexts, such as those implicating environmental law, national
security, and consumer safety. Under the Administrative Procedure Act, the traditional "notice and comment"
administrative rulemaking process allows for a limited degree of participation by state stakeholders
who comment on a federal agency's proposed rule. The agency publishes the proposal in the Federal Register, invites
public comments critiquing the draft, and then uses its discretion to revise or defend the rule in response to comments. n256 Even this iterative
process constitutes a modest negotiation, but it leaves participants so frequently unsatisfied that many agencies began to informally use more
extensive negotiated rulemaking in the 1970s. n257 In 1990, Congress passed the Negotiated Rulemaking Act, amending the Administrative
Procedure Act to allow a more dynamic [*52] and inclusive rulemaking process, n258 and a subsequent Executive Order required all federal
agencies to consider negotiated rulemaking when developing regulations. n259 Negotiated rulemaking allows stakeholders much more
influence over unfolding regulatory decisions. Under
notice and comment, public participation is limited to criticism
of well-formed rules in which the agency is already substantially invested. n260 By contrast,
stakeholders in negotiated rulemaking collectively design a proposed rule that takes into account their
respective interests and expertise from the beginning. n261 The concept, outline, and/or text of a rule is hammered out by
an advisory committee of carefully balanced representation from the agency, the regulated public, community groups and NGOs, and state and
local governments. n262 A professional intermediary leads the effort to ensure that all stakeholders are appropriately involved and to help
interpret problem-solving opportunities. n263 Any consensus reached by the group becomes the basis of the proposed rule, which is still
subject to public comment through the normal notice-and-comment procedures. n264 If the group does not reach consensus, then the agency
proceeds through the usual notice-and-comment process. n265 The negotiated rulemaking process, a tailored version of interest group
bargaining within established legislative constraints, can yield important benefits. n266 The
process is usually more subjectively
satisfying [*53] for all stakeholders, including the government agency representatives. n267 More
cooperative relationships are established between the regulated parties and the agencies, facilitating
future implementation and enforcement of new rules. n268 Final regulations include fewer technical
errors and are clearer to stakeholders, so that less time, money and effort is expended on enforcement.
n269 Getting a proposed rule out for public comment takes more time under negotiated rulemaking than standard notice and comment, but
thereafter, negotiated
rules receive fewer and more moderate public comments, and are less frequently
challenged in court by regulated entities. n270 Ultimately, then, final regulations can be implemented more
quickly following their debut in the Federal Register, and with greater compliance from stakeholders.
n271 The process also confers valuable learning benefits on participants, who come to better understand
the concerns of other stakeholders, grow invested in the consensus they help create, and ultimately
campaign for the success of the regulations within their own constituencies. n272 Negotiated rulemaking offers
additional procedural benefits because it ensures that agency personnel will be unambiguously informed about
the full federalism implications of a proposed rule by the impacted state interests. Federal agencies are already required by
executive order to prepare a federalism impact statement for rulemaking with federalism implications, n273 but the quality of state-federal communication within negotiated rulemaking enhances the likelihood that federal agencies will
appreciate and understand the full extent of state [*54] concerns. Just as the consensus-building process invests
participating stakeholders with respect for the competing concerns of other stakeholders, it invests participating agency
personnel with respect for the federalism concerns of state stakeholders. n274 State-side federalism
bargainers interviewed for this project consistently reported that they always prefer negotiated
rulemaking to notice and comment--even if their ultimate impact remains small--because the products
of fully informed federal consultation are always preferable to the alternative. n275
Reg negs solve federalism—traditional rulemaking fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
Unsurprisingly, bargaining
in which the normative leverage of federalism values heavily influences the exchange offers the most reliable interpretive tools, smoothing out leverage imbalances and focusing
bargainers' interlinking interests. n619 Negotiations in which participants are motivated by shared regard for checks, localism,
accountability, and synergy naturally foster constitutional process and hedge against non-consensual dealings. All federalism
bargaining trades on the normative values of federalism to some degree, and any given negotiation may feature it
more or less prominently based on the factual particulars. n620 Yet the taxonomy reveals several forms in which federalism values
predominate by design, and which may prove especially valuable in fraught federalism contexts: negotiated rulemaking, policymaking
laboratory negotiations, and iterative federalism. n621 These examples indicate the potential for purposeful federalism engineering to
reinforce procedural regard for state and federal roles within the American system. (1) Negotiated Rulemaking
between state
and federal actors improves upon traditional administrative rulemaking in fostering participation,
localism, and synergy by incorporating genuine state input into federal regulatory planning. n622 Most
negotiated rulemaking also uses professional intermediaries to ensure that all stakeholders are
appropriately engaged and to facilitate the search for outcomes that meet parties' dovetailing interests.
n623 For example, after discovering that extreme local variability precluded a uniform federal program, Phase LI stormwater negotiators invited
municipal dischargers to design individually [*123] tailored programs within general federal limits. n624 Considering
the massive
number of municipalities involved, the fact that the rule faced legal challenge from only a handful of
Texas municipalities testifies to the strength of the consensus through which it was created. By contrast,
the iterative exchange within standard notice-and-comment rulemaking--also an example of federalism
bargaining--can frustrate state participation by denying participants meaningful opportunities for
consultation, collaborative problem-solving, and real-time accountability. The contrast between
notice-and-comment and negotiated rulemaking, exemplified by the two phases of REAL ID rulemaking, demonstrates
the difference between more and less successful instances of federalism bargaining. n625 Moreover, the
difficulty of asserting state consent to the products of the REAL ID notice-and-comment rulemaking (given the outright rebellion
that followed) limits its interpretive potential. Negotiated rulemakings take longer than other forms of administrative
rulemaking, but are more likely to succeed over time. Regulatory matters best suited for state-federal negotiated rulemaking
include those in which a decisive federal rule is needed to overcome spillover effects, holdouts, and other collective action problems, but
unique and diverse state expertise is needed for the creation of wise policy. Matters
in contexts of overlap least suited for
negotiated rulemaking include those in which the need for immediate policy overcomes the need for
broad participation--but even these leave open possibilities for incremental rulemaking, in which the initial federal rule includes
mechanisms for periodic reevaluation with local input.
2NC Fism NB Heg Impact
Fast growth promotes US leadership and solves great power war
Khalilzad 11 – PhD, Former Professor of Political Science @ Columbia, Former ambassador to Iraq and
Afghanistan
(Zalmay Khalilzad was the United States ambassador to Afghanistan, Iraq, and the United Nations during
the presidency of George W. Bush and the director of policy planning at the Defense Department from
1990 to 1992. "The Economy and National Security" Feb 8
http://www.nationalreview.com/articles/259024/economy-and-national-security-zalmay-khalilzad)//BB
Today, economic and fiscal trends pose the most severe long-term threat to the United States' position as global leader. While the United States suffers from fiscal imbalances and low economic growth, the economies of rival powers are developing rapidly. The continuation of these two trends could lead to a shift from American primacy toward a multi-polar global system, leading in turn to increased geopolitical rivalry and even war among the great powers.
The current recession is the result of a deep financial crisis, not a mere fluctuation in the business cycle. Recovery is likely to be protracted. The crisis was preceded by the buildup over two decades of enormous amounts of debt throughout the U.S. economy — ultimately totaling almost 350 percent of GDP — and the development of credit-fueled asset bubbles, particularly in the housing sector. When the bubbles burst, huge amounts of wealth were destroyed, and unemployment rose to over 10 percent. The decline of tax revenues and massive countercyclical spending put the U.S. government on an unsustainable fiscal path. Publicly held national debt rose from 38 to over 60 percent of GDP in three years.
Without faster economic growth and actions to reduce deficits, publicly held national debt is projected to reach dangerous proportions. If interest rates were to rise significantly, annual interest payments — which already are larger than the defense budget — would crowd out other spending or require substantial tax increases that would undercut economic growth. Even worse, if unanticipated events trigger what economists call a "sudden stop" in credit markets for U.S. debt, the United States would be unable to roll over its outstanding obligations, precipitating a sovereign-debt crisis that would almost certainly compel a radical retrenchment of the United States internationally.
Such scenarios would reshape the international order. It was the economic devastation of Britain and France during World War II, as well as the rise of other powers, that led both countries to relinquish their empires. In the late 1960s, British leaders concluded that they lacked the economic capacity to maintain a presence "east of Suez." Soviet economic weakness, which crystallized under Gorbachev, contributed to their decisions to withdraw from Afghanistan, abandon Communist regimes in Eastern Europe, and allow the Soviet Union to fragment. If the U.S. debt problem goes critical, the United States would be compelled to retrench, reducing its military spending and shedding international commitments.
We face this domestic challenge while other major powers are experiencing rapid economic growth. Even though countries such as China, India, and Brazil have profound political, social, demographic, and economic problems, their economies are growing faster than ours, and this could alter the global distribution of power. These trends could in the long term produce a multi-polar world. If U.S. policymakers fail to act and other powers continue to grow, it is not a question of whether but when a new international order will emerge. The closing of the gap between the United States and its rivals could intensify geopolitical competition among major powers, increase incentives for local powers to play major powers against one another, and undercut our will to preclude or respond to international crises because of the higher risk of escalation.
The stakes are high. In modern history, the longest period of peace among the great powers has been the era of U.S. leadership. By contrast, multi-polar systems have been unstable, with their competitive dynamics resulting in frequent crises and major wars among the great powers. Failures of multi-polar international systems produced both world wars.
American retrenchment could have devastating consequences. Without an American security blanket, regional powers could rearm in an attempt to balance against emerging threats. Under this scenario, there would be a heightened possibility of arms races, miscalculation, or other crises spiraling into all-out conflict. Alternatively, in seeking to accommodate the stronger powers, weaker powers may shift their geopolitical posture away from the United States. Either way, hostile states would be emboldened to make aggressive moves in their regions.
Slow growth leads to hegemonic wars – relative gap is key
Goldstein 7 - Professor of Global Politics and International Relations @ University of Pennsylvania,
(Avery Goldstein, “Power transitions, institutions, and China's rise in East Asia: Theoretical expectations
and evidence,” Journal of Strategic Studies, Volume30, Issue 4 & 5 August, EBSCO)
Two closely related, though distinct, theoretical arguments focus explicitly on the consequences for international politics of a shift in power
between a dominant state and a rising power. In War and Change in World Politics, Robert Gilpin
suggested that peace prevails when a dominant state's capabilities enable it to 'govern' an international order that it has shaped. Over time, however, as economic and
technological diffusion proceeds during eras of peace and development, other states are empowered. Moreover, the
burdens of international governance drain and distract the reigning hegemon, and challengers eventually emerge who seek to
rewrite the rules of governance. As the power advantage of the erstwhile hegemon ebbs, it may become
desperate enough to resort to the ultima ratio of international politics, force, to forestall the increasingly
urgent demands of a rising challenger. Or as the power of the challenger rises, it may be tempted to press
its case with threats to use force. It is the rise and fall of the great powers that creates the circumstances under
which major wars, what Gilpin labels ‘hegemonic wars’, break out.13 Gilpin’s argument logically encourages pessimism about the
implications of a rising China. It leads to the expectation that international trade, investment, and technology transfer will result in a
steady diffusion
of American economic power, benefiting the rapidly developing states of the world, including China.
As the
US simultaneously scurries to put out the many brushfires that threaten its far-flung global interests (i.e., the classic problem of
overextension), it will be unable to devote sufficient resources to maintain or restore its former advantage over
emerging competitors like China. While the erosion of the once clear American advantage plays itself out, the US
will find it ever more difficult to preserve the order in Asia that it created during its era of
preponderance. The expectation is an increase in the likelihood for the use of force – either by
a Chinese challenger able to field a stronger military in support of its demands for greater influence over international arrangements in
Asia, or by a besieged American hegemon desperate to head off further decline. Among the trends
that alarm those who would look at Asia through the lens of Gilpin’s theory are China’s expanding share of world trade
and wealth (much of it resulting from the gains made possible by the international economic order a dominant US established); its
acquisition of technology in key sectors that have both civilian and military applications (e.g., information, communications, and
electronics linked with the 'revolution in military affairs'); and an expanding military burden for the US (as it copes with the
challenges of its global war on terrorism and especially its struggle in Iraq) that limits the resources it can devote to preserving its interests in
East Asia.14 Although similar to Gilpin’s work insofar as it emphasizes the importance of shifts in the capabilities of a dominant state and a
rising challenger, the power-transition theory A. F. K. Organski and Jacek Kugler present in The War Ledger focuses more closely on the
allegedly dangerous phenomenon of ‘crossover’– the point at which a dissatisfied challenger is about to overtake the established leading
state.15 In such cases, when the power gap narrows, the dominant state becomes increasingly desperate to forestall, and the challenger becomes increasingly determined to realize, the transition to a new international order whose contours it will define. Though suggesting why a rising China may ultimately present grave dangers for international peace when its capabilities make it
a peer competitor of America, Organski and Kugler’s power-transition theory is less clear about the dangers while a potential
challenger still lags far behind and faces a difficult struggle to catch up. This clarification is important in thinking about the theory’s relevance to
interpreting China’s rise because a broad consensus prevails among analysts that Chinese military capabilities are at a minimum two decades
from putting it in a league with the US in Asia.16 Their theory, then, points
with alarm to trends in China’s growing wealth
and power relative to the United States, but especially looks ahead to what it sees as the period of maximum
danger – that time when a dissatisfied China could be in a position to overtake the US on dimensions
believed crucial for assessing power. Reports beginning in the mid-1990s that offered extrapolations suggesting China’s
growth would give it the world’s largest gross domestic product (GDP aggregate, not per capita) sometime in the
first few decades of the twentieth century fed these sorts of concerns about a potentially dangerous challenge to American
leadership in Asia.17 The huge gap between Chinese and American military capabilities (especially in terms of technological sophistication) has
so far discouraged prediction of comparably disquieting trends on this dimension, but inklings of similar concerns may be reflected in
occasionally alarmist reports about purchases of advanced Russian air and naval equipment, as well as concern that Chinese espionage may
have undermined the American advantage in nuclear and missile technology, and speculation about the potential military purposes of China’s
manned space program.18 Moreover, because a
dominant state may react to the prospect of a crossover
and believe that it is wiser to embrace the logic of preventive war and act early to delay a
transition while the task is more manageable, Organski and Kugler’s power-transition theory also provides
grounds for concern about the period prior to the possible crossover.19
2NC Ptix NB
Reg negs are bipartisan
Copeland 06
(Curtis W. Copeland, PhD, was formerly a specialist in American government at the Congressional Research Service (CRS) within the U.S.
Library of Congress. Copeland received his PhD degree in political science from the University of North Texas.His primary area of expertise is
federal rulemaking and regulatory policy. Before coming to CRS in January 2004, Dr. Copeland worked at the U.S. General Accounting Office
(GAO, now the Government Accountability Office) for 23 years on a variety of issues, including federal personnel policy, pay equity, ethics,
procurement policy, management reform, the Office of Management and Budget (OMB), and, since the mid-1990s, multiple aspects of the
federal rulemaking process. At CRS, he wrote reports and testified before Congress on such issues as federal rulemaking, regulatory reform,
the Congressional Review Act, negotiated rulemaking, the Paperwork Reduction Act, the Regulatory Flexibility Act, OMB’s Office of
Information and Regulatory Affairs, Executive Order 13422, midnight rulemaking, peer review, and risk assessment. He has also written and
testified on federal personnel policies, the federal workforce, GAO’s pay-for-performance system, and efforts to oversee the
implementation of the Troubled Asset Relief Program. From 2004 until 2007, Dr. Copeland headed the Executive Branch Operations section
within CRS’s Government and Finance Division. Copeland, C. W. “Negotiated Rulemaking,” Congressional Research Service, September 18,
2006. http://crs.wikileaks-press.org/RL32452.pdf//ghs-kw)
Negotiated rulemaking (sometimes referred to as regulatory negotiation or “reg-neg”) is a supplement to the traditional APA
rulemaking process in which agency representatives and representatives of affected parties work together to develop what can ultimately
become the text of a proposed rule.1 In this approach, negotiators try to
reach consensus by evaluating their priorities
and making tradeoffs, with the end result being a draft rule that is mutually acceptable. Negotiated
rulemaking has been encouraged (although not usually required) by both congressional and executive branch actions, and has
received bipartisan support as a way to involve affected parties in rulemaking before agencies have
developed their proposals. Some questions have been raised, however, regarding whether the approach actually speeds rulemaking
or reduces litigation.
Reg neg solves controversy—no link to ptix
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States.Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Recent Agency Use of Reg Neg. And, indeed, in the past few years agencies
have used reg neg to develop some of their
most contentious rules. For example, the Federal Aviation Administration and the National Park Service
used a variant of the process to write the regulations and policies governing sightseeing flights over
national parks; the issue had been sufficiently controversial that the President had to intervene and direct
the two agencies to develop rules “for the management of sightseeing aircraft in the National Parks where it is deemed necessary to reduce or
prevent the adverse effects of such aircraft.”22 The Department of Transportation used it to write a regulation governing the delivery of
propane and other compressed gases when the regulation became ensnared in litigation and Congressional action.23 The Occupational Safety
and Health Administration used it to address the erection of steel structures, an issue that had been on its docket for more than a decade with
two abortive attempts at rulemaking when OSHA turned to reg neg.24 The Forest
Service has just published a notice of
intent to establish a reg neg committee to develop policies governing the use of fixed anchors for rock
climbing in designated wilderness areas administered by the Forest Service.25 This issue has become extremely
controversial.26 Negotiated rulemaking has proven enormously successful in developing agreements in
highly polarized situations and has enabled the parties to address the best, most effective or efficient
way of solving a regulatory controversy. Agencies have therefore turned to it to help resolve particularly
difficult, contentious issues that have eluded closure by means of traditional rulemaking procedures.
2NC CP Solves Ptix Link
The counterplan breaks down adversarialism, is seen as legitimate, and is key to
effective regulation
Mee ‘97
(Siobhan, JD, an attorney in the Complex and Class Action Litigation Group, focuses
her practice on a broad range of commercial litigation, “Negotiated Rulemaking and
Combined Sewer Overflows (CSOs): Consensus Saves Ossification?,” Fall 1997, 25 B.C.
Envtl. Aff. L. Rev. 213, pg lexis//um-ef)
Benefits that accrue to negotiated rulemaking participants correspond to the criticisms of traditional rulemaking. n132 In particular,
proponents of negotiated rulemaking claim that it increases public participation, n133 fosters nonadversarial relationships , n134
and reduces long-term regulatory costs. n135 Traditionally, agencies have limited the avenues for public
participation in the rulemaking process to reaction and criticism, releasing rules for the public's comment after they have been developed [*229]
internally. n136 In contrast, negotiated rulemaking elicits wider involvement at the early stages of production.
n137 Input from non-agency and non-governmental actors, who may possess the most relevant
knowledge and who will be most affected by the rule, is a prerequisite to effective regulation. n138
Increased participation also leads to what Professor Harter considers the overarching benefit of negotiations: greater legitimacy. n139
Whereas traditional rulemaking lends itself to adversarialism, n140 negotiated rulemaking is designed to
foster cooperation and accommodation. n141 Rather than clinging to extreme positions, parties prioritize the underlying
issues and seek trade-offs to maximize their overall interests. n142 Participants, including the agency,
discover and address one another's concerns directly. n143 The give-and-take of this process provides an opportunity for parties with differing
viewpoints to test data and arguments directly. n144 The resultant exploration of different approaches is more likely than the
usual notice and comment process to generate creative solutions and avoid ossification. n145 [*230] Whether or not
it results in a rule, negotiated rulemaking establishes valuable links between groups that otherwise would only
communicate in an adversarial context. n146 Rather than trying to outsmart one another, former competitors become part of a team which must consider the
needs of each member. n147 Working relationships developed during negotiations give participants an understanding of the other side. n148 As one negotiator reflected, in "working with the opposition you find they're not quite the ogres you thought they were, and they don't hate you as much as you thought." n149 The chance to iron out what are often long-standing disagreements can only improve future interactions. n150
2NC AT Perm do Both
Perm do both links to the net benefit—it does the entirety of the AFF, which
_____________
2NC AT Perm do the CP
The CP is plan minus since it only mandates the creation of a reg neg committee; the plan is adopted only if the committee reaches consensus on it. That means the CP is uncertain, and the perm severs the certainty of the plan:
Substantially means certain and real
Words and Phrases 1964 (40 W&P 759) (this edition of W&P is out of print; the page number no longer
matches up to the current edition and I was unable to find the card in the new edition. However, this
card is also available on google books, Judicial and statutory definitions of words and phrases, Volume 8,
p. 7329)
The words “outward, open, actual, visible, substantial, and exclusive,” in connection with a change of possession, mean substantially the same thing.
They mean not concealed; not hidden; exposed to view; free from concealment, dissimulation, reserve, or disguise; in full existence; denoting that
which not merely can be, but is opposed to potential, apparent, constructive, and imaginary; veritable; genuine; certain; absolute;
real at present time, as a matter of fact, not merely nominal; opposed to form; actually existing; true; not including admitting, or pertaining to any
others; undivided; sole; opposed to inclusive. Bass v. Pease, 79 Ill. App. 308, 318.
Should means must—it’s certain
Supreme Court of Oklahoma 94
(Kelsey v. Dollarsaver Food Warehouse of Durant, Supreme Court of Oklahoma, 1994.
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#marker
3fn14//ghs-kw)
The turgid phrase - "should be and the same hereby is" - is a tautological absurdity. This is so because "
should" is synonymous with ought or must and is in itself
sufficient to effect an in praesenti ruling - one that is couched in "a present indicative synonymous with ought." See infra note 15. 3 Carter v. Carter, Okl., 783 P.2d 969, 970 (1989); Horizons,
Inc. v. Keo Leasing Co., Okl., 681 P.2d 757, 759 (1984); Amarex, Inc. v. Baker, Okl., 655 P.2d 1040, 1043 (1983); Knell v. Burnes, Okl., 645 P.2d 471, 473 (1982); Prock v. District Court of
Pittsburgh County, Okl., 630 P.2d 772, 775 (1981); Harry v. Hertzler, 185 Okl. 151, 90 P.2d 656, 659 (1939); Ginn v. Knight, 106 Okl. 4, 232 P. 936, 937 (1925). 4 "Recordable" means that by
force of 12 O.S. 1991 § 24 an instrument meeting that section's criteria must be entered on or "recorded" in the court's journal. The clerk may "enter" only that which is "on file." The pertinent
terms of 12 O.S. 1991 § 24 are: "Upon the journal record required to be kept by the clerk of the district court in civil cases . . . shall be entered copies of the following instruments on file: 1. All
items of process by which the court acquired jurisdiction of the person of each defendant in the case; and 2. All instruments filed in the case that bear the signature of the judge and
specify clearly the relief granted or order made." [Emphasis added.] 5 See 12 O.S. 1991 § 1116 which states in pertinent part: "Every direction of a court or judge made or entered in writing,
and not included in a judgment is an order." [Emphasis added.] 6 The pertinent terms of 12 O.S. 1993 § 696.3 , effective October 1, 1993, are: "A. Judgments, decrees and appealable orders
that are filed with the clerk of the court shall contain: 1. A caption setting forth the name of the court, the names and designation of the parties, the file number of the case and the title of the
instrument; 2. A statement of the disposition of the action, proceeding, or motion, including a statement of the relief awarded to a party or parties and the liabilities and obligations imposed
on the other party or parties; 3. The signature and title of the court; . . ." 7 The court holds that the May 18 memorial's recital that "the Court finds that the motions should be overruled" is a
"finding" and not a ruling. In its pure form, a finding is generally not effective as an order or judgment. See, e.g., Tillman v. Tillman, 199 Okl. 130, 184 P.2d 784 (1947), cited in the court's
opinion. 8 When ruling upon a motion for judgment n.o.v. the court must take into account all the evidence favorable to the party against whom the motion is directed and disregard all
conflicting evidence favorable to the movant. If the court should conclude the motion is sustainable, it must hold, as a matter of law, that there is an entire absence of proof tending to show a
right to recover. See Austin v. Wilkerson, Inc., Okl., 519 P.2d 899, 903 (1974). 9 See Bullard v. Grisham Const. Co., Okl., 660 P.2d 1045, 1047 (1983), where this court reviewed a trial judge's
"findings of fact", perceived as a basis for his ruling on a motion for judgment n.o.v. (in the face of a defendant's reliance on plaintiff's contributory negligence). These judicial findings were
held impermissible as an invasion of the providence of the jury and proscribed by OKLA. CONST. ART, 23, § 6 . Id. at 1048. 10 Everyday courthouse parlance does not always distinguish
between a judge's "finding", which denotes nisi prius resolution of fact issues, and "ruling" or "conclusion of law". The latter resolves disputed issues of law. In practice usage members of the
bench and bar often confuse what the judge "finds" with what that official "concludes", i.e., resolves as a legal matter. 11 See Fowler v. Thomsen, 68 Neb. 578, 94 N.W. 810, 811-12 (1903),
where the court determined a ruling that "[I] find from the bill of particulars that there is due the plaintiff the sum of . . ." was a judgment and not a finding. In reaching its conclusion the court
reasoned that "[e]ffect must be given to the entire entries in the docket according to the manifest intention of the justice in making them." Id., 94 N.W. at 811. 12 When the language of a judgment is
susceptible of two interpretations, that which makes it correct and valid is preferred to one that would render it erroneous. Hale v. Independent Powder Co., 46 Okl. 135, 148 P. 715, 716
(1915); Sharp v. McColm, 79 Kan. 772, 101 P. 659, 662 (1909); Clay v. Hildebrand, 34 Kan. 694, 9 P. 466, 470 (1886); see also 1 A.C. FREEMAN LAW OF JUDGMENTS § 76 (5th ed. 1925). 13
"Should" not only is used as a "present indicative" synonymous with ought but also is the past tense of "shall" with various shades of meaning not always easy to analyze. See 57 C.J. Shall § 9,
Judgments § 121 (1932). O. JESPERSEN, GROWTH AND STRUCTURE OF THE ENGLISH LANGUAGE (1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a more
detailed explanation, see the Partridge quotation infra note 15. Certain contexts mandate a construction of the term "should" as
more than merely indicating preference or desirability. Brown, supra at 1080-81 (jury instructions stating that jurors "should" reduce the amount
of damages in proportion to the amount of contributory negligence of the plaintiff was held to imply an obligation and to be more than advisory); Carrigan v. California Horse Racing Board, 60
Wash. App. 79, 802 P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a party "should devote a section of the brief to the request for the fee or expenses" was interpreted to
mean that a party is under an obligation to include the requested segment); State v. Rack, 318 S.W.2d 211, 215 (Mo. 1958)
("should" would mean the same as "shall" or "must" when used in an instruction to the jury which tells the triers they "should disregard false testimony").
2NC AT Theory
Counterinterp: process CPs are legitimate if we have a solvency advocate
AND, process CPs good:
1. Key to education—we need to be able to debate the desirability of the plan’s
regulatory process; testing all angles of the AFF is key to determine the best
policy option
2. Key to neg ground—it’s the only CP we can run against regulatory AFFs
3. Predictability and fairness—there’s a huge lit base and solvency advocate
ensures it’s predictable
Applegate 98
(John S. Applegate holds a law degree from Harvard Law School and a bachelor’s degree in English from Haverford College.
Nationally recognized for his work in environmental risk assessment and policy analysis, Applegate has written books and articles
on the regulation of toxic substances, defense nuclear waste, public participation in environmental decisions, and international
environmental law. He serves on the National Academy of Sciences Nuclear and Radiation Studies Board. In addition, he is an
award-winning teacher, known for his ability to present complex information with an engaging style and wry wit. Before coming
to IU, Applegate was the James B. Helmer, Jr. Professor of Law at the University of Cincinnati College of Law. He also was a
visiting professor at the Vanderbilt University School of Law. From 1983 to 1987, Applegate practiced environmental law in
Washington, D.C., with the law firm of Covington & Burling. He clerked for the late Judge Edward S. Smith of the U.S. Court of
Appeals for the Federal Circuit. John S. Applegate was named Indiana University’s first vice president for planning and policy in
July 2008. In March 2010, his portfolio was expanded and his title changed to vice president for university regional affairs,
planning, and policy. In February 2011, he became executive vice president for regional affairs, planning, and policy. As Executive
Vice President for University Academic Affairs since 2013, his office ensures coordination of university academic matters,
strategic plans, external academic relations, enterprise systems, and the academic policies that enable the university to most
effectively bring its vast intellectual resources to bear in serving the citizens of the state and nation. The regional affairs mission
of OEVPUAA is to lead the development of a shared identity and mission for all of IU's regional campuses that complements each
campus's individual identity and mission. In addition, Executive Vice President Applegate is responsible for public safety functions
across the university, including police, emergency management, and environmental health and safety. In appointing him in 2008,
President McRobbie noted that "John Applegate has proven himself to be very effective at many administrative and academic
initiatives that require a great deal of analysis and coordination within the university and with external agencies, including the
Indiana Commission for Higher Education. His experience and understanding of both academia and the law make him almost
uniquely suited to take on these responsibilities.” In 2006, John Applegate was appointed Indiana University’s first Presidential
Fellow, a role in which he served both President Emeritus Adam Herbert and current President Michael McRobbie. A
distinguished environmental law scholar, Applegate joined the IU faculty in 1998. He is the Walter W. Foskett Professor of Law at
the Indiana University Maurer School of Law in Bloomington and also served as the school’s executive associate dean for
academic affairs from 2002-2009. Applegate, J. S. “Beyond the Usual Suspects: The Use of Citizen Advisory Boards in
Environmental Decisionmaking,” Indiana Law Journal, Volume 73, Issue 3, July 1, 1998.
http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1939&context=ilj//ghs-kw)
There is substantial literature on negotiated rulemaking. The interested reader might begin
with the Negotiated Rulemaking Act of 1990, 5 U.S.C. §§ 561-570 (1994 & Supp. II 1996), Freeman, supra note
53, Philip J. Harter, Negotiating Regulations: A Cure for Malaise, 71 GEO. L.J. 1 (1982), Henry E. Perritt,
Jr., Negotiated Rulemaking Before Federal Agencies: Evaluation of the Recommendations by the
Administrative Conference of the United States, 74 GEO. L.J. 1625 (1986), Lawrence Susskind & Gerard
McMahon, The Theory and Practice of Negotiated Rulemaking, 3 YALE J. ON REG. 133 (1985), and an
excellent, just-published issue on regulatory negotiation, Twenty-Eighth Annual Administrative
Law Issue, 46 DUKE L.J. 1255 (1997)
4. Decision making skills—reg neg is uniquely key to decision making skills
Fiorino 88
(Daniel J. Fiorino holds a PhD & MA in Political Science from Johns Hopkins University and a BA in Political Science & Minor in
Economics from Youngstown State University. Daniel J. Fiorino is the Director of the Center for Environmental Policy and
Executive in Residence in the School of Public Affairs at American University. As a faculty member in the Department of Public
Administration and Policy, he teaches courses on environmental policy, energy and climate change, environmental sustainability,
and public management. Dan is the author or co-author of four books and some three dozen articles and book chapters in his
field. According to Google Scholar, his work has been cited some 2300 times in the professional literature. His book, The New
Environmental Regulation, won the Brownlow Award of the National Academy of Public Administration (NAPA) for “excellence in
public administration literature” in 2007. Altogether his publications have received nine national and international awards from
the American Society for Public Administration, Policy Studies Organization, Academy of Management, and NAPA. His most
recent refereed journal articles were on the role of sustainability in Public Administration Review (2010); explanations for
differences in national environmental performance in Policy Sciences (2011); and technology innovation in renewable energy in
Policy Studies Journal (2013). In 2009 he was a Public Policy Scholar at the Woodrow Wilson International Center for Scholars. He
also serves as an advisor on environmental and sustainability issues for MDB, Inc., a Washington, DC consulting firm. Dan joined
American University in 2009 after a career at the U.S. Environmental Protection Agency (EPA). Among his positions at EPA were
the Associate Director of the Office of Policy Analysis, Director of the Waste and Chemicals Policy Division, Senior Advisor to the
Assistant Administrator for Policy, and the Director of the National Environmental Performance Track. The Performance Track
program was selected as one of the top 50 innovations in American government 2006 and recognized by Administrator Christine
Todd Whitman with an EPA Silver Medal in 2002. In 1993, he received EPA’s Lee M. Thomas Award for Management Excellence.
He has appeared on or been quoted in several media outlets: the Daily Beast, Newsweek, Christian Science Monitor, Australian
Broadcasting Corporation, Agence France-Presse, and CCTV, on such topics as air quality, climate change, the BP Horizon Oil Spill,
carbon trading, EPA, and U.S. environmental and energy politics. He currently is co-director of a project on “Conceptual
Innovations in Environmental Policy” with James Meadowcroft of Carleton University, funded by the Canada Research Council on
Social Sciences and the Humanities. He is a member of the Partnership on Technology and the Environment with the Heinz
Center, Environmental Defense Fund, Nicholas Institute, EPA, and the Wharton School. He is conducting research on the role of
sustainability in policy analysis and the effects of regulatory policy design and implementation on technology innovation. In 2013,
he created the William K. Reilly Fund for Environmental Governance and Leadership within the Center for Environmental Policy,
working with associates of Mr. Reilly and several corporate and other sponsors. He is a Fellow of the National Academy of Public
Administration. Dan is co-editor, with Robert Durant, of the Routledge series on “Environmental Sustainability and Public
Administration.” He is often is invited to speak to business and academic audiences, most recently as the keynote speaker at a Tel
Aviv University conference on environmental regulation in May 2013. In the summer of 2013 he will present lectures and take
part in several events as the Sir Frank Holmes Visiting Fellow at Victoria University in New Zealand. Fiorino, D. J. “Regulatory
Negotiations as a Policy Process,” Public Administration Review, Vol 48, No 4, pp 764-772, July-August 1988.
http://www.jstor.org/discover/10.2307/975600?uid=3739728&uid=2&uid=4&uid=3739256&sid=21104541489843//ghs-kw)
Thus, in its premises, objectives, and techniques, regulatory
negotiation reflects the trend toward alternative
dispute settlement. However, because regulatory negotiation is prospective and general in its
application rather than limited to a specific dispute, it also reflects another theme in American public policy making. That
theme is pluralism, or what Robert Reich has described in the context of administrative rulemaking as “interest-group mediation”
(Reich 1985, pp. 1619-1620).[20] Reich's analysis sheds light on negotiation as a form of regulatory policy making,
especially its contrasts with more analytical policy models. Reich proposes interest-group mediation
and net-benefit maximization as the two visions that dominate administrative policy making.
The first descends from pluralist political science and was more influential in the 1960s and early 1970s. The
second descends from decision theory and micro-economics, and it was more influential in the late 1970s and
early 1980s. In the first, the administrator is a referee who brings affected interests into the policy process to reconcile their
demands and preferences. In the net-benefit model, the administrator is an analyst who defines policy options, quantifies the likely
consequences of each, compares them to a given set of objectives, and then selects the option offering the greatest net benefit or
social utility. Under
the interest-group model, objectives emerge from the bargaining among
influential groups, and a good decision is one to which the parties will agree. Under the net-benefit model, objectives are articulated in advance as external guides to the policy process. A
good decision is one that meets the criterion of economic efficiency, defined ideally as a state in
which no one party can improve its position without worsening that of another. 21
5. Policy education—reg negs are a key part of the policy process
Spector 99,
(Bertram I. Spector, Senior Technical Director at Management Systems International (MSI) and Executive Director of the Center
for Negotiation Analysis. Ph.D. in Political Science from New York University, May, 1999, Negotiated Rulemaking: A Participative
Approach to Consensus-Building for Regulatory Development and Implementation, Technical Notes: A Publication of USAID’s
Implementing Policy Change Project, http://www.negotiations.org/Tn-10%20-%20Negotiated%20Rulemaking.pdf) AJ
Why use negotiated rulemaking? What are the implications for policy reform, the implementation of policy changes, and
conflict between stakeholders and government? First, the process generates an environment for dialogue
that facilitates the reality testing of regulations before they are implemented. It enables policy
reforms to be discussed in an open forum by stakeholders and for tradeoffs to be made that
expedite compliance among those who are directly impacted by the reforms. Second,
negotiated rulemaking is a process of empowerment. It encourages the participation and enfranchisement of
parties that have a stake in reform. It provides voice to interests, concerns and priorities that otherwise
might not be heard or considered in devising new policy. Third, it is a process that promotes
creative but pragmatic solutions. By encouraging a holistic examination of the policy area, negotiated
rulemaking asks the participants to assess the multiple issues and subissues involved, set
priorities among them, and make compromises. Such rethinking often yields novel and
unorthodox answers. Fourth, negotiated rulemaking offers an efficient mechanism for policy
implementation. Experience shows that it results in earlier implementation; higher compliance
rates; reduced time, money and effort spent on enforcement; increased cooperation between
the regulator and regulated parties; and reduced litigation over the regulations. Regulatory
negotiations can yield both better solutions and more efficient compliance.
6. At worst, reject the argument, not the team
2NC AT Agency Responsiveness
No difference in agency responsiveness
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctor of Juridical Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I. Langbein is a Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice at American University. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from
Oberlin College. Freeman, J. and Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
3. Negotiated Rulemaking Does Not Abrogate the Agency's Responsibility to Execute Delegated Authority Overall, the evidence from Phase II is
generally inconsistent with the theoretical but empirically untested claim that EPA has failed to retain its responsibility for writing rules in
negotiated settings. Recall that theorists disagree over whether reg neg will increase agency responsiveness. Most scholars assume that EPA
retains more authority in conventional rulemaking, and that participants exert commensurately less influence over conventional as opposed to
negotiated rules. To test this hypothesis, Kerwin and Langbein asked participants about disproportionate influence and about agency
responsiveness to the respondent personally, as well as agency responsiveness to the public in general. The results suggest that the
agency
is equally responsive to participants in conventional and negotiated rulemaking, consistent with the
hypothesis that the agency listens to the affected parties regardless of the method of rule
development. Further, when asked what they disliked about the process, less than 10% of both negotiated and conventional participants
volunteered "disproportionate influence." When asked whether any party had disproportionate influence during rule development, 44% of
conventional respondents answered "yes," compared to 48% of reg neg respondents. In addition, EPA
was as likely to be viewed as
having disproportionate influence in negotiated as conventional rules (25% versus 32% respectively). It follows that
roughly equal proportions of participants in negotiated and conventional rules viewed other participants, and especially EPA, as having
disproportionate influence. Kerwin and Langbein asked those who reported disproportionate influence what about the rule led them to believe
that lopsided influence existed. In response, negotiated
rulemaking participants were significantly more likely to see
excessive influence by one party in the process rather than in the rule itself, as compared to
conventional participants (55% versus 13% respectively). However, when asked what it was about the process that fostered
disproportionate influence, conventional rule participants were twice as likely as negotiated rule participants to point to the central role of EPA
(63% versus 30% respectively). By contrast, negotiated rule participants pointed to other participants who were particularly vocal and active
during the negotiation sessions (26% of negotiated rule respondents versus no conventional respondents). When asked about agency
responsiveness, negotiated rule participants were significantly more likely than conventional rule participants to view both general
participation, and their personal participation, as having a "major" impact on the proposed rule. By contrast, conventional participants were
more likely to see "major" differences between the proposed and final rule and to believe that public participation and their own participation
had a "moderate" or "major" impact on that change. These results conform to the researchers' expectations: negotiated
rules are
designed so that public participation should have its greatest impact on the proposed rule; conventional
rules are structured so that public participation should have its greatest impact on the final rule. Given
these differences in how the two processes are designed, Kerwin and Langbein sought to measure agency responsiveness overall,
rather than at the two separate moments of access. Although the differences were not statistically significant, the results
suggest that conventional participants perceived their public and personal contribution to rulemaking to have had slightly more impact than
negotiated rule participants perceived their contribution to have had. Still, given the absence of statistical significance, we
agree with the researchers that it is safer to conclude that the agency is equally responsive to both conventional and
negotiated rule participants.
2NC AT Cost
Reg negs are more cost effective
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States. Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Negotiated Rulemaking Has Fulfilled its Goals. If “better rules” were the aspirations for negotiated rulemaking, the question
remains as to whether the process has lived up to the expectations. From my own personal experience, the rules that emerge from
negotiated rulemaking tend to be both more stringent and yet more cost effective to implement. That
somewhat paradoxical result comes precisely from the practical orientation of the committee: it can
figure out what information is needed to make a reasonable, responsible decision and then what actions
will best achieve the goal; it can, therefore, avoid common regulatory mistakes that are costly but do
not contribute substantially to accomplishing the task. The only formal evaluation of negotiated rulemaking that has been
conducted supports these observations. After his early article analyzing the time required for negotiated rulemaking, Neil Kerwin undertook an
evaluation of negotiated rulemaking at the Environmental Protection Agency with Dr. Laura Langbein.103 Kerwin
and Langbein
conducted a study of negotiated rulemaking by examining what actually occurs in a reg neg versus the development of rules by
conventional means. To establish the requisite comparison, they “collected data on litigation, data from the comments on proposed rules, and
data from systematic, open-ended interviews with participants in 8 negotiated rules . . . and in 6 ‘comparable’ conventional rules.”104 They
interviewed 51 participants of conventional rulemaking and 101 from various negotiated rulemaking committees.105 Kerwin
and
Langbein’s important work provides the only rigorous, empirical evaluation that compares a number of
factors of conventional and negotiated rulemaking. Their overall conclusion is: Our research contains strong but
qualified support for the continued use of negotiated rulemaking. The strong support comes in the form
of positive assessments provided by participants in negotiated rulemaking compared to assessments
offered by those involved in conventional form of regulation development. Further, there is no evidence
that negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no more
(or less) subject to litigation than conventional rules. It is also true that negotiated rulemaking at the EPA is used largely to
develop rules that entail particularly complex issues regarding the implementation and enforcement of legal obligations rather than those that
set the substantive standards themselves. However, participants’
assessments of the resulting rules are more positive
when the issues to be decided entail those of establishing rather than enforcing the standard. Further,
participants’ assessments are also more positive when the issues to be decided are relatively more
complex. Our research would support a recommendation that negotiated rulemaking continue to be applied to complex issues, and more
widely applied to include those entailing the standard itself.106 Their findings are particularly powerful when comparing individual attributes of
negotiated and conventional rules. Table 3 contains a summary of those comparisons. Importantly, negotiated
rules were viewed
more favorably in every criteria, and significantly so in several dimensions that are often contentious in
regulatory debates — • the economic efficiency of the rule and its cost effectiveness • the quality of the scientific evidence and the
incorporation of appropriate technology, and • “personal experience”; although not usually considered in dialogues over regulatory procedure, Kerwin
and Langbein’s findings here too favor negotiated rules. Conclusion. The
benefits envisioned by the proponents of
negotiated rulemaking have indeed been realized. That is demonstrated both by Coglianese’s own
methodology when properly understood and by the only careful and comprehensive comparative
study. Reg neg has proven to be an enormously powerful tool in addressing highly complex, politicized
rules. These are the very kind that stall agencies when using traditional or conventional procedures.107
Properly understood and used appropriately, negotiated rulemaking does indeed fulfill its expectations.
Reg negs are cheaper
Langbein and Kerwin 00
(Laura I. Langbein is a quantitative methodologist and professor of public administration and policy at American University in Washington,
D.C. She teaches quantitative methods, program evaluation, policy analysis, and public choice. Her articles have appeared in journals on
politics, economics, policy analysis and public administration. Langbein received a BA in government from Oberlin College in 1965 and a PhD
in political science from the University of North Carolina at Chapel Hill in 1972. She has taught at American University since 1973: until 1978
as an assistant professor in the School of Government and Public Administration; from 1978 to 1983 as an associate professor in the School
of Government and Public Administration; and since 1983 as a professor in the School of Public Affairs. She is also a private consultant on
statistics, research design, survey research, and program evaluation and an accomplished clarinetist. Cornelius Martin "Neil" Kerwin (born
April 10, 1949) is an American educator in public administration and president of American University. A 1971 undergraduate alumnus of
American University, Kerwin continued his education with a Master of Arts degree in political science from the University of Rhode Island in
1973. In 1975, Kerwin returned to his alma mater and joined the faculty of the American University School of Public Affairs, then the School
of Government and Public Administration. Kerwin completed his doctorate in political science from Johns Hopkins University in 1978 and
continued to teach until 1989, when he became the dean of the school. Langbein, L. I. Kerwin, C. M. “Regulatory Negotiation versus
Conventional Rule Making: Claims, Counterclaims, and Empirical Evidence,” Journal of Public Administration Research and Theory, July 2000.
http://jpart.oxfordjournals.org/content/10/3/599.full.pdf//ghs-kw)
Our research contains strong but qualified support for the continued use of negotiated
rule making. The strong support comes
in the form of positive assessments provided by participants in negotiated rule making compared to
assessments offered by those involved in conventional forms of regulation development. There is no
evidence that negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no more
(or less) subject to litigation than conventional rules. It is also true that negotiated rule making at the EPA is used largely to
develop rules that entail particularly complex issues regarding the implementation and enforcement of
legal obligations rather than rules that set substantive standards. However, participants' assessments of the resulting rules are more
positive when the issues to be decided entail those of establishing rather than enforcing the standard. Participants' assessments are also more
positive when the issues to be decided are relatively less complex. But even when these and other variables are controlled, reg neg participants'
overall assessments are
significantly more positive than those of participants in conventional rule making. In short,
the process itself seems to affect participants' views of the rule making, independent of differences
between the types of rules chosen for conventional and negotiated rule making, and independent of
differences among the participants, including differences in their views of the economic net benefits of
the particular rule. This finding is consistent with theoretical expectations regarding the importance of participation and the importance
of face-to-face communication to increase the likelihood of Pareto-improving social outcomes. With respect to participation, previous research
indicates that compliance
with a law or regulation and support for policy choice are more likely to be
forthcoming not only when it is economically rational but also when the process by which the decision is
made is viewed as fair (Tyler 1990; Kunreuther et al. 1993; Frey and Oberholzer-Gee 1996). While we did not ask respondents explicitly
to rate the fairness of the rule-making process in which they participated, evidence presented in this study shows that reg
neg participants rated the overall process (with and without statistical controls in exhibits 9 and 1 respectively) and the
ability of EPA equitably to implement the rule (exhibit 1) significantly higher than conventional rule-making
participants did. Further, while conventional rule-making participants were more likely to say that there was no party with
disproportionate influence during the development of the rule, reg neg participants volunteered significantly more positive comments and
significantly fewer negative comments about the process overall. In general, reg
neg appears more likely than conventional
rule making to leave participants with a warm glow about the decision-making process. While the regression
results show that the costs and benefits of the rule being promulgated figure prominently into the respondents' overall assessment of the final
rule, process matters too. Participants
care not only about how rules and policies affect them
economically, they also care about how the authorities who make and implement rules and policies
treat them (and others). In fact, one reg neg respondent, the owner of a small shop that manufactured wood
burning stoves, remarked about the woodstoves rule, which would put him out of business, that he felt
satisfied even as he participated in his own "wake." It remains for further research to show whether this warm glow affects
long term compliance and whether it extends to affected parties who were not direct participants in the negotiation process. It is unclear from
our research whether greater satisfaction with negotiated rules implies that negotiated rules are Pareto-superior to conventionally written
rules.13 Becker's (1983) theory of political competition among interest groups implies that in the absence of transactions costs, groups that
bear large costs and opposing groups that reap large benefits have directly proportional and equal incentives to lobby. Politicians who seek to
maximize net political support respond by balancing costs and benefits at the margin, and the resulting equilibrium will be no worse than
market failure would be. Transactions costs, however, are not zero, and they may not be equal for interests on each side of an issue. For
example, in many environmental policy issues, the benefits are dispersed and occur in the future, while some, but not all, costs are
concentrated and occur now. The consequence is that transactions costs are different for beneficiaries than for losers. If
reg neg reduces transactions costs compared to conventional rule making, or if reg neg reduces the imbalance in transactions costs between
winners and losers, or among different kinds of winners and losers, then it
might be reasonable to expect negotiated rules
to be Pareto-superior to conventionally written rules. Reg neg may reduce transactions costs in two ways.
First, participation in writing the proposed rule (which sets the agenda that determines the final rule) is direct, at least for
the participants. In conventional rule making, each interest has a repeated, bilateral relation with the rule-making agency; the rulemaking agency proposes the rule (and thereby controls the agenda for the final rule), and affected interests respond separately to
what is in the agency proposal. In negotiated rule making, each interest (including the agency) is in a repeated N-person set of mutual relations;
the negotiating group drafts the proposed rule, thereby setting the agenda for the final rule. Since
the agency probably knows
less about each group's costs and benefits than the group knows about its own costs and benefits, the
rule that emerges from direct negotiation should be a more accurate reflection of net benefits than one
that is written by the agency (even though the agency tries to be responsive to the affected parties). In effect, reg neg can be
expected to better establish a core relationship of trust, reputation, and reciprocity that Ostrom (1998)
argues is central to improving net social benefits. Reg neg may reduce transactions costs not only by
entailing repeated mutual rather than bilateral relations, but also by face to face communication. Ostrom
(1998, 13) argues that face-to-face communication reduces transactions costs by making it easier to assess
trustworthiness and by lowering the decision costs of reaching a "contingent agreement," in which
"individuals agree to contribute x resources to a common effort so long as at least y others also
contribute." In fact, our survey results show that reg neg participants are significantly more likely than
conventional rule-making participants to believe that others will comply with the final rule (exhibit 1). In the
absence of outside assessments that compare net social benefits of the conventional and negotiated rules in this study,15 the hypothesis that
reg neg is Pareto superior to conventional rule making remains an untested speculation. Nonetheless, it seems to be a plausible hypothesis
based on recent theories regarding the importance of institutions that foster participation in helping to effect Pareto-preferred social
outcomes.
2NC AT Consensus
Negotiating parties fear the alternative, which is worse than reg neg
Perritt 86
(Professor Perritt earned his B.S. in engineering from MIT in 1966, a master's degree in management from MIT's Sloan School in 1970, and a
J.D. from Georgetown University Law Center in 1975. Henry H. Perritt, Jr., is a professor of law at IIT Chicago-Kent College of Law. He served
as Chicago-Kent's dean from 1997 to 2002 and was the Democratic candidate for the U.S. House of Representatives in the Tenth District of
Illinois in 2002. Throughout his academic career, Professor Perritt has made it possible for groups of law and engineering students to work
together to build a rule of law, promote the free press, assist in economic development, and provide refugee aid through "Project Bosnia,"
"Operation Kosovo" and "Destination Democracy." Professor Perritt is the author of more than 75 law review articles and 17 books on
international relations and law, technology and law, employment law, and entertainment law, including Digital Communications Law, one of
the leading treatises on Internet law; Employee Dismissal Law and Practice, one of the leading treatises on employment-at-will; and two
books on Kosovo: Kosovo Liberation Army: The Inside Story of an Insurgency, published by the University of Illinois Press, and The Road to
Independence for Kosovo: A Chronicle of the Ahtisaari Plan, published by Cambridge University Press. He is active in the entertainment field,
as well, writing several law review articles on the future of the popular music industry and of video entertainment. He also wrote a 50-song
musical about Kosovo, You Took Away My Flag, which was performed in Chicago in 2009 and 2010. A screenplay for a movie about the same
story and characters has a trailer online and is being shopped to filmmakers. His two new plays, Airline Miles and Giving Ground, are
scheduled for performances in Chicago in 2012. His novel, Arian, was published by Amazon.com in 2012. He has two other novels in the
works. He served on President Clinton's Transition Team, working on telecommunications issues, and drafted principles for electronic
dissemination of public information, which formed the core of the Electronic Freedom of Information Act Amendments adopted by Congress
in 1996. During the Ford administration, he served on the White House staff and as deputy under secretary of labor. Professor Perritt served
on the Computer Science and Telecommunications Policy Board of the National Research Council, and on a National Research Council
committee on "Global Networks and Local Values." He was a member of the interprofessional team that evaluated the FBI's Carnivore
system. He is a member of the bars of Virginia (inactive), Pennsylvania (inactive), the District of Columbia, Maryland, Illinois and the United
States Supreme Court. He is a member of the Council on Foreign Relations and served on the board of directors of the Chicago Council on
Foreign Relations, on the Lifetime Membership Committee of the Council on Foreign Relations, and as secretary of the Section on Labor and
Employment Law of the American Bar Association. He is vice-president and a member of the board of directors of The Artistic Home theatre
company, and is president of Mass. Iota-Tau Association, the alumni corporation for the SAE fraternity chapter at MIT. Perritt, H. H.
“Negotiated Rulemaking Before Federal Agencies: Evaluation of Recommendations By the Administrative Conference of the United States,”
Georgetown Law Journal, Volume 74, August 1986. http://www.kentlaw.edu/perritt/publications/74_GEO._L.J._1625.htm//ghs-kw)
The negotiations moved slowly until the FAA submitted a draft rule to the participants. This reinforced the
view that the FAA would move unilaterally. It also reminded the parties that there would be things in
a unilaterally promulgated rule that they would not like--thus reminding them that their BATNAs were
worse than what was being considered at the negotiating table. Participation by the Vice President's Office, the
Office of the Secretary of Transportation, and the OMB at the initial session discouraged participants from thinking they could influence the
contents of the rule outside the negotiation process. One attempt to communicate with the Administrator while the negotiations were
underway was rebuffed. [FN263] The participants tacitly agreed that it would not be feasible to develop a 'total
package' to which the participants formally could agree. Instead, their objectives were to narrow
differences, explore alternative ways of achieving objectives at less disruption to operational exigencies,
and educate the FAA on practical issues. The mediator had an acute sense that the negotiation process
should stop before agreement began to erode. Accordingly, he forbore to force explicit agreement on
difficult issues, took few votes, and adjourned the negotiations when things began to unravel. In addition,
the FAA, the mediator, and participants were tolerant of the political need of participants to adhere
to positions formally, even though signals were given that participants could live with something else.
Agency participation in the negotiating sessions was crucial to the usefulness of this type of process. Because the agency was there, it could
form its own impressions of what a party's real position was, despite adherence to formal positions. In addition, it
was easy for the
agency to proceed with a consensus standard because it had an evolving sense of the consensus. Without
agency participation, a more formal step would have been necessary to communicate negotiating group views to the agency. Taking this formal
step could have proven difficult or impossible because it would have necessitated more formal participant agreement. In addition, the
presence of an outside contractor who served as drafter was of some assistance. The drafter, a former FAA
employee, assisted
informally in resolving internal FAA disagreements over the proposed rule after
negotiations were adjourned.
Reg neg produces participant satisfaction and reduces conflict—consensus will happen
Langbein and Kerwin 00
(Laura I. Langbein is a quantitative methodologist and professor of public administration and policy at American University in Washington,
D.C. She teaches quantitative methods, program evaluation, policy analysis, and public choice. Her articles have appeared in journals on
politics, economics, policy analysis and public administration. Langbein received a BA in government from Oberlin College in 1965 and a PhD
in political science from the University of North Carolina at Chapel Hill in 1972. She has taught at American University since 1973: until 1978
as an assistant professor in the School of Government and Public Administration; from 1978 to 1983 as an associate professor in the School
of Government and Public Administration; and since 1983 as a professor in the School of Public Affairs. She is also a private consultant on
statistics, research design, survey research, and program evaluation and an accomplished clarinetist. Cornelius Martin "Neil" Kerwin (born
April 10, 1949) is an American educator in public administration and president of American University. A 1971 undergraduate alumnus of
American University, Kerwin continued his education with a Master of Arts degree in political science from the University of Rhode Island in
1973. In 1975, Kerwin returned to his alma mater and joined the faculty of the American University School of Public Affairs, then the School
of Government and Public Administration. Kerwin completed his doctorate in political science from Johns Hopkins University in 1978 and
continued to teach until 1989, when he became the dean of the school. Langbein, L. I. Kerwin, C. M. “Regulatory Negotiation versus
Conventional Rule Making: Claims, Counterclaims, and Empirical Evidence,” Journal of Public Administration Research and Theory, July 2000.
http://jpart.oxfordjournals.org/content/10/3/599.full.pdf//ghs-kw)
Our research contains strong but qualified support for the continued use of negotiated
rule making. The strong support comes
in the form of positive assessments provided by participants in negotiated rule making compared to
assessments offered by those involved in conventional forms of regulation development. There is no
evidence that negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no more
(or less) subject to litigation than conventional rules. It is also true that negotiated rule making at the EPA is used largely to
develop rules that entail particularly complex issues regarding the implementation and enforcement of
legal obligations rather than rules that set substantive standards. However, participants' assessments of the resulting rules are more
positive when the issues to be decided entail those of establishing rather than enforcing the standard. Participants' assessments are also more
positive when the issues to be decided are relatively less complex. But even when these and other variables are controlled, reg neg participants'
overall assessments are
significantly more positive than those of participants in conventional rule making. In short,
the process itself seems to affect participants' views of the rule making, independent of differences
between the types of rules chosen for conventional and negotiated rule making, and independent of
differences among the participants, including differences in their views of the economic net benefits of
the particular rule. This finding is consistent with theoretical expectations regarding the importance of participation and the importance
of face-to-face communication to increase the likelihood of Pareto-improving social outcomes. With respect to participation, previous research
indicates that compliance
with a law or regulation and support for policy choice are more likely to be
forthcoming not only when it is economically rational but also when the process by which the decision is
made is viewed as fair (Tyler 1990; Kunreuther et al. 1993; Frey and Oberholzer-Gee 1996). While we did not ask respondents explicitly
to rate the fairness of the rule-making process in which they participated, evidence presented in this study shows that reg
neg participants rated the overall process (with and without statistical controls in exhibits 9 and 1 respectively) and the
ability of EPA equitably to implement the rule (exhibit 1) significantly higher than conventional rule-making
participants did. Further, while conventional rule-making participants were more likely to say that there was no party with
disproportionate influence during the development of the rule, reg neg participants volunteered significantly more positive comments and
significantly fewer negative comments about the process overall. In general, reg
neg appears more likely than conventional
rule making to leave participants with a warm glow about the decision-making process. While the regression
results show that the costs and benefits of the rule being promulgated figure prominently into the respondents' overall assessment of the final
rule, process matters too. Participants
care not only about how rules and policies affect them
economically, they also care about how the authorities who make and implement rules and policies
treat them (and others). In fact, one reg neg respondent, the owner of a small shop that manufactured wood
burning stoves, remarked about the woodstoves rule, which would put him out of business, that he felt
satisfied even as he participated in his own "wake." It remains for further research to show whether this warm glow affects
long term compliance and whether it extends to affected parties who were not direct participants in the negotiation process. It is unclear from
our research whether greater satisfaction with negotiated rules implies that negotiated rules are Pareto-superior to conventionally written
rules.13 Becker's (1983) theory of political competition among interest groups implies that in the absence of transactions costs, groups that
bear large costs and opposing groups that reap large benefits have directly proportional and equal incentives to lobby. Politicians who seek to
maximize net political support respond by balancing costs and benefits at the margin, and the resulting equilibrium will be no worse than
market failure would be. Transactions costs, however, are not zero, and they may not be equal for interests on each side of an issue. For
example, in many environmental policy issues, the benefits are dispersed and occur in the future, while some, but not all, costs are
concentrated and occur now. The consequence is that transactions costs are different for beneficiaries than for losers. If
reg neg reduces transactions costs compared to conventional rule making, or if reg neg reduces the imbalance in transactions costs between
winners and losers, or among different kinds of winners and losers, then it
might be reasonable to expect negotiated rules
to be Pareto-superior to conventionally written rules. Reg neg may reduce transactions costs in two ways.
First, participation in writing the proposed rule (which sets the agenda that determines the final rule) is direct, at least for
the participants. In conventional rule making, each interest has a repeated, bilateral relation with the rule-making agency; the rulemaking agency proposes the rule (and thereby controls the agenda for the final rule), and affected interests respond separately to
what is in the agency proposal. In negotiated rule making, each interest (including the agency) is in a repeated N-person set of mutual relations;
the negotiating group drafts the proposed rule, thereby setting the agenda for the final rule. Since
the agency probably knows
less about each group's costs and benefits than the group knows about its own costs and benefits, the
rule that emerges from direct negotiation should be a more accurate reflection of net benefits than one
that is written by the agency (even though the agency tries to be responsive to the affected parties). In effect, reg neg can be
expected to better establish a core relationship of trust, reputation, and reciprocity that Ostrom (1998)
argues is central to improving net social benefits. Reg neg may reduce transactions costs not only by
entailing repeated mutual rather than bilateral relations, but also by face to face communication. Ostrom
(1998, 13) argues that face-to-face communication reduces transactions costs by making it easier to assess
trustworthiness and by lowering the decision costs of reaching a "contingent agreement," in which
"individuals agree to contribute x resources to a common effort so long as at least y others also
contribute." In fact, our survey results show that reg neg participants are significantly more likely than
conventional rule-making participants to believe that others will comply with the final rule (exhibit 1). In the
absence of outside assessments that compare net social benefits of the conventional and negotiated rules in this study,15 the hypothesis that
reg neg is Pareto superior to conventional rule making remains an untested speculation. Nonetheless, it seems to be a plausible hypothesis
based on recent theories regarding the importance of institutions that foster participation in helping to effect Pareto-preferred social
outcomes.
A consensus will be reached—parties have incentives to cooperate and compromise
Harter 09
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States. Harter, P. J. “Collaboration: The Future of Governance,” Journal of Dispute Resolution, Volume 2009, Issue 2, Article 7.
2009. http://scholarship.law.missouri.edu/cgi/viewcontent.cgi?article=1581&context=jdr//ghs-kw)
Consensus is often misunderstood. It is typically used, derisively, to mean a group decision that is the consequence of a "group
think" that resulted from little or no exploration of the issues, with neither general inquiry, discussion, nor deliberation. A common example
would be the boss's saying, "Do we all agree? . . . Good, we have a consensus!" In this context, consensus is the acquiescence to an accepted
point of view. It is, as is often alleged, the lowest common denominator that is developed precisely to avoid controversy as opposed to
generating a better answer. It is a decision resulting from the lack of diversity. It is in fact actually a cascade that may be more extreme than the
views of any member! Thus, the question legitimately is, if this is the understanding of the term, would you want it if you could get it, or would
the result be too diluted? A number of articles posit, with neither understanding nor research, that it always results in the least common
denominator. Done right, however, consensus
is exactly the opposite: it is the wisdom of crowds. It builds on the insights
and experiences of diversity. And it is a vital element of collaborative governance in terms of actually reaching agreement and in
terms of the quality of the resulting agreement. That undoubtedly sounds counterintuitive, especially for the difficult,
complex, controversial matters that are customarily the subject of direct negotiations among governments and
their constituents. Indeed, you often hear that it can't be done. One would expect that the controversy would
make consensus unlikely or that if concurrence were obtained, it would likely be so watered down—that least common denominator
again—that it would not be worth much. But, interestingly, it has exactly the opposite effect. Consensus can mean many
things so it is important to understand what is consensus for these purposes. The default definition of consensus in the Negotiated Rulemaking
Act is the "unanimous concurrence among the interests represented on [the] . . . committee." Thus, each
interest has a veto over
the decision, and any party may block a final agreement by withholding concurrence. Consensus has a
significant impact on how the negotiations actually function: ■ It makes it "safe" to come to the table. If
the committee were to make decisions by voting, even if a supermajority were required, a party might
fear being outvoted. In that case, it would logically continue to build power to achieve its will outside the
negotiations. Instead, it has the power inside the room to prevent something from happening that it
cannot live with. Thus, at least for the duration of the negotiations, the party can focus on the substance of the policy
and not build political might. ■ The committee is converted from a group of disparate, often antagonistic,
interests into one with a common purpose: reaching a mutually acceptable agreement. During a policy
negotiation such as this, you can actually feel the committee snap together into a coherent whole when the
members realize that. ■ It forces the parties to deal with each other which prevents "rolling" someone:
"OK, I have the votes, so shut up and let's vote." Rolling someone in a negotiation is a very good way to create an opponent, to you and to any
resulting agreement. Having
to actually listen to each other also creates a friction of ideas that results in better decisions—
instead of a cascade, it generates the "wisdom of crowds." ■ It enables the parties to make sophisticated
proposals in which they agree to do something, but only if other parties agree to do something in
return. These "if, but only if" offers cannot be made in a voting situation for fear that the offeror would not obtain the necessary quid pro quo.
■ It also enables the parties to develop and present information they might otherwise be reluctant to share for fear of
its being misused or used against them. A veto prevents that. ■ If a party cannot control the decision, it will logically
amass as much factual information as possible in order to limit the discretion available to the one making the decision; the
theory is that if you win on the facts, the range of choices as to what to do on the policy is considerably narrowed. Thus, records are stuffed
with data that may well be irrelevant to the outcome or on which the parties largely agree. If
the decision is made by consensus,
the parties do control the outcome, and as a result, they can concentrate on making the final decision. The
question for the committee then becomes, how much information do we need to make a responsible resolution? The committee may not
need to resolve many of the underlying facts before a policy choice is clear. Interestingly, therefore, the
use of consensus can
significantly reduce the amount of defensive (or probably more accurately, offensive) record-building that
customarily attends adversarial processes. ■ It forces the parties to look at the agreement as a whole—
consensus is reached only on the entire package, not its individual elements. The very essence of negotiation is that different parties value
issues differently. What is important to one
party is not so important to another, and that makes for trades that maximize
overall value. The resulting agreement can be analogized to buying a house: something is always wrong
with any house you would consider buying (price, location, kitchen needs repair, etc.), but you cannot buy only part
of a house or move it to another location; the choice must be made as to which house—the entire
thing—you will purchase. ■ It also means that the resulting decision will not stray from the statutory mandate.
That is because one of the parties to the negotiation is very likely to benefit from an adherence to the statutory requirements and would not
concur in a decision that did not implement it. ■ Finally, if
all of the parties represented concur in the outcome, the
likelihood of a successful challenge is greatly reduced so that the decision has a rare degree of finality.
2NC AT Speed
Reg neg is better—solves faster
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States. Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Properly understood, therefore, the
average length of EPA’s negotiated rulemakings — the time it took EPA to
fulfill its goal — was 751 days or 32% faster than traditional rulemaking. This knocks a full year off the
average time it takes EPA to develop rule by the traditional method. And, note these are highly complex
and controversial rules and that one of them survived Presidential intervention. Thus, the dynamics
surrounding these rules are by no means “average.” This means that reg neg’s actual performance is
much better than that. Interestingly and consistently, the average time for all of EPA’s reg negs when viewed in context is virtually
identical to that of the sample drawn by Kerwin and Furlong77 — differing by less than a month. Furthermore, if all of the reg negs that were
conducted by all the agencies that were included in Coglianese’s table78 were analyzed along the same lines as discussed here,79 the
average time for all negotiated rulemakings drops to less than 685 days.80 No Substantive Review of Rules Based on
Reg Neg Consensus. Coglianese argues that negotiated rules are actually subjected to a higher incident of judicial review than are rules
developed by traditional methods, at least those issued by EPA.81 But, like his analysis of the time it takes to develop rules, Coglianese fails to
look at either what happened in the negotiated rulemaking itself or the nature of any challenge. For example, he makes much of the fact that
the Grand Canyon visibility rule was challenged by interests that were not a party to the negotiations;82 yet, he also points out that this rule
was not developed under the Negotiated Rulemaking Act83 which explicitly establishes procedures that are designed to ensure that each
interest can be represented. This challenge demonstrates the value of convening negotiations.84 And, it is significantly misleading to include it
when discussing the judicial review of negotiated rules since the process of reg neg was not followed. As for Reformulated Gasoline, the rule as
issued by EPA did not reflect the consensus but rather was modified by EPA under the direction of President Bush.85 There were, indeed, a
number of challenges to the application of the rule,86 but amazingly little to the rule itself given its history. Indeed, after the proposal was
changed, many members of the committee continued to meet in an effort to put Humpty Dumpty back together again, which they largely did;
the fact that the rule had been negotiated not only resulted in a much better rule,87 it enabled the rule
to withstand in large part a massive assault. Coglianese also somehow attributes a challenge within the World Trade
Organization to a shortcoming of reg neg even though such issues were explicitly outside the purview of the committee; to criticize reg neg
here is like saying surgery is not effective when the patient refused to undergo it. While the Underground Injection rule was challenged, the
committee never reached an agreement88 and, moreover, the convening report made clear that there were very strong disagreements over
the interpretation of the governing statute that would likely have to be resolved by a Court of Appeals. Coglianese also asserts that the
Equipment Leaks rule was the subject of review; it was, but only because the Clean Air Act requires parties to file challenges in a very short period,
and a challenger therefore filed a defensive challenge while it worked out some minor details over the regulation. Those negotiations were
successful and the challenge was withdrawn. The Chemical Manufacturers Association, the challenger, had no intention of a substantive
challenge.89 Moreover, a challenge to other parts of the HON should not be ascribed to the Equipment Leaks part of the rule. The agreement in
the Asbestos in Schools negotiation explicitly contemplated judicial review — strange, but true — and hence it came as no surprise and as no
violation of the agreement. As for the Wood Furniture Rule, the challenges were withdrawn after informal negotiations in which EPA agreed to
propose amendments to the rule.90 Similarly, the challenge to EPA’s Disinfectant By-Products Rule91 was withdrawn. In short, the rules that
have emerged from negotiated rulemaking have been remarkably resistant to substantive challenges. And, indeed, this far into the
development of the process, the standard of review and the extent to which an agreement may be binding on either a signatory or someone
whom a party purports to represent are still unknown — the speculation of many an administrative law class.92 Thus, here too, Coglianese
paints a substantially misleading picture by failing to distinguish substantive challenges to rules that are
based on a consensus from either challenges to issues that were not the subject of negotiations or were
filed while some details were worked out. Properly understood, reg negs have been phenomenally
successful in warding off substantive review.
Reg negs solve faster and better—Coglianese’s results concluded neg when properly
interpreted
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States. Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Negotiated Rulemaking Has Fulfilled its Goals. If “better rules” were the aspirations for negotiated rulemaking, the question
remains as to whether the process has lived up to the expectations. From my own personal experience, the rules that emerge from
negotiated rulemaking tend to be both more stringent and yet more cost effective to implement. That
somewhat paradoxical result comes precisely from the practical orientation of the committee: it can
figure out what information is needed to make a reasonable, responsible decision and then what actions
will best achieve the goal; it can, therefore, avoid common regulatory mistakes that are costly but do
not contribute substantially to accomplishing the task. The only formal evaluation of negotiated rulemaking that has been
conducted supports these observations. After his early article analyzing the time required for negotiated rulemaking, Neil Kerwin undertook an
evaluation of negotiated rulemaking at the Environmental Protection Agency with Dr. Laura Langbein.103 Kerwin
and Langbein
conducted a study of negotiated rulemaking by examining what actually occurs in a reg neg versus the development of rules by
conventional means. To establish the requisite comparison, they “collected data on litigation, data from the comments on proposed rules, and
data from systematic, open-ended interviews with participants in 8 negotiated rules . . . and in 6 ‘comparable’ conventional rules.”104 They
interviewed 51 participants of conventional rulemaking and 101 from various negotiated rulemaking committees.105 Kerwin
and
Langbein’s important work provides the only rigorous, empirical evaluation that compares a number of
factors of conventional and negotiated rulemaking. Their overall conclusion is: Our research contains strong but
qualified support for the continued use of negotiated rulemaking. The strong support comes in the form
of positive assessments provided by participants in negotiated rulemaking compared to assessments
offered by those involved in conventional form of regulation development. Further, there is no evidence
that negotiated rules comprise an abrogation of agency authority, and negotiated rules appear no more
(or less) subject to litigation than conventional rules. It is also true that negotiated rulemaking at the EPA is used largely to
develop rules that entail particularly complex issues regarding the implementation and enforcement of legal obligations rather than those that
set the substantive standards themselves. However, participants’
assessments of the resulting rules are more positive
when the issues to be decided entail those of establishing rather than enforcing the standard. Further,
participants’ assessments are also more positive when the issues to be decided are relatively more
complex. Our research would support a recommendation that negotiated rulemaking continue to be applied to complex issues, and more
widely applied to include those entailing the standard itself.106 Their findings are particularly powerful when comparing individual attributes of
negotiated and conventional rules. Table 3 contains a summary of those comparisons. Importantly, negotiated
rules were viewed
more favorably in every criteria, and significantly so in several dimensions that are often contentious in
regulatory debates — • the economic efficiency of the rule and its cost effectiveness • the quality of the scientific evidence and the
incorporation of appropriate technology, and • “personal experience.” While “personal experience” is not usually considered in dialogues over regulatory procedure, Kerwin and Langbein’s findings here too favor negotiated rules. Conclusion. The
benefits envisioned by the proponents of
negotiated rulemaking have indeed been realized. That is demonstrated both by Coglianese’s own
methodology when properly understood and by the only careful and comprehensive comparative
study. Reg neg has proven to be an enormously powerful tool in addressing highly complex, politicized
rules. These are the very kind that stall agencies when using traditional or conventional procedures.107
Properly understood and used appropriately, negotiated rulemaking does indeed fulfill its expectations.
2NC AT Transparency
The process is transparent
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctor of Juridical Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I. Langbein is Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice at American University. She holds a PhD in Political Science from the University of North Carolina and a BA in Government from
Oberlin College. Freeman, J. and Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
Defenders of reg neg retorted that negotiated
rules were far from secret deals. The Negotiated Rulemaking Act of
1990 (“NRA”) requires federal agencies to provide notice of regulatory negotiations in the Federal
Register,50 to formally charter reg neg committees,51 and to observe the transparency and
accountability requirements52 of the Federal Advisory Committee Act.53 Any individual or organization that might
be “significantly affected” by a proposed rule can apply for membership in a reg neg committee,54 and even if the agency rejects their
application, they remain free to attend as spectators.55 Most significantly, the NRA requires that the agency submit
negotiated rules to traditional notice and comment.56
2NC AT Undemocratic
The process is equal and fair
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctor of Juridical Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I. Langbein is Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice at American University. She holds a PhD in Political Science from the University of North Carolina and a BA in Government from
Oberlin College. Freeman, J. and Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Law Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
On balance, the combined results of Phase I and II of the study suggest that reg
neg is superior to conventional rulemaking on
virtually all of the measures that were considered. Strikingly, the process engenders a significant learning effect,
especially compared to conventional rulemaking; participants report, moreover, that this learning has long-term
value not confined to a particular rulemaking. Most significantly, the negotiation of rules appears to enhance
the legitimacy of outcomes. Kerwin and Langbein's data indicate that process matters to perceptions of legitimacy.
Moreover, as we have seen, reg neg participant reports of higher satisfaction could not be explained by their assessments of the outcome
alone. Instead, higher satisfaction seems to arise in part from a combination of process and substance variables. This suggests a link between
procedure and satisfaction, which is consistent with the mounting evidence in social psychology that "satisfaction is one of the principal
consequences of procedural fairness." This potential for procedure to enhance satisfaction may prove especially salutary precisely when
participants do not favor outcomes. As Tyler and Lind have suggested, "hedonic glee" over positive outcomes may "obliterate" procedural
effects; perceptions of procedural fairness may matter more, however, "when outcomes are negative (and) organizations have the greatest
need to render decisions more palatable, to blunt discontent, and to give losers reasons to stay committed to the organization." At
a
minimum, the data call into question—and sometimes flatly contradict—most of the theoretical criticisms of
reg neg that have surfaced in the scholarly literature over the last twenty years. There is no evidence
that negotiated rulemaking abrogates an agency's responsibility to implement legislation. Nor does it
appear to exacerbate power imbalances or increase the risk of capture. When asked whether any
party seemed to have disproportionate influence during the development of the rule, about the same
proportion of reg neg and conventional participants said yes. Parties perceived their influence to be
about the same for conventional and negotiated rules, undermining the hypothesis that reg neg
exacerbates capture.
Commissions CP
1NC
Counterplan: The United States Congress should establish an independent commission
empowered to submit to Congress recommendations regarding domestic federal
government surveillance. Congress will have 60 days to pass legislation overriding the
recommendations by a two-thirds majority. If Congress doesn’t vote within the
specified period, those recommendations will become law. The Commission should
recommend to Congress that _____<insert the plan>_______
Commission solves the plan
RWB 13
(Reporters Without Borders is a UNESCO and UN Consultant and a non-profit organization. “US congress urged to create commission to
investigate mass snooping,” RWB, 06-10-2013. https://en.rsf.org/united-states-us-congress-urged-to-create-10-06-2013,44748.html//ghskw)
Reporters Without Borders calls on the US Congress to create a commission of enquiry into the links
between US intelligence agencies and nine leading Internet sector companies that are alleged to have
given them access to their servers. The commission should also identify all the countries and
organizations that have contributed to the mass digital surveillance machinery that – according to reports in the
Washington Post and Guardian newspapers in the past few days – the US authorities have created. According to these reports, the
telephone company Verizon hands over the details of the phone calls of millions of US and foreign citizens
every day to the National Security Agency (NSA), while nine Internet majors – including Microsoft, Yahoo,
Facebook, Google and Apple – have given the FBI and NSA direct access to their users’ data under a secret
programme called Prism. US intelligence agencies are reportedly able to access all of the emails, audio and
video files, instant messaging conversations and connection data transiting through these companies’
servers. According to The Guardian, Government Communication Headquarters (GCHQ), the NSA’s British equivalent, also has access to data
collected under Prism. The proposed congressional commission should evaluate the degree to which the collected
data violates privacy and therefore also freedom of expression and information. The commission’s
findings must not be classified as defence secrets. These issues – protection of privacy and freedom of expression – are
matters of public interest.
2NC O/V
Counterplan solves 100% of the case—Congress creates an independent commission
composed of experts to debate the merits of the plan, and the commission
recommends to Congress that it pass the plan—Congress must pass legislation
specifically blocking those recommendations within 60 days or the commission’s
recommendations become law
AND, that solves the AFF—commissions are empowered to debate Internet backdoors
and submit recommendations—that’s RWB
2NC Solvency
Empirics prove commissions solve
FT 10
(Andrews, Edmund. “Deficit Panel Faces Obstacles in Poisonous Political Atmosphere,” Fiscal Times. 02-18-2010.
http://www.thefiscaltimes.com/Articles/2010/02/18/Fiscal-Commission-Faces-Big-Obstacles?page=0%2C1//ghs-kw)
Supporters of a bipartisan deficit commission note that at
least two previous presidential commissions succeeded at
breaking through intractable political problems when Congress was paralyzed. The 1983 Greenspan
commission, headed by Alan Greenspan, who later became chairman of the Federal Reserve, reached an historic agreement
to gradually raise Social Security taxes and gradually increase the minimum age at which workers
qualify for Social Security retirement benefits. Those recommendations passed both the House and
Senate, and averted a potentially catastrophic financial crisis with Social Security.
2NC Solves Better
CP solves better—technical complexity
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional Commissions:
Overview, Structure, and Legislative Considerations ,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Obtaining Expertise Congress
may choose to establish a commission when legislators and their staffs do not
currently have sufficient knowledge or expertise in a complex policy area.22 By assembling experts with
backgrounds in particular policy areas to focus on a specific mission, legislators might efficiently obtain
insight into complex public policy problems.23
2NC Cybersecurity Solvency
Commissions are key—solves legitimacy and perception
Abrahams and Bryen 14
(Rebecca Abrahams and Dr. Stephen Bryen, CCO and Chairman of Ziklag Systems, respectively. "Investigating Heartbleed," Huffington Post.
04-11-2014. http://www.huffingtonpost.com/rebecca-abrahams/investigating-heartbleed_b_5134404.html//ghs-kw)
But who can investigate the matter? This is a non-trivial question because the government is no longer
trustworthy. Congress could set up an independent commission to investigate compromises to
computer security. It should be staffed by experts in cryptography and by national security specialists.
The Commission, if empowered, should also make recommendations on a way forward for internet
security. What is needed is a system that is accountable, where the participants are reliable, and where
there is security from interference of any kind. Right now, no one can, or should, trust the Internet.
2NC Politics NB
No link to politics—commissions result in bipartisanship and bypass Congressional
politics
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional Commissions:
Overview, Structure, and Legislative Considerations ,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Overcoming Political Complexity Complex policy issues may also create institutional problems because they do not fall neatly within the
jurisdiction of any particular committee in Congress.26 By virtue of their ad hoc status, commissions may circumvent such issues. Similarly, a
commission may allow particular legislation or policy solutions to bypass the traditional development
process in Congress, potentially removing some of the impediments inherent in a decentralized legislature.27
Consensus Building Legislators seeking policy changes may be confronted by an array of political interests,
some in favor of proposed changes and some against. When these interests clash, the resulting
legislation may encounter gridlock in the highly structured political institution of the modern Congress.28
By creating a commission, Congress can place policy debates in a potentially more flexible
environment, where congressional and public attention can be developed over time.29 Reducing Partisanship
Solutions to policy problems produced within the normal legislative process may also suffer politically
from charges of partisanship.30 Similar charges may be made against investigations conducted by Congress.31 The nonpartisan or bipartisan character of most congressional commissions may make their findings and
recommendations less susceptible to such charges and more politically acceptable to diverse
viewpoints. The bipartisan or nonpartisan arrangement can potentially give their recommendations
strong credibility, both in Congress and among the public, even when dealing with divisive issues of
public policy.32 Commissions may also give political factions space to negotiate compromises in good
faith, bypassing the short-term tactical political maneuvers that accompany public negotiations.33 Similarly,
because commission members are not elected, they may be better suited to suggesting unpopular, but
necessary, policy solutions.34 Solving Collective Action Problems A commission may allow legislators to solve
collective action problems, situations in which all legislators individually seek to protect the interests of
their own district, despite widespread agreement that the collective result of such interests is something
none of them prefer. Legislators can use a commission to jointly “tie their hands” in such circumstances,
allowing general consensus about a particular policy solution to avoid being impeded by individual
concerns about the effect or implementation of the solution.35 For example, in 1988 Congress established
the Base Closure and Realignment Commission (BRAC) as a politically and geographically neutral body to
make independent decisions about closures of military bases.36 The list of bases slated for closure by the
commission was required to be either accepted or rejected as a whole by Congress, bypassing internal
congressional politics over which individual bases would be closed, and protecting individual Members from
political charges that they didn’t “save” their district’s base.37
CP avoids the focus link to politics
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional Commissions:
Overview, Structure, and Legislative Considerations ,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Overcoming Issue Complexity Complex
policy issues may cause time management challenges for Congress.
Legislators often keep busy schedules and may not have time to deal with intricate or technical policy
problems, particularly if the issues require consistent attention over a period of time.24 A commission
can devote itself to a particular issue full-time, and can focus on an individual problem without
distraction.25
No link to politics—commissions create bipartisan negotiations
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of California,
Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College, Dr. Campbell was a
Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's Subcommittee on Terrorism,
Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for the congressman. Before that, he was an
Analyst in American National Government at the Congressional Research Service, an Associate Professor of Political Science at Florida
International University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to Senator
Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political
Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles
on the legislative process. Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary.
Web. 27 July 2015. Ghs-kw.)
The third major reason for Congress to delegate to a commission is the strategy of distancing itself from a
politically risky decision. These instances generally occur when Congress faces redistributive policy problems, such as Social Security,
military base closures, Medicare, and welfare. Such problems are the most difficult because legislators must take a
clear policy position on something that has greater costs to their districts than benefits, or that shifts resources visibly from
one group to another. Institutionally, Congress has to make national policy that has a collective benefit, but the self-interest of lawmakers often
gets in the way. Members realize
that their individual interests, based on constituents’ demands, may be at
odds with the national interest, and this can lead to possible electoral repercussions. 55 Even when
pursuing policies that are in the interests of the country as a whole, legislators do not want to be blamed
for causing losses to their constituents. In such an event, the split characteristics of the institution come
into direct conflict. Many on Capitol Hill endorse a commission for effectively resolving a policy problem rather than the other machinery
available to Congress. A commission finds remedies when the normal decision making process has stalled. A long-time Senate staff director said
of the proposed Second National Blue Ribbon Commission to Eliminate Waste in Government: “At
their most effective, these
panels allow Congress to realize purposes most members cannot find the confidence to do unless
otherwise done behind the words of the commission.” 56 When an issue imposes concentrated costs on individual
districts yet provides dispersed benefits to the nation, Congress responds by masking legislators’ individual contributions and delegates
responsibility to a commission for making unpleasant decisions. 57 Members
avoid blame and promote good policy by
saying something is out of their hands. This method allows legislators— especially those aiming for
reelection— to vote for the general benefit of something without ever having to support a plan that
directly imposes large and traceable geographic costs on their constituents. The avoidance or share-the-blame route
was much of the way Congress and the president finally dealt with the problem of financially shoring
up Social Security in the late 1980s. One senior staff assistant to a western Republican representative observed that the creation
of the Social Security Commission was largely for avoidance: “There are sacred cows and then there is Social Security. Neither party or any
politician wants to cut this. Regardless of what you say or do about it, in the end, you defer. Everyone backs away from this.” Similarly, a
legislative director to a southern Democratic representative summarized: “So many people are getting older and when you take a look at who
turns out, who registers, people over sixty-five have the highest turnout and they vote like clockwork.” The Commission on Executive,
Legislative, and Judicial Salaries, later referred to as the Quadrennial Commission (1967), is another example. Lawmakers delegated to a
commission the power to set pay for themselves and other top federal officials, whose pay they linked to their own, to help them avoid blame.
Increasing their own pay is a decision few politicians willingly endorse. Because
the proposal made by the commission
would take effect unless Congress voted to oppose it, the use of the commission helped insulate
legislators from political hazards. 58 That is, because it was the commission that granted pay raises, legislators could tell their
constituents that they would have voted against the increase if given the chance. Members could get the pay raise and also the credit for
opposing it. Redistribution is the most visible public policy type because it involves the most conspicuous, long run allocations of values and
resources. Most divisive socioeconomic issues— affirmative action, medical care for the aged, aid to depressed geographic areas, public
housing, and the elimination of identifiable governmental actions— involve debates over equality or inequality and degrees of redistribution.
These are “political hot potatoes, in which a commission is a good means of putting a fire wall between
you [the lawmaker] and that hot potato,” the chief of staff to a midwestern Democratic representative acknowledged. Base
closing took on a redistributive character as federal expenditures outpaced revenues. It was marked not only by extreme conflict but also by
techniques to mask or sugarcoat the redistributions or make them more palatable. The
Base Closure Commission (1991) was
created with an important provision that allowed for silent congressional approval of its
recommendations. Congress required the commission to submit its reports of proposed closures to the secretary of defense. The
president had fifteen days to approve or disapprove the list in its entirety. If approved, the list of recommended base closures became final
unless both houses of Congress adopted a joint resolution of disapproval within forty-five days. Congress had to consider and vote on the
recommendations en bloc rather than one by one, thereby giving the appearance of spreading the misery equally to affected clienteles. A
former staff aide for the Senate Armed Services Committee who was active in the creation of the Base Closure Commission contended,
“There was simply no political will by Congress. The then-secretary of defense started the process [base closing] with an inhouse commission [within the Defense Department]. Eventually, however, Congress used the commission idea as a
‘scheme’ for a way out of a ‘box.’” CONCLUSION Many congressional scholars attribute delegation principally to electoral
considerations. 59 For example, in the delegation of legislative authority to standing committees, legislators, keen on maximizing their
reelection prospects, request assignments to committees whose jurisdictions coincide with the interests of key groups in their districts.
Delegation of legislative functions to the president, to nonelected officials in the federal bureaucracy, or to ad hoc commissions also grows out
of electoral motives. Here, delegation
fosters the avoidance of blame. 60 Mindful that most policies entail both
costs and benefits, and apprehensive that those suffering the costs will hold them responsible, members
of Congress often find that the most attractive option is to let someone else make the tough choices.
Others see congressional delegation as unavoidable (and even desirable) in light of basic structural flaws in the design of Congress. 61 They
argue that Congress is incapable of crafting policies that address the full complexity of modern-day problems. 62 Another charge is that
congressional action can be stymied at several junctures in the legislative policymaking process.
Congress is decentralized, having few mechanisms for integrating or coordinating its policy decisions; it
is an institution of bargaining, consensus-seeking, and compromise. The logic of delegation is broad: to
fashion solutions to tough problems, to broker disputes, to build consensus, and to keep fragile
coalitions together. The commission co-opts the most publicly ideological and privately pragmatic, the
liberal left and the conservative right. Leaders of both parties or their designated representatives can
negotiate a deal without the media, the public, or interest groups present. When deliberations are private, parties can make
offers without being denounced either by their opponents or by affected groups. Removing external contact
reduces the opportunity to use an offer from the other side to curry favor with constituents.
2NC Commissions Popular
Commissions give political cover—result in compromise
Fiscal Seminar 9
(The Fiscal Seminar is a group of scholars who meet on a regular basis, under the auspices of The Brookings Institution and The Heritage
Foundation, to discuss federal budget and fiscal policy issues. The members of the Fiscal Seminar acknowledge the contributions of Paul
Cullinan, a former colleague and Brookings scholar, in the development of this paper, and the editorial assistance of Emily Monea. “THE
POTENTIAL ROLE OF ENTITLEMENT OR BUDGET COMMISSIONS IN ADDRESSING LONG-TERM BUDGET PROBLEMS,” The Fiscal Seminar. 06-2009.)
In contrast, the
Greenspan Commission provided a forum for developing a political compromise on a set of
politically unsavory changes. In this case, the political parties shared a deep concern about the impending insolvency
of the Social Security system but feared the exposure of promoting their own solutions. The commission created
political cover for the serious background negotiations that resulted in the ultimate compromise. The
structure of the commission reflected these concerns and was composed of fifteen members, with the
President, the Senate Majority Leader, and the Speaker of the House each appointing five members to
the panel.
2NC AT Perm do the CP
Permutation is severance:
1. Severance: CP’s mechanism is distinct—delegates to the commission and isn’t
Congressional action
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, coauthor, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
So why
and when does Congress formulate policy by commissions rather than by the normal
legislative process? Lawmakers have historically delegated authority to others who could
accomplish ends they could not. Does this form of congressional delegation thus reflect the particularities of an issue
area? Or does it mirror deeper structural reasons such as legislative organization, time, or manageability? In the end, what is the
impact on representation versus the effectiveness of delegating discretionary authority to temporary entities composed largely of
unelected officials, or are both attainable together?
2. Severs resolved: resolved means to enact by law—not the counterplan
mandate
Words and Phrases 64 vol 37A
Definition of the word “resolve,” given by Webster is “to express an opinion or determination by resolution or vote; as
‘it was resolved by the legislature;” It is of similar force to the word “enact,” which is defined by Bouvier as
meaning “to establish by law”.
3. Severs should: Should requires immediate action
Summers 94 (Justice – Oklahoma Supreme Court, “Kelsey v. Dollarsaver Food Warehouse of
Durant”, 1994 OK 123, 11-8,
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#marker3fn13)
¶4
The legal question to be resolved by the court is whether the word "should"13 in the May 18 order
connotes futurity or may be deemed a ruling in praesenti.14 The answer to this query is not to be divined from rules of
grammar;15 it must be governed by the age-old practice culture of legal professionals and its immemorial language usage. To
determine if the omission (from the critical May 18 entry) of the turgid phrase, "and the same hereby is", (1) makes it an in futuro
ruling - i.e., an expression of what the judge will or would do at a later stage - or (2) constitutes an in praesenti resolution of a
disputed law issue, the trial judge's intent must be garnered from the four corners of the entire record.16 [CONTINUES – TO
FOOTNOTE] 13 "Should" not only is used as a "present indicative" synonymous with ought but also is the past tense of "shall" with
various shades of meaning not always easy to analyze. See 57 C.J. Shall § 9, Judgments § 121 (1932). O. JESPERSEN, GROWTH AND
STRUCTURE OF THE ENGLISH LANGUAGE (1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a more
detailed explanation, see the Partridge quotation infra note 15. Certain
contexts mandate a construction of the
term "should" as more than merely indicating preference or desirability. Brown, supra at 1080-81 (jury
instructions stating that jurors "should" reduce the amount of damages in proportion to the amount of contributory negligence
of the plaintiff was held to imply an obligation and to be more than advisory); Carrigan v. California Horse
Racing Board, 60 Wash. App. 79, 802 P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a party "should devote
a section of the brief to the request for the fee or expenses" was interpreted to mean that a
party is under an obligation
to include the requested segment); State v. Rack, 318 S.W.2d 211, 215 (Mo. 1958) ("should" would mean the same as
"shall" or "must" when used in an instruction to the jury which tells the triers they "should disregard false testimony"). 14 In
praesenti means literally "at the present time." BLACK'S LAW DICTIONARY 792 (6th Ed. 1990). In legal parlance the
phrase denotes that which in law is presently or immediately effective, as opposed to something that
will or would become effective in the future [in futuro]. See Van Wyck v. Knevals, 106 U.S. 360, 365, 1 S.Ct. 336, 337, 27
L.Ed. 201 (1882).
4. Severs should again: should is mandatory
Summers 94 (Justice – Oklahoma Supreme Court, “Kelsey v. Dollarsaver Food Warehouse of
Durant”, 1994 OK 123, 11-8,
http://www.oscn.net/applications/oscn/DeliverDocument.asp?CiteID=20287#marker3fn13)
¶4
The legal question to be resolved by the court is whether the word "should"13 in the May 18 order connotes futurity or may be
deemed a ruling in praesenti.14 The answer to this query is not to be divined from rules of grammar;15 it must be governed by the
age-old practice culture of legal professionals and its immemorial language usage. To determine if the omission (from the critical
May 18 entry) of the turgid phrase, "and the same hereby is", (1) makes it an in futuro ruling - i.e., an expression of what the judge
will or would do at a later stage - or (2) constitutes an in praesenti resolution of a disputed law issue, the trial judge's intent must
be garnered from the four corners of the entire record.16 [CONTINUES – TO FOOTNOTE] 13 "Should" not only is used as a "present
indicative" synonymous with ought but also is the past tense of "shall" with various shades of meaning not always easy to analyze.
See 57 C.J. Shall § 9, Judgments § 121 (1932). O. JESPERSEN, GROWTH AND STRUCTURE OF THE ENGLISH LANGUAGE (1984); St.
Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a more detailed explanation, see the Partridge quotation
infra note 15. Certain
contexts mandate a construction of the term "should" as more than merely
indicating preference or desirability. Brown, supra at 1080-81 (jury instructions stating that jurors "should" reduce
the amount of damages in proportion to the amount of contributory negligence of the plaintiff was held to imply an
obligation and to be more than advisory); Carrigan v. California Horse Racing Board, 60 Wash. App. 79, 802 P.2d 813
(1990) (one of the Rules of Appellate Procedure requiring that a party "should devote a section of the brief to the request for the fee
or expenses" was interpreted to mean that a party is under an obligation to include the requested segment); State v. Rack, 318
S.W.2d 211, 215 (Mo. 1958) ("should" would mean the same as "shall" or "must" when used in an instruction to the
jury which tells the triers they "should disregard false testimony"). 14 In praesenti means literally "at the present time." BLACK'S
LAW DICTIONARY 792 (6th Ed. 1990). In legal parlance the phrase denotes that which in law is presently or immediately effective, as
opposed to something that will or would become effective in the future [in futuro]. See Van Wyck v. Knevals, 106 U.S. 360, 365, 1
S.Ct. 336, 337, 27 L.Ed. 201 (1882).
Severance is a reason to reject the team:
1. Neg ground—makes the AFF a shifting target and allows them to spike out of
offense
2. Unpredictable—kills clash which destroys advocacy skills and education
2NC AT Perm do Both
Permutation do both links to politics:
1. Congressional debates—CP means Congress doesn’t debate the substance of
the plan, only the commission report—perm makes Congress debate the
plan, triggers the link over partisan inclinations and electoral pressures—that’s
the politics net benefit ev
2. Time crunch—perm forces the plan now, doesn’t give the commission time to
generate political support and links to politics
Biggs 09
(Biggs, Andrews G. Andrew G. Biggs is a resident scholar at the American Enterprise Institute, where his work focuses on Social
Security and pensions. From 2003 through 2008, he served at the Social Security Administration, as Associate Commissioner for
Retirement Policy, Deputy Commissioner for Policy, and ultimately the principal Deputy Commissioner of the agency. During
2005, he worked at the White House National Economic Council on Social Security reform, and in 2001 was on the staff of the
President's Commission to Strengthen Social Security. He blogs on Social Security-related issues at Notes on Social Security
Reform. “Rumors Of Obama Social Security Reform Commission,” Frum Forum. 02-17-2009. http://www.frumforum.com/rumorsof-obama-social-security-reform-commission///ghs-kw)
One problem with President Bush’s 2001 Commission was that it didn’t represent the reasonable
spectrum of beliefs on Social Security reform. This didn’t make it a dishonest commission; like President Roosevelt’s
Committee on Economic Security, it was designed to put flesh on the bones laid out by the President. In this case, the Commission
was tasked with designing a reform plan that included personal accounts and excluded tax increases. That said, a
commission
only builds political capital toward enacting reform if it’s seen as building a consensus through a
process in which all views have been heard. In both the 2001 Commission and the later 2005
reform drive, Democrats didn’t feel they were part of the process. They clearly will be a central part of the
process this time, but the goal will now be to include Republicans. Just as Republicans shouldn’t reflexively oppose any Obama
administration reform plans for political reasons, so Democrats shouldn’t seek to exclude Republicans from the process. Second, a
reform task force should include a variety of different players, including members of
government, both legislative and executive, representatives of outside interest groups, and
experts who can provide technical advice and help ensure the integrity of the reforms decided
upon. The 2001 Bush Commission didn’t include any sitting Members of Congress and only a small fraction of commissioners had
the technical expertise needed to make the plans the best they could be. A broader group would be helpful. Third, any task
force or commission needs time. The 2001 Commission ran roughly from May through
December of that year and had to conduct a number of public hearings. This was simply too
much to do in too little time, and as a result the plans were fairly bare bones. There is plenty
else on the policy agenda at the moment, so there’s no reason not to give a working group a
year or more to put things together.
2NC AT Theory
Counterinterp: process CPs are legitimate if we have a solvency advocate
AND, process CPs good:
1. Key to neg ground—agent CPs are the only generics we have on this topic
2. Policy education—commissions are key to understanding the policy process
Schwalbe, 03
(Steve, PhD in Public Policy from Auburn, former professor at the Air War College and Col. in the USAF. “Independent Commissions:
Their History, Utilization and Effectiveness”)
FIFTH BRANCH
Many analysts characterize commissions as an unofficial, separate branch of government, much like the news media. Campbell referred to
commissions as the “fifth arm of government,” after the media, the often-referred-to fourth arm.17 However, the media and independent
commissions have as many similarities as differences. They are similar in that neither is mentioned in the
Constitution. Both conduct oversight functions. Both serve to educate and inform the public. Both allow elites to participate in shaping government policy. On the other hand,
the media and independent commissions are dissimilar in many ways. Where the news media responds to market forces, and hence will likely operate in perpetuity, independent
commissions respond to a federal requirement to resolve a difficult problem. Therefore, they exist for a relatively short period of time, expiring once a final report is published
and disseminated. Where the media’s primary functions are reporting and analyzing the news, a commission’s primary responsibilities can
range from developing a recommended solution to a difficult problem to regulating an entire department of the executive branch. The media
receives its funding primarily from advertisers, where commissions receive their funding from Congress, the President, or from private sources.
The news media deal with issues foreign and domestic, while independent commissions generally focus on domestic issues.
PURPOSE
Commissions serve numerous purposes in the U.S. Government. Campbell cited three
primary reasons for the
establishment of federal independent commissions. First, they are established to provide expertise the Congress does not have among its own elected officials or their staffs.
Next, he noted that the second most frequently cited reason by members of Congress for establishing a commission was to reduce the workload in Congress. Finally, they are
formed to provide a convenient scapegoat to deflect the wrath of the electorate; i.e., “blame avoidance.”18 Fisher found three advantages of regulatory commissions. First,
commission members bring essential expert insights to a commission because the regulated industries are normally “complex and highly technical.” Second, appointing
commissioners for extended terms of full-time work allows commissioners to become very familiar with the technical aspects of an industry, through periodic contacts that
Congress would not be able to accomplish. As a result of their tenure, varied membership, and shared responsibility, commissioners would be resistant to external pressures.
Finally, regulatory commissions provide policy continuity essential to the stability of a regulated industry.19 What the taxpayers are primarily looking for from independent
commissions are non- partisan solutions to current problems. A good example of establishing a commission to find non-partisan solutions is Congress regulating its own ethical
behavior. University of Florida Professor Beth Rosenson researched this issue and concluded that authorizing an ethics commission may be “based on the fear of electoral
retaliation if legislators do not take aggressive action to regulate their own ethics.”20 Campbell noted that commissions perform several other functions besides providing
recommendations to the President and Congress. The most common reason provided by analysts is that members of Congress generally want to avoid making difficult decisions
that may adversely affect their chances for reelection. As he noted, “Incentives to avoid blame lead members of Congress to adopt a distinctive set of political strategies, such as
‘passing the buck’ or ‘deflection’….”21 Another technique legislators use to avoid incurring the wrath of the voters is to schedule any controversial independent commissions for
after the next election. Establish- ing a commission to research the issue and come up with recommendations after a preset period of time is an effective way to do that. The
most clear-cut example demonstrating this technique is the timing of the BRAC commissions in the 1990s — all three made their base closure recommendations in non-election
years (1991, 1993, and 1995). Even the next BRAC commission, established by the National Defense Authorization Act for Fiscal Year 2002, is not required to submit its base
closure recommendations until 2005. Congress certainly is not the most efficient organization in the U.S.; hence, there are times when an independent commission is the more
efficient and effective way to go. Law- makers are almost always short on time and information, which makes the option of delegating authority to a commission very appealing.
Oftentimes, the expertise and necessary information is very costly for Congress to acquire. Commissions are generally the most inexpensive way for Congress to solve complex
problems. From 1993-1997, Campbell found that 92 congressional offices introduced legislation that included proposals to establish ad hoc commissions.22 There are numerous
other reasons for establishing independent commissions. They are created as a symbolic response to a crisis or to satisfy the electorate at home. They have served as trial balloons
to test the political waters, or to make political gains with the voters. They can be created to gain public or political consensus. Often, when Congress has exhausted all its other
options, a commission serves as an option of last resort.23 Commissions are a relatively impartial way to help resolve problems between the executive and legislative branches
of government, especially during periods of congressional gridlock. Wolanin also noted that commissions are “particularly useful for problems and in circumstances marked by
federal executive branch incapacity.” Federal bureaucracies suffer from many of the same shortcomings attributed to Congress when considering commissions. They often lack
the expertise, information, and time to conduct the research and make recommendations to resolve internal problems. They can be afflicted by groupthink, not being able to
think outside the box, or by not being able to see the big picture. Commissions offer a non-partisan, neutral option to address bureaucratic
policy problems.24 Defense Secretary Donald Rumsfeld has decided to implement the recommendations of the
congressionally-chartered Commission on Space, which he chaired prior to being appointed Secretary of Defense!25
One of the more important functions of independent commissions is educating and persuading. Due to the high visibility of most appointed commissioners, a policy issue will
automatically tend to gain public attention. According to Wolanin, the prestige and visibility of commissions give them the capability to focus attention on a problem, and to see
that thinking about it permeates more rapidly. A recent example of a high-visibility commission chair appointment was Henry Kissinger, selected to chair the commission to look
into the perceived intelligence failure regarding the September 11, 2001 terrorist attack on the U.S. .26 Wolanin cited four educational impacts of commissions: 1) educating the
general public; 2) educating government officials; 3) serving as intellectual milestones; and, 4) educating the commission members themselves. Regarding education of the
general public, he stated that, “Commissions have helped to place broad new issues on the national agenda, to elevate them to a level of legitimate and pressing matters about
which government should take affirmative action.” Regarding educating government officials, he noted that, “The educational impact of
commissions within government…make it safer for congressmen and federal executives to openly
discuss or advocate a proposal that has been sanctioned by such an ‘august group’.” Commission reports have often been so
influential that they serve as milestones in affected fields. Such reports have become source material for analysts, commentators, and even
students, particularly when commission reports are widely published and disseminated. Finally, by serving on a commission, members also learn much about the issue, and about
the process of analyzing a problem and coming up with viable recommendations. Commissioners also learn from one another.27
3. Predictability—commissions are widely used and predictable and solvency
advocate checks
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of
California, Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College,
Dr. Campbell was a Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's
Subcommittee on Terrorism, Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for
the congressman. Before that, he was an Analyst in American National Government at the Congressional Research Service, an
Associate Professor of Political Science at Florida International University, and an American Political Science Association
Congressional Fellow, where he served as a policy adviser to Senator Bob Graham of Florida. Dr. Campbell is the author, coauthor, and co-editor of 11 books on Congress, most recently the Guide to Political Campaigns in America, and Impeaching
Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles on the legislative process.
Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary. Web. 27 July
2015. Ghs-kw.)
Ad hoc commissions as instruments of government have a long history. They are used by
almost all units and levels of government for almost every conceivable task. Ironically, the use
which Congress makes of commissions— preparing the groundwork for legislation, bringing
public issues into the spotlight, whipping legislation into shape, and giving priority to the
consideration of complex, technical, and critical developments— receives relatively little attention from
political scientists. As noted in earlier chapters, following the logic of rational choice theory, individual decisions to delegate are
occasioned by imperfect information; legislators who want to develop effective policies, but who lack the necessary expertise,
often delegate fact-finding and policy development. Others contend that some commissions are set up to shift
blame in order to maximize benefits and minimize losses.
4. At worst, reject the argument, not the team
2NC AT Certainty
Counterplan solves your certainty args—expertise
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of California,
Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College, Dr. Campbell was a
Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's Subcommittee on Terrorism,
Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for the congressman. Before that, he was an
Analyst in American National Government at the Congressional Research Service, an Associate Professor of Political Science at Florida
International University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to Senator
Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political
Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles
on the legislative process. Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary.
Web. 27 July 2015. Ghs-kw.)
By delegating some of its policymaking authority to “expertise commissions,” Congress creates
institutions that reduce uncertainty. Tremendous gains accrue as a result of delegating tasks to other
organizations with a comparative advantage in performing them. Commissions are especially adaptable devices
for addressing problems that do not fall neatly within committees’ jurisdictional boundaries. They can complement and
supplement the regular committees. In the 1990s, it became apparent that committees were ailing— beset by mounting
workloads, duplication and jurisdictional battles, and conflicts between program and funding panels. But relevant expertise can be
mobilized by a commission that brings specialized information to its tasks, especially if commission
members and staff are selected on the basis of education, their training, and their experience in the area
which cross-cut the responsibilities of several standing committees.
2NC AT Commissions Bad
No disads—commissions are inevitable due to Congressional structure
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of California,
Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College, Dr. Campbell was a
Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's Subcommittee on Terrorism,
Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for the congressman. Before that, he was an
Analyst in American National Government at the Congressional Research Service, an Associate Professor of Political Science at Florida
International University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to Senator
Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political
Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles
on the legislative process. Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary.
Web. 27 July 2015. Ghs-kw.)
Others see congressional delegation as unavoidable (and even desirable) in light of basic structural flaws in
the design of Congress. 61 They argue that Congress is incapable of crafting policies that address the full
complexity of modern-day problems. 62 Another charge is that congressional action can be stymied at several
junctures in the legislative policymaking process. Congress is decentralized, having few mechanisms for
integrating or coordinating its policy decisions; it is an institution of bargaining, consensus-seeking, and compromise. The
logic of delegation is broad: to fashion solutions to tough problems, to broker disputes, to build
consensus, and to keep fragile coalitions together. The commission co-opts the most publicly ideological and privately
pragmatic, the liberal left and the conservative right. Leaders of both parties or their designated representatives can negotiate a deal without
the media, the public, or interest groups present. When deliberations are private, parties can make offers without being denounced either by
their opponents or by affected groups. Removing external contact reduces the opportunity to use an offer from the other side to curry favor
with constituents.
2NC AT Congress Doesn’t Pass Recommendations
Recommendations are passed—either bipartisan or perceived as non-partisan
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional Commissions:
Overview, Structure, and Legislative Considerations ,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Reducing Partisanship Solutions to policy problems produced within the normal legislative process may also
suffer politically from charges of partisanship.30 Similar charges may be made against investigations
conducted by Congress.31 The non-partisan or bipartisan character of most congressional commissions
may make their findings and recommendations less susceptible to such charges and more politically
acceptable to diverse viewpoints. The bipartisan or nonpartisan arrangement can potentially give
their recommendations strong credibility, both in Congress and among the public, even when dealing
with divisive issues of public policy.32 Commissions may also give political factions space to negotiate
compromises in good faith, bypassing the short-term tactical political maneuvers that accompany public
negotiations.33 Similarly, because commission members are not elected, they may be better suited to
suggesting unpopular, but necessary, policy solutions.34
Recommendations are passed—BRAC Commission proves
Fiscal Seminar 9
(The Fiscal Seminar is a group of scholars who meet on a regular basis, under the auspices of The Brookings Institution and The Heritage
Foundation, to discuss federal budget and fiscal policy issues. The members of the Fiscal Seminar acknowledge the contributions of Paul
Cullinan, a former colleague and Brookings scholar, in the development of this paper, and the editorial assistance of Emily Monea. “THE
POTENTIAL ROLE OF ENTITLEMENT OR BUDGET COMMISSIONS IN ADDRESSING LONG-TERM BUDGET PROBLEMS,” The Fiscal Seminar. 06-2009.)
On the other hand, the
success of BRAC seems to have resulted more from the defined structure and process
of the commission.5 Under BRAC, a package of recommendations originated with the Department of Defense,
was modified by the BRAC commission, and was then reviewed by the President. Congress then had to consider the
package as a whole with no amendments allowed; if it failed to pass a resolution of disapproval, the
recommendations would be implemented as if they had been enacted in law. Not one of the five sets of
BRAC recommendations has been rejected by the Congress.6
2NC AT No Authority
Commissions have broad authority
Campbell 01
(Campbell, Colton C. Dr. Colton Campbell is Professor of National Security Strategy. He received his Ph.D. from the University of California,
Santa Barbara, and his B.A. and M.A. from California State University, Chico. Prior to joining the National War College, Dr. Campbell was a
Legislative Aide to Representative Mike Thompson (CA-01), chair of the House Intelligence Committee's Subcommittee on Terrorism,
Analysis and Counterintelligence, where he handled Appropriations, Defense and Trade matters for the congressman. Before that, he was an
Analyst in American National Government at the Congressional Research Service, an Associate Professor of Political Science at Florida
International University, and an American Political Science Association Congressional Fellow, where he served as a policy adviser to Senator
Bob Graham of Florida. Dr. Campbell is the author, co-author, and co-editor of 11 books on Congress, most recently the Guide to Political
Campaigns in America, and Impeaching Clinton: Partisan Strife on Capitol Hill. He has also written more than two dozen chapters and articles
on the legislative process. Discharging Congress : Government by Commission. Westport, CT, USA: Greenwood Press, 2001. ProQuest ebrary.
Web. 27 July 2015. Ghs-kw.)
Congressional commissions have reached the point where they can
take over various fact-finding functions formerly
performed by Congress itself. Once the facts have been found by a commission, it is possible for Congress
to subject those facts to the scrutiny of cross-examination and debate. And if the findings stand up under such
scrutiny, there remains for Congress the major task of determining the policy to be adopted with reference to the known factual situation. Once
it was clear, for example, that the acquired immune deficiency syndrome (AIDS) yielded an extraordinary range of newfound political and
practical difficulties, the need for legislative action was readily apparent. The question that remained was one of policy: how to prevent the
spread of AIDS. Should it be by accelerated research? By public education? By facilitating housing support for people living with AIDS? Or by
implementing a program of AIDS counseling and testing? The AIDS Commission could help Congress answer such questions.
2NC AT Perception
CP solves your perception arguments
Glassman and Straus 15
(Glassman, Matthew E. and Straus, Jacob R. Analysts on Congress at the Congressional Research Service. “Congressional Commissions:
Overview, Structure, and Legislative Considerations ,” Congressional Research Service. 01-27-2015.
http://fas.org/sgp/crs/misc/R40076.pdf//ghs-kw)
Raising Visibility By
establishing a commission, Congress can often provide a highly visible forum for
important issues that might otherwise receive scant attention from the public.38 Commissions often are
composed of notable public figures, allowing personal prestige to be transferred to policy solutions.39
Meetings and press releases from a commission may receive significantly more attention in the media
than corresponding information coming directly from members of congressional committees. Upon
completion of a commission’s work product, public attention may be temporarily focused on a topic that otherwise
would receive scant attention, thus increasing the probability of congressional action within the policy area.40
Private Sector CP
1NC
Counterplan: the private sector should implement and enforce default encryption
standards on a level equivalent with those announced by Apple in 2014.
Apple’s new standards are unhackable even by Apple—eliminates backdoors
Green 10/4
(Green, Matthew D. Matthew D. Green is an Assistant Research Professor at the Johns Hopkins Information Security Institute. He completed
his PhD in 2008. His research includes techniques for privacy-enhanced information storage, anonymous payment systems, and bilinear
map-based cryptography. "A Few Thoughts on Cryptographic Engineering: Why can't Apple decrypt your iPhone?” 10-4-2014.
http://blog.cryptographyengineering.com/2014/10/why-cant-apple-decrypt-your-iphone.html//ghs-kw)
In the rest of this post I'm going to talk about how these protections may work and how Apple
can realistically claim not to
possess a back door. One caveat: I should probably point out that Apple isn't known for showing up at parties and bragging about their
technology -- so while a fair amount of this is based on published information provided by Apple, some of it is speculation. I'll try to be clear
where one ends and the other begins. Password-based encryption 101 Normal password-based file encryption systems take in a password from
a user, then apply a key derivation function (KDF) that converts a password (and some salt) into an encryption key. This approach doesn't
require any specialized hardware, so it can be securely implemented purely in software provided that (1) the software is honest and well-written, and (2) the chosen password is strong, i.e., hard to guess. The problem here is that nobody ever chooses strong passwords. In fact,
since most passwords are terrible, it's usually possible for an attacker to break the encryption by working through a 'dictionary' of likely
passwords and testing to see if any decrypt the data. To make this really efficient, password crackers often use special-purpose hardware that
takes advantage of parallelization (using FPGAs or GPUs) to massively speed up the process. Thus a common defense against cracking is to use a
'slow' key derivation function like PBKDF2 or scrypt. Each of these algorithms is designed to be deliberately resource-intensive, which does slow
down normal login attempts -- but hits crackers much harder. Unfortunately, modern cracking rigs can defeat these KDFs by simply throwing
more hardware at the problem. There are some approaches to dealing with this -- this is the approach of memory-hard KDFs like scrypt -- but
this is not the direction that Apple has gone. How
Apple's encryption works Apple doesn't use scrypt. Their approach is to
add a 256-bit device-unique secret key called a UID to the mix, and to store that key in hardware
where it's hard to extract from the phone. Apple claims that it does not record these keys nor can it access
them. On recent devices (with A7 chips), this key and the mixing process are protected within a cryptographic coprocessor called the Secure Enclave. The Apple Key Derivation function 'tangles' the password with the
UID key by running both through PBKDF2-AES -- with an iteration count tuned to require about 80ms on
the device itself.** The result is the 'passcode key'. That key is then used as an anchor to secure much of
the data on the phone. [Figure: Overview of Apple key derivation and encryption (iOS Security Guide, p.10).] Since only the device
itself knows UID -- and the UID can't be removed from the Secure Enclave -- this means all password
cracking attempts have to run on the device itself. That rules out the use of FPGA or ASICs to crack
passwords. Of course Apple could write a custom firmware that attempts to crack the keys on the
device but even in the best case such cracking could be pretty time consuming, thanks to the 80ms PBKDF2
timing. (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character password consisting of
lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good
passphrase!) So one view of Apple's process is that it depends on the user picking a strong password. A different view is that it also depends on
the attacker's inability to obtain the UID. Let's explore this a bit more. Securing the Secure Enclave The
Secure Enclave is designed
to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself. Secure
Enclave provides an extra level of protection that holds even if the software on the application
processor is compromised -- e.g., jailbroken. One worrying thing about this approach is that, according to Apple's documentation,
Apple controls the signing keys that sign the Secure Enclave firmware. So using these keys, they might be able to write a special "UID
extracting" firmware update that would undo the protections described above, and potentially allow crackers to run their attacks on specialized
hardware. Which leads to the following question? How
does Apple avoid holding a backdoor signing key that allows
them to extract the UID from the Secure Enclave? It seems to me that there are a few possible ways forward here. 1. No
software can extract the UID. Apple's documentation even claims that this is the case; that software can only see the
output of encrypting something with UID, not the UID itself. The problem with this explanation is that it isn't really clear
that this guarantee covers malicious Secure Enclave firmware written and signed by Apple. Update 10/4: Comex and others (who have
forgotten more about iPhone internals than I've ever known) confirm that #1 is the right answer. The
UID appears to be connected
to the AES circuitry by a dedicated path, so software can set it as a key, but never extract it. Moreover
this appears to be the same for both the Secure Enclave and older pre-A7 chips. So ignore options 2-4 below.
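For anyone pressed in cross-ex on the mechanics Green describes, the sketch below is my own illustration, not Apple's code and not part of the card: it shows the general idea of "tangling" a passcode with a device-bound UID through a deliberately slow KDF, plus the arithmetic behind the 5 1/2-year cracking estimate. The hash function, iteration count, and variable names are assumptions for illustration only; Apple's real construction runs PBKDF2-AES inside the Secure Enclave, where the UID can never be read out.

```python
# Illustrative sketch only (assumptions noted above), not Apple's implementation.
# Idea: the passcode is mixed ("tangled") with a device-unique UID before a slow
# PBKDF2 pass, so every guess has to run on the device that holds the UID.
import hashlib
import os

ITERATIONS = 200_000          # stand-in for Apple's "about 80ms per attempt" tuning
DEVICE_UID = os.urandom(32)   # in real hardware this key never leaves the Secure Enclave

def derive_passcode_key(passcode: str, salt: bytes) -> bytes:
    """Derive a device-bound key from the passcode and the device UID."""
    tangled = passcode.encode("utf-8") + DEVICE_UID
    return hashlib.pbkdf2_hmac("sha256", tangled, salt, ITERATIONS, dklen=32)

# The card's 5 1/2-year figure for a random 6-character lowercase+digit passcode
# follows from forcing every guess onto the device at roughly 80ms apiece:
guesses = 36 ** 6                                     # about 2.18 billion candidates
seconds = guesses * 0.080                             # 80ms per on-device attempt
print(f"{seconds / (365.25 * 24 * 3600):.1f} years")  # prints roughly 5.5 years
```

The line the card turns on is the UID: because it cannot be extracted from the hardware, the 80ms-per-guess cost cannot be offloaded to FPGA or GPU cracking rigs, which is the basis of Green's solvency claim.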
2NC O/V
The counterplan solves 100% of the case—private corporations institute strong
default encryption on all their products and keep the only decryption keys in
hardware on each individual device, retaining no copy and no separate means of
decryption—this means nobody but the owner of the device can decrypt the
information—that’s Green
AND, it solves backdoors—in the world of the CP, companies are technologically
incapable of building backdoors into their products or of complying with demands
for them—solves the AFF—that’s Green
AT Perception
Other companies follow – solves their credibility internal links
Whittaker 14
(Zack Whittaker. "Apple doubles-down on security, shuts out law enforcement from accessing iPhones, iPads," ZDNet. 9-18-2014.
http://www.zdnet.com/article/apple-doubles-down-on-security-shuts-out-law-enforcement-from-accessing-iphones-ipads///ghs-kw)
The new encryption methods prevent even Apple from accessing even the relatively small amount of
data it holds on users. "Unlike our competitors, Apple cannot bypass your passcode and therefore cannot
access this data," the company said in its new privacy policy, updated Wednesday. "So it's not technically feasible for us to respond to
government warrants for the extraction of this data from devices in their possession running iOS 8." There are some caveats, however. For the
iCloud data it stores, Apple still has the ability (and the legal responsibility) to turn over data it stores on its own servers, or third-party servers it
uses to support the service. iCloud data can include photos, emails, music, documents, and contacts. In the wake of the Edward Snowden
disclosures, Apple
has set itself apart from the rest of the crowd by bolstering its encryption efforts in such
a way that makes it impossible for it to decrypt the data. Apple chief executive Tim Cook said in a recent interview with
PBS' Charlie Rose that if the government "laid a subpoena" at its doors, Apple "can't provide" the data. He said, bluntly: "We
don't have a key. The door is closed." Although the iPhone and iPad maker was late to the transparency report party, the
company has rocketed up the ranks of the civil liberties table. The Electronic Frontier Foundation's annual reports for 2012 and 2013 showed
Apple as having poor privacy practices around user data, gaining just one star out of five each year. In 2014, Apple scored the full five stars — a
massive turnaround from two years prior. In the meantime, Yahoo
is bolstering encryption between its datacenters, and
recently turned on encryption-by-default on its email service. Microsoft is also encrypting its network
traffic amid reports of the National Security Agency's datacenter tapping program. And Google is
working hard to crackdown on government spies cracking into its networks and cables. Privacy and
security are, and have been for a while, the pinnacle of tech credibility. And Apple just scored about a billion
points on that scale, leaving most of its Silicon Valley partners in the dust.
AT Links to Terror
No link to their disads—other sources of data
NYT 14
(David E. Sanger and Brian X. Chen. "Signaling Post-Snowden Era, New iPhone Locks Out N.S.A. ," New York Times. 9-26-2014.
http://www.nytimes.com/2014/09/27/technology/iphone-locks-out-the-nsa-signaling-a-post-snowden-era-.html?_r=0//ghs-kw)
Mr. Zdziarski said that concerns about Apple’s new encryption to hinder law enforcement seemed overblown. He said there
were still plenty of ways for the police to get customer data for investigations. In the example
of a kidnapping victim, the police can still request information on call records and geolocation information
from phone carriers like AT&T and Verizon Wireless. “Eliminating the iPhone as one source I don’t think is
going to wreck a lot of cases,” he said. “There is such a mountain of other evidence from call logs, email
logs, iCloud, Gmail logs. They’re tapping the whole Internet.”
XO CP
1NC
XOs solve the Secure Data Act
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and Innovation
Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and technology policies to
create economic opportunities and improve quality of life in the United States and around the world. Founded in 2006, ITIF is a 501(c) 3
nonprofit, non-partisan organization that documents the beneficial role technology plays in our lives and provides pragmatic ideas for
improving technology-driven productivity, boosting competitiveness, and meeting today’s global challenges through innovation. Daniel
Castro is the vice president of the Information Technology and Innovation Foundation. His research interests include health IT, data privacy,
e-commerce, e-government, electronic voting, information security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst
at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He
has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie
Mellon University. Alan McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF,
Mr. McQuinn was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission
in the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at Austin.
“Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015. http://www2.itif.org/2015-beyond-usa-freedom-act.pdf//ghs-kw)
Second, the U.S. government should draw a clear line in the sand and declare that the policy of the U.S.
government is to strengthen not weaken information security. The U.S. Congress should pass legislation, such as the Secure Data
Act introduced by Sen. Wyden (D-OR), banning any government efforts to introduce backdoors in software or weaken encryption.43 In the short term, President
Obama, or his successor, should sign an executive order formalizing this policy as well. In addition, when U.S. government agencies
discover vulnerabilities in software or hardware products, they should responsibly notify these companies in a timely manner so that the companies can fix these flaws. The best way to protect U.S. citizens from digital threats is to
promote strong cybersecurity practices in the private sector.
Zero-Days Adv CP
1NC
Counterplan: the United States federal government should legalize and regulate the
zero-day exploit market.
Regulation is key to stop zero days from falling into enemy hands
Gallagher 13
(Ryan Gallagher. "The Secretive Hacker Market for Software Flaws," Slate Magazine. 1-16-2013.
http://www.slate.com/articles/technology/future_tense/2013/01/zero_day_exploits_should_the_hacker_gray_market_be_regulated.html
//ghs-kw)
Behind computer screens from France to Fort Worth, Texas, elite hackers
hunt for security vulnerabilities worth thousands
of dollars on a secretive unregulated marketplace. Using sophisticated techniques to detect weaknesses in widely used
programs like Google Chrome, Java, and Flash, they spend hours crafting “zero-day exploits”—complex codes
custom-made to target a software flaw that has not been publicly disclosed, so they can bypass antivirus or firewall detection to help infiltrate a computer system. Like most technologies, the exploits have a dual use.
They can be used as part of research efforts to help strengthen computers against intrusion. But they can also be weaponized and
deployed aggressively for everything from government spying and corporate espionage to flat-out fraud. Now, as cyberwar
escalates across the globe, there are fears that the burgeoning trade in finding and selling exploits is spiralling out of
control—spurring calls for new laws to rein in the murky trade. Some legitimate companies operate in a legal
gray zone within the zero-day market, selling exploits to governments and law enforcement agencies in countries
across the world. Authorities can use them covertly in surveillance operations or as part of cybersecurity
or espionage missions. But because sales are unregulated, there are concerns that some gray market
companies are supplying to rogue foreign regimes that may use exploits as part of malicious targeted
attacks against other countries or opponents. There is also an anarchic black market that exists on
invite-only Web forums, where exploits are sold to a variety of actors—often for criminal purposes. The
importance of zero-day exploits, particularly to governments, has become increasingly apparent in recent years.
Undisclosed vulnerabilities in Windows played a crucial role in how Iranian computers were infiltrated for surveillance and
sabotage when the country’s nuclear program was attacked by the Stuxnet virus (an assault reportedly launched by
the United States and Israel). Last year, at least eight zero days in programs like Flash and Internet Explorer were discovered and linked to a
Chinese hacker group dubbed the “Elderwood gang,” which targeted more than 1,000 computers belonging to corporations and human rights
groups as part of a shady intelligence-gathering effort allegedly sponsored by China. The most lucrative zero days can be worth hundreds of
thousands of dollars in both the black and gray markets. Documents released by Anonymous in 2011 revealed Atlanta-based security firm
Endgame Systems offering to sell 25 exploits for $2.5 million. Emails published alongside the documents showed the firm was trying to keep “a
very low profile” due to “feedback we've received from our government clients.” (In keeping with that policy, Endgame didn’t respond to
questions for this story.) But not everyone working in the business of selling software exploits is trying to fly under the radar—and some have
decided to blow the whistle on what they see as dangerous and irresponsible behavior within their secretive profession. Adriel Desautels, for
one, has chosen to speak out. The 36-year-old “exploit broker” from Boston runs a company called Netragard, which buys and sells zero days to
organizations in the public and private sectors. (He won’t name names, citing confidentiality agreements.) The lowest-priced exploit that
Desautels says he has sold commanded $16,000; the highest, more than $250,000. Unlike
other companies and sole traders
operating in the zero-day trade, Desautels has adopted a policy to sell his exploits only domestically
within the United States, rigorously vetting all those he deals with. If he didn’t have this principle, he
says, he could sell to anyone he wanted—even Iran or China—because the field is unregulated. And
that’s exactly why he is concerned. “As technology advances, the effect that zero-day exploits will have is going to
become more physical and more real,” he says. “The software becomes a weapon. And if you don’t have
controls and regulations around weapons, you’re really open to introducing chaos and problems.”
Desautels says he knows of “greedy and irresponsible” people who “will sell to anybody,” to the extent
that some exploits might be sold by the same hacker or broker to two separate governments not on
friendly terms. This can feasibly lead to these countries unwittingly targeting each other’s computer
networks with the same exploit, purchased from the same seller. “If I take a gun and ship it overseas to
some guy in the Middle East and he uses it to go after American troops—it’s the same concept,” he says.
The position Desautels has taken casts him as something of an outsider within his trade. France’s Vupen,
one of the foremost gray-market zero-day sellers, takes a starkly different approach. Vupen develops
and sells exploits to law enforcement and intelligence agencies across the world to help them intercept
communications and conduct “offensive cyber security missions,” using what it describes as “extremely
sophisticated codes” that “bypass all modern security protections and exploit mitigation technologies.”
Vupen’s latest financial accounts show it reported revenue of about $1.2 million in 2011, an overwhelming majority of which (86 percent) was
generated from exports outside France. Vupen says it will sell exploits to a list of more than 60 countries that are members or partners of
NATO, provided these countries are not subject to any export sanctions. (This means Iran, North Korea, and Zimbabwe are blacklisted—but the
likes of Kazakhstan, Bahrain, Morocco, and Russia are, in theory at least, prospective customers, as they are not subject to any sanctions at this
time.) “As a European company, we exclusively work with our allies and partners to help them protect their democracies and citizens against
threats and criminals,” says Chaouki Bekrar, Vupen’s CEO, in an email. He adds that even if a given country is not on a sanctions list, it doesn’t
mean Vupen will automatically work with it, though he declines to name specific countries or continents where his firm does or does not have
customers. Vupen’s
policy of selling to a broad range of countries has attracted much controversy, sparking
furious debate around zero-day sales, ethics, and the law. Chris Soghoian of the ACLU—a prominent privacy and
security researcher who regularly spars with Vupen CEO Bekrar on Twitter—has accused Vupen of being “modern-day
merchants of death” selling “the bullets for cyberwar.” “Just as the engines on an airplane enable the
military to deliver a bomb that kills people, so too can a zero day be used to deliver a cyberweapon that
causes physical harm or loss of life,” Soghoian says in an email. He is astounded that governments are “sitting on
flaws” by purchasing zero-day exploits and keeping them secret. This ultimately entails “exposing their own citizens to
espionage,” he says, because it means that the government knows about software vulnerabilities but is not telling the public about them. Some
claim, however, that the zero-day issue is being overblown and politicized. “You don’t need a zero day to compromise the workstation of an
executive, let alone an activist,” says Wim Remes, a security expert who manages information security for Ernst & Young. Others argue that the
U.S. government in particular needs to purchase exploits to keep pace with what adversaries like China and Iran are doing. “If we’re going to
have a military to defend ourselves, why would you disarm our military?” says Robert Graham at the Atlanta-based firm Errata Security. “If the
government can’t buy exploits on the open market, they will just develop them themselves,” Graham says. He also fears that regulation of zero-day sales could lead to a crackdown on legitimate coding work. “Plus, digital arms don’t exist—it’s an analogy. They don’t kill people. Bad things
really don’t happen with them.” * * * So are zero days really a danger? The overwhelming majority of compromises of computer systems
happen because users failed to update software and patch vulnerabilities that are already known about. However, there are a handful of cases
in which undisclosed vulnerabilities—that is, zero days—have been used to target organizations or individuals. It
was a zero day, for instance, that was recently used by malicious hackers to compromise Microsoft’s Hotmail and steal emails
and details of the victims' contacts. Last year, it was reported that a zero day was used to target a flaw in
Internet Explorer and hijack Gmail accounts. Noted “offensive security” companies such as Italy’s Hacking Team and the
England-based Gamma Group are among those to make use of zero-day exploits to help law enforcement agencies install advanced spyware on
target computers—and both of these companies have been accused of supplying their technologies
to countries with an
authoritarian bent. Tracking and communications interception can have serious real-world
consequences for dissidents in places like Iran, Syria, or the United Arab Emirates. In the wrong hands, it
seems clear, zero days could do damage. This potential has been recognized in Europe, where Dutch politician Marietje
Schaake has been crusading for groundbreaking new laws to curb the trade in what she calls “digital
weapons.” Speaking on the phone from Strasbourg, France*, Schaake tells me she’s concerned about security exploits, particularly
where they are being sold with the intent to help enable access to computers or mobile devices not authorized by the owner. She adds that she
is considering pressing for the European Commission, the EU’s executive body, to bring in a whole new regulatory
framework that would encompass the trade in zero days, perhaps by looking at incentives for companies
or hackers to report vulnerabilities that they find. Such a move would likely be welcomed by the handful of
organizations already working to encourage hackers and security researchers to responsibly disclose vulnerabilities they find instead of selling
them on the black or gray markets. The Zero Day Initiative, based in Austin, Texas, has a team of about 2,700 researchers globally who submit
vulnerabilities that are then passed on to software developers so they can be fixed. ZDI, operated by Hewlett-Packard, runs competitions in
which hackers can compete for a pot of more than $100,000 in prize funds if they expose flaws. “We believe our program is focused on the
greater good,” says Brian Gorenc, a senior security researcher who works with the ZDI.
DAs
Terror
1NC - Generic
Terror risk is high—maintaining current surveillance is key
Inserra, 6/8 (David Inserra is a Research Associate for Homeland Security and Cyber Security in the Douglas and
Sarah Allison Center for Foreign and National Security Policy of the Kathryn and Shelby Cullom Davis Institute for
National Security and Foreign Policy, at The Heritage Foundation, 6-8-2015, "69th Islamist Terrorist Plot: Ongoing
Spike in Terrorism Should Force Congress to Finally Confront the Terrorist Threat," Heritage Foundation,
http://www.heritage.org/research/reports/2015/06/69th-islamist-terrorist-plot-ongoing-spike-in-terrorism-should-force-congress-to-finally-confront-the-terrorist-threat)
On June 2 in Boston, Usaamah Abdullah Rahim drew a knife and attacked police officers and FBI agents, who then
shot and killed him. Rahim was being watched by Boston’s Joint Terrorism Task Force as he had been plotting to
behead police officers as part of violent jihad. A conspirator, David Wright or Dawud Sharif Abdul Khaliq, was
arrested shortly thereafter for helping Rahim to plan this attack. This plot marks the 69th publicly known Islamist
terrorist plot or attack against the U.S. homeland since 9/11, and is part of a recent spike in terrorist activity. The
U.S. must redouble its efforts to stop terrorists before they strike, through the use of properly applied intelligence
tools. The Plot According to the criminal complaint filed against Wright, Rahim had originally planned to behead an individual outside the
state of Massachusetts,[1] which, according to news reports citing anonymous government officials, was Pamela Geller, the organizer of the
“draw Mohammed” cartoon contest in Garland, Texas.[2] To this end, Rahim had purchased multiple knives, each over 1 foot long, from
Amazon.com. The FBI was listening in on the calls between Rahim and Wright and recorded multiple conversations
regarding how these weapons would be used to behead someone. Rahim then changed his plan early on the morning of June
2. He planned to go “on vacation right here in Massachusetts…. I’m just going to, ah, go after them, those boys in blue. Cause, ah, it’s the
easiest target.”[3] Rahim and Wright had used the phrase “going on vacation” repeatedly in their conversations as a euphemism for violent
jihad. During this conversation, Rahim told Wright that he planned to attack a police officer on June 2 or June 3. Wright then offered advice on
preparing a will and destroying any incriminating evidence. Based on this threat, Boston police officers and FBI agents approached Rahim to
question him, which prompted him to pull out one of his knives. After being told to drop his weapon, Rahim responded with “you drop yours”
and moved toward the officers, who then shot and killed him. While Rahim’s brother, Ibrahim, initially claimed that Rahim was shot in the back,
video surveillance was shown to community leaders and civil rights groups, who have confirmed that Rahim was not shot in the back.[4 ]
Terrorism Not Going Away This 69th Islamist plot is also the seventh in this calendar year. Details on how exactly Rahim was
radicalized are still forthcoming, but according to anonymous officials, online propaganda from ISIS and other radical Islamist
groups are the source.[5] That would make this attack the 58th homegrown terrorist plot and continue the recent
trend of ISIS playing an important role in radicalizing individuals in the United States. It is also the sixth plot or attack
targeting law enforcement in the U.S., with a recent uptick in plots aimed at police. While the debate over the PATRIOT Act and the USA
FREEDOM Act is taking a break, the terrorists are not. The result of the debate has been the reduction of U.S. intelligence and counterterrorism
capabilities, meaning that the U.S. has to do even more with less when it comes to connecting the dots on terrorist plots.[6]
Other
legitimate intelligence tools and capabilities must be leaned on now even more. Protecting the Homeland To keep the U.S.
safe, Congress must take a hard look at the U.S. counterterrorism enterprise and determine other measures that are needed to improve it.
Congress should: Emphasize community outreach. Federal grant funds should be used to create robust community-outreach capabilities in
higher-risk urban areas. These funds must not be used for political pork, or so broadly that they no longer target those communities at greatest
risk. Such capabilities are key to building trust within these communities, and if the United States is to thwart lone-wolf terrorist attacks, it must
place effective community outreach operations at the tip of the spear. Prioritize local cyber capabilities. Building cyber-investigation capabilities
in the higher-risk urban areas must become a primary focus of Department of Homeland Security grants. With so much terrorism-related
activity occurring on the Internet, local law enforcement must have the constitutional ability to monitor and track violent extremist activity on
the Web when reasonable suspicion exists to do so. Push the FBI toward being more effectively driven by intelligence. While the FBI has made
high-level changes to its mission and organizational structure, the bureau is still working on integrating intelligence and law enforcement
activities. Full integration will require overcoming inter-agency cultural barriers and providing FBI intelligence personnel with resources,
opportunities, and the stature they need to become a more effective and integral part of the FBI. Maintain essential counterterrorism
tools. Support for important investigative tools is essential to maintaining the security of the U.S. and combating
terrorist threats. Legitimate government surveillance programs are also a vital component of U.S. national security
and should be allowed to continue. The need for effective counterterrorism operations does not relieve the
government of its obligation to follow the law and respect individual privacy and liberty. In the American system,
the government must do both equally well. Clear-Eyed Vigilance The recent spike in terrorist plots and attacks should
finally awaken policymakers—all Americans, for that matter—to the seriousness of the terrorist threat. Neither
fearmongering nor willful blindness serves the United States. Congress must recognize and acknowledge the
nature and the scope of the Islamist terrorist threat, and take the appropriate action to confront it.
Backdoors are key to stop terrorism and child predators
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning that the
FBI
was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of the renewed encryption
debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me start by breaking the encryption
debate into two
distinct sets of questions: One is the conceptual question of whether a world of end-to-end
strong encryption is an attractive idea. The other is whether—assuming it is not an attractive idea and that one wants to
ensure that authorities retain the ability to intercept decrypted signal—an extraordinary access scheme is technically
possible without eroding other essential security and privacy objectives. These questions often get mashed together,
both because tech companies are keen to market themselves as the defenders of their users' privacy interests and because of the libertarian
ethos of the tech community more generally. But the
questions are not the same, and it's worth considering them
separately. Consider the conceptual question first. Would it be a good idea to have a world-wide
communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could
snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the
FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an internet as
secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries
may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially
close call. The belief in principle in creating a giant world-wide network on which surveillance is
technically impossible is really an argument for the creation of the world's largest ungoverned space. I
understand why techno-anarchists find this idea so appealing. I can't imagine for moment, however, why
anyone else would. Consider the comparable argument in physical space: the creation of a city in which
authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto
what happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is
not controversial when you're talking about Yemen or Somalia. I see nothing more attractive about the
creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS
communications with followers or to follow child predators into chatrooms where they go after kids. The
trouble is that this conceptual position does not answer the entirety of the policy question before us. The reason is that the case against
preserving some form of law enforcement access to decrypted signal is not only a conceptual embrace of the technological obsolescence of
surveillance. It
is also a series of arguments about the costs—including the security costs—of maintaining
the capacity to decrypt captured signal.
Terrorists will use bioweapons- guarantees extinction
Cooper 13 (Joshua, 1/23/13, University of South Carolina, “Bioterrorism and the Fermi Paradox,”
http://people.math.sc.edu/cooper/fermi.pdf, 7/15/15, SM)
We may conclude that, when a civilization reaches its space-faring age, it will more or less at the same moment (1) contain many
individuals who seek to cause large-scale destruction, and (2) acquire the capacity to tinker with its own genetic
chemistry. This is a perfect recipe for bioterrorism, and, given the many very natural pathways for its
development and the overwhelming evidence that precisely this course has been taken by humanity, it is hard to
see how bioterrorism does not provide a neat, if profoundly unsettling, solution to Fermi’s paradox. One might object that, if omnicidal
individuals are successful in releasing highly virulent and deadly genetic malware into the wild, they are still unlikely to
succeed in killing everyone. However, even if every such mass death event results only in a high (i.e., not total) kill rate and
there is a large gap between each such event (so that individuals can build up the requisite scientific
infrastructure again), extinction would be inevitable regardless. Some of the engineered bioweapons will be more successful than
others; the inter-apocalyptic eras will vary in length; and post-apocalyptic environments may be so war-torn, disease-stricken, and impoverished of genetic variation that they may culminate in true extinction events even if the
initial cataclysm ‘only’ results in 90% death rates, since they may cause the effective population size to dip
below the so-called “minimum viable population.” This author ran a Monte Carlo simulation using as (admittedly very crude
and poorly informed, though arguably conservative) estimates the following Earth-like parameters: bioterrorism event mean death rate 50% and
standard deviation 25% (beta distribution), initial population 10^10, minimum viable population 4000, individual omnicidal act probability
10^-7 per annum, and population growth rate 2% per annum. One thousand trials yielded an average post-space-age time until extinction of
less than 8000 years. This is essentially instantaneous on a cosmological scale, and varying the parameters by quite a bit does nothing to
make the survival period comparable with the age of the universe.
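Cooper's numbers are easier to interrogate with the model written out. The card gives the parameters but not the mechanics, so the sketch below is a rough reconstruction under my own assumptions (annual time steps, at most one attack per year with probability 1-(1-p)^N, a Beta(1.5, 1.5) kill fraction matching the stated 50% mean and 25% standard deviation, and 2% growth in attack-free years); it is not Cooper's code and will not necessarily reproduce his sub-8,000-year average.

```python
# Rough reconstruction of the Monte Carlo described by Cooper (assumptions above).
import random

P0 = 10**10            # initial population (card: 10^10)
MVP = 4_000            # minimum viable population (card)
P_OMNICIDE = 1e-7      # per-individual chance of a successful omnicidal act per year (card)
GROWTH = 1.02          # 2% annual population growth (card)
ALPHA = BETA = 1.5     # Beta(1.5, 1.5): mean 0.5, standard deviation 0.25 (card)

def years_to_extinction(max_years=100_000):
    """Simulate one civilization; return years until population < MVP (capped)."""
    pop = float(P0)
    for year in range(1, max_years + 1):
        # Assumed: chance that at least one individual succeeds in an attack this year.
        if random.random() < 1.0 - (1.0 - P_OMNICIDE) ** pop:
            pop *= 1.0 - random.betavariate(ALPHA, BETA)  # fraction of population killed
        else:
            pop *= GROWTH                                 # quiet year: 2% growth
        if pop < MVP:
            return year
    return max_years  # no extinction within the cap

trials = [years_to_extinction() for _ in range(100)]
print(sum(trials) / len(trials))  # Cooper reports an average under ~8,000 years
```

The per-trial cap of 100,000 simulated years is only there to keep the sketch fast; it has no analogue in the card.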
1NC - ISIS Version
ISIS will emerge as a serious threat to the US
Morell 15 (Michael Morell is the former deputy director of the CIA and has twice served as acting director. He is
the author of The Great War of Our Time: The CIA's Fight Against Terrorism — From al Qa'ida to ISIS. May 14,
2015 Time Magazine ISIS Is a Danger on U.S. Soil http://time.com/3858354/isis-is-a-danger-on-u-s-soil/)
The terrorist group poses a gathering threat. In the aftermath of the attempted terrorist attack on May 4 in Garland, Texas–for which ISIS
claimed responsibility–we find ourselves again considering the question of whether or not
ISIS is a real threat. The answer is
yes. A very serious one. Extremists inspired by Osama bin Laden’s ideology consider themselves to be at war with the U.S.;
they want to attack us. It is important to never forget that–no matter how long it has been since 9/11. ISIS is just the latest manifestation
of bin Laden’s design. The group has grown faster than any terrorist group we can remember, and the threat it poses to
us is as wide-ranging as any we have seen. What ISIS has that al-Qaeda doesn’t is a Madison Avenue level of sophisticated messaging
and social media. ISIS has a multilingual propaganda arm known as al-Hayat, which uses GoPros and cameras mounted on drones to make
videos that appeal to its followers. And ISIS uses just about every tool in the platform box–from Twitter to YouTube to Instagram–to great
effect, attracting fighters and funding. Digital media are one of the group’s most significant strengths; they have helped ISIS become an
organization that poses four significant threats to the U.S. First, it is a threat to the stability of the entire Middle East. ISIS is putting the
territorial integrity of both Iraq and Syria at risk. And a further collapse of either or both of these states could easily spread throughout the
region, bringing with it sectarian and religious strife, humanitarian crises and the violent redrawing of borders, all in a part of the world that
remains critical to U.S. national interests. ISIS now controls more territory–in Iraq and Syria–than any other terrorist group anywhere in the
world. When al-Qaeda in Iraq joined the fight in Syria, the group changed its name to ISIS. ISIS added Syrians and foreign fighters to its ranks,
built its supply of arms and money and gained significant battlefield experience fighting Bashar Assad’s regime. Together with the security
vacuum in Iraq and Nouri al-Maliki’s alienation of the Sunnis, this culminated in ISIS’s successful blitzkrieg across western Iraq in the spring and
summer of 2014, when it seized large amounts of territory. ISIS is not the first extremist group to take and hold territory. Al-Shabab in Somalia
did so a number of years ago and still holds territory there, al-Qaeda in the Islamic Maghreb did so in Mali in 2012, and al-Qaeda in Yemen did
so there at roughly the same time. I fully expect extremist groups to attempt to take–and sometimes be successful in taking–territory in the
years ahead. But no other group has taken so much territory so quickly as ISIS has. Second, ISIS is attracting young men and women to travel to
Syria and Iraq to join its cause. At this writing, at least 20,000 foreign nationals from roughly 90 countries have gone to Syria and Iraq to join the
fight. Most have joined ISIS. This flow of foreigners has outstripped the flow of such fighters into Iraq during the war there a decade ago. And
there are more foreign fighters in Syria and Iraq today than there were in Afghanistan in the 1980s working to drive the Soviet Union out of that
country. These foreign nationals are getting experience on the battlefield, and they are becoming increasingly radicalized to ISIS’s cause. There
is a particular subset of these fighters to worry about. Somewhere between 3,500 and 5,000 jihadist wannabes have traveled to
Syria and Iraq from Western Europe, Canada, Australia and the U.S. They all have easy access to the U.S. homeland, which
presents two major concerns: that these fighters will leave the Middle East and either conduct an attack on their
own or conduct an attack at the direction of the ISIS leadership. The former has already happened in Europe. It has
not happened yet in the U.S.–but it will. In spring 2014, Mehdi Nemmouche, a young Frenchman who went to fight in Syria, returned
to Europe and shot three people at the Jewish Museum of Belgium in Brussels. The third threat is that ISIS is building a following among other
extremist groups around the world. The allied exaltation is happening at a faster pace than al-Qaeda ever enjoyed. It has occurred in Algeria,
Libya, Egypt and Afghanistan. More will follow. These groups, which are already dangerous, will become even more so. They will increasingly
target ISIS’s enemies (including us), and they will increasingly take on ISIS’s brutality. We saw the targeting play out in early 2015 when an ISIS-associated group in Libya killed an American in an attack on a hotel in Tripoli frequented by diplomats and international businesspeople. And
we saw the extreme violence play out just a few weeks after that when another ISIS-affiliated group in Libya beheaded 21 Egyptian Coptic
Christians. And fourth, perhaps most insidiously, ISIS’s message is radicalizing young men and women around the globe who have never
traveled to Syria or Iraq but who want to commit an attack to demonstrate their solidarity with ISIS. These are the so-called lone wolves. Even
before May 4, such an ISIS-inspired attack had already occurred in the U.S.: an individual with sympathies for ISIS attacked two New York City
police officers with a hatchet. Al-Qaeda has inspired such U.S. attacks–the Fort Hood shootings in late 2009 that killed 13 and the Boston
Marathon bombing in spring 2013 that killed five and injured nearly 300. The attempted attack in Texas is just the latest of these. We can
expect more of these kinds of attacks in the U.S. Attacks by ISIS-inspired individuals are occurring at a rapid pace around the world–roughly 10
since ISIS took control of so much territory. Two such attacks have occurred in Canada, including the October 2014 attack on the Parliament
building. And another occurred in Sydney, in December 2014. Many planning such attacks–in Australia, Western Europe and the U.S.–have
been arrested before they could carry out their terrorist plans. Today an ISIS-directed attack in the U.S. would be relatively
unsophisticated (small-scale), but over time ISIS’s capabilities will grow. This is what a long-term safe haven in Iraq and Syria
would give ISIS, and it is exactly what the group is planning to do. They have announced their intentions–just like bin Laden did in the years
prior to 9/11.
Backdoors are key to stop ISIS recruitment
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Jim Comey, ISIS, and "Going
Dark"," Lawfare. 7-23-2015. http://www.lawfareblog.com/jim-comey-isis-and-going-dark//ghs-kw)
I had a lengthy conversation with FBI Director Jim Comey today about the
nexus of our domestic ISIS problem and what the
FBI calls the "going dark" issue. CNN the other day reported on some remarks Comey made on the subject, remarks that have not
gotten enough attention but reflect a problem at the front of his mind these days: FBI Director James Comey said Thursday his agency does
not yet have the capabilities to limit ISIS attempts to recruit Americans through social media. It is becoming
increasingly apparent that Americans are gravitating toward the militant organization by engaging with ISIS
online, Comey said, but he told reporters that "we don't have the capability we need" to keep the "troubled minds" at home. "Our job is
to find needles in a nationwide haystack, needles that are increasingly invisible to us because of end-to-end encryption," Comey said. "This is the 'going dark' problem in high definition." Comey said ISIS is
increasingly communicating with Americans via mobile apps that are difficult for the FBI to decrypt. He
also explained that he had to balance the desire to intercept the communication with broader privacy concerns. "It is a really, really hard
problem, but the collision that's going on between important privacy concerns and public safety is significant enough that we have to figure out
a way to solve it," Comey said. Let's unpack this. As has been widely reported, the FBI has been busy recently dealing with ISIS threats. There
have been a bunch of arrests, both because ISIS
has gotten extremely good at inducing self-radicalization in
disaffected souls worldwide using Twitter and because of the convergence of Ramadan and the run-up to the July 4 holiday. As has
also been widely reported, the FBI is concerned about the effect of end-to-end encryption on its ability to conduct
counterterrorism operations and other law enforcement functions. The concern is two-fold: It's about
data at rest on devices, data that is now being encrypted in a fashion that can't easily be cracked when
those devices are lawfully seized. And it's also about data in transit between devices, data encrypted
such that when captured with a lawful court-ordered wiretap, the signal intercepted is undecipherable.
Comey raised his concerns on both subjects at a speech at Brookings last year and has talked about them periodically since then: What was not
clear to me until today, however, was the
extent to which the ISIS concerns and the "going dark" concerns have
converged. In his Brookings speech, Comey did not focus on counterterrorism in the examples he gave of the going dark problem. In the
remarks quoted by CNN, and in his conversation with me today, however, he made clear that the landscape is changing fast.
Initial recruitment may take place on Twitter, but the promising ISIS candidate quickly gets moved onto
messaging platforms that are encrypted end to end. As a practical matter, that means there are people in the
United States whom authorities reasonably believe to be in contact with ISIS for whom surveillance is
lawful and appropriate but for whom useful signals interception is not technically feasible. That's a pretty
scary thought. I don't know what the right answer is to this problem, which involves a particularly complex mix of legitimate cybersecurity,
investigative, and privacy questions. I do think the
problem is a very different one if the costs of impaired law
enforcement access to signal is enhanced ISIS ability to communicate with its recruits than if we're
dealing primarily with more routine crimes, even serious ones.
ISIS is a threat to the grid
Landsbaum 14
(Mark, 9/5/2014, OC Register, “Mark Landsbaum: Attack on power grid could bring dark days,”
http://www.ocregister.com/articles/emp-633883-power-attack.html, 7/15/15, SM)
It could be worse.
Terrorists pose an “imminent” threat to the U.S. electrical grid, which could leave the good ol’ USA looking like 19th century USA for a lot longer than three days.∂ Don’t take my word for it. Ask Peter Pry, former CIA officer and onetime House Armed Services Committee staffer, who served on a congressional commission investigating such eventualities.∂ “There is an imminent threat from ISIS to the national electric grid and not just to a single U.S. city,” Pry warns. He points
to a leaked U.S. Federal Energy Regulatory Commission report in March that said a coordinated terrorist attack on just nine of
the nation’s 55,000 electrical power substations could cause coast-to-coast blackouts for up to 18
months.∂ Consider what you’ll have to worry about then. If you were uncomfortable watching looting and riots on TV last month in
Ferguson, Mo., as police stood by, project such unseemly behavior nationwide. For 18 months.∂ It’s likely phones won’t be reliable, so you
won’t have to watch police stand idly by. Chances are, police won’t show up. Worse, your odds of needing them will be excruciatingly more
likely if terrorists attack the power grid using an electromagnetic pulse (EMP) burst of energy to knock out electronic devices.∂ “The
Congressional EMP Commission, on which I served, did an extensive study of this,” Pry says. “We discovered to our own revulsion that critical
systems in this country are distressingly unprotected. We calculated that, based on current realities, in
the first year after a full-scale EMP event, we could expect about two-thirds of the national population –
200 million Americans – to perish from starvation and disease, as well as anarchy in the streets.”∂ Skeptical?
Consider who is capable of engineering such measures before dismissing the likelihood.∂ In his 2013 book, “A Nation Forsaken,” Michael Maloof
reported that the 2008 EMP Commission considered whether a
hostile nation or terrorist group could attack with a high-altitude EMP weapon and determined, “any number of adversaries possess both the ballistic missiles
and nuclear weapons capabilities,” and could attack within 15 years.∂ That was six years ago. “North Korea, Pakistan,
India, China and Russia are all in the position to launch an EMP attack against the United States now,”
Maloof wrote last year.∂ Maybe you’ll rest more comfortably knowing the House intelligence authorization bill passed in May told the
intelligence community to report to Congress within six months, “on the threat posed by man-made electromagnetic pulse weapons to United
States interests through 2025, including threats from foreign countries and foreign nonstate actors.”∂ Or, maybe that’s not so comforting. In
2004 and again in 2008, separate congressional commissions gave detailed, horrific reports on such threats. Now, Congress wants another
report.∂ In his book, Maloof quotes Clay Wilson of the Congressional Research Service, who said, “Several nations, including reported sponsors
of terrorism, may currently have a capability to use EMP as a weapon for cyberwarfare or cyberterrorism to disrupt communications and other
parts of the U.S. critical infrastructure.”∂ What would an EMP attack look like? “Within an instant,” Maloof writes, “we will have no idea what’s
happening all around us, because we will have no news. There will be no radio, no TV, no cell signal. No newspaper delivered.∂ “Products won’t
flow into the nearby Wal-Mart. The big trucks will be stuck on the interstates. Gas stations won’t be able to pump the fuel they do have. Some
police officers and firefighters will show up for work, but most will stay home to protect their own families. Power lines will get knocked down
in windstorms, but nobody will care. They’ll all be fried anyway. Crops will wither in the fields until scavenged – since the big picking machines
will all be idled, and there will be no way to get the crop to market anyway.∂ “Nothing
that’s been invented in the last 50
years – based on computer chips, microelectronics or digital technology – will work. And it will get
worse.”
Cyberterror leads to nuclear exchanges – traditional defense doesn’t apply
Fritz 9 (Jason, Master in International Relations from Bond, BS from St. Cloud), “Hacking Nuclear
Command and Control,” International Commission on Nuclear Non-proliferation and Disarmament,
2009, pnnd.org)//duncan
This paper will analyse the threat of cyber terrorism in regard to nuclear weapons. Specifically, this research will
use open source knowledge to identify the structure of nuclear command and control centres, how those structures might be compromised
through computer network operations, and how doing so would fit within established cyber terrorists’ capabilities, strategies, and tactics. If
access to command and control centres is obtained, terrorists could fake or actually cause one nuclear-armed state to attack another, thus provoking a nuclear response from another nuclear power. This
may be an easier alternative for terrorist groups than building or acquiring a nuclear weapon or dirty
bomb themselves. This would also act as a force equaliser, and provide terrorists with the asymmetric
benefits of high speed, removal of geographical distance, and a relatively low cost. Continuing difficulties in
developing computer tracking technologies which could trace the identity of intruders, and difficulties in
establishing an internationally agreed upon legal framework to guide responses to computer network operations, point
towards an inherent weakness in using computer networks to manage nuclear weaponry. This is
particularly relevant to reducing the hair trigger posture of existing nuclear arsenals.¶ All computers
which are connected to the internet are susceptible to infiltration and remote control. Computers which
operate on a closed network may also be compromised by various hacker methods, such as privilege escalation, roaming notebooks, wireless
access points, embedded exploits in software and hardware, and maintenance entry points. For example, e-mail spoofing targeted at
individuals who have access to a closed network, could lead to the installation of a virus on an open network. This virus could then be carelessly
transported on removable data storage between the open and closed network. Information found on the internet may also reveal how to
access these closed networks directly. Efforts
by militaries to place increasing reliance on computer networks,
including experimental technology such as autonomous systems, and their desire to have multiple
launch options, such as nuclear triad capability, enables multiple entry points for terrorists. For example, if a
terrestrial command centre is impenetrable, perhaps isolating one nuclear armed submarine would prove an easier task. There is evidence to
suggest multiple attempts have been made by hackers to compromise the extremely low radio frequency once used by the US Navy to send
nuclear launch approval to submerged submarines. Additionally, the alleged Soviet system known as Perimetr was designed to automatically
launch nuclear weapons if it was unable to establish communications with Soviet leadership. This was intended as a retaliatory response in the
event that nuclear weapons had decapitated Soviet leadership; however it did not account for the possibility of cyber terrorists blocking
communications through computer network operations in an attempt to engage the system. ¶ Should
a warhead be launched,
damage could be further enhanced through additional computer network operations. By using proxies, multilayered attacks could be engineered. Terrorists could remotely commandeer computers in China and use them to launch a US
nuclear attack against Russia. Thus Russia would believe it was under attack from the US and the US would believe China was responsible.
Further, emergency
response communications could be disrupted, transportation could be shut down, and
disinformation, such as misdirection, could be planted, thereby hindering the disaster relief effort and
maximizing destruction. Disruptions in communication and the use of disinformation could also be
used to provoke uninformed responses. For example, a nuclear strike between India and Pakistan could be coordinated with
Distributed Denial of Service attacks against key networks, so they would have further difficulty in identifying what happened and be forced to
respond quickly. Terrorists could also knock out communications between these states so they cannot discuss the situation. Alternatively,
amidst the confusion of a traditional large-scale terrorist attack, claims of responsibility and declarations
of war could be falsified in an attempt to instigate a hasty military response. These false claims could be posted
directly on Presidential, military, and government websites. E-mails could also be sent to the media and foreign governments using the IP
addresses and e-mail accounts of government officials. A sophisticated and all encompassing combination of traditional terrorism and cyber
terrorism could be enough to launch nuclear weapons on its own, without the need for compromising command and control centres directly.
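Illustrative note on the Fritz internal link: the removable-media vector he describes (malware picked up on an open network and hand-carried across an air gap) is normally countered by scanning media at the boundary. The Python sketch below is only a minimal illustration of that defensive step under assumed values; the mount path and blocklist hashes are hypothetical placeholders, not anyone's real tooling.

import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad file hashes (placeholder value only).
BLOCKLIST_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_removable_media(mount_point: str = "/media/usb") -> list[Path]:
    """Return every file on the mounted drive whose hash is on the blocklist."""
    return [
        p for p in Path(mount_point).rglob("*")
        if p.is_file() and sha256_of(p) in BLOCKLIST_SHA256
    ]

if __name__ == "__main__":
    for flagged in scan_removable_media():
        print(f"blocked transfer: {flagged}")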
2NC UQ - ISIS
ISIS is mobilizing now and ready to take action.
DeSoto 5/7 (Randy DeSoto May 7, 2015 http://www.westernjournalism.com/isis-claims-to-have-71-trainedsoldiers-in-targeted-u-s-states/ Randy DeSoto is a writer for Western Journalism, which consistently ranks in
the top 5 most popular conservative online news outlets in the country)
Purported ISIS jihadists issued threats against the United States Tuesday, indicating the group has trained soldiers
positioned throughout the country, ready to attack “any target we desire.” The online post singles out controversial blogger
Pamela Geller, one of the organizers of the “Draw the Prophet” Muhammad cartoon contest in Garland, Texas, calling for her death to “heal
the hearts of our brothers and disperse the ones behind her.” ISIS also claimed responsibility for the shooting, which marked
the first time the terror group claimed responsibility for an attack on U.S. soil, according to the New York Daily News.
“The attack by the Islamic State in America is only the beginning of our efforts to establish a wiliyah [authority or governance] in the heart of
our enemy,” the ISIS post reads. As for Geller, the jihadists state: “To those who protect her: this will be your only warning of housing this
woman and her circus show. Everyone who houses her events, gives her a platform to spill her filth are legitimate targets. We have been
watching closely who was present at this event and the shooter of our brothers.” ISIS further claims to have known that
the
Muhammad cartoon contest venue would be heavily guarded, but conducted the attack to demonstrate the
willingness of its followers to die for the “Sake of Allah.” The FBI and the Department of Homeland Security, in fact, issued a
bulletin on April 20 indicating the event would be a likely terror target. ISIS drew its message to a close with an ominous threat:
We have 71 trained soldiers in 15 different states ready at our word to attack any target we desire. Out of the 71
trained soldiers 23 have signed up for missions like Sunday, We are increasing in number bithnillah [if God wills]. Of the
15 states, 5 we will name… Virginia, Maryland, Illinois, California, and Michigan …The next six months will be interesting. Fox
News reports that “the U.S. intelligence community was assessing the threat and trying to determine if the source is
directly related to ISIS leadership or an opportunist such as a low-level militant seeking to further capitalize on the
Garland incident.” Former Navy Seal Rob O’Neill told Fox News he believes the ISIS threat is credible, and the U.S. must be prepared. He
added that the incident in Garland “is a prime example of the difference between a gun free zone and Texas. They showed up at Charlie Hebdo,
and it was a massacre. If these two guys had gotten into that building it would have been Charlie Hebdo times ten. But these two guys showed
up because they were offended by something protected by the First Amendment, and were quickly introduced to the Second Amendment.”
Geller issued a statement regarding the ISIS posting: “This threat illustrates the savagery and barbarism of the Islamic State. They want me
dead for violating Sharia blasphemy laws. What remains to be seen is whether the free world will finally wake up and stand for the freedom of
speech, or instead kowtow to this evil and continue to denounce me.”
ISIS will attack – three reasons – its capabilities are growing, an attack would be good
propaganda, and it basically hates all things America
Rogan 15 (Tom, panelist on The McLaughlin Group and holds the Tony Blankley Chair at the Steamboat
Institute, “Why ISIS Will Attack America,” National Review, 3-24-15,
http://www.nationalreview.com/article/415866/why-isis-will-attack-america-tom-rogan)//MJ
There is no good in you if they are secure and happy while you have a pulsing vein. Erupt volcanoes of jihad everywhere. Light the earth with
fire upon all the [apostate rulers], their soldiers and supporters. — ISIS leader Abu Bakr al-Baghdadi, November 2014. Those words weren’t idle.
The Islamic State (ISIS) is still advancing, across continents and
cultures. It’s attacking Shia Muslims in Yemen, gunning
down Western tourists in Tunisia, beheading Christians in Libya, and murdering or enslaving all who do not yield in
Iraq and Syria. Its black banner seen as undaunted by the international coalition against it, new recruits still flock to its service. The
Islamic State’s rise is, in other words, not over, and it is likely to end up involving an attack on America. Three reasons why such
an attempt is inevitable: ISIS’S STRATEGY PRACTICALLY DEMANDS IT Imbued with existential hatred against the United States, the group
doesn’t just oppose American power, it opposes America’s identity. Where the United States is a secular democracy that binds law to individual
freedom, the Islamic State is a totalitarian empire determined to sweep freedom from the earth. As an ideological and physical
necessity, ISIS must ultimately conquer America. Incidentally, this kind of total-war strategy explains why counterterrorism experts
are rightly concerned about nuclear proliferation. The Islamic State’s strategy is also energized by its desire to replace al-Qaeda as Salafi jihadism’s global figurehead. While al-Qaeda in the Arabian Peninsula (AQAP) and ISIS had a short flirtation last year,
ISIS has now signaled its intent to usurp al-Qaeda’s power in its home territory. Attacks by ISIS last week against Shia mosques in the Yemeni
capital of Sana’a were, at least in part, designed to suck recruits, financial donors, and prestige away from AQAP. But to truly displace al-
Qaeda, ISIS knows it must furnish a new 9/11. ITS CAPABILITIES ARE GROWING Today, ISIS has thousands of European
citizens in its ranks. Educated at the online University of Edward Snowden, ISIS operations officers have cut back intelligence
services’ ability to monitor and disrupt their communications. With EU intelligence services stretched beyond
breaking point, ISIS has the means and confidence to attempt attacks against the West. EU passports are powerful
weapons: ISIS could attack — as al-Qaeda has repeatedly — U.S. targets around the world. AN ATTACK ON THE U.S. IS PRICELESS
PROPAGANDA For transnational Salafi jihadists like al-Qaeda and ISIS, a successful blow against the U.S. allows them to claim
the mantle of a global force and strengthens the narrative that they’re on a holy mission. Holiness is especially important:
ISIS knows that to recruit new fanatics and deter its enemies, it must offer an abiding narrative of strength and divine purpose. With the
group’s leaders styling themselves as Mohammed’s heirs, Allah’s chosen warriors on earth, attacking the infidel
United States would reinforce ISIS’s narrative. Of course, attacking America wouldn’t actually serve the Islamic State’s long-term
objectives. Quite the opposite: Any atrocity would fuel a popular American resolve to crush the group with expediency. (Make no mistake, it
would be crushed.) The problem, however, is that, until then, America is in the bull’s eye.
2NC Cyber - ISIS
ISIS is a threat to the grid
Landsbaum 14
(Mark, 9/5/2014, OC Register, “Mark Landsbaum: Attack on power grid could bring dark days,”
http://www.ocregister.com/articles/emp-633883-power-attack.html, 7/15/15, SM)
It could be worse. Terrorists pose an "imminent" threat to the U.S. electrical grid, which could leave the good ol’ USA looking like 19th century USA for a lot longer than three days.∂ Don’t take my word for it. Ask Peter Pry, former CIA officer and onetime House Armed Services Committee staffer, who served on a congressional commission investigating such eventualities.∂ "There is an imminent threat from ISIS to the national electric grid and not just to a single U.S. city," Pry warns. He points to a leaked U.S. Federal Energy Regulatory Commission report in March that said a coordinated terrorist attack on just nine of the nation’s 55,000 electrical power substations could cause coast-to-coast blackouts for up to 18 months.∂ Consider what you’ll have to worry about then. If you were uncomfortable watching looting and riots on TV last month in Ferguson, Mo., as police stood by, project such unseemly behavior nationwide. For 18 months.∂ It’s likely phones won’t be reliable, so you
won’t have to watch police stand idly by. Chances are, police won’t show up. Worse, your odds of needing them will be excruciatingly more
likely if terrorists attack the power grid using an electromagnetic pulse (EMP) burst of energy to knock out electronic devices.∂ “The
Congressional EMP Commission, on which I served, did an extensive study of this,” Pry says. “We discovered to our own revulsion that critical
systems in this country are distressingly unprotected. We calculated that, based on current realities, in
the first year after a full-scale EMP event, we could expect about two-thirds of the national population –
200 million Americans – to perish from starvation and disease, as well as anarchy in the streets.”∂ Skeptical?
Consider who is capable of engineering such measures before dismissing the likelihood.∂ In his 2013 book, “A Nation Forsaken,” Michael Maloof
reported that the 2008 EMP Commission considered whether a
hostile nation or terrorist group could attack with a high-altitude EMP weapon and determined, "any number of adversaries possess both the ballistic missiles
and nuclear weapons capabilities,” and could attack within 15 years.∂ That was six years ago. “North Korea, Pakistan,
India, China and Russia are all in the position to launch an EMP attack against the United States now,”
Maloof wrote last year.∂ Maybe you’ll rest more comfortably knowing the House intelligence authorization bill passed in May told the
intelligence community to report to Congress within six months, “on the threat posed by man-made electromagnetic pulse weapons to United
States interests through 2025, including threats from foreign countries and foreign nonstate actors.”∂ Or, maybe that’s not so comforting. In
2004 and again in 2008, separate congressional commissions gave detailed, horrific reports on such threats. Now, Congress wants another
report.∂ In his book, Maloof quotes Clay Wilson of the Congressional Research Service, who said, “Several nations, including reported sponsors
of terrorism, may currently have a capability to use EMP as a weapon for cyberwarfare or cyberterrorism to disrupt communications and other
parts of the U.S. critical infrastructure.”∂ What would an EMP attack look like? “Within an instant,” Maloof writes, “we will have no idea what’s
happening all around us, because we will have no news. There will be no radio, no TV, no cell signal. No newspaper delivered.∂ “Products won’t
flow into the nearby Wal-Mart. The big trucks will be stuck on the interstates. Gas stations won’t be able to pump the fuel they do have. Some
police officers and firefighters will show up for work, but most will stay home to protect their own families. Power lines will get knocked down
in windstorms, but nobody will care. They’ll all be fried anyway. Crops will wither in the fields until scavenged – since the big picking machines
will all be idled, and there will be no way to get the crop to market anyway.∂ “Nothing
that’s been invented in the last 50
years – based on computer chips, microelectronics or digital technology – will work. And it will get
worse.”
2NC Links
Backdoors are key to prevent terrorism
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army Judge
Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military assignments, including as
the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States, Chief of International Law for U.S. Army
Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers of 'Going Dark': Why Congress Must Require a
Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
The risks related to “going dark” are real. When the President of the United States,60 the Prime Minister of the United
Kingdom,61 and the Director of the FBI62 all publically express deep concerns about how this phenomenon will endanger their respective
nations, it is difficult to ignore. Today, encryption technologies are making it increasingly easy for individual
users to prevent even lawful government access to potentially vital information related to crimes or
other national security threats. This evolution of individual encryption capabilities represents a
fundamental distortion of the balance between government surveillance authority and individual liberty
central to the Fourth Amendment. And balance is the operative word. The right of The People to be secure against unreasonable government
intrusions into those places and things protected by the Fourth Amendment must be vehemently protected. Reasonable searches, however, should not only be permitted, but they should be mandated where necessary. Congress has the authority
to ensure that such searches are possible. While some argue that this could cause American manufacturers to suffer, saddled as
they will appear to be by the “Snowden Effect,” the rules will apply equally to any manufacturer that wishes to do business in the United States.
Considering that the United States economy is the largest in the world, it is highly unlikely that foreign manufacturers will forego access to our
market in order to avoid having to create CALEA-like solutions to allow for lawful access to encrypted data. Just as foreign cellular telephone
providers, such as T-Mobile, are active in the United States, so too will foreign device manufacturers and other communications services adjust
their technology to comply with our laws and regulations. This will put American and foreign companies on an equal playing field while
encouraging ingenuity and competition. Most importantly, "the right of the people to be secure in their persons, houses, papers, and effects" will be protected not only "against unreasonable searches and seizures," but also against attacks by criminals and terrorists. And is not this, in essence, the primary purpose of government?
Backdoors are key to security—terror turns the case
Goldsmith 13
(Jack Goldsmith. Jack Goldsmith, a contributing editor, teaches at Harvard Law School and is a member of the Hoover Institution Task Force
on National Security and Law. "We Need an Invasive NSA," New Republic. 10-10-2013.
http://www.newrepublic.com/article/115002/invasive-nsa-will-protect-us-cyber-attacks//ghs-kw)
Ever since stories about the National Security Agency’s (NSA) electronic intelligence-gathering capabilities began tumbling out last June, The
New York Times has published more than a dozen editorials excoriating the “national surveillance state.” It wants the NSA to
end the “mass warehousing of everyone’s data” and the use of “back doors” to break encrypted communications. A major element of
the Times’ critique is that the NSA’s domestic sweeps are not justified by the terrorist threat they aim to prevent. At the end of August, in the
midst of the Times’ assault on the NSA, the newspaper suffered what it described as a “malicious external attack” on its domain name registrar
at the hands of the Syrian Electronic Army, a group of hackers who support Syrian President Bashar Al Assad. The paper’s website was down for
several hours and, for some people, much longer. “In terms of the sophistication of the attack, this is a big deal,” said Marc Frons, the Times’
chief information officer. Ten months earlier, hackers stole the corporate passwords for every employee at the Times, accessed the computers
of 53 employees, and breached the e-mail accounts of two reporters who cover China. “We brought in the FBI, and the FBI said this had all the
hallmarks of hacking by the Chinese military,” Frons said at the time. He also acknowledged that the hackers were in the Times system on
election night in 2012 and could have “wreaked havoc” on its coverage if they wanted. Such cyber-intrusions
threaten corporate
America and the U.S. government every day. “Relentless assaults on America’s computer networks by
China and other foreign governments, hackers and criminals have created an urgent need for safeguards
to protect these vital systems,” the Times editorial page noted last year while supporting legislation encouraging the private sector
to share cybersecurity information with the government. It cited General Keith Alexander, the director of the NSA, who had noted
a 17-fold increase in cyber-intrusions on critical infrastructure from 2009 to 2011 and who described the losses
in the United States from cyber-theft as “the greatest transfer of wealth in history.” If a “catastrophic
cyber-attack occurs,” the Times concluded, “Americans will be justified in asking why their lawmakers ... failed
to protect them.” When catastrophe strikes, the public will adjust its tolerance for intrusive
government measures. The Times editorial board is quite right about the seriousness of the cyberthreat and the federal government’s responsibility to redress it. What it does not appear to realize is the
connection between the domestic NSA surveillance it detests and the governmental assistance with
cybersecurity it cherishes. To keep our computer and telecommunication networks secure, the
government will eventually need to monitor and collect intelligence on those networks using
techniques similar to ones the Times and many others find reprehensible when done for
counterterrorism ends. The fate of domestic surveillance is today being fought around the topic of
whether it is needed to stop Al Qaeda from blowing things up. But the fight tomorrow, and the more
important fight, will be about whether it is necessary to protect our ways of life embedded in computer
networks. Anyone anywhere with a connection to the Internet can engage in cyber-operations within the United States. Most truly
harmful cyber-operations, however, require group effort and significant skill. The attacking group or nation
must have clever hackers, significant computing power, and the sophisticated software—known as
“malware”—that enables the monitoring, exfiltration, or destruction of information inside a computer. The
supply of all of these resources has been growing fast for many years—in governmental labs devoted to developing these tools and on
sprawling black markets on the Internet. Telecommunication
networks are the channels through which malware
typically travels, often anonymized or encrypted, and buried in the billions of communications that traverse the globe each day. The
targets are the communications networks themselves as well as the computers they connect—things like the
Times’ servers, the computer systems that monitor nuclear plants, classified documents on computers in the Pentagon, the nasdaq exchange,
your local bank, and your social-network providers. To
keep these computers and networks secure, the government
needs powerful intelligence capabilities abroad so that it can learn about planned cyber-intrusions. It
also needs to raise defenses at home. An important first step is to correct the market failures that plague cybersecurity. Through
law or regulation, the government must improve incentives for individuals to use security software, for private firms to harden their defenses
and share information with one another, and for Internet service providers to crack down on the botnets—networks of compromised zombie
computers—that underlie many cyber-attacks. More, too, must be done to prevent insider threats like Edward Snowden’s, and to control the
stealth introduction of vulnerabilities during the manufacture of computer components—vulnerabilities that can later be used as windows for
cyber-attacks. And yet that’s still not enough. The
U.S. government can fully monitor air, space, and sea for potential
attacks from abroad. But it has limited access to the channels of cyber-attack and cyber-theft, because
they are owned by private telecommunication firms, and because Congress strictly limits government access to private
communications. “I can’t defend the country until I’m into all the networks,” General Alexander reportedly told senior government officials a
few months ago. For Alexander, being
in the network means having government computers scan the content and
metadata of Internet communications in the United States and store some of these communications for
extended periods. Such access, he thinks, will give the government a fighting chance to find the needle of known
malware in the haystack of communications so that it can block or degrade the attack or exploitation. It will also
allow it to discern patterns of malicious activity in the swarm of communications, even when it doesn’t
possess the malware’s signature. And it will better enable the government to trace back an attack’s
trajectory so that it can discover the identity and geographical origin of the threat. Alexander’s domestic
cybersecurity plans look like pumped-up versions of the NSA’s counterterrorism-related homeland surveillance that has sparked so much
controversy in recent months. That is why so many people in Washington think that Alexander’s vision has “virtually no chance of moving
forward,” as the Times recently reported. “Whatever trust was there is now gone,” a senior intelligence official told Times. There are two
reasons to think that these predictions are wrong and that the
government, with extensive assistance from the NSA, will one day
intimately monitor private networks. The first is that the cybersecurity threat is more pervasive and severe
than the terrorism threat and is somewhat easier to see. If the Times’ website goes down a few more times and for
longer periods, and if the next penetration of its computer systems causes large intellectual property
losses or a compromise in its reporting, even the editorial page would rethink the proper balance of
privacy and security. The point generalizes: As cyber-theft and cyber-attacks continue to spread (and they
will), and especially when they result in a catastrophic disaster (like a banking compromise that
destroys market confidence, or a successful attack on an electrical grid), the public will demand
government action to remedy the problem and will adjust its tolerance for intrusive government
measures. At that point, the nation’s willingness to adopt some version of Alexander’s vision will depend on the possibility of credible
restraints on the NSA’s activities and credible ways for the public to monitor, debate, and approve what the NSA is doing over time. Which
leads to the
second reason why skeptics about enhanced government involvement in the network might be wrong. The public
mistrusts the NSA not just because of what it does, but also because of its extraordinary secrecy. To obtain the credibility it needs
to secure permission from the American people to protect our networks, the NSA and the intelligence community must
fundamentally recalibrate their attitude toward disclosure and scrutiny. There are signs that this is
happening—and that, despite the undoubted damage he inflicted on our national security in other respects, we have Edward Snowden to
thank. “Before the unauthorized disclosures, we were always conservative about discussing specifics of our collection programs, based on the
truism that the more adversaries know about what we’re doing, the more they can avoid our surveillance,” testified Director of National
Intelligence James Clapper last month. “But the disclosures, for better or worse, have lowered the threshold for discussing these matters in
public.” In the last few weeks, the NSA
has done the unthinkable in releasing dozens of documents that implicitly
confirm general elements of its collection capabilities. These revelations are bewildering to most people in the
intelligence community and no doubt hurt some elements of collection. But they are justified by the countervailing need for
public debate about, and public confidence in, NSA activities that had run ahead of what the public expected. And they suggest
that secrecy about collection capacities is one value, but not the only or even the most important one. They also show that not all revelations of
NSA capabilities are equally harmful. Disclosure that it sweeps up metadata is less damaging to its mission than disclosure of the fine-grained
details about how it collects and analyzes that metadata.
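Illustrative note on the Goldsmith card's mechanism: his argument assumes two monitoring modes, matching traffic against known malware signatures and spotting "patterns of malicious activity" when no signature exists. The toy sketch below illustrates both ideas under assumed values; the signature set, timestamps, and threshold are hypothetical, and real systems use far richer features.

import hashlib
from statistics import pstdev

# Hypothetical signature database (placeholder hash only).
KNOWN_MALWARE_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_signature(payload: bytes) -> bool:
    """Signature matching: only catches malware already in the database."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_SHA256

def looks_like_beaconing(timestamps: list[float], tolerance_s: float = 2.0) -> bool:
    """Pattern detection: connections at near-constant intervals suggest an
    automated implant checking in, even when no signature exists."""
    if len(timestamps) < 4:
        return False
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < tolerance_s

print(matches_signature(b"ordinary traffic"))           # False
print(looks_like_beaconing([0.0, 60.1, 120.0, 180.2]))  # True: ~60-second check-ins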
2NC AT Encryption =/= Backdoors
All our encryption args still apply
Sasso 14
(Brendan Sasso. technology correspondent for National Journal, previously covered technology policy issues for The Hill and was a
researcher and contributing writer for the 2012 edition of the Almanac of American Politics. "The NSA Isn't Just Spying on Us, It's Also
Undermining Internet Security," nationaljournal. 4-29-2014. http://www.nationaljournal.com/daily/the-nsa-isn-t-just-spying-on-us-it-s-alsoundermining-internet-security-20140429//ghs-kw)
According to the leaked documents, the
NSA inserted a so-called back door into at least one encryption standard
that was developed by the National Institute of Standards and Technology. The NSA could use that back
door to spy on suspected terrorists, but the vulnerability was also available to any other hacker who discovered it.
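Illustrative note on the Sasso card: the standard it references is widely reported to be the Dual_EC_DRBG random-number generator. The sketch below is only a structural analogy under toy assumptions, not the actual standard's math: each output leaks the generator's internal state to whoever holds a private trapdoor, so that party (or any hacker who rediscovers the trapdoor, as the card warns) can predict every "random" value the generator emits.

# Structural analogy to a trapdoored PRNG (NOT the real Dual_EC_DRBG math):
# every output exposes the internal state to whoever holds a private exponent.
# The RSA-style numbers are tiny textbook values, purely illustrative.

N = 3233   # toy modulus (61 * 53)
E = 17     # public exponent baked into the generator
D = 2753   # trapdoor: private exponent known only to the generator's designer

def prng(seed: int, count: int) -> list[int]:
    """What every user runs: the state advances, each output is state^E mod N."""
    state = seed % N
    outputs = []
    for _ in range(count):
        outputs.append(pow(state, E, N))    # looks unpredictable without D
        state = (state * state + 1) % N     # simple deterministic state update
    return outputs

def trapdoor_predict_next(observed: int) -> int:
    """Anyone holding D inverts one output to recover the state,
    then predicts the generator's next output."""
    state = pow(observed, D, N)
    next_state = (state * state + 1) % N
    return pow(next_state, E, N)

outs = prng(seed=1234, count=3)
assert trapdoor_predict_next(outs[0]) == outs[1]   # the "backdoor" works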
2NC Turns Backdoors
Cyberattacks turn the case—public pressures for backdoors
Goldsmith 13
(Jack Goldsmith. Jack Goldsmith, a contributing editor, teaches at Harvard Law School and is a member of the Hoover Institution Task Force
on National Security and Law. "We Need an Invasive NSA," New Republic. 10-10-2013.
http://www.newrepublic.com/article/115002/invasive-nsa-will-protect-us-cyber-attacks//ghs-kw)
There are two reasons to think that these predictions are wrong and that the government, with extensive assistance from the
NSA, will
one day intimately monitor private networks. The first is that the cybersecurity threat is more pervasive and severe than the
terrorism threat and is somewhat easier to see. If the Times’ website goes down a few more times and for longer
periods, and if the next penetration of its computer systems causes large intellectual property losses or
a compromise in its reporting, even the editorial page would rethink the proper balance of privacy and
security. The point generalizes: As cyber-theft and cyber-attacks continue to spread (and they will), and
especially when they result in a catastrophic disaster (like a banking compromise that destroys market confidence, or a
successful attack on an electrical grid), the public will demand government action to remedy the
problem and will adjust its tolerance for intrusive government measures.
Ptix
1NC
Backdoors are popular now—national security concerns
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark: Part I," Lawfare. 7-23-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-i//ghs-kw)
In other words, I think Comey and Yates inevitably are asking for legislation, at least in the longer term. The administration
has decided not to seek it now, so the conversation is taking place at a somewhat higher level of abstraction than it would if there were a
specific legislative proposal on the table. But
the current discussion should be understood as an effort to begin
building a legislative coalition for some sort of mandate that internet platform companies retain (or build)
the ability to permit, with appropriate legal process, the capture and delivery to law enforcement and intelligence
authorities of decrypted versions of the signals they carry. This coalition does not exist yet, particularly not in
the House of Representatives. But yesterday's hearings were striking in showing how successful Comey has been
in the early phases of building it. A lot of members are clearly concerned already. That concern will likely
grow if Comey is correct about the speed with which major investigative tools are weakening in their
utility. And it could become a powerful force in the event an attack swings the pendulum away from civil
libertarian orthodoxy.
2NC
(KQ) 1AC Macri 14 evidence magnifies the link to politics: “The U.S. Senate voted
down consideration of a bill on Tuesday that would have reigned in the NSA’s powers
to conduct domestic surveillance, upping the legal hurdles for certain types of spying
Rogers repeated Thursday he was largely uninterested in.”
Even if backdoors are unpopular now, that will inevitably change
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-partii-debate-merits//ghs-kw)
There's a final, non-legal factor that may push companies to work this problem as energetically as they
are now moving toward end-to-end encryption: politics. We are at very particular moment in the
cryptography debate, a moment in which law enforcement sees a major problem as having arrived but
the tech companies see that problem as part of the solution to the problems the Snowden revelations
created for them. That is, we have an end-to-end encryption issue, in significant part, because companies are trying to assure customers
worldwide that they have their backs privacy-wise and are not simply tools of NSA. I think those politics are likely to change. If
Comey is right and we start seeing law enforcement and intelligence agencies blind in investigating and
preventing horrible crimes and significant threats, the pressure on the companies is going to shift. And it
may shift fast and hard. Whereas the companies now feel intense pressure to assure customers that
their data is safe from NSA, the kidnapped kid with the encrypted iPhone is going to generate a very
different sort of political response. In extraordinary circumstances, extraordinary access may well seem
reasonable. And people will wonder why it doesn't exist.
Military DA
1NC
Cyber-deterrence is strong now but keeping our capabilities in line with other powers’
is key to maintain stability
Healey 14
(Healey, Jason. Jason Healey is a Nonresident Senior Fellow for the Cyber Statecraft Initiative of the Atlantic Council and Senior Research
Scholar at Columbia University's School of International and Public Affairs, focusing on international cooperation, competition, and conflict
in cyberspace. From 2011 to 2015, he worked as the Director of the Council's Cyber Statecraft Initiative. Starting his career in the United
States Air Force, Mr. Healey earned two Meritorious Service Medals for his early work in cyber operations at Headquarters Air Force at the
Pentagon and as a plankholder (founding member) of the Joint Task Force – Computer Network Defense, the world's first joint cyber
warfighting unit. He has degrees from the United States Air Force Academy (political science), Johns Hopkins University (liberal arts), and
James Madison University (information security). "Commentary: Cyber Deterrence Is Working," Defense News. 7-30-2014.
http://archive.defensenews.com/article/20140730/DEFFEAT05/307300017/Commentary-Cyber-Deterrence-Working//ghs-kw)
Despite the mainstream view of cyberwar professionals and theorists, cyber
deterrence is not only possible but has been
working for decades. Cyberwar professionals are in the midst of a decades-old debate on how America could deter
adversaries from attacking us in cyberspace. In 2010, then-Deputy Defense Secretary Bill Lynn summed up the prevailing view
that “Cold War deterrence models do not apply to cyberspace” because of low barriers to entry and the anonymity of Internet attacks. Cyber
attacks, unlike intercontinental missiles, don’t have a return address. But this view is too narrow and technical. The
history of how
nations have actually fought (or not fought) conflicts in cyberspace makes it clear deterrence is not only
theoretically possible, but is actually keeping an upper threshold to cyber hostilities. The hidden hand of
deterrence is most obvious in the discussion of “a digital Pearl Harbor.” In 2012, then-Defense Secretary
Leon Panetta described his worries of such a bolt-from-the-blue attack that could cripple the United
States or its military. Though his phrase raised eyebrows among cyber professionals, there was broad agreement with the
basic implication: The United States is strategically vulnerable and potential adversaries have both the
means for strategic attack and the will to do it. But worrying about a digital Pearl Harbor actually dates
not to 2012 but to testimony by Winn Schwartau to Congress in 1991. So cyber experts have been
handwringing about a digital Pearl Harbor for more than 20 of the 70 years since the actual Pearl
Harbor. Waiting for Blow To Come? Clearly there is a different dynamic than recognized by conventional wisdom. For over
two decades, the United States has had its throat bared to the cyber capabilities of potential adversaries
(and presumably their throats are as bared to our capabilities), yet the blow has never come. There is no
solid evidence anyone has ever been killed by any cyber attack; no massive power outages, no disruptions of hospitals or faking of hospital
records, no tampering of dams causing a catastrophic flood. The
Internet is a fierce domain and conflicts are common
between nations. But deterrence — or at least restraint — has kept a lid on the worst. Consider: ■ Large
nations have never launched strategically significant disruptive cyber attacks against other large nations.
China, Russia and the United States seem to have plans to do so not as surprise attacks from a clear sky, but as part of a major (perhaps even
existential) international security crisis — not unlike the original Pearl Harbor. Cyber
attacks between equals have always
stayed below the threshold of death and destruction. ■ Larger nations do seem to be willing to launch
significant cyber assaults against rivals but only during larger crises and below the threshold of death
and destruction, such as Russian attacks against Estonia and Georgia or China egging on patriotic hackers to disrupt computers in dustups with Japan, Vietnam or the Philippines. The United States and Israel have perhaps come closest to the threshold with the Stuxnet attacks
but even here, the attacks were against a very limited target (Iranian programs to enrich uranium) and hardly out of the blue. ■ Nations seem
almost completely unrestrained using cyber espionage to further their security (and sometimes commercial) objectives and only slightly more
restrained using low levels of cyber force for small-scale disruption, such as Chinese or Russian disruption of dissidents’ websites or British
disruption of chat rooms used by Anonymous to coordinate protest attacks. In
a discussion about any other kind of military
power, such as nuclear weapons, we would have no problem using the word deterrence to describe
nations’ reluctance to unleash capabilities against one another. Indeed, a comparison with nuclear
deterrence is extremely relevant, but not necessarily the one that Cold Warriors have recognized. Setting a Ceiling
Nuclear weapons did not make all wars unthinkable, as some early postwar thinkers had hoped.
Instead, they provided a ceiling under which the superpowers fought all kinds of wars, regular and
irregular. The United States and Soviet Union, and their allies and proxies, engaged in lethal, intense conflicts from Korea to Vietnam and
through proxies in Africa, Asia and Latin America. Nuclear warheads did not stop these wars, but did set an upper
threshold neither side proved willing to exceed. Likewise, the most cyber capable nations (including America,
China and Russia) have been more than willing to engage in irregular cyber conflicts, but have stayed well
under the threshold of strategic cyber warfare, creating a de facto norm. Nations have proved just as
unwilling to launch a strategic attack in cyberspace as they are in the air, land, sea or space. The new
norm is same as the old norm. This norm of strategic restraint is a blessing but still is no help to deter cyber crime or the irregular conflicts that have long occurred under the threshold. Cyber espionage and lesser state-sponsored cyber disruption seem to be
increasing markedly in the last few years.
Backdoors are key to cyberoffensive capabilities
Schneier 13
(Schneier. Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School and a program fellow at the New America
Foundation's Open Technology. He is an American cryptographer, computer security and privacy specialist, and writer. He is the author of
several books on general security topics, computer security and cryptography. He is also a contributing writer for The Guardian news
organization. "US Offensive Cyberwar Policy.” 06-21-2013. https://www.schneier.com/blog/archives/2013/06/us_offensive_cy.html//ghskw)
Cyberattacks have the potential to be both immediate and devastating. They can disrupt
communications systems, disable national infrastructure, or, as in the case of Stuxnet, destroy nuclear reactors; but
only if they've been created and targeted beforehand. Before launching cyberattacks against another country,
we have to go through several steps. We have to study the details of the computer systems they're
running and determine the vulnerabilities of those systems. If we can't find exploitable vulnerabilities, we need
to create them: leaving "back doors," in hacker speak. Then we have to build new cyberweapons designed
specifically to attack those systems. Sometimes we have to embed the hostile code in those networks -- these are called "logic
bombs" -- to be unleashed in the future. And we have to keep penetrating those foreign networks, because
computer systems always change and we need to ensure that the cyberweapons are still effective. Like
our nuclear arsenal during the Cold War, our cyberweapons arsenal must be pretargeted and ready to launch. That's
what Obama directed the US Cyber Command to do. We can see glimpses of how effective we are in Snowden's
allegations that the NSA is currently penetrating foreign networks around the world: "We hack network
backbones -- like huge Internet routers, basically -- that give us access to the communications of
hundreds of thousands of computers without having to hack every single one."
Loss of cyber-offensive capabilities incentivizes China to take Taiwan—turns heg and
the economy
Hjortdal 11
(Magnus Hjortdal received his BSc and MSc in Political Science, with a specialization in IR, from the University of Copenhagen. He was an
Assistant Lecturer at the University of Copenhagen, a Research Fellow at the Royal Danish Defence College, and is now the Head of the
Ministry of Foreign Affairs in Denmark. “China's Use of Cyber Warfare: Espionage Meets Strategic Deterrence ,” Journal of Strategic Security,
Vol. 4 No. 2, Summer 2011: Strategic Security in the Cyber Age, Article 2, pp 1-24.
http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1101&context=jss//ghs-kw)
China's military strategy mentions cyber capabilities as an area that the People's Liberation Army (PLA)
should invest in and use on a large scale.13 The U.S. Secretary of Defense, Robert Gates, has also declared that China's
development in the cyber area increasingly concerns him,14 and that there has been a decade-long trend of cyber attacks
emanating from China.15 Virtually all digital and electronic military systems can be attacked via cyberspace. Therefore, it is essential for
a state to develop capabilities in this area if it wishes to challenge the present American hegemony. The
interesting question then is whether China is developing capabilities in cyberspace in order to deter the United States.16 China's military
strategists describe cyber capabilities as a powerful asymmetric opportunity in a deterrence strategy.19
Analysts consider that an "important theme in Chinese writings on computer-network operations (CNO)
is the use of computer-network attack (CNA) as the spearpoint of deterrence."20 CNA increases the
enemy's costs to become too great to engage in warfare in the first place, which Chinese analysts judge
to be essential for deterrence.21 This could, for example, leave China with the potential ability to deter the
United States from intervening in a scenario concerning Taiwan. CNO is viewed as a focal point for the People's
Liberation Army, but it is not clear how the actual capacity functions or precisely what conditions it works under.22 If a state with
superpower potential (here China) is to create an opportunity to ascend militarily and politically in the
international system, it would require an asymmetric deterrence capability such as that described
here.23 It is said that the "most significant computer network attack is characterized as a pre-emption
weapon to be used under the rubric of the rising Chinese strategy of […] gaining mastery before the
enemy has struck."24 Therefore, China, like other states seeking a similar capacity, has recruited massively within the hacker milieu
inside China.25 Increasing resources in the PLA are being allocated to develop assets in relation to cyberspace.26 The improvements are visible:
The PLA has established "information warfare" capabilities,27 with a special focus on cyber warfare that, according to their doctrine, can be
used in peacetime.28 Strategists from the PLA advocate the use of virus and hacker attacks that can paralyze and surprise its enemies.29
Aggressive and Widespread Cyber Attacks from China and the International Response China's
use of asymmetric capabilities,
especially cyber warfare, could pose a serious threat to the American economy.30 Research and development in
cyber espionage figure prominently in the 12th Five-Year Plan (2011–2015) that is being drafted by both the Chinese central government and
the PLA.31 Analysts say that China
could well have the most extensive and aggressive cyber warfare capability in
the world, and that this is being driven by China's desire for "global-power status."32 These observations do not
come out of the blue, but are a consequence of the fact that authoritative Chinese writings on the subject present cyber warfare as an obvious
asymmetric instrument for balancing overwhelming (mainly U.S.) power, especially in case of open conflict, but also as a deterrent.33
Escalates to nuclear war and turns the economy
Landay 2k
(Jonathan S. Landay, National Security and Intelligence Correspondent, -2K [“Top Administration Officials Warn Stakes for U.S. Are High in
Asian Conflicts”, Knight Ridder/Tribune News Service, March 10, p. Lexis. Ghs-kw)
Few if any experts think China
and Taiwan, North Korea and South Korea, or India and Pakistan are spoiling to fight.
But even a minor miscalculation by any of them could destabilize Asia, jolt the global economy and even
start a nuclear war. India, Pakistan and China all have nuclear weapons, and North Korea may have a
few, too. Asia lacks the kinds of organizations, negotiations and diplomatic relationships that helped
keep an uneasy peace for five decades in Cold War Europe. “Nowhere else on Earth are the stakes as
high and relationships so fragile,” said Bates Gill, director of northeast Asian policy studies at the Brookings Institution, a
Washington think tank. “We see the convergence of great power interest overlaid with lingering
confrontations with no institutionalized security mechanism in place. There are elements for potential
disaster.” In an effort to cool the region’s tempers, President Clinton, Defense Secretary William S. Cohen and National Security Adviser
Samuel R. Berger all will hopscotch Asia’s capitals this month. For America, the stakes could hardly be higher. There are
100,000 U.S. troops in Asia committed to defending Taiwan, Japan and South Korea, and the United
States would instantly become embroiled if Beijing moved against Taiwan or North Korea attacked South Korea.
While Washington has no defense commitments to either India or Pakistan, a conflict between the two could end the global
taboo against using nuclear weapons and demolish the already shaky international nonproliferation
regime. In addition, globalization has made a stable Asia — with its massive markets, cheap labor, exports and resources — indispensable to the U.S. economy. Numerous U.S. firms and millions of American jobs
depend on trade with Asia that totaled $600 billion last year, according to the Commerce Department.
2NC UQ
Cyber-capabilities strong now but it’s close
NBC 13
(NBC citing Scott Borg, CEO of the US Cyber Consequences Unit, an independent, non-profit research institute. Borg has lectured at Harvard, Yale, Columbia, London, and other leading universities. "Expert: US in cyberwar arms race with China, Russia," NBC News. 02-20-2013. http://investigations.nbcnews.com/_news/2013/02/20/17022378-expert-us-in-cyberwar-arms-race-with-china-russia//ghs-kw)
The United States is locked in a tight race with China and Russia to build destructive cyberweapons
capable of seriously damaging other nations’ critical infrastructure, according to a leading expert on hostilities
waged via the Internet. Scott Borg, CEO of the U.S. Cyber Consequences Unit, a nonprofit institute that advises the U.S. government and
businesses on cybersecurity, said all
three nations have built arsenals of sophisticated computer viruses, worms,
Trojan horses and other tools that place them atop the rest of the world in the ability to inflict serious
damage on one another, or lesser powers. Ranked just below the Big Three, he said, are four U.S. allies: Great
Britain, Germany, Israel and perhaps Taiwan. But in testament to the uncertain risk/reward ratio in cyberwarfare, Iran has used
attacks on its nuclear program to bolster its offensive capabilities and is now developing its own "cyberarmy," Borg said. Borg offered his
assessment of the current state of cyberwar capabilities Tuesday in the wake of a report by the American computer security company Mandiant
linking hacking attacks and cyber espionage against the U.S. to a sophisticated Chinese group known as "People's Liberation Army Unit 61398." In
today’s brave new interconnected world, hackers
who can defeat security defenses are capable of disrupting an
array of critical services, including delivery of water, electricity and heat, or bringing transportation to a grinding halt. U.S. senators
last year received a closed-door briefing at which experts demonstrated how a power company employee could take down the New York City
electrical grid by clicking on a single email attachment, the New York Times reported. U.S. officials rarely discuss offensive capability when
discussing cyberwar, though several privately told NBC News recently that the
U.S. could "shut down" the electrical grid of a
smaller nation -- Iran, for example – if it chose to do so. Borg echoed that assessment, saying the U.S. cyberwarriors, who work
within the National Security Agency, are “very good across the board. … There is a formidable capability.”
“Stuxnet and Flame (malware used to disrupt and gather intelligence on Iran's nuclear program) are
demonstrations of that,” he said. “… (The U.S.) could shut down most critical infrastructure in potential
adversaries relatively quickly.”
Cyber-deterrence works now
Healey 14
(Healey, Jason. Jason Healey is a Nonresident Senior Fellow for the Cyber Statecraft Initiative of the Atlantic Council and Senior Research
Scholar at Columbia University's School of International and Public Affairs, focusing on international cooperation, competition, and conflict
in cyberspace. From 2011 to 2015, he worked as the Director of the Council's Cyber Statecraft Initiative. Starting his career in the United
States Air Force, Mr. Healey earned two Meritorious Service Medals for his early work in cyber operations at Headquarters Air Force at the
Pentagon and as a plankholder (founding member) of the Joint Task Force – Computer Network Defense, the world's first joint cyber
warfighting unit. He has degrees from the United States Air Force Academy (political science), Johns Hopkins University (liberal arts), and
James Madison University (information security). "Commentary: Cyber Deterrence Is Working," Defense News. 7-30-2014.
http://archive.defensenews.com/article/20140730/DEFFEAT05/307300017/Commentary-Cyber-Deterrence-Working//ghs-kw)
Nations have been unwilling to take advantage of each other’s vulnerable infrastructures perhaps because,
as Joe Nye notes in his book, “The Future of Power,” “interstate deterrence through entanglement and denial still
exist” for cyber conflicts. The most capable cyber nations rely heavily on the same Internet infrastructure and global standards
(though using significant local infrastructure), so attacks above a certain threshold are not obviously in any nation’s self-interest. In
addition, both deterrence by denial and deterrence by punishment are in force. Despite their vulnerabilities,
nations may still be able to mount effective-enough defenses to deny any benefits to the adversary.
Taking down a cyber target is spectacularly easy and well within the capability of the proverbial "two-teenagers-in-a-basement." But keeping a target down over time in the face of determined defenses is
very hard, demanding intelligence, battle damage assessment and the ability to keep restriking targets
over time. These capabilities are still largely the province of the great cyber powers, meaning it can be
trivially easy to determine the likely attacker. During all of the most disruptive cyber conflicts (such as
Estonia, Georgia or Stuxnet) there was quick consensus on the “obvious choice” of which nation or nations were
behind the assault. If any of those attacks had caused large numbers of deaths or truly strategic
disruption, hiding behind Internet anonymity (“It wasn’t us and you can’t prove otherwise”) would ring flat and invite
a retaliatory strike.
2NC Link - Backdoors
Backdoors and surveillance are key to winning the cyber arms race
Spiegel 15
(Spiegel Online, Hamburg, Germany. "The Digital Arms Race: NSA Preps America for Future Battle," SPIEGEL ONLINE. 1-17-2015.
http://www.spiegel.de/international/world/new-snowden-docs-indicate-scope-of-nsa-preparations-for-cyber-battle-a-1013409.html//ghskw)
Potential interns are also told that research into third party computers might include plans to "remotely degrade or destroy opponent
computers, routers, servers and network enabled devices by attacking the hardware." Using a program called Passionatepolka, for example,
they may be asked to "remotely brick network cards." With
programs like Berserkr they would implant "persistent
backdoors" and "parasitic drivers". Using another piece of software called Barnfire, they would "erase the BIOS on a brand of servers that
act as a backbone to many rival governments." An intern's tasks might also include remotely destroying the functionality of hard drives.
Ultimately, the goal of the internship program was "developing an attacker's mindset." The internship listing is eight years old, but the
attacker's mindset has since become a kind of doctrine for the NSA's data spies. And the intelligence service isn't just trying to achieve mass
surveillance of Internet communication, either. The digital spies of the Five Eyes alliance -- comprised of the United States, Britain, Canada,
Australia and New Zealand -- want more. The Birth of D Weapons According to top secret documents from the archive of NSA whistleblower
Edward Snowden seen exclusively by SPIEGEL, they are
planning for wars of the future in which the Internet will play a
critical role, with the aim of being able to use the net to paralyze computer networks and, by doing so,
potentially all the infrastructure they control, including power and water supplies, factories, airports or
the flow of money. During the 20th century, scientists developed so-called ABC weapons -- atomic, biological and chemical. It took
decades before their deployment could be regulated and, at least partly, outlawed. New digital weapons have now been
developed for the war on the Internet. But there are almost no international conventions or supervisory
authorities for these D weapons, and the only law that applies is the survival of the fittest. Canadian media
theorist Marshall McLuhan foresaw these developments decades ago. In 1970, he wrote, "World War III is a guerrilla information war with no
division between military and civilian participation." That's precisely the reality that spies are preparing for today. The US Army, Navy, Marines
and Air Force have already established their own cyber forces, but it is the NSA, also officially a military agency, that is taking the lead.
It's no coincidence that the director of the NSA also serves as the head of the US Cyber Command. The country's leading data spy, Admiral
Michael Rogers, is also its chief cyber warrior and his close to 40,000 employees are responsible for both digital spying and destructive network
attacks. Surveillance only 'Phase 0' From
a military perspective, surveillance of the Internet is merely "Phase 0" in
the US digital war strategy. Internal NSA documents indicate that it is the prerequisite for everything
that follows. They show that the aim of the surveillance is to detect vulnerabilities in enemy systems.
Once "stealthy implants" have been placed to infiltrate enemy systems, thus allowing "permanent
accesses," then Phase Three has been achieved -- a phase headed by the word "dominate" in the
documents. This enables them to "control/destroy critical systems & networks at will through prepositioned accesses (laid in Phase 0)." Critical infrastructure is considered by the agency to be anything
that is important in keeping a society running: energy, communications and transportation. The internal
documents state that the ultimate goal is "real time controlled escalation". One NSA presentation proclaims that
"the next major conflict will start in cyberspace." To that end, the US government is currently
undertaking a massive effort to digitally arm itself for network warfare. For the 2013 secret intelligence budget, the NSA
projected it would need around $1 billion in order to increase the strength of its computer network attack operations. The budget included an
increase of some $32 million for "unconventional solutions" alone.
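Note for explaining the Spiegel card in the block: it describes a ladder that runs from "Phase 0" surveillance, to stealthy implants and "permanent accesses," to the "dominate" phase. The sketch below is only a toy model of that sequence for explanation purposes; the phase labels come from the card, but the one-step-at-a-time ordering rule is my assumption, not anything in the NSA documents.

from enum import IntEnum

# Toy model of the phased doctrine described in the Spiegel card.
# Phase labels are from the card; the ordering rule is an illustrative assumption.
class CyberPhase(IntEnum):
    PHASE_0_SURVEILLANCE = 0   # map targets and find vulnerabilities
    IMPLANT_PLACEMENT = 1      # "stealthy implants" establish initial access
    PERMANENT_ACCESS = 2       # accesses persist inside enemy systems
    PHASE_3_DOMINATE = 3       # "control/destroy critical systems & networks at will"

def reachable(current: CyberPhase, desired: CyberPhase) -> bool:
    """Only the next phase (or one already reached) is available from the current phase."""
    return desired <= current + 1

if __name__ == "__main__":
    # Surveillance alone does not get you to "dominate"; implants have to come first.
    print(reachable(CyberPhase.PHASE_0_SURVEILLANCE, CyberPhase.PHASE_3_DOMINATE))  # False
    print(reachable(CyberPhase.PERMANENT_ACCESS, CyberPhase.PHASE_3_DOMINATE))      # True

The point to pull out is that the card treats surveillance and backdoor access as the prerequisite for every later phase, which is the link.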
Back doors are key to cyber-warfare
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington Post, most
recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting lecturer at Princeton’s
Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic, and Middle East correspondent,
Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on temporary assignment in 2013 and 2014 to anchor
The Post's coverage of the NSA disclosures after receiving an archive of classified documents from Edward Snowden. Ellen Nakashima is a
national security reporter for The Washington Post. She focuses on issues relating to intelligence, technology and civil liberties. She
previously served as a Southeast Asia correspondent for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a
biography of Gore, and has also covered federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy
agencies mounted 231 offensive cyber-operations in 2011, documents show," Washington Post. 8-30-2013.
https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documentsshow/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
“The policy debate has moved so that offensive options are more prominent now,” said former deputy defense
secretary William J. Lynn III, who has not seen the budget document and was speaking generally. “I think there’s more of a case made now that offensive
cyberoptions can be an important element in deterring certain adversaries.” Of
the 231 offensive operations conducted in 2011, the
budget said, nearly three-quarters were against top-priority targets, which former officials say includes adversaries
such as Iran, Russia, China and North Korea and activities such as nuclear proliferation. The document provided
few other details about the operations. Stuxnet, a computer worm reportedly developed by the United States and Israel that destroyed Iranian nuclear centrifuges
in attacks in 2009 and 2010, is often cited as the most dramatic use of a cyberweapon. Experts said no other known cyberattacks carried out by the United States
match the physical damage inflicted in that case. U.S. agencies define offensive cyber-operations as activities intended “to manipulate, disrupt, deny, degrade, or
destroy information resident in computers or computer networks, or the computers and networks themselves,” according to a presidential directive issued in
October 2012. Most offensive operations have immediate effects only on data or the proper functioning of an adversary’s machine: slowing its network connection,
filling its screen with static or scrambling the results of basic calculations. Any of those could have powerful effects if they caused an adversary to botch the timing
of an attack, lose control of a computer or miscalculate locations. U.S. intelligence services are making routine use around the world of government-built malware
that differs little in function from the “advanced persistent threats” that U.S. officials attribute to China. The principal difference, U.S. officials told The Post, is that
China steals U.S. corporate secrets for financial gain. “The Department of Defense does engage” in computer network exploitation, according to an e-mailed
statement from an NSA spokesman, whose agency is part of the Defense Department. “The department does ***not*** engage in economic espionage in any
domain, including cyber.” ‘Millions of implants’ The
administration’s cyber-operations sometimes involve what one
budget document calls “field operations” abroad, commonly with the help of CIA operatives or
clandestine military forces, “to physically place hardware implants or software modifications.” Much more
often, an implant is coded entirely in software by an NSA group called Tailored Access Operations (TAO). As its name suggests, TAO builds
attack tools that are custom-fitted to their targets. The NSA unit’s software engineers would rather tap
into networks than individual computers because there are usually many devices on each network.
Tailored Access Operations has software templates to break into common brands and models of
“routers, switches and firewalls from multiple product vendor lines,” according to one document describing its work. The
implants that TAO creates are intended to persist through software and equipment upgrades, to copy
stored data, “harvest” communications and tunnel into other connected networks. This year TAO is working on
implants that “can identify select voice conversations of interest within a target network and exfiltrate select cuts,” or excerpts, according to one budget document.
In some cases, a single compromised device opens the door to hundreds or thousands of others.
Sometimes an implant’s purpose is to create a back door for future access. “You pry open the window
somewhere and leave it so when you come back the owner doesn’t know it’s unlocked, but you can
get back in when you want to,” said one intelligence official, who was speaking generally about the topic and was not privy to the budget. The
official spoke on the condition of anonymity to discuss sensitive technology. Under U.S. cyberdoctrine, these operations are known as “exploitation,”
not “attack,” but they are essential precursors both to attack and defense. By the end of this year, GENIE is projected to
control at least 85,000 implants in strategically chosen machines around the world. That is quadruple the number —
21,252 — available in 2008, according to the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were limited until
recently by the need for human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of
the 68,975 machines with active implants in 2011. For
GENIE’s next phase, according to an authoritative reference
document, the NSA has brought online an automated system, code-named TURBINE, that is capable of
managing “potentially millions of implants” for intelligence gathering “and active attack.” ‘The ROC’ When it
comes time to fight the cyberwar against the best of the NSA’s global competitors, the TAO calls in its elite operators, who work at the agency’s Fort Meade
headquarters and in regional operations centers in Georgia, Texas, Colorado and Hawaii. The NSA’s organizational chart has the main office as S321. Nearly
everyone calls it “the ROC,” pronounced “rock”: the Remote Operations Center. “To the NSA as a whole, the ROC is where the hackers live,” said a former operator
from another section who has worked closely with the exploitation teams. “It’s basically the one-stop shop for any kind of active operation that’s not defensive.”
Once the hackers find a hole in an adversary’s defense, “[t]argeted
systems are compromised electronically, typically
providing access to system functions as well as data. System logs and processes are modified to cloak
the intrusion, facilitate future access, and accomplish other operational goals,” according to a 570-page budget
blueprint for what the government calls its Consolidated Cryptologic Program, which includes the NSA. Teams from the FBI, the CIA and U.S. Cyber Command work
alongside the ROC, with overlapping missions and legal authorities. So do the operators from the NSA’s National Threat Operations Center, whose mission is
focused primarily on cyberdefense. That was Snowden’s job as a Booz Allen Hamilton contractor, and it required him to learn the NSA’s best hacking techniques.
According to one key document, the
ROC teams give Cyber Command “specific target related technical and
operational material (identification/recognition), tools and techniques that allow the employment of
U.S. national and tactical specific computer network attack mechanisms.” The intelligence community’s
cybermissions include defense of military and other classified computer networks against foreign attack,
a task that absorbs roughly one-third of a total cyber operations budget of $1.02 billion in fiscal 2013, according to the Cryptologic Program budget. The ROC’s
breaking-and-entering mission, supported by the GENIE infrastructure, spends nearly twice as much: $651.7 million. Most
GENIE operations aim
for “exploitation” of foreign systems, a term defined in the intelligence budget summary as
“surreptitious virtual or physical access to create and sustain a presence inside targeted systems or
facilities.” The document adds: “System logs and processes are modified to cloak the intrusion, facilitate
future access, and accomplish other operational goals.” The NSA designs most of its own implants, but it devoted $25.1 million this
year to “additional covert purchases of software vulnerabilities” from private malware vendors, a growing gray-market industry based largely in Europe.
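Quick math on the GENIE numbers in this card, since they explain why the TURBINE automation matters. The raw figures are from the card; the ratios are derived arithmetic, and the per-staff figure is an illustration, not a reported NSA metric.

# Figures taken from the Washington Post card; derived ratios are illustrative arithmetic only.
implants_active_2011 = 68_975      # machines with active implants in 2011
implants_fully_used_2011 = 8_448   # implants GENIE made full use of in 2011
genie_staff = 1_870                # GENIE staff
implants_2008 = 21_252             # implants available in 2008
implants_projected = 85_000        # implants projected by the end of the budget year

utilization = implants_fully_used_2011 / implants_active_2011
per_operator = implants_fully_used_2011 / genie_staff
growth = implants_projected / implants_2008

print(f"Share of implants fully used in 2011: {utilization:.1%}")   # ~12.2%
print(f"Implants fully used per staff member: {per_operator:.1f}")  # ~4.5
print(f"Projected growth over the 2008 inventory: {growth:.1f}x")   # ~4.0x

On these numbers, human-in-the-loop control leaves roughly nine in ten implants idle, which is the gap the card says TURBINE ("potentially millions of implants") is built to close.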
2NC Link – Exports
Backdoors are inserted in US products and exported globally—Schneier indicates backdoors in networks are key to cyber-operations
Greenwald 14
(Glenn Greenwald. Glenn Greenwald is an ex-constitutional lawyer and a contributor for the Guardian, NYT, LAT, and The Intercept. He
received his BA from George Washington University and a JD from NYU. "Glenn Greenwald: how the NSA tampers with US-made internet
routers," Guardian. 5-12-2014. http://www.theguardian.com/books/2014/may/12/glenn-greenwald-nsa-tampers-us-internet-routerssnowden//ghs-kw)
But while American companies were being warned away from supposedly untrustworthy Chinese routers, foreign organisations would have
been well advised to beware of American-made ones. A June 2010 report from the head of the NSA's Access and Target Development
department is shockingly explicit. The
NSA routinely receives – or intercepts – routers, servers and other
computer network devices being exported from the US before they are delivered to the international
customers. The agency then implants backdoor surveillance tools, repackages the devices with a factory
seal and sends them on. The NSA thus gains access to entire networks and all their users. The document
gleefully observes that some "SIGINT tradecraft … is very hands-on (literally!)". Eventually, the implanted
device connects back to the NSA. The report continues: "In one recent case, after several months a beacon
implanted through supply-chain interdiction called back to the NSA covert infrastructure. This call back
provided us access to further exploit the device and survey the network." It is quite possible that Chinese firms are
implanting surveillance mechanisms in their network devices. But the US is certainly doing the same.
Routers are key—they give us access to thousands of connected devices
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington Post, most
recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting lecturer at Princeton’s
Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic, and Middle East correspondent,
Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on temporary assignment in 2013 and 2014 to anchor
The Post's coverage of the NSA disclosures after receiving an archive of classified documents from Edward Snowden. Ellen Nakashima is a
national security reporter for The Washington Post. She focuses on issues relating to intelligence, technology and civil liberties. She
previously served as a Southeast Asia correspondent for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a
biography of Gore, and has also covered federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy
agencies mounted 231 offensive cyber-operations in 2011, documents show," Washington Post. 8-30-2013.
https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documentsshow/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
“The policy debate has moved so that offensive options are more prominent now,” said former deputy defense
secretary William J. Lynn III, who has not seen the budget document and was speaking generally. “I think there’s more of a case made now that offensive
cyberoptions can be an important element in deterring certain adversaries.” Of
the 231 offensive operations conducted in 2011, the budget said, nearly three-quarters were against top-priority targets, which former officials say includes adversaries
such as Iran, Russia, China and North Korea and activities such as nuclear proliferation. The document provided
few other details about the operations. Stuxnet, a computer worm reportedly developed by the United States and Israel that destroyed Iranian nuclear centrifuges
in attacks in 2009 and 2010, is often cited as the most dramatic use of a cyberweapon. Experts said no other known cyberattacks carried out by the United States
match the physical damage inflicted in that case. U.S. agencies define offensive cyber-operations as activities intended “to manipulate, disrupt, deny, degrade, or
destroy information resident in computers or computer networks, or the computers and networks themselves,” according to a presidential directive issued in
October 2012. Most offensive operations have immediate effects only on data or the proper functioning of an adversary’s machine: slowing its network connection,
filling its screen with static or scrambling the results of basic calculations. Any of those could have powerful effects if they caused an adversary to botch the timing
of an attack, lose control of a computer or miscalculate locations. U.S. intelligence services are making routine use around the world of government-built malware
that differs little in function from the “advanced persistent threats” that U.S. officials attribute to China. The principal difference, U.S. officials told The Post, is that
China steals U.S. corporate secrets for financial gain. “The Department of Defense does engage” in computer network exploitation, according to an e-mailed
statement from an NSA spokesman, whose agency is part of the Defense Department. “The department does ***not*** engage in economic espionage in any
domain, including cyber.” ‘Millions of implants’ The
administration’s cyber-operations sometimes involve what one
budget document calls “field operations” abroad, commonly with the help of CIA operatives or
clandestine military forces, “to physically place hardware implants or software modifications.” Much more
often, an implant is coded entirely in software by an NSA group called Tailored Access Operations (TAO). As its name suggests, TAO builds
attack tools that are custom-fitted to their targets. The NSA unit’s software engineers would rather tap
into networks than individual computers because there are usually many devices on each network.
Tailored Access Operations has software templates to break into common brands and models of
“routers, switches and firewalls from multiple product vendor lines,” according to one document describing its work. The
implants that TAO creates are intended to persist through software and equipment upgrades, to copy
stored data, “harvest” communications and tunnel into other connected networks. This year TAO is working on
implants that “can identify select voice conversations of interest within a target network and exfiltrate select cuts,” or excerpts, according to one budget document.
In some cases, a single compromised device opens the door to hundreds or thousands of others.
Sometimes an implant’s purpose is to create a back door for future access. “You pry open the window
somewhere and leave it so when you come back the owner doesn’t know it’s unlocked, but you can
get back in when you want to,” said one intelligence official, who was speaking generally about the topic and was not privy to the budget. The
official spoke on the condition of anonymity to discuss sensitive technology. Under U.S. cyberdoctrine, these operations are known as “exploitation,”
not “attack,” but they are essential precursors both to attack and defense. By the end of this year, GENIE is projected to
control at least 85,000 implants in strategically chosen machines around the world. That is quadruple the number —
21,252 — available in 2008, according to the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were limited until
recently by the need for human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of
the 68,975 machines with active implants in 2011. For
GENIE’s next phase, according to an authoritative reference
document, the NSA has brought online an automated system, code-named TURBINE, that is capable of
managing “potentially millions of implants” for intelligence gathering “and active attack.” ‘The ROC’ When it
comes time to fight the cyberwar against the best of the NSA’s global competitors, the TAO calls in its elite operators, who work at the agency’s Fort Meade
headquarters and in regional operations centers in Georgia, Texas, Colorado and Hawaii. The NSA’s organizational chart has the main office as S321. Nearly
everyone calls it “the ROC,” pronounced “rock”: the Remote Operations Center. “To the NSA as a whole, the ROC is where the hackers live,” said a former operator
from another section who has worked closely with the exploitation teams. “It’s basically the one-stop shop for any kind of active operation that’s not defensive.”
Once the hackers find a hole in an adversary’s defense, “[t]argeted
systems are compromised electronically, typically
providing access to system functions as well as data. System logs and processes are modified to cloak
the intrusion, facilitate future access, and accomplish other operational goals,” according to a 570-page budget
blueprint for what the government calls its Consolidated Cryptologic Program, which includes the NSA. Teams from the FBI, the CIA and U.S. Cyber Command work
alongside the ROC, with overlapping missions and legal authorities. So do the operators from the NSA’s National Threat Operations Center, whose mission is
focused primarily on cyberdefense. That was Snowden’s job as a Booz Allen Hamilton contractor, and it required him to learn the NSA’s best hacking techniques.
According to one key document, the
ROC teams give Cyber Command “specific target related technical and
operational material (identification/recognition), tools and techniques that allow the employment of
U.S. national and tactical specific computer network attack mechanisms.” The intelligence community’s
cybermissions include defense of military and other classified computer networks against foreign attack,
a task that absorbs roughly one-third of a total cyber operations budget of $1.02 billion in fiscal 2013, according to the Cryptologic Program budget. The ROC’s
breaking-and-entering mission, supported by the GENIE infrastructure, spends nearly twice as much: $651.7 million. Most
GENIE operations aim
for “exploitation” of foreign systems, a term defined in the intelligence budget summary as
“surreptitious virtual or physical access to create and sustain a presence inside targeted systems or
facilities.” The document adds: “System logs and processes are modified to cloak the intrusion, facilitate
future access, and accomplish other operational goals.” The NSA designs most of its own implants, but it devoted $25.1 million this
year to “additional covert purchases of software vulnerabilities” from private malware vendors, a growing gray-market industry based largely in Europe.
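For the router tag, the warrant is fan-out: one compromised router exposes every host behind it, while compromising endpoints one at a time scales linearly. The comparison below is a toy illustration only; the card says networks carry "hundreds or thousands" of devices, so the host counts here are made up.

# Toy fan-out comparison. Network sizes are hypothetical; the card gives no exact figures.
hosts_behind_each_router = [250, 800, 1500, 3000]  # assumed hosts behind four routers

# One router compromise per network exposes every host behind that router.
via_routers = sum(hosts_behind_each_router)

# The same number of endpoint compromises gains exactly one host each.
via_endpoints = len(hosts_behind_each_router)

print(f"{len(hosts_behind_each_router)} router compromises  -> {via_routers} reachable hosts")
print(f"{len(hosts_behind_each_router)} endpoint compromises -> {via_endpoints} reachable hosts")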
2NC Link - Zero Days
Zero-days are key to the cyber-arsenal
Cushing 14
(Cushing, Seychelle. Cushing received her MA with Distinction in Political Science and her BA in Political Science from Simon Fraser
University. She is the Manager of Strategic Initiatives and Special Projects at the Office of the Vice-President, Research. “Leveraging
Information as Power: America’s Pursuit of Cyber Security,” Simon Fraser University. 11-28-2014.
http://summit.sfu.ca/system/files/iritems1/14703/etd8726_SCushing.pdf//ghs-kw)
Nuclear or conventional weapons, once developed, can remain dormant yet functional until needed. In comparison, the zero-days used in cyber weapons require the US to constantly discover new vulnerabilities to maintain a deployable cyber arsenal. Holding a specific zero-day does not guarantee that the vulnerability will remain unpatched for a prolonged period of time by the targeted state.59 Complicating this is the fact that undetected vulnerabilities, once
acquired, are rarely used immediately given the time and resources it takes to construct a cyber attack.60 In the time between acquisition and
use, a patch for the vulnerability may be released, whether through routine patches or a specific identification of a security hole, rendering the
vulnerability obsolete. To minimize this, America deploys
several zero-days at once in a cyber attack to increase the
odds that at least one (or more) of the vulnerabilities remains open to provide system access.6 2.4. One
Attack, Multiple Vulnerabilities Multiple backdoor entry points are preferable given that America cannot be
absolutely certain of what vulnerabilities the target system will contain62 despite extensive pre-launch cyber attack
testing63 and customization.64 A successful cyber attack needs a minimum of one undetected vulnerability to
gain access to the target system. Each successive zero-day that works adds to the strength and
sophistication of a cyber assault.65 As one vulnerability is patched, America can still rely on the other undetected vulnerabilities to
continue its cyber strike. Incorporating multiple undetected vulnerabilities into a cyber attack reduces the need
to create new cyber attacks after each zero-day fails. Stuxnet, a joint US-Israel operation, was a cyber attack
designed to disrupt Iran’s progress on its nuclear weapons program.66 The attack was designed to alter the code of
Natanz’s computers and industrial control systems to induce “chronic fatigue,” rather than destruction, of the nuclear centrifuges.67 The
precision of Stuxnet ensured that all other control systems were ignored except for those regulating the centrifuges.68 What
is notable
about Stuxnet is its use of four zero-day exploits (of which one was allegedly purchased)69 in the
attack.70 That is, to target one system, Stuxnet entered through four different backdoors. A target state aware of
a specific vulnerability in its system will enact a patch upon detection and likely assume that the problem is fixed. Exploiting multiple
vulnerabilities creates variations in how the attack is executed given that different backdoors alter how
the attack enters the target system.71 One patch does not stop the cyber attack. The use of multiple zero-days
thus capitalizes on a state’s limited awareness of the vulnerabilities in its system. Each phase of Stuxnet was different from its previous phase
which created confusion among the Iranians. Launched in 2009, Stuxnet was not discovered by the Iranians until 2010.72 Yet
even upon
the initial discovery of the attack, who the attacker was remained unclear. The failures in the Natanz
centrifuges were first attributed to insider error73 and later to China74 before finally discovering the true culprits.75
The use of multiple undetected vulnerabilities helped to obscure the US and Israel as the actual attackers.76 The Stuxnet case helps
illustrate the efficacy of zero-day attacks as a means of attaining political goals. Although Stuxnet did not
produce immediate results in terminating Iran’s nuclear program, it helped buy time for the Americans to consider other options against Iran. A
nuclear Iran would not only threaten American security but possibly open a third conflict for America77 in the Middle East given Israel’s
proclivity to strike a nuclear Iran first. Stuxnet allowed the United States to delay Iran’s nuclear program without resorting to kinetic action.78
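Cushing's redundancy argument is probabilistic: the attack only needs one exploit to survive patching, so stacking zero-days multiplies the odds of access. A minimal sketch of that logic, assuming (purely for illustration) that each exploit is independently patched before use with some fixed probability; the per-exploit patch chance is an assumption, not a figure from Cushing, while the four-exploit Stuxnet count is from the card.

# Illustrative probability model of why attacks bundle several zero-days.
def access_probability(num_zero_days: int, p_patched: float) -> float:
    """Chance that at least one bundled zero-day is still unpatched, treating each
    exploit as independently patched with probability p_patched before use."""
    return 1.0 - p_patched ** num_zero_days

for n in (1, 2, 4):  # Stuxnet reportedly used four zero-days
    print(f"{n} zero-day(s), 30% patch chance each -> {access_probability(n, 0.30):.1%} chance of access")
# 1 -> 70.0%, 2 -> 91.0%, 4 -> 99.2%

The exact percentages depend entirely on the assumed patch probability; the structural point is that success odds climb quickly with each added exploit, which is the card's warrant for a stockpile of undisclosed vulnerabilities.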
Zero-days are key to effective cyber-war offensive capabilities
Gjelten 13
(Gjelten, Tom. TOM GJELTEN is a correspondent for NPR. Over the years, he has reported extensively from Europe and Latin America,
including Cuba. He was reporting live from the Pentagon when it was attacked on September 11, 2001. Subsequently, he covered the war in
Afghanistan and Iraq invasion as NPR's lead Pentagon correspondent. Gjelten also covered the first Gulf War and the wars in Croatia and
Bosnia, Nicaragua, El Salvador, Guatemala, and Colombia. From Berlin (1990–1994), he covered Europe’s political and economic transition
after the fall of the Berlin Wall. Gjelten’s series From Marx to Markets, documenting Eastern Europe’s transition to a market economy,
earned him an Overseas Press Club award for the Best Business or Economic Reporting in Radio or TV. His reporting from Bosnia earned
him a second Overseas Press Club Award, a George Polk Award, and a Robert F Kennedy Journalism Award. Gjelten’s books include Sarajevo
Daily: A City and Its Newspaper Under Siege, which the New York Times called “a chilling portrayal of a city’s slow murder.” His 2008 book,
Bacardi and the Long Fight for Cuba: The Biography of a Cause, was selected as a New York Times Notable Nonfiction Book. "First Strike: US
Cyber Warriors Seize the Offensive," World Affairs Journal. January/February 2013. http://www.worldaffairsjournal.org/article/first-strikeus-cyber-warriors-seize-offensive//ghs-kw)
That was then. Much
of the cyber talk around the Pentagon these days is about offensive operations. It is no
longer enough for cyber troops to be deployed along network perimeters, desperately trying to block
the constant attempts by adversaries to penetrate front lines. The US military’s geek warriors are now
prepared to go on the attack, armed with potent cyberweapons that can break into enemy computers
with pinpoint precision. The new emphasis is evident in a program launched in October 2012 by the Defense Advanced Research Projects Agency
(DARPA), the Pentagon’s experimental research arm. DARPA funding enabled the invention of the Internet, stealth aircraft, GPS, and voice-recognition software,
and the new program, dubbed Plan X, is equally ambitious. DARPA
managers said the Plan X goal was “to create revolutionary
technologies for understanding, planning, and managing cyberwarfare.” The US Air Force was also signaling its readiness to
go into cyber attack mode, announcing in August that it was looking for ideas on how “to destroy, deny, degrade, disrupt, deceive, corrupt, or usurp the adversaries
[sic] ability to use the cyberspace domain for his advantage.” The new interest in attacking enemies rather than simply defending against them has even spread to
the business community. Like their military counterparts, cybersecurity experts in the private sector have become increasingly frustrated by their inability to stop
intruders from penetrating critical computer networks to steal valuable data or even sabotage network operations. The
new idea is to pursue the
perpetrators back into their own networks. “We’re following a failed security strategy in cyber,” says Steven
Chabinsky, formerly the head of the FBI’s cyber intelligence section and now chief risk officer at CrowdStrike, a startup company that promotes aggressive action
against its clients’ cyber adversaries. “There’s
no way that we are going to win the cybersecurity effort on defense.
We have to go on offense.” The growing interest in offensive operations is bringing changes in the cybersecurity industry. Expertise in patching
security flaws in one’s own computer network is out; expertise in finding those flaws in the other guy’s network is in. Among the “hot jobs” listed
on the career page at the National Security Agency are openings for computer scientists who specialize
in “vulnerability discovery.” Demand is growing in both government and industry circles for technologists with the skills to develop ever more
sophisticated cyber tools, including malicious software—malware—with such destructive potential as to qualify as cyberweapons when implanted in an enemy’s
network. “Offense
is the biggest growth sector in the cyber industry right now,” says Jeffrey Carr, a cybersecurity analyst
and author of Inside Cyber Warfare. But have we given sufficient thought to what we are doing? Offensive operations in the cyber domain raise a host of legal,
ethical, and political issues, and governments, courts, and business groups have barely begun to consider them. The move to offensive operations in cyberspace was
actually under way even as Pentagon officials were still insisting their strategy was defensive. We just didn’t know it. The big revelation came in June 2012, when
New York Times reporter David Sanger reported that the United States and Israel were behind the development of the Stuxnet worm, which had been used to
damage computer systems controlling Iran’s nuclear enrichment facilities. Sanger, citing members of President Obama’s national security team, said the attacks
were code-named Olympic Games and constituted
“America’s first sustained use of cyberweapons.” The highly sophisticated
Stuxnet worm delivered computer instructions that caused some Iranian centrifuges to spin uncontrollably and self-destruct. According to Sanger, the secret cyber
attacks had begun during the presidency of George W. Bush but were accelerated on the orders of Obama. The publication of such a highly classified operation
provoked a firestorm of controversy, but government officials who took part in discussions of Stuxnet have not denied the accuracy of Sanger’s reporting. “He
nailed it,” one participant told me. In
the aftermath of the Stuxnet revelations, discussions about cyber war became
more realistic and less theoretical. Here was a cyberweapon that had been designed and used for the
same purpose and with the same effect as a kinetic weapon: like a missile or a bomb, it caused physical destruction. Security
experts had been warning that a US adversary could use a cyberweapon to destroy power plants, water treatment facilities, or other critical infrastructure assets
here in the United States, but the
Stuxnet story showed how the American military itself could use an offensive
cyberweapon against an enemy. The advantages of such a strike were obvious. A cyberweapon could
take down computer networks and even destroy physical equipment without the civilian casualties that
a bombing mission would entail. Used preemptively, it could keep a conflict from evolving in a more lethal direction. The targeted
country would have a hard time determining where the cyber attack came from. In fact, the news that the United
States had actually developed and used an offensive cyberweapon gave new significance to hints US officials had quietly dropped on previous occasions about the
enticing potential of such tools. In remarks at the Brookings Institution in April 2009, for example, the then Air Force chief of staff, General Norton Schwartz,
suggested that cyberweapons could be used to attack an enemy’s air defense system. “Traditionally,”
Schwartz said, “we take down
integrated air defenses via kinetic means. But if it were possible to interrupt radar systems or surface to
air missile systems via cyber, that would be another very powerful tool in the tool kit allowing us to
accomplish air missions.” He added, “We will develop that—have [that]—capability.” A full two years before the Pentagon
rolled out its “defensive” cyber strategy, Schwartz was clearly suggesting an offensive application. The Pentagon’s reluctance in 2011 to be more transparent about
its interest in offensive cyber capabilities may simply have reflected sensitivity to an ongoing dispute within the Obama administration. Howard Schmidt, the White
House Cybersecurity Coordinator at the time the Department of Defense strategy was released, was steadfastly opposed to any use of the term “cyber war” and
had no patience for those who seemed eager to get into such a conflict. But his was a losing battle. Pentagon
planners had already classified
cyberspace officially as a fifth “domain” of warfare, alongside land, air, sea, and space. As the 2011 cyber strategy
noted, that designation “allows DoD to organize, train, and equip for cyberspace as we do in air, land, maritime, and space to support national security interests.”
That statement by itself contradicted any notion that the Pentagon’s interest in cyber was mainly defensive. Once
the US military accepts the
challenge to fight in a new domain, it aims for superiority in that domain over all its rivals, in both
offensive and defensive realms. Cyber is no exception. The US Air Force budget request for 2013 included $4 billion in proposed
spending to achieve “cyberspace superiority,” according to Air Force Secretary Michael Donley. It is hard to imagine the US military settling for any less, given the
importance of electronic assets in its capabilities. Even small unit commanders go into combat equipped with laptops and video links. “We’re no longer just hurling
mass and energy at our opponents in warfare,” says John Arquilla, professor of defense analysis at the Naval Postgraduate School. “Now we’re using information,
and the more you have, the less of the older kind of weapons you need.” Access to data networks has given warfighters a huge advantage in intelligence,
communication, and coordination. But their dependence on those networks also creates vulnerabilities, particularly when engaged with an enemy that has cyber
capabilities of his own. “Our adversaries are probing every possible entry point into the network, looking for that one possible weak spot,” said General William
Shelton, head of the Air Force Space Command, speaking at a CyberFutures Conference in 2012. “If we don’t do this right, these new data links could become one of
those spots.” Achieving “cyber superiority” in a twenty-first-century battle space is analogous to the establishment of air superiority in a traditional bombing
campaign. Before strike missions begin against a set of targets, air commanders want to be sure the enemy’s air defense system has been suppressed. Radar sites,
antiaircraft missile batteries, enemy aircraft, and command-and-control facilities need to be destroyed before other targets are hit. Similarly, when an information-dependent combat operation is planned against an opposing military, the operational commanders may first want to attack the enemy’s computer systems to
defeat his ability to penetrate and disrupt the US military’s information and communication networks. Indeed, operations like this have already been carried out. A
former ground commander in Afghanistan, Marine Lieutenant General Richard Mills, has acknowledged using cyber attacks against his opponent while directing
international forces in southwest Afghanistan in 2010. “I was able to use my cyber operations against my adversary with great impact,” Mills said, in comments
before a military conference in August 2012. “I was able to get inside his nets, infect his command-and-control, and in fact defend myself against his almost
constant incursions to get inside my wire, to affect my operations.” Mills was describing offensive cyber actions. This is cyber war, waged on a relatively small scale
and at the tactical level, but cyber war nonetheless. And, as DARPA’s Plan X reveals, the US military is currently engaged in much larger scale cyber war planning.
DARPA managers want contractors to come up with ideas for mapping the digital battlefield so that commanders could know where and how an enemy has arrayed
his computer networks, much as they are now able to map the location of enemy tanks, ships, and aircraft. Such visualizations would enable cyber war commanders
to identify the computer targets they want to destroy and then assess the “battle damage” afterwards. Plan X would also support the development of new cyber
war architecture. The DARPA managers envision operating systems and platforms with “mission scripts” built in, so that a cyber attack, once initiated, can proceed
on its own in a manner “similar to the auto-pilot function in modern aircraft.” None of this technology exists yet, but neither did the Internet or GPS when DARPA
researchers first dreamed of it. As with those innovations, the
government role is to fund and facilitate, but much of the experimental and
research work would be done in the private sector. A computer worm with a destructive code like the one Stuxnet carried can probably be designed only with state
sponsorship, in a research lab with resources like those at the NSA. But private contractors are in a position to provide many of the tools needed for offensive cyber
activity, including the software
bugs that can be exploited to provide a “back door” into a computer’s
operating system. Ideally, the security flaw or vulnerability that can be exploited for this purpose will be
one of which the network operator is totally unaware. Some hackers specialize in finding these
vulnerabilities, and as the interest in offensive cyber operations has grown, so has the demand for their
services. The world-famous hacker conference known as Defcon attracts a wide and interesting assortment of people each year to Las Vegas: creative but
often antisocial hackers who identify themselves only by their screen names, hackers who have gone legit as computer security experts, law enforcement types,
government spies, and a few curious academics and journalists. One can learn what’s hot in the hacker world just by hanging out there. In August 2012, several
attendees were seated in the Defcon cafe when a heavy-set young man in jeans, a t-shirt, and a scraggly beard strolled casually up and dropped several homemade
calling cards on the table. He then moved to the next table and tossed down a few more, all without saying a word. There was no company logo or brand name on
the card, just this message: “Paying top dollar for 0-day and offensive technologies . . . ” The card identified the buyer as “zer0daybroker” and listed an e-mail
address. A
“zero-day” is the most valuable of computer vulnerabilities, one unknown to anyone but the
researcher who finds it. Hackers prize zero-days because no one knows to have prepared a defense
against them. The growing demand for these tools has given rise to brokers like Zer0day, who identified himself in a subsequent e-mail exchange as “Zer0
Day Haxor” but provided no other identifying information. As a broker, he probably did not intend to hack into a computer network himself but only to act as an
intermediary, connecting sellers who have discovered system vulnerabilities with buyers who want to make use of the tools and are willing to pay a high price for
them. In the past, the main market for these vulnerabilities was software firms themselves who wanted to know about flaws in their products so that they could
write patches to fix them. Big companies like Google and Microsoft employ “penetration testers” whose job it is to find and report vulnerabilities that would allow
someone to hack into their systems. In some cases, such companies have paid a bounty to freelance cyber researchers who discover a vulnerability and alert the
company engineers. But the
rise in offensive cyber operations has transformed the vulnerability market, and
hackers these days are more inclined to sell zero-days to the highest bidder. In most cases, these are
governments. The market for back-door exploits has been boosted in large part by the burgeoning
demand from militaries eager to develop their cyber warfighting capabilities. The designers of the Stuxnet code cleared
a path into Iranian computers through the use of four or five separate zero-day vulnerabilities, an achievement that impressed security researchers around the
world. The next Stuxnet would require the use of additional vulnerabilities. “If
the president asks the US military to launch a cyber
operation in Iran tomorrow, it’s not the time to start looking for exploits,” says Christopher Soghoian,
a Washington-based cybersecurity researcher. “They need to have the exploits ready to go. And you
may not know what kind of computer your target uses until you get there. You need a whole arsenal
[of vulnerabilities] ready to go in order to cover every possible configuration you may meet.” Not
surprisingly, the National Security Agency—buying through defense contractors—may well be the biggest customer in the
vulnerability market, largely because it pays handsomely. The US military’s dominant presence in the
market means that other possible purchasers cannot match the military’s price. “Instead of telling
Google or Mozilla about a flaw and getting a bounty for two thousand dollars, researchers will sell it to a
defense contractor like Raytheon or SAIC and get a hundred thousand for it,” says Soghoian, now the principal
technologist in the Speech, Privacy and Technology Project at the American Civil Liberties Union and a prominent critic of the zero-day market. “Those companies
will then turn around and sell the vulnerability upstream to the NSA or another defense agency. They will outbid Google every time.”
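The Soghoian pricing comparison at the end of the Gjelten card is worth quantifying in the block: the same flaw fetches roughly $2,000 as a vendor bounty but around $100,000 from a defense contractor. Both dollar figures are from the card; the ratio is just derived arithmetic.

# Both dollar figures come from the Soghoian quote in the card; the premium is derived.
vendor_bounty = 2_000        # reporting the flaw to Google or Mozilla
contractor_price = 100_000   # selling to a defense contractor that resells upstream

premium = contractor_price / vendor_bounty
print(f"Gray-market premium over a vendor bounty: {premium:.0f}x")  # 50x

That 50x spread is the mechanism behind the card's claim that contractors "will outbid Google every time," steering zero-days toward government buyers instead of vendors.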
2NC China
Cyber capabilities are key to deterrence and defending against China
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is a former Principal Deputy Director of National Intelligence. He is a Senior Fellow at
RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National Security Policy.
Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University. Martin Libicki received his PhD
in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his BSc in Mathematics from MIT. He is a
Professor at the RAND Graduate School and a Senior Management Scientist at RAND. “Waging Cyber War the American Way,” Survival:
Global Politics and Strategy. August–September 2015. Vol 57., 4th ed, pp 7-28. 07-22-2015.
http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-august-september-2015-c6ba/57-402-gompert-and-libicki-eab1//ghs-kw)
At the same time, the
United States regards cyber war during armed conflict with a cyber-capable enemy as
probable, if not inevitable. It both assumes that the computer systems on which its own forces rely to deploy,
receive support and strike will be attacked, and intends to attack the computer systems that enable
opposing forces to operate as well. Thus, the United States has said that it can and would conduct cyber war to
‘support operational and contingency plans’ – a euphemism for attacking computer systems that enable enemy war fighting. US
military doctrine now regards ‘non-kinetic’ (that is, cyber) measures as an integral aspect of US joint offensive
operations.8 Even so, the stated purposes of the US military regarding cyber war stress protecting the ability of conventional military forces to function as
they should, as well as avoiding and preventing escalation, especially to non-military targets. Apart from its preparedness to conduct counter-military cyber
operations during wartime, the United States has been reticent about using its offensive capabilities. While it has not excluded conducting cyber operations to
coerce hostile states or non-state actors, it has yet to brandish such a threat.9 Broadly
speaking, US policy is to rely on the threat of
retaliation to deter a form of warfare it is keen to avoid. Chinese criticism that the US retaliatory policy and capabilities ‘will up the
ante on the Internet arms race’ is disingenuous in that China has been energetic in forming and using capabilities for cyber operations.10 Notwithstanding the defensive bias in US attitudes toward cyber war, the dual missions of deterrence and preparedness for offensive operations
during an armed conflict warrant maintaining superb, if not superior, offensive capabilities. Moreover, the case can be made – and we have made it – that the
United States should have superiority in offensive capabilities in order to control escalation.11 The
combination of significant capabilities and declared reluctance to wage cyber war raises a question that is not answered by any US official public statements: when
it comes to offence, what are US missions, desired effects, target sets and restraints – in short, what is US policy? To
be clear, we do not take
issue with the basic US stance of being at once wary and capable of cyber war. Nor do we think that the
United States should advertise exactly when and how it would conduct offensive cyber war. However, the very
fact that the United States maintains options for offensive operations implies the need for some articulation of policy. After all, the United States was broadly
averse to the use of nuclear weapons during the Cold War, yet it elaborated a declaratory policy governing such use to inform adversaries, friends and world
opinion, as well as to forge domestic consensus. Indeed, if the United States wants to discourage and limit cyber war internationally, while keeping its options open,
it must offer an example. For that matter, the American people deserve to know what national policy on cyber war is, lest they assume it is purely defensive – or
just too esoteric to comprehend. Whether to set a normative example, warn potential adversaries or foster national consensus, US policy on waging cyber war
should be coherent. At the same time, it must encompass three distinguishable offensive missions: wartime counter-military operations, which the United States
intends to conduct; retaliatory missions, which the US must have the will and ability to conduct for reasons of deterrence; and coercive missions against hostile
states, which could substitute for armed attack.12 Four cases serve to highlight the relevant issues and to inform the elaboration of an overall policy to guide US
conduct of offensive cyber war. The first involves wartime counter-military cyber operations against a cyber-capable opponent, which may also be waging cyber
war; the second involves retaliation against a cyber-capable opponent for attacking US systems other than counter-military ones; the third involves coercion of a
‘cyber-weak’ opponent with little or no means to retaliate against US cyber attack; and the fourth involves coercion of a ‘cyber-strong’ opponent with substantial
means to retaliate against US cyber attack. Of these, the first and fourth imply a willingness to initiate cyber war. Counter-military cyber war during wartime Just as
cyber war is war, armed hostilities will presumably include cyber war if the belligerents are both capable of and vulnerable to it. The reason for such certainty is that
impairing opposing military forces’ use of computer systems is operationally compelling. Forces with requisite technologies and skills benefit enormously from data
communications and computation for command and control, intelligence, surveillance and reconnaissance (ISR), targeting, navigation, weapon guidance, battle
assessment and logistics management, among other key functions. If the performance of forces is dramatically enhanced by such systems, it follows that degrading
them can provide important military advantages. Moreover, allowing an enemy to use cyber war without reciprocating could mean military defeat. Thus, the
United States and other advanced states are acquiring capabilities not only to use and protect computer
systems, but also to disrupt those used by enemies. The intention to wage cyber war is now prevalent in
Chinese planning for war with the United States – and vice versa. Chinese military planners have long
made known their belief that, because computer systems are essential for effective US military
operations, they must be targeted. Chinese cyber capabilities may not (yet) pose a threat to US
command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR)
networks, which are well partitioned and protected. However, the networks that enable logistical
support for US forces are inviting targets. Meant to disable US military operations, Chinese use of cyber war during an armed
conflict would not be contingent on US cyber operations. Indeed, it could come early, first or even as a
precursor of armed hostilities. For its part, the US military is increasingly aware not only that sophisticated
adversaries like China can be expected to use cyber war to degrade the performance of US forces, but
also that US forces must integrate cyber war into their capabilities and operations. Being more dependent on
computer networks to enhance military performance than are its adversaries, including China, US forces have more to lose than to gain from the outbreak of cyber
war during an armed conflict. This being so, would it make sense for the United States to wait and see if the enemy resorts to cyber war before doing so itself?
Given US conventional military superiority, it can be assumed that any adversary that can use cyber war
against US forces will do so. Moreover, waiting for the other side to launch a cyber attack could be disadvantageous insofar as US forces would be
the first to suffer degraded performance. Thus, rather than waiting, there will be pressure for the United States to commence cyber attacks early, and perhaps first.
Moreover, leading US military officers have strongly implied that cyber war would have a role in attacking enemy anti-access and area-denial (A2AD) capabilities
irrespective of the enemy’s use of cyber war.13 If the United States is prepared to conduct offensive cyber operations against a highly advanced opponent such as
China, it stands to reason that it would do likewise against lesser opponents. In sum, offensive cyber war is becoming part and parcel of the US war-fighting
doctrine. The
nature of US counter-military cyber attacks during wartime should derive from the mission of
gaining, or denying the opponent, operational advantage. Primary targets of the United States should mirror those of a cyber-capable adversary: ISR, command and control, navigation and guidance, transport and logistics support. Because this mission is not coercive or strategic in nature,
economic and other civilian networks should not be targeted. However, to the extent that networks that enable military operations may be multipurpose,
avoidance of non-military harm cannot be assured. There are no sharp ‘firebreaks’ in cyber war.14
China would initiate preemptive cyber strikes on the US
Freedberg 13
(Freedberg, Sydney J. Sydney J. Freedberg Jr. is the deputy editor for Breaking Defense. He graduated summa cum laude from Harvard with
an AB in History and holds an MA in Security Studies from Georgetown University and a MPhil in European Studies from Cambridge
University. During his 13 years at National Journal magazine, he wrote his first story about what became known as "homeland security" in
1998, his first story about "military transformation" in 1999, and his first story on "asymmetrical warfare" in 2000. Since 2004 he has
conducted in-depth interviews with more than 200 veterans of Afghanistan and Iraq about their experiences, insights, and lessons-learned,
writing stories that won awards from the association of Military Reporters & Editors in 2008 and 2009, as well as an honorable mention in
2010. "China’s Fear Of US May Tempt Them To Preempt: Sinologists," Breaking Defense. 10-1-2013.
http://breakingdefense.com/2013/10/chinas-fear-of-us-may-tempt-them-to-preempt-sinologists/2///ghs-kw)
WASHINGTON: Because
China believes it is much weaker than the United States, they are more likely to
launch a massive preemptive strike in a crisis. Here’s the other bad news: The current US concept for high-tech
warfare, known as Air-Sea Battle, might escalate the conflict even further towards a “limited” nuclear war, says one of the top
American experts on the Chinese military. [This is one in an occasional series on the crucial strategic relationship and the military capabilities of
the US, its allies and China.] What US analysts call an “anti-access/area denial” strategy is what China calls “counter-intervention” and “active
defense,” and the
Chinese approach is born of a deep sense of vulnerability that dates back 200 years, China
analyst Larry Wortzel said at the Institute of World Politics: “The People’s Liberation Army still sees themselves as an
inferior force to the American military, and that’s who they think their most likely enemy is.” That’s fine as
long as it deters China from attacking its neighbors. But if deterrence fails, the Chinese are likely to go big or go home.
Chinese military history from the Korean War in 1950 to the Chinese invasion of Vietnam in 1979 to
more recent, albeit vigorous but non-violent, grabs for the disputed Scarborough Shoal suggests a
preference for a sudden use of overwhelming force at a crucial point, what Clausewitz would call the enemy’s “center
of gravity.” “What they do is very heavily built on preemption,” Wortzel said. “The problem with the striking the enemy’s
center of gravity is, for the United States, they see it as being in Japan, Hawaii, and the West
Coast….That’s very escalatory.” (Students of the American military will nod sagely, of course, as we remind everyone that President
George Bush made preemption a centerpiece of American strategy after the terror attacks of 2001.) Wortzel argued that the current version of
US Air-Sea Battle concept is also likely to lead to escalation. “China’s dependent on these ballistic missiles and anti-ship missiles and satellite
links,” he said. Since those are almost all land-based, any attack on them “involves striking the Chinese mainland, which is pretty escalatory.”
“You don’t know how they’re going to react,” he said. “They do have nuclear missiles. They actually think we’re more allergic to nuclear missiles
landing on our soil than they are on their soil. They think they can withstand a limited nuclear attack, or even a big nuclear attack, and
retaliate.” What War Would Look Like So
how would China’s preemptive attack unfold? First would come weeks of
escalating rhetoric and cyberattacks. There’s no evidence the Chinese favor a “bolt out of the blue”
without giving the adversary what they believe is a chance to back down, agreed retired Rear Adm. Michael
McDevitt and Dennis Blasko, former Army defense attache in Beijing, speaking on a recent Wilson Center panel on Chinese strategy where they
agreed on almost nothing else. That’s not much comfort, though, considering that Imperial Japan showed clear signs they might attack and still
caught the US flat-footed at Pearl Harbor. When
the blow does fall, the experts believe it would be sudden. Stuxnet-style viruses, electronic jamming, and Israeli-designed Harpy radar-seeking cruise missiles (similar to the American HARM but slower and
longer-ranged) would try to blind every land-based and shipborne radar. Long-range anti-aircraft missiles like the
Russian-built S-300 would go for every plane currently in the air within 125 miles of China’s coast, a radius that covers all of Taiwan and some of
Japan. Salvos of ballistic missiles would strike every airfield within 1,250 miles. That’s enough range to hit the four US airbases in Japan and
South Korea – which are, after all, static targets you can look up on Google Maps – to destroy aircraft on the ground, crater the runways, and
scatter the airfield with unexploded cluster bomblets to defeat repair attempts. Long-range cruise missiles launched from shore, ships, and
submarines then go after naval vessels. And if the Chinese get really good and really lucky, they just might get a solid enough fix on a US Navy
aircraft carrier to lob a precision-guided ballistic missile at it. But would this work? Maybe. “This is fundamentally terra incognita,” Heritage
Foundation research fellow Dean Cheng told me. There has been no direct conventional clash between major powers since Korea in the 1950s,
no large-scale use of anti-ship missiles since the Falklands in 1982, and no war ever where both sides possessed today’s space, cyber, electronic
warfare, and precision-guided missile capabilities. Perhaps the least obvious but most critical uncertainty in a Pacific war would be invisible. “I
don’t think we’ve seen electronic warfare on a scale that we’d see in a US-China confrontation,” said
Cheng. “I doubt very much they are behind us when it comes to electronic warfare, [and] the Chinese
are training every day on cyber: all those pings, all those attacks, all those attempts to penetrate.” While
the US has invested heavily in jamming and spoofing over the last decade, much of the focus has been on how to disable insurgents’ roadside
bombs, not on how to counter a high-tech nation-state. China, however, has
focused its electronic warfare and cyber
attack efforts on the United States. Conceptually, China may well be ahead of us in linking the two. (F-35
supporters may well disagree with this conclusion.) Traditional radar jammers, for example, can also be used to insert viruses into the highly
computerized AESA radars (active electronically scanned array) that are increasingly common in the US military. “Where
there has
been a fundamental difference, and perhaps the Chinese are better than we are at this, is the Chinese
seem to have kept cyber and electronic warfare as a single integrated thing,” Cheng said. “We are only now
coming round to the idea that electronic warfare is linked to computer network operations.” In a battle for the electromagnetic spectrum,
Cheng said, the worst case “is that
you thought your jammers, your sensors, everything was working great,
and the next thing you know missiles are penetrating [your defenses], planes are being shot out of the
sky.”
China/Taiwan war goes nuclear
Glaser 11
(Charles, Professor of Political Science and International Affairs at the Elliott School of International Affairs at George Washington University,
Director of the Institute for Security and Conflict Studies, “Will China’s Rise lead to War? ,” Foreign Affairs March/April 2011,
http://web.clas.ufl.edu/users/zselden/coursereading2011/Glaser.pdf)
THE PROSPECTS for avoiding intense military competition and war may be good, ¶ but growth in China's power may nevertheless require some
changes in U.S. ¶ foreign policy that Washington will find disagreeable--particularly regarding ¶ Taiwan. Although it lost control of Taiwan during
the Chinese Civil War more ¶ than six decades ago, China still considers
Taiwan to be part of its homeland, ¶ and unification remains a key political goal for Beijing. China has made clear that ¶ it will use force if Taiwan declares independence, and much of China's ¶ conventional military buildup has been dedicated to increasing its ability to ¶ coerce Taiwan and reducing the United States' ability to intervene. Because ¶ China places such high value on Taiwan and because the United States and ¶ China--whatever they might formally agree to--have such different attitudes ¶ regarding the legitimacy of the status quo, the issue poses special dangers and ¶ challenges for the U.S.-Chinese relationship, placing it in a different category ¶ than Japan or South Korea. ¶ A
crisis over Taiwan could fairly easily
escalate to nuclear war, because each ¶ step along the way might well seem rational to the actors
involved. Current U.S. ¶ policy is designed to reduce the probability that Taiwan will declare ¶ independence and to make clear that the
United States will not come to Taiwan's ¶ aid if it does. Nevertheless, the United States would find itself under pressure to ¶
protect Taiwan against any sort of attack, no matter how it originated. Given the¶ different interests and perceptions of
the various parties and the limited control ¶ Washington has over Taipei's behavior, a crisis could unfold in which the United ¶ States found
itself following events rather than leading them. ¶ Such dangers have been around for decades, but ongoing
improvements in ¶
China's military capabilities may make Beijing more willing to escalate a Taiwan ¶ crisis. In addition to its
improved conventional capabilities, China is modernizing ¶ its nuclear forces to increase their ability to survive and
retaliate following a ¶ large-scale U.S. attack. Standard deterrence theory holds that Washington's ¶ current ability to destroy
most or all of China's nuclear force enhances its ¶ bargaining position. China's nuclear modernization might remove that
check on ¶ Chinese action, leading Beijing to behave more boldly in future crises than it has ¶ in past ones. A
U.S. attempt to preserve its ability to defend Taiwan, meanwhile, ¶ could fuel a conventional and nuclear
arms race. Enhancements to U.S. offensive ¶ targeting capabilities and strategic ballistic missile defenses might be interpreted ¶ by China as
a signal of malign U.S. motives, leading to further Chinese military ¶ efforts and a general poisoning of U.S.-Chinese relations.
2NC Cyber-Deterrence
Cyber-offensive strengths are key to cyber-deterrence and minimizing damage
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior Fellow at
RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National Security Policy.
Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University. Martin Libicki received his PhD
in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his BSc in Mathematics from MIT. He is a
Professor at the RAND Graduate School and a Senior Management Scientist at RAND. “Waging Cyber War the American Way,” Survival:
Global Politics and Strategy. August–September 2015. Vol 57., 4th ed, pp 7-28. 07-22-2015.
http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-august-september-2015-c6ba/57-402-gompert-and-libicki-eab1//ghs-kw)
Even with effective C2, there is
a danger that US counter-military cyber operations will infect and damage
systems other than those targeted, including civilian systems, because of the technical difficulties of
controlling effects, especially for systems that support multiple services. As we have previously noted in these pages,
‘an attack that uses a replicable agent, such as a virus or worm, has substantial potential to spread, perhaps uncontrollably’.19 The dangers of
collateral damage on non-combatants imply not only the possibility of violating the laws of war (as they might apply to cyber war), but also of
provoking escalation. While the United States would like there to be strong technical and C2 safeguards against unwanted effects and thus
escalation, it is not clear that there are. It follows that US
doctrine concerning the conduct of wartime counter-military
offensive operations must account for these risks. This presents a dilemma, for dedicated military systems tend
to be harder to access and disrupt than multipurpose or civilian ones. China’s military, for example, is
known for its attention to communications security, aided by its reliance on short-range and land-based
(for example, fibre-optical) transmission of C4ISR. Yet, to attack less secure multipurpose systems on
which the Chinese military depends for logistics is to risk collateral damage and heighten the risk of
escalation. Faced with this dilemma, US policy should be to exercise care in attacking military networks that also support
civilian services. The better its offensive cyber-war capabilities, the more able the United States will be to
disrupt critical enemy military systems and avoid indiscriminate effects. Moreover, US offensive
strength could deter enemy escalation. As we have argued before, US superiority in counter-military cyber war
would have the dual advantage of delivering operational benefits by degrading enemy forces and
averting a more expansive cyber war than intended. While the United States should avoid the spread of
cyber war beyond military systems, it should develop and maintain an unmatched capability to conduct
counter-military cyber war. This would give it operational advantages and escalation dominance. Such
capabilities might enable the United States to disrupt enemy C4ISR systems used for the control and
operation of nuclear forces. However, to attack such systems would risk causing the enemy to perceive that the United States was
either engaged in a non-nuclear-disarming first strike or preparing for a nuclear-disarming first strike. Avoiding such a misperception requires
the avoidance of such systems, even if they also support enemy non-nuclear C4ISR (as China’s may do). In sum, US
policy should be to
create, maintain and be ready to use superior cyber-war capabilities for counter-military operations
during armed conflict. Such an approach would deny even the most capable of adversaries, China
included, an advantage by resorting to cyber war in an armed conflict. The paramount goal of the United
States should be to retain its military advantage in the age of cyber war – a tall order, but a crucial one
for US interests.
2NC Russia
Deterrence solves cyber-war and Russian aggression
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior Fellow at
RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National Security Policy.
Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University. Martin Libicki received his PhD
in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his BSc in Mathematics from MIT. He is a
Professor at the RAND Graduate School and a Senior Management Scientist at RAND. “Waging Cyber War the American Way,” Survival:
Global Politics and Strategy. August–September 2015. Vol 57., 4th ed, pp 7-28. 07-22-2015.
http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-august-september-2015-c6ba/57-402-gompert-and-libicki-eab1//ghs-kw)
Retaliation
While
the United States should be ready to conduct cyber attacks against military forces in an
armed conflict, it should in general otherwise try to avoid and prevent cyber war. (Possible exceptions to this
posture of avoidance are taken up later in the cases concerning coercion.) In keeping with its commitment to an ‘open, secure, interoperable
and reliable internet that enables prosperity, public safety, and the free flow of commerce and ideas’, the
United States should seek
to minimise the danger of unrestricted cyber war, in which critical economic, governmental and societal
systems and services are disrupted.20 Given how difficult it is to protect such systems, the United States must rely to
a heavy extent on deterrence and thus the threat of retaliation. To this end, the US Defense Department has stated
that a would-be attacker could ‘suffer unacceptable costs’ if it launches a cyber attack on the United
States.21 While such a warning is worth issuing, it raises the question of how these ‘unacceptable costs’ could be defined and levied. Short of
disclosing specific targets and methods, which we do not advocate, the United States could strengthen both the deterrence it seeks and the
norms it favours by indicating what actions might constitute retaliation. This is especially important because the most vulnerable targets of
cyber retaliation are computer networks that serve civilian life, starting with the internet. By definition, cyber retaliation
that
extends beyond military capabilities, as required for strong deterrence, might be considered
indiscriminate. Whether it is also disproportionate depends in part on the enemy attack that precipitated it. We can posit, for purposes of
analysis, that an enemy attack would be aimed at causing severe disruptions of such economic and societal functions as financial services,
power-grid management, transport systems, telecommunications services, media and government services, along with the expected military
and intelligence functions. In considering how the United States should retaliate, the distinction between the population and the state of the
attacker is useful. The United States would hold the latter, not the former, culpable, and thus the rightful object of retaliation. This would
suggest targeting propaganda and other societal-control systems; government financial systems; state access to banks; political and economic
elites on which the state depends; industries on which the state depends, especially state-owned enterprises; and internal security forces and
functions. To judge how effective such a retaliation strategy could be, consider
the case of Russia. The Russian state is both
sprawling and centralised: within Russia’s economy and society, it is pervasive, heavy-handed and
exploitative; power is concentrated in the Kremlin; and elites of all sorts are beholden to it. Although the
Russian state is well entrenched and not vulnerable to being overthrown, it is porous and exposed,
especially in cyberspace. Even if the computer systems of the innermost circle of Russian state decisionmaking may be inaccessible, there are many important systems that are not. Insofar as those who control the
Russian state are more concerned about their own well-being than that of the ‘masses’, targeting their apparatus would cause acute
apprehension. Of course, the more important a computer system is to the state, the less accessible it is likely to be. Still, even
if Russia
were to launch indiscriminate cyber attacks on the US economy and society, the United States might get
more bang for its bytes by retaliating against systems that support Russian state power. Of course, US
cyber targeting could also include the systems on which Russian leaders rely to direct military and other
security forces, which are the ultimate means of state power and control. Likewise, Russian military and intelligence
systems would be fair game for retaliation. At the same time, it would be vital to observe the stricture against disabling nuclear
C2 systems, lest the Kremlin perceive that a US strategic strike of some sort was in the works. With this exception, the Russian state’s
cyber vulnerabilities should be exploited as much as possible. The United States could thus not only
meet the standard of ‘unacceptable costs’ on which deterrence depends, but also gain escalation
control by giving Russia’s leaders a sense of their vulnerability. In addition to preventing further escalation, this US
targeting strategy would meet, more or less, normative standards of discrimination and proportionality.
And the cyberthreat is real – Multiple Countries and Terrorists are acquiring capabilities – increases the risk of nuclear war and collapsing agriculture and the power grid
Habiger, 2k10
(Eugene – Retired Air Force General, Cyberwarfare and Cyberterrorism, The Cyber Security Institute, p. 11-19)
However, there are reasons to believe that what is going on now amounts to a fundamental shift as opposed to business as usual. Today’s network exploitation or information operation trespasses possess a number of
characteristics that suggest that the line between espionage and conflict has been, or is close to being, crossed. (What that suggests for the proper response is a different matter.) First, the number of cyberattacks we are facing is growing significantly. Andrew Palowitch, a former CIA official now consulting with the US Strategic Command (STRATCOM), which oversees the Defense Department’s Joint Task Force-Global Network Operations, recently told a meeting of experts that the Defense Department has experienced almost 80,000 computer attacks, and some number of these assaults have actually “reduced” the military’s “operational capabilities.”20 Second, the nature of these attacks is starting to shift from penetration attempts aimed at gathering intelligence (cyber spying) to offensive efforts aimed at taking down systems (cyberattacks). Palowitch put this in stark terms last November, “We are currently in a cyberwar and war is going on today.”21 Third, these recent attacks need to be taken in a broader strategic context. Both Russia and China have stepped up their offensive efforts and taken a much more aggressive cyberwarfare posture. The Chinese have developed an openly discussed cyberwar strategy aimed at achieving electronic dominance over the U.S. and its allies by 2050. In 2007 the Department of Defense reported that for the first time China has developed first strike viruses, marking a major shift from prior investments in defensive measures.22 And in the intervening period China has launched a series of offensive cyber operations against U.S. government and private sector networks and infrastructure. In 2007, Gen. James Cartwright, the former head of STRATCOM and now the Vice Chairman of the Joint Chiefs of Staff, told the US-China Economic and Security Review Commission that China’s
ability to launch “denial of service” attacks to overwhelm an IT system is of particular concern. 23
Russia also has already begun to wage offensive
cyberwar. At the outset of the recent hostilities with Georgia, Russian assets launched a series of cyberattacks against the Georgian government and its critical infrastructure systems, including media, banking
and transportation sites.24 In 2007, cyberattacks that many experts attribute, directly or indirectly, to Russia shut down the Estonia government’s IT systems. Fourth, the current geopolitical context must also be factored
into any effort to gauge the degree of threat of cyberwar. The start of the new Obama Administration has begun to help reduce tensions between the United States and other nations. And, the new administration has
taken initial steps to improve bilateral relations specifically with both China and Russia. However, it must be said that over the last few years the posture of both the Chinese and Russian governments toward America has
clearly become more assertive, and at times even aggressive. Some commentators have talked about the prospects of a cyber Pearl Harbor, and the pattern of Chinese and Russian behavior to date gives reason for concern along these lines: both nations have offensive cyberwarfare strategies in place; both nations have taken the cyber equivalent of building up their forces; both nations now regularly probe our cyber defenses looking for gaps to be exploited; both nations have begun taking actions that cross the line from cyberespionage to cyberaggression; and, our bilateral relations with both nations are increasingly fractious and complicated by areas of marked, direct competition. Clearly, there are sharp differences between current U.S. relations with these two nations and relations between the US and Japan just prior to World War II. However, from a strategic defense perspective, there are enough warning signs to warrant preparation. In addition to the threat of cyberwar, the limited resources required to carry out even a large scale cyberattack also makes likely the potential for a significant cyberterror attack against the United States. However, the lack of a long list of specific incidences of cyberterrorism should provide no comfort. There is strong evidence to suggest that al Qaeda has the ability to conduct cyberterror attacks against the United States and its allies. Al Qaeda and other terrorist organizations are extremely active in cyberspace, using these technologies to communicate among themselves and others, carry out logistics, recruit members, and wage information
warfare. For example, al Qaeda leaders used email to communicate with the 9-11 terrorists and the 9-11 terrorists used the Internet to make travel plans and book flights. Osama bin Laden and other al Qaeda members routinely post videos and other messages to online sites to communicate. Moreover, there is evidence of efforts that al Qaeda and other terrorist organizations are actively developing cyberterrorism capabilities and seeking to carry out cyberterrorist attacks. For example, the Washington Post has reported that “U.S. investigators have found evidence in the logs that mark a browser's path through the Internet that al Qaeda operators spent time on sites that offer
software and programming instructions for the digital switches that run power, water, transport and communications grids. In some interrogations . . . al Qaeda prisoners have described intentions, in general terms, to use
those tools.”25 Similarly, a 2002 CIA report on the cyberterror threat to a member of the Senate stated that al Qaeda and Hezbollah have become "more adept at using the internet and computer technologies.”26 The FBI
has issued bulletins stating that, “U. S. law enforcement and intelligence agencies have received indications that Al Qaeda members have sought information on Supervisory Control And Data Acquisition (SCADA) systems
available on multiple SCADA-related web sites.”27 In addition a number of jihadist websites, such as 7hj.7hj.com, teach computer attack and hacking skills in the service of Islam.28 While al Qaeda may lack the cyber-attack
capability of nations like Russia and China, there is every reason to believe its operatives, and those of its ilk, are as capable as the cyber criminals and hackers who routinely effect great harm on the world’s digital
infrastructure generally and American assets specifically. In fact, perhaps, the most troubling indication of the level of the cyberterrorist threat is the countless, serious non­terrorist cyberattacks routinely carried out by
criminals, hackers, disgruntled insiders, crime syndicates and the like. If run-of-the-mill criminals and hackers can threaten power grids, hack vital military networks, steal vast sums of money, take down a city’s traffic
lights, compromise the Federal Aviation Administration’s air traffic control systems, among other attacks, it is overwhelmingly likely that terrorists can carry out similar, if not more malicious attacks. Moreover, even if the
world’s terrorists are unable to breed these skills, they can certainly buy them. There are untold numbers of cybermercenaries around the world—sophisticated hackers with advanced training who would be willing to offer
their services for the right price. Finally, given the nature of our understanding of cyber threats, there is always the possibility that we have already been the victim of a cyberterrorist attack, or such an attack has already been set but not yet effectuated, and we don’t know it yet. Instead, a well-designed cyberattack has the capacity to cause widespread chaos, sow societal unrest, undermine national governments, spread paralyzing fear and anxiety, and create a state of utter turmoil, all without taking a single life. A sophisticated cyberattack could throw a nation’s banking and finance system into chaos causing markets to crash, prompting runs on banks, degrading confidence in markets, perhaps even putting the nation’s currency in play and making the government look helpless and hapless. In today’s difficult economy, imagine how Americans would react if vast sums of money were taken from their accounts and their supporting financial records were destroyed. A truly nefarious cyberattacker could carry out an attack in such a way (akin to Robin Hood) as to engender populist support and deepen rifts within our society, thereby making efforts to restore the system all the more difficult. A modestly advanced enemy could use a cyberattack to shut down (if not physically damage) one or more regional power grids. An entire region could be cast into total darkness, power-dependent systems could be shutdown. An attack on one or more regional power grids could also cause cascading effects that could jeopardize our entire national grid. When word leaks that the blackout was caused by a cyberattack, the specter of a foreign enemy capable of sending the entire nation into darkness would only increase the fear, turmoil and unrest. While the finance and energy sectors are considered prime targets for a cyberattack, an attack on any of the 17 delineated critical infrastructure sectors could have a major impact on the United States. For example, our healthcare system is already technologically driven and the Obama Administration’s e-health efforts will only increase that dependency. A cyberattack on the U.S. e-health infrastructure could send our healthcare system into chaos and put countless lives at risk. Imagine if emergency room physicians and surgeons were suddenly no longer able to access vital patient information. A cyberattack on our nation’s water systems could likewise cause widespread disruption. An attack on the control systems for one or more dams could put entire communities at risk of being inundated, and could create ripple effects across the water, agriculture, and energy sectors. Similar water control system attacks could be used to at least temporarily deny water to otherwise arid regions, impacting everything from the quality of life in these areas to agriculture. In 2007, the U.S. Cyber Consequences Unit determined that the destruction from a single wave
of cyberattacks on critical infrastructures could exceed $700 billion, which would be the rough equivalent of 50 Katrina-esque hurricanes hitting the United States all at the same time.29 Similarly, one IT security source
has estimated that the impact of a single day cyberwar attack that focused on and disrupted U.S. credit and debit card transactions would be approximately $35 billion.30 Another way to gauge the potential for harm is in
comparison to other similar noncyberattack infrastructure failures. For example, the August 2003 regional power grid blackout is estimated to have cost the U.S. economy up to $10 billion, or roughly .1 percent of the
nation’s GDP. 31 That said, a cyberattack of the exact same magnitude would most certainly have a much larger impact. The origin of the 2003 blackout was almost immediately disclosed as an atypical system failure having
nothing to do with terrorism. This made the event both less threatening and likely a single time occurrence. Had it been disclosed that the event was the result of an attack that could readily be repeated the impacts would
likely have grown substantially, if not exponentially. Additionally, a cyberattack could also be used to disrupt our nation’s defenses or distract our national leaders in advance of a more traditional conventional or strategic
attack. Many military leaders actually believe that such a disruptive cyber pre-offensive is the most effective use of offensive cyber capabilities. This is, in fact, the way Russia utilized cyberattackers—whether government assets, government-directed/coordinated assets, or allied cyber irregulars—in advance of the invasion of Georgia. Widespread distributed denial of service (DDOS) attacks were launched on the Georgian government’s IT
systems. Roughly a day later Russian armor rolled into Georgian territory. The cyberattacks were used to prepare the battlefield; they denied the Georgian government a critical communications tool isolating it from its
citizens and degrading its command and control capabilities precisely at the time of attack. In this way, these attacks were the functional equivalent of conventional air and/or missile strikes on a nation’s communications
infrastructure.32 One interesting element of the Georgian cyberattacks has been generally overlooked: On July 20th, weeks before the August cyberattack, the website of Georgian President Mikheil Saakashvili was
overwhelmed by a more narrowly focused, but technologically similar DDOS attack.33 This should be particularly chilling to American national security experts as our systems undergo the same sorts of focused, probing
attacks on a constant basis. The ability of an enemy to use a cyberattack to counter our offensive capabilities or soften our defenses for a wider offensive against the United States is much more than mere speculation. In
fact, in Iraq it is already happening. Iraq insurgents are now using off-the-shelf software (costing just $26) to hack U.S. drones (costing $4.5 million each), allowing them to intercept the video feed from these drones.34 By hacking these drones the insurgents have succeeded in greatly reducing one of our most valuable sources of real-time intelligence and situational awareness. If our enemies in Iraq are capable of such an effective cyberattack against one of our more sophisticated systems, consider what a more technologically advanced enemy could do. At the strategic level, in 2008, as the United States Central Command was leading wars in both Iraq and Afghanistan, a cyber intruder compromised the security of the Command and sat within its IT systems, monitoring everything the Command was doing. 35 This time the attacker simply gathered vast amounts of intelligence. However, it is clear that the attacker could have used this access to wage cyberwar—altering information, disrupting the flow of information, destroying information, taking down systems—against the United States forces already at war. Similarly, during 2003 as the United States prepared for and began the War in Iraq, the IT networks of the Department of Defense were hacked 294 times.36 By August of 2004, with America at war, these ongoing attacks compelled then-Deputy Secretary of Defense Paul Wolfowitz to write in a memo that, "Recent exploits have reduced operational capabilities on our networks."37 This wasn't the first time that our national security IT infrastructure was penetrated immediately in advance of a U.S. military option.38 In February of 1998 the Solar Sunrise attacks systematically compromised a series of Department of Defense networks. What is often overlooked is that these attacks occurred during the ramp up period
ahead of potential military action against Iraq. The attackers were able to obtain vast amounts of sensitive information—information that would have certainly been of value to an enemy’s military leaders. There is no way
to prove that these actions were purposefully launched with the specific intent to distract American military assets or degrade our capabilities. However, such ambiguities—the inability to specifically attribute actions and
motives to actors—are the very nature of cyberspace. Perhaps, these repeated patterns of behavior were mere coincidence, or perhaps they weren’t. The potential that an enemy might use a cyberattack to soften physical
defenses, increase the gravity of harms from kinetic attacks, or both, significantly increases the potential harms from a cyberattack. Consider the gravity of the threat and risk if an enemy, rightly or wrongly, believed that
it could use a cyberattack to degrade our strategic weapons capabilities. Such an enemy might be convinced that it could win a war—conventional or even nuclear—against the United States. The effect of this would be to undermine our deterrence-based defenses, making us significantly more at risk of a major war.
And we control probability and magnitude- it causes extinction
Bostrom, 2k2
(Nick Bostrom, Ph.D. and Professor of Philosophy at Oxford University, March 2002, Journal of Evolution and
Technology, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards)
A much greater existential risk emerged with the build-up of nuclear arsenals in the US and
the USSR. An all-out nuclear war was a possibility with both a substantial probability and
with consequences that might have been persistent enough to qualify as global and terminal. There was a real worry among those best acquainted
with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or permanently destroy human civilization. Russia and the US retain large nuclear arsenals that could be used in a future confrontation, either accidentally or deliberately. There is also a risk that other states may one day build up large nuclear arsenals. Note however that a smaller nuclear exchange, between India and Pakistan for instance, is not an existential risk, since it would not destroy or thwart humankind’s potential permanently. Such a war might however be a local terminal risk for the cities most likely to be targeted. Unfortunately, we shall see that nuclear Armageddon and comet or asteroid strikes are mere preludes to the existential risks that we will encounter in the 21st century.
2NC T/ Case
Cyber-deterrence turns terrorism, war, prolif, and human rights
Gompert and Libicki 7/22
(Gompert, David C. and Libicki, Martin. David C. Gompert is the Principal Deputy Director of National Intelligence. He is a Senior Fellow at
RAND and a Distinguished Visiting Professor at the National Defense University's Center for Technology and National Security Policy.
Gompert received his BA in Engineering from the US Naval Academy and his MPA from Princeton University. Martin Libicki received his PhD
in Economics from UC Berkeley, his MA in City and Regional Planning from UC Berkeley, and his BSc in Mathematics from MIT. He is a
Professor at the RAND Graduate School and a Senior Management Scientist at RAND. “Waging Cyber War the American Way,” Survival:
Global Politics and Strategy. August–September 2015. Vol 57., 4th ed, pp 7-28. 07-22-2015.
http://www.iiss.org/en/publications/survival/sections/2015-1e95/survival--global-politics-and-strategy-august-september-2015-c6ba/57-402-gompert-and-libicki-eab1//ghs-kw)
Given that retaliation and counter-military cyber war require copious offensive capabilities, questions
arise about whether these means could and should also be used to coerce hostile states into complying
with US demands without requiring the use of armed force. Examples include pressuring a state to cease
international aggression, intimidating behaviour or support for terrorists; or to abandon acquisition of
weapons of mass destruction; or to end domestic human-rights violations. If, as some argue, it is getting
harder, costlier and riskier for the United States to use conventional military force for such ends,
threatening or conducting cyber war may seem to be an attractive alternative.25 Of course, equating cyber war
with war suggests that conducting or threatening it to impose America’s will is an idea not to be treated lightly. Whereas counter-military cyber
war presupposes a state of armed conflict, and retaliation presupposes that the United States has suffered a cyber attack, coercion (as
meant here) presupposes
neither a state of armed conflict nor an enemy attack. This means, in essence, the
United States would threaten to start a cyber war outside of an armed conflict – something US policy has yet to
address. While the United States has intimated that it would conduct cyber war during an armed conflict and would retaliate if deterrence
failed, it is silent about using or threatening cyber war as an instrument of coercion. Such reticence fits with the general US aversion to this
form of warfare, as well as a possible preference to carry out cyber attacks without attribution or admission. Notwithstanding US reticence,
the use of cyber war for coercion can be more attractive than the use of conventional force: it can be
conducted without regard to geography, without threatening death and physical destruction, and with
no risk of American casualties. While the United States has other non-military options, such as economic
sanctions and supporting regime opponents, none is a substitute for cyber war. Moreover, in the case of an adversary
with little or no ability to return fire in cyberspace, the United States might have an even greater
asymmetric advantage than it does with its conventional military capabilities.
China Tech DA
CX Questions
Customers are shifting to foreign products now – why does the plan reverse that
trend?
1NC
NSA spying shifts tech dominance to China but it’s fragile—reversing the trend now
kills China
Li and McElveen 13
(Cheng Li; Ryan Mcelveen. Cheng Li received a M.A. in Asian studies from the University of California, Berkeley and a Ph.D. in political
science from Princeton University. He is director of the John L. Thornton China Center and a senior fellow in the Foreign Policy program at
Brookings. He is also a director of the National Committee on U.S.-China Relations. Li focuses on the transformation of political leaders,
generational change and technological development in China. "NSA Revelations Have Irreparably Hurt U.S. Corporations in China,"
Brookings Institution. 12-12-2013. http://www.brookings.edu/research/opinions/2013/12/12-nsa-revelations-hurt-corporations-china-limcelveen//ghs-kw)
For the Obama administration, Snowden’s timing could not have been worse. The
first story about the NSA appeared in The
Guardian on June 5. When Obama and Xi met in California two days later, the United States had lost all
credibility on the cyber security issue. Instead of providing Obama with the perfect opportunity to confront China about its years of intellectual
property theft from U.S. firms, the Sunnylands meeting forced Obama to resort to a defensive posture. Reflecting on how the tables had turned, the media reported
that President Xi chose to stay off-site at a nearby Hyatt hotel out of fear of eavesdropping. After the Sunnylands summit, the Chinese government turned to official media to launch a public campaign against U.S. technology firms operating in China through its “de-Cisco” (qu Sike hua) movement. By targeting Cisco, the U.S. networking company that had helped many local Chinese governments develop and improve their IT infrastructures beginning in the mid-1990s, the Chinese government struck at the very core of U.S.-China technological and economic collaboration. The movement began with the publication of an issue of China Economic Weekly titled “He’s Watching You” that singled out eight U.S. firms as “guardian warriors” who had infiltrated the Chinese market: Apple, Cisco, Google, IBM, Intel, Microsoft, Oracle and Qualcomm. Cisco, however, was designated as the “most horrible” of these warriors because of its pervasive reach into China’s financial and governmental sectors. For these U.S. technology firms, China is a vital source of business that represents a fast-growing slice of the global technology market. After the Chinese official media began disparaging the “guardian warriors” in June, the sales of those companies have fallen precipitously. With the release of its third quarter earnings in November, Cisco reported that orders from China fell 18 percent from the same period a year earlier and projected that overall revenue would fall 8 to 10 percent as a result, according to Reuters. IBM reported that its revenue from the Chinese market fell 22 percent, which resulted in a 4 percent drop in overall profit. Similarly, Microsoft has said that China had become its weakest market. However, smaller U.S. technology firms working in China have not seen the same slowdown in business. Juniper Networks, a networking rival to Cisco, and EMC Corp, a
storage system maker, both saw increased business in the third quarter. As the
Chinese continue to shun the “guardian warriors,” they may turn to
similar but smaller U.S. firms until domestic Chinese firms are ready to assume their role. In the meantime, trying to completely “de-Cisco” would be too costly for China, as Cisco’s network infrastructure has become too deeply embedded around the country. Chinese
technology firms have greatly benefited in the aftermath of the Snowden revelations. For example, the share price
of China National Software has increased 250 percent since June. In addition, the Chinese government continues to push for faster
development of its technology industry, in which it has invested since the early 1990s, by funding the development of
supercomputers and satellite navigation systems. Still, China’s current investment in cyber security cannot compare with that of the
United States. The U.S. government spends $6.5 billion annually on cyber security, whereas China spends $400 million, according to NetentSec CEO Yuan Shengang.
But that will not be the case for long. The
Chinese government’s investment in both cyber espionage and cyber
security will continue to increase, and that investment will overwhelmingly benefit Chinese technology
corporations. China’s reliance on the eight American “guardian warrior” corporations will diminish as
its domestic firms develop commensurate capabilities. Bolstering China’s cyber capabilities may emerge as one of the goals of
China’s National Security Committee, which was formed after the Third Plenary Meeting of the 18th Party Congress in November. Modeled on the U.S. National
Security Council and led by President Xi Jinping, the committee was established to centralize coordination and quicken response time, although it is not yet clear
how much of its efforts will be focused domestically or internationally. The Third Plenum also brought further reform and opening of China’s economy, including
encouraging more competition in the private sector. The Chinese leadership continues to solicit foreign investment, as evidenced by the newly established
Shanghai Free Trade Zone. However, there
is no doubt that investments by foreign technology companies are less
welcome than investments from other sectors because of the Snowden revelations.
The AFF reclaims US tech leadership from China
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and Innovation
Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and technology policies to
create economic opportunities and improve quality of life in the United States and around the world. Founded in 2006, ITIF is a 501(c) 3
nonprofit, non-partisan organization that documents the beneficial role technology plays in our lives and provides pragmatic ideas for
improving technology-driven productivity, boosting competitiveness, and meeting today’s global challenges through innovation. Daniel
Castro is the vice president of the Information Technology and Innovation Foundation. His research interests include health IT, data privacy,
e-commerce, e-government, electronic voting, information security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst
at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He
has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie
Mellon University. Alan McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF,
Mr. McQuinn was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission
in the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at Austin.
“Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015. http://www2.itif.org/2015beyond-usa-freedom-act.pdf//ghs-kw)
CONCLUSION
When historians write about this period in U.S. history it
could very well be that one of the themes will be how the
United States lost its global technology leadership to other nations. And clearly one of the factors they would
point to is the long-standing privileging of U.S. national security interests over U.S. industrial and commercial interests
when it comes to U.S. foreign policy. This has occurred over the last few years as the U.S. government has done relatively little to
address the rising commercial challenge to U.S. technology companies, all the while putting intelligence gathering first and
foremost. Indeed, policy decisions by the U.S. intelligence community have reverberated throughout the
global economy. If the U.S. tech industry is to remain the leader in the global marketplace, then the
U.S. government will need to set a new course that balances economic interests with national security interests. The cost
of inaction is not only short-term economic losses for U.S. companies, but a wave of protectionist policies that will
systematically weaken U.S. technology competiveness in years to come, with impacts on economic growth, jobs,
trade balance, and national security through a weakened industrial base. Only by taking decisive steps to reform its digital
surveillance activities will the U.S. government enable its tech industry to effectively compete in the
global market.
Growth is slowing now—innovation and tech are key to sustain CCP legitimacy
Ebner 14
(Julia Ebner. Julia Ebner received her MSc in International Relations and Affairs and her MSc in Political Economy, Development Economics,
and Natural Resources from Peking University. She was a researcher at the European Institute of Asia Studies. "Entrepreneurs: China’s Next
Growth Engine?," Diplomat. 8-7-2014. http://thediplomat.com/2014/08/entrepreneurs-chinas-next-growth-engine///ghs-kw)
Should China want to remain an international economic superpower, it will
need to substitute its current growth model –
one largely based on abundant, cheap labor – with a different comparative advantage that can lay the foundation for
a new, more sustainable growth strategy. Chinese policymakers are hoping now that an emerging entrepreneurship
may fit that bill, with start-ups and family-run enterprises potentially becoming a major driver of
sustainable growth and thus replacing the country’s current economic model. In 2014, international
conferences on private entrepreneurship and innovation were organized all across China: The China Council
for the Promotion of International Trade organized its first annual Global Innovation Economic Congress, while numerous innovation-related conferences were held at well-known Chinese universities such as Tsinghua University, Jilin University and Wuhan University.
New Growth Model Needed
Although China still ranks among the fastest growing economies in
the world, the country’s growth rates have decreased notably over the past few years. From
the 1990s until the 2008 financial
crisis, China’s GDP growth was consistently in the double digits with only a brief interruption following the Asian
financial crisis of 1997. Despite a relatively quick recovery after the global financial crisis, declining export rates resulting from the economic
distress of China’s main trading partners have left their mark on the Chinese economy. Today’s GDP
growth of 7.8 percent is just
half the level recorded immediately before the 2008 crisis, according to the latest data provided by the World Bank. This
recent slowdown in China’s economic growth has naturally been a source of concern for the government. A
continuation of the country’s phenomenal economic growth is needed to maintain both social
stability and the Communist Party’s legitimacy. Sustainable economic growth has thus been identified
as one of China’s key challenges for the coming decade. That challenge is complicated by demographic trends, which are
set to have a strongly negative impact on the Chinese economy within the next decade. Researchers anticipate that as a consequence of the
country’s one-child policy, introduced in 1977, China will soon experience a sharp decline of its working-age population, leading to a substantial
labor force bottleneck. A labor shortage is likely to mean climbing wages, threatening China’s cheap labor edge. The challenge is well described
in a recent article published by the International Monetary Fund.
Replacing the Cheap Labor Strategy
Entrepreneurship
is widely
recognized as an important engine for economic growth: It contributes positively to economic
development by fuelling job markets through the creation of new employment opportunities, by
stimulating technological change through increased levels of innovation, and by enhancing the market
environment through an intensification of market competition. Entrepreneurship and innovation have
the potential to halt the contraction in China’s economic growth and to replace the country’s
unsustainable comparative advantage of cheap labor over the long term. As former Chinese President Hu Jintao
stressed in 2006, if China can transform its current growth strategy into one based on innovation and
entrepreneurship, it could sustain its growth rates and secure a key role in the international world order.
Indeed, increasing levels of entrepreneurship in the Chinese private sector are likely to lead to technological
innovation and productivity increases. This could prove particularly useful in offsetting the workforce bottleneck
created by demographic trends. Greater innovation would also make China more competitive and less
dependent on the knowledge and technology of traditional Western trading partners such as the EU and the U.S.
Economic growth is key to prevent CCP collapse and lashout
Friedberg 10, Professor of Politics and International Affairs – Princeton, Asia Expert – CFR (Aaron,
“Implications of the Financial Crisis for the US-China Rivalry,” Survival, Volume 52, Issue 4, August, p. 31
– 54)
Despite its magnitude, Beijing's stimulus programme was insufficient to forestall a sizeable spike in unemployment. The regime acknowledges that
upwards of 20 million migrant workers lost their jobs in the first year of the crisis, with many returning to their villages, and 7m
recent college graduates are reportedly on the streets in search of work.9 Not surprisingly, tough times have been accompanied by increased social
turmoil. Even before the crisis hit, the number of so-called 'mass incidents' (such as riots or strikes) reported each year in China had been rising. Perhaps
because it feared that the steep upward trend might be unnerving to foreign investors, Beijing stopped publishing aggregate, national statistics in 2005.10 Nevertheless, there is ample, if fragmentary,
evidence that things got worse as the economy slowed. In Beijing, for example, salary cuts, layoffs, factory closures
and the failure of business owners to pay back wages resulted in an almost 100% increase in the number
of labour disputes brought before the courts.11 Since the early days of the current crisis, the regime has clearly been bracing itself for trouble. Thus, at the start of 2009, an official news-agency story candidly
warned Chinese readers that the country was, 'without a doubt … entering a peak period of mass incidents'.12 In anticipation of an expected increase in unrest, the regime for the first time
summoned all 3,080 county-level police chiefs to the capital to learn the latest riot-control tactics, and over 200 intermediate and lower-level judges were also called
in for special training.13 At least for the moment, the Chinese Communist Party (CCP) appears to be weathering the storm.
But if in the next several years the economy slumps again or simply fails to return to its previous pace, Beijing's troubles will mount. The
regime probably has enough repressive capacity to cope with a good deal more turbulence than it has thus far encountered, but a
protracted crisis could eventually pose a challenge to the solidarity of the party's leadership and thus to its continued
grip on political power. Sinologist Minxin Pei points out that the greatest danger to CCP rule comes not from below but
from above. Rising societal discontent 'might be sufficient to tempt some members of the elite to exploit the situation
to their own political advantage' using 'populist appeals to weaken their rivals and, in the process, open[ing] up divisions within the party's
seemingly unified upper ranks'.14 If this happens, all bets will be off and a very wide range of outcomes, from a democratic transition to
a bloody civil war, will suddenly become plausible.
Precisely because it is aware of this danger, the regime has been very careful to keep whatever differences exist over how to deal with the current crisis within bounds and out of view. If there are significant rifts they could become
apparent in the run-up to the pending change in leadership scheduled for 2012. Short of causing the regime to unravel, a sustained economic crisis could induce it to abandon its current, cautious policy of avoiding conflict with other countries while patiently accumulating all the elements of 'comprehensive national power'. If they believe that their backs are to the wall, China's leaders might even be tempted to lash out, perhaps provoking a confrontation with a foreign power in the hopes of rallying domestic support and deflecting public attention from their day-to-day troubles. Beijing might also choose to implement a policy of 'military Keynesianism', further accelerating its already ambitious plans for military construction in the hopes of pumping up aggregate demand and resuscitating a sagging domestic economy.15 In sum, despite its impressive initial performance, Beijing is by no means on solid ground. The reverberations from the 2008-09 financial crisis may yet shake the regime to its foundations, and could induce it to behave in unexpected, and perhaps unexpectedly aggressive, ways.
Chinese lashout goes nuclear
Epoch Times 4
(The Epoch Times, Renxing San, 8/4/2004, 8/4, http://english.epochtimes.com/news/5-8-4/30931.html//ghs-kw)
Since the Party’s life is “above all else,” it would not be surprising if the CCP resorts to the use of biological,
chemical, and nuclear weapons in its attempt to extend its life. The CCP, which disregards human life, would not
hesitate to kill two hundred million Americans, along with seven or eight hundred million Chinese, to achieve its
ends. These speeches let the public see the CCP for what it really is. With evil filling its every cell the CCP intends to wage a war
against humankind in its desperate attempt to cling to life. That is the main theme of the speeches. This theme is murderous
and utterly evil. In China we have seen beggars who coerced people to give them money by threatening to stab themselves with knives or
pierce their throats with long nails. But we have never, until now, seen such a gangster who would use biological, chemical, and nuclear
weapons to threaten the world, that all will die together with him. This bloody confession has confirmed the CCP’s nature: that of a monstrous
murderer who has killed 80 million Chinese people and who now plans to hold one billion people hostage and gamble with their lives.
2NC O/V
Disad outweighs and turns the AFF—NSA backdoors are causing foreign customers to
switch to Chinese tech now but the plan reverses that by closing backdoors and
reclaiming US tech leadership. That kills Chinese growth and results in a loss of CCP
legitimacy, which causes CCP lashout and extinction:
<insert o/w and t/ args>
2NC UQ
Extend uniqueness—perception of NSA backdoors incentivizes the Chinese
government and foreign customers to shift to Chinese vendors, which boosts Chinese tech—US companies' foreign sales have been falling fast—that's Li and McElveen
NSA spying boosts Chinese tech firms
Kan 13
(Kan, Michael. Michael Kan covers IT, telecommunications, and the Internet in China for the IDG News Service. "NSA spying scandal
accelerating China's push to favor local tech vendors," PCWorld. 12-3-2013. http://www.pcworld.com/article/2068900/nsa-spying-scandalaccelerating-chinas-push-to-favor-local-tech-vendors.html//ghs-kw)
While China’s demand for electronics continues to soar, the
tech services market may be shrinking for U.S. enterprise
vendors. Security concerns over U.S. secret surveillance are giving the Chinese government and local
companies more reason to trust domestic vendors, according to industry experts. The country has always tried to
support its homegrown tech industry, but lately it is increasingly favoring local brands over foreign competition. Starting
this year, the nation’s government tenders have required IT suppliers to source more products from local
Chinese firms, said an executive at a U.S.-based storage supplier that sells to China. In some cases, the tenders have required 50
percent or more of the equipment to come from domestic brands, said the executive, who requested anonymity.
Recent leaks by former U.S. National Security Agency contractor, Edward Snowden, about the U.S.’s secret spying program aren’t helping the
matter. “I think in general China wants to favor local brands; they feel their technology is getting better,” the executive said. “Snowden has just caused this to accelerate incrementally.” Last month, other U.S. enterprise vendors including Cisco and Qualcomm said the U.S. spying scandal has put strains on their China business. Cisco reported its revenue from the country fell 18 percent year-over-year in the last fiscal quarter. The Chinese government has yet to release an official document telling companies to stay away from U.S. vendors, said the manager of a large data center, who has knowledge of such developments. But
state-owned telecom operators have already stopped orders for certain U.S. equipment to power their
networks, he added. Instead, the operators are relying on Chinese vendors such as Huawei Technologies, to
supply their telecommunications equipment. ”It will be hard for certain networking equipment made in the U.S. to
enter the Chinese market,” the manager said. “It's hard for them (U.S. vendors) to get approval, to get
certification from the related government departments.” Other companies, especially banks, are
concerned that buying enterprise gear from U.S. vendors may lead to scrutiny from the central
government, said Bryan Wang, an analyst with Forrester Research. ”The NSA issue has been having an impact, but it hasn’t
been black and white,” he added. In the future, China could create new regulations on where certain state
industries should source their technology from, a possibility some CIOs are considering when making
IT purchases, Wang said. The obstacles facing U.S. enterprise vendors come at a time when China’s own
homegrown companies are expanding in the enterprise market. Huawei Technologies, a major vendor for networking
equipment, this August came out with a new networking switch that will put the company in closer competition with Cisco. Lenovo and
ZTE are also targeting the enterprise market with products targeted at government, and closing the
technology gap with their foreign rivals, Wang said. ”Overall in the longer-term, the environment is positive for
local vendors. We definitely see them taking market share from multinational firms in China,” he added.
Chinese vendors are also expanding outside the country and targeting the U.S. market. But last year Huawei
and ZTE saw a push back from U.S. lawmakers concerned with the two companies’ alleged ties to the Chinese government. A Congressional
panel eventually advised that U.S. firms buy networking gear from other vendors, calling Huawei and ZTE a security threat.
Europe is shifting to China now
Ranger 15
(Steve Ranger. "Rise of China tech, internet surveillance revelations form background
to CeBIT show," ZDNet. 3-17-2015. http://www.zdnet.com/article/rise-of-china-techinternet-surveillance-revelations-form-background-to-cebit-show///ghs-kw)
As well as showcasing new devices, from tablets to robotic sculptors and drones, this year's CeBIT
technology show in Hannover
reflects a gradual but important shift taking place in the European technology world. Whereas in previous years
US companies would have taken centre stage, this year the emphasis is on China, both as a creator of technology
and as a huge potential market. "German business values China, not just as our most important trade
partner outside of Europe, but also as a partner in developing sophisticated technologies," said Angela
Merkel as she opened the show. "Especially in the digital economy, German and Chinese companies have core strengths ... and that's why
cooperation is a natural choice," she said. Chinese vice premier Ma Kai also attended the show, which featured a keynote from Alibaba founder
Jack Ma. China is CeBIT's 'partner country' this year, with over 600 Chinese companies - including Huawei, Xiaomi, ZTE, and Neusoft - presenting
their innovations at the show. The
UK is also keen on further developing a historically close relationship: the China-Britain Business Council is in Hannover to help UK firms set up meetings with Chinese companies, and to
provide support and advice to UK companies interested in doing business in China. "China is mounting the
biggest CeBIT partner country showcase ever. Attendees will clearly see that Chinese companies are up there with the biggest and best of the
global IT industry," said a spokesman for CeBIT. Some of this
activity is a result of the increasingly sophisticated output
of Chinese tech companies who are looking for new markets for their products. Firms that have found it
hard to make headway in the US, such as Huawei, have been focusing their efforts on Europe instead.
European tech companies are equally keen to access the rapidly growing Chinese market. Revelations
about mass interception of communications by the US National Security Agency (including allegations that spies
had even tapped Angela Merkel's phone) have not helped US-European relations, either. So it's perhaps significant that an interview
with NSA contractor-turned-whistleblower Edward Snowden is closing the Hannover show.
2NC UQ: US Failing Now
US tech falling behind other countries
Kevin Ashton 06/2015 [the co-founder and former executive director of the MIT Auto-ID Center,
coined the term “Internet of Things.” His book “How to Fly a Horse: The Secret History of Creation,
Invention, and Discovery” was published by Doubleday earlier this year] "America last?," The Agenda,
http://www.politico.com/agenda/story/2015/06/kevin-ashton-internet-of-things-in-the-us-000102
And, while they were not mentioning it, some key indicators began swinging away from the U.S. In
2005, China’s high-tech exports
exceeded America’s for the first time. In 2009, just after Wen Jiabao spoke about the Internet of Things, Germany’s high-tech exports exceeded America’s as well. Today, Germany produces five times more high tech per capita
than the United States. Singapore and Korea’s high-tech exporters are also far more productive than America’s and, according to the most
recent data, are close to pushing the U.S. down to fifth place in the world’s high-tech economy. And, as the most recent data
are for 2013, that may have happened already. This decline will surprise many Americans, including many
American policymakers and pundits, who assume U.S. leadership simply transfers from one tech
revolution to the next. After all, that next revolution, the Internet of Things, was born in America, so perhaps it seems natural that
America will lead. Many U.S. commentators spin a myth that America is No. 1 in high tech, then extend it to claims that Europe is lagging
because of excessive government regulation, and hints that Asians are not innovators and entrepreneurs, but mere imitators with cheap labor.
This is jingoistic nonsense that could not be more wrong. Not only does Germany, a leader of the European Union, lead the U.S. in high tech,
but EU member states fund CERN, the European Organization for Nuclear Research, which invented the World Wide Web and built the Large
Hadron Collider, likely to be a source of several centuries of high-tech innovation. (U.S. government intervention killed America’s equivalent
particle physics program, the Superconducting Super Collider, in 1993 — an early symptom of declining federal investment in basic research.)
Asia, the alleged imitator, is anything but. Apple’s
iPhone, for example, so often held up as the epitome of American
innovation, looked a lot like a Korean phone, the LG KE850, which was revealed and released before Apple’s
product. Most of the technology in the iPhone was invented in, and is exported by, Asian countries.
2NC Link
Extend the link—the AFF bans the creation of backdoors and restores the perception that US tech is safe, which means the US regains customers and tech leadership from
China—that’s Castro and McQuinn
If the US loses its tech dominance, Chinese and Indian innovation will quickly replace
it
Fannin 13 (Rebecca Fannin, 7-12-2013, Forbes magazine contributor. "China Still Likely To Take Over Tech Leadership If
And When Silicon Valley Slips," Forbes, http://www.forbes.com/sites/rebeccafannin/2013/07/12/china-still-likely-to-takeover-tech-leadership-if-and-when-silicon-valley-slips)
Will Silicon Valley continue to maintain its market-leading position for technology innovation?
It’s a question that’s often pondered and
debated, especially in the Valley, which has the most to lose if the emerging markets of China or India
take over leadership. KPMG took a look at this question and other trends in its annual Technology
Innovation Survey, and found that the center of gravity may not be shifting quite so fast to the East as
once predicted. The KPMG survey of 811 technology executives globally found that one-third believe
the Valley will likely lose its tech trophy to an overseas market within just four years. That percentage might
seem high, but it compares with nearly half (44 percent) in last year’s survey. It’s a notable improvement for the Valley, as the U.S. economy and tech sector pick
up. Which country will lead in disruptive breakthroughs? Here, the U.S. again solidifies its long-standing reputation as the world’s tech giant while China has slipped in stature from a year ago, according to the survey. In last year’s poll, the U.S. and China were tied for the top spot. But today, some 37 percent predict that the U.S. shows the most promise for tech disruptions, little surprise considering Google’s strong showing in the survey as top company innovator in the world with its Google glass and driver-less cars. Meanwhile, about one-quarter pick China,
which is progressing from a reputation
for just copying to also innovating or micro-innovating. India, with a heritage of leadership in
outsourcing, a large talent pool of engineers, ample mentoring from networking groups such as TiE, and
a vibrant mobile communications market, ranked right behind the U.S. and China two years in a row. Even though China’s rank slid in this year’s tech innovation survey, its Silicon Dragon tech economy is still regarded as the leading challenger and most likely to replace the Valley, fueled by the market’s huge, fast-growing and towering brands such as Tencent, Baidu and Alibaba, and a growing
footprint overseas. KPMG partner Egidio Zarrella notes that China is innovating at an “impressive
speed,” driven by domestic consumption for local brands that are unique to the market. “China will
innovate for China’s sake,” he observes, adding that with improved research and development
capabilities, China will bridge the gap in expanding globally. For another appraisal of China’s tech innovation prowess, see Forbes post detailing how Mary Meeker’s annual trends report singles out the market’s merits, including the fact that China leads the world for the most Internet and mobile communications users and has a tech-savvy consumer class that embraces new technologies. Besides China, it’s India that shines in the KPMG survey.
India scores as the second-most likely country to topple the U.S. for tech leadership. And, significantly,
this emerging tiger nation ranks first on an index that measures each country’s confidence in its own
tech innovation abilities. Based on ten factors, India rates highest on talent, mentoring, and customer
adoption of new technologies. The U.S. came in third on the confidence index, while Israel’s Silicon Wadi ranked second. Israel was deemed strong in disruptive technologies, talent and technology infrastructure. The U.S. was judged strongest in tech infrastructure, access to alliances and partnerships, talent, and technology breakthroughs, and weakest in educational system and government incentives. Those weaknesses for the U.S. are points that should be underscored in America’s tech clusters and in the nation’s capital as future tech leadership unfolds.
A
second part of the comprehensive survey covering tech sectors pinpointed cloud computing and mobile
communications as hardly a fad but here to stay at least for the next three years as the most disruptive
technologies. Both were highlighted in the 2012 report as well. In a change from last year, however, big data and biometrics (face, voice and hand gestures that are digitally read) were identified as top sectors that will see big breakthroughs. It’s a brave new tech world.
2NC Perception Link
The AFF restores trust in internet tech
Danielle Kehl et al 14, Senior Policy Analyst at New America’s Open Technology Institute. Kevin
Bankston is a Policy Director at OTI, Robyn Greene is a Policy Counsel at OTI, Robert Morgus is a
Research Associate at OTI, “Surveillance Costs: The NSA’s Impact on the Economy, Internet Freedom &
Cybersecurity”, July 2014, pg 40-1
The U.S. government should not require or request that new surveillance capabilities or security vulnerabilities be built into
communications technologies and services, even if these are intended only to facilitate lawful surveillance. There is a great deal of evidence
that backdoors fundamentally weaken the security of hardware and software, regardless of whether only the NSA purportedly knows about
said vulnerabilities, as some of the documents suggest. A policy statement from the Internet Engineering Task Force in 2000 emphasized that
“adding a requirement for wiretapping will make affected protocol designs considerably more complex. Experience has shown that complexity
almost inevitably jeopardizes the security of communications.” 355 More recently, a May 2013 paper from the Center for Democracy and
Technology on the risks of wiretap modifications to endpoints concludes that “deployment of an intercept capability in… communications
services, systems and applications poses serious security risks.” 356 The authors add that “on balance mandating that endpoint software
vendors build intercept functionality into their products will be much more costly to personal, economic and governmental security overall than
the risks associated with not being able to wiretap all communications.” 357 While NSA programs such as SIGINT Enabling—much like proposals
from domestic law enforcement agencies to update the Communications Assistance for Law Enforcement Act (CALEA) to require digital wiretapping capabilities in modern Internet-based communications services 358—may aim to promote national security and law enforcement
by ensuring that federal agencies have the ability to intercept Internet communications, they do so at a huge cost to online security overall.
Because of the associated security risks, the U.S. government should not mandate or request the creation of surveillance backdoors in products, whether through legislation, court order, or the leveraging of industry relationships to convince companies to voluntarily insert
vulnerabilities. As Bellovin et al. explain, complying with these types of requirements would also hinder innovation and impose a “tax” on
software development in addition to creating a whole new class of vulnerabilities in hardware and software that undermines the overall
security of the products. 359 An amendment offered to the NDAA for Fiscal Year 2015 (H.R. 4435) by Representatives Zoe Lofgren (D-CA) and
Rush Holt (D-NJ) would have prohibited inserting these kinds of vulnerabilities outright. 360 The Lofgren-Holt proposal aimed to prevent “the
funding of any intelligence agency, intelligence program, or intelligence related activity that mandates or requests that a device manufacturer,
software developer, or standards organization build in a backdoor to circumvent the encryption or privacy protections of its products, unless
there is statutory authority to make such a mandate or request.” 361 Although that measure was not adopted as part of the NDAA, a similar
amendment sponsored by Lofgren along with Representatives Jim Sensenbrenner (D-WI) and Thomas Massie (R-KY), did make it into the
House-approved version of the NDAA—with the support of Internet companies and privacy organizations 362 —passing on an overwhelming
vote of 293 to 123. 363 Like Representative Grayson’s amendment on NSA’s consultations with NIST around encryption, it remains to be seen
whether this amendment will end up in the final appropriations bill that the President signs. Nonetheless, these legislative efforts are a heartening sign and are consistent with recommendations from the President’s Review Group that the U.S. government should not attempt to
deliberately weaken the security of commercial encryption products. Such mandated vulnerabilities, whether required under statute or by
court order or inserted simply by request, unduly threaten innovation in secure Internet technologies while introducing security flaws that may
be exploited by a variety of bad actors. A
clear policy against such vulnerability mandates is necessary to restore
international trust in U.S. companies and technologies.
Policies such as the Secure Data Act are perceived as strengthening security
Castro and McQuinn 15
(Castro, Daniel and McQuinn, Alan. Information Technology and Innovation Foundation. The Information Technology and Innovation
Foundation (ITIF) is a Washington, D.C.-based think tank at the cutting edge of designing innovation strategies and technology policies to
create economic opportunities and improve quality of life in the United States and around the world. Founded in 2006, ITIF is a 501(c) 3
nonprofit, non-partisan organization that documents the beneficial role technology plays in our lives and provides pragmatic ideas for
improving technology-driven productivity, boosting competitiveness, and meeting today’s global challenges through innovation. Daniel
Castro is the vice president of the Information Technology and Innovation Foundation. His research interests include health IT, data privacy,
e-commerce, e-government, electronic voting, information security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst
at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He
has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie
Mellon University. Alan McQuinn is a research assistant with the Information Technology and Innovation Foundation. Prior to joining ITIF,
Mr. McQuinn was a telecommunications fellow for Congresswoman Anna Eshoo and an intern for the Federal Communications Commission
in the Office of Legislative Affairs. He got his B.S. in Political Communications and Public Relations from the University of Texas at Austin.
“Beyond the USA Freedom Act: How U.S. Surveillance Still Subverts U.S. Competitiveness,” ITIF. June 2015. http://www2.itif.org/2015beyond-usa-freedom-act.pdf//ghs-kw)
Second, the
U.S. government should draw a clear line in the sand and declare that the policy of the U.S.
government is to strengthen not weaken information security. The U.S. Congress should pass
legislation, such as the Secure Data Act introduced by Sen. Wyden (D-OR), banning any government
efforts to introduce backdoors in software or weaken encryption.43 In the short term, President Obama, or his successor,
should sign an executive order formalizing this policy as well. In addition, when U.S. government agencies discover vulnerabilities in software or hardware products,
they should responsibly notify these companies in a timely manner so that the companies can fix these flaws. The best way to protect U.S. citizens from digital
threats is to promote strong cybersecurity practices in the private sector.
2NC Chinese Markets Link
Domestic markets are key to Chinese tech—plan steals Chinese market share
Lohr 12/2
(Steve Lohr. "In 2015, Technology Shifts Accelerate and China Rules, IDC Predicts,"
NYT. 12-2-2014. http://bits.blogs.nytimes.com/2014/12/02/in-2015-technology-shiftsaccelerate-and-china-rules-idc-predicts///ghs-kw)
Beyond the detail, a couple of larger themes stand out. First is China. Most of the reporting and commentary recently on the Chinese
economy has been about its slowing growth and challenges. “In
information technology, it’s just the opposite,” Frank Gens, IDC’s
chief analyst, said in an interview. “China has a roaring domestic market in technology.” In 2015, IDC estimates that
nearly 500 million smartphones will be sold in China, three times the number sold in the United States
and about one third of global sales. Roughly 85 percent of the smartphones sold in China will be made
by its domestic producers like Lenovo, Xiaomi, Huawei, ZTE and Coolpad. The rising prowess of China’s homegrown
smartphone makers will make it tougher on outsiders, as Samsung’s slowing growth and profits recently reflect. More than 680 million
people in China will be online next year, or 2.5 times the number in the United States. And the China
numbers are poised to grow further, helped by its national initiative, the Broadband China Project, intended to give 95 percent of
the country’s urban population access to high-speed broadband networks. In all, China’s spending on information and
communications technology will be more than $465 billion in 2015, a growth rate of 11 percent. The
expansion of the China tech market will account for 43 percent of tech-sector growth worldwide.
The Chinese market is key to Chinese tech growth
Mozur 1/28
(Paul Mozur. Reporter for the NYT. "New Rules in China Upset Western Tech Companies," New York Times. 1-28-2015.
http://www.nytimes.com/2015/01/29/technology/in-china-new-cybersecurity-rules-perturb-western-tech-companies.html//ghs-kw)
Mr. Yao said 90 percent of high-end servers
and mainframes in China were still produced by multinationals. Still,
Chinese companies are catching up at the lower end. “For all enterprise hardware, local brands
represented 21.3 percent revenue share in 2010 in P.R.C. market and we expect in 2014 that number
will reach 43.1 percent,” he said, using the abbreviation for the People’s Republic of China. “That’s a huge jump.”
Chinese tech is key to the global industry
Lohr 12/2
(Steve Lohr. "In 2015, Technology Shifts Accelerate and China Rules, IDC Predicts,"
NYT. 12-2-2014. http://bits.blogs.nytimes.com/2014/12/02/in-2015-technology-shiftsaccelerate-and-china-rules-idc-predicts///ghs-kw)
Beyond the detail, a couple of larger themes stand out. First is China. Most of the reporting and commentary recently on the Chinese
economy has been about its slowing growth and challenges. “In
information technology, it’s just the opposite,” Frank Gens, IDC’s
chief analyst, said in an interview. “China has a roaring domestic market in technology.” In 2015, IDC estimates that
nearly 500 million smartphones will be sold in China, three times the number sold in the United States
and about one third of global sales. Roughly 85 percent of the smartphones sold in China will be made
by its domestic producers like Lenovo, Xiaomi, Huawei, ZTE and Coolpad. The rising prowess of China’s homegrown
smartphone makers will make it tougher on outsiders, as Samsung’s slowing growth and profits recently reflect. More than 680 million
people in China will be online next year, or 2.5 times the number in the United States. And the China
numbers are poised to grow further, helped by its national initiative, the Broadband China Project, intended to give 95 percent of
the country’s urban population access to high-speed broadband networks. In all, China’s spending on information and
communications technology will be more than $465 billion in 2015, a growth rate of 11 percent. The
expansion of the China tech market will account for 43 percent of tech-sector growth worldwide.
2NC Tech K2 China Growth
Tech is key to Chinese growth
Xinhua 7/24
(Xinhua. Major Chinese news agency. "Industrial profits decline while high-tech sector shines in China|WCT”. 7-24-2015.
http://www.wantchinatimes.com/news-subclass-cnt.aspx?id=20150328000036&cid=1102//ghs-kw)
Driven by the country's restructuring efforts amid the economic "new normal" of slow but quality growth, China's
high-tech industry
flourished with the value-added output of the high-tech sector growing 12.3% year-on-year in 2014. The
high-tech industry accounted for 10.6% of the country's overall industrial value-added output in 2014,
which rose 7% from 2013 to 22.8 trillion yuan (US$3.71 trillion). The fast expansion of the high-tech and modern service
industries shows China's economy is advancing to the "middle and high end," said Xie Hongguang, deputy chief
of the NBS. China should work toward greater investment in "soft infrastructure"–like innovation–instead of "hard infrastructure" to climb the
global value chain, said Zhang Monan, an expert with the China Center for International Economic Exchanges. Indeed, boosting
innovation has been put at the top of the government's agenda as China has pledged to boost the implementation of
the "Made in China 2025" strategy, which will upgrade the manufacturing sector and help the country achieve a medium-high level of economic
growth.
China transitioning to tech-based economy
van Wyk 10
(Barry van Wyk, The Beijing Axis. “Upstart: China’s emergence in technology and innovation.” First published May 27, 2010; last updated June 3, 2010)
Significant progress has already been achieved with the MLP, and it is not hard to identify signs of China’s rapidly
improving innovative abilities. GERD increased to 1.54 per cent in 2008 from 0.57 per cent in 1995. Occurring at a time when its
GDP was growing exceptionally fast, China’s GERD now ranks behind only the US and Japan. The number of triadic patents
(granted in all three of the major patent offices in the US, Japan and Europe) granted to China remains relatively small, reaching 433 in
2005 (compared to 652 for Sweden and 3,158 for Korea), yet Chinese patent applications are increasing rapidly. Chinese patent
applications to the World Intellectual Property Office (WIPO), for example, increased by 44 per cent in 2005 and by a further 57 per cent
in 2006. From a total of about 20,000 in 1998, China’s output of scientific papers has increased fourfold to about 112,000
as of 2008, moving China to second place in the global rankings, behind only the US. In the period 2004 to 2008, China
produced about 400,000 papers, with the major focus areas being material science, chemistry, physics, mathematics and
engineering, but new fields like biological and medical science also gaining prominence.
China transitioning now
Segal and Wilson 06
(Adam Segal, Ira A. Lipman Senior Fellow for Counterterrorism and National Security Studies, and Ernest J. Wilson III. “Trends in China's Transition toward a Knowledge Economy,” Asian Survey, January/February 2006. http://www.cfr.org/publication/9924/trends_in_chinas_transition_toward_a_knowledge_economy.html)
During the past decade, China has arguably placed more importance on reforming and modernizing its information and
communication technology (ICT) sector than any other developing country in the world. Under former Premier Zhu Rongji,
the Chinese leadership was strongly committed to making ICT central to its national goals—from transforming Chinese society at home
to pursuing its ambitions as a world economic and political power. In one of his final speeches, delivered at the first session of the 10th
National People’s Congress in 2003, Zhu implored his successors to “energetically promote information technology (IT)
applications and use IT to propel and accelerate industrialization” so that the Chinese Communist Party (CCP) can continue
to build a “well-off society.”1
2NC Global Econ I/L
China economic crash goes global—outweighs the US and disproves resiliency empirics
Pesek 14
(Writer for Bloomberg, an edited economic publication. “What to Fear If China Crashes,” Bloomberg View,
http://www.bloombergview.com/articles/2014-07-16/what-to-fear-if-china-crashes)
Few moments in modern financial history were scarier than the week of Sept. 15, 2008, when first Lehman
Brothers and then American International Group collapsed. Who could forget the cratering stock markets, panicky bailout
negotiations, rampant foreclosures, depressing job losses and decimated retirement accounts -- not to mention the discouraging recovery since
then? Yet
a Chinese crash might make 2008 look like a garden party. As the risks of one increase, it's worth exploring how it might look. After all, China is now the world's biggest trading nation, the second-biggest economy and holder of some $4 trillion of foreign-currency reserves. If China does experience a true credit crisis, it would be felt around the world. "The example of how the global financial crisis began in one poorly-understood financial market and spread dramatically from there illustrates the capacity for misjudging contagion risk," Adam Slater wrote in a July 14 Oxford Economics report.
Lehman and AIG, remember, were just two financial firms out of dozens. Opaque dealings and off-balance-sheet
investment vehicles made it virtually impossible even for the managers of those companies to understand their vulnerabilities -- and those of
the broader financial system. The
term "shadow banking system" soon became shorthand for potential
instability and contagion risk in world markets. Well, China is that and more. China surpassed Japan in 2011 in gross
domestic product and it's gaining on the U.S. Some World Bank researchers even think China is already on the verge of becoming No. 1 (I'm
skeptical). China's world-trade weighting has doubled in the last decade. But the real explosion has been in the financial sector. Since 2008,
Chinese stock valuations surged from $1.8 trillion to $3.8 trillion and bank-balance sheets and the money supply jumped accordingly. China's
broad measure of money has surged by an incredible $12.5 trillion since 2008 to roughly match the U.S.'s monetary stock. This enormous
money buildup fed untold amounts of private-sector debt along with public-sector institutions. Its scale, speed and opacity are fueling genuine
concerns about a bad-loan meltdown in an economy that's 2 1/2 times bigger than Germany's. If that happens, at a minimum it would torch
China's property markets and could take down systemically important parts of Hong Kong's banking system. The reverberations probably
wouldn't stop there, however, and would hit resource-dependent Australia, batter trade-driven economies Japan, Singapore, South Korea and
Taiwan and whack prices of everything from oil and steel to gold and corn. "China’s
importance for the world economy and the
rapid growth of its financial system, mean that there are widespread concerns that a financial crisis in China would also
turn into a global crisis," says London-based Slater. "A bad asset problem on this scale would dwarf that seen in the major emerging
financial crises seen in Russia and Argentina in 1998 and 2001, and also be more severe than the Japanese bad loan problem of the 1990s."
Such risks belie President Xi Jinping's insistence that China's financial reform process is a domestic affair, subject neither to input nor scrutiny by
the rest of the world. That's not the case. Just like the Chinese pollution that darkens Asian skies and contributes to climate change, China's
financial vulnerability is a global problem. U.S. President Barack Obama made that clear enough in a May interview with National Public Radio.
“We welcome China’s peaceful rise," he said. “In many ways, it would be a bigger national security problem for us if China started falling apart
at the seams.” China's ascent obviously preoccupies the White House as it thwarts U.S. foreign-policy objectives, taunts Japan and other nations
with territorial claims in the Pacific and casts aspersions on America's moral leadership. But China's frailty has to be on the minds of U.S. policy
makers, too. The potential for things careening out of control in China is real. What worries bears such as Patrick
Chovanec of Silvercrest Asset Management in New York, is China’s unaltered obsession with building the equivalent of new “Manhattans”
almost overnight even as the nation's financial system shows signs of buckling. As policy makers in Beijing generate even more credit to keep
bubbles from bursting, the shadow banking system continues to grow. The longer China delays its reckoning, the worse it might be for China — and perhaps the rest of us.
CCP collapse causes a second Great Depression
BHANDARI. 10.
Maya. Head of Emerging Markets Analysis, Lombard Street Research. “If the Chinese Bubble Bursts…”
THE INTERNATIONAL ECONOMY. http://www.internationaleconomy.com/TIE_F10_ChinaBubbleSymp.pdf
The latest
financial crisis proved the central role of China in driving global economic outcomes. China is the
chief overseas surplus country corresponding to the U.S. deficit, and it was excess ex ante Chinese savings which prompted ex post
U.S. dis-saving. The massive ensuing build-up
of debt triggered a Great Recession almost as bad as the Great
Depression. This causal direction, from excess saving to excess spending, is confirmed by low global real interest rates through much of the
Goldilocks period. Had over-borrowing been the cause rather than effect, then real interest rates would have
been bid up to attract the required capital. A prospective hard landing in China might thus be expected to
have serious global implications. The Chinese economy did slow sharply over the last eighteen months, but only briefly, as large-scale behind-the-scenes stimulus meant that it quickly returned to overheating. Given its 9—10 percent "trend" growth rate, and 30 percent
import ratio, China is nearly twice as powerful a global growth locomotive as the United States, based on its implied import gain. So while the
surrounding export hubs, whose growth prospects are a "second derivative" of what transpires in China, would suffer most
directly from Chinese slowing, the knock to global growth would be significant. Voracious Chinese demand
has also been a crucial driver of global commodity prices, particularly metals and oil, so they too may face
a hard landing if Chinese demand dries up.
CCP collapse deals a massive deflationary shock to the world.
ZHAO. 10.
Chen. Chief Global Strategist and Managing Editor for Global Investment Strategy, BCA Research Group.
“If the Chinese Bubble Bursts…” THE INTERNATIONAL ECONOMY. http://www.internationaleconomy.com/TIE_F10_ChinaBubbleSymp.pdf
At the onset, I believe the odds of a China asset bubble bursting are very low. It is difficult to argue that Chinese asset markets, particularly real estate, are indeed already in a 'bubble.' Property prices in tier two and tier three cities are actually quite cheap, but for purposes of discussion, there is always the danger that asset values could get massively inflated over the next few years. If so, a crash would be inevitable. In fact, China experienced a devastating real estate meltdown and "growth recession" in 1993—94, when then-premier Zhu Rongji initiated a credit crackdown to rein in spreading inflation and real estate speculation. Property prices in major cities dropped by over 40 percent and private sector GDP growth dropped to 3 percent from double-digit levels. Non-performing loans soared to 30 percent of total banking sector assets. It took more than seven years for the government to clean up the financial mess and recapitalize the banking system. If another episode of a bursting asset bubble were to happen in China, the damage to the banking sector could be rather severe. History has repeatedly shown that credit inflation begets asset bubbles and, almost by definition, a bursting asset bubble always leads to a banking crisis and severe credit contraction. In China's case, bank credit is the lifeline for large state-owned companies, and a credit crunch could choke off growth of these enterprises quickly. The big difference between today's situation and the early 1990s, however, is that the Chinese authorities have accumulated vast reserves. China also runs a huge current account surplus. In the early 1990s, China's reserves had dwindled to almost nothing and the current account was in massive deficit, as a real estate meltdown led to a collapse in the Chinese currency in 1992—93. In other words, Beijing today has a lot of resources at its disposal to stimulate the economy or to recapitalize the banking system, whenever necessary. Therefore, the impact of a bursting bubble on growth could be very sharp and even severe, but it would be short-lived because of support from public sector spending. A bursting China bubble would also be felt acutely in commodity prices. The commodity story has been built around the China story. Naturally, a bursting China bubble would deal a devastating blow to the commodities as well as commodity producers such as Latin America, Australia, and Canada, among others. Asia as a whole, and Japan in particular, would also be acutely affected by a "growth recession" in China. The economic integration between China and the rest of Asia is well-documented, but it is important to note that there has been virtually no domestic spending in Japan in recent years and the country's economic growth has been leveraged almost entirely on exports to China. A bursting China bubble could seriously impair Japan's economic and asset market performance. Finally, a bursting China bubble would be a massive deflationary shock to the world economy. With China in growth recession, global saving excesses could surge and world aggregate demand would be vastly deficient. Bond yields could move to new lows and stocks would drop, probably precipitously—in short, investors would face very bleak and frightening prospects.
2NC US Econ I/L
Chinese growth turns the case --- strong Chinese technological power forms linkages
with US companies --- drives growth of US companies
NRC 10 National Research Council “The Dragon and the Elephant: Understanding the Development of Innovation Capacity in China
and India: Summary of a Conference” www.nap.edu/openbook.php?record_id=12873&page=13
Wadhwa found in his surveys that companies go offshore for reasons of “cost and where the markets are.” Meanwhile,
Asian immigrants are driving enterprise growth in the United States. Twenty-five percent of technology and engineering firms launched
in the last decade and 52% of Silicon Valley startups had immigrant founders. Indian immigrants accounted for one-quarter of these.
Among America’s new immigrant entrepreneurs, more than 74 percent have a master’s or a PhD degree. Yet the backlog of U.S.
immigration applications puts this stream of talent in limbo. One million skilled immigrants are waiting for the annual quota
of 120,000 visas, with caps of 8,400 per country. This is causing a “reverse brain drain” from the United States back to
countries of origin, the majority to India and China. This endangers U.S. innovation and economic growth. There is a high
likelihood, however, that returning skilled talent will create new linkages to U.S. companies , as they are doing
within General Electric, IBM, and other companies. Jai Menon of IBM Corporation began his survey of IBM’s view of global
talent recruitment by suggesting that IBM pursues growth of its operations as a global entity. There are 372,000 IBMers in 172
countries; 123,000 of these are in the Asia-Pacific region. Eighty percent of the firm’s R&D activity is still based in the United States. IBM
supports open standards development and networked business models to facilitate global collaboration. Three factors drive the firm’s
decisions on staff placement and location of recruitment -- economics, skills and environment. IBM India has grown its staff tenfold in
five years; its $6 billion investment in three years represents a tripling of resources in people, infrastructure and capital. Increasingly, as
Vivek Wadhwa suggested, people get degrees in the United States and return to India for their first jobs. IBM follows a comparable
approach in China, with 10,000+ IBM employees involved in R&D, services and sales . In 2006, for the first time the
number of service workers overtook the number of agricultural laborers worldwide. Thus the needs of a service economy comprise an
issue looming for world leaders.
CCP collapse hurts US economy
Karabell 13
(Zachary. American author, historian, money manager and economist. Karabell is President of River Twice Research, where he analyzes
economic and political trends. He is also a Senior Advisor for Business for Social Responsibility. Previously, he was Executive Vice President,
Head of Marketing and Chief Economist at Fred Alger Management, a New York-based investment firm, and President of Fred Alger and
Company, as well as Portfolio Manager of the China-US Growth Fund, which won both a Lipper Award for top performance and a 5-star
designation from Morningstar, Inc.. He was also Executive Vice President of Alger's Spectra Funds, a no-load family of mutual funds that
launched the $30 million Spectra Green Fund, which was based on the idea that profit and sustainability are linked. At Alger, he oversaw the
creation, launch and marketing of several funds, led corporate strategy for acquisitions, and represented the firm at public forums and in the
media. Educated at Columbia, Oxford, and Harvard, where he received his Ph.D., he is the author of several books. “The U.S. can’t afford a
Chinese economic collapse.” Reuters. http://blogs.reuters.com/edgy-optimist/2013/03/07/the-u-s-cant-afford-a-chinese-economiccollapse/)
Is China about to collapse? That question has been front and center in the past weeks as the country completes its leadership transition and
after the exposure of its various real estate bubbles during a widely watched 60 Minutes exposé this past weekend. Concerns about soaring
property prices throughout China are hardly new, but they have been given added weight by the government itself. Recognizing that a rapid
implosion of the property market would disrupt economic growth, the central government recently announced far-reaching measures designed
to dent the rampant speculation. Higher down payments, limiting the purchases of investment properties, and a capital gains tax on real estate
transactions designed to make flipping properties less lucrative were included. These measures, in conjunction with the new government’s
announcing more modest growth targets of 7.5 percent a year, sent Chinese equities plunging and led to a slew of commentary in the United
States saying China would be the next shoe to drop in the global system. Yet there is more here than simple alarm over the viability of China’s
economic growth. There is the not-so-veiled undercurrent of rooting against China. It is difficult to find someone who explicitly wants it to
collapse, but the tone of much of the discourse suggests bloodlust. Given that China largely escaped the crises that so afflicted the United
States and the eurozone, the desire to see it stumble may be understandable. No one really likes a global winner if that winner isn’t you. The
need to see China fail verges on jingoism. Americans distrust the Chinese model, find that its business practices verge on the immoral and
illegal, that its reporting and accounting standards are sub-par at best and that its system is one of crony capitalism run by crony communists.
On Wall Street, the presumption usually seems to be that any Chinese company is a ponzi scheme masquerading as a viable business. In various
conversations and debates, I have rarely heard China’s economic model mentioned without disdain. Take, as just one example, Gordon Chang
in Forbes: “Beijing’s technocrats can postpone a reckoning, but they have not repealed the laws of economics. There will be a crash.” The
consequences of a Chinese collapse, however, would be severe for the United States and for the world.
There could be no major Chinese contraction without a concomitant contraction in the United States. That would
mean sharply curtailed Chinese purchases of U.S. Treasury bonds, far less revenue for companies like General
Motors, Nike, KFC and Apple that have robust business in China (Apple made $6.83 billion in the fourth quarter of 2012, up from
$4.08 billion a year prior), and far fewer Chinese imports of high-end goods from American and Asian companies. It would
also mean a collapse of Chinese imports of materials such as copper, which would in turn harm economic growth
in emerging countries that continue to be a prime market for American, Asian and European goods. China
is now the world’s second-largest economy, and property booms have been one aspect of its growth. Individual Chinese cannot
invest outside of the country, and the limited options of China’s stock exchanges and almost nonexistent bond market mean that if you are
middle class and want to do more than keep your money in cash or low-yielding bank accounts, you buy either luxury goods or apartments.
That has meant a series of property bubbles over the past decade and a series of measures by state and local officials to contain them. These
recent measures are hardly the first, and they are not likely to be the last. The past 10 years have seen wild swings in property prices, and as
recently as 2011 the government took steps to cool them; the number of transactions plummeted and prices slumped in hot markets like
Shanghai as much as 30, 40 and even 50 percent. You could go back year by year in the 2000s and see similar bubbles forming and popping, as
the government reacted to sharp run-ups with restrictions and then eased them when the pendulum threatened to swing too far. China has
had a series of property bubbles and a series of property busts. It has also had massive urbanization that in time has absorbed the excess supply
generated by massive development. Today much of that supply is priced far above what workers flooding into China’s cities can afford. But that
has always been true, and that housing has in time been purchased and used by Chinese families who are moving up the income spectrum,
much as U.S. suburbs evolved in the second half of the 20th century. More to the point, all property bubbles are not created equal. The housing
bubbles in the United States and Spain, for instance, would never had been so disruptive without the massive amount of debt and the financial
instruments and derivatives based on them. A bursting housing bubble absent those would have been a hit to growth but not a systemic crisis.
In China, most buyers pay cash, and there is no derivative market around mortgages (at most there’s a small shadow market). Yes, there are all
sorts of unofficial transactions with high-interest loans, but even there, the consequences of busts are not the same as they were in the United
States and Europe in recent years. Two issues converge whenever China is discussed in the United States: fear of the next global crisis, and
distrust and dislike of the country. Concern is fine; we should always be attentive to possible risks. But China’s property bubbles are an unlikely
risk, because of the absence of derivatives and because the central government is clearly alert to the market’s behavior. Suspicion and
antipathy, however, are not constructive. They speak to the ongoing difficulty China poses to Americans’ sense of global economic dominance
and to the belief in the superiority of free-market capitalism to China’s state-managed capitalism. The U.S. system may prove to be more
resilient over time; it has certainly proven successful to date. Its
success does not require China’s failure, nor will China’s
success invalidate the American model. For our own self-interest we should be rooting for their efforts,
and not jingoistically wishing for them to fail.
2NC Impact UQ
Latest data show Chinese economy is growing now—ignore stock market claims which
don’t accurately reflect economic fundamentals
Miller and Charney 7/15
(Miller, Leland R. and Charney, Craig. Mr. Miller is president and Mr. Charney is research director of China Beige Book International, a
private economic survey. “China’s Economy Is Recovering,” Wall Street Journal, 7/15/2015. http://www.wsj.com/articles/chinas-economyis-recovering-1436979092//ghs-kw)
China released second-quarter statistics Wednesday that showed the economy growing at 7%, the same
real rate as the first quarter but with stronger nominal growth. That result, higher than expected and
coming just after a stock-market panic, surprised some commentators and even aroused suspicion that the government cooked the
numbers for political reasons. While official data is indeed unreliable, our firm's latest research confirms that the Chinese
economy is improving after several disappointing quarters -- just not for the reasons given by Beijing. The China Beige
Book (CBB), a private survey of more than 2,000 Chinese firms each quarter, frequently anticipates the
official story. We documented the 2012 property rebound, the 2013 interbank credit crunch and the 2014 slowdown in capital expenditure
before any of them showed up in official statistics. The modest but broad-based improvement in the Chinese
economy that we tracked in the second quarter may seem at odds with the headlines of carnage in the country's
financial markets. But stock prices in China have almost nothing to do with the economy's fundamentals.
Our data show sales revenue, capital expenditure, new domestic orders, hiring, wages and profits were all
better in the second quarter, making the improvement unmistakable -- albeit not outstanding in any one category.
In the labor market, both employment and wage growth strengthened, and prospects for hiring look stable.
This is not new: Our data have shown the labor market remarkably steady over the past year, despite the economy's overall deceleration.
Inflation data are also a reason for optimism. Along with wages, input costs and sales prices grew faster
in the second quarter. The rate is still slower than a year ago, but at least this is a break from the previously unstoppable tide of price
deterioration. While it is just one quarter, our data suggest deflation may have peaked. With the explosive stock market run-up
occupying all but the final weeks of the quarter, it might seem reasonable to conclude that this rally was the impetus behind the better results.
Not so. Of all our indicators, capital expenditure should have responded most positively to a boom in equities prices, but the uptick was barely
noticeable. The
strength of the second-quarter performance is instead found in widespread expanding sales
volumes, which firms were able to accomplish without sacrificing profit margins. The fact that stronger
sales, rather than greater investment, was the driving force this quarter is itself an encouraging sign in
light of China's longstanding problem of excess investment and inadequate consumption. These gains
also track across sectors, highlighted by a welcome resurgence in both property and retail. Property saw
its strongest results in 18 months, buoyed by stronger commercial and residential realty as well as
transportation construction. Six of our eight regions were better than last quarter, led by the Southwest and
North. The results were also an improvement over the second quarter of last year, if somewhat less so, with residential construction the
sector's major remaining black eye. Retailers, meanwhile, reported a
second consecutive quarter of improvement,
both on-quarter and on-year, with growth accelerating. For the first time in 18 months, the retail sector
also had faster growth than manufacturing, underscoring the danger of treating manufacturing as the bellwether for the
economy.
China’s economy is stabilizing now but it’s fragile
AFP and Reuters 7/15
(Agence France-Presse and Reuters on Deutsche Welle. "China beats expectations on economic growth," DW. 07-15-2015.
http://www.dw.com/en/china-beats-expectations-on-economic-growth/a-18584453//ghs-kw)
Slowing growth in key areas like foreign trade, state investment and domestic demand had prompted economists to predict a year-on-year GDP increase of just under 7 percent for the April-June quarter. The figure, released by the National Bureau of Statistics (NBS) on Wednesday, matched first-quarter growth in China exactly. The government has officially set 7 percent as its
target for GDP growth this year. "We are aware that the domestic and external economic conditions are
still complicated, the global economic recovery is slow and tortuous and the foundation for the
stabilization of China's economy needs to be further consolidated," NBS spokesman Sheng Laiyun told reporters.
However, "the major indicators of the second quarter showed that the growth was stabilized and ready to
pick up, the economy developed with positive changes and the vitality of the economic development
was strengthened," Sheng added. Industrial output, including production at factories, workshops and mines also rose by 6.8
percent in June compared to 6.1 percent in May, the NBS said. Tough transition, stock market fluctuating: The robust growth comes despite a difficult economic year for China. Figures released on Monday showed a dip in foreign trade in the first half of the year - with exports up slightly but imports well down. Public investment, for years the driver of double-digit percentage growth in China, is down as the government seeks to rely more on consumer demand - itself slow to pick up. In recent weeks, the Shanghai stock market
has been falling sharply, albeit after a huge boom in months leading up to the crash.
Surveys prove China is experiencing growth now
Reuters 6/23
(Reuters. "China’s Economy Appears to Be Stabilizing, Reports Show," International New York Times. 6-23-2015.
http://www.nytimes.com/2015/06/24/business/international/chinas-economy-appears-to-be-stabilizing-reports-show.html//ghs-kw)
SHANGHAI — China’s
factory activity showed signs of stabilizing in June, with two nongovernment surveys
suggesting that the economy might be regaining some momentum, while many analysts expected further policy support to
ensure a more sure-footed recovery. The preliminary purchasing managers index for China published by HSBC and compiled by Markit, a data
analysis firm, edged up to 49.6 in June. It was the survey’s highest level in three months but still below the 50 mark, which would have pointed
to an expansion. The final reading for May was 49.2. “The
pickup in new orders” — which returned to positive territory
at 50.3 in June — “was driven by a strong rise in the new export orders subcomponent, suggesting that
foreign demand may finally be turning a corner,” Capital Economics analysts wrote in a research note. “Today’s P.M.I. reading
reinforces our view that the economy has started to find its footing.” But companies stepped up layoffs, the survey showed, shedding jobs at
the fastest pace in more than six years. Annabel Fiddes, an economist at Markit, said: “Manufacturers continued to cut staff. This suggests
companies have relatively muted growth expectations.” She said that she expected Beijing to “step up their efforts to stimulate growth and job
creation.” A much rosier
picture was painted by a separate survey, a quarterly report by China Beige Book International, a
data analysis firm, describing a “broad-based recovery” in the second quarter, led primarily by China’s
interior provinces. “Among major sectors, two developments stand out: a welcome resurgence in retail
— which saw rising revenue growth despite a slip in prices — and a broad-based rebound in property,”
said the report’s authors, Leland Miller and Craig Charney. Manufacturing, services, real estate, agriculture and mining
all had year-on-year and quarterly gains, they said.
2NC US Heg I/L
Chinese growth is key to US hegemony
Yiwei 07 Wang Yiwei, Center for American Studies @ Fudan University, “China's Rise: An Unlikely Pillar of US Hegemony,” Harvard International Review, Volume 29, Issue 1, Spring 2007, pp. 60-63.
China’s rise is taking place in this context. That is to say, Chinese development is merely one facet of Asian and developing states’
economic progress in general. Historically, the United States has provided the dominant development paradigm for the world. But today,
China has come up with development strategies that are different from that of any other nation-state in history and are a consequence
of the global migration of industry along comparative advantage lines. Presently, the movement of light industry and consumer goods
production from advanced industrialized countries to China is nearly complete, but heavy industry is only beginning to move.
Developed countries’ dependence on China will be far more pronounced following this movement. As global
production migrates to China and other developing countries, a feedback loop will emerge and indeed is already
beginning to emerge. Where globalization was once an engine fueled by Western muscle and steered by
Western policy, there is now more gas in the tank but there are also more hands on the steering wheel. In the past,
developing countries were often in a position only to respond to globalization, but now, developed countries must respond as well.
Previously the United States believed that globalization was synonymous with Americanization, but today’s world has witnessed a United
States that is feeling the influence of the world as well. In the past, a sneeze on Wall Street was followed by a downturn in world
markets. But in February 2007, Chinese stocks fell sharply and Wall Street responded with its steepest decline in several years. In this
way, the whirlpool of globalization is no longer spinning in one direction. Rather, it is generating feedback mechanisms and is widening
into an ellipse with two focal points: one located in the United States, the historical leader of the developed world, and one in China,
the strongest country in the new developing world power bloc. Combating Regionalization It is important to extend the discussion
beyond platitudes regarding “US decline” or the “rise of China” and the invective-laden debate over threats and security issues that
arises from these. We must step out of a narrowly national mindset and reconsider what Chinese development means for the United
States. One of the consequences of globalization has been that countries such as China, which depend on
exporting to US markets, have accumulated large dollar reserves. This has been unavoidable for these countries, as they
must purchase dollars in order to keep the dollar strong and thus avoid massive losses. Thus, the United States is bound to bear a
trade deficit, and moreover, this deficit is inextricably tied to the dollar’s hegemony in today’s markets. The
artificially high dollar and the US economy at large depend in a very real sense on China’s investment in the
dollar. Low US inflation and interest rates similarly depend on the thousands of “Made in China” labels
distributed across the United States. As Paul Krugman wrote in The New York Times, the situation is comparable to one in which
“the American sells the house but the money to buy the house comes from China.” Former US treasury secretary Lawrence Summers
even affirms that China and the United States may be in a kind of imprudent “balance of financial terror.” Today, the US trade deficit
with China is US$200 billion. China holds over US$1 trillion in foreign exchange reserves and US$350 billion in US bonds. Together, the
Chinese and US economies account for half of global economic growth. Thus, a fantastic situation has arisen: China’s rise is actually
supporting US hegemony. Taking US hegemony and Western preeminence as the starting point, many have concluded
that the rise of China presents a threat. The premise of this logic is that the international system predicated on US hegemony
and Western preeminence would be destabilized by the rise of a second major power. But this view is inconsistent with the
phenomenon of one-way globalization. The so-called process of one-way globalization can more truly be called
Westernization. Today’s globalization is still in large part driven by the West, inasmuch as it is tinged by Western
unilateralism and entails the dissemination of essentially Western standards and ideology. For example, Coca Cola has become a
Chinese cultural icon, Louis Vuitton stores crowd high-end shopping districts in Shanghai, and, as gender equality progresses,
Chinese women look to Western women for inspiration. In contrast, Haier, the best-known Chinese brand in the United
States, is still relatively unknown, and Wang Fei, who is widely regarded in China as the pop star who was able to make it in the United
States, has less name-recognition there than a first-round American Idol cut.
2NC Growth Impacts
Chinese growth prevents global economic collapse, war over Taiwan and CCP collapse
Lewis ‘08 [Dan, Research Director – Economic Research Council, “The Nightmare of a Chinese
Economic Collapse,” World Finance, 5/13,
http://www.worldfinance.com/news/home/finalbell/article117.html]
In 2001, Gordon Chang authored a global bestseller "The Coming Collapse of China." To suggest that the world’s largest nation of 1.3 billion
people is on the brink of collapse is understandably for many, a deeply unnerving theme. And many seasoned “China Hands” rejected Chang’s
thesis outright. In a very real sense, they were of course right. China’s expansion has continued over the last six years without a
hitch. After notching up a staggering 10.7 percent growth last year, it is now the 4th largest economy in the world with a nominal GDP of
$2.68trn. Yet there are two Chinas that concern us here; the 800 million who live in the cities, coastal and southern regions and the 500 million
who live in the countryside and are mainly engaged in agriculture. The latter – which we in the West hear very little about – are still very poor
and much less happy. Their poverty and misery do not necessarily spell an impending cataclysm – after all, that is how they have always
been. But it does illustrate the inequity of Chinese monetary policy. For many years, the Chinese yuan has been held at an artificially low value to
boost manufacturing exports. This has clearly worked for one side of the economy, but not for the purchasing power of consumers and the
rural poor, some of who are getting even poorer. The central reason for this has been the inability of Chinese monetary policy to adequately
support both Chinas. Meanwhile, rural unrest in China is on the rise
– fuelled not only by an accelerating income gap
with the coastal cities, but by an oft-reported appropriation of their land for little or no compensation by the state.
According to Professor David B. Smith, one of the City’s most accurate and respected economists in recent years, potentially far more serious
though is the impact that Chinese monetary policy could have on many Western nations such as the UK. Quite simply, China’s undervalued
currency has enabled Western governments to maintain artificially strong currencies, reduce inflation and keep interest rates lower than they
might otherwise be. We should therefore be very worried about how vulnerable Western economic growth is to an upward revaluation of the
Chinese yuan. Should that revaluation happen to appease China’s rural poor, at a stroke, the dollar, sterling and the euro would quickly
depreciate, rates in those currencies would have to rise substantially and the yield on government bonds would follow suit. This would add
greatly to the debt servicing cost of budget deficits in the USA, the UK and much of euro land. A reduction in demand for imported Chinese
goods would quickly entail a decline in China’s economic growth rate. That is alarming. It has been calculated that to keep China’s
society stable – ie to manage the transition from a rural to an urban society without devastating unemployment – the minimum growth rate is 7.2 percent. Anything less than that and unemployment will rise and the massive shift
in population from the country to the cities becomes unsustainable. This is when real discontent with communist
party rule becomes vocal and hard to ignore. It doesn’t end there. That will at best bring a global recession. The
crucial point is that communist authoritarian states have at least had some success in keeping a lid on ethnic
tensions – so far. But when multi-ethnic communist countries fall apart from economic stress and the implosion of
central power, history suggests that they don’t become successful democracies overnight. Far from it. There’s a
very real chance that China might go the way of Yugoslavia or the Soviet Union – chaos, civil unrest and
internecine war. In the very worst case scenario, a Chinese government might seek to maintain national cohesion by going
to war with Taiwan – whom America is pledged to defend.
Chinese economic growth prevents global nuclear war
Kaminski 7 (Antoni Z., Professor – Institute of Political Studies, “World Order: The Mechanics of
Threats (Central European Perspective)”, Polish Quarterly of International Affairs, 1, p. 58)
As already argued, the economic advance of China has taken place with relatively few corresponding changes in the political system, although
the operation of political and economic institutions has seen some major changes. Still, tools are missing that would allow the establishment of
political and legal foundations for the modem economy, or they are too weak. The tools are efficient public administration, the rule of law,
clearly defined ownership rights, efficient banking system, etc. For these reasons, many experts fear an
economic crisis in China.
Considering the importance of the state for the development of the global economy, the crisis would have serious global
repercussions. Its political ramifications could be no less dramatic owing to the special position the military occupies in the Chinese
political system, and the existence of many potential vexed issues in East Asia (disputes over islands in the China Sea and the Pacific). A
potential hotbed of conflict is also Taiwan's status. Economic recession and the related destabilization of internal policies
could lead to a political, or even military, crisis. The likelihood of the global escalation of the conflict is high, as the
interests of Russia, China, Japan, Australia and, first and foremost, the US clash in the region.
China’s economic rise is good --- they’re on the brink of collapse --- causes CCP
instability and lashout --- also tubes the global economy, US primacy, and Sino
relations
Mead 9 Walter Russell Mead, Henry A. Kissinger Senior Fellow in U.S. Foreign Policy at the Council on
Foreign Relations, “Only Makes You Stronger,” The New Republic, 2/4/9,
http://www.tnr.com/story_print.html?id=571cbbb9-2887-4d81-8542-92e83915f5f8
The greatest danger both to U.S.-China relations and to American power itself is probably not that China will
rise too far, too fast; it is that the current crisis might end China's growth miracle. In the worst-case scenario, the turmoil in
the international economy will plunge China into a major economic downturn. The Chinese financial system will
implode as loans to both state and private enterprises go bad. Millions or even tens of millions of Chinese will be
unemployed in a country without an effective social safety net. The collapse of asset bubbles in the stock and
property markets will wipe out the savings of a generation of the Chinese middle class. The political consequences
could include dangerous unrest--and a bitter climate of anti-foreign feeling that blames others for China's woes.
(Think
of Weimar Germany, when both Nazi and communist politicians blamed the West for Germany's economic travails.) Worse,
instability could lead to a vicious cycle, as nervous investors moved their money out of the country, further slowing
growth and, in turn, fomenting ever-greater bitterness. Thanks to a generation of rapid economic growth, China has so
far been able to manage the stresses and conflicts of modernization and change; nobody knows what will happen
if the growth stops.
Growth decline threatens CCP rule---they’ll start diversionary wars in response
Shirk 7 Susan L. Shirk is an expert on Chinese politics and former Deputy Assistant Secretary of State
during the Clinton administration. She was in the Bureau of East Asia and Pacific Affairs (People's
Republic of China, Taiwan, Hong Kong and Mongolia). She is currently a professor at the Graduate
School of International Relations and Pacific Studies at the University of California, San Diego. She is also
a Senior Director of Albright Stonebridge Group, a global strategy firm, where she assists clients with
issues related to East Asia. “China: Fragile Superpower,” Book
By sustaining high rates of economic growth, China’s leaders create new jobs and limit the number
of unemployed workers who might go to the barricades. Binding the public to the Party through nationalism also helps preempt opposition. The
trick is to find a foreign policy approach that can achieve both these vital objectives simultaneously. How long can it last? Viewed objectively, China’s communist regime looks surprisingly resilient. It may be capable of surviving for years to come so long as the economy continues to grow and create jobs. Survey research in Beijing shows widespread support (over 80 percent) for
the political system as a whole linked to sentiments of nationalism and acceptance of the CCP’s argument about “stability first.”97 Without making any fundamental changes in the CCP-dominated political system—leaders from time to time have toyed with reform ideas such as local elections but in each instance have backed away for fear of losing control—the Party has
bought itself time. As scholar Pei Minxin notes, the ability of communist regimes to use their patronage and coercion to hold on to power gives them little incentive to give up any of that
power by introducing gradual democratization from above. Typically, only when communist systems implode do their political fundamentals change.98 As China’s leaders well know, the
greatest political risk lying ahead of them is the possibility of an economic crash that throws millions of
workers out of their jobs or sends millions of depositors to withdraw their savings from the shaky banking system. A massive environmental or public
health disaster also could trigger regime collapse, especially if people’s lives are endangered by a media cover-up imposed by Party authorities.
Nationwide rebellion becomes a real possibility when large numbers of people are upset about the same issue at the
same time. Another dangerous scenario is a domestic or international crisis in which the CCP leaders feel compelled
to lash out against Japan, Taiwan, or the United States because from their point of view not lashing out might
endanger Party rule.
Chinese Growth Key to Military Restraint on Taiwan- Decline of Economic Influence
Causes China to Resort to Military Aggression
Lampton ’03 (David, Director of Chinese Studies, Nixon Center, FDCH, 3/18)
The Chinese realize that power has different faces--military, economic, and normative (ideological)
power. Right now, China is finding that in the era of globalization, economic power (and potential economic
power) is the form of power it has in greatest abundance and which it can use most effectively. As long as
economic influence continues to be effective for Beijing, as it now seems to be in dealing with Taiwan, for
example, China is unlikely to resort to military intimidation as its chief foreign policy instrument.
Decline causes lashout- nationalists target the US and Taiwan
Friedberg, professor of IR at Princeton, 2011 (July/August, Aaron L., professor of politics and international affairs at
the Woodrow Wilson School at Princeton University, Hegemony with Chinese Characteristics, The National Interest, lexis)
Such fears of
aggression are heightened by an awareness that anxiety over a lack of legitimacy at home
can cause nondemocratic governments to try to deflect popular frustration and discontent toward
external enemies. Some Western observers worry, for example, that if China’s economy falters its rulers will try to
blame foreigners and even manufacture crises with Taiwan, Japan or the United States in order to rally
their people and redirect the population’s anger. Whatever Beijing’s intent, such confrontations
could easily spiral out of control. Democratic leaders are hardly immune to the temptation of foreign adventures. However, because
the stakes for them are so much lower (being voted out of office rather than being overthrown and imprisoned, or worse), they are less likely to
take extreme risks to retain their hold on power.
2NC China-India War Impact
Economic collapse will crush party legitimacy and ignite social instability
Li 9 (Cheng, Dir. of Research, John L. Thornton China Center, “China’s Team of Rivals,” Brookings Foundation Article series, March, http://www.brookings.edu/articles/2009/03_china_li.aspx)
The two dozen senior politicians who walk the halls of Zhongnanhai, the compound of the Chinese Communist Party’s leadership in Beijing, are
worried. What was inconceivable a year ago now threatens their rule: an economy in freefall. Exports, critical to
China’s searing economic growth, have plunged. Thousands of factories and businesses, especially those
in the prosperous coastal regions, have closed. In the last six months of 2008, 10 million workers, plus 1
million new college graduates, joined the already gigantic ranks of the country’s unemployed. During the
same period, the Chinese stock market lost 65 percent of its value, equivalent to $3 trillion. The crisis,
President Hu Jintao said recently, “is a test of our ability to control a complex situation, and also a test of
our party’s governing ability.” With this rapid downturn, the Chinese Communist Party suddenly looks vulnerable.
Since Deng Xiaoping initiated economic reforms three decades ago, the party’s legitimacy has relied upon its ability to keep the
economy running at breakneck pace. If China is no longer able to maintain a high growth rate or provide jobs for its
ever-growing labor force, massive public dissatisfaction and social unrest could erupt. No one realizes this possibility more
than the handful of people who steer China’s massive economy. Double-digit growth has sheltered them through a SARS epidemic, massive
earthquakes, and contamination scandals. Now, the crucial question is whether they are equipped to handle an economic crisis of
this magnitude—and survive the political challenges it will bring. This year marks the 60th anniversary of the People’s Republic, and
the ruling party is no longer led by one strongman, like Mao Zedong or Deng Xiaoping. Instead, the Politburo and its Standing
Committee, China’s most powerful body, are run by two informal coalitions that compete against each other for power,
influence, and control over policy. Competition in the Communist Party is, of course, nothing new. But the jockeying today is no longer a
zero-sum game in which a winner takes all. It is worth remembering that when Jiang Zemin handed the reins to his successor, Hu
Jintao, in 2002, it marked the first time in the republic’s history that the transfer of power didn’t involve bloodshed or purges. What’s more, Hu
was not a protégé of Jiang’s; they belonged to competing factions. To borrow a phrase popular in Washington these days, post-Deng
China has been run by a team of rivals. This internal competition was enshrined as party practice a little more than a year ago. In
October 2007, President Hu surprised many
China watchers by abandoning the party’s normally straightforward succession procedure
and designating not one but two heirs apparent. The Central Committee named Xi Jinping and Li Keqiang—two very different
leaders in their early 50s—to the nine-member Politburo Standing Committee, where the rulers of China are groomed. The future roles of
these two men, who will essentially share power after the next party congress meets in 2012, have since been refined: Xi will be the
candidate to succeed the president, and Li will succeed Premier Wen Jiabao. The two rising stars share little in terms of family
background, political association, leadership skills, and policy orientation. But they are each heavily involved in shaping economic policy—and
they are expected to lead the two competing coalitions that will be relied upon to craft China’s political and
economic trajectory in the next decade and beyond.
Regime collapse causes China-India war
Cohen ’02 (Stephen, Senior Fellow – Brookings Institution, “Nuclear Weapons and Nuclear War in South Asia: An Unknowable Future”, May,
http://www.brookings.edu/dybdocroot/views/speeches/cohens20020501.pdf)
A similar argument may be made with respect to China. China is a country that has had its share of upheavals in the past. While there is no
expectation today of renewed internal turmoil, it is important to remember that closed authoritarian societies are subject to deep crisis in
moments of sudden change. The breakup of the Soviet Union and Yugoslavia, and the turmoil that has ravaged many members of the former
communist bloc are examples of what could happen to China. A severe economic crisis, rebellions in Tibet and Xinjiang, a reborn democracy
movement and a party torn by factions could be the ingredients of an unstable situation. A vulnerable Chinese leadership determined to bolster
its shaky position by an aggressive policy toward India or the United States or both might become involved in a major crisis with India, perhaps
engage in nuclear saber-rattling. That would encourage India to adopt a stronger nuclear posture, possibly with American assistance.
Causes nuclear use
Jonathan S. Landay, National Security and Intelligence Correspondent, 2K [“Top Administration Officials Warn Stakes for U.S. Are High in
Asian Conflicts”, Knight Ridder/Tribune News Service, March 10, p. Lexis]
Few if any experts think China and Taiwan, North Korea and South Korea, or India and Pakistan are spoiling to fight.
But even a minor miscalculation by any of them could destabilize Asia, jolt the global economy and even start a
nuclear war. India, Pakistan and China all have nuclear weapons, and North Korea may have a few, too. Asia lacks
the kinds of organizations, negotiations and diplomatic relationships that helped keep an uneasy peace for five
decades in Cold War Europe. “Nowhere else on Earth are the stakes as high and relationships so fragile,” said Bates Gill, director of
northeast Asian policy studies at the Brookings Institution, a Washington think tank. “We see the convergence of great power interest overlaid
with lingering confrontations with no institutionalized security mechanism in place. There are elements for potential disaster.” In an effort to
cool the region’s tempers, President Clinton, Defense Secretary William S. Cohen and National Security Adviser Samuel R. Berger all will
hopscotch Asia’s capitals this month. For America, the stakes could hardly be higher. There are 100,000 U.S. troops in Asia committed to
defending Taiwan, Japan and South Korea, and the United States would instantly become embroiled if Beijing moved against Taiwan or North
Korea attacked South Korea. While Washington has no defense commitments to either India or Pakistan, a
conflict
between the two could end the global taboo against using nuclear weapons and demolish the already shaky
international nonproliferation regime. In addition, globalization has made a stable Asia _ with its massive markets, cheap labor,
exports and resources _ indispensable to the U.S. economy. Numerous U.S. firms and millions of American jobs depend on trade with Asia that
totaled $600 billion last year, according to the Commerce Department.
2NC Bioweapons Impact
The CCP would lash out for power, and they would use bioweapons
Renxin 05 (San Renxin, Journalist, 8-3-2K5, “CCP Gambles Insanely to Avoid Death,” Epoch Times, www.theepochtimes.com/news/5-83/30931.html)
Since the Party’s life is “above all else,” it would not be surprising if the CCP resorts to the use of
biological, chemical, and nuclear weapons in its attempt to postpone its life. The CCP, that disregards human
life, would not hesitate to kill two hundred million Americans, coupled with seven or eight hundred
million Chinese, to achieve its ends. The “speech,” free of all disguises, lets the public see the CCP for what it really is: with evil
filling its every cell, the CCP intends to fight all of mankind in its desperate attempt to cling to life. And that is the
theme of the “speech.” The theme is murderous and utterly evil. We did witness in China beggars who demanded money from people by
threatening to stab themselves with knives or prick their throats on long nails. But we have never, until now, seen a rogue who blackmails the
world to die with it by wielding biological, chemical, and nuclear weapons. Anyhow, the bloody confession affirmed the CCP’s bloodiness: a
monstrous murderer, who has killed 80 million Chinese people, now plans to hold one billion people hostage and gamble with their lives. As the
CCP is known to be a clique with a closed system, it is extraordinary for it to reveal its top secret on its own. One might ask: what is the CCP’s
purpose to make public its gambling plan on its deathbed? The answer is: the “speech” would have the effect of killing three birds with one
stone. Its intentions are the following: Expressing the CCP’s resolve that it “not be buried by either heaven or earth” (direct quote from the
“speech”). But then, isn’t the CCP opposed to the universe if it claims not to be buried by heaven and earth? Feeling the urgent need to harden
its image as a soft egg in the face of the Nine Commentaries. Preparing publicity for its final battle with mankind by threatening war and
trumpeting violence. So, strictly speaking, what the CCP has leaked out is more of an attempt to clutch at straws to save its life rather than to
launch a trial balloon. Of course, the way the “speech” was presented had been carefully prepared. It did not have a usual opening or ending,
and the audience, time, place, and background related to the “speech” were all kept unidentified. One may speculate or imagine as one may,
but never verify. The aim was obviously to create a mysterious setting. In short, the “speech” came out as something one finds difficult to tell
whether it is false or true.
Outweighs and causes extinction
Ochs 2, Past president of the Aberdeen Proving Ground Superfund Citizens Coalition, Member of the Depleted Uranium Task Force of the Military Toxics Project, and Member of the Chemical Weapons Working Group [Richard Ochs, June 9, 2002, “Biological Weapons Must Be Abolished Immediately,” http://www.freefromterror.net/other_articles/abolish.html]
Of all the weapons of mass destruction, the genetically
engineered biological weapons, many without a known cure
or vaccine, are an extreme danger to the continued survival of life on earth. Any perceived military value or
deterrence pales in comparison to the great risk these weapons pose just sitting in vials in laboratories. While a “nuclear winter,” resulting
from a massive exchange of nuclear weapons, could also kill off most of life on earth and severely compromise the health of future generations,
they are
easier to control. Biological weapons, on the other hand, can get out of control very easily, as the
recent anthrax attacks has demonstrated. There is no way to guarantee the security of these doomsday weapons because very tiny
amounts can be stolen or accidentally released and then grow or be grown to horrendous proportions. The Black Death of
the Middle Ages would be small in comparison to the potential damage bioweapons could cause. Abolition of chemical weapons is less of a
priority because, while they can also kill millions of people outright, their persistence in the environment would be less than nuclear or
biological agents or more localized. Hence, chemical weapons would have a lesser effect on future generations of innocent people and the
natural environment. Like the Holocaust, once a localized chemical extermination is over, it is over. With nuclear and biological weapons, the
killing will probably never end. Radioactive elements last tens of thousands of years and will keep causing cancers virtually forever. Potentially
worse than that, bio-engineered agents
by the hundreds with no known cure could wreak even greater calamity
on the human race than could persistent radiation. AIDS and ebola viruses are just a small example of recently emerging plagues with no
known cure or vaccine. Can we imagine hundreds of such plagues? HUMAN EXTINCTION IS NOW POSSIBLE. Ironically, the Bush
administration has just changed the U.S. nuclear doctrine to allow nuclear retaliation against threats upon allies by conventional weapons. The
past doctrine allowed such use only as a last resort when our nation’s survival was at stake. Will the new policy also allow easier use of US
bioweapons? How slippery is this slope?
2NC AT Collapse Good
Reject their collapse good arguments—they’re racist and incoherent—Chinese collapse
decimates the U.S. for several reasons
Karabell, 13—PhD @ Harvard, President of River Twice Research
Zachary, “The U.S. can’t afford a Chinese economic collapse,” The Edgy Optimist, a Reuters blog run by
Karabell, March 7, http://blogs.reuters.com/edgy-optimist/2013/03/07/the-u-s-cant-afford-a-chineseeconomic-collapse/ --BR
Is China about to collapse? That question has been front and center in the past weeks as the country completes its
leadership transition and after the exposure of its various real estate bubbles during a widely watched 60 Minutes exposé this past weekend.
Concerns about soaring property prices throughout China are hardly new, but they have been given added weight by
the government itself. Recognizing that a rapid implosion of the property market would disrupt economic growth, the central
government recently announced far-reaching measures designed to dent the rampant speculation. Higher down payments, limiting the
purchases of investment properties, and a capital gains tax on real estate transactions designed to make flipping properties less lucrative were
included. These measures, in conjunction with the new government’s announcing more modest growth targets of 7.5 percent a year, sent
Chinese equities plunging and led to a slew of commentary in the United States saying China would be the next shoe to drop in the global
system. Yet
there is more here than simple alarm over the viability of China’s economic growth. There is
the not-so-veiled undercurrent of rooting against China. It is difficult to find someone who explicitly
wants it to collapse, but the tone of much of the discourse suggests bloodlust. Given that China largely
escaped the crises that so afflicted the United States and the eurozone, the desire to see it stumble may
be understandable. No one really likes a global winner if that winner isn’t you. The need to see China
fail verges on jingoism. Americans distrust the Chinese model, find that its business practices verge on
the immoral and illegal, that its reporting and accounting standards are sub-par at best and that its
system is one of crony capitalism run by crony communists. On Wall Street, the presumption usually
seems to be that any Chinese company is a ponzi scheme masquerading as a viable business. In various
conversations and debates, I have rarely heard China’s economic model mentioned without disdain.
Take, as just one example, Gordon Chang in Forbes: “Beijing’s technocrats can postpone a reckoning,
but they have not repealed the laws of economics. There will be a crash.” The consequences of a
Chinese collapse, however, would be severe for the United States and for the world. There could be no
major Chinese contraction without a concomitant contraction in the United States. That would mean
sharply curtailed Chinese purchases of U.S. Treasury bonds, far less revenue for companies like General
Motors, Nike, KFC and Apple that have robust business in China (Apple made $6.83 billion in the fourth quarter of 2012,
up from $4.08 billion a year prior), and far fewer Chinese imports of high-end goods from American and Asian
companies. It would also mean a collapse of Chinese imports of materials such as copper, which would
in turn harm economic growth in emerging countries that continue to be a prime market for American,
Asian and European goods. China is now the world’s second-largest economy, and property booms have been one aspect of its
growth. Individual Chinese cannot invest outside of the country, and the limited options of China’s stock exchanges and almost nonexistent
bond market mean that if you are middle class and want to do more than keep your money in cash or low-yielding bank accounts, you buy
either luxury goods or apartments. That has meant a series of property bubbles over the past decade and a series of measures by state and
local officials to contain them. These recent measures are hardly the first, and they are not likely to be the last. The past 10 years have seen
wild swings in property prices, and as recently as 2011 the government took steps to cool them; the number of transactions plummeted and
prices slumped in hot markets like Shanghai as much as 30, 40 and even 50 percent. You could go back year by year in the 2000s and see similar
bubbles forming and popping, as the government reacted to sharp run-ups with restrictions and then eased them when the pendulum
threatened to swing too far. China has had a series of property bubbles and a series of property busts. It has also had massive urbanization that
in time has absorbed the excess supply generated by massive development. Today much of that supply is priced far above what workers
flooding into China’s cities can afford. But that has always been true, and that housing has in time been purchased and used by Chinese families
who are moving up the income spectrum, much as U.S. suburbs evolved in the second half of the 20th century. More to the point, all property
bubbles are not created equal. The housing bubbles in the United States and Spain, for instance, would never have been so disruptive without
the massive amount of debt and the financial instruments and derivatives based on them. A bursting housing bubble absent those would have
been a hit to growth but not a systemic crisis. In China, most buyers pay cash, and there is no derivative market around mortgages (at most
there’s a small shadow market). Yes, there are all sorts of unofficial transactions with high-interest loans, but even there, the consequences of
busts are not the same as they were in the United States and Europe in recent years. Two
issues converge whenever China is
discussed in the United States: fear of the next global crisis, and distrust and dislike of the country.
Concern is fine; we should always be attentive to possible risks. But China’s property bubbles are an unlikely risk,
because of the absence of derivatives and because the central government is clearly alert to the market’s behavior. Suspicion and
antipathy, however, are not constructive. They speak to the ongoing difficulty China poses to Americans’
sense of global economic dominance and to the belief in the superiority of free-market capitalism to
China’s state-managed capitalism. The U.S. system may prove to be more resilient over time; it has
certainly proven successful to date. Its success does not require China’s failure, nor will China’s success
invalidate the American model. For our own self-interest we should be rooting for their efforts, and not
jingoistically wishing for them to fail.
2NC AT Collapse Inevitable
Status quo isn’t sufficient to trigger collapse because the US is lagging behind
Forbes, 7/9/2014
US Finance/Economics News Report Service
(“John Kerry In Beijing: Four Good Reasons Why The Chinese View American Leaders As Empty
Suits”, http://www.forbes.com/sites/eamonnfingleton/2014/07/09/john-kerry-in-beijing-four-goodreasons-why-the-chinese-treat-american-leaders-as-jackasses/)
2. American policymakers have procrastinated in meeting the Chinese challenge because they have
constantly – for more than a decade now – been misled by siren American voices predicting an
imminent Chinese financial collapse. China is a big economy and large financial collapses are not
inconceivable. But even the most disastrous such collapse would be unlikely to stop the Chinese export
drive in its tracks. American policymakers have failed to pay sufficient attention to the central objective
of Chinese policy, which is to take over from the United States, Japan and Germany as the world’s
premier source of advanced manufactured products.
Consensus exists and the best markers point to a slow decline, and the worst markers
make sense in the context of China
Huang, 2/11, a senior associate in the Carnegie Asia Program, where his research focuses on China’s
economic development and its impact on Asia and the global economy (Yukon, “Do Not Fear a Chinese
Property Bubble”, Carnegie Endowment for International Peace,
http://carnegieendowment.org/2014/02/11/do-not-fear-chinese-property-bubble/h0oz)
Yet when
analysts drill into the balance sheets of borrowers and banks, they find little evidence of
impending disaster. Government debt ratios are not high by global standards and are backed by valuable assets
at the local level. Household debt is a fraction of what it is in the west, and it is supported by savings and rising incomes.
The profits and cash positions of most firms for which data are available have not deteriorated significantly
while sovereign guarantees cushion the more vulnerable state enterprises. The consensus, therefore, is that China’s debt
situation has weakened but is manageable.¶ Why are the views from detailed sector analysis so different from the red flags
signalled by the broader macro debt indicators? The answer lies in the role that land values play in shaping these trends.¶ Take the two most
pressing concerns: rising debt levels as a share of gross domestic product and weakening links between credit expansion and GDP growth. The
first relates to the surge in the ratio of total credit to GDP by about 50-60 percentage points over the past five years, which is viewed as a strong
predictor of an impending crash. Fitch, a rating agency, is among those who see this as the fallout from irresponsible shadow-banking which is
being channelled into property development, creating a bubble. The second concern is that the “credit impulse” to growth has diminished,
meaning that more and more credit is needed to generate the same amount of GDP, which reduces prospects for future deleveraging.¶ Linking
these two concerns is the price of land including related mark-ups levied by officials and developers. But its significance is not well understood
because China’s property market emerged only in the late 1990s, when the decision was made to privatise housing. A functioning resale market
only began to form around the middle of the last decade. That is why the large stimulus programme in response to the Asia financial crisis more
than a decade ago did not manifest itself in a property price surge, whereas the 2008-9 stimulus did.¶ Over the past decade, no other factor has
been as important as rising property values in influencing growth patterns and perceptions of financial risks. The weakening impact of credit on
growth is largely explained by the divergence between fixed asset investment (FAI) and gross fixed capital formation (GFCF). Both are measures
of investment. FAI measures investment in physical assets including land while GFCF measures investment in new equipment and structures,
excluding the value of land and existing assets. This latter feeds directly into GDP, while only a portion of FAI shows up in GDP accounts.¶ Until
recently, the difference between the two measures did not matter in interpreting economic trends: both were increasing at the same rate and
reached about 35 per cent of GDP by 2002-03. Since then, however, they have diverged and GFCF now stands at 45 per cent of GDP while the
share of FAI has jumped to 70 per cent.¶ Overall credit levels have increased in line with the rapid growth in FAI rather than the more modest
growth in GFCF. Most of the difference between the ratios is explained by rising asset prices. Thus a large share of the surge in credit is
financing property related transactions which explains why the growth impact of credit has declined.¶ Is
the increase in property
and underlying land prices sustainable, or is it a bubble? Part of the explanation is unique to China. Land in China is an
asset whose market value went largely unrecognised when it was totally controlled by the State. Once a private property market was created,
the process of discovering land’s intrinsic value began, but establishing such values takes time in a rapidly changing economy.¶ The
Wharton/NUS/Tsinghua Land Price
Index indicates that from 2004-2012, land prices have increased approximately
fourfold nationally, with more dramatic increases in major cities such as Beijing balanced by modest rises in secondary cities. Although
this may seem excessive, such growth rates are similar to what happened in Russia after it privatised its housing stock. Once the economy
stabilised, housing prices in Moscow increased six fold in just six years.¶ Could
investors have overshot the mark in China?
Possibly, but the land values should be high given China’s large population, its shortage of plots that are suitable
for construction and its rapid economic growth. Nationally, the ratio of incomes to housing prices has improved and is now comparable to the
levels found in Australia, Taiwan and the UK. In Beijing and Shanghai prices are similar to or lower than Delhi, Singapore and Hong Kong.¶
Much of the recent surge in the credit to GDP ratio is actually evidence of financial deepening rather
than financial instability as China moves toward more market-based asset values. If so, the higher credit
ratios are fully consistent with the less alarming impressions that come from scrutiny of sector specific
financial indicators.
2NC AT Stocks
China’s stock market is loosely tied to its economy—structural factors are fine and
stock declines don’t accurately reflect growth
Rapoza 7/9
(Kenneth Rapoza. Contributing Editor at Forbes. "Don't Mistake China's Stock Market For China's Economy," Forbes. 7-9-2015.
http://www.forbes.com/sites/kenrapoza/2015/07/09/dont-mistake-chinas-stock-market-for-chinas-economy///ghs-kw)
China’s A-share market is rebounding, but whether or not it has hit bottom is beside the point. What matters is
this: the equity market in China is more or less a gambling den dominated by retail investors who
make their investment decisions based on what they read in investor newsletters. It’s a herd
mentality. And more importantly, their trading habits do not reflect economic fundamentals. “The
country’s stock market plays a smaller role in its economy than the U.S. stock market does in ours, and
has fewer linkages to the rest of the economy,” says Bill Adams, PNC Financial’s senior international economist in Pittsburgh.
The fact that the two are unhinged limits the potential for China’s equity correction — or a bubble — to
trigger widespread economic distress. The recent 25% decline in the Deutsche X-Trackers China A-Shares
(ASHR) fund, partially recuperated on Thursday, is not a signal of an impending Chinese recession. PNC’s baseline
forecast for Chinese real GDP growth in 2015 remains unchanged at 6.8% despite the correction, a
correction which has been heralded by the bears as the beginning of the end for China’s capitalist experiment. China’s economy, like its
market, is transforming. China is moving away from being a low-cost producer and exporter, to becoming
a consumer driven society. It wants to professionalize its financial services sector, and build a green-tech
economy to help eliminate its pollution problems. It’s slowly opening its capital account and taking steps
to reforming its financial markets. There will be errors and surprises, and anyone who thinks otherwise will be disappointed. Over
the last four weeks, the Chinese government misplayed its hand when it decided to use tools for the economy
— mainly an interest rate reduction and reserve ratio requirement cuts for banks in an effort to provide the market
with more liquidity. It worked for a little while, and recent moves to change rules on margin, and even utilize a circuit-breaker mechanism to
temporarily delist fast-tanking companies from the mainland stock market, might have worked if the Greece crisis didn’t pull the plug on global
risk. The timing was terrible. And it pushed people into panic selling, turning China into the biggest financial market
headline this side of Athens. For better or for worse, Beijing now has no choice but to go all-in to defend equities, some investors told FORBES.
But China’s
real economy is doing much better than the Shanghai and Shenzhen exchanges suggest.
According to China Beige Book, the Chinese economy actually recovered last quarter. Markets are focusing on
equities and PMI indicators from the state and HSBC as a gauge, but it should become clear in the
coming weeks that China’s stock market is not a reflection of the fundamentals. The Good, The Bad and the Ugly
To get a more detailed picture of what is driving China’s growth slowdown, it is necessary to look at a broader array of economic and financial
indicators. The epicenter of China’s problems are the industrial and property sectors. Shares of the Shanghai Construction Group, one of the
largest developers listed on the Shanghai stock exchange, is down 42.6% in the past four weeks, two times worse than the Shanghai Composite
Index. China Railway Group is down 33%, also an underperformer. Growth in real industrial output has declined from 14% in mid-2011 to 5.9%
in April, growth in fixed-asset investment declined 50% over the same period and electricity consumption by primary and secondary industries
is in decline. China’s trade with the outside world is also falling, though this data does not always match up with other countries’ trade figures.
Real estate is in decline as Beijing has put the brakes on its housing bubble. Only the east coast cities are still seeing price increases, but
construction is not booming in Shanghai anymore. The two main components that have prevented a deeper downturn in activity are private
spending on services, particularly financial services, and government-led increases in transportation infrastructure like road and rail. Retail
sales, especially e-commerce sales that have benefited the likes of Alibaba and Tencent, both of which have outperformed the index, have been
growing faster than the overall economy. Electricity consumption in the services sector is expanding strongly. Growth in household incomes is
outpacing GDP growth. “China has begun the necessary rebalancing towards a more sustainable, consumption-led growth model,” says Jeremy
Lawson, chief economist at Standard Life Investments in the U.K. He warns that “it’s still too early to claim success.” Since 2011, developed
markets led by the S&P 500 have performed better than China, but for one reason and one reason only: The central banks of Europe, the U.K.,
Japan and of course the U.S. have bought up assets in unprecedented volumes using printed money, or outright buying securities like the Fed’s
purchase of bonds and mortgage backed securities. Why bemoan China’s state intervention when central bank intervention has been what kept
southern Europe afloat, and the U.S. stock market on fire since March 2009? Companies
in China are still making money. “I
think people
have no clue on China,” says Jan Dehn, head of research at Ashmore in London, a $70 billion emerging market fund
manager with money at work in mainland China securities. “They don’t see the big picture. And they forget it is still an
emerging market. The Chinese make mistakes and will continue to make mistakes like all governments.
However, they will learn from their mistakes. The magnitude of most problems are not such that they
lead to systematic meltdown. Each time the market freaks out, value — often deep value — starts to
emerge. Long term, these volatile episodes are mere blips. They will not change the course of internationalization and
maturing of the market,” Dehn told FORBES. China is still building markets. It has a large environmental problem that will bode well
for green tech firms like BYD. Its middle class is not shrinking. Its billionaires are growing in numbers. They are
reforming all the time. And in the long term, China is going to win. Markets are impatient and love a good drama. But investing is not a
soap opera. It’s not Keeping up with the Kardashians you’re buying, you’re buying the world’s No. 2 economy, the biggest commodity consumer
in the world, and home to 1.4 billion people, many of which have been steadily earning more than ever. China’s transition will cause temporary
weakness in growth and volatility, maybe even crazy volatility. But you have to break eggs to make an omelette, says Dehn. Why
The
Stock Market Correction Won’t Hurt China The Chinese equity correction is healthy and unlikely to have
major adverse real economy consequences for several reasons: First, China’s A-shares are still up 79%
over the past 12 months. A reversal of fortunes was a shoo-in to occur. Second, Chinese banks are
basically not involved in providing leverage and show no signs of stress. The total leverage in Chinese
financial markets is about four trillion yuan ($600 billion). Stock market leverage is concentrated in the informal sector –
with trust funds and brokerages accounting for a little over half of the leverage. Margin financing via brokerages is down from 2.4 trillion yuan
to 1.9 trillion yuan and let’s not forget that Chinese GDP is about 70 trillion yuan. Third,
there is very little evidence that the
moves in the stock market will have a major impact on the real economy and consumption via portfolio
loss. Stocks comprise only 15% of total wealth. Official sector institutions are large holders of stocks and
their spending is under control of the government. As for the retail investor, they spend far less of their
wealth than other countries. China has a 49% savings rate. Even if they lost half of it, they would be
saving more than Americans, the highly indebted consumer society the world loves to love. During the rally
over the past twelve months, the stock market bubble did not trigger a boost in consumption indicating that
higher equity gains didn’t impact spending habits too much. The Chinese stock market is only 5% of total
social financing in China. Stock markets only finance 2% of Chinese fixed asset investment. Only 1% of
company loans have been put up with stocks as collateral, so the impact on corporate activity is going to
be limited. “The rapid rally and the violent correction illustrate the challenges of capital account liberalization, the need for a long-term
institutional investor base, index inclusion and deeper financial markets, including foreign institutional investors,” Dehn says. The A-shares
correction is likely to encourage deeper financial reforms, not a reversal.
Plan Flaw
1NCs
1NC CT
Counterplan text: The United States federal government should neither mandate the
creation of surveillance backdoors in products nor request privacy keys, and should terminate current backdoors created either by government mandates or government-requested keys.
Three arguments here:
1. A. Mandate means “to make required”
Merriam-Webster’s Dictionary of Law 96
(Merriam-Webster’s Dictionary of Law, 1996, http://dictionary.findlaw.com/definition/mandate.html//ghs-kw)
mandate n [Latin mandatum , from neuter of mandatus , past participle of mandare to entrust, enjoin, probably irregularly from
manus hand + -dere to put] 1 a : a formal communication from a reviewing court notifying the court below of its judgment and
directing the lower court to act accordingly b : mandamus 2 in the civil law of Louisiana : an act by which a person gives another
person the power to transact for him or her one or several affairs 3 a : an authoritative command : a clear authorization or direction
[the mandate of the full faith and credit clause "National Law Journal "] b : the authorization to act given by a constituency to its elected representative vt man·dat·ed man·dat·ing : to make mandatory or required [the Pennsylvania Constitution mandates a criminal defendant's right to confrontation "National Law Journal "]
B. Circumvention: companies including those under PRISM agree to provide
data because the government pays them
Timberg and Gellman 13
(Timberg, Craig and Gellman, Barton. Reporters for the Washington Post, citing government budgets and internal documents.
“NSA paying U.S. companies for access to communications networks,” Washington Post. 8/29/2013.
https://www.washingtonpost.com/world/national-security/nsa-paying-us-companies-for-access-to-communicationsnetworks/2013/08/29/5641a4b6-10c2-11e3-bdf6-e4fc677d94a1_story.html//ghs-kw)
The National Security Agency is paying hundreds of millions of dollars a year to U.S. companies
for clandestine access to their communications networks, filtering vast traffic flows for foreign targets in a process that
also sweeps in large volumes of American telephone calls, e-mails and instant messages. The bulk of the spending, detailed in a multivolume intelligence budget obtained by The Washington Post, goes to participants in a Corporate Partner Access Project for major U.S.
telecommunications providers. The documents open an important window into surveillance operations on U.S. territory that have
been the subject of debate since they were revealed by The Post and Britain’s Guardian newspaper in June. New details of the corporate-partner
project, which falls under the NSA’s Special Source Operations, confirm that the
agency taps into “high volume circuit and
packet-switched networks,” according to the spending blueprint for fiscal 2013. The program was expected to cost $278 million in the
current fiscal year, down nearly one-third from its peak of $394 million in 2011. Voluntary cooperation from the “backbone” providers of global
communications dates to the 1970s under the cover name BLARNEY, according to documents provided by former NSA contractor Edward Snowden.
These relationships long predate the PRISM program disclosed in June, under which American technology companies hand over customer data after
receiving orders from the Foreign Intelligence Surveillance Court. In briefing slides, the NSA described BLARNEY and three other corporate projects —
OAKSTAR, FAIRVIEW and STORMBREW — under the heading of “passive” or “upstream” collection. They capture data as they move across fiber-optic
cables and the gateways that direct global communications traffic. The documents offer a
rare view of a secret surveillance economy in which government officials set financial terms for programs capable of peering into the lives of almost
anyone who uses a phone, computer or other device connected to the Internet. Although the companies are required to comply with lawful
surveillance orders, privacy advocates say the multimillion-dollar
payments could create a profit motive to offer
more than the required assistance. “It turns surveillance into a revenue stream, and that’s not the way
it’s supposed to work,” said Marc Rotenberg, executive director of the Electronic Privacy Information Center, a Washington-based research and
advocacy group. “The fact that the government is paying money to telephone companies to turn over information that they are compelled to turn over
is very troubling.” Verizon, AT&T and other major telecommunications companies declined to comment for this article, although several industry
officials noted that government surveillance laws explicitly call for companies to receive reasonable reimbursement for their costs. Previous news
reports have made clear that companies
frequently seek such payments, but never before has their overall scale been
disclosed. The budget documents do not list individual companies, although they do break down spending among several NSA programs, listed by their
code names. There is no record in the documents obtained by The Post of money set aside to pay technology companies that provide information to
the NSA’s PRISM program. That program is the source of 91 percent of the 250 million Internet communications collected through Section 702 of the
FISA Amendments Act, which authorizes PRISM and the upstream programs, according to a 2011 opinion and order by the Foreign Intelligence
Surveillance Court. Several of the companies
that provide information to PRISM, including Apple, Facebook and Google, say
they take no payments from the government when they comply with national security requests. Others say they do take payments in
some circumstances. The Guardian reported last week that the NSA had covered “millions of dollars” in costs that some
technology companies incurred to comply with government demands for information. Telecommunications companies generally do
charge to comply with surveillance requests, which come from state, local and federal law enforcement officials as well as intelligence agencies. Former
telecommunications executive Paul Kouroupas, a security officer who worked at Global Crossing for 12 years, said that some companies
welcome the revenue and enter into contracts in which the government makes higher
payments than otherwise available to firms receiving reimbursement for complying with surveillance orders. These contractual
payments, he said, could cover the cost of buying and installing new equipment, along with a reasonable profit. These
voluntary agreements simplify the government’s access to surveillance, he said. “It certainly
lubricates the [surveillance] infrastructure,” Kouroupas said. He declined to say whether Global Crossing, which operated a
fiber-optic network spanning several continents and was bought by Level 3 Communications in 2011, had such a contract. A spokesman for Level 3
Communications declined to comment.
2. Plan flaw: the plan mandates that we stop surveilling through backdoors, request public
encryption keys, and close existing backdoors; that guts solvency because the
government can still create backdoors with encryption keys
3. Presumption: we don’t mandate back doors in the status quo; all their ev is in
the context of a bill that would require backdoors in the future, so the AFF does
nothing
1NC KQ
The Secure Data Act of 2015 states that no agency may mandate backdoors
Secure Data Act of 2015
(Wyden, Ron. Senator, D-OR. S. 135, known as the Secure Data Act of 2015, introduced in Congress 1/8/2015.
https://www.congress.gov/bill/114th-congress/senate-bill/135/text//ghs-kw)
SEC. 2. PROHIBITION ON DATA SECURITY VULNERABILITY MANDATES. (a) In General.—Except as provided in subsection (b), no
agency
may mandate that a manufacturer, developer, or seller of covered products design or alter the security
functions in its product or service to allow the surveillance of any user of such product or service, or to
allow the physical search of such product, by any agency.
Mandate means “to make required”
Merriam-Webster’s Dictionary of Law 96
(Merriam-Webster’s Dictionary of Law, 1996, http://dictionary.findlaw.com/definition/mandate.html//ghs-kw)
mandate n [Latin mandatum , from neuter of mandatus , past participle of mandare to entrust, enjoin, probably irregularly from manus
hand + -dere to put] 1 a : a formal communication from a reviewing court notifying the court below of its judgment and directing the lower
court to act accordingly b : mandamus 2 in the civil law of Louisiana : an act by which a person gives another person the power to transact for
him or her one or several affairs 3 a : an authoritative command : a clear authorization or direction [the mandate of the full faith and credit clause
"National Law Journal "] b : the authorization to act given by a constituency to its elected representative vt man·dat·ed man·dat·ing : to
make mandatory or required [the Pennsylvania Constitution mandates a criminal defendant's right to confrontation "National Law Journal "]
Circumvention: companies including those under PRISM agree to provide data
because the government pays them
Timberg and Gellman 13
(Timberg, Craig and Gellman, Barton. Reporters for the Washington Post, citing government budgets and internal documents. “NSA paying
U.S. companies for access to communications networks,” Washington Post. 8/29/2013. https://www.washingtonpost.com/world/nationalsecurity/nsa-paying-us-companies-for-access-to-communications-networks/2013/08/29/5641a4b6-10c2-11e3-bdf6e4fc677d94a1_story.html//ghs-kw)
The National Security Agency is paying hundreds of millions of dollars a year to U.S. companies for
clandestine access to their communications networks, filtering vast traffic flows for foreign targets in a process that also sweeps in
large volumes of American telephone calls, e-mails and instant messages. The bulk of the spending, detailed in a multi-volume intelligence budget
obtained by The Washington Post, goes to participants in a Corporate Partner Access Project for major U.S. telecommunications
providers. The documents open an important window into surveillance operations on U.S. territory that have been the subject of debate since they were
revealed by The Post and Britain’s Guardian newspaper in June. New details of the corporate-partner project, which falls under the NSA’s Special Source Operations,
confirm that the
agency taps into “high volume circuit and packet-switched networks,” according to the spending
blueprint for fiscal 2013. The program was expected to cost $278 million in the current fiscal year, down nearly one-third from its peak of $394 million in 2011.
Voluntary cooperation from the “backbone” providers of global communications dates to the 1970s under the cover name BLARNEY, according to documents
provided by former NSA contractor Edward Snowden. These relationships long predate the PRISM program disclosed in June, under which American technology
companies hand over customer data after receiving orders from the Foreign Intelligence Surveillance Court. In briefing slides, the NSA described BLARNEY and three
other corporate projects — OAKSTAR, FAIRVIEW and STORMBREW — under the heading of “passive” or “upstream” collection. They capture data as they move
across fiber-optic cables and the gateways that direct global communications traffic. The documents offer a rare
view of a secret surveillance economy in which government officials set financial terms for programs capable of peering into the lives of almost anyone who uses a
phone, computer or other device connected to the Internet. Although the companies are required to comply with lawful surveillance orders, privacy advocates say
the multimillion-dollar
payments could create a profit motive to offer more than the required
assistance. “It turns surveillance into a revenue stream, and that’s not the way it’s supposed to work,” said Marc Rotenberg,
executive director of the Electronic Privacy Information Center, a Washington-based research and advocacy group. “The fact that the government is paying money
to telephone companies to turn over information that they are compelled to turn over is very troubling.” Verizon, AT&T and other major telecommunications
companies declined to comment for this article, although several industry officials noted that government surveillance laws explicitly call for companies to receive
reasonable reimbursement for their costs. Previous news reports have made clear that companies
frequently seek such payments, but
never before has their overall scale been disclosed. The budget documents do not list individual companies, although they do break down spending among several
NSA programs, listed by their code names. There is no record in the documents obtained by The Post of money set aside to pay technology companies that provide
information to the NSA’s PRISM program. That program is the source of 91 percent of the 250 million Internet communications collected through Section 702 of the
FISA Amendments Act, which authorizes PRISM and the upstream programs, according to a 2011 opinion and order by the Foreign Intelligence Surveillance Court.
Several of the companies
that provide information to PRISM, including Apple, Facebook and Google, say they take no payments from
the government when they comply with national security requests. Others say they do take payments in some circumstances. The Guardian reported
last week that the NSA had covered “millions of dollars” in costs that some technology companies incurred to comply with government
demands for information. Telecommunications companies generally do charge to comply with surveillance requests, which come from state, local
and federal law enforcement officials as well as intelligence agencies. Former telecommunications executive Paul Kouroupas, a security officer who worked at
Global Crossing for 12 years, said that some companies
welcome the revenue and enter into contracts in which the
government makes higher payments than otherwise available to firms receiving reimbursement for complying with surveillance orders.
These contractual payments, he said, could cover the cost of buying and installing new equipment, along with a reasonable profit.
These voluntary agreements simplify the government’s access to surveillance, he said. “It certainly
lubricates the [surveillance] infrastructure,” Kouroupas said. He declined to say whether Global Crossing, which operated a fiber-optic
network spanning several continents and was bought by Level 3 Communications in 2011, had such a contract. A spokesman for Level 3 Communications declined to
comment.
2NC
2NC Mandate
Mandate is an order or requirement
The People's Law Dictionary 02
(Hill, Gerald and Kathleen. Gerald Hill holds a J.D. from Hastings College of the Law of the University of California. He was Executive Director
of the California Governor's Housing Commission, has drafted legislation, taught at Golden Gate University Law School, served as an
arbitrator and pro tem judge, edited and co-authored Housing in California, was an elected trustee of a public hospital, and has testified
before Congressional committees. Kathleen Hill holds an M.A. in political psychology from California State University, Sonoma. She was also
a Fellow in Public Affairs with the prestigious Coro Foundation, earned a Certificat from the Sorbonne in Paris, France, headed the Peace
Corps Speakers' Bureau in Washington, D.C., worked in the White House for President Kennedy, and was Executive Coordinator of the 25th
Anniversary of the United Nations. Kathleen has served on a Grand Jury, chaired two city commissions and has developed programs for the
Institute of Governmental Studies of the University of California. The People’s Law Dictionary, 2002.
http://dictionary.law.com/Default.aspx?selected=1204//ghs-kw)
mandate n. 1) any mandatory order or requirement under statute, regulation, or by a public agency. 2)
order of an appeals court to a lower court (usually the original trial court in the case) to comply with an appeals court's ruling, such as holding a
new trial, dismissing the case or releasing a prisoner whose conviction has been overturned. 3) same as the writ of mandamus, which orders a
public official or public body to comply with the law.
2NC Circumvention
NSA enters into mutually agreed upon contracts for back doors
Reuters 13
(Menn, Joseph. “Exclusive: Secret contract tied NSA and security industry pioneer,” Reuters. 12/20/2013.
http://www.reuters.com/article/2013NC/12/21/us-usa-security-rsa-idUSBRE9BJ1C220131221//ghs-kw)
As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the
U.S. National
Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the
computer security industry, Reuters has learned. Documents leaked by former NSA contractor Edward Snowden show that the
NSA created and promulgated a flawed formula for generating random numbers to create a "back
door" in encryption products, the New York Times reported in September. Reuters later reported that RSA became the
most important distributor of that formula by rolling it into a software tool called Bsafe that is used to enhance
security in personal computers and many other products. Undisclosed until now was that RSA received $10 million in a
deal that set the NSA formula as the preferred, or default, method for number generation in the BSafe
software, according to two sources familiar with the contract. Although that sum might seem paltry, it represented more than a
third of the revenue that the relevant division at RSA had taken in during the entire previous year,
securities filings show. The earlier disclosures of RSA's entanglement with the NSA already had shocked some in the close-knit world of
computer security experts. The company had a long history of championing privacy and security, and it played a
leading role in blocking a 1990s effort by the NSA to require a special chip to enable spying on a wide
range of computer and communications products. RSA, now a subsidiary of computer storage giant EMC Corp, urged
customers to stop using the NSA formula after the Snowden disclosures revealed its weakness. RSA and EMC declined to answer questions for
this story, but RSA said in a statement: "RSA always acts in the best interest of its customers and under no circumstances does RSA design or
enable any back doors in our products. Decisions about the features and functionality of RSA products are our own." The NSA declined to
comment. The RSA deal shows one way the NSA carried out what Snowden's documents describe as a key strategy for enhancing surveillance:
the systematic erosion of security tools. NSA
documents released in recent months called for using "commercial
relationships" to advance that goal, but did not name any security companies as collaborators. The NSA came under attack this week in a
landmark report from a White House panel appointed to review U.S. surveillance policy. The panel noted that "encryption is an essential basis
for trust on the Internet," and called for a halt to any NSA efforts to undermine it. Most of the dozen current and former RSA employees
interviewed said that the company erred in agreeing to such a contract, and many cited RSA's corporate evolution away from pure cryptography products as one of the reasons it occurred.
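Analyst note: the mechanism in the Reuters card turns on a default setting. Whatever generator a vendor ships as the library default is what most downstream applications silently inherit, because few callers ever override it. The sketch below only illustrates that point under invented names (CryptoLibrary, weak_random_bytes, strong_random_bytes); it is not RSA's actual BSafe API, and the "weak" generator is a toy stand-in for the flawed formula the card references, not the real one.
```python
# Illustrative sketch only: invented names, not RSA's actual BSafe API.
# Point from the card: whichever generator a vendor ships as the library
# default is the one most downstream applications silently inherit.

import secrets


def strong_random_bytes(n: int) -> bytes:
    """Stand-in for a sound generator (uses the OS CSPRNG)."""
    return secrets.token_bytes(n)


def weak_random_bytes(n: int) -> bytes:
    """Toy stand-in for a compromised generator (deliberately predictable)."""
    return bytes((i * 31 + 7) % 256 for i in range(n))


class CryptoLibrary:
    """Hypothetical crypto library whose vendor picks the default generator."""

    def __init__(self, rng=weak_random_bytes):
        # One vendor-side default decides what every caller gets unless the
        # caller explicitly opts out.
        self.rng = rng

    def generate_key(self, n_bytes: int = 16) -> bytes:
        return self.rng(n_bytes)


# A typical application just trusts the default:
app_key = CryptoLibrary().generate_key()

# Only a caller that knows to override the default escapes it:
careful_key = CryptoLibrary(rng=strong_random_bytes).generate_key()

print(app_key.hex())
print(careful_key.hex())
```
The design point is that security-critical choices buried in a vendor default propagate invisibly to every downstream user, which is why a deal over a mere default setting mattered far more than the dollar figure suggests.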
Case
Economy Adv
Notes
This advantage makes NO sense. Venezia ev doesn’t say Internet would collapse, just that there’d be a
bunch of identity theft, etc. This has no bearing on backdoors’ effects on physical infrastructure
30 second explainer: backdoors collapse the internet (not true), internet k2 the economy b/c new
industries and faster growth, econ collapse = ext b/c Harris and Burrows
CX Questions
Venezia doesn’t say the internet would be eliminated, just that data would be decrypted and that there
would be mass identity theft—where’s the ev into Internet collapse?
1NC Internet Not k2 Econ
No reason why backdoors would collapse the Internet:
1. No internal link: Venezia doesn’t say the Internet would literally collapse, just
that it would “essentially be destroyed,” meaning that there’d be a bunch of
identity theft—has nothing to do with the physical infrastructure of the
internet collapsing.
2. Quals: their evidence just comes from a blogger – it’s highly unlikely that
someone using a backdoor would destroy the entire Internet.
The Internet’s positive influence is overhyped – the washing machine has done more
for our economy
Dave Masko, award-winning foreign correspondent and photojournalist, 7-16-2015, "Internet’s
Impact Exaggerated, Washing Machine Does More," Read Wave, http://www.readwave.com/internet-simpact-exaggerated-washing-machine-does-more_s85070
Story and photo by Dave Masko EUGENE, Oregon — Data
collected by the Pew Internet & American Life Project finds
the Internet being hyped as something great when ITs not. Still, there are many University of Oregon students admitting
to spending lots of time surfing the Internet because they are “lonely or bored.” “Are you kidding, if I had a girlfriend and in love I would hardly
spend my weekends surfing the Net for what… for just more mind fxxx you know; it’s just a way to think you are something when you are just
another loser; lost and lonely online,” admits senior Brian Kelleher. In turn, Kelleher points to a recent lecture he viewed online when
researching an economics assignment. “The lecture by Nobel Prize nominee Ha-Joon Chang really opened my eyes to the power of Internet
‘branding’ and marketing over the past 25 years the Net has existed,” Kelleher explained. “Basically, Professor
Chang stated that the
Internet’s impact is vastly exaggerated; while showing how the washing machine has done way more for
our society and various cultures worldwide than the Internet because not everyone on Earth today uses [or]
views online knowledge as something of value. Frankly, Professor Chang opened my eyes as to ‘why’ so many of us are blind
when it comes to Internet hype being just more bullxxxx.” Meanwhile, this interview with Kelleher took place a few months ago when this
university senior was single. “After letting go of my Internet addiction, I met a like-minded student named Carol who got me outside in the real
world when not in class. You know what, I feel really alive again thanks to Carol’s view that we unplug from the machine we don’t really need
the Net. It’s great to be offline and loving life again. Carol and I even go to the laundry and use our favorite new ‘machine’ the washing
machine,” joked Kelleher who is pictured walking around campus with Carol on a bright and beautiful April 30, 2015 spring day in Eugene. In
fact, Kelleher said Professor Chang’s lecture raised eyebrows here at the University of Oregon’s famed “Wearable Computing Lab” that was
founded in 1995 at the dawn of the so-called information-era. While Kelleher thinks digital-age fans here on campus view the Internet “as
revolutionizing just about everything,” they took pause when Professor Chang — a famed University of Cambridge, England, economist —
presented interesting views on why the washing machine helps more people worldwide than the Net ever could.” Professor Chang argues that
the Internet’s revolutionary [impact] is pretty harmless, noting that, “Instead
of reading a paper, we now read the news online.
Instead of buying books at a store, we buy them on-line. What’s so revolutionary? The Internet has
mainly affected our leisure life. In short, the washing machine has allowed women to get into the labor
market so that we have nearly doubled the work force.” Moreover, Professor Chang questions all the hype about the good
stuff the Internet is doing for the poor. “Charities are now working to give people in poor countries access to the Internet. But shouldn’t we
spend that money on providing health clinics and safe water,” writes Professor Chang. While the digital revolution has helped make the shift
from traditional industry, the clothes washer technology also has been revolutionary because it reduces the drudgery of scrubbing and rubbing
clothing, the professor added. Professor Chang is viewed as one of the foremost thinkers on “new economics and development.” His
economic textbook, “23 Things They Don’t Tell You About Capitalism” also details his interest in how the washing machine
is way more revolutionary than the Internet. Thanks to washing machine technology, “women started
having fewer children, gained more bargaining power in their relationships and enjoyed a higher status.
This liberation of women has done more for democracy than the Internet,” states Professor Chang’s lecture. “The
washing machine is a symbol of a fundamental change in how we look at women. It has changed society more than the Internet.” As one of the
top economics professors in the world, Professor Chang likes to challenge his students to looking at things in a different way. For instance, he
notes that “people like you and me have no memory of spending two hours a day washing our clothes in cold water.” This is part 8 for an
occasional series titled: “Tech: Hooked Into Machine,” that is being offered to book and website publishers. DAVE MASKO is award-winning
foreign correspondent and photojournalist who has published prolifically in top print newspapers and magazines online. He has reported on
vital issues worldwide over the past 40 years. He accepts freelance work. Contact him at dpmasko@msn.com.
1NC Collapse Inev
Collapse is inevitable—peak capacity
RT 15
(RT, “Capacity crunch’: Internet could collapse by 2023, researchers warn.” 05-05-2015. http://www.rt.com/news/255329-internet-capacitycollapse-researchers///ghs-kw)
The internet could face an imminent ‘capacity crunch’ as soon as in eight years, should it fail to provide faster
data, UK scientists say. The cables and fiber optics that deliver the data to users will have reached their limit
by 2023. Optical cables are transparent strands the thickness of a human hair: the data is transformed into light, and is sent down the fiber,
and then turns back into information. “We are starting to reach the point in the research lab where we can't get
any more data into a single optical fiber. The deployment to market is about six to eight years behind
the research lab - so within eight years that will be it, we can't get any more data in,” Professor Andrew Ellis, of
Aston University in Birmingham, told the Daily Mail. “Demand is increasingly catching up. It is growing again and again,
and it is harder and harder to keep ahead. Unless we come forward with really radical ideas, we are
going to see costs dramatically increase,” he added. Internet companies could set up additional cables, but that would see price
tags for web usage soar. Researchers warn we could end up with an internet that switches on and off all the
time, or be forced to pay far more than we do now. “That is a completely different business model. I think a conversation is needed with the
British public as to whether or not they are prepared to switch that business model in exchange for more capacity,” Ellis warned. Plus, there
is another issue: that of electricity needed to cope with the skyrocketing demand. “That is quite a huge
problem. If we have multiple fibers to keep up, we are going to run out of energy in about 15 years,”
Professor Ellis said. Some 16 percent of the power in the UK is consumed via the internet already, and the amount is doubling every four years.
Globally, it is responsible for about two percent of power usage.
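Analyst note: Ellis's "run out of energy in about 15 years" line can be sanity-checked against the card's own numbers. A minimal back-of-envelope sketch, assuming only the two figures quoted above (a 16 percent share of UK power, doubling every four years) and a fixed total supply:
```python
# Back-of-envelope check of the figures quoted in the RT card above.
# Assumptions (taken from the card, not independent data): the internet uses
# 16% of UK electricity today, and that consumption doubles every four years.
# Holding total supply constant, when does the share reach 100%?

import math

current_share = 0.16   # fraction of UK power used by the internet (card's figure)
doubling_time = 4.0    # years per doubling (card's figure)

years_to_full_supply = doubling_time * math.log2(1.0 / current_share)
print(f"Share reaches 100% after about {years_to_full_supply:.1f} years")
# Prints roughly 10.6 years.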
1NC No Collapse
No Internet collapse—self-improvements
Dvorak 07
(John C. Dvorak. John Dvorak is a columnist for PCMag.com and the host of the weekly TV video podcast CrankyGeeks. His work is licensed
around the world. Previously a columnist for Forbes, Forbes Digital, PC World, Barrons, MacUser, PC/Computing, Smart Business and other
magazines and newspapers. Former editor and consulting editor for Infoworld. Has appeared in the New York Times, LA Times, Philadelphia
Enquirer, SF Examiner, Vancouver Sun. Was on the start-up team for CNet TV as well as ZDTV. At ZDTV (and TechTV) was host of Silicon Spin
for four years doing 1000 live and live-to-tape TV shows. Also was on public radio for 8 years. Written over 4000 articles and columns as well
as authoring or co-authoring 14 books. 2004 Award winner of the American Business Editors Association's national gold award for best
online column of 2003. That was followed up by an unprecedented second national gold award from the ABEA in 2005, again for the best
online column (for 2004). Won the Silver National Award for best magazine column in 2006. "Will the Internet Collapse?," PCMAG. 5-1-2007. http://www.pcmag.com/article2/0%2c2817%2c2124376%2c00.asp//ghs-kw)
When is the Internet going to collapse? The answer is NEVER. The Internet is amazing for no other reason than that it hasn't
simply collapsed, never to be rebooted. Over a decade ago, many pundits were predicting an all-out catastrophic
failure, and back then the load was nothing compared with what it is today. So how much more can this
network take? Let's look at the basic changes that have occurred since the Net became chat-worthy around 1990. First of all, only a few
people were on the Net back in 1990, since it was essentially a carrier for e-mail (spam free!), newsgroups, gopher, and FTP. These capabilities
remain. But the e-mail load has grown to phenomenal proportions and become burdened with megatons of spam. In one year, the amount of
spam can exceed a decade's worth, say 1990 to 2000, of all Internet traffic. It's actually the astonishing overall growth of the Internet that is
amazing. In 1990, the total U.S. backbone throughput of the Internet was 1 terabyte, and in 1991 it doubled to 2TB.
Throughput continued to double until 1996, when it jumped to 1,500TB. After that huge jump, it returned to doubling, reaching 80,000 to
140,000TB in 2002. This ridiculous growth rate has continued as more and more services are added to the burden.
The jump in 1996 is attributable to the one-two punch of the universal popularization of the Web and the introduction of the MP3 standard and
subsequent music file sharing. More recently, the emergence of inane video clips (YouTube and the rest) as universal entertainment has
continued to slam the Net with overhead, as has large video file sharing via BitTorrent and other systems. Then
VoIP came along, and
IPTV is next. All the while, e-mail numbers are in the trillions of messages, and spam has never been more
plentiful and bloated. Add blogging, vlogging, and twittering and it just gets worse. According to some
expensive studies, the growth rate has begun to slow down to something like 50 percent per year. But
that's growth on top of huge numbers. Petabytes. So when does this thing just grind to a halt or blow
up? To date, we have to admit that the structure of the Net is robust, to say the least. This is impressive,
considering the fact that experts were predicting a collapse in the 1990s. Robust or not, this Internet is a
transportation system. It transports data. All transportation systems eventually need upgrading, repair, basic changes, or
reinvention. But what needs to be done here? This, to me, has come to be the big question. Does anything at all need to be done, or do we run
it into the ground and then fix it later? Is this like a jalopy leaking oil and water about to blow, or an organic perpetual-motion machine that
fixes itself somehow? Many believe that the
Net has never collapsed because it does tend to fix itself. A decade
ago we were going to run out of IP addresses—remember? It righted itself, with rotating addresses and
subnets. Many of the Net's improvements are self-improvements. Only spam, viruses, and spyware represent incurable
diseases that could kill the organism. I have to conclude that the worst-case scenario for the Net is an outage here or
there, if anywhere. After all, the phone system, a more machine-intensive system, never really imploded
after years and years of growth, did it? While it has outages, it's actually more reliable than the power
grid it sits on. Why should the Internet be any different now that it is essentially run by phone
companies who know how to keep networks up? And let's be real here. The Net is being improved daily, with
newer routers and better gear being constantly hot-swapped all over the world. This is not the same
Internet we had in 1990, nor is it what we had in 2000. While phone companies seem to enjoy nickel-and-diming their
customers to death with various petty scams and charges, they could easily charge one flat fee and spend their efforts on quality-of-service
issues and improving overall network speed and throughput.
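Analyst note: Dvorak's throughput figures hang together arithmetically. A quick check, reading "doubling" as roughly one doubling per year and using only the numbers quoted in the card (the 1,500TB jump in 1996 and the 80,000 to 140,000TB range for 2002):
```python
# Consistency check of the backbone-throughput figures quoted in the Dvorak
# card above, reading "doubling" as roughly one doubling per year.

def project(start_value: float, years: int, doublings_per_year: float = 1.0) -> float:
    """Project a value forward under steady exponential doubling."""
    return start_value * 2 ** (years * doublings_per_year)

# From the card's 1,500TB figure for 1996, six more years of annual doubling:
projected_2002 = project(1_500, years=6)
print(f"Projected 2002 throughput: {projected_2002:,.0f} TB")
```
Six annual doublings from 1,500TB gives 96,000TB, inside the quoted 2002 range, which at least makes the card's "returned to doubling" description internally consistent.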
1NC Econ =/= War
International norms maintain economic stability
***Zero empirical data supports their theory – the only financial crisis of the new liberal order
experienced zero uptick in violence or challenges to the central factions governed by the US that check
inter-state violence – they have no theoretical foundation for proving causality
Barnett, 9 – senior managing director of Enterra Solutions LLC (Thomas, The New Rules: Security
Remains Stable Amid Financial Crisis, 25 August 2009, http://www.aprodex.com/the-new-rules-security-remains-stable-amid-financial-crisis-398-bl.aspx)
When the global financial crisis struck roughly a year ago, the blogosphere was ablaze with all sorts of scary
predictions of, and commentary regarding, ensuing conflict and wars -- a rerun of the Great Depression leading to world war, as it
were. Now, as global economic news brightens and recovery -- surprisingly led by China and emerging markets -- is the talk of the day, it's
interesting to look back over the past year and realize how globalization's
first truly worldwide recession has had virtually
no impact whatsoever on the international security landscape. None of the more than three-dozen ongoing
conflicts listed by GlobalSecurity.org can be clearly attributed to the global recession. Indeed, the last new entry (civil
conflict between Hamas and Fatah in the Palestine) predates the economic crisis by a year, and three quarters of the
chronic struggles began in the last century. Ditto for the 15 low-intensity conflicts listed by Wikipedia (where the latest entry is
the Mexican "drug war" begun in 2006). Certainly, the Russia-Georgia conflict last August was specifically timed, but by most accounts the
opening ceremony of the Beijing Olympics was the most important external trigger (followed by the U.S. presidential campaign) for that sudden
spike in an almost two-decade long struggle between Georgia and its two breakaway regions. Looking over the various databases, then, we
see a most familiar picture: the usual mix of civil conflicts, insurgencies, and liberation-themed terrorist
movements. Besides the recent Russia-Georgia dust-up, the only two potential state-on-state wars (North v. South Korea,
Israel v. Iran) are both tied to one side acquiring a nuclear weapon capacity -- a process wholly unrelated to global economic
trends. And with the United States effectively tied down by its two ongoing major interventions (Iraq and Afghanistan-bleeding-into-Pakistan), our involvement elsewhere around the planet has been quite modest, both leading up to and following the
onset of the economic crisis: e.g., the usual counter-drug efforts in Latin America, the usual military exercises with allies across Asia, mixing it
up with pirates off Somalia's coast). Everywhere else we find serious instability we pretty much let it burn, occasionally pressing the Chinese -- unsuccessfully -- to do something. Our new Africa Command, for example, hasn't led us to anything beyond advising and training local forces.
So, to sum up: •No significant uptick in mass violence or unrest (remember the smattering of urban riots last year in places like
Greece, Moldova and Latvia?); •The usual frequency maintained in civil conflicts (in all the usual places); •Not a single state-on-state war
directly caused (and no great-power-on-great-power crises even triggered); •No
great improvement or disruption in great-power
cooperation regarding the emergence of new nuclear powers (despite all that diplomacy); •A modest scaling back of international policing
efforts by the system's acknowledged Leviathan power (inevitable given the strain); and •No serious efforts by any rising great
power to challenge that Leviathan or supplant its role. (The worst things we can cite are Moscow's occasional deployments of
strategic assets to the Western hemisphere and its weak efforts to outbid the United States on basing rights in Kyrgyzstan; but the best include
China and India stepping up their aid and investments in Afghanistan and Iraq.) Sure, we've finally seen global defense spending surpass the
previous world record set in the late 1980s, but even that's likely to wane given the stress on public budgets created by all this unprecedented
"stimulus" spending. If anything, the friendly
cooperation on such stimulus packaging was the most notable great-power dynamic caused by the crisis. Can we say that the world has suffered a distinct shift to political radicalism as a result of the
economic crisis? Indeed, no. The world's major economies remain governed by center-left or center-right political factions
that remain decidedly friendly to both markets and trade. In the short run, there were attempts across the board to insulate
economies from immediate damage (in effect, as much protectionism as allowed under current trade rules), but there was no great slide into
"trade wars." Instead, the World Trade Organization is functioning as it was designed to function, and regional efforts toward free-trade
agreements have not slowed. Can we say Islamic radicalism was inflamed by the economic crisis? If it was, that shift was clearly overwhelmed
by the Islamic world's growing disenchantment with the brutality displayed by violent extremist groups such as al-Qaida. And looking forward,
austere economic times are just as likely to breed connecting evangelicalism as disconnecting fundamentalism. At the end of the day, the
economic crisis did not prove to be sufficiently frightening to provoke major economies into establishing global regulatory schemes, even as it
has sparked a spirited -- and much needed, as I argued last week -- discussion of the continuing viability of the U.S. dollar as the world's primary
reserve currency. Naturally, plenty of experts and pundits have attached great significance to this debate, seeing in it the beginning of
"economic warfare" and the like between "fading" America and "rising" China. And yet, in a world of globally integrated production chains and
interconnected financial markets, such "diverging interests" hardly constitute signposts for wars up ahead. Frankly, I don't welcome a world in
which America's fiscal profligacy goes undisciplined, so bring it on -- please! Add it all up and it's fair to say that this global financial crisis has proven the great resilience of America's post-World War II international liberal trade order.
2NC Econ =/= War
Aggregate data proves interstate violence doesn’t result from economic decline
Drezner, 12 --- The Fletcher School of Law and Diplomacy at Tufts University (October 2012, Daniel W.,
“The Irony of Global Economic Governance: The System Worked,”
www.globaleconomicgovernance.org/wp-content/uploads/IR-Colloquium-MT12-Week-5_The-Irony-ofGlobal-Economic-Governance.pdf)
The final outcome addresses a
dog that hasn’t barked: the effect of the Great Recession on cross-border conflict
and violence. During the initial stages of the crisis, multiple analysts asserted that the financial crisis would lead
states to increase their use of force as a tool for staying in power.37 Whether through greater internal repression,
diversionary wars, arms races, or a ratcheting up of great power conflict, there were genuine concerns that the global
economic downturn would lead to an increase in conflict. Violence in the Middle East, border disputes in the South China
Sea, and even the disruptions of the Occupy movement fuel impressions of surge in global public disorder.
The aggregate data suggests otherwise, however. The Institute for Economics and Peace has
constructed a “Global Peace Index” annually since 2007. A key conclusion they draw from the 2012
report is that “The average level of peacefulness in 2012 is approximately the same as it was in 2007.”38
Interstate violence in particular has declined since the start of the financial crisis – as have military expenditures in
most sampled countries. Other studies confirm that the Great Recession has not triggered any increase in
violent conflict; the secular decline in violence that started with the end of the Cold War has not been reversed.39 Rogers Brubaker
concludes, “the crisis has not to date generated the surge in protectionist nationalism or ethnic
exclusion that might have been expected.”40
None of these data suggest that the global economy is operating swimmingly. Growth remains unbalanced and fragile, and has clearly slowed in
2012. Transnational capital flows remain depressed compared to pre-crisis levels, primarily due to a drying up of cross-border interbank lending
in Europe. Currency volatility remains an ongoing concern. Compared to the aftermath of other postwar recessions, growth in output,
investment, and employment in the developed world have all lagged behind. But the Great Recession is not like other postwar recessions in
either scope or kind; expecting a standard “V”-shaped recovery was unreasonable. One
financial analyst characterized the
post-2008 global economy as in a state of “contained depression.”41 The key word is “contained,” however. Given
the severity, reach and depth of the 2008 financial crisis, the proper comparison is with Great
Depression. And by that standard, the outcome variables look impressive. As Carmen Reinhart and Kenneth Rogoff
concluded in This Time is Different: “that its macroeconomic outcome has been only the most severe global recession since World War II – and
not even worse – must be regarded as fortunate.”42
Most rigorous historical analysis proves
Miller, 2K – economist, adjunct professor in the University of Ottawa’s Faculty of Administration,
consultant on international development issues, former Executive Director and Senior Economist at the
World Bank, (Morris, “Poverty as a cause of wars?”, Winter, Interdisciplinary Science Reviews, Vol. 25,
Iss. 4, p. Proquest)
Perhaps one should ask, as some scholars do, whether it is not poverty as such but some dramatic event or
sequence of such events leading to the exacerbation of poverty that is the factor that contributes in a
significant way to the denouement of war. This calls for addressing the question: do wars spring from a
popular reaction to an economic crisis that exacerbates poverty and/or from a heightened awareness
of the poor of the wide and growing disparities in wealth and incomes that diminishes their tolerance
to poverty? It seems reasonable to believe that a powerful "shock" factor might act as a catalyst for a violent
reaction on the part of the people or on the part of the political leadership. The leadership, finding that this
sudden adverse economic and social impact destabilizing, would possibly be tempted to seek a
diversion by finding or, if need be, fabricating an enemy and setting in train the process leading to
war. There would not appear to be any merit in this hypothesis according to a study undertaken by
Minxin Pei and Ariel Adesnik of the Carnegie Endowment for International Peace. After studying 93
episodes of economic crisis in 22 countries in Latin America and Asia in the years since World War II
they concluded that Much of the conventional wisdom about the political impact of economic crises
may be wrong …..The severity of economic crisis - as measured in terms of inflation and negative
growth – bore no relationship to the collapse of regimes….(or, in democratic states, rarely) to an
outbreak of violence…In the cases of dictatorships and semi-democracies, the ruling elites responded
to crises by increasing repression (thereby using one form of violence to abort another.)
Innovation Adv
Notes
This advantage is even worse than the previous. How that is possible I have no idea. Zylberberg says
backdoors result in centralized information flows, meaning that information flows inwards towards the
NSA. Crowe is in the context of the internet of things and says that inefficient flows of information use
energy. Also, Tyler is a BLOGGER for the Motley Fool, which makes him qualified to talk about
investments but not the technical aspects of the Internet. He also talks about oversupplying energy to
the grid, which is probably an industrial application and NOT a commercial one, which probably means
they don’t have an internal link into energy companies actually developing better alternatives. Tyler also
talks about things like “Increased communication between everything -- engines, appliances, generators,
automobiles -- allows for instant feedback for more efficient travel routes, optimized fertilizer and water
consumption to reduce deforestation, real-time monitoring of electricity consumption and instant
feedback to generators, and fully integrated heating, cooling, and lighting systems that can adjust for
human occupancy. There are lots of projections and estimates related to carbon emissions and climate
change, but the one that has emerged as the standard bearer is the amount of carbon emissions” which
the squo probably resolves. This reflects a fundamental misunderstanding of what the HELL backdoors
actually are, which is just government access into company servers, NOT mandating rerouting ALL
internet traffic. It’s an embarrassment to DDI. Read the patents advantage CP for this. Sorry I’m grouchy
at 3:30AM.
30 second explainer: backdoors kill innovation b/c “centralized information flows,” that kills innovation
b/c no end-to-end encryption, innovation is k2 solve warming b/c we oversupply the grid and better
communications means energy efficiency and less CO2, warming = ext b/c Roberts
CX Questions
Personally I’d be very tempted to give them all of CX to explain all their warrants and tell a coherent
story but plz don’t do that.
1NC Warming =/= Extinction
This advantage is highly unlikely – the minimal effect that backdoors has on innovation
means that the risk of the impact is tiny. AND, backdoors have existed for a while, so the
impact should’ve happened by now if the link story was true.
AND, Tyler says “we just don't always have the adequate information to make the
most efficient decision” which means even in a world of innovation, we still
oversupply the grid which triggers your internal link
No impact to warming
IBD 5/13 (5/13/2014, Investor’s Business Daily, “Obama Climate Report: Apocalypse Not,” Factiva, JMP)
Climate: Not
since Jimmy Carter falsely spooked Americans about overpopulation, the world running out
of food, water and energy, and worsening pollution, has a president been so filled with doom and
gloom as this one. Last week's White House report on climate change was a primal scream to alarm
Americans into action to save the earth from a literal meltdown. Maybe we should call President Obama
the Fearmonger in Chief. While scientists can argue until the cows come home about what will happen in the future with the
planet's climate, we do have scientific records on what's already happened. Obama moans that the devastation from climate change is
already here as more severe weather events threaten to imperil our very survival. But, according
to the government's own
records — which presumably the White House can get — severe weather events are no more likely now than
they were 50 or 100 years ago and the losses of lives and property are much less devastating. Here is
what government data reports and top scientists tell us about extreme climate conditions: • Hurricanes: The century-long trend
in Hurricanes is slightly down, not up. According to the National Hurricane Center, in 2013, "There were no major hurricanes
in the North Atlantic Basin for the first time since 1994. And the number of hurricanes this year was the lowest since 1982." According to
Dr. Ryan Maue at Weather Bell Analytics, "We are currently in the longest period since the Civil War Era without a major hurricane strike in
the U.S. (i.e., category 3, 4 or 5)" • Tornadoes: Don't worry, Kansas. The National Oceanic and Atmospheric Administration says there
has been no change in severe tornado activity. "There has been little trend in the frequency of the stronger tornadoes
over the past 55 years." • Extreme heat and cold temperatures: NOAA's U.S. Climate Extremes Index of unusually hot or cold temperatures
finds that over the last 10 years, five years have been below the historical mean and five above the mean. • Severe drought/extreme
moisture: While higher than average portions of the country were subjected to extreme drought/moisture in the last few years, the 1930's,
40's and 50's were more extreme in this regard. In fact, over the last 10 years, four years have been below the average and six above the
average. • Cyclones: Maue reports: "the global frequency of tropical cyclones has reached a historical low." • Floods: Dr. Roger Pielke
Jr., past
chairman of the American Meteorological Society Committee on Weather Forecasting and
Analysis, reports, "floods have not increased in the U.S. in frequency or intensity since at least 1950. Flood losses as a percentage of
U.S. GDP have dropped by about 75% since 1940." • Warming: Even NOAA admits a "lack of significant warming at the
Earth's surface in the past decade" and a pause "in global warming observed since 2000." Specifically,
NOAA last year stated, "since the turn of the century, however, the change in Earth's global mean surface temperature has been close to
zero." Pielke sums up: "There
is no evidence that disasters are getting worse because of climate change. ...
It is misleading, and just plain incorrect, to claim that disasters associated with hurricanes, tornadoes,
floods or droughts have increased on climate time scales either in the U.S. or globally." One big change
between today and 100 years ago is that humans are much more capable of dealing with hurricanes and earthquakes and other acts of
God. Homes and buildings are better built to withstand severe storms and alert systems are much more accurate to warn people of the
coming storms. As a result, globally, weather-related losses have actually decreased by about 25% as a proportion of GDP since 1990. The
liberal hubris is that government can do anything to change the earth's climate or prevent the next big hurricane, earthquake or monsoon.
These are the people in Washington who can't run a website, can't deliver the mail and can't balance a budget. But they are going to
prevent droughts and forest fires. The
President's doomsday claims last week served mostly to undermine the
alarmists' case for radical action on climate change. Truth always seems to be the first casualty in this
debate. This is the tactic of tyrants. Americans are wise to be wary about giving up our basic freedoms and lowering our
standard of living to combat an exaggerated crisis.
1NC No Warming
Their models are wrong
Ridley 14 --- author of The Rational Optimist, a columnist for the Times (London) and a member of the
House of Lords (6/19/14, Matt, “Junk Science Week: IPCC commissioned models to see if global warming
would reach dangerous levels this century. Consensus is ‘no’”,
http://business.financialpost.com/2014/06/19/ipcc-climate-changewarming/?utm_source=Daily+Carbon+Briefing&utm_campaign=6c73d70ec9DAILY_BRIEFING&utm_medium=email&utm_term=0_876aab4fd7-6c73d70ec9-303421281)
Even if you pile crazy assumption upon crazy assumption, you cannot even manage to make climate
change cause minor damage The debate over climate change is horribly polarized. From the way it is conducted,
you would think that only two positions are possible: that the whole thing is a hoax or that catastrophe is inevitable. In fact there is room for
lots of intermediate positions, including the view I hold, which is that
man-made climate change is real but not likely to
do much harm, let alone prove to be the greatest crisis facing humankind this century. After more than 25
years reporting and commenting on this topic for various media organizations, and having started out alarmed, that’s where I have ended up.
But it is not just I that hold this view. I share
it with a very large international organization, sponsored by the
United Nations and supported by virtually all the world’s governments: the Intergovernmental Panel on
Climate Change (IPCC) itself. The IPCC commissioned four different models of what might happen to the world economy, society and
technology in the 21st century and what each would mean for the climate, given a certain assumption about the atmosphere’s “sensitivity” to
carbon dioxide. Three
of the models show a moderate, slow and mild warming, the hottest of which leaves
the planet just 2 degrees Centigrade warmer than today in 2081-2100. The coolest comes out just 0.8
degrees warmer. Now two degrees is the threshold at which warming starts to turn dangerous,
according to the scientific consensus. That is to say, in three of the four scenarios considered by the
IPCC, by the time my children’s children are elderly, the earth will still not have experienced any harmful
warming, let alone catastrophe. But what about the fourth scenario? This is known as RCP8.5, and it
produces 3.5 degrees of warming in 2081-2100. Curious to know what assumptions lay behind this model, I decided to look up the
original papers describing the creation of this scenario. Frankly, I was gobsmacked. It is a world that is very, very
implausible. For a start, this is a world of “continuously increasing global population” so that there are 12 billion
on the planet. This is more than a billion more than the United Nations expects, and flies in the face of the fact that the world population
growth rate has been falling for 50 years and is on course to reach zero – i.e., stable population – in around 2070. More people mean more
emissions. Second,
the world is assumed in the RCP8.5 scenario to be burning an astonishing 10 times as
much coal as today, producing 50% of its primary energy from coal, compared with about 30% today. Indeed, because oil is assumed to
have become scarce, a lot of liquid fuel would then be derived from coal. Nuclear and renewable technologies contribute little, because of a
“slow pace of innovation” and hence “fossil fuel technologies continue to dominate the primary energy portfolio over the entire time horizon of
the RCP8.5 scenario.” Energy efficiency has improved very little. These
are highly unlikely assumptions. With abundant
natural gas displacing coal on a huge scale in the United States today, with the price of solar power plummeting, with
nuclear power experiencing a revival, with gigantic methane-hydrate gas resources being discovered on the
seabed, with energy efficiency rocketing upwards, and with population growth rates continuing to fall fast in virtually every country in the
world, the one thing we can say about RCP8.5 is that it is very, very implausible. Notice,
however, that even so, it is not a
world of catastrophic pain. The per capita income of the average human being in 2100 is three times
what it is now. Poverty would be history. So it’s hardly Armageddon. But there’s an even more startling
fact. We now have many different studies of climate sensitivity based on observational data and they all converge on the conclusion that it is
much lower than assumed by the IPCC in these models. It has to be, otherwise global temperatures would have risen much faster than they
have over the past 50 years. As Ross McKitrick noted on this page earlier this week, temperatures
have not risen at all now for
more than 17 years. With these much more realistic estimates of sensitivity (known as “transient climate response”), even RCP8.5
cannot produce dangerous warming. It manages just 2.1C of warming by 2081-2100. That is to say, even if you pile crazy
assumption upon crazy assumption till you have an edifice of vanishingly small probability, you cannot
even manage to make climate change cause minor damage in the time of our grandchildren, let alone
catastrophe. That’s not me saying this – it’s the IPCC itself. But what strikes me as truly fascinating about
these scenarios is that they tell us that globalization, innovation and economic growth are
unambiguously good for the environment. At the other end of the scale from RCP8.5 is a much more cheerful scenario called
RCP2.6. In this happy world, climate change is not a problem at all in 2100, because carbon dioxide
emissions have plummeted thanks to the rapid development of cheap nuclear and solar, plus a surge in
energy efficiency. The RCP2.6 world is much, much richer. The average person has an income about 15 times today’s in real terms, so
that most people are far richer than Americans are today. And it achieves this by free trade, massive globalization, and lots of investment in
new technology. All the things the green movement keeps saying it opposes because they will wreck the planet. The
answer to climate
change is, and always has been, innovation. To worry now in 2014 about a very small, highly implausible
set of circumstances in 2100 that just might, if climate sensitivity is much higher than the evidence suggests, produce a
marginal damage to the world economy, makes no sense. Think of all the innovation that happened between 1914
and 2000. Do we really think there will be less in this century? As for how to deal with that small risk,
well there are several possible options. You could encourage innovation and trade. You could put a modest but growing tax on
carbon to nudge innovators in the right direction. You could offer prizes for low-carbon technologies. All of these might make a little
sense. But the one thing you should not do is pour public subsidy into supporting old-fashioned existing technologies that produce more
carbon dioxide per unit of energy even than coal (bio-energy), or into ones that produce expensive energy (existing solar), or that have very low
energy density and so require huge areas of land (wind). The
IPCC produced two reports last year. One said that the
cost of climate change is likely to be less than 2% of GDP by the end of this century. The other said that
the cost of decarbonizing the world economy with renewable energy is likely to be 4% of GDP. Why do
something that you know will do more harm than good?
1NC Warming Inev
Even if they win that warming is real and caused by CO2, warming is inevitable
Skuce 4/19 – a recently-retired geophysical consultant living in British Columbia. He has a BSc in
geology from Sheffield University and an MSc in geophysics from the University of Leeds. His work
experience includes a period at the British Geological Survey in Edinburgh and work for a variety of oil
companies based in Calgary, Vienna and Quito (Andrew, “Global Warming: Not Reversible, But
Stoppable,” Skeptical Science, 2014, http://www.skepticalscience.com/global-warming-not-reversiblebut-stoppable.html)
Bringing human emissions to a dead stop, as shown by the red lines in Figure 1, is not a realistic option. This would
put the entire world, all seven billion of us, into a new dark age and the human suffering would be
unimaginable. For this reason, most climate models don’t even consider it as a viable scenario and, if they
run the model at all, it is as a "what-if". Even cutting back emissions severely enough to stabilize CO2
concentrations at a fixed level, as shown in the blue lines in Figure 1, would still require massive and rapid
reductions in fossil fuel use. But, even this reduction would not be enough to stop future warming. For
example, holding concentration levels steady at 380 ppm would lead to temperatures rising an
additional 0.5 degrees C over the next two hundred years. This effect is often referred to as
“warming in the pipeline”: extra warming that we can’t do anything to avoid. The most important
distinction to grasp, though, is that the inertia is not inherent in the physics and chemistry of the planet’s
climate system, but rather in our inability to change our behaviour rapidly enough. Figure 2 shows the average
lifetimes of the equipment and infrastructure that we rely upon in the modern world. Cars last us up to 20 years; pipelines up to 50; coal-fired plants 60; our
buildings and urban infrastructure a century. It
takes time to change our ways, unless we discard working vehicles,
power plants and buildings and immediately replace them with electric cars, renewable energy plants
and new, energy-efficient buildings. “Warming in the pipeline” is not, therefore, a very good metaphor to
describe the natural climate system; if we could stop emissions, the warming would stop. However,
when it comes to the decisions we are making to build new, carbon-intensive infrastructure, such as the
Keystone XL pipeline, the expression is quite literally true.
1NC Adaptation
Animals and plants will adapt and thrive – our evidence assumes rapid change
Contescu 12 – Professor Emeritus of Geology and Geography at Roosevelt University, Ph.D. (Lorin,
"600 MILION YEARS OF CLIMATE CHANGE; A CRITIQUE OF THE ANTHROPOGENIC GLOBAL WARMING
HYPOTESIS FROM A TIME-SPACE PERSPECTIVE”, Geo-Eco-Marina, 2012, Issue 18, pgs. 5-25, peer
reviewed, Proquest)
The other side of the coin shows that climate
warming has also important favorable effects, mostly on plants and indirectly
on animals that feed on the plants. Studies concluded that the most feared doubling of the atmospheric CO2 will
increase the productivity of herbs by 30%-50% and of trees by 50%-80%. Many plants will grow faster
and healthier during a warmer climate (Idso et al., 2003), and produce more offsprings. It also appears that plants can
survive quite well when climatic conditions change, even when change is rapid. For instance, cold-adapted
trees can still grow to maturity (though slower) even 100-150 km north of their natural range, and they also grow as well as much as
1,000 km south of their southern boundaries. Shifting climate boundaries will also generate competition among
species of grasses and trees, leading to the selection of those most adaptable to changing conditions. The
conclusion that can be drawn from the above considerations is that both the vegetal and animal kingdoms are far more
resilient and adaptable even for relatively quickly environmental modifications. If a species becomes extinct, a
biological niche becoming thus empty, it will be quickly occupied by another species better adapted to the new
ecological conditions, as bio-ecological history of the planet has demonstrated time and again.
1NC CO2 =/= Key
Even if they win an impact—CO2 doesn’t increase warming—peer-reviewed studies prove
and the IPCC is wrong
Ballonoff 14 – Economist, a former utility rate regulator in Kansas and Illinois, writer for the Cato
Institute (Paul, “A Fresh Look at Climate Change”, Winter, “AN INTERDISCIPLINARY JOURNAL OF PUBLIC
POLICY ANALYSIS”, The Cato Journal, Volume 34, Number 1,
http://object.cato.org/sites/cato.org/files/serials/files/catojournal/2014/2/cato34n1issuelow.pdf#page=119)
The foundation of the modern climate change discussion is the accurate observation that human activity
has significantly increased the atmospheric concentration of CO2, and that such activity is continuing (Tans 2009).
Increased CO2 concentration, especially when amplified by predicted feedback effects thus also is assumed to predict
increasing global average atmospheric temperature. Depending on the degree of warming expected, other serious and
mainly undesired effects are predicted. As The Economist (2013a) observed, the average global temperature did rise on
average over the previous century. Following a 25-year cooling trend post-World War II, temperatures increased at an especially
strong rate in the quarter century ending in 1997. The trend of that warming period, the correlation with increased
CO2, and the fact of human activity causing that CO2 increase apparently supported use of projection
models extending that trend to future years. Such projections were the basis for the UN’s 1997 IPCC analysis on which much
current policy is based. It is thus at least ironic that 1997 was also the last year in which such measured global average temperature increase
took place. One
of the key features of the IPCC forecast, and greenhouse effect forecasts generally, is the
expected feedback loops. One of those is that the presumed drier and hotter conditions on the ground would cause expanded
desertification and deforestation. A distinct kind of greenhouse effect is also predicted from increased CO2
concentration—namely, the aerial fertilization effect, which is that plants grow better in an
atmosphere of higher CO2. Many analysts, such as the IPCC, clearly thought the greater effect would be from
heating, not plant growth. One must assume this was an intentional judgment, as the IPCC was aware of the CO2 aerial fertilization
effect from its 1995 Second Assessment Report, which contained empirical evidence of increased greening in enhanced CO2 environments
(Reilly 2002: 19). In
contrast, climate analysts such as those with the Cato Center for the Study of Science have argued since 1999
that atmospheric temperature is much less sensitive to increased concentration of CO2 (Michaels 1999b).
While in fact heating has not occurred as the IPCC forecasted, greatly increased global biomass is indeed
demonstrated. Well documented evidence shows that concurrently with the increased CO2 levels,
extensive, large, and continuing increase in biomass is taking place globally—reducing deserts, turning
grasslands to savannas, savannas to forests, and expanding existing forests (Idso 2012). That survey
covered 400 peer-reviewed empirical studies, many of which included surveys of dozens to hundreds of
sources. Comprehensive study of global and regional relative greening and browning using NOAA data
showed that shorter-term trends in specific locations may reflect either greening or browning, and also
noted that the rapid pace of greening of the Sahel is due in part to the end of the drought in that region.
Nevertheless, in nearly all regions and globally, the overall effect in recent decades is decidedly toward greening (de
Jong et al. 2012). This result is also the opposite of what the IPCC expected. Global greening in response to
increased CO2 concentrations was clearly predicted by a controlled experiment of the U.S. Water
Conservation Laboratory conducted from 1987 through 2005 (Idso 1991).1 In that study, half of a group of
genetically identical trees were grown in natural conditions and the other half in the same conditions
but in an atmosphere of enhanced CO2 concentration. By 1991 the Agricultural Research Service (ARS) reported that the
trees in the enhanced CO2 environment contained more than 2.8 times more sequestered carbon than
the natural environment trees (i.e., were 2.8 times larger). By 2005, when the experiment was ended, the total additional growth of
the enhanced CO2 trees was 85 percent more than that of the natural-condition trees, both in woody mass and in fruit. One
reason for
expanded growth even into dry environments is a seldom remarked propensity that CO2 induced
growth due to aerial fertilization also greatly increases a plant’s efficiency of use of water. The ARS further
documented this effect in a 2011 study, citing the extensive literature demonstrating that enhanced CO2 environments “impact
growth through improved plant water relations” (Prior et al. 2011). Similar results, both as to aerial
fertilization effect and increased efficiency of water use, were found by the joint study of the USDA and
the U.S. Department of Energy on the effects of CO2 on agricultural production in the United States (Reilly
2002). In that study, the effect of forecasted increased CO2 concentration, together with the increased
warming forecasted, was shown to cause up to 80 percent increases in agricultural productivity, and
decreased use of water since the growth would occur faster and with more efficient water use by plants.
While different crops were forecasted to respond differently, most crops were positively affected, with a range from 10 percent
reduction in yield up to 80 percent increase. Even considering the complex interactions with market conditions, the overall effect was
certainly found to be favorable. Using demonstrated experimental data, the 1991 ARS study also predicted effects of
further or even greatly enhanced atmospheric CO2 concentrations, such as from the expected large
increase that might come (and subsequently did come and is continuing) especially from developing and newly
industrializing countries. Comparing demonstrated warming to that date to the evidence, the ARS study concluded: If past is
prologue to the future, how much more CO2 induced warming is likely to occur? Very little. . . . The
warming yet to be faced cannot be much more than what has already occurred. . . . A doubling of
current emissions, for example, would lead to an atmospheric CO2 content on the order of 700 ppm, which
would probably be climatically acceptable, but only if the earth’s forests are not decimated in the
meantime [Idso 1991: 964–65]. The 1991 study noted that expanded forested areas would allow even greater atmospheric CO2
concentrations. To assure the measured results were accurate and a reasonable basis on which to infer the effect of
global-scale CO2 concentration, the ARS also published results of eight additional distinct empirical
studies of natural processes, each of which independently verified that the measured results found by direct experiment were a
reasonable basis for such extrapolation (Idso 1998). The effects were recently further verified by models whose results
were compared to empirical data on Australian and other arid regions. Modeling water use by plants in
enhanced CO2 environment, the study predicted the effect on plant growth in dry regions and verified
the result empirically compared to actual measurements over a 30-year period (AGU 2013). The data verified the
prediction both in the direction and in the quantity of effect observed: Enhanced CO2 improves water use by plants and
reduces, not increases, dry regions by making them greener. Thus, evidence to date implies that the
view that global temperature is far less sensitive to CO2 than many fear, is likely correct. Simultaneously,
demonstrated experimental evidence on plant growth predicted exactly what the now extensive empirical literature shows: Enhanced
CO2 is associated with greatly increased biomass production, even in dry climates. The extent of
increased CO2 sequestration both in soil and in biomass associated with increased atmospheric
concentration has also been documented (Pan et al. 2011). Those results, while not what the IPCC
predicted, do not imply we should have no concerns about climate policy.
2NC CO2 =/= Key
Empirics—they go negative—data shows temperature rise comes before CO2 spikes
Contescu 12 – Professor Emeritus of Geology and Geography at Roosevelt University, Ph.D. (Lorin,
"600 MILION YEARS OF CLIMATE CHANGE; A CRITIQUE OF THE ANTHROPOGENIC GLOBAL WARMING
HYPOTESIS FROM A TIME-SPACE PERSPECTIVE”, Geo-Eco-Marina, 2012, Issue 18, pgs. 5-25, peer
reviewed, Proquest)
Most unsettling is the fact that data show quite clearly that during glacial-interglacial intervals the rise
in temperature has preceded the increase in atmospheric CO2 and not the other way around (Lee Ray,
1993; Solomon, 2008). Indeed, the analysis of Antarctica ice cores determined that temperatures over the
continent started to rise centuries (more precisely some 800 years) before the atmospheric CO2 levels begun to
increase.
Cyber Crime Adv
Notes
At least there’s a coherent arg? Nuclear terror attack ev doesn’t indicate where attack would occur—
don’t let them be shifty about this.
30 second explainer: backdoors cause organized crime b/c they’d be exploited, that exploitation funds organized
crime, then some random nuke terror card that doesn’t have a coherent internal link with the AFF, then retaliation
and extinction
CX Questions
Zaitseva doesn’t actually talk about organized crime in Russia, what are the scenarios for nuke
terror/what’s the internal link from Russian crime to nuke terror?
Where would the US retaliate and how would we attribute nuclear attack?
Ayson ev assumes US retaliation in a world in which US-Russia and US-China are already exchanging
military threats—where’s the ev that this is happening in the status quo?
1NC No Impact
No impact to backdoors, and there are already solutions to backdoors – their evidence
Cohn 14
(Cindy, writer for the Electronic Frontier Foundation, 9-26-14, “Nine Epic Failures of Regulating
Cryptography,” https://www.eff.org/deeplinks/2014/09/nine-epic-failures-regulating-cryptography, BC)
For those who weren't following digital civil liberties issues in 1995, or for those who have forgotten, here's
a refresher list of why forcing
companies to break their own privacy and security measures by installing a back door was a bad idea 15
years ago:¶ It will create security risks. Don't take our word for it. Computer security expert Steven Bellovin has
explained some of the problems. First, it's hard to secure communications properly even between two parties. Cryptography with a back
door adds a third party, requiring a more complex protocol, and as Bellovin puts it: "Many previous attempts to add such features have resulted in
new, easily exploited security flaws rather than better law enforcement access." It doesn't end there. Bellovin notes: ¶ Complexity in the protocols isn't the only
problem; protocols require computer programs to implement them, and more complex code generally creates more exploitable bugs. In the most notorious
incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called 'lawful intercept' mechanisms in the switch — that is, the
features designed to permit the police to wiretap calls easily — was abused by the attacker to monitor at least a hundred cell phones, up to and including the prime
minister's. This attack would not have been possible if the vendor hadn't written the lawful intercept code. ¶ More recently, as security researcher Susan Landau
explains, "an IBM researcher found that a
Cisco wiretapping architecture designed to accommodate law-enforcement
requirements — a system already in use by major carriers — had numerous security holes in its
design. This would have made it easy to break into the communications network and surreptitiously
wiretap private communications."¶ The same is true for Google, which had its "compliance"
technologies hacked by China.¶ This isn't just a problem for you and me and millions of companies that need secure communications. What will
the government itself use for secure communications? The FBI and other government agencies currently use many commercial products — the same ones they
want to force to have a back door. How will the FBI stop people from un-backdooring their deployments? Or does the government plan to stop using commercial
communications technologies altogether?¶ It won't stop the bad guys. Users who want strong encryption will be able to get it — from Germany, Finland, Israel, and
many other places in the world where it's offered for sale and for free. In 1996, the National Research Council did a study called "Cryptography's Role in Securing the
Information Society," nicknamed CRISIS. Here's what they said: ¶ Products using unescrowed encryption are in use today by millions of users, and such products are
available from many difficult-to-censor Internet sites abroad. Users could pre-encrypt their data, using whatever means were available, before their data were
accepted by an escrowed encryption device or system. Users
could store their data on remote computers, accessible
through the click of a mouse but otherwise unknown to anyone but the data owner, such practices
could occur quite legally even with a ban on the use of unescrowed encryption. Knowledge of strong encryption
techniques is available from official U.S. government publications and other sources worldwide, and experts understanding how to use such knowledge might well
be in high demand from criminal elements. — CRISIS Report at 303¶ None of that has changed. And of course, more encryption technology is more readily available
today than it was in 1996. So unless the government wants to mandate that you are forbidden to run anything that is not U.S. government approved on your devices,
they won't stop bad guys from getting access to strong encryption.¶ It will harm innovation. In order to ensure that no
will harm innovation. In order to ensure that no
"untappable" technology exists, we'll likely see a technology mandate and a draconian regulatory
framework. The implications of this for America's leadership in innovation are dire. Could Mark
Zuckerberg have built Facebook in his dorm room if he'd had to build in surveillance capabilities before
launch in order to avoid government fines? Would Skype have ever happened if it had been forced to include an
artificial bottleneck to allow government easy access to all of your peer-to-peer communications? This has especially
serious implications for the open source community and small innovators. Some open source developers
have already taken a stand against building back doors into software.¶ It will harm US business. If, thanks to
this proposal, US businesses cannot innovate and cannot offer truly secure products, we're just handing
business over to foreign companies who don't have such limitations. Nokia, Siemens, and Ericsson
would all be happy to take a heaping share of the communications technology business from US
companies. And it's not just telecom carriers and VOIP providers at risk. Many game consoles that people can use to play over
the Internet, such as the Xbox, allow gamers to chat with each other while they play. They'd have to be
tappable, too.
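Note for the block: if you get pressed on what "adding a third party" to an encrypted system actually means, the short version is that a key-escrow backdoor wraps each message key a second time for the escrow holder, so one stolen escrow key opens all of the traffic. A minimal toy sketch of that structure (Python with the pyca/cryptography package; the scheme, key names, and message are hypothetical, not any vendor's actual design):

```python
from cryptography.fernet import Fernet

# Toy key-escrow model: a per-message session key encrypts the content,
# then that session key is wrapped once for the recipient and once for
# the escrow holder (the mandated "back door").
recipient_secret = Fernet.generate_key()
escrow_secret = Fernet.generate_key()            # the back-door key

session_secret = Fernet.generate_key()           # fresh key for this message
ciphertext = Fernet(session_secret).encrypt(b"meet at the usual place")

wrapped_for_recipient = Fernet(recipient_secret).encrypt(session_secret)
wrapped_for_escrow = Fernet(escrow_secret).encrypt(session_secret)   # extra copy

# Whoever obtains escrow_secret (hacker, insider, another government) can
# unwrap the session key for every message wrapped this way and read it all,
# which is the concentrated-target problem the card describes.
recovered = Fernet(escrow_secret).decrypt(wrapped_for_escrow)
print(Fernet(recovered).decrypt(ciphertext))     # b'meet at the usual place'
```

The escrow path adds nothing for legitimate users; it is pure extra attack surface plus a single high-value key, which is why Bellovin and Landau treat mandated access as a new vulnerability rather than a neutral feature.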
1NC Cyber Inev
Cybersecurity vulnerabilities are inevitable
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army Judge
Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military assignments, including as
the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States, Chief of International Law for U.S. Army
Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers of 'Going Dark': Why Congress Must Require a
Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
Like CALEA, a statutory obligation along the lines proposed herein will inevitably trigger criticisms and generate concerns. One obvious
criticism is that the creation of an escrow key or the maintenance of a duplicate key by a manufacturer
would introduce an unacceptable risk of compromise for the device. This argument presupposes that the
risk is significant, that the costs of its exploitation are large, and that the benefit is not worth the risk.
Yet manufacturers, product developers, service providers and users constantly introduce such risks.
Nearly every feature or bit of code added to a device introduces a risk, some greater than others. The
vulnerabilities that have been introduced to computers by software such as Flash, ActiveX controls, Java,
and web browsers are well documented.51 The ubiquitous SQL database, while extremely effective at
helping web designers create effective data driven websites, is notorious for its vulnerability to SQL
injection attacks.52 The adding of microphones to electronic devices opened the door to aural
interceptions. Similarly, the introduction of cameras has resulted in unauthorized video surveillance of
users. Consumers accept all of these risks, however, since we, as individual users and as a society, have
concluded that they are worth the cost. Some will inevitably argue that no new possible vulnerabilities
should be introduced into devices to allow the government to execute reasonable, and therefore lawful, searches
for unique and otherwise unavailable evidence. However, this argument implicitly asserts that there is
no, or insignificant, value to society of such a feature. And herein lies the Achilles heel to opponents of
mandated front-door access: the conclusion is entirely at odds with the inherent balance between individual
liberty and collective security central to the Fourth Amendment itself. Nor should lawmakers be deluded
into believing that the currently existing vulnerabilities that we live with on a daily basis are less
significant in scope than the possibility of obtaining complete access to the encrypted contents of a
device. Various malware variants that are so widespread as to be almost omnipresent in our online
community achieve just such access through what would seem like minor cracks in the defense of
systems.53 One example is the Zeus malware strain, which has been tied to the unlawful online theft of
hundreds of millions of dollars from U.S. companies and citizens and gives its operator complete access
to and control over any computer it infects.54 It can be installed on a machine through the simple mistake of viewing an
infected website or email, or clicking on an otherwise innocuous link.55 The malware is designed to not only bypass
malware detection software, but to deactivate the software’s ability to detect it.56 Zeus and the many other
variants of malware that are freely available to purchasers on dark-net websites and forums are responsible for the theft of funds from
countless online bank accounts (the credentials having been stolen by the malware’s key-logger features), the theft of credit card information,
and innumerable personal identifiers.57
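If the Corn evidence gets pressed in cross-ex on what "SQL injection" means, here is a minimal illustration you can paraphrase (Python's built-in sqlite3 module; the table, names, and values are invented for the example):

```python
import sqlite3

# A tiny in-memory database standing in for any data-driven website.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"   # attacker-chosen form input

# Vulnerable pattern: pasting the input into the SQL string lets the
# attacker's OR clause match every row and leak the secret.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print(leaked)   # [('hunter2',)]

# Parameterized query: the driver treats the whole input as a literal
# string, so the injection attempt matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(safe)     # []
```

Corn's point is that users already live with this whole class of accepted risk, so the neg spin is that a mandated access feature is not categorically different from risks consumers tolerate every day.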
2NC Cyber Inev
Security issues are inevitable
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-partii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning that the
FBI
was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of the renewed encryption
debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me start by breaking the encryption
debate into two
distinct sets of questions: One is the conceptual question of whether a world of end-to-end
strong encryption is an attractive idea. The other is whether—assuming it is not an attractive idea and that one wants to
ensure that authorities retain the ability to intercept decrypted signal—an extraordinary access scheme is technically
possible without eroding other essential security and privacy objectives. These questions often get mashed together,
both because tech companies are keen to market themselves as the defenders of their users' privacy interests and because of the libertarian
ethos of the tech community more generally. But the
questions are not the same, and it's worth considering them
separately. Consider the conceptual question first. Would it be a good idea to have a world-wide
communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could
snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the
FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an internet as
secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries
may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially
close call. The belief in principle in creating a giant world-wide network on which surveillance is
technically impossible is really an argument for the creation of the world's largest ungoverned space. I
understand why techno-anarchists find this idea so appealing. I can't imagine for moment, however, why
anyone else would. Consider the comparable argument in physical space: the creation of a city in which
authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto
what happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is
not controversial when you're talking about Yemen or Somalia. I see nothing more attractive about the
creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS
communications with followers or to follow child predators into chatrooms where they go after kids. The
trouble is that this conceptual position does not answer the entirety of the policy question before us. The reason is that the case against
preserving some form of law enforcement access to decrypted signal is not only a conceptual embrace of the technological obsolescence of
surveillance. It
is also a series of arguments about the costs—including the security costs—of maintaining
the capacity to decrypt captured signal. Consider the report issued this past week by a group of computer security experts
(including Lawfare contributing editors Bruce Schneier and Susan Landau), entitled "Keys Under Doormats: Mandating Insecurity By Requiring
Government Access to All Data and Communications." The report does not make an in-principle argument or a conceptual argument against
extraordinary access. It argues, rather, that the effort to build such a system risks eroding cybersecurity in ways far more important than the
problems it would solve. The authors, to summarize, make three claims in support of the broad claim that any exceptional access system would
"pose . . . grave security risks [and] imperil innovation." What
are those "grave security risks"? "[P]roviding exceptional
access to communications would force a U-turn from the best practices now being deployed to make
the Internet more secure. These practices include forward secrecy—where decryption keys are deleted immediately
after use, so that stealing the encryption key used by a communications server would not compromise earlier or later communications. A
related technique, authenticated encryption, uses the same temporary key to guarantee confidentiality and to verify that the message has not
been forged or tampered with." "[B]uilding
in exceptional access would substantially increase system complexity"
and "complexity is the enemy of security." Adding code to systems increases that system's attack surface, and a certain number
of additional vulnerabilities come with every marginal increase in system complexity. So by requiring a potentially complicated new system to
be developed and implemented, we'd be effectively guaranteeing more vulnerabilities for malicious actors to hit. "[E]xceptional access
would create concentrated targets that could attract bad actors." If we require tech companies to retain some means
of accessing user communications, those keys have to stored somewhere, and that storage then becomes an unusually high-stakes target for
malicious attack. Their theft then compromises, as did the OPM hack, large numbers of users. The strong implication of the report is that
these issues are not resolvable, though the report never quite says that. But at a minimum, the authors raise a series of important
questions about whether such a system would, in practice, create an insecure internet in general—rather than one whose general security has
the technical capacity to make security exceptions to comply with the law. There is some reason, in my view, to suspect that the
picture
may not be quite as stark as the computer scientists make it seem. After all, the big tech companies
increase the complexity of their software products all the time, and they generally regard the increased
attack surface of the software they create as a result as a mitigatable problem. Similarly, there are lots of
high-value intelligence targets that we have to secure and would have big security implications if we
could not do so successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without
tools in the cybersecurity department. The real question, in my view, is whether a system of the sort Comey imagines could be
built in fashion in which the security gain it would provide would exceed the heightened security risks
the extraordinary access would involve. As Herb Lin puts it in his excellent, and admirably brief, Senate testimony the other day,
this is ultimately a question without an answer in the absence of a lot of new research. "One side says [the] access [Comey is seeking] inevitably
weakens the security of a system and will eventually be compromised by a bad guy; the other side says it doesn’t weaken security and won’t be
compromised. Neither side can prove its case, and we see a theological clash of absolutes." Only when someone actually does the research and
development and tries actually to produce a system that meets Comey's criteria are we going to find out whether it's doable or not. And
therein lies the rub, and the real meat of the policy problem, in my view: Who's going to do this research? Who's going to conduct the
sustained investment in trying to imagine a system that secures communications except from government when and only government has a
warrant to intercept those communications? The assumption of the computer scientists in their report is that the burden of that research lies
with the government. "Absent a concrete technical proposal," they write, "and without answers to the questions raised in this report,
legislators should reject out of hand any proposal to return to the failed cryptography control policy of the 1990s." Indeed, their most central
recommendation is that the burden of development is on Comey. "Our strong recommendation is that anyone proposing regulations should
first present concrete technical requirements, which industry, academics, and the public can analyze for technical weaknesses and for hidden
costs." In his testimony, Herb supports this call, though he acknowledges that it is not the inevitable route: the government has not yet
provided any specifics, arguing that private vendors should do it. At the same time, the vendors won’t do it, because [their] customers aren’t
demanding such features. Indeed, many customers would see such features as a reason to avoid a given vendor. Without specifics, there will be
no progress. I believe the government is afraid that any specific proposal will be subject to enormous criticism—and that’s true—but the
government is the party that wants . . . access, and rather than running away from such criticism, it should embrace any resulting criticism as an
opportunity to improve upon its initial designs." Herb might also have mentioned that lots of people in the academic tech community who
would be natural candidates to help develop such an access system are much more interested in developing encryption systems to keep the
feds out than to—under any circumstances—let them in. The tech community has spent a lot more time and energy arguing against the
plausibility and desirability of implementing what Comey is seeking than it has spent in trying to develop systems that deliver it while
mitigating the risks such a system might pose. For both industry and the tech communities, more broadly, this is government's problem, not
their problem. Yet reviving the Clipper Chip model—in which government develops a fully-formed system and then puts it out publicly for the
community to shoot down—is clearly not what Comey has in mind. He is talking in very different language: the language of performance
requirements. He wants to leave the development task to Silicon Valley to figure out how to implement government's requirements. He
wants to describe what he needs—decrypted signal when he has a warrant—and leave the companies
to figure out how to deliver it while still providing secure communications in other circumstances to
their customers. The advantage to this approach is that it potentially lets a thousand flowers bloom.
Each company might do it differently. They would compete to provide the most security consistent
with the performance standard. They could learn from each other. And government would not be in
the position of developing and promoting specific algorithms. It wouldn't even need to know how the
task was being done.
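For cross-ex on the "Keys Under Doormats" warrant, "forward secrecy" just means each conversation uses throwaway keys that are deleted afterward, so a later key theft cannot unlock recorded traffic; an escrow mandate would have to abandon exactly that practice. A rough sketch of the idea (Python with the pyca/cryptography package; the variable names and framing are illustrative, and real protocols also authenticate the ephemeral keys with long-term identity keys, which is omitted here):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive(shared):
    # turn the raw Diffie-Hellman output into a 32-byte session key
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"toy-session").derive(shared)

# Each side mints a throwaway key pair for this one conversation.
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

# Both sides arrive at the same session key from the ephemeral exchange.
alice_key = derive(alice_eph.exchange(bob_eph.public_key()))
bob_key = derive(bob_eph.exchange(alice_eph.public_key()))
assert alice_key == bob_key

# After the conversation the ephemeral private keys are deleted, so a later
# theft of either party's long-term identity key cannot reconstruct the
# session key. An exceptional-access mandate would require retaining a copy
# of material like this somewhere, the "U-turn from best practices" the
# report flags.
del alice_eph, bob_eph
```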
1NC Crime Inev
Organized crime inevitable--No jurisdiction, weak states, trade offs, too adaptable
Dr. Phil Williams is Professor of International Security in the Graduate School of Public and International Affairs at the University of
Pittsburgh 8-18-2006 http://www.oup.com/uk/orc/bin/9780199289783/baylis_chap09.pdf
There are several reasons for this. First, in spite of growing international cooperation among national law enforcement agencies, law
enforcement remains a national activity confined to a single territorial jurisdiction, while organized crime is
transnational in scope. In effect, law enforcement still continues to operate in a bordered world, whereas organized crime
operates in a borderless world. Second, although the United States placed a high priority on denying safe haven or sanctuary to
international criminals, many states have limited capacity to enforce laws against organized crime. Consequently,
transnational criminal organizations are able to operate from safe havens, using a mix of corruption and violence to
perpetuate the weakness of the states from which they operate. Nowhere is this more evident than in Mexico, where a war for
control of routes and markets on the northern border has led to violence spilling over into the United States. Third, all too often attacking
transnational criminal organizations has been subordinated to other goals and objectives. In spite of the emphasis on
attacking smuggling and smugglers, for example, this is not something which has been allowed to interfere with global trade. In effect, reaping
the benefits of globalization, tacitly at least, has been deemed more important than combating transnational organized crime. Not surprisingly,
therefore, as Moises Naim has pointed out, ‘there is simply nothing in the cards that points to an imminent reversal of fortune for the myriads
of networks active in illicit trade. It is even difficult to find evidence of substantial progress in reversing or even just containing the
growth of these illicit markets’ (2005: 221). Fourth, both transnational criminal organizations and the illicit markets in which
they operate are highly adaptable. Law enforcement success against a particular organization, for example, tends simply to offer
opportunities for its rivals to fill the gap. Moreover, the ability of organizations to move from one illicit product to another
makes them even more difficult to combat. In recent years, for example, Burmese warlords have moved from opium to
methamphetamine production and have become major suppliers to Asian markets for the drug.
2NC Crime Inev
Organized crime inevitable--Globalization
Dr. Phil Williams is Professor of International Security in the Graduate School of Public and International Affairs at the University of
Pittsburgh 8-18-2006 http://www.oup.com/uk/orc/bin/9780199289783/baylis_chap09.pdf
Globalization has had paradoxical consequences for both transnational organized crime and international terrorism, acting as
both motivator and facilitator. This is not entirely surprising. Although globalization has had many beneficial consequences, it has losers
as well as winners—and the pain for the losers can be enormous. Indeed, globalization has had a disruptive impact on patterns of
employment, on traditional cultures, and on the capacity of states to deal with problems facing citizens within their
jurisdictions, as well as problems that span multiple jurisdictions. In some instances, globalization has created massive economic
dislocation that has pushed people from the legal economy to the illegal. In other cases, globalization has been seen as merely a cover for
Western and especially United States cultural and economic domination—domination that has created enough resentment to help fuel what
has become the global jihad movement. At the same time, globalization has acted as a facilitator for a whole set of illicit
activities ranging from drugs and arms trafficking to the use of large­scale violence against innocent civilians. Many observers assumed that in
the post­cold war world, democracy, peace, stability and order could easily be exported from the advanced post­industrialized states to areas of
conflict and instability (Singer and Wildavsky 1993). In fact the opposite has occurred. Al­Qaeda was able to attack the United States homeland
while based in Afghanistan, thereby illustrating what Robert Keohane described as the transformation of geography from a barrier to a
connector (2002: 275). Indeed, one of the most important characteristics of a globalized world is that the interconnections among different
parts of the world are dense, communication is cheap and easy, and transportation and transmission, whether of disease, crime, or violence, are
impossible to stop. Transnational networks link businessmen, families, scientists, and scholars; they also link members of terrorist networks and
criminal organizations. In some cases, networks are successfully integrated into the host societies. In other instances, however, migrants find
themselves in what Castells called ‘zones of social exclusion’ (1998: 72). Muslim immigration from North Africa and Pakistan to Western Europe,
for example, has resulted in marginalization and alienation that were evident in the widespread riots in France in the late months of 2005 and
that have also helped to fuel radical Islamic terrorism in Western Europe. Moreover, for second and third generation immigrants who have
limited opportunities in the licit economy, the illegal economy and either petty crime or organized crime can appear as an attractive alternative.
Ethnic networks of this kind can provide both cover and recruitment opportunities for transnational criminal and terrorist organizations. In
effect, therefore, globalization has acted as a force multiplier for both criminal and terrorist organizations, providing them
with new resources and new opportunities.
Organized crime inevitable--Organization advantage
Dr. Phil Williams is Professor of International Security in the Graduate School of Public and International Affairs at the University of
Pittsburgh 8-18-2006 http://www.oup.com/uk/orc/bin/9780199289783/baylis_chap09.pdf
The bottom line on all this is that, even though the United States has developed clear strategies for combating both organized
crime and terrorism, the implementation of these strategies is clearly hindered by the dominance of governmental
structures that were well-suited to the cold war against a slow, bureaucratic, ponderous adversary but are singularly ill-suited to
combating agile transnational adversaries. In the final analysis, fighting terrorism and transnational organized crime is not only
about strategy, it is also about appropriate organizational structures to implement strategy. And in that respect, terrorists and
criminals have the advantage. The result is that the efforts of the United States and the international community to combat both
crime and terrorism are unlikely to meet with unqualified success.
Organized crime inevitable--Too agile, bureaucratic inefficiency
Dr. Phil Williams is Professor of International Security in the Graduate School of Public and International Affairs at the University of
Pittsburgh 8-18-2006 http://www.oup.com/uk/orc/bin/9780199289783/baylis_chap09.pdf
In many respects, the threats posed to the United States and more broadly to the international community of states by transnational
organized crime and terrorism can be understood as an important manifestation of the new phase in world politics in which some of the key
interactions are between the state system and what James Rosenau (1990) termed the ‘multi‐centric system’, composed of ‘sovereignty‐free
actors’. In this connection, it is notable that the first serious challenge to United States hegemony in the post-cold war world came not from
another state but from a terrorist network. Moreover, both criminals and terrorists have certain advantages over states: they are
agile, distributed, highly dynamic organizations with a capacity to morph or transform themselves when under
pressure. States in contrast are slow, clumsy, hierarchical, and bureaucratic and, although they have the capacity to bring
lots of resources to bear on a problem, can rarely do this with speed and efficiency. As discussed above, in the United States war on terror,
the strategy for the war of ideas was very slow to develop, not least because of inter‐agency differences. The same has been true in the
effort to combat terrorist finances. As the Government Accountability Office (2005) has noted, ‘the U.S. government lacks an integrated
strategy to coordinate the delivery of counter‐terrorism financing training and technical assistance to countries vulnerable to terrorist
financing. Specifically, the effort does not have key stakeholder acceptance of roles and procedures, a strategic alignment of resources with
needs, or a process to measure performance’. Differences of perspective and approach between the Departments of State
and Treasury have also seriously bedevilled the effort to ‘enable weak states’, one of the keys to the multilateral component of
the administration’s strategy to combat terrorism. Similar problems have been evident in efforts to combat organized crime
and drug trafficking. A striking example is the counter-drug intelligence architecture for the United States which has the Crime and Narcotics
Center at CIA looking at the international dimension of drug trafficking, the National Drug Intelligence Center responsible for domestic
aspects of the problem, the Treasury’s Financial Crimes Enforcement Network focusing on money laundering, and the El Paso Intelligence
Center responsible for tactical intelligence. Although this architecture provides clear roles and responsibilities, it also creates bureaucratic
seams in the effort to understand and assess what is clearly a seamless process of drug trafficking and money laundering across borders.
Although good information exchanges can ease this problem, the architecture is far from optimal.
1NC No Retaliation
Domestic and international opposition block retaliation.
Bremmer 4
(Ian, President – Eurasia Group and Senior Fellow – World Policy Institute, New
Statesman, 9-13, Lexis)
This time, the public response would move much more quickly from shock to anger; debate over how America should respond would begin immediately. Yet
it is difficult to
imagine how the Bush administration could focus its response on an external enemy. Should the US send 50,000 troops to the Afghan­Pakistani border to
intensify the hunt for Osama Bin Laden and "step up" efforts to attack the heart of al­Qaeda? Many would wonder if that wasn't what the administration pledged to do after the attacks three
years ago. The president would face intensified criticism from those who have argued all along that Iraq was a distraction from "the real war on terror". And what if a significant number of the
terrorists responsible for the pre-election attack were again Saudis? The Bush administration could hardly take military action against the Saudi
government at a time when crude-oil prices are already more than $45 a barrel and global supply is stretched to the limit. While the Saudi royal
family might support a co-ordinated attack against terrorist camps, real or imagined, near the Yemeni border - where recent searches for al-Qaeda have concentrated - that would seem like a
trivial, insufficient retaliation for an attack on the US mainland. Remember how the Republicans criticised Bill Clinton's administration for ineffectually "bouncing the rubble" in Afghanistan
after the al­Qaeda attacks on the US embassies in Kenya and Tanzania in the 1990s. So what kind of response might be credible? Washington's concerns about Iran are rising. The 9/11
commission report noted evidence of co­operation between Iran and al­Qaeda operatives, if not direct Iranian advance knowledge of the 9/11 hijacking plot. Over the past few weeks, US
officials have been more explicit, too, in declaring Iran's nuclear programme "unacceptable". However, in the absence of an official Iranian claim of responsibility for this hypothetical terrorist
attack, the domestic opposition to such a war and the international outcry it would provoke would
make quick action against Iran unthinkable. In short, a decisive response from Bush could not be external. It would have to be domestic. Instead of Donald
Rumsfeld, the defence secretary, leading a war effort abroad, Tom Ridge, the homeland security secretary, and John Ashcroft, the attorney general, would pursue an anti­terror campaign at
home. Forced to use legal tools more controversial than those provided by the Patriot Act,
Americans would experience stepped‐up
domestic surveillance and border controls, much tighter security in public places and the detention of a large number of suspects. Many Americans would undoubtedly
support such moves. But concern for civil liberties and personal freedom would ensure that the government would
have nowhere near the public support it enjoyed for the invasion of Afghanistan.
2NC No Retaliation
Obama won’t retaliate to terrorist attack
Crowley 10
(Michael, Senior Editor – New Republic, “Obama and Nuclear Deterrence”, The New
Republic, 1-5, http://www.tnr.com/node/72263)
As the story notes, some experts don't place much weight on how our publicly-stated doctrine emerges because they don't expect
foreign nations to take it literally. And the reality is that any decisions about using nukes will certainly be case-by-case. But I'd
still like to see some wider discussion of the underlying questions, which are among the most consequential that policymakers can consider. The questions are particularly vexing when it comes
to terrorist groups and rogue states. Would we, for instance, actually nuke Pyongyang if it sold a weapon to terrorists
who used it in America? That implied threat seems to exist, but I actually doubt that a President Obama--
or any president, for that matter--would go through with it.
No escalation – studies show the public won’t support military intervention in the
name of terrorism
Huddy et al 05 (Leonie, Department of Political Science SUNY at Stony Brook Amer. Journal Poli. Sci., Vol 49, no 3)
The findings from this study lend further insight into the future trajectory of support for antiterrorism measures in the United States when we
consider the potential effects of anxiety. Security threats in this and other studies increase support for military action (Jentleson 1992; Jentleson
and Britton 1998; Herrmann, Tetlock, and Visser 1999). But anxious
respondents were less supportive of belligerent
military action against terrorists, suggesting an important source of opposition to military intervention.
In the aftermath of 9/11, several factors were consistently related to heightened levels of anxiety and
related psychological reactions, including living close to the attack sites (Galea et al. 2002; Piotrkowski and Brannen 2002; Silver et al.
2002), and knowing someone who was hurt or killed in the attacks (in this study). It is difficult to say what might happen if the United States
were attacked again in the near future. Based on our results, it is plausible that a
future threat or actual attack directed at a different geographic region would
broaden the number of individuals directly affected by terrorism and concomitantly
raise levels of anxiety. This could, in turn, lower support for overseas military action. In contrast, in the absence
of any additional attacks levels of anxiety are likely to decline slowly over time (we observed a slow decline in this study), weakening opposition
to future overseas military action. Since our conclusions are based on analysis of reactions to a single event in a country that has rarely felt the
effects of foreign terrorism, we
should consider whether they can be generalized to reactions to other terrorist
incidents or to reactions under conditions of sustained terrorist action. Our answer is a tentative yes, although
there is no conclusive evidence on this point as yet. Some of our findings corroborate evidence from Israel, a country that has prolonged
experience with terrorism. For example, Israeli researchers find that perceived risk leads to increased vilification of a threatening group and
support for belligerent action (Arian 1989; Bar­Tal and Labin 2001). There is also evidence that Israelis experienced fear during the Gulf War,
especially in Tel Aviv where scud missiles were aimed (Arian and Gordon 1993). What is missing, however, is any evidence that anxiety tends to
undercut support for belligerent antiterrorism measures under conditions of sustained threat. For the most part, Israeli research has not
examined the distinct political effects of anxiety.
1NC No Nuclear Terror
No chance of nuclear terror attack---too tough to execute
John Mueller and Mark G. Stewart 12, Senior Research Scientist at the Mershon Center for
International Security Studies and Adjunct Professor in the Department of Political Science, both at Ohio
State University, and Senior Fellow at the Cato Institute AND Australian Research Council Professorial
Fellow and Professor and Director at the Centre for Infrastructure Performance and Reliability at the
University of Newcastle, "The Terrorism Delusion," Summer, International Security, Vol. 37, No. 1,
politicalscience.osu.edu/faculty/jmueller//absisfin.pdf
In 2009, the U.S. Department of Homeland Security (DHS) issued a lengthy report on protecting the homeland. Key to achieving such an
objective should be a careful assessment of the character, capacities, and desires of potential terrorists targeting that homeland. Although the
report contains a section dealing with what its authors call “the nature of the terrorist adversary,” the section devotes only two sentences to
assessing that nature: “The number and high profile of international and domestic terrorist attacks and disrupted plots during the last two
decades underscore the determination and persistence of terrorist organizations. Terrorists have proven
to be relentless,
patient, opportunistic, and flexible, learning from experience and modifying tactics and targets to exploit
perceived vulnerabilities and avoid observed strengths.”8¶ This description may apply to some terrorists somewhere,
including at least a few of those involved in the September 11 attacks. Yet, it scarcely describes the vast majority of those
individuals picked up on terrorism charges in the United States since those attacks. The inability of the DHS to
consider this fact even parenthetically in its fleeting discussion is not only amazing but perhaps
delusional in its single-minded preoccupation with the extreme.¶ In sharp contrast, the authors of the case studies, with
remarkably few exceptions, describe their subjects with such words as incompetent, ineffective, unintelligent, idiotic, ignorant, inadequate,
unorganized, misguided, muddled, amateurish, dopey, unrealistic, moronic, irrational, and foolish.9 And in nearly all of the cases where an
operative from the police or from the Federal Bureau of Investigation was at work (almost half of the total), the most appropriate descriptor
would be “gullible.”¶ In all, as Shikha Dalmia has put it, would-be
terrorists need to be “radicalized enough to die for
their cause; Westernized enough to move around without raising red flags; ingenious enough to exploit
loopholes in the security apparatus; meticulous enough to attend to the myriad logistical details that
could torpedo the operation; self-sufficient enough to make all the preparations without enlisting
outsiders who might give them away; disciplined enough to maintain complete secrecy; and—above
all—psychologically tough enough to keep functioning at a high level without cracking in the face of
their own impending death.”10 The case studies examined in this article certainly do not abound with people
with such characteristics. ¶ In the eleven years since the September 11 attacks, no terrorist has been able to
detonate even a primitive bomb in the United States, and except for the four explosions in the London transportation
system in 2005, neither has any in the United Kingdom. Indeed, the only method by which Islamist terrorists have
managed to kill anyone in the United States since September 11 has been with gunfire—inflicting a total of
perhaps sixteen deaths over the period (cases 4, 26, 32).11 This limited capacity is impressive because, at one time, small-scale terrorists in the
United States were quite successful in setting off bombs. Noting that the scale of the September 11 attacks has “tended to obliterate America’s
memory of pre-9/11 terrorism,” Brian Jenkins reminds us (and we clearly do need reminding) that the 1970s witnessed sixty to seventy terrorist
incidents, mostly bombings, on U.S. soil every year.12¶ The
situation seems scarcely different in Europe and other
Western locales. Michael Kenney, who has interviewed dozens of government officials and intelligence agents and analyzed court
documents, has found that, in sharp contrast with the boilerplate characterizations favored by the DHS and with the imperatives listed by
Dalmia, Islamist
militants in those locations are operationally unsophisticated, short on know-how, prone to
making mistakes, poor at planning, and limited in their capacity to learn.13 Another study documents
the difficulties of network coordination that continually threaten the terrorists’ operational unity, trust,
cohesion, and ability to act collectively.14¶ In addition, although some of the plotters in the cases targeting the
United States harbored visions of toppling large buildings, destroying airports, setting off dirty bombs, or bringing down the Brooklyn
Bridge (cases 2, 8, 12, 19, 23, 30, 42), all were nothing more than wild fantasies, far beyond the plotters’ capacities
however much they may have been encouraged in some instances by FBI operatives. Indeed, in many of the
cases, target selection is effectively a random process, lacking guile and careful planning. Often, it seems,
targets have been chosen almost capriciously and simply for their convenience. For example, a would-be bomber
targeted a mall in Rockford, Illinois, because it was nearby (case 21). Terrorist plotters in Los Angeles in 2005 drew up a list of targets that were
all within a 20-mile radius of their shared apartment, some of which did not even exist (case 15). In Norway, a neo-Nazi terrorist on his way to
bomb a synagogue took a tram going the wrong way and dynamited a mosque instead.15
2NC No Nuclear Terror
Terrorists aren’t pursuing nuclear attacks
Wolfe 12 – Alan Wolfe is Professor of Political Science at Boston College. He is also a Senior Fellow
with the World Policy Institute at the New School University in New York. A contributing editor of The
New Republic, The Wilson Quarterly, Commonwealth Magazine, and In Character, Professor Wolfe
writes often for those publications as well as for Commonweal, The New York Times, Harper's, The
Atlantic Monthly, The Washington Post, and other magazines and newspapers. March 27, 2012, "Fixated
by “Nuclear Terror” or Just Paranoia?" http://www.hlswatch.com/2012/03/27/fixated-by-nuclear-terror-or-just-paranoia-2/
If one were to read the
most recent unclassified report to Congress on the acquisition of technology
relating to weapons of mass destruction and advanced conventional munitions, it does have a section
on CBRN terrorism (note, not WMD terrorism). The intelligence community has a very toned down
statement that says “several terrorist groups … probably remain interested in [CBRN] capabilities, but not
necessarily in all four of those capabilities. … mostly focusing on low­level chemicals and toxins.” They’re talking about terrorists
getting industrial chemicals and making ricin toxin, not nuclear weapons. And yes, Ms. Squassoni, it is primarily al Qaeda that the
U.S. government worries about, no one else. The trend of worldwide terrorism continues to remain in the realm
of conventional attacks. In 2010, there were more than 11,500 terrorist attacks, affecting about 50,000 victims
including almost 13,200 deaths. None of them were caused by CBRN hazards. Of the 11,000 terrorist attacks in 2009, none
were caused by CBRN hazards. Of the 11,800 terrorist attacks in 2008, none were caused by CBRN hazards.
No successful detonation
Schneidmiller 9 (Chris, Experts Debate Threat of Nuclear, Biological Terrorism, 13 January 2009, http://www.globalsecuritynewswire.org/gsn/nw_20090113_7105.php)
There is an "almost vanishingly small" likelihood that terrorists would ever be able to
acquire and detonate a nuclear weapon, one expert said here yesterday (see GSN, Dec. 2, 2008). In even the most likely scenario of nuclear terrorism,
there are 20 barriers between extremists and a successful nuclear strike on a major city, said
John Mueller, a political science professor at Ohio State University. The process itself is seemingly straightforward but
exceedingly difficult -- buy or steal highly enriched uranium, manufacture a weapon, take the
bomb to the target site and blow it up. Meanwhile, variables strewn across the path to an attack would increase the complexity of the effort, Mueller argued.
Terrorists would have to bribe officials in a state nuclear program to acquire the material, while avoiding a sting by
authorities or a scam by the sellers. The material itself could also turn out to be bad. "Once the purloined material is purloined, [police are] going to be chasing after you.
They are also going to put on a high reward, extremely high reward, on getting the weapon back or getting the fissile material back," Mueller said during a panel discussion at a two‐
day Cato Institute conference on counterterrorism issues facing the incoming Obama administration. Smuggling the material out of a country would mean relying on criminals who
"are very good at extortion" and might have to be killed to avoid a double-cross, Mueller said. The terrorists would then have to find
scientists and engineers willing to give up their normal lives to manufacture a bomb, which would require an expensive and
sophisticated machine shop. Finally, further technological expertise would be needed to sneak the weapon across national borders to its destination point and conduct a successful
detonation, Mueller said. Every obstacle is "difficult but not impossible" to overcome, Mueller said, putting the chance of success at no less than one in three for each. The
likelihood of successfully passing through each obstacle, in sequence, would be
roughly one in 3 1/2 billion, he said, but for argument's sake dropped it to 3 1/2 million. "It's a total gamble. This is a very expensive and difficult thing
to do," said Mueller, who addresses the issue at greater length in an upcoming book, Atomic Obsession. "So unlike buying a ticket to the lottery ... you're basically putting everything,
including your life, at stake for a gamble that's maybe one in 3 1/2 million or 3 1/2 billion."
Other scenarios are even less probable, Mueller said. A nuclear-armed state is "exceedingly unlikely" to hand a weapon to a
terrorist group, he argued: "States just simply won't give it to somebody they can't control."
Terrorists are also not likely to be able to steal a whole weapon, Mueller asserted,
dismissing the idea of "loose nukes." Even Pakistan, which today is perhaps the nation of greatest concern regarding nuclear security, keeps its bombs in
two segments that are stored at different locations, he said (see GSN, Jan. 12). Fear of an "extremely improbable event" such as nuclear terrorism produces support for a wide range
of homeland security activities, Mueller said. He argued that there has been a major and costly overreaction to the terrorism threat ‐‐ noting that the Sept. 11 attacks helped to
precipitate the invasion of Iraq, which has led to far more deaths than the original event. Panel moderator Benjamin Friedman, a research fellow at the Cato Institute, said
academic and governmental discussions of acts of nuclear or biological terrorism have tended to
focus on "worst‐case assumptions about terrorists' ability to use these weapons to kill us." There is need for consideration for what is probable
rather than simply what is possible, he said. Friedman took issue with the finding late last year of an experts' report that an act of WMD
terrorism would "more likely than not" occur in the next half decade unless the international community takes greater action. "I
would say that the report, if you read it, actually offers no analysis to justify that claim, which seems to
have been made to change policy by generating alarm in headlines." One panel speaker offered a
partial rebuttal to Mueller's presentation. Jim Walsh, principal research scientist for the Security Studies Program at the Massachusetts Institute of
Technology, said he agreed that nations would almost certainly not give a nuclear weapon to a
nonstate group, that most terrorist organizations have no interest in seeking out the bomb, and that it
would be difficult to build a weapon or use one that has been stolen.
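For clarity, here is a quick worked check of Mueller's chained-probability arithmetic from earlier in this card (a minimal illustration only; the twenty-barrier count and the one-in-three odds per barrier are Mueller's own figures):

\[
\Pr(\text{successful attack}) \approx \left(\tfrac{1}{3}\right)^{20} = \frac{1}{3^{20}} = \frac{1}{3{,}486{,}784{,}401} \approx \text{one in } 3.5 \text{ billion}
\]

This matches the "one in 3 1/2 billion" figure quoted above; the "one in 3 1/2 million" number is Mueller's deliberately generous concession for argument's sake, not a separate calculation.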
Nukes aren't attractive to al Qaeda
Mueller ’11 (John, IR Professor at Ohio State, PhD in pol sci from UCLA, The Truth about Al
Qaeda, http://www.foreignaffairs.com/articles/68012/john-mueller/the-truth-about-al-qaeda?page=show, August 2, 2011)
As a misguided Turkish proverb holds, "If your enemy be an ant, imagine him to be an elephant." The new
information unearthed
in Osama bin Laden's hideout in Abbottabad, Pakistan, suggests that the United States has been doing so for a full decade.
Whatever al Qaeda's threatening rhetoric and occasional nuclear fantasies, its potential as a menace,
particularly as an atomic one, has been much inflated. The public has now endured a decade of dire
warnings about the imminence of a terrorist atomic attack. In 2004, the former CIA spook Michael Scheuer
proclaimed on television's 60 Minutes that it was "probably a near thing," and in 2007, the physicist Richard Garwin assessed the likelihood of a
nuclear explosion in an American or a European city by terrorism or other means in the next ten years to be 87 percent. By 2008, Defense
Secretary Robert Gates mused that what keeps every senior government leader awake at night is "the thought of a terrorist ending up with a
weapon of mass destruction, especially nuclear." Few, it seems, found much solace in the fact that an al Qaeda computer seized in Afghanistan
in 2001 indicated that the
group's budget for research on weapons of mass destruction (almost all of it
focused on primitive chemical weapons work) was some $2,000 to $4,000. In the wake of the killing of Osama bin Laden, officials
now have more al Qaeda computers, which reportedly contain a wealth of information about the workings of the organization in the
intervening decade. A multi-agency task force has completed its assessment, and according to first reports, it has found that al Qaeda
members have primarily been engaged in dodging drone strikes and complaining about how
cash-strapped they are. Some reports suggest they've also been looking at quite a bit of pornography. The full story is not out yet,
but it seems breathtakingly unlikely that the miserable little group has had the time or
inclination, let alone the money, to set up and staff a uranium-seizing operation, as well as a
fancy, super-high-tech facility to fabricate a bomb. It is a process that requires trusting
corrupted foreign collaborators and other criminals, obtaining and transporting highly guarded
material, setting up a machine shop staffed with top scientists and technicians, and rolling the
heavy, cumbersome, and untested finished product into position to be detonated by a skilled
crew, all the while attracting no attention from outsiders. The documents also reveal that after fleeing Afghanistan,
bin Laden maintained what one member of the task force calls an "obsession" with attacking the United States again, even though 9/11 was in
many ways a disaster for the group. It led to a worldwide loss of support, a major attack on it and on its Taliban hosts, and a decade of furious
and dedicated harassment. And indeed, bin Laden did repeatedly and publicly threaten an attack on the United States. He assured Americans in
2002 that "the youth of Islam are preparing things that will fill your hearts with fear"; and in 2006, he declared that his group had been able "to
breach your security measures" and that "operations are under preparation, and you will see them on your own ground once they are finished."
Al Qaeda's animated spokesman, Adam Gadahn, proclaimed in 2004 that "the streets of America shall run red with blood" and that "the next
wave of attacks may come at any moment." The obsessive desire notwithstanding, such
fulminations have clearly lacked
substance. Although hundreds of millions of people enter the United States legally every year, and countless others illegally, no true al
Qaeda cell has been found in the country since 9/11 and exceedingly few people have been uncovered who
even have any sort of "link" to the organization. The closest effort at an al Qaeda operation within the country was a
decidedly nonnuclear one by an Afghan-American, Najibullah Zazi, in 2009. Outraged at the U.S.-led war on his home country, Zazi attempted to
join the Taliban but was persuaded by al Qaeda operatives in Pakistan to set off some bombs in the United States instead. Under surveillance
from the start, he was soon arrested, and, however "radicalized," he has been talking to investigators ever since, turning traitor to his former
colleagues. Whatever training Zazi received was inadequate; he repeatedly and desperately sought further instruction from his overseas
instructors by phone. At one point, he purchased bomb material with a stolen credit card, guaranteeing that the purchase would attract
attention and that security video recordings would be scrutinized. Apparently, his handlers were so strapped that they could not even advance
him a bit of cash to purchase some hydrogen peroxide for making a bomb. For al Qaeda, then, the operation was a failure in every way -- except
for the ego boost it got by inspiring the usual dire litany about the group's supposedly existential challenge to the United States, to the civilized
world, to the modern state system. Indeed,
no Muslim extremist has succeeded in detonating even a simple
bomb in the United States in the last ten years, and except for the attacks on the London Underground in 2005, neither has
any in the United Kingdom. It seems wildly unlikely that al Qaeda is remotely ready to go nuclear. Outside
of war zones, the
amount of killing carried out by al Qaeda and al Qaeda linkees, maybes, and wannabes
throughout the entire world since 9/11 stands at perhaps a few hundred per year. That's a few
hundred too many, of course, but it scarcely presents an existential, or elephantine, threat. And
the likelihood that an American will be killed by a terrorist of any ilk stands at one in 3.5 million
per year, even with 9/11 included.
Internet Freedom Adv
Notes
30-second explainer: the aff says encryption is key to human rights, human rights are key to democracy promotion, and democracy promotion is key to solving the Diamond 95 impact. (Sorry, it's 4:30 AM and I'm too tired to write more.)
CX Questions
1NC Backdoors Inev
Note: more ev under foreign backdoor CP
The UK will inevitably require backdoors, killing Internet freedom – their evidence
Venezia 7-13
Paul Venezia, system and network architect, and senior contributing editor at InfoWorld, where he
writes analysis, reviews and The Deep End blog, “Encryption with backdoors is worse than useless – it’s
dangerous”, InfoWorld, 7/13/15, http://www.infoworld.com/article/2946064/encryption/encryptionwith-forced-backdoors-is-worse-than-useless-its-dangerous.html, 7/14/15 AV
On the other side of the pond, U.K.
Prime Minister David Cameron has said he wants to either ban strong
encryption or require backdoors to be placed into any encryption code to allow law enforcement to
decrypt any data at any time. The fact that these officials are even having this discussion is a bald demonstration that they do not
understand encryption or how critical it is for modern life. They're missing a key point: The moment you force any form of encryption to contain
a backdoor, that form of encryption is rendered useless. If a backdoor exists, it will be exploited by criminals. This is not a supposition, but a
certainty. It's not an American judge that we're worried about. It's the criminals looking for exploits. We use strong encryption every single day.
We use it on our banking sites, shopping sites, and social media sites. We protect our credit card information with encryption. We encrypt our
databases containing sensitive information (or at least we should). Our economy relies on strong encryption to move money around in
industries large and small. Many high-visibility sites, such as Twitter, Google, Reddit, and YouTube, default to SSL/TLS encryption now. When
there were bugs in the libraries that support this type of encryption, the IT world moved heaven and earth to patch them and eliminate the
vulnerability. Security pros were sweating bullets for the hours, days, and in some cases weeks between the hour Heartbleed was revealed and
the hour they could finally get their systems patched -- and now politicians with no grasp of the ramifications want to introduce a fixed
vulnerability into these frameworks. They are threatening the very foundations of not only Internet commerce, but the health and security of
the global economy. Put simply, if backdoors are required in encryption methods, the Internet would essentially be destroyed, and billions of
people would be put at risk for identity theft, bank and credit card fraud, and any number of other horrible outcomes. Those of us who know
how the security sausage is made are appalled that this is a point of discussion at any level, much less nationally on two continents. It’s
abhorrent to consider. The general idea coming from these camps is that terrorists use encryption to communicate. Thus, if there are
backdoors, then law enforcement can eavesdrop on those communications. Leaving aside the massive vulnerabilities that would be introduced
on everyone else, it’s clear that the terrorists could very easily modify their communications to evade those types of encryption or set up
alternative communication methods. We would be creating holes in the protection used for trillions of transactions, all for naught. Citizens of a
city do not give the police the keys to their houses. We do not register our bank account passwords with the FBI. We do not knowingly or
specifically allow law enforcement to listen and record our phone calls and Internet communications (though that hasn’t seemed to matter).
We should definitely not crack the foundation of secure Internet communications with a backdoor that will only be exploited by criminals or the
very terrorists that we’re supposedly trying to thwart. Remember, if the government can lose an enormous cache of extraordinarily sensitive,
deeply personal information on millions of its own employees, one can only wonder what horrors would be visited upon us if it somehow
succeeded in destroying encryption as well.
2NC Backdoors Inev
Other countries will inevitably build backdoors – their evidence
Dimitri 15, Data Journalist at the Correspondent (Netherlands), "Think piece: How to protect privacy and
security?", Global Conference on CyberSpace 2015, 16-17 April 2015, The Hague, The Netherlands,
https://www.gccs2015.com/sites/default/files/documents/How%20to%20protect%20privacy%20and%2
0security%20in%20the%20crypto%20wars.pdf
Unsound economics The second argument is one of economics. Backdoors can stifle innovation. Even until very recently, communications were
a matter for a few big companies, often state-owned. The architecture of their systems changed slowly, so it was relatively cheap and easy to
build a wiretapping facility into them. Today thousands of start-ups handle communications in one form or another. And with each new feature
these companies provide, the architecture of the systems changes. It would be a big burden for these companies if they had to ensure that
governments can always intercept and decrypt their traffic. Backdoors require centralised information flows, but the most exciting innovations
are moving in the opposite direction, i.e. towards decentralised services. More and more web services are using peer-to-peer technology
through which computers talk directly to one another, without a central point of control. File storage services as well as payment processing
and communications services are now being built in this decentralised fashion. It’s extremely difficult to wiretap these services. And if you were
to force companies to make such wiretapping possible, it would become impossible for these services to continue to exist. A government
that imposes backdoors on its tech companies also risks harming their export opportunities. For
instance, Huawei – the Chinese manufacturer of phones, routers and other network equipment – is
unable to gain market access in the US because of fears of Chinese backdoors built into its hardware. US
companies, especially cloud storage providers, have lost overseas customers due to fears that the NSA or other agencies could access client
data. Unilateral demands for backdoors could put companies in a tight spot. Or, as researcher Julian Sanchez of the libertarian Cato Institute
says: ‘An iPhone that Apple can’t unlock when American cops come knocking for good reasons is also an iPhone they can’t unlock when the
Chinese government comes knocking for bad ones.’
1NC Cyber Inev
Cybersecurity vulnerabilities are inevitable
Corn 7/13
(Geoffrey S. Corn, Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army Judge
Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military assignments, including as
the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States, Chief of International Law for U.S. Army
Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers of 'Going Dark': Why Congress Must Require a
Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
Like CALEA, a statutory obligation along the lines proposed herein will inevitably trigger criticisms and generate concerns. One obvious
criticism is that the creation of an escrow key or the maintenance of a duplicate key by a manufacturer
would introduce an unacceptable risk of compromise for the device. This argument presupposes that the
risk is significant, that the costs of its exploitation are large, and that the benefit is not worth the risk.
Yet manufacturers, product developers, service providers and users constantly introduce such risks.
Nearly every feature or bit of code added to a device introduces a risk, some greater than others. The
vulnerabilities that have been introduced to computers by software such as Flash, ActiveX controls, Java,
and web browsers are well documented.51 The ubiquitous SQL database, while extremely effective at
helping web designers create effective data driven websites, is notorious for its vulnerability to SQL
injection attacks.52 The adding of microphones to electronic devices opened the door to aural
interceptions. Similarly, the introduction of cameras has resulted in unauthorized video surveillance of
users. Consumers accept all of these risks, however, since we, as individual users and as a society, have
concluded that they are worth the cost. Some will inevitably argue that no new possible vulnerabilities
should be introduced into devices to allow the government to execute reasonable, and therefore lawful, searches
for unique and otherwise unavailable evidence. However, this argument implicitly asserts that there is
no, or insignificant, value to society of such a feature. And herein lies the Achilles heel to opponents of
mandated front-door access: the conclusion is entirely at odds with the inherent balance between individual
liberty and collective security central to the Fourth Amendment itself. Nor should lawmakers be deluded
into believing that the currently existing vulnerabilities that we live with on a daily basis are less
significant in scope than the possibility of obtaining complete access to the encrypted contents of a
device. Various malware variants that are so widespread as to be almost omnipresent in our online
community achieve just such access through what would seem like minor cracks in the defense of
systems.53 One example is the Zeus malware strain, which has been tied to the unlawful online theft of
hundreds of millions of dollars from U.S. companies and citizens and gives its operator complete access
to and control over any computer it infects.54 It can be installed on a machine through the simple mistake of viewing an
infected website or email, or clicking on an otherwise innocuous link.55 The malware is designed to not only bypass
malware detection software, but to deactivate the software’s ability to detect it.56 Zeus and the many other
variants of malware that are freely available to purchasers on dark-net websites and forums are responsible for the theft of funds from
countless online bank accounts (the credentials having been stolen by the malware’s key-logger features), the theft of credit card information,
and innumerable personal identifiers.57
2NC Cyber Inev
Security issues are inevitable
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-partii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning that the
FBI
was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of the renewed encryption
debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me start by breaking the encryption
debate into two
distinct sets of questions: One is the conceptual question of whether a world of end-to-end
strong encryption is an attractive idea. The other is whether—assuming it is not an attractive idea and that one wants to
ensure that authorities retain the ability to intercept decrypted signal—an extraordinary access scheme is technically
possible without eroding other essential security and privacy objectives. These questions often get mashed together,
both because tech companies are keen to market themselves as the defenders of their users' privacy interests and because of the libertarian
ethos of the tech community more generally. But the
questions are not the same, and it's worth considering them
separately. Consider the conceptual question first. Would it be a good idea to have a world-wide
communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could
snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the
FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an internet as
secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries
may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially
close call. The belief in principle in creating a giant world-wide network on which surveillance is
technically impossible is really an argument for the creation of the world's largest ungoverned space. I
understand why techno-anarchists find this idea so appealing. I can't imagine for moment, however, why
anyone else would. Consider the comparable argument in physical space: the creation of a city in which
authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto
what happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is
not controversial when you're talking about Yemen or Somalia. I see nothing more attractive about the
creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS
communications with followers or to follow child predators into chatrooms where they go after kids. The
trouble is that this conceptual position does not answer the entirety of the policy question before us. The reason is that the case against
preserving some form of law enforcement access to decrypted signal is not only a conceptual embrace of the technological obsolescence of
surveillance. It
is also a series of arguments about the costs—including the security costs—of maintaining
the capacity to decrypt captured signal. Consider the report issued this past week by a group of computer security experts
(including Lawfare contributing editors Bruce Schneier and Susan Landau), entitled "Keys Under Doormats: Mandating Insecurity By Requiring
Government Access to All Data and Communications." The report does not make an in-principle argument or a conceptual argument against
extraordinary access. It argues, rather, that the effort to build such a system risks eroding cybersecurity in ways far more important than the
problems it would solve. The authors, to summarize, make three claims in support of the broad claim that any exceptional access system would
"pose . . . grave security risks [and] imperil innovation." What
are those "grave security risks"? "[P]roviding exceptional
access to communications would force a U-turn from the best practices now being deployed to make
the Internet more secure. These practices include forward secrecy—where decryption keys are deleted immediately
after use, so that stealing the encryption key used by a communications server would not compromise earlier or later communications. A
related technique, authenticated encryption, uses the same temporary key to guarantee confidentiality and to verify that the message has not
been forged or tampered with." "[B]uilding
in exceptional access would substantially increase system complexity"
and "complexity is the enemy of security." Adding code to systems increases that system's attack surface, and a certain number
of additional vulnerabilities come with every marginal increase in system complexity. So by requiring a potentially complicated new system to
be developed and implemented, we'd be effectively guaranteeing more vulnerabilities for malicious actors to hit. "[E]xceptional access
would create concentrated targets that could attract bad actors." If we require tech companies to retain some means
of accessing user communications, those keys have to stored somewhere, and that storage then becomes an unusually high-stakes target for
malicious attack. Their theft then compromises, as did the OPM hack, large numbers of users. The strong implication of the report is that
these issues are not resolvable, though the report never quite says that. But at a minimum, the authors raise a series of important
questions about whether such a system would, in practice, create an insecure internet in general—rather than one whose general security has
the technical capacity to make security exceptions to comply with the law. There is some reason, in my view, to suspect that the
picture
may not be quite as stark as the computer scientists make it seem. After all, the big tech companies
increase the complexity of their software products all the time, and they generally regard the increased
attack surface of the software they create as a result as a mitigatable problem. Similarly, there are lots of
high-value intelligence targets that we have to secure and would have big security implications if we
could not do so successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without
tools in the cybersecurity department. The real question, in my view, is whether a system of the sort Comey imagines could be
built in fashion in which the security gain it would provide would exceed the heightened security risks
the extraordinary access would involve. As Herb Lin puts it in his excellent, and admirably brief, Senate testimony the other day,
this is ultimately a question without an answer in the absence of a lot of new research. "One side says [the] access [Comey is seeking] inevitably
weakens the security of a system and will eventually be compromised by a bad guy; the other side says it doesn’t weaken security and won’t be
compromised. Neither side can prove its case, and we see a theological clash of absolutes." Only when someone actually does the research and
development and tries actually to produce a system that meets Comey's criteria are we going to find out whether it's doable or not. And
therein lies the rub, and the real meat of the policy problem, in my view: Who's going to do this research? Who's going to conduct the
sustained investment in trying to imagine a system that secures communications except from government when and only government has a
warrant to intercept those communications? The assumption of the computer scientists in their report is that the burden of that research lies
with the government. "Absent a concrete technical proposal," they write, "and without answers to the questions raised in this report,
legislators should reject out of hand any proposal to return to the failed cryptography control policy of the 1990s." Indeed, their most central
recommendation is that the burden of development is on Comey. "Our strong recommendation is that anyone proposing regulations should
first present concrete technical requirements, which industry, academics, and the public can analyze for technical weaknesses and for hidden
costs." In his testimony, Herb supports this call, though he acknowledges that it is not the inevitable route: the government has not yet
provided any specifics, arguing that private vendors should do it. At the same time, the vendors won’t do it, because [their] customers aren’t
demanding such features. Indeed, many customers would see such features as a reason to avoid a given vendor. Without specifics, there will be
no progress. I believe the government is afraid that any specific proposal will be subject to enormous criticism—and that’s true—but the
government is the party that wants . . . access, and rather than running away from such criticism, it should embrace any resulting criticism as an
opportunity to improve upon its initial designs." Herb might also have mentioned that lots of people in the academic tech community who
would be natural candidates to help develop such an access system are much more interested in developing encryption systems to keep the
feds out than to—under any circumstances—let them in. The tech community has spent a lot more time and energy arguing against the
plausibility and desirability of implementing what Comey is seeking than it has spent in trying to develop systems that deliver it while
mitigating the risks such a system might pose. For both industry and the tech communities, more broadly, this is government's problem, not
their problem. Yet reviving the Clipper Chip model—in which government develops a fully-formed system and then puts it out publicly for the
community to shoot down—is clearly not what Comey has in mind. He is talking in very different language: the language of performance
requirements. He wants to leave the development task to Silicon Valley to figure out how to implement government's requirements. He
wants to describe what he needs—decrypted signal when he has a warrant—and leave the companies
to figure out how to deliver it while still providing secure communications in other circumstances to
their customers. The advantage to this approach is that it potentially lets a thousand flowers bloom.
Each company might do it differently. They would compete to provide the most security consistent
with the performance standard. They could learn from each other. And government would not be in
the position of developing and promoting specific algorithms. It wouldn't even need to know how the
task was being done.
1NC Alt Cause
Their evidence concedes there are alt causes to a decline in democracy, specifically the
strengthening of non-democratic nations, which the US cannot reverse
Chenoweth & Stephan 2015
Erica Chenoweth, political scientist at the University of Denver, & Maria J. Stephan, Senior Policy Fellow
at the U.S. Institute of Peace, Senior Fellow at the Atlantic Council, 7-7-2015, "How Can States and Non-State Actors Respond to Authoritarian Resurgence?," Political Violence @ a Glance,
http://politicalviolenceataglance.org/2015/07/07/how-can-states-and-non-state-actors-respond-toauthoritarian-resurgence/
Chenoweth: Why is authoritarianism making a comeback? Stephan: There’s obviously no single answer to this. But part of the
answer is that democracy is losing its allure in parts of the world. When people don’t see the economic and governance benefits of democratic
transitions, they lose hope. Then there’s the compelling “stability first” argument. Regimes
around the world, including China
and Russia, have readily cited the “chaos” of the Arab Spring to justify heavy-handed policies and
consolidating their grip on power. The “color revolutions” that toppled autocratic regimes in Serbia,
Georgia, and Ukraine inspired similar dictatorial retrenchment. There is nothing new about authoritarian regimes
adapting to changing circumstances. Their resilience is reinforced by a combination of violent and non-coercive measures. But authoritarian
paranoia seems to have grown more piqued over the past decade. Regimes have figured out that “people power” endangers their grip on
power and they are cracking down. There’s no better evidence of the effectiveness of civil resistance than the measures that governments take
to suppress it—something you detail in your chapter from my new book. Finally, and importantly, democracy in this country and elsewhere has
taken a hit lately. Authoritarian regimes mockingly cite images of torture, mass surveillance, and the catering to the radical fringes happening in
the US political system to refute pressures to democratize themselves. The financial crisis here and in Europe did not inspire much confidence
in democracy and we are seeing political extremism on the rise in places like Greece and Hungary. Here in the US we need to get our own house
in order if we hope to inspire confidence in democracy abroad.
Alt cause: Economic development key to democracy promotion
Drake et al 00 (William J. Drake was a Senior Associate and the Director of the Project on the
Information Revolution and World Politics at the Carnegie Endowment for International Peace. Shanthi
Kalathil specializes in the political impact of information and communication technology (ICT). Her
research focuses on the impact of ICT in authoritarian regimes, the global digital divide, and security
issues in the information age. Taylor Boas is a Project Associate with the Project on the Information
Revolution and World Politics. “Dictatorships in the Digital Age: Some Considerations on the Internet in
China and Cuba” <http://carnegieendowment.org/2000/10/23/dictatorships-in-digital-age-someconsiderations-on-internet-in-china-and-cuba/4e9e>) NM
The Economy. Economic development and the growth of a middle class may be important contributors
to democratization. Internet-based electronic commerce is set to boom in parts of the developing world
(most notably Asia and Latin America) and will provide many new opportunities for individual
entrepreneurs, small businesses, larger internationally-oriented companies, and consumers. The
resulting invigoration of national economies could help to foster pro-democracy attitudes, e.g., by
increasing demands for transparency, accountability, and "good government" and an end to "crony
capitalist" practices that are out of synch with the ethos of the global Internet economy. Alternatively, in
some cases even Internet-oriented businesspeople and consumers may prefer to go along with an
undemocratic regime than to rock the boat. Hence, it would be worth attempting to gauge the impact of
Internet-based economic activity on the broad tenor of national political cultures, as well as on the
attitudes and political demands of relevant individuals, firms, trade associations, etc.
1NC No Solvency
Aff insufficient – their author says more action than the plan is necessary to solve
internet freedom
Donahoe 14
Eileen Donahoe, director of global affairs at Human Rights Watch. Donahoe previously served as the first
US Ambassador to the United Nations Human Rights Council, "Human Rights in the Digital Age", Just
Security, 12-23-2014, http://justsecurity.org/18651/human-rights-digital-age/
1. Create a Special Rapporteur Mandate on the Right to Privacy at the UN Human Rights Council¶ The first
practical step to take in protecting human rights in the digital realm is to generate global support for the creation of a “special rapporteur”
(essentially an international human rights law expert) for the right to privacy at the UN Human Rights Council in Geneva at its next session in
March. The creation of such a mandate would follow directly from an invitation in the UN General Assembly (UNGA) resolution on The Right to
Privacy in the Digital Age that passed by consensus on Dec. 18 in New York, under the leadership of Brazil and Germany.¶ The core idea is
simple: when everything you say or do can be tracked and intercepted, it has a chilling effect on what you feel free to say, where you feel free
to go, and with whom you choose to meet. These concerns go to the heart of the work of human rights activists and defenders around the
world.¶ The consensus UNGA text expressed growing global concern about the human rights costs and consequences of unchecked mass
surveillance, including the erosion of fundamental freedoms of expression, assembly and association. The resolution invited Human Rights
Council members in Geneva to take up the challenge of protecting privacy in the digital context by considering the creation of a special
procedure mandate holder to address these global concerns. Ideally, this mandate holder would be dedicated to fleshing out the implications of
digital communications technology for the right to privacy, and help articulate how to adhere to the rule of law and ensure protection of human
rights and fundamental freedoms in the digital environment. The international community must stand behind the creation of this urgently
needed international mandate.¶ 2.
Contribute to Development of Multi-Stakeholder Internet Governance¶ A
second practical step that can be taken to reinforce human rights promotion in the digital context would
be to support further development of the multi-stakeholder approach to Internet governance that
prevails today, rather than allow retrenchment toward a multilateral, state based “Westphalian” model
of governance.¶ The Internet itself has in many ways been a boon to the exercise of rights, but also has contributed to the larger trend of
distribution of power away from governments to non-state actors. The Internet, which emerged through the collaboration of technologists and
various other stakeholders, operates through global, trans-boundary connectivity, and does not depend on geographic borders. In effect, the
Internet challenged the nation-state system that lies at the heart of the UN structure, the so-called “Westphalian” model. While individuals
have been empowered through global connectivity and the free flow of information across borders, the territory-based nation-state system of
governance has been tested.¶ In response, some governments, notably China, increasingly endorse a concept of Internet sovereignty, whereby
each national government has sovereign control over all aspects of Internet infrastructure, data, content and governance within its borders.
This approach would in effect be an effort to Westphalianize the global Internet, and to resist the global trend toward a distributed,
decentralized multi-stakeholder model of Internet governance.¶ To meet this challenge, the multi-stakeholder model for Internet governance
must be protected and strengthened. A basic concept underlying this model is that governments alone are not best positioned to make
technical or policy decisions about the Internet single-handedly. The Internet evolved through collaboration and decision-making by many nongovernmental actors, and the functionality of the open interoperable Internet depends on continued inclusion of many stakeholders in Internet
governance processes, most notably technologists. On the human rights policy front, civil society organizations dedicated to protection and
promotion of human rights are best placed, and must have a seat at the table alongside governments, technologists, the private sector and
others, in creating Internet governance mechanisms that prioritize global human rights in the digital realm. 3.
Reinforce the
Conceptualization of Human Rights Protection as a National Security Priority¶ Finally, we need to solidify the
international understanding that protection of human rights and adherence to the rule of law in the digital realm are essential to the protection
of national and global security, rather than antithetical to it. All too often in the post-Snowden context, national security interests are
presented in binary opposition to freedom and privacy consideration, as though there is only a zero-sum relationship between human rights
and national security. In reality, human rights protection has been an essential pillar of the global security architecture since the founding of the
United Nations immediately after World War II. Recent failures to adequately protect human rights and adhere to the rule of law in the digital
realm has been deeply undermining of some crucial aspects of long-term national and global security.¶ One of the most troubling aspects of the
mass surveillance programs disclosed by Edward Snowden was the extent to which digital security for individual users, for data, and for
networks, has been undermined in the name of protecting of national security.¶ This is both ironic and tragic, given that digital security is now
at the heart of national security — whether protecting critical infrastructure, confidential information, or sensitive data. Practices, such as
surreptitiously tapping into networks, requiring back doors to encrypted services and weakening global encryption standards will directly
undermine national and global security, as well as human rights.¶ Meanwhile targeted malware and crafted digital attacks on human rights
activists have become the modus operandi of repressive governments motivated to undermine human rights work. Civil society actors
increasingly face an onslaught of persistent computer espionage attacks from governments and other political actors like cyber militias, just as
businesses and governments do. So while our notions of privacy are evolving along with social media and data-capturing technology, we also
need to recognize that it’s not “just privacy” that is affected by the digitization of everything. The exercise of all fundamental freedoms is
undermined when governments utilize new capacities that flow from digitization without regard for human rights.¶ Furthermore, by engaging
in tactics that undermine digital security for individuals, for networks and for data, governments trigger and further inspire a hackers race to
the bottom. Practices that undermine digital security will be learned and followed by other governments and non-state actors, and ultimately
undermine security for critical infrastructure, as well as individuals users everywhere. Strengthening digital security for individual users, for
data, for networks, and for critical infrastructure must be seen as the national and global security priority that it is.¶ Conclusion¶ We are at a
critical moment for protection of human rights in the digital context.¶ All global players whose actions impact the enjoyment of human rights,
especially governments who claim to be champions of human rights, must lead in the reaffirmation of the international human rights
framework as a central pillar for security, development and freedom in the 21st century digital environment.
1NC No Authoritarianism Solvency
Internet freedom is just as likely to be used to crush dissent
Siegel 11 (Lee Siegel, a columnist and editor at large for The New York Observer, is the author of
“Against the Machine: How the Web Is Reshaping Culture and Commerce — and Why It Matters.” “‘The
Net Delusion’ and the Egypt Crisis”, February 4, 2011,
http://artsbeat.blogs.nytimes.com/2011/02/04/the-net-delusion-and-the-egypt-crisis)
¶Morozov takes the ideas of what he calls “cyber-utopians” and shows how reality perverts them in one
political situation after another. In Iran, the regime used the internet to crush the internet-driven
protests in June 2009. In Russia, neofascists use the internet to organize pogroms. And on and on.
Morozov has written hundreds of pages to make the point that technology is amoral and cuts many
different ways. Just as radio can bolster democracy or — as in Rwanda — incite genocide, so the
internet can help foment a revolution but can also help crush it. This seems obvious, yet it has often
been entirely lost as grand claims are made for the internet’s positive, liberating qualities. ¶And
suddenly here are Tunisia and, even more dramatically, Egypt, simultaneously proving and refuting
Morozov’s argument. In both cases, social networking allowed truths that had been whispered to be
widely broadcast and commented upon. In Tunisia and Egypt — and now across the Arab world —
Facebook and Twitter have made people feel less alone in their rage at the governments that stifle their
lives. There is nothing more politically emboldening than to feel, all at once, that what you have
experienced as personal bitterness is actually an objective condition, a universal affliction in your society
that therefore can be universally opposed. ¶Yet at the same time, the Egyptian government shut off the
internet, which is an effective way of using the internet. And according to one Egyptian blogger,
misinformation is being spread through Facebook — as it was in Iran — just as real information was
shared by anti-government protesters. This is the “dark side of internet freedom” that Morozov is
warning against. It is the freedom to wantonly crush the forces of freedom. ¶All this should not surprise
anyone. It seems that, just as with every other type of technology of communication, the internet is not
a solution to human conflict but an amplifier for all aspects of a conflict. As you read about pro-government agitators charging into crowds of protesters on horseback and camel, you realize that
nothing has changed in our new internet age. The human situation is the same as it always was, except
that it is the same in a newer and more intense way. Decades from now, we will no doubt be celebrating
a spanking new technology that promises to liberate us from the internet. And the argument joined by
Morozov will occur once again.
2NC No Authoritarianism Solvency
Mobilization and Internet access are not correlated – other factors are more
important
Kuebler 11 (Johanne Kuebler, contributor to the CyberOrient journal, Vol. 5, Iss. 1, 2011, “Overcoming
the Digital Divide: The Internet and Political Mobilization in Egypt and Tunisia”,
http://www.cyberorient.net/article.do?articleId=6212)
The assumption that the uncensored accessibility of the Internet encourages the struggle for democracy
has to be differentiated. At first sight, the case studies seem to confirm the statement, since Egypt,
featuring a usually uncensored access to the Internet, has witnessed mass mobilisations organised over
the Internet while Tunisia had not. However, the mere availability of freely accessible Internet is not a
sufficient condition insofar as mobilisations in Egypt took place when a relative small portion of the
population had Internet access and, on the other hand, mobilisation witnessed a decline between 2005
and 2008 although the number of Internet users rose during the same period. As there is no direct
correlation between increased Internet use and political action organised through this medium, we
have to assume a more complex relationship. A successful social movement seems to need more than a
virtual space of debate to be successful, although such a space can be an important complementary
factor in opening windows and expanding the realm of what can be said in public. A political movement
revolves around a core of key actors, and "netizens" qualify for this task. The Internet also features a
variety of tools that facilitate the organisation of events. However, to be successful, social movements
need more than a well-organised campaign. In Egypt, we witnessed an important interaction between
print and online media, between the representatives of a relative elitist medium and the traditional,
more accessible print media. A social movement needs to provide frames resonating with grievances of
the public coupled with periods of increased public attention to politics in order to create opportunity
structures. To further transport their message and to attract supporters, a reflection of the struggle of
the movement with the government in the "classical" media such as newspapers and television channels
is necessary to give the movement momentum outside the Internet context.
1NC No I/L
No evidence that the internet actually spurs democratization
Aday et al. 10 (Sean Aday is an associate professor of media and public affairs and international
affairs at The George Washington University, and director of the Institute for Public Diplomacy and
Global Communication. Henry Farrell is an associate professor of political science at The George
Washington University. Marc Lynch is an associate professor of political science and international affairs
at The George Washington University and director of the Institute for Middle East Studies. John Sides is
an assistant professor of political science at The George Washington University. John Kelly is the founder
and lead scientist at Morningside Analytics and an affiliate of the Berkman Center for Internet and
Society at Harvard University. Ethan Zuckerman is senior researcher at the Berkman Center for Internet
and Society at Harvard University and also part of the team building Global Voices, a group of
international bloggers bridging cultural and linguistic differences through weblogs. August 2010, “BLOGS
AND BULLETS: new media in contentious politics”, http://www.usip.org/files/resources/pw65.pdf)
New media, such as blogs, Twitter, Facebook, and YouTube, have played a major role in episodes of
contentious political action. They are often described as important tools for activists seeking to replace
authoritarian regimes and to promote freedom and democracy, and they have been lauded for their
democratizing potential. Despite the prominence of “Twitter revolutions,” “color revolutions,” and the
like in public debate, policymakers and scholars know very little about whether and how new media
affect contentious politics. Journalistic accounts are inevitably based on anecdotes rather than
rigorously designed research. Although data on new media have been sketchy, new tools are emerging
that measure linkage patterns and content as well as track memes across media outlets and thus might
offer fresh insights into new media. The impact of new media can be better understood through a
framework that considers five levels of analysis: individual transformation, intergroup relations,
collective action, regime policies, and external attention. New media have the potential to change how
citizens think or act, mitigate or exacerbate group conflict, facilitate collective action, spur a backlash
among regimes, and garner international attention toward a given country. Evidence from the protests
after the Iranian presidential election in June 2009 suggests the utility of examining the role of new
media at each of these five levels. Although there is reason to believe the Iranian case exposes the
potential benefits of new media, other evidence—such as the Iranian regime’s use of the same social
network tools to harass, identify, and imprison protesters—suggests that, like any media, the Internet is
not a “magic bullet.” At best, it may be a “rusty bullet.” Indeed, it is plausible that traditional media
sources were equally if not more important. Scholars and policymakers should adopt a more nuanced
view of new media’s role in democratization and social change, one that recognizes that new media can
have both positive and negative effects. Introduction In January 2010, U.S. Secretary of State Hillary
Clinton articulated a powerful vision of the Internet as promoting freedom and global political
transformation and rewriting the rules of political engagement and action. Her vision resembles that of
others who argue that new media technologies facilitate participatory politics and mass mobilization,
help promote democracy and free markets, and create new kinds of global citizens. Some observers
have even suggested that Twitter’s creators should receive the Nobel Peace Prize for their role in the
2009 Iranian protests.1 But not everyone has such sanguine views. Clinton herself was careful to note
when sharing her vision that new media were not an “unmitigated blessing.” Pessimists argue that these
technologies may actually exacerbate conflict, as exemplified in Kenya, the Czech Republic, and Uganda,
and help authoritarian regimes monitor and police their citizens. 2 They argue that new media
encourage self-segregation and polarization as people seek out only information that reinforces their
prior beliefs, offering ever more opportunities for the spread of hate, misinformation, and prejudice.3
Some skeptics question whether new media have significant effects at all. Perhaps they are simply a tool
used by those who would protest in any event or a trendy “hook” for those seeking to tell political
stories. Do new media have real consequences for contentious politics—and in which direction?4 The
sobering answer is that, fundamentally, no one knows. To this point, little research has sought to
estimate the causal effects of new media in a methodologically rigorous fashion, or to gather the rich
data needed to establish causal influence. Without rigorous research designs or rich data, partisans of
all viewpoints turn to anecdotal evidence and intuition.
1NC Collapse Inev
Can’t solve – US allies destroy i-freedom signal
Hanson 10/25/12, Nonresident Fellow, Foreign Policy, Brookings
http://www.brookings.edu/research/reports/2012/10/25-ediplomacy-hanson-internet-freedom
Another challenge is dealing with close partners and allies who undermine internet freedom. In August
2011, in the midst of the Arab uprisings, the UK experienced a different connection technology infused
movement, the London Riots. On August 11, in the heat of the crisis, Prime Minister Cameron told the
House of Commons: Free flow of information can be used for good. But it can also be used for ill. So we
are working with the police, the intelligence services and industry to look at whether it would be right to
stop people communicating via these websites and services when we know they are plotting violence,
disorder and criminality. This policy had far-reaching implications. As recently as January 2011, then
President of Egypt, Hosni Mubarak, ordered the shut-down of Egypt’s largest ISPs and the cell phone
network, a move the United States had heavily criticized. Now the UK was contemplating the same
move and threatening to create a rationale for authoritarian governments everywhere to shut down
communications networks when they threatened “violence, disorder and criminality.” Other allies like
Australia are also pursuing restrictive internet policies. As OpenNet reported it: “Australia maintains
some of the most restrictive Internet policies of any Western country…” When these allies pursue
policies so clearly at odds with the U.S. internet freedom agenda, several difficulties arise. It undermines
the U.S. position that an open and free internet is something free societies naturally want. It also gives
repressive authoritarian governments an excuse for their own monitoring and filtering activities. To an
extent, U.S. internet freedom policy responds even-handedly to this challenge because the vast bulk of
its grants are for open source circumvention tools that can be just as readily used by someone in London
as Beijing, but so far, the United States has been much more discreet about criticising the restrictive
policies of allies than authoritarian states.
2NC Collapse Inev
Collapse of Internet freedom inevitable
VARA 14 [Vauhini Vara, the former business editor of newyorker.com, lives in San Francisco and is a business
and technology correspondent for the New Yorker. “The World Cracks Down on the Internet”, 12-4-14,
http://www.newyorker.com/tech/elements/world-cracks-internet, msm]
In September of last year, Chinese
authorities announced an unorthodox standard to help them decide whether to punish people
for posting online comments that are false, defamatory, or otherwise harmful: Was a message popular enough to
attract five hundred reposts or five thousand views? It was a striking example of how sophisticated the Chinese
government has become, in recent years, in restricting Internet communication—going well beyond crude measures like
restricting access to particular Web sites or censoring online comments that use certain keywords. Madeline Earp, a research analyst at Freedom House, the
Washington-based nongovernmental organization, suggested a phrase to describe the approach: “strategic, timely censorship.” She told me, “It’s about allowing a
surprising amount of open discussion, as long as you’re not the kind of person who can really use that discussion to organize people.”¶ On Thursday, Freedom
House published its fifth annual report on Internet freedom around the world. As in years past, China
is again near the bottom of the
rankings, which include sixty-five countries. Only Syria and Iran got worse scores, while Iceland and Estonia fared the best. (The report was
funded partly by the Dutch Ministry of Foreign Affairs, the United States Department of State, Google, and Yahoo, but Freedom House described the report as its
“sole responsibility” and said that it doesn’t necessarily represent its funders’ views.)¶ China’s
place in the rankings won’t come as a
surprise to many people. The notable part is that the report suggests that, when it comes to Internet freedom, the rest of the
world is gradually becoming more like China and less like Iceland. The researchers found that Internet freedom declined in
thirty-six of the sixty-five countries they studied, continuing a trajectory they have noticed since they
began publishing the reports in 2010.¶ Earp, who wrote the China section, said that authoritarian regimes might even be
explicitly looking at China as a model in policing Internet communication. (Last year, she co-authored a report on the topic
for the Committee to Protect Journalists.) China isn’t alone in its influence, of course. The report’s authors even said that some countries are
using the U.S. National Security Agency’s widespread surveillance, which came to light following disclosures by the whistle-blower Edward Snowden, “as an excuse
to augment their own monitoring capabilities.” Often, the surveillance comes with little or no oversight, they said, and is directed at human-rights activists and
political opponents.¶ China, the U.S., and their copycats aren’t the only offenders, of course. In fact, interestingly,
the United States was the
sixth-best country for Internet freedom, after Germany—though this may say as much about the poor state of Web freedom in other places
as it does about protections for U.S. Internet users. Among the other countries, this was a particularly bad year for Russia and Turkey, which
registered the sharpest declines in Internet freedom from the previous year. In Turkey, over the past several years, the
government has increased censorship, targeted online journalists and social-media users for assault and
prosecution, allowed state agencies to block content, and charged more people for expressing
themselves online, the report noted—not to mention temporarily shutting down access to YouTube and Twitter. As Jenna Krajeski wrote in a post about
Turkey’s Twitter ban, Prime Minister Recep Tayyip Erdoğan vowed in March, “We’ll eradicate Twitter. I don’t care what the international community says. They will
see the power of the Turkish Republic.” A month later, Russian President Vladimir Putin, not to be outdone by Erdoğan, famously called the Internet a “C.I.A.
project,” as Masha Lipman wrote in a post about Russia’s recent Internet controls. Since Putin took office again in 2012, the report found, the government has
enacted laws to block online content, prosecuted people for their Internet activity, and surveilled information and communication technologies. Among changes in
other countries, the report said that the governments of Uzbekistan and Nigeria had passed laws requiring cybercafés to keep logs of their customers, and that the
Vietnamese government began requiring international Internet companies to keep at least one server in Vietnam. ¶ What’s behind
the decline in
Internet freedom throughout the world? There could be several reasons for it, but the most obvious one is also somewhat mundane: especially in
countries where people are just beginning to go online in large numbers, governments that restrict freedom offline—particularly
authoritarian regimes—are only beginning to do the same online, too. What’s more, governments that had been using strategies like blocking certain
Web sites to try to control the Internet are now realizing that those approaches don’t actually do much to keep their citizens from seeing content that the
governments would prefer to keep hidden. So they’re turning
to their legal systems, enacting new laws that restrict how
people can use the Internet and other technologies.¶ “There is definitely a sense that the Internet offered this real alternative to
traditional media—and then government started playing catch-up a little bit,” Earp told me. “If a regime has developed laws and practices over time that limit what
the traditional media can do, there’s that moment of recognition: ‘How can we apply what we learned in the traditional media world online?’ ”¶ There were a
couple of hopeful signs for Internet activists during the year. India, where authorities relaxed restric­tions that had been imposed in 2013 to help quell rioting, saw
the biggest improvement in its Internet-freedom score. Brazil, too, notched a big gain after lawmakers approved a bill known as the Marco Civil da Internet, which
protects net neutrality and online privacy. But, despite those developments, the report’s authors didn’t seem particularly upbeat. “There might be some cautious
optimism there, but I do not want to overstate that because, since we started tracking this, it’s been a continuous decline, unfortunately,” Sanja Kelly, the project
director for the report, told me. Perhaps the surprising aspect of Freedom House’s findings isn’t that the Internet is becoming less free—it’s that it has taken this
long for it to happen.
Governments will inevitably oppose internet freedom – attempts to oppose it
exacerbate the problem
Utah Post 1-3 [“2014 MARKED THE DECLINE IN INTERNET FREEDOM”, 1-3-15,
http://www.utahpeoplespost.com/2015/01/2014-marked-decline-internet-freedom/, msm]
Last year marked a decline in internet freedom in numerous countries, as indicated by a report released by the Freedom House. The
study analyzed 65 countries in terms of user access to internet and laws governing the World Wide Web.
The report shows that web freedom has corroded for the fourth back to back year. The document highlights
administrative endeavors to ban applications and tech advances by putting cutoff points on content,
sites’ filters and infringement of clients’ rights by peeping in their online log.¶ ¶ The report also warns that 2015’s dares
in terms of web freedom will increase as Russia and Turkey plan to increase controls on foreign-based
internet organizations.¶ ¶ Many countries already put major American internet businesses into odd
circumstances. Among them: Twitter, Facebook and Google, who were challenged by problematic
regulations. Overlooking these laws has led to their services being hindered. For instance, Google’s engineers retreated from Russia while
China blocked Gmail, after the company refused to give the national governments access to its servers. ¶ ¶ This Wednesday, Vladimir Putin, Russian President approved the law obliging
organizations to store Russian clients’ information on servers located on Russian grounds.¶ But only a few countries
approve of this new legislation. As a result it is expected that the law will spur some international debates not long from now.¶ Most of tech experts believe that pieces of legislation and other state measures will not be able to
actually stop information from rolling on the internet.¶ For instance, a year ago Russian powers asked Facebook to shut down a page setup against the government, advancing anti-government protests. Despite the fact that
Facebook consented to the request and erased the page, which had 10 million supporters, different replica pages were immediately set up.¶ The Turkish government was also
slammed by internet power when it attempted to stop the spread of leaked documents on Twitter in
March. Recep Tayyip Erdogan’s government at the time requested the shutdown of Twitter inside Turkey after the organization declined to erase the posts revealing information about government authorities accused of
corruption. The result of the government action was that while Twitter was blocked, Turkish users started to evade the ban. Comparable demands were registered in nations like China, Pakistan, and so forth.¶ According to a
popular Russian blogger, Anton Nosik, governments are delusional to think they can remove an article or video footage from the web when materials can easily be duplicated and posted somewhere else.¶ Most Internet users
militate for a free and limitless system, where individuals are permitted to openly navigate whatever they want.
Governments on the other hand, are not really fans of this idea.
Tech analysts say it is likely to see an increase in clashes between internet surfers and authorities in
various countries throughout 2015.
As internet use increases, internet freedom will inevitably decrease – it’s zero-sum
Kelly and Cook 11 [Sanja Kelly, managing editor, and Sarah Cook, assistant editor, at Freedom House produced
"Freedom on the Net: A Global Assessment of Internet and Digital Media," a 2011 report. “Internet freedom”, 4-17-11,
http://www.sfgate.com/opinion/openforum/article/Internet-freedom-declining-as-use-grows-2375021.php, msm]
Indeed, as
more people use the Internet to freely communicate and obtain information, governments have ratcheted up
efforts to control it. Today, more than 2 billion people have access to the Internet, a number that has more than doubled in the past five years.
Deepening Internet penetration is particularly evident in the developing world, where declining subscription costs, government investments in infrastructure, and
the rise of mobile technology has allowed the number of users to nearly triple since 2006. ¶ In order to better understand the diverse, rapidly evolving threats to
Internet freedom, Freedom House, a Washington, D.C., NGO that conducts research on political freedom, has undertaken an analysis - the first of its kind - of the
ways in which governments in 37 key countries create obstacles to Internet access, limit digital content and violate users' rights. What we found was that Internet
freedom in a range of countries, both democratic and authoritarian, is declining. Emboldened
governments and their sympathizers are
increasingly using technical attacks to disrupt political activists' online networks, eavesdrop on their
communications and debilitate their websites. Such attacks were reported in at least 12 countries, ranging
from China to Russia, Tunisia to Burma, Iran to Vietnam. In Belarus, at the height of controversial elections, the authorities created mirror
versions of opposition websites, diverting users to the new ones, where deliberately false information
on the times and locations of protests were posted. In Tunisia, in the run-up to the January 2011 uprising that drove the regime from
power, the authorities regularly broke into the e-mail, Facebook and blogging accounts of opposition and human rights activists, either deleting specific material or
simply collecting intelligence about their plans.¶ Governments
around the world increasingly are establishing mechanisms
to block what they deem to be undesirable information. In many cases, the restrictions apply to content involving illegal gambling,
child pornography, copyright infringement or the incitement of hatred or violence. However, a large number of governments are also
engaging in deliberate efforts to block access to information related to politics, social issues and human
rights. In Thailand, tens of thousands of websites critical of the monarchy have been blocked. In China - in addition to blocking dissident websites - user
discussions and blog postings revealing tainted-milk products, pollution or torture are deleted.¶
Centralized government control over a country's connection to international Internet traffic also
emerged as one significant threat to online free expression. In one-third of the states examined,
authorities have exploited their control over infrastructure to limit access to politically and socially
controversial content or, in extreme cases, cut off access to the Internet entirely, as Hosni Mubarak's
government did in Egypt during the height of the protests there.¶ Until recently, the conventional assumption
has been that Internet freedom would inexorably improve, given the technology's diffuse and open
structure. But this assumption was premature. Our findings should serve as an early warning sign to
defenders of free expression.¶
1NC Squo Solves
Squo solves – their evidence concedes that we’re already funding groups to fight for
Internet freedom
Kehl, 2015
Danielle Kehl is a senior policy analyst at New America's Open Technology Institute, BA cum laude Yale
6-17-2015, "Doomed To Repeat History? Lessons From The Crypto Wars Of The 1990s," New America,
https://www.newamerica.org/oti/doomed-to-repeat-history-lessons-from-the-crypto-wars-of-the1990s/
Strong encryption has become an integral tool in the protection of privacy and the promotion of free expression online. The end of the Crypto
Wars ushered in an age where the security and privacy protections afforded by the use of strong encryption also help promote free expression.
As the American Civil Liberties Union recently explained in a submission to the UN Human Rights Council, “encryption and anonymity are the
modern safeguards for free expression. Without them, online communications are effectively unprotected as they traverse the Internet,
vulnerable to interception and review in bulk. Encryption makes mass surveillance significantly more costly.”187 The human rights benefits of
strong encryption have undoubtedly become more evident since the end of the Crypto Wars. Support for strong encryption has become an
integral part of American foreign policy related to Internet freedom, and since 2010, the U.S. government has built up a successful policy and
programming agenda based on promoting an open and free Internet.188 These
efforts include providing over $120 million
in funding for “groups working to advance Internet freedom,” much of which specifically funds
circumvention tools that rely on strong encryption — which makes Internet censorship significantly
harder — as part of the underlying technology.189 Similarly, a June 2015 report by David Kaye, the UN Special Rapporteur for
Freedom of Expression and Opinion found that, “Encryption and anonymity provide individuals and groups with a zone of privacy online to hold
opinions and exercise freedom of expression without arbitrary and unlawful interference or attacks.”190 The report goes on to urge all states
to protect and promote the use of strong encryption, and not to restrict it in any way. Over the past fifteen years, a virtuous cycle between
strong encryption, economic growth, and support for free expression online has evolved. Some experts have dubbed this phenomenon
“collateral freedom,” which refers to the fact that, “When crucial business activity is inseparable from Internet freedom, the prospects for
Internet freedom improve.”191 Free expression and support for human rights have certainly benefited from the rapid expansion of encryption
in the past two decades.
1NC No Impact
No democracy impact.
Rosato, 03 Sebastian, Ph.D. candidate, Political Science Department, UChicago, American Political Science Review, November,
http://journals.cambridge.org/download.php?file=%2FPSR%2FPSR97_04%2FS0003055403000893a.pdf&code=97d5513385df289000828a47df4
80146, “The Flawed Logic of Democratic Peace Theory,” ADM
Democratic peace theory is probably the most powerful liberal contribution to the debate on the causes of
war and peace. In this paper I examine the causal logics that underpin the theory to determine whether they
offer compelling explanations for the finding of mutual democratic pacifism. I find that they do not. Democracies do not
reliably externalize their domestic norms of conflict resolution and do not trust or respect one another
when their interests clash. Moreover, elected leaders are not especially accountable to peace loving publics or pacific interest
groups, democracies are not particularly slow to mobilize or incapable of surprise attack, and open political competition does not guarantee
that a democracy will reveal private information about its level of resolve thereby avoiding conflict. Since the evidence suggests that the logics
do not operate as stipulated by the theory’s proponents, there are good reasons to believe that while
there is certainly peace among democracies, it may not be caused by the democratic nature of those
states. Democratic peace theory—the claim that democracies rarely fight one another because they share common norms of live-and-let-live
and domestic institutions that constrain the recourse to war—is probably the most powerful liberal contribution to the debate on the causes of
war and peace.1 If the theory is correct, it has important implications for both the study and the practice of international politics. Within the
academy it undermines both the realist claim that states are condemned to exist in a constant state of security competition and its assertion
that the structure of the international system, rather than state type, should be central to our understanding of state behavior. In practical
terms democratic peace theory provides the intellectual justification for the belief that spreading democracy abroad will perform the dual task
of enhancing American national security and promoting world peace. In this article I offer an assessment of democratic peace theory.
Specifically, I examine the causal logics that underpin the theory to determine whether they offer compelling explanations for why democracies
do not fight one another. A theory is comprised of a hypothesis stipulating an association between an independent and a dependent variable
and a causal logic that explains the connection between those two variables. To test a theory fully, we should determine whether there is
support for the hypothesis, that is, whether there is a correlation between the independent and the dependent variables and whether there is
a causal relationship between them.2 An evaluation of democratic peace theory, then, rests on answering two questions. First, do the data
support the claim that democracies rarely fight each other? Second, is there a compelling explanation for why this should be the case?
Democratic peace theorists have discovered a powerful empirical generalization: Democracies rarely go
to war or engage in militarized disputes with one another. Although there have been several attempts to challenge these
findings (e.g., Farber and Gowa 1997; Layne 1994; Spiro 1994), the correlations remain robust (e.g., Maoz 1998; Oneal and Russett 1999; Ray
1995; Russett 1993; Weart 1998). Nevertheless, some
scholars argue that while there is certainly peace among
democracies, it may be caused by factors other than the democratic nature of those states (Farber and Gowa
1997; Gartzke 1998; Layne 1994). Farber and Gowa (1997), for example, suggest that the Cold War largely explains the democratic peace
finding. In essence, they are raising doubts about whether there is a convincing causal logic that explains how democracies interact with each
other in ways that lead to peace. To resolve this debate, we must take the next step in the testing process: determining the persuasiveness of
the various causal logics offered by democratic peace theorists.
1NC No Dem Peace Theory
Democracy doesn’t solve war – their ev is based on flawed studies
Henderson ‘2 (Errol Henderson, Assistant Professor, Dept. of Political Science at the University of Florida, 2002, Democracy and War The
End of an Illusion?)
The replication and extension of Oneal and Russett (1997), which is one of the most important studies
on the DPP, showed that democracies are not significantly less likely to fight each other. The results
demonstrate that Oneal and Russett’s (1997) findings in support of the DPP are not robust and that joint
democracy does not reduce the probability of international conflict of pairs of states during the
postwar era. Simple and straightforward modifications of Oneal and Russett’s (1997) research design
generate these dramatically contradictory results. Specifically, by teasing out the separate impact of
democracy and political distance (or political dissimilarity) and by not coding cases of ongoing disputes
as new cases of conflict, it became clear that there is no significant relationship between joint democracy
and the likelihood of international war or militarized interstate dispute (MID) for states during the
postwar era. These findings suggest that the post-Cold War strategy of “democratic enlargement,” which is aimed at ensuring peace by
enlarging the community of democratic states, is quite a thin reed on which to rest a state’s foreign policy - much less the hope for international
peace. The results indicate that democracies are more war-prone than non-democracies (whether democracy
is coded dichotomously or continuously) and that democracies are more likely to initiate interstate wars. The
findings are obtained from analyses that control for a host of political, economic, and cultural factors
that have been implicated in the onset of interstate war, and focus explicitly on state level factors instead of simply
inferring state level processes from dyadic level observations as was done in earlier studies (e.g., Oneal and Russett, 1997; Oneal and Ray,
1997). The results imply
that democratic enlargement is more likely to increase the probability of war for
states since democracies are more likely to become involved in—and to initiate—interstate wars.
2NC No Dem Peace Theory
Democratic peace theory is flawed
Layne 7
Christopher, Professor @ TX A&M, American Empire: A Debate, pg. 94
Wilsonian ideology drives the American Empire because its proponents posit that the United States must use
its military power to extend democracy abroad. Here, the ideology of Empire rests on assumptions that are
not supported by the facts. One reason the architects of Empire champion democracy promotion is because
they believe in the so-called democratic peace theory, which holds that democratic states do not fight other
democracies. Or as President George W. Bush put it with his customary eloquence, "democracies don't war;
democracies are peaceful."136 The democratic peace theory is the probably the most overhyped and
undersupported "theory" ever to be concocted by American academics. In fact, it is not a theory at all. Rather
it is a theology that suits the conceits of Wilsonian true believers-especially the neoconservatives who have
been advocating American Empire since the early 1990s. As serious scholars have shown, however, the
historical record does not support the democratic peace theory.131 On the contrary, it shows that
democracies do not act differently toward other democracies than they do toward nondemocratic states.
When important national interests are at stake, democracies not only have threatened to use force against
other democracies, but, in fact, democracies have gone to war with other democracies.
Democracy doesn’t prevent war
Goldstein, ’11 (Joshua, is professor emeritus of international relations at American University and author of Winning the War on War: The
Decline of Armed
Conflict Worldwide, Sept/Oct 2011, “Think Again: War. World peace could be closer than you think”, Foreign Policy)
"A More Democratic World Will Be a More Peaceful One." Not necessarily. The well-worn observation that real democracies almost never fight
each other is historically correct, but it's also true that democracies have
always been perfectly willing to fight nondemocracies.
In fact, democracy can heighten conflict by amplifying ethnic and nationalist forces, pushing leaders to
appease belligerent sentiment in order to stay in power. Thomas Paine and Immanuel Kant both believed that selfish
autocrats caused wars, whereas the common people, who bear the costs, would be loath to fight. But try telling that to the leaders
of authoritarian China, who are struggling to hold in check, not inflame, a popular undercurrent of nationalism
against Japanese and American historical enemies. Public opinion in tentatively democratic Egypt is far more
hostile toward Israel than the authoritarian government of Hosni Mubarak ever was (though being hostile and actually going to
war are quite different things). Why then do democracies limit their wars to non-democracies rather than fight each other? Nobody really
knows. As the University of Chicago's Charles Lipson once quipped about the notion of a democratic peace, "We know it works in practice. Now
we have to see if it works in theory!" The best explanation is that of political scientists
Bruce Russett and John Oneal, who argue that three elements -- democracy, economic
interdependence (especially trade), and the growth of international organizations -- are mutually supportive of each other and of peace within
the community of democratic countries. Democratic leaders, then, see themselves as having less to lose in going to
war with autocracies.
“Democratic Peace” is a myth – the United States, the world’s leading democracy, engages in many
wars.
Ostrowski 02, (James Ostrowski is a lawyer and a libertarian author. “The Myth of Democratic Peace.”
http://www.lewrockwell.com/1970/01/james-ostrowski/the-myth-of-democratic-peace/)
We are led to believe that democracy and peace are inextricably linked; that democracy leads to and
causes peace; and that peace cannot be achieved in the absence of democracy. Woodrow Wilson was one of the earliest and
strongest proponents of this view. He said in his "war message" on April 2, 1917: A steadfast concert for peace can never be maintained except by a partnership of democratic nations. No
autocratic government could be trusted to keep faith within it or observe its covenants. It must be a league of honour, a partnership of opinion. Intrigue would eat its vitals away; the plottings
of inner circles who could plan what they would and render account to no one would be a corruption seated at its very heart. Only free peoples can hold their purpose and their honour steady
to a common end and prefer the interests of mankind to any narrow interest of their own. Spencer R. Weart alleges that democracies rarely if ever go to war with each other. Even if
this is true, it distorts reality and makes people far too sanguine about democracy’s ability to deliver the
world’s greatest need today — peace. In reality, the main threat to world peace today is not war
between two nation-states, but (1) nuclear arms proliferation; (2) terrorism; and (3) ethnic and religious
conflict within states. As this paper was being written, India, the world’s largest democracy, appeared to be itching to start a war with Pakistan, bringing the world closer to
nuclear war than it has been for many years. The United States, the world’s leading democracy, is waging war in Afghanistan,
which war relates to the second and third threats noted above — terrorism and ethnic/religious conflict.
If the terrorists are to be believed — and why would they lie?─they struck at the United States on
September 11th because of its democratically-induced interventions into ethnic/religious disputes in
their parts of the world. As I shall argue below, democracy is implicated in all three major threats to world peace
and others as well. The vaunted political machinery of democracy has failed to deliver on its promises.
The United States, the quintessential democracy, was directly or indirectly involved in most of the major
wars in the 20th Century. On September 11, 2001, the 350-year experiment with the modern nation-state ended in failure. A radical re-thinking of the relationship
between the individual and the collective, society and state is urgently required. Our lives depend on it. We must seriously question whether the
primitive and ungainly political technology of democracy can possibly keep the peace in tomorrow’s
world. Thus, a thorough reconsideration of the relationship between democracy and peace is essential.
This paper makes a beginning in that direction.
1NC Democracy Bad
Democracy causes war – much more recent evidence
Lebow 11 (http://www.dartmouth.edu/~nedlebow/aggresive_democracies.pdf, Aggressive
Democracies)//A.V.
Democracies are the most aggressive regime type measured in
are the most aggressive regime type measured in
terms of war initiation. Since 1945, the United States has also been the world’s most aggressive state by
this measure. This finding prompts the question of whether the aggressiveness of democracies, and the United States in particular, is due
to regime type or other factors. I make the case for the latter. My argument has implications for the Democratic Peace thesis and the
unfortunate tendency of some of its advocates to use its claims for policy guidance. The
Democratic Peace research
programme is based on the putative empirical finding that democracies do not fight other democracies.
It has generated a large literature around the validity of this finding and about the reasons why
democracies do not initiate wars against democratic opponents. In this paper, I do not engage these
controversies directly, but rather look at the record of democracies as war initiators in the post-World
War II period. They turn out to be the most aggressive regime type measured by war initiation. The United
States, which claims to be the world’s leading democracy, is also the world’s most aggressive state by this measure. Below, I first document this
set of claims using a data set that Benjamin Valentino and I constructed. Next, I speculate about some of the reasons why the United States has
been such an aggressive state in the post-war era. In particular, I am interested in the extent to which this aggressiveness is due to democratic
governance or other, more idiosyncratic factors. I am inclined to make the case for the latter. This argument has implications for the
Democratic Peace thesis and the unfortunate tendency of some of its advocates to use its claims for policy guidance.
The United States and War Initiation The more
meaningful peer group comparison for the United States is with the countries of Western Europe, Japan, the “Old Commonwealth” (Canada,
Australia, and New Zealand), and certain Latin American states.
This is because these are all fellow democracies that—like
the United States—are relatively well-established, relatively liberal, relatively wealthy (on a per capita income
basis), and— unlike Israel and India—relatively geo-politically secure and relatively lacking in severe
religious and ethnic tension. Here the United States is clearly an outlier, as only two of these countries initiated wars (France and
Britain against Egypt in 1956). Britain was also a partner of the United States in the 1991 Gulf War and the 2003 invasion of Iraq. The United
States differs from all these countries in several important ways. In A Cultural Theory of International Relations, I describe it as a “parvenu”
power. These are states that are late entrants into the arena where they can compete for standing and do so with greater intensity than other
states. Moreover, due to the ideational legacy left by their parvenu status, such states may continue to behave like this for a considerable time
after achieving great power status. They
devote a higher percentage of their national income to military forces
and pursue more aggressive foreign policies. Examples include Sweden under Gustavus Adolphus, Prussia and Russia in
the eighteenth centuries, and Japan and the United States in the late nineteenth and twentieth centuries.5 Unlike
other parvenu powers, the constraints on the United States were more internal than external. Congress, not other powers, kept American
presidents from playing a more active role in European affairs in the 1920s and 1930s and forced a withdrawal from Indochina in the 1970s. The
United States was never spurned or humiliated by other powers, but some American presidents and their advisers did feel humiliated by the
constraints imposed upon them domestically. They frequently sought to commit the country to activist policies through membership in
international institutions that involved long-term obligations (for example, the imf and nato), executive actions (for example, the 1940
destroyer deal, intervention in the Korean War, and sending Marines to Lebanon in 1958), and congressional resolutions secured on the basis of
false or misleading information (the Gulf of Tonkin and Iraq War resolutions). Ironically, concern for credibility promoted ill-considered and
open-ended commitments like Vietnam and Iraq that later led to public opposition and the congressional constraints that subsequent American
presidents considered detrimental to presidential credibility. Instead
of prompting a reassessment of national
security strategy, these setbacks appear to have strengthened the commitment of at least some
presidents and their advisers to breaking free of these constraints and asserting leadership in the world,
thus ushering in a new cycle of overextension, failure, and renewed constraints. The United States is unique in
other ways. It is by far and away the most powerful economy in the world. At the end of World War II, it accounted for 46
per cent of the world’s gross domestic product (gdp) and today represents a still-impressive 21 per
cent.6 Prodigious wealth allows the United States to spend an extraordinary percentage of its gdp on its
armed forces in comparison to other countries. In the aftermath of the Cold War, most countries cut back on military
spending, but us spending has increased. In
2003, the United States spent $417 billion on defence, 47 per cent of
the world total.7 In 2008, it spent 41 per cent of its national budget on the military and the cost of past wars, which accounted for almost
50 per cent of world defence spending. In absolute terms, this was twice the total of Japan, Russia, the United Kingdom, Germany, and China
combined. Not surprisingly, the United States is the only state with global military reach.8 Democratic and Republican administrations alike
have held that extraordinary levels of military expenditure will sustain, if not increase, the standing and influence that traditionally comes with
military dominance. It is intended to make the United States, in the words of former Secretary of State Madeleine Albright, “the indispensable
nation”—the only power capable of enforcing global order.9 An equally important point is that possession of such military instruments
encourages policymakers to formulate maximalist objectives. Such goals are, by definition, more difficult to achieve by diplomacy, pushing the
United States into “eyeball-to-eyeball” confrontations where the use of force becomes a possibility. us defence expenditure also reflects the
political power of the military-industrial complex. Defence spending has encouraged the dependence of numerous companies on the
government and helped bring others into being. In 1991, at the end of the Cold War, twelve million people, roughly ten per cent of the us
workforce, were directly or indirectly dependent upon defence dollars. The number has not changed significantly since. Having
such a
large impact on the economy gives defence contractors enormous political clout.10 Those who land
major weapons projects are careful to subcontract production across the country, often offering a part
of the production process to companies in every state. This gives the contractors enormous political
leverage in Congress, often
2NC Democracy Bad
Democracies start more wars – statistical analysis proves
Henderson 2 (Errol Henderson, Assistant Professor, Dept. of Political Science at the University of Florida, 2002,
Democracy and War The End of an Illusion?, p. 146)
Are Democracies More Peaceful than Nondemocracies with Respect to Interstate Wars? The results indicate
that democracies are more war-prone than non-democracies (whether democracy is coded
dichotomously or continuously) and that democracies are more likely to initiate interstate wars. The
findings are obtained from analyses that control for a host of political, economic, and cultural factors
that have been implicated in the onset of interstate war, and focus explicitly on state level factors instead
of simply inferring state level processes from dyadic level observations as was done in earlier studies (e.g.,
Oneal and Russett, 1997; Oneal and Ray, 1997). The results imply that democratic enlargement is more
likely to increase the probability of war for states since democracies are more likely to become involved
in—and to initiate—interstate wars.
Democracy leads to wars against non-democracies.
Daase 6 (Christopher, Chair in International Organisation, University of Frankfurt, Democratic Wars, pg. 77)
In what follows, I will focus on three reasons why democracies might be peaceful to each other, but abrasive
or even bellicose towards non-democracies. The first reason is an institutional one: domestic institutions
dampen conflicts among democracies but aggravate conflicts between democracies and nondemocracies. The second reason is a normative one: shared social values and political ideals prevent
wars between democracies but make wars between democracies and non-democracies more likely
and savage. The third reason is a structural one: the search for safety encourages democracies to create
security communities by renouncing violence among themselves but demands assertiveness against
outsiders and the willingness to use military means if enlargement of that community cannot be
achieved peacefully. To illustrate this, I will draw mainly on the United States as an example following a
Tocquevlllean tradition, but knowing that not all democracies behave in the same way or that the US is the
only war-fighting democracy. It is clear that the hypotheses are first conjectures and that more case studies
and quantitative tests are needed to reach more general conclusions.
Democratic governments engage in diversionary wars to influence elections.
Daase 6 (Christopher, Chair in International Organisation, University of Frankfurt, Democratic Wars, pg. 77)
However, there is a contradictory effect as well. Democratic governments are tempted to use military
violence prior to elections if their public esteem is in decline and if they must fear not being re-elected
(Ostrom and Job, 1986; Russett, 1990; Mintz and Russett, 1992; Mintz and Geva, 1993). In doing so, they
count on the 'rally round the flag' effect, which is usually of short duration but long enough to make
the public forget economic misery or governmental misbehaviour in order to influence tight
elections results in favour of the incumbent. This diversionary effect of warfare is especially
attractive to democracies since they have no other means at their disposal to diffuse discontent or
suppress internal conflict. Therefore, the use of military force for diversionary purposes is generally
'a pathology of democratic systems' (Gelpi, 1997, p. 280).
Critical Infrastructure (Zero Days) Adv
Notes
30-second explainer: zero-day vulnerabilities (software flaws unknown to the vendor, so defenders have had
“zero days” to patch them) put nuke power plants at risk, cyber-terror causes nuke meltdowns,
extinction, retaliation, nuke war, yadayadayada
CX Questions
1NC Cyber Inev
Cybersecurity vulnerabilities are inevitable
Corn 7/13
(Corn, Geoffrey S. * Presidential Research Professor of Law, South Texas College of Law; Lieutenant Colonel (Retired), U.S. Army Judge
Advocate General’s Corps. Prior to joining the faculty at South Texas, Professor Corn served in a variety of military assignments, including as
the Army’s Senior Law of War Advisor, Supervisory Defense Counsel for the Western United States, Chief of International Law for U.S. Army
Europe, and as a Tactical Intelligence Officer in Panama. “Averting the Inherent Dangers of 'Going Dark': Why Congress Must Require a
Locked Front Door to Encrypted Data,” SSRN. 07-13-2015.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes//ghs-kw)
Like CALEA, a statutory obligation along the lines proposed herein will inevitably trigger criticisms and generate concerns. One obvious
criticism is that the creation of an escrow key or the maintenance of a duplicate key by a manufacturer
would introduce an unacceptable risk of compromise for the device. This argument presupposes that the
risk is significant, that the costs of its exploitation are large, and that the benefit is not worth the risk.
Yet manufacturers, product developers, service providers and users constantly introduce such risks.
Nearly every feature or bit of code added to a device introduces a risk, some greater than others. The
vulnerabilities that have been introduced to computers by software such as Flash, ActiveX controls, Java,
and web browsers are well documented.51 The ubiquitous SQL database, while extremely effective at
helping web designers create effective data driven websites, is notorious for its vulnerability to SQL
injection attacks.52 The adding of microphones to electronic devices opened the door to aural
interceptions. Similarly, the introduction of cameras has resulted in unauthorized video surveillance of
users. Consumers accept all of these risks, however, since we, as individual users and as a society, have
concluded that they are worth the cost. Some will inevitably argue that no new possible vulnerabilities
should be introduced into devices to allow the government to execute reasonable, and therefore lawful, searches
for unique and otherwise unavailable evidence. However, this argument implicitly asserts that there is
no, or insignificant, value to society of such a feature. And herein lies the Achilles heel to opponents of
mandated front-door access: the conclusion is entirely at odds with the inherent balance between individual
liberty and collective security central to the Fourth Amendment itself. Nor should lawmakers be deluded
into believing that the currently existing vulnerabilities that we live with on a daily basis are less
significant in scope than the possibility of obtaining complete access to the encrypted contents of a
device. Various malware variants that are so widespread as to be almost omnipresent in our online
community achieve just such access through what would seem like minor cracks in the defense of
systems.53 One example is the Zeus malware strain, which has been tied to the unlawful online theft of
hundreds of millions of dollars from U.S. companies and citizens and gives its operator complete access
to and control over any computer it infects.54 It can be installed on a machine through the simple mistake of viewing an
infected website or email, or clicking on an otherwise innocuous link.55 The malware is designed to not only bypass
malware detection software, but to deactivate the software’s ability to detect it.56 Zeus and the many other
variants of malware that are freely available to purchasers on dark-net websites and forums are responsible for the theft of funds from
countless online bank accounts (the credentials having been stolen by the malware’s key-logger features), the theft of credit card information,
and innumerable personal identifiers.57
2NC Cyber Inev
Security issues are inevitable
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-partii-debate-merits//ghs-kw)
On Thursday, I described the surprisingly warm reception FBI Director James Comey got in the Senate this week with his warning that the
FBI
was "going dark" because of end-to-end encryption. In this post, I want to take on the merits of the renewed encryption
debate, which seem to me complicated and multi-faceted and not all pushing in the same direction. Let me start by breaking the encryption
debate into two
distinct sets of questions: One is the conceptual question of whether a world of end-to-end
strong encryption is an attractive idea. The other is whether—assuming it is not an attractive idea and that one wants to
ensure that authorities retain the ability to intercept decrypted signal—an extraordinary access scheme is technically
possible without eroding other essential security and privacy objectives. These questions often get mashed together,
both because tech companies are keen to market themselves as the defenders of their users' privacy interests and because of the libertarian
ethos of the tech community more generally. But the
questions are not the same, and it's worth considering them
separately. Consider the conceptual question first. Would it be a good idea to have a world-wide
communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could
snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the
FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an internet as
secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries
may do the same? Conceptually speaking, I am with Comey on this question—and the
matter does not seem to me an especially
close call. The belief in principle in creating a giant world-wide network on which surveillance is
technically impossible is really an argument for the creation of the world's largest ungoverned space. I
understand why techno-anarchists find this idea so appealing. I can't imagine for moment, however, why
anyone else would. Consider the comparable argument in physical space: the creation of a city in which
authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto
what happens on the streets and no ability to conduct search warrants (even with court orders) or to
patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is
not controversial when you're talking about Yemen or Somalia. I see nothing more attractive about the
creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS
communications with followers or to follow child predators into chatrooms where they go after kids. The
trouble is that this conceptual position does not answer the entirety of the policy question before us. The reason is that the case against
preserving some form of law enforcement access to decrypted signal is not only a conceptual embrace of the technological obsolescence of
surveillance. It
is also a series of arguments about the costs—including the security costs—of maintaining
the capacity to decrypt captured signal. Consider the report issued this past week by a group of computer security experts
(including Lawfare contributing editors Bruce Schneier and Susan Landau), entitled "Keys Under Doormats: Mandating Insecurity By Requiring
Government Access to All Data and Communications." The report does not make an in-principle argument or a conceptual argument against
extraordinary access. It argues, rather, that the effort to build such a system risks eroding cybersecurity in ways far more important than the
problems it would solve. The authors, to summarize, make three claims in support of the broad claim that any exceptional access system would
"pose . . . grave security risks [and] imperil innovation." What
are those "grave security risks"? "[P]roviding exceptional
access to communications would force a U-turn from the best practices now being deployed to make
the Internet more secure. These practices include forward secrecy—where decryption keys are deleted immediately
after use, so that stealing the encryption key used by a communications server would not compromise earlier or later communications. A
related technique, authenticated encryption, uses the same temporary key to guarantee confidentiality and to verify that the message has not
been forged or tampered with." "[B]uilding
in exceptional access would substantially increase system complexity"
and "complexity is the enemy of security." Adding code to systems increases that system's attack surface, and a certain number
of additional vulnerabilities come with every marginal increase in system complexity. So by requiring a potentially complicated new system to
be developed and implemented, we'd be effectively guaranteeing more vulnerabilities for malicious actors to hit. "[E]xceptional access
would create concentrated targets that could attract bad actors." If we require tech companies to retain some means
of accessing user communications, those keys have to stored somewhere, and that storage then becomes an unusually high-stakes target for
malicious attack. Their theft then compromises, as did the OPM hack, large numbers of users. The strong implication of the report is that
these issues are not resolvable, though the report never quite says that. But at a minimum, the authors raise a series of important
questions about whether such a system would, in practice, create an insecure internet in general—rather than one whose general security has
the technical capacity to make security exceptions to comply with the law. There is some reason, in my view, to suspect that the
picture
may not be quite as stark as the computer scientists make it seem. After all, the big tech companies
increase the complexity of their software products all the time, and they generally regard the increased
attack surface of the software they create as a result as a mitigatable problem. Similarly, there are lots of
high-value intelligence targets that we have to secure and would have big security implications if we
could not do so successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without
tools in the cybersecurity department. The real question, in my view, is whether a system of the sort Comey imagines could be
built in fashion in which the security gain it would provide would exceed the heightened security risks
the extraordinary access would involve. As Herb Lin puts it in his excellent, and admirably brief, Senate testimony the other day,
this is ultimately a question without an answer in the absence of a lot of new research. "One side says [the] access [Comey is seeking] inevitably
weakens the security of a system and will eventually be compromised by a bad guy; the other side says it doesn’t weaken security and won’t be
compromised. Neither side can prove its case, and we see a theological clash of absolutes." Only when someone actually does the research and
development and tries actually to produce a system that meets Comey's criteria are we going to find out whether it's doable or not. And
therein lies the rub, and the real meat of the policy problem, in my view: Who's going to do this research? Who's going to conduct the
sustained investment in trying to imagine a system that secures communications except from government when and only government has a
warrant to intercept those communications? The assumption of the computer scientists in their report is that the burden of that research lies
with the government. "Absent a concrete technical proposal," they write, "and without answers to the questions raised in this report,
legislators should reject out of hand any proposal to return to the failed cryptography control policy of the 1990s." Indeed, their most central
recommendation is that the burden of development is on Comey. "Our strong recommendation is that anyone proposing regulations should
first present concrete technical requirements, which industry, academics, and the public can analyze for technical weaknesses and for hidden
costs." In his testimony, Herb supports this call, though he acknowledges that it is not the inevitable route: the government has not yet
provided any specifics, arguing that private vendors should do it. At the same time, the vendors won’t do it, because [their] customers aren’t
demanding such features. Indeed, many customers would see such features as a reason to avoid a given vendor. Without specifics, there will be
no progress. I believe the government is afraid that any specific proposal will be subject to enormous criticism—and that’s true—but the
government is the party that wants . . . access, and rather than running away from such criticism, it should embrace any resulting criticism as an
opportunity to improve upon its initial designs." Herb might also have mentioned that lots of people in the academic tech community who
would be natural candidates to help develop such an access system are much more interested in developing encryption systems to keep the
feds out than to—under any circumstances—let them in. The tech community has spent a lot more time and energy arguing against the
plausibility and desireability of implementing what Comey is seeking than it has spent in trying to develop systems that deliver it while
mitigating the risks such a system might pose. For both industry and the tech communities, more broadly, this is government's problem, not
their problem. Yet reviving the Clipper Chip model—in which government develops a fully-formed system and then puts it out publicly for the
community to shoot down—is clearly not what Comey has in mind. He is talking in very different language: the language of performance
requirements. He wants to leave the development task to Silicon Valley to figure out how to implement government's requirements. He
wants to describe what he needs—decrypted signal when he has a warrant—and leave the companies
to figure out how to deliver it while still providing secure communications in other circumstances to
their customers. The advantage to this approach is that it potentially lets a thousand flowers bloom.
Each company might do it differently. They would compete to provide the most security consistent
with the performance standard. They could learn from each other. And government would not be in
the position of developing and promoting specific algorithms. It wouldn't even need to know how the
task was being done.
1NC No Meltdowns Impact
No impact to or risk of nuclear meltdowns – their evidence
Cappiello 3/29/11 – national environmental reporter for The Associated Press, master’s degrees in
earth and environmental science and journalism from Columbia University (Dina, “Long Blackouts Pose
Risk To U.S. Nuclear Reactors” Huffington Post, http://www.huffingtonpost.com/2011/03/29/blackoutrisk-us-nuclear-reactors_n_841869.html)//IS
A 2003 federal analysis looking at how to estimate the risk of containment failure said that should
power be knocked out by an
earthquake or tornado it "would be unlikely that power will be recovered in the time frame to prevent core
meltdown." In Japan, it was a one-two punch: first the earthquake, then the tsunami. Tokyo Electric Power Co., the operator of the crippled
plant, found other ways to cool the reactor core and so far avert a full-scale meltdown without electricity. "Clearly the coping duration is an
issue on the table now," said Biff Bradley, director of risk assessment for the Nuclear Energy Institute. "The industry and the Nuclear Regulatory
Commission will have to go back in light of what we just observed and rethink station blackout duration." David Lochbaum, a former plant
engineer and nuclear safety director at the advocacy group Union of Concerned Scientists, put it another way: "Japan
shows what
happens when you play beat-the-clock and lose." Lochbaum plans to use the Japan disaster to press lawmakers and the
nuclear power industry to do more when it comes to coping with prolonged blackouts, such as having temporary generators on site that can
recharge batteries. A
complete loss of electrical power, generally speaking, poses a major problem for a
nuclear power plant because the reactor core must be kept cool, and back-up cooling systems – mostly
pumps that replenish the core with water – require massive amounts of power to work. Without the
electrical grid, or diesel generators, batteries can be used for a time, but they will not last long with the
power demands. And when the batteries die, the systems that control and monitor the plant can also go
dark, making it difficult to ascertain water levels and the condition of the core. One variable not
considered in the NRC risk assessments of severe blackouts was cooling water in spent fuel pools, where
rods once used in the reactor are placed. With limited resources, the commission decided to focus its
analysis on the reactor fuel, which has the potential to release more radiation. An analysis of individual
plant risks released in 2003 by the NRC shows that for 39 of the 104 nuclear reactors, the risk of core
damage from a blackout was greater than 1 in 100,000. At 45 other plants the risk is greater than 1 in
1 million, the threshold NRC is using to determine which severe accidents should be evaluated in its latest analysis. The Beaver Valley Power
Station, Unit 1, in Pennsylvania had the greatest risk of core melt – 6.5 in 100,000, according to the analysis. But that risk may have been
reduced in subsequent years as NRC regulations required plants to do more to cope with blackouts. Todd Schneider, a spokesman for
FirstEnergy Nuclear Operating Co., which runs Beaver Creek, told the AP that batteries on site would last less than a week. In 1988, eight years
after labeling blackouts "an unresolved safety issue," the NRC required nuclear power plants to improve the reliability of their diesel
generators, have more backup generators on site, and better train personnel to restore power. These steps would allow them to keep the core
cool for four to eight hours if they lost all electrical power. By contrast, the newest generation of nuclear power plant, which is still awaiting
approval, can last 72 hours without taking any action, and a minimum of seven days if water is supplied by other means to cooling pools.
Despite the added safety measures, a 1997 report found that blackouts – the loss of on-site and off-site electrical power – remained "a
dominant contributor to the risk of core melt at some plants." The events of Sept. 11, 2001, further solidified that nuclear reactors might have
to keep the core cool for a longer period without power. After 9/11, the commission issued regulations requiring that plants
have
portable power supplies for relief valves and be able to manually operate an emergency reactor
cooling system when batteries go out. The NRC says these steps, and others, have reduced the risk of
core melt from station blackouts from the current fleet of nuclear plants. For instance, preliminary results of the
latest analysis of the risks to the Peach Bottom plant show that any release caused by a blackout there would be far less
rapid and would release less radiation than previously thought, even without any actions being taken.
With more time, people can be evacuated. The NRC says improved computer models, coupled with up-to-date information
about the plant, resulted in the rosier outlook. "When you simplify, you always err towards the worst possible circumstance," Scott Burnell, a
spokesman for the Nuclear Regulatory Commission, said of the earlier studies. The
latest work shows that "even in situations
where everything is broken and you can't do anything else, these events take a long time to play out,"
he said. "Even when you get to releasing into environment, much less of it is released than actually
thought." Exelon Corp., the operator of the Peach Bottom plant, referred all detailed questions about its preparedness and the risk analysis
back to the NRC. In a news release issued earlier this month, the company, which operates 10 nuclear power plants, said "all Exelon
nuclear plants are able to safely shut down and keep the fuel cooled even without electricity from the
grid." Other people, looking at the crisis unfolding in Japan, aren't so sure. In the worst-case scenario, the NRC's 1990 risk assessment
predicted that a core melt at Peach Bottom could begin in one hour if electrical power on- and off-site were
lost, the diesel generators – the main back-up source of power for the pumps that keep the core cool
with water – failed to work and other mitigating steps weren't taken. "It is not a question that those
things are definitely effective in this kind of scenario," said Richard Denning, a professor of nuclear engineering at Ohio
State University, referring to the steps NRC has taken to prevent incidents. Denning had done work as a contractor on severe accident analyses
for the NRC since 1975. He retired from Battelle Memorial Institute in 1995. "They certainly could have made all the difference in this particular
case," he said, referring to Japan. "That's assuming you have stored these things in a place that would not have been swept away by tsunami."
1NC No Cyber
Their impacts are all hype—no cyberattack
Walt 10 – Stephen M. Walt 10 is the Robert and Renée Belfer Professor of international relations at
Harvard University "Is the cyber threat overblown?" March 30
walt.foreignpolicy.com/posts/2010/03/30/is_the_cyber_threat_overblown
Am I the only person -- well, besides Glenn Greenwald and Kevin Poulson -- who thinks the "
cyber-warfare" business may be overblown? It’s clear the U.S. national security
establishment is paying a lot more attention to the issue, and colleagues of mine -- including some pretty serious and level-headed people -- are increasingly worried by the danger of
some sort of "cyber-Katrina." I don't dismiss it entirely, but this sure
looks to me like a classic opportunity for threat-inflation.¶ Mind you, I'm not
saying that there aren't a lot of shenanigans going on in cyber-space, or that various forms of cyber-warfare don't have military potential. So I'm not arguing for complete head-in-the-sand complacency. But
here’s what makes me worry that the threat is being overstated.¶ First, the whole issue is highly esoteric -- you really need to know a great deal about computer networks, software, encryption, etc., to know how serious the danger might be. Unfortunately, details about a
number of the alleged incidents that are being invoked to demonstrate the risk of a "cyber-Katrina," or a cyber-9/11, remain
classified, which makes it hard for us lay-persons to gauge just how serious the problem really was or is. Moreover, even
when we hear about computers being penetrated by hackers, or parts of the internet crashing, etc., it’s hard to know how
much valuable information was stolen or how much actual damage was done. And as with other specialized areas of technology
and/or military affairs, a lot of the experts have a clear vested interest in hyping the threat, so as to create greater
demand for their services. Plus, we already seem to have politicians leaping on the issue as a way to grab some pork for their
states.¶ Second, there are lots of different problems being lumped under a single banner, whether the label is "cyber-terror" or
"cyber-war." One issue is the use of various computer tools to degrade an enemy’s military capabilities (e.g., by disrupting communications nets, spoofing sensors, etc.). A second issue is
the alleged threat that bad guys would penetrate computer networks and shut down power grids, air traffic control, traffic lights, and other important elements of infrastructure, the way
that internet terrorists (led by a disgruntled computer expert) did in the movie Live Free and Die Hard. A third problem is web-based criminal activity, including identity theft or simple
fraud (e.g., those emails we all get from someone in Nigeria announcing that they have millions to give us once we send them some account information). A fourth potential threat is
“cyber-espionage”; i.e., clever foreign hackers penetrate Pentagon or defense contractors’ computers and download valuable classified information. And then there are annoying activities like viruses, denial-of-service attacks, and other things that affect the stability of web-based activities and disrupt commerce (and my ability to send posts into FP).¶
This
sounds like a rich menu of potential trouble, and putting the phrase "cyber" in front of almost any noun makes
it sound trendy and a bit more frightening. But notice too that these are all somewhat different problems of quite different importance, and the appropriate
response to each is likely to be different too. Some issues -- such as the danger of cyber-espionage -- may not require elaborate
technical fixes but simply more rigorous security procedures to isolate classified material from the web. Other problems may not
require big federal programs to address, in part because both individuals and the private sector have
incentives to protect themselves (e.g., via firewalls or by backing up critical data). And as Greenwald warns, there may be real costs to civil liberties if
concerns about vague cyber dangers lead us to grant the NSA or some other government agency greater control over the Internet. ¶ Third, this is another issue that cries out for some comparative cost-benefit analysis.
Is the danger that some malign hacker crashes a power grid greater than the likelihood
that a blizzard would do the same thing? Is the risk of cyber-espionage greater than the potential danger from
more traditional forms of spying? Without a comparative assessment of different risks and the costs of mitigating each one, we will allocate resources on the basis
of hype rather than analysis. In short, my fear is not that we won't take reasonable precautions against a potential set of dangers; my concern is that we will spend tens of billions of
dollars protecting ourselves against a set of threats that are not as dangerous as we are currently being told they are.
2NC No Cyber
No cyber impact
Healey 3/20 Jason, Director of the Cyber Statecraft Initiative at the Atlantic Council, "No,
Cyberwarfare Isn't as Dangerous as Nuclear War", 2013, www.usnews.com/opinion/blogs/world-report/2013/03/20/cyber-attacks-not-yet-an-existential-threat-to-the-us
America does not face an existential cyberthreat today, despite recent warnings. Our cybervulnerabilities are
undoubtedly grave and the threats we face are severe but far from comparable to nuclear war. ¶ The most recent
alarms come in a Defense Science Board report on how to make military cybersystems more resilient against advanced threats (in short, Russia
or China). It warned that the "cyber threat is serious, with potential consequences similar in some ways to the nuclear threat of the Cold War."
Such fears were also expressed by Adm. Mike Mullen, then chairman of the Joint Chiefs of Staff, in 2011. He called cyber "The single biggest
existential threat that's out there" because "cyber actually more than theoretically, can attack our infrastructure, our financial systems."¶
While it is true that cyber attacks might do these things, it is also true they have not only never
happened but are far more difficult to accomplish than mainstream thinking believes. The consequences
from cyber threats may be similar in some ways to nuclear, as the Science Board concluded, but mostly, they are
incredibly dissimilar. ¶ Eighty years ago, the generals of the U.S. Army Air Corps were sure that their bombers would easily topple other
countries and cause their populations to panic, claims which did not stand up to reality. A study of the 25-year history of cyber
conflict, by the Atlantic Council and Cyber Conflict Studies Association, has shown a similar dynamic where the impact of
disruptive cyberattacks has been consistently overestimated. ¶ Rather than theorizing about future cyberwars or
extrapolating from today's concerns, the history of cyberconflict that have actually been fought, shows that cyber incidents have so far tended
to have effects that are either widespread but fleeting or persistent but narrowly focused. No
attacks, so far, have been both
widespread and persistent. There have been no authenticated cases of anyone dying from a cyber
attack. Any widespread disruptions, even the 2007 disruption against Estonia, have been short-lived causing no significant
GDP loss. ¶ Moreover, as with conflict in other domains, cyberattacks can take down many targets but keeping them down over time in the face
of determined defenses has so far been out of the range of all but the most dangerous adversaries such as Russia and China. Of course, if the
United States is in a conflict with those nations, cyber will be the least important of the existential threats policymakers should be worrying
about. Plutonium
trumps bytes in a shooting war.¶ This is not all good news. Policymakers have recognized the problems since
at least 1998 with little significant progress. Worse, the threats and vulnerabilities are getting steadily more worrying. Still, experts have
been warning of a cyber Pearl Harbor for 20 of the 70 years since the actual Pearl Harbor. ¶ The transfer of
U.S. trade secrets through Chinese cyber espionage could someday accumulate into an existential threat. But it
doesn't seem so just yet, with only handwaving estimates of annual losses of 0.1 to 0.5 percent to the total U.S. GDP of around
$15 trillion. That's bad, but it doesn't add up to an existential crisis or "economic cyberwar."
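Analyst note on the arithmetic in that last sentence: 0.1 to 0.5 percent of a roughly $15 trillion GDP comes out to somewhere on the order of $15 billion to $75 billion a year in estimated losses, which is the scale Healey characterizes as serious but well short of an existential or "economic cyberwar" level threat.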
No impact to cyberterror
Green 2 – editor of The Washington Monthly (Joshua, 11/11, The Myth of Cyberterrorism,
http://www.washingtonmonthly.com/features/2001/0211.green.html, AG)
There's just one problem: There
is no such thing as cyberterrorism--no instance of anyone ever having
been killed by a terrorist (or anyone else) using a computer. Nor is there compelling evidence that al
Qaeda or any other terrorist organization has resorted to computers for any sort of serious destructive activity.
What's more, outside of a Tom Clancy novel, computer security specialists believe it is virtually impossible to
use the Internet to inflict death on a large scale, and many scoff at the notion that terrorists would bother trying. "I don't
lie awake at night worrying about cyberattacks ruining my life," says Dorothy Denning, a computer science professor at
Georgetown University and one of the country's foremost cybersecurity experts. "Not only does
[cyberterrorism] not rank alongside chemical, biological, or nuclear weapons, but it is
not anywhere near as serious as
other potential physical threats like car bombs or suicide bombers."
Which is not to say that cybersecurity isn't a serious problem--
it's just not one that involves terrorists. Interviews with terrorism and computer security experts, and current and former government and
military officials, yielded near unanimous agreement that the real danger is from the criminals and other hackers who did $15 billion in damage
to the global economy last year using viruses, worms, and other readily available tools. That figure is sure to balloon if more isn't done to
protect vulnerable computer systems, the vast majority of which are in the private sector. Yet when it comes to imposing the tough measures on
business necessary to protect against the real cyberthreats, the Bush administration has balked. Crushing BlackBerrys When ordinary
people imagine cyberterrorism, they tend to think along Hollywood plot lines, doomsday scenarios in
which terrorists hijack nuclear weapons, airliners, or military computers from halfway around the world. Given the colorful
history of federal boondoggles--billion-dollar weapons systems that misfire, $600 toilet seats--that's an understandable concern. But, with few
exceptions, it's not one that applies to preparedness for a cyberattack. "The government is miles ahead of the private sector when it comes to
cybersecurity," says Michael Cheek, director of intelligence for iDefense, a Virginia­based computer security company with government and
private­sector clients. "Particularly the most sensitive military systems." Serious effort and plain good fortune have combined to bring this
about. Take nuclear weapons. The biggest fallacy about their vulnerability, promoted in action thrillers like WarGames, is that they're designed
for remote operation. "[The movie] is premised on the assumption that there's a modem bank hanging on the side of the computer that
controls the missiles," says Martin Libicki, a defense analyst at the RAND Corporation. "I assure you, there isn't." Rather, nuclear weapons and
other sensitive military systems enjoy the most basic form of Internet security: they're "air-gapped," meaning that they're not physically
connected to the Internet and are therefore inaccessible to outside hackers. (Nuclear weapons also contain "permissive action links,"
mechanisms to prevent weapons from being armed without inputting codes carried by the president.) A retired military official was somewhat
indignant at the mere suggestion: "As a general principle, we've been looking at this thing for 20 years. What cave have you been living in if you
haven't considered this [threat]?" When it comes to cyberthreats, the
Defense Department has been particularly
vigilant to protect key systems by isolating them from the Net and even from the Pentagon's internal network. All
new software must be submitted to the National Security Agency for security testing. "Terrorists could not gain control of our
spacecraft, nuclear weapons, or any other type of high-consequence asset," says Air Force Chief Information
Officer John Gilligan. For more than a year, Pentagon CIO John Stenbit has enforced a moratorium on new wireless networks, which are often
easy to hack into, as well as common wireless devices such as PDAs, BlackBerrys, and even wireless or infrared copiers and faxes. The
September 11 hijackings led to an outcry that airliners are particularly susceptible to cyberterrorism. Earlier this year, for instance, Sen. Charles
Schumer (D-N.Y.) described "the absolute havoc and devastation that would result if cyberterrorists suddenly shut down our air traffic control
system, with thousands of planes in mid­flight." In fact, cybersecurity experts give some of their highest marks to the FAA, which reasonably
separates its administrative and air traffic control systems and strictly air-gaps the latter. And there's a reason the 9/11 hijackers used box-
cutters instead of keyboards: It's
impossible to hijack a plane remotely, which eliminates the possibility of a high-
tech 9/11 scenario in which planes are used as weapons. Another source of concern is terrorist infiltration of our
intelligence agencies. But here, too, the risk is slim. The CIA's classified computers are also air-gapped, as is the
FBI's entire computer system. "They've been paranoid about this forever," says Libicki, adding that paranoia is a sound governing principle
when it comes to cybersecurity. Such concerns are manifesting themselves in broader policy terms as well. One notable characteristic of last
year's Quadrennial Defense Review was how strongly it focused on protecting information systems.
Cyberattacks impossible – empirics and defenses solve
Rid ‘12 (Thomas Rid, reader in war studies at King's College London, is author of "Cyber War
Will Not Take Place" and co-author of "Cyber-Weapons.", March/April 2012, “Think Again:
Cyberwar”, http://www.foreignpolicy.com/articles/2012/02/27/cyberwar?page=full)
"Cyberwar Is Already Upon Us." No way. "Cyberwar
is coming!" John Arquilla and David Ronfeldt predicted in a celebrated Rand paper
back in 1993. Since then, it seems to have arrived -- at least by the account of the U.S. military establishment, which is busy competing
over who should get what share of the fight. Cyberspace is "a domain in which the Air Force flies and fights," Air Force Secretary Michael Wynne
claimed in 2006. By 2012, William J. Lynn III, the deputy defense secretary at the time, was writing that cyberwar
is "just as critical to
military operations as land, sea, air, and space." In January, the Defense Department vowed to equip the U.S. armed forces for
"conducting a combined arms campaign across all domains ­­ land, air, maritime, space, and cyberspace." Meanwhile, growing piles of books
and articles explore the threats of cyberwarfare, cyberterrorism, and how to survive them. Time
for a reality check: Cyberwar is
still more hype than hazard. Consider the definition of an act of war: It has to be potentially violent, it has to be
purposeful, and it has to be political. The cyberattacks we've seen so far, from Estonia to the Stuxnet virus, simply
don't meet these criteria. Take the dubious story of a Soviet pipeline explosion back in 1982, much cited by cyberwar's true believers
as the most destructive cyberattack ever. The account goes like this: In June 1982, a Siberian pipeline that the CIA had virtually
booby-trapped with a so-called "logic bomb" exploded in a monumental fireball that could be seen from space. The U.S. Air Force estimated the
explosion at 3 kilotons, equivalent to a small nuclear device. Targeting a Soviet pipeline linking gas fields in Siberia to European markets, the
operation sabotaged the pipeline's control systems with software from a Canadian firm that the CIA had doctored with malicious code. No
one died, according to Thomas Reed, a U.S. National Security Council aide at the time who revealed the incident in his 2004 book, At the
Abyss; the only harm came to the Soviet economy. But did it really happen? After Reed's account came out, Vasily
Pchelintsev, a former KGB head of the Tyumen region, where the alleged explosion supposedly took place, denied
the story. There are also no media reports from 1982 that confirm such an explosion, though accidents and pipeline explosions in the Soviet
Union were regularly reported in the early 1980s. Something likely did happen, but Reed's book is the only public mention of the incident and
his account relied on a single document. Even after the CIA declassified a redacted version of Reed's source, a note on the so-called Farewell
Dossier that describes the effort to provide the Soviet Union with defective technology, the agency did not confirm that such an explosion
occurred. The available evidence on the Siberian pipeline blast is so thin that it shouldn't be counted as a proven case of a successful
cyberattack. Most other commonly cited cases of cyberwar are even less remarkable. Take the attacks on Estonia in April 2007, which came in
response to the controversial relocation of a Soviet war memorial, the Bronze Soldier. The well-wired country found itself at the receiving end of
a massive distributed denial-of-service attack that emanated from up to 85,000 hijacked computers and lasted three weeks. The attacks reached
a peak on May 9, when 58 Estonian websites were attacked at once and the online services of Estonia's largest bank were taken down. "What's
the difference between a blockade of harbors or airports of sovereign states and the blockade of government institutions and newspaper
websites?" asked Estonian Prime Minister Andrus Ansip. Despite his analogies, the attack was no act of war. It was certainly a nuisance and an
emotional strike on the country, but the bank's actual network was not even penetrated; it went down for 90 minutes one day and two hours
the next. The attack was not violent, it wasn't purposefully aimed at changing Estonia's behavior, and no political entity took credit for it. The
same is true for the vast majority of cyberattacks on record. Indeed, there
is no known cyberattack that has caused the
loss of human life. No cyberoffense has ever injured a person or damaged a building. And if an act is not at
least potentially violent, it's not an act of war. Separating war from physical violence makes it a metaphorical notion; it would
mean that there is no way to distinguish between World War II, say, and the "wars" on obesity and cancer. Yet those ailments, unlike past
examples of cyber "war," actually do kill people. "A Digital Pearl Harbor Is Only a Matter of Time." Keep waiting. U.S.
Defense Secretary Leon Panetta delivered a stark warning last summer: "We could face a cyberattack that could be the equivalent of Pearl
Harbor." Such alarmist
predictions have been ricocheting inside the Beltway for the past two decades, and
some scaremongers have even upped the ante by raising the alarm about a cyber 9/11. In his 2010 book, Cyber
War, former White House counterterrorism czar Richard Clarke invokes the specter of nationwide power blackouts, planes falling out of the sky,
trains derailing, refineries burning, pipelines exploding, poisonous gas clouds wafting, and satellites spinning out of orbit -- events that would
make the 2001 attacks pale in comparison. But the
empirical record is less hair­raising, even by the standards of the
most drastic example available. Gen. Keith Alexander, head of U.S. Cyber Command (established in 2010 and now
boasting a budget of more than $3 billion), shared his worst fears in an April 2011 speech at the University of Rhode Island: "What I'm
concerned about are destructive attacks," Alexander said, "those that are coming." He then invoked a remarkable accident at Russia's Sayano-
Shushenskaya hydroelectric plant to highlight the kind of damage a cyberattack might be able to cause. Shortly after midnight on Aug. 17, 2009,
a 900-ton turbine was ripped out of its seat by a so-called "water hammer," a sudden surge in water pressure that then caused a transformer
explosion. The turbine's unusually high vibrations had worn down the bolts that kept its cover in place, and an offline sensor failed to detect the
malfunction. Seventy­five people died in the accident, energy prices in Russia rose, and rebuilding the plant is slated to cost $1.3 billion. Tough
luck for the Russians, but here's what the head of Cyber Command didn't say: The ill-fated turbine had been malfunctioning for some time, and
the plant's management was notoriously poor. On top of that, the key event that ultimately triggered the catastrophe seems to have been a fire
at Bratsk power station, about 500 miles away. Because the energy supply from Bratsk dropped, authorities remotely increased the burden on
the Sayano-Shushenskaya plant. The sudden spike overwhelmed the turbine, which was two months shy of reaching the end of its 30-year life
cycle, sparking the catastrophe. If anything, the
Sayano-Shushenskaya incident highlights how difficult a devastating
attack would be to mount. The plant's washout was an accident at the end of a complicated and unique
chain of events. Anticipating such vulnerabilities in advance is extraordinarily difficult even for insiders;
creating comparable coincidences from cyberspace would be a daunting challenge at best for outsiders.
If this is the most drastic incident Cyber Command can conjure up, perhaps it's time for everyone to take a deep breath. "Cyberattacks Are
Becoming Easier." Just the opposite. U.S. Director of National Intelligence James R. Clapper warned last year that the
volume of malicious software on American networks had more than tripled since 2009 and that more than
60,000 pieces of malware are now discovered every day. The United States, he said, is undergoing "a phenomenon known as
'convergence,' which amplifies the opportunity for disruptive cyberattacks, including against physical
infrastructures." ("Digital convergence" is a snazzy term for a simple thing: more and more devices able to talk to each other, and formerly
separate industries and activities able to work together.) Just
because there's more malware, however, doesn't mean that
attacks are becoming easier. In fact, potentially damaging or life-threatening cyberattacks should be more
difficult to pull off. Why? Sensitive systems generally have built‐in redundancy and safety systems,
meaning an attacker's likely objective will not be to shut down a system, since merely forcing the
shutdown of one control system, say a power plant, could trigger a backup and cause operators to start
looking for the bug. To work as an effective weapon, malware would have to influence an active process -- but not bring it to a
screeching halt. If the malicious activity extends over a lengthy period, it has to
remain stealthy. That's a more difficult trick than hitting the virtual off-button. Take Stuxnet, the worm that
sabotaged Iran's nuclear program in 2010. It didn't just crudely shut down the centrifuges at the Natanz nuclear
facility; rather, the worm subtly manipulated the system. Stuxnet stealthily infiltrated the plant's networks, then hopped onto
the protected control systems, intercepted input values from sensors, recorded these data, and then provided the legitimate controller code
with pre-recorded fake input signals, according to researchers who have studied the worm. Its objective was not just to fool operators in a
control room, but also to circumvent digital safety and monitoring systems so it could secretly manipulate the actual processes. Building
and deploying Stuxnet required extremely detailed intelligence about the systems it was supposed to
compromise, and the same will be true for other dangerous cyberweapons. Yes, "convergence,"
standardization, and sloppy defense of control-systems software could increase the risk of generic
attacks, but the same trend has also caused defenses against the most coveted targets to improve
steadily and has made reprogramming highly specific installations on legacy systems more complex,
not less.
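Analyst note: Rid's description of Stuxnet, recording normal sensor values and replaying them to the legitimate controller code while the real process was manipulated, can be illustrated with a toy simulation. The sketch below is purely pedagogical (no real ICS protocols; the variable, thresholds, and timings are invented) and exists only to show why this kind of attack presupposes detailed knowledge of what "normal" looks like for the specific plant being targeted.

```python
# Toy illustration of the record-and-replay pattern described in the card above.
# Not real ICS code; the sensor, thresholds, and timings are invented for the example.
import random

def read_true_pressure(step: int) -> float:
    """Simulated physical process: pressure drifts out of its safe band
    once the (simulated) sabotage begins at step 50."""
    base = 100.0 + random.uniform(-0.5, 0.5)
    return base if step < 50 else base + (step - 50) * 0.8

def operator_display(value: float) -> str:
    """Simplified safety logic: alarm if pressure leaves the 95-105 band."""
    return "ALARM" if abs(value - 100.0) > 5.0 else "normal"

# Phase 1: the attacker quietly records what "normal" looks like.
recorded_normal = [read_true_pressure(step) for step in range(50)]

# Phase 2: sabotage begins, but the monitoring channel is fed the recording,
# so the safety logic and the operators keep seeing normal values.
for step in range(50, 120):
    true_value = read_true_pressure(step)       # what the plant is actually doing
    replayed = recorded_normal[step % 50]       # what the controller/operators see
    if step % 20 == 0:
        print(f"step {step}: true={true_value:6.1f} ({operator_display(true_value)}) "
              f"displayed={replayed:6.1f} ({operator_display(replayed)})")
```

Notice what the toy example quietly assumes: the attacker already knows which variable to record, what its normal band is, and how long the replay has to run. That is the "extremely detailed intelligence" Rid says real cyberweapons require, which is the warrant for reading this card against generic "attacks are getting easier" claims.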
Cyber-Vulnerability Adv
Notes
30 second explainer: yeah whatevs nuke war outweighs.
CX Questions
1NC Util
Prefer consequences
Goodin ‘95
Robert E. Goodin, Professor of Philosophy at the University of Australia, “Utilitarianism as a Public Philosophy”, pg 26 1995
This focus on the moral importance of modal shifts can be shown to have important implications for nuclear weapons policy. The preconditions
for applying my argument surely all exist. Little need be said to justify the claim that the
consequences in view matter morally.
Maybe consequentialistic considerations are not the only ones that should guide our choices, of military
policies or any others; but where the consequences in view are so momentous as those involved in an
all-out nuclear war, it would be sheer lunacy to deny such considerations any role at all
Morality is vacuous- infinite regress
Stelzig 98
[Tim Stelzig, B.A. 1990, West Virginia University; M.A. 1995, University of Illinois at Chicago; J.D.
Candidate 1998, University of Pennsylvania. , 3/98, "COMMENT: DEONTOLOGY, GOVERNMENTAL
ACTION, AND THE DISTRIBUTIVE EXEMPTION: HOW THE TROLLEY PROBLEM SHAPES THE RELATIONSHIP
BETWEEN RIGHTS AND POLICY", 146 U. Pa. L. Rev. 901, lexis law]
Take first the epistemological problem. Every
view of morality must ultimately give some account of how it is that
we come to know what is right. An otherwise impressive moral metaphysics is pointless if
epistemologically implausible. 103 With general norms, it is plausible that we may come to learn them gradually, refining our
understanding through practice. Naturalistically learning through practice, however, is foreclosed to one who sees deontology as both
pervasive and particularist. Almost every
situation is morally different from the rest, even if only slightly so. If
deontology is exhaustive of morality, there must be a separate injunction for each situation. The epistemological [*922] problem is
that learning an essentially infinite number of separate rules to govern our conduct is implausible. It
initially might be thought that the epistemological problem could be overcome by allowing generality within the specific norms, thus making it
possible for the student of morality to learn these general principles and then derive the specific deontological prohibitions from them. The
trouble with this response is that the important theoretic work is performed by the underlying principles by which the specific deontological
maxims can be learned. This is problematic because theoretic entities are abstract. As such, Ockham’s Razor and the principles of pragmatism
dictate that we do better to recognize conceptually the general principles. There
is no logical inconsistency in positing a
deontological norm for every morally distinct situation. But if pervasive, deontological maxims would be
superfluous. Thus, it is theoretically preferable to deny them this exclusivity. 106 Suppose the epistemological problem can be skirted by
allowing that some theoretically benign generality informs our moral understanding. If deontology may be exhaustive without
being particularist, then a separate objection, the conflicts problem, arises. As was true of the epistemological
problem, the conflicts problem arises because morality has something to say about almost everything.
Because the world is complex, if rights are general, then the evaluation of most morally interesting situations
will either depend on more than one rights claim or on some other moral element, each problematic for
the claim that deontology is exhaustive of morality. The reason is structural. Our moral intuitions are highly nuanced – often
minor changes to a factual situation alter the normative evaluation of that situation. But since a limited number of general
norms, because they are general, cannot account for this contextual sensitivity, some other explanation
must be offered. Positing a greater number of more specific deontological norms could account for this
factual sensitivity. Doing so, however, threatens to reincarnate the epistemological problem. If our norms are
relatively few in number, thereby putting them within our epistemic reach, either many norms will apply to each situation to give us the
contextual sensitivity that is evident, or some other principles must be at work.
Turn- morality undercuts political responsibility leading to political failures and
greater evils
Isaac 02
Isaac, poli sci prof at Indiana – Bloomington, dir Center for the Study of Democracy and Public Life, ’02
(Jeffrey, PhD from Yale, Dissent Magazine, Vol. 49, Iss. 2, “Ends, Means, and Politics,” p. Proquest)
As writers such as Niccolo Machiavelli, Max Weber, Reinhold Niebuhr, and Hannah Arendt have taught, an unyielding
concern with
moral goodness undercuts political responsibility. The concern may be morally laudable, reflecting a kind of
personal integrity, but it suffers from three fatal flaws: (1) It fails to see that the purity of one’s intention does
not ensure the achievement of what one intends. Abjuring violence or refusing to make common cause with morally
compromised parties may seem like the right thing; but if such tactics entail impotence, then it is hard to view them as
serving any moral good beyond the clean conscience of their supporters; (2) it fails to see that in a world of
real violence and injustice, moral purity is not simply a form of powerlessness; it is often a form of complicity in injustice. This
is why, from the standpoint of politics--as opposed to religion--pacifism is always a potentially immoral stand. In categorically
repudiating violence, it refuses in principle to oppose certain violent injustices with any effect; and (3) it fails to
see that politics is as much about unintended consequences as it is about intentions; it is the effects of action,
rather than the motives of action, that is most significant. Just as the alignment with “good” may
engender impotence, it is often the pursuit of “good” that generates evil. This is the lesson of communism
in the twentieth century: it is not enough that one’s goals be sincere or idealistic; it is equally important, always, to
ask about the effects of pursuing these goals and to judge these effects in pragmatic and historically contextualized
ways. Moral absolutism inhibits this judgment. It alienates those who are not true believers. It promotes
arrogance. And it undermines political effectiveness.
Solvency
1NC No Solvency
Aff is insufficient because it doesn’t seek international commitments – their evidence
CCIA 12 (international not-for-profit membership organization dedicated to innovation and enhancing
society’s access to information and communications)
(Promoting Cross-Border Data Flows: Priorities for the Business Community, http://www.ccianet.org/wp-content/uploads/library/PromotingCrossBorderDataFlows.pdf)
The movement of electronic information across borders is critical to businesses around the world, but the international rules governing flows of
digital goods, services, data and infrastructure are incomplete. The global trading system does not spell out a consistent, transparent
framework for the treatment of cross-border flows of digital goods, services or information, leaving businesses and individuals to deal with a
patchwork of national, bilateral and global arrangements covering significant issues such as the storage, transfer, disclosure, retention and
protection of personal, commercial and financial data. Dealing with these issues is becoming even more important as a new generation of
networked technologies enables greater cross-border collaboration over the Internet, which has the potential to stimulate economic
development and job growth. Despite the widespread benefits of cross-border data flows to innovation and economic growth, and due in large
part to gaps in global rules and inadequate enforcement of existing commitments, digital protectionism is a growing threat around the world. A
number of countries have already enacted or are pursuing restrictive policies governing the provision of digital commercial and financial
services, technology products, or the treatment of information to favor domestic interests over international competition. Even where policies
are designed to support legitimate public interests such as national security or law enforcement, businesses can suffer when those rules are
unclear, arbitrary, unevenly applied or more trade-restrictive than necessary to achieve the underlying objective. What's more, multiple
governments may assert jurisdiction over the same information, which may leave businesses subject to inconsistent or conflicting rules. In
response, the United States should drive the development and adoption of transparent and high-quality international rules, norms and best
practices on cross-border flows of digital data and technologies while also holding countries to existing international obligations. Such efforts
must recognize and accommodate legitimate differences in regulatory approaches to issues such as privacy and security between countries as
well as across sectors. They should also be grounded in key concepts such as non-discrimination and national treatment that have underpinned
the trading system for decades.
The U.S. Government should seek international commitments on several key
objectives, including: prohibiting measures that restrict legitimate cross-border data flows or link
commercial benefit to local investment; addressing emerging legal and policy issues involving the digital
economy; promoting industry-driven international standards, dialogues and best practices; and
expanding trade in digital goods, services and infrastructure. U.S. efforts should ensure that trade
agreements cover digital technologies that may be developed in the future. At the same time, the
United States should work with governments around the world to pursue other policies that support
cross-border data flows, including those endorsed in the Communiqué on Principles for Internet
Policymaking related to intellectual property protection and limiting intermediary liability developed by
the Organization for Economic Cooperation and Development (OECD) in June 2011. U.S. negotiators
should pursue these issues in a variety of forums around the world, including the World Trade
Organization (WTO), Asia Pacific Economic Cooperation (APEC) forum, OECD, and regional trade
negotiations such as the Trans-Pacific Partnership as appropriate in each forum. In addition, the U.S.
Government should solicit ideas and begin to develop a plurilateral framework to set a new global gold-
standard to improve innovation. Finally, the U.S. Government should identify and seek to resolve
through WTO or bilateral consultations or other processes violations of current international rules
concerning digital goods, services and information. The importance of cross-border commercial and financial flows Access to computers, servers, routers and mobile devices,
services such as cloud computing – whereby remote data centers host information and run applications over the Internet, and information is
vital to the success of billions of individuals, businesses and entire economies. In the United States alone, the goods, services and content
flowing through the Internet have been responsible for 15 percent of GDP growth over the past five years. Open, fair and contestable
international markets for information and communication technologies (ICT) and information are important to electronic retailers, search
engines, social networks, web hosting providers, registrars and the range of technology infrastructure and service providers who rely directly on
the Internet to create economic value. But they are also critical to the much larger universe of manufacturers, retailers, wholesalers, financial
services and logistics firms, universities, labs, hospitals and other organizations which rely on hardware, software and reliable access to the
Internet to improve their productivity, extend their reach across the globe, and manage international networks of customers, suppliers, and
researchers. For example, financial institutions rely heavily on gathering, processing, and analyzing customer information and will often process
data in regional centers, which requires reliable and secure access both to networked technologies and cross-border data flows. According to
McKinsey, more than three-quarters of the value created by the Internet accrues to traditional industries that would exist without the Internet.
The overall impact of the Internet and information technologies on productivity may surpass the effect of any other technology enabler in
history, including electricity and the combustion engine, according to the OECD. Networked technologies and data flows are particularly
important to small businesses, nonprofits and entrepreneurs. Thanks to the Internet and advances in technology, small companies, NGOs and
individuals can customize and rapidly scale their IT systems at a lower cost and collaborate globally by accessing on-line services and platforms.
Improved access to networked technologies also creates new opportunities for entrepreneurs and innovators to design applications and to
extend their reach internationally to the more than two billion people who are now connected to the Internet. In fact, advances in networked
technologies have led to the emergence of entirely new business platforms. Kiva, a microlending service established in 2005, has used the
Internet to assemble a network of nearly 600,000 individuals who have lent over $200 million to entrepreneurs in markets where access to
traditional banking systems is limited. Millions of others use online advertising and platforms such as eBay, Facebook, Google Docs, Hotmail,
Skype and Twitter to reach customers, suppliers and partners around the world. More broadly, economies that are open to international trade
in ICT and information grow faster and are more productive. Limiting network access dramatically undermines the economic benefits of
technology and can slow growth across entire economies.
Backdoor reform is key to solve, not abolishment
Burger et al 14
(Eric, Research Professor of Computer Science at Georgetown, L. Jean Camp, Associate professor at the
Indiana University School of Information and Computing, Dan Lubar, Emerging Standards Consultant at
RelayServices, Jon M. Peha, Carnegie Mellon University, Terry Davis, MicroSystems Automation Group,
“Risking It All: Unlocking the Backdoor to the Nation’s Cybersecurity,” IEEE USA, 7/20/2014, pg. 1-5,
Social Science Research Network,
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2468604)//duncan
This paper addresses government policies that can influence commercial practices to weaken security in products and services sold on the
commercial market. The
debate on information surveillance for national security must include consideration
of the potential cybersecurity risks and economic implications of the information collection strategies
employed. As IEEE-USA, we write to comment on current discussions with respect to weakening standards, or altering commercial products
and services for intelligence, or law enforcement. Any policy that seeks to weaken technology sold on the commercial market has many serious
downsides, even if it temporarily advances the intelligence and law enforcement missions of facilitating legal and authorized government
surveillance.∂ Specifically, we define and address
the risks of installing backdoors in commercial products,
introducing malware and spyware into products, and weakening standards. We illustrate that these are
practices that harm America’s cybersecurity posture and put the resilience of American
cyberinfrastructure at risk. We write as a technical society to clarify the potential harm should these strategies be adopted. Whether
or not these strategies ever have been used in practice is outside the scope of this paper.∂ Individual computer users, large
corporations and government agencies all depend on security features built into information technology products
and services they buy on the commercial market. If the security features of these widely available
products and services are weak, everyone is in greater danger. There recently have been allegations that
U.S. government agencies (and some private entities) have engaged in a number of activities deliberately intended
to weaken mass market, widely used technology. Weakening commercial products and services does have the benefit
that it becomes easier for U.S. intelligence agencies to conduct surveillance on targets that use the weakened
technology, and more information is available for law enforcement purposes. On the surface, it would appear these motivations would be
reasonable. However, such
strategies also inevitably make it easier for foreign powers, criminals and
terrorists to infiltrate these systems for their own purposes. Moreover, everyone who uses backdoor
technologies may be vulnerable, and not just the handful of surveillance targets for U.S. intelligence agencies. It is the opinion of
IEEE-USA’s Committee on Communications Policy that no entity should act to reduce the security of a product or service sold on the
commercial market without first conducting a careful and methodical risk assessment. A complete risk assessment would consider the interests
of the large swath of users of the technology who are not the intended targets of government surveillance.∂ A
methodical risk
assessment would give proper weight to the asymmetric nature of cyberthreats, given that technology is equally
advanced and ubiquitous in the United States, and the locales of many of our adversaries. Vulnerable products should be
corrected, as needed, based on this assessment. The next section briefly describes some of the government policies and technical strategies
that might have the undesired side effect of reducing security. The following section discusses why the effect of these practices may be a
decrease, not an increase, in security.∂ Government policies
can affect greatly the security of commercial products, either positively or negatively. There
are a number of methods by which a government might affect security
negatively as a means of facilitating legal government surveillance. One inexpensive method is to
exploit pre-existing weaknesses that are already present in commercial software, while keeping these
weaknesses a secret. Another method is to motivate the designer of a computer or communications
system to make those systems easier for government agencies to access. Motivation may come from
direct mandate or financial incentives. There are many ways that a designer can facilitate government access once so motivated. For
example, the system may be equipped with a “backdoor.” The company that creates it — and, presumably,
the government agency that requests it — would “know” the backdoor, but not the product’s (or service’s)
purchaser(s). The hope is that the government agency will use this feature when it is given authority to do
so, but no one else will. However, creating a backdoor introduces the risk that other parties will find the
vulnerability, especially when capable adversaries, who are actively seeking security vulnerabilities,
know how to leverage such weaknesses.∂ History illustrates that secret backdoors do not remain secret
and that the more widespread a backdoor, the more dangerous its existence. The 1988 Morris worm,
the first widespread Internet attack, used a number of backdoors to infect systems and spread widely.
The backdoors in that case were a set of secrets then known only by a small, highly technical community. A single, putatively
innocent error resulted in a large-scale attack that disabled many systems. In recent years, Barracuda had a
completely undocumented backdoor that allowed high levels of access from the Internet addresses assigned to
Barracuda. However, when it was publicized, as almost inevitably happens, it became extremely unsafe, and Barracuda’s
customers rejected it.∂ One example of how attackers can subvert backdoors placed into systems for benign
reasons occurred in the network of the largest commercial cellular operator in Greece. Switches deployed in the system
came equipped with built-in wiretapping features, intended only for authorized law enforcement
agencies. Some unknown attacker was able to install software, and made use of these embedded wiretapping
features to surreptitiously and illegally eavesdrop on calls from many cell phones — including phones belonging
to the Prime Minister of Greece, a hundred high-ranking Greek dignitaries, and an employee of the U.S. Embassy
in Greece before the security breach finally was discovered. In essence, a backdoor created to fight crime was used to
commit crime.
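Analyst note: the Barracuda and Greek-switch examples in this card share one structural feature, an access path gated by a secret baked into the product rather than by a credential the customer controls. The hypothetical sketch below (invented usernames, password, and address range; not Barracuda's or anyone else's actual code) shows why that anti-pattern protects no one once the constants are pulled out of the firmware or simply leak.

```python
# Hypothetical anti-pattern sketch -- invented values, not any vendor's real code.
import ipaddress

# Secrets compiled into every shipped unit. Anyone who unpacks the firmware,
# reads a leaked manual, or observes one unit's traffic learns them for all units.
_MAINTENANCE_USER = "remote-support"                   # hypothetical
_MAINTENANCE_PASSWORD = "s3rv1ce-2014"                 # hypothetical
_VENDOR_NET = ipaddress.ip_network("203.0.113.0/24")   # documentation address range

def login(username: str, password: str, source_ip: str, customer_db: dict) -> bool:
    # Intended path: a credential the customer actually set and can rotate.
    if customer_db.get(username) == password:
        return True
    # Backdoor path: identical on every device, invisible to the customer,
    # "protected" only by a source-address check an attacker can satisfy.
    if (username == _MAINTENANCE_USER
            and password == _MAINTENANCE_PASSWORD
            and ipaddress.ip_address(source_ip) in _VENDOR_NET):
        return True
    return False

if __name__ == "__main__":
    customers = {"admin": "correct horse battery staple"}
    # No customer credential needed: only the shipped constants plus a spoofed
    # or compromised host inside the allowed range.
    print(login("remote-support", "s3rv1ce-2014", "203.0.113.7", customers))
```

Because every shipped unit carries the same constants, the secrecy of the path is the entire security model, which is why the card treats publicity of a backdoor as equivalent to its compromise.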
2NC No Solvency
Aff doesn’t solve – their author
Kehl et al 14 (Danielle Kehl is a Policy Analyst at New America’s Open Technology Institute (OTI). Kevin Bankston is
the Policy Director at OTI, Robyn Greene is a Policy Counsel at OTI, and Robert Morgus is a Research Associate at
OTI, “New America’s Open Technology Institute Policy Paper, Surveillance Costs: The NSA’s Impact on the
Economy, Internet Freedom & Cybersecurity,” July 2014// rck)
The U.S. government has already taken some limited steps to mitigate this damage and begin the slow, difficult process of rebuilding trust in
the United States as a responsible steward of the Internet. But the reform efforts to date have been relatively narrow, focusing primarily on the
surveillance programs’ impact on the rights of U.S. citizens. Based on our findings, we recommend that the U.S. government take the following
steps to address the broader concern that the NSA’s programs are impacting our economy, our foreign relations, and our cybersecurity:¶
Strengthen privacy protections for both Americans and non-Americans, within the United States and extraterritorially.¶ Provide for
increased transparency around government surveillance, both from the government and companies. ¶ Recommit to
the Internet Freedom agenda in a way that directly addresses issues raised by NSA surveillance, including moving
toward international human-rights based standards on surveillance. ¶ Begin the process of restoring trust in
cryptography standards through the National Institute of Standards and Technology. ¶ Ensure that the U.S. government
does not undermine cybersecurity by inserting surveillance backdoors into hardware or software products.¶ Help to eliminate security
vulnerabilities in software, rather than stockpile them.¶ Develop clear policies about whether, when, and under
what legal standards it is permissible for the government to secretly install malware on a computer or in a
network.¶ Separate the offensive and defensive functions of the NSA in order to minimize conflicts of interest.
1NC Circumvention
Circumvention – the NSA will force companies to build backdoors
Trevor Timm 15, Trevor Timm is a Guardian US columnist and executive director of the Freedom of the
Press Foundation, a non-profit that supports and defends journalism dedicated to transparency and
accountability. 3-4-2015, "Building backdoors into encryption isn't only bad for China, Mr President,"
Guardian, http://www.theguardian.com/commentisfree/2015/mar/04/backdoors-encryption-china-apple-google-nsa)//GV
Want to know why forcing tech companies to build backdoors into encryption is a terrible idea? Look no further than President Obama’s stark
criticism of China’s plan to do exactly that on Tuesday. If only he would tell the FBI and NSA the same thing. In a stunningly short-sighted move,
the FBI - and more recently the NSA - have been pushing for a new US law that would force tech
companies like Apple and Google to hand over the encryption keys or build backdoors into their products
and tools so the government would always have access to our communications. It was only a matter of time before other governments jumped
on the bandwagon, and China wasted no time in demanding the same from tech companies a few weeks ago. As President Obama himself
described to Reuters, China has proposed an expansive new “anti-terrorism” bill that “would essentially force all foreign companies, including
US companies, to turn over to the Chinese government mechanisms where they can snoop and keep track of all the users of those services.”
Obama continued: “Those kinds of restrictive practices I think would ironically hurt the Chinese economy over the long term because I don’t
think there is any US or European firm, any international firm, that could credibly get away with that wholesale turning over of data, personal
data, over to a government.” Bravo! Of course these are the exact arguments for why it would be a disaster for US government to force tech
companies to do the same. (Somehow Obama left that part out.) As Yahoo’s top security executive Alex Stamos told NSA director Mike Rogers
in a public confrontation last week, building backdoors into encryption is like “drilling a hole into a windshield.” Even if it’s technically possible
to produce the flaw - and we, for some reason, trust the US government never to abuse it - other countries will inevitably demand access for
themselves. Companies
will no longer be in a position to say no, and even if they did, intelligence services
would find the backdoor unilaterally - or just steal the keys outright. For an example on how this works, look no
further than last week’s Snowden revelation that the UK’s intelligence service and the NSA stole the encryption keys
for millions of Sim cards used by many of the world’s most popular cell phone providers. It’s happened
many times before too. Security expert Bruce Schneier has documented with numerous examples, “Back-door access built for the good guys is
routinely used by the bad guys.” Stamos repeatedly (and commendably) pushed the NSA director for an answer on what happens when China
or Russia also demand backdoors from tech companies, but Rogers didn’t have an answer prepared at all. He just kept repeating “I think we can
work through this”. As Stamos insinuated, maybe Rogers should ask his own staff why we actually can’t work through this, because virtually
every technologist agrees backdoors just cannot be secure in practice. (If you want to further understand the details behind the encryption vs.
backdoor debate and how what the NSA director is asking for is quite literally impossible, read this excellent piece by surveillance expert Julian
Sanchez.) It’s downright bizarre that the US government has been warning of the grave cybersecurity risks the country faces while, at the very
same time, arguing that we should pass a law that would weaken cybersecurity and put every single citizen at more risk of having their private
information stolen by criminals, foreign governments, and our own. Forcing backdoors will also be disastrous for the US economy as it would be
for China’s. US tech companies - which already have suffered billions of dollars of losses overseas because of consumer distrust over their
relationships with the NSA - would lose all credibility with users around the world if the FBI and NSA succeed with their plan. The White House
is supposedly coming out with an official policy on encryption sometime this month, according to the New York Times – but the President can
save himself a lot of time and just apply his comments about China to the US government. If he knows backdoors in encryption are bad for
cybersecurity, privacy, and the economy, why is there even a debate?
#WeWinCyberwar2.0 (ST)
Notes
Brought to you by KWei and Amy from the SWS heg lab.
Email me at ghskwei@gmail.com for help/with questions.
The thing about backdoor Affs is that all of their evidence will talk about past attacks. Press them on why
their scenario is different and how these past attacks prove that empirically, there is no impact to break-ins through backdoors.
Also, a lot of their ev about mandating backdoors is in the context of future legislation, not the squo.
Also, their internal links are totally fabricated.
Links to networks, neolib, and gender privacy k, you can find those in the generics.
Links
Some links I don’t have time to cut but that I think will have good args/cards:
Going dark terrorism links: http://judiciary.house.gov/_files/hearings/printers/112th/112-59_64581.PDF
Front doors CP: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630361&download=yes
Military DA i/l ev: https://cyberwar.nl/d/20130200_Offensive-Cyber-Capabilities-are-Needed-Because-of-Deterrence_Jarno-Limnell.pdf
http://www.inss.org.il/uploadImages/systemFiles/MASA4-3Engc_Cilluffo.pdf
Military DA Iran impact:
http://www.sobiad.org/ejournals/journal_ijss/arhieves/2012_1/sanghamitra_nath.pdf
Military DA Syria impact: http://nationalinterest.org/commentary/syria-preparing-the-cyber-threat-8997
T
T-Domestic
1NC
NSA spies on foreign corporations through backdoors
NYT 14
(David E. Sanger and Nicole Perlroth. "N.S.A. Breached Chinese Servers Seen as Security Threat," New York Times. 3-22-2014.
http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html//ghs-kw)
WASHINGTON — American officials have long considered Huawei, the Chinese telecommunications giant, a security threat,
blocking it from business deals in the United States for fear that the company would create “back doors” in its equipment that could allow the
Chinese military or Beijing-backed hackers to steal corporate and government secrets. But even as the United States made a public case about
the dangers of buying from Huawei, classified documents show that the
National Security Agency was creating its own back
doors — directly into Huawei’s networks. The agency pried its way into the servers in Huawei’s sealed
headquarters in Shenzhen, China’s industrial heart, according to N.S.A. documents provided by the former contractor Edward J.
Snowden. It obtained information about the workings of the giant routers and complex digital switches
that Huawei boasts connect a third of the world’s population, and monitored communications of the
company’s top executives. One of the goals of the operation, code-named “Shotgiant,” was to find any
links between Huawei and the People’s Liberation Army, one 2010 document made clear. But the plans went further: to
exploit Huawei’s technology so that when the company sold equipment to other countries — including both allies and nations that avoid buying
American products — the N.S.A. could roam through their computer and telephone networks to conduct surveillance and, if ordered by the
president, offensive cyberoperations.
NSA targets foreign systems with backdoors
Zetter 13
(Kim Zetter. "NSA Laughs at PCs, Prefers Hacking Routers and Switches," WIRED. 9-4-2013. http://www.wired.com/2013/09/nsa-router-hacking///ghs-kw)
THE NSA RUNS a massive, full-time hacking operation targeting foreign systems, the latest leaks from Edward
Snowden show. But unlike conventional cybercriminals, the agency is less interested in hacking PCs and Macs. Instead, America’s
spooks have their eyes on the internet routers and switches that form the basic infrastructure of the net, and are
largely overlooked as security vulnerabilities. Under a $652-million program codenamed “Genie,” U.S. intel agencies have hacked
into foreign computers and networks to monitor communications crossing them and to establish control
over them, according to a secret black budget document leaked to the Washington Post. U.S. intelligence agencies conducted 231 offensive
cyber operations in 2011 to penetrate the computer networks of targets abroad. This included not only installing covert “implants” in foreign
desktop computers but also on routers and firewalls — tens of thousands of machines every year in all. According to the Post, the government
planned to expand the program to cover millions of additional foreign machines in the future and preferred hacking routers to individual PCs
because it gave agencies access to data from entire networks of computers instead of just individual machines. Most of the hacks targeted the
systems and communications of top adversaries like China, Russia, Iran and North Korea and included activities around nuclear proliferation.
The NSA’s focus on routers highlights an often-overlooked attack vector with huge advantages for the intruder, says Marc Maiffret, chief
technology officer at security firm Beyond Trust. Hacking routers is an ideal way for an intelligence or military agency to maintain a persistent
hold on network traffic because the systems aren’t updated with new software very often or patched in the way that Windows and Linux
systems are. “No one updates their routers,” he says. “If you think people are bad about patching Windows and Linux (which they are) then
they are … horrible about updating their networking gear because it is too critical, and usually they don’t have redundancy to be able to do it
properly.” He also notes that routers don’t have security software that can help detect a breach. “The challenge [with desktop systems] is that
while antivirus don’t work well on your desktop, they at least do something [to detect attacks],” he says. “But you don’t even have an integrity
check for the most part on routers and other such devices like IP cameras.” Hijacking routers and switches could allow the NSA to do more than
just eavesdrop on all the communications crossing that equipment. It would also let them bring down networks or prevent certain
communication, such as military orders, from getting through, though the Post story doesn’t report any such activities. With control of routers,
the NSA could re-route traffic to a different location, or intelligence agencies could alter it for disinformation campaigns, such as planting
information that would have a detrimental political effect or altering orders to re-route troops or supplies in a military operation. According to
the budget document, the
CIA’s Tailored Access Programs and NSA’s software engineers possess “templates”
for breaking into common brands and models of routers, switches and firewalls. The article doesn’t say it, but
this would likely involve pre-written scripts or backdoor tools and root kits for attacking known but unpatched vulnerabilities in
these systems, as well as for attacking zero-day vulnerabilities that are yet unknown to the vendor and customers. “[Router software is]
just an operating system and can be hacked just as Windows or Linux would be hacked,” Maiffret says.
“They’ve tried to harden them a little bit more [than these other systems], but for folks at a place like the NSA or any other
major government intelligence agency, it’s pretty standard fare of having a ready-to-go backdoor for
your [off-the-shelf] Cisco or Juniper models.”
T-Surveillance
1NC
Backdoors are also used for cyberwarfare—not surveillance
Gellman and Nakashima 13
(Barton Gellman. Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington Post, most
recently the 2014 Pulitzer Prize for Public Service. He is also a senior fellow at the Century Foundation and visiting lecturer at Princeton’s
Woodrow Wilson School. After 21 years at The Post, where he served tours as legal, military, diplomatic, and Middle East correspondent,
Gellman resigned in 2010 to concentrate on book and magazine writing. He returned on temporary assignment in 2013 and 2014 to anchor
The Post's coverage of the NSA disclosures after receiving an archive of classified documents from Edward Snowden. Ellen Nakashima is a
national security reporter for The Washington Post. She focuses on issues relating to intelligence, technology and civil liberties. She
previously served as a Southeast Asia correspondent for the paper. She wrote about the presidential candidacy of Al Gore and co-authored a
biography of Gore, and has also covered federal agencies, Virginia state politics and local affairs. She joined the Post in 1995. "U.S. spy
agencies mounted 231 offensive cyber-operations in 2011, documents show," Washington Post. 8-30-2013.
https://www.washingtonpost.com/world/national-security/us-spy-agencies-mounted-231-offensive-cyber-operations-in-2011-documents-show/2013/08/30/d090a6ae-119e-11e3-b4cb-fd7ce041d814_story.html//ghs-kw)
Sometimes an implant’s purpose is to create a back door for future access. “You pry open the window
somewhere and leave it so when you come back the owner doesn’t know it’s unlocked, but you can
get back in when you want to,” said one intelligence official, who was speaking generally about the topic and was not privy to the budget. The official spoke on the
condition of anonymity to discuss sensitive technology. Under U.S. cyberdoctrine, these operations are known as “exploitation,” not “attack,”
but they are essential precursors both to attack and defense. By the end of this year, GENIE is projected to control
at least 85,000 implants in strategically chosen machines around the world. That is quadruple the number — 21,252 — available
in 2008, according to the U.S. intelligence budget. The NSA appears to be planning a rapid expansion of those numbers, which were
limited until recently by the need for human operators to take remote control of compromised machines. Even with a staff of 1,870 people, GENIE made full use of only 8,448 of the 68,975
machines with active implants in 2011. For GENIE’s next phase, according to an authoritative reference document, the NSA has brought online an
automated system, code-named TURBINE, that is capable of managing “potentially millions of implants” for intelligence gathering “and
active attack.”
T-Surveillance (ST)
1NC
Undermining encryption standards includes commercial fines against illegal exports
Goodwin and Procter 14
(Goodwin Procter, legal firm. “Software Companies Now on Notice That Encryption Exports May Be Treated More Seriously: $750,000
Fine Against Intel Subsidiary,” Client Alert, 10-15-2014. http://www.goodwinprocter.com/Publications/Newsletters/ClientAlert/2014/10-15_Software-Companies-Now-on-Notice-That-Encryption-Exports-May-Be-Treated-More-Seriously.aspx//ghs-kw)
On October 8, 2014, the
Department of Commerce’s Bureau of Industry and Security (BIS) announced the
issuance of a $750,000 penalty against Wind River Systems, an Intel subsidiary, for the unlawful exportation of
encryption software products to foreign government end-users and to organizations on the BIS Entity
List. Wind River Systems exported its software to China, Hong Kong, Russia, Israel, South Africa, and
South Korea. BIS significantly mitigated what would have been a much larger fine because the company
voluntarily disclosed the violations. We believe this to be the first penalty BIS has ever issued for the unlicensed export of
encryption software that did not also involve comprehensively sanctioned countries (e.g., Cuba, Iran, North Korea, Sudan or Syria). This
suggests a fundamental change in BIS’s treatment of violations of the encryption regulations. Historically, BIS has resolved voluntarily disclosed
violations of the encryption regulations with a warning letter but no material consequence, and has shown itself unlikely to pursue such
violations that were not disclosed. This
fine dramatically increases the compliance stakes for software companies
— a message that BIS seemed intent upon making in its announcement. Encryption is ubiquitous in software products.
Companies making these products should reexamine their product classifications, export eligibility, and
internal policies and procedures regarding the export of software that uses or leverages encryption (even
open source or third-party encryption libraries), particularly where a potential transaction on the horizon — e.g., an
acquisition, financing, or initial public offering — will increase the likelihood that violations of these laws
will be identified. If you would like additional information about the issues addressed in this Client Alert, please contact Rich Matheny,
who chairs Goodwin Procter’s National Security & Foreign Trade Regulation Practice, or the Goodwin Procter attorney with whom you typically
consult.
CPs
Foreign Backdoors CP
CX
In the world of the AFF does the government no longer have access to backdoors? So we don’t use or
possess backdoors in the world of the AFF, right?
1NC
(KQ) Counterplan: the United States federal government should ban the creation of
backdoors as outlined in the Secure Data Act of 2015, but should not ban surveillance
conducted through backdoors, and should mandate clandestine corporate disclosure of
foreign-government-mandated backdoors to the United States federal government.
(CT) Counterplan: The United States federal government should not mandate the
creation of surveillance backdoors in products or request private keys, and should
terminate current backdoors created either by government mandates or government-requested
keys, but should not cease the use of backdoors.
Backdoors are inevitable—we’ll use backdoors created by foreign governments
Wittes 15
(Benjamin Wittes. Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is
the author of several books and a member of the Hoover Institution's Task Force on National Security and Law. "Thoughts on Encryption and
Going Dark, Part II: The Debate on the Merits," Lawfare. 7-22-2015. http://www.lawfareblog.com/thoughts-encryption-and-going-dark-part-ii-debate-merits//ghs-kw)
Still another approach is to let other governments do the dirty work. The computer scientists' report
cites the possibility of other sovereigns adopting their own extraordinary access regimes as a reason for
the U.S. to go slow: Building in exceptional access would be risky enough even if only one law
enforcement agency in the world had it. But this is not only a US issue. The UK government promises
legislation this fall to compel communications service providers, including US-based corporations, to
grant access to UK law enforcement agencies, and other countries would certainly follow suit. China has
already intimated that it may require exceptional access. If a British-based developer deploys a
messaging application used by citizens of China, must it provide exceptional access to Chinese law
enforcement? Which countries have sufficient respect for the rule of law to participate in an international exceptional access framework?
How would such determinations be made? How would timely approvals be given for the millions of new products with communications
capabilities? And how would this new surveillance ecosystem be funded and supervised? The US and UK governments have fought long and
hard to keep the governance of the Internet open, in the face of demands from authoritarian countries that it be brought under state control.
Does not the push for exceptional access represent a breathtaking policy reversal? I am certain that the
computer scientists are
correct that foreign governments will move in this direction, but I think they are misreading the consequences of this.
China and Britain will do this irrespective of what the United States does, and that fact may well
create potential opportunity for the U.S. After all, if China and Britain are going to force U.S.
companies to think through the problem of how to provide extraordinary access without
compromising general security, perhaps the need to do business in those countries will provide much
of the incentive to think through the hard problems of how to do it. Perhaps countries far less
solicitous than ours of the plight of technology companies or the privacy interests of their users will
force the research that Comey can only hypothesize. Will Apple then take the view that it can offer phones to
users in China which can be decrypted for Chinese authorities when they require it but that it's
technically impossible to do so in the United States?
2NC O/V
Counterplan solves 100% of the case—we mandate the USFG publicly stop creating
backdoors but instead use backdoors that are inevitably mandated by foreign nations
for surveillance—solves perception and doesn’t link to the net benefit—that’s Wittes
2NC Backdoors Inev
India has backdoors
Ragan 12
(Steve Ragan. Steve Ragan is a security reporter and contributor for SecurityWeek. Prior to joining the journalism world in 2005, he spent 15
years as a freelance IT contractor focused on endpoint security and security training. "Hackers Expose India's Backdoor Intercept Program,"
No Publication. 1-9-2012. http://www.securityweek.com/hackers-expose-indias-backdoor-intercept-program//ghs-kw)
Symantec confirmed with SecurityWeek on Friday that hackers did access source code from Symantec Endpoint Protection 11.0 and
Symantec Antivirus 10.2. According to a Symantec spokesperson, “SEP 11 was four years ago to be exact.” In addition, Symantec Antivirus 10.2
has been discontinued, though the company continues to service it. “We’re taking this extremely seriously and are erring on the side of caution
to develop and long-range plan to take care of customers still using those products,” Cris Paden, Senior Manager of Corporate Communications
at Symantec told SecurityWeek. Over the weekend, the story expanded. The Lords of Dharmaraja released a purported memo outlining the
intercept program known as RINOA, which earns
its name from the vendors involved - RIM, Nokia, and Apple. The
memo said the vendors provided India with backdoors into their technology in order for them to maintain
a presence in the local market space. India’s Ministry of Defense has “an agreement with all major
device vendors” to provide the country with the source code and information needed for their SUR
(surveillance) platform, the memo explains. These backdoors allowed the military to conduct
surveillance (RINOA SUR) against the US-China Economic and Security Review Commission. Personnel from Indian Naval Military
Intelligence were dispatched to the People’s Republic of China to undertake Telecommunications Surveillance (TESUR) using the RINOA
backdoors and CYCADA-based technologies.
China has backdoors in 80% of global communications
Protalinski 12
(Emil Protalinski. Reporter for CNet and ZDNet. "Former Pentagon analyst: China has backdoors to 80% of telecoms," ZDNet. 7-14-2012.
http://www.zdnet.com/article/former-pentagon-analyst-china-has-backdoors-to-80-of-telecoms///ghs-kw)
The Chinese government reportedly has "pervasive access" to some 80 percent of the world's
communications, thanks to backdoors it has ordered to be installed in devices made by Huawei and ZTE Corporation. That's
according to sources cited by Michael Maloof, a former senior security policy analyst in the Office of the
Secretary of Defense, who now writes for WND: In 2000, Huawei was virtually unknown outside China, but by 2009 it had grown to be
one of the largest, second only to Ericsson. As a consequence, sources say that any information traversing "any" Huawei equipped
network isn't safe unless it has military encryption. One source warned, "even then, there is no doubt that the
Chinese are working very hard to decipher anything encrypted that they intercept." Sources add that most
corporate telecommunications networks use "pretty light encryption" on their virtual private networks, or VPNs. I found out about Maloof's report
via this week's edition of The CyberJungle podcast. Here's my rough transcription of what he says, at about 18 minutes and 30 seconds: The
Chinese government and the People's Liberation Army are so much into cyberwarfare now that they
have looked at not just Huawei but also ZTE Corporation as providing through the equipment that they install in about 145 countries
around in the world, and in 45 of the top 50 telecom centers around the world, the potential for
backdooring into data. Proprietary information could be not only spied upon but also could be altered and in some cases could be
sabotaged. That's coming from technical experts who know Huawei, they know the company and they know the Chinese. Since that story came
out I've done a subsequent one in which sources tell me that it's
giving Chinese access to approximately 80 percent of
the world telecoms and it's working on the other 20 percent now.
China is mandating backdoors
Mozur 1/28
(Paul Mozur. Reporter for the NYT. "New Rules in China Upset Western Tech Companies," New York Times. 1-28-2015.
http://www.nytimes.com/2015/01/29/technology/in-china-new-cybersecurity-rules-perturb-western-tech-companies.html//ghs-kw)
HONG KONG — The Chinese
government has adopted new regulations requiring companies that sell
computer equipment to Chinese banks to turn over secret source code, submit to invasive audits and build so-called back
doors into hardware and software, according to a copy of the rules obtained by foreign technology companies that do billions of
dollars’ worth of business in China. The new rules, laid out in a 22-page document approved at the end of last year, are the first in a
series of policies expected to be unveiled in the coming months that Beijing says are intended to strengthen
cybersecurity in critical Chinese industries. As copies have spread in the past month, the regulations have heightened concern among foreign
companies that the authorities are trying to force them out of one of the largest and fastest-growing markets. In a letter sent Wednesday to a
top-level Communist Party committee on cybersecurity, led by President Xi Jinping, foreign business groups objected to the new policies and
complained that they amounted to protectionism. The groups, which include the U.S. Chamber of Commerce, called for “urgent discussion and
dialogue” about what they said was a “growing trend” toward policies that cite cybersecurity in requiring companies to use only technology
products and services that are developed and controlled by Chinese companies. The letter is the latest salvo in an intensifying tit-for-tat
between China and the United States over online security and technology policy. While the United States has accused Chinese military
personnel of hacking and stealing from American companies, China has pointed to recent disclosures of United States snooping in foreign
countries as a reason to get rid of American technology as quickly as possible. Although it is unclear to what extent the new rules result from
security concerns, and to what extent they are cover for building up the Chinese tech industry, the Chinese regulations go far beyond measures
taken by most other countries, lending some credibility to industry claims that they are protectionist. Beijing also has long used the Internet to
keep tabs on its citizens and ensure the Communist Party’s hold on power. Chinese
companies must also follow the new
regulations, though they will find it easier since for most, their core customers are in China. China’s Internet filters have increasingly
created a world with two Internets, a Chinese one and a global one. The new policies could further split the tech world, forcing hardware and
software makers to sell either to China or the United States, or to create significantly different products for the two countries. While
the
Obama administration will almost certainly complain that the new rules are protectionist in nature, the Chinese will
be able to make a case that they differ only in degree from Washington’s own requirements.
2NC AT Perm do Both
Permutation links to the net benefit—the AFF stops use of backdoors, that was 1AC
cross-ex
2NC AT Perm do the CP
The counterplan bans the creation of backdoors but not the use of them—that’s
different from the plan—that was cross-ex
The permutation is severance—that’s a voting issue:
1. NEG ground—makes the AFF a shifting target which makes it impossible to
garner offense—stop copying k AFFs, vote NEG to be Dave Strauss
2. Kills advocacy skills—they never have to defend implementation of an advocacy
Cyberterror Advantage CP
1NC
Counterplan: the United States federal government should substantially increase its
support for renewable energy technologies and grid decentralization.
Grid decentralization and renewables solve terror attacks
Lawson 11
(Lawson, Sean. Sean Lawson is an assistant professor in the Department of Communication at the University of Utah. He holds a PhD in
Science and Technology Studies from Rensselaer Polytechnic Institute, a MA in Arab Studies from Georgetown University, and a BA in
History from California State University, Stanislaus. “BEYOND CYBER-DOOM: Cyberattack Scenarios and the Evidence of History,” Mercatus
Center at George Mason University. Working Paper No. 11-01, January 2011. http://mercatus.org/sites/default/files/publication/beyond-cyber-doom-cyber-attack-scenarios-evidence-history_1.pdf//ghs-kw)
Cybersecurity policy should promote decentralization and self-organization in efforts to prevent, defend
against, and respond to cyberattacks. Disaster researchers have shown that victims are often themselves the first
responders and that centralized, hierarchical, bureaucratic responses can hamper their ability to
respond in the decentralized, self-organized manner that has often proved to be more effective
(Quarantelli, 2008: 895–896). One way that officials often stand in the way of decentralized self-organization is by hoarding information (Clarke
& Chess, 2009: 1000–1001). Similarly, over the last 50 years, U.S.
military doctrine increasingly has identified
decentralization, self-organization, and information sharing as the keys to effectively operating in ever-more
complex conflicts that move at an ever-faster pace and over ever-greater geographical distances (LeMay &
Smith, 1968; Romjue, 1984; Cebrowski & Garstka, 1998; Hammond, 2001). In the case of preventing or defending against cyberattacks on
critical infrastructure, we must recognize that most cyber and physical infrastructures are owned by private actors. Thus, a
centralized,
military-led effort to protect the fortress at every point will not work. A combination of incentives,
regulations, and public-private partnerships will be necessary. This will be complex, messy, and difficult. But a
cyberattack, should it occur, will be equally complex, messy, and difficult, occurring instantaneously over global distances via a medium that is
almost incomprehensible in its complex interconnections and interdependencies. The
owners and operators of our critical
infrastructures are on the front lines and will be the first responders. They must be empowered to act.
Similarly, if the worst should occur, average citizens must be empowered to act in a decentralized, self-organized way to help themselves and others. In the case of critical infrastructures like the electrical
grid, this could include the promotion of alternative energy generation and distribution methods. In this
way, “Instead of being passive consumers, [citizens] can become actors in the energy network. Instead of waiting for
blackouts, they can organize alternatives and become less vulnerable to either terror or natural catastrophe”
(Nye, 2010: 203)
2NC O/V
Counterplan solves all of their grid and cyber-terrorism impacts—we mandate the
USFG provide incentives, regulations, and P3s for widespread adoption of alt energy
and grid decentralization—this means each building has its own microgrid, which
allows for local, decentralized responses to cyberterror attacks and solves their
impact—that’s Lawson
2NC CP>AFF
Only the CP solves—a centralized grid results in inevitable failures and kills the
economy
Warner 10
(Guy Warner. Guy Warner is a leading economist and the founder and CEO of Pareto Energy. "Moving U.S. energy policy to a decentralized
grid," Grist. 6-4-2010. http://grist.org/article/2010-06-03-moving-u-s-energy-policy-to-a-decentralized-grid-rethinking-our///ghs-kw)
And, while the development of renewable energy technology has sped up rapidly in recent years, the
technology to deliver this
energy to the places where it is most needed is decades behind. America’s current electricity
transmission and distribution grid was built more than a century ago. Relying on the grid to relay power from wind
farms in the Midwest to cities on the east and west coast is simply not feasible. Our dated infrastructure cannot handle the
existing load — power outages and disruptions currently cost the nation an estimated $164 billion each
year. Wind and solar power produce intermittent power, which, in small doses, has little impact on grid operations. As we introduce
increasingly larger amounts of intermittent power, our transmission system will require significant
upgrades and perhaps even a total grid infrastructure redesign, which could take decades and cost billions. With 9,200 power plants that
link homes and business via 164,000 miles of lines, a national retrofit is both cost-prohibitive and improbable. One solution to this
challenge is the development of microgrids. Also known as distributed generation, microgrids produce energy
closer to the user rather than transmitting it from remote power plants. Power is generated and stored
locally and works in parallel with the main grid, providing power as needed and utilizing the main grid at
other times. Microgrids offer a decentralized power source that can be introduced incrementally in
modules now without having to deal with the years of delay realistically associated with building central generation facilities (e.g. nuclear)
and their associated transmission and distribution system add-ons. There is also a significant difference in the up-front capital costs that are
ultimately assigned the consumer. Introducing generation capacity into a microgrid as needed is far less capital intensive, and some might
argue more economical, than building a new nuclear plant at a cost of $5-12 billion dollars.
Technological advancements in
connectivity mean that microgrids can now be developed for high energy use building clusters, such as
trading floors and hospitals, relieving stress on the macrogrid, and providing more reliable power. In fact,
microgrids can be viewed as the ultimate smart grid, providing local power that meets local needs and
utilizing energy sources, including renewables, that best fit the location and use profile. For example, on the
East Coast, feasibility studies are underway to retrofit obsolete paper mills into biomass fuel generators utilizing left over pulp wood. Pulp
wood, the waste left over from logging, can be easily pelletized, is inexpensive to produce, easy to transport, and has a minimal net carbon
output. Wood pellets are also easily adaptable to automated combustion systems, making them a valuable domestic resource that can
supplement and replace our use of fossil fuels, particularly in microgrids which can be designed to provide heating and cooling from these
biomass products.
2NC Terror Solvency
Decentralization solves terror threats
Verclas 12
(Verclas, Kristen. Kirsten Verclas works as International Program Officer at the National Association of Regulatory Utility Commissioners
(NARUC) in Washington, DC. She holds a BA in International Relations with a Minor in Economics from Franklin and Marshall College and an
MA in International Relations with a concentration in Security Studies from The Elliott School at The George Washington University. She also
earned an MS in Energy Policy and Climate from Johns Hopkins University in August 2013. "The Decentralization of the Electricity Grid –
Mitigating Risk in the Energy Sector ,” American Institute for Contemporary German Studies at John Hopkins University. 4-27-2012.
http://www.aicgs.org/publication/the-decentralization-of-the-electricity-grid-mitigating-risk-in-the-energy-sector///ghs-kw)
A decentralized electricity grid has many environmental and security benefits. Microgrids in combination with
distributed energy generation provide a system of small power generation and storage systems, which are
located in a community or in individual houses. These small power generators produce on average about 10 kW (for individual
homes) to 2 MW (for communities) of electricity. While connected to and able to feed excess energy into the grid, these generators are
simultaneously independent from the grid in that they can provide power even when power from the
main grid is not available. Safety benefits from a decentralized grid are immense, as it has built-in
redundancies. These redundancies are needed should the main grid become inoperable due to a natural
disaster or terrorist attack. Communities or individual houses can then rely on microgrids with distributed
electricity generation for their power supply. Furthermore, having less centralized electricity generation
and fewer main critical transmission lines reduces targets for terrorist attacks and natural disasters. Fewer people
would then be impacted by subsequent power outages. Additionally, “decentralized power reduces the obstacles to
disaster recovery by allowing the focus to shift first to critical infrastructure and then to flow outward to
less integrated outlets.”[10] Thus critical facilities such as hospitals or police stations would be the first to
have electricity restored, while non-essential infrastructure would have energy restored at a later date.
Power outages are not only dangerous for critical infrastructure, they also cost money to business and the economy overall. EPRI “reported that
power outages and quality disturbances cost American businesses $119 billion per year.”[11] Decentralized
grids are also more
energy efficient than centralized electricity grids because “as electricity streams through a power line a
small fraction of it is lost to various factors. The longer the distance the greater the loss.”[12] Savings that
are realized by having shorter transmission lines could be used to install the renewable energy sources close to
homes and communities. The decrease of transmission costs and the increase in efficiency would cause
lower electricity usage overall. A decrease in the need to generate electricity would also increase energy security—fewer imports of
energy would be needed. The U.S. especially has been concerned with energy dependence in the last decades; decentralized electricity
generation could be one of the policies to address this issue.
Decentralization solves cyberattacks
Kiger 13
(Patrick J. Kiger. "Will Renewable Energy Make Blackouts Into a Thing of the Past?," National Geographic Channel. 10-2-2013. http://channel.nationalgeographic.com/american-blackout/articles/will-renewable-energy-make-blackouts-into-a-thing-of-the-past///ghs-kw)
The difference is that Germany’s grid of the future, unlike the present U.S. system, won’t rely on big power plants and long transmission lines.
Instead, Germany is creating a
decentralized “smart” grid—essentially, a system composed of many small,
potentially self-sufficient grids, that will obtain much of their power at the local level from renewable
energy sources, such as solar panels, wind turbines and biomass generators. And the system will be
equipped with sophisticated information and communications technology (ICT) that will enable it to
make the most efficient use of its energy resources. Some might scoff at the idea that a nation could depend entirely upon
renewable energy for its electrical needs, because both sunshine and wind tend to be variable, intermittent producers of electricity. But the
Germans plan to get around that problem by using “linked renewables”—that is, by combining multiple sources of renewable energy, which has
the effect of smoothing out the peaks and valleys of the supply. As Kurt Rohrig, the deputy director of Germany’s Fraunhofer Institute for Wind
Energy and Energy System Technology, explained in a recent article on Scientific American’s website:
"Each source of energy—be it
wind, sun or bio-gas—has its strengths and weaknesses. If we manage to skillfully combine the different
characteristics of the regenerative energies, we can ensure the power supply for Germany." A decentralized
“smart” grid powered by local renewable energy might help protect the U.S. against a catastrophic
blackout as well, proponents say. “A more diversified supply with more distributed generation inherently
helps reduce vulnerability,” Mike Jacobs, a senior energy analyst at the Union of Concerned Scientists, noted in a recent blog post on
the organization’s website. According to the U.S. Department of Energy’s SmartGrid.gov website, such a system would have the
ability to bank surplus electricity from wind turbines and solar panels in numerous storage locations
around the system. Utility operators could tap into those reserves if electricity generation ebbed.
Additionally, in the event of a large-scale disruption, a smart grid would have the ability to switch areas over to
power generated by utility customers themselves, such as solar panels that neighborhood residents
have installed on their roofs. By combining these "distributed generation" resources, a community could
keep its health center, police department, traffic lights, phone system, and grocery store operating
during emergencies, DOE’s website notes. "There are lots of resources that contribute to grid resiliency and
flexibility," Allison Clements, an official with the Natural Resource Defense Council, wrote in a recent blog post on the NRDC website.
"Happily, they are the same resources that are critical to achieving a clean energy, low carbon future."
Joel Gordes, electrical power research director for the U.S. Cyber Consequences Unit, a private-sector organization that investigates
terrorist threats against the electrical grid and other targets, also thinks that such a decentralized grid
"could carry benefits not only for protecting us to a certain degree from cyber-attacks but also providing power
during any number of natural hazards." But Gordes does offer a caveat—such a system might also offer more potential points of entry for
hackers to plant malware and disrupt the entire grid. Unless that vulnerability is addressed, he warned in an e-mail, "full deployment of [smart
grid] technology could end up to be disastrous."
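Analyst note: if you need to explain the "linked renewables" smoothing claim in the Kiger evidence, here is a toy illustration in Python. Every generation figure in it is invented for demonstration; nothing below comes from the article itself.

import math
import random
import statistics

# Toy illustration of the "linked renewables" claim: each source is
# intermittent on its own, but the combined output is much steadier.
# All generation figures are hypothetical placeholders.
random.seed(1)
hours = range(24 * 7)  # one week, hourly

solar = [max(0.0, 60 * math.sin(math.pi * (h % 24) / 24) + random.gauss(0, 8)) for h in hours]
wind = [max(0.0, random.gauss(40, 20)) for _ in hours]
biogas = [35 + random.gauss(0, 3) for _ in hours]  # dispatchable, nearly flat

combined = [s + w + b for s, w, b in zip(solar, wind, biogas)]

def variability(series):
    """Coefficient of variation: standard deviation relative to the mean."""
    return statistics.pstdev(series) / statistics.mean(series)

for name, series in [("solar", solar), ("wind", wind), ("combined", combined)]:
    print(f"{name:>8}: mean {statistics.mean(series):5.1f} MW, variability {variability(series):.2f}")

Running the sketch shows the combined series has a lower coefficient of variation than either solar or wind alone, which is the "smoothing out the peaks and valleys" point the card makes.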
Patent Reform Advantage CP
Notes
Specify reform + look at law reviews
Read the 500 bil card in the 1NC
Cut different versions w/ different mechanisms
1NC Comprehensive Reform
Counterplan: the United States federal government should comprehensively reform
its patent system for the purpose of eliminating non-practicing entities.
Patent trolls cost the economy half a trillion and counting—larger internal link to tech
and the economy
Lee 11
(Timothy B. Lee. Timothy B. Lee covers tech policy for Ars, with a particular focus on patent and copyright law, privacy, free speech, and
open government. While earning his CS master's degree at Princeton, Lee was the co-author of RECAP, a Firefox plugin that helps users
liberate public documents from the federal judiciary's paywall. Before grad school, he spent time at the Cato Institute, where he is an
adjunct scholar. He has written for both online and traditional publications, including Slate, Reason, Wired.com, and the New York Times.
When not screwing around on the Internet, he can be seen rock climbing, ballroom dancing, and playing soccer. He lives in Philadelphia. He
has a blog at Forbes and you can follow him on Twitter. "Study: patent trolls have cost innovators half a trillion dollars," Ars Technica. xx-xx-xxxx. http://arstechnica.com/tech-policy/2011/09/study-patent-trolls-have-cost-innovators-half-a-trillion-bucks///ghs-kw)
By now, the story of patent
trolls has become well-known: a small company with no products of its own threatens
lawsuits against larger companies who inadvertently infringe its portfolio of broad patents. The scenario has
become so common that we don't even try to cover all the cases here at Ars. If we did, we'd have little time to write about much else. But
anecdotal evidence is one thing. Data is another. Three
Boston University researchers have produced a rigorous
empirical estimate of the cost of patent trolling. And the number is breath-taking: patent trolls ("non-practicing
entity" is the clinical term) have cost publicly traded defendants $500 billion since 1990. And the problem has
become most severe in recent years. In the last four years, the costs have averaged $83 billion per year. The study says
this is more than a quarter of US industrial research and development spending during those years.
Two of the study's authors, James Bessen and Mike Meurer, wrote Patent Failure, an empirical study of the patent system that has been widely
read and cited since its publication in 2008. They were joined for this paper by a colleague, Jennifer Ford. It's hard to measure the costs of
litigation directly. The
most obvious costs for defendants are legal fees and payouts to plaintiffs, but these
are not necessarily the largest costs. Often, indirect costs like employee distraction, legal uncertainty, and
the need to redesign or drop key products are even more significant. The trio use a clever method known as a stock
market event study to estimate these costs. The theory is simple: a company's stock price represents the stock market's best estimation of the
company's value. If the company's stock drops by, say, two percent in the days after a lawsuit is filed, then the market thinks the lawsuit will
cost the company two percent of its market capitalization. Of course, this wouldn't be a very rigorous technique if they were looking at a single
lawsuit. Any number of factors could have affected the firm's stock price that same week. Maybe the company released a bad earnings report
the next day. But with
a large sample of companies, these random factors should mostly cancel each other out,
leaving the market's rough estimate of how much patent lawsuits cost their targets. The authors used a
database of 1,630 patent troll lawsuits compiled by Patent Freedom. Because many of the lawsuits had multiple defendants,
there was a total of 4,114 plaintiff-defendant pairs. The median defendant over all of these pairs lost $20.4 million in market
capitalization, while the mean loss was $122 million.
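Analyst note: if you want to walk through the event-study method in the 2NC, here is a rough sketch of the logic in Python. Every firm name and per-suit number below is a hypothetical placeholder; only the $20.4 million median and $122 million mean figures come from the card above.

# Rough sketch of the event-study logic the Lee evidence describes: the
# market-cap drop around a lawsuit filing, net of the return the market
# "expected" anyway, is read as the market's estimate of what the suit
# costs the defendant. Every ticker and number below is hypothetical.

lawsuits = [
    # (defendant, market cap before filing in $M, actual return over the
    #  event window, expected "normal" return over the same window)
    ("FirmA", 6_100, -0.035, -0.015),
    ("FirmB", 1_200, -0.010, 0.005),
    ("FirmC", 450, 0.002, -0.001),
]

def implied_cost_millions(market_cap_m, actual_return, expected_return):
    """Abnormal return times market cap = implied cost of the lawsuit."""
    abnormal = actual_return - expected_return
    return -abnormal * market_cap_m  # positive = value destroyed

costs = [implied_cost_millions(cap, r, er) for _, cap, r, er in lawsuits]
for (name, *_), cost in zip(lawsuits, costs):
    print(f"{name}: implied loss of about ${cost:,.0f}M")

# Random, unrelated news mostly cancels out over thousands of
# defendant-lawsuit pairs; that is how the study can report a median loss
# of $20.4M and a mean loss of $122M per pair.
print(f"Mean implied loss across this toy sample: ${sum(costs) / len(costs):,.0f}M")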
2NC Solvency
Comprehensive reform with fee shifting and recovery provisions solves patent trolls
Hatch 15
(Senator Orrin Hatch. "Senator Hatch: It’s Time to Kill Patent Trolls for Good," WIRED. 3-16-2015.
http://www.wired.com/2015/03/opinion-must-finally-legislate-patent-trolls-existence///ghs-kw)
There is broad agreement—among both big and small businesses—that any serious solution must
include:
• Fee shifting, which will require patent trolls to pay legal fees when their suits are unsuccessful;
• Heightened pleading and discovery standards, which will raise the bar on litigation procedure, making it increasingly difficult for trolls to file frivolous lawsuits;
• Demand letter reforms, which will require those sending demand letters to be more specific and transparent;
• Stays of customer suits, which will allow a manufacturer’s case to move forward first, without binding the end user to the result of that case;
• A mechanism to enable recovery of fees, which will prevent insolvent plaintiffs from litigating and dashing.
Some critics argue that these proposals will help only large technology companies and might even hurt
startups and small businesses. In my discussions with stakeholders, however, I have repeatedly been
told that a multi-pronged approach that tackles each of these issues is needed to effectively combat
patent trolls across all levels of industry. These stakeholder discussions have included representatives
from the hotel, restaurant, retail, real estate, financial services, and high-tech industries, as well as startup and small business owners.
Enacting legislation on any topic is a major undertaking, and the added complexities inherent in patent
law make passing patent reforms especially challenging. Crucially, we will probably have only one
chance to do so for a long while, so whatever we do must work. We must not pass any bill that fails to
provide an effective deterrent against patent trolls at all stages of litigation.
It is my belief that any viable legislation must ensure that those who successfully defend against abusive
patent litigation and are awarded fees will actually get paid. Even when a patent troll is a shell company
with no assets, there are usually other parties with an interest in the litigation who do have assets.
These parties, however, often keep themselves beyond the jurisdiction of the courts. They reap benefits
if the plaintiff forces a settlement, but are protected from any liability if they lose.
Right now, that’s a win-win situation for these parties, and a lose-lose situation for America’s
innovators.
Because Congress cannot force parties outside a court’s jurisdiction to join in a case, we must instead
incentivize interested parties to do the right thing and pay court-ordered fee awards. This is why we
must pass legislation that includes a recovery provision. Fee shifting without recovery is like writing a
check on an empty account. It’s purporting to convey something that isn’t there. Only fee shifting
coupled with a recovery provision will stop patent trolls from litigating-and-dashing.
There is no question that American ingenuity fuels our economy. We must ensure that our patent
system is strong and vibrant and helps to protect our country’s premier position in innovation.
Reform solves patent trolling
Roberts 14
(Jeff John Roberts. Jeff reports on legal issues that impact the future of the tech industry, such as privacy, net neutrality and intellectual
property. He previously worked as a reporter for Reuters in Paris and New York, and his free-lance work includes clips for the Economist, the
New York Times and the Globe & Mail. A frequent guest on media outlets like NPR and Fox, Jeff is also a lawyer, having passed the bar in
New York and Ontario. "Patent reform is likely in 2015. Here’s what it could look like," No Publication. 11-19-2014.
https://gigaom.com/2014/11/19/patent-reform-is-likely-in-2015-heres-what-it-could-look-like///ghs-kw)
As patent scholar Dennis Crouch notes, the question is how far the new law will go. In particular, real
reform will depend on
changing the economic asymmetries in patent litigation that allow trolls to flourish, and that lead troll
victims to simply pay up rather engage in costly litigation. Here are some measures we are likely to see under
the Goodlatte bill, according to Crouch and legal sources like IAM and Law.com (subscription required): Fee-shifting: Right now,
trolls typically have nothing to lose by filing a lawsuit since they are shell companies with no assets. New
fee-shifting measures, however, could put them on the hook for their victims’ legal fees. Discovery
limits: Currently, trolls can exploit the discovery process — in which each side must offer up documents
and depositions — by drowning their targets in expensive and time-consuming requests. Limiting the
scope of discovery could take that tactic off the table. Heightened pleading requirements: Right now,
patent trolls don’t have to specify how exactly a company is infringing their technology, but can simply
serve cookie-cutter complaints that list the patents and the defendant. Pleading reform would force the
trolls to explain what exactly they are suing over, and give defendants a better opportunity to assess the
case. Identity requirements: This reform proposal is known as “real party of interest” and would make it
harder for those filing patent lawsuits (often lawyers working with private equity firms) to hide behind
shell companies, and require them instead to identify themselves. Crouch also notes the possibility of
expanded “post-grant” review, which gives defendants a fast and cheaper tool to invalidate bad patents
at the Patent Office rather than in federal court.
2NC O/V
The status quo patent system is hopelessly broken and allows patent trolls to game
the system by obtaining broad patents on methods as basic as selling goods on the
internet—those firms sue innovators and startups who “violate” their patents, costing
the US economy half a trillion dollars and stifling innovation—that’s Lee
The counterplan eliminates patent trolls through the set of comprehensive reforms
described below—that solves their innovation arguments and is independently a bigger
internal link to innovation and the economy
Patent reform is key to prevent patent trolling that stifles innovation and cuts R&D
by nearly half
Bessen 14
(James Bessen. Bessen is a Lecturer in Law at the Boston University School of Law. Bessen was also a Fellow at the Berkman Center for Internet and Society. "The Evidence Is In: Patent Trolls Do Hurt Innovation," Harvard Business Review. November 2014. https://hbr.org/2014/07/the-evidence-is-in-patent-trolls-do-hurt-innovation//ghs-kw)
Over the last two years, much has been written about patent
trolls, firms that make their money asserting patents
against other companies, but do not make a useful product of their own. Both the White House and
Congressional leaders have called for patent reform to fix the underlying problems that give rise to
patent troll lawsuits. Not so fast, say Stephen Haber and Ross Levine in a Wall Street Journal Op-Ed (“The Myth of the Wicked Patent
Troll”). We shouldn’t reform the patent system, they say, because there is no evidence that trolls are hindering innovation; these calls are being
driven just by a few large companies who don’t want to pay inventors. But there is evidence of significant harm. The White House and the
Congressional Research Service both cited many research studies suggesting that patent
litigation harms innovation. And three
new empirical studies provide strong confirmation that patent litigation is reducing venture capital
investment in startups and is reducing R&D spending, especially in small firms. Haber and Levine admit that
patent litigation is surging. There were six times as many patent lawsuits last year than in the 1980s. The
number of firms sued by patent trolls grew nine-fold over the last decade; now a majority of patent
lawsuits are filed by trolls. Haber and Levine argue that this is not a problem: “it might instead reflect a healthy, dynamic economy.”
They cite papers finding that patent trolls tend to file suits in innovative industries and that during the nineteenth century, new technologies
such as the telegraph were sometimes followed by lawsuits. But this does not mean that the explosion in patent litigation is somehow
“normal.” It’s true that plaintiffs, including patent trolls, tend to file lawsuits in dynamic, innovative industries. But that’s just because they
“follow the money.” Patent trolls tend to sue cash rich companies, and innovative new technologies generate cash. The economic burden of
today’s patent lawsuits is, in fact, historically unprecedented. Research
shows that patent trolls cost defendant firms
$29 billion per year in direct out-of-pocket costs; in aggregate, patent litigation destroys over $60
billion in firm wealth each year. While mean damages in a patent lawsuit ran around $50,000 (in today’s dollars) at the time the
telegraph, mean damages today run about $21 million. Even taking into account the much larger size of the economy today, the economic
impact of patent litigation today is an order of magnitude larger than it was in the age of the telegraph. Moreover, these
costs fall
disproportionately on innovative firms: the more R&D a firm performs, the more likely it is to be sued
for patent infringement, all else equal. And, although this fact alone does not prove that this litigation reduces firms’
innovation, other evidence suggests that this is exactly what happens. A researcher at MIT found, for example, that medical imaging
businesses sued by a patent troll reduced revenues and innovations relative to comparable companies
that were not sued. But the biggest impact is on small startup firms — contrary to Haber and Levine, most patent trolls
target firms selling less than $100 million a year. One
survey of software startups found that 41% reported
“significant operational impacts” from patent troll lawsuits, causing them to exit business lines or change
strategy. Another survey of venture capitalists found that 74% had companies that experienced “significant
impacts” from patent demands. Three recent econometric studies confirm these negative effects.
Catherine Tucker of MIT analyzed venture capital investing relative to patent lawsuits in different industries and different regions of the
country. Controlling for the influence of other factors, she estimates that lawsuits
from frequent litigators (largely patent
trolls) were responsible for a decline of $22 billion in venture investing over a five-year period. That
represents a 14% decline. Roger Smeets of Rutgers looked at R&D spending by small firms, comparing firms that were hit by extensive
lawsuits to a carefully chosen comparable sample. The comparison sample allowed him to isolate the effect of patent lawsuits from other
factors that might also influence R&D spending. Prior to the lawsuit, firms
devoted 20% of their operating
expenditures to R&D; during the years after the lawsuit, after controlling for other factors, they reduced that spending
by 3% to 5% of operating expenditures, representing about a 19% reduction in relative R&D spending. And researchers from
Harvard and the University of Texas recently examined R&D spending of publicly listed firms that had been sued by patent trolls. They
compared firms where the suit was dismissed, representing a clear win for the defendant, to those where the suit was settled or went to final
adjudication (typically much more costly). As in the previous paper, this comparison helped them isolate the effect of lawsuits from other
factors. They found that when lawsuits were not dismissed, firms
reduced their R&D spending by $211 million and
reduced their patenting significantly in subsequent years. The reduction in R&D spending represents a
48% decline. Importantly, these studies are initial releases of works in progress; the researchers will refine their estimates of harm over
the coming months. Perhaps some of the estimates may shrink a bit. Nevertheless, across a significant number of studies using
different methodologies and performed by different researchers, a consistent picture is emerging about
the effects of patent litigation: it costs innovators money; many innovators and venture capitalists
report that it significantly impacts their businesses; innovators respond by investing less in R&D and
venture capitalists respond by investing less in startups. Haber and Levine might not like the results of this research. But
the weight of the evidence from these many studies cannot be ignored; patent trolls do, indeed, cause harm. It’s time for
Congress to do something about it.
2NC Comprehensive Reform
Comprehensive reform solves patent trolling
Downes 7/6
(Larry Downes. Larry Downes is an author and project director at the Georgetown Center for Business and Public Policy. His new book, with
Paul Nunes, is “Big Bang Disruption: Strategy in the Age of Devastating Innovation.” Previous books include the best-selling “Unleashing the
Killer App: Digital Strategies for Market Dominance.” "What would 'real' patent reform look like?," CNET. 7-6-2015.
http://www.cnet.com/news/what-does-real-patent-reform-look-like///ghs-kw)
And a new report (PDF) from technology think tank Lincoln Labs argues that reversing
the damage to the innovation
economy caused by years of overly generous patent policies requires far stronger medicine than Congress is
considering or the courts seem willing to swallow on their own. The bills making their way through Congress, for example, focus almost entirely
on curbing abuses by companies that buy up often overly broad patents and then, rather than produce goods, simply sue manufacturers and
users they argue are infringing their patents. These nonpracticing
entities, referred to derisively as patent trolls, are
widely seen as a serious drag on innovation, particularly in fast-evolving technology industries. Trolling
behavior, according to studies from Stanford Law School professor and patent expert Mark Lemley, does
little to nothing to promote the Constitutional goal of patents to encourage innovation by granting
inventors temporary monopolies during which they can recover their investment. The House of Representatives
passed antitrolling legislation in 2013, but a Senate version was killed by then-Majority Leader Harry Reid (D-Nev.) in May 2014. "Patent
trolls," said Gary Shapiro, president and CEO of the Consumer Electronics Association, "bleed $1.5 billion a week from the US
economy -- that's almost $120 billion since the House passed a patent reform bill in December of 2013." A call for 'real' patent reform The
Lincoln Labs report agrees with these and other criticisms of patent trolling, but argues for more fundamental changes to
the system, or what the report calls "real" patent reform. The report, authored by former Republican Congressional staffer
Derek Khanna, urges a complete overhaul of the process by which the Patent Office reviews applications, as
well as the elimination of patents for software, business methods, and a special class of patents for
design elements -- a category that figured prominently in the smartphone wars. Khanna claims that the Patent Office has demonstrated
an "abject failure" to enforce fundamental legal requirements that patents only be granted for inventions that are novel, nonobvious and
useful. To
reverse that trend, the report calls on Congress to change incentives for patent examiners that
today weigh the scales in favor of approval, add a requirement for two examiners to review the most
problematic categories of patents, and allow crowdsourced contributions to Patent Office databases of
"prior art" to help filter out nonnovel inventions. Khanna estimates these reforms alone "would knock
out a large number of software patents, perhaps 75-90%, where the economic argument for patents is
exceedingly difficult to sustain." The report also calls for the elimination of design patents, which offer
protection for ornamental features of manufactured products, such as the original design of the Coca-Cola bottle.
Reg-Neg CP
1NC Shell
Text: the United States federal government should enter into a process of negotiated
rulemaking over _______<insert plan>______________ and implement the results of
negotiation.
The CP is plan minus—it doesn’t mandate the plan, just that a regulatory negotiations
committee is created to discuss the plan
And, it competes—reg neg is not normal means
USDA 06
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair marketing of U.S.
agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated June 6th 2014.
http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434) //ghs-kw)
How reg-neg
differs from “traditional” notice-and-comment rulemaking The “traditional” notice-and-comment rulemaking provided in the Administrative Procedure Act (APA) requires an agency planning to adopt a rule
on a particular subject to publish a proposed rule (NPRM) in the Federal Register and to offer the public an
opportunity to comment. The APA does not specify who is to draft the proposed rule nor any particular
procedure to govern the drafting process. Ordinarily, agency staff performs this function, with discretion to determine how
much opportunity is allowed for public input. Typically, there is no opportunity for interchange of views among
potentially affected parties, even where an agency chooses to conduct a hearing. The “traditional” notice-and-comment rulemaking can be very adversarial. The dynamics encourage parties to take extreme positions in their written and oral statements – in both pre-proposal contacts as well as in comments on any published proposed rule – as well as withholding of information that might be
viewed as damaging. This adversarial atmosphere may contribute to the expense and delay associated with regulatory proceedings, as parties
try to position themselves for the expected litigation. What is lacking is an opportunity for the parties to exchange views, share information,
and focus on finding constructive, creative solutions to problems. In
negotiated rulemaking, the agency, with the
assistance of one or more neutral advisors known as “convenors,” assembles a committee of
representatives of all affected interests to negotiate a proposed rule. Sometimes the law itself will specify which
interests are to be included on the committee. Once assembled, the next goal is for members to receive training in interest-based problem-solving and consensus-decision making. They then must make sure that all views are heard and that each committee member agrees to a set of ground rules for the negotiated rulemaking process. The ultimate goal is to reach consensus on a text that all parties can accept. The agency is represented at the table by an official who is sufficiently senior to be able to speak authoritatively on its behalf. Negotiating sessions are chaired by a neutral mediator or facilitator skilled in assisting in the resolution of multiparty disputes. The Checklist—Advantages as well as Misperceptions
<Insert specific solvency advocate>
Reg neg solves—empirics prove
Knaster 10
(Alana Knaster is the Deputy Director of the Resource Management Agency. She was Senior Executive in the Monterey County Planning
Department for five years with responsibility for planning, building, and code enforcement programs. Prior to joining Monterey County,
Alana was the President of the Mediation Institute, a national non-profit firm specializing in the resolution of complex land use planning and
environmental disputes. Many of the disputes that she successfully mediated, involved dozens of stakeholder groups including government
agencies, major corporations and public interest groups. She served in that capacity for 15 years. Alana was Mayor of the City of Hidden
Hills, California from 1981-88 and represented her City on a number of regional planning agencies and commissions. She also has been on
the faculty of Pepperdine University Law School since 1989, teaching courses in environmental and public policy mediation. Knaster, A.
“Resolving Conflicts Over Climate Change Solutions: Making the Case for Mediation,” Pepperdine Dispute Resolution Law Journal, Vol 10, No
3, 2010. 465-501. http://law.pepperdine.edu/dispute-resolution-law-journal/issues/volume-ten/Knaster%20Article.pdf//ghs-kw)
Federal and international dispute resolution process models. There are also models in U.S. and Canadian
legislation supporting the use of consensus-based processes. These processes have been successfully
applied to resolve dozens of disputes that involved multiple stakeholder interests, on technically and
politically complex environmental and public policy issues. For example, the Negotiated Rulemaking Act of
1990 was enacted by Congress to formalize a process for negotiating contentious new regulations.118 The Act provides a process called “reg
neg” by which representatives of interest groups that could be substantially affected by the provisions
of a regulation, and agency staff negotiate the provisions.119 The meetings are open to the public; however,
the process does enable negotiators to hold private interest group caucuses. If a consensus is reached on the provisions of
the rule, the Agency commits to publish the consensus rule in the Federal Register for public
comment.120 The participants in the reg neg agree that as long as the final regulation is consistent with
what they have jointly recommended, they will not challenge it in court. The assumption is that parties will
support a product that they negotiated.121 Reg neg has been utilized by numerous federal agencies to
negotiate rules pertaining to a diverse range of topics including safe drinking water, fugitive gasoline
emissions, eligibility for educational loans, and passenger safety.122 In 1991, in Canada, an initiative was launched by
the National Task Force on Consensus and Sustainability to develop a guidance document that would govern how federal, provincial, and
municipal governments would address resource management disputes. The document that was negotiated, “Building Consensus for a
Sustainable Future: Guiding Principles,” was adopted by consensus in 1994.123 The document outlined principles for building a consensus and
process steps. The ten principles included provisions regarding inclusivity of the process (this was particularly important in Canada with respect
to inclusion of Aboriginal peoples), voluntary participation, accountability to constituencies, respect for diverse interests, and commitment to
any agreement adopted.124 The
consensus principles were subsequently utilized to resolve disputes over issues
that included sustainable forest management, siting of solid waste facilities, impacts of pulp mill
expansion, and economic diversification based on sustainable wildlife resources.125 The reg neg and
Consensus for Sustainable Future model represent codified mediated negotiation processes that have withstood
the test of legal challenge and have been strongly endorsed by the groups that have participated in
these processes.
1NC Ptix NB
Doesn’t link to politics—empirics prove
USDA 6/6
(The U.S. Department of Agriculture’s Agricultural Marketing Service administers programs that facilitate the efficient, fair marketing of U.S.
agricultural products, including food, fiber, and specialty crops “What is Negotiated Rulemaking?”. Last updated June 6th 2014 @
http://www.ams.usda.gov/AMSv1.0/getfile?dDocName=STELPRDC5089434)
History In 1990, Congress endorsed use by federal agencies of an alternative procedure known as “negotiated rulemaking,” also called “regulatory negotiation,” or “reg-neg.” It has been used by agencies to bring interested parties into the rule-drafting process at an early stage, under circumstances that foster cooperative efforts to achieve solutions to regulatory problems. Where successful, negotiated rulemaking can lead to better, more acceptable rules, based on a clearer understanding of the concerns of all those affected. Negotiated rules may be easier to enforce and less likely to be challenged in litigation. The results of reg-neg usage by the federal government, which began in the early 1980s, are impressive: large-scale regulators such as the Environmental Protection Agency, Nuclear Regulatory Commission, Federal Aviation Administration, and the Occupational Safety and Health Administration used the process on many occasions. Building on these positive experiences, several states, including Massachusetts, New York, and California, have also begun using the procedure for a wide range of rules. The very first negotiated rule-making was convened by the Federal Mediation and Conciliation Service (FMCS) working with the Department of Transportation, the Federal Aviation Administration, airline pilots and other interested groups to deal with regulations concerning flight and duty time for pilots. The negotiated rulemaking was a success and a draft rule was agreed upon that became the final rule. Since that first reg-neg, FMCS has assisted in both the convening and facilitating stages in many such procedures at the Departments of Labor, Health and Human Services (HRSA), Interior, Housing and Urban Development, and the EPA, as well as state-level processes, and other forms of consensus-based decision-making programs such as public policy dialogues, hearings, focus groups, and meetings.
1NC Fism NB
Failure to use reg neg results in a federalism crisis—REAL ID proves
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
b. A Cautionary Tale: The REAL ID Act The value
of negotiated rulemaking to federalism bargaining may be best
understood in relief against the failure of alternatives in federalism-sensitive [*57] contexts. Particularly
informative are the strikingly different state responses to the two approaches Congress has recently taken in tightening national security
through identification reform--one requiring
regulations through negotiated rulemaking, and the other through
traditional notice and comment. After the 9/11 terrorist attacks, Congress ordered the Department of Homeland Security (DHS) to
establish rules regarding valid identification for federal purposes (such as boarding an aircraft or accessing federal buildings). n291 Recognizing
the implications for state-issued driver’s licenses and ID cards, Congress required DHS to use negotiated
rulemaking to forge
consensus among the states about how best to proceed. n292 States leery of the staggering costs associated with proposed
reforms participated actively in the process. n293 However, the subsequent REAL ID Act of 2005 repealed
the ongoing negotiated rulemaking and required DHS to prescribe top-down federal requirements for state-issued licenses. n294
The resulting DHS rules have been bitterly opposed by the majority of state governors, legislatures, and
motor vehicle administrations, n295 prompting a virtual state rebellion that cuts across the red-state/blue-state political divide. n296 No state met the December 2009 deadline initially contemplated by the statute,
and over half have enacted or considered legislation prohibiting compliance with the Act, defunding its
implementation, or calling for its repeal. n297 In the face of this unprecedented state hostility, DHS has
extended compliance deadlines even for those that did not request extensions, and bills have been introduced in both houses
of Congress to repeal the Act. n298 Efforts to repeal what is increasingly referred to as a "failed" policy have won
endorsements [*58] from organizations across the political spectrum. n299 Even the Executive Director of the ACLU, for
whom federalism concerns have not historically ranked highly, opined in USA Today that the REAL ID Act violates the Tenth Amendment. n300
US federalism will be modelled globally—solves human rights, free trade, war, and
economic growth
Calabresi 95
(Steven G. Calabresi is a Professor of Law at Northwestern University and is a graduate of the Yale Law School (1983) and of Yale College
(1980). Professor Calabresi was a Scholar in Residence at Harvard Law School from 2003 to 2005, and he has been a Visiting Professor of
Political Science at Brown University since 2010. Professor Calabresi was also a Visiting Professor at Yale Law School in the Fall of 2013.
Professor Calabresi served as a Law Clerk to Justice Antonin Scalia of the United States Supreme Court, and he also clerked for U.S. Court of
Appeals Judges Robert H. Bork and Ralph K. Winter. From 1985 to 1990, he served in the Reagan and first Bush Administrations working
both in the West Wing of the Reagan White House and before that in the U.S. Department of Justice. In 1982, Professor Calabresi cofounded The Federalist Society for Law & Public Policy Studies, a national organization of lawyers and law students, and he currently serves
as the Chairman of the Society’s Board of Directors – a position he has held since 1986. Since joining the Northwestern Faculty in 1990, he
has published more than sixty articles and comments in every prominent law review in the country. He is the author with Christopher S. Yoo
of The Unitary Executive: Presidential Power from Washington to Bush (Yale University Press 2008); and he is also a co-author with
Professors Michael McConnell, Michael Stokes Paulsen, and Samuel Bray of The Constitution of the United States (2nd ed. Foundation Press
2013), a constitutional law casebook. Professor Calabresi has taught Constitutional Law I and II; Federal Jurisdiction; Comparative Law;
Comparative Constitutional Law; Administrative Law; Antitrust; a seminar on Privatization; and several other seminars on topics in
constitutional law. Calabresi, S. G. “Government of Limited and Enumerated Powers: In Defense of United States v. Lopez, A Symposium:
Reflections on United States v. Lopez,” Michigan Law Review, Vol 92, No 3, December 1995. Ghs-kw)
We have seen that a
desire for both international and devolutionary federalism has swept across the world in recent
years. To a significant extent, this is due to global fascination with and emulation of our own American
federalism success story. The global trend toward federalism is an enormously positive development that greatly increases the
likelihood of future peace, free trade, economic growth, respect for social and cultural diversity, and
protection of individual human rights. It depends for its success on the willingness of sovereign nations
to strike federalism deals in the belief that those deals will be kept.233 The U.S. Supreme Court can do its part to
encourage the future striking of such deals by enforcing vigorously our own American federalism deal.
Lopez could be a first step in that process, if only the Justices and the legal academy would wake up to the importance of what is at stake.
Federalism solves economic growth
Brueckner 05
(Jan K. Brueckner is a Professor of Economics at the University of California, Irvine. He is a member of the Institute of Transportation
Studies and the Institute for Mathematical Behavioral Sciences, and a former editor of the Journal of Urban Economics. Brueckner, J. K. “Fiscal
Federalism and Economic Growth,” CESifo Working Paper No. 1601, November 2005. https://www.cesifogroup.de/portal/page/portal/96843357AA7E0D9FE04400144FAFBA7C//ghs-kw)
The analysis in this paper suggests that faster
economic growth may constitute an additional benefit of fiscal
federalism beyond those already well recognized. This result, which matches the conjecture of Oates (1993) and the
expectations of most empirical researchers who have studied the issue, arises from an unexpected source: a
greater incentive to save when public-good levels are tailored under federalism to suit the differing
demands of young and old consumers. This effect grows out of a novel interaction between the rules of
public-good provision which apply cross-sectionally at a given time and involve the young and old
consumers of different generations, and the savings decision of a given generation, which is intertemporal in
nature. This cross-sectional/intertemporal interaction yields the link between federalism and economic growth. While it is encouraging that
the paper’s results match recent empirical findings showing a positive growth impact from fiscal
decentralization, additional theoretical work exploring other possible sources of such a link is clearly needed. The present results emerge
from a model based on very minimal assumptions, but exploration of richer models may also be fruitful.
US economic growth solves war, collapse ensures instability
National Intelligence Council, ’12 (December, “Global Trends 2030: Alternative Worlds”
http://www.dni.gov/files/documents/GlobalTrends_2030.pdf)
Big Stakes for the International System The optimistic scenario of a reinvigorated US economy would increase the prospects that the growing global and regional challenges would be addressed. A stronger US economy dependent on trade in services and cutting-edge technologies would be a boost for the world economy, laying the basis for stronger multilateral cooperation. Washington would have a stronger interest in world trade, potentially leading a process of World Trade Organization reform that streamlines new negotiations and strengthens the rules governing the international trading system. The US would be in a better position to boost support for a more democratic Middle East and prevent the slide of failing states. The US could act as balancer ensuring regional stability, for example, in Asia where the rise of multiple powers—particularly India and China—could spark increased rivalries. However, a reinvigorated US would not necessarily be a panacea. Terrorism, proliferation, regional conflicts, and other ongoing threats to the international order will be affected by the presence or absence of strong US leadership but are also driven by their own dynamics. The US impact is much more clear-cut in the negative case in which the US fails to rebound and is in sharp economic decline. In that scenario, a large and dangerous global power vacuum would be created and in a relatively short space of time. With a weak US, the potential would increase for the European economy to unravel. The European Union might remain, but as an empty shell around a fragmented continent. Progress on trade reform as well as financial and monetary system reform would probably suffer. A weaker and less secure international community would reduce its aid efforts, leaving impoverished or crisis-stricken countries to fend for themselves, multiplying the chances of grievance and peripheral conflicts. In this scenario, the US would be more likely to lose influence to regional hegemons—China and India in Asia and Russia in Eurasia. The Middle East would be riven by numerous rivalries which could erupt into open conflict, potentially sparking oil-price shocks. This would be a world reminiscent of the 1930s when Britain was losing its grip on its global leadership role.
2NC O/V
The counterplan convenes a regulatory negotiation committee to discuss the implementation of the plan. Stakeholders decide how and whether the plan is implemented, and that decision is then implemented. This solves better than the AFF:
2. Agency action—traditional notice-and-comment rulemaking incentivizes actors to withhold information, which prevents agency action and guts implementation of the plan—the CP facilitates cooperation—that's Siegel 9.
3. Collaboration—reg neg facilitates government-civilian cooperation, results in
greater satisfaction with regulations and better compliance after
implementation—social psychology and empirics prove
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental
law. She holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of
Laws in addition to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change
in the Obama White House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker
on collaborative and contractual approaches to governance. After leaving the White House, she advised the National Commission on
the Deepwater Horizon oil spill on topics of structural reform at the Department of the Interior. She has been appointed to the
Administrative Conference of the United States, the government think tank for improving the effectiveness and efficiency of federal
agencies, and is a member of the American College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative
Methods, Program Evaluation, Policy Analysis, and Public Choice and American College. She holds a PhD in Political Science from the
University of North Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, L. I. “Regulatory Negotiation and the
Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9, 2000.
http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf/)
D. Compliance The compliance implications of consensus-based processes remain a matter of speculation.360 No one has yet produced
empirical data on the relationship between negotiated rulemaking and compliance, let alone data comparing the compliance implications
of negotiated and conventional rules.361 However, the Phase II results introduce interesting new findings into the debate. The
data
shows reg-neg participants to be significantly more likely than conventional rulemaking participants
to report the perception that others will be able to comply with the final rule.362 Perceiving that others will
comply might induce more compliance among competitors, along the lines of game theoretic models, at least until evidence of defection
emerges.363 Moreover,
to the extent that compliance failures are at least partly due to technical and
information deficits—rather than to mere political resistance—it seems plausible that reports of the
learning effect and more horizontal sharing of information might help to improve compliance in the
long run.364 The claim that reg-neg could improve compliance is consistent with social psychology
studies showing that in both legal and organizational settings, “fair procedures lead to greater
compliance with the rules and decisions with which they are associated.”365 Similarly, negotiated
rulemaking might facilitate compliance by bringing to the surface some of the contentious issues
earlier in the rulemaking process, where they might be solved collectively rather than dictated by the agency. Although
speculative, these hypotheses seem to fit better with Kerwin and Langbein’s data than do the rather negative expectations about
compliance. Higher
satisfaction could well translate into better long-term compliance, even if litigation
rates remained the same. Consistent with our contention that process matters, we expect it to matter to compliance as well. In
any event, empirical studies of compliance should no longer be so difficult to produce. A number of
negotiated rules are now several years old, with some in the advanced stages of implementation. A study of compliance might compare
numbers of enforcement actions for negotiated as compared to conventional rules, measured by notices of violation, or penalties, for
example.366 It might
investigate as well whether compliance methods differ between the two types of
rules: perhaps the enforcement of negotiated rules occurs more cooperatively, or informally, than
enforcement of conventional rules. Possibly, relationships struck during the negotiated rulemaking
make a difference at the compliance stage.367 To date, the effects of how the rule is developed on eventual compliance
remain a matter of speculation, even though it is ultimately an empirical issue on which both theory and empirical evidence must be
brought to bear.
And, we’ll win new net benefits here that ALL turn the aff
A. Delays—the CP’s regulatory negotiation means that rules won’t be challenged during
the regulation creation process—empirics prove the CP solves faster than the AFF
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter
is a scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has
been involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more
than 50 papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the
University of Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on
environmental mediation and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court.
He has received multiple awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of
the Administrative Conference of the United States.Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated
Rulemaking,” December 1999. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
Properly understood, therefore, the
average length of EPA’s negotiated rulemakings — the time it took EPA
to fulfill its goal — was 751 days or 32% faster than traditional rulemaking. This knocks a full year
off the average time it takes EPA to develop rule by the traditional method. And, note these are
highly complex and controversial rules and that one of them survived Presidential intervention.
Thus, the dynamics surrounding these rules are by no mean “average.” This means that reg neg’s
actual performance is much better than that. Interestingly and consistently, the average time for all of EPA’s reg negs
when viewed in context is virtually identical to that of the sample drawn by Kerwin and Furlong77 — differing by less than a month.
Furthermore, if all of the reg negs that were conducted by all the agencies that were included in Coglianese’s table78 were analyzed along
the same lines as discussed here,79 the
average time for all negotiated rulemakings drops to less than 685
days.80 No Substantive Review of Rules Based on Reg Neg Consensus. Coglianese argues that negotiated rules are actually subjected to
a higher incident of judicial review than are rules developed by traditional methods, at least those issued by EPA.81 But, like his analysis of
the time it takes to develop rules, Coglianese fails to look at either what happened in the negotiated rulemaking itself or the nature of any
challenge. For example, he makes much of the fact that the Grand Canyon visibility rule was challenged by interests that were not a party
to the negotiations;82 yet, he also points out that this rule was not developed under the Negotiated Rulemaking Act83 which explicitly
establishes procedures that are designed to ensure that each interest can be represented. This challenge demonstrates the value of
convening negotiations.84 And, it is significantly misleading to include it when discussing the judicial review of negotiated rules since the
process of reg neg was not followed. As for Reformulated Gasoline, the rule as issued by EPA did not reflect the consensus but rather was
modified by EPA under the direction of President Bush.85 There were, indeed, a number of challenges to the application of the rule,86 but
amazingly little to the rule itself given its history. Indeed, after the proposal was changed, many members of the committee continued to
meet in an effort to put Humpty Dumpty back together again, which they largely did; the
fact that the rule had been
negotiated not only resulted in a much better rule,87 it enabled the rule to withstand in large part a
massive assault. Coglianese also somehow attributes a challenge within the World Trade Organization to a shortcoming of reg neg
even though such issues were explicitly outside the purview of the committee; to criticize reg neg here is like saying surgery is not
effective when the patient refused to undergo it. While the Underground Injection rule was challenged, the committee never reached an
agreement88 and, moreover, the convening report made clear that there were very strong disagreements over the interpretation of the
governing statute that would likely have to be resolved by a Court of Appeals. Coglianese also asserts that the Equipment Leaks rule was
the subject of review; it was, but only because the Clean Air Act requires parties to file challenges in a very short period, and a challenger
therefore filed a defensive challenge while it worked out some minor details over the regulation. Those negotiations were successful and
the challenge was withdrawn. The Chemical Manufacturers Association, the challenger, had no intention of a substantive challenge.89
Moreover, a challenge to other parts of the HON should not be ascribed to the Equipment Leaks part of the rule. The agreement in the
Asbestos in Schools negotiation explicitly contemplated judicial review — strange, but true — and hence it came as no surprise and as no
violation of the agreement. As for the Wood Furniture Rule, the challenges were withdrawn after informal negotiations in which EPA
agreed to propose amendments to the rule.90 Similarly, the challenge to EPA’s Disinfectant By-Products Rule91 was withdrawn. In short,
the rules that have emerged from negotiated rulemaking have been remarkably resistant to substantive challenges. And, indeed, this far
into the development of the process, the standard of review and the extent to which an agreement may be binding on either a signatory
or someone whom a party purports to represent are still unknown — the speculation of many an administrative law class.92 Thus, here
too, Coglianese
paints a substantially misleading picture by failing to distinguish substantive
challenges to rules that are based on a consensus from either challenges to issues that were not the
subject of negotiations or were filed while some details were worked out. Properly understood, reg
negs have been phenomenally successful in warding off substantive review.
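To make the timing math explicit for the block (our own reconstruction from Harter's figures near the top of this card; the traditional-rulemaking baseline is implied by the "32% faster" claim rather than stated in this excerpt):
\[
\text{implied traditional average} \approx \frac{751 \text{ days}}{1 - 0.32} \approx 1{,}104 \text{ days}, \qquad 1{,}104 - 751 \approx 353 \text{ days} \approx \text{the full year Harter says reg neg knocks off}
\]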
B. More democratic—reg neg encourages private sector participation—means that
regulations aren’t unilaterally created by the USFG—CP results in a fair playing field
for the entirety of the private sector
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental
law. Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition
to a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama
White House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on
collaborative and contractual approaches to governance. Laura Langbein is the Professor of Quantitative Methods, Program
Evaluation, Policy Analysis, and Public Choice and American College. She holds a PhD in Political Science from the University of North
Carolina, a BA in Government from Oberlin College. Freeman, J. Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,”
N.Y.U. Environmental Journal, Volume 9, 2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
2. Negotiated Rulemaking Is Fairer to Regulated Parties than Conventional Rulemaking To test whether reg neg was fairer to regulated parties, Kerwin and Langbein asked respondents whether EPA solicited their participation and whether they believed anyone was left out of the process. They also examined how much the parties learned in each process, and whether they experienced resource or information disparities. Negotiated rule participants were significantly more likely to say that the EPA encouraged their participation than conventional rule participants (65% versus 33% respectively). Although a higher proportion of
conventional rulemaking participants reported that a party that should have been represented in the rulemaking was omitted, the difference is not
statistically significant. Specifically, "a majority of both negotiated and conventional rule participants believed that the parties who should have been involved
were involved (66% versus 52% respectively)." In addition, as reported above, participants in regulatory negotiations reported significantly more learning than
their conventional rulemaking counterparts. Indeed, the disparity between the two types of participants in terms of their reports about learning was one of
the study's most striking results. At the same time, the resource disadvantage of poorer, smaller groups was no greater in negotiated rulemaking than in
conventional rulemaking. So, while
smaller groups did report suffering from a lack of resources during
regulatory negotiation, they reported the same in conventional rulemakings; no disparity existed
between the two processes on this score. Finally, the data suggest that the agency is equally responsive to the
parties in both negotiated and conventional rulemakings. This result, together with the finding that participants in regulatory
negotiations perceived disproportionate influence to be about evenly distributed, suggests that reg neg is at least as fair to the parties as conventional
rulemaking. Indeed, because
participant learning was so much greater in regulatory negotiation, the
process may in fact be more fair.
2NC Solves Better
Reg neg is better for complex rules
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice and American College. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from
Oberlin College. Freeman, J. Langbein, R. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
4. Complex Rules Are More Likely To Be Settled Through Negotiated Rulemaking Recall that theorists disagree
over whether complex or simple issues are best suited for negotiation. The data suggest that negotiated and conventional rules differ in
systematic ways, indicating that EPA officials do not select just any rule for negotiation. When asked how the issues for rulemaking were
established, reg neg participants reported more often than their counterparts that the participants established at least some of them (44%
versus 0%). Conventional
rulemaking participants more often admitted to being uninformed of the process
for establishing issues (17% versus 0%) or offered that regulated entities set the issues (11% to 0%). A majority of both groups reported
that the EPA or the governing legislation established at least some of the issues. Kerwin and Langbein found that the types of issues
indeed appeared to differ between negotiated and conventional rules. When asked about the type of issues to be
decided, 52% of participants in conventional groups identified issues regarding the standard, including its level,
timing, or measurement (compared to 31% of negotiated rule participants), while 58% of the negotiating group identified
compliance and implementation issues (compared to 39% of participants in the conventional group). More reg neg
participants (53%) also cited compliance issues as causing the greatest conflict, compared to 32% of conventional
participants. Conventional participants more often reported that the rulemaking failed to resolve all of the issues (30%
versus 14%), but also more often reported that they encountered no "surprise" issues (74% versus 44%). Participants perceived negotiated
rules to be more complex, with more issues and more sides per issue than conventional rules. Kerwin and Langbein learned in interviews that
reg neg participants tended to develop a more detailed view about the issues to be decided than did
their conventional counterparts. The researchers interpreted this disparity in reported detail as a perception of complexity. To
measure it they computed a complexity score: the more issues and the more sides to each issue that respondents in a rulemaking could
identify, relative to the number of respondents, the more nuanced or complex the rulemaking. Using this calculation, the rules ranged in
complexity from 1.9 to 5.0, with a mean complexity score of 3.6. The mean complexity score for reg negs (4.1) was significantly higher than the
score (2.5) for conventional rulemaking. Reg neg participants also presented a clearer understanding of the issues to be decided than did
conventional participants. To test clarity, Kerwin and Langbein developed a measure that would reflect the striking variation among
respondents in the number of different issues and different sides they perceived in their rulemaking. Some respondents could identify very few
separate issues and sides (e.g., "the level of the standard is the single issue and the sides are business, environmentalists, and EPA"), while
others detected as many as four different issues, with three sides on some and two on others. Kerwin and Langbein's measurement was in units
of issue/sides, representing a combination of the two variables, the recognition of which they were measuring; the mentions ranged from 3 to
10 issue/sides, with a mean of 7.9. Negotiated rulemaking participants mentioned an average of 8.9 issue/sides, compared to an average of
6 issue/sides mentioned by their conventional counterparts, a statistically significant difference. To illustrate the difference between complexity
and clarity: If a party identified the compliance standard as the sole issue, but failed to identify a number of sub-issues, they would be classified
as having a clear understanding but not a complex one. Similarly, if the party identified two sides (business vs. environment) without
recognizing distinctions among business participants or within an environmental coalition, they would also be classified as clear but not
complex in their understanding. The
differences in complexity might be explained by the higher reported rates of
learning by reg neg participants, rather than by differences in the types of rules processed by reg neg
versus conventional rulemaking. Kerwin and Langbein found that complexity and clarity were both positively
and significantly correlated with learning by respondents, but the association between learning and complexity/clarity
disappeared when the type of rulemaking was held constant. However, when the amount learned was held constant, the association between
complexity/clarity and the type of rulemaking remained positive and significant. This signifies that the
association between learning
and complexity/clarity was due to the negotiation process. In other words, the differences in
complexity/clarity are not attributable to higher learning but rather to differences between the processes. The
evidence is consistent with the hypothesis that issues selected for regulatory negotiation are different from and more
complicated than those chosen for conventional rulemaking. The data associating reg negs with
complexity, together with the finding that more issues settle in reg negs, are consistent with the
proposition that issues with more (and more diverse) sub-issues and sides settle more easily than
simple issues.
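A rough formalization of the complexity measure described above, in our own notation (the card describes it only verbally, so treat this as an interpretive sketch rather than Kerwin and Langbein's actual formula):
\[
\text{complexity score} \;\approx\; \frac{\sum_{\text{respondents}} \left( \#\text{issues identified} + \#\text{sides identified across those issues} \right)}{\#\text{respondents}}
\]
On this reading, the reported values fit: complexity ranged from 1.9 to 5.0 with a mean of 3.6 (4.1 for reg negs versus 2.5 for conventional rules), while clarity was tallied separately in issue/sides mentions per respondent (mean 7.9; 8.9 versus 6).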
Reg neg is better than conventional rulemaking
Freeman and Langbein 00
(Jody Freeman is the Archibald Cox Professor at Harvard Law School and a leading expert on administrative law and environmental law. She
holds a Bachelor of the Arts from Stanford University, a Bachelor of Laws from the University of Toronto, and a Master of Laws in addition to
a Doctors of Jurisdictional Science from Harvard University. She served as Counselor for Energy and Climate Change in the Obama White
House in 2009-2010. Freeman is a prominent scholar of regulation and institutional design, and a leading thinker on collaborative and
contractual approaches to governance. After leaving the White House, she advised the National Commission on the Deepwater Horizon oil
spill on topics of structural reform at the Department of the Interior. She has been appointed to the Administrative Conference of the United
States, the government think tank for improving the effectiveness and efficiency of federal agencies, and is a member of the American
College of Environmental Lawyers. Laura I Langbein is the Professor of Quantitative Methods, Program Evaluation, Policy Analysis, and
Public Choice and American College. She holds a PhD in Political Science from the University of North Carolina, a BA in Government from
Oberlin College. Freeman, J. Langbein, L. I. “Regulatory Negotiation and the Legitimacy Benefit,” N.Y.U. Environmental Journal, Volume 9,
2000. http://www.law.harvard.edu/faculty/freeman/legitimacy%20benefit.pdf//ghs-kw)
In this article, we present an original analysis and summary of new empirical evidence from Neil Kerwin and Laura Langbein's two-phase study
of Environmental Protection Agency (EPA) negotiated rulemakings. n5 Their qualitative and (*62) quantitative data reveal more about reg neg
than any empirical study to date; although not published in a law review article until now, they unquestionably bear upon the ongoing debate
among legal scholars over the desirability of negotiating rules. Most importantly, this is the first study to compare participant attitudes toward
negotiated rulemaking with attitudes toward conventional rulemaking. The findings of the studies tend, on balance, to undermine arguments
made by the critics of regulatory negotiation and to bolster the claims of proponents. Kerwin and Langbein found that, according to participants
in the study, reg
neg generates more learning, better quality rules, and higher satisfaction compared to
conventional rulemaking. n6 At the same time, stakeholder influence on the agency remains about the
same using either approach. n7 Based on the results, we recommend more frequent use of regulatory
negotiation, accompanied by further comparative and empirical study, for the purposes of establishing
regulatory standards and resolving implementation and compliance issues. This recommendation
contradicts the prevailing view that the process is best used sparingly, n8 and even then, only for narrow
questions of implementation. n9
Reg negs solve better
Harter 99
(Philip J. Harter received his AB (1964), Kenyon College, MA (1966), JD, magna cum laude (1969), University of Michigan. Philip J. Harter is a
scholar in residence at Vermont Law School and the Earl F. Nelson Professor of Law Emeritus at the University of Missouri. He has been
involved in the design of many of the major developments of administrative law in the past 40 years. He is the author of more than 50
papers and books on administrative law and has been a visiting professor or guest lecturer internationally, including at the University of
Paris II, Humboldt University (Berlin) and the University of the Western Cape (Cape Town). He has consulted on environmental mediation
and public participation in rulemaking in China, including a project sponsored by the Supreme Peoples Court. He has received multiple
awards for his achievements in administrative law. He is listed in Who's Who in America and is a member of the Administrative Conference
of the United States.Harter, P. J. “Assessing the Assessors: The Actual Performance of Negotiated Rulemaking,” December 1999.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=202808//ghs-kw)
The Primary Objective of Negotiated Rulemaking Is To Create Better and More Widely Accepted Rules. Coglianese argues throughout his article
that the primary benefits of negotiated rules were seen by its advocates as being the reduction in time and in the incidence of litigation.93
While, both benefits have been realized, neither was seen by those who established it as the predominant factor in its use. For example, Peter
Schuck wrote an important early article in which he described the
benefits of negotiated solutions over those imposed by
a hierarchy.94 Schuck emphasized a number of shortcomings of the adjudicatory nature of hybrid rulemaking and many benefits of direct
negotiations among the affected parties. The tenor of his thinking is reflected by his statement, “a bargained
solution depends for its legitimacy not upon its objective rationality, inherent justice, or the moral
capital of the institution that fashioned it, but upon the simple fact that it was reached by consent of the
parties affected.”95 And, “it encourages diversity, stimulates the parties to develop relevant information about facts and values, provides
a counter-weight to concentrations of power, and advances participation by those the decisions affect.”96 Nowhere in his long list of benefits
was either speed or reduced litigation, except by implication of the acceptability of the results. My own article that developed the
recommendations97 on which the ACUS Recommendation,98 the Negotiated Rulemaking Act, and the practice itself are based describes the
anticipated benefits of negotiated rulemaking: Negotiating
has many advantages over the adversarial process. The
parties participate directly and immediately in the decision. They share in its development and concur in
it, rather than “participate” by submitting information that the decisionmaker considers in reaching the
decision. Frequently, those who participate in the negotiations are closer to the ultimate decisionmaking
authority of the interest they represent than traditional intermediaries that represent the interest in an
adversarial proceeding. Thus, participants in negotiations can make substantive decisions, rather than
acting as experts in the decisionmaking process. In addition, negotiation can be a less expensive means
of decisionmaking because it reduces the need to engage in defensive research in anticipation of
arguments made by adversaries. Undoubtedly the prime benefit of direct negotiations is that it enables
the participants to focus squarely on their respective interests.99 The article quotes John Dunlop, a true pioneer in
using negotiations among the affected interests in the public sphere,100 as saying “In our society, a rule that is developed with the involvement
of the parties who are affected is more likely to be accepted and to be effective in accomplishing its intended purposes.”101 Reducing
time and litigation exposure was not emphasized, if even mentioned directly. To be sure, the Congressional findings
that precede the Negotiated Rulemaking Act mention the savings of time and litigation, but they are largely the byproduct of far more significant benefits:102 (2) Agencies currently use rulemaking procedures that may
discourage the affected parties from meeting and communicating with each other, and may cause
parties with different interest to assume conflicting and antagonistic positions and to engage in
expensive and time-consuming litigation over agency rules. (3) Adversarial rulemaking deprives the
affected parties and the public of the benefits of face-to-face negotiations and cooperation in
developing and reaching agreement on a rule. It also deprives them of the benefits of shared
information, knowledge, expertise, and technical abilities possessed by the affected parties. (4)
Negotiated rulemaking, in which the parties who will be significantly affected by a rule participate
directly in the development of the rule, can provide significant advantages over adversarial rulemaking.
(5) Negotiated rulemaking can increase the acceptability and improve the substance of rules, making it
less likely that the affected parties will resist enforcement or challenge such rules in court. It may also
shorten the amount of time needed to issue final rules. Thus, those who were present at the creation
of reg neg sought neither expedition nor a shield against litigation. Rather, they saw direct
negotiations among the parties — a form of representational democracy not explicitly recognized in the Administrative Procedure
Act — as resulting in rules that are substantively “better” and more widely accepted. Those benefits
were seen as flowing from the participation of those affected who bring with them a practical insight
and expertise that can result in rules that are better informed, more tailored to achieving the actual
regulatory goal and hence more effective, and able to be enforced.
Reg negs are the best type of negotiations
Hsu 02
(Shi-Ling Hsu is the Larson Professor of Law at the Florida State University College of Law. Professor Hsu has a B.S. in Electrical Engineering
from Columbia University, and a J.D. from Columbia Law School. He also has an M.S. in Ecology and a Ph.D. in Agricultural and Resource
Economics, both from the University of California, Davis. Professor Hsu has taught in the areas of environmental and natural resource law,
law and economics, quantitative methods, and property. Prior to his current appointment, Professor Hsu was a Professor of Law and
Associate Dean for Special Projects at the University Of British Columbia Faculty Of Law. He has also served as an Associate Professor at the
George Washington University Law School, a Senior Attorney and Economist for the Environmental Law Institute in Washington D.C, and a
Deputy City Attorney for the City and County of San Francisco. “A Game Theoretic Approach to Regulatory Negotiation: A Framework for
Empirical Analysis,” Harvard Environmental Law Review, Vol 26, No 2, February 2002.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=282962//ghs-kw)
There are reasons to be optimistic about what regulatory negotiations can produce in even a troubled
administrative state. Jody Freeman noted that one important finding from the Kerwin and Langbein studies were that parties
involved in negotiated rulemaking were able to use the face-to-face contact as a learning experience.49
Barton Thompson has noted in his article on common-pool resources problems50 that one reason that resource users resist collective action
solutions is that it
is evidently human nature to blame others for the existence of resource shortages. That in
turn leads to an extreme reluctance by resource users to agree to a collective action solution if it involves
even the most minimal personal sacrifices. Thompson suggests that the one hope for curing resource users of such self-serving myopia is face-to-face contact and the exchange of views. The vitriol surrounding some environmental
regulatory issues suggests that there is a similar human reaction occurring with respect to some resource
conflicts.51 Solutions to environmental problems and resource conflicts on which regulated parties and environmental organizations
hold such strong and disparate views may require face-to-face contact to defuse some of the tension and remove some of
the demonization that has arisen in these conflicts. Reinvention, with the emphasis on negotiations and face-to-face
contact, provides such an opportunity. 52 Farber has argued for making the best of this trend towards regulatory negotiation
characterizing negotiated rulemaking and reinvention. 53 Faced with the reality that some negotiation will inevitably take place
because of the slippage inherent in our system of regulation, Farber argues that the best model for allowing it
to go forward is a bilateral one. A system of bilateral negotiation would clearly be superior to a system of self-regulation, as such a system would inevitably descend into a tragedy of the commons.54 But a
system of bilateral negotiation between agencies and
regulated parties would even be superior to a system of multilateral negotiation, due to the transaction
costs of assembling all of the affected stakeholders in a multilateral effort, and the difficulties of
reaching a consensus among a large number of parties. Moreover, multilateral negotiation gives rise to the troubling idea that there
should be joint governance among the parties. Since environmental organizations lack the resources to participate in post-negotiation
governance, there is a heightened danger of regulatory capture by the better-financed regulated parties.55 The
correct balance
between regulatory flexibility and accountability, argues Farber, is to allow bilateral negotiation but with
built-in checks to ensure that the negotiation process is not captured by regulated parties. Built-in checks
would include transparency, so that environmental organizations can monitor regulatory bargains, and the availability of citizen suits, so that
environmental organizations could remedy regulatory bargains that exceed the dictates of the underlying statute. Environmental organizations
would thus play the role of the watchdog, rather than the active participant in negotiations. The finding of Kerwin and Langbein that resource
constraints sometimes caused environmental organizations, especially smaller local ones, to skip negotiated rulemakings would seem to
support this conclusion. 56 A
much more efficient use of limited resources would require that the
environmental organization attempt to play a deterrent role in monitoring negotiated rulemakings.
2NC Cybersecurity Solvency
Reg neg solves cybersecurity
Sales 13
(Sales, Nathan Alexander. Assistant Professor of Law, George Mason University School of Law. “REGULATING CYBERSECURITY,”
Northwestern University Law Review. 2013.
http://www.rwu.edu/sites/default/files/downloads/cyberconference/cyber_threats_and_cyber_realities_readings.pdf//ghs-kw)
An alternative would be a form of “enforced self-regulation”324 in which private companies develop the
new cybersecurity protocols in tandem with the government.325 These requirements would not be
handed down by administrative agencies, but rather would be developed through a collaborative
partnership in which both regulators and regulated would play a role. In particular, firms might prepare
sets of industrywide security standards. (The National Industrial Recovery Act, famously invalidated by the Supreme Court in 1935,
contained such a mechanism,326 and today the energy sector develops reliability standards in the same way.327) Or agencies could sponsor
something like a negotiated rulemaking in which regulators, firms, and other stakeholders forge a consensus
on new security protocols.328 In either case, agencies then would ensure compliance through standard
administrative techniques like audits, investigations, and enforcement actions.329 This approach would
achieve all four of the benefits of private action mentioned above: It avoids (some) problems with information asymmetries,
takes advantage of distributed private sector knowledge about vulnerabilities and threats,
accommodates rapid technological change, and promotes innovation. On the other hand, allowing firms to help set the
standards that will be enforced against them may increase the risk of regulatory capture – the danger that agencies will come to promote the interests of the
companies they regulate instead of the public’s interests.330 The risk of capture is always present in regulatory action, but it is probably even more acute when
regulated entities are expressly invited to the decisionmaking table.331
2NC Encryption Advocate
Here’s a solvency advocate
DMCA 05
(Digital Millenium Copyright Act, Supplement in 2005. https://books.google.com/books?id=nL0s81xgVwC&pg=PA481&lpg=PA481&dq=encryption+AND+(+%22regulatory+negotiation%22+OR+%22negotiated+rulemaking%22)&source=bl&ots
=w9mrCaTJs4&sig=1mVsh_Kzk1p26dmT9_DjozgVQI&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMIxtPG5YH9xgIVwx0eCh2uEgMJ#v=onepage&q&f=false//ghs-kw)
Some encryption supporters advocate use of advisory committee and negotiated rulemaking procedures to
achieve consensus around an encryption standard. See Motorola Comments at 10-11; Veridian Reply Comments at 20-23.
Reg negs are key to wireless technology innovation
Chamberlain 09
(Chamberlain, Inc. Comments before the Federal Communications Commission. 11-05-2009.
https://webcache.googleusercontent.com/search?q=cache:dfYcw45dQZsJ:apps.fcc.gov/ecfs/document/view%3Bjsessionid%3DSQnySfcTVd
22hL6ZYShTpQYGY1X27xB14p3CS1y01XW15LQjS1jj!-1613185479!153728702%3Fid%3D7020245982+&cd=2&hl=en&ct=clnk&gl=us//ghs-kw)
Chamberlain supports solutions that will balance the needs of stakeholders in both the licensed and unlicensed
bands. Chamberlain and other manufacturers of unlicensed devices such as Panasonic are also uniquely
able to provide valuable contributions from the perspective of unlicensed operators with a long history
of innovation in the unlicensed bands. Moreover, as the Commission has recognized in recent proceedings,
alternative mechanisms for gathering data and evaluating options may assist the Commission in
reaching a superior result.19 For these reasons, Chamberlain would support a negotiated rulemaking
process, the use of workshops -both large and small- or any other alternative process that ensures the widest level of
participation from stakeholders across the wireless market.
2NC Privacy Solvency
Reg neg is key to privacy
Rubinstein 09
(Rubinstein, Ira S. Adjunct Professor of Law and Senior Fellow, Information Law Institute, New York University School of Law. “PRIVACY AND
REGULATORY INNOVATION: MOVING BEYOND VOLUNTARY CODES,” Workshop for Federal Privacy Regulation, NYU School of Law.
10/2/2009. https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416544506-00103/544506-00103.pdf//ghs-kw)
Whatever its shortcomings, and despite its many critics, self-regulation is
a recurrent theme in the US approach to online
privacy and perhaps a permanent part of the regulatory landscape. This Article’s goal has been to consider new
strategies for overcoming observed weaknesses in self-regulatory privacy programs. It began by examining the FTC’s intermittent embrace of
self-regulation, and found that the Commission’s most recent foray into self-regulatory guidelines for online behavioral advertising is not very
different from earlier efforts, which ended in frustration and a call for legislation. It also reviewed briefly the more theoretical arguments of
privacy scholars for and against self-regulation, but concluded that the market oriented views of those who favor open information flows
clashed with the highly critical views of those who detect a market failure and worry about the damaging consequences of profiling and
surveillance not only to individuals, but to society and to democratic self-determination. These views seem irreconcilable and do not pave the
way for any applied solutions. Next, this Article presented three case studies of mandated self-regulation. This included overviews of the NAI
Principles and the SHA, as well as a more empirical analysis of the CARU safe harbor program. An assessment of these case studies against five
criteria (completeness, free rider problems, oversight and enforcement, transparency, and formation of norms) concluded that self-
regulation undergirded by law—in other words, a statutory safe harbor—is a more effective and
efficient instrument than any self-regulatory guidelines in which industry is chiefly responsible for
developing principles and/or enforcing them. In a nutshell, well-designed safe harbors enable policy makers
to imagine new forms of self-regulation that “build on its strengths … while compensating for its
weaknesses.”268 This embrace of statutory safe harbors led to a discussion of how to improve them by importing second-generation
strategies from environmental law. Rather than summarizing these strategies and how they translate into the privacy domain, this Article
concludes with a set of specific recommendations based on the ideas discussed in Part III.C. If Congress enacts comprehensive privacy
legislation based on FIPPs, the first recommendation is that the new law include a safe harbor program, which should echo the COPPA safe
harbor to the extent of encouraging groups to submit self-regulatory guidelines and, if approved by the FTC, treat compliance with these
guidelines as deemed compliance with statutory requirements. The FTC should be granted APA rulemaking powers to implement necessary
rules including a safe harbor rule. Congress
should also consider whether to mandate a negotiated rulemaking for an OBA
safe harbor or for safe harbor programs more generally. In any case, FTC should give serious thought to using the
negotiated rulemaking process in developing a safe harbor program or approving specific guidelines. In addition, the safe harbor
program should be overhauled to reflect second-generation strategies. Specifically, the statute should articulate default requirements but allow
FTC more discretion in determining whether proposed industry guidelines achieve desired outcomes, without firms having to match detailed
regulatory requirements on a point by point basis.
2NC Fism NB
Reg negs are better and solve federalism—plan fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
1. Negotiated Rulemaking Although the most conventional of the less familiar forms, "negotiated rulemaking" between federal
agencies and state stakeholders is a sparingly used tool that holds
promise for facilitating sound administrative
policymaking in disputed federalism contexts, such as those implicating environmental law, national
security, and consumer safety. Under the Administrative Procedure Act, the traditional "notice and comment"
administrative rulemaking process allows for a limited degree of participation by state stakeholders
who comment on a federal agency's proposed rule. The agency publishes the proposal in the Federal Register, invites
public comments critiquing the draft, and then uses its discretion to revise or defend the rule in response to comments. n256 Even this iterative
process constitutes a modest negotiation, but it leaves participants so frequently unsatisfied that many agencies began to informally use more
extensive negotiated rulemaking in the 1970s. n257 In 1990, Congress passed the Negotiated Rulemaking Act, amending the Administrative
Procedure Act to allow a more dynamic [*52] and inclusive rulemaking process, n258 and a subsequent Executive Order required all federal
agencies to consider negotiated rulemaking when developing regulations. n259 Negotiated rulemaking allows stakeholders much more
influence over unfolding regulatory decisions. Under
notice and comment, public participation is limited to criticism
of well-formed rules in which the agency is already substantially invested. n260 By contrast,
stakeholders in negotiated rulemaking collectively design a proposed rule that takes into account their
respective interests and expertise from the beginning. n261 The concept, outline, and/or text of a rule is hammered out by
an advisory committee of carefully balanced representation from the agency, the regulated public, community groups and NGOs, and state and
local governments. n262 A professional intermediary leads the effort to ensure that all stakeholders are appropriately involved and to help
interpret problem-solving opportunities. n263 Any consensus reached by the group becomes the basis of the proposed rule, which is still
subject to public comment through the normal notice-and-comment procedures. n264 If the group does not reach consensus, then the agency
proceeds through the usual notice-and-comment process. n265 The negotiated rulemaking process, a tailored version of interest group
bargaining within established legislative constraints, can yield important benefits. n266 The
process is usually more subjectively
satisfying [*53] for all stakeholders, including the government agency representatives. n267 More
cooperative relationships are established between the regulated parties and the agencies, facilitating
future implementation and enforcement of new rules. n268 Final regulations include fewer technical
errors and are clearer to stakeholders, so that less time, money and effort is expended on enforcement.
n269 Getting a proposed rule out for public comment takes more time under negotiated rulemaking than standard notice and comment, but
thereafter, negotiated
rules receive fewer and more moderate public comment, and are less frequently
challenged in court by regulated entities. n270 Ultimately, then, final regulations can be implemented more
quickly following their debut in the Federal Register, and with greater compliance from stakeholders.
n271 The process also confers valuable learning benefits on participants, who come to better understand
the concerns of other stakeholders, grow invested in the consensus they help create, and ultimately
campaign for the success of the regulations within their own constituencies. n272 Negotiated rulemaking offers
additional procedural benefits because it ensures that agency personnel will be unambiguously informed about
the full federalism implications of a proposed rule by the impacted state interests. Federal agencies are already required by
executive order to prepare a federalism impact statement for rulemaking with federalism implications, n273 but the quality of state-federal communication within negotiated rulemaking enhances the likelihood that federal agencies will
appreciate and understand the full extent of state [*54] concerns. Just as the consensus-building process invests
participating stakeholders with respect for the competing concerns of other stakeholders, it invests participating agency
personnel with respect for the federalism concerns of state stakeholders. n274 State-side federalism
bargainers interviewed for this project consistently reported that they always prefer negotiated
rulemaking to notice and comment--even if their ultimate impact remains small--because the products
of fully informed federal consultation are always preferable to the alternative. n275
Reg negs solve federalism—traditional rulemaking fails
Ryan 11
(Erin Ryan holds a B.A. 1991 Harvard-Radcliffe College, cum laude, M.A. 1994 Wesleyan University, J.D. 2001 Harvard Law School, cum laude.
Erin Ryan teaches environmental and natural resources law, property and land use, water law, negotiation, and federalism. She has
presented at academic and administrative venues in the United States, Europe, and Asia, including the Ninth Circuit Judicial Conference, the
U.S.D.A. Office of Ecosystem Services and Markets, and the United Nations Institute for Training and Research. She has advised National Sea
Grant multilevel governance studies involving Chesapeake Bay and consulted with multiple institutions on developing sustainability
programs. She has appeared in the Chicago Tribune, the London Financial Times, the PBS Newshour and Christian Science Monitor’s
“Patchwork Nation” project, and on National Public Radio. She is the author of many scholarly works, including Federalism and the Tug of
War Within (Oxford, 2012). Professor Ryan is a graduate of Harvard Law School, where she was an editor of the Harvard Law Review and a
Hewlett Fellow at the Harvard Negotiation Research Project. She clerked for Chief Judge James R. Browning of the U.S. Court of Appeals for
the Ninth Circuit before practicing environmental, land use, and local government law in San Francisco. She began her academic career at
the College of William & Mary in 2004, and she joined the faculty at the Northwestern School of Law at Lewis & Clark College in 2011. Ryan
spent 2011-12 as a Fulbright Scholar in China, during which she taught American law, studied Chinese governance, and lectured throughout
Asia. Ryan, E. Boston Law Review, 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583132//ghs-kw)
Unsurprisingly, bargaining
in which the normative leverage of federalism values heavily influences the exchange offers the most reliable interpretive tools, smoothing out leverage imbalances and focusing
bargainers' interlinking interests. n619 Negotiations in which participants are motivated by shared regard for checks, localism,
accountability, and synergy naturally foster constitutional process and hedge against non-consensual dealings. All federalism
bargaining trades on the normative values of federalism to some degree, and any given negotiation may feature it
more or less prominently based on the factual particulars. n620 Yet the taxonomy reveals several forms in which federalism values
predominate by design, and which may prove especially valuable in fraught federalism contexts: negotiated rulemaking, policymaking
laboratory negotiations, and iterative federalism. n621 These examples indicate the potential for purposeful federalism engineering to
reinforce procedural regard for state and federal roles within the American system. (1) Negotiated Rulemaking
between state
and federal actors improves upon traditional administrative rulemaking in fostering participation,
localism, and synergy by incorporating genuine state input into federal regulatory planning. n622 Most
negotiated rulemaking also uses professional intermediaries to ensure that all stakeholders are
appropriately engaged and to facilitate the search for outcomes that meet parties' dovetailing interests.
n623 For example, after discovering that extreme local variability precluded a uniform federal program, Phase II stormwater negotiators invited
municipal dischargers to design individually [*123] tailored programs within general federal limits. n624 Considering
the massive
number of municipalities involved, the fact that the rule faced legal challenge from only a handful of
Texas municipalities testifies to the strength of the consensus through which it was created. By contrast,
the iterative exchange within standard notice-and-comment rulemaking--also an example of federalism
bargaining--can frustrate state participation by denying participants meaningful opportunities for
consultation, collaborative problem-solving, and real-time accountability. The contrast between
notice-and-comment and negotiated rulemaking, exemplified by the two phases of REAL ID rulemaking, demonstrates
the difference between more and less successful instances of federalism bargaining. n625 Moreover, the
difficulty of asserting state consent to the products of the REAL ID notice-and-comment rulemaking (given the outright rebellion
that followed) limits its interpretive potential. Negotiated rulemakings take longer than other forms of administrative
rulemaking, but are more likely to succeed over time. Regulatory matters best suited for state-federal negotiated rulemaking
include those in which a decisive federal rule is needed to overcome spill