1AC SQ – Cryptowars Cryptowars are coming now. The NSA and FBI want to block and undermine strong encryption in favor of easy surveillance of all digital communication. Computer scientists are fighting back. Tokmetzi 15 Dimitri, Data Journalist at the Correspondent (Netherlands), “Think piece: How to protect privacy and security?” Global Conference on CyberSpace 2015, 16-17 April 2015, The Hague, The Netherlands https://www.gccs2015.com/sites/default/files/documents/How%20to%20protect%20privacy%20and%20security%20in%20the%20crypto%20wars.pdf We thought that the Crypto Wars of the nineties were over, but renewed fighting has erupted since the Snowden revelations. On one side, law enforcement and intelligence agencies are afraid that broader use of encryption on the Internet will make their work harder or even impossible. On the other, security experts and activists argue that installing backdoors will make everyone unsafe. Is it possible to find some middle ground between these two positions? ‘This is the story of how a handful of cryptographers “hacked” the NSA. It’s also a story of encryption backdoors, and why they never quite work out the way you want them to.’ So began the blog post on the FREAK attack, one of the most ironic hacks of recent years. Matthew Green, assistant professor at Johns Hopkins University, and a couple of international colleagues exploited a nasty bug on the servers that host the NSA website. By forcing the servers to use an old, almost forgotten and weak type of encryption, which they were able to crack within a few hours, they managed to gain access to the backend of the NSA website, making it possible for them to alter its content. Worse still, the cryptographers found that the same weak encryption was used on a third of the 14 million other websites they scanned. For instance, if they had wanted to, they could have gained access to whitehouse.gov or tips.fbi.gov. Many smartphone apps turned out to be vulnerable as well. 
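The "old, almost forgotten and weak type of encryption" behind FREAK was export-grade RSA, capped at 512-bit moduli that modern rented hardware can factor in hours. As a hedged illustration (a toy sketch, not the actual FREAK exploit, which used the number field sieve against real 512-bit keys), the following factors a deliberately tiny RSA modulus and rebuilds the private key:

```python
# Toy illustration of why short RSA moduli fail. Export-grade RSA capped
# moduli at 512 bits; here we factor a far tinier modulus by trial division
# just to show the principle -- real attacks use the number field sieve.

def factor(n):
    """Return the smallest odd nontrivial factor of odd n by trial division."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

# Tiny "RSA" modulus n = p * q with small primes (purely illustrative).
p, q = 2003, 3019
n = p * q
e = 65537

recovered_p = factor(n)
recovered_q = n // recovered_p
# With p and q, an attacker computes the private exponent d and can decrypt.
phi = (recovered_p - 1) * (recovered_q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

msg = 42
ciphertext = pow(msg, e, n)
assert pow(ciphertext, d, n) == msg  # attacker recovers the plaintext
```

The same arithmetic applies to real export keys; only the factoring step is harder, and at 512 bits no longer hard enough.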
The irony is this: this weak encryption was deliberately designed for software products exported from the US in the nineties. The NSA wanted to snoop on foreign governments and companies if necessary and pushed for a weakening of encryption. This weakened encryption somehow found its way back onto the servers of US companies and government agencies. ‘Since the NSA was the organization that demanded export-grade crypto, it’s only fitting that they should be the first site affected by this vulnerability’, Green gleefully wrote. The FREAK attack wasn’t only a show of technological prowess, but also a political statement. Ever since Edward Snowden released the NSA files in June 2013, a new battle has been raging between computer security experts and civil liberties activists on one side and law enforcement and intelligence agencies on the other. There was one set of revelations that particularly enraged the security community. In September 2013 the New York Times, ProPublica and the Guardian published a story on the thorough and persistent efforts of the NSA and its British counterpart GCHQ to decrypt Internet traffic and databases. In a prolonged, multi-billion-dollar operation dubbed ‘BULLRUN’, the intelligence agencies used supercomputers to crack encryption; asked, persuaded or cajoled telecom and web companies to build backdoors into their equipment and software; used their influence to plant weaknesses in cryptographic standards; and simply stole encryption keys from individuals and companies. A war is looming But security specialists argue that by attacking the encryption infrastructure of the Internet, the intelligence agencies have made us all less safe. Terrorists and paedophiles may use encryption to protect themselves when planning and committing terrible crimes, but the Internet as a whole cannot function without proper encryption. Governments cannot provide digital services to their citizens if they cannot use safe networks. 
Banks and financial institutions must be able to communicate data over secure channels. Online shops need to be able to process payments safely. And all companies and institutions have to keep criminals and hackers out of their systems. Without strong encryption, trust cannot exist online. Cryptographers have vowed to fight back. Major web companies like Google and Yahoo! promised their clients strong end-to-end encryption for email and vowed to improve the security of their networks and databases. Apple developed a new operating system that encrypted all content on the new iPhone by default. And hackers started developing web applications and hardware with strong, more user-friendly encryption. In the past few years we have seen the launch of encrypted social media (Twister), smartphones (Blackphone), chat software (Cryptocat), cloud storage (Boxcryptor), file sharing tools (Peerio) and secure phone and SMS apps (TextSecure and Signal). This worries governments. In the wake of the attack on Charlie Hebdo in Paris, UK Prime Minister David Cameron implied that encryption on certain types of communication services should be banned. In the US, FBI director James Comey recently warned that the intelligence agencies are ‘going dark’ because of the emergence of default encryption settings on devices and in web applications. In Europe, the US and elsewhere, politicians are proposing that mandatory backdoors be incorporated in hardware and software. Some even want governments to hold ‘golden keys’ that can decrypt all Internet traffic. The obvious question is how we can meet the needs of all concerned. On the one hand, how can we ensure that intelligence and law enforcement agencies have access to communications and data when they have a legal mandate to do so? Their needs are often legitimate. On the other, how can we ensure strong data protection for all, not only a tech-savvy few? 
As we shall see, this crypto conflict isn’t new, nor is the obvious question the right question to ask at this moment. And, if intelligence agencies win the cryptowar, the result would gravely undermine the security of all digital communication. Computer scientists conclusively vote aff. Weitzner et al, 15, DANIEL J. WEITZNER, Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab; HAROLD ABELSON, Professor of Electrical Engineering and Computer Science at MIT; ROSS ANDERSON, Professor of Security Engineering at the University of Cambridge; STEVEN M. BELLOVIN, Professor of Computer Science at Columbia University; JOSH BENALOH, Senior Cryptographer at Microsoft Research; MATT BLAZE, Associate Professor of Computer and Information Science at the University of Pennsylvania; WHITFIELD DIFFIE, whose discovery of the concept of public-key cryptography opened up the possibility of secure, Internet-scale communications; JOHN GILMORE, co-founder of Cygnus Solutions and the Electronic Frontier Foundation; MATTHEW GREEN, Research Professor at the Johns Hopkins University Information Security Institute; PETER G. NEUMANN, Senior Principal Scientist at the SRI International Computer Science Lab; SUSAN LANDAU, Professor of Cybersecurity Policy at Worcester Polytechnic Institute; RONALD L. RIVEST, MIT Institute Professor; JEFFREY I. SCHILLER, Internet Engineering Steering Group Area Director for Security (1994–2003); BRUCE SCHNEIER, Fellow at the Berkman Center for Internet and Society at Harvard Law School; MICHAEL A. SPECTER, PhD candidate in Computer Science at MIT’s Computer Science and Artificial Intelligence Laboratory. 
“Keys Under Doormats: Mandating Insecurity By Requiring Government Access To All Data And Communications” 7/6/15 MIT Cybersecurity and Internet Policy Research Initiative http://dspace.mit.edu/handle/1721.1/97690#files-area The goal of this report is to similarly analyze the newly proposed requirement of exceptional access to communications in today’s more complex, global information infrastructure. We find that it would pose far more grave security risks, imperil innovation, and raise thorny issues for human rights and international relations. There are three general problems. First, providing exceptional access to communications would force a U-turn from the best practices now being deployed to make the Internet more secure. These practices include forward secrecy — where decryption keys are deleted immediately after use, so that stealing the encryption key used by a communications server would not compromise earlier or later communications. A related technique, authenticated encryption, uses the same temporary key to guarantee confidentiality and to verify that the message has not been forged or tampered with. Second, building in exceptional access would substantially increase system complexity. Security researchers inside and outside government agree that complexity is the enemy of security — every new feature can interact with others to create vulnerabilities. To achieve widespread exceptional access, new technology features would have to be deployed and tested with literally hundreds of thousands of developers all around the world. This is a far more complex environment than the electronic surveillance now deployed in telecommunications and Internet access services, which tend to use similar technologies and are more likely to have the resources to manage vulnerabilities that may arise from new features. 
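The forward-secrecy and authenticated-encryption practices the authors describe can be sketched in a few lines. This is a toy, standard-library illustration of the idea (a hash-based stream cipher with an HMAC tag under one temporary key), not a vetted construction; real deployments use AEAD modes such as AES-GCM or ChaCha20-Poly1305:

```python
# Sketch of authenticated encryption under an ephemeral ("forward secret")
# session key, standard library only. The SHA-256-in-counter-mode stream
# cipher here is a teaching toy, NOT a construction to deploy.
import hashlib, hmac, secrets

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()  # authenticity check
    return ct + tag

def open_(key: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("message forged or tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

session_key = secrets.token_bytes(32)   # temporary key for this session only
msg = seal(session_key, b"meet at noon")
assert open_(session_key, msg) == b"meet at noon"
del session_key  # forward secrecy: once deleted, past traffic stays sealed
```

The last line is the point of contention: exceptional access requires that some copy of that key survive, which is exactly the U-turn from best practice the report describes.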
Features to permit law enforcement exceptional access across a wide range of Internet and mobile computing applications could be particularly problematic because their typical use would be surreptitious — making security testing difficult and less effective. Third, exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys guaranteed access to everything, an attacker who gained access to these keys would enjoy the same privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys among multiple keyholders, as security engineers would normally do with extremely high-value credentials. Recent attacks on the United States Government Office of Personnel Management (OPM) show how much harm can arise when many organizations rely on a single institution that itself has security vulnerabilities. In the case of OPM, numerous federal agencies lost sensitive data because OPM had insecure infrastructure. If service providers implement exceptional access requirements incorrectly, the security of all of their users will be at risk. And, the threat to encryption is not hypothetical: the NSA has already inserted backdoors in software and undermined commercial encryption standards. Harris, 14 Shane, American journalist and author at Foreign Policy magazine. @WAR: The Rise of the Military-Internet Complex. Houghton Mifflin Harcourt, pp. 88-93. For the past ten years the NSA has led an effort in conjunction with its British counterpart, the Government Communications Headquarters, to defeat the widespread use of encryption technology by inserting hidden vulnerabilities into widely used encryption standards. 
Encryption is simply the process of turning a communication - say, an e-mail - into a jumble of meaningless numbers and digits, which can only be deciphered using a key possessed by the e-mail's recipient. The NSA once fought a public battle to gain access to encryption keys, so that it could decipher messages at will, but it lost that fight. The agency then turned its attention toward weakening the encryption algorithms that are used to encode communications in the first place. The NSA is home to the world's best code makers, who are regularly consulted by public organizations, including government agencies, on how to make encryption algorithms stronger. That's what happened in 2006 - a year after Alexander arrived - when the NSA helped develop an encryption standard that was eventually adopted by the National Institute of Standards and Technology, the US government agency that has the last word on weights and measures used for calibrating all manner of tools, industrial equipment, and scientific instruments. NIST's endorsement of an encryption standard is a kind of Good Housekeeping Seal of approval. It encourages companies, advocacy groups, individuals, and government agencies around the world to use the standard. NIST works through an open, transparent process, which allows experts to review the standard and submit comments. That's one reason its endorsement carries such weight. NIST is so trusted that it must approve any encryption algorithms that are used in commercial products sold to the US government. But behind the scenes of this otherwise open process, the NSA was strong-arming the development of an algorithm called a random-number generator, a key component of all encryption. Classified documents show that the NSA claimed it merely wanted to "finesse" some points in the algorithm's design, but in reality it became the "sole editor" of it and took over the process in secret. 
Compromising the number generator, in a way that only the NSA knew, would undermine the entire encryption standard. It gave the NSA a backdoor that it could use to decode information or gain access to sensitive computer systems. The NSA's collaboration on the algorithm was not a secret. Indeed, the agency's involvement lent some credibility to the process. But less than a year after the standard was adopted, security researchers discovered an apparent weakness in the algorithm and speculated publicly that it could have been put there by the spy agency. The noted computer security expert Bruce Schneier zeroed in on one of four techniques for randomly generating numbers that NIST had approved. One of them, he wrote in 2007, "is not like the others." For starters, it worked three times more slowly than the others, Schneier observed. It was also "championed by the NSA, which first proposed it years ago in a related standardization project at the American National Standards Institute." Schneier was alarmed that NIST would encourage people to use an inferior algorithm that had been enthusiastically embraced by an agency whose mission is to break codes. But there was no proof that the NSA was up to no good. And the flaw in the number generator didn't render it useless. As Schneier noted, there was a workaround, though it was unlikely anyone would bother to use it. Still, the flaw set cryptologists on edge. The NSA was surely aware of their unease, as well as the growing body of work that pointed to its secret intervention, because it leaned on an international standards body that represents 163 countries to adopt the new algorithm. The NSA wanted it out in the world, and so widely used that people would find it hard to abandon. Schneier, for one, was confused as to why the NSA would choose as a backdoor such an obvious and now public flaw. (The weakness had first been pointed out a year earlier by employees at Microsoft.) 
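Why a rigged random-number generator undermines "the entire encryption standard" can be seen with a toy model. This is not Dual_EC_DRBG itself (the NIST generator at issue, whose suspected backdoor relied on elliptic-curve math); it is a minimal sketch showing that whoever knows a generator's hidden structure can predict every key it will ever emit:

```python
# Toy illustration of a compromised random-number generator. NOT the real
# NIST algorithm -- just the principle: if each output reveals the internal
# state, one observed "random" key predicts all the keys that follow.

class WeakRNG:
    """Linear congruential generator with fixed, publicly known parameters."""
    A, C, M = 1103515245, 12345, 2**31

    def __init__(self, seed):
        self.state = seed % self.M

    def next_key(self):
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

# A victim generates "secret" session keys...
victim = WeakRNG(seed=20060515)
k1 = victim.next_key()
k2 = victim.next_key()

# ...but each output IS the next internal state, so an eavesdropper who
# observes one key can run the generator forward and predict every later key.
attacker = WeakRNG(seed=0)
attacker.state = k1
assert attacker.next_key() == k2  # the "random" key was never secret
```

In the Dual_EC case the shortcut was subtler: recovering the state was believed to require a secret constant that only the algorithm's designer could know, which is what made it a backdoor rather than a public flaw.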
Part of the answer may lie in a deal that the NSA reportedly struck with one of the world's leading computer security vendors, RSA, a pioneer in the industry. According to a 2013 report by Reuters, the company adopted the NSA-built algorithm "even before NIST approved it. The NSA then cited the early use ... inside the government to argue successfully for NIST approval." The algorithm became "the default option for producing random numbers" in an RSA security product called the BSafe toolkit, Reuters reported. "No alarms were raised, former employees said, because the deal was handled by business leaders rather than pure technologists." For its compliance and willingness to adopt the flawed algorithm, RSA was paid $10 million, Reuters reported. It didn't matter that the NSA had built an obvious backdoor. The algorithm was being sold by one of the world's top security companies, and it had been adopted by an international standards body as well as NIST. The NSA's campaign to weaken global security for its own advantage was working perfectly. When news of the NSA's efforts broke in 2013, in documents released by Edward Snowden, RSA and NIST both distanced themselves from the spy agency - but neither claimed that the backdoor hadn't been installed. In a statement following the Reuters report, RSA denied that it had entered into a "secret contract" with the NSA, and asserted that "we have never entered into any contract or engaged in any project with the intention of weakening RSA's products, or introducing potential 'backdoors' into our products for anyone's use." But it didn't deny that the backdoor existed, or may have existed. Indeed, RSA said that years earlier, when it decided to start using the flawed number-generator algorithm, the NSA "had a trusted role in the community-wide effort to strengthen, not weaken, encryption." Not so much anymore. 
When documents leaked by Snowden confirmed the NSA's work, RSA encouraged people to stop using the number generator - as did NIST. The standards body issued its own statement following the Snowden revelations. It was a model of carefully calibrated language. "NIST would not deliberately weaken a cryptographic standard," the organization said in a public statement, clearly leaving open the possibility - without confirming it - that the NSA had secretly installed the vulnerability or done so against NIST's wishes. "NIST has a long history of extensive collaboration with the world's cryptography experts to support robust encryption. The [NSA] participates in the NIST cryptography development process because of its recognized expertise. NIST is also required by statute to consult with the NSA." The standards body was effectively telling the world that it had no way to stop the NSA. Even if it wanted to shut the agency out of the standards process, by law it couldn't. A senior NSA official later seemed to support that contention. In an interview with the national security blog Lawfare in December 2013, Anne Neuberger, who manages the NSA's relationships with technology companies, was asked about reports that the agency had secretly handicapped the algorithm during the development process. She neither confirmed nor denied the accusation. Neuberger called NIST "an incredibly respected close partner on many things." But, she noted, it "is not a member of the intelligence community." "All the work they do is ... pure white hat," Neuberger continued, meaning not malicious and intended solely to defend encryption and promote security. "Their only responsibility is to set standards" and "to make them as strong as they can possibly be." That is not the NSA's job. Neuberger seemed to be giving NIST a get-out-of-jail-free card, exempting it from any responsibility for inserting the flaw. The 2006 effort to weaken the number generator wasn't an isolated incident. 
It was part of a broader, longer campaign by the NSA to weaken the basic standards that people and organizations around the world use to protect their information. Documents suggest that the NSA has been working with NIST since the early 1990s to hobble encryption standards before they're adopted. The NSA dominated the process of developing the Digital Signature Standard, a method of verifying the identity of the sender of an electronic communication and the authenticity of the information in it. "NIST publicly proposed the [standard] in August 1991 and initially made no mention of any NSA role in developing the standard, which was intended for use in unclassified, civilian communications systems," according to the Electronic Privacy Information Center, which obtained documents about the development process under the Freedom of Information Act. Following a lawsuit by a group of computer security experts, NIST conceded that the NSA had developed the standard, which "was widely criticized within the computer industry for its perceived weak security and inferiority to an existing authentication technology," the privacy center reported. "Many observers have speculated that the [existing] technique was disfavored by NSA because it was, in fact, more secure than the NSA-proposed algorithm." From the NSA's perspective, its efforts to defeat encryption are hardly controversial. It is, after all, a code-breaking agency. This is precisely the kind of work it is authorized, and expected, to do. If the agency developed flaws in encryption algorithms that only it knew about, what would be the harm? But the flaws weren't secret. By 2007, the backdoor in the number generator was being written about on prominent websites and by leading security experts. It would be difficult to exploit the weakness - that is, to figure out the key that opened the NSA's backdoor. But this wasn't impossible. 
A foreign government could figure out how to break the encryption and then use it to spy on its own citizens, or on American companies and agencies using the algorithm. Criminals could exploit the weakness to steal personal and financial information. Anywhere the algorithm was used - including in the products of one of the world's leading security companies - it was vulnerable. The NSA might comfort itself by reasoning that code-breaking agencies in other countries were surely trying to undermine encryption, including the algorithms the NSA was manipulating. And surely they were. But that didn't answer the question: why knowingly undermine not just an algorithm but the entire process by which encryption standards are created? The NSA's clandestine efforts damaged the credibility of NIST and shredded the NSA's long-held reputation as a trusted, valued participant in creating some of the most fundamental technologies on the Internet, the very devices by which people keep their data, and by extension themselves, safe. Imagine if the NSA had been in the business of building door locks, and encouraged every homebuilder in America to install its preferred, and secretly flawed, model. No one would stand for it. At the very least, consumer groups would file lawsuits and calls would go up for the organization's leaders to resign. Plan The United States federal government should fully support and not undermine, weaken or make vulnerable commercial encryption and cryptographic standards in the United States. Advantage 1 – Cybercrime Undermining commercial software reduces the ability to prevent cybercrime and only facilitates access to networks for organized criminal networks. 
Blaze, 15 Matt, University of Pennsylvania Professor of Computer and Information Science, US House of Representatives Committee on Government Oversight and Reform, Information Technology Subcommittee, Encryption Technology and Possible US Policy Responses, 29 April 2015, Testimony of Matt Blaze https://oversight.house.gov/wp-content/uploads/2015/05/4-29-2015-IT-SubcommitteeHearing-on-Encryption-Blaze.pdf An important task for policymakers in evaluating the FBI’s proposal is to weigh the risks of making software less able to resist attack against the benefits of more expedient surveillance. It effectively reduces our ability to prevent crime (by reducing computer security) in exchange for the hope of more efficient crime investigation (by making electronic surveillance easier). Unfortunately, the costs of the FBI’s approach will be very high. It will place our national infrastructure at risk. This is not simply a matter of weighing our desires for personal privacy and to safeguard against government abuse against the need for improved law enforcement. That by itself might be a difficult balance for policymakers to strike, and reasonable people might disagree on where that balance should lie. But the risks here go far beyond that, because of the realities of how modern software applications are integrated into complete systems. Vulnerabilities in software of the kind likely to arise from law enforcement access requirements can often be exploited in ways that go beyond the specific data they process. In particular, vulnerabilities often allow an attacker to effectively take control over the system, injecting its own software and taking control over other parts of the affected system.9 The vulnerabilities introduced by access mandates discussed in the previous section are likely to include many in this category. They are difficult to defend against or contain, and they currently represent perhaps the most serious practical threat to networked computer security. 
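Blaze's point that such vulnerabilities let an attacker "take control over the system," not merely read the data a feature handles, can be illustrated with a hypothetical toy (the function and data names here are invented for illustration, not drawn from the testimony): a diagnostic interface carelessly built on Python's eval gives whoever reaches it the run of the entire process.

```python
# Toy illustration (hypothetical names): how a single careless feature --
# here, a "diagnostic" handler built on eval() -- lets an attacker run
# arbitrary code, reaching data the feature was never meant to expose.

SECRET_DATABASE = {"alice": "account 4421"}  # unrelated data in same process

def handle_diagnostic(request: str):
    # Intended only to evaluate simple status expressions like "1 + 1"...
    return eval(request)  # BUG: evaluates arbitrary attacker-supplied code

# Intended use:
assert handle_diagnostic("1 + 1") == 2

# Attack: the same channel reaches everything the process can see.
leaked = handle_diagnostic("SECRET_DATABASE['alice']")
assert leaked == "account 4421"
```

A mandated access feature would be a far subtler version of the same pattern: a deliberately opened channel whose reach is hard to confine to its intended purpose.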
For better or worse, ordinary citizens, large and small businesses, and the government itself depend on the same software platforms that are used by the targets of criminal investigations. It is not just the Mafia and local drug dealers whose software is being weakened, but everyone’s. The stakes are not merely unauthorized exposure of relatively inconsequential personal chitchat, but also leaks of personal financial and health information, disclosure of proprietary corporate data, and compromises of the platforms that manage and control our critical infrastructure. In summary, the technical vulnerabilities that would inevitably be introduced by requirements for law enforcement access will provide rich, attractive targets not only for relatively petty criminals such as identity thieves, but also for organized crime, terrorists, and hostile intelligence services. It is not an exaggeration to understand these risks as a significant threat to our economy and to national security. And, vulnerabilities in software provide funding mechanisms for organized crime. Peha, 13 Jon M. Peha is a professor at Carnegie Mellon in the Dept. of Electrical & Computer Engineering and the Dept. of Engineering & Public Policy, served as Chief Technologist of the Federal Communications Commission and Assistant Director of the White House’s Office of Science and Technology Policy. "The dangerous policy of weakening security to facilitate surveillance." Available at SSRN 2350929 (2013). Weak Security is Dangerous Giving law enforcement and intelligence agencies the ability to conduct electronic surveillance is part of a strategy to limit threats from criminals, foreign powers, and terrorists, but so is strengthening the cybersecurity used by all Americans. Weak cybersecurity creates opportunities for sophisticated criminal organizations. Well-funded criminal organizations will turn to cybercrime for the same reason they turn to illegal drugs; there is money to be made. 
This imposes costs on the rest of us. The costs of malicious cyberactivities take many forms, including direct financial losses (e.g. fraudulent use of credit cards), theft of intellectual property, theft of sensitive business information, opportunity costs such as the lost productivity when a computer system is taken down, and the damage to a company’s reputation when others learn its systems have been breached. One recent study says that estimates of these costs range from $24 billion to $120 billion per year in the U.S.3 Weakened security can only increase the high cost of cybercrime. Cybercrime provides substantial financial support for Russian organized crime. Grabosky, 13 Peter. Peter Grabosky is a Professor in the Regulatory Institutions Network, Australian National University, and a Fellow of the Academy of the Social Sciences in Australia. "Organised Crime and the Internet: Implications for National Security." The RUSI Journal 158.5 (2013): 18-25. The Internet is commonly used as an instrument for attacking other computer systems. Most cybercrimes begin when an offender obtains unauthorised access to another system. Systems are often attacked in order to destroy or damage them and the information that they contain. This can be an act of vandalism or protest, or activity undertaken in furtherance of other political objectives. One of the more common forms is the distributed-denial-of-service (DDoS) attack, which entails flooding a target computer system with a massive volume of information so that the system slows down significantly. Botnets are quite useful for such purposes, as are multiple co-ordinated service requests. A notorious example of a botnet-initiated DDoS attack occurred in April 2007, when government and commercial servers in Estonia were seriously degraded over a number of days. Online banking services were intermittently disrupted, and access to government sites and to online news media was limited. 
The attacks appear to have originated in Russia and are alleged to have resulted from the collaboration of Russian youth organisations and Russian organised-crime groups, condoned by the state, although the degree to which the Russian government was complicit in the attacks is unclear.18 Just as state actors or their agents can use the Internet to pursue what they perceive to be goals of national security, so have insurgent and extremist groups used Internet technology in various ways to further their causes. These include using the Internet as an instrument of theft in order to enhance their resource base; for instance, as a vehicle for fraud. Imam Samudra, the architect of the 2002 Bali bombings, reportedly called upon his followers to commit credit-card fraud in order to finance militant activities.19 Jihadist propaganda and incitement messages also abound in cyberspace. Yet the Internet is not used for illicit purposes solely or even primarily by political actors. Organised-crime groups use it daily on a global scale, engaging in activities that range from the illicit acquisition, copying and dissemination of intellectual property (piracy has allegedly cost the software and entertainment industries billions of dollars)20 to the plundering of banking and credit-card details, commercial trade secrets and classified information held by governments. This too may begin with unauthorised access to a computer system: indeed, the theft of personal financial details has provided the basis for thriving markets in such data, which enable fraud on a significant scale.21 Russian organized crime is the most likely scenario for nuclear terrorism. 
Zaitseva 7—Center for International Security and Cooperation (CISAC) Visiting Fellow from the National Nuclear Centre in Semipalatinsk (Lyudmila, 2007, Strategic Insights, Volume VI, Issue 5, “Organized Crime, Terrorism and Nuclear Trafficking,” rmf) The use of radioactive material for malicious purposes falls within the range of capabilities of organized criminal structures, at least those in Russia. Such malevolent use may be indirect evidence of organized crime involvement in the illicit trafficking of radioactive substances. More than a dozen malevolent radiological acts, such as intentional contamination and irradiation of persons, have been reported in open sources since 1993. One of them, which happened in Guangdong Province of China in 2002, resulted in significant exposure of as many as 74 people working in the same hospital.[55] Two incidents—both in Russia—have been linked to organized crime. A widely publicized murder of a Moscow businessman with a strong radioactive source implanted in the head-rest of his office chair in 1993 was one of them. The director of a packaging company died of radiation sickness after several weeks of exposure. The culprit was never found and it was alleged that the mafia might have been behind the ploy to remove a business competitor.[56] The same source mentioned a similar incident, which happened in Irkutsk around the same time, when somebody planted radiation sources in office chairs in an attempt to kill two company directors before the "hot seats" were discovered and removed. No speculations were made regarding possible mafia involvement in this murder attempt, although it cannot be excluded. A less known case with strong indications that shady criminal networks may have plotted it happened more recently in St. Petersburg. 
On March 18, 2005, Moskovskiye Novosti published an article in which the author discussed several high-profile assassinations and murders in Russia and abroad using various methods of poisoning. One such killing was reportedly performed with a highly radioactive substance. In September 2004, the head of the Baltik-Escort security company in St. Petersburg and FSB Colonel, Roman Tsepov, died a sudden and mysterious death as a result of what was suspected to be poisoning. However, according to a source in the St. Petersburg Public Prosecutor’s Office, the posthumous examination established that the death had been caused by an unspecified radioactive element. In the past, Tsepov was reportedly in charge of collecting protection money from casinos and other commercial enterprises in St. Petersburg on behalf of ‘a high-ranking FSB official’.[57] These two incidents demonstrate that some organized crime structures have knowledge of the characteristics and effects of specific radioactive materials, have access to these substances, and do not shy away from using them as weapons of murder that are hard to trace to the perpetrators. Terrorist Networks and Nuclear Trafficking Terrorism changes together with society, and in order to preserve itself as a phenomenon it must use what society gives it, ‘including the most modern weapons and advanced ideas.’[58] The risk of terrorists obtaining nuclear fissile material is small, but real. 
After the terrorist attack on the school in Beslan in September 2004, the Head of the Russian Federal Agency for Atomic Energy (Rosatom, formerly Minatom), Alexander Rumyantsev, said that the possibility that international terrorist groups may acquire nuclear fissile material, including HEU and plutonium, as well as nuclear weapons technologies, could no longer be ruled out.[59] Such a risk is much higher for radiological material, which is omnipresent around the world and is not subject to nearly the same level of control and protection as nuclear fissile material. Its use as a weapon in a radiological dispersal device (RDD) would also require just a fraction of the time, investment, and skills needed to make a crude nuclear weapon. These reasons make the deployment of radiological material the most probable scenario of nuclear terrorism. Although radioactive substances have already been used as a weapon of killing and a threat mechanism, so far there is no evidence of their successful deployment in terrorist acts. The only case that comes close to deployment of an RDD was recorded in Chechnya in 1998, when the local authorities found a container filled with radioactive substances, emitting strong radiation levels, with a mine attached to it, buried next to a railway line.[60] The local authorities considered the incident a foiled act of sabotage. Chechen fighters are also believed to have made several raids on the Radon radioactive waste depository, located in the vicinity of Grozny, and to have stolen several containers with radioactive substances.[61] In 1996, the director of the Radon facility confirmed that about half of some 900 cubic meters of waste, with radioactivity levels of 1,500 curies, which had been stored at the facility at the start of the first Chechen war in November 1994, was missing.[62] The Russian authorities believe the terrorists were planning to use it in explosions in order to spread contamination.
It should be noted that Chechen extremists stand out from many other terrorist organizations by persistently making threats to use nuclear technologies in their acts of violence. The notorious burial of a radiation source in Moscow's Gorky Park in 1995 by the now late field commander Shamil Basayev, and the threat by Ahmed Zakayev after the Moscow theater siege in October 2002 that the next time a nuclear facility would be seized, are just two such examples.[63] In January 2003, Colonel-General Igor Valynkin, the chief of the 12th Main Directorate of the Russian Ministry of Defence, in charge of protecting Russia’s nuclear weapons, said “operational information indicates that Chechen terrorists intend to seize some important military facility or nuclear munitions in order to threaten not only the country, but the entire world.”[64] According to an assessment by a Russian expert on nonproliferation, whereas unauthorized access to nuclear munitions by terrorist groups is ‘extremely improbable,’ access to and theft of nuclear weapons during transport or disassembly cannot be wholly excluded.[65] Russia’s top security officials have recently admitted that they have knowledge of the intent and attempts by terrorists to gain access to nuclear material. In August 2005, the director of the Russian Federal Security Service, Nikolay Patrushev, said at a conference that his agency had information about attempts by terrorist groups to acquire nuclear, biological and chemical weapons of mass destruction.[66] Later that year, the Minister of Interior, Rashid Nurgaliev, stated that international terrorists intended to “seize nuclear materials and use them to build WMD.”[67] If terrorists have indeed attempted to gain access to nuclear material in order to use it for the construction of WMD, such attempts have not been revealed to the public. Out of almost 1,100 trafficking incidents recorded in the DSTO since 1991, only one has reportedly involved terrorists other than Chechen fighters.
The incident was recorded in India in August 2001, when Border Security Force (BSF) officials seized 225 grams of uranium in Balurghat, northern West Bengal, along the India-Bangladesh border. Two local men, described as ‘suspected terrorists’, were arrested. Indian intelligence agencies suspect that the uranium was bound for Muslim fighters in the disputed regions of Jammu and Kashmir and that agents of Pakistan's Inter-Services Intelligence (ISI) were involved.[68] Whether the arrested suspects were indeed members of a terrorist organization remains unclear based on the available information. Conclusion Alliances between terrorist groups, drug cartels, and transnational criminal networks are a well-known fact. Such alliances have successfully operated for years in Latin America and in Central, South, and South-Eastern Asia. The involvement of organized criminal groups—albeit relatively small and unsophisticated ones—in nuclear smuggling activities has also been established based on the study of some 400 nuclear trafficking incidents recorded in the DSTO database between January 2001 and December 2005. Elements of organized crime could be identified in about 10 percent of these incidents. However, no reliable evidence of “marriages of convenience” between all three—organized crime, terrorists, and nuclear trafficking—could be found. Nuclear terrorism causes retaliation and nuclear war – draws in Russia and China Robert Ayson, Professor of Strategic Studies and Director of the Centre for Strategic Studies: New Zealand at the Victoria University of Wellington, 10 (“After a Terrorist Nuclear Attack: Envisaging Catalytic Effects,” Studies in Conflict & Terrorism, Volume 33, Issue 7, July, Available Online to Subscribing Institutions via InformaWorld) A terrorist nuclear attack, and even the use of nuclear weapons in response by the country attacked in the first place, would not necessarily represent the worst of the nuclear worlds imaginable.
Indeed, there are reasons to wonder whether nuclear terrorism should ever be regarded as belonging in the category of truly existential threats. A contrast can be drawn here with the global catastrophe that would come from a massive nuclear exchange between two or more of the sovereign states that possess these weapons in significant numbers. Even the worst terrorism that the twenty-first century might bring would fade into insignificance alongside considerations of what a general nuclear war would have wrought in the Cold War period. And it must be admitted that as long as the major nuclear weapons states have hundreds and even thousands of nuclear weapons at their disposal, there is always the possibility of a truly awful nuclear exchange taking place precipitated entirely by state possessors themselves. But these two nuclear worlds—a non-state actor nuclear attack and a catastrophic interstate nuclear exchange—are not necessarily separable. It is just possible that some sort of terrorist attack, and especially an act of nuclear terrorism, could precipitate a chain of events leading to a massive exchange of nuclear weapons between two or more of the states that possess them. In this context, today’s and tomorrow’s terrorist groups might assume the place allotted during the early Cold War years to new state possessors of small nuclear arsenals who were seen as raising the risks of a catalytic nuclear war between the superpowers started by third parties. These risks were considered in the late 1950s and early 1960s as concerns grew about nuclear proliferation, the so-called n+1 problem. It may require a considerable amount of imagination to depict an especially plausible situation where an act of nuclear terrorism could lead to such a massive inter-state nuclear war.
For example, in the event of a terrorist nuclear attack on the United States, it might well be wondered just how Russia and/or China could plausibly be brought into the picture, not least because they seem unlikely to be fingered as the most obvious state sponsors or encouragers of terrorist groups. They would seem far too responsible to be involved in supporting that sort of terrorist behavior that could just as easily threaten them as well. Some possibilities, however remote, do suggest themselves. For example, how might the United States react if it was thought or discovered that the fissile material used in the act of nuclear terrorism had come from Russian stocks,40 and if for some reason Moscow denied any responsibility for nuclear laxity? The correct attribution of that nuclear material to a particular country might not be a case of science fiction given the observation by Michael May et al. that while the debris resulting from a nuclear explosion would be “spread over a wide area in tiny fragments, its radioactivity makes it detectable, identifiable and collectable, and a wealth of information can be obtained from its analysis: the efficiency of the explosion, the materials used and, most important … some indication of where the nuclear material came from.”41 Alternatively, if the act of nuclear terrorism came as a complete surprise, and American officials refused to believe that a terrorist group was fully responsible (or responsible at all), suspicion would shift immediately to state possessors. Ruling out Western ally countries like the United Kingdom and France, and probably Israel and India as well, authorities in Washington would be left with a very short list consisting of North Korea, perhaps Iran if its program continues, and possibly Pakistan. But at what stage would Russia and China be definitely ruled out in this high stakes game of nuclear Cluedo?
In particular, if the act of nuclear terrorism occurred against a backdrop of existing tension in Washington’s relations with Russia and/or China, and at a time when threats had already been traded between these major powers, would officials and political leaders not be tempted to assume the worst? Of course, the chances of this occurring would only seem to increase if the United States was already involved in some sort of limited armed conflict with Russia and/or China, or if they were confronting each other from a distance in a proxy war, as unlikely as these developments may seem at the present time. The reverse might well apply too: should a nuclear terrorist attack occur in Russia or China during a period of heightened tension or even limited conflict with the United States, could Moscow and Beijing resist the pressures that might rise domestically to consider the United States as a possible perpetrator or encourager of the attack? Washington’s early response to a terrorist nuclear attack on its own soil might also raise the possibility of an unwanted (and nuclear aided) confrontation with Russia and/or China. For example, in the noise and confusion during the immediate aftermath of the terrorist nuclear attack, the U.S. president might be expected to place the country’s armed forces, including its nuclear arsenal, on a higher stage of alert. In such a tense environment, when careful planning runs up against the friction of reality, it is just possible that Moscow and/or China might mistakenly read this as a sign of U.S. intentions to use force (and possibly nuclear force) against them. In that situation, the temptations to preempt such actions might grow, although it must be admitted that any preemption would probably still meet with a devastating response. Advantage 2 – Innovation Backdoors stifle innovation because they require centralized information flows. 
Tokmetzi 15 Dimitri, Data Journalist at the Correspondent (Netherlands) “Think piece: How to protect privacy and security?” Global Conference on CyberSpace 2015, 16 - 17 April 2015, The Hague, The Netherlands https://www.gccs2015.com/sites/default/files/documents/How%20to%20protect%20privacy%20and%20security%20in%20the%20crypto%20wars.pdf Unsound economics The second argument is one of economics. Backdoors can stifle innovation. Until very recently, communications were a matter for a few big companies, often state-owned. The architecture of their systems changed slowly, so it was relatively cheap and easy to build a wiretapping facility into them. Today thousands of start-ups handle communications in one form or another. And with each new feature these companies provide, the architecture of the systems changes. It would be a big burden for these companies if they had to ensure that governments can always intercept and decrypt their traffic. Backdoors require centralised information flows, but the most exciting innovations are moving in the opposite direction, i.e. towards decentralised services. More and more web services are using peer-to-peer technology through which computers talk directly to one another, without a central point of control. File storage services as well as payment processing and communications services are now being built in this decentralised fashion. It’s extremely difficult to wiretap these services. And if you were to force companies to make such wiretapping possible, it would become impossible for these services to continue to exist. A government that imposes backdoors on its tech companies also risks harming their export opportunities. For instance, Huawei – the Chinese manufacturer of phones, routers and other network equipment – is unable to gain market access in the US because of fears of Chinese backdoors built into its hardware.
US companies, especially cloud storage providers, have lost overseas customers due to fears that the NSA or other agencies could access client data. Unilateral demands for backdoors could put companies in a tight spot. Or, as researcher Julian Sanchez of the libertarian Cato Institute says: ‘An iPhone that Apple can’t unlock when American cops come knocking for good reasons is also an iPhone they can’t unlock when the Chinese government comes knocking for bad ones.’ And, backdoors undermine the fundamental structure of the internet – this disrupts any future innovation. Hugo Zylberberg, Master in Public Policy candidate at Harvard’s Kennedy School of Government, 3-12-2015, "The Return of the Crypto Wars," Kennedy School Review, http://harvardkennedyschoolreview.com/the-return-of-the-crypto-wars/ But backdoors are a problem for yet another reason. They clash with the end-to-end argument that is at the very core of the architecture of the internet: the network should be as simple and agnostic as possible regarding the communications that it supports. More advanced functionalities should be developed at end nodes (computers, mobiles, wearable devices). This, argue researchers, allows the network “to support new and unanticipated applications.” The end-to-end argument has ignited unprecedented levels of innovation. The backdoors that intelligence agencies are trying to promote would apply to our communications system as a whole, not only to the end nodes that are the devices with which we send the messages. This violates the end-to-end argument and undermines trust in the internet as a communications system. Such backdoors would undermine the generative internet as we know it, reducing every user’s capacity to innovate and disseminate products of innovation to billions of people in a secure and sustainable way. Internet innovation minimizes energy inefficiency and is the only way to solve global warming.
Crowe 14 Tyler, Energy and materials columnist for the Motley Fool, 3-2-2014, "Internet of things can battle climate change," USA TODAY, http://www.usatoday.com/story/money/personalfinance/2014/03/02/internetbattle-climate-change/5899331/ Machine to machine communication, or the internet of things, is on the precipice of taking the world by storm. At its very core, machine to machine communication is the ability to connect everything, I mean everything, through a vast network of sensors and devices which can communicate with each other. The possibilities of this technological evolution span an immensely wide spectrum, ranging from monitoring your health through your smartphone, to your house knowing where you are to adjust lighting and heating. The way that the internet could revolutionize our lives can be hard to conceptualize all at once. So today let's focus on one place where machine to machine communication could have an immense impact: Energy consumption. Not only could this technology make turning the lights on easier, but it could be the key to us effectively managing anthropogenic carbon emissions. Regardless of your thoughts and opinions on climate change and the scope of how much carbon emissions affect the global atmosphere, we all can agree on one thing: Emitting less carbon is a good thing, especially if it can be done without impeding economic growth. For years, the battleground for the climate change debate has been on the energy generation side, pitting alternative energy options like wind and solar against fossil fuels. The problem with fixating on this side of the argument, though, is that even under the most ambitious outlooks for alternative energy growth, we will never be able to get carbon emissions below the threshold many think is required to prevent significant temperature changes over the next century. Does that mean there's no shot at significantly reducing carbon emissions? No -- we're just focusing on the wrong side of the energy equation, and that is where machine to machine communications comes into play.
Let's look at what the internet of things can mean for carbon emissions, and how investors could make some hefty profits from it. Energy consumption's overdue evolution We humans are a fascinating study in inefficiency. We will sit in traffic on the freeway rather than take the alternative route on "slower" roads. We oversupply the electricity grid because we don't know precisely how much demand is needed at any given moment. It's not that we deliberately try to do things less efficiently; we just don't always have the adequate information to make the most efficient decision. When you add all of these little inefficiencies up, it amounts to massive amounts of wasted energy and, in turn, unnecessary carbon emissions. In the U.S. alone, 1.9 billion gallons of fuel is consumed every year from drivers sitting in traffic. That's 186 million tons of unnecessary CO2 emissions each year just in the U.S. Now, imagine a world where every automobile was able to communicate with the others, giving instant feedback on traffic conditions and providing alternative routes to avoid traffic jams. This is the fundamental concept of machine-to-machine communications, and it goes way beyond the scope of just automobiles and household conveniences. One of the added benefits of this technology is the impact it could have on our everyday energy consumption and the ultimate reduction in total carbon emissions. A recent report by the Carbon War Room estimates that the incorporation of machine-to-machine communication in the energy, transportation, built environment (its fancy term for buildings), and agriculture sectors could reduce global greenhouse gas emissions by 9.1 gigatons of CO2 equivalent annually.
That's 18.2 trillion pounds, or equivalent to eliminating all of the United States' and India's total greenhouse gas emissions combined, and more than triple the reductions we can expect with an extremely ambitious alternative energy conversion program. How is this possible? Increased communication between everything -- engines, appliances, generators, automobiles -- allows for instant feedback for more efficient travel routes, optimized fertilizer and water consumption to reduce deforestation, real-time monitoring of electricity consumption and instant feedback to generators, and fully integrated heating, cooling, and lighting systems that can adjust for human occupancy. There are lots of projections and estimates related to carbon emissions and climate change, but the one that has emerged as the standard bearer is the amount of carbon emissions it would take to increase global temperatures by 2 degrees Centigrade. According to the UN's Environment Programme, annual anthropogenic greenhouse gas emissions would need to decrease by 15% from recent levels to keep us under that atmospheric carbon threshold. Based on current emissions and the 9.1 gigaton estimate from the Carbon War Room's report, machine-to-machine communication would be enough to reduce global emissions by 18.6%, well within the range of the UN's projections. The internet of things is still very much in its infancy, but it's taking off fast. The pending boom in machine-to-machine communication helps explain why Google (GOOG) shelled out more than $3.2 billion for smart-thermostat company Nest Labs. Its technology allows customers to better manage heating and cooling in households and instantly provide feedback to utilities in order to better manage energy demand during peak load hours. Warming causes extinction.
Roberts 13 [David, citing the World Bank Review’s compilation of climate studies, “If you aren’t alarmed about climate, you aren’t paying attention” http://grist.org/climate-energy/climate-alarmism-the-idea-issurreal] We know we’ve raised global average temperatures around 0.8 degrees C so far. We know that 2 degrees C is where most scientists predict catastrophic and irreversible impacts. And we know that we are currently on a trajectory that will push temperatures up 4 degrees or more by the end of the century. What would 4 degrees look like? A recent World Bank review of the science reminds us. First, it’ll get hot: Projections for a 4°C world show a dramatic increase in the intensity and frequency of high-temperature extremes. Recent extreme heat waves such as in Russia in 2010 are likely to become the new normal summer in a 4°C world. Tropical South America, central Africa, and all tropical islands in the Pacific are likely to regularly experience heat waves of unprecedented magnitude and duration. In this new high-temperature climate regime, the coolest months are likely to be substantially warmer than the warmest months at the end of the 20th century. In regions such as the Mediterranean, North Africa, the Middle East, and the Tibetan plateau, almost all summer months are likely to be warmer than the most extreme heat waves presently experienced. For example, the warmest July in the Mediterranean region could be 9°C warmer than today’s warmest July. Extreme heat waves in recent years have had severe impacts, causing heat-related deaths, forest fires, and harvest losses. The impacts of the extreme heat waves projected for a 4°C world have not been evaluated, but they could be expected to vastly exceed the consequences experienced to date and potentially exceed the adaptive capacities of many societies and natural systems. 
[my emphasis] Warming to 4 degrees would also lead to “an increase of about 150 percent in acidity of the ocean,” leading to levels of acidity “unparalleled in Earth’s history.” That’s bad news for, say, coral reefs: The combination of thermally induced bleaching events, ocean acidification, and sea-level rise threatens large fractions of coral reefs even at 1.5°C global warming. The regional extinction of entire coral reef ecosystems, which could occur well before 4°C is reached, would have profound consequences for their dependent species and for the people who depend on them for food, income, tourism, and shoreline protection. It will also “likely lead to a sea-level rise of 0.5 to 1 meter, and possibly more, by 2100, with several meters more to be realized in the coming centuries.” That rise won’t be spread evenly, even within regions and countries — regions close to the equator will see even higher seas. There are also indications that it would “significantly exacerbate existing water scarcity in many regions, particularly northern and eastern Africa, the Middle East, and South Asia, while additional countries in Africa would be newly confronted with water scarcity on a national scale due to population growth.” Also, more extreme weather events: Ecosystems will be affected by more frequent extreme weather events, such as forest loss due to droughts and wildfire exacerbated by land use and agricultural expansion. In Amazonia, forest fires could as much as double by 2050 with warming of approximately 1.5°C to 2°C above preindustrial levels. Changes would be expected to be even more severe in a 4°C world. Also loss of biodiversity and ecosystem services: In a 4°C world, climate change seems likely to become the dominant driver of ecosystem shifts, surpassing habitat destruction as the greatest threat to biodiversity. 
Recent research suggests that large-scale loss of biodiversity is likely to occur in a 4°C world, with climate change and high CO2 concentration driving a transition of the Earth’s ecosystems into a state unknown in human experience. Ecosystem damage would be expected to dramatically reduce the provision of ecosystem services on which society depends (for example, fisheries and protection of coastline afforded by coral reefs and mangroves). New research also indicates a “rapidly rising risk of crop yield reductions as the world warms.” So food will be tough. All this will add up to “large-scale displacement of populations and have adverse consequences for human security and economic and trade systems.” Given the uncertainties and long-tail risks involved, “there is no certainty that adaptation to a 4°C world is possible.” There’s a small but non-trivial chance of advanced civilization breaking down entirely. Now ponder the fact that some scenarios show us going up to 6 degrees by the end of the century, a level of devastation we have not studied and barely know how to conceive. Ponder the fact that somewhere along the line, though we don’t know exactly where, enough self-reinforcing feedback loops will be running to make climate change unstoppable and irreversible for centuries to come. That would mean handing our grandchildren and their grandchildren not only a burned, chaotic, denuded world, but a world that is inexorably more inhospitable with every passing decade. Solvency The plan solves – strong encryption is key to the internet. Kehl, 15 Danielle Kehl is a senior policy analyst at New America's Open Technology Institute, BA cum laude Yale 6-17-2015, "Doomed To Repeat History?
Lessons From The Crypto Wars Of The 1990s," New America, https://www.newamerica.org/oti/doomed-to-repeat-history-lessons-from-the-crypto-wars-of-the1990s/ Strong encryption has become a bedrock technology that protects the security of the internet. The evolution of the ecosystem for encrypted communications has also enhanced the protection of individual communications and improved cybersecurity. Today, strong encryption is an essential ingredient in the overall security of the modern network, and adopting technologies like HTTPS is increasingly considered an industry best-practice among major technology companies.177 Even the report of the President’s Review Group on Intelligence and Communications Technologies, the panel of experts appointed by President Barack Obama to review the NSA’s surveillance activities after the 2013 Snowden leaks, was unequivocal in its emphasis on the importance of strong encryption to protect data in transit and at rest. The Review Group wrote that: Encryption is an essential basis for trust on the Internet; without such trust, valuable communications would not be possible. For the entire system to work, encryption software itself must be trustworthy. Users of encryption must be confident, and justifiably confident, that only those people they designate can decrypt their data…. Indeed, in light of the massive increase in cyber-crime and intellectual property theft on-line, the use of encryption should be greatly expanded to protect not only data in transit, but also data at rest on networks, in storage, and in the cloud.178 The report further recommended that the U.S.
government should: Promote security[] by (1) fully supporting and not undermining efforts to create encryption standards; (2) making clear that it will not in any way subvert, undermine, weaken, or make vulnerable generally available commercial encryption; and (3) supporting efforts to encourage the greater use of encryption technology for data in transit, at rest, in the cloud, and in storage.179 Moreover, there is now a significant body of evidence that, as Bob Goodlatte argued back in 1997, “Strong encryption prevents crime.”180 This has become particularly true as smartphones and other personal devices that store vast amounts of user data have risen in popularity over the past decade. Encryption can stop or mitigate the damage from crimes like identity theft and fraud targeted at smartphone users.181 Backdoors and other vulnerabilities should be rejected – only the plan solves. The consensus of academic computer scientists agrees. Abadi, et al. 14 Martín Abadi Professor Emeritus, University of California, Santa Cruz; Hal Abelson Professor, Massachusetts Institute of Technology; Alessandro Acquisti Associate Professor, Carnegie Mellon University; Boaz Barak Editorial-board member, Journal of the ACM; Mihir Bellare Professor, University of California, San Diego; Steven Bellovin Professor, Columbia University; Matt Blaze Associate Professor, University of Pennsylvania; L.
Jean Camp Professor, Indiana University; Ran Canetti Professor, Boston University and Tel Aviv University; Lorrie Faith Cranor Associate Professor, Carnegie Mellon University; Cynthia Dwork Member, US National Academy of Engineering; Joan Feigenbaum Professor, Yale University; Edward Felten Professor, Princeton University; Niels Ferguson Author, Cryptography Engineering: Design Principles and Practical Applications; Michael Fischer Professor, Yale University; Bryan Ford Assistant Professor, Yale University; Matthew Franklin Professor, University of California, Davis; Juan Garay Program Committee Co-Chair, CRYPTO2 2014; Matthew Green Assistant Research Professor, Johns Hopkins University; Shai Halevi Director, International Association for Cryptologic Research; Somesh Jha Professor, University of Wisconsin – Madison; Ari Juels Program Committee Co-Chair, 2013 ACM Cloud-Computing Security Workshop; M. Frans Kaashoek Professor, Massachusetts Institute of Technology; Hugo Krawczyk Fellow, International Association for Cryptologic Research; Susan Landau Author, Surveillance or Security? 
The Risks Posed by New Wiretapping Technologies; Wenke Lee Professor, Georgia Institute of Technology; Anna Lysyanskaya Professor, Brown University; Tal Malkin Associate Professor, Columbia University; David Mazières Associate Professor, Stanford University; Kevin McCurley Fellow, International Association for Cryptologic Research; Patrick McDaniel Professor, The Pennsylvania State University; Daniele Micciancio Professor, University of California, San Diego; Andrew Myers Professor, Cornell University; Rafael Pass Associate Professor, Cornell University; Vern Paxson Professor, University of California, Berkeley; Jon Peha Professor, Carnegie Mellon University; Thomas Ristenpart Assistant Professor, University of Wisconsin – Madison; Ronald Rivest Professor, Massachusetts Institute of Technology; Phillip Rogaway Professor, University of California, Davis; Greg Rose Officer, International Association for Cryptologic Research; Amit Sahai Professor, University of California, Los Angeles; Bruce Schneier Fellow, Berkman Center for Internet and Society, Harvard Law School; Hovav Shacham Associate Professor, University of California, San Diego; Abhi Shelat Associate Professor, University of Virginia; Thomas Shrimpton Associate Professor, Portland State University; Avi Silberschatz Professor, Yale University; Adam Smith Associate Professor, The Pennsylvania State University; Dawn Song Associate Professor, University of California, Berkeley; Gene Tsudik Professor, University of California, Irvine; Salil Vadhan Professor, Harvard University; Rebecca Wright Professor, Rutgers University; Moti Yung Fellow, Association for Computing Machinery; Nickolai Zeldovich Associate Professor, Massachusetts Institute of Technology; "An open letter from US researchers in cryptography and information security." (2014). 
http://people.csail.mit.edu/rivest/pubs/Ax14.pdf Media reports since last June have revealed that the US government conducts domestic and international surveillance on a massive scale, that it engages in deliberate and covert weakening of Internet security standards, and that it pressures US technology companies to deploy backdoors and other data-collection features. As leading members of the US cryptography and information-security research communities, we deplore these practices and urge that they be changed. Indiscriminate collection, storage, and processing of unprecedented amounts of personal information chill free speech and invite many types of abuse, ranging from mission creep to identity theft. These are not hypothetical problems; they have occurred many times in the past. Inserting backdoors, sabotaging standards, and tapping commercial data-center links provide bad actors, foreign and domestic, opportunities to exploit the resulting vulnerabilities. The value of society-wide surveillance in preventing terrorism is unclear, but the threat that such surveillance poses to privacy, democracy, and the US technology sector is readily apparent. Because transparency and public consent are at the core of our democracy, we call upon the US government to subject all mass-surveillance activities to public scrutiny and to resist the deployment of mass-surveillance programs in advance of sound technical and social controls. In finding a way forward, the five principles promulgated at http://reformgovernmentsurveillance.com/ provide a good starting point. The choice is not whether to allow the NSA to spy. The choice is between a communications infrastructure that is vulnerable to attack at its core and one that, by default, is intrinsically secure for its users. 
Every country, including our own, must give intelligence and law-enforcement authorities the means to pursue terrorists and criminals, but we can do so without fundamentally undermining the security that enables commerce, entertainment, personal communication, and other aspects of 21st-century life. We urge the US government to reject society-wide surveillance and the subversion of security technology, to adopt state-of-the-art, privacy-preserving technology, and to ensure that new policies, guided by enunciated principles, support human rights, trustworthy commerce, and technical innovation. US should take the lead on encryption – other countries will follow. Ranger, 15 Steve Ranger, UK editor of TechRepublic, 3-23-2015, "The undercover war on your internet secrets: How online surveillance cracked our trust in the web," TechRepublic, http://www.techrepublic.com/article/the-undercover-war-on-your-internet-secrets-how-online-surveillance-cracked-our-trust-in-the-web/ Back in the 1990s and 2000s, encryption was a complicated, minority interest. Now it is becoming easy and mainstream, not just for authenticating transactions but for encrypting data and communications. Back then, it was also mostly a US debate because that was where most strong encryption was developed. But that's no longer the case: encryption software can be written anywhere and by anyone, which means no one country can dictate global policy anymore. Consider this: the right to privacy has long been considered a qualified rather than an absolute right — one that can be infringed, for example, on the grounds of public safety, or to prevent a crime, or in the interests of national security. Few would agree that criminals or terrorists have the right to plot in secret. What the widespread use of strong, well-implemented encryption does is promote privacy to an absolute right. 
If you have encrypted a hard drive or a smartphone correctly, it cannot be unscrambled (or at least not for a few hundred thousand years). At a keystroke, it makes absolute privacy a reality, and thus rewrites one of the fundamental rules by which societies have been organised. No wonder the intelligence services have been scrambling to tackle our deliberately scrambled communications. And our fear of crime — terrorism in particular — has created another issue. We have demanded that the intelligence services and law enforcement try to reduce the risk of attack, and have accepted that they will gradually chip away at privacy in order to do that. However, what we haven't managed as a society is to decide what is an acceptable level of risk that such terrible acts might occur. Without that understanding of what constitutes an acceptable level of risk, any reduction in our privacy or civil liberties — whether breaking encryption or mass surveillance — becomes palatable. The point is often made that cars kill people and yet we still drive. We need to have a better discussion about what is an acceptable level of safety that we as a society require, and what is the impact on our privacy as a result. As the University of Surrey's Woodward notes: "Some of these things one might have to accept. Unfortunately there might not be any easy way around it, without the horrible unintended consequences. You make your enemies less safe but you also make your friends less safe by [attacking] encryption — and that is not a sensible thing to do." And while the US can no longer dictate policy on encryption, it could be the one to take a lead which others can follow. 
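The "cannot be unscrambled" claim above can be illustrated with a toy sketch (an editor's illustration, not the evidence's own example): a one-time pad, the textbook case of encryption that is information-theoretically unbreakable without the key. All names here are illustrative.

```python
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each data byte with a key byte. Applying the
    # same operation with the same key decrypts, so this one function
    # serves both directions.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # truly random key, kept secret

ciphertext = encrypt(message, key)
recovered = encrypt(ciphertext, key)      # key holder recovers the message

# Without `key`, every same-length plaintext is an equally plausible
# decryption of `ciphertext`, so no amount of computation can single
# out the real message -- the "absolute privacy" the card describes.
```

Practical disk and phone encryption uses AES rather than a one-time pad, but the policy point is the same: done correctly, the ciphertext is useless without the key.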
White House cybersecurity coordinator Michael Daniel recently argued that, as governments and societies are still wrestling with the issue of encryption, the US should come up with the policies and processes and "the philosophical underpinnings of what we want to do as a society with this so we can make the argument for that around the planet... to say, this is how free societies should come at this." But he doesn't underestimate the scale of the problem, either. A shift in policy toward protecting infrastructure is key to solving the coming clash between security and intelligence. Joshua A. Kroll 15, doctoral candidate in computer science at Princeton University, where he works on computer security and public policy issues at the university’s Center for Information Technology Policy, 6-1-2015, "The Cyber Conundrum," American Prospect, http://prospect.org/article/cyber-conundrum Moving to Protect-First Three months after NIST withdrew the DRBG standard, a review initiated by President Barack Obama called for a shift in policy. Regarding encryption, the President’s Review Group on Intelligence and Communications Technologies recommended that “the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge U.S. companies to do so.” But there were few visible signals that policy had changed. 
“No foreign nation, no hacker,” Obama said in his 2015 State of the Union speech, “should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families.” But the nearly $14 billion requested for cybersecurity in the president’s fiscal year 2016 budget proposal effectively supports and reinforces the current undermine-first policy, a policy that has failed to stop the flood of attacks on American businesses and the government itself by foreign intelligence services, weekend hacktivists, and common criminals. A protect-first policy of bolstering security technologies would identify the most critical pieces of security infrastructure, invest in making those defenses secure, and support their universal deployment. Such a policy would emphasize support for universal end-to-end encryption tools such as secure web browsing. A website is delivered securely when that site’s address starts with “https”—the ‘s’ stands for secure—and your browser puts a lock or key icon next to the address. Browsers can load and display secure pages, guaranteeing that while the pages are in transit from server to user, the pages remain confidential and are protected from tampering, and that the user’s browser verifies that the server is not an impostor. At present, secure browsing is underused and underfunded, leading to troubling security lapses. A notorious example is the Heartbleed bug, disclosed in April of 2014. Heartbleed allowed attackers to reach out across the Internet and extract the contents of a computer’s memory, including encryption keys, passwords, and private information. Two-thirds of the websites on the Internet were vulnerable, along with countless computers embedded in cars, wireless routers, home appliances, and other equipment. All of this was due to a single programming error in a software package called OpenSSL, which is used by the majority of websites that provide secure pages. Because exploitation via Heartbleed usually did not leave a record, the full consequences of Heartbleed will almost certainly never be known. 
By any measure, OpenSSL is a core piece of our cyber infrastructure. Yet it has been maintained by a very small team of developers—in the words of one journalist, “two guys named Steve”—and the foundation supporting it never had a budget reaching even $1 million per year. Despite its central role in web security, OpenSSL had never undergone a careful security audit. Matthew Green, a cryptographer at Johns Hopkins University and an outspoken critic of OpenSSL, said after Heartbleed that “the OpenSSL Foundation has some very devoted people, it just doesn’t have enough of them, and it can’t afford enough of them.” Since the Heartbleed attack, a consortium of companies, including some of the biggest names in the Internet business, pledged contributions of a few million dollars to start the Core Infrastructure Initiative (CII), a grant-making process for security audits of important infrastructure components like OpenSSL. CII’s budget of a few million dollars is nowhere near the few hundred million now devoted to the NSA’s SIGINT Enabling program, but it is a start. A more proactive government policy would provide ample funding for security audits. By leaving OpenSSL to its own devices, government perpetuates the status quo and implicitly rejects a protect-first strategy. 
A similar situation applies to encrypted email, the state of which is well conveyed by a recent ProPublica headline: “The World’s Email Encryption Software Relies on One Guy, Who is Going Broke.” Werner Koch, the author and maintainer of the software Gnu Privacy Guard—the most popular tool for encrypted email and a piece of critical security infrastructure used to verify the integrity of operating system updates on the most popular operating system for web servers—had been getting by on donations of $25,000 per year since 2001, and a new online fund drive was bringing only modest donations. The ProPublica piece brought attention to Koch’s plight, and a few hundred thousand dollars of donations poured in, enabling Koch to keep maintaining GPG. It was a success, of a sort. But passing the digital hat for donations is not a sustainable way to fund a critical security infrastructure. The Limitations of Surveillance Meanwhile, although precise numbers are hard to come by, one estimate is that 0.64 percent of U.S. gross domestic product is lost to cyber crime, an over–$400 billion global growth industry. Despite the fact that a cyberattack can decimate a company’s operations and pry loose its secrets, and despite billions of dollars in annual direct losses to foreign governments and criminals, the most popular systems for secure web page delivery and encrypted email get only crumbs from the $14 billion U.S. government cybersecurity budget. Instead, the government usually treats cybersecurity as a military or intelligence problem and therefore tends to look first to the military and the intelligence community for a solution. The result is massive surveillance that gathers situational awareness, hoping to connect the dots to find and stop attacks. Some surveillance happens quietly, coming into the public eye only through leaks and investigative journalism. Some happens more openly, under the guise of “information sharing” between companies and government. 
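The "single programming error" behind Heartbleed was, at bottom, a missing length check: the server echoed back as many bytes as a heartbeat request *claimed* to contain, not as many as it actually sent, spilling adjacent memory. A toy model of that pattern (an editor's sketch; the names and the in-memory layout are illustrative, not OpenSSL's actual code):

```python
# Toy Heartbleed model: a "heartbeat" echo service where bytes beyond
# the request stand in for adjacent process memory (keys, passwords).
SECRET_MEMORY = b"...private key bytes..."

def heartbeat_buggy(payload: bytes, claimed_len: int) -> bytes:
    # BUG (Heartbleed-style): trusts the sender-supplied length and
    # reads past the payload into adjacent "memory".
    memory = payload + SECRET_MEMORY
    return memory[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: reject any claimed length longer than the data actually sent.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds payload")
    return payload[:claimed_len]

# An attacker sends 4 bytes but claims 30, and receives secrets back.
leak = heartbeat_buggy(b"ping", claimed_len=30)
```

The fix that shipped in OpenSSL was conceptually this one-line bounds check, which is the card's point: a trivially auditable error persisted for years because no one funded the audit.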
Surveillance of adversaries, both overseas and domestically with an appropriate court order, is prudent and necessary to prevent attacks and inform diplomatic and military decisions. Universal domestic surveillance is harder to justify on the merits. Officials argue that they need all of the data if we want them to connect the dots. But the problem is not a lack of dots. More often, the problem is that the dots can be connected in too many ways. There is no reliable way to tell in advance which pattern marks an impending attack and which simply reflects one of the endless permutations of human social behavior. Surveillance data is more useful in hindsight. In the Sony Pictures hack, intelligence and investigation were critical in connecting the dots after the attack had happened, even though they did very little to prevent the attack or to discover it in the year or so that it was ongoing. Aggressive surveillance has limited efficacy and imposes real costs on U.S. companies. Users who are suspicious of the U.S. government—a group including most foreign users and more than a few Americans—want to steer clear of products and companies that might be complicit in surveillance. Foreign companies market themselves as more trustworthy because, unlike American companies, they can defy information demands from U.S. authorities. Analysts estimate that U.S. companies will lose at least tens of billions of dollars of business due to users’ surveillance concerns. At the same time, news of U.S. government demands for data emboldens demands for similar access by other governments—including countries with much weaker civil liberties records. Anything that facilitates U.S. government access will facilitate access by other governments. Industry worries, too, about direct government attacks on their infrastructures. 
That is exactly what happened when the NSA tapped into the private communications lines that Google, Yahoo, and other major Internet companies use to move data internally, enabling the NSA to capture information on the users of those systems without any request or notification. Consequently, the Internet companies are seen as either complicit or vulnerable—or both. The rift between government and industry was visible at the White House Summit on Cybersecurity and Consumer Protection, held at Stanford University on February 13. Obama called for “new legislation to promote greater information sharing between government and private sector, including liability protections for companies that share information about cyber threats,” and announced that “our new Cyber Threat Intelligence Integration Center [will be] a single entity that’s analyzing and integrating and quickly sharing intelligence about cyber threats across government so we can act on all those threats even faster.” After the speech, he signed an executive order implementing these proposals. To the president, cyber defense means collecting more information and using it more aggressively—a policy of undermining and surveillance.