1AC CryptoAff - Anastasia and Miah

1AC
SQ – Cryptowars
Cryptowars are coming now. The NSA and FBI want to block and undermine strong
encryption in favor of easy surveillance of all digital communication. Computer
scientists are fighting back.
Tokmetzi 15
Dimitri, Data Journalist at the Correspondent (Netherlands) “Think piece: How to protect privacy and
security?” Global Conference on CyberSpace 2015 16 - 17 April 2015 The Hague, The Netherlands
https://www.gccs2015.com/sites/default/files/documents/How%20to%20protect%20privacy%20and%20security%20in%20the%20crypto%20wars.pdf
We thought that the Crypto Wars of the nineties were over, but renewed fighting has erupted since the
Snowden revelations. On one side, law enforcement and intelligence agencies are afraid that broader use of
encryption on the Internet will make their work harder or even impossible. On the other, security experts and
activists argue that installing backdoors will make everyone unsafe. Is it possible to find some middle ground between
these two positions? ‘This is the story of how a handful of cryptographers “hacked” the NSA. It’s also a story of encryption backdoors, and why
they never quite work out the way you want them to.’ So began the blog post on the FREAK attack, one of the most ironic hacks of recent years.
Matthew Green, assistant professor at Johns Hopkins University, and a couple of international colleagues exploited a nasty bug on the servers
that host the NSA website. By forcing the servers to use an old, almost forgotten and weak type of encryption which they were able to crack
within a few hours, they managed to gain access to the backend of the NSA website, making it possible for them to alter its content. Worse still,
the cryptographers found that the same weak encryption was used on a third of the 14 million other websites they scanned. For instance, if
they had wanted to, they could have gained access to whitehouse.gov or tips.fbi.gov. Many smartphone apps turned out to be vulnerable as
well. The irony is this: this weak encryption was deliberately designed for software products exported from the US in the nineties. The NSA
wanted to snoop on foreign governments and companies if necessary and pushed for a weakening of encryption. This weakened encryption
somehow found its way back onto the servers of US companies and government agencies. ‘Since the NSA was the organization that demanded
export-grade crypto, it’s only fitting that they should be the first site affected by this vulnerability’, Green gleefully wrote. The FREAK attack
wasn’t only a show of technological prowess, but also a political statement. Ever since Edward
Snowden released the NSA
files in June 2013, a new battle has been raging between computer security experts and civil liberties
activists on one side and law enforcement and intelligence agencies on the other. There was one set of
revelations that particularly enraged the security community. In September 2013 the New York Times, ProPublica and the Guardian published a
story on the thorough and persistent efforts of the NSA and its British counterpart GCHQ to decrypt Internet traffic and databases. In a
prolonged, multi-billion operation dubbed ‘BULLRUN’, the intelligence agencies used supercomputers to
crack encryption, asked, persuaded or cajoled telecom and web companies to build backdoors into their
equipment and software, used their influence to plant weaknesses in cryptographic standards and simply
stole encryption keys from individuals and companies. A war is looming. But security specialists argue
that by attacking the encryption infrastructure of the Internet, the intelligence agencies have made us
all less safe. Terrorists and paedophiles may use encryption to protect themselves when planning and committing terrible crimes, but the
Internet as a whole cannot function without proper encryption. Governments cannot provide digital
services to their citizens if they cannot use safe networks. Banks and financial institutions must be able to
communicate data over secure channels. Online shops need to be able to process payments safely. And all companies and
institutions have to keep criminals and hackers out of their systems. Without strong encryption, trust
cannot exist online. Cryptographers have vowed to fight back. Major web companies like Google and Yahoo! promised
their clients strong end-to-end encryption for email and vowed to improve the security of their networks and databases. Apple developed a
new operating system that encrypted all content on the new iPhone by default. And hackers started developing web applications and hardware
with strong, more user-friendly encryption. In the past few years we have seen the launch of encrypted social media (Twister), smartphones
(Blackphone), chat software (Cryptocat), cloud storage (Boxcryptor), file sharing tools (Peerio) and secure phone and SMS apps (TextSecure and
Signal). This worries governments. In the wake of the attack on Charlie Hebdo in Paris, UK Prime Minister David Cameron implied that
encryption on certain types of communication services should be banned. In the US, FBI director James Comey recently warned that the
intelligence agencies are ‘going dark’ because of the emergence of default encryption settings on devices and in web applications. In Europe,
the US and elsewhere politicians are proposing that mandatory backdoors be incorporated in hardware and
software. Some even want governments to hold ‘golden keys’ that can decrypt all Internet traffic. The
obvious question is how we can meet the needs of all concerned? On the one hand, how can we ensure that intelligence and law enforcement
agencies have access to communications and data when they have a legal mandate to do so? Their needs are often legitimate. On the other,
how can we ensure strong data protection for all, not only a tech-savvy few? As we shall see, this crypto conflict isn’t new, nor is the obvious
question the right question to ask at this moment.
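
A technical aside (ours, not Tokmetzi's): the FREAK scan described above amounts to asking a server whether it will still negotiate 1990s "export-grade" cipher suites. A minimal Python probe along those lines is sketched below; the hostname is illustrative, and many modern OpenSSL builds have removed the export suites entirely, in which case set_ciphers() itself raises an error and the probe cannot run locally.

import socket
import ssl

def accepts_export_ciphers(host: str, port: int = 443) -> bool:
    # Offer ONLY the legacy 512-bit "export-grade" suites. A server that
    # completes this handshake is agreeing to crypto crackable in hours.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED  # allow old protocols
    ctx.set_ciphers("EXPORT")  # raises ssl.SSLError if the local build lacks them
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(host, "negotiated", tls.cipher())
                return True
    except (ssl.SSLError, OSError):
        return False  # refused the downgrade (or the connection failed)

if __name__ == "__main__":
    print(accepts_export_ciphers("example.com"))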
And, if intelligence agencies win the cryptowar, the result would gravely undermine
the security of all digital communication. Computer scientists conclusively vote aff.
Weitzner et al, 15,
DANIEL J. WEITZNER, Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab; HAROLD
ABELSON, Professor of Electrical Engineering and Computer Science at MIT; ROSS ANDERSON, Professor of Security
Engineering at the University of Cambridge; STEVEN M. BELLOVIN, Professor of Computer Science at Columbia
University; JOSH BENALOH, Senior Cryptographer at Microsoft Research; MATT BLAZE, Associate Professor of Computer
and Information Science at the University of Pennsylvania; WHITFIELD DIFFIE, whose discovery of the concept of public-key
cryptography opened up the possibility of secure, Internet-scale communications; JOHN GILMORE, co-founder of Cygnus
Solutions and the Electronic Frontier Foundation; MATTHEW GREEN, Research Professor at the Johns Hopkins University
Information Security Institute; PETER G. NEUMANN, Senior Principal Scientist at the SRI International Computer Science
Lab; SUSAN LANDAU, professor of cybersecurity policy at Worcester Polytechnic Institute; RONALD L. RIVEST, MIT
Institute Professor; JEFFREY I. SCHILLER, Internet Engineering Steering Group Area Director for Security (1994–
2003); BRUCE SCHNEIER, Fellow at the Berkman Center for Internet and Society at Harvard Law School; MICHAEL A.
SPECTER, PhD candidate in Computer Science at MIT’s Computer Science and Artificial Intelligence Laboratory. “Keys
Under Doormats: Mandating Insecurity By Requiring Government Access To All Data And Communications” 7/6/15 MIT
Cybersecurity and Internet Policy Research Initiative http://dspace.mit.edu/handle/1721.1/97690#files-area
The goal of this report is to similarly analyze the newly proposed
requirement of exceptional access to
communications in today’s more complex, global information infrastructure. We find that it would pose far more grave
security risks, imperil innovation, and raise thorny issues for human rights and international relations. There are
three general problems. First, providing exceptional access to communications would force a U-turn from the
best practices now being deployed to make the Internet more secure. These practices include forward secrecy —
where decryption keys are deleted immediately after use, so that stealing the encryption key used by a communications server would not
compromise earlier or later communications. A related technique, authenticated encryption, uses the same temporary key to guarantee
confidentiality and to verify that the message has not been forged or tampered with. Second,
building in exceptional access
would substantially increase system complexity. Security researchers inside and outside government agree that
complexity is the enemy of security — every new feature can interact with others to create vulnerabilities. To achieve
widespread exceptional access, new technology features would have to be deployed and tested with literally hundreds of thousands of
developers all around the world. This is a far more complex environment than the electronic surveillance now deployed in telecommunications
and Internet access services, which tend to use similar technologies and are more likely to have the resources to manage vulnerabilities that
may arise from new features. Features
to permit law enforcement exceptional access across a wide range of
Internet and mobile computing applications could be particularly problematic because their typical use
would be surreptitious — making security testing difficult and less effective. Third, exceptional access
would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have
to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys
guaranteed access to everything, an attacker who gained access to these keys would enjoy the same
privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys
among multiple keyholders, as security engineers would normally do with extremely high-value credentials. Recent attacks on the
United States Government Office of Personnel Management (OPM) show how much harm can arise when many
organizations rely on a single institution that itself has security vulnerabilities. In the case of OPM, numerous
federal agencies lost sensitive data because OPM had insecure infrastructure. If service providers implement exceptional
access requirements incorrectly, the security of all of their users will be at risk.
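
An illustrative aside, not part of the card: the "best practices" Weitzner et al. name are concrete and easy to demonstrate. The Python sketch below (our construction, assuming the pyca/cryptography package) shows forward secrecy and authenticated encryption in miniature: ephemeral keys are generated per session, an AES-GCM key is derived from them, and all key material is discarded after use, so a later key theft compromises nothing. An exceptional-access mandate would require retaining exactly the material this code deletes.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Forward secrecy: each side generates a throwaway (ephemeral) key pair
# for this session only.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())  # both sides compute the same secret

# Derive a short-lived session key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo session").derive(shared)

# Authenticated encryption: AES-GCM uses the same temporary key to provide
# confidentiality and to verify the message was not forged or tampered with.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"the message", b"public header")
assert AESGCM(key).decrypt(nonce, ciphertext, b"public header") == b"the message"

# Discard all key material immediately after use: stealing either party's
# equipment later reveals nothing about this session.
del alice, bob, shared, key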
And, the threat to encryption is not hypothetical: the NSA has already inserted
backdoors in software and undermined commercial encryption standards.
Harris, 14
Shane, American journalist and author at Foreign Policy magazine. @WAR: The Rise of the Military-Internet Complex. Houghton Mifflin Harcourt. P. 88-93
For the past ten years the NSA has led an effort in conjunction with its British counterpart, the Government Communications
Headquarters, to defeat the widespread use of encryption technology by inserting hidden vulnerabilities into
widely used encryption standards. Encryption is simply the process of turning a communication - say, an e-mail - into a jumble of meaningless
numbers and digits, which can only be deciphered using a key possessed by the e-mail's recipient. The NSA once fought a public battle to gain access to encryption
keys, so that it could decipher messages at will, but it lost that fight. The agency then turned its attention toward weakening the
encryption algorithms that are used to encode communications in the first place. The NSA is home to
the world's best code makers, who are regularly consulted by public organizations, including government agencies,
on how to make encryption algorithms stronger. That's what happened in 2006 - a year after Alexander arrived - when the NSA helped develop an encryption standard that was eventually adopted by the National
Institute of Standards and Technology, the US government agency that has the last word on weights and measures used for calibrating all manner of tools,
industrial equipment, and scientific instruments. NIST's endorsement of an encryption standard is a kind of Good
Housekeeping Seal of approval. It encourages companies, advocacy groups, individuals, and government agencies around the world to use the
standard. NIST works through an open, transparent process, which allows experts to review the standard and submit comments. That's one reason its endorsement
carries such weight. NIST
is so trusted that it must approve any encryption algorithms that are used in
commercial products sold to the US government. But behind the scenes of this otherwise open process, the NSA was strong-arming the development of an algorithm called a random-number generator, a key component of all
encryption. Classified documents show that the NSA claimed it merely wanted to "finesse" some points in the algorithm's design, but in reality it became the
"sole editor" of it and took over the process in secret. Compromising the number generator, in a way that only the NSA
knew, would undermine the entire encryption standard. It gave the NSA a backdoor that it could use to
decode information or gain access to sensitive computer systems. The NSA's collaboration on the algorithm was not a secret.
Indeed, the agency's involvement lent some credibility to the process. But less than a year after the standard was adopted, security researchers discovered an
apparent weakness in the algorithm and speculated publicly that it could have been put there by the spy agency. The noted computer security expert Bruce
Schneier zeroed in on one of four techniques for randomly generating numbers that NIST had approved. One of them, he wrote in 2007, "is not like the others." For
starters, it worked three times more slowly than the others, Schneier observed. It was also "championed by the NSA, which first proposed it years ago in a related
standardization project at the American National Standards Institute.” Schneier was alarmed that NIST would encourage people to use an inferior algorithm that
had been enthusiastically embraced by an agency whose mission is to break codes. But there was no proof that the NSA was up to no good. And the flaw in the
number generator didn't render it useless. As Schneier noted, there was a workaround, though it was unlikely anyone would bother to use it. Still, the flaw set
cryptologists on edge. The NSA was surely aware of their unease, as well as the growing body of work that pointed to its secret intervention, because it leaned on an
international standards body that represents 163 countries to adopt the new algorithm. The NSA wanted it out in the world, and so widely used that people would
find it hard to abandon. Schneier, for one, was confused as to why the NSA would choose as a backdoor such an obvious and now public flaw. (The weakness had
first been pointed out a year earlier by employees at Microsoft.) Part of the answer may lie in a deal that
the NSA reportedly struck with one
of the world's leading computer security vendors, RSA, a pioneer in the industry. According to a 2013 report by
Reuters, the company adopted the NSA-built algorithm "even before NIST approved it. The NSA then cited
the early use ... inside the government to argue successfully for NIST approval." The algorithm became "the default
option for producing random numbers” in an RSA security product called the bSafe toolkit, Reuters reported. "No alarms were raised, former employees said,
because the deal was handled by business leaders rather than pure technologists.”
For its compliance and willingness to adopt the
flawed algorithm, RSA was paid $10 million, Reuters reported. It didn't matter that the NSA had built an obvious backdoor. The algorithm
was being sold by one of the world's top security companies, and it had been adopted by an international standards body as well as NIST. The NSA's
campaign to weaken global security for its own advantage was working perfectly. When news of the NSA's efforts
broke in 2013, in documents released by Edward Snowden, RSA and NIST both distanced themselves from the spy agency - but neither claimed that the backdoor
hadn't been installed. In a statement following the Reuters report, RSA denied that it had entered into a "secret contract" with the NSA, and asserted that "we have
never entered into any contract or engaged in any project with the intention of weakening RSA's products, or introducing potential 'backdoors' into our products
for anyone's use." But it didn't deny that the backdoor existed, or may have existed. Indeed, RSA said that years earlier, when it decided to start using the flawed
number-generator algorithm, the NSA “had a trusted role in the community-wide effort to strengthen, not weaken, encryption.” Not so much anymore. When
documents leaked by Snowden confirmed the NSA’s work, RSA encouraged people to stop using the number generator – as did NIST. The standards body issued
its own statement following the Snowden revelations. It was a model of carefully calibrated language. "NIST would not deliberately weaken a cryptographic
standard," the organization said in a public statement, clearly leaving open the possibility- without confirming it - that the NSA had secretly installed the
vulnerability or done so against NIST's wishes. "NIST has a long history of extensive collaboration with the world's cryptography experts to support robust
encryption. The [NSA] participates in the NIST cryptography development process because of its recognized expertise. NIST is also required by statute to consult
with the NSA.” The standards body was effectively telling the world that it had no way to stop the NSA. Even if it wanted to shut the agency out of the standards
process, by law it couldn't. A senior NSA official later seemed to support that contention. In an interview with the national security blog Lawfare in December 2013,
Anne Neuberger, who manages the NSA’s relationships with technology companies, was asked about reports that the agency had secretly handicapped the
algorithm during the development process. She neither confirmed nor denied the accusation. Neuberger called NIST “an incredibly respected close partner on many
things.” But, she noted, it “is not a member of the intelligence community.” “All the work they do is ... pure white hat,” Neuberger continued, meaning not malicious
and intended solely to defend encryption and promote security. “Their only responsibility is to set standards” and “to make them as strong as they can possibly be.”
That is not the NSA’s job. Neuberger seemed to be giving NIST a get-out-of-jail-free card, exempting it from any responsibility for inserting the flaw. The 2006
effort to weaken the number generator wasn't an isolated incident. It
was part of a broader, longer campaign by the NSA to
weaken the basic standards that people and organizations around the world use to protect their
information. Documents suggest that the NSA has been working with NIST since the early 1990s to
hobble encryption standards before they're adopted. The NSA dominated the process of developing the Digital Signature Standard, a
method of verifying the identity of the sender of an electronic communication and the authenticity of the information in it. “NIST publicly proposed the [standard]
in August 1991 and initially made no mention of any NSA role in developing the standard, which was intended for use in unclassified, civilian communications
systems,” according to the Electronic Privacy Information Center, which obtained documents about the development process under the Freedom of Information Act.
Following a lawsuit by a group of computer security experts, NIST conceded that the NSA had developed the standard, which “was widely criticized within the
computer industry for its perceived weak security and inferiority to an existing authentication technology,” the privacy center reported. “Many observers have
speculated that the [existing] technique was disfavored by NSA because it was, in fact, more secure than the NSA-proposed algorithm.” From NSA's perspective, its
efforts to defeat encryption are hardly controversial. It is, after all, a code-breaking agency. This is precisely the kind of work it is authorized, and expected, to do. If
the agency developed flaws in encryption algorithms that only it knew about, what would be the harm? But the flaws weren't secret. By 2007, the backdoor in the
number generator was being written about on prominent websites and by leading security experts. It would be difficult to exploit the weakness - that is, to figure
out the key that opened NSA's backdoor. But this wasn't impossible. A foreign government could figure out how to break the encryption and then use it to spy on its
own citizens, or on American companies and agencies using the algorithm. Criminals could exploit the weakness to steal personal and financial information.
Anywhere the algorithm was used - including in the products of one of the world's leading security companies – it was vulnerable. The NSA might comfort itself by
reasoning that code-breaking agencies in other countries were surely trying to undermine encryption, including the algorithms the NSA was manipulating. And
surely they were. But that didn’t answer the question: why knowingly undermine not just an algorithm but the entire process by which encryption standards are
created? The
NSA’s clandestine efforts damaged the credibility of NIST and shredded the NSA's long-held
reputation as a trusted, valued participant in creating some of the most fundamental technologies on
the Internet, the very devices by which people keep their data, and by extension themselves, safe. Imagine if
the NSA had been in the business of building door locks, and encouraged every homebuilder in America to install its preferred, and secretly flawed, model. No one
would stand for it. At the very least, consumer groups would file lawsuits and calls would go up for the organization's leaders to resign.
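
A technical aside, ours rather than Harris's: the kind of trapdoor he describes in the Dual_EC generator is easiest to see in a toy model. The Python sketch below is emphatically not the real NIST algorithm; it is a simplified analogue (invented names, pyca/cryptography package assumed) with the same fatal property: each "random" output is the generator's internal state encrypted under a key only the designer holds, so a single captured output lets the designer predict every later value, while the stream still looks random to everyone else.

import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

DESIGNER_KEY = os.urandom(16)  # the trapdoor: known only to the backdoor's author

def _aes(key: bytes) -> Cipher:
    # Single-block AES; ECB is fine here because we encrypt one 16-byte state.
    return Cipher(algorithms.AES(key), modes.ECB())

class BackdooredRNG:
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()[:16]
    def output(self) -> bytes:
        enc = _aes(DESIGNER_KEY).encryptor()
        out = enc.update(self.state) + enc.finalize()  # looks random to users
        self.state = hashlib.sha256(self.state).digest()[:16]  # advance state
        return out

def designer_predict(captured: bytes, n: int) -> list:
    dec = _aes(DESIGNER_KEY).decryptor()
    state = dec.update(captured) + dec.finalize()  # recover internal state
    outs = []
    for _ in range(n):
        state = hashlib.sha256(state).digest()[:16]
        enc = _aes(DESIGNER_KEY).encryptor()
        outs.append(enc.update(state) + enc.finalize())
    return outs

rng = BackdooredRNG(b"victim seed")
first = rng.output()
# From one observed output plus the trapdoor key, the designer predicts
# the victim's next three "random" values exactly:
assert designer_predict(first, 3) == [rng.output() for _ in range(3)]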
Plan
The United States federal government should fully support and not undermine
encryption standards by making clear that it will not in any way subvert, undermine,
weaken, or make vulnerable generally available commercial encryption.
Advantage 1 – CyberCrime
Undermining commercial software reduces the ability to prevent cyber-crime and only
facilitates network access for organized criminal groups.
Blaze, 15
Matt, University of Pennsylvania Professor of Computer and Information Science. Testimony before the US House of
Representatives Committee on Oversight and Government Reform, Information Technology
Subcommittee, hearing on Encryption Technology and Possible US Policy Responses, 29 April 2015.
https://oversight.house.gov/wp-content/uploads/2015/05/4-29-2015-IT-SubcommitteeHearing-on-Encryption-Blaze.pdf
An important task for policymakers in evaluating the FBI’s proposal is to weigh
the risks of making software less able to
resist attack against the benefits of more expedient surveillance. It effectively reduces our ability to
prevent crime (by reducing computer security) in exchange for the hope of more efficient crime investigation (by
making electronic surveillance easier). Unfortunately, the costs of the FBI’s approach will be very high. It
will place our national infrastructure at risk. This is not simply a matter of weighing our desires for personal privacy and to
safeguard against government abuse against the need for improved law enforcement. That by itself might be a difficult balance for
policymakers to strike, and reasonable people might disagree on where that balance should lie. But the
risks here go far beyond
that, because of the realities of how modern software applications are integrated into complete systems. Vulnerabilities in software
of the kind likely to arise from law enforcement access requirements can often be exploited in ways that
go beyond the specific data they process. In particular, vulnerabilities often allow an attacker to effectively take control over the
system, injecting its own software and taking control over other parts of the affected system.9 The vulnerabilities introduced by access
mandates discussed in the previous section are likely to include many in this category. They are difficult to defend against or contain, and they
currently represent perhaps the most serious practical threat to networked computer security. For
better or worse, ordinary
citizens, large and small business, and the government itself depend on the same software platforms
that are used by the targets of criminal investigations. It is not just the Mafia and local drug dealers
whose software is being weakened, but everyone’s. The stakes are not merely unauthorized exposure of relatively
inconsequential personal chitchat, but also leaks of personal financial and health information, disclosure of
proprietary corporate data, and compromises of the platforms that manage and control our critical
infrastructure. In summary, the technical vulnerabilities that would inevitably be introduced by
requirements for law enforcement access will provide rich, attractive targets not only for relatively
petty criminals such as identity thieves, but also for organized crime, terrorists, and hostile
intelligence services. It is not an exaggeration to understand these risks as a significant threat to our
economy and to national security.
And, vulnerabilities in software provide funding mechanisms for organized crime.
Peha, 13
Jon M. Peha is a professor at Carnegie Mellon, Dept. of Electrical & Computer Engineering and the Dept.
of Engineering & Public Policy, Served as Chief Technologist of the Federal Communications Commission,
Assistant Director of the White House’s Office of Science and Technology Policy. "The dangerous policy
of weakening security to facilitate surveillance." Available at SSRN 2350929 (2013).
Weak Security is Dangerous Giving law enforcement and intelligence agencies the ability to conduct electronic surveillance is part of a strategy
to limit threats from criminals, foreign powers, and terrorists, but so is strengthening the cybersecurity used by all Americans.
Weak
cybersecurity creates opportunities for sophisticated criminal organizations. Well-funded criminal
organizations will turn to cybercrime for the same reason they turn to illegal drugs: there is money to
be made. This imposes costs on the rest of us. The costs of malicious cyberactivities take many forms,
including direct financial losses (e.g. fraudulent use of credit cards), theft of intellectual property, theft of sensitive
business information, opportunity costs such as the lost productivity when a computer system is taken down, and the damage to a
company’s reputation when others learn its systems have been breached. One recent study says that estimates of
these costs range from $24 billion to $120 billion per year in the U.S.3 Weakened security can only increase
the high cost of cybercrime.
Cybercrime provides substantial financial support for Russian organized crime.
Grabosky, 13
Peter. Peter Grabosky is a Professor in the Regulatory Institutions Network, Australian National
University, and a Fellow of the Academy of the Social Sciences in Australia. "Organised Crime and the
Internet: Implications for National Security." The RUSI Journal 158.5 (2013): 18-25.
The Internet is commonly used as an instrument for attacking other computer systems. Most cybercrimes begin when an offender obtains unauthorised access to another system. Systems are often attacked in
order to destroy or damage them and the information that they contain. This can be an act of vandalism or protest, or activity undertaken in
furtherance of other political objectives. One of the more common forms is the distributed-denial-of-service (DDoS) attack, which entails
flooding a target computer system with a massive volume of information so that the system slows down significantly. Botnets are quite useful
for such purposes, as are multiple co-ordinated service requests. A
notorious example of a botnet-initiated DDoS attack
occurred in April 2007, when government and commercial servers in Estonia were seriously degraded
over a number of days. Online banking services were intermittently disrupted, and access to government sites and to online news
media was limited. The attacks appear to have originated in Russia and are alleged to have resulted from the
collaboration of Russian youth organisations and Russian organised-crime groups, condoned by the state,
although the degree to which the Russian government was complicit in the attacks is unclear.18 Just as state actors or their agents can use the
Internet to pursue what they perceive to be goals of national security, so have
insurgent and extremist groups used Internet
technology in various ways to further their causes. These include using the Internet as an instrument of
theft in order to enhance their resource base; for instance, as a vehicle for fraud. Imam Samudra, the architect of the 2002 Bali
bombings, reportedly called upon his followers to commit credit-card fraud in order to finance militant activities.19 Jihadist propaganda and
incitement messages also abound in cyberspace. Yet the
Internet is not used for illicit purposes solely or even primarily
by political actors. Organised-crime groups use it daily on a global scale, engaging in activities that range
from the illicit acquisition, copying and dissemination of intellectual property (piracy has allegedly cost the
software and entertainment industries billions of dollars)20 to the plundering of banking and credit-card details, commercial
trade secrets and classified information held by governments. This too may begin with unauthorised access to a
computer system: indeed, the theft of personal financial details has provided the basis for thriving
markets in such data, which enable fraud on a significant scale.21
Russian organized crime is the most likely pathway to nuclear terrorism.
Zaitseva 7—Center for International Security and Cooperation (CISAC) Visiting Fellow from the National
Nuclear Centre in Semipalatinsk (Lyudmila, 2007, Strategic Insights, Volume VI, Issue 5, “Organized
Crime, Terrorism and Nuclear Trafficking,” rmf)
The use of radioactive material for malicious purposes falls within the range of capabilities of organized
criminal structures, at least those in Russia. Such a malevolent use may be an indirect evidence of the
organized crime involvement in the illicit trafficking of radioactive substances. More than a dozen of
malevolent radiological acts, such as intentional contamination and irradiation of persons, have been reported in open
sources since 1993. One of them, which happened in Guangdong Province of China in 2002—resulted in significant exposure of as many
as 74 people working in the same hospital.[55] Two incidents—both in Russia—have been linked to organized crime.
A widely-publicized murder of a Moscow businessman with a strong radioactive source implanted in the
head-rest of his office chair in 1993 was one of them. The director of a packaging company died of radiation sickness after
several weeks of exposure. The culprit was never found and it was alleged that mafia might have been behind the ploy to remove a business
competitor.[56] The same source mentioned a similar incident, which happened in Irkutsk around the same time, when somebody planted
radiation sources in office chairs in an attempt to kill two company directors before the "hot seats" were discovered and removed. No
speculations were made regarding the possible mafia involvement in this murder attempt, although it cannot be excluded.
The less known case with strong indications that shady criminal networks may have plotted it happened more recently in St. Petersburg. On
March 18, 2005, Moskovskiye Novosti published an article, in which the author discussed several high-profile assassinations and murders in
Russia and abroad using various methods of poisoning. One of such killings was reportedly performed with a highly radioactive substance. In
September 2004, Head of Baltik-Escort security company in St. Petersburg and FSB Colonel, Roman Tsepov, died a sudden and mysterious death
as a result of what was suspected to be poisoning. However, according to a source in St. Petersburg Public Prosecutor’s Office, the posthumous
examination established that the death had been caused by an unspecified radioactive element. In the past, Tsepov was reportedly in charge of
collecting protection money from casinos and other commercial enterprises in St. Petersburg on behalf of ‘a high-ranking FSB official’.[57]
These two incidents demonstrate that some organized crime structures have the knowledge about the characteristics and effects of specific
radioactive materials, have access to these substances, and do not shy away from using them as weapons of murder, which are hard to trace to
the perpetrators. Terrorist Networks and Nuclear Trafficking Terrorism changes together with society and in order to preserve itself as a
phenomenon it must use what society gives it, ‘including the most modern weapons and advanced ideas.’[58] The
risk of terrorists
obtaining nuclear fissile material is small, but real. After the terrorist attack on the school in Beslan in September 2004, the
Head of Russian Federal Agency for Atomic Energy (Rosatom, formerly Minatom) Alexander Rumyantsev said that the possibility that
international terrorist groups may acquire nuclear fissile material, including HEU and plutonium, as well
as nuclear weapons technologies, could no longer be ruled out.[59] Such a risk is much higher for radiological material,
which is omnipresent around the world and is not subject to nearly the same level of control and protection as nuclear fissile material. Its use
as a weapon in a radiological dispersal device (RDD) would also be achieved with just a fraction of time,
investment, and skills required for making a crude nuclear weapon. These reasons make the
deployment of radiological material the most probable scenario of nuclear terrorism. Although radioactive
substances have already been used as a weapon of killing and a threat mechanism, so far, there is no evidence of their successful deployment in
terrorist acts. The only case that comes close to deployment of an RDD, was recorded in Chechnya in 1998, when the local authorities found a
container filled with radioactive substances and emitting strong radiation levels together with a mine attached to it buried next to a railway
line.[60] The local authorities considered the incident as a foiled act of sabotage. The Chechen fighters are also believed to have made several
raids on the Radon radioactive waste depository, located in the vicinity of Grozny, and stolen several containers with radioactive
substances.[61] In 1996, the director of the Radon facility confirmed that about half of some 900 cubic meters of waste, with radioactivity levels
of 1,500 curies, which had been stored at the Radon facility at the start of the first Chechen war in November 1994, was missing.[62] The
Russian authorities believe the terrorists were planning to use them in explosions in order to spread contamination. It should be noted that
Chechen extremists stand out from many other terrorist organizations by persistently making threats to use nuclear technologies in their acts of
violence. The notorious burial of a radiation source in the Gorky park of Moscow in 1995 by the now late field commander Shamil Basayev and
the threat by Ahmed Zakayev after the Moscow theater siege in October 2002 that the next time a nuclear facility would be seized are just two
such examples.[63] In January 2003, Colonel-General Igor Valynkin, the chief of the 12th Main Directorate of the Russian Ministry of Defence, in
charge of protecting Russia’s nuclear weapons, said “operational information indicates that Chechen terrorists intend to seize some important
military facility or nuclear munitions in order to threaten not only the country, but the entire world.”[64] According to an assessment of a
Russian expert on nonproliferation, whereas unauthorized access to nuclear munitions by terrorist groups is ‘extremely improbable,’ access
and theft of nuclear weapons during transport or disassembly cannot be wholly excluded.[65] Russia’s top
security officials recently admitted they have knowledge about the intent and attempts by terrorists to gain access to nuclear material. In
August 2005, the
director of the Russian Federal Security Service Nikolay Patrushev told at a conference that
his agency had information about attempts by terrorist groups to acquire nuclear, biological and
chemical weapons of mass destruction.[66] Later that year, the Minister of Interior, Rashid Nurgaliev, stated that international
terrorists intended to “seize nuclear materials and use them to build WMD.”[67] If terrorists indeed attempted to gain access to nuclear
material in order to use them for the construction of WMD, such attempts have not been revealed to the public. Out of almost 1100 trafficking
incidents recorded in the DSTO since 1991, only one has reportedly involved terrorists, other than Chechen fighters. The incident was recorded
in India in August 2001, when Border Security Force (BSF) officials seized 225 gram of uranium in Balurghat, northern West Bengal along the
India-Bangladesh border. Two local men, described as ‘suspected terrorists’, were arrested. Indian intelligence agencies suspect that the
uranium was bound for Muslim fighters in the disputed regions of Jammu and Kashmir and that agents of Pakistan's Inter-Services Intelligence
(ISI) were involved.[68] Whether the arrested suspects were indeed members of a terrorist organization remains unclear based on the available
information. Conclusion Alliances between
terrorist groups and drug cartels and transnational criminal
networks are a well-known fact. Such alliances have successfully operated for years in Latin America,
and in Central-, South-, and South-Eastern Asia. The involvement of organized criminal groups—albeit
relatively small and unsophisticated—in nuclear smuggling activities has also been established based on the
study of some 400 nuclear trafficking incidents recorded in the DSTO database between January 2001
and December 2005. Elements of organized crime could be identified in about 10 percent of these incidents. However, no reliable
evidence of the “marriages of convenience” between all three—organized crime, terrorists, and nuclear trafficking—could be found.
Nuclear terrorism causes retaliation and nuclear war – draws in Russia and China
Robert Ayson, Professor of Strategic Studies and Director of the Centre for Strategic Studies: New Zealand
at the Victoria University of Wellington, 10 (“After a Terrorist Nuclear Attack: Envisaging Catalytic Effects,”
Studies in Conflict & Terrorism, Volume 33, Issue 7, July, Available Online to Subscribing Institutions via
InformaWorld)
A terrorist nuclear attack, and even the use of nuclear weapons in response by the country attacked in the first place, would not
necessarily represent the worst of the nuclear worlds imaginable. Indeed, there are reasons to wonder whether nuclear terrorism should
ever be regarded as belonging in the category of truly existential threats. A contrast can be drawn here with the global catastrophe that
would come from a massive nuclear exchange between two or more of the sovereign states that possess these weapons in significant
numbers. Even the worst terrorism that the twenty-first century might bring would fade into insignificance alongside considerations of
what a general nuclear war would have wrought in the Cold War period. And it must be admitted that as long as the major
nuclear
weapons states have hundreds and even thousands of nuclear weapons at their disposal, there is
always the possibility of a truly awful nuclear exchange taking place precipitated entirely by state possessors themselves. But these
two nuclear worlds—a non-state actor nuclear attack and a catastrophic interstate nuclear
exchange—are not necessarily separable. It is just possible that some sort of terrorist attack, and
especially an act of nuclear terrorism, could precipitate a chain of events leading to a massive exchange
of nuclear weapons between two or more of the states that possess them. In this context, today’s and
tomorrow’s terrorist groups might assume the place allotted during the early Cold War years to new state possessors of small nuclear
arsenals who were seen as raising the risks of a catalytic nuclear war between the superpowers started by third parties. These risks were
considered in the late 1950s and early 1960s as concerns grew about nuclear proliferation, the so-called n+1 problem. It may require a
considerable amount of imagination to depict an especially plausible situation where an act of nuclear terrorism could lead to such a
massive inter-state nuclear war. For example, in the event of a terrorist nuclear attack on the United States, it might well be wondered
just how Russia and/or China could plausibly be brought into the picture, not least because they seem unlikely to be fingered as the most
obvious state sponsors or encouragers of terrorist groups. They would seem far too responsible to be involved in supporting that sort of
terrorist behavior that could just as easily threaten them as well. Some possibilities, however remote, do suggest themselves. For
example, how might the United States react if it was thought or discovered that the fissile material used in the act of nuclear terrorism
had come from Russian stocks,40 and if for some reason Moscow denied any responsibility for nuclear laxity? The correct attribution of
that nuclear material to a particular country might not be a case of science fiction given the observation by Michael May et al. that while
the debris resulting from a nuclear explosion would be “spread over a wide area in tiny fragments, its radioactivity makes it detectable,
identifiable and collectable, and a wealth of information can be obtained from its analysis: the efficiency of the explosion, the materials
used and, most important … some indication of where the nuclear material came from.”41 Alternatively, if
the act of nuclear terrorism came as a complete surprise, and American officials refused to believe that a terrorist group was fully responsible
(or responsible at all) suspicion would shift immediately to state possessors. Ruling out Western ally countries like
the United Kingdom and France, and probably Israel and India as well, authorities in Washington would be left with a very short list
consisting of North Korea, perhaps Iran if its program continues, and possibly Pakistan. But at what stage would Russia
and China be definitely ruled out in this high stakes game of nuclear Cluedo? In particular, if the act of nuclear terrorism
occurred against a backdrop of existing tension in Washington’s relations with Russia and/or China,
and at a time when threats had already been traded between these major powers, would officials
and political leaders not be tempted to assume the worst? Of course, the chances of this occurring would
only seem to increase if the United States was already involved in some sort of limited armed
conflict with Russia and/or China, or if they were confronting each other from a distance in a proxy
war, as unlikely as these developments may seem at the present time. The reverse might well apply too: should a
nuclear terrorist attack occur in Russia or China during a period of heightened tension or even limited conflict with the
United States, could Moscow and Beijing resist the pressures that might rise domestically to consider
the United States as a possible perpetrator or encourager of the attack? Washington’s early response
to a terrorist nuclear attack on its own soil might also raise the possibility of an unwanted (and nuclear aided)
confrontation with Russia and/or China. For example, in the noise and confusion during the immediate
aftermath of the terrorist nuclear attack, the U.S. president might be expected to place the country’s armed forces,
including its nuclear arsenal, on a higher stage of alert. In such a tense environment, when careful planning runs up against the friction of
reality, it
is just possible that Moscow and/or China might mistakenly read this as a sign of U.S.
intentions to use force (and possibly nuclear force) against them. In that situation, the temptations to
preempt such actions might grow, although it must be admitted that any preemption would probably still meet with a
devastating response.
Advantage 2 – Cyber-Vulnerability
Backdoors and zero day vulnerabilities fundamentally undermine human security.
Dunn Cavelty, 14
Myriam, Deputy for research and teaching at the Center for Security Studies (CSS) and Senior Lecturer for
Security Politics at ETH Zurich. "Breaking the cyber-security dilemma: Aligning security needs and
removing vulnerabilities." Science and engineering ethics 20.3 (2014): 701-715.
That said, the security-implications of current actions by state entities go even further. It has been suspected for a
while and is now confirmed that the intelligence services of this world are making cyberspace more insecure directly; in order to be able to
have more access to data, and in order to prepare for future conflict. It
has been revealed that the NSA has bought and
exploited so-called zero-day vulnerabilities in current operating systems and hardware to inject NSA
malware into numerous strategically opportune points of the Internet infrastructure (Greenwald and MacAskill
2013). As soon as military and intelligence agencies became buyers of so-called zero-day vulnerabilities,
prices have skyrocketed (Miller 2007; Perlroth and Sanger 2013), with several downsides to this: first, exposing these
vulnerabilities in order to patch them, as was the norm not so long ago, is becoming less likely. Second, the competition
for exclusive possession of such vulnerabilities might even give programmers incentives to deliberately create and then sell them (Schneier
2012b). It is unknown which computer systems have been compromised—but it is known that
these backdoors or sleeper
programs can be used for different purposes (surveillance, espionage, disruption, etc.) and activated at any time. It
also has been revealed that the US government spends large sums of money to crack existing encryption
standards—and apparently has also actively exploited and contributed to vulnerabilities in widespread
encryption systems (Simonite 2013; Fung 2013; Clarke et al. 2013). The crux of the matter is that these backdoors reduce the
security of the entire system—for everyone. The exploitation of vulnerabilities in computer systems by
intelligence agencies and their weakening of encryption standards have the potential to destroy trust
and confidence in cyberspace overall. Also, there is no guarantee that the backdoor-makers have full control over them and/or
can keep them secret — in other words, they could be identified and exploited by criminal hackers or even “terrorists”. Here, state
practices not only become a threat for human security: paradoxically, they also become a threat for
themselves.
The universe believes in encryption – it is critical to counter dystopian state violence.
Assange, 12
Julian Assange, an Australian computer programmer, publisher and journalist. Editor-in-chief of the
website WikiLeaks. Jacob Appelbaum, American independent journalist, computer security researcher
and hacker. A core member of the Tor project; Andy Muller-Maguhn, member of the German hacker
association Chaos Computer Club; Jérémie Zimmermann, French computer science engineer, co-founder
of the Paris-based La Quadrature du Net, a citizen advocacy group defending fundamental freedoms
online. Cypherpunks: Freedom and the Future of the Internet. OR Books, 2012. P. 3-6
Most of the time we are not even aware of how close to violence we are, because we all grant
concessions to avoid it. Like sailors smelling the breeze, we rarely contemplate how our surface world is propped
up from below by darkness. In the new space of the internet what would be the mediator of coercive force? Does it even make sense
to ask this question? In this otherworldly space, this seemingly platonic realm of ideas and information flow, could there be a notion of coercive
force? A force that could modify historical records, tap phones, separate people, transform complexity into rubble, and erect walls, like an
occupying army? The platonic nature of the internet, ideas and information flows, is debased by its physical origins. Its foundations are fiber
optic cable lines stretching across the ocean floors, satellites spinning above our heads, computer servers housed in buildings in cities from New
York to Nairobi. Like the soldier who slew Archimedes with a mere sword, so too could an armed militia take control of the peak development
of Western civilization, our platonic realm. The
new world of the internet, abstracted from the old world of brute atoms, longed
for independence. But states and their friends moved to control our new world—by controlling its
physical underpinnings. The state, like an army around an oil well, or a customs agent extracting bribes at the border,
would soon learn to leverage its control of physical space to gain control over our platonic realm. It would
prevent the independence we had dreamed of, and then, squatting on fiber optic lines and around satellite ground stations, it would go
on to mass intercept the information flow of our new world—its very essence— even as every human,
economic, and political relationship embraced it. The state would leech into the veins and arteries of our new societies,
gobbling up every relationship expressed or communicated, every web page read, every message sent and every thought googled, and then
store this knowledge, billions of interceptions a day, undreamed of power, in vast top secret warehouses, forever.
It would go on to
mine and mine again this treasure, the collective private intellectual output of humanity, with ever more
sophisticated search and pattern finding algorithms, enriching the treasure and maximizing the power imbalance between
interceptors and the world of interceptees. And then the state would reflect what it had learned back into the physical world, to start wars, to
target drones, to manipulate UN committees and trade deals, and to do favors for its vast connected network of industries, insiders and
cronies. But we discovered something. Our one hope against total domination. A hope that with courage, insight and solidarity we could use to resist. A strange property of the physical universe that we live in. The universe believes in encryption. It is easier to encrypt information than it is to decrypt it. We saw we could use this strange property to create the laws of
a new world. To abstract away our new platonic realm from its base underpinnings of satellites, undersea cables and their controllers. To fortify
our space behind a cryptographic veil. To create new lands barred to those who control physical reality, because to follow us into them would
require infinite resources. And in this manner to declare independence. Scientists in the Manhattan Project discovered that
the universe permitted the construction of a nuclear bomb. This was not an obvious conclusion. Perhaps nuclear weapons were not within the
laws of physics. However, the universe believes in atomic bombs and nuclear reactors. They are a phenomenon the universe blesses, like salt,
sea or stars. Similarly, the universe, our physical universe, has that property that makes it possible for an individual or a group of individuals to
reliably, automatically, even without knowing, encipher something, so that all the resources and all the political will of the strongest
superpower on earth may not decipher it. And
the paths of encipherment between people can mesh together to
create regions free from the coercive force of the outer state. Free from mass interception. Free from
state control. In this way, people can oppose their will to that of a fully mobilized superpower and win.
Encryption is an embodiment of the laws of physics, and it does not listen to the bluster of states, even
transnational surveillance dystopias. It isn’t obvious that the world had to work this way. But somehow the universe smiles on
encryption. Cryptography is the ultimate form of non-violent direct action. While nuclear weapons states
can exert unlimited violence over even millions of individuals, strong cryptography means that a state,
even by exercising unlimited violence, cannot violate the intent of individuals to keep secrets from
them. Strong cryptography can resist an unlimited application of violence. No amount of coercive force will ever
solve a math problem. But could we take this strange fact about the world and build it up to be a basic emancipatory building block for the
independence of mankind in the platonic realm of the internet? And as societies merged with the internet could that liberty then be reflected
back into physical reality to redefine the state? Recall that states are the systems which determine where and how coercive force is consistently
applied. The question of how much coercive force can seep into the platonic realm of the internet from the physical world is answered by
cryptography and the cypherpunks’ ideals. As
states merge with the internet and the future of our civilization
becomes the future of the internet, we must redefine force relations. If we do not, the universality of
the internet will merge global humanity into one giant grid of mass surveillance and mass control. We
must raise an alarm. This book is a watchman’s shout in the night. On March 20, 2012, while under house arrest in the United Kingdom
awaiting extradition, I met with three friends and fellow watchmen on the principle that perhaps in unison our voices can wake up the town.
We must communicate what we have learned while there is still a chance for you, the reader, to understand and act on what is happening. It is
time to take up the arms of our new world, to fight for ourselves and for those we love. Our
task is to secure self-determination
where we can, to hold back the coming dystopia where we cannot, and if all else fails, to accelerate its
self-destruction.
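
A back-of-envelope aside, ours, grounding the card's claim that no amount of coercive force solves a math problem: encrypting a message takes microseconds, while exhausting even a 128-bit keyspace defeats any physically plausible attacker. The attacker figures below are deliberately generous assumptions.

# Assume a fanciful adversary: a billion machines, each testing a
# trillion keys per second.
keys = 2 ** 128                       # size of the AES-128 keyspace
rate = 10 ** 9 * 10 ** 12             # keys tried per second, all machines combined
seconds_per_year = 60 * 60 * 24 * 365
years = keys / (rate * seconds_per_year)
print(f"{years:.2e} years to exhaust the keyspace")  # ~1.1e+10 years:
# roughly the age of the universe, just to try every key once.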
The state is key – only state action can resolve the fundamental security imbalance in
cyber-security.
Dunn Cavelty, 14
Myriam, Deputy for research and teaching at the Center for Security Studies (CSS) and Senior Lecturer for
Security Politics at ETH Zurich. "Breaking the cyber-security dilemma: Aligning security needs and
removing vulnerabilities." Science and engineering ethics 20.3 (2014): 701-715.
From Problem to Solution: Human-Centric Information Ethics. This article has identified and discussed implications of cyber(-in)-security for
human-security concerns, with a main focus on both the representation of the issue as a (security) political problem and the practices of
(mainly state) actors based on such representations. The
problem with the current system is that security is
underproduced, both from a traditional state-focused national security and also from a bottom-up,
human security perspective. The reason, so I have argued, is a multidimensional and multi-faceted security dilemma, produced by the
following interlinked issues: First, cyber-security is increasingly presented in terms of power-struggles, warfighting, and military action. This is not an inevitable or “natural” development; rather, it is a matter of choice, or at least a matter of
(complicated) political processes that have produced this particular outcome. The result is not more security, however, but less: states spend
more and more money on cyber-defense and likely also cyber-offense, which is not leading to more, but less security, as evident by the flood of
official documents lamenting the security-deficit. Second, the type of cybersecurity that is produced is based on economic maxims, often
without consideration for the particular security-needs of the population. Third, extending
a notion of national security based
on border control to cyberspace will almost inevitably have an impact on civil liberties, especially on the
right to privacy and the freedom of speech. Fourth, cyber-exploitation by intelligence agencies linked to
the manipulation of vulnerabilities is directly making cyber-space more insecure. What becomes exceedingly
clear from the developments and lessons of the last decade is that we cannot have both: a strategically exploitable
cyberspace full of vulnerabilities—and a secure and resilient cyberspace that all the cyber-security
policies call for. At the heart of this challenge is, as so often when human security is implicated, the state (cf. Kerr 2007).
On the one hand, state practices are emerging as a major part of the problem, constantly creating more
insecurity and in fact also hindering the removal of known insecurities. At the same time, a secure, safe,
and open cyberspace is not possible without involvement of the state. How, then, can this dilemma be
overcome? Because it is a dilemma extending to more than the state, solutions are not to be found solely in the cooperation between
states (cf. Booth and Wheeler 2008). Rather, a focus on a common issue of interest for all the stakeholders that are
interested in more security is needed. Such a common ground is held by vulnerabilities. If we want a
secure and resilient cyberspace, then a strategically exploitable cyberspace full of vulnerabilities has to
be actively worked against. This is a compromise that some state actors need to make if they want a
type of national security that extends to cyberspace. If such a compromise is not made, then the quest
for more national security will always mean less cyber-security, which will always mean less national security because
of vulnerabilities in critical infrastructures. The reason why vulnerabilities persist and even proliferate has already been identified above: the
current incentive structures in the market are skewed (Dynes et al. 2008). This is where states are needed to
help improve cyber-security through additional regulation (and through further encouragement of voluntary arrangements for the increase of cyber-security in the corporate sector). Furthermore, there is no doubt from a human security perspective that the zero-day exploit “market” needs to be regulated internationally for security reasons (Kuehn 2013). In addition, prime human security concerns like the freedom of speech and the right to privacy should no longer be seen as anti-security, but as pro-security if linked to vulnerabilities: reducing the amount of data that is unencrypted will substantially reduce cybercrime and cyber-espionage, with benefits for both human-centred and state-centred security. In turn, the ethics that should guide our future engagement with
cyber-security have to take into account the special and all-embracing characteristics of cyberspace. So
far, ethical considerations with bearing on cyber-security have mainly been made from a military perspective, following the tradition to address
new forms of warfare and weapons systems under ethical viewpoints (cf. Rowe 2010; Dipert 2010; Barrett 2013). Cyber-security, as argued in
the very beginning, is far more than this, however: From
both a state and a human security perspective, cyberspace
has become more than just a technological realm in which we sometimes interact for social or economic
reasons. Cyberspace has become a fundamental part of life and is constitutive of new, complex
subjectivities. An ethics that fits such a broad understanding is Information Ethics. It constitutes an expansion of environmental ethics
towards a less anthropocentric concept of agent, which includes non-human (artificial) and non-individual (distributed) entities and advances a
less biologically-centred concept of “patient”, which includes not only human life or simply life, but any form of existence. This ethics is concerned with the question of an “ethics in the infosphere” (Floridi 2001) and beyond that, an “ethics of the infosphere” (Capurro 2006). In information ethics, the lowest possible common set of attributes which characterises something as intrinsically valuable and an object of respect is its abstract nature as an informational entity (Floridi 1998). In this view, all informational objects are in principle worthy of ethical consideration. However, to ensure that such an ethics does not involuntarily place the technical over the social, we must make sure that the protection of these data is not founded “on the dignity of the digital but on the human dimensions they refer to” (Capurro 2006). The duty of a moral agent is evaluated in terms of contribution to the growth and welfare of the entire infosphere (Floridi 1999: 47), but always related to a bodily being in the world. Any process, action or event that negatively affects the infosphere with relevance to human life impoverishes it and is an “instance of evil” (Floridi and Sanders 1999, 2001). Vulnerabilities are such an evil.
The plan is critical to shift the focus from militarization of cyber policy towards
eliminating vulnerabilities.
Kroll 15,
Joshua A. Kroll, Doctoral candidate in computer science at Princeton University, where he works on
computer security and public policy issues at the university’s Center for Information Technology Policy.
6-3-2015, "The Cyber Conundrum: A Security Update," American Prospect,
http://prospect.org/article/cyber-conundrum-security-update
In February President Barack Obama called for “international protocols that … set some clear limits and
guidelines, understanding that everybody's vulnerable and everybody's better off if we abide by certain behaviors.” This arms-control
solution, however, is ill suited to cyber weapons, which can be constructed quickly and hidden anywhere, making verification of compliance
impossible. In U.S. cybersecurity, according to the president, there is “no clear line between offense and defense. Things are going back and
forth all the time.” At first glance that statement might seem like an acknowledgement of the cyber conundrum: Actions that increase the
government’s capability to undermine adversaries also limit our capability to protect ourselves. But the president also says that, “the same
sophistication you need for defenses means that potentially you can engage in offense”—in other words, that we can use cyber attacks or their
possibility as a deterrent against threats. Rather
than accepting that “everybody's vulnerable,” however, we should
aim to make all systems more secure, protecting global infrastructure and relying on the U.S. military's
significant offensive capability when it is needed. This military approach to cybersecurity allows broad industry sectors to be
treated as collateral damage. In February 2015, the National Security Agency (NSA) and its United Kingdom counterpart, Government
Communications Headquarters (GCHQ), were reported to have infiltrated several major mobile phone carriers and manufacturers of the
Subscriber Identification Module (SIM) cards used to secure mobile phones. The NSA and GCHQ sought to capture the encryption keys used by
the carriers to encrypt phone conversations and prevent installation of malicious software on phones. Experts
have long known that
phone software can be modified to cause phones to record and transmit audio or location data even
when they appear to be switched off. Previously leaked documents showed that the NSA offered this
capability to its analysts. Poor policy in the past, meant to preserve surveillance capabilities, has resulted in weaknesses even years after
that policy was changed. The “FREAK” and "Logjam" attacks on secure browsing technology, discovered respectively in March and May of this
year, provide clear examples. Until 1992 (and in some cases even later), the U.S. government tried to maintain surveillance of foreigners by
requiring American companies to register as arms dealers and to obtain export licenses if they wanted to sell secure web systems abroad.
Instead, companies designed systems with highly secure modes for their domestic clients, but deliberately weaker cryptography for foreign
users. This switching between security levels ultimately became part of the widely adopted standard for secure web browsing, which is still in
use today even though the government has eased export restrictions on strong cryptography. Attackers discovered how to trick systems into
using the weaker mode, which is now trivial to defeat thanks to advances in technology. When
the FREAK attack was discovered,
nearly two in five web servers on the Internet were vulnerable to this trick. The broader Logjam attack
applied to up to two-thirds of virtual private network connections, both foreign and domestic, making them
vulnerable to surveillance by sophisticated attackers. FREAK and Logjam present object lessons in why
government policies encouraging insecure systems can lead to vulnerabilities even decades after the
policy changes. Secure systems are now easier to export. But a rule proposed by the Department of Commerce's Bureau of Industry and
Security may broaden the export-licensing regime long applied to security software using cryptography to cover nearly all computer security
technology. Onerous licensing requirements for cryptographic products have made U.S. companies less globally competitive. In fact, since it can
be easier to import secure products than to get a license to export them, some companies have outsourced the development of these products
to foreign subsidiaries or “inverted” their headquarters abroad. These controls stem from the Wassenaar Arrangement on Export Controls for
Conventional Arms and Dual-Use Goods and Technologies, a multilateral organization of 41 countries that aims to promote global security by
restricting trade in conventional arms and dual-use technologies (those with both a military and a civilian application). There certainly are
security products that might reasonably be subject to export control. Today there is a thriving trade in undisclosed software vulnerabilities and
in surveillance-enabling equipment sold to states with unsavory human rights records. But the proposed rules are written broadly and could
apply to products that are purely defensive in nature, such as tools meant to assist programmers in avoiding common pitfalls by scanning for
common patterns of vulnerability, or even generic tools for writing large software systems, such as source code editors that are not specific to
security software. Again,
the government is viewing cybersecurity policy mostly as a military problem without
considering the interests of ordinary citizens and businesses. The policy landscape is not, however, without
hope. Congress has now passed legislation to limit the scope of some NSA surveillance programs, a clear
signal that it sees little benefit to open-ended surveillance as a strategy for security, online or otherwise. And in a speech on May 20, Assistant
Attorney General Leslie R. Caldwell spoke at length about the insufficiency and inadvisability of “hacking back” as a defensive tactic for U.S.
companies. Current cybersecurity policy isn’t achieving its goals. As
revised policy emerges, it will be important to
remember that increasing overall security for citizens and the private sector can be effectively balanced
with national security, military, and intelligence goals. We're a long way from complete cybersecurity, but we can move
toward a system that’s significantly more effective than the one we have now.
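The FREAK and Logjam vulnerabilities described above come down to arithmetic: export-grade key sizes are small enough to search. The Python sketch below is a toy illustration, not the real attack (actual Logjam exploitation used the number field sieve against 512-bit primes rather than naive search, and the tiny prime and names here are invented for illustration), but it shows why a deliberately undersized Diffie-Hellman group hands the session secret to any eavesdropper.

# Toy illustration of why export-grade Diffie-Hellman fails: with a
# small modulus, an eavesdropper recovers a working exponent by
# exhaustive search. Feasible here only because p is hopelessly tiny.
import secrets

p = 104_729   # small public prime, deliberately undersized
g = 5         # public generator

alice_secret = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_secret, p)   # the only value sent on the wire

bob_secret = secrets.randbelow(p - 2) + 1
bob_public = pow(g, bob_secret, p)

def break_dh(p, g, public):
    # Brute-force the discrete log of `public` base g modulo p.
    value = 1
    for exponent in range(1, p):
        value = (value * g) % p
        if value == public:
            return exponent
    return None

recovered = break_dh(p, g, alice_public)
assert pow(g, recovered, p) == alice_public

# Knowing a working exponent, the eavesdropper computes the shared
# secret exactly as Alice would:
eavesdropped = pow(bob_public, recovered, p)
assert eavesdropped == pow(alice_public, bob_secret, p)
print("shared secret recovered:", eavesdropped)

Scaling the same search to a modern 2048-bit modulus would take longer than the age of the universe, which is the entire case against mandating weakened parameters.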
Solvency
The plan solves - strong encryption key to the internet.
Kehl, 15
Danielle Kehl is a senior policy analyst at New America's Open Technology Institute, BA cum laude Yale
6-17-2015, "Doomed To Repeat History? Lessons From The Crypto Wars Of The 1990s," New America,
https://www.newamerica.org/oti/doomed-to-repeat-history-lessons-from-the-crypto-wars-of-the-1990s/
Strong encryption has become a bedrock technology that protects the security of the internet. The evolution
of the ecosystem for encrypted communications has also enhanced the protection of individual communications and improved cybersecurity.
Today, strong encryption is an essential ingredient in the overall security of the modern network, and
adopting technologies like HTTPS is increasingly considered an industry best-practice among major technology companies.177 Even the report
of the President’s Review Group on Intelligence and Communications Technologies, the panel of experts appointed by President Barack Obama
to review the NSA’s surveillance activities after the 2013 Snowden leaks, was unequivocal in its emphasis on the importance of strong
encryption to protect data in transit and at rest. The Review Group wrote that: Encryption
is an essential basis for trust on the
Internet; without such trust, valuable communications would not be possible. For the entire system to
work, encryption software itself must be trustworthy. Users of encryption must be confident, and justifiably confident, that
only those people they designate can decrypt their data…. Indeed, in light of the massive increase in cyber-crime and intellectual property theft
on-line, the use of encryption should be greatly expanded to protect not only data in transit, but also data at rest on networks, in storage, and
in the cloud.178 The
report further recommended that the U.S. government should: Promote security[] by
(1) fully supporting and not undermining efforts to create encryption standards; (2) making clear that it
will not in any way subvert, undermine, weaken, or make vulnerable generally available commercial
encryption; and (3) supporting efforts to encourage the greater use of encryption technology for data in
transit, at rest, in the cloud, and in storage.179 Moreover, there is now a significant body of evidence that, as
Bob Goodlatte argued back in 1997, “Strong encryption prevents crime.”180 This has become particularly true as smartphones and
other personal devices that store vast amounts of user data have risen in popularity over the past decade. Encryption can stop or
mitigate the damage from crimes like identity theft and fraud targeted at smartphone users.181
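Kehl’s point about smartphones is concrete: if data at rest is encrypted under a key the thief never holds, a stolen device yields ciphertext and nothing else. A minimal sketch, assuming the third-party Python cryptography package is installed (pip install cryptography); the sample plaintext is invented.

# Encrypting data at rest with an authenticated symmetric scheme
# (Fernet = AES-CBC plus an HMAC integrity check).
from cryptography.fernet import Fernet, InvalidToken

owner_key = Fernet.generate_key()      # held only by the device owner
ciphertext = Fernet(owner_key).encrypt(b"card=4111-1111-1111-1111")

# A thief holding only the ciphertext (or guessing a wrong key)
# learns nothing, and any tampering is detected:
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: decryption and integrity check fail")

print(Fernet(owner_key).decrypt(ciphertext))   # owner reads the data back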
Backdoors and other vulnerabilities should be rejected; only the plan solves. The
consensus of academic computer scientists agrees.
Abadi, et al. 14
Martín Abadi Professor Emeritus, University of California, Santa Cruz; Hal Abelson Professor, Massachusetts Institute of Technology; Alessandro
Acquisti Associate Professor, Carnegie Mellon University; Boaz Barak Editorial-board member, Journal of the ACM; Mihir Bellare Professor,
University of California, San Diego; Steven Bellovin Professor, Columbia University; Matt Blaze Associate Professor, University of Pennsylvania;
L. Jean Camp Professor, Indiana University; Ran Canetti Professor, Boston University and Tel Aviv University; Lorrie Faith Cranor Associate
Professor, Carnegie Mellon University; Cynthia Dwork Member, US National Academy of Engineering; Joan Feigenbaum Professor, Yale
University; Edward Felten Professor, Princeton University; Niels Ferguson Author, Cryptography Engineering: Design Principles and Practical
Applications; Michael Fischer Professor, Yale University; Bryan Ford Assistant Professor, Yale University; Matthew Franklin Professor, University
of California, Davis; Juan Garay Program Committee Co-Chair, CRYPTO 2014; Matthew Green Assistant Research Professor, Johns Hopkins
University; Shai Halevi Director, International Association for Cryptologic Research; Somesh Jha Professor, University of Wisconsin – Madison;
Ari Juels Program Committee Co-Chair, 2013 ACM Cloud-Computing Security Workshop; M. Frans Kaashoek Professor, Massachusetts Institute
of Technology; Hugo Krawczyk Fellow, International Association for Cryptologic Research; Susan Landau Author, Surveillance or Security? The
Risks Posed by New Wiretapping Technologies; Wenke Lee Professor, Georgia Institute of Technology; Anna Lysyanskaya Professor, Brown
University; Tal Malkin Associate Professor, Columbia University; David Mazières Associate Professor, Stanford University; Kevin McCurley
Fellow, International Association for Cryptologic Research; Patrick McDaniel Professor, The Pennsylvania State University; Daniele Micciancio
Professor, University of California, San Diego; Andrew Myers Professor, Cornell University; Rafael Pass Associate Professor, Cornell University;
Vern Paxson Professor, University of California, Berkeley; Jon Peha Professor, Carnegie Mellon University; Thomas Ristenpart Assistant
Professor, University of Wisconsin – Madison; Ronald Rivest Professor, Massachusetts Institute of Technology; Phillip Rogaway Professor,
University of California, Davis; Greg Rose Officer, International Association for Cryptologic Research; Amit Sahai Professor, University of
California, Los Angeles; Bruce Schneier Fellow, Berkman Center for Internet and Society, Harvard Law School; Hovav Shacham Associate
Professor, University of California, San Diego; Abhi Shelat Associate Professor, University of Virginia; Thomas Shrimpton Associate Professor,
Portland State University; Avi Silberschatz Professor, Yale University; Adam Smith Associate Professor, The Pennsylvania State University; Dawn
Song Associate Professor, University of California, Berkeley; Gene Tsudik Professor, University of California, Irvine; Salil Vadhan Professor,
Harvard University; Rebecca Wright Professor, Rutgers University; Moti Yung Fellow, Association for Computing Machinery; Nickolai Zeldovich
Associate Professor, Massachusetts Institute of Technology; "An
open letter from US researchers in cryptography and
information security." (2014). http://people.csail.mit.edu/rivest/pubs/Ax14.pdf
Media reports since last June have revealed that the US government conducts domestic and international
surveillance on a massive scale, that it engages in deliberate and covert weakening of Internet security
standards, and that it pressures US technology companies to deploy backdoors and other data-collection
features. As leading members of the US cryptography and information-security research communities,
we deplore these practices and urge that they be changed. Indiscriminate collection, storage, and processing of
unprecedented amounts of personal information chill free speech and invite many types of abuse, ranging from mission creep to identity theft.
These are not hypothetical problems; they have occurred many times in the past. Inserting backdoors,
sabotaging standards, and tapping commercial data-center links provide bad actors, foreign and domestic,
opportunities to exploit the resulting vulnerabilities. The value of society-wide surveillance in preventing terrorism is
unclear, but the threat that such surveillance poses to privacy, democracy, and the US technology sector is
readily apparent. Because transparency and public consent are at the core of our democracy, we call upon the US
government to subject all mass-surveillance activities to public scrutiny and to resist the deployment of
mass-surveillance programs in advance of sound technical and social controls. In finding a way forward, the five
principles promulgated at http://reformgovernmentsurveillance.com/ provide a good starting point. The choice is not whether to allow the NSA
to spy. The
choice is between a communications infrastructure that is vulnerable to attack at its core
and one that, by default, is intrinsically secure for its users. Every country, including our own, must give
intelligence and law-enforcement authorities the means to pursue terrorists and criminals, but we can
do so without fundamentally undermining the security that enables commerce, entertainment, personal
communication, and other aspects of 21st-century life. We urge the US government to reject society-wide surveillance and
the subversion of security technology, to adopt state-of-the-art, privacy-preserving technology, and to ensure that new policies, guided by
enunciated principles, support human rights, trustworthy commerce, and technical innovation.
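The letter’s contrast between an infrastructure “vulnerable to attack at its core” and one “intrinsically secure” by default maps onto a concrete design choice. The sketch below is a hypothetical model of one commonly proposed backdoor, central key escrow, not any specific government proposal; the names are invented. It shows why a retained master copy of every key turns a single database breach into a compromise of every user at once.

# Hypothetical key-escrow backdoor: every user key is copied to a
# central store "for lawful access". One breach exposes everyone.
from cryptography.fernet import Fernet

escrow_db = {}                      # the backdoor: master copies of all keys

def backdoored_keygen(user):
    key = Fernet.generate_key()
    escrow_db[user] = key           # copy retained centrally
    return key

alice_key = backdoored_keygen("alice")
message = Fernet(alice_key).encrypt(b"meet at noon")

# Any party that steals the escrow database reads all traffic:
stolen_db = dict(escrow_db)
print(Fernet(stolen_db["alice"]).decrypt(message))

# In an end-to-end design there is no escrow_db to steal: keys live
# only on the endpoints, so no single breach is catastrophic.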
US should take the lead on encryption – other countries will follow.
Ranger, 15
Steve Ranger, UK editor of TechRepublic, 3-23-2015, "The undercover war on your internet secrets: How
online surveillance cracked our trust in the web," TechRepublic,
http://www.techrepublic.com/article/the-undercover-war-on-your-internet-secrets-how-online-surveillance-cracked-our-trust-in-the-web/
Back in the 1990s and 2000s, encryption was a complicated, minority interest. Now it is becoming easy
and mainstream, not just for authenticating transactions but for encrypting data and communications. Back then, it was also
mostly a US debate because that was where most strong encryption was developed. But that's no longer
the case: encryption software can be written anywhere and by anyone, which means no one country
can dictate global policy anymore. Consider this: the right to privacy has long been considered a qualified rather than an
absolute right — one that can be infringed, for example, on the grounds of public safety, or to prevent a crime, or in the interests of national
security. Few would agree that criminals or terrorists have the right to plot in secret. What
the widespread use of strong, well-implemented encryption does is promote privacy to an absolute right. If you have encrypted a hard drive or a
smartphone correctly, it cannot be unscrambled (or at least not for a few hundred thousand years). At a keystroke, it makes
absolute privacy a reality, and thus rewrites one of the fundamental rules by which societies have been
organised. No wonder the intelligence services have been scrambling to tackle our deliberately scrambled communications. And our fear of
crime — terrorism in particular — has created another issue. We have demanded that the intelligence services and law enforcement try to
reduce the risk of attack, and have accepted that they will gradually chip away at privacy in order to do that. However, what we haven't
managed as a society is to decide what is an acceptable level of risk that such terrible acts might occur.
Without that understanding
of what constitutes an acceptable level of risk, any reduction in our privacy or civil liberties — whether
breaking encryption or mass surveillance — becomes palatable. The point is often made that cars kill people and yet we
still drive. We need to have a better discussion about what is an acceptable level of safety that we as a society require, and what is the impact
on our privacy as a result. As the University of Surrey's Woodward notes: "Some of these things one might have to accept. Unfortunately there
might not be any easy way around it, without the horrible unintended consequences. You make your enemies less safe but you also make your
friends less safe by [attacking] encryption — and that is not a sensible thing to do." And while
the US can no longer dictate
policy on encryption, it could be the one to take a lead which others can follow. White House cybersecurity
coordinator Michael Daniel recently argued that, as governments and societies are still wrestling with the issue of encryption, the US
should come up with the policies and processes and "the philosophical underpinnings of what we want
to do as a society with this so we can make the argument for that around the planet... to say, this is how free
societies should come at this." But he doesn't underestimate the scale of the problem, either.
A shift in policy toward protecting infrastructure is key to averting the coming
clash between security and intelligence.
Joshua A. Kroll 15, doctoral candidate in computer science at Princeton University, where he works on
computer security and public policy issues at the university’s Center for Information Technology Policy,
6-1-2015, "The Cyber Conundrum," American Prospect, http://prospect.org/article/cyber-conundrum
Moving to Protect-First
Three months after NIST withdrew the DRBG standard, a review initiated by President Barack Obama called for a shift in policy. Regarding
encryption, the President’s Review Group on Intelligence and Communications Technologies
recommended that “the U.S. Government should: (1) fully support and not undermine efforts to create
encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally
available commercial software; and (3) increase the use of encryption and urge U.S. companies to do
so.” But there were few visible signals that policy had changed. “No foreign nation, no hacker,” Obama said in his 2015 State of the Union speech, “should be able to shut down our
networks, steal our trade secrets, or invade the privacy of American families.” But the nearly $14 billion requested for cybersecurity in the
president’s fiscal year 2016 budget proposal effectively supports and reinforces current undermine-first
policy, a policy that has failed to stop the flood of attacks on American businesses and the government itself by foreign intelligence services, weekend hacktivists, and common criminals.
A protect-first policy of bolstering security technologies would identify the most critical pieces of
security infrastructure, invest in making those defenses secure, and support their universal deployment.
Such a policy would emphasize support for universal end-to-end encryption tools such as secure web
browsing. A website is delivered securely when that site’s address starts with “https”—the ‘s’ stands for secure—and your browser puts a lock or key icon next to the address.
Browsers can load and display secure pages, guaranteeing that while the pages are in transit from server to user, the pages remain confidential and are protected from tampering, and that the user’s browser verifies that the server is not an impostor. At present, secure browsing is underused and underfunded, leading to troubling security lapses. A notorious example is the Heartbleed bug, disclosed in April of 2014. Heartbleed allowed attackers to reach out across the Internet and extract the contents of a computer’s memory, including encryption keys, passwords, and private information. Two-thirds of the websites on the Internet were vulnerable, along with countless computers embedded in cars, wireless routers, home appliances, and other equipment. Because exploitation via Heartbleed usually did not leave a record, the full consequences of Heartbleed will almost certainly never be known. All of this was due to a single programming error in a software package called OpenSSL, which is used by the majority of websites that provide secure pages. By any measure,
OpenSSL is a core piece of our cyber infrastructure. Yet it has been maintained by a very small team of developers—in the words of one journalist, “two guys named Steve”—and the
foundation supporting it never had a budget reaching even $1 million per year. Despite its central role in web security, OpenSSL had never undergone a careful security audit. Matthew Green,
a cryptographer at Johns Hopkins University and an outspoken critic of OpenSSL, said after Heartbleed that “the OpenSSL Foundation has some very devoted people, it just doesn’t have
enough of them, and it can’t afford enough of them.” Since the Heartbleed attack, a consortium of companies, including some of the biggest names in the Internet business, pledged
contributions of a few million dollars to start the Core Infrastructure Initiative (CII), a grant-making process for security audits of important infrastructure components like OpenSSL. CII’s
budget of a few million dollars is nowhere near the few hundred million now devoted to the NSA’s SIGINT Enabling program, but it is a start. A more proactive government policy would provide ample funding for security audits. By leaving OpenSSL to its own devices, government perpetuates the status quo and implicitly rejects a protect-first strategy. A similar
situation applies to encrypted email, the state of which is well conveyed by a recent ProPublica headline: “The World’s Email Encryption Software Relies on One Guy, Who is Going Broke.”
Werner Koch, the author and maintainer of the software Gnu Privacy Guard—the most popular tool for encrypted email and a piece of critical security infrastructure used to verify the integrity
of operating system updates on the most popular operating system for web servers—had been getting by on donations of $25,000 per year since 2001, and a new online fund drive was
bringing only modest donations. The ProPublica piece brought attention to Koch’s plight, and a few hundred thousand dollars of donations poured in, enabling Koch to keep maintaining GPG.
It was a success, of a sort. But passing the digital hat for donations is not a sustainable way to fund a critical security infrastructure.
The Limitations of Surveillance
Meanwhile, although precise
numbers are hard to come by, one estimate is that 0.64 percent of U.S. gross domestic product is lost to cyber crime, an over–$400 billion global growth industry. Despite the fact that a
cyberattack can decimate a company’s operations and pry loose its secrets, and despite billions of dollars in annual direct losses to foreign governments and criminals, the most popular
systems for secure web page delivery and encrypted email get only crumbs from the $14 billion U.S. government cybersecurity budget. Instead, the government usually treats cybersecurity as a military or intelligence problem and therefore tends to look first to the military and the intelligence community for a solution. The result is massive surveillance that gathers situational awareness, hoping to connect the dots to find and stop attacks. Some surveillance happens quietly, coming into the public eye only through leaks and investigative journalism. Some happens more openly, under the guise of “information sharing” between companies and government. Surveillance of adversaries, both overseas and domestically with an appropriate court order, is prudent and necessary to prevent attacks and inform diplomatic and military decisions. Universal domestic surveillance is harder to justify on the merits.
Officials argue that they need all of the data if we want them to connect the dots. But the problem is not a lack of dots. More often, the problem is that the dots can be connected in too many ways. There is no reliable way to tell in advance which pattern marks an impending attack and which simply reflects one of the endless permutations of human social behavior. Surveillance data is more useful in hindsight. In the Sony Pictures hack, intelligence and investigation were critical in connecting the dots after the attack had happened, even though they did very little to prevent the attack or to discover it in the year or so that it was ongoing. Aggressive surveillance has limited efficacy and imposes real costs on U.S. companies. Users who are suspicious of the U.S. government—a group including most foreign users and more than a few Americans—want to steer clear of products and companies that might be complicit in surveillance. Foreign companies market themselves as more trustworthy because, unlike American companies, they can defy information demands from U.S. authorities. Analysts estimate that U.S. companies will lose at least tens of billions of dollars of business due to users’ surveillance concerns. At the same time, news of U.S. government demands for data emboldens demands for similar access by other governments—including countries with
much weaker civil liberties records. Anything that facilitates U.S. government access will facilitate access by other governments. Industry worries, too, about direct government attacks on their
infrastructures. That is exactly what happened when the NSA tapped into the private communications lines that Google, Yahoo, and other major Internet companies use to move data
internally, enabling the NSA to capture information on the users of those systems without any request or notification. Consequently, the Internet companies are seen as either complicit or
vulnerable—or both. The rift between government and industry was visible at the White House Summit on Cybersecurity and Consumer Protection, held at Stanford University on February 13.
Obama called for “new legislation to promote greater information sharing between government and private sector, including liability protections for companies that share information about
cyber threats,” and announced that “our new Cyber Threat Intelligence Integration Center [will be] a single entity that’s analyzing and integrating and quickly sharing intelligence about cyber
threats across government so we can act on all those threats even faster.” After the speech, he signed an executive order implementing these proposals. To the president, cyber defense means collecting more information and using it more aggressively—a policy of undermining and surveillance.
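The protect-first alternative Kroll describes rests on two guarantees that every “https” connection already checks: the channel is encrypted, and the server’s identity is verified. A minimal sketch using only Python’s standard library (the hostname is an arbitrary example):

# What "https" buys: the default SSLContext encrypts the channel and
# verifies the server's certificate chain and hostname, so impostors
# and on-path tampering cause a hard failure instead of silent loss.
import socket
import ssl

hostname = "www.example.com"                 # any HTTPS site works here
context = ssl.create_default_context()       # verification on by default

with socket.create_connection((hostname, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=hostname) as tls:
        print("protocol:", tls.version())            # e.g. TLSv1.3
        print("cipher:  ", tls.cipher()[0])
        print("subject: ", tls.getpeercert()["subject"])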