Monday afternoon rotation group and drill

advertisement
Monday afternoon
Groups and today’s rotation
Note – groups (and Houes) were chosen at random… it would honestly be a mistake to read anything
into it. Group composition will change around going forward.
Group Slytherin
Start at 1:45pm in room 220 – Aff with Darcell… then rotate at 3:15pm to room
222 with Strauss (neg)
Anthony-Ritik
Tristan-Rush
Michael-Alex B.
Saya-Niti
Group Gryffindor
Start at 1:45pm in room 222 with Strauss (neg) … then rotate at 3:15pm
to room 220 – Aff with Darcell
Grace Heather-Ally
Kelsey-Aishanee
Harris-Max
Manav-Alex
Group Ravenclaw
Start at 1:45pm in room 218 with Lauren (neg)… Then rotate at 3:15pm to room
230 – Aff with Will…
William-Akash
Alanna-Katherine
Eshaan-Ribhu
John F. –Sachet
Group Hufflepuff
with Lauren (neg)
Megan-Brian
Julian-Ben
Start at 1:45pm in room 230 – Aff with Will… then rotate at 3:15pm to room 218
Lauren-Serena
Adi-Akhil
Speech Set-up #1 – you are with Darcell or
Will
1AC Materials for speech 1.0
Plan
Plan Option #2:
The United States federal government should substantially curtail its domestic surveillance by
strengthening the USA FREEDOM Act to:
 require use of a “specific selection term” to satisfy the “reasonable, articulable suspicion
standard”
 require that information collected through “pen register or trap and trace devices” via
emergency authorizations be subject to the same procedural safeguards as non-emergency
collections.
 require “super minimization" procedures that delete information obtained about a person not
connected to the investigation.
Advantage One - Privacy
Privacy outweighs.
- Utilitarian impact calc is biased. It inflates the disad’s risk; and
- Reject Surveillance as a structural matter of power – even when “reformed”,
innocents experience powerless unless neutral oversight’s in place.
Solove ‘7
Daniel Solove is an Associate Professor at George Washington University Law School and holds a J.D. from Yale Law School. He
is one of the world’s leading expert in information privacy law and is well known for his academic work on privacy and for
popular books on how privacy relates with information technology. He has written 9 books and more than 50 law review
articles – From the Article ““I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy” - San Diego Law Review, Vol.
44, p. 745 - GWU Law School Public Law Research Paper No. 289 – available from download at:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=998565
It is time to return to the
nothing to hide argument. The reasoning of this argument is that when it comes to government
surveillance or use of personal data, there is no privacy violation if a person has nothing sensitive, embarrassing, or illegal
to conceal. Criminals involved in illicit activities have something to fear, but for the vast majority of people, their activities are not
illegal or embarrassing. Understanding privacy as I have set forth reveals the flaw of the nothing to hide argument at its roots. Many commentators who respond to
the argument attempt a direct refutation by trying to point to things that people would want to hide. But the problem with the nothing to hide argument is the
underlying assumption that privacy is about hiding bad things. Agreeing
with this assumption concedes far too much ground
and leads to an unproductive discussion of information people would likely want or not want to hide. As Bruce Schneier aptly notes, the
nothing to hide argument stems from a faulty “premise that privacy is about hiding a wrong.”75 The deeper
problem with the nothing to hide argument is that it myopically views privacy as a form of concealment or secrecy. But understanding privacy as a plurality of
related problems demonstrates that concealment of bad things is just one among many problems caused by government programs such as the NSA surveillance and
data mining. In the categories in my taxonomy, several problems are implicated. The
NSA programs involve problems of information
collection, specifically the category of surveillance in the taxonomy. Wiretapping involves audio surveillance of people’s conversations. Data mining often
begins with the collection of personal information, usually from various third parties that possess people’s data. Under current Supreme Court Fourth Amendment
jurisprudence, when the government gathers data from third parties, there is no Fourth Amendment protection because people lack a “reasonable expectation of
privacy” in information exposed to others.76 In United States v. Miller, the Supreme Court concluded that there is no reasonable expectation of privacy in bank
records because “[a]ll of the documents obtained, including financial statements and deposit slips, contain only information voluntarily conveyed to the banks and
exposed to their employees in the ordinary course of business.”77 In Smith v. Maryland, the Supreme Court held that people lack a reasonable expectation of
privacy in the phone numbers they dial because they “know that they must convey numerical information to the phone company,” and therefore they cannot
“harbor any general expectation that the numbers they dial will remain secret.”78 As I have argued extensively elsewhere, the lack of Fourth Amendment
protection of third party records results in the government’s ability to access an extensive amount of personal information with minimal limitation or oversight.79
Many scholars have referred to information collection as a form of surveillance. Dataveillance, a term coined by Roger Clarke, refers to the “systemic use of
personal data systems in the investigation or monitoring of the actions or communications of one or more persons.”80 Christopher Slobogin has referred to the
gathering of personal information in business records as “transaction surveillance.”81 Surveillance
can create chilling effects on free
speech, free association, and other First Amendment rights essential for democracy.82 Even surveillance
of legal activities can inhibit people from engaging in them. The value of protecting against chilling
effects is not measured simply by focusing on the particular individuals who are deterred from
exercising their rights. Chilling effects harm society because, among other things, they reduce the range of
viewpoints expressed and the degree of freedom with which to engage in political activity. The nothing to hide
argument focuses primarily on the information collection problems associated with the NSA programs. It contends that limited surveillance of lawful activity will not
chill behavior sufficiently to outweigh the security benefits. One can certainly quarrel with this argument, but one of the difficulties with chilling effects is that it is
often very hard to demonstrate concrete evidence of deterred behavior.83 Whether the NSA’s surveillance and collection of telephone records has deterred people
from communicating particular ideas would be a difficult question to answer. Far too often, discussions of the NSA surveillance and data mining define the problem
solely in terms of surveillance. To return to my discussion of metaphor, the problems are not just Orwellian, but Kafkaesque. The
NSA programs are
problematic even if no information people want to hide is uncovered. In The Trial, the problem is not inhibited
behavior, but rather a suffocating powerlessness and vulnerability created by the court system’s use of personal data
and its exclusion of the protagonist from having any knowledge or participation in the process. The harms consist of those created by bureaucracies—indifference,
errors, abuses, frustration, and lack of transparency and accountability. One such harm, for example, which I call aggregation, emerges from the combination of
small bits of seemingly innocuous data.84 When combined, the information becomes much more telling about a person. For the person who truly has nothing to
hide, aggregation is not much of a problem. But in the stronger, less absolutist form of the nothing to hide argument, people argue that certain pieces of
information are not something they would hide. Aggregation, however, means that by combining pieces of information we might not care to conceal, the
government can glean information about us that we might really want to conceal. Part of the
allure of data mining for the government is
its ability
to reveal a lot about our personalities and activities by sophisticated means of analyzing data. Therefore, without greater
transparency in data mining, it is hard to claim that programs like the NSA data mining program will not reveal
information people might want to hide, as we do not know precisely what is revealed. Moreover, data mining aims to be predictive of
behavior, striving to prognosticate about our future actions. People who match certain profiles are deemed likely to engage in a similar pattern of behavior. It is
quite difficult to refute actions that one has not yet done. Having nothing to hide will not always dispel predictions of future activity. Another
problem in
the taxonomy, which is implicated by the NSA program, is the problem I refer to as exclusion.85 Exclusion is the problem caused when people are
prevented from having knowledge about how their information is being used, as well as barred from being able to access and correct errors in that data. The NSA
program involves a massive database of information that individuals cannot access. Indeed, the very existence of the program was kept secret for years.86 This kind
of information processing, which forbids people’s knowledge or involvement, resembles in some ways a kind of due process problem. It is a structural problem
involving the way people are treated by government institutions. Moreover, it creates a power imbalance between individuals and the government. To what extent
should the Executive Branch and an agency such as the NSA, which is relatively insulated from the political process and public accountability, have a significant
power over citizens? This
issue is not about whether the information gathered is something people want to
hide, but rather about the power and the structure of government. A related problem involves “secondary use.” Secondary
use is the use of data obtained for one purpose for a different unrelated purpose without the person’s consent. The Administration has said little about how long
the data will be stored, how it will be used, and what it could be used for in the future. The potential future uses of any piece of personal information are vast, and
without limits or accountability on how that information is used, it is hard for people to assess the dangers of the data being in the government’s control. Therefore,
the problem with the nothing to hide argument is that it focuses on just one or two particular kinds of privacy problems—the disclosure of personal information or
the terms for debate in a manner that is often unproductive. It is
important to distinguish here between two ways of justifying a program such as the NSA surveillance and data mining program. The first way is
surveillance—and not others. It assumes a particular view about what privacy entails, and it sets
to not recognize a problem. This is how the nothing to hide argument works—it denies even the existence of a problem. The second manner of justifying such a
program is
to acknowledge the problems but contend that the benefits of the NSA program outweigh the
privacy harms. The first justification influences the second, because the low value given to privacy is based upon a narrow view of the problem. The
key misunderstanding is that the nothing to hide argument views privacy in a particular way—as a form of secrecy, as the
right to hide things. But there are many other types of harm involved beyond exposing one’s secrets to the
government. Privacy problems are often difficult to recognize and redress because they create a panoply of types of harm. Courts, legislators, and others
look for particular types of harm to the exclusion of others, and their narrow focus blinds them to seeing other kinds of harms. One of the difficulties with the
nothing to hide argument is that it looks for a visceral kind of injury as opposed to a structural one. Ironically, this
underlying conception of injury is shared by both those advocating for greater privacy protections and those arguing in favor of the conflicting
interests to privacy. For example, law professor Ann Bartow argues that I have failed to describe privacy harms in a compelling manner in
my article, A Taxonomy of Privacy, where I provide a framework for understanding the manifold different privacy problems.87 Bartow’s primary complaint is that
my taxonomy “frames privacy harms in dry, analytical terms that fail to sufficiently identify and animate the compelling ways that privacy violations can negatively
impact the lives of living, breathing human beings beyond simply provoking feelings of unease.”88 Bartow claims that the taxonomy does
not have
“enough dead bodies” and that privacy’s “lack of blood and death, or at least of broken bones and buckets of money,
distances privacy harms from other categories of tort law. Most privacy problems lack dead bodies. Of course,
there are exceptional cases such as the murders of Rebecca Shaeffer and Amy Boyer. Rebecca Shaeffer was an actress killed when a stalker obtained her address
from a Department of Motor Vehicles record.90 This incident prompted Congress to pass the Driver’s Privacy Protection Act of 1994.91 Amy Boyer was murdered by
a stalker who obtained her personal information, including her work address and Social Security number, from a database company.92 These examples aside, there
is not a lot of death and gore in privacy law. If this is the standard to recognize a problem, then few privacy problems will be recognized. Horrific cases are not
typical, and the purpose of my taxonomy is to explain why most privacy problems are still harmful despite this fact. Bartow’s objection is actually very similar to the
nothing to hide argument. Those
advancing the nothing to hide argument have in mind a particular kind of visceral privacy harm, one
for horror stories represents a similar
desire to find visceral privacy harms. The problem is that not all privacy harms are like this. At the end of the day, privacy is not a horror movie, and
demanding more palpable harms will be difficult in many cases. Yet there is still a harm worth addressing, even if it is not
sensationalistic. In many instances, privacy is threatened not by singular egregious acts, but by a slow series of relatively minor acts which gradually begin
where privacy is violated only when something deeply embarrassing or discrediting is revealed. Bartow’s quest
to add up. In this way, privacy problems resemble certain environmental harms which occur over time through a series of small acts by different actors. Bartow
wants to point to a major spill, but gradual pollution by a multitude of different actors often creates worse problems. The law frequently struggles with recognizing
harms that do not result in embarrassment, humiliation, or physical or psychological injury.93 For example, after the September 11 attacks, several airlines gave
their passenger records to federal agencies in direct violation of their privacy policies. The federal agencies used the data to study airline security.94 A group of
passengers sued Northwest Airlines for disclosing their personal information. One of their claims was that Northwest Airlines breached its contract with the
passengers. In Dyer v. Northwest Airlines Corp., the court rejected the contract claim because “broad statements of company policy do not generally give rise to
contract claims,” the passengers never claimed they relied upon the policy or even read it, and they “failed to allege any contractual damages arising out of the
alleged breach.”95 Another court reached a similar conclusion.96 Regardless of the merits of the decisions on contract law, the cases represent a difficulty with the
legal system in addressing privacy problems. The disclosure of the passenger records represented a “breach of confidentiality.”97 The problems caused by breaches
of confidentiality do not merely consist of individual emotional distress; they involve a violation of trust within a relationship. There is a strong social value in
ensuring that promises are kept and that trust is maintained in relationships between businesses and their customers. The problem of secondary use is also
implicated in this case.98 Secondary use involves data collected for one purpose being used for an unrelated purpose without people’s consent. The airlines gave
passenger information to the government for an entirely different purpose beyond that for which it was originally gathered. Secondary use problems often do not
cause financial, or even psychological, injuries. Instead, the harm is one of power imbalance. In Dyer, data was disseminated in a way that ignored airline
passengers’ interests in the data despite promises made in the privacy policy. Even if the passengers were unaware of the policy, there is a social value in ensuring
that companies adhere to established limits on the way they use personal information. Otherwise, any stated limits become meaningless, and companies have
discretion to boundlessly use data. Such a state of affairs can leave nearly all consumers in a powerless position. The harm, then, is less one to particular individuals
than it is a structural harm. A similar problem surfaces in another case, Smith v. Chase Manhattan Bank.99 A group of plaintiffs sued Chase Manhattan Bank for
selling customer information to third parties in violation of its privacy policy, which stated that the information would remain confidential. The court held that even
presuming these allegations were true, the plaintiffs could not prove any actual injury: [T]he “harm” at the heart of this purported class action, is that class
members were merely offered products and services which they were free to decline. This does not qualify as actual harm. The complaint does not allege any single
instance where a named plaintiff or any class member suffered any actual harm due to the receipt of an unwanted telephone solicitation or a piece of junk mail.100
The court’s view of harm, however, did not account for the breach of confidentiality. When
balancing privacy against security, the privacy
harms are often characterized in terms of injuries to the individual, and the interest in security is often
characterized in a more broad societal way. The security interest in the NSA programs has often been
defined improperly. In a Congressional hearing, Attorney General Alberto Gonzales stated: Our enemy is listening, and I cannot help but wonder if they
are not shaking their heads in amazement at the thought that anyone would imperil such a sensitive program by leaking its existence in the first place, and smiling
at the prospect that we might now disclose even more or perhaps even unilaterally disarm ourselves of a key tool in the war on terror.101 The balance between
privacy and security is often cast in terms of whether a particular government information collection activity should or should not be barred. The
issue,
however, often is
not whether the NSA or other government agencies should be allowed to engage in particular forms of
information gathering; rather, it is what kinds of oversight and accountability we want in place when the
government engages in searches and seizures. The government can employ nearly any kind of investigatory activity with a warrant supported by
probable cause. This is a mechanism of oversight—it forces government officials to justify their suspicions
to a neutral judge or magistrate before engaging in the tactic. For example, electronic surveillance law allows for wiretapping, but limits the practice with
judicial supervision, procedures to minimize the breadth of the wiretapping, and requirements that the law enforcement officials report back to the court to prevent
abuses.102 It is these procedures that the Bush Administration has ignored by engaging in the warrantless NSA surveillance. The question is not whether we want
the government to monitor such conversations, but whether the Executive Branch should adhere to the appropriate oversight procedures that Congress has
enacted into law, or should covertly ignore any oversight. Therefore, the security interest should not get weighed in its totality against the privacy interest. Rather,
what should get weighed is the extent of marginal limitation on the effectiveness of a government information gathering or data mining program by imposing
judicial oversight and minimization procedures. Only in cases where such procedures will completely impair the government program should the security interest be
weighed in total, rather than in the marginal difference between an unencumbered program versus a limited one. Far too
often, the balancing of
privacy interests against security interests takes place in a manner that severely shortchanges the privacy interest
while inflating the security interests. Such is the logic of the nothing to hide argument. When the argument is unpacked,
and its underlying assumptions examined and challenged, we can see how it shifts the debate to its terms, in
which it draws power from its unfair advantage. It is time to pull the curtain on the nothing to hide argument. Whether explicit
or not, conceptions of privacy underpin nearly every argument made about privacy, even the common quip “I’ve got nothing to hide.” As I have sought to
demonstrate in this essay, understanding privacy as a pluralistic conception reveals that we are often talking past each other when discussing privacy issues. By
focusing more specifically on the related problems under the rubric of “privacy,” we can better address each problem rather than ignore or conflate them. The
nothing to hide argument speaks to some problems, but not to others. It represents
a singular and narrow way of conceiving of
privacy, and it wins by excluding consideration of the other problems often raised in government
surveillance and data mining programs. When engaged with directly, the nothing to hide argument can ensnare, for it forces the debate to focus on its
narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government
data collection and use beyond surveillance and disclosure, the nothing to hide argument, in the end, has nothing to say.
Reject utilitarianism and put privacy before security. The ballot should create a side
constraint where ends don’t justify the means. This is especially applies to data
collection in the absence of probable cause.
Albright ‘14
Logan Albright is the Research Analyst at FreedomWorks, and is responsible for producing a wide variety of written content for
print and the web, as well as conducting research for staff media appearances and special projects. He received his Master’s
degree in economics from Georgia State University. “The NSA's Collateral Spying” – Freedom Works - 07/08/2014 http://www.freedomworks.org/content/nsas-collateral-spying
In short, the report, based on information obtained by Edward Snowden, reveals that during the course of its ordinary, otherwise legal surveillance operations,
the NSA also collected data on large numbers of people who were not specifically targeted. The agency
calls this practice “incidental surveillance.” I call it “collateral spying.” The report found that, on average, 9 out of every 10 people
spied on were not the intended target. The NSA has the legal authority to obtain a warrant based on
probable cause in order to surveil an individual. No one is disputing that. But when this targeting results
in collateral spying on vast numbers of innocents, in the absence of probable cause and the corresponding warrants, that is a
major problem. The NSA has asserted that such incidental data collection is inevitable, and to a certain extent that’s likely true. It is
understandable that in some situations the NSA may learn information about people other than the direct target, but this should obviously be
minimized as far as possible, and at the very least the information should be immediately purged from
government databases, not stored for years on end. In any case, the whole situation is indicative of the agency’s cavalier attitude
towards individual rights. While national security is a concern we all share, the ends do not justify the means when
those means involve violate the constitutional protections afforded to citizens by our nation’s founders. It is not okay to
violate the rights of an innocent in the process of achieving a broader goal, even if that goal is noble.
The way the NSA has been behaving is Machiavellian in the most literal sense. In his 16th century political treatise, The Prince, Niccolo
Machiavelli recognized a harsh reality of politics that still plagues us half a millennium later, writing, “A prince wishing to keep his state is very often forced to do
evil.” Taking Machiavelli’s advice as a
green light for immoral behavior has been the problem with governments throughout history, a problem
the founding fathers sought to avoid by setting down precise guidelines for what the government could and could not do in the form of a Constitution. The
disregard of these rules, and the
argument that there should be a national security exception to the Fourth
Amendment, undermines the entire purpose of the American experiment, and restores the European-style tyrannies the revolutionaries fought
against.
Even in a utilitarian framework, privacy outweighs due to relative certainty. The disad
only may cause violence - surveillance definitely does. Privacy is paramount for dignity
and protecting our unique individuality.
Schneier ‘6
Bruce Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School, a program fellow at the New
America Foundation's Open Technology Institute and the CTO of Resilient Systems. He is the author of Beyond Fear: Thinking
Sensibly About Security in an Uncertain World. Commentary, “The Eternal Value of Privacy”, WIRED, May 18, 2006,
http://www.wired.com/news/columns/1,70886-0.html
The most common retort against privacy advocates -- by those in favor of ID checks, cameras, databases, data mining and
other wholesale surveillance measures -- is this line: "If you aren't doing anything wrong, what do you have to
hide?" Some clever answers: "If I'm not doing anything wrong, then you have no cause to watch me."
"Because the government gets to define what's wrong, and they keep changing the definition." "Because you might do something wrong
with my information." My problem with quips like these -- as right as they are -- is that they accept the premise
that privacy is about hiding a wrong. It's not. Privacy is an inherent human right, and a requirement for
maintaining the human condition with dignity and respect. Two proverbs say it best: Quis custodiet custodes ipsos? ("Who watches the
watchers?") and "Absolute power corrupts absolutely." Cardinal Richelieu understood the value of surveillance when he famously said, "If one would give me six
lines written by the hand of the most honest man, I would find something in them to have him hanged." Watch someone long enough, and you'll find something to
arrest -- or just blackmail -- with. Privacy
is important because without it, surveillance information will be abused: to
protects us from abuses by those
in power, even if we're doing nothing wrong at the time of surveillance. We do nothing wrong when we make love or go to
peep, to sell to marketers and to spy on political enemies -- whoever they happen to be at the time. Privacy
the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy
of the shower, and write letters to secret lovers and then burn them. Privacy
is a basic human need. A future in which privacy would face
constant assault was so alien to the framers of the Constitution that it never occurred to them to call out privacy as an explicit right. Privacy was inherent to the
nobility of their being and their cause. Of course being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be inconceivable
among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It's intrinsic to the concept of liberty. For
if we
are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our
own uniqueness. We become children, fettered under watchful eyes, constantly fearful that -- either now or in the
uncertain future -- patterns we leave behind will be brought back to implicate us, by whatever authority
has now become focused upon our once-private and innocent acts. We lose our individuality, because
everything we do is observable and recordable. How many of us have paused during conversation in the past fourand-a-half years, suddenly aware that we might be eavesdropped on? Probably it was a phone conversation, although
maybe it was an e-mail or instant-message exchange or a conversation in a public place. Maybe the topic was terrorism, or
politics, or Islam. We stop suddenly, momentarily afraid that our words might be taken out of context, then
we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered. This is
the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein's Iraq. And it's our future as we allow an
ever-intrusive eye into our personal, private lives. Too
many wrongly characterize the debate as "security versus privacy."
The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or
under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus
privacy. Widespread police surveillance is the very definition of a police state. And that's why we should
champion privacy even when we have nothing to hide.
Advantage Two - Humint
The squo relies Big Data surveillance. That means info overload & less HUMINT.
Volz, 14
(Dustin, The National Journal, “Snowden: Overreliance on Mass Surveillance Abetted Boston Marathon Bombing: The former
NSA contractor says a focus on mass surveillance is impeding traditional intelligence-gathering efforts—and allowing terrorists
to succeed”, October 20, 2014, ak.)
Edward Snowden on Monday suggested
that if the National Security Agency focused more on traditional
intelligence gathering—and less on its mass-surveillance programs—it could have thwarted the 2013
Boston Marathon bombings. The fugitive leaker, speaking via video to a Harvard class, said that a
preoccupation with collecting bulk communications data has led to resource constraints at U.S.
intelligence agencies, often leaving more traditional, targeted methods of spying on the back burner. "We
miss attacks, we miss leads, and investigations fail because when the government is doing its 'collect it
all,' where we're watching everybody, we're not seeing anything with specificity because it is impossible to
keep an eye on all of your targets," Snowden told Harvard professor and Internet freedom activist Lawrence Lessig. "A good
example of this is, actually, the Boston Marathon bombings." Snowden said that Dzhokhar and Tamerlan Tsarnaev were pointed out by
Russian intelligence to U.S. officials prior to the bombings last year that killed three and left hundreds wounded, but
that such actionable intelligence was largely ignored. He argued that targeted surveillance on known
extremists and diligent pursuit of intelligence leads provides for better counterterrorism efforts than
mass spying. "We didn't really watch these guys and the question is, why?" Snowden asked. "The reality of that is
because we do have finite resources and the question is, should we be spending 10 billion dollars a year on
mass-surveillance programs of the NSA to the extent that we no longer have effective means of traditional
[targeting]?" Anti-spying activists have frequently argued that bulk data collection has no record of
successfully thwarting a terrorist attack, a line of argument some federal judges reviewing the NSA's programs have also used
in their legal reviews of the activities. Snowden's suggestion—that such mass surveillance has not only failed to directly stop
a threat, but actually makes the U.S. less safe by distracting resource-strapped intelligence officials from
performing their jobs—takes his criticism of spy programs to a new level. "We're watching everybody that we have no
reason to be watching simply because it may have value, at the expense of being able to watch specific people
for which we have a specific cause for investigating, and that's something that we need to look carefully
at how to balance," Snowden said.
Big Data kills human-intel – which is key to overall US operations.
Margolis ‘13
Gabriel Margolis – the author presently holds a Master of Arts (MA) in Conflict Management & Resolution from UNC
Wilmington and in his final semester of the program when this article was published in the peer-reviewed journal Global
Security Studies . Global Security Studies (GSS) is a premier academic and professional journal for strategic issues involving
international security affairs. All articles submitted to and published in Global Security Studies (GSS) undergo a rigorous, peerreviewed process. From the article: “The Lack of HUMINT: A Recurring Intelligence Problem” - Global Security Studies - Spring
2013, Volume 4, Issue 2http://globalsecuritystudies.com/Margolis%20Intelligence%20(ag%20edits).pdf
The United States has accumulated an unequivocal ability to collect intelligence as a result of the technological advances of the 20th
century. Numerous methods of collection have been employed in clandestine operations around the world
including those that focus on human, signals, geospatial, and measurements and signals intelligence. An infatuation
with technological methods of intelligence gathering has developed within many intelligence organizations, often
leaving the age old practice of espionage as an afterthought. As a result of the focus on technical methods, some of
the worst intelligence failures of the 20th century can be attributed to an absence of human intelligence. The
21st century has ushered in advances in technology have allowed UAVs to become the ultimate technical intelligence gathering
platform; however human intelligence is still being neglected. The increasing reliance on UAVs will make
the United States susceptible to intelligence failures unless human intelligence can be properly
integrated. In the near future UAVs may be able to gather human level intelligence, but it will be a long time before classical espionage is a
thing of the past.
BW and nuclear use coming. HUMINT is key to stay-ahead of these risks.
Johnson ‘9
Dr. Loch K. Johnson is Regents Professor of Political Science at the University of Georgia. He is editor of the journal "Intelligence
and National Security" and has written numerous books on American foreign policy. Dr. Johnson served as staff director of the
House Subcommittee on Intelligence Oversight from 1977 to 1979. Dr. Johnson earned his Ph.D. in political science from the
University of California at Riverside. "Evaluating "Humint": The Role of Foreign Agents in U.S. Security" Paper presented at the
annual meeting of the ISA's 50th ANNUAL CONVENTION "EXPLORING THE PAST, ANTICIPATING THE FUTURE", New York
Marriott Marquis, NEW YORK CITY, NY, USA, Feb 15, 2009 – available via:
http://citation.allacademic.com/meta/p_mla_apa_research_citation/3/1/0/6/6/p310665_index.html
The world is a dangerous place, plagued by the presence of terrorist cells; failed or failing states; competition for scarce
resources, such as oil, water, uranium, and food; chemical, biological, and nuclear weapons, not to mention bristling arsenals
of conventional armaments; and deep-seated animosities between rival nations and factions. For self-protection, if for no other
reason, government officials leaders seek information about the capabilities and—an especially elusive topic—the intentions of those
overseas (or subversives at home) who can inflict harm upon the nation. That is the core purpose of espionage: to
gather information about threats, whether external or internal, and to warn leaders about perils facing the homeland. Further, the secret services hope to
provide leaders with data that can help advance the national interest—the opportunity side of the security
equation. Through the practice of espionage—spying or clandestine human intelligence: whichever is one's favorite term—the
central task, stated baldly, is to steal secrets from adversaries as a means for achieving a more thorough
understanding of threats and opportunities in the world. National governments study information that is available in
the public domain (Chinese newspapers, for example), but knowledge gaps are bound to arise. A favorite metaphor for intelligence is the jigsaw
puzzle. Many of the pieces to the puzzle are available in the stacks of the Library of Congress or on the Internet; nevertheless, there will continue to be several
missing pieces—perhaps the most important ones. They may be hidden away in Kremlin vaults or in caves where members of Al Qaeda hunker down
in Pakistan's western frontier. The public pieces of the puzzle can be acquired through careful research; but often discovery of the missing secret pieces has to
rely on spying, if they can be found at all. Some things— "mysteries" in the argot of intelligence professionals—are unknowable in any definitive way, such as who is likely to replace
the current leader of North Korea. Secrets, in contrast, may be uncovered with a combination of luck and skill—say, the number of Chinese nuclear-armed submarines, which are vulnerable to
Espionage can be pursued by way of human agents or with machines, respectively known
inside America's secret agencies as human intelligence ("humint," in the acronym) and technical intelligence ("techint"). Humint consists
satellite and sonar tracking.
of spy rings that rely on foreign agents or "assets" in the field, recruited by intelligence professionals (known as case officers during the Cold War or. in more current jargon, operations
officers). -_
Techint includes mechanical devises large and small, including satellites the size of Greyhound buses, equipped with fancy cameras and listening
devices that can see and hear acutely from orbits deep in space; reconnaissance aircraft, most famously the U-2; unmanned aerial vehicles (UAVs) or drones, such as the Predator—often
armed with Hellfire missiles, allowing the option to kill what its handlers have just spotted through the lens of an onboard camera); enormous ground-based listening antennae, aimed at
enemy territory: listening devices clamped surreptitiously on fiber-optic communications cables that carry telephone conversations; and miniature listening "bugs" concealed within sparkling
cut-glass chandeliers in foreign embassies or palaces.
Techint attracts the most funding in Washington, D.C. (machines are costly, especially heavy
satellites that must be launched into space), by a ratio of some nine-to-one over humint in America's widely estimated S50 billion annual intelligence budget. Human spies, though, continue to
be recruited by the United States in most every region of the globe. Some critics contend that these spies contribute little to the knowledge of Washington officials about the state of
only human agents can provide insights into that most vital of all
national security questions: the intentions of one's rivals— especially those adversaries who are well
armed and hostile. The purpose of this essay is to examine the value of humint, based on a review7 of the research literature on intelligence, survey data, and the author's
international affairs; other authorities maintain, though, that
interviews with individuals in the espionage trade. The essay is organized in the following manner: it opens with a primer on the purpose, structure, and methods of humint; then examines
some empirical data on its value; surveys more broadly the pros and cons of this approach to spying; and concludes with an overall judgment about the value of agents for a nation's security.
Those impacts cause extinction.
Ochs ‘2
Richard - Chemical Weapons Working Group Member - “Biological Weapons must be Abolished Immediately,” June 9,
http://www.freefromterror.net/other_.../abolish.html]
Of all the weapons of mass destruction, the
genetically engineered biological weapons, many without a known cure or
vaccine, are an extreme danger to the continued survival of life on earth. Any perceived military value or deterrence pales in
comparison to the great risk these weapons pose just sitting in vials in laboratories. While a "nuclear winter," resulting from a massive exchange of nuclear
weapons, could also kill off most of life on earth and severely compromise the health of future generations, they are easier to control. Biological weapons, on the
other hand, can
get out of control very easily, as the recent anthrax attacks has demonstrated. There is no way to guarantee the security of these
doomsday weapons because very tiny amounts can be stolen or accidentally released and then grow or be grown to horrendous proportions. The Black Death of the
Middle Ages would be small in comparison to the potential damage bioweapons could cause. Abolition of chemical weapons is less of a priority because, while they
can also kill millions of people outright, their persistence in the environment would be less than nuclear or biological agents or more localized. Hence, chemical
weapons would have a lesser effect on future generations of innocent people and the natural environment. Like the Holocaust, once a localized chemical
extermination is over, it is over. With
nuclear and biological weapons, the killing will probably never end. Radioactive
elements last tens of thousands of years and will keep causing cancers virtually forever. Potentially worse than that,
bio-engineered agents by the hundreds with no known cure could wreck even greater calamity on the human race than could
persistent radiation. AIDS and ebola viruses are just a small example of recently emerging plagues with no known cure or vaccine. Can we imagine hundreds of such
plagues? HUMAN EXTINCTION
IS NOW POSSIBLE.
Plan solves and is reversible. Less Big Data means conventional and targeted human
intel.
Walt, 14
(Stephen M. Walt is the Robert and Renée Belfer professor of international relations at Harvard University, “The Big
Counterterrorism Counterfactual Is the NSA actually making us worse at fighting terrorism?”,
http://www.foreignpolicy.com/articles/2014/11/10/counterterrorism_spying_nsa_islamic_state_terrorist_cve, November 10,
2014, ak.)
The head of the British electronic spy agency GCHQ, Robert Hannigan, created a minor flap last week in an article he wrote for the Financial Times. In effect,
Hannigan argued that more robust encryption procedures by private Internet companies were unwittingly aiding terrorists such as the Islamic State (IS) or al Qaeda,
by making it harder for organizations like the NSA and GCHQ to monitor online traffic. The implication was clear: The more that our personal privacy is respected
and protected, the greater the danger we will face from evildoers. It's a serious issue, and democracies that want to respect individual privacy while simultaneously
keeping citizens safe are going to have to do a much better job of reassuring us that vast and (mostly) secret surveillance capabilities overseen by unelected officials
such as Hannigan won't be abused. I
tend to favor the privacy side of the argument, both because personal freedoms are hard to get
back once lost, but also because there's not much evidence that these surveillance activities are making us
significantly safer. They seem to be able to help us track some terrorist leaders, but there's a lively debate among scholars over whether tracking and
killing these guys is an effective strategy. The fear of being tracked also forces terrorist organizations to adopt less efficient communications procedures, but it
doesn't seem to prevent them from doing a fair bit of harm regardless. So
here's a wild counterfactual for you to ponder: What
would the United States, Great Britain, and other wealthy and powerful nations do if they didn't have these vast
surveillance powers? What would they do if they didn't have armed drones, cruise missiles, or other implements of destruction that can make it
remarkably easy (and in the short-term, relatively cheap) to target anyone they suspect might be a terrorist? Assuming that there were still violent extremists
plotting various heinous acts, what would these powerful states do if the Internet was there but no one knew how to spy on it? For starters, they'd
have to
rely more heavily on tried-and-true counterterrorism measures: infiltrating extremist organizations and
flipping existing members, etc., to find out what they were planning, head attacks off before they
occurred, and eventually roll up organization themselves. States waged plenty of counterterrorism
campaigns before the Internet was invented, and while it can be difficult to infiltrate such movements and find their vulnerable points,
it's not exactly an unknown art. If we couldn't spy on them from the safety of Fort Meade, we'd
probably be doing a lot more of this.
(Note to students: Fort Meade – internally referenced – is a United States Army installation that’s home
to NSA and other intelligence agencies.)
More funding WON’T solve. Data overload overwhelms intel and guarantees ongoing
resource failures.
Tufekci ‘15
Zeynep Tufekci is a fellow at the Center for Information Technology Policy at Princeton University, an assistant professor at the
School of Information and Department of Sociology at the University of North Carolina, and a faculty associate at the Harvard
Berkman Center for Internet and Society. “Terror and the limits of mass surveillance” – Financial Times’ The Exchange - Feb 3rd
http://blogs.ft.com/the-exchange/2015/02/03/zeynep-tufekci-terror-and-the-limits-of-mass-surveillance/
The most common justification given by governments for mass surveillance is that these tools are
indispensable for fighting terrorism. The NSA’s ex-director Keith Alexander says big data is “what it’s all about”. Intelligence
agencies routinely claim that they need massive amounts of data on all of us to catch the bad guys, like the French brothers who assassinated the cartoonists of
Charlie Hebdo, or the murderers of Lee Rigby, the British soldier killed by two men who claimed the act was revenge for the UK’s involvement in the wars in Iraq and
Afghanistan. But
the assertion that big data is “what it’s all about” when it comes to predicting rare events is not supported
by what we know about how these methods work, and more importantly, don’t work. Analytics on massive datasets can be
powerful in analysing and identifying broad patterns, or events that occur regularly and frequently, but are singularly
unsuited to finding unpredictable, erratic, and rare needles in huge haystacks. In fact, the bigger the haystack — the more
massive the scale and the wider the scope of the surveillance — the less suited these methods are to finding such
exceptional events, and the more they may serve to direct resources and attention away from
appropriate tools and methods. After Rigby was killed, GCHQ, Britain’s intelligence service, was criticised by many for failing to stop his killers, Michael
Adebolajo and Michael Adebowale. A lengthy parliamentary inquiry was conducted, resulting in a 192-page report that lists all the ways in which Adebolajo and
Adebowale had brushes with data surveillance, but were not flagged as two men who were about to kill a soldier on a London street. GCHQ defended itself by
saying that some of the crucial online exchanges had taken place on a platform, believed to be Facebook, which had not alerted the agency about these men, or the
nature of their postings. The men apparently had numerous exchanges that were extremist in nature, and their accounts were suspended repeatedly by the
platform for violating its terms of service. “If only Facebook had turned over more data,” the thinking goes. But that is misleading, and makes sense only with the
benefit of hindsight. Seeking larger volumes of data, such as asking Facebook to alert intelligence agencies every time that it detects a post containing violence,
would deluge the agencies with multiple false leads that would lead to a data quagmire, rather than clues to impending crimes. For
big data analytics
to work, there needs to be a reliable connection between the signal (posting of violent content) and the event (killing
someone). Otherwise, the signal is worse than useless. Millions of Facebook’s billion-plus users post violent content every day, ranging
from routinised movie violence to atrocious violent rhetoric. Turning over the data from all such occurrences would merely flood the
agencies with “false positives” — erroneous indications for events that actually will not happen. Such data overload is not without
cost, as it takes time and effort to sift through these millions of strands of hay to confirm that they are, indeed, not needles — especially when we don’t even
know what needles look like. All that the investigators would have would be a lot of open leads with no resolution, taking away
resources from any
real investigation. Besides, account suspensions carried out by platforms like Facebook’s are haphazard, semi-automated and unreliable indicators. The
flagging system misses a lot more violent content than it flags, and it often flags content as inappropriate even when it is not, and suffers from many biases. Relying
on such a haphazard system is not a reasonable path at all. So is all the hype around big data analytics unjustified? Yes and no. There are appropriate use cases for
which massive datasets are intensely useful, and perform much better than any alternative we can imagine using conventional methods. Successful examples
include using Google searches to figure out drug interactions that would be too complex and too numerous to analyse one clinical trial at a time, or using social
media to detect national-level swings in our mood (we are indeed happier on Fridays than on Mondays). In contrast, consider the “lone wolf” attacker who took
hostages at, of all things, a “Lindt Chocolat Café” in Sydney. Chocolate shops are not regular targets of political violence, and random, crazed men attacking them is
not a pattern on which we can base further identification. Yes, the Sydney attacker claimed jihadi ideology and brought a black flag with Islamic writing on it, but
given the rarity of such events, it’s not always possible to separate the jihadi rhetoric from issues of mental health — every era’s mentally ill are affected by the
cultural patterns around them. This isn’t a job for big data analytics. (The fact that the gunman was on bail facing various charges and was known for sending hate
letters to the families of Australian soldiers killed overseas suggests it was a job for traditional policing). When confronted with their failures in predicting those rare
acts of domestic terrorism, here’s what GCHQ, and indeed the
NSA, should have said instead of asking for increased surveillance capabilities: stop
asking us to collect more and more data to perform an impossible task. This glut of data is making our job harder, not
easier, and the expectation that there will never be such incidents, ever, is not realistic.
Speech Set-up #2 – you are with Darcell or
Will
1AC Materials for speech 2.0
Plan
Plan Option #2:
The United States federal government should substantially curtail its domestic surveillance by
strengthening the USA FREEDOM Act to:
 require use of a “specific selection term” to satisfy the “reasonable, articulable suspicion
standard”
 require that information collected through “pen register or trap and trace devices” via
emergency authorizations be subject to the same procedural safeguards as non-emergency
collections.
 require “super minimization" procedures that delete information obtained about a person not
connected to the investigation.
Advantage One – Bigotry
Warrantless surveillance boosts a distinct form of racial, religious, and ethnic
discrimination. The Neg’s security interests only drive this racialized violence.
Unegbu ‘13
Cindy C. Unegbu - J.D. Candidate, Howard University School of Law - NOTE AND COMMENT: National Security Surveillance on
the Basis of Race, Ethnicity, and Religion: A Constitutional Misstep - Howard Law Journal - 57 How. L.J. 433 - Fall, 2013 – lexis;
lawrev
Picture this: you live in a society in which the government is allowed to partake in intrusive surveillance
measures without the institutionalized checks and balances upon which the government was founded. In this society, the
government pursues citizens who belong to a particular race or ethnicity, practice a certain religion, or have affiliations with
specific interest groups. Individuals who have these characteristics are subject to surreptitious monitoring, which
includes undercover government officials disguising themselves as community members in order to attend various community events and programs. The government may
also place these individuals on watch lists, even where there is no evidence of wrongdoing. These watch lists classify
domestic individuals as potential or suspected terrorists and facilitate the monitoring of their personal activity through various law enforcement agencies for an extended period of time.
This "hypothetical" society is not hypothetical at all; in fact, it is the current state of American surveillance.
The government's domestic spying activities have progressed to intrusive levels, primarily due to an increased
fear of terrorism. n1 This fear has resulted in governmental intelligence efforts that are focused on political
activists, racial and religious minorities, and immigrants. n2 [*435] The government's domestic surveillance efforts are
not only geared toward suspected terrorists and those partaking in criminal activity, but reach any innocent, non-criminal, non-terrorist national, all in the
name of national security. The government's power to engage in suspicionless surveillance and track innocent
citizens' sensitive information has been granted through the creation and revision of the National Counterterrorism Center n3 and the FBI's (Federal Bureau of Investigation) Domestic
Investigations and Operations Guide. n4 The grant of surveillance power has resulted in many opponents, including those within the current presidential administration, who challenge the
order for numerous reasons. n5 These reasons include the inefficiency of storing citizens' random personal information for extended periods of time, n6 the broad unprecedented authority
granted to this body of government without proper approval from Congress, n7 and the constitutional violations due to the deprivation of citizens' rights. n8 [*436] This Comment argues that
the wide-sweeping surveillance authority granted to the government
results in a violation of the Fourteenth Amendment's Equal Protection Clause due to far-reaching domestic
monitoring practices. Surveillance practices, such as posing as members of the community and placing individuals on watch lists without suspicion of terrorist activity, result in the
impermissible monitoring of individuals on the basis of their race or ethnicity. These practices, although done in the name of
national security, an established compelling government interest, violate the Equal Protection Clause of the Fourteenth Amendment because they are not narrowly tailored to the stated
The procedures are not narrowly tailored to the interest of national security because of the overinclusiveness of the measures.
interest.
Most everyone violates some law from time-to-time. Mass surveillance results in
selective enforcement that disproportionately impacts those lacking privilege.
Stanfill ‘13
Mel - The author now holds a Ph.D. in Communications and Media from the University of Illinois, Urbana-Champaign, and was working on that Ph.D. at the time of this writing. When this piece was written, the author held an M.A. from California State University, East Bay in Media and Cultural Studies, with graduate minors in Gender and Women’s Studies and Queer Studies, and a B.A. from the University of California, Berkeley - English with Distinction in General Scholarship. At the time of this writing, the author had published several peer-reviewed publications. The author is internally quoting Daniel J. Solove, the John Marshall Harlan Research Professor of Law at the George Washington University Law School. “NSA Prism Part III: Due Process and Presumed Guilty” - July 1, 2013 – http://www.melstanfill.com/nsa-prism-part-iiidue-process-and-presumed-guilty/
Posing this as a due process question could be the way to get some traction in light of the nonchalance about privacy discussed last week: The problem with NSA
PRISM is that we are all being treated as guilty until proven innocent. This has been going on since 9/11, of course, with airport security being one highly visible
iteration, but there’s a chance that this new level of awareness of just how much everyone is being treated as criminal without due process of law could be the
straw. Solove describes the stakes well: “Even
if a person is doing nothing wrong, in a free society, that person shouldn’t have
to justify every action that government officials might view as suspicious. A key component of freedom
is not having to worry about how to explain oneself all the time.” Crossing this line into blanket assumption of guilt is what
animates the Stop Watching US petition, which says “the contents of communications of people both abroad and in the U.S. can be swept in without any suspicion
of crime or association with a terrorist organization,” though they tie their concern to the 1st and 4th amendment and “citizens’ right to speak and associate
anonymously, guard against unreasonable searches and seizures, and protect their right to privacy” without mentioning due process. (That people feel due process
has gone out the window is clear from the fact that RootsAction sent me a petition to President Obama “not to engage in any abduction or other foul play against
Snowden,” ironic when the administration’s line on why they’re disappointed that Hong Kong let him leave is that they want there to be rule of law.) Wired perhaps
elaborates the worst-case-scenario best: “Police
already abuse the immense power they have, but if everyone’s every
action were being monitored, and everyone technically violates some obscure law at some time, then
punishment becomes purely selective. Those in power will essentially have what they need to punish
anyone they’d like, whenever they choose, as if there were no rules at all.” Of course, black and Latino citizens have
been living under presumed-guilty, surveilled-within-an-inch-of-their-lives, selectively-punished conditions for decades:
they’re more likely to get caught at things white folks also do and be punished more harshly for them,
even as early as middle school (see Ann Ferguson’s Bad Boys). Muslim and Arab citizens have been living under it since 9/11.
Reject this discrimination as an unacceptable wrong. Rejecting it is an end in itself.
Shamsi ‘14
(et al; Hina Shamsi is a lecturer in law at Columbia Law School, where she teaches a course in international human rights. She is
also the Director of the ACLU’s National Security Project, which is dedicated to ensuring that U.S. national security policies and
practices are consistent with the Constitution, civil liberties, and human rights. She has litigated cases upholding the freedoms
of speech and association, and challenging targeted killing, torture, unlawful detention, and post-9/11 discrimination against
racial and religious minorities. Her work includes a focus on the intersection of national security and counterterrorism policies
with international human rights and humanitarian law. She also served as Senior Advisor to the U.N. Special Rapporteur on
Extrajudicial Executions. Hina is a graduate of Mount Holyoke College and Northwestern University School of Law. “The
Perversity of Profiling” – April 14th – available at the ACLU website - https://www.aclu.org/blog/perversity-profiling)
Using expanded authorities that permit investigations without actual evidence of wrongdoing, the FBI has also
targeted minority communities for interviews based on race, ethnicity, national origin, and religion. It has used informants
to conduct surveillance in community centers, mosques, and other public gathering places and against people exercising their First Amendment right to worship or
to engage in political advocacy. And among
America’s minority communities, “flying while brown” soon joined
“driving while black” as a truism of government-sanctioned discrimination and stigma. It’s hard to overstate
the damage done to the FBI’s relationship with minorities, particularly American Muslims. The damage, however, has
spread further. When federal law enforcement leads in discriminatory profiling, state and local law
enforcement will follow. Nowhere is that clearer than in New York City, where the NYPD – which is twice the size of the FBI –
launched a massive program of discriminatory surveillance and investigation of American Muslims, mapping the
places where they carry out daily activities and sending informants to spy on mosques and Muslim community organizations, student groups, and businesses. After
the Associated Press broke a series of stories describing this program in stark and shocking detail, the
NYPD defended itself, arguing that it
was only doing what the FBI was permitted to do. Again, it’s hard to overstate the harm. From the ACLU’s work with
New York’s Muslim communities, we know that a generation of youth is growing up fearful of its local
police force, scared to exercise the rights to freedom of worship, speech, and association. Fortunately, the issuance of the
revised Guidance on Race has been delayed and both the Justice Department and the civil rights community have a crucial opportunity to put a spotlight on the FBI,
which vigorously opposes those fighting for equality. According to the New York Times, the
FBI’s argument seems to be that it needs to
identify where Somalis live to investigate potential Somali terrorism suspects. But that argument must be rejected for
the same reason that we reject it in other contexts. Many mass shooters are young white males, yet we
rightly don’t map where whites live or send informants to majority white communities to ferret out potential mass shooters. Put
another way, the FBI’s argument presumes what the Ashcroft Guidance “emphatically rejects”: that crime can be prevented by the
mass stereotyping of entire communities. Not only is that wrong, it is a ham-handed approach that squanders resources that
should properly be devoted to investigating actual wrongdoing.
Advantage Two – Global Internet Freedom
The new Freedom Act fails to restore the US’s global credibility on Internet freedom. The
original version solves by closing the “specific selection term” (SST) loopholes.
Brinkerhoff ‘14
(Internally quoting Cynthia M. Wong, the senior researcher on the Internet and human rights for Human Rights Watch. Before
joining Human Rights Watch, Wong worked as an attorney at the Center for Democracy & Technology (CDT) and as director of
their Project on Global Internet Freedom. She conducted much of the organization’s work promoting global Internet freedom,
with a particular focus on international free expression and privacy. She also served as co-chair of the Policy & Learning
Committee of the Global Network Initiative (GNI), a multi-stakeholder organization that advances corporate responsibility and
human rights in the technology sector. Prior to joining CDT, Wong was the Robert L. Bernstein International Human Rights
Fellow at Human Rights in China (HRIC). There, she contributed to the organization’s work in the areas of business and human
rights and freedom of expression online. Wong earned her law degree from New York University School of Law. Human Rights
Watch is an independent, international organization that works as part of a vibrant movement to uphold human dignity and
advance the cause of human rights for all. Noel Brinkerhoff is a political reporter and writer who has covered state and national politics for 15 years. “With Support of Obama Administration, House NSA Surveillance Reform Bill Includes Gaping Loopholes” – AllGov
– May 26th - http://www.allgov.com/news/top-stories/with-support-of-obama-administration-house-nsa-surveillance-reformbill-includes-gaping-loopholes-140526?news=853242)
Lawmakers in the U.S. House of Representatives claim they have addressed the problems of the National Security Agency’s (NSA)
notorious bulk collection of data, made so famous last year by whistleblower Edward Snowden. But the
legislation adopted to end this controversial practice contains huge loopholes that could allow the NSA
to keep vacuuming up large amounts of Americans’ communications records, all with the blessing of the Obama
administration. Dubbed the USA Freedom Act, the bill overwhelmingly approved by the House (303 to 121) was criticized for
not going far enough to keep data out of the hands of government. “This so-called reform bill won’t
restore the trust of Internet users in the U.S. and around the world,” Cynthia Wong, senior Internet
researcher at Human Rights Watch (HRW), said. “Until Congress passes real reform, U.S. credibility and
leadership on Internet freedom will continue to fade.” Julian Sanchez, a researcher at the Cato Institute, a libertarian think tank,
warned that the changes could mean the continuation of bulk collection of phone records by another name. “The core problem is that this only
ends ‘bulk’ collection in the sense the intelligence community uses that term,” Sanchez told Wired. “As long as
there’s some kind of target, they don’t call that bulk collection, even if you’re still collecting millions of
records…If they say ‘give us the record of everyone who visited these thousand websites,’ that’s not bulk
collection, because they have a list of targets.” HRW says the bill, which now goes to the Senate for consideration, contains
ambiguous definitions about what can and cannot be collected by the agency. For instance, an earlier version more clearly
defined the scope of what the NSA could grab under Section 215 of the Patriot Act, which has formed the legal basis for gathering the
metadata of phone calls. “Under an earlier version of the USA Freedom Act, the government would have been
required to base any demand for phone metadata or other records on a “specific selection term” that
“uniquely describe[s] a person, entity, or account.” Under the House version, this definition was broadened to
mean “a discrete term, such as a term specifically identifying a person, entity, account, address, or device, used by the government to limit the scope”
of information sought,” according to Human Rights Watch. “This definition is too open-ended and ambiguous to prevent the sort of
creative interpretation by intelligence agencies that has been used to justify overbroad collection practices in the past,” the group claims.
The New America Foundation’s Open Technology Institute is similarly disappointed in the final House bill. “Taken together,” the Institute wrote, “ the changes
to this definition may still allow for massive collection of millions of Americans’ private information based
on very broad selection terms such as a zip code, an area code, the physical address of a particular email provider or
financial institution, or the IP address of a web hosting service that hosts thousands of web sites.”
The US can alter global practices that threaten Internet freedom – but only when the US’s
image is seen as less hypocritical.
Wong ‘13
Cynthia M. Wong is the senior researcher on the Internet and human rights for Human Rights Watch. Before joining Human
Rights Watch, Wong worked as an attorney at the Center for Democracy & Technology (CDT) and as director of their Project on
Global Internet Freedom. She conducted much of the organization’s work promoting global Internet freedom, with a particular
focus on international free expression and privacy. She also served as co-chair of the Policy & Learning Committee of the Global
Network Initiative (GNI), a multi-stakeholder organization that advances corporate responsibility and human rights in the
technology sector. Prior to joining CDT, Wong was the Robert L. Bernstein International Human Rights Fellow at Human Rights
in China (HRIC). There, she contributed to the organization’s work in the areas of business and human rights and freedom of
expression online. Wong earned her law degree from New York University School of Law – “ Surveillance and the Corrosion of
Internet Freedom” - July 30, 2013 - Published in: The Huffington Post and also available at the HRW website at this address:
http://www.hrw.org/news/2013/07/30/surveillance-and-corrosion-internet-freedom
Defenders of US and UK surveillance programs argue that collecting metadata is not as problematic as “listening to the content
of people’s phone calls” or reading emails. This is misleading. Technologists have long recognized that metadata can reveal incredibly sensitive information, especially if it is
collected at large scale over long periods of time, since digitized data can be easily combined and analyzed. The revelations have also exposed glaring
contradictions about the US Internet freedom agenda. This has emboldened the Chinese state media, for example, to cynically denounce US
hypocrisy, even as the Chinese government continues to censor the Internet, infringe on privacy rights, and curb anonymity online. Though there is hypocrisy on both sides, the
widening rift between US values and actions has real, unintended human rights consequences. For the human rights
movement, the Internet’s impact on rights crystalized in 2005 after we learned that Yahoo! uncritically turned user account
information over to the Chinese government, leading to a 10-year prison sentence for the journalist Shi Tao. The US government
forcefully objected to the Chinese government’s actions and urged the tech industry to act responsibly. In the end, that incident
catalyzed a set of new human rights standards that pushed some companies to improve safeguards for
user privacy in the face of government demands for data. US support was critical back then, but it is
hard to imagine the government having the same influence or credibility now. The mass surveillance scandal has damaged
the US government’s ability to press for better corporate practices as technology companies expand globally. It
will also be more difficult for companies to resist overbroad surveillance mandates if they are seen as
complicit in mass US infringements on privacy. Other governments will feel more entitled to ask for the
same cooperation that the US receives. We can also expect governments around the world to pressure companies to store user data locally or maintain a local
presence so that governments can more easily access it, as Brazil and Russia are now debating. While comparisons to the Chinese government are overstated, there is reason to
worry about the broader precedent the US has set. Just months before the NSA scandal broke, India began rolling out a centralized
system to monitor all phone and Internet communications in the country, without much clarity on
safeguards to protect rights. This development is chilling, considering the government’s problematic use of sedition and Internet laws in recent arrests. Over the last
few weeks, Turkish officials have condemned social media as a key tool for Gezi Park protesters. Twitter has drawn particular ire. Now the
government is preparing new regulations that would make it easier to get data from Internet companies and identify
individual users online. The Obama administration and US companies could have been in a strong position to push back in
India and Turkey. Instead, the US has provided these governments with a roadmap for conducting secret, mass
surveillance and conscripting the help of the private sector.
(Note to students: “conscripting” means compulsory enlistment of companies for state service.)
Washington will inevitably push for global Internet freedom – but the US’s image is vital.
The Internet freedom agenda is key to the global economy.
Kalathil ‘10
Shanthi Kalathil - Adjunct Faculty and Adjunct Lecturer in the Communication, Culture, and Technology (CCT) Master of Arts
Program at Georgetown University. Kalathil has extensive experience advising the U.S. government, international organizations
and nonprofits on supporting civil society, independent media, technology, transparency and accountability. Previously a senior
Democracy Fellow at the U.S. Agency for International Development and she has authored or edited numerous policy and
scholarly publications, including the edited volume Diplomacy, Development and Security in the Information Age. She has
taught courses on international relations in the information age at the Monterey Institute of International Studies and
Georgetown University. Kalathil holds degrees from U.C. Berkeley and the London School of Economics and Political Science –
“Internet Freedom: A Background Paper” – October 2010 - Available via:
http://www.aspeninstitute.org/sites/default/files/content/images/Internet_Freedom_A_Background_Paper_0.pdf
As use of the Internet has grown exponentially around the world, so too have concerns about its defining
attribute as a free and open means of communication. Around the world, countries, companies and citizens are
grappling with thorny issues of free expression, censorship and trust. With starkly different visions for the Internet
developing, this era presents challenges—and also opportunities—for those who wish to ensure the Internet
remains a backbone of liberty and economic growth. U.S. officials have made clear their vision for the
Internet’s future. President Obama, in a speech before the UN General Assembly, said that the U.S. is committed to promoting
new communication tools, “so that people are empowered to connect with one another and, in repressive societies,
to do so with security. We will support a free and open Internet, so individuals have the information to make up their own minds.” His words were reinforced by FCC
Chairman Julius Genachowski: “It
is essential that we preserve the open Internet and stand firmly behind the right of all people to connect
with one another and to exchange ideas freely and without fear.”1 Indeed, a free, widely accessible Internet stands at the heart of both
global communication and global commerce. Internet freedom enables dialogue and direct diplomacy between people and civilizations,
facilitating the exchange of ideas and culture while bolstering trade and economic growth. Conversely, censorship and other
blockages stifle both expression and innovation. When arbitrary rules privilege some and not others, the investment climate suffers. Nor
can access be expanded if end users have no trust in the network. However, making reality live up to aspirations for Internet freedom can prove difficult. Numerous
global initiatives—spearheaded by governments, private sector and civil society—are attempting to enshrine the norms, principles and
standards that will ensure the Internet remains a public space for free expression. At the same time, other
norms are fast arising—particularly those defined by authoritarian countries that wish to splinter the Internet into
independently controlled fiefdoms. Even as Internet access has expanded around the world, many governments are
attempting to control, regulate and censor the Internet in all its forms: blogs, mobile communication, social media, etc. Such
governments have devoted vast resources to shaping the Internet’s development within their own
borders, and they are now seeking to shape the Internet outside their borders as well. Indeed, Internet experts are
worried that national governments of all stripes will increasingly seek to extend their regulatory authority over the global Internet, culminating in a
balkanized Internet with limited interoperability. Hence, the next few years present a distinct window of
opportunity to elevate the principles of the free exchange of ideas, knowledge and commerce on the Internet. While U.S. leadership
within this window is vital, a global effort is necessary to ensure that these norms become a standard part of the Internet’s
supporting architecture.
Global economic decline risks nuclear war.
Merlini ‘11
[Cesare Merlini, nonresident senior fellow at the Center on the United States and Europe and chairman of the Board of Trustees of the Italian
Institute for International Affairs (IAI) in Rome. He served as IAI president from 1979 to 2001. Until 2009, he also occupied the position of
executive vice chairman of the Council for the United States and Italy, which he co-founded in 1983. His areas of expertise include transatlantic
relations, European integration and nuclear non-proliferation, with particular focus on nuclear science and technology. Merlini, Cesare, ‘A Post-Secular World?’, Survival, Volume 53, Issue 2, April 2011, pages 117–130. DOI: 10.1080/00396338.2011.571015]
Two neatly opposed scenarios for the future of the world order illustrate the range of possibilities, albeit at the risk of
oversimplification. The first scenario entails the premature crumbling of the post-Westphalian system. One
or more of the acute tensions apparent today evolves into an open and traditional conflict between states, perhaps even involving the use of
nuclear weapons. The crisis might be triggered by a collapse of the global economic and financial system, the
vulnerability of which we have just experienced, and the prospect of a second Great Depression, with consequences for
peace and democracy similar to those of the first. Whatever the trigger, the unlimited exercise of national sovereignty, exclusive
self-interest and rejection of outside interference would likely be amplified, emptying, perhaps entirely, the half-full glass
of multilateralism, including the UN and the European Union. Many of the more likely conflicts, such as between Israel and Iran or India and Pakistan, have
potential religious dimensions. Short of war, tensions such as those related to immigration might become unbearable. Familiar issues of creed and identity could be
exacerbated. One way or another, the secular rational approach would be sidestepped by a return to theocratic absolutes, competing or converging with secular
absolutes such as unbridled nationalism.
The new Freedom Act won’t solve the US’s image. Protections from the original version do
solve, even without protections for persons outside the US.
Ries ‘14
(Internally quoting Zeke Johnson, director of Amnesty International's Security & Human Rights Program. Also internally quoting
Cynthia M. Wong, the senior researcher on the Internet and human rights for Human Rights Watch. Before joining Human
Rights Watch, Wong worked as an attorney at the Center for Democracy & Technology (CDT) and as director of their Project on
Global Internet Freedom. She conducted much of the organization’s work promoting global Internet freedom, with a particular
focus on international free expression and privacy. She also served as co-chair of the Policy & Learning Committee of the Global
Network Initiative (GNI), a multi-stakeholder organization that advances corporate responsibility and human rights in the
technology sector. Prior to joining CDT, Wong was the Robert L. Bernstein International Human Rights Fellow at Human Rights
in China (HRIC). There, she contributed to the organization’s work in the areas of business and human rights and freedom of
expression online. Wong earned her law degree from New York University School of Law. Also internally quoting Center for
Democracy and Technology Senior Counsel Harley Geiger – Brian Ries is Mashable’s Real-Time News Editor. Prior to working at
Mashable, Brian was Social Media Editor at Newsweek & The Daily Beast, responsible for using Twitter, Facebook, and Tumblr
to cover revolutions, disasters, and presidential elections. During his time at The Daily Beast, he contributed to a team that won
two Webby Awards for “Best News Site”. “Critics Slam 'Watered-Down' Surveillance Bill That Congress Just Passed” - Mashable May 22, 2014 – http://mashable.com/2014/05/22/congress-nsa-surveillance-bill/)
As a result, many
of its initial supporters pulled their support. “We supported the original USA Freedom act,
even though it didn’t do much for non-US persons,” Zeke Johnson, director of Amnesty International's
Security & Human Rights Program told Mashable after Thursday's vote. He described the original
version as “a good step to end bulk collection.” However, in its current version, it's not even clear that
this bill does that at all, Johnson said. He added that Congress left a lot of "wiggle room" in the bill — something he
said is a real problem. "Where there is vagueness in a law, you can count on the administration to exploit it," Johnson
said. However, Laura W. Murphy, director of the ACLU Washington Legislative Office, took a more positive view of the bill. "While far from perfect, this bill is an
unambiguous statement of congressional intent to rein in the out-of-control NSA," she said in a statement. "While we share the concerns of many — including
members of both parties who rightly believe the bill does not go far enough — without it we would be left with no reform at all, or worse, a House Intelligence
Committee bill that would have cemented bulk collection of Americans’ communications into law." The Electronic Frontier Foundation simply called it "a weak
attempt at NSA reform." “The
ban on bulk collection was deliberately watered down to be ambiguous and
exploitable,” said Center for Democracy and Technology Senior Counsel Harley Geiger. “We withdrew support for USA FREEDOM
when the bill morphed into a codification of large-scale, untargeted collection of data about Americans
with no connection to a crime or terrorism.” And Cynthia Wong, senior Internet researcher at Human Rights
Watch, said, “This so-called reform bill won’t restore the trust of Internet users in the US and around the
world. Until Congress passes real reform, U.S. credibility and leadership on Internet freedom will
continue to fade.”
Unlike the current Act, the original bill does solve the US’s image. This holds even if the plan is
about bulk collection – instead of every surveillance practice.
HRW ‘14
(Internally quoting Cynthia M. Wong, the senior researcher on the Internet and human rights for Human Rights Watch. Before
joining Human Rights Watch, Wong worked as an attorney at the Center for Democracy & Technology (CDT) and as director of
their Project on Global Internet Freedom. She conducted much of the organization’s work promoting global Internet freedom,
with a particular focus on international free expression and privacy. She also served as co-chair of the Policy & Learning
Committee of the Global Network Initiative (GNI), a multi-stakeholder organization that advances corporate responsibility and
human rights in the technology sector. Prior to joining CDT, Wong was the Robert L. Bernstein International Human Rights
Fellow at Human Rights in China (HRIC). There, she contributed to the organization’s work in the areas of business and human
rights and freedom of expression online. Wong earned her law degree from New York University School of Law. Human Rights
Watch is an independent, international organization that works as part of a vibrant movement to uphold human dignity and
advance the cause of human rights for all. “US Senate: Salvage Surveillance Reform House Bill Flawed” - Human Rights Watch May 22, 2014 – http://www.hrw.org/news/2014/05/22/us-senate-salvage-surveillance-reform)
It is up to the US Senate to salvage surveillance reform, Human Rights Watch said today. The version of the USA Freedom
Act that the US House of Representatives passed on May 22, 2014, could ultimately fail to end mass data collection. The version
the House passed is a watered-down version of an earlier bill that was designed to end bulk collection of
business records and phone metadata. The practice has been almost universally condemned by all but the US security establishment. “This so-called reform bill won’t restore the trust of Internet users in the US and around the world,” said Cynthia Wong,
senior Internet researcher at Human Rights Watch. “Until Congress passes real reform, US credibility and
leadership on Internet freedom will continue to fade.” The initial version of the bill aimed to prohibit bulk
collection by the government of business records, including phone metadata. The bill only addressed one component of the
surveillance programs revealed by the former National Security Agency contractor Edward Snowden, that of US record collections. However, it had broad
support as a first step, including from Human Rights Watch. On May 7, a diluted version of the bill passed unanimously out of the House
Judiciary Committee, followed by Intelligence Committee approval on May 8. While better than alternative bills offered, the version the House passed could
leave the door wide open to continued indiscriminate data collection practices potentially invading the privacy
of millions of people without justification, Human Rights Watch said.