HSS Stored Communications Act Affirmative

Mosaic And The Cloud
1ac
Advantage 1: Mosaic Theory
4th Amendment precedent change is inevitable.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Finally, the Stored Communications Act does not address any of the “big data” concerns underlying the
mosaic theory. The law does not recognize that a myriad of individually insignificant data points can be just as revealing in the aggregate
as a single revealing disclosure. It does not distinguish between government requests for a day’s, a month’s, a year’s, or even a lifetime’s worth
of data. And it does not envision that third parties would ever amass as much information as they do about so many aspects of their
customers’ lives. In short, the
Stored Communications Act is an overly complex, confusing, and outdated tool
for protecting citizens’ privacy at a time when Americans need that protection more than ever. As
Warshak demonstrates, courts are beginning to recognize that the law fails to sufficiently combat the threat the third-party doctrine poses to
citizens’ privacy. The
concurring opinions in Jones—most notably Justice Sotomayor’s—suggest that the
Supreme Court may be reconsidering whether the third-party doctrine (on which the Stored Communications Act
relies) remains appropriate. Change seems inevitable because the status quo is unstable. But whether
that change is led by the Court or by Congress remains to be seen.
Amending the Stored Communications Act solves – it introduces the mosaic theory
and is easily implemented.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
The Stored Communications Act may be flawed, but it can be fixed. The following proposal
demonstrates how the law could address mosaic theory concerns with only a few significant
changes.179 First, the proposed amendment modifies the statute’s tiered framework to both incorporate a
time-based mosaic cutoff for warrantless disclosures and make the statute easier to follow and apply. Second, the proposed
amendment adds a suppression remedy for government violations of the law, bringing the statute’s
remedies into line with the Fourth Amendment’s typical protections. Incorporating this suppression
remedy will also ensure that the Stored Communications Act will be litigated in the criminal context,
allowing courts to interpret and apply the statute to changing circumstances. Finally, the proposed
amendment replaces the terms “electronic communication service” and “remote computing service”
with “network service.” This change would remove a somewhat artificial distinction in the modern
technological context, clarify that the Act should be interpreted broadly, and ensure its continued
applicability to emerging technologies. 180
Adopting the mosaic theory is key to restore privacy in electronic communications.
Ford, 2011 (Madelaine Virginia, J.D. candidate at American University Washington College of Law,
“Mosaic Theory and The Fourth Amendment: How Jones Can Save Privacy In The Face Of Evolving
Technology” American University Journal of Gender, Social Policy & the Law Lexis)
Individual privacy is a fundamental right and directly involves the ability to prevent the collection and circulation of personal
information. n147 The current Supreme Court privacy framework fails to account for the level of intrusion
occasioned by advanced technology and how its use undermines privacy protections. n148 Knotts should not control cases
involving the continuous monitoring with GPS devices, as the case dealt with a discrete journey and specifically did not address the issue of
prolonged surveillance. n149 A warrantless police beeper used for a one hundred-mile trip and warrantless GPS tracking for a month or more
are not equivalent [*1371] surveillance techniques and implicate separate privacy issues. n150 As surveillance technology continues to evolve,
the Supreme Court must reevaluate its reliance on Knotts, and United States v. Jones presents the ideal opportunity for the Court to do so.
n151 The
mosaic theory, while novel as a justification for Fourth Amendment protection, provides a
practical alternative for privacy analysis that is flexible enough to adapt to developing technologies.
n152 Just as the government has successfully used the theory to protect the unknown value of collective information, citizens too, should be
able to use this rationale to protect the unknown value of personal collective information. n153 The
adoption of the mosaic
theory and the rejection of warrantless GPS searches by police have significant implications for the future of individual
privacy and technology. n154 For example, built in vehicular GPS systems, which used to be present exclusively in luxury car models,
are now prevalent in a large variety of standard vehicles. n155 These built in "concierge systems," such as OnStar, allow police officers to
monitor the past and present location of these vehicles without installing a tracking device. n156 For police, this eliminates issues of whether
the installation of the tracking device constituted a search. n157 A Supreme Court decision, allowing warrantless GPS tracking, could imply that
law enforcement agencies do not need a warrant to collect information from built in GPS systems, which some state courts have [*1372]
already found to be a less invasive procedure. n158 It is likely that the Court's decision will also impact future court cases involving a multitude
of location sharing devices such as cellular phones, laptops, and other mobile devices. n159 The Electronic Communications Act governs cell
phone tracking and requires law enforcement to obtain a court order in order to obtain information about a person's location. n160 Lower
courts lack a consensus on the issue of whether these orders require probable cause or simply an articulable facts standard. n161 Supreme
Court affirmation of the Maynard decision would send a clear signal to law enforcement that a warrant is required to use an individual's
technological devices to determine his or her location. n162 The
Fourth Amendment protects privacy in the relationship
between the government and citizens. n163 While the Court has shifted from a property and trespass
regime of privacy evaluation, the current conceptualization of the Fourth Amendment fails to adapt to
technological advances and changing society. n164 Privacy jurisprudence must evolve to shield
individuals' lives and social practices as well as information that relates to humans' basic needs and
desires from inappropriate police uses of advanced technology. n165 The mosaic theory provides a
workable option that accommodates modern technology and allows adjustments to privacy
jurisprudence.
Privacy outweighs.
Solove ‘7 (Daniel Solove is an Associate Professor at George Washington University Law School and holds a J.D. from Yale
Law School. He is one of the world’s leading experts in information privacy law and is well known for his academic work on
privacy and for popular books on how privacy relates to information technology. He has written 9 books and more than 50
law review articles – From the Article ““I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy” - San Diego Law
Review, Vol. 44, p. 745 - GWU Law School Public Law Research Paper No. 289 – available from download at:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=998565)
It is time to return to the nothing to hide argument. The reasoning of this argument is that when it comes to government
surveillance or use of personal data, there is no privacy violation if a person has nothing sensitive, embarrassing, or illegal
to conceal. Criminals involved in illicit activities have something to fear, but for the vast majority of people, their activities are
not illegal or embarrassing. Understanding privacy as I have set forth reveals the flaw of the nothing to hide argument at its roots. Many commentators who
respond to the argument attempt a direct refutation by trying to point to things that people would want to hide. But the problem with the nothing to hide
argument is the underlying assumption that privacy is about hiding bad things. Agreeing
with this assumption concedes far too
much ground and leads to an unproductive discussion of information people would likely want or not want to hide. As Bruce
Schneier aptly notes, the nothing to hide argument stems from a faulty “premise that privacy is about hiding a
wrong.”75 The deeper problem with the nothing to hide argument is that it myopically views privacy as a form of concealment or secrecy. But understanding
privacy as a plurality of related problems demonstrates that concealment of bad things is just one among many problems caused by government programs such as
the NSA surveillance and data mining. In the categories in my taxonomy, several problems are implicated. The
NSA programs involve problems of
information collection, specifically the category of surveillance in the taxonomy. Wiretapping involves audio surveillance of people’s conversations.
Data mining often begins with the collection of personal information, usually from various third parties that possess people’s data. Under current Supreme Court
Fourth Amendment jurisprudence, when the government gathers data from third parties, there is no Fourth Amendment protection because people lack a
“reasonable expectation of privacy” in information exposed to others.76 In United States v. Miller, the Supreme Court concluded that there is no reasonable
expectation of privacy in bank records because “[a]ll of the documents obtained, including financial statements and deposit slips, contain only information
voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.”77 In Smith v. Maryland, the Supreme Court held that people
lack a reasonable expectation of privacy in the phone numbers they dial because they “know that they must convey numerical information to the phone company,”
and therefore they cannot “harbor any general expectation that the numbers they dial will remain secret.”78 As I have argued extensively elsewhere, the lack of
Fourth Amendment protection of third party records results in the government’s ability to access an extensive amount of personal information with minimal
limitation or oversight.79 Many scholars have referred to information collection as a form of surveillance. Dataveillance, a term coined by Roger Clarke, refers to the
“systemic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons.”80 Christopher Slobogin has
referred to the gathering of personal information in business records as “transaction surveillance.”81 Surveillance
can create chilling effects
on free speech, free association, and other First Amendment rights essential for democracy.82 Even
surveillance of legal activities can inhibit people from engaging in them. The value of protecting
against chilling effects is not measured simply by focusing on the particular individuals who are
deterred from exercising their rights. Chilling effects harm society because, among other things, they reduce
the range of viewpoints expressed and the degree of freedom with which to engage in political
activity. The nothing to hide argument focuses primarily on the information collection problems associated with the NSA programs. It contends that limited
surveillance of lawful activity will not chill behavior sufficiently to outweigh the security benefits. One can certainly quarrel with this argument, but one of the
difficulties with chilling effects is that it is often very hard to demonstrate concrete evidence of deterred behavior.83 Whether the NSA’s surveillance and collection
of telephone records has deterred people from communicating particular ideas would be a difficult question to answer. Far too often, discussions of the NSA
surveillance and data mining define the problem solely in terms of surveillance. To return to my discussion of metaphor, the problems are not just Orwellian, but
Kafkaesque. The
NSA programs are problematic even if no information people want to hide is uncovered. In
The Trial, the problem is not inhibited behavior, but rather a suffocating powerlessness and vulnerability created by the
court system’s use of personal data and its exclusion of the protagonist from having any knowledge or participation in the process. The harms consist of
those created by bureaucracies—indifference, errors, abuses, frustration, and lack of transparency and accountability. One such harm, for example, which I call
aggregation, emerges from the combination of small bits of seemingly innocuous data.84 When combined, the information becomes much more telling about a
person. For the person who truly has nothing to hide, aggregation is not much of a problem. But in the stronger, less absolutist form of the nothing to hide
argument, people argue that certain pieces of information are not something they would hide. Aggregation, however, means that by combining pieces of
information we might not care to conceal, the government can glean information about us that we might really want to conceal. Part of the
allure of data
mining for the government is its ability to reveal a lot about our personalities and activities by sophisticated means of
analyzing data. Therefore, without greater transparency in data mining, it is hard to claim that programs like the NSA data mining
program will not reveal information people might want to hide, as we do not know precisely what is revealed. Moreover,
data mining aims to be predictive of behavior, striving to prognosticate about our future actions. People who match certain profiles are deemed likely to engage in a
similar pattern of behavior. It is quite difficult to refute actions that one has not yet done. Having nothing to hide will not always dispel predictions of future activity.
Another problem in the taxonomy, which is implicated by the NSA program, is the problem I refer to as exclusion.85 Exclusion is the
problem caused when people are prevented from having knowledge about how their information is being used, as well as barred from being able to access and
correct errors in that data. The NSA program involves a massive database of information that individuals cannot access. Indeed, the very existence of the program
was kept secret for years.86 This kind of information processing, which forbids people’s knowledge or involvement, resembles in some ways a kind of due process
problem. It is a structural problem involving the way people are treated by government institutions. Moreover, it creates a power imbalance between individuals
and the government. To what extent should the Executive Branch and an agency such as the NSA, which is relatively insulated from the political process and public
accountability, have a significant power over citizens? This
issue is not about whether the information gathered is
something people want to hide, but rather about the power and the structure of government. A related
problem involves “secondary use.” Secondary use is the use of data obtained for one purpose for a different unrelated purpose without the person’s consent. The
Administration has said little about how long the data will be stored, how it will be used, and what it could be used for in the future. The potential future uses of any
piece of personal information are vast, and without limits or accountability on how that information is used, it is hard for people to assess the dangers of the data
being in the government’s control. Therefore, the problem with the nothing to hide argument is that it focuses on just one or two particular kinds of privacy
problems—the disclosure of personal information or surveillance—and not others. It assumes a particular view about what privacy entails, and it sets
the terms for debate in a manner that is often unproductive. It is important to distinguish here between two ways of justifying a program such as the NSA
surveillance and data mining program. The first way is to not recognize a problem. This is how the nothing to hide argument works—it denies even the
existence of a problem. The second manner of justifying such a program is to acknowledge the problems but contend that the
benefits of the NSA program outweigh the privacy harms. The first justification influences the second, because the low value
given to privacy is based upon a narrow view of the problem. The key misunderstanding is that the nothing to hide argument views
privacy in a particular way—as a form of secrecy, as the right to hide things. But there are many other types of harm
involved beyond exposing one’s secrets to the government. Privacy problems are often difficult to recognize and redress because they
create a panoply of types of harm. Courts, legislators, and others look for particular types of harm to the exclusion of others, and their narrow focus blinds them to
seeing other kinds of harms. One of
the difficulties with the nothing to hide argument is that it looks for a visceral kind of injury
as opposed to a structural one. Ironically, this underlying conception of injury is shared by both those advocating for greater privacy protections
and those arguing in favor of the conflicting interests to privacy. For example, law professor Ann Bartow argues that I have
failed to describe privacy harms in a compelling manner in my article, A Taxonomy of Privacy, where I provide a framework for understanding the manifold
different privacy problems.87 Bartow’s primary complaint is that my taxonomy “frames privacy harms in dry, analytical terms that fail to sufficiently identify and
animate the compelling ways that privacy violations can negatively impact the lives of living, breathing human beings beyond simply provoking feelings of
unease.”88 Bartow claims that the taxonomy does not have “enough dead bodies” and that privacy’s “lack of blood and
death, or at least of broken bones and buckets of money, distances privacy harms from other categories of tort law.”
Most privacy problems lack dead bodies. Of course, there are exceptional cases such as the murders of Rebecca Shaeffer and Amy Boyer.
Rebecca Shaeffer was an actress killed when a stalker obtained her address from a Department of Motor Vehicles record.90 This incident prompted Congress to
pass the Driver’s Privacy Protection Act of 1994.91 Amy Boyer was murdered by a stalker who obtained her personal information, including her work address and
Social Security number, from a database company.92 These examples aside, there is not a lot of death and gore in privacy law. If this is the standard to recognize a
problem, then few privacy problems will be recognized. Horrific cases are not typical, and the purpose of my taxonomy is to explain why most privacy problems are
still harmful despite this fact. Bartow’s objection is actually very similar to the nothing to hide argument. Those
advancing the nothing to hide
argument have in mind a particular kind of visceral privacy harm, one where privacy is violated only when something deeply embarrassing or
discrediting is revealed. Bartow’s quest for horror stories represents a similar desire to find visceral privacy harms. The problem is that not all privacy
harms are like this. At the end of the day, privacy is not a horror movie, and demanding more palpable harms will be difficult in many cases. Yet
there is still a harm worth addressing, even if it is not sensationalistic. In many instances, privacy is threatened not by
singular egregious acts, but by a slow series of relatively minor acts which gradually begin to add up. In this way, privacy problems resemble certain environmental
harms which occur over time through a series of small acts by different actors. Bartow wants to point to a major spill, but gradual pollution by a multitude of
different actors often creates worse problems. The law frequently struggles with recognizing harms that do not result in embarrassment, humiliation, or physical or
psychological injury.93 For example, after the September 11 attacks, several airlines gave their passenger records to federal agencies in direct violation of their
privacy policies. The federal agencies used the data to study airline security.94 A group of passengers sued Northwest Airlines for disclosing their personal
information. One of their claims was that Northwest Airlines breached its contract with the passengers. In Dyer v. Northwest Airlines Corp., the court rejected the
contract claim because “broad statements of company policy do not generally give rise to contract claims,” the passengers never claimed they relied upon the policy
or even read it, and they “failed to allege any contractual damages arising out of the alleged breach.”95 Another court reached a similar conclusion.96 Regardless of
the merits of the decisions on contract law, the cases represent a difficulty with the legal system in addressing privacy problems. The disclosure of the passenger
records represented a “breach of confidentiality.”97 The problems caused by breaches of confidentiality do not merely consist of individual emotional distress; they
involve a violation of trust within a relationship. There is a strong social value in ensuring that promises are kept and that trust is maintained in relationships
between businesses and their customers. The problem of secondary use is also implicated in this case.98 Secondary use involves data collected for one purpose
being used for an unrelated purpose without people’s consent. The airlines gave passenger information to the government for an entirely different purpose beyond
that for which it was originally gathered. Secondary use problems often do not cause financial, or even psychological, injuries. Instead, the harm is one of power
imbalance. In Dyer, data was disseminated in a way that ignored airline passengers’ interests in the data despite promises made in the privacy policy. Even if the
passengers were unaware of the policy, there is a social value in ensuring that companies adhere to established limits on the way they use personal information.
Otherwise, any stated limits become meaningless, and companies have discretion to boundlessly use data. Such a state of affairs can leave nearly all consumers in a
powerless position. The harm, then, is less one to particular individuals than it is a structural harm. A similar problem surfaces in another case, Smith v. Chase
Manhattan Bank.99 A group of plaintiffs sued Chase Manhattan Bank for selling customer information to third parties in violation of its privacy policy, which stated
that the information would remain confidential. The court held that even presuming these allegations were true, the plaintiffs could not prove any actual injury:
[T]he “harm” at the heart of this purported class action, is that class members were merely offered products and services which they were free to decline. This does
not qualify as actual harm. The complaint does not allege any single instance where a named plaintiff or any class member suffered any actual harm due to the
receipt of an unwanted telephone solicitation or a piece of junk mail.100 The court’s view of harm, however, did not account for the breach of confidentiality.
When balancing privacy against security, the privacy harms are often characterized in terms of injuries to
the individual, and the interest in security is often characterized in a more broad societal way. The security
interest in the NSA programs has often been defined improperly. In a Congressional hearing, Attorney General Alberto Gonzales
stated: Our enemy is listening, and I cannot help but wonder if they are not shaking their heads in amazement at the thought that anyone would imperil such a
sensitive program by leaking its existence in the first place, and smiling at the prospect that we might now disclose even more or perhaps even unilaterally disarm
ourselves of a key tool in the war on terror.101 The balance between privacy and security is often cast in terms of whether a particular government information
collection activity should or should not be barred. The
issue, however, often is not whether the NSA or other government agencies should
be allowed to engage in particular forms of information gathering; rather, it is what kinds of oversight and
accountability we want in place when the government engages in searches and seizures. The government can employ nearly any
kind of investigatory activity with a warrant supported by probable cause. This is a mechanism of oversight—it forces
government officials to justify their suspicions to a neutral judge or magistrate before engaging in the tactic. For example,
electronic surveillance law allows for wiretapping, but limits the practice with judicial supervision, procedures to minimize the breadth of the wiretapping, and
requirements that the law enforcement officials report back to the court to prevent abuses.102 It is these procedures that the Bush Administration has ignored by
engaging in the warrantless NSA surveillance. The question is not whether we want the government to monitor such conversations, but whether the Executive
Branch should adhere to the appropriate oversight procedures that Congress has enacted into law, or should covertly ignore any oversight. Therefore, the security
interest should not get weighed in its totality against the privacy interest. Rather, what should get weighed is the extent of marginal limitation on the effectiveness
of a government information gathering or data mining program by imposing judicial oversight and minimization procedures. Only in cases where such procedures
will completely impair the government program should the security interest be weighed in total, rather than in the marginal difference between an unencumbered
program versus a limited one. Far too
often, the balancing of privacy interests against security interests takes place in a
manner that severely shortchanges the privacy interest while inflating the security interests. Such is the logic of
the nothing to hide argument. When the argument is unpacked, and its underlying assumptions examined
and challenged, we can see how it shifts the debate to its terms, in which it draws power from its unfair advantage. It
is time to pull the curtain on the nothing to hide argument. Whether explicit or not, conceptions of privacy underpin nearly every argument made
about privacy, even the common quip “I’ve got nothing to hide.” As I have sought to demonstrate in this essay, understanding privacy as a pluralistic conception
reveals that we are often talking past each other when discussing privacy issues. By focusing more specifically on the related problems under the rubric of “privacy,”
we can better address each problem rather than ignore or conflate them. The nothing to hide argument speaks to some problems, but not to others. It
represents a singular and narrow way of conceiving of privacy, and it wins by excluding consideration of the
other problems often raised in government surveillance and data mining programs. When engaged with directly, the nothing to hide
argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of
privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing to hide
argument, in the end, has nothing to say.
Put privacy before security. The ballot should create a side constraint where ends
don’t justify the means. This especially applies to data collection in the absence of
probable cause.
Albright ’14 (Logan Albright is the Research Analyst at FreedomWorks, and is responsible for
producing a wide variety of written content for print and the web, as well as conducting research for
staff media appearances and special projects. He received his Master’s degree in economics from
Georgia State University. “The NSA's Collateral Spying” – Freedom Works - 07/08/2014 http://www.freedomworks.org/content/nsas-collateral-spying)
In short, the report, based on information obtained by Edward Snowden, reveals that during the course of its ordinary, otherwise legal surveillance operations,
the NSA also collected data on large numbers of people who were not specifically targeted. The agency
calls this practice “incidental surveillance.” I call it “collateral spying.” The report found that, on average, 9 out of every 10 people
spied on were not the intended target. The NSA has the legal authority to obtain a warrant based on
probable cause in order to surveil an individual. No one is disputing that. But when this targeting
results in collateral spying on vast numbers of innocents, in the absence of probable cause and the corresponding warrants,
that is a major problem. The NSA has asserted that such incidental data collection is inevitable, and to a certain extent that’s likely true. It is
understandable that in some situations the NSA may learn information about people other than the direct target, but this should obviously be
minimized as far as possible, and at the very least the information should be immediately purged from
government databases, not stored for years on end. In any case, the whole situation is indicative of the agency’s cavalier attitude
towards individual rights. While national security is a concern we all share, the ends do not justify the means when
those means involve violating the constitutional protections afforded to citizens by our nation’s founders. It is not okay to
violate the rights of an innocent in the process of achieving a broader goal, even if that goal is noble.
The way the NSA has been behaving is Machiavellian in the most literal sense. In his 16th century political treatise, The Prince, Niccolo
Machiavelli recognized a harsh reality of politics that still plagues us half a millennium later, writing, “A prince wishing to keep his state is very often forced to do
evil.” Taking Machiavelli’s advice as a
green light for immoral behavior has been the problem with governments throughout history, a problem
the founding fathers sought to avoid by setting down precise guidelines for what the government could and could not do in the form of a Constitution. The
disregard of these rules, and the
argument that there should be a national security exception to the Fourth
Amendment, undermines the entire purpose of the American experiment, and restores the European-style tyrannies the revolutionaries fought
against.
Advantage 2: The Cloud
Current Stored Communications Act is outdated and threatens to stunt technological
innovation.
Medina, 2013 (Melissa, Senior Staff Member, American University Law Review and JD, “The Stored
Communications Act: An Old Statute for Modern Times.” American University Law Review Lexis.)
Despite its previous ability to adapt to vast changes in technology, the twenty-seven-year-old SCA has become hopelessly
outdated. The Act’s framework made sense in 1986 when service providers served two distinct functions, technology and
computers were not widely accessible, and remote storage of electronic communications was prohibitively expensive. Today, email has
become an integral and necessary part of Americans’ professional and personal lives.143 Most
Americans use webmail services and many store these emails and other personal information on the
cloud.144 Individuals use the cloud to store information ranging from personal emails, photos, and
videos as a complete back-up of their hard drive.145 Businesses store emails and highly sensitive information, such as
medical and financial data, trade secret information, and business plans on the cloud.146 The legal uncertainty surrounding the
protections afforded to these remotely stored communications leaves today’s Congress with the same
problem the 1986 Congress faced— an outdated statute that threatens to inhibit technological
innovation. The 1980s signaled the beginning of an “information age” that fundamentally transformed the lives of individuals throughout
the world in ways in which the 1986 Congress could never have imagined.147 Advertisements for state-of-the-art electronics of the 1980s
reveal primitive technology compared to what is available to consumers today.148 More
exciting innovations lie ahead of
us.149 Twenty (and even ten) years from now, newer generations will likely come across advertisements for a $1,400 3D scanner150 and
other gadgets that we consider cutting edge and find the technology and price laughable.151 However, a failure to update the SCA
will inevitably stunt progress by not providing consumers and businesses the confidence to use new
technologies.152 This “chilling effect” will negatively impact consumer choice and the economy on
both a national and global level.153 Fortunately, legislators have introduced a number of proposals to address these concerns,
and recent events have fueled a privacy debate, suggesting that SCA reform may be imminent.154
Innovation in software vital to solve every existential risk
Hayes, 14 (Correspondent-Democrat & Chronicle, 10/5, Bill Gates sees innovation solving world
problems, http://www.democratandchronicle.com/story/money/business/2014/10/05/bill-gates-sees-innovation-solving-world-problems/16760969/)
ITHACA – Bill Gates delivered an optimistic message about the future to Cornell University students during a back-and-forth Wednesday
evening with President David Skorton. Gates, who fielded questions from the audience, spoke to the packed auditorium at Bailey Hall with the
message that innovations
in science, medicine and computer technologies will continue to shape the world for the
better. Progress in reducing health and income inequalities in developing countries gave him particular pride, he said. The Bill & Melinda
Gates Foundation, which he co-chairs with his wife, has disbursed more than $30 billion in grants since its inception 14 years ago. The
foundation has a mission to improve education in the United States and a global focus on improving people’s health in poor countries. “We saw
that health was the greatest injustice,” he told Skorton about his foundation’s mission to improve people’s health. Feeding the poor is only one
priority of the Gates Foundation. The philanthropic group has helped lower the number of childhood deaths from 10 million in 2000 to about 6
million today. His goal is to reduce that further to 2 million, he said. He expressed optimism that research into diseases that ravage the poorer
parts of the world — malaria, cholera, tuberculosis and others — will continue to be funded. Economic development in poorer countries has
helped reduce global inequality, which he said is at a lower level than it has ever been. “The world is A, much richer, and B, much richer in a far
more equitable way,” he told the students. That has been the opposite of what has happened in the past three decades or so in the United
States, he said. He called for tax policies to help level that inequality, with a progressive consumption tax and a high estate tax that limits the
dynastic possession of wealth. While he expressed concern about the current political climate in the country, he felt that science
innovations can overcome problems in Washington, D.C. “The things that count in society don’t depend
on politicians being geniuses,” he said. Gates delivered a similarly optimistic message earlier in the day at the
dedication ceremony of Gates Hall, saying it’s an exciting time to be involved in the computer sciences, even more than when he got
involved 46 years ago. Despite the advances over the past few decades, he said, “the full dream of what is possible with computing has not yet
been realized.” Problems
like developing vaccines, energy sources without carbon dioxide emissions, and
understanding issues as diverse as neurological disease and weather forecasting can all be tackled with
emerging technologies. “With every one of these problems, the digital tools combined with really
amazing software are going to be the reason that we can solve these things,” he said. He said figuring out
solutions depends on software-intensive techniques, and that Cornell students will be poised to make gains in those fields.
Software developments key to address every global problem
Aznar, 12 (Columnist-Sun Star, 7/23, Software ‘can solve world’s problems,’
http://www.sunstar.com.ph/cebu/business/2012/07/23/software-can-solve-world-s-problems-233596)
WITH seven billion people in the world riddled with problems like climate change, cancer, transport
deadlocks and financial crises, a computer science professor believes computer software can be developed to
solve the world’s problems. Srini Devadas, a professor of electrical engineering and computer science at the Massachusetts Institute
of Technology (MIT), in a lecture entitled “Programming the Future,” said that if the huge amount of data available in the
world is processed efficiently to discover patterns or hidden meanings, much progress can be made in
many different fields, such as healthcare, finance and energy. Devadas was flown in from MIT in Cambridge,
Massachusetts to the Philippines to participate in the MIT Lecture Series conducted by Accenture, which is part of the company’s collaboration
with the MIT Professional Education. The collaboration, called the Accenture Technology Academy, is a career development program of the
company that earns for its employees four types of certifications to further the technical development of their IT workers. This month, the
company is also dedicating celebrations for technology and made Devadas’ lecture available for some 200 IT professionals and students
yesterday. Computing paradigms Devadas’ talk focused on three computing paradigms considered central to the progress in computers,
software and hardware: programming for everyone, big data, and crowds to clouds. A shortened part of his talk was presented to reporters and
he showed simple applications that could be useful to everyone, such as a program that replies to a text message sender that the person is
driving. Devadas said that most enterprises
today generate more data than they can process and that the
amount of data grows 50 percent yearly. “It’s hard to keep up with all this data. Think of it as a fire hose and try to
drink water from it. It can’t be done,” he said. “The best approach to tackling big data is to combine the strengths of both human and machine,”
he added.
Privacy concerns fracture the internet and undermine the cloud and big data’s
innovative future, specifically in health care.
Corbin, 3/14/2014 (Kenneth, Washington, D.C.-based writer who covers government and regulatory
issues for CIO.com, “Tech CEOs Warn of Threats to Cloud, Big Data Economy” CIO
http://www.cio.com/article/2377915/regulation/tech-ceos-warn-of-threats-to-cloud--big-data-economy.html)
CEOs at some of the nation's leading tech companies see boundless potential for big data and
smarter, integrated systems to address major social challenges in areas ranging from medicine to
education to transportation — but at the same time, they worry that policymakers at home and
abroad could stand in the way of that vision. Top executives at firms such as Dell, IBM and Xerox gathered in the
nation's capital this week under the auspices of the Technology CEO Council, bringing with them a message that the data
economy is imperiled by concerns about security and privacy and protectionist policies that could
limit the growth of cloud computing and balkanize the Internet. "The biggest barriers I think that we
see are not around the engineering. It's around regulation. It's around protectionism. It's around trust, or lack
thereof. It's around policies and procedures," says Xerox Chairman and CEO Ursula Burns, who also chairs the CEO council.
"One of the biggest concerns that we have, and one of the reasons why we come together as a group, is to make sure that we can keep the
playing field open to advancement, open to the possibilities of this new economy, and not close it off,” Burns adds. Big Data Potential
Needs Free Flow of Information: In addressing the data economy, Burns and other CEOs on the council are offering
policymakers a mixed message. On the one hand, gleaning meaningful insights from expansive data sets can have a
transformative impact on industries such as healthcare that have been slow to adopt new technologies. For instance,
Dell, one of many tech giants building health IT businesses, maintains an archive of some 7 billion
medical images, aggregated in a data set that can be mined for patterns and predictive analytics.
"There's lots that can be done with this data that was very, very siloed in the past," says company Chairman
and CEO Michael Dell. "We're really just kind of scratching the surface." On the other hand, the essential promise
of the data economy that Dell and others envision relies on the free flow of information across distributed
systems — and a foundation of trust that will give users the confidence to share their data with service
providers. Tech leaders see threats on both fronts. "What could happen," Burns warns, "is that governments
around the world figure out a way to restrict data and data portability and data usage and data
movement in such a way that actually limits the full potential of this new economy.” Many U.S. cloud firms
warn against protectionist policies under consideration in foreign markets, particularly in Western Europe, which favor local service providers
with requirements for local hosting or storage, or place restrictions on cross-border data flows. In response, tech leaders lobby against those
proposals overseas and seek to promote safeguards for cross-border data flows in U.S. trade policy. "Countries
that take a
protectionist view and say, 'I'm going to protect my data and make sure that it's secure for my own
economic purposes or political purposes,' or for whatever reason, really run the significant risk of
cutting themselves off from this bounty that comes with the data economy," says D. Mark Durcan, CEO of Micron
Technology.
Data driven healthcare is the critical factor in disease prevention – revolutionizes
planning and treatment
Marr 15 (Bernard, contributor to Forbes, he also basically wrote the book on internet data – called Big
Data – and is a keynote speaker and consultant in strategic performance, analytics, KPIs and big data,
“How Big Data Is Changing Healthcare”, http://www.forbes.com/sites/bernardmarr/2015/04/21/how-big-data-is-changing-healthcare/)
If you want to find out how Big Data is helping to make the world a better place, there’s no better
example than the uses being found for it in healthcare. The last decade has seen huge advances in the
amount of data we routinely generate and collect in pretty much everything we do, as well as our ability to
use technology to analyze and understand it. The intersection of these trends is what we call “Big Data” and it is
helping businesses in every industry to become more efficient and productive. Healthcare is no different.
Beyond improving profits and cutting down on wasted overhead, Big Data in healthcare is being used to predict
epidemics, cure disease, improve quality of life and avoid preventable deaths. With the world’s population
increasing and everyone living longer, models of treatment delivery are rapidly changing, and many of the
decisions behind those changes are being driven by data. The drive now is to understand as much
about a patient as possible, as early in their life as possible – hopefully picking up warning signs of serious
illness at an early enough stage that treatment is far more simple (and less expensive) than if it had not been spotted until later. So
to take a journey through Big Data in healthcare, let’s start at the beginning – before we even get ill. Prevention is better than cure.
Smart phones were just the start. With apps enabling them to be used as everything from pedometers to measure how far you walk in a day, to
calorie counters to help you plan your diet, millions of us are now using mobile technology to help us try and live healthier lifestyles. More
recently, a steady stream of dedicated wearable devices have emerged such as Fitbit, Jawbone and Samsung Gear Fit that allow you to track
your progress and upload your data to be compiled alongside everyone else’s. In the very near future, you could also be sharing this data with
your doctor who will use it as part of his or her diagnostic toolbox when you visit them with an ailment. Even
if there’s nothing wrong with you, access to huge, ever-growing databases of information about the state of the health
of the general public will allow problems to be spotted before they occur, and remedies – either medicinal
or educational – to be prepared in advance. This is leading to groundbreaking work, often by partnerships
between medical and data professionals, with the potential to peer into the future and identify
problems before they happen. One recently formed example of such a partnership is the Pittsburgh Health Data Alliance – which
aims to take data from various sources (such as medical and insurance records, wearable sensors, genetic data and even social media use) to
draw a comprehensive picture of the patient as an individual, in order to offer a tailored healthcare package. That person’s data won’t be
treated in isolation. It will be compared and analyzed alongside thousands of others, highlighting specific threats and issues through patterns
that emerge during the comparison. This enables sophisticated predictive modelling to take place – a doctor will be able to assess the likely
result of whichever treatment he or she is considering prescribing, backed up by the data from other patients with the same condition, genetic
factors and lifestyle. Programs such as this are the industry’s attempt to tackle one of the biggest hurdles in the quest for data-driven
healthcare: The medical industry collects a huge amount of data but often it is siloed in archives controlled by different doctors’ surgeries,
hospitals, clinics and administrative departments. Another partnership that has just been announced is between Apple and IBM. The two
companies are collaborating on a big data health platform that will allow iPhone and Apple Watch users to share data to IBM’s Watson Health
cloud healthcare analytics service. The aim is to discover new medical insights from crunching real-time activity and biometric data from
millions of potential users.
An international approach is key – only cooperation solves
Donahue et al 10 (Donald A. Donahue Jr, DHEd, MBA, FACHE, Potomac Institute for Policy Studies
and University of Maryland University College Stephen O. Cunnion, MD, PhD, MPH, Potomac Institute
for Policy Studies Fred L. Brocker, MPH, RS, Brocker Staffing & Consulting, Inc. Richard H. Carmona, MD,
MPH, FACS, 17th Surgeon General of the United States & University of
Arizona, https://www.academia.edu/3651172/Soft_Power_of_Solid_Medicine ekr)
In this article, the authors argue that a broader, coordinated program of medical diplomacy would generate dual benefits:
increased global engagement and stability and the creation of a worldwide disease surveillance
network that could detect and deter an emerging pandemic. Framing the Issue At an April 2008 conference on preparedness hosted by the
Potomac Institute for Policy Studies’ National Security Health Policy Center, Dr. C. Everett Koop observed that our national approach to communicable
disease has not changed in over a century. In Foggy Bottom—at the far end of the National Mall from the Department of Health and Human Services
(HHS)— diplomacy is similarly practiced as it was 100 years ago. This seemingly strange correlation and recent world events point to the need to augment nineteenth-century “Gunboat
Diplomacy” with twenty-first-century “Hospital Ship Diplomacy.” Linking the considerable scientific and public health expertise of HHS to the
diplomatic mission of the State Department would serve to bolster both the U.S.
international diplomacy mission and improve global public health (Avery 2010). The positive impact of projecting quality public health
and medical expertise—on a consistent basis—to the developing world is well documented (Gillert 1996). The success of General Petraeus’ approach in Iraq is attributed largely to enhancing
the availability of the civic infrastructure (Petraeus 2006). Médecins Sans Frontières (Doctors Without Borders) and similar nongovernment organizations (NGOs) sow well-documented
international goodwill.
Even the Taliban recognize the value of providing medical infrastructure services where the
government does not. Providing these services was a significant factor in their early successes in parts of Afghanistan and the frontier provinces of Pakistan (Homas 2008). Serving the needs of
vulnerable populations can be an entrée to acceptance, including providing a salve for an otherwise unwanted foreign military presence. It should be noted that when U.S. Marine and French
paratrooper barracks were bombed in Beirut in 1983, Italian medical troops were left unharmed.
The efficacy of medical diplomacy in underserved
regions has been validated by first-hand experience. As a young Army medic serving in Phu Bai, Vietnam in 1971, one of the authors built an 80-bed hospital which was named “Tu Ai,” a Vietnamese-Buddhist term for peace. This facility provided medical treatment to all in need with no questions asked. After the fall of South Vietnam in
1975, the Tu Ai medical facility was the only one in the I Corps area of operations (and perhaps of all Vietnam) that was allowed by the new government to continue its mission unchanged. It
continues to provide medical care to this very day. There is ample precedent that supporting improvements in world health produces a political payoff, as evidenced by multiple, if sometimes
disjointed, efforts (Avery 2010; Public Health Systems Research Interest Group Advisory Board 2009; Macqueen KM, et al. 2001; Subcommittee on Oversight & Investigations 2008). The U.S.
and other nations’ military organizations routinely conduct medical assistance missions throughout the developing world. The U.S. Agency for International Development (USAID) and a
number of medical NGOs regularly provide disaster and humanitarian medical and public health assistance. Significant goodwill was engendered by deployment of U.S. Army, Navy, and Public
Health Service (PHS) Commissioned Corps resources to Banda Aceh after the 2004 Indian Ocean earthquake and tsunami and, more recently, to Haiti. Relief missions to Pakistan have been
mounted following the 2005 earthquake and again in response to recent, epic flooding. Defining an Underutilized Resource as an Available Solution.
The benefit of having a
robust, organized health and medical presence around the globe
to help collect and disseminate medical information and coordinate public health activities, including
humanitarian assistance, is less obvious. This benefit is manifested in three ways: the fostering of human
security, the increase in effectiveness of global public health efforts, and an increase in political
legitimacy (Nye 2004). While it is beyond the scope of this commentary to debate the components and relative merits of human security, history supports the position that a
population with increased levels of disease and illness is more susceptible to destabilizing factors that can pose direct threats to state viability and create fertile fields
for radicalism and insurgency (WHO 2007). This is especially true if there are very clear differences in healthcare and public health services available to ruling and elite classes compared to that
available to the general population. Reflective of this, the National Center for Medical Intelligence (NCMI) routinely assesses medical information and reports on diseases and poor public
health conditions that may contribute to politically destabilizing a country (e.g., AIDS). Despite significant, if disparate, initiatives and interest for improving
international health throughout the executive branch of the U.S. government, these collective efforts lack meaningful
coordination into a comprehensive approach on foreign health diplomacy, and therefore fail to realize
the cumulative benefit of, and the inherent political stabilization impact fostered by, an organized and coordinated global health improvement effort. Even with improved political stability, the
need for increased global health capabilities continues unabated. The emergence of SARS, H5N1 influenza, and the pandemic H1N1 outbreak clearly demonstrates that national borders and
ocean expanses no longer protect us from far-flung illnesses. In a global economy and with the ability to travel almost anywhere in the world within 24–36 hours, a local infectious disease
aberration can become an international health crisis in a matter of days. Moreover, because H1N1 influenza turned out to be not as deadly as feared, the danger of a future calamitous
pandemic occurring could be enhanced because the public may not heed future health official warnings. This is not limited to individual perception. In a rare divergence of political and clinical
focus regarding communicable disease, the Parliamentary Assembly of the Council of Europe soundly criticized the World Health Organization (WHO) and national health authorities for
“distortion of priorities of public health services across Europe, waste of large sums of public money, and also unjustified scares and fears about health risks faced by the European public at
large” (Flynn 2010).
Zoonotic diseases coming now – effective healthcare is key to check
Naish 12 (Reporter for Daily Mail, “The Armageddon virus: Why experts fear a disease that leaps from
animals to humans could devastate mankind in the next five years Warning comes after man died from a
Sars-like virus that had previously only been seen in bats Earlier this month a man from Glasgow died
from a tick-borne disease that is widespread in domestic and wild animals in Africa and Asia”
http://www.dailymail.co.uk/sciencetech/article-2217774/The-Armageddon-virus-Why-experts-fear-disease-leaps-animals-humans-devastate-mankind-years.html#ixzz3E5kqxjQI)
The symptoms appear suddenly with a headache, high fever, joint pain, stomach pain and vomiting. As the illness progresses, patients can develop large areas of bruising and uncontrolled bleeding. In at least 30 per cent of cases,
Crimean-Congo Viral Hemorrhagic Fever is fatal. And so it proved this month when a 38-year-old garage owner from Glasgow, who had been to his brother’s wedding in Afghanistan, became the UK’s first confirmed victim of the
tick-borne viral illness when he died at the high-security infectious disease unit at London’s Royal Free Hospital. It is a disease widespread in domestic and wild animals in Africa and Asia — and one that has jumped the species
barrier to infect humans with deadly effect. But the unnamed man’s death was not the only time recently a foreign virus had struck in this country for the first time. Last month, a 49-year-old man entered London’s St Thomas’
hospital with a raging fever, severe cough and desperate difficulty in breathing. He bore all the hallmarks of the deadly Sars virus that killed nearly 1,000 people in 2003 — but blood tests quickly showed that this terrifyingly
virulent infection was not Sars. Nor was it any other virus yet known to medical science. Worse still, the gasping,
sweating patient was rapidly succumbing to kidney failure, a potentially lethal complication that had
never before been seen in such a case. As medical staff quarantined their critically-ill patient, fearful questions began to mount. The stricken man had recently come from Qatar in the
Middle East. What on earth had he picked up there? Had he already infected others with it? Using the latest high-tech gene-scanning technique, scientists at the Health Protection Agency started to piece together clues from tissue
samples taken from the Qatari patient, who was now hooked up to a life-support machine.
The results were extraordinary. Yes, the virus is from the
same family as Sars. But its make-up is completely new. It has come not from humans, but from
animals. Its closest known relatives have been found in Asiatic bats. The investigators also discovered that the virus has already killed someone. Searches of global medical
databases revealed the same mysterious virus lurking in samples taken from a 60-year-old man who
had died in Saudi Arabia in July. When the Health Protection Agency warned the world of this newly-emerging
virus last month, it ignited a stark fear among medical experts. Could this be the next bird flu, or even the next ‘Spanish flu’ — the world’s biggest pandemic, which claimed between 50 million and 100 million lives across the globe
from 1918 to 1919? In all these outbreaks, the virus responsible came from an animal. Analysts now believe that the Spanish flu pandemic
originated from a wild aquatic bird. The terrifying fact is that viruses that manage to jump to us from animals — called
zoonoses — can wreak havoc because of their astonishing ability to catch us on the hop and spread
rapidly through the population when we least expect it. The virus’s power and fatality rates are terrifying. One leading British virologist, Professor
John Oxford at Queen Mary Hospital, University of London, and a world authority on epidemics,
warns that we must expect an animal-originated pandemic to hit the world within the next five years,
with potentially cataclysmic effects on the human race. Such a contagion, he believes, will be a new strain of super-flu,
a highly infectious virus that may originate in some far-flung backwater of Asia or Africa, and be
contracted by one person from a wild animal or domestic beast, such as a chicken or pig. By the time the first victim has
succumbed to this unknown, unsuspected new illness, they will have spread it by coughs and sneezes to family, friends, and all those gathered anxiously around them. Thanks to our crowded,
hyper-connected world, this doomsday virus will already have begun crossing the globe by air, rail,
road and sea before even the best brains in medicine have begun to chisel at its genetic secrets. Before it even
has a name, it will have started to cut its lethal swathe through the world’s population. If this new
virus follows the pattern of the pandemic of 1918-1919, it will cruelly reap mass harvests of young and fit people.
They die because of something called a ‘cytokine
storm’ — a vast overreaction of their strong and efficient immune systems that is prompted by the
virus. This uncontrolled response burns them with a fever and wracks their bodies with nausea and massive fatigue. The hyper-activated immune system actually
kills the person, rather than killing the super-virus. Professor Oxford bases his prediction on historical
patterns. The past century has certainly provided us with many disturbing precedents. For example, the 2003 global outbreak of Sars, the severe
acute respiratory syndrome that killed nearly 1,000 people, was transmitted to humans from Asian
civet cats in China. In November 2002, it first spread among people working at a live animal market in the southern Guangdong province, where civets were being sold. Nowadays, the threat
from such zoonoses is far greater than ever, thanks to modern technology and human population
growth. Mass transport such as airliners can quickly fan outbreaks of newly-emerging zoonoses into deadly global wildfires. The Sars virus was spread when a Chinese professor of respiratory medicine treating people with
the syndrome fell ill when he travelled to Hong Kong, carrying the virus with him. By February 2003, it had covered the world by hitching easy lifts with airline passengers. Between March and July 2003, some 8,400 probable cases
of Sars had been reported in 32 countries. It is a similar story with H1N1 swine flu, the 2009 influenza pandemic that infected hundreds of millions throughout the world. It is now believed to have originated in herds of pigs in
Mexico before infecting humans who boarded flights to myriad destinations. Once these stowaway viruses get off the plane, they don’t have to
learn a new language or new local customs. Genetically, we humans are not very diverse; an epidemic
that can kill people in one part of the world can kill them in any other just as easily. On top of this, our
risk of catching such deadly contagions from wild animals is growing massively, thanks to
humankind’s relentless encroachment into the world’s jungles and rainforests , where we increasingly
come into contact for the first time with unknown viral killers that have been evolving and incubating
in wild creatures for millennia. This month, an international research team announced it had identified an entirely new African virus that killed two teenagers in the Democratic Republic of the
Congo in 2009. The virus induced acute hemorrhagic fever, which causes catastrophic widespread bleeding from the eyes, ears, nose and mouth, and can kill in days. A 15-year-old boy and a 13-year-old girl who attended the same
school both fell ill suddenly and succumbed rapidly. A week after the girl’s death, a nurse who cared for her developed similar symptoms. He only narrowly survived. The new microbe is named Bas-Congo virus (BASV), after the
province where its three victims lived. It belongs to a family of viruses known as rhabdoviruses, which includes rabies. A report in the journal PLoS Pathogens says the
virus probably originated in local wildlife and was passed to humans through insect bites or some
other as-yet unidentified means. There are plenty of other new viral candidates waiting in the wings,
guts, breath and blood of animals around us. You can, for example, catch leprosy from armadillos, which carry the virus in their shells and are responsible for a third of
leprosy cases in the U.S. Horses can transmit the Hendra virus, which can cause lethal respiratory and neurological disease in people. In a new book that should give us all pause for thought, award-winning
U.S. natural history writer David Quammen points to a host of animal-derived infections that now
claim lives with unprecedented regularity. The trend can only get worse, he warns. Quammen highlights the Ebola fever virus, which first
struck in Zaire in 1976. The virus’s power is terrifying, with fatality rates as high as 90 per cent. The latest mass outbreak of the virus, in the Congo last month, is reported to have killed 36 people out of 81 suspected cases.
According to Quammen, Ebola probably originated in bats. The bats then infected African apes, quite probably through the apes coming into contact with bat droppings. The virus then infected local hunters who had eaten the apes
as bushmeat. Quammen believes a similar pattern occurred with the HIV virus, which probably originated in a single chimpanzee in Cameroon.
‘It is inevitable we will have a global outbreak’
Studies of the virus’s genes suggest it may have first evolved as early as 1908. It was not until the Sixties that it appeared in humans, in big African cities. By the Eighties, it was spreading by airlines to America.
Since then, Aids has killed around 30 million people and infected another 33 million. There is one mercy with Ebola and HIV. They cannot be transmitted by coughs and sneezes. ‘Ebola is transmissible from human to human through
direct contact with bodily fluids. It can be stopped by preventing such contact,’ Quammen explains. ‘If HIV could be transmitted by air, you and I might
already be dead. If the rabies virus — another zoonosis — could be transmitted by air, it would be the
most horrific pathogen on the planet.’ Viruses such as Ebola have another limitation, on top of their method of transmission. They kill and incapacitate people too quickly. In order to
spread into pandemics, zoonoses need their human hosts to be both infectious and alive for as long as possible, so that the virus can keep casting its deadly tentacles across the world’s population. But there is
one zoonosis that can do all the right (or wrong) things. It is our old adversary, flu. It is easily transmitted through the air, via
sneezes and coughs. Sars can do this, too. But flu has a further advantage. As Quammen points out: ‘With Sars, symptoms tend to appear in a person before, rather than after, that person becomes highly infectious.
Isolation: Unlike Sars, the symptoms of this new disease may not be apparent before the spread of infection
‘That allowed many Sars cases to be recognised, hospitalised and placed in isolation before they hit their peak of infectivity. But with influenza and many other diseases, the order is reversed.’ Someone who
has an infectious case of a new and potentially lethal strain of flu can be walking about innocently
spluttering it over everyone around them for days before they become incapacitated. Such reasons lead Professor
Oxford, a world authority on epidemics, to warn that a new global pandemic of animal-derived flu is
inevitable. And, he says, the clock is ticking fast. Professor Oxford’s warning is as stark as it is certain: ‘I think it is inevitable that we will have another big
global outbreak of flu ,’ he says. ‘We should plan for one emerging in 2017-2018.’ But are we adequately prepared to cope? Professor Oxford
warns that vigilant surveillance is the only real answer that we have. ‘
New flu strains are a day-to-day problem and we have to be very
careful to keep on top of them,’ he says. ‘We now have scientific processes enabling us to quickly identify the genome of the virus behind a new illness, so that we know what we are dealing
with. The best we can do after that is to develop and stockpile vaccines and antiviral drugs that can fight
new strains that we see emerging.’ But the Professor is worried our politicians are not taking this certainty of mass death seriously enough. Such laxity could come at a human cost so
unprecedentedly high that it would amount to criminal negligence. The race against newly-emerging animal-derived diseases is one that we have to win every time. A pandemic virus needs to
win only once and it could be the end of humankind.
Zoonotic diseases specifically cause extinction
Casadevall 12 (Arturo, M.D., Ph.D. in Biochemistry from New York University, Leo and Julia
Forchheimer Professor and Chair of the Department of Microbiology and Immunology at Albert Einstein
College of Medicine, former editor of the ASM journal Infection and Immunity, “The future of biological
warfare,” Microbial Biotechnology Volume 5, Issue 5, pages 584–587, September 2012,
http://onlinelibrary.wiley.com/doi/10.1111/j.1751-7915.2012.00340.x/full)
In considering the importance of biological warfare as a subject for concern it is worthwhile to review the known existential threats. At this time
this writer can identify three major existential threats to humanity: (i) large-scale thermonuclear war followed by a nuclear winter, (ii) a
planet killing asteroid impact and (iii) infectious disease. To this trio might be added climate change making the planet uninhabitable. Of the
three existential threats the first is deduced from the inferred cataclysmic effects of nuclear war. For the second there is geological evidence for
the association of asteroid impacts with massive extinction (Alvarez, 1987). As
to an existential threat from microbes, recent
decades have provided unequivocal evidence for the ability of certain pathogens to cause the
extinction of entire species. Although infectious disease has traditionally not been associated with
extinction this view has changed by the finding that a single chytrid fungus was responsible for the
extinction of numerous amphibian species (Daszak et al., 1999; Mendelson et al., 2006). Previously,
the view that infectious diseases were not a cause of extinction was predicated on the notion that
many pathogens required their hosts and that some proportion of the host population was naturally
resistant. However, that calculation does not apply to microbes that are acquired directly from the
environment and have no need for a host, such as the majority of fungal pathogens. For those types of
host–microbe interactions it is possible for the pathogen to kill off every last member of a species
without harm to itself, since it would return to its natural habitat upon killing its last host. Hence,
from the viewpoint of existential threats environmental microbes could potentially pose a much
greater threat to humanity than the known pathogenic microbes, which number somewhere near 1500 species
(Cleaveland et al., 2001; Taylor et al., 2001), especially if some of these species acquired the capacity for pathogenicity as a consequence of
natural evolution or bioengineering.
Amending the SCA solves cloud computing.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
The Stored Communications Act has been noted as a “remarkable achievement”167 and “forward-looking”168 for being able to adapt its Fourth Amendment-like protections to emerging technologies,
especially in light of the fact that it was enacted in 1986. Yet, technology has developed so much in the
past twenty-five years that it has outpaced the Act. The Internet has grown exponentially in those
years and is now an integral and pervasive part of millions of peoples’ lives. Countless industries, both
Internet-based and non-Internet-based, rely on and benefit from using the Internet as a market, a
service, or a forum. It is against this backdrop that the emergence of cloud computing technology
takes place. The SCA’s protections for cloud computing will have enormous implications for the
privacy rights of millions of people. Cloud computing has the potential to transform the way people
interact and how companies run their businesses. Thus, it is important that Congress update the SCA
to bring it up to date with modern technologies so that the same privacy protections that exist for
other forms of storage and communications exist in the cloud.
Amending the SCA solves certainty
Medina, 2013 (Melissa, Senior Staff Member, American University Law Review and JD, “The Stored
Communications Act: An Old Statute for Modern Times.” American University Law Review Lexis.)
Congress’s foresight in enacting the SCA has allowed the Act to adapt to a number of new
technologies surprisingly well in the twenty-seven years since its enactment. However, many now argue that
technology has outpaced the Act’s structure and underlying assumptions. Currently, the success of an action under the SCA is
necessarily tied to geography due to courts around the country ruling in different ways. Moreover,
individuals, service providers, and the government are uncertain of their rights and responsibilities.
For this reason, a clear and consistent framework is necessary. An amendment that updates the SCA
by creating a technologically neutral statute that simplifies the current structure of the SCA, and its
basis on archaic technology, is critical to ensure the continued progress of technology. However, until
Congress updates the SCA, courts must effectuate Congress’s intent by categorizing modern technology within the language of the current
statute. Courts must carefully avoid injecting their own language. By focusing on the function of ISPs with regard to the particular
communication in question, a simple framework emerges that allows courts to apply the SCA to modern technologies and addresses the
concerns that prompted Congress to adopt the SCA. Only then can courts be sure that Congress’s intent is properly carried out.
Plan
The United States federal government should amend the Stored Communication Act
to:
• Remove the distinction between remote computing service and electronic communication service and require warrants as a general rule;
• Add a statutory suppression remedy;
• Clarify the scope of voluntary disclosures.
Solvency
Removing the RCS/ECS distinction is key to restore 4th Amendment privacy protection
to online communication.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
1. Remove the RCS/ECS Distinction and Require Warrants as a General Rule While the SCA was forward-looking in many
ways, its conception of electronic communications technologies remains stuck in the 1980s. Its
distinction between ECS and RCS has become unwieldy and has led to confusion and misapplication of
the statute. The different levels of protection depending on what function the Internet service is
performing at a given time belies the original purpose of the SCA, which was to provide protection in an area where
the courts were unclear in their application of Fourth Amendment privacy protections (yet where most people would have a reasonable
expectation of privacy). 139 This
expectation of privacy, however, is not one that should change according to
the different stages in the lifecycle of an e-mail or stored document. When a person sends an e-mail to another
person, he or she has the same expectation of privacy when composing the email, when clicking send, and when the other person receives the
email, much as that person would have the same expectation of privacy on a document he or she composes and stores locally on a home
computer. Thus,
my first proposed modification would be to remove the distinction between ECS and
RCS and require the government to show probable cause for a search warrant, as the Fourth
Amendment requires, before it can compel disclosure of stored electronic information. Having this as
the general rule, with specific exceptions allowing for lesser standards (such as in the case of emergencies),
would greatly simplify the statute, while allowing law enforcement to do its job.140
Suppression remedy is key – it deters abuses by law enforcement
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
Add a Statutory Suppression Remedy A second possible change is for Congress to incorporate a
statutory suppression remedy into the SCA. This would likely deter abuses of the statute by law
enforcement officials.158 The SCA’s lack of a suppression remedy is unlike Title III of the Omnibus Crime
Control and Safe Streets Act of 1968,159 the primary federal statute that governs the interception of
wire and oral communications, or the Fourth Amendment, which provides for suppression if
government action violates the statute.160 As Justice Holmes noted, the Fourth Amendment would be
simply a “form of words” without the exclusionary rule.161 This differentiation between interception
of wire and oral communication and retrieval of stored electronic communications is unnecessary. The
same privacy interests are at stake concerning unreasonable searches and seizures, and the
same potential danger of law enforcement officials’ misconduct exists. Furthermore, as Orin Kerr
argues, adding an exclusionary remedy would “benefit both civil libertarian and law enforcement
interests alike.”162 For civil libertarian interests, “a suppression remedy would considerably increase
judicial scrutiny of the government’s Internet surveillance practices in criminal cases,” which would
“clarify the rules that the government must follow, serving the public interest of greater
transparency.”163 It would do this by giving defendants the option to challenge government action in
criminal cases. For law enforcement interests, a suppression remedy would allow prosecutors to have
“greater control over the types of cases the courts decided, enjoy more sympathetic facts, and have a
better opportunity to explain and defend law enforcement interests before the courts.”164 Thus,
“[t]he statutory law of Internet surveillance would become more like the Fourth Amendment law: a
source of vital and enforceable rights that every criminal defendant can invoke, governed by relatively
clear standards that by and large respect law enforcement needs and attempt to strike a balance
between those needs and privacy interests.”165
Clarifying the scope of voluntary disclosure provides certainty to cloud computing.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
Clarify the Scope of Voluntary Disclosures Finally, Congress can enhance privacy interests by clarifying the scope of
voluntary disclosures under a standard that sufficiently protects customers. As an effect of the limited
remedies available to those who have been harmed by SCA violations, the lack of judicial opinions on
the SCA means that the provisions for when a disclosure is voluntary and when it is compelled are not
fully fleshed out. In seeking these clarifications and changes, I use Fourth Amendment doctrine as the standard for
where the SCA should be. I propose that if a cloud service provider voluntarily discloses information
because of misrepresentations or because of (illegitimate) government coercion, then it should not be
liable for such a disclosure. Yet, there may still be instances where a service provider might be liable
for a voluntary disclosure, even if there was government encouragement. For example, if there was no legal
process and the service provider voluntarily disclosed information in a non-emergency situation (or other situations to be defined statutorily),
the provider would be liable for a violation. In
assessing the voluntariness of the disclosure, a court should look at
whether the provider discloses the information under a good faith belief that it was compelled. Thus,
in a situation where there is a court order or warrant “for show” (where the service provider came to the
government seeking to disclose the information to begin with), such legal process should not absolve the service
provider of liability. The other side of liability for compelled disclosures involves the government. Here,
the SCA should seek to model its standard such that it is in line with Fourth Amendment standards for
third parties acting as agents on behalf of the government. If the government complies with the provisions in § 2703,
then it should not be liable for any SCA violations. On the other hand, if the government acted in such a way as to cause the service provider to
be its agent, then the government should be liable for violation of the SCA. While
Fourth Amendment doctrine has not
settled on the “appropriate inquiry to be performed in determining whether involvement of the
police transforms a private individual into an agent or instrument of the police,”166 any modification
to the SCA should try to settle on a particular type of test, in the interests of clarity and consistency.
Congress is better suited to legislate privacy policy – multiple reasons
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Fast-paced technological change has destabilized the current statutory and constitutional framework
for protecting citizens’ privacy. Simultaneously, it poses a challenge to courts wishing to craft appropriate,
narrowly tailored solutions. The last fundamental shift in Fourth Amendment doctrine—Katz v. United
States148—occurred in the 1960s and was brought on by the ubiquity of land-line telephones and fears about unchecked government
wiretapping efforts. In deciding Katz, the Court addressed a largely static technological concern. Telephones were already a
relatively old technology, and the Court had previously addressed wiretapping four decades earlier. 149 Moreover, traditional landline
telephones would remain the primary mode of instant, long-distance communication for at least two decades to come. When the Court
issued the Katz decision in 1967, it could be confident its holding would be sustainable over the long term.
In contrast, the last three decades have seen constant—and rapid—technological change. Telephones
transitioned from landlines to mobile phones and later to smartphones, which allow for voice, e-mail,
and web-based communications. E-mails, text messages, and other forms of instant messaging have
largely replaced paper letters. Websites have evolved from static, text-based documents to
multimedia experiences to interactive cloud software platforms. Many of these changes—most notably the
rise of smartphones and other highly mobile computing devices—date only to the last decade. And there is little indication
that the pace of technological change is slowing, that the “next big thing” is not waiting just around the corner. In this
fast-changing context, it is often better to formulate privacy law through legislation than by judicial
holding, as Kerr has convincingly argued.150 While philosophical justifications support this conclusion, this Note’s primary interest is
practical: unlike the courts, Congress can pass sweeping but intricate policy schemes to address present
and future concerns, and legislation is not bound by stare decisis. Courts must rule on a case-by-case
basis, whereas Congress may anticipate a wide range of circumstances and enact carefully tailored
legislation. The Maynard opinion and Justice Alito’s concurrence in Jones provide good examples of
courts’ difficulties crafting all-encompassing regulatory schemes. The D.C. Circuit ruled that Jones’s privacy had been
violated at some point during the twenty-eight days of GPS surveillance.151 Justice Alito agreed.152 Neither opinion, however,
attempted to determine with precision how long was too long. And neither opinion defined with any clarity the types
and techniques of surveillance to which the mosaic theory applied. Congress, on the other hand, can address many or all of these questions in a
single act. Then, too, courts generally adjudicate past disputes, whereas Congress legislates future policy. As a
result, judicial holdings regarding fast-changing subjects tend to be outdated on arrival, while
legislation can address present and future concerns. The initial passage of the Electronic
Communications Privacy Act of 1986, which (for all its faults) largely protected e-mail communications and
other online interactions at a time when few Americans owned or used personal computers, is a
testament to the lasting power of legislation. Finally, courts are generally bound by precedent, while
Congress is not. With the exception of the Supreme Court, most courts cannot make sweeping policy changes. Legislatures, on
the other hand, can repeal, modify, or pass new policies; indeed, that is their primary purpose. While
institutional and political forces often cause legislatures to support (or, by not acting, resort to) the status quo,153 Congress remains
better equipped than courts to plot out new policy directions.
Mosaic Theory Advantage
Uniqueness
Will Be Applied
Mosaic theory will be applied beyond Jones
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
In United States v. Jones, the Supreme Court relied on “18th-century tort law” to dodge a decidedly
twenty-first-century issue: Should Fourth Amendment doctrine account for the government’s
increasing technological ability to easily and inexpensively monitor suspects? And if so, how? The D.C.
Circuit, in the case below, addressed the issue by introducing (or, more accurately, re-purposing3) a novel approach that
has been termed the “mosaic theory.”4 Under the mosaic theory, certain forms of long-term
surveillance violate a suspect’s reasonable expectation of privacy, even when each individual act of
surveillance would itself pass Fourth Amendment muster, because the government can use the aggregate surveillance
data to infer private details about the suspect that no individual member of the public could reasonably learn by observing the suspect for a
short time. Applying the mosaic theory, the D.C. Circuit held that the government’s month-long, warrantless GPS surveillance of the defendant,
Antoine Jones, violated his Fourth Amendment right to be free from unreasonable searches. 5 On appeal, the Supreme Court agreed that the
police officers’ behavior was unconstitutional but reached its holding on different grounds. Instead of addressing the month-long monitoring of
Jones’s movements, the majority focused on the police officers’ warrantless installation of a physical GPS device on Jones’s car, determining
that even this de minimis physical trespass violated the Fourth Amendment.6 While
the Court’s holding was more limited
than privacy rights supporters might have preferred,7 five Justices seemed willing to adopt some
version of the mosaic theory, if not in Jones then in a future case not involving physical trespass.8
No Privacy in SCA
SCA doesn’t protect privacy in the status quo.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
As forward-thinking as the SCA was for its time, it is not currently without shortcomings. First, the SCA’s ECS/RCS distinction
seems particularly anachronistic in the age of cloud computing. The number of people who use the Internet has
increased exponentially103 and new types of technologies have emerged that fundamentally change the way people use the Internet. The
justification often given for Congress creating a distinction between ECS and RCS is that “by ‘renting’
computer storage space with a remote computing service, a customer places himself in the same
situation as one who gives business records to an accountant or attorney.”104 Yet, this analogy is erroneous
given the way Internet communications systems work.105 Although a third party holds the content of the
communications, the user still retains an expectation of privacy. The user does not expect the third party to use the
content to facilitate any intended transaction, unlike a person handing over financial statements to an accountant. In the case of most
cloud computing services, the third party is acting merely as a “rental locker” where the user can store
his or her data in a private and secure location on the Internet or a “package carrier” for the user’s
data. Conversely, at issue in United States v. Miller106 were records, not the contents of private communications. Professor Patricia Bellia
notes that “nothing in Miller suggests that the category [of business records] is all-encompassing— that one
lacks an expectation of privacy in anything in the hands of a third party,”107 which was also the position adopted by the Sixth Circuit in United
States v. Warshak.108 Moreover, the
Supreme Court has recognized that letters and sealed packages are “as
fully guarded from examination . . . as if they were retained by the parties forwarding them in their
own domiciles.”109 It has also recognized that when someone maintains personal property on a third party’s premises, he or she retains
a reasonable expectation of privacy, even if that third party has the right to access the property for some purposes.110 Since most cloud
computing providers provide services that are more similar to these last two functions than that of an accountant receiving financial
statements, the
SCA’s privacy protections are inadequate given the user’s privacy interest. Another major
limitation in the SCA is that it does not provide for a suppression remedy.111 This is problematic for
several reasons. First, this means that the government’s failure to comply with the SCA does not avail
the defendant of any substantive remedy (unless a court concludes that the violation rises to the level of a Fourth Amendment
violation).112 Without a proper remedy, the government can get away with less than adequate
procedure.113 Additionally, the defendant would not have a remedy against a service provider, because the service provider would simply
be complying with a court order, the validity of which it would have no reason to question,114 and would be protected under § 2707(e)’s good
faith exception.115 Second,
the lack of a suppression remedy means that few defendants have any
incentive to challenge the government’s surveillance practices.116 Without an incentive to challenge the
government’s practices under the SCA, relatively few cases apply this particular statute.117 This results in a statute with a reputation for being
poorly understood, because there are fewer judicial opinions that fully analyze the contours of the statute. Finally, the
SCA’s privacy
protections are limited because of the way the statute deals with voluntary disclosures under § 2702 as
opposed to compelled disclosure under § 2703. Voluntary disclosures are particularly important in the context of the Internet and
law enforcement, because the majority of the action takes place between law enforcement and the Internet service providers. In many
situations, however, there is a “gray zone” where potential liability for service providers disclosing
information to the government might be unclear. 118 Orin Kerr gives two examples of how this might occur: A police officer
contacts an ISP system administrator and explains that he is investigating a child molestation case. The officer asks the system administrator if
he is interested in helping out the police by voluntarily disclosing certain files. Wishing to be a good citizen, the system administrator agrees
and turns over files to the agent. Is this a case of “compelled” disclosure or “voluntary” disclosure? Alternatively, imagine that a system
administrator contacts the FBI and wants to disclose files but then asks for a subpoena just to make sure there was some sort of documentation
of the disclosure. The FBI agent agrees, forwards a subpoena to the system administrator, and then accepts the files. Does the presence
of the subpoena turn what was a voluntary disclosure into a compelled disclosure?119
Links
Mosaic Theory Solves Privacy
Advancing the Mosaic theory is key to privacy and doesn’t hurt investigations.
Gatewood, 2014 (Jace C., Associate Professor of Law at Atlanta's John Marshall Law School, “District
of Columbia Jones and the Mosaic Theory - In Search of a Public Right of Privacy: The Equilibrium Effect
of the Mosaic Theory” Nebraska Law Review Lexis)
Justice Alito said it best: "Society's expectation has been that law enforcement agents and others would not - and indeed, in the main, simply could not - secretly monitor and catalogue every single movement of
an individual's car for a very long period." n224 It is simply that in everyday life, we reasonably expect even in
public, certain facts concerning our daily comings and goings will remain private, not because we intend for
them to be private (in most instances) but because we do not expect that any one person would be privy to all of
our day's events. n225 Implementing the mosaic theory, despite all of the criticism [*536] raised regarding its
application, will help preserve the practical guarantees of the Fourth Amendment by restoring practical
limitations and by balancing the government's interest in investigating crime and society's interest in
maintaining privacy in and out of the public eye. The mosaic theory comports with our real-world
expectations of privacy much better than the idea that what a person knowingly exposes to the public
cannot also be subject to a reasonable expectation of privacy. It also preserves a degree of public
privacy in the face of advanced technology without completely undermining law enforcement's
efforts to investigate and prevent crime.
The mosaic theory is key to restore the balance of privacy in the era of advanced
technology.
Gatewood, 2014 (Jace C., Associate Professor of Law at Atlanta's John Marshall Law School, “District
of Columbia Jones and the Mosaic Theory - In Search of a Public Right of Privacy: The Equilibrium Effect
of the Mosaic Theory” Nebraska Law Review Lexis)
Professor Orin S. Kerr argues that much of Fourth Amendment jurisprudence is based on what he describes as "equilibrium-adjustment." n211
Professor Kerr describes equilibrium-adjustment as follows: Equilibrium-adjustment acts
as a correction mechanism.
When judges perceive that changing technology or social practice significantly weakens police power
to enforce the law, courts adopt lower Fourth Amendment protections for these new circumstances
to help restore the status quo ante. On the other hand, when judges perceive that changing
technology or social practice significantly enhances government power, courts embrace higher
protections to counter the expansion of government power. n212 As advanced surveillance and tracking
technology have greatly expanded the government's power, adoption of the mosaic theory would
coexist with Professor Kerr's view of Fourth Amendment jurisprudence by restoring the past level of Fourth Amendment protections. Under
the current prohibitions of the Fourth Amendment, "relatively short-term monitoring of a person's movements on public streets" n213 would
be permitted by law enforcement officials having probable cause or reasonable suspicion. n214 Likewise, the use of many other investigatory
surveillance techniques to collect and store limited information, presumably to yield sufficient information necessary to secure a warrant,
would also be permitted so long as they do not involve a trespass or invade a constitutionally protected area. n215 It is unclear
under
current constitutional proscriptions how much information can be collected and stored or how long
the government may investigate an individual using enhanced surveillance technology. n216
Implementing the mosaic theory to these kinds of Fourth Amendment issues will force the
government to make these practical assessments regarding how much and how long or risk having
evidence excluded if we assume that the appropriate remedy for a Fourth Amendment violation
under the mosaic theory would be the exclusion of the evidence [*534] gathered in violation thereof.
n217 This assessment will necessarily include an assessment by the government as to the scope of the mosaic theory's reach, including whether
or not too much information is being collected and stored or whether or not enhanced tracking surveillance is too long, thereby violating the
Fourth Amendment under the mosaic theory. This
assessment mirrors the practical considerations that protected
Fourth Amendment rights in the past. Prior to modern digital technology, law enforcement officials
routinely had to make critical assessments as to how much manpower, how much time, and how
many resources they would or could devote to a given suspect. In making this assessment, no doubt decisions had to
be made regarding the potential of obtaining sufficient evidence to meet the probable cause standard. Those suspects that were of
limited value most likely received less attention in terms of manpower, time, and resources, thus
preserving a degree of privacy as a practical matter, while higher-valued suspects received the bulk of
the resources. But even then, the degree of privacy intrusion was still limited by practical
considerations that included how much time to devote and how much information could physically be
collected. n218 These practical limitations are exactly what protected Fourth Amendment rights in an
earlier age. The advent of advanced technology has changed the landscape of "dragnet type" n219
surveillance from something that was "difficult and costly" n220 to something "relatively easy and
cheap." n221 The goal of the mosaic theory would be to force law enforcement to make critical
decisions regarding the use of such technology instead of using it indiscriminately without concern to
privacy implications. In cases where law enforcement officials are unsure whether particular surveillance activity would fall outside the ambit of the mosaic theory, law enforcement would have the option of seeking independent judicial review and securing a warrant, which, after all, is the point of the Fourth Amendment. n222 As a result, law enforcement is free to pursue its investigation in any practical manner it deems fit, keeping in mind [*535] that abuses of technology could result in violations of the Fourth Amendment. This
resulting
effect makes the mosaic theory the great equalizer. Thus, what practical considerations did for Fourth
Amendment privacy rights in the pre-computer age, the mosaic theory will do for privacy rights in the
wake of advanced technology.
Impacts
Privacy Outweighs w/Util
Even within a utilitarian framework, privacy outweighs for two reasons:
First – Structural bias. Their link inflates the security risk and their impact is epistemologically wrong.
Solove ‘08 Daniel Solove is an Associate Professor at George Washington University Law School and holds a J.D. from Yale
Law School. He is one of the world’s leading experts in information privacy law and is well known for his academic work on
privacy and for popular books on how privacy relates to information technology. He has written 9 books and more than 50
law review articles – From the Article: “Data Mining and the Security-Liberty Debate” - University of Chicago Law Review, Vol.
74, p. 343, 2008 - http://papers.ssrn.com/sol3/papers.cfm?abstract_id=990030
Data mining is one issue in a larger debate about security and privacy. Proponents of data mining justify it as an
essential tool to protect our security. For example, Judge Richard Posner argues that “[i]n an era of global terrorism and proliferation of weapons of mass destruction, the government has a
compelling need to gather, pool, sift, and search vast quantities of information, much of it personal.”9 Moreover, proponents of security measures argue that we must provide the executive branch with the discretion it needs to
protect us. We cannot second guess every decision made by government officials, and excessive meddling into issues of national security by judges and others lacking expertise will prove detrimental. For example, William Stuntz contends that “effective, active government—government that innovates, that protects people who need protecting, that acts aggressively when action is needed—is dying. Privacy and transparency are the diseases. We need to find a vaccine, and soon.”10 Stuntz concludes that “[i]n an age of terrorism, privacy rules are not simply unaffordable. They are perverse.”11 We live in an “age of balancing,” and the prevailing view is that most rights and civil liberties are not absolute.12 Thus, liberty must be balanced against security. But there are systematic problems with how the balancing occurs that inflate the importance of the security interests and diminish the value of the liberty interests. In this essay, I examine some common difficulties in the way that liberty is balanced against security in the context of data mining. Countless discussions about the tradeoffs between security and liberty begin
by taking a security proposal and then weighing it against what it would cost our civil liberties. Often, the liberty interests are cast as individual rights and
balanced against the security interests, which are cast in terms of the safety of society as a whole. Courts and commentators defer
to the government’s assertions about the effectiveness of the security interest. In the context of data mining, the liberty interest is limited by narrow understandings of privacy that neglect to account for many privacy problems.
As a result, the balancing concludes with a victory in favor of the security interest. But as I will argue, important dimensions of data mining’s security benefits require more scrutiny, and the privacy concerns are significantly greater than currently acknowledged. These problems have undermined the balancing process and skewed the results toward the security side of
the scale. Debates about data mining begin with the assumption that it is an essential tool in protecting our security. Terrorists lurk among us, and ferreting them out can be quite difficult. Examining data for patterns
will greatly assist in this endeavor, the argument goes, because certain identifiable characteristics and behaviors are likely to be associated with terrorist activity. Often, little more is said, and the debate proceeds to examine
whether privacy is important enough to refrain from using such an effective terrorism-fighting tool. Many discussions about security and liberty proceed in this fashion. They commence by assuming that a particular security
measure is effective, and the only remaining question is whether the liberty interest is strong enough to curtail that measure. But given the gravity of the security concerns over terrorism, the liberty interest has all but lost before it
is even placed on the scale. Judge Richard Posner argues that judges should give the executive branch considerable deference when it comes to assessing the security measures it proposes. In his recent book, Not a Suicide Pact:
The Constitution in a Time of National Emergency,13 Posner contends that judicial restraint is wise because “when in doubt about the actual or likely consequences of a measure, the pragmatic, empiricist judge will be inclined to
give the other branches of government their head.”14 According to Posner, “[j]udges aren’t supposed to know much about national security.”15 Likewise, Eric Posner and Adrian Vermeule declare in their new book, Terror in the
Balance: Security, Liberty, and the Courts,16 that “the executive branch, not Congress or the judicial branch, should make the tradeoff between security and liberty.”17 Moreover, Posner and Vermeule declare that during
emergencies, “[c]onstitutional rights should be relaxed so that the executive can move forcefully against the threat.”18 The problem with such deference is that, historically, the executive branch has not always made the wisest
national security decisions. Nonetheless, Posner and Vermeule contend that notwithstanding its mistakes, the executive branch is better than the judicial and legislative branches on institutional competence grounds.19 “Judges
are generalists,” they observe, “and the political insulation that protects them from current politics also deprives them of information, especially information about novel security threats and necessary responses to those
threats.”20 Posner and Vermeule argue that during emergencies, the “novelty of the threats and of the necessary responses makes judicial routines and evolved legal rules seem inapposite, even obstructive.”21 “Judicial routines”
and “legal rules,” however, are the cornerstone of due process and the rule of law—the central building blocks of a free and democratic society. At many times, Posner, Vermeule, and other strong proponents of security seem to
focus almost exclusively on what would be best for security when the objective should be establishing an optimal balance between security and liberty. Although such a balance may not promote security with maximum efficiency,
it is one of the costs of living in a constitutional democracy as opposed to an authoritarian political regime. The executive branch may be the appropriate branch for developing security measures, but this does not mean that it is
the most adept branch at establishing a balance between security and liberty. In our constitutional democracy, all branches have a role to play in making policy. Courts protect constitutional rights not as absolute restrictions on
executive and legislative policymaking but as important interests to be balanced against government interests. As T. Alexander Aleinikoff notes, “balancing now dominates major areas of constitutional law.”22 Balancing occurs
through various forms of judicial scrutiny, requiring courts to analyze the weight of the government’s interest, a particular measure’s effectiveness in protecting that interest, and the extent to which the government interest can be
achieved without unduly infringing upon constitutional rights.23 For balancing to be meaningful, courts must scrutinize both the security and liberty interests. With deference, however, courts fail to give adequate scrutiny to
security interests. For example, after the subway bombings in London, the New York Police Department began a program of random searches of people’s baggage on the subway. The searches were conducted without a warrant,
probable cause, or even reasonable suspicion. In MacWade v Kelly,24 the United States Court of Appeals for the Second Circuit upheld the program against a Fourth Amendment challenge. Under the special needs doctrine, when
exceptional circumstances make the warrant and probable cause requirements unnecessary, the search is analyzed in terms of whether it is “reasonable.”25 Reasonableness is determined by balancing the government interest in
security against the interests in privacy and civil liberties.26 The weight of the security interest should turn on the extent to which the program effectively improves subway safety. The goals of the program may be quite laudable,
but nobody questions the importance of subway safety. The critical issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the tradeoff in civil liberties. On this question,
unfortunately, the court deferred to the law enforcement officials, stating that the issue “is best left to those with a unique understanding of, and responsibility for, limited public resources, including a finite number of police
officers.” 27 In determining whether the program was “a reasonably effective means of addressing the government interest in deterring and detecting a terrorist attack on the subway system,”28 the court refused to examine the
data to assess the program’s effectiveness.29 The way the court analyzed the government’s side of the balance would justify nearly any search, no matter how ineffective. Although courts should not take a know-it-all attitude, they
should not defer on such a critical question as a security measure’s effectiveness. The problem with many security measures is that they are not wise expenditures of resources. A small number of random searches in a subway
system of over four million riders a day seems more symbolic than effective because the odds of the police finding the terrorist with a bomb are very low. The government also argued that the program would deter terrorists from
bringing bombs on subway trains, but nearly any kind of security measure can arguably produce some degree of deterrence. The key issue, which the court did not analyze, is whether the program would lead to deterrence
significant enough to outweigh the curtailment of civil liberties. If courts fail to question the efficacy of security measures, then the security interest will prevail nearly all the time. Preventing terrorism has an immensely heavy
weight, and any given security measure will provide a marginal advancement toward that goal. In the deference equation, the math then becomes easy. At this point, it is futile to even bother to look at the civil liberties side of the
balance. The government side has already won. Proponents of deference argue that if courts did not defer, then they would be substituting their judgment for that of executive officials, who have greater expertise in understanding
security issues. Special expertise in national security, however, is often not necessary for balancing security and liberty. Judges and legislators should require the experts to persuasively justify the security measures being developed
or used. Of course, in very complex areas of knowledge, such as advanced physics, nonexperts may find it difficult to understand the concepts and comprehend the terminology. But it is not clear that security expertise involves
such sophisticated knowledge that it would be incomprehensible to nonexperts. Moreover, the deference argument conflates evaluating a particular security measure with creating such a measure. The point of judicial review is to
subject the judgment of government officials to critical scrutiny rather than blindly accept their authority. Critical inquiry into factual matters is not the imposition of the judge’s own judgment for that of the decisionmaker under
review.30 Instead, it is forcing government officials to explain and justify their policies. Few will quarrel with the principle that courts should not “second guess” the decisions of policy experts. But there is a difference between not
“second guessing” and failing to critically evaluate the factual and empirical evidence justifying the government programs. Nobody will contest the fact that security is a compelling interest. The key issue in the balancing is the
extent to which the security measure furthers the interest in security. As I have argued elsewhere, whenever courts defer to the government on the effectiveness of a government security measure, they are actually deferring to
the government on the ultimate question as to whether the measure passes constitutional muster.31 Deference by the courts or legislature is an abdication of their function. Our constitutional system of government was created
with three branches, a design structured to establish checks and balances against abuses of power. Institutional competence arguments are often made as if they are ineluctable truths about the nature of each governmental
branch. But the branches have all evolved considerably throughout history. To the extent a branch lacks resources to carry out its function, the answer should not be to diminish the power of that branch but to provide it with the
necessary tools so it can more effectively carry out its function. Far too often, unfortunately, discussions of institutional competence devolve into broad generalizations about each branch and unsubstantiated assertions about the
inherent superiority of certain branches for making particular determinations. It is true, as Posner and Vermeule observe, that historically courts have been deferential to the executive during emergencies.32 Proponents of security
measures often advance what I will refer to as the “pendulum theory”—that in times of crisis, the balance shifts more toward security and in times of peace, the balance shifts back toward liberty. For example, Chief Justice
Rehnquist argues that the “laws will thus not be silent in time of war, but they will speak with a somewhat different voice.”33 Judge Posner contends that the liberties curtailed during times of crisis are often restored during times
of peace.34 Deference is inevitable, and we should accept it without being overly concerned, for the pendulum will surely swing back. As I argue elsewhere, however, there have been many instances throughout US history of
needless curtailments of liberty in the name of security, such as the Palmer Raids, the Japanese Internment, and the McCarthy communist hearings.35 Too often, such curtailments did not stem from any real security need but
because of the “personal agendas and prejudices” of government officials.36 We should not simply accept these mistakes as inevitable; we should seek to prevent them from occurring. Hoping that the pendulum will swing back
offers little consolation to those whose liberties were infringed or chilled. The protection of liberty is most important in times of crisis, when it is under the greatest threat. During times of peace, when our judgment is not clouded
by fear, we are less likely to make unnecessary sacrifices of liberty. The threat to liberty is lower in peacetime, and the need to protect it is not as dire. The greatest need for safeguarding liberty is during times when we least want
to protect it. In order to balance security and liberty, we must assess the security interest. This involves evaluating two components—the gravity of the security threat and the effectiveness of the security measures to address it.
It is often merely assumed without question that the security threat from terrorism is one of the gravest dangers we
face in the modern world. But this assumption might be wrong. Assessing the risk of harm from terrorism is very difficult because terrorism is such an irregular occurrence and is
constantly evolving. If we examine the data from previous terrorist attacks, however, the threat of terrorism has been severely overstated. For example, many people
fear being killed in a terrorist attack, but based on statistics from terrorism in the United States, the risk of dying from terrorism is miniscule. According to political scientist John Mueller, [e]ven with the September 11 attacks included in the count . . . the number of Americans killed by international terrorism since the late 1960s (which is when the State Department began its accounting) is about the same as the number killed over the same period by lightning, or by accident-causing deer, or by severe allergic reactions to peanuts.37 Add up the eight deadliest terrorist attacks in US history, and they amount to fewer than four thousand fatalities.38 In contrast, flu and pneumonia deaths are estimated to be around sixty thousand per year.39 Another forty thousand die in auto accidents each year.40 Based on our experience with terrorism thus far, the risk of dying from terrorism is very low on the relative scale of fatal risks. Dramatic events and media attention can cloud a rational assessment of risk. The
year 2001 was not just notable for the September 11 attacks. It was also the summer of the shark bite, when extensive media coverage about shark bites led to the perception that such attacks were on the rise. But there were
fewer shark attacks in 2001 than in 2000 and fewer deaths as well, with only four in 2001 as compared to thirteen in 2000.41 And regardless of which year had more deaths, the number is so low that an attack is a freak occurrence.
It is certainly true that our past experience with terrorism might not be a good indicator of the future. More treacherous terrorism is possible, such as the use of nuclear or biological weapons. This complicates our ability to assess
the risk of harm from terrorism. Moreover, the intentional human conduct involved in terrorism creates a sense of outrage and fear that ordinary deaths do not engender. Alleviating fear must be taken into account, even if such
fear is irrationally high in relation to other riskier events such as dying in a car crash. But enlightened policy must not completely give in to the panic and irrational fear of the moment. It should certainly attempt to quell the fear,
but it must do so thoughtfully. Nevertheless,
most policymakers find it quite difficult to assess the threat of terrorism
modestly. In the face of widespread public panic, it is hard for government officials to make only moderate changes. Something dramatic must be done, or political heads will roll. Given the difficulty in assessing the
security threat in a more rational manner, it is imperative that the courts meaningfully analyze the effectiveness of security measures. Even if panic and fear might lead to the gravity of
the threat being overstated, we should at least ensure that the measures taken to promote security are sufficiently
effective to justify the cost. Unfortunately, as I will discuss in the next section, rarely do discussions about the sacrifice of civil liberties explain the corresponding security benefit, why such a benefit cannot be achieved in
other ways, and why such a security measure is the best and most rational one to take. Little scrutiny is given to security measures. They are often just accepted as a given,
no matter how ill-conceived or ineffective they might be. Some ineffective security measures are largely symbolic, such as the New York City subway search program. The searches
are unlikely to catch or deter terrorists because they involve only a miniscule fraction of the millions of daily passengers. Terrorists can just turn to other targets or simply attempt the bombing on another day or at another train
station where searches are not taking place. The vice of symbolic security programs is that they result in needless sacrifices of liberty and drain resources from other, more effective security measures. Nevertheless, these programs
have a virtue—they can ameliorate fear because they are highly visible. Ironically, the subway search program’s primary benefit was alleviating people’s fear (which was probably too high), albeit in a deceptive manner (as the
program did not add much in the way of security).
Data mining represents another kind of security measure, one that currently has little proven effectiveness and little symbolic
value. Data mining programs are often not visible enough to the public to quell much fear. Instead, their benefits come primarily from their actual effectiveness in reducing terrorist threats, which remains highly speculative. Thus
far,
data mining is not very accurate in the behavioral predictions it makes. For example, there are approximately 1.8 million airline passengers each day.42 A data mining program to
identify terrorists with a false positive rate of 1 percent (which would be exceedingly low for such a program) would flag eighteen thousand people as false positives. This is quite a large number of innocent people. Why is the
government so interested in data mining if it remains unclear whether it will ever be very accurate or workable? Part of the government’s interest in data mining stems from the aggressive marketing efforts of database companies.
After September 11, database companies met with government officials and made a persuasive pitch about the virtues of data mining.43 The technology sounds quite dazzling when presented by skillful marketers, and it can work
quite well in the commercial setting. The problem, however, is that just because data mining might be effective for businesses trying to predict customer behavior does not make it effective for the government trying to predict who will engage in terrorism. A high level of accuracy is not necessary when data mining is used by businesses to target marketing to consumers, because the cost of error to individuals is minimal. Amazon.com, for
example, engages in data mining to determine which books its customers are likely to find of interest by comparing bookbuying patterns among its customers. Although it is far from precise, it need not be because there are few
bad consequences if it makes a wrong book recommendation. Conversely, the consequences are vastly greater for government data mining. Ultimately, I do not believe that the case has been made that data mining is a wise expenditure of security resources. Those who advocate for security should be just as outraged as those on the liberty side of the debate. Although courts should not micromanage which security measures the government chooses, they should examine the effectiveness of any given security measure to weigh it against the liberty costs. Courts should not tell the executive branch to modify a security measure just because they are not convinced it is the best one, but they should tell the executive that a particular security measure is not effective enough to outweigh the liberty costs. The very point of protecting liberty is to demand that sacrifices to liberty are not in vain and that security interests, which compromise civil liberties, are sufficiently effective to warrant the cost.
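The base-rate arithmetic Solove invokes can be checked directly. A minimal sketch, using only the figures stated in the card (1.8 million daily airline passengers and a hypothetical 1 percent false-positive rate):

```python
# Sketch of the false-positive arithmetic in the Solove evidence.
# Both input figures are the card's own; the 1% rate is, per Solove,
# "exceedingly low" for a real predictive-screening program.
passengers_per_day = 1_800_000
false_positive_rate = 0.01

flagged_innocents = int(passengers_per_day * false_positive_rate)
print(flagged_innocents)  # 18000 -- the card's "eighteen thousand people"
```

Even at an implausibly accurate 1 percent error rate, the sheer volume of daily screening guarantees thousands of innocent people flagged every day, which is the structural point behind the liberty-cost argument.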
Second – Relative certainty. The disad may only cause violence; surveillance definitely does. Privacy is paramount for dignity and for protecting our unique individuality.
Schneier ‘6 Bruce Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School, a program
fellow at the New America Foundation's Open Technology Institute and the CTO of Resilient Systems. He is the author of
Beyond Fear: Thinking Sensibly About Security in an Uncertain World. Commentary, “The Eternal Value of Privacy”, WIRED, May
18, 2006, http://www.wired.com/news/columns/1,70886-0.html
The most common retort against privacy advocates -- by those in favor of ID checks, cameras, databases, data mining and
other wholesale surveillance measures -- is this line: "If you aren't doing anything wrong, what do you have to
hide?" Some clever answers: "If I'm not doing anything wrong, then you have no cause to watch me."
"Because the government gets to define what's wrong, and they keep changing the definition." "Because you might do something wrong
with my information." My problem with quips like these -- as right as they are -- is that they accept the premise
that privacy is about hiding a wrong. It's not. Privacy is an inherent human right, and a requirement for
maintaining
the human condition with
dignity and respect. Two proverbs say it best: Quis custodiet custodes ipsos? ("Who watches the
watchers?") and "Absolute power corrupts absolutely." Cardinal Richelieu understood the value of surveillance when he famously said, "If one would give me six
lines written by the hand of the most honest man, I would find something in them to have him hanged." Watch someone long enough, and you'll find something to
arrest -- or just blackmail -- with. Privacy
is important because without it, surveillance information will be abused: to
peep, to sell to marketers and to spy on political enemies -- whoever they happen to be at the time. Privacy protects us from abuses by
those in power, even if we're doing nothing wrong at the time of surveillance. We do nothing wrong when we make
love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing
in the privacy of the shower, and write letters to secret lovers and then burn them.
Privacy is a basic human need.
A future in which privacy
would face constant assault was so alien to the framers of the Constitution that it never occurred to them to call out privacy as an explicit right. Privacy was inherent
to the nobility of their being and their cause. Of course being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be
inconceivable among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It's intrinsic to the concept of liberty.
For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even
plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that -- either
now or in the uncertain future -- patterns we leave behind will be brought back to implicate us, by
whatever authority has now become focused upon our once-private and innocent acts. We lose our
individuality , because everything we do is observable and recordable. How many of us have paused during
conversation in the past four-and-a-half years, suddenly aware that we might be eavesdropped on? Probably it was a
phone conversation, although maybe it was an e-mail or instant-message exchange or a conversation in a public place. Maybe the
topic was terrorism, or politics, or Islam. We stop suddenly, momentarily afraid that our words might be
taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our
words are subtly altered. This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam
Hussein's Iraq. And it's our future as we allow an ever-intrusive eye into our personal, private lives. Too many wrongly characterize the
debate as "security versus privacy." The real choice is liberty versus control . Tyranny, whether it arises
under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny.
Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of
a police state. And that's why we should champion privacy even when we have nothing to hide.
Moral Imperative
Reject those privacy violations as an a priori imperative. Also proves that the disad’s
all hype.
Wyden ‘14
(et al; This amicus brief issued by three US Senators - Ron Wyden, Mark Udall and Martin Heinrich. Wyden and Udall sat on the
Senate Select Committee on Intelligence and had access to the meta-data program. “BRIEF FOR AMICI CURIAE SENATOR RON
WYDEN, SENATOR MARK UDALL, AND SENATOR MARTIN HEINRICH IN SUPPORT OF PLAINTIFF-APPELLANT, URGING REVERSAL
OF THE DISTRICT COURT” – Amicus Brief for Smith v. Obama – before the United States Ninth Circuit Court of Appeals - Appeal
from the United States District Court District of Idaho The Honorable B. Lynn Winmill, Chief District Judge, Presiding Case No.
2:13-cv-00257-BLW – Sept 9th, 2014 – This Amicus Brief was prepared by CHARLES S. SIMS from the law firm PROSKAUER ROSE
LLP. This pdf can be obtained at: https://www.eff.org/document/wyden-udall-heinrich-smith-amicus)
Respect for Americans’ privacy is not a matter of convenience, but a Constitutional imperative. Despite
years of receiving classified briefings and asking repeated questions of intelligence officials in both private
and public settings, amici have seen no evidence that bulk collection accomplishes anything that other less
intrusive surveillance authorities could not. Bulk collection is not only a significant threat to the
constitutional liberties of Americans, but a needless one.9
Mechanics
A2: Mosaic Bad DA
Sets a proper balance and doesn’t change existing Fourth Amendment doctrine.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Relying on the mosaic theory insights of Maynard and the Jones concurrences, this Note’s proposed
amendment to the Stored Communications Act would address the statute’s current failings in a
measured way. Though the proposal would require the government to obtain a warrant before requesting content information or
sweeping amounts of metadata, it would allow law enforcement officers to rely on court orders and subpoenas
for more targeted (and less invasive) disclosures. In so doing, it would clarify the government’s legal
obligations, making the Act easier for the government and magistrate judges to follow and simpler for
criminal litigants to understand. Moreover, adding a suppression remedy would allow courts to apply the
law to new circumstances, making the statute adaptable to new technologies for years to come. By
statutorily implementing these changes in the manner proposed, the Stored Communications Act
would once again protect citizens’ online privacy while allowing the government to conduct criminal
investigations with clear and reasonable boundaries. And contrary to critics’ concerns about the feasibility of
implementing the mosaic theory, all of these changes would be compatible with, and would require no fundamental reconfiguration of,
existing Fourth Amendment doctrine.
The plan just brings electronics in line with regular investigations, which doesn’t hurt
crime fighting.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
Calls for Congress to amend the SCA are not new.123 With the explosion of cloud computing and Web
2.0 services in the past few years however, it has become even more urgent and necessary for
Congress to bring the SCA into the twenty-first century. The aim of my proposals is not to provide
greater protection for stored electronic communications than in other areas, but to bring that
protection to a level approaching that of Fourth Amendment protections. Thus, the proposals recognize
the important interests of law enforcement in fighting crime. The proposals seek to bring privacy
protections in line with constitutional privacy doctrine, and would not affect legitimate emergency
disclosures or publicly disclosed information124 where the user retains no reasonable expectation of privacy. The ability
of the government to successfully investigate, prosecute, and obtain verdicts against criminals in non-Internet-related areas indicates that the Fourth Amendment’s restrictions on governmental action
have not severely handicapped law enforcement’s attempts to fight crime. Meanwhile, the privacy protections
have greatly benefitted citizens in terms of law enforcement accountability and checking governmental power.
A2: Math Disproves Mosaic
Holistic account of mosaic theory solves theoretical downfalls.
Gray and Keats Citron, 2013 (David, Associate Professor, University of Maryland Francis King Carey
School of Law, and Danielle, Lois K. Macht Research Professor of Law, University of Maryland Francis
King Carey School of Law, “A Shattered Looking Glass: The Pitfalls and Potential of the Mosaic Theory of
Fourth Amendment Privacy” North Carolina Journal of Law & Technology Lexis)
By far the most promising response to the argument that the sum of nothings cannot be something,
however, is to take seriously the metaphor of the mosaic. It may well be true that the "sum of an infinite
number of zero-value parts is also zero," n188 but mosaic advocates need not and do not make their
case based on addition. n189 Quite to the contrary, their key claim is that the "whole" of one's movements in public "reveals more - sometimes a great deal more - than does the sum of its parts." n190 The mosaic theory is, then, not an exercise in
arithmetic. Rather, it recognizes that, although a collection of dots is sometimes nothing more than a
collection of dots, some collections of dots, when assessed holistically, are A Sunday Afternoon on the Island of La
Grande Jatte. n191 So, too, are our lives. As Justice Sotomayor observed in Jones, a "precise, comprehensive, record of a
person's public movements ... reflects a wealth of detail about her familial, political, professional,
religious and sexual associations." n192 The tapestries of our lives are by definition an aggregation of
events and activities that, when assessed discretely, or even iteratively, may have little significance.
When assessed holistically, however, these events not only tell a detailed story of our activities and
associations, they may reveal who we are at a fundamental level and therefore expose opportunities
for manipulation and control. It may not take much. For example, according to one recent study, researchers were able to pierce
the veil of anonymity cast over a body of locational data [*416] and identify particular users by referencing as few as four "spatio-temporal
points." n193 The mosaic theory's core claim, then, is not that we have a reasonable expectation of privacy in flashing moments, or even in
meaningless arithmetic concatenations of those events. Rather, mosaic
theorists argue that we have a reasonable
expectation of privacy in the whole of our lives, and therefore have a Fourth Amendment right to be
free from constant, indiscriminate, and pervasive surveillance. n194 Building out from this core, Justice Sotomayor's
concurrence in Jones supports another important response to the arithmetic objection. Fourth Amendment privacy is not an
ethereal abstraction. To the contrary, as a constituent of rights bundled together in the first eight Amendments to the U.S.
Constitution, n195 the negative rights afforded by the Fourth Amendment n196 secure the space that is necessary to pursue the blessings of
fundamental liberty. As Justice Sotomayor points out, "Awareness
that the Government may be watching chills
associational and expressive freedoms." n197 Only by providing substantial privacy protections can we
truly be at liberty to explore and pursue the good life as we conceive it. Thus, Justice Sotomayor tells us, "GPS
monitoring - by making available at a relatively low cost such a substantial quantum of intimate information about any person whom the
Government, in its unfettered discretion, chooses to track - may alter the relationship between citizen and government in a way that is inimical
to democratic society." n198 [*417] Although
this holistic account of the mosaic theory may answer Judge
Sentelle's mathematical concerns, it appears to run full force into conceptual objections raised by Orin
Kerr that the mosaic engages a previously rejected diachronic account of the Fourth Amendment. n199
Here, however, mosaic advocates have a ready response: The objection misunderstands the thesis. Embracing a
mosaic approach to assessing Fourth Amendment privacy interests does not require taking an equally
holistic view of law enforcement conduct. That is, it may be true that officer conduct during the course of an investigation
does not constitute a "search" when assessed discretely, or even in the aggregate, but, nevertheless, may produce a mosaic of personal
information that is sufficiently expansive and detailed to implicate reasonable expectations of privacy. There is no doubt that this shift in focus
from the conduct of law enforcement to the fruits of their investigative efforts raises serious practical problems when weighing Fourth
Amendment interests. After all, officers naturally want to be able to make prospective assessments of whether the Fourth Amendment will
apply so they will know how to proceed. For now, however, it seems that a holistic framing of the mosaic theory can meet the major conceptual
objections, at least insofar as it is treated as a way to understand Fourth Amendment interests and harms. Whether and how the mosaic theory
can be converted into a useful set of practices and policies is a separate matter, which we address below. n200
A2: Doctrine Problems
Even if there are changes, the mosaic theory doesn’t hurt the public observation or
third party doctrines.
Gray and Keats Citron, 2013 (David, Associate Professor, University of Maryland Francis King Carey
School of Law, and Danielle, Lois K. Macht Research Professor of Law, University of Maryland Francis
King Carey School of Law, “A Shattered Looking Glass: The Pitfalls and Potential of the Mosaic Theory of
Fourth Amendment Privacy” North Carolina Journal of Law & Technology Lexis)
All of this suggests that recognizing the mosaic theory would require abandoning or significantly
modifying the public observation doctrine, n206 and perhaps the Katz reasonable expectation of privacy test as well. n207 This
is true even if the mosaic theory focuses on the enhanced privacy interests implicated by aggregations of data and information as a whole. First,
mosaics that trigger Fourth Amendment concerns can be aggregated in sundry ways, including by using multiple investigative techniques. n208
Without additional guidance, conducting traditional surveillance for a day, a week, or a month might reveal too much. Similarly, a targeted, but
short technologically-enhanced investigation might easily reveal enough to cross the threshold. Second, given the increasing ubiquity of what
Christopher Slobogin has called "panvasive surveillance," n209 defending a mosaic theory appears to require treating the Katz reasonable
expectation of privacy test as proscriptive rather than descriptive. Although attractive to many privacy advocates, that move would [*420]
dramatically change the Fourth Amendment landscape, potentially reopening questions once thought settled. n210 The
only way for
mosaic theorists to avoid falling off this doctrinal cliff is to come forward with a clear evaluative test
that law enforcement officers can deploy prospectively to reliably determine which investigative
techniques they can employ, and to what extent, before triggering Fourth Amendment requirements.
Thus, as we saw in the foregoing discussion of conceptual issues, n211 the focus quickly turns to the practicalities. There is simply no
doubt that adopting a mosaic theory of the Fourth Amendment will require modifying the public
observation doctrine. How much modification is required, and the type of adjustment needed, will be a function of the test advocates
adopt. n212 In contrast with the inevitable confrontation that mosaic theorists must have with the public observation doctrine, any conflict
with the third party doctrine is entirely avoidable. It is by [*421] now settled that the Fourth Amendment binds only state
actors. n213 Thus, there is no constitutional barrier to private parties' engaging in surveillance activities that would be subject to Fourth
Amendment regulations if conducted by government officials. n214 Justice Sotomayor's suggestion in Jones that the
Court might need
to fundamentally reconsider the third party doctrine if it chooses to embrace the mosaic theory n215 is
therefore not motivated by doctrinal necessity. Rather, it reflects practical concerns that the privacy
interests and harms identified by the mosaic theory will not be fully vindicated unless private actors
are also subject to constraint or government agents are limited in terms of what information they can
gather through third parties. This really involves two concerns. The first is that law enforcement officers will simply circumnavigate
the Fourth Amendment by subpoenaing from private parties information that the officers could not gather directly. The second is that
informational mosaics in the hands of private parties are no less invasive and objectionable for being in private rather than state hands. In
response to both concerns, promoters
of the mosaic theory can simply maintain that worries about the
absence of practical protections for informational mosaics in light of the third party doctrine are
constitutionally gratuitous. They are also not new. Similar arguments have been raised before the Court
when it has held the line on the third party doctrine. n216 In most of these [*422] cases, the political
branches have responded, imposing legal limitations on the gathering, preservation, and sharing of
information from banks, n217 telephone companies, n218 and e-mail providers. n219 The Court is free to
exercise the same restraint should it adopt the mosaic theory, and thereby avoid any entanglement
with the third party doctrine. Should it choose this more parsimonious path, it would go a long way toward silencing many mosaic
critics. n220
A2: Line Drawing Bad
Line drawing is inevitable – the 4th Amendment generally uses mushy concepts.
Gray and Keats Citron, 2013 (David, Associate Professor, University of Maryland Francis King Carey
School of Law, and Danielle, Lois K. Macht Research Professor of Law, University of Maryland Francis
King Carey School of Law, “A Shattered Looking Glass: The Pitfalls and Potential of the Mosaic Theory of
Fourth Amendment Privacy” North Carolina Journal of Law & Technology Lexis)
[*423] As courts put the mosaic theory into practice, the first line of challenges they will need to address
are line-drawing problems. How are officers and courts to determine whether a particular informational mosaic contains enough information to
implicate Fourth Amendment rights? Does the quality of information in the mosaic come into play, or is it merely the quantity? Does the method of acquisition
matter? How are police officers to know, prospectively, whether the Fourth Amendment applies, when, and what it demands? All of these are important questions
that ultimately feed back into the various conceptual and doctrinal issues already discussed. A
good place for mosaic advocates to start is
by pointing out that these sorts of line-drawing problems are not unique to the mosaic theory. Rather,
they are endemic to the Fourth Amendment itself. n221 The animating core of the Fourth Amendment is
reasonableness. n222 Reasonableness, in turn, requires a balancing of competing law enforcement and
privacy interests. n223 It is therefore no surprise that Fourth Amendment analysis is often more
nuanced than it is definitive, or that Fourth Amendment tests tend to describe spectrums rather than
bright lines. Take, for example, the Court's approach to probable cause, the threshold requirement that must be met before officers can engage in searches
for evidence. Writing for the Court in Illinois v. Gates, n224 then-Justice Rehnquist tells us that "probable cause is a ... practical, nontechnical" standard and is "a
fluid concept - turning on the assessment of probabilities in particular factual contexts - not readily, or even [*424] usefully, reduced to a neat set of legal rules."
n225 These
are mushy standards indeed, and no doubt produce a range of reasonable, but conflicting,
views among courts, n226 not to mention angst in the law enforcement community. n227 Despite these difficulties, the Court has yet to excuse officers
or courts from responsibility for "sloshing [their] way through the factbound morass of 'reasonableness.'" n228 It is hard to see how the line-drawing concerns raised by mosaic critics are any more worrisome than the line-drawing problems
that are inherent to the Fourth Amendment. n229 Although adopting the mosaic would likely lead to
some growing pains, n230 there is no reason to think that courts and law enforcement officers are
incapable of growth. At any rate, fear of adjustment is no reason to leave a constitutional right
unprotected, much less unrecognized. Of course, if assessing aggregations of information and investigative procedures under a mosaic theory
proves too difficult using the case-by-case, fact-centered approach favored by [*425] the Court in other Fourth Amendment circumstances, n231 then there is
always the option of drawing bright lines. It would not be the first time. For example, the Court has adopted a bright(ish) line forty-eight-hour rule when assessing
the reasonableness of municipal policies governing probable cause hearings after warrantless arrests. n232 It has also excused law enforcement officers from the
burden of showing independent probable cause, or any other additional justification, when conducting searches incident to arrest. n233 If it is necessary to do so in
order to vindicate Fourth Amendment rights, while avoiding thorny line-drawing problems, the Court could follow a similar course after adopting a mosaic theory.
Cloud Computing Advantage
Uniqueness
No Protection Now
Cloud services are not protected now
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
As with most technological advancements, the law is often slow to catch up. While users might have
an expectation that the files or applications they store on the cloud are private, the reality is that they
may not have as much privacy as they would like to believe.7 The architecture of the Internet and the
way cloud computing services operate means that courts are unlikely to apply Fourth Amendment
protections. The current federal statutory framework governing stored electronic communications,
the Stored Communications Act8 (SCA) remains frozen in a 1980s conception of electronic
communications. It is a confusing statute9 that the courts have interpreted in an inconsistent and unclear manner.10
Cloud computing will not be protected now
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
Furthermore, it is unlikely that cloud computing services will have the benefits of the additional
protections that come with being an electronic communications service. Cloud computing services do
not provide the ability to access the Internet, but they provide services that utilize the Internet. Courts
have found that in the case of the Internet, it is the ISP and the telecommunications companies that
fall into the category of electronic communications service providers, because they provide the ability to send or
receive electronic communications. 102 As cloud computing users cannot use cloud services without an existing Internet connection, this
suggests that at least SaaS and PaaS providers are unlikely to be classified as ECS providers. IaaS providers might be more arguably classified as
ECS providers, depending on their function. Cloud
computing promises to be the future of Internet technology,
with increasing numbers of people and businesses using cloud services to store and process data
every day. How much the law protects privacy is a key concern for most of these users. Yet, the limitations
of the SCA, a statute that was written nearly twenty-five years ago and is used to address technologies that were not even imagined at the
time, mean that as cloud computing services become more and more ubiquitous, peoples’ privacy expectations may not line up with the law’s
protections.
A2: Squo Solves Cloud
The Courts will not advance the issue.
Robinson, 2010 (William Jeremy, Georgetown Law, J.D., “Free at What Cost?: Cloud Computing
Privacy Under the Stored Communications Act” Georgetown Law Journal Vol. 98)
The trajectory of Fourth Amendment jurisprudence suggests that courts are unlikely to enhance
privacy protections for cloud computing users. Courts only rarely serve as the initial impetus for
expanding privacy protections; when they do, it is typically through cautious extension of the Fourth
Amendment,221 in response to societal or technological changes.222 Yet, since the 1960s, the Supreme Court has applied an even
narrower view of the Fourth Amendment’s protections and the applicability of its remedies.223 The Court has focused its recent Fourth
Amendment efforts on reexamining the costs and benefits of excluding evidence gathered in violation of the Fourth Amendment224 and
limiting the range of situations that invoke the Fourth Amendment’s protections. 225 Given the
current state of its Fourth
Amendment jurisprudence, it seems unlikely that the Court will drastically expand the scope of
privacy protections for Internet users—after failing to accept an Internet-related privacy case for more
than two decades.226 Not only would this significantly change the dimensions of the Fourth Amendment’s scope, it would likely require
reassessing core privacy principles, such as the third-party disclosure doctrine, that would have repercussions beyond the digital world.
Without meaningful guidance from the Supreme Court, lower courts are in disarray with respect to
privacy interests in computer networks. Federal courts cannot reach consensus on the privacy
protections applicable to e-mail; for instance, a circuit split currently exists over the categorization of e-mail within the Stored
Communications Act.227 Courts have also struggled to determine if the Fourth Amendment protects e-mail.
In Warshak v. United States, the Sixth Circuit held that the Fourth Amendment protected a person’s e-mail from government searches and
seizures, only to retract the opinion a short time later in an en banc decision that avoided the issue.228 In United States v. Maxwell, an earlier
case heard by the Court of Appeals for the Armed Forces, the Fourth Amendment was also held to apply to e-mail,229 but other courts have
discounted its precedential value.230 These
cases reflect the degree of uncertainty among lower courts about
how the SCA and Fourth Amendment apply to e-mail—making it unlikely that they can proceed
anytime soon to the thornier issues surrounding communications and stored data in cloud services.
Links/Internals
Privacy Concerns = Balkanization
Privacy concerns are causing balkanization of the internet, which kills US cloud computing
competitiveness.
Hickins 4/14/2014 (Michael, Senior Editor of The Wall Street Journal Digital and Director of Strategic
Communications at Oracle, “Post-Snowden ‘Balkanization’ of the Internet Should Worry CIOs” Wall
Street Journal http://blogs.wsj.com/cio/2014/04/14/post-snowden-balkanization-of-the-internet-should-worry-cios/)
CIOs concerned that the spillover from the Edward Snowden revelations could impact their plans have
highfalutin company: Victoria Espinel, a former White House aide, was in Europe last week hoping to head off
the development of protectionist-tinged “national clouds.” Such a development could undermine the
plans of corporate technology leaders hoping to standardize and automate a large number of business
processes so that they can focus on innovation. To a certain extent, reality isn’t relevant here. The idea that the
National Security Agency is spying on corporate secrets stored in servers run by U.S. cloud providers
could well be so much hogwash; U.S. cloud vendors may be no more vulnerable to or complicit with
the NSA than their overseas counterparts; other governments may be just as surreptitious as the U.S.
But the perception, whether founded in fact or not – is changing reality, thanks to ambitious
politicians and some non-U.S. based vendors eager for a competitive advantage. Ms. Espinel, now head of
global industry trade group BSA, conceded at a conference in Brussels, Belgium, that “recent disclosures about international surveillance
programs have raised important privacy and security questions.” But, she said, “national security concerns don’t have to undermine technology
innovation and economic growth — and we shouldn’t allow them to.” It remains to be seen whether her audience was receptive to that
message. The
timing of the scandal couldn’t be worse for CIOs, given that a growing number of chief
executives are, finally, beginning to recognize the potential of technology to generate innovation and
new growth opportunities. If, as many experts believe, important markets limit or block the use of
large U.S.-based cloud providers on privacy or security grounds, CIOs of multinational companies may
have to find multiple vendors to host applications or internal software development platforms in the
cloud – defeating one of the benefits of cloud-based services, namely simplification. Having to find a local cloud vendor
in, say, Turkey or China, “would be a big annoyance,” says Cary Sylvester, vice president of technology innovation and communication at Keller
Williams Realty Inc., a U.S.-based real-estate franchising company with 700-plus offices in the U.S. and Canada. Ms. Williams is using a number
of large cloud vendors to help the realtor expand into overseas markets over the next 12 to 24 months, beginning with Turkey. “I don’t want to
manage many more systems,” she told CIO Journal earlier this year, as saber rattling began in earnest. “There’s a better than 50% chance of
some trade barriers” being erected because of the NSA revelations, said J. Bradford Jensen, a professor at Georgetown University’s McDonough
School of Business. “This could be a real problem for any multinational’s operations.” Indeed, the timing of the NSA scandal couldn’t have been
worse. The development of the Internet happened to coincide with a period of stability and harmony among the world’s largest economic
powers, and allowed it to develop globally in an almost unfettered manner. The most developed nations had the most to gain in watching and
discovering the growth of the Internet, while the few tinpot dictatorships that would have liked to squelch it were either insignificant economic
players or lacked the technology to interfere in any case. It
hasn’t been until fairly recently that the true potential of
the Internet economy became clear, and the more cautious countries, who watched the U.S.
experiment on its own economy, failed to match the U.S.’s economic engine. Daniel Castro, an analyst with D.C.-based think tank Information Technology and Innovation Foundation, says if the 15 nations of the European Union “had just been able to
maintain through 2006 the productivity growth they enjoyed from 1980 to 1994 [prior to the advent of the Internet], their GDP would be over
1.1 trillion euros greater today.” Now,
there is less harmony between the powers – in part due to mistrust
sowed by the Snowden revelations of spying on German premier Angela Merkel, and in part because certain political figures are
seeking to make political hay from online protectionism. To paraphrase P.T. Barnum, no one ever lost political capital betting on the endurance
of anti-American sentiment. Even
relatively small multinationals like Keller Williams or New England Biolabs Inc., which has niche
customers around the world, could be affected by this Balkanization. Companies like Cisco Systems Inc. and
Amazon.com Inc. are beginning to take steps to address the issue by developing global partnerships
and building new data centers. But these investments are hardly trivial, even for such large vendors.
Meanwhile, markets like Germany, Russia, China, and Brazil – also not trivial by any means – could become more difficult for U.S. businesses to
penetrate, and to do so using standard, automated, repeatable processes. Ms. Espinel warned her audience of the
“real risk that
countries will adopt the wrong kinds of solutions… [and] “undue restrictions on the flow of
information across borders.” And she made clear that she and her membership see these responses as alarmingly protectionist.
Privacy concerns are causing the balkanization of the internet.
Century Weekly, 9/1/2013 (“Balkanization and the Internet's Cloudy Future”
http://english.caixin.com/2013-09-01/100576259.html)
With the popular use of cloud computing technology, many have fewer concerns over data loss
surrounding a personal computer or device. Remote servers operated by companies such as Google
and Amazon and advances in communication technologies have broken down the tyranny of distance.
However, in the aftermath of revelations made by whistleblower Edward Snowden, it appears a "balkanization" of the
Internet could be ahead. "Cloud" services provide flexibility for users around the world. Somewhere
in the state of California, billions of people have their personal contacts, private email correspondence
and business information all stored within the borders of the U.S. Yet the Internet was developed
largely on a philosophy of decentralization. With cloud technology, the Internet becomes a centralized
information market where there is only one center. The reality is that putting one's whole house of information, whether
personal or business-related, on a server in California, is the equivalent of relinquishing all privacy. The U.S.'s legal protections and
respect for privacy rights had created the impression that people could fully explore the potential of
the Internet untrammeled by security fears. The Snowden incident changed all of this. While cloud
storage companies Dropbox and Amazon were not on the list of companies cooperating with the NSA,
several major U.S. companies were cited as data providers to the PRISM surveillance program. But how
will countries adapt? Will France and Germany take a page from China's playbook and establish their own computing networks in Paris and
Berlin to protect domestic enterprises? Will new legislation be introduced in countries around the world in response to the U.S. surveillance
programs? Perhaps in the future, there will be clouds in Beijing, Paris, and Tokyo – and the Internet will become increasingly Balkanized as a
result.
Global Data Key
A globalized internet is the key internal link to big data and cloud computing success
Bauer et al 14 (Matthias Bauer is Senior Economist at ECIPE. His areas of research include international trade as well as European fiscal and capital
market policy. Matthias studied business administration at the University of Hull and economics at the Friedrich Schiller University Jena. He received his Ph.D.
degree from the University of Jena after joining the Bundesbank graduate programme on the “Foundations of Global Financial Markets and Financial Stability“.
Hosuk Lee-Makiyama is the director of European Centre for International Political Economy (ECIPE) and a leading author on trade diplomacy, EU-Far East relations
and the digital economy. He is regularly consulted by governments and international organisations on a range of issues, from trade negotiations to economic
reforms. He appears regularly in European, Chinese and US media, and is noted for his involvement in WTO and major free trade agreements. Erik van der Marel is a
Senior Economist at ECIPE. His areas of expertise are in services trade and political economy of services trade policy, Russia’s trading patterns, plus total factor
productivity (TFP) and regulation including trade policy in developing countries. Bert Verschelde has a MSc in European Political Economy and graduated with a
Master's in Comparative and International Politics, “THE COSTS OF DATA LOCALISATION: FRIENDLY FIRE ON ECONOMIC RECOVERY”
http://www.ecipe.org/app/uploads/2014/12/OCC32014__1.pdf, ekr)
Industry and internet advocates have warned against an Internet which is fragmented along national
borderlines. Some of them are going as far as calling balkanisation the greatest threat to the Internet today, even
greater than censorship.9 One comprehensive study by Chander and Lê (2014) from the California International Law Centre
established that data localisation “threatens the major new advances in information technology – not only
cloud computing, but also the promise of big data and the Internet of things”.10 It is not unlikely that future
trade agreements will include disciplines against data localisation requirements, as there are often less trade-restrictive measures available to
address privacy and security. However, the more immediate effect of data localisation measures – the impact on economic recovery and
growth – is even more dangerous. As this study has shown, this
impact is a direct consequence of the complex relations
between cross-border data flows, supply chain fragmentation and domestic prices. These are complexities
that are generally not understood by policymakers, who are often in the field of security and privacy law, rather than international trade. The
findings regarding the effects on GDP, investments and welfare from data localisation requirements and discriminatory privacy and security
laws are too considerable to be ignored in policy design. It is also reasonable to assume that SMEs and new firms are the first to be displaced
from the market, as they lack resources to adapt to the regulatory changes. In the current security policy context, many regulators and privacy
advocates stress the importance of discretion to tackle problems at a national level (e.g. NetMundial 2014 draft conclusions)11. The economic
evidence however proves that unilateral trade restrictions are counterproductive in the context of today’s interdependent globalized economy.
The self-incurred losses make data localisation a policy that unilaterally puts the country at a relative loss to others while the possibilities for
offsetting the negative impact through trade agreements or economic stimulus are relatively limited over the long term.
Privacy Key
Privacy is the key driver of balkanization
Kuner et al, 2015 (Christopher, Editor in Chief of this journal and Senior Privacy Counsel in the
Brussels office of Wilson Sonsini Goodrich & Rosati, Fred H. Cate Christopher Millard Dan Jerker B.
Svantesson, and Orla Lynsky, “Internet Balkanization gathers pace: is privacy the real driver?”
International Data Privacy Law, Vol. 5, No. 1 http://intlidpl.oxfordjournals.org/content/5/1/1.full.pdf+html)
Initiatives in some individual EU member states have gone even further, with Deutsche Telekom
announcing in October 2013 that it planned to build a German-only ‘Internetz’ to keep German
Internet traffic within Germany’s physical borders.4 Meanwhile, the war of words between the EU
and USA over systematic mass surveillance has provided fertile ground for states like Russia to argue
that they too can no longer trust their citizens’ data to invasive foreign regimes. In terms rather
reminiscent of the quote from Kerstin Amer that opened this Editorial, Russian MP Vadim Dengin is
reported to have justified to the State Duma the mandatory local storage obligations in Russia’s new
data protection law with the rousing words: ‘Most Russians don’t want their data to leave Russia for the
United States, where it can be hacked and given to criminals. Our entire lives are stored over there’.5
Would any of this really enhance privacy protection? There are serious reasons for doubting that any
of the current initiatives would actually do so. This is largely because strategies of this type that are
supposed to protect individual rights, in particular by constraining data location and transfers,
continue to be commingled with motivators based on vague concepts of national sovereignty and
economic advantage. As we have noted previously, the Snowden revelations of systematic
government surveillance have led to soul-searching about both the relevance of fundamental privacy
concepts and the effectiveness of existing frameworks for ensuring protection of personal data.6 As
with that surveillance debate, arguments about Internet regionalization and data localization are often
stymied by a fundamental lack of transparency. Indeed, by mandating storage of data within their own
national boundaries, governments may hope to gain increased access to such data. Mixed messages
and hypocrisy abound and privacy is increasingly invoked as a justification, or at least used as a
smokescreen, for policies that are incoherent and, in many cases, far from privacy-enhancing.
Impacts
Ext. Innovation Important
Independently innovation is key to solve multiple scenarios for extinction.
Brent Barker, electrical engineer, and manager of corporate communications for the Electric Power
Research Institute and former industrial economist and staff author at SRI International and as a
commercial research analyst at USX Corporation, 2000 (“Technology and the Quest for Sustainability,”
EPRI Journal, Summer 2000, Vol. 25, p. 8-17)
From a social standpoint, accelerating productivity is not an option but rather an imperative for the future. It is necessary in
order to provide the wealth for environmental sustainability, to support an aging population in the industrialized world,
and to provide an economic ladder for developing nations. The second area of opportunity for technology lies
in its potential to help stabilize global population at 10-12 billion sometime in the twenty-first century, possibly as early as
2075. The key is economics. Global communications, from television to movies to the Internet, have brought an
image of the comfortable life of the developed world into the homes of the poorest people, firing
their own aspirations for a better quality of life, either through economic development in their own country or
through emigration to other countries. If we in the developed world can make the basic tools of prosperity--infrastructure, health
care, education, and law--more accessible and affordable, recent history suggests that the cultural drivers for producing
large families will be tempered, relatively quickly and without coercion. But the task is enormous. The physical prerequisites for
prosperity in the global economy are electricity and communications. Today, there are more than 2 billion people living without electricity, or
commercial energy in any form, in the very countries where some 5 billion people will be added in the next 50 years. If for no other reason than
our enlightened self-interest, we should strive for universal access to electricity, communications, and educational opportunity. We have little
choice, because the fate of the developed world is inextricably bound up in the economic and demographic fate of the developing world. A
third, related opportunity
for technology is in decoupling population growth from land use and, more broadly,
decoupling economic growth from natural resource consumption through recycling, end-use efficiency,
and industrial ecology. Decoupling population from land use is well under way. According to Grubler, from 1700 to 1850 nearly 2
hectares of land (5 acres) were needed to support every child born in North America, while in the more crowded and cultivated regions of
Europe and Asia only 0.5 hectare (1.2 acres) and 0.2 hectare (0.5 acre) were needed, respectively. During the past century, the amount of land
needed per additional child has been dropping in all areas of the world, with Europe and North America experiencing the fastest
decreases. Both crossed the "zero threshold" in the past few decades, meaning that no additional land is needed to
support additional children and that land requirements will continue to decrease in the future. One can postulate
that the pattern of returning land to nature will continue to spread throughout the world, eventually
stemming and then reversing the current onslaught on the great rain forests. Time is critical if vast tracts
are to be saved from being laid bare, and success will largely depend on how rapidly economic opportunities
expand for those now trapped in subsistence and frontier farming. In concept, the potential for returning land to
nature is enormous. Futurist and scholar Jesse Ausubel of the Rockefeller University calculates that if farmers could lift average grain yields
around the world just to the level of today's average U.S. corn grower, one-half of current global cropland--an area the size of the Amazon
basin--could be spared. If agriculture is a leading indicator, then the continuous drive to produce more from less will prevail in other parts of
the economy. Certainly, with
shrinking agricultural land requirements, water distribution and use around the
world can be greatly altered, since nearly two-thirds of water now goes for irrigation. Overall, the technologies of the future will, in
the words of Ausubel, be "cleaner, leaner, lighter, and drier"--that is, more efficient and less wasteful of materials and water. They will be much
more tightly integrated through microprocessor-based control and will therefore use human and natural resources much more efficiently and
productively. Energy intensity, land intensity, and water intensity (and, to a lesser extent, materials intensity) for both manufacturing and
agriculture are already heading downward. Only in agriculture are they falling fast enough to offset the surge in population, but, optimistically,
advances in science and technology should accelerate the downward trends in other sectors, helping to decouple economic development from
environmental impact in the coming century. One positive sign is the fact that recycling rates in North America are now approaching 65% for
steel, lead, and copper and 30% for aluminum and paper. A second sign is that economic output is shifting away from resource-intensive
products toward knowledge-based, immaterial goods and services. As a result, although the U.S. gross domestic product (GDP) increased 200fold (in real dollars) in the twentieth century, the physical weight of our annual output remains the same as it was in 1900. If anything, this
trend will be accelerating. As Kevin Kelly, the editor of Wired magazine, noted, "The creations most in demand from the United States [as
exports] have lost 50% of their physical weight per dollar of value in only six years.... Within a generation, two at most, the number of people
working in honest-to-goodness manufacturing jobs will be no more than the number of farmers on the land--less than a few percent. Far more
than we realize, the network economy is pulling us all in." Even
pollution shows clear signs of being decoupled from
population and economic growth. Economist Paul Portney notes that, with the exception of greenhouse gases, "in the OECD
[Organization for Economic Cooperation and Development] countries, the favorable experience [with pollution control] has been a triumph of
technology. That is, the ratio of pollution per unit of GDP has fallen fast enough in the developed world to offset the increase in both GDP per
capita and the growing number of 'capitas' themselves." The fourth opportunity for science and technology stems from their enormous
potential to unlock resources not now available, to reduce human limitations, to create new options for policymakers and businesspeople alike,
and to give us new levels of insight into future challenges. Technically resources have little value if we cannot unlock them for practical use.
With technology, we are able to bring dormant resources to life. For example, it was only with the development of an electrolytic process late
in the nineteenth century that aluminum--the most abundant metal on earth--became commercially available and useful. Chemistry unlocked
hydrocarbons. And engineering allowed us to extract and put to diverse use untapped petroleum and gas fields. Over the course of history,
technology has made the inaccessible accessible, and resource depletion has been more of a catalyst for change than a longstanding problem.
Technology provides us with last-ditch methods (what economists would call substitutions) that allow us to circumvent or leapfrog over crises
of our own making. Agricultural technology solved the food crisis of the first half of the nineteenth century. The English "steam crisis" of the
1860s, triggered by the rapid rise of coal-burning steam engines and locomotives, was averted by mechanized mining and the discovery and use
of petroleum. The U.S. "timber crisis" that Teddy Roosevelt publicly worried about was circumvented by the use of chemicals that enabled a
billion or so railroad ties to last for decades instead of years. The great "manure crisis" of the same era was solved by the automobile, which in
a few decades replaced some 25 million horses and freed up 40 million hectares (100 million acres) of farmland, not to mention improving the
sanitation and smell of inner cities. Oil discoveries in Texas and then in the Middle East pushed the pending oil crisis of the 1920s into the
future. And the energy crisis of the 1970s stimulated the development of new sensing and drilling technology, sparked the advance of non-fossil fuel alternatives, and deepened the penetration of electricity with its fuel flexibility into the global economy. Thanks to underground
imaging technology, today's known gas resources are an order of magnitude greater than the resources known 20 years ago, and new reserves
continue to be discovered. Technology has also greatly extended human limits. It has given each of us a productive capability greater than that
of 150 workers in 1800, for example, and has conveniently put the power of hundreds of horses in our garages. In recent decades, it has
extended our voice and our reach, allowing us to easily send our words, ideas, images, and money around the world at the speed of light.
But global sustainability is not inevitable. In spite of the tremendous promise that technology holds for a sustainable future, there
is the potential for all of this to backfire before the job can be done. There are disturbing indications that people
sometimes turn in fear and anger on technologies, industries, and institutions that openly foster an ever-faster
pace of change. The current opposition to nuclear power, genetically altered food, the globalization of the economy, and the spread of
American culture should give us pause. Technology has always presented a two-edged sword, serving as both cause and effect, solving one
problem while creating another that was unintended and often unforeseen. We solved the manure crisis, but automotive smog, congestion,
and urban sprawl took its place. We cleaned and transformed the cities with all-electric buildings rising thousands of feet into the sky. But while
urban pollution was thereby dramatically reduced, a portion of the pollution was shifted to someone else's sky. Breaking limits "Limits to
growth" was a popular theme in the 1970s, and a best-selling book of that name predicted dire consequences for the human race by the end of
the century. In fact, we have done much better than those predictions, largely because of a factor the book missed--the potential of new
technology to break limits. Repeatedly, human societies have approached seemingly insurmountable barriers only to find the means and tools
to break through. This ability has now become a source of optimism, an article of faith, in many parts of the world. Today's perceived
limits, however, look and feel different. They are global in nature, multicultural, and larger in scale and complexity than
ever before. Nearly 2 billion people in the world are without adequate sanitation, and nearly as many
are without access to clean drinking water. AIDS is spreading rapidly in the regions of the world least able to fight it.
Atmospheric concentrations of greenhouse gases are more than 30% greater than preindustrial levels and are climbing
steadily. Petroleum reserves, expected to be tapped by over a billion automobiles worldwide by 2015, may last only another
50-100 years. And without careful preservation efforts, the biodiversity of the planet could become as threatened in
this coming century as it was at the end of the last ice age, when more than 70% of the species of large
mammals and other vertebrates in North America disappeared (along with 29% in Europe and 86% in Australia). All these perceived
limits require innovation of a scope and intensity surpassing humankind's current commitment. The
list of real-world problems that could thwart global sustainability is long and sobering. It includes war, disease,
famine, political and religious turmoil, despotism, entrenched poverty, illiteracy, resource depletion,
and environmental degradation. Technology can help resolve some of these issues--poverty and disease, resource depletion, and
environmental impact, for example--but it offers little recourse for the passions and politics that divide the world. The likelihood is that we will
not catch up and overtake the moving target of global sustainability in the coming century, but given the prospects for technology, which have
never been brighter, we may come surprisingly close. We
should put our technology to work, striving to lift more than
5 billion people out of poverty while preventing irreversible damage to the biosphere and irreversible
loss of the earth's natural resources.
Ext. Zoonotic Diseases Bad
New zoonotic diseases cause extinction – different from past diseases
Quammen, award-winning science writer, long-time columnist for Outside magazine, writer for
National Geographic, Harper's, Rolling Stone, the New York Times Book Review and others, 9/29/2012
(David, “Could the next big animal-to-human disease wipe us out?,” The Guardian, pg. 29, Lexis)
Infectious disease is all around us. It's one of the basic processes that ecologists study, along with predation and competition. Predators are big
beasts that eat their prey from outside. Pathogens (disease-causing agents, such as viruses) are small beasts that eat their prey from within. Although infectious
disease can seem grisly and dreadful, under
ordinary conditions, it's every bit as natural as what lions do to wildebeests and zebras. But
conditions aren't always ordinary. Just as predators have their accustomed prey, so do pathogens. And just as a lion might occasionally depart
from its normal behaviour - to kill a cow instead of a wildebeest, or a human instead of a zebra - so a pathogen can shift to a new target. Aberrations
occur. When a pathogen leaps from an animal into a person, and succeeds in establishing itself as an infectious presence, sometimes causing illness or death, the
result is a zoonosis. It's a mildly technical term, zoonosis, unfamiliar to most people, but it helps clarify the biological complexities behind the ominous
headlines about swine flu, bird flu, Sars, emerging diseases in general, and the threat of a global pandemic. It's a word of the future, destined for
heavy use in the 21st century. Ebola and Marburg are zoonoses. So is bubonic plague. So was the so-called Spanish influenza of 1918-1919, which
had its source in a wild aquatic bird and emerged to kill as many as 50 million people. All of the human influenzas are zoonoses. As are monkeypox, bovine
tuberculosis, Lyme disease, West Nile fever, rabies and a strange new affliction called Nipah encephalitis, which has killed pigs and pig farmers in Malaysia. Each of
these zoonoses reflects the action of a
pathogen that can "spillover", crossing into people from other animals. Aids is a disease
of zoonotic origin caused by a virus that, having reached humans through a few accidental events in western and central Africa, now passes human-to-human. This
form of interspecies leap is not rare; about 60% of all human infectious diseases currently known either cross routinely or have recently crossed between other
animals and us. Some of those - notably rabies - are familiar, widespread and still horrendously lethal, killing humans by the thousands despite centuries of efforts
at coping with their effects. Others are new and inexplicably sporadic, claiming a few victims or a few hundred, and then disappearing for years. Zoonotic
pathogens can hide. The least conspicuous strategy is to lurk within what's called a reservoir host: a living organism that carries the
pathogen while suffering little or no illness. When a disease seems to disappear between outbreaks, it's often still lingering nearby, within some reservoir host. A
rodent? A bird? A butterfly? A bat? To reside undetected is probably easiest wherever biological diversity is high and the ecosystem is relatively undisturbed. The
converse is also true: ecological disturbance causes diseases to emerge. Shake a tree and things fall out. Michelle Barnes is an energetic, late 40s-ish woman, an avid
rock climber and cyclist. Her auburn hair, she told me cheerily, came from a bottle. It approximates the original colour, but the original is gone. In 2008, her hair
started falling out; the rest went grey "pretty much overnight". This was among the lesser effects of a mystery illness that had nearly killed her during January that
year, just after she'd returned from Uganda. Her story paralleled the one Jaap Taal had told me about Astrid, with several key differences - the main one being that
Michelle Barnes was still alive. Michelle and her husband, Rick Taylor, had wanted to see mountain gorillas, too. Their guide had taken them through Maramagambo
Forest and into Python Cave. They, too, had to clamber across those slippery boulders. As a rock climber, Barnes said, she tends to be very conscious of where she
places her hands. No, she didn't touch any guano. No, she was not bumped by a bat. By late afternoon they were back, watching the sunset. It was Christmas
evening 2007. They arrived home on New Year's Day. On 4 January, Barnes woke up feeling as if someone had driven a needle into her skull. She was achy all over,
feverish. "And then, as the day went on, I started developing a rash across my stomach." The rash spread. "Over the next 48 hours, I just went down really fast." By
the time Barnes turned up at a hospital in suburban Denver, she was dehydrated; her white blood count was imperceptible; her kidneys and liver had begun
shutting down. An infectious disease specialist, Dr Norman K Fujita, arranged for her to be tested for a range of infections that might be contracted in Africa. All
came back negative, including the test for Marburg. Gradually her body regained strength and her organs began to recover. After 12 days, she left hospital, still
weak and anaemic, still undiagnosed. In March she saw Fujita on a follow-up visit and he had her serum tested again for Marburg. Again, negative. Three more
months passed, and Barnes, now grey-haired, lacking her old energy, suffering abdominal pain, unable to focus, got an email from a journalist she and Taylor had
met on the Uganda trip, who had just seen a news article. In the Netherlands, a woman had died of Marburg after a Ugandan holiday during which she had visited a
cave full of bats. Barnes spent the next 24 hours Googling every article on the case she could find. Early the following Monday morning, she was back at Dr Fujita's
door. He agreed to test her a third time for Marburg. This time a lab technician crosschecked the third sample, and then the first sample. The new results went to
Fujita, who called Barnes: "You're now an honorary infectious disease doctor. You've self-diagnosed, and the Marburg test came back positive." The Marburg virus
had reappeared in Uganda in 2007. It was a small outbreak, affecting four miners, one of whom died, working at a site called Kitaka Cave. But Joosten's death, and
Barnes's diagnosis, implied a change in the potential scope of the situation. That local Ugandans were dying of Marburg was a severe concern - sufficient to bring a
response team of scientists in haste. But if tourists, too, were involved, tripping in and out of some python-infested Marburg repository, unprotected, and then
boarding their return flights to other continents, the place was not just a peril for Ugandan miners and their families. It was also an international threat. The first
team of scientists had collected about 800 bats from Kitaka Cave for dissecting and sampling, and marked and released more than 1,000, using beaded collars coded
with a number. That team, including scientist Brian Amman, had found live Marburg virus in five bats. Entering Python Cave after Joosten's death, another team of
scientists, again including Amman, came across one of the beaded collars they had placed on captured bats three months earlier and 30 miles away. "It confirmed
my suspicions that these bats are moving," Amman said - and moving not only through the forest but from one roosting site to another. Travel of individual bats
between far-flung roosts implied circumstances whereby Marburg virus might ultimately be transmitted all across Africa, from one bat encampment to another. It
voided the comforting assumption that this virus is strictly localised. And it highlighted the complementary question: why don't outbreaks of Marburg virus disease
happen more often? Marburg is only one instance to which that question applies. Why not more Ebola? Why not more Sars? In the case of Sars, the scenario
could have been very much worse. Apart from the 2003 outbreak and the aftershock cases in early 2004, it hasn't recurred. . . so far. Eight
thousand cases are relatively few for such an explosive infection; 774 people died, not 7 million. Several factors contributed to limiting the scope and impact of the
outbreak, of which humanity's good luck was only one. Another was the speed and excellence of the laboratory diagnostics - finding the virus and identifying it. Still
another was the brisk efficiency with which cases were isolated, contacts were traced and quarantine measures were instituted, first in southern China, then in
Hong Kong, Singapore, Hanoi and Toronto. If
the virus had arrived in a different sort of big city - more loosely governed, full of poor people,
lacking first-rate medical institutions - it might have burned through a much larger segment of humanity. One further factor,
possibly the most crucial, was inherent in the way Sars affects the human body: symptoms tend to appear in a person before, rather than after, that person
becomes highly infectious. That allowed many Sars cases to be recognised, hospitalised and placed in isolation before they hit their peak of infectivity. With
influenza and many other diseases, the order is reversed. That probably helped account for the scale of worldwide misery and death during the 1918-1919
influenza. And that infamous global pandemic occurred in the era before globalisation. Everything nowadays moves around the planet faster,
including viruses. When the Next Big One comes, it will likely conform to the same perverse pattern as the 1918 influenza:
high infectivity preceding notable symptoms. That will help it move through cities and airports like an angel of
death. The Next Big One is a subject that disease scientists around the world often address. The most recent big one is Aids, of which the eventual total bigness
cannot even be predicted - about 30 million deaths, 34 million living people infected, and with no end in sight. Fortunately, not every virus goes
airborne from one host to another. If HIV-1 could, you and I might already be dead. If the rabies virus could, it would be
the most horrific pathogen on the planet. The influenzas are well adapted for airborne transmission, which
is why a new strain can circle the world within days. The Sars virus travels this route, too, or anyway by the respiratory droplets of sneezes and coughs - hanging in
the air of a hotel corridor, moving through the cabin of an aeroplane - and that capacity, combined with its case fatality rate of almost 10%, is what made it so scary
in 2003 to the people who understood it best. Human-to-human
transmission is the crux. That capacity is what separates a
bizarre, awful, localised, intermittent and mysterious disease (such as Ebola) from a global pandemic. Have you noticed the persistent, low-
level buzz about avian influenza, the strain known as H5N1, among disease experts over the past 15 years? That's because avian flu worries them deeply, though it
hasn't caused many human fatalities. Swine flu comes and goes periodically in the human population (as it came and went during 2009), sometimes causing a bad
pandemic and sometimes (as in 2009) not so bad as expected; but avian flu resides in a different category of menacing possibility. It worries the flu scientists
because they know that H5N1 influenza is extremely virulent in people, with a high lethality. As yet, there have been a relatively low number of cases, and it is
poorly transmissible, so far, from human to human. It'll kill you if you catch it, very likely, but you're unlikely to catch it except by butchering an infected chicken. But
if H5N1 mutates or reassembles itself in just the right way, if it adapts for human-to-human transmission, it could become the biggest and fastest killer disease since
1918. It got to Egypt in 2006 and has been especially problematic for that country. As of August 2011, there were 151 confirmed cases, of which 52 were fatal. That
represents more than a quarter of all the world's known human cases of bird flu since H5N1 emerged in 1997. But here's a critical fact: those unfortunate Egyptian
patients all seem to have acquired the virus directly from birds. This indicates that the virus hasn't yet found an efficient way to pass from one person to another.
Two aspects of the situation are dangerous, according to biologist Robert Webster. The first is that Egypt, given its recent political upheavals, may be unable to
staunch an outbreak of transmissible avian flu, if one occurs. His second concern is shared by influenza researchers and public health officials around the globe: with
all that mutating, with all that contact between people and their infected birds, the virus could hit upon a genetic configuration making it highly transmissible
among people. "As
long as H5N1 is out there in the world," Webster told me, "there is the possibility of disaster. . . There
is the theoretical possibility that it can acquire the ability to transmit human-to-human." He paused. "And then God help us." We're unique in the history of
mammals. No
other primate has ever weighed upon the planet to anything like the degree we do. In ecological
terms, we are almost paradoxical: large-bodied and long-lived but grotesquely abundant. We are an outbreak. And here's the thing
about outbreaks: they end. In some cases they end after many years, in others they end rather soon. In some cases they end gradually, in others
they end with a crash. In certain cases, they end and recur and end again. Populations of tent caterpillars, for example, seem to rise steeply and fall sharply on a
cycle of anywhere from five to 11 years. The crash endings are dramatic, and for a long while they seemed mysterious. What could account for such sudden and
recurrent collapses? One possible factor is infectious disease, and viruses in particular.
Ebola Addon
Breakdown of Internet in developing countries means we can’t solve Ebola--telemedicine key
Hulsroj, 14 (8/31, Director, European Space Policy Institute, We have tools to treat Ebola from afar,
http://www.ft.com/intl/cms/s/0/82cabc14-2ed7-11e4-afe4-00144feabdc0.html)
Sir, When disaster strikes, the time-honoured way for people of goodwill is to spend money with humanitarian aid organisations. With Ebola
there is certainly ample room for this. It is a disgrace that medical staff putting their lives on the line in order to help have to make do with
inadequate protective gear and that the whole anti-Ebola effort is undersupplied. There
are, of course, those who take their
humanitarian commitment beyond the spending – doctors and nurses who go to the region with
organisations such as Médecins Sans Frontières, and those within the medical institutions of the stricken countries
who do not blink in the face of the utmost danger. Those directly involved with the diagnosis and
treatment of the illness have taken a heavy hit, with more than 120 medical staff dead and double that
infected. We owe those men and women of courage our utmost support. Because the fact is that, to a large extent, there is no substitute for
their physical presence to diagnose, to treat, to clean and to bury the dead. Still, there is a question about whether an element of
support in terms of diagnosis, supervision and treatment education could be done from abroad in a
situation where doctors are in such desperately short supply on the ground. Telemedicine, via satellite or the
internet, allows diagnosis and medical advice to be given from afar, as long as proper testing and
administration of medication can be done in situ. Medical advice via a telemedicine link is not a deus ex
machina, but can to some small extent stock up available medical expertise, particularly where the evaluation of
test results is difficult or where the alternative is no diagnosis at all. Humanitarian aid organisations know how to do
this, and surely there would be a great readiness by medical professionals to volunteer their time and expertise if they would be given a way to
do so from afar, without having to completely abandon their regular professional life. Telemedicine could
put tools in their
hands to do so by, for instance, creating Ebola telemedicine hubs in metropolitan cities in the west, and
by creating internet networks of medical professionals who can evaluate test results and supervise
treatment regimes. This is a time for the global community to come together and assist in the best possible way the stricken and those
who help the stricken. We must mobilise all possible resources, financial and medical, to fight this current day plague. And no, we should not
invest in more costly systems, such as telemedicine, before the basic needs of medical staff on the ground are satisfied, such as proper
protective gear. But as
we sharpen our focus on what can and should be done, the possibility to help from
afar by the use of telemedicine tools should not be forgotten.
Fragmentation risks global pandemic spread
Mckenna, 13 (Columnist-Wired, 8/21, Censorship Doesn’t Just Stifle Speech — It Can Spread Disease,
http://www.wired.com/2013/08/ap_mers/all/1)
In October, Saudi Arabia will host millions of travelers on the hajj, the annual pilgrimage to Islam’s holy sites. The hajj carries deep meaning for
those observant Muslims who undertake it, but it also carries risks that make epidemiologists blanch. Pilgrims sleep in shared tents and
approach the crowded sites on foot, in debilitating heat. They come from all over the world, and whatever pathogens they encounter on the
hajj will travel back with them to their home countries. In past seasons, the hajj has been shown to foster disease, from stomach flus to
tuberculosis or meningitis. The Saudi Arabian government has traditionally taken this threat quite seriously. Each year it builds a vast network
of field hospitals to give aid to pilgrims. It refuses visas to travelers who have not had required vaccinations and makes public the outbreaks it
learns about. This year, though, the Saudis have been strangely opaque about one particular risk—and it’s a risk that has disease experts and
public-health agencies looking to October with a great deal of concern. They wonder if this year’s hajj might actually breed the next pandemic.
The reason is MERS: Middle East respiratory syndrome, a disease that has
been simmering in the region for months. The
virus is new, recorded in humans for the first time in mid-2012. It is dire, having killed more than half of those who
contracted it. And it is mysterious, far more so than it should be—because Saudi Arabia, where the
majority of cases have clustered, has been tight-lipped about the disease’s spread, responding slowly to
requests for information and preventing outside researchers from publishing their findings about the syndrome. Even in the
Internet age, when data sources like Twitter posts and Google search queries are supposed to tip us off
to outbreaks as they happen, one restrictive government can still put the whole world in danger by
clamming up. That’s because the most important factor in controlling epidemics isn’t the quality of our
medicine. It’s the quality of our information. The Wall of Silence To understand why MERS is so troubling, look back to the
beginning of 2003. For several months, public-health observers heard rumors of a serious respiratory illness in southern China. But when
officials from the World Health Organization asked the Chinese government about it, they were told that the countryside was simply
experiencing an outbreak of pneumonia. The wall of silence around what came to be known as SARS (severe acute respiratory syndrome) cracked only by chance. An anonymous man in a chat room, describing himself as a
teacher in Guangdong Province, made the acquaintance of a teacher in California. On February 9, 2003, he asked her if she had heard of the
illness ravaging his city. She forwarded his message to an epidemiologist she knew, and on February 10 he posted it to ProMED, a listserv that
disease experts use as an informal surveillance system. That email was the world’s only warning for what was to come. By mid-March there
were already 150 cases of the new disease in seven countries. SARS wound up sickening more than 8,000 people and killing almost 800 in just
nine months. Luckily, the disease was quelled in China and Canada (where travelers from Hong Kong touched off an outbreak in Toronto)
before it had a chance to evolve into a more efficiently spreading strain. Many
experts believe that given time to mutate in
humans, SARS might have become a deadly pandemic. With more warning, SARS might not even have gained a
foothold outside of China. In Canada the virus quickly infected 251 people, killing 43. By contrast, the US had time to write
new quarantine regulations, which made a difference: America had just 27 SARS cases, with no deaths
and no hospital spread. To health authorities who lived through SARS, MERS feels unnervingly familiar. The two organisms are cousins:
Both are coronaviruses, named for their crown-shaped profile visible with an electron microscope. For this disease too, the first notice was a
posting to ProMED—this time by a doctor working in Jeddah, Saudi Arabia, describing a patient who had died several months before. That
September 2012 communiqué, which cost the doctor his job, helped physicians in London realize that a Qatari man they were treating was part
of the same outbreak. From there, MERS unspooled. People also fell ill in the United Arab Emirates, France, Germany, Italy, and Tunisia. But
Saudi Arabia, home to the vast majority of confirmed cases, remained far from forthcoming about what it knew. Announcements from the
Ministry of Health supplied little useful detail and discussed illnesses and deaths that happened some indeterminate time in the past—possibly
days, possibly even weeks. So far the number of MERS cases is just a fraction of the toll from SARS, but health officials fear that the real count
could be higher. Especially worrisome is the death rate among the afflicted: While SARS has been estimated to kill roughly 10 percent of its
victims, MERS so far has killed 56 percent. No One Thought It Would Happen Again Certainly censorship about the spread of disease is nothing
new. The largest well-documented pandemic, the great flu of 1918, is called the Spanish Influenza in old accounts not because it started in
Spain (it may have begun in Kansas) but because Spain, as a neutral nation during World War I, had no wartime curbs on news reports of
deaths. To this day, no one is sure how many people died in the 1918 flu; the best guess hovers around 50 million worldwide. Regardless, since
the virus took 11 months to circle the planet, some of those millions might have lived had the later-infected countries been warned to prepare.
After SARS, no one thought that it would happen again. In 2005 the 194 nations that vote in WHO‘s governing body promised not to conceal
outbreaks. And beyond that promise, public-health researchers have believed that Internet chatter—patterns of online discussion about
disease—would undercut any attempts at secrecy. But they’ve been disappointed to see that their web-scraping tools have picked up
remarkably little from the Middle East: While Saudi residents certainly use the Internet, what they can access is stifled, and what they are
willing to say appears muted. Nearly 100 years after the great flu, it turns out that old-fashioned censorship
can still stymie the
world in its ability to prepare for a pandemic. So what now? The behind-door seething may be having an effect. A WHO team
was finally allowed into Saudi Arabia in June, and the Saudi government has announced limits on the number of visas it will issue for this year’s
hajj. Meanwhile, governments and transnational health agencies have already taken the steps that they can, warning hospitals and readying
labs. With luck, the disease will stay contained: In July, WHO declined to elevate MERS to a “public health emergency of international concern.”
But the organization warned it might change its mind later—and if it does, we should fear the worst, because our medical resources are few. At
present there is no rapid-detection method, no vaccine, and no cure. While we wait to see the full extent of MERS, the one thing
the
world can do is to relearn the lesson of SARS: Just as diseases will always cross borders, governments
will always try to evade blame. That problem can’t be solved with better devices or through a more
sophisticated public-health dragnet. The solution lies in something public health has failed to accomplish
despite centuries of trying: persuading governments that transparency needs to trump concerns about
their own reputations. Information can outrun our deadly new diseases, but only if it’s allowed to
spread.
Extinction
Jordan Pearson, motherboard writer, citing a WHO study, ’14 (“This Mathematical Model from
2006 Shows How Ebola Could Wipe Us Out,” 9/4, http://motherboard.vice.com/read/a-2006-mathematical-model-shows-how-ebola-could-wipe-us-out)
The current Ebola outbreak in West Africa is the worst in history, and the death toll just surpassed 1,900.
Previous WHO estimates indicated that the outbreak would end mid-fall, but the situation is quickly
spiraling out of control and into a sea of unknowns. The “Ebola epidemic is the largest, and most severe,
and most complex we have ever seen in the nearly 40-year history of this disease,” World Health
Organization director general Margaret Chan said in a special briefing yesterday. “No one, even
outbreak responders, [has] ever seen anything like it.” Yaneer Bar-Yam, the complex systems analyst
whose model accurately predicted the global unrest that led to the Arab Spring, is also worried about
the patterns he sees in the disease's advance. Models he designed for the New England Complex Systems Institute
back in 2006 show that Ebola could rapidly spread, and, in a worst-case scenario, even cause an
extinction event, if enough infected people make it through an international airport. “What happened was that
we were modelling the dynamics of the evolution of diseases—of pathogens—and we showed that if you just add a very small amount of long-range transportation, the diseases escape their local context and eventually drive everything to extinction,” Bar-Yam told Motherboard. “They
drive their hosts to extinction.” Bar-Yam says he has informed the WHO and the CDC of his findings, but they haven’t listened, he
said. “I just gave a lecture to the World Health Organization in January and I told them. I said, there’s this transition to extinction
and we don’t know when it’s going to happen,” Bar-Yam explained. “But I don’t think that there has been a sufficient
response.” Normally, the spread of a predator—and this is as true for Ebola as it is for invasive animal species—is stymied when it overexploits
its prey, effectively drying up its own food source. In rural areas like those where the current Ebola outbreak is centered, diseases tend to
contain themselves by wiping out all available hosts in a concentrated area. If
a particularly aggressive predator happens to
make it out of its local context, say, on an international flight, Bar-Yam’s models show that it can avoid
local extinction through long-range dispersal. At this point, the linear model of the disease's outbreak
makes a statistical transition into an entirely different dynamic; extinction for all of its hosts across
vast geographic distances, and only afterwards for the disease. The argument has been made that an Ebola outbreak
would not be as severe in the West as it is in Africa, because the poor healthcare infrastructure where the disease has struck is the chief vector
of its spread. Bar-Yam sees this assumption as a vast overestimation of our handle on the dynamics of disease containment. “The
behavior of an individual in a major metropolitan area in terms of engaging with the health care system depends on a lot of different factors,”
Bar-Yam explained. “A reasonable person might behave in one way, but another person will behave in another. We don’t know what happens
if someone with Ebola throws up in a subway before that gets cleaned up and people understand that happened because of Ebola.” Panic is
never a wise thing to incite, because it can result in exactly the kinds of unpredictable behavior that Bar-Yam is warning us about. However, a
healthy amount of fear is a different matter.
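Bar-Yam's actual NECSI work is a detailed agent-based host-pathogen model; the dynamic the card describes — an aggressive pathogen burns out locally by exhausting its hosts, but a small amount of long-range transport lets it escape that burnout and spread globally — can be illustrated with a toy simulation. All parameters below (patch counts, contact rates, jump probability) are illustrative assumptions, not values from the card or from Bar-Yam's model.

```python
import random

def simulate(n_patches=50, hosts_per_patch=100, p_jump=0.0,
             steps=200, seed=1):
    """Toy patch model: each case infects up to two new hosts per step,
    normally in its own patch, but with probability p_jump per contact
    the infection jumps to a random patch (long-range transport).
    Returns how many patches the pathogen ever reached."""
    rng = random.Random(seed)
    susceptible = [hosts_per_patch] * n_patches
    infected = [0] * n_patches
    infected[0] = 1              # seed a single case in patch 0
    susceptible[0] -= 1
    ever_hit = {0}
    for _ in range(steps):
        new_inf = [0] * n_patches
        for i in range(n_patches):
            for _case in range(infected[i]):
                for _contact in range(2):
                    target = i
                    if rng.random() < p_jump:   # rare long-range jump
                        target = rng.randrange(n_patches)
                    if susceptible[target] > 0:
                        susceptible[target] -= 1
                        new_inf[target] += 1
                        ever_hit.add(target)
        infected = new_inf       # cases recover or die after one step
    return len(ever_hit)

local_only = simulate(p_jump=0.0)    # confined to the seed patch
with_jumps = simulate(p_jump=0.02)   # a 2% jump chance spreads it widely
```

With no long-range transport the outbreak never leaves its seed patch; even a 2% per-contact jump probability carries it to most patches before burnout — a rough analogue of the "one infected passenger through an international airport" scenario.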
Small Businesses Addon
Cloud computing key to adapt small businesses to server needs.
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
In the corporate environment, cloud computing also provides many benefits. Cloud computing
provides scalability, which can be especially beneficial to emerging companies. Rather than being forced to invest in equipment,90
software, and personnel to maintain the systems, companies can purchase computing power and storage space from a cloud provider.91
Cloud computing provides for flexible “usage-based pricing,”92 because the “hours” purchased
through cloud computing “can be distributed non-uniformly in time (e.g., use 100 server-hours today
and no server-hours tomorrow”93 and the company will only have to pay for the hours it uses.94 In the event of a
business slowdown, where a company needs to scale down its resource usage, cloud computing might
reduce or even eliminate the financial loss of having under-utilized equipment.95 If a business needs to
scale up its resource usage, cloud computing allows it to add resources quickly, with very short lead time of minutes or hours (instead of days or weeks to procure the physical equipment), which allows the matching of
resources to workload much more closely.96 For example, an Internet retailer might be extremely busy during the holidays,
but far less busy during the rest of the year. Cloud computing allows for the retailer to purchase additional resources during the holiday
season to accommodate the rush of traffic, without having to purchase and maintain underutilized systems during the rest of the year.97 This
prevents wasted resources during the rest of the year, and reduces the risk of accidentally turning away customers during a spike in
sales.98 Finally, businesses might save because the cloud provider can pass on some of the savings they
get from their economy-of-scale buying power for computing hardware and software.99 Yet, there are
many privacy implications that come along with the vast benefits of cloud computing. Having data on
the servers of a cloud service provider instead of your own means that if the provider’s servers are
compromised, then your data could potentially also be compromised.100 A cloud service provider
might retain the right to disclose information to another party.101 The “terms of service” of cloud providers might
also vary from provider to provider, leading users to potentially rely on privacy protections that may exist with one provider, but not another.
The growing trend towards cloud computing usage means that more and more people will be storing
their data on remote servers (which will likely be outside Fourth Amendment protections, as currently understood).
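The card's usage-based pricing point reduces to simple arithmetic: hardware owned outright must be provisioned for peak demand and sits idle the rest of the year, while cloud billing charges only for hours actually consumed. The sketch below uses entirely hypothetical rates and a hypothetical seasonal workload (not figures from the card) to show why the pay-per-use model can win even at a higher per-hour price.

```python
# Hypothetical retailer workload: 100 server-hours/day during a
# 30-day holiday rush, 10 server-hours/day the rest of the year.
HOLIDAY_DAYS, OFF_DAYS = 30, 335
PEAK_HOURS, BASE_HOURS = 100, 10

def owned_cost(rate_per_server_hour=0.05):
    # Owned hardware is sized for the peak and billed (amortized)
    # all year, whether utilized or not.
    return PEAK_HOURS * (HOLIDAY_DAYS + OFF_DAYS) * rate_per_server_hour

def cloud_cost(rate_per_server_hour=0.10):
    # Cloud billing: pay only for hours actually used, even though
    # the per-hour rate is double in this example.
    used = PEAK_HOURS * HOLIDAY_DAYS + BASE_HOURS * OFF_DAYS
    return used * rate_per_server_hour
```

Under these assumed numbers the owned-hardware bill covers 36,500 server-hours of capacity while the cloud bill covers only the 6,350 hours consumed — the "100 server-hours today and no server-hours tomorrow" flexibility the card quotes.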
Small Business is the backbone of the economy.
Hecht 12/17/2014 (Jared, CEO and Co-founder, Fundera, “Are Small Businesses Really the Backbone of
the Economy?” INC http://www.inc.com/jared-hecht/are-small-businesses-really-the-backbone-of-the-economy.html)
In a conversation about the value of small business for the U.S. economy, we can't leave out this
truth--all businesses start small. All of the 18,500 businesses in the United States with 500 or more employees once began as a
part of the small business sector. Apple started in Steve Jobs's garage. Facebook got its start in Mark
Zuckerberg's dorm room. Home-based, non-employing small businesses become small employers,
which in turn become big businesses. So, in a way, one could argue that the American small business
economy is the American economy. It's where the U.S. economy begins . As a caveat, let's keep in mind that the
majority of these numbers come from the 2010 Census, or from the Office of Economic Research's 2012 study--which are considered the most
accurate numbers reflecting the state of jobs and business in the country. Being that we're coming up on 2015, it's likely that these numbers
have undergone some adjustment since then. However, the Small Business Administration indicates that America's small business economy is
growing along with the rest of the recession recovery. Although small businesses may be growing at a slower pace due to lending challenges,
that disparity has not been enough to tip the scales. Facts are still facts, and small businesses are still the backbone of the U.S. economy.
Economic decline triggers lash-out and global war---no checks
Harold James 14, Professor of history at Princeton University’s Woodrow Wilson School who
specializes in European economic history, 7/2/14, “Debate: Is 2014, like 1914, a prelude to world war?,”
http://www.theglobeandmail.com/globe-debate/read-and-vote-is-2014-like-1914-a-prelude-to-world-war/article19325504/
As we get closer to the centenary of Gavrilo Princip’s act of terrorism in Sarajevo, there is an ever more vivid fear: it could happen
again. The approach of the hundredth anniversary of 1914 has put a spotlight on the fragility of the world’s
political and economic security systems. At the beginning of 2013, Luxembourg’s Prime Minister Jean-Claude Juncker was widely
ridiculed for evoking the shades of 1913. By now he is looking like a prophet. By 2014, as the security situation in the South
China Sea deteriorated, Japanese Prime Minister Shinzo Abe cast China as the equivalent to Kaiser Wilhelm’s Germany; and the
fighting in Ukraine and in Iraq is a sharp reminder of the dangers of escalation. Lessons of 1914 are about more
than simply the dangers of national and sectarian animosities. The main story of today as then is the precariousness of
financial globalization , and the consequences that political leaders draw from it. In the influential view of Norman Angell in his 1910
book The Great Illusion, the interdependency of the increasingly complex global economy made war
impossible. But a quite opposite conclusion was possible and equally plausible – and proved to be the
case. Given the extent of fragility, a clever twist to the control levers might make war easily winnable
by the economic hegemon. In the wake of an epochal financial crisis that almost brought a complete
global collapse, in 1907, several countries started to think of finance as primarily an instrument of raw
power, one that could and should be turned to national advantage. The 1907 panic emanated from the United States but affected the rest of
the world and demonstrated the fragility of the whole international financial order. The aftermath of the 1907 crash drove the then hegemonic
power – Great Britain - to reflect on how it could use its financial power. Between 1905 and 1908, the British Admiralty evolved the broad
outlines of a plan for financial and economic warfare that would wreck the financial system of its major European rival, Germany, and destroy
its fighting capacity. Britain used its extensive networks to gather information about opponents. London banks financed most of the world’s
trade. Lloyds provided insurance for the shipping not just of Britain, but of the world. Financial networks provided the information that allowed
the British government to find the sensitive strategic vulnerabilities of the opposing alliance. What pre-1914 Britain did anticipated the private-public partnership that today links technology giants such as Google, Apple or Verizon to U.S. intelligence gathering. Since last year, the Edward
Snowden leaks about the NSA have shed a light on the way that global networks are used as a source of intelligence and power. For Britain’s
rivals, the financial panic of 1907 showed the necessity of mobilizing financial powers themselves. The United States realized that it needed a
central bank analogous to the Bank of England. American financiers thought that New York needed to develop its own commercial trading
system that could handle bills of exchange in the same way as the London market. Some of the
dynamics of the pre-1914
financial world are now re-emerging. Then an economically declining power, Britain, wanted to use
finance as a weapon against its larger and faster growing competitors, Germany and the United States. Now America is in turn
obsessed by being overtaken by China – according to some calculations, set to become the world’s largest economy in 2014. In
the aftermath of the 2008 financial crisis, financial institutions appear both as dangerous weapons of
mass destruction, but also as potential instruments for the application of national power. In managing the 2008 crisis, the dependence
of foreign banks on U.S. dollar funding constituted a major weakness, and required the provision of large swap lines by the Federal Reserve. The
United States provided that support to some countries, but not others, on the basis of an explicitly political logic, as Eswar Prasad demonstrates
in his new book on the “Dollar Trap.” Geo-politics is intruding into banking practice elsewhere. Before the Ukraine crisis, Russian banks were
trying to acquire assets in Central and Eastern Europe. European and U.S. banks are playing a much reduced role in Asian trade finance. Chinese
banks are being pushed to expand their role in global commerce. After the financial crisis, China started to build up the renminbi as a major
international currency. Russia and China have just proposed to create a new credit rating agency to avoid what they regard as the political bias
of the existing (American-based) agencies. The next stage in this logic is to think about how financial power can be directed to national
advantage in the case of a diplomatic tussle. Sanctions are a routine (and not terribly successful) part of the pressure applied to rogue states
such as Iran and North Korea. But financial pressure can be much more powerfully applied to countries that are deeply embedded in the world
economy. The test is in the Western imposition of sanctions after the Russian annexation of Crimea. President Vladimir Putin’s calculation in
response is that the European Union and the United States cannot possibly be serious about the financial war. It would turn into a boomerang:
Russia would be less affected than the more developed and complex financial markets of Europe and America. The
threat of systemic
disruption generates a new sort of uncertainty, one that mirrors the decisive feature of the crisis of the
summer of 1914. At that time, no one could really know whether clashes would escalate or not. That
feature contrasts remarkably with almost the entirety of the Cold War, especially since the 1960s, when the
strategic doctrine of
Mutually Assured Destruction left no doubt that any superpower conflict would inevitably escalate.
The idea of network disruption relies on the ability to achieve advantage by surprise, and to win at no or low cost. But it is inevitably a gamble,
and raises prospect that others might, but also might not be able to, mount the same sort of operation. Just as in 1914, there is an
enhanced temptation to roll the dice, even though the game may be fatal.
Warming Addon
Cloud computing key to climate modeling
Boyce, 10 (Eric, technical writer and user advocate for The Rackspace Cloud, September 14, 2010
http://www.rackspacecloud.com/blog/2010/09/14/the-future-of-cloud-computing-the-big-25-in-the-next-25/)
The promise of the cloud isn’t just about gaming and the ability to safely store all those photos that you wish you hadn’t ever taken. Many of
the most promising cloud-based applications also require massive computational power. Searching a database
of global DNA samples requires abundant, scalable processing power. Modeling protein folding is another example of how compute
resources will be used. Protein folding is linked to many diseases including Alzheimer’s and cancer, and analyzing the folding process can
lead to new treatments and cures, but it requires enormous compute power. Projects like Folding@home are using distributed computing to
tackle these modeling tasks. The cloud
will offer a larger, faster, more scalable way to process data and thus
benefit any heavy data manipulation task. 6. Is it going to be hot tomorrow? Like protein folding modeling, climate
simulation and forecasting requires a large amount of data storage and processing. Recently the German
Climate Computing Center (DKRZ) installed a climate calculating supercomputer that is capable of analyzing 60
petabytes of data (roughly 13 million DVD’s) at over 158 teraflops (trillion calculations per second). In the next couple of decades, this
level of computing power will be widely available and will exist on remote hardware. Sophisticated climate
models combined with never before seen compute power will provide better predictions of climate
change and more rapid early warning systems.
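The card's "60 petabytes ... roughly 13 million DVDs" comparison can be checked with back-of-the-envelope arithmetic, assuming decimal storage units and the standard 4.7 GB single-layer DVD:

```python
PETABYTE = 10**15            # bytes, decimal units as storage vendors count
DVD_CAPACITY = 4.7 * 10**9   # bytes, single-layer DVD

# 60 PB of climate data expressed as a stack of DVDs
dvds = 60 * PETABYTE / DVD_CAPACITY
```

This works out to about 12.8 million DVDs, consistent with the card's "roughly 13 million."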
Key to warming adaptation
Pope, 10 (Vicky Pope is the head of climate science advice at the Met Office Hadley Centre, “ How
science will shape climate adaptation plans,” 16 September 2010,
http://www.guardian.co.uk/environment/cif-green/2010/sep/16/science-climate-change-adaptation)
Some would argue that the demand for information on how climate change will affect our future outstrips
the current capability of the science and climate models. My view is that as scientists, we can provide useful
information, but we need to be clear about its limitations and strive to improve information for the future. We need to be
clear about the uncertainties in our projections while still extracting useful information for practical decision-making. I have been involved in
developing climate models for the last 15 years and despite their limitations we are now able to assess the probability of different outcomes for
the first time. That means we can quantify the risk of these outcomes happening. These projections – the UK climate
projections published in 2009 - are already forming the backbone of adaptation decisions being made in the UK for 50 to 100 years ahead. A
project commissioned by the Environment Agency to investigate the impact of climate change on the Thames estuary over the next 100 years
concluded that current government predictions for sea level rise are realistic. A major outcome from the scientific analysis was that the worst-case scenarios for high water levels can be significantly reduced – from 4.2m to 2.7m – because we are able to rule out the more extreme sea
level rise. As a result, massive investment in a tide-excluding estuary barrage is unlikely to be needed this century. This will be reviewed as more
information becomes available, taking a flexible approach to adaptation. The energy industry, working with the Met Office, looked at the likely
impact of climate change on its infrastructure. The project found that very few changes in design standards are required, although it did
highlight a number of issues. For instance, transformers could suffer higher failure rates and efficiency of some types of thermal power station
could be markedly reduced because of increasing temperatures. A particular concern highlighted by this report and reiterated in today's report
from the Climate Change Committee - the independent body that advises government on its climate targets - is that little is known about how
winds will change in the future - important because of the increasing role of wind power in the UK energy mix. Fortunately many people, from
private industry to government, recognise the value of even incomplete information to help make decisions about the future. Demand
for
climate information is increasing, particularly relating to changes in the short to medium term . More
still needs to be done to refine the climate projections and make them more usable and accessible.
This is especially true if we are to provide reliable projections for the next 10 to 30 years. The
necessary science and modelling tools are being developed, and the first tentative results are being produced. We need
particularly to look at how we communicate complex and often conflicting results. In order to explain complex science to a lay audience,
scientists and journalists are prone to progressively downplay the complexity. Conversely, in striving to adopt a more scientific approach and
include the full range of uncertainty, we often give sceptics an easy route to undermine the science. All too often uncertainty in science offers a
convenient excuse for delaying important decisions. However, in the case of climate change there is overwhelming evidence that the climate is
changing — in part due to human activities — and that changes will accelerate if emissions continue unabated. In examining the uncertainty in
the science we must take care to not throw away what we do know. Science has established that climate is changing. Scientists now need
to press on in developing the emerging tools that will be used to underpin sensible adaptation
decisions which will determine our future.
Warming is inevitable–only adaptation can prevent extinction
Romero, 8 (Purple, reporter for ABS-CBN news, 05/17/2008, Climate change and human extinction – are you ready to be fossilized? http://www.abs-cbnnews.com/nation/05/16/08/climate-change-and-human-extinction-are-you-ready-be-fossilized)
Climate change killed the dinosaurs. Will it kill us as well? Will we let it destroy the human race? This was the grim, depressing
message that hung in the background of the Climate Change Forum hosted on Friday by the Philippine National Red Cross at the Manila Hotel.
"Not one dinosaur is alive today. Maybe
someday it will be our fossils that another race will dig up in the future,
" said Roger Bracke of the International Federation of Red Cross and Red Crescent Societies, underscoring his point that no less than
extinction is faced by the human race, unless we are able to address global warming and climate change in
this generation. Bracke, however, countered the pessimistic mood of the day by saying that the human race still has an
opportunity to save itself. This more hopeful view was also presented by the four other speakers in the forum. Bracke pointed out
that all peoples of the world must be involved in two types of response to the threat of climate change: mitigation and
adaptation. "Prevention" is no longer possible, according to Bracke and the other experts at the forum, since climate
change is already happening. Last chance The forum's speakers all noted the increasing number and intensity of
devastating typhoons--most recently cyclone Nargis in Myanmar, which killed more than 100,000 people--as evidence that the
world's climatic and weather conditions are turning deadly because of climate change. They also reminded
the audience that deadly typhoons have also hit the Philippines recently, particularly Milenyo and Reming, which left hundreds of thousands of
Filipino families homeless. World Wildlife Fund Climate and Energy Program head Naderev Saño said that "this generation [is] the last chance for the human race" to do something and ensure that humanity stays alive in this planet.
According to Saño, while most members of our generation will be dead by the time the worst effects of climate change are felt, our children will
be the ones to suffer. How will Filipinos survive climate change? Well, first of all, they have to be made aware that climate change is a problem
that threatens their lives. The easiest way to do this – as former Consultant for the Secretariats of the UN Convention on Climate Change Dr.
Pak Sum Low told abs-cbnews.com/Newsbreak – is to particularize the disasters that it could cause. Talking in the language of destruction, Pak
and other experts paint this portrait of a Philippines hit by climate change: increased typhoons in Visayas, drought in Mindanao, destroyed
agricultural areas in Pampanga, and higher incidence rates of dengue and malaria. Saño said that as polar ice caps melt due to global
warming, sea levels will rise, endangering coastal and low-lying areas like Manila. He said Manila Bay would experience a sea level increase of
72 meters over 20 years. This means that from Pampanga to Nueva Ecija, farms and fishponds would be in danger of being inundated
in saltwater. Saño added that Albay, which has been marked as a vulnerable area for typhoons, would be the top province at risk. Saño also
pointed out that extreme weather conditions arising from climate change, including typhoons and severe droughts, would have social,
economic and political consequences: Ruined farmlands and fishponds would hamper crop growth and reduce food sources, typhoons would
displace people, cause diseases, and limit actions in education and employment. Thus, Saño said, while environmental protection should
remain at the top of the agenda in fighting climate change, solutions to the phenomenon "must also be economic, social, moral and political."
Mitigation Joyceline Goco, Climate Change Coordinator of the Environment Management Bureau of the Department of Environment and
Natural Resources, focused her lecture on the programs Philippine government is implementing in order to mitigate the effects of climate
change. Goco said that the Philippines is already a signatory to global agreements calling for a reduction in the "greenhouse gasses"--mostly
carbon dioxide, chloroflourocarbons and methane--that are responsible for trapping heat inside the planet and raising global temperatures.
Goco said the DENR, which is tasked to oversee and activate the Clean Development Mechanism, has registered projects which would reduce
methane and carbon dioxide. These projects include landfill and electricity generation initiatives. She also said that the government is also
looking at alternative fuel sources in order to reduce the country's dependence on the burning of fossil fuels--oil--which are known culprits
behind global warming. Bracke however said that mitigation is not enough. "The ongoing debate about mitigation of climate change effects is
highly technical. It involves making fundamental changes in the policies of governments, making costly changes in how industry operates. All of
this takes time and, frankly, we're not even sure if such mitigation efforts will be successful. In the meantime, while the debate goes on, the
effects of climate change are already happening to us." Adaptation A
few nations and communities have already begun
adapting their lifestyles to cope with the effects of climate change. In Bangladesh, farmers have
switched to raising ducks instead of chickens because the latter easily succumb to weather disturbances and immediate effects, such
as floods. In Norway, houses with elevated foundations have been constructed to decrease displacement due to
typhoons. In the Philippines, the main body for fighting climate change, the Presidential Task Force on Climate Change (PTFCC), headed by
Department on Energy Sec. Angelo Reyes, has identified emission reduction measures and has looked into what fuel mix could be both
environment and economic friendly. The Department of Health has started work with the World Health Organization in strengthening its
surveillance mechanisms for health services. However,
bringing information from the PTFCC’s studies down to the communities at the barangay level and crafting
an action plan for adaptation with them remains a challenge. Bracke said that the Red
Cross is already at the forefront of efforts to prepare for disasters related to climate change. He pointed out that since the Red Cross was
founded in 1919, it has already been helping people beset by natural disasters. "The problems resulting from climate change are not new to the
Red Cross. The Red Cross has been facing those challenges for a long time. However, the frequency and magnitude of those problems are
unprecedented. This is why the Red Cross can no longer face these problems alone," he said. Using a medieval analogy, Bracke said that the
Red Cross can no longer be a "knight in shining armor rescuing a damsel in distress" whenever disaster strikes. He said that disaster
preparedness in the face of climate change has to involve people at the grassroots level. "The role of the Red Cross in the era of climate change
will be less as a direct actor and increase as a trainer and guide to other partners who will help us adapt to climate change and respond to
disasters," said Bracke. PNRC chairman and Senator Richard Gordon gave a picture of how the PNRC plans to take climate change response to
the grassroots level, through its project, dubbed "Red Cross 143". Gordon explained how Red Cross 143 will train forty-four volunteers from
each community at a barangay level. These volunteers will have training in leading communities in disaster response. Red Cross 143 volunteers
will rely on information technology like cellular phones to alert the PNRC about disasters in their localities, mobilize people for evacuation, and
lead efforts to get health care, emergency supplies, rescue efforts, etc.
Adaptation solves global wars
Werz and Conley 12 - Senior Fellow @American Progress where his work as member of the National
Security Team focuses on the nexus of climate change, migration, and security and emerging
democracies & Research Associate for National Security and International Policy @ the Center for
American Progress [Michael Werz & Laura Conley, “Climate Change, Migration, and Conflict: Addressing
complex crisis scenarios in the 21st Century,” Center for American Progress, January 2012]
The costs and consequences of climate change on our world will define the 21st century. Even if nations across
our planet were to take immediate steps to rein in carbon emissions—an unlikely prospect— a warmer climate is
inevitable . As the U.N. Intergovernmental Panel on Climate Change, or IPCC, noted in 2007, human-created “warming of the
climate system is unequivocal, as is now evident from observations of increases in global average air and ocean
temperatures, widespread melting of snow and ice and rising global average sea level.”1 As these ill effects progress they will have serious
implications for U.S. national security interests as well as global stability—extending from the sustainability of coastal military installations to
the stability of nations that lack the resources, good governance, and resiliency needed to respond to the many adverse consequences of
climate change. And as
these effects accelerate, the stress will impact human migration and conflict around
the world. It is difficult to fully understand the detailed causes of migration and economic and political instability, but the growing
evidence of links between climate change, migration, and conflict raises plenty of reasons for concern. This is
why it’s time to start thinking about new and comprehensive answers to multifaceted crisis scenarios brought on or worsened by global climate
change. As Achim Steiner, executive director of the U.N. Environment Program, argues, “The question we must continuously ask ourselves in
the face of scientific complexity and uncertainty, but also growing evidence of climate change, is at what point precaution, common sense or
prudent risk management demands action.”2 In
the coming decades climate change will increasingly threaten
humanity’s shared interests and collective security in many parts of the world, disproportionately affecting the globe’s least developed
countries. Climate change will pose challenging social, political, and strategic questions for the many different multinational, regional, national,
and nonprofit organizations dedicated to improving the human condition worldwide. Organizations as different as Amnesty International, the
U.S. Agency for International Development, the World Bank, the International Rescue Committee, and the World Health Organization will all
have to tackle directly the myriad effects of climate change. Climate change also poses distinct challenges to U.S. national security.
Recent
intelligence reports and war games, including some conducted by the U.S. Department of Defense, conclude that over the
next two or three decades, vulnerable regions (particularly sub-Saharan Africa, the Middle East, South and Southeast
Asia) will face the prospect of food shortages, water crises, and catastrophic flooding driven by climate
change. These developments could demand U.S., European, and international humanitarian relief or military
responses , often the delivery vehicle for aid in crisis situations. This report provides the foundation and overview for a series of papers
focusing on the particular challenges posed by the cumulative effects of climate change, migration, and conflict in some of our world’s most
complex environments. In the papers following this report, we plan to outline the effects of this nexus in northwest Africa, in India and
Bangladesh, in the Andean region of South America, and in China. In this paper we detail that nexus across our planet and offer wide ranging
recommendations about how the United States, its allies in the global community, and the community at large can deal with the coming
climate-driven crises with comprehensive sustainable security solutions encompassing national security, diplomacy, and economic, social, and
environmental development. Here, we briefly summarize our arguments and our conclusions. The nexus The Arab Spring can be at least partly
credited to climate change. Rising food prices and efforts by authoritarian regimes to crush political protests were linked first to food and then
to political repression—two important motivators in the Arab makeover this past year. To be sure, longstanding economic and social distress
and lack of opportunity for so many Arab youth in the Middle East and across North Africa only needed a spark to ignite revolutions across the
region. But environmental degradation and the movement of people from rural areas to already overcrowded cities alongside rising food prices
enabled the cumulative effects of long-term economic and political failures to sweep across borders with remarkable agility. It does not require
much foresight to acknowledge that other effects of climate change will add to the pressure in the decades to come. In particular the
cumulative overlays of climate change with human migration driven by environmental crises, political conflict caused by this migration, and
competition for more scarce resources will add new dimensions of complexity to existing and future crisis scenarios. It is thus critical to
understand how governments plan to answer and prioritize these new threats from climate change, migration, and conflict. Climate change
Climate change alone poses a daunting challenge. No matter what steps the global community takes to mitigate
carbon emissions, a warmer climate is inevitable. The effects are already being felt today and will intensify as climate change
worsens. All of the world’s regions and nations will experience some of the effects of this transformational challenge. Here’s just one case in
point: African states are likely to be the most vulnerable to multiple stresses, with up to 250 million people projected to suffer from water and
food insecurity and, in low-lying areas, a rising sea level.3 As little as 1 percent of Africa’s land is located in low-lying coastal zones but this land
supports 12 percent of its urban population.4 Furthermore, a majority of people in Africa live in lower altitudes—including the Sahel, the area
just south of the Sahara—where the worst effects of water scarcity, hotter temperatures, and longer dry seasons are expected to occur.5 These
developments may well be exacerbated by the lack of state and regional capacity to manage the effects of climate change. These same
dynamics haunt many nations in Asia and the Americas, too, and the implications for developed countries such as the United States and much
of Europe will be profound. Migration Migration adds another layer of complexity to the scenario. In
the 21st century the world
could see substantial numbers of climate migrants—people displaced by either the slow or sudden onset of the effects of
climate change. The United Nations’ recent Human Development Report stated that, worldwide, there are already an estimated 700 million
internal migrants—those leaving their homes within their own countries—a number that includes people whose migration is related to climate
change and environmental factors. Overall migration across national borders is already at approximately 214 million people worldwide,6 with
estimates of up to 20 million displaced in 2008 alone because of a rising sea level, desertification, and flooding.7 One expert, Oli Brown of the
International Institute for Sustainable Development, predicts a tenfold increase in the current number of internally displaced persons and
international refugees by 2050.8 It is important to acknowledge that there is no consensus on this estimate. In fact there is major disagreement
among experts about how to identify climate as a causal factor in internal and international migration. But even though the root causes of
human mobility are not always easy to decipher, the policy challenges posed by that movement are real. A 2009 report by
the International
Organization for Migration produced in cooperation with the United Nations University and the Climate Change, Environment and Migration
Alliance cites numbers that range from “200 million to 1 billion migrants from climate change alone, by
2050,”9 arguing that “environmental drivers of migration are often coupled with economic, social and developmental factors that can
accelerate and to a certain extent mask the impact of climate change.” The report also notes that “migration can result from different
environmental factors, among them gradual environmental degradation (including desertification, soil and coastal erosion) and natural
disasters (such as earthquakes, floods or tropical storms).”10 (See box on page 15 for a more detailed definition of climate migrants.) Clearly,
then, climate change
is expected to aggravate many existing migratory pressures around the world. Indeed
associated extreme weather events resulting in drought, floods, and disease are projected to increase the
number of sudden humanitarian crises and disasters in areas least able to cope, such as those already mired in
poverty or prone to conflict.11 Conflict This final layer is the most unpredictable, both within nations and transnationally, and will force the
United States and the international community to confront climate and migration challenges within an increasingly unstructured local or
regional security environment. In contrast to the great power conflicts and the associated proxy wars that marked most of the 20th century, the
immediate post- Cold War decades witnessed a diffusion of national security interests and threats. U.S. national security policy is increasingly
integrating thinking about nonstate actors and nontraditional sources of conflict and instability, for example in the fight against Al Qaeda and
its affiliated groups. Climate change is among these newly visible issues sparking conflict. But because the direct link between conflict and
climate change is unclear, awareness of the indirect links has yet to lead to substantial and sustained action to address its security implications.
Still the
potential for the changing climate to induce conflict or exacerbate existing instability in some of the world’s most
vulnerable regions is now recognized in national security circles in the United States, although research gaps still exists in many
places. The climate-conflict nexus was highlighted with particular effect by the current U.S. administration’s security-planning reviews over the
past two years, as well as the Center for Naval Analysis, which termed climate
change a “threat multiplier,” indicating that it
can exacerbate existing stresses and insecurity.12 The Pentagon’s latest Quadrennial Defense Review also recognized climate
change as an “accelerant of instability or conflict,” highlighting the operational challenges that will confront U.S. and partner
militaries amid a rising sea level, growing extreme weather events, and other anticipated effects of climate change.13 The U.S. Department of
Defense has even voiced concern for American military installations that may be threatened by a rising sea level.14 There is also well-developed
international analysis on these points. The United Kingdom’s 2010 Defense Review, for example, referenced the security aspects of climate
change as an evolving challenge for militaries and policymakers. Additionally, in 2010, the Nigerian government referred to climate change as
the “greatest environmental and humanitarian challenge facing the country this century,” demonstrating that climate change is no longer seen
as solely scientific or environmental, but increasingly as a social and political issue cutting across all aspects of human development.15 As these
three threads—climate change, migration, and conflict—interact more intensely, the consequences will be far-reaching and occasionally
counterintuitive. It is impossible to predict the outcome of the Arab Spring movement, for example, but the blossoming of democracy in some
countries and the demand for it in others is partly an unexpected result of the consequences of climate change on global food prices. On the
other hand, the interplay of these factors will drive complex crisis situations in which domestic policy, international policy, humanitarian
assistance, and security converge in new ways. Areas of concern Several regional
hotspots
frequently
come up in the international
debate on climate change, migration, and conflict. Climate migrants in northwest Africa , for example, are causing
communities across the region to respond in different ways, often to the detriment of regional and international security concerns. Political
and social instability in the region plays into the hands of organizations such as Al Qaeda in the Islamic
Maghreb. And recent developments in Libya, especially the large number of weapons looted from depots after strongman Moammar
Qaddafi’s regime fell— which still remain unaccounted for—are a threat to stability across North Africa. Effective solutions need not address all
of these issues simultaneously but must recognize the layers of relationships among them. And these solutions must also recognize that these
variables will not always intersect in predictable ways. While some migrants may flee floodplains, for example, others may migrate to them in
search of greater opportunities in coastal urban areas.16 Bangladesh, already well known for its disastrous floods, faces rising
waters in the future due to climate-driven glacial meltdowns in neighboring India. The effects can hardly be overstated. In
December 2008 the National Defense University in Washington, D.C., ran an exercise that explored the impact of a flood that
sent hundreds of thousands of refugees into neighboring India. The result: the exercise predicted a new wave of migration
would touch off religious conflicts, encourage the spread of contagious diseases, and cause vast damage to
infrastructure. India itself is not in a position to absorb climate-induced pressures—never mind foreign climate migrants.
The country will contribute 22 percent of global population growth and have close to 1.6 billion inhabitants by 2050, causing demographic
developments that are sure to spark waves of internal migration across the country. Then there’s the
Andean region of South America,
where melting glaciers and snowcaps will drive climate, migration, and security concerns. The average rate of glacial melting has doubled
over the past few years, according to the World Glacier Monitoring Service.17 Besides Peru, which faces the gravest consequences in Latin
America, a number of other Andean countries will be massively affected, including Bolivia, Ecuador, and Colombia. This development will
put
water security, agricultural production, and power generation at risk —all factors that could prompt people to leave
their homes and migrate. The IPCC report argues that the region is especially vulnerable because of its fragile
ecosystem.18 Finally, China is now in its fourth decade of ever-growing internal migration, some of it driven in recent years by
environmental change. Today, across its vast territory, China continues to experience the full spectrum of climate change
related consequences that have the potential to continue to encourage such migration. The Center for a New
American Security recently found that the consequences of climate change and continued internal migration in China
include “water stress; increased droughts, flooding, or other severe events; increased coastal erosion and saltwater inundation;
glacial melt in the Himalayas that could affect hundreds of millions; and shifting agricultural zones”—all of which will
affect food supplies. 19 Pg. 1-7
Solvency
Plan Specific
Solves Mosaic
Some time restrictions are key – doesn’t impede targeted law enforcement, inserts
the mosaic theory, and solves privacy.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
The Revised Statute—18 U.S.C. § 2703(b)
(b)(c) Records Concerning Network Service. Electronic Communication Service or Remote Computing Service.—(1) A
governmental entity may require a network service provider provider of electronic communication service or remote
computing service to disclose a record or other information pertaining to a subscriber to or customer of such service (not
including the contents of communications) only when the governmental entity—
(1)(A) obtains a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the case
of a State court, issued using State warrant procedures) by a court of competent jurisdiction;
(2)(B) obtains a court order for such disclosure under subsection (d) of this section, if the information required by the
present request, and by any previous requests made to the same provider during the course of the current
investigation, was initially collected by the provider from user activity taking place during a period not exceeding seven
cumulative days;
(3)(C) has the consent of the subscriber or customer to such disclosure;
(4)(D) submits a formal written request relevant to a law enforcement investigation concerning telemarketing fraud for
the name, address, and place of business of a subscriber or customer of such provider, which subscriber or customer is
engaged in telemarketing (as such term is defined in section 2325 of this title); or
(5)(E) seeks information under subsection (c) paragraph (2).190
Explanation Revised subsection (b) would
incorporate the mosaic theory into the Stored Communications
Act to prevent warrantless disclosures of sweeping amounts of metadata. The subsection would
require a warrant for all requests to a single provider regarding information collected over a period of
at least seven cumulative days. More discrete information requests—that is, requests for information collected over
less than seven cumulative days—would remain governed by the current statute’s court order and reasonable
suspicion requirements. This proposal also would not affect the government’s ability to warrantlessly
request information made with the consent of the subscriber or customer, concerning telemarketing
fraud, or relating to basic subscriber information. Adopting the mosaic theory in this statutory
manner would prevent many of the potential pitfalls that could arise if the Supreme Court were to
incorporate the mosaic theory into Fourth Amendment doctrine. And the proposal would address the
most important practical questions posed by the mosaic theory’s most vocal critic, Kerr191: 1. What is the
“proper reference point for when a mosaic has been created[?]”192 2. “[H]ow long must [a surveillance] tool be used before the relevant
mosaic is created?”193 3. “If the mosaic theory applies to multiple surveillance methods, . . . [should] duration and scale questions . . . be
answered in the same way for every method[?]”194 4. “[W]hich stages of surveillance [does] the mosaic theory regulate[]: initial data
collection, subsequent analysis, or both[?]”195 5. Would the mosaic theory apply when multiple agencies request data, either in separate
investigations or in joint operations?196 By
statutorily defining the scope of the mosaic theory’s effect on
government information requests, this proposal answers these questions. The proposal, particularly
subsection (b), is intended to be a prophylactic shield against the privacy danger posed by the
government’s easy access to individuals’ metadata stored with a third party. It does not attempt to
perfectly address every possible scenario—an impossible task. Revised subsection (b) would answer Kerr’s first three
questions somewhat arbitrarily, as follows: First, the proper reference point (under the proposal) for when a mosaic has been created is a
defined time period. Second, that period is seven cumulative days. And third, duration and scale questions would be answered identically in
every case, regardless of the technological context. Prophylactic
measures and arbitrary thresholds are neither
inherently problematic nor without precedent in criminal procedure. For instance, the Miranda doctrine
prophylactically ensures criminal suspects are aware of their Fifth Amendment right against self-incrimination; Miranda warnings are not direct
applications of the Fifth Amendment.197 Similarly, the Supreme Court has held that invocation of the Fifth Amendment right to counsel expires
after fourteen days,198 an entirely arbitrary threshold. The Court did not set this bright-line number to perfectly define the boundaries of the
Fifth Amendment right. Instead, the Court’s intention was “to avoid the consequence” of “not reach[ing] the correct result most of the time.”
199 Likewise, here, the
arbitrary nature of the proposal’s seven-day mosaic threshold is a feature, not a
bug. A bright-line rule would lend confidence to the government when it requests data, to third
parties when they disclose users’ information, and to judges when they apply the law. It would also allow
individuals to determine whether a data disclosure violated their legal rights. Such a rule, despite its potential to over- and
underprotect privacy in some circumstances, is superior to an overly complex scheme that hinders
government investigations or to no rule at all, which threatens individuals’ privacy. The proposal also answers
Kerr’s fourth question: Which stage—collection or analysis—is relevant? To maintain consistency with
typical Fourth Amendment searches, the proposal defines the important stage of surveillance as
collection: service providers’ initial collection of information from individuals and the government’s
later collection of that information from those providers. If the government warrantlessly requests
too much information under § 2703(b), that request would be illegal under the revised statute. But in
keeping with the sequential nature of Fourth Amendment doctrine,200 if the government makes
multiple requests for information at different times and only the later requests violate the statute,
the earlier requests would remain valid. For example, if police request metadata originally collected during a six-day period and
later seek additional metadata collected during a separate six-day period, the government could still use the information obtained from the
first request. In conjunction with the added suppression remedy, discussed below, this approach would allow other Fourth Amendment
doctrines, such as inevitable discovery201 and fruit of the poisonous tree,202 to apply as usual. Finally, this proposal would address Kerr’s fifth
question: how the mosaic theory would handle multiple requests made by separate officers or agencies conducting either individual
investigations or joint operations. It
would clarify that requests arising from separate investigations would be
treated separately, while requests by multiple officers (or agencies) in the same investigation would be
considered together. While these lines may not always be clear, magistrate judges mulling
government requests for court orders and warrants are in a proper position to evaluate those
requests on a case-by-case basis, and litigants can challenge the validity of magistrates’
determinations when appropriate. Incorporating this mosaic framework into the Stored
Communications Act would ensure judicial oversight and guard against government attempts to game
the system by making multiple, smaller requests adding up to a larger, mosaic whole.203 For instance, under
this proposal, the government could request information pertaining to a six-day period or to two separate three-day periods (or any other
combination of time spans adding up to less than seven days). But the proposal would forbid the government from warrantlessly seeking
information collected from Monday through Saturday for an entire month (that is, twenty-four days’ worth of information but never in a single
seven-day chunk). As a result, this
proposal would draw clear boundaries allowing law enforcement officers,
judges, and individuals to know whether government requests for information are legitimate or
excessive under the statute.
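To make the seven-cumulative-day threshold concrete, the bright-line rule above can be sketched as a simple tally: add up the distinct calendar days of user activity covered by the present request plus all prior requests to the same provider in the same investigation, and require a warrant once that total reaches seven. This is a minimal illustration only; the function names, data shapes, and overlap-merging behavior are my own assumptions, not anything specified in Schlabach's proposal.

```python
# Hypothetical sketch of the proposal's seven-cumulative-day mosaic
# threshold. All names and structure here are illustrative assumptions.
from datetime import date

MOSAIC_THRESHOLD_DAYS = 7  # warrant required at or above this total


def covered_days(periods):
    """Count distinct calendar days of user activity covered by a set of
    (start, end) request periods, merging any overlapping days."""
    days = set()
    for start, end in periods:
        d = start
        while d <= end:
            days.add(d)
            d = date.fromordinal(d.toordinal() + 1)
    return len(days)


def warrant_required(prior_periods, new_period):
    """True if the new request, combined with all prior requests to the
    same provider in the same investigation, reaches seven cumulative
    days of collected activity, triggering the warrant requirement."""
    return covered_days(prior_periods + [new_period]) >= MOSAIC_THRESHOLD_DAYS


# Two separate three-day requests (6 cumulative days): court order suffices.
first = (date(2015, 3, 2), date(2015, 3, 4))
second = (date(2015, 3, 9), date(2015, 3, 11))
print(warrant_required([first], second))          # False

# A third three-day request pushes the total to 9 days: warrant required.
third = (date(2015, 3, 16), date(2015, 3, 18))
print(warrant_required([first, second], third))   # True
```

The tally-then-compare structure mirrors why the article calls the threshold a usable bright line: officers, providers, and judges can all compute the same number and reach the same answer, including in the Monday-through-Saturday example, where twenty-four scattered days far exceed the cap even though no single request spans seven.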
Solves Certainty
Amending the SCA solves certainty
Medina, 2013 (Melissa, Senior Staff Member, American University Law Review and JD, “The Stored
Communications Act: An Old Statute for Modern Times.” American University Law Review Lexis.)
The SCA’s complicated structure has confused courts and created a myriad of different protections
that are seemingly inconsistent.167 For example, under the current language, the same email is subject to
different protection depending on whether it is in transit, stored on a home computer, opened and
stored in remote storage, unopened and stored in remote storage for 180 days or less, or unopened
and stored in remote storage for more than 180 days.168 It is therefore not surprising that courts have difficulty
construing the statute. Any change to the SCA should focus on simplifying this structure to ensure consistent
results and avoid the need for further legislative revisions soon after its enactment. Such a proposal would
include three key changes. First, Congress should focus on technological neutrality to ensure the vitality of the
amendment for years to come.169 The 1986 Congress achieved its goal of technological neutrality when it enacted the SCA in some
respects170 but failed in others.171 Accordingly, the SCA is illustrative of the benefits of technological neutrality and the cautions of the lack
thereof. The neutral aspect of the SCA allows it to be equally applicable to webmail and newer technologies such as Facebook messages and
private Twitter Direct Messages.172 On the other hand, by virtue of the Act’s structure being necessarily based on the types of service
providers in 1986 and the ways users and service providers traditionally stored communications, the
Act has become obsolete,
and many forms of modern communications are left without protection.173 Consequently, Congress should
update the Act’s framework by eliminating the outdated distinctions between ECS and RCS and
instead providing rules based on the type of communication involved.174 In so doing, the amendment will be
technologically neutral and achieve the objective that Congress sought in 1986.175 Second, Congress should provide a clear
definition of electronic storage that, at the very least, clarifies that the term “electronic storage”
encompasses both opened and unopened emails.176 Whether an email is in transit, opened, or unopened should not
determine its level of protection. Expanding the electronic storage definition in this manner will ensure proper
protection from unauthorized access by a private party or the government. S. 607 attempts to settle the
opened/unopened distinction by including language indicating that all communications in electronic
storage “with or otherwise stored, held or maintained” by an ECS or RCS are subject to a warrant
requirement.177 However, the bill’s failure to provide a clear definition of electronic storage, coupled with its retention of the ECS and
RCS distinctions raises questions as to the scope of electronic storage.178 Moreover, challenges under § 2701 will remain subject to the
ambiguous case law discussed in Part II, including Jennings. 179 This remaining uncertainty is unfortunate because courts continue to grapple
with this very issue and would greatly benefit from a clear directive.180 With
a clear definition of electronic storage,
Congress should then provide for one disclosure standard: the government must obtain a warrant to
gain access to emails that are held in electronic storage.181 This rule would ensure consistent results
and account for society’s changing privacy expectations regarding electronic communications.182
Because Fourth Amendment jurisprudence regarding privacy protections focuses on necessity and
expectations, Congress could evaluate the necessity of using electronic communications and remote
computing services with users’ expectation that their communications would not be exposed to a
service provider when adapting the third-party doctrine.183 In 1986, few people had access to remote storage, the
Internet, or even a computer.184 Today, these things are not only convenient, but, for many, are necessities of
everyday life.185 Few Americans make it through a day without accessing a computer or the
Internet.186 Without access to these forms of communications, modern businesses could not function
and citizens’ lives would be significantly impacted.187 Individuals should not be forced to sacrifice
privacy in order to effectively communicate. Therefore, the government should be required to obtain
a warrant to gain access to these electronic communications.188 Finally, as S. 607 does, Congress should
eliminate the 180-day distinction.189 In 1986, there was no plausible reason why a service provider would keep an electronic
communication over 180 days. Therefore, Congress adopted the 180-day rule because it analogized a stored email for over 180 days as
abandoned property using Fourth Amendment jurisprudence and archaic property law principles.190 According to Congress, individuals did not
have a reasonable expectation of privacy in these messages.191 The 180-day rule is largely irrelevant in today’s society as email is saved for
years on remote servers.192 Many of these saved emails are password protected, and most individuals reasonably believe that service
providers do not have access to them.193
Remove RCS/ECS
Removing the distinction between ECS and RCS with a warrant requirement solves
privacy.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
1. Incorporating a three-tiered mosaic framework
This proposal’s first, most significant change to the Stored Communications Act would reorganize and
amend 18 U.S.C. § 2703, which regulates the government’s ability to compel institutional third parties
to disclose their customers’ stored data. While the current version of the statute is effectively tiered in
its protections, this amendment would make the tiers of protection simpler and more transparent.
Subsection (a), the highest tier, would protect content data from disclosure without a warrant.
Subsection (b), the intermediate tier, would incorporate the mosaic theory and protect against
sweeping requests for metadata. Subsection (c), the lowest tier, would provide limited protections
against less invasive disclosures. Each of these amended subsections is explained in more detail below.
The Revised Statute—18 U.S.C. § 2703(a)
(a) Contents of Wire or Electronic Communications in Electronic Storage or Documents Held by Network Service
Providers.—A governmental entity may require the disclosure by a network service provider of—
(1) a provider of electronic communication service of the nonpublic contents of a wire or electronic communication, that
is in electronic storage in an electronic communications system for one hundred and eighty days or less; or
(2) the nonpublic contents of any document held or maintained on that service on behalf of a subscriber of such service,
only pursuant to a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the
case of a State court, issued using State warrant procedures) by a court of competent jurisdiction. A governmental
entity may require the disclosure by a provider of electronic communications services of the contents of a wire or
electronic communication that has been in electronic storage in an electronic communications system for more than one
hundred and eighty days by the means available under subsection (b) of this section.
Explanation This amendment would make subsection (a) simpler, more consistent across different types of content data, and easier to apply.
First, it would equalize the protections for content information, regardless of whether the content is
being used for electronic communications, remote computer processing, or remote storage (under the
terminology of the current law).181 And borrowing a suggestion from Kerr, the amendment would replace the outdated
terms “electronic communication service” and “remote computing service” with the broader term
“network service provider,” greatly reducing the statute’s complexity.182 (In the current Act, content from
electronic communication services is governed by subsection (a), while content stored in remote computing services is governed by subsection
(b).) As a result, all requests for content information would be consolidated into a single subsection with a single level of protection. Second,
the amendment would require a warrant and probable cause for all disclosures of content
information. This uniform demand would replace the current statute’s weaker protection for certain
types of content (for example, opened e-mail messages and unopened e-mail messages stored for longer than 180 days, which the
government may now request using only an administrative subpoena).183 Traditional Fourth Amendment doctrine, in contrast, protects both
unopened and opened letters.184 Digital content information should be protected at the same level with the same consistency. As noted
above, the law’s current provisions reflect the communications technologies of the 1980s and have little justification in the Internet age.
Third, revised subsection (a)(2) would close a potentially gaping exemption in the statute: the lack of
protection for data stored in remote services that the provider may access for purposes other than
storage or remote processing.185 Removing this exception—currently found in subsection (b)(2)(B)—
would ensure the statute’s protections extend to data stored in cloud computing services such as
Google Docs and Microsoft Office Online and in cloud storage services such as Google Drive, Microsoft
OneDrive, and Dropbox. Most or all of these services contain provisions in their terms of service that grant the service providers some
rights to access the contents of the data, often to provide the targeted advertising that funds the services.186 As a result, the current statute
does not necessarily protect content housed in these services, because it protects only communications held “solely for the purpose of
providing storage or computer processing services to such subscriber or customer.”187 Finally, the
amendment would limit the
applicability of the statute to the contents of nonpublic communications. The government would
therefore retain the ability to request the contents of public communications such as Twitter tweets,
publicly accessible Facebook posts, and other content intended for wide distribution. Because these
communications are already shared widely, their authors have little expectation of privacy in their contents.188 These changes to
subsection (a) would broaden the scope of the Stored Communications Act’s protection of content
information, making it more consistent with the Fourth Amendment’s protection of physical
documents. The proposal would not unduly hinder government investigations beyond the challenges
inherent to traditional investigations. As a result, it would bring the statute into line with the Sixth Circuit’s holding in Warshak,
which determined that the law is unconstitutional as applied to e-mail (and perhaps other forms of electronic communications).189
Suppression Remedy
Suppression remedy key
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
2. A suppression remedy The proposed amendment to the Stored Communications Act would also
include a new § 2713, titled “Prohibition of use as evidence of disclosed contents and records from
network service providers”: Whenever a network service provider discloses the contents of wire or
electronic communication stored in such service, or records concerning such service, no part of the
disclosure and no evidence derived therefrom may be received in evidence in any trial, hearing, or
other proceeding in or before any court, grand jury, department, officer, agency, regulatory body,
legislative committee, or other authority of the United States, a State, or a political subdivision
thereof if the disclosure of that information would be in violation of this chapter. This language is
adapted from 18 U.S.C. § 2515, the suppression remedy for Title III wiretaps. Adding this suppression
remedy to the Stored Communications Act would be good policy, regardless of whether the rest of
this proposal is implemented. This change is relatively straightforward and has been endorsed by even critics of the mosaic theory
such as Kerr.207 It makes little sense to treat disclosures that violate the Stored Communications Act differently than typical searches that
violate the Fourth Amendment. Adding a suppression remedy would create an additional disincentive against illegitimate government
disclosure requests. It would also increase judicial interpretation of the statute because most search and seizure case law stems from
defendants’ attempts to suppress evidence.208
SOP Addon (Warrant Requirement)
Lack of oversight in the SCA upsets the balance of powers – a warrant requirement is key to
restore SOP
Nguyen, 2011 (Hien Timothy M., Candidate for Juris Doctor, Notre Dame Law School, 2012, “CLOUD
COVER: PRIVACY PROTECTIONS AND THE STORED COMMUNICATIONS ACT IN THE AGE OF CLOUD
COMPUTING” Notre Dame Law Review Lexis)
Although some might argue that requiring warrants might impede law enforcement activities, there
are still several important benefits to having a warrant requirement, especially given the “technological and
regulatory reach of government intrusions that exists today.”141 First, warrants “aim[ ] to prevent searches from turning
into ‘fishing expeditions.’”142 The Framers’ experience with writs of assistance and general warrants (which did not require specific
individuals or specific places to be searched) that “resulted in ‘sweeping searches and seizures without any evidentiary basis’”143 and
“‘ransacking’ and seizure of the personal papers of political dissidents, authors, and printers of seditious libel”144 led to the inclusion of the
warrant clause. Thus,
warrants must describe with “particular[ity] . . . the place to be searched, and the
persons or things to be seized.”145 Second, the warrant requirement enforces the separation of
powers and prevents the “excessive exercises of executive power.”146 Since warrants compel law
enforcement officials to justify their exercises of power,147 this provides a structural check against
abuses by such officials. At the same time, warrants “do not constitute an absolute bar to the
activities of law enforcement,” but “merely ensure that law enforcement officials focus on particular
individuals and that they are given adequate independent oversight.”148 Why then is the current statutory regime
insufficient? By allowing the government to compel disclosure with only a subpoena or a § 2703(d) court
order, the statute’s privacy protections are far lower than that of the Fourth Amendment. According to
Daniel Solove, “[u]nlike warrants, subpoenas do not require probable cause and can be issued without
judicial approval.”149 William Stuntz notes that “while searches typically require probable cause or reasonable suspicion and sometimes
require a warrant, subpoenas require nothing, save that the subpoena not be unreasonably burdensome to its target. Few burdens are deemed
unreasonable.”150 Prosecutors can issue subpoenas instead of neutral judicial officers and prosecutors can use grand jury subpoenas to obtain
third-party records.151 Grand jury subpoenas are “‘presumed to be reasonable’ and may only be quashed if ‘there is no reasonable possibility
that the category of materials the Government seeks will produce information relevant to the general subject of the grand jury
investigation.’”152 Thus,
the burden on the government to provide relevant information is far lower,
because “[n]o showing of probable cause or reasonable suspicion is necessary, and courts measure
relevance and burden with a heavy thumb on the government’s side of the scales.”153 Court orders
under § 2703(d) of the SCA also require less than a search warrant. The relevant provision calls for:
“specific and articulable facts showing that there are reasonable grounds to believe that the contents
of a wire or electronic communication, or the records or other information sought, are relevant and
material to an ongoing criminal investigation.”154 Since the standard only requires that the information is “relevant and
material” to the investigation, law enforcement officials can get away with a lot more than if the standard was probable cause. As with
subpoenas, the problem with court orders is that they “supply the judiciary with greatly attenuated
oversight powers.”155 The judge’s job is to “merely determine whether producing records is overly
burdensome” (in the case of subpoenas) or whether the “records are ‘relevant’ to a criminal investigation, a
much weaker standard [than probable cause].”156 Instead of being an impartial decision-maker that gets to determine
whether to grant a warrant, the judiciary’s involvement with subpoenas and court orders “amounts to little more than a rubber stamp of
judicial legitimacy.”157 With
the scales tipped in favor of law enforcement, the potential is greater for
officials to engage in “fishing expeditions” and the delicate balance that prevents “excesses” is upset.
Lack of SOP causes nuclear war – gender paraphrased
Ray Forrester Professor, Hastings College of the Law, University of California August 1989 The George
Washington Law Review 57 Geo. Wash. L. Rev. 1636 “Presidential Wars in the Nuclear Age: An
Unresolved Problem.”
Abramson, Wherever President Goes, the Nuclear War 'Football' is Beside Him, Los Angeles Times, April 3, 1981, at 10, col. 1 (copyright,
1981, Los Angeles Times. Reprinted by permission). On the basis of this report, the startling fact is that one man [person] alone has
the ability to start a nuclear war. A basic theory--if not the basic theory of our Constitution--is that concentration of
power in any one person, or one group, is dangerous to [humankind]mankind. The Constitution, therefore,
contains a strong system of checks and balances, starting with the separation of powers between the
President, Congress, and the Supreme Court. The message is that no one of them is safe with unchecked power. Yet, in what is
probably the most dangerous governmental power ever possessed, we find the potential for world destruction lodged in the discretion of
one person. As a result of public indignation aroused by the Vietnam disaster, in which tens of thousands lost their lives in military actions
initiated by a succession of Presidents, Congress in 1973 adopted, despite presidential veto, the War Powers Resolution. Congress finally
asserted its checking and balancing duties in relation to the making of presidential wars. Congress declared in section 2(a) that its purpose
was to fulfill the intent of the framers of the Constitution of the United States and insure that the collective judgment of both the Congress
and the President will apply to the introduction of United States Armed Forces into hostilities, or into situations where imminent
involvement in hostilities is clearly indicated by the circumstances, and to the continued use of such forces in hostilities or in such
situations. The law also stated in section 3 that [t]he President in every possible instance shall consult with Congress before introducing
United States Armed Forces into hostilities or into situations where imminent involvement in hostilities is clearly indicated. . . . Other
limitations not essential to this discussion are also provided. The intent of the law is clear. Congress undertook to check the President, at
least by prior consultation, in any executive action that might lead to hostilities and war. [*1638] President Nixon, who initially vetoed the
resolution, claimed that it was an unconstitutional restriction on his powers as Executive and Commander in Chief of the military. His
successors have taken a similar view. Even so, some of them have at times complied with the law by prior consultation with
representatives of Congress, but obedience to the law has been uncertain and a subject of continuing controversy between Congress and
the President. Ordinarily, the issue of the constitutionality of a law would be decided by the Supreme Court. But, despite a series of cases
in which such a decision has been sought, the Supreme Court has refused to settle the controversy. The usual ground for such a refusal is
that a "political question" is involved. The rule is well established that the federal judiciary will decide only "justiciable" controversies.
"Political questions" are not "justiciable." However, the standards established by the Supreme Court in 1962 in Baker v. Carr, 369 U.S. 186,
to determine the distinction between "justiciable controversies" and "political questions" are far from clear. One writer observed that the
term "political question" [a]pplies to all those matters of which the court, at a given time, will be of the opinion that it is impolitic or
inexpedient to take jurisdiction. Sometimes this idea of inexpediency will result from the fear of the vastness of the consequences that a
decision on the merits might entail. Finkelstein, Judicial Self-Limitation, 37 HARV. L. REV. 338, 344 (1924)(footnote omitted). It is difficult
to defend the Court's refusal to assume the responsibility of decisionmaking on this most critical issue. The Court has been fearless in
deciding other issues of "vast consequences" in many historic disputes, some involving executive war power. It is to be hoped that the
Justices will finally do their duty here. But in the meantime the spectre of single-minded power persists, fraught with all of the frailties of
human nature that each human possesses, including the President. World history is filled with tragic examples. Even if the Court assumed
its responsibility to tell us whether the Constitution gives Congress the necessary power to check the President, the War Powers Resolution
itself is unclear. Does the Resolution require the President to consult with Congress before launching a nuclear attack? It has been asserted
that "introducing United States Armed Forces into hostilities" refers only to military personnel and does not include the launching of
nuclear missiles alone. In support of this interpretation, it has been argued that Congress was concerned about the human losses in
Vietnam and in other presidential wars, rather than about the weaponry. Congress, of course, can amend the Resolution to state explicitly
that "the introduction of Armed Forces" includes missiles as well as personnel. However, the President could continue to act without prior
consultation by renewing the claim first made by President [*1639] Nixon that the Resolution is an unconstitutional invasion of the
executive power. Therefore, the real solution, in the absence of a Supreme Court decision, would appear to be a constitutional
amendment. All must obey a clear rule in the Constitution. The adoption of an amendment is very difficult. Wisely, Article V requires that
an amendment may be proposed only by the vote of two-thirds of both houses of Congress or by the application of the legislatures of two-thirds of the states, and the proposal must be ratified by the legislatures or conventions of three-fourths of the states. Despite the
difficulty, the Constitution has been amended twenty-six times. Amendment can be done when a problem is so important that it arouses
the attention and concern of a preponderant majority of the American people. But the people must be made aware of the problem. It is
hardly necessary to belabor the relative importance of the control of nuclear warfare. A constitutional amendment may be, indeed, the
appropriate method. But the most difficult issue remains. What should the amendment provide? How can the problem be solved
specifically? The Constitution in section 8 of Article I stipulates that "[t]he Congress shall have power . . . To declare War. . . ." The idea
seems to be that only these many representatives of the people, reflecting the public will, should possess the power to commit the lives
and the fortunes of the nation to warfare. This approach makes much more sense in a democratic republic than entrusting the decision to
one person, even though he may be designated the "Commander in Chief" of the military forces. His power is to command the war after
the people, through their representatives, have made the basic choice to submit themselves and their children to war. There is a recurring
revelation of a paranoia of power throughout human history that has impelled one leader after another to draw their people into wars
which, in hindsight, were foolish, unnecessary, and, in some instances, downright insane. Whatever may be the psychological influences
that drive the single decisionmaker to these irrational commitments of the lives and fortunes of others, the fact remains that the behavior
is a predictable one in any government that does not provide an effective check and balance against uncontrolled power in the hands of
one human. We, naturally, like to think that our leaders are above such irrational behavior. Eventually, however, human nature, with all its
weakness, asserts itself whatever the setting. At least that is the evidence that experience and history give us, even in our own relatively
benign society, where the Executive is subject to the rule of law. [*1640] Vietnam and other more recent engagements show that it can
happen and has happened here. But the "nuclear football"--the ominous "black bag" --remains in the sole possession of the President.
And, most important, his decision to launch a nuclear missile would be, in fact if not in law, a declaration of nuclear war,
one which the nation and, indeed, humanity in general, probably would be unable to survive.
Mechanism Specific
Congress Better
Congress is better placed to act and most DAs to Congressional action apply to the
courts.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Proponents of a judicial solution, such as Solove, argue that privacy legislation has been less
comprehensive, less clear, and less adaptive to technological change than Fourth Amendment case
law.154 Solove notes that Congress has failed to regulate many technologies, including satellites, radio
frequency identification devices, and thermal imaging devices.155 He argues that privacy statutes are
“just as unclear as the Fourth Amendment—perhaps even more so.”156 And, citing the Electronic Communications
Privacy Act as well as the 1934 predecessor to the Wiretap Act, he suggests that Congress is not particularly adept at crafting privacy legislation
when it does act.157
But these criticisms of legislative action are overstated or apply equally to judicial
action. While it is not difficult to list areas in which Congress has failed to act (or to act comprehensively), one
can just as easily point to technologies raising privacy concerns that courts have failed to regulate
through the Fourth Amendment: aerial surveillance, chemical testing, and communications systems,
to name a few.158 And even when courts do act, they often do so decades late. Jones was decided
seventeen years after the Global Positioning System went online in 1995.159 Warshak was decided
twenty-four years after Congress passed the Electronic Communications Privacy Act.160 Riley was
decided thirty-one years after the FCC approved the first mobile phone for commercial use.161 And
the third-party doctrine continues to prevent meaningful constitutional protection for users’
documents kept in cloud storage.162 Congress may not have the best record on proactively protecting citizens from technological
threats to privacy, but neither do the courts. Moreover, although Solove is correct that privacy statutes are often
more complex than Fourth Amendment doctrine, the relative simplicity of courts’ privacy solutions
can present their own problems. Fourth Amendment doctrine is typically binary. Under the Fourth
Amendment, courts generally either recognize a reasonable expectation of privacy or they do not. If courts find this reasonable expectation of
privacy, the Fourth Amendment will require a warrant and probable cause; if not, the Constitution provides little or no protection. The
bluntness of Fourth Amendment doctrine causes courts to hesitate before extending privacy
protections to new areas,163 and it narrows the range of policy solutions that courts may
implement.164 Congress, on the other hand, has many policy levers it may pull, including intermediate
evidentiary showing requirements for police (for example, “reasonable suspicion” instead of “probable cause”),165 notice
requirements,166 and civil remedies.167 In regulating a technological issue such as metadata, where small
amounts reveal little but large amounts reveal a great deal, gradations in protection are not only
appropriate but preferable to courts’ typical all-or-nothing approach to the Fourth Amendment.
Courts Can’t
Courts can’t coherently apply the Mosaic theory.
Kerr, 2012 (Orin S., Professor of Law @ George Washington University Law School “THE MOSAIC
THEORY OF THE FOURTH AMENDMENT” Michigan Law Review Lexis)
The mosaic theory amounts to an awkward half-measure. Under the sequential approach, courts traditionally have two
options when deciding how to regulate police conduct. They can say that the conduct is never a Fourth Amendment
search, but that legislatures can regulate the conduct by enacting statutory protections; or else they
can say that conduct is always a Fourth Amendment search. The mosaic theory offers a vague middle
ground as a third option. The theory allows courts to say that techniques are sometimes a search.
They are not searches when grouped in some ways (when no mosaic exists) but become searches when
grouped in other ways (when the mosaic line is crossed). Identifying those contexts is extremely difficult, however,
such that the challenges of the method outweigh its alleged benefits. As Part III showed, implementing the
mosaic theory raises a large number of novel and complex questions that courts would need to
answer. It is hard to see how courts can answer all these questions coherently. Indeed, even the proponents of
the mosaic approach don’t seem to have any idea how it should apply.190
Can’t implement the theory – too slow technology changes.
Kerr, 2012 (Orin S., Professor of Law @ George Washington University Law School “THE MOSAIC
THEORY OF THE FOURTH AMENDMENT” Michigan Law Review Lexis)
The first difficulty with the mosaic theory is the most obvious: Its implementation raises so many
difficult questions that it will prove exceedingly hard to administer effectively. Because the mosaic theory
departs dramatically from existing doctrine, implementing it would require the creation of a new set of Fourth
Amendment rules – a mosaic parallel to the sequential precedents that exist today. The problem is not only the number of questions,
but their difficulty. Many of the questions raised in Part III of this article are genuine puzzles that Fourth Amendment text, principles and history
cannot readily answer. Judges should be reluctant to open the legal equivalent of Pandora’s Box. It
is particularly telling that not
even the proponents of the mosaic theory have yet proposed answers for how the theory should be
applied. For example, a group of Fellows at Yale’s Information Society Project who endorse the mosaic approach simply dismissed the
conceptual difficulties of its implementation on the ground that answering such puzzles is “why we have judges.”198 A pro-mosaic amicus brief
in Jones signed by several prominent legal academics was similarly nonresponsive. Although the mosaic approach requires “tough decisions” to
be made, the brief noted, courts have encountered difficult questions elsewhere in Fourth Amendment law.199 While
one can admire
such confidence in the capabilities of the judiciary, it would provide more comfort if proponents of
the mosaic theory would at least be willing to venture guesses as to how it should apply. The
challenge of answering the questions raised by the mosaic theory has particular force because the
theory attempts to regulate use of changing technologies. Law enforcement implementation of new
technologies can occur very quickly, while judicial resolution of difficult constitutional questions
occurs at a more glacial pace. As a result, the constantly-evolving nature of surveillance practices
could lead new questions to arise faster than courts can settle them. Old practices would likely be
obsolete by the time the courts resolved how to address them, and the newest surveillance practices
would arrive and their legality would remain unknown. Like Lucy and Ethel trying to package candy on the ever-faster
conveyor belt, 200 the mosaic theory could place judges in the uncomfortable position of trying to settle a wide range of novel questions for
technologies that are changing faster than the courts can resolve how to regulate them. Consider the changes in location-identifying
technologies in the last three decades. Thirty years ago, the latest in police technologies to track location was the primitive radio beeper seen in
Knotts. But radio beepers have gone the way of the 8-track tape. Today the police have new tools at their disposal that were unknown in the
Knotts era, ranging from GPS devices to cell-site records to license-plate cameras. This rapid pace of technological change creates major
difficulties for courts trying to apply the mosaic theory: If
the technological facts of the mosaic change quickly over
time, any effort to answer the many difficult questions raised by the mosaic theory will become
quickly outdated. Courts may devise answers to the many questions discussed in Part III, but by the time they do the relevant technology
is likely to have gone the way of the radio beeper.
Reasonable expectation of privacy is based on a probabilistic view of the fourth
amendment. That fails because society has no idea what surveillance is common or
rare.
Kerr, 2012 (Orin S., Professor of Law @ George Washington University Law School “THE MOSAIC
THEORY OF THE FOURTH AMENDMENT” Michigan Law Review Lexis)
The third problem with the mosaic theory is that most formulations of it are based on a probabilistic
approach to the reasonable expectation of privacy test that proves ill-suited to regulate technological
surveillance practices. Supreme Court decisions have utilized several different inquiries for what makes an expectation of privacy
constitutionally reasonable.201 In some cases the Court has looked to what a reasonable person would perceive as likely;202 in other cases the
Court has looked to whether the particular kind of information obtained is worthy of protection;203 in some cases the Court has looked to
whether the government violated some legal norm such as property in obtaining the information;204 and in other cases the Court has simply
considered whether the conduct should be regulated by the Fourth Amendment as a matter of policy.205 Use of these multiple inquiries (what
I have called “models”) of Fourth Amendment protection allows the Court to adopt different approaches in different contexts, ideally selecting
the model that best identifies the need for regulation in that particular setting.206 For the most part, formulations of the mosaic theory rest on
the first of these approaches – what a reasonable person would see as likely. I have called this the
probabilistic approach to
Fourth Amendment protection,207 as it rests on a notion of the probability of privacy protection. The
more likely it is that a person will maintain their privacy, the more likely it is that government
conduct defeating that expectation counts as a search. Under this model, the Fourth Amendment guards against surprises.
The paradigmatic example is Bond v. United States,208 which involved government agents manipulating the duffel bag of a bus passenger to
identify a wrapped brick of drugs inside it. A bus passenger expects other passengers to handle his bag but not “feel the bag in an exploratory
manner,”209 the Court held, so the exploratory feel violated a reasonable expectation of privacy. Both Judge Ginsburg and Justice Alito
authored mosaic opinions that rely on such probabilistic reasoning. Judge Ginsburg deemed long-term GPS monitoring a search because no
stranger could conduct the same level of monitoring as a GPS device. Justice Alito reached the same result on the grounds that a reasonable
person would not expect the police to obtain so much information. The probabilistic approach presents a poor choice to regulate technological
surveillance because most individuals lack a reliable way to gauge the likelihood of technological surveillance methods. The
probabilistic
expectation of privacy applied in Bond relied on widespread and repeated personal experience. Bus
passengers learn the social practices of bus travel by observing it first-hand. In contrast, estimating
the frequency of technological surveillance practices is essentially impossible for most people (and
most judges). Surveillance practices tend to be hidden, and few understand the relevant technologies.
Some people will guess that privacy invasions are common, and others will guess that they are rare.
But none will know the truth, which makes such probabilistic beliefs a poor basis for Fourth
Amendment regulation.
Courts can’t administer the mosaic theory
Kerr, 2012 (Orin S., Professor of Law @ George Washington University Law School “THE MOSAIC
THEORY OF THE FOURTH AMENDMENT” Michigan Law Review Lexis)
The concurring opinions in Jones invite lower courts to experiment with a new approach to the Fourth
Amendment search doctrine. The approach is well-intentioned, in that it aims to restore the balance of
Fourth Amendment protection by disabling the new powers created by computerization of
surveillance tools. But despite being well-intentioned, the mosaic theory represents a Pandora’s Box
that courts should leave closed. The theory raises so many novel and difficult questions that courts
would struggle to provide reasonably coherent answers. By the time courts worked through answers
for any one technology, the technology would likely be long obsolete. Mosaic protection also could
come at a cost of lost statutory protections, and implementing it would require courts to assess
probabilities of surveillance that judges are poorly equipped to evaluate. In this case, the game is not
worth the candle. The concurring opinions in Jones represent an invitation that future courts should
decline. Instead of adopting a new mosaic theory, courts should consider the need to engage in
equilibrium-adjustment within the confines of the traditional sequential approach.
Addons
Warming Addon
2AC — Warming Addon
2 internal links:
1) Cloud computing efficiency eliminates huge amounts of carbon emission
released by companies — key to checking warming
Markovic et al. 13 — Dragan S. Markovic, PhD in Electrochemistry, Professor at Singidunum
University, Belgrade; Dejan Zivkovic, Noise Specialist at WESA, degree in Mechanical and Industrial
Engineering at University of Belgrade; Irina Branovic, Researcher at the Mathematical Institute of the
Serbian Academy of Sciences at University of Belgrade, previous Senior Software Engineer, Team Lead at
Betware, PhD in Information Engineering from Università degli Studi di Siena; Ranko Popovic, Professor
at Singidunum University, Belgrade; Dragan Cvetkovic, 2013 (“Smart power grid and cloud computing,”
Renewable and Sustainable Energy Reviews, Issue 24, pages 566-577, May 23, Available Online at
http://www.sciencedirect.com.proxy1.cl.msu.edu/science/article/pii/S136403211300227X, Accessed
7/16/15)
Many studies have found that cloud computing is a more energy-efficient choice in general compared
to running in-house IT operations. The studies confirm that among other benefits, cloud computing
delivers multiple efficiencies and economies of scale, which contribute to the reduction of energy
consumption per unit of work.
Report [42] released by Pike Research anticipates that much of the work done today in in-house data
centers will be outsourced to the cloud by 2020. This will lead to a 38% reduction in worldwide data
center energy expenditures by 2020, thanks to a reduction of 31% in data centers electricity
consumption (from 201.8 terawatt hours (TWh) in 2010 down to 139.8 TWh in 2020).
Another report [43], released by research firm Verdantix, estimates that cloud computing could
enable companies to save $12.3 billion off their energy bills. That translates into carbon emission
savings of 85.7 million metric tons per year by 2020.
The findings from a study [44], commissioned by Microsoft and conducted by Accenture and WSP
Environment & Energy, demonstrate that businesses that choose to run business applications in the
cloud can help reduce energy consumption and carbon emissions per user by a net 30% or more
versus running those same applications on their own infrastructure. The reduction goes even to as
much as 90% for the smallest and least efficient businesses.
2) Cloud computing solves for large scale transfer and storage of data — key to
smart grid infrastructure
Markovic et al. 13 — Dragan S. Markovic, PhD in Electrochemistry, Professor at Singidunum
University, Belgrade; Dejan Zivkovic, Noise Specialist at WESA, degree in Mechanical and Industrial
Engineering at University of Belgrade; Irina Branovic, Researcher at the Mathematical Institute of the
Serbian Academy of Sciences at University of Belgrade, previous Senior Software Engineer, Team Lead at
Betware, PhD in Information Engineering from Università degli Studi di Siena; Ranko Popovic, Professor
at Singidunum University, Belgrade; Dragan Cvetkovic, 2013 (“Smart power grid and cloud computing,”
Renewable and Sustainable Energy Reviews, Issue 24, pages 566-577, Available Online at
http://www.sciencedirect.com.proxy1.cl.msu.edu/science/article/pii/S136403211300227X, Accessed
7/16/15)
Globally deployed smart grid infrastructure will need information technology (IT) support to integrate
data flow from numerous appliances, to predict power usage and respond to events. Cloud platforms
are ideally suited to support such data intensive, always on applications.
The need for large scale real-time computing, communication, transfer and storage of data generated
by smart grid technologies is expected to be addressed by cloud computing services. Some recent
papers such as [29] indicate that flexibility and scalability of storing and processing large amounts of
data and cost savings are some of the major benefits of using cloud solutions for smart grid
applications.
Utility companies can use cloud model to store and process large quantities of data collected from
smart meters and appliances, as well as sensors deployed across the smart grid. According to [30],
cloud platforms are an intrinsic component in creating software architecture to drive more effective
use of smart grid applications. The primary reason is that cloud data centers can accommodate the
large-scale data interactions that take place on smart grids and are better architected than centralized
systems to process the huge, persistent flows of data generated across the utility value chain.
Bulk data for demand-response analysis in smart grids comes from sensors and smart meters, and is
transmitted in different intervals (from few seconds to several hours) by using Internet protocols. Since
smart grid applications are distributed, a smart grid cloud must support different platforms with
efficient streaming technologies. At present, Cloud providers do not provide specialized data
abstractions for streams, other than TCP sockets. An open research field is stream processing in public
and private clouds, and across diverse platforms. One of the most important features of Smart Grid is a
two-way communication with consumers; according to [13], most of the ongoing activity in the Smart
Grid communication is currently in the Consumer domain. Smart devices have started to reach the
consumer market but the interoperability and complete solution for Smart Grid is still far away.
And, Status quo efforts and power grids fail — only smart grids can avert mass
blackouts and severe climate change
Mazza 7 — Patrick Mazza, independent journalist-activist focused on climate and global sustainability,
former research director at Climate Solutions, and a founder of the group, provided the research basis
for the Northwest Biocarbon Initiative, wrote several influential national papers on emerging clean
energy systems, 2007 (“Why the Smart Grid is important,” Grist, June 11, Available Online at
http://grist.org/article/adventures-in-the-smart-grid-no-1/, Accessed 7/16/15)
It’s the world’s largest machine — the interconnected network of power plants, transmission towers,
substations, poles, and wires that make up the power grid. When you flip the switch you expect the
juice to flow and don’t have much reason to think about it, except during the occasional blackout. Power
engineers and energy wonks might get passionate about the grid, but for most people it’s just a
background fact of life.
It’s time to bring the grid into the foreground, because it sits at the exact center of the world’s
most crucial issue, global climate change. The power grid is the source of one-third of U.S. global
warming emissions. Unless we clean it up we cannot avert severe climate change. The grid is also the
key to electrifying transportation and making more effective use of heat generated for buildings and
industry, source of the vast bulk of remaining emissions. The grid can be the ultimate climate saver.
But today’s power grid cannot do it. A system built on central generating stations, little changed from
the first power grids deployed in the late 1800s, lacks flexibility and smarts. We need a new grid capable
of networking millions of distributed energy devices such as solar panels, wind turbines, electric
vehicles, and smart appliances. We need an internet of energy that employs the latest in digital
technologies. We need a Smart Grid.
On August 14, 2003, an overheated transmission line in Ohio sagged into the power grid’s greatest
natural enemy, a tree branch. The resulting power failure cascaded from the Midwest to Broadway in
seconds. Power grid operators were quickly on the phones trying to grope through the grid equivalent
of the fog of war, but it was too late. The biggest blackout in U.S. history was underway, leaving 50
million people without power.
The event underscores a crucial fact. Of all major infrastructures, the power grid is the least automated
by digital technology. Contrast the big box chain which keeps a constant inventory linking checkout
stand to warehouse with the utility which must send its linemen into the field to hunt out downed
power lines. Or set the control room operator balancing power plants and demands against the internet,
constantly rerouting information flows. Modern digital systems rely on real-time data and automated
responses, while the grid functions on delayed information and human decisions.
The immediate implication is declining power reliability as demands on the grid grow. Columbia
University grid researcher Roger Anderson notes that “since 1998, the frequency and magnitude of
blackouts has increased at an alarming rate … If present trends continue a blackout enveloping half the
continent is not out of the question.”
But the more serious long-term implication is that the grid cannot take on the tasks it needs to
accomplish to reduce global warming pollution. Look on the grid of today as if it were the old
computer network with a mainframe computer at the hub and terminals at the end of the spokes. The
“mainframe” of the grid is the central power station. Transmitting power out the spokes to end users is
a relatively simple management task compared to a system in which power generators are distributed
throughout the network and power flows are many-way. Utility engineers typically resist distributed
generation specifically because it makes their management task more complex. Most states have now
enacted net metering laws which require utilities to interconnect small-scale distributed generators, but
cap the total amount in the system to avoid destabilizing the grid.
So far solar photovoltaic panels, small-scale wind-power generators, fuel cells, and other localized
generators have not penetrated far enough into the market to raise much of a challenge. But consider
the moment at which breakthroughs are achieved and distributed generation experiences an explosive
takeoff, as a number of observers project for solar PV power. Then power distribution systems will
have to be automated. In effect, an information internet backbone will automatically route and
manage the complex power flows of the energy internet.
Cogeneration is prospectively one of the largest distributed energy sources. Building and industrial
heat could be recycled to generate electricity on-site. Interconnection to the grid can make the
business case for a cogen unit, providing a market for surplus and a grid backup when the unit is
down. But utilities discourage these kind of connections, again, because they pose complex
management problems. Smart Grid systems will make cogen far more economically feasible.
In transportation, improvements in battery technology are stirring new interest in electrified options,
including plug-in hybrids and pure battery vehicles. Mass-scale electrified transport will require Smart
Grid systems. One function will be to match charging times to clean power availability. For example, in
many regions wind power tends to be generated at night. A Smart Grid can send real-time signals to
plugged-in vehicles alerting them to charge when turbine blades are turning. Another Smart Grid
function will be to manage vehicle-to-grid networks in which electrified fleets supply power to the
grid as well as receive power from it. Making intermittent renewables into a 24-7 power source
requires energy storage, and our cars which generally sit parked 22 hours a day are an ideal match.
Smart systems will manage “V2G” networks.
An energy systems revolution is upon us, and the Smart Grid is at its very center. In future
installments I will drill down more into the capabilities and potentials of the Smart Grid, as well as the
obstacles and challenges along the road there. Meanwhile, for those who want to read up, check out my
paper, “Powering Up the Smart Grid” for one of the most complete overviews of the topic.
[Insert warming impact card]
Competitiveness Addon
2AC — Competitiveness Addon
Big data and cloud computing lead to increased critical thinking and
competitiveness
Picciano 14 — Bob Picciano, Senior Vice President of IBM’s Information and Analytics Group with
global responsibility for the strategy and management of IBM’s big data and analytics and content
management portfolio, 2014 (“Why Big Data Is The New Natural Resource,” Forbes, June 30, Available
Online at http://www.forbes.com/sites/ibm/2014/06/30/why-big-data-is-the-new-natural-resource/,
Accessed 7/16/15)
Facebook, Twitter, Pinterest. Destroyers of our attention span or innovations that make us smarter
and closer? We’re still trying to understand how today’s technologies—which many can’t seem to live
without—are transforming us.
Still, there is one change they’ve brought about that’s indisputably positive, one that most people
intuitively get.
And it’s this: if we live in an information age, then the flip side is we’re all information analysts. Cloud
computing, mobile and social computing are all changing how we communicate. Our strategy for big
data and analytics has some core tenets, which provide a common experience. The combination of
cloud, social, mobile and big data and analytics provides the user with a role-specific experience that
is easy-to-use and customizable. We know from our enterprise experience that the cloud enables
organizations to start small, grow rapidly and scale massively.
Think about it. Every minute, every instant that we’re online, we’re trying to make sense of a blizzard
of data.
When you navigate to a Web site, your brain jumps into action. You’ve been trained to rank, analyze,
prioritize, and dismiss a tumble of news headlines, video snippets, ads and links. So you can find
information quickly — or lose yourself when you have time to kill. On Facebook, as you scroll down your
news feed, your mind automatically ticks off and categorizes every new update based on what you’ve
learned, online and off, about your friends and colleagues and their current interests and passions.
Why is that a big deal? When data is a resource that anyone can mine, then decision-making
transitions from being reserved for the few, and becomes a central issue for the masses. In the past,
only statisticians, experts trained in their industries, or information scientists managing super
computers had powerful analytics tools at their fingertips.
But now we have data and approachable dashboards, crowdsourcing portals, and apps that help us
analyze and manipulate data like no generation before. We can dig down into almost every subject
imaginable, whether it’s investments and retirement plans, local school districts, ancestors or travel
options. Services ranging from Spotify to Pinterest all rely on us categorizing, analyzing and getting more
out of data.
That makes us more powerful as consumers, but it also has huge implications for our careers and our
companies. Because even as information access is becoming commonplace, the ability to analyze data
and add expertise to it is turning into a competitive advantage. Like any natural resource, some data
can be a commodity. It’s how it’s used in combination with other valuable information sources that
can help you stand out in your profession or help your company become a leader in its industry.
To make that happen, our companies need to arm us—and the rest of our colleagues throughout our
organizations—with big data and analytics platforms and tools so we can understand our businesses
and jobs. We need to learn how to use these tools to make sense of everything from hiring decisions
to customer service issues, not simply to track what happened in the past, but why it happened, what’s
likely to occur in the future and what’s the best course of action to take. So we can help figure out how
to keep ahead of a new rival in a market or dig into why a high number of customer service calls are
ending poorly.
By using data and analytics, not just instinct, we can make our companies more competitive. Big data
techniques allow us to uncover hidden connections among huge masses of data so we can react more
quickly to changes in the market or customers’ tastes. And looking at data as a natural resource can
help us think up new services based on that data.
Competitiveness among businesses is key to stimulate the economy — 4 warrants
Kolasky 2 — William J. Kolasky, Deputy Assistant Attorney General in the Antitrust Division for the U.S.
Department of Justice, 2002 (“The Role Of Competition In Promoting Dynamic Markets And Economic
Growth,” Address Before the Tokyo America Center, November 12, Available Online at
http://www.justice.gov/atr/speech/role-competition-promoting-dynamic-markets-and-economicgrowth, Accessed 7/16/15)
The most obvious benefit of competition is that it results in goods and services being provided to
consumers at competitive prices. But what people often forget is that producers are also consumers.
They must buy raw materials and energy to produce their products, telecommunications services to
communicate with their suppliers and customers, computer equipment to keep track of their
inventories, construction services to build their plants and warehouses, and so forth. To the extent that
prices for these goods and services are higher than those of their foreign competitors because of a
lack of competition in those markets, firms will be less competitive and will suffer in the marketplace.
A second benefit of competition is its effect on efficiency and productivity. Companies that are faced
with vigorous competition are continually pressed to become more efficient and more productive.
They know that their competitors are constantly seeking ways to reduce costs, in order to increase
profits or gain a competitive advantage. With that constant pressure, firms know that if they do not
keep pace in making efficiency and productivity improvements, they may well see their market
position shrink, if not evaporate completely. It is exactly this process of fierce competition between
rivals that leads firms to strive to offer higher quality goods, better services and lower prices.
A third benefit of competition is its positive effects on innovation. In today's technology-driven world,
innovation is crucial to success. Innovation leads to new products and new production technologies.
It allows new firms to enter into markets dominated by incumbents, and is critical for incumbent firms
who want to continue their previous market successes and stimulate consumer demand for new
products. Competition drives innovation. Without competition, there would be little pressure to
introduce new products or new production methods. Without this pressure, an economy will lag
behind others as a center of innovation and will lose international competitiveness.
A fourth benefit of competition is that it fosters restructuring in sectors that have lost
competitiveness. It is difficult for governments to determine which sectors of the economy need to be
restructured, which firms in those sectors should remain or should cease to exist, and when it is best to
engage in such restructuring. Governments are subject to political constraints and pressures, which
more often than not lead to sub-optimal decisions. The competitive process, on the other hand, is
unbiased. It forces decisions to be based on market factors, such as demand, product uses, costs,
technologies, rather than the incomplete information in the possession of government bureaucrats.
The competition for capital and other resources by firms throughout the economy leads to money and
resources flowing away from weak, uncompetitive sectors and firms and towards the strongest, most
competitive sectors, and to the strongest and most competitive firms within those sectors. In these
ways, the very operation of the competitive process makes decisions on restructuring clear, and leads
to the strongest and most competitive economy possible.
US economy is still the lynchpin of the global economy — most recent evidence
Brett 15 — Shane Brett, author of "The Future of Hedge Funds", founder of "Global Perspectives", co-founder of "Gecko", received his Bachelor of Business Studies (Hons), Accounting & Finance from
Dundalk Institute of Technology, received his MBA in Management Consulting from the University of
Wales, has 19 years experience in hedge fund /asset management operations, consultancy &
technology, including programme & product management at top fund managers & administrators
worldwide, 2015 (“The Global Economy In 2015 - 5 Key Trends,” Seeking Alpha, January 11, Available
Online at http://seekingalpha.com/article/2811155-the-global-economy-in-2015-5-key-trends, Accessed
7/16/15)
The US economy created 7,000 jobs per day in 2014 and this remarkable rate of employment growth
is set to escalate in 2015. The perceived decline of American power has been greatly exaggerated.
Commentators confuse the current US unwillingness to wield hard power, for a lack of underlying real
power. They also confuse deadlock in Washington with the underlying dynamism of many US regions
and States.
The US still controls the global economy, all the world's oceans, its trade routes and its reserve
currency. It spends nearly as much on defence as the rest of the world put together. This will not
change anytime soon.
In 2015, the US will continue to be the global engine for growth, enterprise and innovation, as it has
been for most of the last century.
This should not be surprising. The English-speakers (i.e. the USA/UK) have run the world for 3 centuries
now. They have consistently defeated all challengers to world hegemony that have appeared over this
time (Philip II, Louis XIV, Napoleon, Kaiser Wilhelm II, Hitler, Stalin etc.).
Despite the chorus of BRIC hysteria over the last few years, the economic growth in these countries
has taken place because they adopted US policies of trade liberalization, economic freedom and a
free market. In 2015, they will endure a major emerging market crisis. Their power will not surpass
the US for decades (if ever).
[insert economic decline impact card]
Off Case
SEC DA
Non-Unique
SEC enforcement is failing – insider trading defendants are winning
Morrison and Foerster 2/21, (Law firm – includes prominent attorneys with decades of experience,
former prosecutors, former SEC enforcement attorneys, former senior officials of the CFTC and FINRA,
and in-house forensic accounting experts, “2014 Insider Trading: Annual Review”, 2/21/2015,
http://www.mofo.com/~/media/Files/ClientAlert/2015/02/150211InsiderTradingAnnualReview.pdf)
BBer
Speaking of streaks, the SEC’s losing streak in civil trials continued in 2014. Last year’s Review focused on the
SEC’s high-profile loss against Mark Cuban in October 2013. In 2013, defendants also defeated insider
trading claims in SEC v. All Know Holdings Ltd.; SEC v. Kovzan; SEC v. Jensen; and SEC v. King Chuen Tang. In 2014, that streak
continued, with defendants prevailing
at trial in eight more cases, including high profile losses for the SEC in SEC v. Obus (S.D.N.Y.) and SEC v. Moshayedi (C.D. Cal.).76 Altogether,
twenty-one individuals prevailed against the SEC in those
thirteen cases from June 2013 through August 2014, either on summary judgment or at trial. In contrast, excluding
consent judgments and judgments based on a criminal conviction or plea, the SEC had only one trial
win and prevailed only once on summary judgment in insider trading cases during that time period,
securing judgments against two individuals in each of those two cases.77
The SEC’s Shift to Administrative Proceedings: Not surprisingly, the SEC has grown shy of juries. In last year’s Review, we
highlighted the SEC’s shift towards bringing more insider trading actions as administrative proceedings,
and we included administrative proceeding outcomes in our compilation of SEC sanctions for the first
time (prior years had included only enforcement actions filed in court). The trend picked up steam in 2014 and remains one to watch in 2015.
The SEC’s increasing use of administrative proceedings generated heated public debate in 2014, most notably between Judge Jed Rakoff of the
United States District Court for the Southern District of New York, and Andrew Ceresney, the Director of the SEC Division of Enforcement.
DOJ Prosecutes
Insider trading is prosecuted by the Department of Justice
Padilla 08 – Alexandre Padilla is Assistant Professor of Economics at the Metropolitan State College of
Denver. 2008 (“HOW DO WE THINK ABOUT INSIDER TRADING? AN ECONOMIST'S PERSPECTIVE ON THE
INSIDER TRADING DEBATE AND ITS IMPACT” Journal of Law, Economics & Policy. Spring. Available via
LexisNexis. Accessed 07-10-2015)
Currently, insider trading laws make insider trading violations both a civil and criminal offense. Civil
prosecution is usually brought by the Securities and Exchange Commission in a United States district court.
The civil penalty is to be determined by the court district but cannot exceed three times the profit
gained or loss avoided as a result of illegal insider trading. One should note that any civil penalty must
be paid to the Treasury of the United States. Criminal charges are traditionally brought by the
Department of Justice. Insider trading is punishable by monetary penalties and imprisonment up to 10
years. However, the standard of proof in criminal prosecution is higher and makes illegal insider trading
more difficult to prove. If insider trading were to be decriminalized, the Department of Justice could no
longer indict and prosecute illegal insider trading. One could argue that criminalizing insider trading
increases the costs of engaging in illegal insider trading and therefore would likely discourage more
illegal insider trading than just imposing civil/monetary sanctions.
Impact Not Unique
Insider trading is high
Barkow 14 – Robert E. Barkow. Segal Family Professor of Regulatory Law and Policy and Faculty
Director, Center on the Administration of Criminal Law. NYU School of Law. 2014 “CAPITAL MARKETS,
THE CORPORATION, AND THE ASIAN CENTURY: GOVERNANCE, ACCOUNTABILITY, AND THE FUTURE OF
CORPORATE LAW” Seattle University Law Review. Available via LexisNexis. Accessed on 7-10-2015
Despite increasing sanctions and the spread of corporate compliance programs to the point of being
ubiquitous, n51 business crime remains a pressing problem. n52 In its most recent Financial Crimes
Report to the Public, the Federal Bureau of Investigation (FBI) points to steadily increasing financial
crime caseloads between 2007 and 2011, and cites insider trading, corporate fraud, and securities and
commodities fraud as being on the rise, particularly in the wake of the financial crisis and continuing
economic instability. n53 And serious offenses have occurred at [*446] institutions with compliance
programs, casting doubt on self-policing as the primary policing strategy. n54
In addition to the economic crime cases that have come to light, there is a widespread view among
the public that the fiscal crisis of 2008 was fueled in part by corporate wrongdoing. A 2010 survey
found that a majority of respondents believed financial crime contributed to the current economic crisis,
and nearly half agreed that the government is not devoting enough resources to combat business crime.
n55 Many lament that more individuals and corporations, particularly at the largest financial
institutions, have not been charged. n56 This environment set the stage for a new approach to policing
business crime.
Insider trading is high
Fried 14 – Jesse M. Fried, Professor of Law, Harvard Law School. 2014. (“Insider Trading via
the corporation” University of Pennsylvania Law Review 162. Rev. 801, March. Available for
access via LexisNexis. Accessed 07-10-2015)
The strict trade-disclosure rules for insiders reflect a strong, longstanding consensus in the United
States that a corporation's insiders - its officers, directors, and controlling shareholder (if any) - should
not be permitted to profit freely from their access to inside information about the firm. These rules
are part of an elaborate set of regulations designed to reduce insiders' [*804] ability to engage in
insider trading: buying and selling a firm's shares on inside information. n10
Unfortunately, U.S. policymakers have failed to grasp that when insiders are subject to strict trade-disclosure requirements and firms are not, insiders have a strong incentive to exploit the relatively lax
trade-disclosure rules that apply to firms in order to engage in indirect insider trading: having the firm
buy and sell its own shares at favorable prices to increase the value of the insiders' equity. Such indirect
insider trading - made possible by insiders' control over the firm's assets - can generate substantial
profits for insiders. If, for example, insiders own 10% of a firm's equity, they will capture approximately
$ 1 out of every $ 10 in insider-trading profits generated by the firm when it buys and sells its own
shares on inside information.
Insider Trading Good
Turn: Insider trading causes wealth-producing companies to become wealthier and
turns the link
Weiner 13 – Keith Weiner, PhD from the New Austrian School of Economics, leading authority in areas
of gold money and credit, president of the Gold Standard Institute, Author on economics, (“Criminalizing
Insider Trading Promotes Economy-Weakening Egalitarianism” Dec 11 2013, Forbes, available online at
http://www.forbes.com/sites/realspin/2013/12/11/criminalizing-insider-trading-promotes-economy-weakening-egalitarianism/) N.H
There is a gross injustice in the law. The recent Securities and Exchange Commission case against SAC
Capital Advisors provides a clear illustration. SAC allegedly insider traded, and therefore the SEC charged them with fraud.
Insider trading is when you buy or sell based on knowledge that you have, but others don’t. An insider
trader can be charged with fraud. Wait. These are public stocks being traded. How do we get from
buying and selling stocks to fraud? For an explanation, we have to look to someone who truly
understands this kind of issue. George Orwell used thought control as a major theme in his novel 1984.
He showed that if a dictator controls the language we use then he can limit the very thoughts that
people can have. Most people don’t understand the stock market, much less securities regulation.
They do understand fraud at a basic level. Fraudsters are very dangerous criminals, because they can
hide unnoticed while abusing the trust of a large number of victims. Who wouldn’t want fraudsters to
go to jail? The key to fraud is deception. The fraudster lies to his victims, in order to take their money.
Even if you think that insider trading is bad, you can’t really say with a straight face that it is an act of
deception. You can quibble about fairness and level playing fields, but insider trading does not depend
on lying. It simply isn’t fraud. By calling it fraud, the government is trying to control how we think, and
to shut down discussion. Fraud aside, many people would still contend that insider trading is unfair, and
should be illegal. This is because of what they think the purpose of the market is, and what it should
achieve above all else: egalitarianism. One popular view of the stock market is that prices reflect
everything known about a company. Therefore a trader cannot pick stocks and get better returns than
other investors. In this view, it’s good for everyone when stock prices are rising, as they have been since
2009. Conversely, it’s bad when stock prices are falling like in 2008. Not coincidentally, this is a big
reason why people who should know better support the Fed and Quantitative Easing. They think that a
central bank can engineer a steadily rising stock market. If someone makes money while others lose it,
well, that’s tantamount to a thief taking money from innocent victims. It goes against what many feel
is the primary purpose of the stock market—the reason why it exists—to let everyone make money
together. To state this view plainly, is to make it clear why it’s wrong. In fact, the stock market has an
economic purpose, which it can only fulfill when traders are free to make and lose money. If they’re
artificially kept from winning or losing, then the market cannot perform its function. The reason why
the stock market exists has been largely forgotten, so let’s state it explicitly. The stock market
allocates capital. When it’s allowed to function, it places capital into the hands of those who are
producing wealth. It removes capital from those who are destroying wealth. This is no mere
abstraction. When a wealth-producing company is starved for capital, we are all made a little less
wealthy. This company is blocked from producing products that people want to pay for, because it
cannot get capital. When a wealth destroying company is given capital it doesn’t deserve, we are also
made less wealthy. This company will consume the capital, and then go out of business. If insiders,
outsiders, upsiders, or downsiders help allocate capital away from companies that destroy wealth and
towards those who generate it, then they deserve their profits and we all should thank them. We should
not envy them their gains or seek to punish them. We should always be aware that there is no one and
nothing else that can allocate capital, other than the free market. Everyone has a rational self-interest
in proper capital allocation. Especially those who eat, surf the Internet, or fly in airplanes. And
especially those who hope to live long enough for life-extending technology, space travel, and 3D
holographic TV. Everyone has an interest in legalizing any strategy that traders can use to better
allocate capital, including and especially insider trading.
Insider trading solves the economy
Dolgopolov No Date — Stanislav Dolgopolov is a John M. Olin Fellow in Law and Economics at the
University of Michigan Law School, no date, (“Insider Trading”, The Concise Encyclopedia of
Economics, available online at http://www.econlib.org/library/Enc/InsiderTrading.html, date accessed
7/10/15 // K.K)
Another economic argument for insider trading is that it provides efficient compensation to holders of
large blocks of stock (Demsetz 1986; Thurber 1994). Such shareholders, who provide valuable corporate
monitoring and sometimes cannot diversify their portfolios easily—and thus bear the disproportionate
risk of price fluctuations—are compensated by trading on inside information. However, proponents of
regulation point out that such an arrangement would allow large shareholders to transfer wealth
from smaller shareholders to themselves in an arbitrary fashion and, possibly, provoke conflicts
between these two groups (Maug 2002). This concern may explain why the SEC, in 2000, adopted
Regulation FD (FD stands for “fair disclosure”) banning selective disclosure of information by
corporations to large shareholders and securities analysts. A common contention is that the presence
of insider trading decreases public confidence in, and deters many potential investors from, equity
markets, making them less liquid (Loss 1970). But the possibility of trading with better-informed
insiders would likely cause investors to discount the security’s price for the amount of expected loss
rather than refusing to buy the security (Carney 1987). Empirical comparisons across countries do not
clearly demonstrate that stricter enforcement of insider trading regulation has directly caused more
widespread participation in equities markets. Another argument is that insider trading harms market
liquidity by increasing transaction costs. The alleged reason is that market makers—specialized
intermediaries who provide liquidity by continuously buying and selling securities, such as NYSE
specialists or NASDAQ dealers—consistently lose from trading with insiders and recoup their losses by
increasing their bid-ask spread (the differential between buying and selling prices) (Bagehot 1971). Yet,
the lack of actual lawsuits by market makers, except in options markets, is strong evidence that
insider trading is not a real concern for them. Moreover, econometric attempts to find a relationship
between the bid-ask spread and the risk of insider trading have been inconsistent and unreliable
(Dolgopolov 2004). Empirical research generally supports skepticism that regulation of insider trading
has been effective in either the United States or internationally, as evidenced by the persistent
trading profits of insiders, behavior of stock prices around corporate announcements, and relatively
infrequent prosecution rates (Bhattacharya and Daouk 2002; Bris 2005). Even in the United States,
disclosed trading by corporate insiders generally yields them abnormal profits (Pettit and Venkatesh
1995). Thus, insider trading regulation may affect the behavior of certain categories of traders, but it
does not eliminate profits from trading on private information. The likely explanation for the fact that
profits remain is that the regulation shifts insiders’ emphasis from legal to illegal trading, changes
insiders’ trading strategies, or transfers profits to market professionals. For these reasons, some
scholars doubt the value of such laws to public investors; moreover, enforcement is costly and could
be dangerously selective. Several researchers have proposed that market professionals (notably,
securities analysts) be allowed to trade on inside information (Goshen and Parchomovsky 2001). These
researchers reason that such professionals enjoy economies of scale and scope in processing firm-specific and external information and are removed from corporate decision making. Therefore,
allowing market professionals to trade on inside information would create more liquidity in securities
markets and stimulate competition in the acquisition of information. The related argument is that the
“outside” search for information is more socially valuable, even if it is occasionally more costly, and
that trading by corporate insiders may crowd out securities research on external factors (Khanna
1997). Thus, the proponents of regulation argue that unrestricted insider trading would adversely affect
the process of gathering and disseminating information by the securities industry, and this point of view
has some empirical support (Bushman et al. 2005). On the other hand, permitting insiders to trade on
inside information may allow companies to pay managers less because they have insider-trading
opportunities. In fact, there is evidence from Japan and the United States that the cash portion of
executive salaries is lower when potential trading profits are higher (Hebner and Kato 1997; Roulstone
2003). If market professionals could trade legally on private information but insiders could not, public
shareholders would still lose, while being unable to recoup their trading losses in the form of higher
corporate profits because of lower managerial compensation (Haddock and Macey 1987). Despite
numerous and extensive debates, economists and legal scholars do not agree on a desirable
government policy toward insider trading. On the one hand, absolute information parity is clearly
infeasible, and information-based trading generally increases the pricing efficiency of financial
markets. Information, after all, is a scarce economic good that is costly to produce or acquire, and its
subsequent use and dissemination are difficult to control. On the other hand, insider trading, as
opposed to other forms of informed trading, may produce unintended adverse consequences for the
functioning of the corporate enterprise, the market-wide system of publicly mandated disclosure, or the
market for information. While the effects of insider trading on securities prices and insiders’ profits
have been extensively studied empirically, the incentive effects of insider trading and its impact on
the inner functioning of corporations are not well known. It also should be considered that individual
firms have an incentive to weigh negative and positive consequences of insider trading and decide,
through private contracting, whether to allow it. The case for having public regulation of insider trading
must, therefore, rest on such factors as inefficiency of private enforcement or insider trading’s overall
adverse impact on securities markets.
Insider trading key to economic stability
Miron 12 — Jeffrey A. Miron is Senior Lecturer and Director of Undergraduate Studies in the
Department of Economics at Harvard University and Senior Fellow at the Cato Institute, 2012 (“An
Economic Defense of Insider Trading”, 2/12/12, available at
http://atlassociety.org/commentary/capitalism-and-morality/capitalism-morality-blog/4932-an-economic-defense-of-insider-trading, date accessed 7/10/15 // K.K)
One side effect of the 2008 financial crisis has been renewed attention to the ban on insider trading.
This ban dates to 1934, when it was adopted in response to the Great Depression and the stock market
crash of 1929; Congress and various Supreme Court decisions have strengthened the ban since. In
recent years the Securities and Exchange Commission has pursued several high-profile insider cases,
such as that of Raj Rajaratnam, and Congress appears likely to ban “insider” trading by its own
members. Most policymakers, along with the general public, believe that insider trading should be
banned. Yet straightforward economic reasoning suggests the opposite. The most obvious effect of a
ban is delaying the release of relevant information about the fortunes of publicly traded companies.
This means slower adjustment of stock prices to relevant information, which inhibits rather than
promotes market efficiency. Imagine, for example, that the CEO of a pharmaceutical company learns
that a blockbuster drug causes previously unknown side effects. Absent a ban, the CEO might rush to
sell or short his company’s stock. This would have a direct effect on the share price, and it would signal
investors that something is amiss. Insider trading thus encourages the market to bid down the shares
of this company, which is the efficient outcome if the company’s fortunes have declined. Under a ban
on insider trading, however, the CEO refrains from dumping the stock. Market participants hold the
stock at its existing price, believing this is a good investment. That prevents these funds from being
invested in more promising activities. Thus the ban on insider trading leads to a less efficient
allocation of the economy’s capital. Whether these efficiency costs are large is an empirical question.
Short delays of relevant information are not a big deal, and the information often leaks out despite the
rules. Thus, the damage caused by bans is probably modest. But efficiency nevertheless argues against
a ban on insider trading, not in favor. And bans have other negatives. Under a ban, some insiders break
the law and trade on inside information anyway, whether by tipping off family and friends, trading
related stocks, or using hidden assets and offshore accounts. Thus, bans reward dishonest insiders who
break the law and put law-abiding insiders at a competitive disadvantage. Bans implicitly support the
view that individuals should buy and sell individual stocks. In fact, virtually everyone should just buy
index funds, since picking winners and losers mainly eats commissions, adds volatility, and rarely
improves the average, risk-adjusted return. Thus, if policy is worried about small investors, it should
want them to believe they are at a disadvantage relative to insiders, since this might convince them to
buy and hold the market. Bans instead encourage people to engage in stupid behavior by creating the
appearance – but not the reality – that everyone has access to the same information. The ban on
insider trading also makes it harder for the market to learn about incompetence or malfeasance by
management. Without a ban, honest insiders, and dishonest insiders who want to make a profit, can
sell or short a company’s stock as soon as bad acts occur. Under a ban, however, these insiders cannot do
so legally, so information stays hidden longer. Thus, bans on insider trading have little justification.
They attempt to create a level playing field in the stock market, but they do so badly while inhibiting
economic efficiency.
Insider trading solves the economy
Fischel 84 — Daniel R. Fischel is a Professor of Law, University of Chicago Law School, 1984, (“Insider
Trading and Investment Analysts: An Economic Analysis of Dirks v. Securities and Exchange
Commission”, Hofstra Law Review, 1984, available at
http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=2415&context=journal_articles date
accessed 7/10/15 // K.K)
This analysis suggests that we should analyze the issue of whether trading by corporate managers on
inside information is consistent with their fiduciary duties by reference to the effect of such trading on
investors' wealth. If insider trading increases investors' wealth, then investors would not contract to
prohibit the practice and would receive no benefit from a prohibition implied by law. If corporate
managers are allowed to trade on the basis of inside information, shareholders' wealth might increase
for two reasons. First, managers might possess increased incentives to create valuable information,
and thereby increase the value of the firm, if they are allowed to profit by trading. Second, insider
trading may provide firms with a valuable additional mechanism for communicating information to
the market. An inevitable problem in all principal-agent relationships is that of designing a
compensation scheme that minimizes divergence of interest. If it were possible to observe the effort
and output of individual managers costlessly, it would be possible to limit agency costs by continual
renegotiations of compensation packages. Those managers who work hard and perform well would
be rewarded; those who shirk and perform poorly would be penalized. It is difficult, however, to
monitor effort and output accurately. Because managers commonly work in teams, it is difficult to
isolate the productivity and contribution of any one manager. The rational manager will reason that
other team members will share the benefits of good performance and will bear some of the costs of
bad performance. This recognition by individual managers, that they will neither capture all of the
benefits of superior performance nor bear all costs of inferior performance, decreases their incentives
to work hard. Moreover, renegotiations themselves are costly, so firms have incentives to conserve on
their use; unfortunately, reducing the number of negotiations increases the managers' incentive to
shirk and thus exacerbates the principal-agent problem. The ability to trade on the basis of valuable
information is one partial solution to this principal-agent problem. Imagine the situation of a manager
who thinks that he has a good idea that will increase the firm's value. If insider trading is prohibited,
his incentive to develop the idea and convince others to pursue it is the hope that he, not other
members of the team, will receive a reward at the time of his next salary review. If insider trading is
permitted, in contrast, the manager can immediately "renegotiate" his compensation package by
purchasing shares. Because the reward is more certain, the manager's incentive to develop the
valuable information and increase the firm's value is greater if insider trading is permitted.19 Indeed,
insider trading is the only compensation scheme that allows immediate and costless renegotiation
whenever managers believe that they have the opportunity to develop valuable information. A
second reason why investors may benefit from insider trading is that it gives firms an additional
method for communicating information. A firm, the lowest cost producer of information about itself,
has strong incentives to have its stock prices reflect information about the firm.2 The more
informative the firm's stock price, for example, the lower the incentive of investors to expend
resources on wasteful search in an attempt to identify mispriced securities. This reduction in
investors' search costs will increase the net returns on investment and cause shares to trade at a
higher price.
General DA’s
Politics
Plan is popular – convergence of powerful interests.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Of course, there is little question that legislatures bring their own unique downsides to the policy
process, such as special interest capture 168 and excessive partisan gridlock.169 But there are reasons
to believe these concerns are diminished in this area. Legislative capture should not prevent privacy-increasing reform because many of the relevant corporate parties, including Internet service
providers, have a commercial interest in keeping the data they hold to themselves.170 Customer data
becomes less valuable when it is widely known and no longer exclusive,171 and customers place more
trust in companies they feel will protect their privacy and information.172 As a result, regulating
metadata privacy is a rare issue on which privacy activists and giant corporations can join forces, at
least as far as regulating disclosures of user data to the government. And partisan gridlock is less rigid
here than on other critical issues because privacy reforms attract political support from both the left and
the libertarian right.17
Opposition to the plan is weakening.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Furthermore, though prosecution-minded politicians and government agencies will no doubt oppose
reforms that increase the government’s investigative burdens,174 several countervailing forces make
reform nonetheless possible. First, the last few years have seen a slow decline in the “tough on crime”
movement, and key conservatives, including Newt Gingrich and Grover Norquist, have publicly advocated for
criminal justice reform.175 Coinciding with this trend, the number of prison admissions in state and federal facilities has dropped
sharply over the last decade, from the all-time high of 747,031 in 2006 to 609,781 in 2012.176 Second, media scrutiny over
federal surveillance methods—triggered in large part by Edward Snowden’s leaked disclosures about
the National Security Agency’s mass surveillance practices—has put pressure on Congress to act.177
And third, judicial decisions such as Warshak are eroding the current statutory scheme and increasing
that pressure.178 None of these factors guarantee legislative action, but together they create an environment in which privacy reformers
may be able to overcome the usual opponents of reform. In this political environment, legislative reform of online
privacy law is not only needed but possible.
Public and bipartisan congressional support bolsters privacy concerns in the context
of the ECPA — recent testimonies prove the SEC doesn’t even need these authorities
*SCA is Title II of the ECPA
Mejia 7/8 — Lou Mejia, former chief litigation counsel of the SEC, partner with Perkins Coie, a firm
that represents technology and internet companies, J.D. from George Washington Law School, 2015
(“The SEC’s curious view of the Constitution and privacy rights,” The Hill, July 8th, Available Online at
http://thehill.com/blogs/congress-blog/247089-the-secs-curious-view-of-the-constitution-and-privacy-rights, Accessed 7-14-15)
“Privacy comes at a cost,” according to the Court. That is consistent with the views of most Americans,
as a recent Pew survey confirmed we have a low level of confidence in the privacy of our personal
data, and do not believe there are adequate limits on information collected by the government.
Evidently, these developments have not sunk in at the SEC. Chair Mary Jo White is opposing an effort
in Congress that would protect the privacy rights of individuals in the content of their emails. Congress
is working to amend the Electronic Communications Privacy Act (ECPA) by establishing a uniform search
warrant requirement for the government to compel a third-party service provider to disclose the
content of an individual’s emails. In order to obtain your personal emails from a provider such as
Google, the government would have to demonstrate probable cause -- the same standard required to
search a home or open a letter.
The Email Privacy Act, which updates ECPA, has more than 280 co-sponsors in the House and the
backing of groups across the political spectrum, from the American Civil Liberties Union to Americans
for Tax Reform. In the Senate, the ECPA Amendments Act has garnered wide support as well, from
Sen. Pat Leahy (D-Vt.) to Sen. Mike Lee (R-Utah). The bills essentially codify the holding in United
States v. Warshak, where the Sixth Circuit Court of Appeals held that the use of something less than a
warrant, such as a subpoena or court order under the ECPA, violates the Fourth Amendment’s
prohibition against warrantless searches. The court made the common sense point that emails are
entitled to the same Constitutional protection as traditional forms of communication. Just as the
government may not seize your unopened letters from a third party, it should not have the ability to
seize your email from a third-party service provider.
Despite broad, bipartisan support in Congress to reform ECPA, the SEC is standing in the way. Chair
White claims that the legislation would harm the SEC’s enforcement efforts. Yet the SEC has abided by
the Warshak opinion and the requirement of a warrant for the last five years. In recent testimony on
Capitol Hill, White admitted the SEC has not subpoenaed third party providers for private emails. This
is the power she claims is essential to SEC enforcement, yet she touts the SEC’s “record year in
enforcement” in 2014 that was achieved in the absence of this power. The SEC already has many tools
to obtain the content of emails without violating the Constitution. It can subpoena the source of the
content -- the very individual whose emails it seeks, and can file an action in federal district court to
compel compliance. The SEC can also seek to have the court impose an adverse inference against the
individual for failing to comply, a sanction that would help the SEC in a later action alleging a securities
law violation. The SEC can also subpoena a third-party service provider for certain non-content
information and require it to preserve any evidence in its possession from destruction. This is a
valuable tool, as Google’s Transparency Report revealed it received more than 4,000 preservation
requests impacting 17,000 accounts.
SCA is popular — it would have already passed if it wasn’t for NSA distractions
Tummarello 14 — Katie Tummarello, technology reporter for Politico Pro, former technology writer
for The Hill, Communications Daily and Roll Call, B.A. in Public Policy from Hamilton College, 2014 (“An
end to warrantless email searches?” The Hill, March 2nd, Available Online at
http://thehill.com/policy/technology/199625-support-builds-in-house-for-ending-warrantless-email,
Accessed 7-14-15)
Legislation in the House that would end the warrantless searches of email records is gaining steam.
Privacy advocates had grown frustrated in recent months as Senate legislation that would curtail the
email powers of law enforcement was thrown off track amid revelations about National Security
Agency surveillance.
But they are increasingly optimistic that an update to the 1986 Electronic Communications Privacy Act
(ECPA) — which allows law enforcement agencies to obtain things like emails without a warrant if
they have been stored electronically for more than 180 days — could see action in the House.
The Email Privacy Act from Reps. Kevin Yoder (R-Kans.), Tom Graves (R-Ga.) and Jared Polis (D-Colo.) has
181 co-sponsors in the House, and the authors are “still pushing to get more,” according to a Yoder
spokesman.
“There’s a lot of growing support for that bill,” said Mark Stanley of the Center for Democracy and
Technology. “A lot of members of Congress see this as a common sense thing.”
More than 40 lawmakers have signed onto the bill since November, pushing the total close to the
magic number of 218, which would represent a majority of the House.
Passage of legislation to limit warrantless email searches appeared to be a done deal last year until
revelations about National Security Agency surveillance rocked the debate.
The focus on the activities of the NSA shifted Congress’s focus from law enforcement access to
national security, shunting the email issue aside.
Email privacy reform “doesn’t have that direct connection to the NSA issue the way a bill like the USA
Freedom Act does,” Stanley said, referring to legislation from Rep. Jim Sensenbrenner (R-Wis.) and
Senate Judiciary Chairman Patrick Leahy (D-Vt.) that would scale back the government’s sweeping
surveillance programs.
While the debate over NSA surveillance has largely increased awareness about digital privacy
concerns, it has also taken attention away from the specifics of email privacy reform.
The intricacies of requiring warrants for stored emails are “not as widely understood as the threats
from the NSA,” said Berin Szoka, president of TechFreedom.
“Everyone has been so focused on the NSA,” he said. “That’s the story.”
“What we’ve seen is just an amount of information that has overtaken Congressional offices and the
American people,” added Mark Jaycox, legislative analyst for the Electronic Frontier Foundation.
“ECPA certainly has taken a back seat, like many other tech reforms,” he said, pointing to an attempt
to update the Computer Fraud and Abuse Act, the statute that was used to prosecute the late Internet
activist Aaron Swartz.
Privacy advocates think the administration has missed an opportunity to make progress on what they
argue is a critical online privacy issue.
“There has obviously been a loss of trust with the public and the tech community” due to the NSA
revelations, Stanley said, and passing email privacy reform would be “a great way to show the privacy
and tech communities that they do support these privacy issues.”
Congressional momentum exists — similar bills prove no opposition to the plan
Stanley 14 — Mark Stanley, Campaign and Communications Strategist at the Center for Democracy &
Technology, M.A. from the Missouri School of Journalism, 2014 ("A Majority of the House Now Supports
ECPA Reform," Center for Democracy & Technology Blog, June 18th, Available Online at
https://cdt.org/blog/a-majority-of-the-house-now-supports-ecpa-reform/, Accessed 7-14-2015)
On Tuesday, the movement to reform the Electronic Communications Privacy Act (ECPA) reached an
historic milestone: The Email Privacy Act to update ECPA picked up its 218th cosponsor. This means an
absolute majority of Representatives support legislation to require the government to get a warrant
before accessing email.
Representatives Ron DeSantis (R-FL), Ted Deutch (D-FL), and Cedric Richmond (D-LA) were the 216th,
217th, and 218th cosponsors to join the Email Privacy Act. Introduced by Representatives Kevin Yoder
(R-KS) and Jared Polis (D-CO), the bill enjoys rare levels of bipartisan support and is now cosponsored
by a majority of Republicans in the House and a majority of the powerful Judiciary Committee. All
these numbers mean one thing: If the bill were brought to a vote, it would be guaranteed to pass.
The House should now act swiftly to approve the bill. At a time of heightened public concern over
privacy, ECPA reform is something Congress can accomplish this year. With elections right around the
corner, Members of Congress can show they take their constituents’ privacy seriously—and that they
can enact meaningful reform—by passing this bill. Voters are clearly looking to Congress to strengthen
privacy protections. In a recent poll taken in seven politically diverse regions, an overwhelming majority
of voters said they were more likely to support a candidate that backs ECPA reform.
Even law enforcement has acknowledged the need to update the antiquated statute. Recently, FBI
Director James Comey said that amending ECPA to require a warrant “won’t have any effect on our
practice.”
This leaves the Securities and Exchange Commission as the remaining source of opposition to a hugely
popular and bipartisan bill. Some members of the SEC have opposed reform because they want the
regulatory agency to have new powers to snoop on email without a warrant. However, the SEC cannot
access mail held by the Postal Service or tap a phone, and it should not be afforded new powers to
access content stored online.
The math is simple: A majority is in favor of ECPA reform to prohibit the government from accessing
email without a warrant. Now, it’s time to pass the bill.
SEC doesn’t make us link to politics — recent testimonies prove the SEC doesn’t even
use its authority
Volz 15 — Dustin Volz, staff correspondent for National Journal covering tech policy, 2015 ("SEC
Reveals It Doesn’t Use Email Snooping Power It Defends," National Journal, April 16th, Available Online
at http://www.nationaljournal.com/tech/sec-reveals-it-doesn-t-use-email-snooping-power-it-defends20150416, Accessed 7-14-2015)
The head of the Securities and Exchange Commission admitted Wednesday her agency has not
recently used subpoenas to obtain Americans' email records from Internet services providers—a
disclosure that stunned privacy advocates who have accused the agency of defending the practice to
block reforms to a decades-old digital-privacy law.
SEC Chairwoman Mary Jo White made the revelation during testimony before the House
Appropriations financial-services subcommittee.
"We've not, to date, to my knowledge, proceeded to subpoena the ISPs," White said. "But that is
something that we think is a critical authority to be able to maintain, done in the right way and with
sufficient solicitousness."
White reiterated that SEC is still worried about what information it was missing by not using the
subpoena authority to aid its investigations into financial fraud, but she said that the commission has
not engaged in the practice while debate over its use has been ongoing.
The quiet acknowledgment was quickly hailed by privacy advocates, who have long derided SEC for
single-handedly obstructing efforts to update the 1986 Electronic Communications Privacy Act, or
"ECPA."
"With this revelation, we may have cleared a major hurdle in digital-privacy reform, as the SEC was the
lone government-agency holdout engaging in this practice," Rep. Kevin Yoder, a Kansas Republican,
said in a Facebook post after the hearing.
Federal law does not require law enforcement to obtain a warrant to read emails or other forms of
online communication—such as documents saved on a cloud service—if they are more than 180 days
old. For such messages, only a subpoena, which requires a lower threshold of judicial approval, is
required.
House and Senate lawmakers reintroduced bipartisan legislation earlier this year that would change
that standard. The legislation would require law enforcement to obtain a search warrant before
accessing the content of Americans' emails. The House version of the bill, authored by Yoder and Rep.
Jared Polis, has racked up 261 cosponsors.
But while the Justice Department and other agencies have been supportive of changing the law, SEC
has for years cautioned against changes that would limit its subpoena authority.
Because of its posturing, many lawmakers and privacy groups had assumed that SEC currently engages
in subpoena-powered email grabs. An aide said Yoder was "taken aback" by White's testimony on
Wednesday when she indicated that was not the case. "He was quite surprised by that."
SEC did not immediately respond to a request for comment regarding its current policy governing
subpoena use.
Indeed, White’s acknowledgment appears to be inconsistent with—or at least more forthcoming than—
earlier testimony and public comments about the matter. A letter she wrote in 2013 to Sen. Patrick
Leahy, then the chairman of Senate Judiciary Committee, defended the use of ISP subpoenas as vital to
SEC investigations.
Counterplans
A2: Kerr CP
Kerr’s Amendment CP doesn’t solve – doesn’t provide enough protection.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Kerr introduced his suggestions at the end of an article explaining how the Stored Communications Act’s individual components work (and, in
some cases, interrelate).211 As a result, his proposals are not focused on advancing any unified goal. Instead, Kerr
proposes four discrete amendments to address his various criticisms of the law as written: (1) raising the threshold for government requests
for content data, (2) simplifying the statute by combining “remote computing services” and “electronic communication services,” (3) repealing
18 U.S.C. § 2701, and (4) modifying the remedies for violations of the statute.212 Kerr’s
first suggestion—raising the threshold for requests for content data—primarily addresses the statute’s odd distinction (discussed
above) between new, unopened e-mails and e-mails that have been opened or stored for more than 180
days.213 But instead of advocating for a warrant for all content data, Kerr takes “a cautious middle
ground”: he suggests requiring a § 2703(d) court order, which demands only a showing of reasonable
suspicion (not probable cause).214 Because this change would eliminate the government’s current ability to compel via subpoena
the disclosure of the contents of certain online documents, it is clearly a step in the right direction. But it nevertheless fails to
provide digital documents the same level of privacy protection paper documents enjoy (and Kerr has
subsequently argued for a warrant requirement for content data).215
The Kerr Amendments don’t go far enough. Only plan solves.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Both Kerr’s proposed changes and the Email Privacy Act would be positive steps toward bringing
digital privacy protections into line with the Fourth Amendment’s protections for physical documents.
Both would simplify the structure of the law. And both would increase the protection for content disclosures under § 2703(a). But neither goes far enough. Kerr’s proposal fails to require a warrant for
content disclosures, and the Email Privacy Act fails to include a suppression remedy. Without both a
warrant requirement and a suppression remedy, the Stored Communications Act’s protections for the
contents of documents stored in the cloud will lag behind the Fourth Amendment protections for
otherwise identical paper documents. Just as importantly, neither Kerr’s proposal nor the Email
Privacy Act contains any new protection against government requests for metadata. As the Maynard
and Jones opinions have suggested, and as this Note has shown, extensive metadata records can
provide insights into citizens’ lives that can be every bit as revealing as the contents of their
documents. Proposals to amend the Stored Communications Act without addressing mosaic theory
concerns will fail to address one of the greatest—and ever-growing—threats to privacy in the modern
era.
A2: Email Privacy Act CP
Fails because it doesn’t contain a suppression remedy
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
In contrast to Kerr’s proposal, which addresses several largely unrelated problems with the Stored Communications Act, the Email
Privacy Act appears to have one main goal: making the statute consistent with Warshak’s holding that
the Fourth Amendment protects the contents of e-mail communications.222 The Email Privacy Act, like
both this Note’s and Kerr’s proposals, would combine the Stored Communications Act’s provisions
concerning electronic communication services and remote computing services and equalize the protection given to each type of service.223 In doing so, the Email Privacy Act would remove the current law’s distinctions between
new, unopened e-mails and old or opened e-mails. 224 Going beyond Kerr’s recommendation (but matching this Note’s), the proposed
law would require a warrant for all content disclosures under § 2703(a).225 And the Email Privacy Act
would compel the government to notify users within three days (ten for law enforcement agencies)
of obtaining the contents of their communications via a warrant.226 Despite all of these positive
changes, however, the proposed law fails to include a suppression remedy.227
The email privacy act doesn’t go far enough no suppression remedy.
Schlabach, March 2015 (Gabriel R., Law Clerk to the Honorable Joel Bolger, Alaska Supreme Court;
J.D., Stanford Law School, 2014, “PRIVACY IN THE CLOUD: THE MOSAIC THEORY AND THE STORED
COMMUNICATIONS ACT” Stanford Law Review Lexis)
Both Kerr’s proposed changes and the Email Privacy Act would be positive steps toward bringing
digital privacy protections into line with the Fourth Amendment’s protections for physical documents.
Both would simplify the structure of the law. And both would increase the protection for content disclosures under § 2703(a). But neither goes far enough. Kerr’s proposal fails to require a warrant for
content disclosures, and the Email Privacy Act fails to include a suppression remedy. Without both a
warrant requirement and a suppression remedy, the Stored Communications Act’s protections for the
contents of documents stored in the cloud will lag behind the Fourth Amendment protections for
otherwise identical paper documents. Just as importantly, neither Kerr’s proposal nor the Email
Privacy Act contains any new protection against government requests for metadata. As the Maynard
and Jones opinions have suggested, and as this Note has shown, extensive metadata records can
provide insights into citizens’ lives that can be every bit as revealing as the contents of their
documents. Proposals to amend the Stored Communications Act without addressing mosaic theory
concerns will fail to address one of the greatest—and ever-growing—threats to privacy in the modern
era.
A2: Intent CP
Ruling on anything other than the original SCA kills SOP
Medina, 2013 (Melissa, Senior Staff Member, American University Law Review and JD, “The Stored
Communications Act: An Old Statute for Modern Times.” American University Law Review Lexis.)
Until Congress acts, however, courts must construe the SCA according to its current terms. As the fragmented opinions
in Jennings demonstrate, attempting to fit modern technology into the limited technological framework of
1986 has proven to be a daunting task. This difficulty has had impacts beyond courts’ application of
the SCA. The conflicting opinions make it impossible to predict the outcome of cases and leave
individuals, service providers, and law enforcement agencies in the dark about their rights and responsibilities.
Additionally, the confusion surrounding the SCA’s applicability encourages forum shopping, as the outcome of a case is inextricably tied to
where the case is litigated.155 Therefore, it is important to promote consistency by setting forth clear rules that courts must follow to
determine whether a communication is protected by the SCA. To
provide this consistency, courts should seek to
effectuate Congress’s intent at the time the SCA was enacted in order to avoid “blur[ring] the distinctive functions of
the legislative and judicial processes.”156 Courts are constricted by the SCA and cannot enlarge its scope without
Congressional authority.157 Doing so would severely undermine the Nation’s carefully balanced system
of government that delegates specific powers to three separate branches of government.158 To avoid
upsetting this balance, courts must attempt to classify new technology according to the distinctions
embodied in the SCA.