
CYBERCRIME AND SOCIETY

Third Edition
Majid Yar
Kevin F. Steinmetz
Los Angeles
London
New Delhi
Singapore
Washington DC
Melbourne
SAGE Publications Ltd
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
SAGE Publications Inc.
2455 Teller Road
Thousand Oaks, California 91320
SAGE Publications India Pvt Ltd
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road
New Delhi 110 044
SAGE Publications Asia-Pacific Pte Ltd
3 Church Street
#10-04 Samsung Hub
Singapore 049483
© Majid Yar and Kevin F. Steinmetz 2019
First published 2006
Second edition published 2013
This third edition published 2019
Apart from any fair dealing for the purposes of research or private study, or
criticism or review, as permitted under the Copyright, Designs and Patents
Act, 1988, this publication may be reproduced, stored or transmitted in any
form, or by any means, only with the prior permission in writing of the
publishers, or in the case of reprographic reproduction, in accordance with
the terms of licences issued by the Copyright Licensing Agency. Enquiries
concerning reproduction outside those terms should be sent to the
publishers.
Library of Congress Control Number: 2018954803
British Library Cataloguing in Publication data
A catalogue record for this book is available from the British Library
ISBN 978-1-5264-4064-8
ISBN 978-1-5264-4065-5 (pbk)
Editor: Natalie Aguilera
Editorial assistant: Eve Williams
Production editor: Rachel Burrows
Copyeditor: Christine Bitten
Proofreader: Bryan Campbell
Indexer: Silvia Benvenuto
Marketing manager: George Kimble
Cover design: Stephanie Guyaz
Typeset by: C&M Digitals (P) Ltd, Chennai, India
Printed in the UK
At SAGE we take sustainability seriously. Most of our products are printed in the UK using
responsibly sourced papers and boards. When we print overseas we ensure sustainable papers are
used as measured by the PREPS grading system. We undertake an annual audit to monitor our
sustainability.
For Rodanthi, Pamela and little Elsie
Contents
About the Authors
Preface to the Third Edition
Acknowledgements
Online Resources
1 Cybercrime and the Internet: An Introduction
1.1 Perceptions of cybercrime
1.2 Cybercrime: Questions and answers
1.3 A brief history and analysis of the internet
1.4 Defining and classifying cybercrime
1.5 What’s ‘new’ about cybercrime?
1.6 Explaining cybercrime
1.7 Challenges for criminal justice and policing
1.8 Summary
Study questions
Further reading
2 Researching and Theorizing Cybercrime
2.1 Studying cybercrime
2.2 Criminological theory
2.3 Researching cybercrime
2.4 Summary
Study questions
Further reading
3 Hackers, Crackers and Viral Coders
3.1 Hackers and hacking: Contested definitions
3.2 Representations of hackers and hacking: Technological fears and fantasies
3.3 What hackers actually do: A brief guide for the technologically bewildered
3.4 Hacker myths and realities: Wizards or button-pushers?
3.5 ‘Why do they do it?’ Motivation, psychology, gender and youth
3.6 Hacking and the law: Legislative innovations and responses
3.7 Summary
Study questions
Further reading
4 Political Hacking: From Hacktivism to Cyberterrorism
4.1 Introduction
4.2 Hacktivism and the politics of resistance in a globalized world
4.3 The spectre of cyberterrorism
4.4 Why cyberterror? Terrorist advantages of utilizing internet attacks
4.5 Rhetorics and myths of cyberterrorism
4.6 Alternative conjunctions between terrorism and the internet
4.7 Cyberwarfare
4.8 Summary
Study questions
Further reading
5 Virtual ‘Pirates’: Intellectual Property Theft Online
5.1 Introduction
5.2 Intellectual property, copyright and piracy: An overview
5.3 Scope and scale of piracy activity
5.4 The history and growth of internet piracy
5.5 Who are the ‘pirates’?
5.6 The development of anti-piracy initiatives
5.7 Thinking critically about piracy statistics
5.8 Thinking critically about intellectual property rights
5.9 Summary
Study questions
Further reading
6 Cyber-frauds, Scams and Cons
6.1 Introduction
6.2 Scope and scale of online fraud
6.3 Varieties of online fraud
6.4 Online fraud: Perpetrators’ advantages and criminal justice’s problems
6.5 Strategies for policing and combating internet frauds
6.6 Summary
Study questions
Further reading
7 Illegal, Harmful and Offensive Content Online: From Hate Speech to ‘the Dangers’ of Pornography
7.1 Introduction
7.2 Thinking about ‘hate speech’
7.3 Hate speech online
7.4 Legal, policing and political challenges in tackling online hate speech
7.5 The growth and popularity of internet pornography
7.6 Legal issues relating to internet pornography
7.7 Summary
Study questions
Further reading
8 Child Pornography and Child Sex Abuse Imagery
8.1 Introduction
8.2 Child pornography and the internet: Images, victims and offenders
8.3 Legislative and policing measures to combat online child pornography
8.4 Legal and policing challenges in tackling child pornography
8.5 The controversy over virtual child pornography and ‘jailbait’ images
8.6 Summary
Study questions
Further reading
9 The Victimization of Individuals Online: Cyberstalking and Paedophilia
9.1 Introduction
9.2 The emergence of stalking as a crime problem
9.3 Cyberstalking
9.4 Online paedophilia
9.5 Thinking critically about online victimization: Stalking and paedophilia as moral panics?
9.6 Summary
Study questions
Further reading
10 Policing the Internet
10.1 Introduction
10.2 Public policing and the cybercrime problem
10.3 Pluralized policing: The involvement of quasi-state and non-state actors in policing the internet
10.4 Privatized ‘for-profit’ cybercrime policing
10.5 Explaining the pluralization and privatization of internet policing
10.6 Critical issues about private policing of the internet
10.7 Summary
Study questions
Further reading
11 Cybercrimes and Cyberliberties: Surveillance, Privacy and Crime Control
11.1 Introduction
11.2 From surveillance to dataveillance: The rise of the electronic web
11.3 The development of internet surveillance
11.4 The dilemmas of surveillance as crime control: The case of encryption
11.5 Summary
Study questions
Further reading
12 Conclusion: Looking Toward the Future of Cybercrime
12.1 Introduction
12.2 Future directions in cybercrime
12.3 Considerations for civil liberties
12.4 Concluding thoughts
Study questions
Further reading
Glossary
References
Index
About the Authors
Majid Yar
is Professor of Criminology at Lancaster University. He has researched
and published widely across the fields of criminology, sociology,
social and political theory, and media and cultural analysis. His recent
books include Crime, Deviance and Doping: Fallen Sports Stars,
Autobiography and the Management of Stigma (2014), The Cultural
Imaginary of the Internet: Virtual Utopias and Dystopias (2014) and
Crime and the Imaginary of Disaster: Post-Apocalyptic Fictions and
the Crisis of Social Order (2015).
Kevin F. Steinmetz
is an Associate Professor of Sociology at Kansas State University. He
maintains diverse research interests across the areas of cybercrime,
inequality and criminal justice, criminological theory, critical
criminology, and crime and media issues. He has published many
works in peer-reviewed journals including the British Journal of
Criminology, Theoretical Criminology, Crime Media Culture and
Deviant Behavior. His recent books include Hacked: A Radical
Approach to Hacker Culture and Crime (2016) and Technocrime and
Criminological Theory (2018, co-edited with Matt R. Nobles).
Preface to the Third Edition
Welcome to the third edition of Cybercrime and Society. Readers familiar
with previous versions of the book will notice some significant changes.
The most important of these, of course, is that it now has two authors rather
than one. From Majid’s perspective, the chance to collaborate with Kevin
has brought numerous benefits, not least of which is the injection of new
expertise, knowledge, insights and energy into the enterprise. What has
emerged is a book that is not only noticeably heftier than its predecessors,
but one that brings a fresh viewpoint to many of the most important
cybercrime issues of the day. In addition to extensive updating and
expansion of the topics addressed in the 2013 edition, all kinds of new
developments are introduced and assessed. New case studies and examples
are presented. And the international scope and coverage of the book has
been further expanded, with treatment of the US context in particular
benefitting from Kevin’s detailed knowledge. A brand new chapter has also
been added, offering readers a careful overview of how cybercrime can be
theorized and researched from a criminological standpoint. Finally, in an
effort to assist students and maintain the book’s connection to new
developments, it is now accompanied by a companion website where you’ll
find a variety of resources as well as links to videos and podcasts that
explore important cybercrime issues. All of these changes notwithstanding,
the overall ethos of the book remains as it was originally envisaged – the
desire to introduce and explore cybercrime in its social context, paying
careful and critical attention to the cultural, political and economic
dynamics and conflicts within which crime, and the responses to it, unfold.
In all, we hope that both new and returning readers will find the book an
informative and engaging introduction to cybercrime, and encourage them
to delve further into this most current and dynamic area of criminological
research.
In the electronic edition of the book you have purchased, there are several icons that reference
links (videos, journal articles) to additional content. Though the electronic edition links are not
live, all content referenced may be accessed at
https://study.sagepub.com/cybercrimeandsociety3e. This URL is referenced at several points
throughout your electronic edition.
Acknowledgements
As ever, a book of this kind would not be possible without the efforts and
assistance of numerous others. Firstly, we must acknowledge the
international community of criminologists and cybercrime researchers
whose work we draw upon and discuss – without their research efforts we
would have precious little to share with our readers. Secondly, we wish to
thank our colleagues and students with whom ongoing dialogue fuels and
shapes our understandings. Thirdly, a special note of appreciation for our
editor at Sage, Natalie Aguilera, whose encouragement and enthusiasm for
the book was pivotal in bringing this new edition to fruition, and our thanks
to both Delayna Spencer and Eve Williams for their helpfulness and
efficiency throughout the project. We’d also like to thank Christopher
Brewer for help developing companion materials for this book. From
Majid’s perspective, the most important individual to thank is, of course, his
co-author – this new edition would not have happened without Kevin’s
interest and willingness to collaborate, and the experience has been not only
remarkably pain-free but also genuinely pleasurable! He has no doubt that
the resulting book is far better than it could otherwise ever have been. In
turn, Kevin is grateful for the opportunity to collaborate on one of his
favourite cybercrime books by one of his favourite cybercrime scholars. He
has thoroughly enjoyed the experience.
Online Resources
Cybercrime and Society, Third Edition is supported by a wealth of online
resources for lecturers and students to support teaching, learning and
studying. They are available at
https://study.sagepub.com/cybercrimeandsociety3e
For instructors
PowerPoint Lectures are provided for each chapter; they can be used as
guidelines for course content and adapted as needed to suit the module’s
requirements.
Multiple Choice Questions are designed to reinforce chapter content and
encourage critical analysis of the key issues. They are multipurpose, suitable
for class discussion as well as take-home and in-class examinations.
For students
A list of Sage Videos & Podcasts showcases highly relevant educational
content related to cybercrime. They are carefully selected to enhance
comprehension of the topics and perspectives discussed in the textbook and
to support critical thinking.
A list of Web links for each chapter. These carefully curated web links –
including blogs, datasets, and webpages for societies, hacktivist groups,
online piracy news and more – directly complement the subject matter of
every chapter and provide the most relevant research material.
1 Cybercrime and the Internet: An Introduction
Chapter Contents
1.1 Perceptions of cybercrime
1.2 Cybercrime: Questions and answers
1.3 A brief history and analysis of the internet
1.4 Defining and classifying cybercrime
1.5 What’s ‘new’ about cybercrime?
1.6 Explaining cybercrime
1.7 Challenges for criminal justice and policing
1.8 Summary
Study questions
Further reading
Overview
Chapter 1 examines the following issues:
how cybercrime is perceived and discussed in society, politics and the media;
the kinds of questions that criminologists ask about cybercrime;
the emergence and growth of the internet, the role it plays in a wide range of everyday
activities, and how this growth creates new opportunities for offending;
how cybercrime can best be defined and classified;
whether and to what extent cybercrime can be considered a ‘new’ or ‘novel’ form of criminal
activity;
issues confronting criminology in explaining cybercrime;
the challenges that cybercrime presents for criminal justice systems.
Key Terms
Anonymity
Crime
Cybercrime
Cyberspace
Deviance
e-commerce
Globalization
Hacking
Information society
Internet
Internet of Things (IoT)
Moral panic
Piracy
Policing
Pornography
Representations of crime
Social inclusion and social exclusion
Stalking
Surveillance
Transnational crime and policing
Viruses
1.1 Perceptions of Cybercrime
4 May 2000. A computer worm called the ‘Love Bug’ rapidly infects
computers worldwide. It uses infected machines to email itself to other
users, corrupting files on computers as it goes. Within hours, millions
of computers are affected, including those of UK and US government
agencies. The damage caused by the Love Bug is placed at between $7
billion and $10 billion. The prime suspect is Onel de Guzman, a 24-year-old college dropout from the Philippines. In August 2000 all
charges against de Guzman are dropped – the Philippines simply
doesn’t have laws that cover computer hacking under which he could
be tried and convicted. (Furnell, 2002: 159–161; Philippsohn, 2001:
61)
11 February 2003. FBI Director Robert Mueller tells the US Senate
that ‘cyberterrorism’ is a growing threat to US national security. He
claims that al-Qaeda and other terrorist groups are ‘increasingly
computer savvy’ and will, in the future, have ever greater opportunities
to strike by targeting critical computer systems using electronic tools.
(FAS.org, 2003)
4 February 2004. Graham Coutts, a 35-year-old musician from Hove,
UK, is convicted of murdering Jane Longhurst, a 31-year-old school
teacher. Coutts, who strangled his victim, is reported to have been
‘obsessed’ with images of violent sexual pornography, which he had
viewed on the internet just hours before the murder. In the wake of the
trial, UK and US government officials announce that they will
investigate ways of eradicating such ‘evil’ sites from the internet.
(BBC News, 9 March 2004)
August 2011. As urban disturbances spread across numerous English
cities, politicians and mass media claim that new social media had
been used by participants as a means of disseminating information
about incidents in real time, and utilised as a means of social
coordination to facilitate rioting and to better evade the police. The UK
Prime Minister, David Cameron, suggests that banning
‘troublemakers’ from using such media may be desirable in the
interests of public order. (Halliday, 2011)
October 2016. Major web outages were reported throughout the US
and Europe as a result of the largest distributed denial of service (DDoS)
attack ever seen. Websites like Netflix, Reddit, Twitter, and others
were temporarily taken offline. The culprit was the Mirai botnet that
exploited vulnerabilities in Internet of Things devices. The botnet was
originally developed to help its architects gain a business advantage in
the provision of servers and services for the video game Minecraft and
make inroads toward developing a DDoS mitigation enterprise. (Graff,
2017)
The above are just a few instances of what appears to be an explosion of
crime and criminality related to the growth of new forms of electronic
communication. Since the mid-1990s, the internet has grown to become a
fact of life for people worldwide, especially those living in the Western
industrialized world. Its relentless expansion, it is claimed, is perpetually in
the process of transforming the spheres of business, work, consumption,
leisure and politics (Castells, 2002). The internet is seen as part of the
globalization process that is supposedly sweeping away old realities and
certainties, creating new opportunities and challenges associated with living
in a ‘shrinking’ world. We are now said to be in the midst of a ‘new
industrial revolution’, one that will lead us into a new kind of society, an
‘information age’ (Webster, 2003). Yet awareness of, and enthusiasm for,
these changes has been tempered by fears that the internet brings with it
new threats and dangers to our well-being and security. ‘Cyberspace’, the
realm of computerized interactions and exchanges, seems to offer a vast
range of new opportunities for criminal and deviant activities. Two decades
or so on from the internet’s first appearance in popular consciousness, we
can see that the intervening years have been replete with fears about its
darker, criminal dimensions. Businesses cite threats to economic
performance and stability, ranging from vandalism to e-fraud and piracy;
governments talk of ‘cyberwarfare’ and ‘cyberterror’, especially in the
wake of the 11 September 2001 (9/11) attacks in New York; parents fear for
their children’s online safety, as they are told of perverts and paedophiles
stalking the internet’s chat rooms and social networking sites looking for
victims; hardly a computer user exists who has not been subjected to attack
by viruses and other forms of malicious software; the defenders of
democratic rights and freedoms see a threat from the state itself, convinced
that the internet furnishes a tool for surveillance and control of citizens, an
electronic web with which Big Brother can watch us all. The development
of the internet and related communication technologies therefore appears to
present an array of new challenges to individual and collective safety, social
order and stability, economic prosperity and political liberty.
Our awareness of the internet’s criminal dimensions has certainly been
cultivated and heightened by mass media representations. The news media
have played their part in identifying and intensifying public concerns, and
hardly a day goes by without some new report of an internet-related threat.
Dowland et al. (1999: 723–724) surveyed two ‘quality’ UK newspapers
over a two-and-a-half-year period, and found that, on average, stories about
computer-related crime appeared twice a week in each throughout the
period. A search of the LexisNexis database indicates that news media
interest in the coverage of cybercrime has increased significantly over time,
with 462 articles in 2000 and 4,460 in 2017. Evidence also indicates that
news coverage of certain cybercrime topics has only increased over time.
For instance, in their study of international news media coverage from 2008
to 2013, Jarvis et al. (2015: 70) found that the number of news items
discussing cyberterrorism had increased over time, with a notable uptick
after 2010.
In addition to the print media, we must also consider broadcast media
(television, radio) and the internet itself (which now constitutes a major
source of cybercrime reportage). Popular fiction has also picked up on the
internet’s more problematic dimensions, with films such as Hackers, The
Net, Black Hat, and Live Free or Die Hard sharpening the sense that our
safety may be under threat from irresponsible individuals and unscrupulous
authorities (Steinmetz, 2016; Webber and Vass, 2010). Perhaps such
representations, and the concerns they inform and incite, should not
altogether surprise us. After all, while the internet itself may be new, history
shows us that times of rapid social, economic and technological change are
often accompanied by heightened cultural anxieties (even panics) about
threats to our familiar and ordered ways of life (Goode and Ben-Yehuda,
1994). In his analysis of media coverage, Levi (2008: 373) notes that
cybercrime ‘is used as titillating entertainment which generates fear at the
power of technology beyond the control of respectable society’. Thomas
and Loader (2000: 8) suggest that social transformation wrought by internet
technologies ‘makes the future appear insecure and unpredictable’, yielding
a public and political overreaction. Such moral panics, fuelled by the media,
lead to an excessive and unjustified belief that particular individuals, groups
or events present an urgent threat to society (Critcher, 2003). Yar (2012a)
suggests that representations of the internet in the popular imagination have
increasingly come to be characterized by a ‘cyber-dystopian’ outlook, one
that portrays the social effects of new technologies in overwhelmingly
negative terms. Internet-related instances of panics include those over the
effects of pornography in the mid-1990s, and more recently over threats to
child safety from paedophiles (Cassell and Cramer, 2008; Littlewood,
2003). The proliferation of such anxieties is perhaps best viewed as a
consequence in part of the rapid shifts and reconstructions in the midst of
which we currently find ourselves. This is not to suggest, however, that the
dangers posed by cybercrime can simply be dismissed as wholly
unfounded. Nor is it to suggest that such widespread reactions ought to be
simply ignored by criminologists. Media representations, both factual and
fictional, constitute an important criminological research topic in their own
right; their careful examination enables us to uncover how the problem of
cybercrime is being constructed and defined, and how this shapes social and
political responses to it (Taylor, 2000; Vegh, 2002). Yet the weight of such
representations can also serve to obscure the realities of criminal activity
and its impacts, hindering rather than facilitating a balanced understanding.
1.2 Cybercrime: Questions and Answers
For criminologists, making sense of cybercrime consequently presents a
significant challenge, as it requires us to take, as best we can, a more sober
and balanced view, sifting fact from fantasy and myth from reality. The very
existence of this book is testimony to the authors’ belief that such an
examination is both possible and worthwhile. Indeed, numerous scholars
have already made considerable strides in this direction. By using a
combination of theoretical analysis and empirical investigation they have
attempted to get a handle on a range of pressing questions:
Just what might be meant by ‘cybercrimes’?
What is the actual scope and scale of such crimes?
How might such crimes be both like and unlike the ‘terrestrial’ crimes
with which we are more familiar?
Who are the ‘cybercriminals’?
What are the causes and motivations behind their offending?
What are the experiences of the victims of such crimes?
What distinctive challenges do such crimes present for criminal justice
and law enforcement?
How are policy-makers, legislators, police, courts, business
organizations and others responding?
How are such responses shaped by popular perceptions of computer
crime and computer criminals?
How is cybercrime shaping the future development and use of the
internet itself?
There now exists a considerable literature addressing such issues, drawn
from a wide range of disciplines including criminology, sociology, law and
socio-legal studies, political science, political economy, cultural and media
studies, science and technology studies, business and management, and
computing. The ever-expanding range of such material, along with its
dispersal across different disciplinary boundaries, makes it difficult for the
newcomer to find an accessible route into current debates. This difficulty is
exacerbated by the fact that cybercrime refers not so much to a single,
distinctive kind of criminal activity, but more to a diverse range of illegal
and illicit activities that share in common the unique electronic environment
(‘cyberspace’) in which they take place. Consequently, different academic
contributions tend to focus on some selected aspects of the cybercrime
problem, to the detriment or neglect of others. Hence the purpose of the
present volume, which is conceived as a thorough and up-to-date
introduction to the range of issues, questions and debates about cybercrime
that have come to characterize it as a field of study. Out of necessity, we
draw upon theoretical and empirical contributions from different areas of
scholarship, but with the main focus falling upon criminology and
sociology, the two areas that comprise our own primary fields of expertise.
The following chapters will furnish a critical introduction to a variety of
substantive issues. Many of them focus on recognizably different kinds of
cybercriminal activity, such as piracy, hacking, e-fraud, cyberstalking and
cyberterrorism; each is examined and analysed in light of the social,
political, economic and cultural context in which it takes shape. Other
chapters consider debates of a more general nature, addressing, for
example, the tensions apparent between internet security and policing, on
the one hand, and individual rights, freedoms and liberties, on the other.
Given the range of issues to be covered, their treatment cannot be
exhaustive. Hence each chapter contains guidance for further, more in-depth
reading, which can be found both in conventional print form and online.
Before we can move on to these more detailed examinations, however, there
are a number of important issues of background and context that must be
outlined, and some important conceptual issues that must be tackled.
Therefore, the remainder of this chapter will situate cybercrime in relation
to the growth and development of the internet, and ask questions about how
we might best classify cybercriminal activities, and why it is that
cybercrime might need to be viewed as qualitatively different from other
kinds of criminal and illicit activity.
1.3 A Brief History and Analysis of the
Internet
An examination of cybercrime ought to begin with the internet, for the
simple reason that without the latter, the former could and would not exist.
It is the internet that provides the crucial electronically generated
environment in which cybercrime takes place. Moreover, the internet should
not be viewed as simply a piece of technology, a kind of ‘blank slate’ that
exists apart from the people who use it. Rather, it needs to be seen as a set
of social practices – the internet takes the form that it does because people
use it in particular ways and for particular purposes (Snyder, 2001). What
people do with the internet, and how they typically go about it, are crucial
for understanding what kind of phenomenon the internet actually is. Indeed,
it is the kind of social uses to which we put the internet that create the
possibilities of criminal and deviant activity. To give one example, if people
didn’t use the internet for shopping, then there would be no opportunities
for credit card crimes that exploit users’ financial information. These
opportunities can put millions of people’s sensitive data at risk, as when
the network of the Target retail chain was breached in 2013 and criminal
actors absconded with potentially over a hundred million customers’ credit
and debit card details (Kassner, 2015a). Similarly, it
is because we use the internet for electronic communication with friends
and colleagues that the Love Bug worm, which targeted the email systems
we use for that purpose, could cause billions of dollars in damage.
The internet, as its name suggests, is in essence a computer network; or, to
be more precise, a ‘network of networks’ (Castells, 2002). A network links
computers together, enabling communication and information exchange
between them. Many such networks of information and communication
technology (ICT) have been in existence for decades – those of financial
markets, the military, government departments, business organizations,
universities and so on. The internet provides the means to link up the many
and diverse networks already in existence, creating from them a single
network that enables communication between any and all ‘nodes’ (e.g.
individual computers) within it.
The origins of the internet can be traced to the development of the Semi-Automatic Ground Environment (SAGE) system, which was an enemy
bomber early warning and interception system developed by the US
military in the 1950s. The system emerged from concerns over domestic
bomber strikes such as those witnessed in World War II. As Thomas Rid
(2016: 78) explains, ‘turning the entire North American continent into an
air defense battery meant that radar stations, computers, and interceptors
needed to be linked in real time and on a massive scale.’ The efforts
involved in developing such a system were monumental at the time and
‘had a most surprising and underappreciated impact: it helped lay the
foundation for networking computers, and ultimately for the internet’ (Rid,
2016: 79). In the 1960s, the US military, through its Defense Advanced
Research Projects Agency (DARPA), would further explore the
development of major computer networking systems. These efforts resulted
in the Advanced Research Projects Agency Network or ARPANET (Hafner
and Lyon, 1996). The aim was to establish a means by which the secure and
resilient communication and coordination of military activities could be
made possible. In the political and strategic context of the Cold War, with
the ever-present threat of nuclear confrontations, such a network was seen
as a way to ensure that critical communications could be sustained, even if
particular ‘points’ within the computer infrastructure were damaged by
attack. The ARPANET’s technology would allow communications to be
broken up into ‘packets’ that could then be sent via a range of different
routes to their destinations, where they could be reassembled into their
original form. Even if some of the intermediate points within the network
failed, they could simply be bypassed in favour of an alternate route,
ensuring that messages reached their intended recipients. The creation of
the network entailed the development not only of the appropriate computer
hardware, but also of protocols, the codes and rules that would allow
different computers to ‘understand’ each other. This development got under
way in the late 1960s, and by 1969 the ARPANET was up and running,
initially linking together a handful of university research communities with
government agencies.
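The packet-switching principle described above can be illustrated with a short conceptual sketch. This is not an implementation of any actual ARPANET protocol; the function names and packet format are purely illustrative. A message is split into numbered packets, which may travel by different routes and arrive out of order, and the sequence numbers allow the destination to reassemble the original:

```python
# Conceptual illustration of packet switching: a message is split into
# numbered packets; even if they arrive out of order (having travelled
# different routes), the sequence numbers let the recipient rebuild it.

def to_packets(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message, whatever order the packets arrived in."""
    return ''.join(chunk for _, chunk in sorted(packets))

packets = to_packets('critical communications', size=5)
packets.reverse()  # simulate out-of-order arrival via different routes
print(reassemble(packets))  # -> critical communications
```

The resilience the text describes comes from the same idea: because each packet is routed independently, the loss of any intermediate node simply forces its packets onto alternate routes, and the destination can still reassemble the message in full.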
From the early 1970s further innovations appeared, such as electronic mail
applications, which expanded the possibilities for communication. Other
networks, paralleling the ARPANET, were established such as the UK’s
JANET (Joint Academic Network) and the US’s NSFNET (belonging to the
American National Science Foundation). By using common communication
protocols, these networks could be connected together, forming an inter-net,
a network of networks. A major impetus for the emergence of the internet as we now know it came in 1990, when the US authorities released the ARPANET to civilian control, under the auspices of the National Science Foundation. The same year saw the development of a web browser
(basically an information-sharing application) by researchers at the CERN
physics laboratory in Switzerland. Dubbed the World Wide Web (www),
this software was subsequently elaborated by other programmers, allowing
more sophisticated forms of information exchange such as the sharing of
images as well as text. The first commercial browser, Netscape Navigator,
was launched in 1994, with Microsoft launching its own Internet Explorer
the following year. These browsers, and others available at the time, made
internet access possible from personal computers (PCs).
In the early days of ARPANET, few businesses saw the commercial
potential of the internet. For example, as Curran (2010: 18) explains, ‘the
commercial giant AT&T was actually invited in 1972 to take over
ARPANET, the forerunner of the internet, and declined on the grounds that
it lacked profit potential.’ By the mid-1990s, however, numerous
commercial internet service providers (ISPs) entered the market, offering
connection to the internet for anyone with a computer and access to a
conventional telephone line. These connection services, while popular, were
slow and offered a very limited capacity for transmitting data. They have
since been supplanted by much faster broadband connections as well as
services offering data connections to mobile devices such as phones. Since
the commercialization of the internet in the mid-1990s, its growth has been
incredibly rapid. Between 1994 and 1999 the number of countries
connected to the internet increased from 83 to 226 (Furnell, 2002: 7). In
December 1995 there were an estimated 16 million internet users
worldwide; by May 2002, this figure had risen to over 580 million, almost
10 per cent of the world’s total population (IWS, 2018). As of June 2017,
the total number of internet users had reached an estimated 3.89 billion,
comprising some 51.7 per cent of the global population (IWS, 2017). It is
crucial, however, to bear in mind that, despite this phenomenal growth,
access to the internet remains highly uneven both between countries and
regions, and within individual nations. The technical capacity enabling
internet access (PCs, software, reliable telecommunications grids) is
unevenly distributed: for example, an estimated 84.2 per cent of households
in Europe have internet access, while the comparable figure for the African
continent is around 18 per cent (ITU, 2017). Unequal access also follows
existing lines of social exclusion within individual countries – factors such
as employment, income, education, ethnicity, gender, and disability are
reflected in the patterns of internet use (Castells, 2002: 208–223; ITU,
2017; Miller, 2011: 97–105). These inequalities are criminologically
important, as they tell us something about the likely social characteristics of
both potential cybercriminal offenders and their potential victims (a point
we shall return to later in the chapter).
It is further worth considering not only what kinds of people are online,
where, and in what numbers, but also what they do in online environments.
As noted earlier, the range of social practices that people legitimately
engage in will create distinctive opportunities for offending – what
criminologists call ‘opportunity structures’ arising from people’s online
‘routine activities’ (Grabosky, 2001; Newman and Clarke, 2003). Therefore
one of the most important areas of internet usage relates to its applications
in electronic business, or e-commerce. Businesses increasingly use the
internet as a routine part of their activities, ranging from research and
development, to production, distribution, marketing and sales. These uses
create a range of criminal opportunities: for example, the theft of trade
secrets and sensitive strategic information (Jackson, 2000), the disruption of
online selling systems, and the fraudulent use of credit cards to obtain
goods and services (Grabosky and Smith, 2001; Smith, 2010). The
increasing prevalence of online banking and similar financial services
likewise generates opportunities for financial crimes that exploit systems
for identity authentication (Gupta et al., 2011). Similarly, the internet is
increasingly being mobilized by a range of actors for political purposes –
governments use it to inform and consult citizens, political parties use it to
recruit supporters, pressure groups use it to organize campaigns and raise
funds, and so on. Consequently, we see the emergence of cybercrimes that
are political in nature; these range from the sabotage and defacement of
official websites, and the use of the internet to instigate hate campaigns by
far-right extremists, to the activities of terrorist groups aiming to recruit
members and raise funds (Denning, 1999: 228–231; 2010; Whine, 2000).
Another important area of internet activity relates to leisure and cultural
consumption. People in increasing numbers are turning to the internet to
access everything from music and movies to celebrity gossip and online
video gaming. Yet again, the demand for such goods creates opportunities
exploited by the criminally inclined – these range from the distribution of
obscene imagery to the trade in pirate audio and video recordings, and
computer software (Akdeniz, 2000; McCourt and Burkart, 2003; Yar,
2005a). A further notable feature of the contemporary internet is the huge
public uptake of new social media platforms (such as Facebook, Twitter and
the like) which generate new patterns of vulnerability to misuse and abuse
(Yar, 2012b). The emergence of the ‘Internet of Things’ (IoT), a term which
‘quite literally means “things” or “objects” that connect to the internet – and
each other,’ has also introduced new opportunity structures in internet use
(Greengard, 2015: 15). IoT includes a range of consumer technologies now
available like fitness bands, smartwatches, kitchen appliances, laundry
machines, scales, thermostats, and automobiles (to name just a few) that
connect to the internet to allow the provision of additional services to users.
It also includes a range of commercial, industrial and governmental devices
linked to the internet. While the IoT has provided convenience and novelty
for many consumers, businesses and governments, it has also become ‘a
hotbed for hacking, data breaches, malware, cyber attacks, snooping, and
myriad other problems’ (Greengard, 2015: 24). These examples suggest that
illegitimate online activities must be viewed not in isolation, but as deeply
interconnected with their legitimate counterparts. It appears that each new internet development brings with it a range of both licit and illicit uses.
Yet the internet has a ‘depth’ to it, and social activity occurs even in its
abyssal layers. Most internet users only ever access the ‘surface web’ which
consists of content indexed through search engines as part of the World
Wide Web – the realm of social media, funny cat pictures, and news
websites. Attention is increasingly being given to activity that occurs in the
‘deep web’, that is, content not indexed by search engines, which comprises the bulk of what is available on the internet (for example, information secured behind paywalls, email accounts, etc.). Within this collection of unindexed
websites is what some refer to as the ‘dark web’ or ‘a collection of
thousands of websites that use anonymity tools like Tor [The Onion Router]
and I2P [The Invisible Internet Project] to hide their IP addresses’
Greenberg, 2014). The key distinction between dark websites and surface websites is that the former ‘hide the IP addresses of the servers that run them’ (ibid.). Dark web tools often protect the geolocation data of their users as well. For many, the term dark web sounds
insidious and has become synonymous with a litany of illegal marketplaces
like the infamous (and now defunct) Silk Road. Though it is true that the
dark web contains numerous websites that peddle contraband (like guns,
stolen data and drugs), supply illicit content (such as illegal forms of
pornography) and provide venues to discuss illegal matters, criminals are
not the only ones with an interest in the secrecy the dark web can provide.
For example, the dark web may also be used by political activists trying to
avoid government surveillance and censure or other persons wishing to
maintain their privacy online.
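The layered design that lets Tor-style tools hide addresses can be conveyed with a toy sketch. The wrap/unwrap below is a stand-in for encryption, not how Tor actually works (real onion routing uses public-key cryptography and telescoping circuits); the point is only that each relay can peel one layer, learning the next hop but never the full route.

```python
def wrap(message: str, route: list[str]) -> str:
    """Add one layer per relay, innermost layer first (toy stand-in for encryption)."""
    for relay in reversed(route):
        message = f"[{relay}|{message}]"
    return message

def unwrap(onion: str, relay: str) -> str:
    """A relay peels its own layer, exposing only the next one."""
    prefix = f"[{relay}|"
    assert onion.startswith(prefix) and onion.endswith("]")
    return onion[len(prefix):-1]

route = ["relay1", "relay2", "relay3"]
onion = wrap("hello", route)   # "[relay1|[relay2|[relay3|hello]]]"
for relay in route:            # each hop sees exactly one layer
    onion = unwrap(onion, relay)
assert onion == "hello"
```

In this sketch relay1 sees only that the next hop is relay2, and relay3 sees the message but not its origin, which is why neither criminals nor activists using such networks are easily traced.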
1.4 Defining and Classifying Cybercrime
One of the key problems facing scholars of cybercrime is agreeing upon
definitions and categories of cybercrime. Part of the difficulty in generating
a consensus may stem from the ambiguous nature of the ‘cyber’ prefix
itself. Cyber has a complicated history. The term originally was coined by
Norbert Wiener and Arturo Rosenblueth in the 1940s as they pioneered the
field of ‘cybernetics’ (Wiener, 1948). Wiener and his contemporaries saw
cybernetics as an emergent field focused on machines and feedback systems
(Rid, 2016). In this context, cyber was derived from a Greek word for
‘steersman’ (Wiener, 1948: 19). The pioneers of cybernetics used the term
in a particular way, focusing on the development of machines that could
reciprocally interact with their environments. The field was as much
philosophical as it was technical, imagining a future of autonomous or
semi-autonomous machinery. It became not only a serious academic area,
but its ideas resonated with a populace that was both fascinated by and
afraid of high technology.
While cyber originally had a specific meaning within the field of
cybernetics, its increasing use as a portmanteau meant the prefix drifted
further and further away from its original use by the cyberneticists. Amidst
this hubbub, cyber became a popular way to describe seemingly anything
related to computers and their potential. We might argue that the term came
to possess a certain superficiality – though stylish and futuristic sounding,
cyber in its current use does not seem to lend conceptual specificity to the
phenomena to which it is ascribed. The popularity of cyber soared in the 1980s, for example, when it resurfaced as a way to describe and categorize an emergent genre of science fiction and its associated concepts.
During this period William Gibson (1984) introduced ‘cyberspace’ to the
popular lexicon through his acclaimed novel Neuromancer. In a 2013 Q&A
session for the New York Public Library, he stated that cyberspace
constituted an attempt to describe an ‘arena in which I could set science
fiction stories’ oriented around computers and computer-generated spaces
(Gibson, 2013). This new storytelling landscape, he explained, ‘needed a
really hot name … dataspace wouldn’t work and infospace wouldn’t work
but cyberspace … it sounded like it meant something or it might mean
something but as I stared at it in red sharpie on a yellow legal pad, my
whole delight was that I knew it meant absolutely nothing so I would then
be able to specify the rules for the arena’ (ibid.).
Thus cyber became a ‘plug-and-play’ prefix that could be conjoined to any
noun or verb to denote some relationship to computers and the internet.
Throughout the 1980s and 1990s, uses of cyber proliferated to include
cyberspace, cybersex, cybershopping, cybersurfing, and, of course,
cybercrime, among others (ironically, Norbert Wiener disapproved of cyber
portmanteaus and said that they sounded ‘like a streetcar making a turn on
rusty nails’ [as cited in Rid, 2016: 103]). Media theorist McKenzie Wark
(1997: 154) describes the problem as ‘cyberhype’ or the use of certain
prefixes to give the illusion of explanatory power and significance to
concepts. As he explains:
The problem with cyberhype is the easy assumption that the
buzzwords of the present day are in some magical way instantly
transformable into concepts that will explain the mysterious
circumstances that generated them as buzzwords in the first place.
Cyber this, virtual that, information something or other. Viral
adjectives, mutated verbs, digital nouns. Take away the number you
first thought of and hey presto! Instant guide to the art of the new age,
cutting edge, psychotropic anything-but-postmodern what have you.
Not so much a theory as a marketing plan. (Wark, 1997: 154)
At some point, however, we collectively decided that not every online
activity needs to be designated ‘cyber’. Instead of ‘cybersurfing’, for
instance, we now say ‘browsing the web’. ‘Cybershopping’ is just ‘online
shopping’. Even terms like ‘cyberspace’ and ‘cybersex’ have mostly fallen
by the wayside. Cyber endures in certain forms, however, most notably in
its application to harmful or illicit activities like cybercrime, cyberstalking,
cyberharassment and cyberterrorism. While originally uses of cyber were
‘pregnant with the promise of technology’, they have since come to connote
the dangers of the internet (Steinmetz and Nobles, 2018: 3; see also Wall,
2012: 5; Yar, 2014). Thus cyber no longer seems to refer to the field of cybernetics; instead, it describes anti-social internet activity. These negative
connotations persist as politicians and pundits rail against the threats (both
real and imagined) that ‘cyber-attacks’, ‘cybercriminals’ and
‘cyberterrorists’ pose to our collective interests. Despite any issues that
surround cyber, the prefix has become the nomenclature of choice within
criminology to describe computer network-related crimes.
The conceptual ambiguity endemic to cyber applies to cybercrime as well.
A major problem for the study of cybercrime is the absence of a consistent
current definition, even among those law enforcement agencies charged
with tackling it (NOP/NHTCU, 2002: 3). As Wall (2001a: 2) notes, the term
‘has no specific referent in law’, yet it is often used in political, criminal
justice, media, public and academic discussions. We have already suggested
that instead of trying to grasp cybercrime as a single phenomenon, it might
be better to view the term as signifying a range of illicit activities whose
‘common denominator’ is the central role played by networks of ICT in
their commission. A working definition along these lines is offered by
Thomas and Loader (2000: 3) who conceptualize cybercrime as those
‘computer-mediated activities which are either illegal or considered illicit
by certain parties and which can be conducted through global electronic
networks’. Thomas and Loader’s definition contains an important
distinction that warrants further reflection, namely that between crime (acts
explicitly prohibited by law, and hence illegal) and deviance (acts that
breach informal social norms and rules and hence considered undesirable or
objectionable). Some analysts of cybercrime focus their attention on the
former, that is, those activities that attract the sanctions of criminal law (see,
for example, Akdeniz et al., 2000). Others, however, take a broader view,
including within discussions of cybercrime some acts which may not
necessarily be illegal, but which large sections of a society might deem to
be deviant. A prime example of this latter kind of activity would be
sexually explicit speech and imagery online (see, for example, DiMarco,
2003). Given that the primary focus of this book is crime rather than
deviance, discussions will be directed in the main toward those internet-related activities that carry formal legal sanctions. It is important to bear in
mind, however, that crime and deviance cannot always be strictly separated
in criminological inquiry. For example, the widespread perception that a
particular activity is deviant may fuel moves for its formal prohibition
through the introduction of new laws. Alternatively, while particular
activities may be illegal, large sections of the population may not
necessarily see them as deviant or problematic, thereby challenging and
undermining attempts to outlaw them (one such instance, discussed in Chapter 5, is that of media piracy). Such dynamics, in which boundaries
between the criminal and the deviant are socially negotiated, are a recurrent
feature of contemporary developments around the internet.
Starting with an understanding of cybercrime as ‘computer-mediated
activities that are illegal’, it is possible to further classify such crime along a
number of different lines. One commonplace approach is to distinguish
between ‘computer-assisted crimes’ (those crimes that pre-date the internet,
but which take on a new life in cyberspace, e.g. fraud, theft, money
laundering, sexual harassment, hate speech, pornography) and ‘computer-focused crimes’ (those crimes that have emerged in tandem with the
establishment of the internet, and could not exist apart from it, e.g. hacking,
viral attacks, website defacement) (Furnell, 2002: 22; Lilley, 2002: 24). On
this classification, the main way in which cybercrime can be sub-divided is
according to the role played by the technology, that is, whether the internet
plays a merely ‘contingent’ role in the crime (it could be done without it,
using other means), or if it is absolutely ‘necessary’ (without the internet,
no such crime could exist). This kind of classification is adopted by
policing bodies such as the UK’s National Crime Agency and its Cyber
Crime Unit.
While the above distinction is helpful, it may be rather limited for
criminological purposes, as it focuses on the technology at the expense of
the relationships between offenders and their targets or victims. One
alternative is to use existing categories drawn from criminal law into which
their cyber-counterparts can be placed. Thus, Wall (2001a: 3–7) sub-divides
cybercrime into four established legal categories:
Cyber-trespass – crossing boundaries into other people’s property
and/or causing damage, e.g. hacking, defacement, viruses.
Cyber-deceptions and thefts – stealing (money, property), e.g. credit
card fraud, intellectual property violations (a.k.a. ‘piracy’).
Cyber-pornography – breaching laws on obscenity and decency.
Cyber-violence – doing psychological harm to or inciting physical
harm against others, thereby breaching laws relating to the protection
of the person, e.g. hate speech, stalking.
Such classification can be seen to sub-divide cybercrime according to the
object or target of the offence: the first two categories comprise ‘crimes
against property’, the third covers ‘crimes against morality’, and the fourth
relates to ‘crimes against the person’. To these we may also wish to add
‘crimes against the state’, those activities that breach laws protecting the
integrity of the nation and its infrastructure (e.g. terrorism, espionage and
disclosure of official secrets). Such a classification is helpful, as it allows us
to relate cybercrime to existing conceptions of prohibited and harmful acts.
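Wall's four-fold scheme lends itself to a simple lookup table. The sketch below uses only the example offences named above; the `categorize()` helper is purely illustrative, not part of Wall's account.

```python
# Wall's (2001a) four legal categories, populated with the chapter's examples.
WALL_CATEGORIES = {
    "cyber-trespass": ["hacking", "defacement", "viruses"],
    "cyber-deceptions and thefts": ["credit card fraud", "piracy"],
    "cyber-pornography": ["obscene content"],
    "cyber-violence": ["hate speech", "stalking"],
}

def categorize(offence: str) -> str:
    """Return the legal category for a known offence, per Wall's scheme."""
    for category, offences in WALL_CATEGORIES.items():
        if offence in offences:
            return category
    return "unclassified"

assert categorize("hacking") == "cyber-trespass"
assert categorize("stalking") == "cyber-violence"
```

A fifth key, for ‘crimes against the state’ (terrorism, espionage, disclosure of official secrets), could be added to the table in the same way.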
1.5 What’s ‘New’ about Cybercrime?
The classification outlined thus far, while indispensable, may not by itself
illuminate all the relevant characteristics of cybercrime. By relating such
crime to familiar types of offending behaviour, it stresses those aspects of
cybercrime that are continuous with ‘terrestrial’ crimes. Consequently, it
does little in the way of isolating what might be qualitatively different or
new about such offences and their commission, when considered from a
broader, non-legal viewpoint. This question of the ‘novelty’ of cybercrime
is an important one for criminologists. Some argue that cybercrime is essentially the same as ‘old-fashioned’ non-virtual crime, simply exploiting new tools that aid the offender: what Grabosky (2001) dubs ‘old wine in new bottles’. Others, however, insist that it represents a new form
of crime that is radically different from the kinds of ‘real world’ crimes that
predate it. Among this latter group, many criminologists (especially those
with a sociological orientation) focus their search for novelty upon the
social-structural features of the environment (cyberspace) in which such
crimes occur. It is widely held that this environment has a profound impact
upon how social interactions can take place (both licit and illicit), and so
transforms the potential scope and scale of offending, inexorably altering
the relationships between offenders and victims, and the potential for
criminal justice systems to offer satisfactory solutions or resolutions
(Capeller, 2001). Particular attention is given to the ways in which the
establishment of cyberspace variously ‘transcends’, ‘explodes’,
‘compresses’ or ‘collapses’ the constraints of space and time that limit
interactions in the ‘real world’. Borrowing from sociological accounts of
globalization as ‘time–space compression’ (Harvey, 1989), theorists of the
internet suggest that cyberspace makes possible near-instantaneous
encounters and interactions between spatially distant actors, creating
possibilities for ever-new forms of association and exchange (Shields,
1996). Criminologically, this seems to render us vulnerable to an array of
potential predators who can reach us almost instantaneously, untroubled by
the normal barriers of physical distance. Moreover, the ability of the
potential offender to target individuals and property is seemingly amplified
by the internet – computer-mediated communication (CMC) enables a
single individual to reach, interact with, and affect thousands of individuals
at the same time. Therefore the technology acts as a ‘force multiplier’
enabling individuals with minimal resources to generate potentially huge
negative effects (mass distribution of email scams and distribution of
viruses being two examples). In addition, great emphasis is placed on how
the internet enables the manipulation and reinvention of social identity –
cyberspace interactions give individuals the capacity to reinvent
themselves, adopting new virtual personae potentially far removed from
their ‘real world’ identities (Boellstorff, 2010; Poster, 1990; Turkle, 1995).
From a criminological perspective, this is viewed as a powerful tool for the
unscrupulous to perpetrate offences while maintaining anonymity through
disguise (Joseph, 2003: 116–118; Snyder, 2001: 252), and a formidable
challenge to those seeking to track down offenders.
From the above, we can conclude that it is the novel social-interactional
features of the cyberspace environment (primarily the collapse of space–
time barriers, many-to-many connectivity, and the anonymity and
changeability of online identity) that make possible new forms and patterns
of illicit activity. It is this difference from the ‘terrestrial world’ of
conventional crimes that makes cybercrime distinctive and original.
1.6 Explaining Cybercrime
The discipline of criminology has been concerned throughout its history
with attempts to uncover the underlying causes behind law-breaking
behaviour. Thus, theories of crime purport to locate the forces that propel or
incline people toward transgressing society’s rules and prohibitions. Such
explanations have, inevitably, been based upon data relating to criminal
activity in ‘real-world’ settings and situations. The emergence of the
internet poses challenges to existing criminological perspectives in so far as
it exhibits structural and social features that diverge considerably from
conventional ‘terrestrial’ settings. It is by no means clear whether and to
what extent established theories are compatible with the realm of
cyberspace and the crimes that occur within it. Though Chapter 2 explores
criminological theory in detail, it is worth highlighting two such challenges
for criminology here:
1. The problem of ‘where’: Many criminological perspectives are based,
implicitly or explicitly, upon ‘ecological’ assumptions. That is to say,
they view crimes as occurring within particular places that have
important defining social, cultural and material characteristics. It is in
the distinctive features of such local environments that the causes of
the crime are supposedly to be found. Recent decades have seen the
development of influential criminologies that focus upon the spatial
organization of lived environments, and explain patterns and
distributions of offending in terms of the ways in which such
environments are configured. So, for example, ‘routine activity’
approaches focus on how potential offenders are able to converge in
space and time with potential targets, thereby creating the conditions in
which offending is able to take place (Cohen and Felson, 1979; Felson,
1998). Such thinking has also inspired a range of crime mapping,
measurement and prevention programmes, again focused on
identifying ‘criminogenic’ environments and localities whose crime-inducing characteristics can then be removed (Fyfe, 2001). However,
such approaches run into difficulties when we consider cybercrimes.
The environment in which such crimes take place, cyberspace, cannot
be divided into distinctive spatial locations in any straightforward
manner, in the way we can distinguish in the ‘real world’ between
neighbourhoods and districts, urban and suburban, the city and the
country and so on. Rather, cyberspace can be viewed as basically ‘anti-spatial’ (Mitchell, 1995: 8), an environment in which there is ‘zero
distance’ between all points (Stalder, 1998), so that identifying
locations with distinctive crime-inducing characteristics becomes well-nigh impossible (Yar, 2005). The inability to answer the question of
‘where’ crimes take place in cyberspace indicates that criminological
perspectives based on spatial distinctions may be of limited use.
2. The problem of ‘who’: Criminological theories have also sought to
understand why it is that some individuals engage in law-breaking
behaviour while others do not. Official statistics indicate that offending
is not only spatially but also socially located; the profile of offenders
shows a preponderance of those with certain shared characteristics.
One such characteristic has been the over-representation among known
offenders of those from socially, economically, culturally and
educationally marginalized backgrounds. Consequently the fact of
‘deprivation’ (whether ‘relative’ or ‘absolute’) has come to be causally
linked to offending behaviour (Finer and Nellis, 1998; Hagan and
Peterson, 1994; Lea and Young, 1984). Such inferences have not
enjoyed universal acceptance, as Marxist and other ‘critical’
criminologists have claimed that the association of socio-economic
marginality with criminal activity is more a product of the iniquitous
class-based prejudices of the criminal justice system than of any real
concentration of offending among particular social groups. However,
much mainstream criminology gives considerable credibility to the
view that the ‘crime problem’ is one predominantly centred upon those
who suffer social exclusion (although controversy continues to rage
between those on the political Left who find the causes of such
exclusion in economic structures and processes [Wilson, 1996], and
those on the Right who attribute it to the individual’s ‘fecklessness’
and irresponsibility [Murray, 1984]). Wherever we stand on the
ultimate origins of exclusion, the relationship between marginality and
offending is an established feature of criminological explanation.
When we come to consider cybercrime, however, this correspondence
appears to break down. It was noted earlier that the capacity to access
and make full use of the internet is unevenly distributed across society,
with those in the most marginal positions enjoying least access.
Conversely, the skills and resources required to commission offences
in cyberspace are concentrated among the relatively privileged: those
enjoying higher levels of employment, income and education.
Consequently, the social patterns of internet criminality may well turn
out to be rather different from those typically identified in the
terrestrial world, with cyber-offenders being ‘fairly atypical in terms of
traditional criminological expectations’ (Wall, 2001a: 8–9). If this is
the case, then recourse to concepts such as ‘marginality’ and
‘exclusion’ to explain the origins of offending behaviour, so prevalent
in relation to ‘real-world’ criminology, might be of extremely limited
value when attempting to explain the genesis of cybercrimes.
The foregoing discussion suggests that criminology, as much as criminal
justice, faces challenges from the emergence of cybercrime. Rather than
simply being able to transpose an existing stock of empirical assumptions
and explanatory concepts onto cyberspace, the appearance of internet
crimes might require considerable theoretical innovation. Criminology itself
may need to start looking for some ‘new tools’ for these ‘new crimes’.
1.7 Challenges for Criminal Justice and
Policing
From the discussion thus far it is clear that the internet has distinctive
features that shape the crimes which take place in cyberspace. These
features pose difficulties for tackling crime when approached by the
established structures and processes of criminal justice systems. Not least
among these is that policing has historically followed the organization of
political, social and economic life within national territories. Moreover,
crime control agencies such as the police traditionally operate within local
boundaries, focusing attention and resources on crimes occurring within
their ‘patch’ (Lenk, 1997: 129; Wall, 2007: 160). Yet cybercrime, given the
global nature of the internet, is an inherently de-territorialized phenomenon.
Crimes in cyberspace bring together offenders, victims and targets that may
well be physically situated in different countries and continents, and so the
offence spans national territories and boundaries. While there are ongoing
attempts to strengthen transnational policing through agencies such as
Europol and INTERPOL (Bowling and Foster, 2002: 1005–1009), these are
largely focused upon sharing intelligence related to large-scale organized
crime. That said, increasing efforts are being made to facilitate transnational
cooperation between countries investigating cybercrime. For instance, a
more focused transnational initiative is the EU high-tech crime agency
European Network and Information Security Agency (ENISA), established
in 2004; yet its role is not directly investigatory, but restricted to
coordinating investigations into cybercrime by police in member countries
(Best, 2003). The United Nations fosters international cooperation in
cybercrime investigations also through the United Nations Office on Drugs
and Crime (UNODC), specifically through their Global Programme on
Cybercrime. This effort, however, primarily operates under the mandate to
‘assist Member States in their struggle against cyber-related crimes through
capacity building and technical assistance’ (UNODC, 2018b). In the US,
the Bureau of International Narcotics and Law Enforcement Affairs (INL) –
an arm of the US State Department – has also been actively fostering
international relationships to combat cybercrime issues, though these efforts
are primarily focused on promoting and exporting American policing
strategies and policies as well as reinforcing the ability of the US to
investigate transnational crimes that impact itself. The point is that efforts –
including others not enumerated here – are being made but are incremental
rather than revolutionary in the development of international cybercrime
investigative capabilities.
Further problems arise when we consider the constraints of limited
resources and insufficient expertise. For example, the UK’s National Hi-Tech Crime Unit (NHTCU) was established in 2001, comprising 80
dedicated officers and with a budget of £25 million; however, this
amounted to less than 0.1 per cent of the total number of police, and less
than 0.5 per cent of the overall expenditure on ‘reduction of crime’ (Home
Office, 2002; Wales, 2001: 6). Moreover, by 2009 the NHTCU had been
absorbed into the newly established Serious and Organized Crime Agency
(SOCA), and was subsequently displaced by the Police Central e-Crime
Unit (PCeU). SOCA and the PCeU were both absorbed by the National
Crime Agency and now cybercrimes are handled by the National Cyber
Crime Unit (see discussion of these and other related developments in
Chapter 10). The agency reported that, for the 2016 to 2017 reporting
year, it expended £5.74 million on its National Cyber Crime
Unit, approximately only 1.12 per cent of its total
operating costs (NCA, 2017: 76). In the US, the Bureau of Justice Statistics
collects data from over 3,000 local and state law enforcement agencies.
According to their estimates, while 76 per cent of police departments with
100 or more officers have special units or personnel designated for the
investigation of cybercrime, only 26 per cent of departments with 99 or
fewer officers have such designated expertise
(Reaves, 2015: 9). Unfortunately, 94.7 per cent of police departments in the
US have fewer than 100 officers (Reaves, 2015: 3). Extrapolating from
these statistics, it appears that the vast majority of US police departments do
not have officers assigned to the investigation of cybercrimes. Part of this
issue stems from training, which is often limited as a result of ‘a range of
factors … such as operating budgets and the total available agency
manpower’ (Holt et al., 2015: 44).
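The extrapolation above can be made concrete with a simple weighted-average calculation. The sketch below uses only the percentages cited from Reaves (2015); actual department counts would refine the figure, so this is an illustrative estimate rather than an official statistic:

```python
# Back-of-the-envelope estimate built from the Reaves (2015) figures
# cited above: 76 per cent of departments with 100+ officers, and
# 26 per cent of smaller departments, have designated cybercrime
# personnel; 94.7 per cent of all US departments are in the smaller
# category.
share_small = 0.947               # proportion of departments with <100 officers
share_large = 1.0 - share_small   # proportion with 100 or more officers
unit_rate_small = 0.26            # smaller departments with cyber expertise
unit_rate_large = 0.76            # larger departments with cyber expertise

# Weighted average: share of ALL departments with designated expertise
overall = share_small * unit_rate_small + share_large * unit_rate_large
print(f"Estimated share of departments with cybercrime expertise: {overall:.1%}")
```

On these figures the estimate comes out at just under 29 per cent; in other words, roughly seven in ten US departments would have no designated cybercrime expertise, consistent with the ‘vast majority’ claim above.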
The lack of organizational stability and continuity in the field of cybercrime
policing may itself disrupt efforts to effectively tackle the problem of online
crime. ENISA, the EU agency, labours under similar budgetary constraints.
It was established with an annual allocation of just £17 million (€24.3
million) to coordinate transnational investigations spanning the 25 member
countries (Best, 2003); by 2012, this budget had been cut to a mere £6.8
million (€8.5 million) (ENISA, 2012); while the budget for 2016 had
increased to around £9.5 million (€11 million), it nevertheless remains well
behind even the modest funding that was allocated at inception (ENISA,
2018). In the US, federal investment in law enforcement training and
technologies to combat cybercrime has been increasing. The 2017 federal
budget included $19 billion for cybersecurity investments, a 35 per cent
increase over the previous fiscal year (OMB, 2017). This indicates that
growing concerns about cybercrime are now being reflected in significantly
increased resource allocation. Despite this shift, however, it is worth noting
that such resources (a) are unevenly distributed across nations and regions,
and (b) must keep pace with a problem that continues to grow in scope,
scale and complexity. Consequently, it remains to be seen whether such
investment is effective or adequate to address the challenges at hand.
A lack of appropriate expertise also presents barriers to the effective
policing of cybercrime. Investigating such crimes will often require
specialized technical knowledge and skills, and there is at present little
indication that police have the appropriate training and competence, though
inroads are being made, predominantly through the creation of specialized
officers and units (Bequai, 1999: 17; Reaves, 2015: 9). Moreover, research
indicates that many police do not view the investigation of computer-related
crime as falling within the normal parameters of their responsibilities,
believing instead that these kinds of investigations should be handled by
cybercrime units or specialists (Holt et al., 2015: 63; Hyde, 1999: 9). For instance, in their study
of police officers, Holt et al. (2015: 88) found that:
a large proportion of officers [in their sample] were actually unsure
about what they thought about cybercrime and what should be done
about it. Most of the officers had little direct experience with computer
crime to help them form opinions on these offences. The evidence
seems rather clear, however, that officers would prefer to not become
directly involved in cybercrime cases and would rather see changes in
citizen online behaviour and to the legal system.
The difficulties are further intensified once we consider the problem posed
by different legal regimes across national territories. The move toward an
international harmonization of internet law has already been noted. While
initially many countries lacked the legislative frameworks necessary to
effectively address internet-related crimes (Sinrod and Reilly, 2000: 2),
more countries are building legal frameworks to tackle such offences. The
United Nations Office on Drugs and Crime Cybercrime Repository
indicates that 159 countries have laws addressing cybercrime, amounting to
over 1,300 individual pieces of legislation (UNODC, 2018a). While laws
addressing cybercrime proliferate, issues arise in their implementation and
execution. Attempts to tackle cybercrime legislatively may also fall foul of
existing national laws. In one example, the US introduction of
the Communications Decency Act in 1996, aimed at curbing ‘offensive’
and ‘indecent’ images on the internet, was partly overturned as some of its
provisions were held to breach constitutional guarantees relating to freedom
of speech and expression (Biegel, 2003: 129–136). Even where appropriate
legal measures have been put in place, many countries (especially in the
developing world) simply lack the resources needed to enforce them
(Drahos and Braithwaite, 2002). In countries facing urgent economic
problems, with states that may be attempting to impose order under
conditions of considerable social and political instability, the enforcement
of internet laws will likely come very low down on the list of priorities, if it
appears at all. In fact, this may lead us to conclude that many of the laws
developed internationally to address cybercrime constitute what Drezner
(2004: 484) describes as ‘sham standards’ where ‘governments agree to a
notional set of standards with weak or non-existent monitoring or
enforcement schemes. Sham standards permit governments to claim the de
jure existence of global regulatory coordination, even in the absence of
effective enforcement.’
1.8 Summary
Over the past two decades, cybercrime has become an increasingly widely
debated topic across many walks of life. Our understandings of cybercrime
are simultaneously informed and obscured by political and media
discussions of the problem. It is clear that the rapid growth of the internet
has created unprecedented new opportunities for offending. These
developments present serious challenges for law and criminal justice
systems as they struggle to adapt to crimes that no longer take place in the
terrestrial world but in the virtual environment of cyberspace; such crimes
span the globe through the internet’s instantaneous communication and
afford offenders new possibilities for anonymity, deception and disguise. Society’s increasing
dependence on networks of computer technology renders us ever-more
vulnerable to the failure and exploitation of those systems. Equally, the
emergence of cybercrime poses difficult questions for criminologists and
sociologists of crime and deviance. Such academic disciplines have formed
their theories of and explanations for crime on the basis of assumptions
(about who, what, where and so on) drawn from offending in the terrestrial
world. In so far as the virtual environment of the internet may be radically
different from its terrestrial counterpart, criminology itself is challenged to
adapt its perspectives in order to come to grips with cybercrime, or to
develop new concepts and vocabularies which might better fit with the
online world that we increasingly inhabit.
Study Questions
How does public discourse represent the problem of cybercrime?
What kind of questions does the emergence of cybercrime invite us to ask?
How great a problem are cybercrimes when set alongside those terrestrial crimes with which
we are more familiar?
What is new or different about cybercrime?
What kinds of cybercrimes do everyday users of the internet encounter?
Who are the ‘cybercriminals’? Are they the ‘usual suspects’ of criminal justice and
criminology?
How might the growth of cybercrimes shape the ways in which the internet develops in the
future?
Further Reading
A valuable introduction to a range of cybercrime issues and debates can be found in David S.
Wall’s Cybercrime (2007). For a detailed discussion of a wide variety of cybercrime problems,
by a range of leading scholars in the field, see Yvonne Jewkes and Majid Yar’s (eds)
Handbook of Internet Crime (2010). The growth of the internet and its central place in the
development of the ‘information society’ are explored in Manuel Castells’ ground-breaking
The Information Age: Economy, Society and Culture: Vol. 3, End of Millennium (1998) as well
as his The Internet Galaxy: Reflections on the Internet, Business, and Society (2002). A wide-ranging and accessible introduction to the internet, digital media, and their social and cultural
implications can be found in Vincent Miller’s Understanding Digital Culture (2011). Thomas
Rid also provides what is likely the most comprehensive history of cybernetics and
computational machines in Rise of the Machines: A Cybernetic History (2016). Where the
Wizards Stay Up Late: The Origins of the Internet (1996), by Katie Hafner and Matthew Lyon,
provides an accessible-yet-comprehensive historical account of the internet. For up-to-date and
detailed data on the levels and trends in cybercrime, see the US State of Cybercrime Survey
(previously the Cybersecurity Watch Survey and E-Crime Watch Survey) published by CSO
magazine and the Cyber Security Survey conducted by CERT Australia and the Australian
Cyber Security Centre. The debate about whether or not cybercrime is a new or novel
phenomenon is conducted in Peter Grabosky’s article ‘Virtual criminality: Old wine in new
bottles?’ (2001) and Wanda Capeller’s ‘Not such a neat net: Some comments on virtual
criminality’ (2001). Samuel Greengard provides a useful overview of the Internet of Things in
his book The Internet of Things (2015). Finally, useful discussions about the governance and
regulation of the internet can be found in Brian Loader’s (ed.) The Governance of Cyberspace:
Politics, Technology and Global Restructuring (1997) and John Mathiason’s Internet
Governance: The New Frontier of Global Institutions (2009).
2 Researching and Theorizing Cybercrime
Chapter Contents
2.1 Studying cybercrime
2.2 Criminological theory
2.3 Researching cybercrime
2.4 Summary
Study questions
Further reading
Overview
Chapter 2 examines the following issues:
how criminological theories have been applied to various cybercrime issues;
challenges facing the academic study of cybercrime.
Key Terms
Content analysis
Deterrence theory
Drift and neutralization theory
Ethnography
Experimental design
Feminist criminology
Hidden crime
Interviews
Labelling theory
Official statistics
Radical criminology
Rational choice theory
Recording of crime
Reporting of crime
Routine activities theory
Self-control theory
Social learning theory
Strain theory
Surveys of crime and victimization
2.1 Studying Cybercrime
Crime is a notoriously difficult subject to study. Robbers, drug dealers,
carjackers, embezzlers and others are, for understandable reasons, reluctant to
talk about their illicit activities. The wrong information in the wrong hands
may not only result in arrest and incarceration, but may also bring other
forms of unwanted attention from competitors, victims and other parties. If
one is to be successful as a criminal, it is often best to avoid unnecessary
attention. This makes researching crime problematic as the often
clandestine nature of crime makes charting patterns and trends and gaining
access to criminal populations difficult. In the realm of cybercrime, these
difficulties are often magnified. The internet provides varying degrees of
anonymity to users and its global reach collapses space and time, allowing
offenders to commit crimes across the world nearly instantaneously.
Further, tools have emerged which allow offenders to further obscure their
whereabouts, identities and misdeeds.
Despite these challenges, scholars over the past few decades have
conducted numerous studies of cybercrime and cybervictimization. To
understand these often complex phenomena, researchers use various
methods to gather data and employ theories to help make sense of those
data. In this spirit, the current chapter reviews prominent criminological
theories and gives examples of their application to cybercrimes to provide
the reader with tools for understanding the offences considered throughout
this book. Included in this discussion are classical and neo-classical
theories, social learning theory, drift and neutralization theory, strain theory,
self-control theory, labelling theory, feminist criminology and radical
criminology. Theory alone, however, is not enough to make sense of
cybercrime – we must first find ways to observe such offences and
offenders. This chapter thus ends with a discussion of methodological
challenges facing researchers in addition to a sampling of research methods
used within the field of cybercrime.
2.2 Criminological Theory
In their foundational criminological theory text, Bernard et al. (2016: 1)
define theory in the following terms:
Basically, a theory is part of an explanation. It is a sensible way of
understanding something, of relating it to the whole world of
information, beliefs, and attitudes that make up the intellectual
atmosphere of a people at a particular time or place.
The term theory is often colloquially used as a synonym for ‘guess’ or
‘hypothesis’. Academic theories, however, are more rigorously thought out
than mere guesses. They are the products of systematic observations of the
social world and rigorous debate. Good theories are tested through research
and argumentation to see if they hold up after intense scrutiny. Any failures
indicate that the theory must be either reworked or jettisoned – only the
strongest may survive.
Criminologists use various theories to explain and organize our knowledge
of crime, criminality, criminalization, victimization and crime control.
Because human behaviour is extraordinarily complex, no single theory – as
of yet – is capable of explaining all facets of crime. Yet, each provides some
insight into crime and control and, together, they offer a more complete
picture than we would have without them. To borrow a metaphor from
Kraska and Brent (2011), we might best think of theories as ‘lenses’. Each
one may be used to view a phenomenon and bring it into greater focus and
sharper relief. No single theory may enable an entirely clear picture, but our
ability to draw from multiple theories may offer a more robust view of the
whole. Much as it takes multiple lenses to gain a clear image through a
microscope, we likewise often require multiple theories to gain a clear
picture of cybercrime and control. What follows is a sampling of different
lenses that have been applied to the study of cybercrime, specifically from a
criminological perspective. Criminology, however, is not the only field that
studies cybercrime and related issues. Sociology, economics, psychology,
and other disciplines and fields all contribute to our collective
understanding of these phenomena. This means that each field has its own
theories to contribute. While not diminishing the importance of these bodies
of knowledge, this book limits its discussion to criminology, beginning with
classical and neo-classical criminological thought.
2.2.1 Classical and Neo-Classical Criminology
The Classical School of criminology emerged from eighteenth-century
philosophers like Cesare Beccaria and Jeremy Bentham and signalled a
radical departure from the spiritual or demonological explanations of crime
popular at the time. Among other things, classical philosophers argued that
humans were rational beings imbued with free will. Consequently,
explaining crime was a simple matter; crime was the result of what
Bentham termed a ‘hedonistic calculus’ – that without sufficient deterrents
in place, humans would always pursue pleasure over pain. Thus classical
philosophers paid greater attention to the execution of punishment. These
philosophers built from the enlightenment notion of the ‘social contract’ –
that humans are willing to surrender some of their freedoms to be protected
from those whose unfettered exercise of freedom harms others. Sceptical of
state power, classical philosophers saw state-sanctioned punishment as a
necessary evil to preserve the social contract and maximize happiness. Yet
punishment should only be applied to the extent necessary to achieve a
deterrent effect. In other words, sanctions should only be harsh enough to
discourage future offending by that person and others who witness the
sanctioning. Beccaria detailed the three components of effective
punishment: (1) it should be swift in execution, (2) the offender should be
fairly certain that punishment will follow an infraction, and (3) it should
be sufficiently severe that the pains inflicted outweigh the pleasure
derived from the crime.
The classical school revolutionized criminal justice across the Western
world and greatly informed contemporary notions of due process and penal
sanctions. Within academic criminology, however, classical thought fell out
of favour in the nineteenth century and did not re-emerge until the 1970s in
what is commonly referred to as neo-classical criminology. Scholars
working in this area combine insights from classical theories with those
from other disciplines including economics, psychology and political
science. Three general theoretical approaches emerged within neo-classical
criminology: perceptual deterrence theory, rational choice theory and
routine activities theory.
While previous deterrence theory tended to assume that all persons would
calculate the swiftness, certainty and severity of punishment equally,
perceptual deterrence theorists argue that perceptions of punishment may
vary between individuals and, consequently, the deterrent effect of
punishment will also vary (Nagin, 1998). In other words, one person may
perceive a punishment as unlikely to occur whereas another is relatively
certain that punishment will be meted out for an infraction. For these
theorists, the latter person is far more likely to be deterred from crime. Thus
perceptual deterrence theories often focus on factors that influence
perceptions of punishment in terms of certainty, severity and celerity. This
theoretical approach has garnered some support for the effectiveness of
legal sanctions, though this effect is perhaps not as strong as might be
expected (Paternoster, 2010: 765).
Multiple scholars have applied deterrence/perceptual deterrence theory to
cybercrime. Deterrent effects stemming from threat or perceived threat of
consequences have been found for cybercrimes including cyberbullying
(Patchin and Hinduja, 2016), distributed denial of service attacks (Hui et al.,
2017) and spam emails (Kigerl, 2015). Studies of other illegal activities
have found little to no deterrent effects from punishment in cybercrimes
such as use of unauthorized software on IT systems (Silic et al., 2017).
Other studies offer mixed results; for instance, Guitton (2012: 1031)
examined the effect of governments’ abilities to attribute online behaviour
to perpetrators, which may be useful for deterring cybercrime but only for
‘users who have knowledge about attribution mechanisms (legal or
technical), and whose perception of the certainty of the police catching
them affects their rational decision to commit a crime’. In other words,
increasing capabilities to link crimes to actors may have limited deterrent
effects. In another study, Ladegaard (2017: 15) found that trading illicit
goods on online black markets actually increased following major media
coverage of law enforcement busts of such markets and the prosecution of
those involved.
Perhaps the most noteworthy contemporary cybercrime and deterrence
research examines the use of warning banners as deterrents on computer
systems (e.g. Maimon et al., 2013, 2014; Testa et al., 2017; Wilson et al.,
2015). Warning banners advertise the illegality of unauthorized computer
intrusion and the potential consequences to actual or would-be intruders in
computer networks. This research generally uses computer systems
(‘honeypots’) designed to attract intruders and monitor their behaviour as
they attempt to access and probe a system. The findings are generally
supportive of deterrence theory, particularly restrictive deterrence.
Restrictive deterrence (Gibbs, 1975) argues that deterrence measures may
not entirely stop criminal behaviour, but criminals may modify their
behaviour in reaction to the threat of sanctions. One study by Maimon et al.
(2013), for instance, found that though warning banners did not make
computer intruders desist or reduce the frequency of their trespasses,
duration of trespassing was shortened. In other words, trespassers may have
adjusted their behaviour when confronted with a measure designed to
increase perceptions of punishment certainty. Though certain issues
confront honeypot and warning banner research (for an overview, see Holt,
2017), they are novel attempts to study the effectiveness of system
administration policies and practices in deterring system trespassing.
Rational choice theory emerged in the late 1960s with the work of scholars
like Gary Becker (1968) who merged the ideas of Beccaria and Bentham
with insights from contemporary neo-classical economics. Perhaps the most
significant criminological rational choice theorists were Ronald Clarke and
Derek Cornish (Clarke and Cornish, 1985; Cornish and Clarke, 1986, 1987)
who focused primarily on what McCarthy (2002: 418) calls the ‘reasoned
offender’ approach to crime. For Cornish and Clarke (1986: 1), rational
choice theory assumes ‘that offenders seek to benefit themselves by their
criminal behavior; that this involves the making of decisions and of choice,
however rudimentary on occasion these processes might be; and that these
processes exhibit a measure of rationality, albeit constrained by limits of
time and ability and the availability of relevant information.’ From this
perspective, decisions to commit crime are not unlike decisions to engage in
most other forms of behaviour – they are made on the basis of situationally
dependent cost–benefit analysis. Thus criminals are not themselves wholly
different from non-criminals. Rather, from this position, everyone is
capable of crime depending on whether circumstances align to make crime
seem the most reasonable choice at the time. Criminals themselves are,
therefore, unremarkable. Rational choice theorists instead turn their
attention toward decisions to engage in crime, both in terms of decisions to
become involved in crime in the first place and decisions to engage in any
particular criminal event (Cornish and Clarke, 1986: 2). By focusing on
decisions and events, rational choice theory readily lends itself toward
policy development, providing insight into how criminal events can be
mitigated through policies tailored to reduce perceptions of crime’s benefit
or increase perceptions of its costs, an approach known as situational crime
prevention.
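This cost–benefit logic can be sketched as a deliberately simplified toy model. All quantities below are hypothetical illustrations, not measures used by Cornish and Clarke: a prospective offender weighs the perceived benefit of a criminal event against the punishment cost discounted by the perceived certainty of being caught, and situational crime prevention works by shifting the terms of that calculation.

```python
# A toy illustration (not from the source) of the cost-benefit logic
# that rational choice theory attributes to offenders. All numbers are
# hypothetical; real perceptions are subjective and situational.

def expected_net_benefit(benefit, p_caught, sanction_cost):
    """Perceived payoff of a criminal event: the gain minus the
    punishment cost discounted by its perceived certainty."""
    return benefit - p_caught * sanction_cost

def offends(benefit, p_caught, sanction_cost):
    # Offend only if the act seems to pay on balance.
    return expected_net_benefit(benefit, p_caught, sanction_cost) > 0

# Baseline: low perceived certainty of punishment
print(offends(benefit=100, p_caught=0.05, sanction_cost=1000))  # True

# A situational measure (e.g. visible monitoring) raises the
# perceived certainty of sanction
print(offends(benefit=100, p_caught=0.20, sanction_cost=1000))  # False
```

In this sketch, raising the perceived certainty of sanction from 5 to 20 per cent tips the decision against offending even though the sanction itself is unchanged; this is precisely the logic behind situational measures that increase perceived risk rather than severity.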
Rational choice theory has garnered its fair share of criticism. Tunnell
(2002: 265–269), for instance, raises significant questions regarding the
rationality of many criminal decisions – that many, frankly, appear patently
irrational. Further, many behaviours involve little conscious thought – we
can become so subsumed into the ordinary, practised and the routine that we
can function semi-automatically in our day-to-day lives (Tunnell, 2002).
Others have argued that rational choice may be limited when substance
use or certain emotions are considered (Exum, 2002). Hayward and Young
argue that rational choice theory fails to fully embrace human agency and
choice; it is surprisingly deterministic. As such, they state that ‘rational
choice theory might be better renamed market positivism, for between the
determinants of poor character and opportunity for crime there is only a
small space for the most pallid of market choices’ (Hayward and Young,
2004: 264). Cullen et al. contend that rational choice theory ‘ignores many
empirically established predictors of recidivism’. The result is that ‘this
theory tends to encourage policies and programs that are cost oriented and
thus unduly favorable to punishment and deterrence’ rather than other
forms of correctional intervention (Cullen et al., 2002: 279).
Despite these criticisms, rational choice theory enjoys considerable
popularity within criminology, including among those who study cybercrime
(Bachmann, 2008, 2010; Higgins, 2007; Hutchings, 2013). Research has
indicated that computer security hackers engage in rational decision-making processes when making choices about hacking behaviours
(Bachmann, 2008; Hutchings, 2013). For instance, Hutchings (2013: 109)
found that hackers make assessments concerning risk of punishment or
detection when selecting targets, such as purposefully avoiding
‘government and military targets in order to avoid focus on their activities’.
Similarly, Bachmann (2010: 651) surveyed attendees at a major hacker
conference and found that ‘they tend to prefer rational thinking styles’ and,
as a result of their confidence in their abilities and love of challenges, ‘are
also more prone to engage in potentially risky behaviors than members of
the broader population’. Thus, for Bachmann, these results are generally
supportive of rational choice theory in that hackers make conscious and
thought-out decisions to engage in potentially risky behaviours.
In 1979, Lawrence E. Cohen and Marcus Felson (1979) introduced routine
activities theory (RAT). This theory makes classical assumptions regarding
the rationality of human actors but focuses on evaluating factors that affect
the differential distribution of criminal opportunities across space and time.
For them, social structural changes in a society can augment the prevalence
of opportunities to engage in crime. In other words, offenders are not so
much driven to crime by external forces, but are rather presented with
different levels of opportunity to engage in crime. As Cohen and Felson
(1979: 589) argue, ‘structural changes in routine activity patterns can
influence crime rates by affecting the convergence in space and time of the
three minimal elements of direct-contact predatory violations: (1) motivated
offenders, (2) suitable targets, and (3) the absence of capable guardians
against a violation.’ According to the authors, if any of these three
elements is missing, a crime will not occur.
RAT, as Reyns (2018: 36) explains, was ‘initially built … to explain why
crime rates in the United States had increased substantially following World
War II’. Questions have been raised regarding the applicability of the theory
toward the study of cybercrime. While Yar (2005b) argues that the central
concepts of the theory – motivated offenders, suitable targets and absence
of capable guardians – are modifiable and transposable to the digital realm,
he takes greater issue with a core proposition of the theory: that these
elements must converge in space and time. For Yar (2005b: 424), as ‘the
cyber-spatial environment is chronically spatio-temporally disorganized
[emphasis in original]’, cybercrimes present a unique problem for routine
activities theory. Virtual spaces are too fluid and online actors are not
necessarily predictable in their presence in such digital locations (Yar,
2005b: 417). Further, offenders and targets, according to Yar (2005b: 418)
do not necessarily need to come together in the same moments in time, as
specified in the original theory.
Others, however, insist that RAT is perfectly applicable to cybercrimes and
the online realm. Cohen and Felson (1979) originally argued that increases
in crime rates after World War II stemmed from the revolutionary changes
in the lifestyles and routine activities of people, exposing them to greater
opportunities for victimization. Researchers have also posited that recent
technological advancements have rendered similar shifts in the ‘behavioral
routine activities of individuals, which, in turn, increase vulnerability and
opportunities for victimization’ (Brady et al., 2016: 133). In what has been
called ‘cyberlifestyle-routine activities theory’, scholars address the spatial
problems described by Yar (2005b) by asserting that the convergence of
‘motivated offenders, suitable targets, and guardianship occurs within a
network’ rather than a physical location (Reyns, 2018: 37; Reyns et al.,
2011). Further, the temporal problems described by Yar (2005b) are
addressed by considering that interactions in virtual spaces can be ‘lagged’
– that ‘immediate back-and-forth contact between the victim and offender’
are not necessary (Reyns, 2018: 37).
RAT has been a popular approach in the study of cybercrime (for an
overview, see Reyns, 2018). In one study employing this theory, Maimon et
al. (2013) recorded data from a university intrusion prevention system over
the course of two years. In a challenge to Yar’s (2005b) assertion that the
temporal properties of the internet pose problems for the application of
routine activities theory to cybercrime, the authors found that ‘computer
focused crimes against the university network are more frequent during
university business hours’ (Maimon et al., 2013: 336). In other words, the
daily activities of users on university computers presented more
opportunities for offenders and victims to come together in network and
time. In another example, Holt and Bossler (2012) conducted a study of
malware infection among university students, faculty and staff, finding that
legitimate and routine computer use (checking email, using chat rooms,
shopping, etc.) was not linked to increased malware infection. Their study
did find, however, that participating in cybercrime and viewing online
pornographic material may increase one’s chance of malware infection.
Thus participating in forms of secretive or illicit online activity exposes
persons to less regulated areas of the internet, increasing opportunities
for victimization via malware. The authors also found that greater
computer skill – a measure of the ability to protect oneself online – and
the presence of anti-malware software reduced the likelihood of infection. In
other words, guardianship factors mitigated the occurrence of criminal
events. These are only two examples among the many studies that use this
theory to explain cybercrimes.
2.2.2 Social Learning Theory
Another criminological theory which has been applied to the study of
cybercrime is social learning theory. Edwin H. Sutherland’s (1939, 1947)
differential association theory was one of the first articulations of the
learning processes that contributed to crime and delinquency. Among other
points, Sutherland argued that delinquency was the result of associating
with delinquent peers and being exposed to definitions supportive of
engagement in criminal activity. In particular, he argued that associations
with delinquent peers that were more frequent, longer in duration, greater in
priority and stronger in their intensity, were more likely to transmit
definitions of behaviour conducive to deviance (Sutherland, 1947). In other
words, ‘a person becomes delinquent because of an excess of definitions
favourable to violation of law over definitions unfavorable to law violation’
(Sutherland et al., 1992: 89).
Sutherland’s differential association theory – though important – was not
without its flaws. The theory was difficult to empirically test as some of its
concepts, like deviant definitions, were troublesome to measure. To address
problems with differential association, Ronald Akers, with Robert L.
Burgess, significantly advanced social learning theory by refining and
expanding upon the work of Sutherland (Burgess and Akers, 1966). In
particular, this iteration of social learning theory incorporates insights from
psychological learning theories, particularly those concerning operant
conditioning. Their theory examines differential association with delinquent
peers and deviant definitions while also examining differential
reinforcement, which 'refers to the balance of anticipated or actual rewards
and punishments that follow, or are consequences of, behavior’ (Akers and
Jensen, 2006: 39–40). Further, based on such reinforcements, the theory
argues that ‘Whether individuals will refrain from, or commit, a crime at
any given time (and whether they will continue, or desist, from doing so in
the future) depends on the balance of past, present, and anticipated future
rewards and punishments for their actions’ (Akers and Jensen, 2006: 39–
40). To put it simply, delinquents are likely to engage in deviant or criminal
behaviour when those actions have been positively reinforced. Further,
individuals may also model their behaviour after those engaged in by others
– they may imitate the behaviour of others (Akers and Jensen, 2006: 40).
Notably, imitation is considered more important to initial entry into
criminal activity or deviance, while reinforcement mechanisms are more
vital for continued deviance.
Though there is generally a lack of criminological theorizing and theory
testing on cybercrime, social learning theories have enjoyed a wide
application in the field. Boman and Freng (2018: 56) go so far as to claim
that such theories ‘are the most frequently tested, and most frequently
supported, criminological perspectives that explain cybercrime’. Crimes
like cyberstalking, juvenile sexting, online piracy, computer intrusions,
malware, illicit file manipulation, and others have all been examined
through social learning theory (Burruss et al., 2012; Higgins et al., 2006;
Holt et al., 2010, 2012; Marcum et al., 2014a, 2014b, 2014c; Morris and
Blackburn, 2009; for an overview, see Boman and Freng, 2018).
After laying the groundwork in previous discussions of social learning
theory, Akers (1998) more thoroughly addressed the connection between
individual-level learning processes and aggregate crime rates and patterns
in his book, Social Learning and Social Structure. Here he articulates social
structure–social learning theory (SSSL) which argues that,
Differences in the societal or group rates of criminal behavior are a
function of the extent to which cultural traditions, norms, social
organization, and social control systems provide socialization, learning
environments, reinforcement schedules, opportunities, and immediate
situations conducive to conformity or deviance. (Akers, 1998: 323)
In effect, Akers (1998: 334) proposes an integrated theory that combines
macro-structural theoretical concepts ‘drawn primarily from anomie, social
disorganization, and conflict theories’ with his original social learning
theory. Akers (1998: 330–335) thus argues that the following factors
associated with social structure impact the social learning process: (1)
differential social organization or ‘the known ecological, community, or
geographical differences across systems’ (ibid.: 332); (2) differential
location in the social structure or 'known or probable variations in crime
rates by sociodemographic characteristics, groupings, aggregates,
collectivities, or categories’ including variables like age, race, religion, etc.
(ibid.: 333); (3) social disorganization and conflict, which pertains to
structural theoretical concepts ‘drawn primarily from anomie, social
disorganization, and conflict theories’ (ibid.: 334); and (4) differential
social location in primary, secondary, and reference groups which concerns
‘the small-group and personal networks that impinge directly on the
individual’ like families, friends and schools, for example (ibid.: 334).
SSSL, according to Akers, not only explains why individuals may engage in
crime but also holds that the process of social learning mediates the
connection between social structural factors and individual behaviour.
Holt et al. (2010) apply SSSL to the study of cybercrime. Using a sample of
580 college students, the authors conducted a partial test of SSSL in an
examination of ‘cyber-deviance’, a latent variable composed of
participation in six activities: software piracy, media piracy, viewing online
pornography, plagiarism, using wireless internet connections without
permission, and accessing computer systems or electronic files without
permission. The study found support for the social learning components of
the theory – that factors like reinforcement and imitation helped predict
participation in cyber-deviance. They also found partial support for the
social structural component of SSSL. For instance, the ‘effects of gender
and race, representing two key elements of differential location in the social
structure, were completely mediated by the social learning process’ (Holt et
al., 2010: 53).
2.2.3 Strain Theory
A select few criminological inquiries have applied strain theory to the study
of cybercrime. Though such applications are few in number, Chism and
Steinmetz (2018) argue that this theory has much to offer in explaining an
assortment of cybercrimes including piracy, hacking, online scams,
cyberstalking/harassment and online hate speech. Robert K. Merton is often
described as the progenitor of strain theory, which he first introduced in his
piece entitled ‘Social structure and anomie’ (1938). For Merton, crime was
the result of conflict between a societal emphasis on monetary success and
differential access to legitimate means by which to achieve said success. In
other words, American culture emphasizes the pursuit of the American
Dream (money, a house, etc.) as an all-encompassing cultural goal, but the
social structure does not provide everyone with equal opportunities to
pursue that goal (like a legitimate job).
For those who experience restricted access to legitimate means, Merton
described five adaptations in which they may engage. The first is
conformity, a non-deviant adaptation, which occurs when a person
continues to accept both the institutionally approved goals and means, and
acts accordingly. Deviant adaptations, however, occur when an individual
abandons either the approved goals, means or both. According to Merton
(1938), the most prominent adaptation to strain is innovation, where a
person gives up on the approved means and finds alternative, and likely
illicit, ways to pursue culturally approved goals. Ritualists, on the other
hand, give up on the American Dream and, instead, resign themselves to
plugging away at the approved means without hope for upward mobility.
Retreatists reject both the means and the goals, instead withdrawing from
social life – in a way, they give up. Finally, according to Merton (1938), a
person may rebel, rejecting the approved means and goals and adopting
new ones in their place.
Research applying Mertonian Strain Theory to cybercrime to date has been
limited. What little research exists has been confined to digital piracy. For
instance, Larsson et al. (2012: 107) argue that ‘online piracy can be
conceptualized as an innovative approach to achieving cultural goals’.
Further, they argue that the use of anonymity-protecting services by pirates
(like virtual private networks or VPNs) can be considered a Mertonian
innovation to avoid detection and apprehension. The use of identity
concealment methods may also be a political endeavour – a form of digital
activism. As the authors explain, ‘those who actively construct new
environments that offer new means – file-sharing platforms, anonymization
tools, and so on – may even represent a Mertonian adaptation that cannot
merely be labelled as innovative, but rather as rebellious’ (Larsson et al.,
2012: 108). In a similar manner, Steinmetz and Tunnell (2013) argue that
piracy may be an innovative way to procure content in a society that
emphasizes the prolific consumption of digital content.
Other variants of strain theory have developed over time including Messner
and Rosenfeld’s (2013) Institutional Anomie Theory and Nikos Passas’
(1990, 2000) Global Anomie Theory. But perhaps the variant that has
enjoyed the widest application to cybercrime issues has been Robert
Agnew’s (1992) General Strain Theory. Agnew combined Merton’s strain
theory with insights from the psychological stress literature to create a
theory which posits that criminal acts may be performed as an attempt to
cope with strain created by stressful situations and states. For Agnew (1992:
51, 57, 58), strain is a result of ‘failure to achieve positively valued goals’,
‘removal of positively valued stimuli’ and ‘presentation of negative
stimuli.’ These strains, in turn, may create ‘negative emotions’ in a person
including depression, anxiety and anger, the latter of which Agnew
considered the most criminogenic (Agnew, 1992: 59). Individuals are said
to engage in coping strategies – including possibly crime – to deal with
these emotions. Thus Agnew is interested both in the process through which
stressors lead to criminal coping and in the factors that influence decisions
and opportunities to engage in criminal and non-criminal coping strategies.
Few scholars have applied Agnew’s General Strain Theory to the study of
cybercrime. Most of the literature to date has focused on the role of strain in
computer-mediated forms of bullying among adolescents (Hay and
Meldrum, 2010; Hay et al., 2010; Patchin and Hinduja, 2011; Wright and
Li, 2012). These studies have found that bullying can act as a stressor in
young persons which can lead to behaviours that harm others, as well as
forms of self-harm. Other research has indicated that bullying itself may be
an anti-social method for coping with strain (Patchin and Hinduja, 2011).
Bullying can thus be both a source and a by-product of strain. Another
example of cybercrime behaviour explored through General Strain Theory
is music piracy. Hinduja (2006, 2012) conducted research on populations of
undergraduate students and found that strain was not a significant predictor
of illegal music downloading over the internet.
2.2.4 Drift and Neutralization
Arguing that the criminological theories of the time focused too intensely
on characteristics that distinguish criminals from non-criminals, David
Matza (1964) pointed out that delinquents were actually non-delinquent
most of the time. As Bernard et al. (2016: 220) explain, Matza contended
that ‘most of the time, delinquents are engaged in routine, law-abiding
behaviors just like everyone else, but if you believe the picture painted in
these theories, delinquents should be committing delinquencies all the
time’. Instead, juveniles are ‘very much capable of civil behavior’ (Matza,
1964: 27). Therefore crime and delinquency are not continuous behavioural
patterns for offenders; they ‘intermittently act out’ the role (Matza, 1964:
26). Further, Matza (1964: 22) explains that most youths desist from
offending as they age. Juvenile delinquents are thus ‘casually,
intermittently, and transiently immersed in a pattern of illegal action’
(Matza, 1964: 28). Juvenile delinquents therefore ‘drift’ in and out of crime.
They are more likely to engage in deviant or criminal activities in situations
in which social controls have slackened.
One mechanism which may loosen behavioural constraints on juveniles is
participation in juvenile delinquent subcultures. In 1957, Matza, along with
Gresham Sykes, introduced the ‘techniques of neutralization’. Under this
perspective, even when juveniles participate in delinquent subcultures, ‘the
juvenile delinquent would appear to be at least partially committed to the
dominant social order in that he frequently exhibits guilt or shame when he
violates its proscriptions, accords approval to certain conforming figures,
and distinguishes between appropriate and inappropriate targets for his
deviance’ (Sykes and Matza, 1957: 666). For Sykes and Matza (1957),
delinquents never fully shake dominant cultural values. As such, the
delinquent devises justifications for their behaviour before engaging in
criminal or deviant activity to neutralize the social controls that ‘serve to
check or inhibit deviant motivational patterns’, thus freeing the individual
to ‘engage in delinquency without serious damage to his self-image’ (Sykes
and Matza, 1957: 667). Though some authors have treated these techniques
as catalysts for initial involvement in crime, Maruna and Copes (2005)
contend that ‘neutralization theory’ may be best suited for explaining
persistent or continued criminal behaviour. In other words, techniques of
neutralization may not cause crime in and of themselves. Instead, they may
facilitate continued offending over time.
In their original formulation, Sykes and Matza (1957) provide five
categories of techniques of neutralization which can be deployed by
delinquents including the denial of responsibility (e.g. ‘it’s not my fault! It’s
because my friends tricked me!’), the denial of injury (e.g. ‘he’s faking. He
ain’t really hurt!’), the denial of the victim (e.g. ‘no one was hurt so why are
you on my back?’), the condemnation of the condemners (e.g. ‘I learned to
smoke by watching you, Dad!’) and appeal to higher loyalties (e.g. ‘I
assaulted that protester because they disrespected our country!’). Other
techniques of neutralization have been added by scholars over time
including the metaphor of the ledger (e.g. ‘I should get a pass because I’ve
done far more good than bad!’) and defence of necessity (e.g. ‘I had to do it
because it was necessary!’) (Klockars, 1974; Minor, 1981, respectively).
Henry (1990) adds three further neutralizations: claim of
normalcy (e.g. ‘everyone is doing it so it must not be bad!’), denial of
negative intent (e.g. ‘I didn’t mean to hurt anyone!’) and claim of relative
acceptability (e.g. ‘My crime was relatively harmless. At least I’m not a
murderer!’).
Application of the techniques of neutralization to cybercrime has been
fairly superficial, focusing primarily on the presentation of these techniques
in offender accounts or correlation of self-reported techniques of
neutralization with pro-crime attitudes and behaviour. Cybercrimes
examined through neutralization theory include piracy (Higgins et al., 2008;
Hinduja, 2006, 2007; Holt and Copes, 2010; Moore and McMullan, 2009;
Morris and Higgins, 2009; Smallridge and Roberts, 2013; Steinmetz and
Tunnell, 2013; Yu, 2013), illicit computer intrusions and file manipulation
(Hutchings, 2013; Morris, 2012), online berating and denigrating
behaviours (Hwang et al., 2016), unauthorized software use (Silic et al.,
2017) and participation in online black markets (Ladegaard, 2017).
In an example of a qualitative study that applies the techniques of
neutralization, Steinmetz and Tunnell (2013: 60–63) examined an online
message board dedicated to intellectual property piracy and found evidence
that pirates engaged in four of the five original techniques of neutralization.
The pirates in this study were not found to actively engage in denial of
responsibility. That said, some message board users did exploit the
ambiguity often inherent in attributing online behaviour: if caught, they
could blame their infringing behaviours on an alleged ‘mystery pirate’ said
to be using their internet connection without permission
(Steinmetz and Tunnell, 2013: 61). Pirates did seem to deny injury by
dismissing the alleged harms claimed by multi-media corporations like
Sony Enterprises. They also denied the victim by claiming, for instance, that
no actual theft occurred since intellectual property piracy concerns the
creation of copies rather than stealing physical items – since no one is
deprived of property, then no victim is said to exist. Pirates were also said
to frequently engage in condemnation of the condemners, most notably by
admonishing multi-media corporations that were seen as greedy and
actively engaged in stealing from artists themselves. Finally, pirates were
said to appeal to higher loyalties by, for instance, claiming allegiance to
open culture, anti-authoritarianism or anti-copyright sensibilities.
Most analyses drawing from neutralization theory – particularly those
related to cybercrime – focus only on the relationship between
techniques of neutralization and deviance. Few, if any, studies have focused
on the occurrence of drift and the role of subculture in the use of techniques
of neutralization. A notable exception is Goldsmith and Brewer’s (2015)
theoretical revision of drift in the context of the internet, referred to as
‘digital drift’. The internet is said to present an interactive environment that
empowers individuals through anonymity, provides a cornucopia of
relatively easy-to-access information, and acts as a ‘facilitator of
encounters’ with individuals and opportunities conducive to criminal
engagement (Goldsmith and Brewer, 2015: 126). The internet creates
greater opportunities to engage in crime or even chance exposures to the
variegated forms of deviance available online. In this capacity, the internet
allows those committed to criminal lifestyles to pursue illicit activity,
and also enables ‘tentative, even timid, encounters which may or may not
lead to crime’ (Goldsmith and Brewer, 2015: 126). As Holt and Bossler
(2016: 91) explain, the digital drift approach argues that ‘the internet
facilitates the two conditions necessary for drift to occur: affinity
(immediate rewards, awareness of others) and affiliation (means of
hooking-up, deepening of deviant associations, skills, etc.)’. The internet
thus creates an environment more conducive to individuals drifting between
deviance and conformity. In an extension of this work, Holt and colleagues
(2018) examined the role of the police in digital drift. In Matza’s (1964:
149) original formulation, juvenile delinquents may develop a ‘sense of
injustice’ as a result of perceived incompetence, inadequacy or sheer
obliviousness among authorities regarding the juvenile’s circumstances. This sense
of injustice, then, may further erode the ability of regulatory bodies to
control behaviour and thus contribute to drift. The analysis of Holt et al.
(2018) argues that similar dynamics occur with regard to the policing of
cybercrime – that cybercriminals may not be regulated by threat of law
enforcement intervention because they may view the legal system as
illegitimate from the vantage point of their subcultural values and beliefs.
2.2.5 Self-Control Theory
Most mainstream criminological theories are preoccupied with one central
question: ‘why do people commit crime?’ This question assumes that if left
alone, people are going to refrain from committing crime, only deviating
when propelled to action by internal or external influences. Reminiscent of
classical criminological thought, control theories ask an altogether different
question: ‘why don’t people commit crime?’ Thus control theories
presuppose that most people would act out in anti-social and selfish
manners if left to their own devices. Instead, internal or external controls
must be in place to properly constrain behaviour. Multiple theories have
been advanced within this perspective including Albert Reiss’ (1951)
personal and social control theory, Walter Reckless’ (1961) containment
theory, and Travis Hirschi’s (1969) social bond theory. One particular
control theory, however, has been the most frequently applied to the area of
cybercrime: Gottfredson and Hirschi’s (1990) self-control theory.
For Gottfredson and Hirschi (1990: 15), crimes are ‘acts of force or fraud
undertaken in the pursuit of self-interest.’ For them, crimes typically require
little planning and are often short-sighted, ‘trivial and mundane affairs that
result in little loss and less gain’ (Gottfredson and Hirschi, 1990: 16).
Crimes are thus relatively unremarkable acts that can be explained simply:
they are a product of low self-control. As they explain, ‘people who lack
self-control will tend to be impulsive, insensitive, physical (as opposed to
mental), risk-taking, short-sighted, and nonverbal, and they will tend
therefore to engage in criminal and analogous acts’ (Gottfredson and
Hirschi, 1990: 90). Low self-control is said to result from ‘ineffective child-rearing’ where parents fail to properly monitor and react to children’s
behaviour (Gottfredson and Hirschi, 1990: 97). If parents fail to exercise
proper discipline over their children, then the child’s level of self-control
will become fixed between the ages of eight and ten years old. After this
point, their levels of self-control will remain relatively low compared to
their peers for life. For Gottfredson and Hirschi, self-control is relatively
stable throughout the life course. This self-control theory has been revisited
and revised since its inception. For instance, the definition of self-control
was adjusted to focus more on an actor’s ability to ‘consider the full range
of potential costs of a particular act’ (Hirschi, 2004: 543). The core
argument of the theory, that low self-control is the primary driver of
criminal behaviour, however, remains. Though the theory has been
extensively criticized on numerous grounds – including a potential tautology
in the conceptualization and operationalization of its core concept, failure to
account for contradictory scholarship, and problems defining crime
(e.g. Akers, 1991; Geis, 2000) – it has been one of the most
thoroughly tested criminological theories to date.
Self-control theory has been applied to a variety of cybercrimes. Most
studies examine the role of low self-control in predicting specific kinds of
cybercrime like cyberstalking (Marcum et al., 2014a), teen sexting
(Marcum et al., 2014b), illicit computer intrusion and file manipulation
(Marcum et al., 2014c), identity theft (Moon et al., 2010) and digital piracy
(Higgins, 2007; Higgins et al., 2006; Hinduja, 2012; Moon et al., 2010).
Others have applied the theory to the study of general measures of
cybercrime. While examining learning theory and its application to ‘cyber-deviance’ (an aggregate measure of piracy, viewing online pornography,
cyber-harassment or bullying, and illicit computer access and
manipulation), Holt et al. (2012) also examined the role of self-control and
found that the concept predicted participation in cyber-deviance, though this
effect was ‘partially-mediated and conditioned’ by peer offending (Holt et
al., 2012: 392). Similarly, Donner et al. (2014: 170) created a general
measure of cybercrime consisting of,
posting hurtful information about someone on the internet,
threatening/insulting others through email or instant messaging,
excluding someone from an online community, hacking into an
unauthorized area of the internet, illegally downloading copyrighted
files/programs, illegally uploading copyrighted files/programs, and
using someone else’s personal information on the internet without
his/her permission.
The study generally found that this composite measure of cybercrime was
predicted by low self-control. Importantly, this measure excluded three
forms of cybercrime because they were not correlated with self-control in
preliminary bivariate analyses. In other words, this study indicates that low
self-control may not predict malware distribution, using the internet for
drug transactions and posting nude photos online without permission
(Donner et al., 2014: 169).
2.2.6 Labelling Theory
Labelling theory is a general descriptor for a constellation of criminological
theories derived from the sociological traditions of symbolic interactionism
and social constructionism. Rather than focus on crime causation per se,
labelling theories focus more on societal reactions to crime and deviance as
well as the formation and adoption of deviant identities (Einstadter and
Henry, 2006: 207). One of the earliest labelling theories was advanced by
Frank Tannenbaum (1938/2010), who describes the process by which
a delinquent is recognized as a deviant and processed and punished
accordingly. Through this process, he asserts that the juvenile is ‘tagged’
with a deviant label and eventually begins to identify as different from
others in the community. As they accept the deviant label, individuals may
engage in further acts of criminality. This process, which Tannenbaum
(1938/2010: 321–322) describes as the ‘dramatization of evil’, results in a
paradoxical situation where efforts to reform the delinquent may actually
encourage deviant behaviour or, as he explains, ‘the harder they work to
reform the evil, the greater the evil grows under their hands. The persistent
suggestion, with whatever good intentions, works mischief, because it leads
to bringing out the bad behavior that it would suppress’. In 1951, Edwin
Lemert offered his own version of labelling theory. Like Tannenbaum,
Lemert (1951/2010: 333) was less interested in explaining initial acts of
deviance, which he termed ‘primary deviance’, stemming from various
forms of ‘social pathology’. Instead, he was interested in describing the
process through which a person could be so intensely stigmatized that their
future behaviour was driven toward deviance which he termed ‘secondary
deviance’ (Lemert, 1951/2010: 333). As he explains, ‘when a person begins
to employ his deviant behavior or a role based upon it as a means of
defense, attack, or adjustment to the overt and covert problems created by
the consequent societal reaction to him, his deviation is secondary’ (Lemert,
1951/2010: 333).
Likely one of the most complete articulations of labelling theory was
offered by Howard Becker in Outsiders (1963). Through his study of
marijuana smokers and jazz musicians, Becker (1963: 121–163) described
the processes by which groups become identified as outsiders by society
and by which group members, in turn, learn to identify non-members as
outsiders. He provides a robust account of the process of rule creation and
enforcement by which a group is identified as problematic, sensationalized
and subject to social sanction. He also describes the process of identity
transformation, whereby a person internalizes a deviant identity and
learns to appreciate their deviance (Becker, 1963: 42–78). In the case of
marijuana use, a person is not a marijuana smoker just because they have
smoked marijuana. Rather, a person becomes a marijuana user as they learn
to smoke marijuana with the proper technique, appreciate the high, and
view themselves as an authentic marijuana consumer. Thus labelling is not
only a process whereby a label is imposed on a deviant by society, but also
one whereby the deviant adopts the identity for themselves. As Becker (1963: 78)
explains,
A person will feel free to use marihuana to the degree that he comes to
regard conventional conceptions of it as the uninformed views of
outsiders and replaces those conceptions with the ‘inside’ view he has
acquired through his experience with the drug in the company of other
users.
Applications of labelling theory to the study of cybercrime are relatively
limited. Orly Turgeman-Goldschmidt (2011) provides one of the few
explicit applications of labelling theory in her examination of computer
hackers. She acknowledges that contemporary understandings of hacking
are often socially constructed (discussed further in Sections 3.2 and 3.6). In
short, ‘the hacker label has changed from a positive to a negative one’
(ibid.: 33). Importantly, she argues that hackers are a noteworthy example
of Becker’s (1963) approach ‘whereby labeling an activity as deviant is
based on the creation of social groups and not on the quality of the activity
itself’ (ibid.: 34). For the author, then, a significant concern is that the
application of this negative connotation surrounding the term hacker may
encourage secondary deviance. Drawing from qualitative interviews with
54 self-identified hackers, this study found that labelling did not seem to
contribute to secondary deviance among participants. In fact, most of the
respondents tended to view themselves as ‘positive deviants’, seeing
themselves as ‘technological wizards and breakers of boundaries’ rather
than having their identities spoiled as deviant (ibid.: 47). In
other words, their ‘master status’ was uncorrupted through the labelling
process. The failure to find support for the secondary deviance hypothesis,
however, does not mean that we should ‘throw the baby out with the
bathwater’. As described here, labelling theory is a rich perspective that can
help us understand the process of criminalization (including the
criminalization of cybercrime) and its possible consequences beyond
secondary deviance.
2.2.7 Feminist Criminology
For much of its life as an academic field, criminology remained relatively
blind to the problem of gender as it relates to crime, criminalization and
victimization. The experiences of women were largely ignored and the very
construct of gender was rendered nearly invisible in the field. Yet, starting
in the 1960s, feminist scholars began to highlight the importance of gender
in the machinations of crime and justice, giving rise to what is commonly
known as ‘feminist criminology’. Representing a ‘diverse set of
perspectives’ (Flavin, 2004: 32), feminist criminology ‘concentrates on
inclusiveness by examining male and female criminality (among other
justice-related issues) and investigates gender as a variable that influences
criminal perpetration, victimization, and treatment in the criminal justice
system’ (Marganski, 2018: 12–13). While the importance of gender may
seem obvious, it is no exaggeration to state that its recognition may have
been one of the most significant advances in criminology. Hardly a
single facet of crime, victimization, and crime control is uninfluenced by
gender. For instance, men are disproportionately likely to be involved
in violent crime (as both perpetrator and victim). In fact, men, with the
exception of crimes like prostitution (Bernard et al., 2016: 286), perpetrate
the vast majority of offences. Further, while men are more likely to be
violently victimized in general, women are significantly more likely to be
subjected to rape and other forms of sexual assault (Barak et al., 2010: 176).
Further, women are more likely to be violently victimized by intimates and
familiars compared to men (Barak et al., 2010: 176–177). In the realm of
crime control, men disproportionately comprise the ranks of law
enforcement, corrections administration and the judiciary (Belknap, 2015).
Feminist criminology has highlighted multiple issues with traditional
criminological theories. One such issue includes the problem of
‘generalizability’ or ‘whether the same theoretical constructs can explain
both male and female offending’ (Kruttschnitt, 2016: 9). Another is the
‘gender-ratio’ problem – why is it that men and women vary so greatly in
their offending rates (Miller, 2003: 19)? Multiple theoretical approaches
have been advanced by feminist criminologists. Two will be highlighted
here for demonstrative purposes.
Steffensmeier and Allan (1996: 474) build from traditional criminological
approaches to suggest that ‘the broad social forces suggested by traditional
theories exert general causal influences on both male and female crime’.
They then suggest, however, that ‘it is gender that mediates the manner in
which those forces play out into sexual differences in types, frequency, and
contexts of crime involvements’ (Steffensmeier and Allan, 1996: 474).
They go on to describe ‘five areas of life that inhibit female crime but
encourage male crime: gender norms, moral development and affiliative
concerns, social control, physical strength and aggression, and sexuality’
(Steffensmeier and Allan, 1996: 475). In short, the factors that cause crime are
said to affect both men and women, but the characteristics of sex and
gender in our society permit men to respond criminally to these
causative factors, whereas women are more constrained in their ability to act
out criminally.
Another theoretical perspective within feminist criminology adopts a more
symbolic interactionist view of gender, treating crime as a
result of gendered performances or, as Miller (2002: 434) explains, gender
is a ‘situated action or situated accomplishment’ (see also: Messerschmidt,
1993). This perspective builds from the work of West and Zimmerman
(1987), arguing that people execute performances of gender ‘in response to
situated normative beliefs about masculinity and femininity’ (Miller, 2002:
434). In other words, these beliefs provide resources from which men and
women can draw guidance as they interact with others in the social world.
From this perspective, the decision to engage in crime as well as its
execution may stem from beliefs about how to best perform gender within
the situational context. Men may therefore engage in more criminal
activities as they are more likely to engage in performances of masculinity,
which tends to value risk-taking and aggressive behaviours. Further,
women’s participation in crime and criminal subcultures may be facilitated
by situationally negotiating performances that draw from gender
stereotypes (Miller, 2002: 453).
Regardless of the approach taken, a unifying characteristic of feminist
criminology is that it draws attention to the role of gender and sex in the
experiences of crime, criminal justice and victimization. Recent advances in
feminist criminology have also become more holistic, examining how
gender intermingles with other social characteristics like race, class and
sexuality (Burgess-Proctor, 2006). Overall, tremendous work has been
conducted since the 1960s to advance our understanding of the role of
gender in crime, crime control and victimization.
Scores of studies exist that examine the role of gender in cybercrime and
cybervictimization. Marganski (2018: 18–25) recently provided a robust
overview of cybercrime research in the area, primarily focusing on the
gender dynamics involved in internet-facilitated sex crimes and human
trafficking, cyberharassment and cyberstalking. For instance, she highlights
recent cases of cyberharassment orienting around the ‘Gamergate’ fiasco. In
this scenario, game developer Zoë Quinn was accused of infidelity by her
former romantic partner, Eron Gjoni, in a blog post. As Marganski (2018:
22) explains, ‘shortly thereafter, individuals in some gaming circles began
to focus on Quinn’s sex life, suggesting that she seduced men to promote
her new game’. So-called Gamergaters insisted that this alleged transgression was a
violation of journalistic ethics – that she had used her sexuality to sway
game reviewers. As a result of the actions of Gjoni and the outrage machine
that followed, she was harassed so severely that ‘she left her home and went
into hiding’ (Marganski, 2018: 22). Yet Marganski (2018: 22), drawing
from feminist criminology, points out that the case of Quinn was a ‘visceral
response … a reaction to the perceived violation of gender role
expectations, whereby women are expected to limit/reserve intimate
encounters, and are deemed worthy of retaliation when they disrupt our
assumptions’. In this capacity, the harassing behaviours of Gamergaters
may be seen as situated performances of hegemonic masculinity, designed
to reinforce gender hierarchy. The trials of Zoë Quinn will be revisited in
Chapter 9 (Section 9.3.1).
2.2.8 Radical Criminology
In the tradition of Marxist sociology and philosophy, radical criminologists
argue that the dominant mode of production – capitalism – shapes
motivations and opportunities to engage in crime as well as the processes of
criminalization and crime control (Bernard et al., 2016; Bonger, 1916;
Quinney, 1977/2010; Spitzer, 1975). Further, as Banks (2018: 103)
explains, ‘radical criminologists have long held that the state – along with
its corporate allies – is criminogenic. Yet the capitalist order conceals the
crimes of the elite whilst conferring criminality on those that threaten the
property of the powerful’. In this capacity, the criminal justice system and
related institutions operate in a manner that suits the interest of capitalists
while controlling threats to the social order (Spitzer, 1975).
In perhaps the earliest iteration of Marxist-inspired criminology, Willem
Bonger (1916) argued that humans were equally capable of acting in both
selfish and selfless capacities. For him, capitalism emphasized egoism over
altruism, leading to forms of predatory criminal behaviour, particularly
among working classes with little access to more legitimate means to
exercise their egoistic tendencies. After Bonger, it was not until the 1960s
that Marxist-inspired criminology would re-emerge in force, linked to
intellectual efforts like the now defunct Berkeley School of Criminology in
the US and the National Deviancy Symposium in the UK (Pavlich, 2001;
Schwendinger and Schwendinger, 2014; Stein, 2014). During this time,
radical criminologists broke from mainstream criminology – which tends to
emphasize crime causation explicitly – and explored the role of capitalism
and other forms of structural inequality in criminalization and control
processes. For instance, Spitzer (1975: 642) argued that the crime control
apparatus was generally structured to control problem populations which
consist of individuals and groups whose ‘behavior, personal qualities and/or
positions threaten the social relations of production in capitalist societies’.
Similarly, Quinney (1977/2010: 394) argued that the role of the state is to
‘secure the capitalist order’. He then sets about describing harms
perpetuated by the state as it protects the status quo and, thus, propagates
inequality and oppression (Quinney, 1977/2010). These ‘crimes of
domination’ include (1) crimes of control or the ‘felonies and
misdemeanors that law-enforcement agents, especially the police, carry out
in the name of the law, usually against persons accused of other violations’,
(2) crimes of government which are those ‘committed by elected and
appointed officials of the capitalist state’ which target threats to the state,
(3) crimes of economic domination or ‘those crimes that occur in the
capitalist class for the purpose of securing the existing economic order’
(e.g. environmental degradation, price fixing, etc.) and (4) social injuries
which are harms ‘committed by the capitalist class and the capitalist state
that are not usually defined as criminal in the legal codes of the state’ (e.g.
human rights violations) (Quinney, 1977/2010: 397–398).
Radical criminologists at the time were also intense critics of the institution
of policing. In their foundational work, Policing the Crisis, for example,
Hall et al. (1978) explore how crime ‘problems’ can be manufactured in a
manner which justifies increased coercion – notably through control
agencies like law enforcement – against working classes. Importantly, these
crime problems may be overblown in their harm as a result of political
posturing and public anxiety. In this manner, fear of crime may be used as
an ideological weapon for securing consensus behind increased oppression
of the working classes and the politically marginalized. In this capacity, the
police are viewed as a weapon of the ruling classes (see also Harring, 1983;
Platt et al., 1977). Further, such ideological trappings divert attention away
from the social harms of the politically and economically powerful which
are responsible for more property damages and physical harms (including
deaths) than all street crimes combined (Reiman and Leighton, 2012: 71).
Radical criminology has had a lasting impact on the area of critical
criminology – a body of criminological thought that focuses on social
conflict and the role of power in crime and criminalization. It greatly
informed the creation of postmodern criminology, cultural criminology,
peacemaking criminology and left realism, among others. Scholars
influenced by radical thought examine a constellation of issues ranging
from white collar and corporate crimes (e.g. Barak, 2012), to environmental
degradation (e.g. Stretesky et al., 2013) and prison privatization (e.g.
Selman and Leighton, 2010), to name only a few examples. Yet, with few
exceptions, relatively little attention has been given to cybercrime from a
radical or Marxist orientation. Yar (2008a: 193) argues that computer
crime control has followed the trends of terrestrial forms of crime control
which have become increasingly market driven and ‘responsibilized’ – that
individuals are to take more responsibility for their security – which he
attributes to ‘a transition to a neo-liberal mode of capitalism’. Under this
framework, information and computer security is increasingly displaced to
‘non-market quasi-public regulatory bodies’, ‘communities of self-policing’
and, importantly, private companies (Yar 2008a: 195). Yar (2008a) thus
points out that information security has primarily become a commodity to
be consumed instead of a public service rendered.
Two recent works in criminology have specifically adopted a radical
criminological approach to cybercrime and cybersecurity. Both Banks
(2018) and Steinmetz (2016) argue that contemporary social constructions
of cybercrime issues – notably hackers and hacktivists (political dissidents
who use computer-mediated or computer-facilitated protest strategies; see Chapter 4)
– are mired in myths and misconceptions. The public fear generated by
these forms of cybercrime, according to these authors, yields multiple
benefits for ‘governments, law enforcement agencies, the technosecurity
industry, commercial corporations and the media’ including harsh
punishments for online offenders, an ever-expanding surveillance apparatus,
and – as Yar (2008a) previously explains – the use of fear to create a multi-billion-dollar cybersecurity industry (Banks, 2018: 112). Steinmetz (2016)
further adds that contemporary informational capitalism simultaneously
harnesses the creative productive potential of hackers while seeking to
suppress the threat they pose to the capitalist mode of production. He
highlights three ‘fronts of social construction’ through which this occurs:
First, hackers are constructed as dangerous, which allows a criminal
classification to be imposed. Second, intellectual property is reified,
giving capitalists a set of property ‘rights’ to protect from the threats of
individuals like hackers. Finally, technological infrastructures are
portrayed as vulnerable, lending a degree of urgency to the need to
fight off the hacker ‘menace’ along with other technocriminals.
(Steinmetz, 2016: 218)
Once formed, these ideological machinations permit the expansion of
corporate and state control over intellectual property and technological
production.
2.3 Researching Cybercrime
Gaining a realistic measure of the scope and scale of cybercriminal
activities presents considerable challenges. Some of these problems are well
known to criminologists. Official statistics on crime, for example, have
been critiqued as ‘social constructions’ that do not necessarily provide an
‘objective’ picture of the true, underlying levels and patterns of offending
(Maguire, 2002). There are a number of reasons for this. First, such
statistics depend upon crimes having been reported to the police or other
official agencies. Studies, however, show that a large proportion of offences
simply remain unreported for a wide variety of reasons: victims may be
unaware that an offence has been committed; they may consider the offence
insufficiently ‘serious’ to warrant contacting the authorities; they may feel
that there is little likelihood of a satisfactory resolution (such as the
apprehension of the offender or the return of their property). Second, even if
an offence is reported, it may not be recorded by the police or other
agencies, again, for various reasons. For example, political priorities may
lead police to prioritize some crimes at the expense of others (Coleman and
Moynihan, 1996); police perceptions of ‘seriousness’ will affect whether
they deem tackling an offence as a worthwhile use of their time and
resources; police may have disincentives to record certain types of crime,
where they feel there is little likelihood of a successful investigatory
outcome, as a large number of unresolved offences might cultivate the
impression that police are failing to maintain ‘law and order’. Additionally,
decisions about whether or not to record a reported offence may depend
upon police judgements about the person(s) doing the reporting, such as
their perceived status, reliability or trustworthiness. Third, serious problems
are evident when we attempt to establish longitudinal measures of crime,
that is, the charting of crime trends (decreases and increases) over time. The
ways in which crime recording classifies and groups offences are subject to
change, making direct comparison over the years difficult. Moreover, it
must be remembered that crime is a legal construct (Lacey, 2002: 266–267)
– what happens to be illegal is subject to its inclusion in criminal law. Laws
change over time – new categories of offence are created, thereby making
previously permissible behaviour illegal; conversely, previously prohibited
behaviour may be decriminalized or legalized, removing it from the domain
of criminal activity. Consequently, year-on-year measures of crime rarely
compare ‘like with like’, making it difficult to derive reliable conclusions
about whether or not particular types of crime (or crime in general) are on
the increase or decrease.
These familiar problems with measuring crime are, if anything, exacerbated
in relation to cybercrime (Wall, 2007: 19–20). For example, the relatively
hidden nature of internet crimes may lead to them going unnoticed.
Unfamiliarity with laws covering computer-related crimes may lead victims
to be unaware that a particular activity is in fact illegal (Dowland et al.,
1999: 715, 721; Wall, 2001a: 8). The very limited allocation of police
resources and expertise to tackling computer crime may result in the
relevant authorities being unknown or inaccessible to victims for reporting
purposes. The extent to which the internet enables offenders to remain
anonymous may lead victims (and police) to conclude that there is little
likelihood of a perpetrator being identified (e.g. the chance of being
prosecuted for computer hacking in the USA is placed at 1 in 10,000
[Bequai, 1999: 16]). While this figure is perhaps dated, it may not be all
that disparate from contemporary prosecution numbers. For example, in
fiscal year 2016, the US Department of Justice lists only approximately 114
cases filed at the federal level for computer fraud, 33 for advance fee
schemes, 59 for intellectual property violations, 274 for identity theft, and
337 for aggravated identity theft (USDOJ, 2016). Considering even
conservative estimates of the scale of cybercrime, these figures appear
paltry. The inherently global nature of the internet (where victim and
offender may be located in different countries, with different laws relating
to computer crime) renders effective police action particularly difficult and
time-consuming, leading such offences to be sidelined in favour of more
manageable ‘local’ problems (Wall, 2001b: 177; 2007: 160). Such factors
suggest that there may in fact be massive under-reporting and under-recording of internet-related crimes, and a correspondingly massive (and by
definition unknown) ‘dark figure’ of cybercrime (Tcherni et al., 2016). The
problems for charting cybercrime trends over time are also heightened by
rapid innovations in internet and computer-related law. The past few
decades have seen the introduction of many new sanctions into criminal law
to cover computer-related offences. These have taken the form of national
legislation (such as the UK’s Computer Misuse Act 1990 and the US
Computer Fraud and Abuse Act 1986, No Electronic Theft Act 1997 and
the Identity Theft Enforcement and Restitution Act 2007) and a range of
criminal sanctions incorporated into national laws as a result of
international agreements, treaties and directives (such as the 1994 TRIPS
[Trade-Related Aspects of Intellectual Property Rights] agreement under
the WTO [World Trade Organization], the Council of Europe Convention on
Cybercrime [2004], and the provisions of EU law). Such innovations mean
that the range of internet activities covered by criminal law has been
constantly shifting, making trend data about cybercrime as a whole difficult
to construct.
One way in which the shortcomings of official crime measures have been
addressed is through the development of crime and victimization surveys.
These have aimed to uncover those crimes that remain unreported and
unrecorded by official statistics, thereby giving a more complete and
accurate picture of the scope and patterns of offending (Maguire, 2002:
348–358). Such surveys are particularly important for generating
knowledge about cybercrime, since there has historically been a limited
amount of officially collected and collated data specifically relating to
internet crimes. We should not conclude, however, that such alternative
measures wholly overcome the problems previously identified in relation to
official statistics. For example, no reporting is possible where victims are
unaware that an offence has been committed; those surveyed may have a
different understanding of what counts as an offence from those
administering the survey; and they may be unwilling to report an offence
even when disclosure is made anonymously. The problems in relation to
cybercrime surveys are, once again, further heightened. First, such surveys
as exist tend to be highly selective, focusing mainly upon business and/or
public sector organizations, to the exclusion of individual citizens. Prime
examples of business/organizational surveys include the annual computer
crime survey that was undertaken in the USA by the Computer Security
Institute (CSI) between 1996 and 2011 on behalf of the FBI; the US State of
Cybercrime Survey (previously the Cybersecurity Watch Survey and E-Crime Watch Survey) conducted from 2004 to 2017 by the IDG-owned
CSO magazine in concert with various partners including the US Secret
Service, PricewaterhouseCoopers, Forcepoint, Deloitte, Microsoft, and
researchers at Carnegie Mellon University; and the similar Cyber
Security Survey conducted by CERT Australia and the Australian Cyber
Security Centre. Individual-level victimization surveys are emerging,
though such efforts remain in their infancy. One example is the belated
expansion of the Crime Survey for England & Wales to include citizens’
experiences of cyber offences for the first time in 2017. The US National
Crime Victimization Survey has also intermittently dealt with cybercrime
issues through supplemental studies on cyberstalking (Catalano, 2012) and
identity theft (Baum, 2006, 2007; Harrell, 2015; Harrell and Langton, 2013;
Langton, 2011; Langton and Planty, 2010). Moreover, as Wall (2001a: 7–8)
notes, there is no consistent methodology or classification across such
surveys, making comparison and aggregation of data difficult (see also
Fafinski et al., 2010). Finally, there are serious problems relating to under-reporting, as many organizations may prefer not to acknowledge
victimization because of: (1) fear of embarrassment; (2) loss of public or
customer confidence (as in the case of breaches relating to supposedly
secure e-shopping and e-banking facilities); and (3) because of potential
legal liabilities (e.g. under legislation relating to data protection, which
places organizations under a legal duty to safeguard confidential
information relating to citizens and customers) (Furnell, 2002: 28, 51). All
of the above should lead us to treat statistics about cybercrime, be they
official or otherwise, with considerable caution. One of the most basic
ongoing challenges for both criminology and criminal justice in relation to
cybercrime is the need to develop basic, robust measures of the problem
itself.
Despite the difficulties outlined above, it would be mistaken to simply
ignore the available statistical measures, limited and partial as they might
be. If we wish to glean some insight into the nature and extent of the
‘cybercrime problem’ we must use such data as are available – a case of
relatively weak data being better than absolutely no data at all. So long as
we do not ‘reify’ these measures, taking them to be incontrovertible facts,
they can still be useful in giving us some preliminary indication of the
problem. Indeed, insofar as such data are taken seriously by the various
actors who discourse publicly upon cybercrime and take legislative,
policing and security decisions on its basis, we need to give these figures
due consideration. Subsequent chapters will give more detailed attention to
the ‘facts and figures’ related to different types of cybercriminal activity
(such as hacking, viruses, e-fraud, piracy, and so on). For the moment, we
can reflect on some headline figures that have appeared in high-profile
reports on cybercrime, as these give an indication of why internet crime has
become the object of concern for a range of public and private actors:
A survey distributed to more than 5,000 public and private sector
organizations in the USA revealed that in the preceding 12 months,
67.1 per cent had been targeted by malicious software, 38.9 per cent
had been targeted by ‘phishing’ attacks and 28.9 per cent had their
networks infected by ‘bots’. The total financial losses incurred from
these incidents were estimated at over $200 million (CSI, 2011: 15).
According to the 2016 Global Economic Crime Survey, cybercrime is
now the second most reported economic crime among organizations,
second only to asset misappropriation, with 32 per cent of
organizations reporting having experienced cybercrime incidents.
Further, cybercrime was the only measured economic crime that
increased between 2014 and 2016 (PWC, 2016: 9).
A 2013 report claims that the global cost of cybercrime and
cyberespionage ranges between $300 billion and $1 trillion,
with the impact being between $24 and $120 billion for the US alone
(McAfee and CSIS, 2013: 5). Another report from 2017 claims that
978 million people were impacted by cybercrime across 20 countries
with an estimated $172 billion in losses to consumers (Norton, 2018).
This same report stated that 44 per cent of consumers over the last 12
months were targeted by cybercrimes. Of these victims, 53 per cent
reported ‘having a device infected by a virus or other security threat,’
38 per cent experienced ‘debit or credit card fraud,’ 34 per cent had ‘an account
password compromised,’ 34 per cent encountered ‘unauthorized access
to or hacking of an email or social media account,’ 33 per cent made ‘a
purchase online that turned out to be a scam’ and 32 per cent reported
having clicked on a scam email or provided ‘sensitive
(personal/financial) information in response to a fraudulent email’.
US copyright industries claim that annual losses due to internet
‘piracy’ (of goods such as music, films and computer software)
amount to somewhere between $200 and $250 billion (Raustiala and
Sprigman, 2012).
In 2015 alone, it is estimated that more than 3 million new pieces of
malicious software (such as viruses and Trojans) appeared (G Data
Security Labs, 2015: 2).
A recent report estimates that credit card fraud, including online credit
card fraud, costs $21.84 billion worldwide, a
20 per cent increase from 2014 (Robertson, 2016: 1).
Figures such as those above furnish an important context for understanding
why the media, politicians, law enforcement agencies and business
organizations have become increasingly exercised by the cybercrime threat.
Cybercrime can be viewed as an integral component of those globalized
risks that are increasingly coming to define ‘contemporary landscapes of
crime, order and control’ (Loader and Sparks, 2002). Moreover, if we bear
in mind estimates that as little as 5 per cent of such crimes may actually be
reported to the authorities (Furnell, 2002: 190), then the problems posed by
cybercrime may well figure among the most urgent of the early twenty-first
century.
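The arithmetic behind such 'dark figure' estimates can be sketched in a few lines of Python. This is a hypothetical illustration: only the 5 per cent reporting rate comes from Furnell (2002); the reported count below is an invented placeholder, and the function name is our own.

```python
# Hypothetical illustration of the 'dark figure' implied by a low reporting rate.
# Only the 5 per cent rate follows Furnell (2002: 190); the reported count
# used below is invented for demonstration.

def estimate_total_offences(reported: int, reporting_rate: float) -> int:
    """Scale the reported offence count up by the assumed reporting rate."""
    if not 0 < reporting_rate <= 1:
        raise ValueError("reporting_rate must lie in (0, 1]")
    return round(reported / reporting_rate)

# If only 5% of offences reach the authorities, each recorded offence
# stands in for roughly 20 offences in total.
print(estimate_total_offences(reported=1_000, reporting_rate=0.05))  # 20000
```

The point of the sketch is simply that a small change in the assumed reporting rate swings the estimated total dramatically, which is one reason headline cybercrime figures should be treated with caution.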
Throughout this volume, sources of data for understanding the scope and
scale of cybercrimes will be revisited. While such measurements are
important for understanding the ‘big picture’ of cybercrime and
cybervictimization, these are not the only forms of data used by scholars. In
fact, in light of the previously described data and methodological issues,
researchers have turned to a range of methodological approaches to study
cybercrime, cybercriminals and cybervictimization beyond governmental,
quasi-governmental and corporate data sources. Many of these approaches
are well known within criminology and have required only relatively minor
retooling in their application to cybercrime. Select examples
are briefly reviewed here including self-report surveys, interviews,
ethnographies, experimental/quasi-experimental designs and content
analyses.
Criminologists have long relied on surveys that ask participants to self-report their involvement in criminal behaviour (Kraska and Neuman, 2008:
280). In fact, self-report surveys may be one of the most relied upon
methods of studying crime and deviance since the inception of survey
methodology in the 1950s. This research strategy was developed as one
way to address the issues confronting official sources of data that largely
rely on reports to law enforcement or other agencies. Instead, these surveys
are intended to ‘obtain information closer to the source of criminal and
delinquent behaviour’ by directly asking sample populations about their
involvement in illicit activity (Thornberry and Krohn, 2000: 34). Many self-report surveys are cross-sectional, focusing on a sample population at one
point in time. Some of the most noteworthy criminological datasets,
however, are longitudinal in design, examining
participants’ criminal behaviour and propensity over time. Significant
longitudinal datasets used in criminological inquiry include examples like
the National Youth Survey – Family Study (NYSFS),1 the U.S. National
Longitudinal Study of Adolescent to Adult Health (Add Health),2 and the
Monitoring the Future study.3
1 Information about the NYSFS can be found at
http://ibgwww.colorado.edu/~corley/proto/NYSFS/.
2 Information regarding Add Health data can be found at
http://www.cpc.unc.edu/projects/addhealth.
3 Information regarding the Monitoring the Future study can be found at
http://www.monitoringthefuture.org/.
The study of cybercrime has also come to rely extensively on the use of
self-report surveys (Holt and Bossler, 2016: 34–37). Similar to other areas
of criminology, many of these studies rely on convenience samples of
college students (Holt and Bossler, 2016: 35). Other studies have examined
cybercrime behaviours among samples of juveniles (e.g. Marcum et al.,
2014a, b, c). Researchers have also administered self-report surveys to
attendees at major hacker conventions (e.g. Bachmann, 2010; Schell and
Holt, 2010). Though valuable for shedding light on various cybercrime
issues, these studies generally involve convenience samples. As such,
findings may not be generalizable beyond their samples. In addition,
longitudinal data are increasingly becoming available. For instance, the
Longitudinal Internet Studies for the Social Sciences data, according to its
website, ‘consists of 4500 households, comprising 7000 individuals’ and ‘is
based on a true probability sample of households drawn from the population
register by Statistics Netherlands’ (LISS, 2017). The data, according to Holt
and Bossler (2016: 37) measure ‘identity theft, hacking, stalking,
harassment, and malicious software infection victimization incidents’.
To learn more about motivations, techniques, perceptions and other factors
related to cybercrime, criminologists have also used qualitative interviews
with offenders. Berg (2009: 101) defines an interview straightforwardly as
‘a conversation with a purpose’. These studies involve talking with
offenders, asking questions and documenting their responses. McCracken
(1988: 9) explains that interviews
can take us into the mental world of the individual, to glimpse the
categories and logic by which he or she sees the world. It can also take
us into the lifeworld of an individual, to see the content and pattern of
daily experience. The long interview gives us the opportunity to step
into the mind of another person, to see and experience the world as
they do themselves.
Interviews have been a vital data-gathering strategy for criminologists for
decades. Many scholars have used interviews to study various cybercrime
phenomena as well. For instance, interviews have been used extensively by
criminologists studying hacker subcultures. Turgeman-Goldschmidt (2011)
interviewed 54 Israeli hackers to understand subcultural perceptions of
hacker identity. Similarly, Taylor (1999) interviewed hackers, computer
scientists and information security practitioners to explore hacker identity
as well as tensions between hackers and the field of information security. In
addition to gathering data from hacker web forums and observations at a
hacker convention, Holt (2007) conducted 13 face-to-face and email
interviews to explore how hacker subcultural norms and values cross over
between online and offline experiences. These are only a few examples of
the many studies which make use of in-depth interviews with cybercrime
offenders.
Ethnography is another research strategy criminologists have employed to
gain a clearer understanding of cultural and subcultural dynamics related to
crime and crime control. Though there exist definitional disagreements
concerning ethnography, Berg (2009: 191) explains that, generally, ‘the
practice places researchers in the midst of whatever it is they study. From
this vantage, researchers can examine various phenomena as perceived by
participants and represent these observations as accounts’. Numerous
foundational studies within criminology have been based on ethnographic
research including Ferrell’s (1993) research into graffiti artists, Adler’s
(2003) study of drug dealers and smugglers, and Becker’s (1963)
examination of marijuana users, to name a few.
Criminologists interested in cybercrime have also employed ethnographic
methods. Because a great deal of cybercrime perpetration and offender
socialization occurs over the internet, however, most ethnographic studies
by criminologists employ virtual ethnography – the exploration of human
beings in interactive online social spaces (Hine, 2000). These kinds of
studies involve researchers overtly or covertly observing or even
participating in online communities. For example, Blevins and Holt (2009)
observed the interactions of ‘johns’ (men who solicit intercourse from sex
workers) across ten city-specific public web forums. The authors
specifically focused on the use of subcultural argot as expressions of social
norms and values among forum participants. Attitudes, identity and status
within the forums were said to be reflected in interactive expressions
concerning johns’ prior experiences with sex workers, commodification of
sex and sex workers, and expressions of sexual desires and sex acts.
Experimental designs have also been employed by criminologists. Shadish
et al. (2002: 507) state that to conduct an experiment is ‘to explore the
effects of manipulating a variable’. In other words, experimental or quasiexperimental designs involve observing and measuring the effects that
occur once a condition has been altered, typically in comparison to a group
that did not experience a change in this condition. A basic experiment, for
example, typically involves randomly assigning study participants into two
groups: one that receives a treatment and another which does not. Typically
outcomes are measured in both groups before the experiment begins, often
called a ‘pre-test’. A ‘post-test’ may then be administered after the
experiment is conducted. Comparisons are then made between the groups.
Any observed differences between the groups are attributed to the
treatment condition.
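The logic of such a design can be sketched in a few lines of code. The sketch below is purely illustrative: the participants, scores and treatment effect are simulated, not drawn from any study discussed in this chapter.

```python
import random
import statistics

def run_experiment(baselines, effect=0.0, seed=42):
    """Randomly assign subjects to treatment or control, simulate a
    pre-test/post-test design, and return each group's mean change."""
    rng = random.Random(seed)
    changes = {"treatment": [], "control": []}
    for pre in baselines:
        group = rng.choice(["treatment", "control"])
        # Post-test score = pre-test score + random noise,
        # plus the treatment effect for the treatment group only
        post = pre + rng.gauss(0, 1) + (effect if group == "treatment" else 0.0)
        changes[group].append(post - pre)
    return {g: statistics.mean(v) for g, v in changes.items()}

# Simulated pre-test scores for 500 hypothetical participants
rng = random.Random(1)
baselines = [rng.gauss(50, 5) for _ in range(500)]
results = run_experiment(baselines, effect=2.0)
print(results)  # the treatment group's mean change is close to 2, the control's to 0
```

Because assignment to groups is random, any systematic difference in mean change between the groups can reasonably be attributed to the treatment.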
Few experiments have been conducted in the area of cybercrime, though
some exist. For example, Testa et al. (2017) conducted an experiment using
high-interaction honeypot computers, systems designed to observe and
record the actions taken by intruders independent of critical network
infrastructure. As intruders attempted to gain access, they could either gain
administrative rights or non-administrative user rights, with the former
being preferred under typical circumstances as it allows greater access and
control within the system. Intruders who secured one of these accounts
were randomly assigned into two groups. The treatment group was shown a
warning banner that foretold the potential consequences of illegal computer
intrusions. The control group was shown no such banner. With these
conditions in place, the authors examined the effect of warning banners on
the likelihood that trespassers would enter ‘navigation’ or ‘change file
permission’ commands on the target system. They found that most users
trespassed into systems with administrative privileges and that sanction
threats did not seem to restrict trespasser navigation in the system.
Additionally, warning banners seemed to increase the use of file change
commands among administrative users, which was counter to expectations.
The only users in this experiment who seemed deterred by warning banners
were those who gained access with non-administrative accounts.
Another method criminologists have deployed for studying cybercrime is
the analysis of documents, websites, forums, blogs, emails and other
secondary social artefacts. These analyses often fall under the umbrella of
content analysis, a ‘nonreactive method used to examine the content, or
information and symbols, contained in written documents or other
communication media’ (Kraska and Neuman, 2008: 17). Such analyses can
be approached qualitatively, typically focused on the inductive
identification of patterns, themes and deeper meanings within the data.
They may also be largely quantitative, focusing on the deductive
identification of themes and categories. Some content analyses are mixtures
of both approaches. This research method enjoys a rich history within
criminology, perhaps most notably in the analysis of crime-related news
media and popular culture.
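The deductive, quantitative variant of this method can be illustrated with a short sketch. The ‘codebook’ of themes and indicator keywords below is hypothetical, invented for illustration rather than taken from any study cited in this chapter.

```python
import re
from collections import Counter

# Hypothetical codebook: each theme is mapped to indicator keywords
CODEBOOK = {
    "privacy": {"privacy", "private", "confidential"},
    "surveillance": {"surveillance", "monitoring", "tracking"},
    "responsibility": {"responsible", "responsibility", "stewardship"},
}

def code_documents(documents):
    """Deductive content analysis: count how many documents mention
    each predefined theme at least once."""
    counts = Counter()
    for doc in documents:
        words = set(re.findall(r"[a-z]+", doc.lower()))
        for theme, keywords in CODEBOOK.items():
            if words & keywords:  # any indicator keyword present?
                counts[theme] += 1
    return counts

docs = [
    "Threats to privacy are ubiquitous; surveillance is everywhere.",
    "Individuals are responsible for protecting private information.",
]
print(code_documents(docs))  # 'privacy' appears in both documents, the other themes in one each
```

A real content analysis would, of course, involve a carefully validated coding scheme and human coders checked for inter-rater reliability; the sketch only shows the counting logic.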
In one example of a cybercrime-related content analysis, Steinmetz and
Gerber (2015) examine hacker perspectives on privacy and surveillance
using data drawn from 2600: The Hacker Quarterly, a hacker-produced
zine – a magazine-like publication dedicated to niche subcultural topics.
Through this analysis, the authors argue that hacker discussions of
surveillance and privacy tend to reflect certain key themes: (1) a recognition
that actors and institutions can both protect and threaten privacy; (2) that
individuals are primarily responsible for protecting private information and
when personal control is not possible, then those given stewardship of
information are responsible for protecting it; (3) that threats to privacy are
seen as ubiquitous in our society; and (4) that hackers have a key role to
play in solving problems concerning privacy. The authors then situate these
thematic findings in the context of broader political patterns within hacker
culture, reflective of a hacker ethic, liberalism and technolibertarianism.4
4 For more extensive discussion of technolibertarianism, refer to Jordan and
Taylor (2004).
2.4 Summary
Studying crime is not an easy endeavour. Researching cybercrime is even
less so. Scholars have thus had to rely on a variety of theories and research
methods to make sense of seemingly ephemeral phenomena. This chapter
provided an overview of many criminological theories which have been
applied to the study of cybercrime including deterrence theory, rational
choice theory, routine activities theory, social learning theory, drift and
neutralization theory, self-control theory, labelling theory, feminist
criminology and radical criminology. The reader should note that other
approaches exist which also bear relevance to cybercrime but which we
cannot cover here due to both constraints of space and their currently
minimal application in this area, including cultural criminology,
postmodern criminology, constitutive criminology, peacemaking
criminology, psychosocial criminology and existential criminology. In
addition to theories of crime and crime control, various data sources and
research methodologies have been relied upon to glean insight into
cybercrime. The diligent reader should bear these theories and methods in
mind as they proceed through the rest of the volume as critical points of
reflection and, importantly, sources of further questions for discussion and
even research.
Study Questions
How do different theories lend themselves to understanding different aspects of cybercrime?
How effective are different deterrent strategies in the study of cybercrime?
How can social learning theory and its arguments about the role of peer associations help us
understand certain cybercrimes?
What kind of strains may contribute to cybercrime?
How does the internet contribute to increases in opportunities for criminal activity?
How might the internet make it easier for individuals to ‘drift’ in and out of cybercrime?
What sorts of advantages do different research methods offer for the study of cybercrime?
Further Reading
Readers interested in learning more about criminological theory more generally should check
out Vold’s Theoretical Criminology (7th edn; 2016) by Thomas Bernard, Jeffrey Snipes and
Alexander Gerould. To check out more about the application of criminological theory to
cybercrime issues, refer to Thomas Holt and Adam Bossler’s Cybercrime in Progress: Theory
and Prevention of Technology-Enabled Offenses (2016) and Kevin Steinmetz and Matt
Nobles’ edited volume entitled Technocrime and Criminological Theory (2018). Regarding
research methods, an assortment of notable resources exist which the reader is encouraged to
make use of. Some examples include Royce Singleton and Bruce Straits’ Approaches to Social
Research (6th edn; 2017), John Creswell’s Research Design: Qualitative, Quantitative and
Mixed Methods Approaches (4th edn; 2014) and Bruce Berg’s Qualitative Research Methods
for the Social Sciences (7th edn; 2009).
3 Hackers, Crackers and Viral Coders
Chapter Contents
3.1 Hackers and hacking: Contested definitions
3.2 Representations of hackers and hacking: Technological fears and
fantasies
3.3 What hackers actually do: A brief guide for the technologically
bewildered
3.4 Hacker myths and realities: Wizards or button-pushers?
3.5 ‘Why do they do it?’ Motivation, psychology, gender and youth
3.6 Hacking and the law: Legislative innovations and responses
3.7 Summary
Study questions
Further reading
Overview
Chapter 3 examines the following issues:
how definitions of computer hacking and hackers are constructed and contested;
how political, criminal justice and popular representations of hackers embody wider cultural
concerns about technological and social change;
the kinds of activities involved in hacking, and the tools and techniques used to undertake
them;
the realities behind the myth of hacking as an expert and elite activity;
the ways in which various actors seek to account for and explain hackers’ motivations for
engaging in computer intrusions;
the legal responses that have emerged in recent years in response to the perceived threat to
society posed by hacking.
Key Terms
Bot
Cracking
Cryptocurrency
Denial of service
Differential association
Gender
Hacking
Malicious software or ‘malware’
Masculinity
Near-field communication (NFC)
Ransomware
Script kiddie
Subculture
Techniques of neutralization
Trojan horses
Viruses
Website defacement
Worm
Zombie
3.1 Hackers and Hacking: Contested
Definitions
A few decades ago, the terms ‘hacker’ and ‘hacking’ were known only to a
relatively small number of people, mainly those in the technically
specialized world of computing. Today they have become common
knowledge, something with which most people are familiar, if only through
hearsay and exposure to mass media and popular cultural accounts. Hacking
provides one of the most widely analysed and debated forms of
cybercriminal activity, and serves as an intense focus for public concerns
about the threat that such activity poses to society. Current discussion has
coalesced around a relatively clear-cut definition, which understands
hacking as ‘the unauthorised access and subsequent use of other people’s
computer systems’ (Taylor, 1999: xi). It is this widely accepted sense of
hacking as ‘computer break-in’, and of its perpetrators as ‘break-in artists’
and ‘intruders’, that structures most media, political and criminal justice
responses. Therefore we would appear to have a fairly definite and
unambiguous starting point for an exploration of this kind of cybercrime.
The term has, however, undergone a series of changes in meaning
over the years, and continues to be deeply contested, not least among those
within the computing community. ‘Hacker’ originated in the world of
computer programming in the 1960s, where it was a positive label used to
describe someone who was highly skilled in developing creative, elegant
and effective solutions to computing problems (Levy, 1984). A ‘hack’ was,
correspondingly, an innovative use of technology (especially the production
of computer code or programs) that yielded positive results and benefits. On
this understanding, the pioneers of the internet, those who brought
computing to ‘the masses’, and the developers of new and exciting
computer applications (such as video gaming), were all considered to be
hackers par excellence, the brave new pioneers of the ‘computer revolution’
(Levy, 1984; Naughton, 2000: 313). These hackers were said to form a
community with its own clearly defined ‘ethic’, one closely associated with
the social and political values of the 1960s and 1970s counter-culture
and protest movements. Though the term ‘hacker ethic’ was originally
coined by Steven Levy in his 1984 work Hackers: Heroes of the Computer
Revolution, it has endured in the discourse about hackers and hacker culture
over time. To summarize, this ethic emphasized, among other things, the
right to freely access and exchange knowledge and information; a belief in
the capacity of science and technology (especially computing) to enhance
individuals’ lives; a distrust of political, military and corporate authorities; a
resistance to ‘conventional’ and ‘mainstream’ lifestyles, attitudes and social
hierarchies; and a belief in meritocratic forms of social organization (Levy,
1984; Taylor, 1999: 24–26; Thomas, 2002). While such hackers would
often engage in the ‘exploration’ of others’ computer systems, they
purported to do so out of curiosity, a desire to learn and discover, and to
freely share what they had found with others; damaging those systems
while exploring, intentionally or otherwise, was considered both
incompetent and unethical. This earlier understanding of hacking and its
ethos has since been largely over-ridden by its more negative counterpart,
with its stress upon intrusion, violation, theft and sabotage. Hackers of the
‘old school’, like early hacker guru Richard Stallman (2002: 17), angrily
reject their depiction in such terms, and use the term ‘cracker’ to
distinguish the malicious type of computer enthusiast from hackers proper.
Others choose to distinguish between hackers based on intent, using labels
like ‘black hat’ hackers to describe those who breach security for illicit
gain, ‘white hat’ hackers who probe security systems only to make them
more secure, and ‘grey hats’ who sit somewhere in between (the use of
‘hats’ to differentiate between types of hackers is borrowed from old
American Westerns where heroes wore white cowboy hats while the villains
wore black ones). Some would suggest that these differences are of little
more than historical interest, and insist that the current ‘negative’ and
‘criminal’ definition of hacking and hackers should be adopted, since this is
the dominant way in which the terms are now understood and used (Twist,
2003). Moreover, to shift around between different groups’ and individuals’
definitions only invites confusion. There is considerable value to this
pragmatic approach – throughout this and subsequent chapters, the terms
hacking and hackers will be used in the current sense to denote those illegal
activities associated with computer intrusion and manipulation, and to
denote those persons who engage in such activities. The reader should bear
in mind, however, that to this day ‘hacker’ still has a multitude of
definitions within the hacker community (The Jargon File, 2018). Further, it
is still true that not all hackers engage in illegal activity and, by some
definitions, hacking does not even require a focus on information security.
Hackers remain diverse in their interests and thus the community (or
communities) include security hackers, hardware hackers and ‘makers’, free
and open source software programmers, among others (Anderson, 2012;
Coleman, 2014; Jordan, 2008; Levy, 1984).
The contested nature of the terms is also worth bearing in mind for a good
criminological reason. It shows how hacking, as a form of cybercriminal
activity, is actively constructed by governments, law enforcement, the
computer security industry, businesses and the media; and how the equation
of such activities with crime and criminality is both embraced and
challenged by those who engage in them (Halbert, 1997; Skibell, 2002;
Steinmetz, 2016; Thomas, 2002; Wall, 2007, 2008b, 2012). In other words,
the contest over characterizing hackers and hacking is a prime example of
what sociologists such as Becker (1963) identify as the ‘labelling process’ –
the process by which categories of criminal/deviant activity and identity are
socially produced (Turgeman-Goldschmit, 2011). Reactions to hacking and
hackers cannot be understood independently from how their meanings are
socially created, negotiated and resisted. Criminal justice and other agents
propagate, disseminate and utilize negative constructions of hacking as part
of the ‘war on cybercrime’. Those who find themselves so positioned may
reject the label, insisting that they are misunderstood, and try to persuade
others that they are not ‘criminals’; alternatively, they may seek out and
embrace the label, and act accordingly, thereby setting in motion a process
of ‘deviance amplification’ (Young, 1971) which ends up producing the
very behaviour that the forces of ‘law and order’ are seeking to prevent. In
extremis, such constructions can be seen to make hackers into ‘folk devils’
(Cohen, 1972), an apparently urgent threat to society which fuels the kinds
of moral panic about cybercrime alluded to in the previous chapter. Perhaps
this is why, as David Wall (2008b: 47) suggests, hackers have become the
‘archetypal “cybercriminal”’ for many. As we shall see, such processes of
labelling, negotiation and resistance are a central feature of ongoing social
interactions about hacking.
3.2 Representations of Hackers and
Hacking: Technological Fears and
Fantasies
This chapter began by noting that hacking comprises one of the most
widely discussed areas of cybercriminal activity; indeed, for many people
cybercrime and hacking have become synonymous. The public discourse on
hacking appears to evoke fear and fascination in equal measure. The origins
of this intense controversy can be understood in a number of different ways.
The most straightforward explanation of social sensitivities to hacking
stresses the degree of threat or danger that the activity carries. From what
Hunt (1987) calls an ‘objectivist’ view of social problems, the response to
hacking is a perfectly understandable and appropriate reaction to the level
of social harm that it causes. The constant stream of estimates about
financial losses incurred from hacking, and analyses of the dangers it poses
to national security and public safety, would appear to give credence to such
a view (see, for example, Dima, 2012; Menn, 2013; Rodionova, 2017;
Shane et al., 2017). Put simply, this viewpoint states that we are right to be
extremely alarmed by hacking, because objective assessments show us that
hacking is an extremely damaging form of cybercriminal behaviour.
Numerous commentators, however, have suggested that social
representations of the hacker threat are out of all proportion to the actual
harm that their activities cause (Kovacich, 1999; Singel, 2009; Skibell,
2002; Sterling, 1994). Indeed, official pronouncements on hacking often
tend towards hyperbole – as, for example, when a senior law enforcement
official told the US Senate that the nation is facing ‘a cyber equivalent of
Pearl Harbor’ (cited in Taylor, 1999: 7). Similarly exaggerated responses
may be seen in media accounts. Taylor (1999: 7) recounts a story from a
UK tabloid newspaper depicting a 17-year-old computer hacker who, while
ensconced in his bedroom, had supposedly broken into US military systems
and had ‘his twitching index finger hover[ing] over the nuclear button’. In
his autobiography, hacker Kevin Mitnick recounts similar official
sentiments when he appeared in court for various hacking-related charges.
He wound up in solitary confinement and denied access to a phone after a
prosecutor told the judge that he could ‘whistle into a telephone and launch
a nuclear missile from NORAD [North American Aerospace Defense
Command]’ (Mitnick and Simon, 2011: 209). Such lurid and (literally)
apocalyptic representations of hacking suggest that we have to look beyond
the ‘objective’ assessment of threats if we wish to understand why it evokes
such a heightened degree of alarm.
Scholars have situated social representations of hackers and hacking in the
context of wider responses to rapid technological change (Taylor, 1999,
2000; Wall, 2012). Historically speaking, such change has often incited
anxieties that technology presents a threat to human existence, a fear that
the more powerful our creations, the more they can escape our control and
do untold damage (Wall, 2012). Thus, in Mary Shelley’s Frankenstein
(1999), a scientist’s desire to ‘play God’ and generate new life results in the
creation of a monster that turns on its creator. The theme of technological
monstrosities, unwittingly unleashed by ‘mad scientists’, subsequently
became a staple of twentieth-century popular culture (Tudor, 1989; Wall,
2012). From the 1960s onwards, the computer has come increasingly to the
fore as the embodiment of this technological Pandora’s box. Thus, in
Stanley Kubrick’s 2001: A Space Odyssey (1968) the initially benevolent
spaceship computer HAL goes ‘insane’ and turns to murdering the human
members of the crew. In the 1977 film Demon Seed, an experiment in
artificial intelligence goes awry, creating a demonic computer that even
goes so far as to rape a woman in order to create its ‘child’, a human–
machine hybrid. Similar themes are replayed in the popular Terminator
series of films, in which a sentient military computer called Skynet
launches a nuclear war against its human masters. Similarly, the 2014 film
Ex Machina featured a humanoid robot imbued with artificial intelligence
that turns murderously on its creator (albeit for understandable reasons).
Such representations, it can be suggested, give voice to a wider unease
about technological innovation and its potentially uncontrollable
catastrophic effects. Films focusing on hacking reinterpret such themes in
their own particular way. The 1983 movie War Games tells the tale of a
teenage hacker who unknowingly breaks into a military computer and,
while thinking he is playing a computer game called ‘global thermonuclear
warfare’, brings the world to the very edge of nuclear oblivion. Such
representations also point to a related kind of cultural anxiety, the fear that
the more technologically oriented we become, the less human we are, the
more we lose touch with a ‘normal’ existence (Yar, 2012a). Representations
of hackers (in both fiction and non-fiction) often allude to this, depicting
them as alienated, dysfunctional and isolated loners who are able to interact
effectively only with their computers. The fact that hackers are invariably
young in popular perceptions is also not a coincidence – the
apparent ease with which a younger generation is able to engage with the
realm of computer technology, a technology that many older people
continue to see as mysteriously daunting, merely serves to sharpen the
sense that all manner of extravagant things may be possible for those with
the know-how.
The sceptical reader may at this point object that the above examples are
merely Hollywood fictions, and that people are sufficiently astute to be
capable of distinguishing them from ‘reality’. Such a dismissal, however,
may overlook the profound ways in which popular representations shape
public understandings of the world. So, for example, the aforementioned
War Games was presented before a committee of the US Congress as an
example of possible threats from hacking (as, a few years later, was the film
Die Hard II, in which terrorists hack into an air traffic control computer in
order to hold the authorities to ransom by threatening to crash incoming
aircraft) (Taylor, 1999: 10). Law enforcement agencies have shown a
similar willingness to make inferences about real-life hackers and their
activities from depictions of their fictional counterparts (1999: 181). Movie
representations of hacking have also been said to have had a
‘disproportionately important influence upon the legislative response to the
activity’; indeed, an ‘over-reliance upon fictional portrayals of hacking by
the authorities has contributed to helping to create a generally fearful and
ignorant atmosphere’ (Taylor, 1999: 10; see also Hollinger and Lanza-Kaduce, 1988: 107; Skibell, 2003: 910). In short, cultural anxieties and their
popular representation have played a significant role in how the threat from
hacking has come to be perceived in both official and wider public
domains.
Social representations of hackers and hacking, however, have not been
exclusively of a negative kind. Rather, they exhibit considerable
ambivalence, with hackers evoking a kind of fascination and even
admiration. Early studies of public attitudes suggested that among a
significant portion of the population, especially the young, hackers and their
activities are viewed in a rather positive light (Dowland et al., 1999: 720;
Voiskounsky et al., 2000: 69–76), which parallels recent estimates that 43
per cent of consumers think it can be morally acceptable to engage in
certain cybercrime activities like unauthorized access and use of someone
else’s email and financial accounts (Norton, 2018). One survey found that
some 12 per cent of respondents in the UK felt that ‘online hacking is right
in certain circumstances’ (Yar, 2011: 21). There are a number of ways in
which such favourable representations of hacking can be understood. First,
they can be seen in relation to the aforementioned sense of threat or anxiety
evoked by new and seemingly incomprehensible technologies. In such a
situation, hackers come to represent a mastery of the arcane new world of
computing that others desire or aspire to (Wall, 2012). Consequently, public
perceptions of hackers often stress traits such as unusual intelligence and
ingenuity (Voiskounsky et al., 2000: 72–73); news media also propagate
similar images of hackers as ‘geniuses’, ‘wizards’ and ‘virtuosos’ (Furnell,
2002: 201). Depictions of hacking as arcane, mysterious and powerful also
appear in popular fictions. For example, the sci-fi thriller The Core (2003)
features a brilliant hacker who demonstrates his abilities to a sceptical
audience by hacking a mobile phone by merely whistling tones into the
handset through a foil gum wrapper; finishing this 10-second piece of
hacker virtuosity, he laconically hands the phone back to its owner,
claiming that ‘You now have free long-distance calls for life!’. The video
game Watch Dogs (2014), an open-world sandbox game, features hackers
with seemingly ‘superhuman abilities to manipulate their environment,
predominantly through the exploitation of a city-wide surveillance
infrastructure called ctOS’ (Steinmetz, 2016: 185). Predominantly through
the use of the main character’s smartphone, the player can remotely
infiltrate and control cameras, drones, stop lights, and almost any other
technological device. In the opening cutscene of the game, the seemingly
magical powers of hackers are invoked through the antagonist, Damien
Brinks, claiming ‘we are the modern day magicians – siphoning bank
accounts out of thin air’. The implication here is that, while hackers are
seen as threatening in so far as they immerse themselves in an ‘alien’ and
dehumanized world of technology, they also offer a vision of individuals
reclaiming their autonomy by taking forceful control of a technological
system which they can use for their own human ends (Taylor, 2000: 43).
A second kind of threat, to which hackers appear to offer resistance, is the
perceived domination of high technology by governmental powers and/or
faceless corporations. Concerns about the internet often coalesce around a
kind of Orwellian nightmare of surveillance and manipulation, and hackers
are sometimes represented as the resistance to such unaccountable and
invasive uses of technological power – what one commentator dubs the
‘freedom fighters of the 21st century’ (Kovacich, 1999). Popular culture
again appropriates and disseminates such interpretations. The film Hackers
(1995), for example, portrays an idealistic group of young computer
outlaws who use their hacking skills to take on corporate conspirators who
are intent on causing an ecological catastrophe, which they will blame on
our innocent heroes. In a similar vein, public resentments about the
domination of computer technology by corporations such as Microsoft are
explicitly exploited in AntiTrust (2001), another computer crime movie.
Here, a (thinly disguised) Bill Gates-like computer tycoon is hell-bent on
global domination at any cost, only to be thwarted by young computer
‘geeks’ who hold fast to the ‘hacker ethic’ of freedom of information and
knowledge. Watch Dogs 2 involves the player joining a fictional hacker
group called DedSec to take on both government and corporate
organizations who would exploit technology, mainly the omnipresent ctOS
system, for their own ends. Mr. Robot (2015–present), a show about a
disaffected hacker, Elliot, struggling with his own demons while also
working with a hacker group, fsociety, to bring down the fictional global
conglomerate, E Corp (or ‘Evil Corp’ to Elliot), provides another example.
The hacker-as-hero even achieves the status of a quasi-religious ‘saviour of
humanity’ in the Wachowski siblings’ Matrix trilogy (1999–2003),
releasing the human race from enslavement by sentient and malicious
machines (Webber and Vass, 2010: 139–140). In sum, we can see that
socio-cultural fears and fantasies about computer technology shape often
ambivalent and contradictory representations of the hacker, who appears as
‘a schizophrenic blend of dangerous criminal and geeky Robin Hood’
(Hawn, 1996; cited in Taylor, 2000: xii).
3.3 What Hackers Actually Do: A Brief
Guide for the Technologically Bewildered
Having examined the contested definitions and wider representations of
hackers and hacking, it is now time to consider in a little detail what it is
that hackers actually do when they hack, and how they go about it. As will
become apparent below, hacking is, in fact, a generic label for a range of
distinct activities associated with computer intrusion, manipulation and
disruption.
3.3.1 Unauthorized Access to Computer Systems
The most fundamental form of hacker activity is that of gaining access to,
and control over, others’ computer systems. Once such access and control
have been gained (what hackers call ‘taking ownership’ of or ‘owning’ a
system), a range of further prohibited activities become possible. Such
access is made possible by the networking of computer systems, since their
interconnection makes it possible to access a system from the outside, from
other computers which can connect to it. The internet has increased the
possibilities for such intrusion many-fold since, given the network’s ‘open
architecture’, all systems connected to it are ‘public facing’, that is, they can
be communicated with remotely by anyone who can establish an internet
connection (Esen, 2002: 269). As an ever-greater number of systems have
become connected to the internet, the number of such intrusions has
increased steadily. In 2015, 79 per cent of US organizations questioned in
the annual US State of Cybercrime Survey reported having experienced a
‘security incident’ in the 12 months prior (PWC, 2015: 3). The problem of
unauthorized access has grown significantly as a ‘new wave’ of automated
methods for computer intrusion has emerged. For example, Wall (2007:
150–152) explicates how, from a single compromised computer, automated
tools are used to create a ‘botnet’ (or robot network) of ‘zombie’ machines
that can be controlled and used for a range of illicit purposes, all ‘from a
distance’. In other words, from a single initial point of intrusion, a network
of many thousands of machines can be subsequently compromised in short
order. Illicit bot use is so pervasive that one report found that 28.9 per cent
of all internet traffic consists of ‘bad bot’ activity, such as that generated by botnets
(Imperva Incapsula, 2016).
The frequency of such intrusions is anticipated to continue rising markedly
as internet use continues to spread and an ever-greater range of internet-enabled devices becomes available (such as digital televisions, games
consoles, ebook readers, tablets and especially mobile phones).
Current-generation smartphones are essentially powerful hand-held computers,
capable of performing a wide range of computing functions and enjoying
connectivity to the internet and other networked services. This
sophistication, and the ever-expanding range of services available via
phones, is starting to make them a prime target for hackers and other
cybercriminals. While the security measures for desktop and laptop
computers have become increasingly elaborate and effective over time,
smartphone security lags well behind, making them especially vulnerable.
Recent studies have shown that a number of the major mobile phone
platforms are vulnerable to hacking, enabling unauthorized access to
various accounts that users connect with on their mobile devices or to data
stored on the phone itself (AVG Technologies, 2011; Thomas et al., 2015).
Smartphone users are made more vulnerable because while on the move
they are more likely to use unsecured wireless internet access points, such
as Wi-Fi hotspots. This means that information flowing to and from the
phones can be intercepted and/or impersonated. Recent developments to
smartphones also bring new vulnerabilities in their wake. For example,
phones are now equipped with ‘near field communication’ (NFC) chips –
these enable devices to transmit and exchange information with other
devices over short distances. This is the technology that enables us to use
our phones to make instant payments by simply waving the phone near a
properly equipped check-out till, or making electronic money transfers to
other users of similar devices. While this undoubtedly brings a new level of
speed and convenience for shoppers, it also means that these highly
sensitive payment data might be intercepted, altered or faked by hackers –
the tools that could be used for such hacks have already been identified by
computer security experts (Hopwell, 2012). As will be discussed in greater
detail momentarily, such intrusions have continued to proliferate in the age
of the Internet of Things. Increasingly, contemporary life consists of
interactions with physical objects linked together through the internet.
Gartner Inc. (2017b) claims that ‘8.4 billion connected things will be in use
worldwide in 2017, up 31 per cent from 2016, and will reach 20.4 billion by
2020’. As mentioned in Chapter 1, the increase in internet-connected
devices has only increased the opportunities for intrusions and the
subsequent exploitation of technology, especially as many of these devices
feature poorly configured security protocols.
3.3.2 Illegalities Following Computer Intrusion
Once access to a computer system has been established, hackers are able to
perpetrate a further range of criminal acts. These include the following:
Theft of computer resources: Hackers may use the resources of the
hacked system for their own purposes, such as the storage of illegal or
undesirable materials. In one such incident a hacker from Sweden
illegally accessed an American university’s systems, and used them to
store and distribute a massive array of pirated music (MP3) files
(Furnell, 2002: 101). Additionally, as noted above, compromised
‘zombie’ computers may also be used to stage further acts of intrusion,
as well as being hijacked in order to distribute malicious software,
‘spam’ email and the like. The rapid growth of wireless internet
connectivity has also been exploited through acts of ‘bandwidth theft’
(colloquially known as ‘leeching’), where hackers gain unauthorized
access to others’ internet connectivity. This may result in legitimate
users experiencing slow-downs in data speeds (as unauthorized users
monopolize the available bandwidth) or they may find themselves
unexpectedly billed for data they themselves have not used. Moreover,
hackers may use the stolen bandwidth for illicit activities (such as the
file-sharing of copyrighted content), for which the legitimate user
might find themselves held legally responsible. Botnets also comprise
a theft of computer resources, in which the ‘botherder’ (controller of
the botnet) uses the computing power and bandwidth of other people’s
devices for their own purposes. Offenders have also recently turned toward using the processing power of other people’s computers to ‘mine’ cryptocurrency, that is, cryptographically secured forms of digital currency like Bitcoin, Monero and Ethereum. To perhaps oversimplify
the process, while acquiring cryptocurrencies can be done through
trade, often for other forms of currency at cryptocurrency exchanges,
these currencies can also be acquired by using computers to solve
complicated mathematical equations, a process known as ‘mining’. As the machines progress in solving these equations, their operators are awarded currency.
The more currency that is mined overall, however, the harder the equations get and, consequently, the more computing power is needed to solve them. In an effort to gain access to more computing power,
offenders have turned toward using malware to compromise other
computers in service of their mining efforts (Pearson, 2017). Estimates
claim that from January to September 2017, over 1.65 million
computers were compromised by malware and exploited for
cryptocurrency mining (Lopatin and Bulavas, 2017).
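The ‘mining’ process described above can be sketched as a simplified hash puzzle (real cryptocurrencies use far more elaborate proof-of-work schemes; the block data and difficulty value here are illustrative assumptions):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros -- a simplified stand-in for proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=4)
print(nonce)  # the winning nonce; expected work is roughly 16**4 = 65,536 hashes
```

Each extra digit of difficulty multiplies the expected work by sixteen, which is why offenders seek out ever more hijacked processing power as mining gets harder.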
Theft of proprietary or confidential information: Hackers may exploit
unauthorized access in order to steal or copy information including
software, business secrets, personal information about an
organization’s employees and customers, and credit card details which
can subsequently be used for fraudulent purposes. Theft of proprietary
information is cited as a significant source of financial losses by businesses and other organizations (CSI, 2011: 14–17). There have even
been cases in which seemingly reputable business organizations have
allegedly commissioned hackers to steal confidential information from
their competitors (Kassner, 2015b). Incidents also abound in which
thousands of customer credit card details have been stolen as a result
of hacking incidents, or in which hackers were able to exploit banks’
systems to arrange illegal electronic transfers of funds (Griffin, 2017;
Thompson, 2017). In one case of the former, a Russian hacker known
only as ‘Maxim’ accessed the systems of an internet retailer and stole
details of some 300,000 credit cards; when the company refused to pay
the $100,000 the hacker demanded as part of ‘his’ blackmail scheme,
‘he’ posted details of 25,000 of the cards on the internet (Philippsohn,
2001: 57). In 2010, Albert Gonzalez was sentenced to 20 years in
prison for the theft and resale of more than 170 million credit card and
ATM numbers that he had acquired through hacking (CSI, 2011: 10;
Zetter, 2010). In an instance of the latter, hackers from Sri Lanka were
arrested after they managed to electronically transfer $60 million from
the Far Eastern International Bank in Taiwan (Thompson, 2017).
Around 100 female celebrities had their iCloud accounts breached in
2014, resulting in the release of approximately 500 nude images on the
internet in an event some called ‘the Fappening’, a ‘boorish
portmanteau that speaks to the hyper-sexualized nature of the event’
(Marganski, 2018: 20; Zetter, 2014b). That same year, the home
improvement retail chain, The Home Depot, had its systems
undermined, resulting in the exposure and theft of around 56 million
consumers’ credit and debit card information (Zetter, 2014b). In 2017,
one of the major credit reporting agencies – Equifax – was
compromised as a result of a failure to properly patch critical systems
(Newman, 2017). Over 145 million people in the US – roughly half the
population – had their information compromised including ‘birth dates,
addresses, some driver’s license numbers, about 209,000 credit card
numbers, and Social Security numbers’ (Newman, 2017).
Systems sabotage, alteration and destruction: Hackers may exploit
access to cause significant amounts of damage to a system’s
operations. While outright ‘trashing’ of content is relatively rare
(Furnell, 2002: 101), it is not unheard of; there have been a number of
documented cases in which disgruntled former employees have
unleashed such destruction upon their erstwhile employers in revenge
for having been dismissed (Philippsohn, 2001: 55). More frequent is
the selective alteration of data held within the system. This may be
undertaken by hackers so as to cover their tracks, hiding from
administrators the fact that their system has been compromised,
thereby allowing the hackers to access the system on an ongoing basis.
Systems content may also be altered or erased by hackers ‘as a prank,
protest, or to flaunt their skills’ (Denning, 1999: 227). (Tampering as a
protest or political act will be considered in detail in Chapter 4.)
Hackers may also alter data so as to gain some personal advantage for
themselves, their friends or family. There have been a number of cases
in which students gained access to school or university computers to
alter their own or their friends’ grades. For instance, a student at the
Tenafly High School in New Jersey accessed the school’s system in
2017 to change his grades before sending out college applications
(Cimpanu, 2017). There was even one incident in which an inmate of a
Californian jail broke into the prison’s information system and altered
his release date ‘so that he could be home in time for Christmas’
(Denning, 1999: 227).
Website defacement and ‘spoofing’: Such hacker attacks directly target
internet websites themselves, and can take a number of distinct forms.
In instances of defacement, the website is hacked and its contents
altered. These may be viewed as pranks intended to amuse web
surfers, as exercises in self-aggrandizement by hackers who wish to
advertise their skills, or as ideologically and politically motivated
forms of protest against governments and businesses (Vegh, 2002;
Woo et al., 2004). Website defacements are an increasingly prominent
form of hacker activity, with recorded attacks rising from just five in
1995 to almost 6,000 in 2000 (Furnell, 2002: 104), and reaching a
startling 1.5 million in 2010 (Almeida and Mutina, 2011). While recent
figures are difficult to come by, it is safe to say that website
defacement remains a significant cybercrime activity. Victims have
included the US, Hong Kong, Israeli and Colombian governments, the
CIA and the US military, the UK Labour and Conservative Parties, the
New York Times, HSBC, Coca-Cola, Nike, Microsoft and the LAPD,
as well as (ironically) the sites of companies specializing in providing
internet security solutions (Almeida and Fernandez, 2009; Denning,
1999: 228–231; Furnell, 2002: 103–109; Lilley, 2002: 53–54). Figure
3.1 gives an example of a website defacement in which a group of
hackers defaced a website while encouraging the administrator/owner
to better protect the site. In the mid-2010s, website defacements
became a tactic of choice among politically active hackers and hacker
groups like Anonymous and Lizard Squad (Hern, 2015; Stosh, 2015).
Political hacks will be discussed further in Chapter 4. The second form
of website-centred hack, ‘spoofing’, does not attack organizations’
actual sites. Rather, the hacker establishes a spoof or fake website to
which the unsuspecting internet user is re-directed. Again, this can
cause considerable embarrassment to the owner of the legitimate
website, since its forged replacement may feature offensive speech,
pornographic imagery or accusations about the victim’s supposedly
unsavoury business or political practices. Spoofing is also used in the
commission of internet frauds. In such cases, the forged website is
made to appear as similar as possible to its legitimate counterpart, so
that the visitor may be unaware that it is a fake. The user will therefore
proceed to use the site as normal, for example, by attempting to log on
using their username and password. In the case of e-commerce and e-banking sites, this has been exploited to acquire access to customers’
accounts which can then be raided (Philippsohn, 2001: 60). (Website
spoofing for the commission of fraud will be considered further in
Chapter 5.)
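Because spoofed sites typically hinge on near-identical addresses, defenders sometimes flag lookalike domains by their edit distance from the genuine one. A minimal sketch (the domain names below are invented for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete a character from a
                           cur[j - 1] + 1,       # insert a character into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# A spoofed domain is often only one character away from the real one:
print(edit_distance("mybank-online.com", "mybank-0nline.com"))  # 1
```

A distance of one or two from a known brand domain is a common heuristic signal that a site may be a forgery rather than the legitimate counterpart.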
In addition to those activities outlined above, there are a number of
important, illicit activities usually associated with hacking but which do
not, in fact, require unauthorized access or a break-in to a computer system;
rather, they can be engineered simply via email or internet connection.
Denial of service attacks: ‘Denial of service’ basically refers to a
cyber-attack ‘which prevents a computer user or owner access to the
services available on his system’ (Esen, 2002: 271). Such an attack can
be performed without direct access to a system by ‘flooding’ internet-accessible computers with communications so that they become
‘overloaded’ and are rendered unable to perform functions for
legitimate users. A denial of service attack may use a single computer and its internet connection to flood a server with traffic. In the age of
high bandwidth, computing power, and denial of service mitigation
services, systems are becoming increasingly difficult to overload with
a single computer. For this reason, offenders may coordinate multiple
systems to target servers with traffic at a single time, a technique
referred to as a ‘distributed denial of service’ or ‘DDoS’ attack.
Multiple examples of denial of service attacks have appeared in recent
times. In fact, one report claimed that 84 per cent of their sample of
1,010 organizations were ‘hit with at least one DDoS attack in the
previous 12 months, up from 73 per cent in 2016’ (Neustar, 2017).
In February 2000, a 15-year-old Canadian, going under the hacker
pseudonym of ‘Mafiaboy’, managed to effectively shut down access to
leading e-commerce websites such as Amazon.com and eBay.com, as
well as the site of the global news service CNN. At his subsequent
trial, he was alleged to have caused some $1.7 billion in damages
(BBC News, 2001b). In addition to a host of other DDoS attacks, the
hacker group Lizard Squad took down multiple online gaming
services, including Xbox Live (Microsoft) and PlayStation Network
(Sony), which prevented millions of gamers from accessing online
features on their consoles for multiple days around the holiday season
in 2014 (Hall, 2016). In 2016, the Mirai (Japanese for ‘the future’)
botnet emerged, consisting primarily of poorly secured Internet of
Things devices infected with malware that quickly spread throughout
the world. The botnet, named after a Japanese anime titled Mirai Nikki,
was originally intended to give its designers an advantage in operating
a monetized Minecraft video game server (Graff, 2017). The goal was
to slow down competing servers’ connection speeds ‘in an attempt to
woo away players frustrated at a slow connection’ toward their own
servers (Graff, 2017). Mirai, however, proved to be so powerful that its
potential seemed wasted in this limited application. The botnet was
expanded, culminating in the coordination of compromised devices to
launch the largest series of distributed denial of service attacks the
world had ever seen at that time (Kolias et al., 2017: 80). In addition,
some botnet managers offer DDoS attacks as a purchasable service. One such site, Webstresser.org, was shut down in 2018 (Grierson, 2018).
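The shift from single-machine DoS to distributed attacks can be illustrated with a toy capacity model (all request rates and capacities below are invented for illustration, not drawn from real incidents):

```python
def service_degradation(attack_rates, server_capacity=100_000):
    """Toy model: fraction of requests dropped when attack traffic plus a
    nominal legitimate load exceeds the server's capacity."""
    legit = 10_000  # assumed legitimate requests per second
    total = legit + sum(attack_rates)
    if total <= server_capacity:
        return 0.0  # capacity absorbs everything; no denial of service
    return 1 - server_capacity / total  # share of requests that get dropped

# One attacker at 50k req/s cannot saturate a 100k-capacity server...
print(service_degradation([50_000]))       # 0.0
# ...but 100 coordinated bots at 5k req/s each overwhelm it:
print(service_degradation([5_000] * 100))  # about 0.80 of requests dropped
```

This is why, in an era of high bandwidth and DDoS mitigation services, offenders coordinate many compromised systems rather than relying on a single machine.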
Distribution of ‘malicious software’: Defined previously in Chapter 2
as consisting of codes and programs which infect computers, causing
varying degrees of disruption to their operation or damage to data,
malicious software (or ‘malware’ for short) production and distribution
is often attributed to criminal hackers. Malware takes a number of
distinctive forms, such as viruses, worms, Trojan horses and
ransomware. Computer viruses, like their biological counterparts, need
hosts to reproduce and transmit themselves (Boase and Wellman,
2001: 39). This form of malware modifies other programs or files and
inserts its own code to allow itself to be passed along, similar to how a
biological virus replicates by infecting another cell and taking it over.
Worms, however, are independent pieces of software capable of self-replication and self-transmission (e.g. by emailing themselves to others
in a computer’s address book) (Furnell, 2002: 147). The first known
worm was created in 1988 by Robert Morris, a Cornell University
graduate student (Lee, 2013). Though the worm did relatively little
damage (thanks, in part, to coding errors), it helped draw attention to
the emerging field of computer security (Lee, 2013). A Trojan horse,
as the name suggests, is a program that appears to perform a benign or
useful function, but in fact has some hidden destructive capabilities
that only become apparent after a user has downloaded and installed
the software. Recent times have seen an increase in a particularly
insidious form of malware called ransomware which prevents the user
from accessing their files or even opening programs until the user pays
the attacker a ransom for access. Some forms of ransomware encrypt
files, preventing file recovery attempts until the criminal is paid. Taken
together, such forms of malware are widely recognized as the most
disruptive and destructive kind of cyber-attack (CSI/FBI, 2003: 4).
Chapter 1 began with the example of the Love Bug worm that affected millions of computers in May 2000, causing an estimated $7–10 billion of damage. The following year, on 13 July 2001, the ‘Code Red’
worm appeared, and infected more than a quarter of a million systems
in its first nine hours (CERT/CC, 2002). By 1 August, it had caused an
estimated $1.2 billion in losses (BBC News, 2001a). In January 2004,
a virus dubbed ‘MyDoom’ became the fastest spreading virus ever,
causing an estimated $20 billion of damage to businesses worldwide in
just 15 days (McCandless, 2004; Uhlig, 2004). Later the same year, the
‘Sasser’ worm infected millions of machines, including the systems of
the South African government, the UK coastguard, the Taiwanese
national post office and the Australian rail network (leaving some
300,000 passengers stranded) (The Guardian, 2004). More recently,
the ‘WannaCry’ ransomware attack, which hit computers in over 150
countries, is estimated to have potentially caused damages upwards of
$4 billion (Berr, 2017). Damages would have likely been significantly
higher if not for the actions of security researcher Marcus Hutchins
who discovered a ‘kill switch’ that halted the malware (Solon, 2017).
The reader should also remember that the Mirai botnet, responsible for
the largest denial of service attacks to date, was driven by malware that
spread through the Internet of Things. The threat continues to grow,
with the first half of 2017 alone seeing the appearance of more than 3
million new malware programs, a 60 per cent increase on the previous
year (Benzmüller, 2017). Mobile phones have been particularly
vulnerable to this recent surge, with an estimated 1.72 million new
malware samples in the first half of 2016 alone targeting Android
smartphones, a 29 per cent increase from the first half of the previous
year (G Data, 2016). The massive disruption caused by such malicious
codes is a consequence of the way in which they can use the internet to
rapidly distribute themselves – the openness of the network and the
near-instantaneity of communication enable them to replicate and
spread exponentially. Further problems arise as viruses, again like their
biological equivalents, are often capable of mutation, thereby evading
virus detection and eradication systems programmed to recognize only
earlier versions of the code (Boase and Wellman, 2001: 41–42). A
piece of malware needs to be sent to only a single networked machine
from which it can automatically spread to millions of others without
any further action from its author. This capacity for generating rapid
and widespread disruption to the global network of computer systems
is now viewed as a key tool for conducting information warfare,
cyberterrorism and other politically motivated attacks (considered in
detail in Chapter 4).
3.4 Hacker myths and realities: Wizards
or button-pushers?
In Section 3.2, we noted the tendency of social representations of hacking
to stress the technical virtuosity and exceptional abilities of its practitioners.
Press reporting has propagated such imagery, not least because it enhances
the frisson of danger and fascination surrounding the activity, creating
around the figure of the hacker a kind of mystique. Popular fiction has
similarly stressed the abnormal intelligence and skills of hackers – in films
such as Hackers, The Core, The Italian Job (2003) and Blackhat (2015), the
hacker appears as a kind of savant, using knowledge and techniques far
beyond the comprehension of normal people to achieve the most awe-inspiring control over computerized systems (Wall, 2012: 12). Such cultural
constructions of the hacker as a ‘genius’ have significantly shaped wider
perceptions of hacking as a social practice. If such perceptions are valid,
then the actual number of hackers may be very limited, given that only the
most gifted and accomplished can acquire the skills necessary to succeed.
Estimating numbers is particularly difficult, given the inherently covert
nature of the activity, the increasing criminal sanctions that can be brought
to bear upon hackers if identified and which serve to drive them further
underground, and the capacity for anonymity afforded by the internet
environment. Writing in the relatively early days of mass-networked
computing, Sterling (1994: 76–77) estimated that the ‘digital underground’
of hacking comprised no more than about 5,000 individuals, of whom ‘as
few as a hundred are truly “elite” – active computer intruders, skilled
enough to penetrate sophisticated systems and truly to worry corporate
security and law enforcement’. In the early 2000s, the US-based internet
security company TruSecure became aware of some 11,000 individuals
organized into about 900 different hacking groups (Twist, 2003).
Contemporary estimates of the size of the hacker underground or even
hacker community at large – much less estimates of skilled hackers –
remain elusive. One indicator, however, does give us an idea that interest in
hacking is growing: increased attendance at hacker conventions over the
years. For instance, DEF CON, one of the oldest and largest hacker
conventions globally, had over 20,000 participants in 2016, surpassing previous years’ attendance, which ranged between 15,000 and 18,000 (DEF CON,
2018). Of course, it should be noted that these events are often open to the
general public and, therefore, a number of attendees at such events may not
be hackers. Still, it provides some indication of growing interest in hacking.
While the numbers may have increased over the past decade, the levels of
skill and knowledge necessary for effective hacking appear to present a
barrier to entry, thereby setting limits on the extent of hacking as a
cybercrime problem.
Figure 3.1 Example of a defaced website
(Xatrix.org, 2014b)
The view of hacking as an activity restricted to a small, highly able and
motivated elite, however, may be increasingly at odds with reality. Hacking
itself has undergone considerable evolution and change over recent years.
Perhaps it was true that for an earlier generation of hackers, crackers and
viral coders, technical knowledge and skills were a prerequisite, since they
had to rely on either formal training in computing and/or concerted
experimentation, trial and error in order to stage attacks. As hacking
techniques have evolved, however, an increasing range of automated
software tools have appeared which can perform much of the necessary
work. For example, the kinds of unauthorized access and hijacking of
computers described earlier can now be performed with a range of tools.
Tools like Nmap and Wireshark allow users to scan computer networks and
traffic. Some tools facilitate the identification of system weaknesses.
Programs like Metasploit, for instance, not only help identify vulnerabilities
in systems, but allow users to craft ‘payloads’ (code that will take
advantage of system weaknesses or ‘exploits’). Tools like Cain and Abel,
John the Ripper, and L0phtCrack can be used to capture and crack user
passwords. Others allow for the creation and distribution of malware like
Tox, which emerged in 2015 as a program for creating ransomware (Brook,
2015). Other examples include TheFatRat and Veil, the latter of which helps
malware avoid detection by antivirus programs. Similarly, denial of service
attacks can now be launched automatically using software such as
FloodNet, Low Orbit Ion Cannon (‘LOIC’), High Orbit Ion Cannon
(‘HOIC’) and Slowloris (Denning, 1999: 237; Gallagher, 2012; Mansfield-Devine, 2011). These are only some examples of the hundreds of tools used
by criminal hackers and others. Many can be found on hacker or security
industry websites like SecTools.org, Nulled.to and HackForums.net. Others
can also be found through open-source programming sites like
SourceForge.net or GitHub.com, as well as sites on the dark web. Many of these programs come with user-friendly Windows-style interfaces, pull-down menus and even ‘help’ files to guide the inexperienced user. One such
tool, the Vbs Worm Generator, allows the user to custom-design a worm
with a range of different destructive payloads, all with a few clicks of the
mouse (Furnell, 2002: 178–181). Such tools enable even the novice, with
little or no existing expertise, to generate cyber-attacks. Thus in 2001, a 20-year-old from the Netherlands calling himself ‘OnTheFly’ used the Vbs tool
to create the ‘Anna Kournikova’ worm. When it spread across the internet
with alarming and unexpected rapidity, the terrified author surrendered
himself to the authorities. He subsequently revealed that it was his first
effort at viral coding and ‘it only took me a minute to write it’ (Delio,
2001). In the words of ‘Sir Dystic’, one of the authors of the Back Orifice
2000 (‘BO2K’) hacking tool, ‘we made the software easy enough for an
eight-year-old to hack with and we think it could do serious damage’ (cited
in Furnell, 2002: 123). Such technical innovations have effectively
‘democratized’ hacking, making it increasingly available to anyone with a
PC, internet connection and the curiosity or desire to see if they too can join
the ‘digital underground’. Importantly, however, persons who rely excessively on such tools are often derided as ‘script kiddies’ or ‘skiddies’ in the
hacker scene. They are viewed as amateurs and often not considered ‘true’
or ‘elite’ hackers by their more advanced counterparts.
3.5 ‘Why do they do it?’ Motivation,
psychology, gender and youth
Inquiries into crime have long dwelt on the causes and motivations behind
offending behaviour – in the words of Hirschi (1969), one of the most
frequently asked questions is ‘Why do they do it?’. In this respect,
deliberations on cybercrime are no different, with a range of actors such as
journalists, academics, politicians, law enforcement operatives and
members of the public all indicating what they perceive to be the factors
underlying hackers’ dedication to computer crime. Many commentators
focus upon ‘motivations’, effectively viewing hackers as rational actors
who consciously choose to engage in their illicit activities in expectation of
some kind of reward or satisfaction. The motivations variously attributed to
hackers are wide-ranging and often contradictory. Among those concerned
with combating hacking activity, there is a tendency to emphasize
maliciousness, vandalism and the desire to commit wanton destruction
(Kovacich, 1999); the attribution of such motivations by law enforcement
and computer security agencies is unsurprising, as it offers the most clear-cut way of denying hacking any socially recognized legitimacy. Among the
wider public, hackers are perceived to act on motivations ranging from self-assertion, curiosity and thrill-seeking, to greed and hooliganism (Dowland
et al., 1999: 720; Voiskounsky et al., 2000: 71; Wall, 2008a, 2008b). This
diversity of attributed motivations may, in fact, reflect the variety of
individuals who engage in hacking and the different aims and goals that
such activities satisfy. While some may see hacking as an end in itself,
others may view it as merely a convenient instrumental means to achieve
other goals. Such goals may be related to economic self-interest (as, for
example, in cases of computer-enabled fraud) or to political ideas and
ideologies (as in cases of cyberterrorism and hacktivism). From a ‘rational
choice’ viewpoint, such variety is to be expected, as computer criminals
(like their terrestrial counterparts) are deemed to be driven by a range of
motivations ‘as old as human society, including greed, lust, revenge and
curiosity’ (Grabosky and Smith, 2001: 35; Hutchings, 2013).
One way in which commentators have attempted to refine their
understandings of hacker motivations is to elicit from hackers themselves
their reasons for engaging in computer crimes. There now exist a number of
studies, both popular and scholarly, in which hackers have been interviewed
or surveyed about their illicit activities (e.g. Bachmann, 2011; Clough and
Mungo, 1992; Taylor, 1999; Verton, 2002). In addition, hackers themselves
have authored texts and documents in which they elaborate upon their ethos
and aims (see, for example, Commander X, 2016; Dr K, 2004; Mitnick and
Simon, 2011). Such insider accounts cite motivations that are often very
different from those cited by ‘outsiders’. In fact, they consistently invoke a
rationale for hacking that explicitly mobilizes the ‘hacker ethic’ of an
earlier generation of computer enthusiasts. In hackers’ self-presentations,
they are motivated by factors such as intellectual curiosity, the desire to
expand the boundaries of knowledge, a commitment to the free flow and
exchange of information, resistance to political authoritarianism and
corporate domination, and the aim of improving computer security by
exposing the laxity and ineptitude of those charged with safeguarding
socially sensitive data. Such accounts ‘straight from the horse’s mouth’,
however, do not necessarily furnish insights into hacker motivations that are
any more objectively true than those attributed by outside observers. As
Taylor (1999: 44) notes, ‘it is difficult ... to separate clearly the ex ante
motivations of hackers from their ex post justifications’. In other words,
such self-attributed motivations may well be rhetorical devices mobilized
by hackers to justify their law-breaking and defend themselves against
accusations of criminality and deviance. Viewed in this way, hackers’
accounts can be seen as part of what criminologists Sykes and Matza (1957)
call ‘techniques of neutralization’ (see Chapter 2). The view of hackers’
self-narrations as instances of such techniques can be supported if we
examine hacker accounts. A clear illustration is provided by a now famous
(or infamous) document called ‘The conscience of a hacker’ authored by
‘The Mentor’ in 1986, now better known as The Hacker’s Manifesto. In the
Manifesto, its author explains hackers’ motivations by citing factors such
as: the boredom experienced by ‘smart kids’ at the mercy of incompetent
school teachers and ‘sadists’; the desire to access a service that ‘could be
dirt-cheap if it wasn’t run by profiteering gluttons’; the desire to explore
and learn which is denied by ‘you’ who ‘build atomic bombs, ... wage wars,
... murder, cheat and lie’ (The Mentor, 1986). Such reasoning clearly
justifies hacking activities by re-labelling ‘harm’ as ‘curiosity’, by
suggesting that victims are in some sense ‘getting what they deserve’ as a
consequence of their greed, and turning the tables on accusers by claiming
the moral high ground through a citation of ‘real’ crimes committed by the
legitimate political and economic establishment. Just as agents of ‘law and
order’ present hacker motivations in a manner that supports criminalization,
insiders’ accounts of hacking serve to resist deviant and criminal labels by
legitimating hacker activities; both are part of the process through which the
social and political meanings of hacking are constructed and contested.
A second strand of thinking about hacking downplays motivations and
choices, and emphasizes instead the psychological and/or social factors that
seemingly dispose certain individuals or groups toward law-breaking
behaviour. In such accounts, free choice is sidelined in favour of a view of
human actions as fundamentally caused by forces acting within or upon the
offender. From an individualistic perspective, some psychologists have
attempted to explain hacking by viewing it as an extension of compulsive
computer use over which the actor has limited control. So-called ‘internet
addiction disorder’ is viewed as an addiction akin to alcoholism and
narcotic dependence, in which the sufferer loses the capacity to exercise
restraint over his or her own habituated desire (Young, 1998; Young et al.,
1999). Some accounts of hacking draw explicit parallels with drug
addiction, going so far as to suggest that engaging in relatively innocuous
hacking activities can lead to more serious infractions, just as the use of
‘soft’ drugs like marijuana is commonly claimed to constitute a ‘slippery
slope’ leading to the use of ‘hard’ drugs like crack cocaine and heroin
(Verton, 2002: 35, 39, 41, 51). More recently, accounts have attempted to
frame hackers as falling on the autism spectrum (Schell and Holt, 2010;
Schell and Melnychuk, 2010; see also Steinmetz, 2016: 187–188).
Investigative journalist Misha Glenny (2011: 270–271), for example, has
described hacker criminality as potentially explained by Asperger’s
Syndrome (a diagnosis now subsumed under autism spectrum disorder).
This medicalization of hacking as a psychological disorder has gained
sufficient purchase to have played a role in the prosecution and
sentencing of a number of hackers: in the USA the notorious hacker Kevin
Mitnick was ordered by the sentencing judge to attend an Alcoholics
Anonymous-style ‘12-step’ recovery programme for his purported
computer addiction; and in the UK, Paul Bedworth was acquitted of
hacking charges on the grounds of computer addiction (Grossman, 2001: 1–
13). Perhaps such medicalized views of hacking have contributed to recent
attempts to rehabilitate hackers, such as the UK National Crime Agency’s
hacker ‘boot camps’: experimental treatment programmes which focus on
redirecting criminal or at-risk hackers away from illicit activities and
toward legitimate security professions (Clark, 2017).
Both psychological and social explanations of hacking have focused in
particular upon masculinity and youth as key issues, since hackers appear to
have a distinctive gender and age distribution. In terms of gender, studies of
hacking reveal it to be an overwhelmingly male activity (Jordan and Taylor,
1998; Reagle, 2017; Tanczer, 2015; Taylor, 1999: 32; Turkle, 1984;
Steinmetz, 2016: 49–51; Yar, 2005c). For instance, in a survey of hackers
attending ShmooCon, a major hacker convention, 94.4 per cent of the
sample self-identified as male while 5.6 per cent identified as female
(Bachmann, 2010: 111). In terms of age, studies again reveal hackers to be
predominantly young, often adolescents and teenagers. Therefore Sterling
(1994: 76) claims that ‘most hackers start young’ and ‘drop out’ by their
early twenties; Taylor (1999: 32) found that those who continued to hack
into their mid-twenties complained that they were viewed as ‘has-beens’ by
the youthful majority of the hacker ‘underground’. On the other hand, some
youngsters in the hacker scene move into legitimate security jobs as
they age, thus continuing to work as hackers, albeit in a licit capacity (Taylor,
1999: 92). Attempts to explain the domination of hacking by male youth
will be considered below, with the dimensions of gender and age being
tackled in turn.
3.5.1 ‘Boys will be boys’? Masculinity and hacking
Explanations for the massive over-representation of males among hackers
variously emphasize psychological and/or socio-cultural factors. Some
commentators attribute the appeal of hacking for males (and its lack of
appeal for females) to innate psychological differences between the
genders, claiming that males are more oriented toward, and fascinated by,
mathematical and logical problem solving (Taylor, 1999: 35). An objection
to such approaches is, however, that they conflate ‘nature’ and ‘nurture’;
that is, they attribute to individuals’ innate biological heritage what in fact
may be socially learned behaviours, dispositions and inclinations.
Consequently, others instead view such psychological differences as the
outcome of cultural influences. Therefore Turkle (1984) claims that males
are culturally socialized into a desire for what she calls ‘hard mastery’,
seeking to impose their will and control over machines, thereby furnishing
them with attitudes particularly suited towards hacking (see also discussion
in Taylor, 2003). Similarly, Thomas (2002: xvi) suggests that hacking is
characterized by a ‘boy culture’ centred on demonstrations of mastery,
wherein individuals show their ability to dominate their ‘social and physical
environment’. A slightly different aspect of gendered psychology is stressed
by those who emphasize the ways in which teenage boys turn to hacking in
order to satisfy a desire for power in the face of their actual powerlessness
in ‘real-world’ social relations (Sterling, 1994: 19). In sociological and
criminological terms, this disposition may be viewed as a consequence of
the domination in our culture of what Connell (1987) calls ‘hegemonic
masculinity’. This is seen as an ideology that presents being a ‘real man’ in
terms of the ability to exercise authority, control and domination over others
(see also Jefferson, 1997; Messerschmidt, 1993). Given the pressures and
expectations placed upon boys to demonstrate their ‘manliness’ in such
terms, hacking (and crime more generally) may be seen as an appealing
avenue for those striving to negotiate their way towards a socially
recognized status as ‘real men’. A final strand of explanation places less
emphasis upon factors disposing males to engage in hacking, and instead
focuses upon those factors that actively discourage female participation in
such activities. Taylor (1999: 36–40, 2003: 136–138) notes how the culture
of hacker groups, websites and online forums exhibits strong sexist and
misogynistic dimensions; the flow of sexist and demeaning speech and
harassment of women issuing from male hackers may be actively
encouraged by the anonymity of online interaction, which loosens
inhibitions; such harassment may also occur in ‘real-world’ settings. This ‘locker room’
or ‘bar room’ culture may discourage women from involvement in hacker
circles and activities in spite of any genuine interest they might have. This
problem may be exacerbated by what Reagle (2017) calls the ‘naïve
meritocracy’ that characterizes hacker culture. Hackers often
espouse the notion that all that matters is one’s skill and creativity (e.g.
Levy, 1984: 43). Reagle, however, points out that such beliefs can render
invisible the barriers to entry faced by women and other minority groups.
Further, male hackers may explain away any gender disparities as a
problem external to themselves and the hacker community, relying instead on
explanations which hinge on alleged biological differences or early-life
socialization (Taylor, 1999; Tanczer, 2016). The good news is that the
representation of women may be improving over time, though it is
questionable how much of an impact this has had on the masculine culture
among hackers. In his study of the Chaos Computer Club, for instance,
Sebastian Kubitschko (2015: 398) quotes a female hacker who states, ‘over
the years, more and more women come to the Congress. But essentially and
considering the active members the CCC is male-dominated and nothing
much has changed’ (see also: Steinmetz, 2016: 49–51).
3.5.2 Hacking: Just another case of ‘juvenile
delinquency’?
The second area upon which explanations of hacking focus is the
over-representation of youth among its practitioners. Given that most
hackers appear to be teenagers (Verton, 2002), this inevitably raises
questions about the relationship between age and the disposition to
participate in such computer-related offences. As with discussions of
gender, reflections on youth and hacking take a range of different
approaches, some psychological and some more sociologically oriented.
One tendency in explanations of hacking starts from the widely held view
of adolescence as a period of inevitable psychological turmoil and crisis
(Muncie, 1999: 92–93), suggesting that such disturbance helps account for
youthful participation in hacking alongside other forms of delinquent and
anti-social behaviour. In particular, it has been suggested that ‘juveniles
appear to have an ethical “deficit” which disposes them toward law- and
rule-breaking behaviour when it comes to computer use’ (DeMarco, 2001).
This view is backed up by citation of surveys about youth attitudes towards
computer crimes, which claim that significant numbers of young people
participate in computer-related offences and that many deem it acceptable
to do so (Bowker, 1999: 40). Such accounts may be seen to implicitly side
with theories of developmental psychology that claim that individuals, as
they move from childhood to adulthood, pass through a number of stages of
moral learning; it is only with ‘maturity’ that individuals are fully able to
appreciate and apply moral principles to regulate their own and others’
behaviour. As such, juveniles occupy a space of moral ‘immaturity’ in
which they are likely to act upon their hedonistic impulses with limited
regard for the impact of their actions upon others (Hollin, 2002: 159–160).
When applied to computer crime, such understandings attribute youthful
participation in hacking to a combination of adolescent ‘crisis’ and ethical
‘underdevelopment’; conversely, they can be used to explain why most
individuals ‘drop out’ of hacking as they reach psychological maturity in
their twenties.
A more socially oriented explanation of youthful involvement in hacking
stresses the problematic family backgrounds of offenders. There is an
established tradition of ‘developmental criminology’ that claims a strong
correlation between youth offending and experiences of parental neglect,
parental conflict, and family disruption and breakdown (see, for example,
Loeber and Stouthamer-Loeber, 1986). While accounts of hackers and
hacking seldom make explicit reference to such scholarly literature, they
nonetheless can be seen to reproduce its association between offending and
familial problems. So, for example, Verton (2002: xvii, xviii, 37, 86, 102,
105, 142, 145, 170, 188), in his study of teenage hackers, makes repeated
references to the ‘troubled’ family backgrounds of his subjects, citing
biographical details relating to parental conflict, divorce, ‘dysfunctional
family environment’, parents’ alcoholism, and physical abuse. Of one
hacker, dubbed ‘Explotion’, he notes that ‘his mother died when he was 13,
and his father liked to practice his left jab on his face’; of another, Willie
Gonzalez, Verton opines that ‘Willie was heading down a dangerous path
that a lot of teenage hackers ... find themselves on. He was an outsider, a
loner with no escape from a dreadful home life’ (ibid.: 105, 146). The clear
implication here is that involvement in hacking, like other forms of
‘delinquent’ behaviour, is explicable by the teenagers’ lack of ‘caring,
loving, stable families’ (ibid.: 188). Such reasoning can be criticized,
however, for falling prey to what Felson (1998) calls the ‘pathological
fallacy’, the idea that forms of ‘pathological’, ‘deviant’ or ‘abnormal’
behaviour must be explained by reference to some other ‘pathology’ or
‘abnormality’ in the individual’s personal and social circumstances. Writers
on hacking such as Fitch (2004: 7) also contest this view of hackers as
‘alienated, angsty teens’, seeing them instead as ‘normal kids, muddling
through everyday life issues’, and hence largely indistinguishable from their
peers. Further, the relationship between hacking and age may be at least
partially a result of time and opportunity – young kids usually have more
free time to tinker and play with technology. Moreover, early childhood
experiences with technologies may ‘build an enduring interest and
adoration – a potential propulsion for future interests in computing
technology’ (Steinmetz, 2016: 63). In this way, young hackers may not be
‘pathological’ but may just be ‘kids being kids’ differently.
An alternative socially oriented explanation for youth involvement in
hacking emerges from ‘subcultural’ and ‘differential association’
perspectives on delinquency. Subcultural criminology starts by rejecting the
depiction of offenders and offending in individualistic terms. Rather,
analysts stress that crime and delinquency are situated in the context of
social group membership; the groups to which ‘delinquent’ individuals
belong will have distinctive shared beliefs, attitudes and values, and it is
this distinctive subculture that both licenses and rewards behaviour that
may be at odds with mainstream social norms and rules (Cloward and
Ohlin, 1961; Cohen, 1955). Similarly, learning theories view crime and
delinquency as socially learned behaviour, and see close personal groups
and peer milieu as the main context for such learning. Through association
with others, individuals will learn not only the techniques for carrying out
crimes, but also the attitudes and values that support and validate such
behaviour (Sutherland and Cressey, 1974). Social–psychological
perspectives such as social learning theory similarly stress how individuals
are socialized into rule-breaking behaviour through peer association (Akers,
1998; Bandura, 1977; Burgess and Akers, 1966; Fitch, 2004). Such
perspectives can be mobilized to explain youth involvement in hacking and
other forms of computer crime, as studies clearly show the ways in which
such behaviour takes place in a distinctive socio-cultural context and
‘communal’ structure (Fream and Skinner, 1997; Holt, 2007; Holt et al.,
2010; Taylor, 1999: x, 26). Peer association in the hacking subculture takes
place in both ‘real’ and ‘virtual’ worlds (Fitch, 2004: 8–9). In the
‘terrestrial’ world, hackers participate in organized conferences, such as the
annual DEF CON hacker gathering, at which knowledge, tools and tales are
exchanged; similarly, chapters of the ‘2600’ hackers organization meet
weekly in towns and cities across the USA (Rogers, 2000: 12). In ‘virtual’
or online settings, peer groups are formed and sustained via computer-mediated interaction in chat rooms and on bulletin boards. Such groups not
only provide opportunities for novice or would-be hackers to learn the
tricks of the trade from their more experienced counterparts, they also
socialize new members into the distinctive ethos and attitudes of hacker
culture (Denning, 1991: 60; Holt, 2007). This culture includes those values
(noted earlier) which legitimate hacking, and counter outsider accusations
of deviance and criminality; it also includes broader lifestyle codes about
dress, speech and leisure pursuits that serve to symbolically unite hackers
and demarcate them from the despised non-hacker culture. Moreover, such
groupings provide social and symbolic rewards for their members in terms
of the peer recognition and esteem granted to those who successfully carry
off hacks (Taylor, 1999: 59–60); such recognition becomes crucial in
sustaining members’ ongoing commitment to the hacker ideology and
lifestyle. From a subcultural perspective, participation in and identification
with hacking culture may be viewed as a strategy of boundary formation
through which ‘youth’ is produced as a distinctive and oppositional
category of social membership, one which its participants experience as a
form of resistance to ‘mainstream’ adult society (Muncie, 1999: 158–169).
Though the previous discussions highlight the role of gender and age in
hacking, other factors should also be considered, even if only briefly. It is
worth noting, for example, that hacking – at least in the West – appears to
be a predominantly white activity (Bachmann, 2010; Schell and Melnychuk,
2010; Schell et al., 2002; Steinmetz, 2016; Taylor, 1999). Bachmann
(2010), for instance, found that 93.5 per cent of his sample of hackers (n =
120) were white. In addition, class is an important, if underexamined,
dimension of hacking as well. Steinmetz (2016) has argued that
involvement in hacking is a ‘heavily class-based phenomenon [emphasis in
original]’ related to issues like cost-of-entry, age of exposure and
educational experiences. In toto, the confluence of age, gender, race and
class in hacker culture has led scholars and security experts to summarize
the prototypical hacker in the following ways: ‘[a] quick glance is all it
takes to confirm that the social base of the hacker movement is heavily
skewed towards middle-class males living in the West’ (Söderberg, 2008:
28); ‘the typical hacker is a white, suburban, middle-class boy, most likely
in high school’ (Thomas, 2002: xiii); ‘overwhelmingly male, white, and
from privileged economic positions’ (McConchie, 2015: 881); and hackers
‘often come from fairly well-to-do middle class backgrounds’ (Sterling,
1994: 59).
3.6 Hacking and the law: Legislative
innovations and responses
The appearance of any supposedly new and pressing criminal threat
typically instigates a process of social innovation in which authorities and
agents of social control take steps to curtail the perceived danger presented
by the offending individuals or groups (Cohen, 1972; Goode and Ben-Yehuda, 1994). The threat from hacking, be it real or imagined, has incited
a range of legislative and criminal justice responses that has driven an
ongoing ‘crackdown’ on activities of computer intrusion and manipulation.
This domain of reaction and innovation will be considered below.
As Walker et al. (2000) point out, the expansion of internet-related crimes
has generated a range of legislative and legal responses, many of which
seek to adapt existing laws to the novel environment of cyberspace.
Particularly significant challenges are presented by the inherently
transnational nature of internet interactions. Consequently, legal innovations
at the national level, while considered essential, have of necessity been
complemented both by a burgeoning number of international agreements
and by ‘informal’ regimes of governance and regulation implemented by a
range of non-governmental actors (Dupont, 2016; Walker et al., 2000: 6,
19). Below we will focus on the more specific question of legal innovations
in response to the perceived threat of hacking and computer intrusion.
It would appear that in the early years of computer crime there was a
tendency to rely upon existing legislation covering offences of trespass,
theft and fraud to tackle hacking and related offences. A number of
landmark cases, however, demonstrated that these novel computer crimes
could slip through the net of existing legislation, resulting in a failure to
successfully prosecute offenders. In 1988, for example, in the
case of Gold and Schifreen, who had hacked into Prince Philip’s mailbox, the
UK House of Lords ruled that unauthorized access to a computer system
did not constitute an offence under existing criminal law (Esen, 2002: 273;
Wasik, 2000: 274). This apparent gap in the law led to the publication of a
working paper on computer misuse by the Law Commission in 1988;
Parliament, following the Commission’s recommendations, subsequently
introduced the Computer Misuse Act in 1990 to specifically cover offences
such as hacking and computer viruses. Numerous countries have now
established similar legal provisions to cover hacking and related offences.
While there is not the space here to review even a significant portion of
relevant provisions from around the world, we can briefly examine the legal
measures instituted in the USA and the UK in order to establish a
preliminary understanding of how such laws are framed, and the kinds of
penalties they provide for, as well as some of the criticisms that have been
levelled at them.
The USA introduced one of the earliest national laws specifically oriented
to computer crime in the form of the Computer Fraud and Abuse Act
(CFAA) of 1986. This act made it a crime to ‘knowingly access computers
without authorization, obtain unauthorized information with intent to
defraud, or “cause damage” to “protected computers”’ (Biegel, 2003: 236).
The law made provision for a range of punishments according to the
particular offence, including a custodial sentence of up to five years for
first-time offenders and up to ten years for repeat offenders, as well as
significant fines. In response to technological innovations and new forms of
computer crime, the Act was subsequently amended on numerous occasions
(1986, 1989, 1990, 1994, 1996), and there are continued calls to adjust its
provisions further to close perceived ‘loopholes’ (ibid.: 236–239). Indeed, a
number of particularly significant amendments to the CFAA were
introduced by the USA PATRIOT Act of 2001 in the wake of the 9/11
attacks; these included the increase of maximum penalties from five to ten
years for first offenders and from 10 to 20 years for repeat offenders
(Milone, 2003: 85). Other subsequent enhancements of the CFAA’s
provision were introduced in The Identity Theft Enforcement and
Restitution Act (2007) (Brenner, 2010: 438–441). In its current form, the
CFAA criminalizes nine types of activity which the US Department of
Justice summarizes as ‘obtaining national security information’, ‘accessing
a computer and obtaining information’, ‘trespassing in a government
computer’, ‘accessing a computer to defraud and obtain value’,
‘intentionally damaging by knowing transmission’, ‘recklessly damaging by
intentional access’, ‘negligently causing damage and loss by intentional
access’, ‘trafficking in passwords’ and ‘extortion involving computers’
(CCIPS, 2010: 3). In addition to these federal laws, a number of individual
US states have their own laws covering unauthorized access to computers
(Lilley, 2002: 245–246).
The UK Computer Misuse Act (CMA) (1990) also makes provisions
covering unauthorized access and damage to computer systems. These
provisions initially fell under three main sections, with a fourth being
introduced through amendments in the Police and Justice Act 2006. Section
1 makes it an offence to intentionally attempt to gain unauthorized access to
a computer system, and makes such an offence punishable by up to two
years’ imprisonment and/or a limited fine. Section 2 covers unauthorized
access ‘with intent to commit or facilitate the commission of further
offences’ and is punishable by up to five years’ imprisonment and/or an
unlimited fine. Section 3 covers the ‘unauthorised modification of computer
material’ and so encompasses the distribution of viruses and other forms of
malicious software, and is punishable by up to ten years’
imprisonment and/or an unlimited fine (Wasik, 2010: 404–408). Section 4,
introduced via a subsequent amendment, was intended to address a
perceived weakness in the original legislation with respect to distributed
denial of service attacks. This Section creates an offence of ‘making,
supplying or obtaining articles for use in offence under Section 1 or 3’,
thereby criminalizing the production, distribution or acquisition of the kinds
of hacking software tools discussed above. The maximum penalty for
conviction under Section 4 is two years’ imprisonment (MacEwan, 2008).
The Serious Crime Act (2015) introduced further amendments to the CMA,
including: extending the length of custodial sentences to 14 years for
instances of ‘serious damage’, and up to life imprisonment in cases where
attacks ‘result in loss of life, serious illness or injury or serious damage to
national security’; extending provisions around ‘obtaining articles for
purposes relating to computer misuse’; and extending the ‘territorial scope’
of the law, so as to cover offences committed outside the country when
there is a ‘significant link with domestic jurisdiction in relation to the
offence’ (Home Office, 2015).
Both the US and UK laws have been criticized from a number of angles.
Critics in the USA have complained that, in reality, the legal provisions are
seldom invoked to their full extent, so that prosecution ‘frequently
translates into a slap on the wrist’ and that ‘few cybercriminals ever see the
inside of a US jail’ (Bequai, 1999: 17). As mentioned previously in Chapter
2, only 114 cases of computer fraud were filed in US District Courts in the
2016 fiscal year (USDOJ, 2016). This failure to bring ‘the full force of the
law’ to bear on hackers is held to undermine any deterrent effect that the
provisions might have, a problem further exacerbated by the relative
scarcity of prosecutions (attributed in part to the problems
of identifying perpetrators, the lack of appropriate law enforcement
expertise and resources, and the lack of appropriate technical knowledge on
the part of judges and juries alike). Similarly, critics of the UK law have
bemoaned its lack of effectiveness as a deterrent, given the infrequency of
successful prosecution and the tendency toward ‘lenient’ sentences
(Coleman, 2003: 132). Between 1990 and 2006, 214 defendants were
charged under the Computer Misuse Act, with 161 of those being found
guilty (MacEwan, 2008: 4). Data issued by the law firm Pinsent Masons
indicated that only 61 prosecutions of cybercrime occurred in 2015, up
from a paltry 45 in 2014 (Muncaster, 2016). These figures indicate a serious
overall lack of enforcement given the vast number of such incidents that are
known to occur. The law has also been criticized for requiring too heavy a
burden of proof, as prosecutors are required to prove intent on the part of
the offender for the more serious offences under Sections 2 and 3 of the
Act. Additionally, some argue it is outdated, as it does not cover newer
forms of crime such as denial of service attacks which do not require
unauthorized access to a system to interfere with its operations, nor do they
require modification of a system’s contents (MacEwan, 2008: 4).
From the other side, however, some commentators have suggested that the
scarcity of successful prosecutions has meant that in those cases where
convictions are secured, there is a tendency to impose excessive and
unwarranted penalties so as to ‘make an example’ of hackers, and that this
excessiveness is fuelled by the tendency to grossly exaggerate the amount
of damage that the perpetrator’s activities have caused (Furnell, 2002: 215;
Kovacich, 1999: 574). Thus Furnell (2002: 214–215) notes that while
hacker Kevin Mitnick was sentenced to five years’ imprisonment for
computer intrusion, the average term of imprisonment in the USA for
manslaughter is only two years and ten months; this would suggest that the
cases of high-profile hackers are liable to turn into ‘show trials’ resulting in
sentences out of all proportion when compared with those for other,
arguably more serious crimes. Further, though the CFAA has always been
controversial among hackers and the tech community more generally, the
law generated considerable ire following the arrest and subsequent suicide
of hacker and activist Aaron Swartz. For illegally accessing and
downloading proprietary JSTOR (Journal Storage) files from the MIT
network, Swartz faced ‘up to 35 years in prison, to be followed by three
years of supervised release, restitution, forfeiture and a fine of up to $1
million’ (USAODM, 2011). Though the precise reasons cannot be
ascertained, many claimed that the punishments permitted under the CFAA
were too harsh and helped drive him to suicide. Advocates have since
pushed for changes to the CFAA through legislation
referred to as ‘Aaron’s Law’ (Reader, 2016). As of this writing, despite
multiple attempts, Aaron’s Law has not been passed by the US Congress.
In the UK, Section 4 of the Computer Misuse Act, which criminalizes
instruments or tools that can be used for effecting computer intrusion, has
also been criticized. The IT community has pointed out that these same
sorts of tools are regularly developed and used by security specialists to test
systems for vulnerability and more effectively safeguard them from
intrusion. By criminalizing the manufacture, distribution or acquisition of
such tools, Section 4 threatens to criminalize lawful activity by security
specialists. As MacEwan (2008: 6) puts it: ‘these items are easily accessible
and are widely used for legitimate purposes on a regular basis. It seems that
the courts will be left with the difficult task of drawing the line of their
illegal use.’
Another serious challenge for legal provisions against hacking relates to the
international rather than national context. While numerous countries have
established laws to tackle hacking and other computer crimes, many have
been slow in instituting such provisions; one early survey of 52 countries
found that 33 had no special legal provisions for tackling cybercrime
(McConnell International, 2000: 4). Significant progress has, however, been
made in recent years, with many more countries introducing legislation,
often driven by agreements established via regional bodies such as the EU,
the African Union, the Association of Southeast Asian Nations (ASEAN)
and the Arab League (ITU, 2008). Even where countries have such legal
measures in place, there remain significant problems in terms of a lack of
harmonization or consistency across territories. These cross-national
variations, it is suggested, can encourage what is referred to as ‘regulatory
arbitrage’, with individuals and groups committing offences from those
territories where they are assured of facing little or nothing in the way of
criminal sanctions. The need to establish cross-national consistency in
cybercrime laws has consequently stimulated the development of
international treaties and agreements, such as the Council of Europe
Convention on Cybercrime. The Convention was signed by 30 countries in
November 2003 (including four non-European states that participated in the
negotiations and drafting of the convention – the USA, Canada, Japan and
South Africa), and came into force in 2004. By 2012, 47 countries had
signed the Convention, and 36 of those had also ratified it so as to
incorporate the provisions into domestic law (CoE, 2012). The Convention
aims to harmonize national laws on cybercrime, to establish criminal laws
and sanctions, and to establish a regime for international cooperation
(Coleman, 2003: 132). While implementation is currently in
the relatively early stages of what will likely be a long drawn-out process,
the Convention establishes an international framework for the legal and
criminal justice response to hacking and other forms of cybercriminal
activity.
Additionally, the often international character of hacking has led to legal
issues regarding extradition, or the forced transportation of a person from
one country to another to stand trial. In one famous case, Gary McKinnon,
‘a 42-year old Briton’ was ‘accused of breaking into Pentagon computers
and raiding US army, navy and NASA networks in 2001 and 2002’
(Jewkes, 2015: 266). He claims to have accessed such systems looking for
‘evidence of official knowledge of the UFO phenomenon’ (McKinnon,
2016). The US wanted him extradited so he could stand trial, facing
terrorism charges which could have resulted in up to 70 years in prison
(Jewkes, 2015: 266). After years of legal struggle, UK Prime Minister
Theresa May dropped the extradition order against McKinnon in 2012 ‘on
human rights grounds, because medical reports warned he was at risk of
suicide if sent to face trial in the US’, a prospect considered more likely as
he was diagnosed with Asperger’s syndrome and depression in 2008
(Kennedy, 2012; see also: McKinnon, 2016). In a similar case, hacker Lauri
Love was arrested in 2013 by the British National Crime Agency for crimes
under the Computer Misuse Act for breaking into US government computer
systems belonging to the Federal Reserve, NASA and the US Army
(Parkin, 2017). These activities were conducted as part of an ‘operation’ or
‘op’ by the hacktivist group Anonymous. In 2014 the case against Love was
dropped by the British government but he was subsequently rearrested, this
time as part of a joint effort with US law enforcement. As in the
McKinnon case, Love has been diagnosed with depression and Asperger’s
syndrome, and supporters argued that he too would be a suicide risk if
extradited, a fear borne out by his own words: ‘I will kill myself
before I’m put on a plane to America’ (Parkin, 2017; see also Bowcott,
2017). He won his extradition appeal in 2018 (BBC News, 2018).
3.7 Summary
In the public mind, cybercrime is perhaps associated most clearly with
computer hacking. This chapter explored how images of hackers and
hacking have been constructed in popular media, and how this has shaped
legal and criminal justice responses. It was suggested that the concern over
hackers and hacking can be seen as part of wider social tensions and
anxieties about technological change and its threat to familiar ways of life.
These representations are contested by hackers themselves, who often give
a range of justifications for their activities, and claim hacking is far from
the ‘mindless vandalism’ with which it is often associated. Debates over
hacking therefore exemplify what criminologists and sociologists dub the
‘labelling process’ and the ‘social construction of deviance’. A wide range
of hacking activities is apparent on the internet today, ranging from
computer intrusion and the distribution of viruses, to website defacement
and denial of service. Such activities appear to be on the increase,
facilitated by the availability of numerous tools that enable even the
unskilled novice to engage in hacking. Participation in such activities
appears to follow a distinctive social profile – the overwhelming majority of
hackers are young and male. There are a number of explanations that can be
suggested for this pattern, relating to masculine ideologies, juvenile
delinquency and subcultural association.
Study Questions
How do representations of hackers in movies, books and press reports construct hacking as a
deviant and dangerous activity?
How do these representations connect with wider social anxieties about technology?
What kinds of harm are caused by hacking activities?
Why do so many young males find hacking so appealing?
What kinds of measures are being put in place to counter the growth of hacking?
Further Reading
Students of sociology and criminology interested in hacking should start with Paul Taylor’s
foundational Hackers: Crime in the Digital Sublime (1999). Other works in the field to
consider include Tim Jordan’s Hacking: Digital Media and Technological Determinism (2008)
and Kevin Steinmetz’s Hacked: A Radical Approach to Hacker Culture and Crime (2016).
Other more popular, but nevertheless interesting reads include Bruce Sterling’s The Hacker
Crackdown: Law and Disorder on the Electronic Frontier (1994), Kevin Mitnick and William
Simon’s (2011) Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker and
Dan Verton’s The Hacker Diaries: Confessions of Teenage Hackers (2002). Steven Levy’s
Hackers: Heroes of the Computer Revolution (1984) is also an absolutely vital work in the
area on early hacker culture and the hacker ethic. For additional insights into the hacker ethic,
check out Pekka Himanen’s The Hacker Ethic: A Radical Approach to the Philosophy of
Business (2001) and Bryan Warnick’s (2004) ‘Technological metaphors and moral education.’
While the latter may seem unusual, it is likely one of the most insightful articles on hacker
senses and sensibilities. User-friendly introductions to the more technical aspects of hacking
are provided by Steven Furnell’s books Cybercrime: Vandalizing the Information Society
(2002) and Computer Insecurity: Risking the System (2005). A rare and powerful defence of
hackers from within the computer security industry is Kovacich’s article ‘Hackers: Freedom
fighters of the 21st century’ (1999). For more on youth and hacking, see Bowker’s ‘Juveniles
and computers: Should we be concerned?’ (1999), as well as Verton (above). Up-to-date and
detailed coverage of legal responses to hacking, alongside other computer crimes, can be
found in Ian J. Lloyd’s Information Technology Law, 6th edition (2011). Additional insights on
the social construction of hackers can be gleaned from David Wall’s chapter entitled ‘“The
devil drives a Lada”: The social construction of hackers as cybercriminals’ (2012), Reid
Skibell’s ‘The myth of the computer hacker’ (2002) and Deborah Halbert’s ‘Discourses of
danger and the computer hacker’ (1997). For anyone interested in how images of hackers and
hacking are constructed in popular culture, Hollywood movies such as War Games, Hackers,
AntiTrust, The Matrix, Swordfish, The Girl with the Dragon Tattoo, The Core and Takedown are
essential viewing, and William Gibson’s pioneering novel Neuromancer (1984) is an
indispensable read. The television show Mr Robot is also necessary viewing.
4 Political Hacking: From Hacktivism to Cyberterrorism
Chapter Contents
4.1 Introduction
4.2 Hacktivism and the politics of resistance in a globalized world
4.3 The spectre of cyberterrorism
4.4 Why cyberterror? Terrorist advantages of utilizing internet attacks
4.5 Rhetorics and myths of cyberterrorism
4.6 Alternative conjunctions between terrorism and the internet
4.7 Cyberwarfare
4.8 Summary
Study questions
Further reading
Overview
Chapter 4 examines the following issues:
the development of hacking as a form of political resistance and protest;
the emerging debate about terrorist uses of hacking;
the ways in which hacking might enhance terrorists’ abilities to achieve their goals;
the ways in which cyberterrorism is being constructed as part of the ‘war on terror’ in the
wake of the 9/11 attacks;
other ways, apart from hacking, in which the internet may prove useful to terrorist
organizations.
Key Terms
Anonymous
Anti-globalization movement
Cyberterrorism
Cyberwarfare
Hacktivism
Terrorism
War on terror
4.1 Introduction
The previous chapter introduced a range of cybercriminal activities
associated with computer intrusion and manipulation, which are generally
identified under the label of hacking. In this chapter we will critically
explore claims that hacking has evolved in a distinctively political
direction, taking the form of activities variously described under labels such
as ‘hacktivism’ and ‘cyberterrorism’. It was noted in the previous chapter
that the early development of hacking was seemingly characterized by a
distinctive ethos or ethic, and that this outlook might be seen to have
contained a political element. In particular, the ethic stressed a ‘democratic’
vision of free and unfettered access to new technology, and a corresponding
distrust of corporate and governmental domination or control of computing
resources. As Paul Taylor (2004a) notes, however, this political potential
remained largely unrealized, for two main reasons. First, the hacker ethic
tended to focus upon access to technology itself as a primary goal, rather
than seeing it as a tool or stepping-stone for realizing other social and
political aims, and as such, the political dimension of hacking culture
remained limited. Second, the counter-cultural aspirations of pioneering
hackers were largely incorporated by corporate capitalism, turning hackers
increasingly into consumers of computing resources on the one hand, and
appropriating their skills and drive for commercial purposes on the other
(2004a: 488–489). Since the mid-1990s, however, there has been a surge of
political activity that mobilizes the tools and techniques developed by
hackers and crackers. First of all, this period has seen the development of
what is commonly known as hacktivism, the mobilization of hacking in the
service of political activism and protest. The reader should note that the use
of computers and computer networks for political protest predates the term
hacktivism. One account, given by Oxblood Ruffin (2004: 1), a leading
figure of the hacker/hacktivist group Cult of the Dead Cow (cDc), claims
that the term was coined in 1996 by cDc member Omega. Such activism has come
to the forefront of public awareness over the past decade through both the
use of hacking as part of the political mobilizations around the so-called
Arab Spring, and the activities of high-profile hacktivist groups such as
Anonymous. Second, since the 9/11 attacks in the USA, there has emerged
an increasing political and public focus on the threat posed by
cyberterrorism, the exploitation of electronic vulnerabilities by terrorist
groups in pursuit of their political aims. It is this range of explicitly
politicized uses of hacking that will provide the focus for discussion in the
present chapter. It should be noted from the outset, however, that the
boundaries between these forms of political activity remain fluid and deeply
contested, so that hacktivists often find themselves positioned as
‘information terrorists’ (Taylor, 2004a: 487). Equally, the nature and extent
of any cyberterrorist threat remain obscured behind an often overly
dramatic rhetoric that may hinder rather than help us in developing a sound
assessment of the nature, scope and scale of the problem. Indeed, we shall
see that the use of labels such as ‘terrorist’ to characterize certain actors is
best viewed as itself a profoundly political activity, bound up with wider
social tensions and conflicts.
4.2 Hacktivism and the Politics of
Resistance in a Globalized World
Considerable attention has been devoted in the past few decades to the
internet’s potential for transforming political life. In Western liberal-democratic countries, the expansion of the online environment has been
grasped as a means for revitalizing a political culture increasingly
characterized by public cynicism and disengagement. Governments have
seized the internet as a tool for political communication: the state can
mobilize it to inform citizens, while citizens can be drawn into a
consultative process whereby they voice their opinions, needs and concerns
through mechanisms of online consultation (Castells, 2002: 137–164).
Countries such as Estonia have gone further, pioneering exercises in ‘e-democracy’ wherein citizens can cast their electoral votes online (Vassil et
al., 2016). More pragmatically, political parties and other political
organizations (such as pressure groups) have used the internet for purposes
of campaigning and recruitment. The internet has also been seen as a means
for stimulating citizen participation and involvement in civic life at the local
level (Carter, 1997: 136–137), and for revitalizing community cohesion
seemingly undermined by industrialization, urbanization and processes
of individualization (Evans, 2004). The area in which perhaps the
greatest impact has been seen, however, is the mobilization of the internet
by social and political resistance movements of various kinds, in both the
West and elsewhere, since the mid-1990s. This activism has moved in step
with a growing awareness of the impacts of globalization (Taylor, 2004a:
486), the rise in Western military interventionism (e.g. in Kosovo, the two
Gulf Wars and Afghanistan) and ongoing pro-democracy struggles around
the world.
The upsurge in internet activism has been understood in a variety of
political and ideological terms, depending upon the viewpoint of the
commentators. For the advocates of Western political values, it is welcomed
as a resource for ‘protecting human rights and spreading democratic values’
(Milone, 2003: 75). The intersections between ICTs and democratic
activism have been widely discussed of late in relation to the so-called Arab
Spring. This term is now commonly used to denote a number of broad-based social movements for democratic reform across North Africa and the
Middle East that gained rapid momentum from early 2011. These
movements (sometimes co-mingled with armed insurrection) led to the
collapse of autocratic regimes in Tunisia, Egypt, Yemen and Libya, and
have also manifested in popular mobilizations in Bahrain, Syria, Kuwait,
Oman, Lebanon, Morocco and Jordan. The uprisings in Tunisia and Egypt
featured a significant use of the internet (especially social media platforms
such as Facebook and Twitter) in mobilizing support, organizing
demonstrations and disseminating information via alternatives to official
state-censored media (Chokoshvili, 2011; Hassan, 2012; Stepanova, 2011).
Some have alternatively depicted the internet not as a means of promoting
Western liberal-democratic politics, but as a tool of resistance to Western
capitalist domination and US-led ‘neo-imperialism’. Alongside this, it has
been appropriated as a means to counter what is seen as a tendency toward
the exercise of domination and the violation of citizens’ freedoms by
ostensibly democratic states.
Whatever the stance taken on such activism, some key features can
nevertheless be identified. Much online political activism has taken the
form of non-violent civil disobedience and protest, modelled on the
traditions of past social movements (such as the labour movement, the civil
rights movement, women’s liberation, gay liberation and radical
environmentalism). A substantial proportion of such engagements have
mobilized the tools and techniques developed by hackers, appropriating
them for explicitly political purposes. The most prominent recent instance
of such activism can be seen in the emergence of the Anonymous hacker
collective, which has staged a number of high-profile cyber-attacks
motivated by a range of political causes such as free speech and anti-censorship, resistance to state surveillance, and support of other activist
groups/movements such as WikiLeaks and Occupy (Coleman, 2014;
Hampson, 2012; Walker, 2012). Anonymous has billed itself as ‘promoting
movement by which individuals across the globe can promote access to
information, free speech, and transparency’ (http://anonanalytics.com/,
circa 9 June 2017 via Archive.org). Such activities have attracted the label
of hacktivism – a combination of hacking and activism. We can see that
such interventions take a number of distinct forms:
Virtual sit-ins and blockades: Virtual sit-ins can be seen as the cyber
equivalents of the traditional protest method by which a particular site,
associated with opposing or oppressive political interests, is physically
occupied by activists. History is replete with such instances, including:
suffragettes chaining themselves to the railings outside the British
Parliament in support of the women’s vote; the famous ‘freedom rides’
of the US civil rights movement, where Black protesters would occupy
the ‘whites only’ seating at the front of buses; and the student
occupations of university campuses during the anti-war movement of
the 1960s. The virtual counterparts of these protests take the form of a
coordinated and simultaneous request to access a particular website.
The deluge of such requests from thousands of activists overloads web
servers, making the site unavailable, a tactic described earlier as a
DoS or DDoS attack. One of the most memorable early uses of this
technique occurred in 1998, when the so-called Electronic Disturbance
Theatre (EDT) organized a ‘virtual sit-in’ of Mexican government
websites in support of the Zapatista land-rights movement, whose
participants were being forcibly and violently suppressed by the
Mexican police and military (Meikle, 2002: 141–146). The sit-in
garnered worldwide publicity, raising awareness of the Zapatista cause,
and adding pressure upon the Mexican authorities to end the
suppression. These DDoS attacks were notable for using the FloodNet
hacking software (discussed in Section 3.4) in order to overload the
web servers (Taylor, 2004b: 118). Similar attacks had been staged earlier, in 1995, by
the Strano Network, an anti-nuclear group that targeted French
government websites (Taylor, 2004a: 488). In 2010, financial services
providers including PayPal, Visa and Mastercard suspended payments
made in support of the controversial whistleblowing organization
WikiLeaks; within days, the Anonymous hacktivist group launched
retaliatory DDoS attacks on those companies, crashing websites and
rendering them temporarily inoperative (Hampson, 2012: 512–513). In
May 2012, the website of ISP Virgin Media was shut down following a
denial of service attack staged by Anonymous, in retaliation for the
company blocking access to the file-sharing service Pirate Bay
(Fiveash, 2012).
Email bombs: The email bomb is a tool used to overload email systems
by sending mass mailings, which has the effect of overwhelming the
system, thereby blocking legitimate traffic. There are numerous
instances in which this simple technique has been used successfully by
political activists to disrupt the communications capacity of target
organizations and governments. In 1998, a group calling themselves
the ‘Internet Black Tigers’ (associating themselves with the Tamil
Tigers, the ethnic guerrilla-cum-liberation movement in Sri Lanka)
swamped Sri Lankan embassies with 800 emails every day over a
period of weeks. In the same year, Spanish hacktivists tied up the
email systems of the Institute for Global Communications (IGC), in
protest over the IGC hosting websites for the Euskal Herria Journal, a
publication supporting Basque independence from Spain. Similarly,
during the NATO military intervention in Kosovo in 1999, activists
objecting to Western involvement in the war email-bombed NATO
websites (Denning, 2000a: 5). More recently, after Bay Area Rapid
Transit (BART) officials disabled Wi-Fi and cellular phone service in
some San Francisco subway stations in an attempt to stymie protests
against the BART police, Anonymous called on supporters to, among other
things, overwhelm BART office email inboxes in a kind of crowd-sourced
email bomb (Couts, 2011).
Website defacements: Hacktivists have also had recourse to defacing
and altering websites as a form of protest. This form of cyber-attack
(already discussed in Section 3.3) is not always politically motivated
by any means; early estimates claimed that some 70 per cent of all
defacements were ‘pranks’, in which the choice of target was
indiscriminate (Woo et al., 2004). The remaining 30 per cent, however,
can be seen as attacks targeting specific sites in order to make a
political, social or moral point. In recent times, motivations behind
defacements may have become more political considering the rising
popularity of hacktivism. Such web hacks appear to be often triggered
by offline conflicts, such as war or a rise in political tensions between
nations. So, for example, we see instances of tit-for-tat defacements as
part of the Israel–Palestine conflict, as well as between hacktivists in
Japan and Korea, China and America, and India and Pakistan (2004:
64). Domestic political differences can also trigger defacement attacks:
for example, in 2011 the website of the far-right, anti-Islamic group the
English Defence League (EDL) was attacked by a member of the
TeaMp0isoN hacker group, with the new content accusing the EDL of
being ‘terrorists’ (Leyden, 2011). Other defacements are undertaken in
protest against governments’ handling of particular issues, such as
human rights. Figure 4.1 illustrates a typical hack of this kind, in
which pro-Palestinian hacktivists defaced a website in protest against
Israel’s record of ongoing human rights abuses against Palestinians.
Anonymous has also prolifically engaged in website defacements in
the years since its inception. For instance, in 2015 Ghost Sec, a group
allegedly associated with Anonymous, defaced a website linked to the
Islamic State (ISIS), replacing its content with an advertisement for an
erectile dysfunction medication and a message telling readers to ‘calm
down’ (Griffin, 2015).
While not website defacement in the traditional sense, Anonymous has
repeatedly compromised social network accounts of targets and posted
content for the purposes of protest. In 2016, for example, the group
compromised pro-ISIS Twitter accounts and filled them with gay pride and
pornographic content in response to the mass shooting at a gay nightclub
in Orlando, Florida, carried out by an American sympathetic to ISIS,
which killed 49 people and wounded 58 (CBS San Francisco, 2016; Ellis
et al., 2016).
Viruses and worms: Viruses, worms and other forms of malicious
software (discussed in Section 2.3) are of limited use to hacktivists,
since they tend to be indiscriminate by nature, spreading across the
Web in an uncontrollable manner. For hacktivists aiming to target
specific organizations, agencies and states, the inability to restrict the
impact of such software means that it is not a widely favoured tool of
choice. However, there have been a number of notable instances in
which hacktivists have used viruses and worms. The earliest such
attack was staged by anti-nuclear activists against NASA (the US
National Aeronautics and Space Administration) in 1989; NASA
computers were targeted with the so-called ‘WANK’ worm (standing
for Worms Against Nuclear Killers). During the first Gulf War, Israeli
hackers launched virus attacks at Iraqi government systems in an effort
to disrupt their communications capacity during the US-led invasion.
More recently, the Anonymous collective appeared to be responsible for
the ‘Fawkes Virus’ (actually a worm) released in November 2011, which
attached itself to users’ Facebook pages, took control of their
accounts, and sent out malicious links and content (Neal, 2011).
Doxxing and leaking: ‘Doxxing’ or ‘dropping docs’ involves the
collection of private information on an individual, often with the goal
of shaming, silencing, or coercing the person. Though the tactic can be
used for the purposes of harassment and abuse (Chapter 9), doxxing
has also been used for political objectives. In 2015, for instance,
Anonymous released a list of names of people allegedly associated
with the Ku Klux Klan, a White Supremacist organization, along with
other details like social media links, email addresses, and occupational
information in what they termed #OpKKK and #OpHoodsOff
(Franceschi-Bicchierai, 2015).1 Related to doxxing, hacktivists have
also used their skills to gain access to classified, secret, or otherwise
sensitive government and corporate data and ‘leaked’ that information
to the public. For example, the use of this strategy by hacktivists and
other tech activists gained public attention in 2010 through WikiLeaks,
a group founded by hacker and activist Julian Assange, and the
publication of a video showing a US Apache helicopter gunning down
Iraqi civilians and two Reuters journalists (Leigh and Harding, 2011).
Entitled ‘Collateral Murder’, the video had been leaked to WikiLeaks
by US Army private and intelligence analyst Chelsea Manning along
with troves of other sensitive documents. Other examples exist of
prominent hacktivist-associated leaks, perhaps most notably those
linked to Anonymous. In 2011, for example, hackers associated with
Anonymous compromised the systems of the security firm HBGary
and ‘downloaded seventy thousand company emails, along with other
files that included a PowerPoint presentation entitled “The WikiLeaks
Threat”’ (Coleman, 2014: 207). The presentation proposed a number
of ways to disrupt and discredit WikiLeaks and its associates (ibid.:
207–236). Anonymous leaked this information with the intent to reveal
such clandestine tactics.
Development of privacy and resistance tools: It should come as no
surprise at this point that hacktivists make use not only of hacker
tactics but also of hacker tools. Some tools, in fact, were
developed by hacktivists with the intent of facilitating political
resistance and circumventing internet censorship. For instance, the
Cult of the Dead Cow, as part of their Hacktivismo efforts, developed
the ‘Peekabooty’ system, which was ‘designed to enable internet users
in countries where the internet is censored to exchange files and to
gain access to officially banned web pages’ (Chase et al., 2006: 69).
Back Orifice and its successor, Back Orifice 2000 (see also Section
3.4), were developed by cDc with explicitly political ambitions. As
Jordan and Taylor (2004: 111) explain: ‘There is no doubt that Cult of
the Dead Cow intended BO to be a part of the hacktivist struggle.
When the 2000 version was released at hacker conference DefCon,
they explicitly called it a moment in hacktivism, as well as prefacing
their demonstration of BO2K’s with some calls to hackers to develop
political relevance in their hacks.’ Politically motivated technologists
and hackers also have made significant contributions to the
development and dissemination of publicly available cryptographic
protocols like PGP (‘Pretty Good Privacy’) (Levy, 2001).
1 The original #OpKKK and #OpHoodsOff release file can be found at
https://pastebin.com/wbvP95wg.
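The public-key principle behind cryptographic tools such as PGP, mentioned in the last bullet point above, can be illustrated with a toy RSA computation. This is a standard textbook example with deliberately tiny, insecure parameters, offered purely for illustration; real PGP keys are thousands of bits long, and PGP itself layers symmetric encryption on top of the asymmetric step.

```python
# Toy RSA illustration of public-key ('asymmetric') cryptography.
# Anyone may encrypt with the public key (n, e); only the holder of the
# private exponent d can decrypt. Parameters are deliberately tiny and
# insecure: illustration only.
p, q = 61, 53             # two (small) primes
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)   # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)   # m = c^d mod n

message = 65              # a message encoded as an integer smaller than n
cipher = encrypt(message)
print(cipher, decrypt(cipher))  # 2790 65
```

The asymmetry is the politically significant property: a dissident can publish the public key openly, yet only they can read messages encrypted to it, which is what tools like PGP and Peekabooty build upon.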
The use of such hacking techniques by activists offers a number of
advantages over more traditional forms of terrestrial protest. They enable
participants who may be dispersed across different cities and countries
to converge in the virtual environment in potentially large numbers and
make their presence felt. They also offer protesters a considerable degree of
protection from suppression by police and security forces, unlike terrestrial
protesters whose access to physical sites and locations may be blocked, or
who may be forcibly and violently removed through use of crowd dispersal
means such as water cannon and CS gas. The UK’s Guardian newspaper
went so far as to include Anonymous among its list of the top 20 ‘fighters
for internet freedom’, alongside the likes of Electronic Frontier Foundation
(EFF) founder John Perry Barlow, WikiLeaks editor-in-chief Julian
Assange, Linux creator Linus Torvalds, and ‘the inventor of the World
Wide Web’, Sir Tim Berners-Lee (Ball, 2012). It must be noted, however,
that by no means all observers view the rise of hacktivism as a benign
phenomenon or as a legitimate development of free speech and political
participation. Many, especially those in law enforcement and state circles,
make little distinction between political and non-political uses of hacking,
and view all such activities as instances of vandalism and criminal damage.
Like the hacks undertaken by their non-politicized counterparts, hacktivist
incursions on the Web fall foul of increasingly stringent computer crime
laws, and as such are punishable by substantial financial penalties and/or
lengthy custodial sentences. Between January 2011 and July 2012, 20
members of the LulzSec hacktivist collective were arrested in the USA and
the UK on a range of computer crime charges (Mills, 2012); 25 alleged
members of Anonymous were arrested across Argentina, Chile, Colombia
and Spain in February 2012, with arrests also occurring in the USA and
across Europe (Allen, 2011; Fitzpatrick, 2012). Indeed, as we shall see
below, the climate of fear generated around terrorism in recent years has led
hacktivism to be labelled by many as a form of ‘information terrorism’.
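The overload mechanics common to the virtual sit-ins and email bombs described earlier can be sketched with a deliberately simple capacity model. This is a toy Python illustration, not a description of any real tool; the capacity figure and request counts are invented for the example.

```python
# Toy capacity model of a 'virtual sit-in' or email bomb.
# A server (or inbox) can answer at most CAPACITY requests per time
# step; once coordinated requests exceed that capacity, legitimate
# users find the service effectively unavailable.
CAPACITY = 100  # hypothetical requests served per step (invented figure)

def availability(requests_per_step: int, capacity: int = CAPACITY) -> float:
    """Fraction of incoming requests actually served in one step."""
    if requests_per_step <= 0:
        return 1.0  # no load at all: the service is fully available
    return min(capacity / requests_per_step, 1.0)

# Normal traffic: every request gets through.
print(availability(80))      # 1.0
# A coordinated action by 10,000 participants: ~1% of requests served.
print(availability(10_000))  # 0.01
```

The point of the sketch is that no individual participant needs special skill or tooling: availability collapses purely as a function of aggregate, simultaneous demand, which is why such actions are often presented as the online analogue of a physical sit-in.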
4.3 The Spectre of Cyberterrorism
Cyberterrorism, as the term suggests, basically connotes a convergence
between terrorism and cyberspace. It is claimed that the growth of and
increasing social, political and economic dependence upon the internet
afford terrorist organizations a new arena in which to pursue their goals by
staging attacks or threats against computer networks and information
systems (Milone, 2003: 75). The definition of what counts as terrorism is,
of course, deeply contested – the term is slippery in the extreme (Embar-Seddon, 2002: 1034; Grabosky, 2003: 1), and the label can be seen in part
as a rhetorical device by which particular political projects are
delegitimated by those who oppose them (Furedi, 2005). Conventional
understandings of terrorism nevertheless take it to denote a distinct form of
criminal action, characterized by the marriage of violence and politics
(Crozier, 1974). Schmid (1993: 8) understands terrorism as the use of
‘repeated violent action’ for achieving particular ends; such violence, or the
threat of its use, is mobilized by the terrorist to incite anxiety and thereby
manipulate, intimidate or coerce its targets (a population, a government)
into acceding to the terrorist’s political demands. A further salient feature of
the conventional understanding of terrorism is its association with non-state
and sub-state actors (which may or may not be state sponsored or
supported). On this basis, cyberterrorism has been defined as: ‘the
execution of a surprise attack by a subnational foreign terrorist group or
individuals with a domestic political agenda using computer technology and
the internet to cripple or disable a nation’s electronic and physical
infrastructures’ (Verton, 2003: xx).
Figure 4.1 Example of a defaced website by hacktivists
(Xatrix.org, 2014a)
In a similar vein, Denning (2000a: 1) takes the term to denote:
unlawful attacks and threats of attack against computers, networks, and
the information stored therein when done to intimidate or coerce a
government or its people in furtherance of political or social
objectives. Further, to qualify as cyberterrorism, an attack should result
in violence against persons or property, or at least cause enough harm
to generate fear. Attacks that lead to death or bodily injury, explosions,
or severe economic loss would be examples. Serious attacks against
critical infrastructures could be acts of cyberterrorism, depending on
their impact. Attacks that disrupt nonessential services or that are
mainly a costly nuisance would not.
This notion of cyberterrorism has, in recent years, been enshrined in law.
For example, the UK Terrorism Act 2000 extends the definition of
terrorism to cover actions designed to ‘seriously interfere with or
seriously disrupt an electronic system’. In the
wake of the 9/11 attacks in New York and Washington, the USA has taken
the lead in making legal provision to protect the nation’s computer systems
against terrorist attack. The USA PATRIOT Act (an acronym for Uniting
and Strengthening America by Providing Appropriate Tools Required to
Intercept and Obstruct Terrorism) was passed into law on 26 October 2001,
barely six weeks after the attacks on the Twin Towers and the Pentagon.
The Act considerably strengthens penalties under the Computer Fraud and
Abuse Act of 1986 (discussed in Section 3.6), including provision for the
life imprisonment of convicted ‘cyberterrorists’ (Hamm, 2004: 288–289;
Milone, 2003: 79); the Act also eases restrictions on electronic surveillance
by the state and law enforcement agencies (Milone, 2003: 79). So we can
see that, as part of the growing political and ideological momentum behind
the so-called War on Terror, the notion of terrorist attacks using the internet
has come increasingly to the forefront of the political agenda.
The perceived threat from cyberterrorism needs to be understood not only
in the political context of the post-9/11 world, but also in terms of the
concept of ‘critical infrastructure’ which has been developed in recent
decades by those working in the quasi-military field of ‘resilience studies’.
Critical infrastructure (CI) is defined as: ‘an infrastructure or asset the
incapacitation or destruction of which would have a debilitating impact on
the national security or economic or social welfare of a nation’ (Dunn and
Wigert, 2004: 18). CI includes a range of material systems such as those
used to provide communications (mail, telephones), energy (oil, electricity,
gas), water, food (production, import, processing, distribution and sale),
emergency services (police, ambulance, fire, rescue) and health care
provision (Milone, 2003: 76). CI also includes material assets whose
symbolic content or inherent meaning is deemed important to national
cohesion and welfare, for example, the Statue of Liberty or the Twin
Towers. Attacks on symbolically significant targets can be seen to yield
valuable publicity, which terrorist organizations need if they are to succeed
in mobilizing fear to secure their aims (Gerrits, 1992: 450–458; Paletz and
Vinson, 1992: 2). As the threat of terrorist attack has been seemingly
amplified over the past few decades, the question of protecting these key
social systems has commanded greater attention. The year 1996 saw the
establishment of the President’s Commission on Critical Infrastructure
Protection (PCCIP) under the Clinton administration, and in the wake of the
9/11 attacks the Office of Homeland Security was established in 2001,
which includes the National Infrastructure Advisory Council (NIAC) and
the President’s Critical Infrastructure Protection Board (PCIPB) (Dunn and
Wigert, 2004: 201–203). The UK established the National Infrastructure
Security Co-ordination Centre (NISCC) in 1999 (2004: 189–190). These
and a plethora of other state agencies are charged with formulating
strategies to protect critical infrastructure against terrorist attack, with
coordinating activities between state agencies and with non-state actors
(e.g. energy suppliers, water authorities, etc.), with establishing systems for
early warning against threats of terrorist attack against the CI and so on.
The sums being invested in CI protection are vast by any standard: for
example, in 2017, one of the key agencies responsible for securing
American critical infrastructure, the Department of Homeland Security, had
a gross discretionary budget of over $53 billion (DHS, 2018b: 9).
To a significant degree, the debate about terrorist threats to critical
infrastructure has been concerned with what is termed the critical
information infrastructure (CII). CII is taken to comprise ‘the information
and telecommunications sector, and includes components such as
telecommunications, computers/software, the internet, satellites, fibre-optics, etc. The term is also used for the totality of interconnected
computers and networks and their critical information flows’ (Dunn and
Wigert, 2004: 20). This ‘totality of interconnected computers and networks’
is seen as an essential element of critical infrastructures for two main
reasons. First, the growth of the internet and allied systems plays an
increasingly central role in the economic, political and social lives of
Western industrialized nations. For example, electronic communications
networks are used increasingly for commercial transactions (e-commerce)
and banking and finance (e-banking, financial transfers, stock markets);
consequently, a serious attack on these networks would ‘have a debilitating
impact on the ... economic ... welfare of a nation’ (Dunn and Wigert, 2004:
18). Second, and perhaps even more seriously, electronic information
networks are increasingly integrated with conventional critical
infrastructures: for example, they are now used to control and coordinate
essential services such as power, water, air traffic, emergency services,
national defence and so on. This dependence upon networked technologies,
while enabling greater speed and efficiency, simultaneously renders these
essential services vulnerable; hijacking these information systems via
internet connections could enable key infrastructures to be manipulated and
controlled; disruption of the information infrastructure (e.g. via viruses,
worms, logic bombs or DDoS attacks) would seriously undermine critical
infrastructural operations. Consequently, common scenarios in cyberterror
threat assessments include:
- loss of power via attacks on systems that control power grids (Denning, 2000a: 1)
- disruption of financial transactions, bringing economic systems to a halt (Denning, 2000a: 1; Gordon and Ford, 2003: 7)
- crippling transport systems such as air and rail by corrupting or crashing the computers used to control them (Verton, 2003: 8)
- theft of top-secret information relating to defence and national security by hacking government computers.
Such threat assessments have stimulated extensive investment in computer
programs to secure information infrastructures against cyber-attack,
especially in the wake of the 9/11 attacks. In 1999, the Clinton
administration committed $1.46 billion to combat the threat of
cyberterrorism (Hamblen, 1999; Miyawaki, 1999). In 2000, they issued the
first comprehensive plan for protecting the US critical information
infrastructure, Defending America’s Cyber-space: National Plan for
Information Systems Protection (Office of the United States President,
2000). In the wake of 9/11, the Bush administration created the Department
of Homeland Security and, in 2003, requested around $298 million to
bolster the protection of critical information infrastructure (Bush, 2003: 35).
This level of commitment has been sustained by the current administration
which in May of 2017 issued the executive order Strengthening the
Cybersecurity of Federal Networks and Critical Infrastructure, a document
that calls for the expansion of CII protections (Trump, 2017). The
administration also requested $814.4 million for the fiscal year 2019 to be
allocated to securing critical information infrastructure (DHS, 2018a: 6).
Similarly, other nations have substantially increased the resources allocated
to protecting information infrastructures after the 9/11 attacks: for example,
Australia’s CIIP budget was set at $2 million in May 2001, but tripled the
following year to $6 million, and was allocated a total of $24.9 million over
four years (Dunn and Wigert, 2004: 43). Australia has continued to invest in
critical infrastructure protection (including critical information
infrastructure) as evinced through the establishment of the Critical
Infrastructure Centre in early 2017. In another example, as part of their
Cyber Security Strategy, the UK government has committed £1.9 billion to
combating cybercrime, including defending critical infrastructure, between
2016 and 2021 (Hammond, 2016: 6).
4.4 Why Cyberterror? Terrorist
Advantages of Utilizing Internet Attacks
Assessments by official and academic commentators that cyberterrorism
presents a substantial and growing threat are based upon a number of
factors. The transition from terrestrial to virtual attacks is deemed to offer a
number of advantages to terrorist groups:
The internet, by its nature, enables ‘action at a distance’: Terrorists no
longer need to gain physical access to a particular location, as they can
access electronic systems from anywhere in the world (Gordon and
Ford, 2003: 6). This is held to be of particular importance in the wake
of 9/11, as many countries have instituted increasingly rigorous border
and travel controls so as to prevent terrorists from entering national
territories and/or bringing weapons material (such as explosives)
across borders. For example, the US considers its Fourth Amendment
protections to be severely weakened at their borders and so searches
and seizures are far more likely, including those of foreign nationals.
In addition, the USA now routinely records fingerprint IDs and other
personal details of foreign nationals entering the USA (Adey, 2004:
508). The use of the internet for staging terrorist attacks bypasses such
security measures, along with the risk of apprehension at airports,
ports and border checkpoints. Moreover, by staging cyber-attacks from
within the borders of so-called ‘rogue states’ (which may well endorse
or directly support the terrorist actions) or even countries that do not
have relevant extradition treaties, it is possible to exploit a ‘safe haven’
lying outside the reach of security and criminal justice agencies in the
target nations.
The internet turns actors with relatively small numbers and limited
financial and material resources into what have been called
‘empowered small agents’: The empowerment reaped by actors is a
result of the fact that the internet can function as a so-called ‘force
multiplier’. A force multiplier is something that can ‘increase the
striking potential of a unit without increasing its personnel’ (White,
1991: 18). One instance of such force multiplication is the impact of
computer viruses: small pieces of malicious code can spread rapidly
across the global network of the Web, reproducing exponentially and
corrupting systems as they go. So, for example, the ‘MyDoom’ virus
(discussed in Section 3.3) managed to cause an estimated $20 billion
of damage in just 15 days. Moreover, the potential effects of cyber-attacks are further multiplied due to the high levels of interdependence
between ostensibly separate technological systems; as Rasch (2003:
vii) puts it, ‘Without electricity, there is no telephone service; without
either there is no usable internet’. Therefore disabling one element of
this infrastructure can yield a ‘cascade effect’ that spreads through
other systems with which it is interconnected and co-dependent. The
cost–benefit ratio of using electronic disruption (small investment,
massive damage) can make it a desirable weapon of choice for those
intent on causing mayhem and spreading fear. Indeed, the global
nature of the internet means that cyber-attacks are likely to be highly
visible and impact upon a large number of users, thereby assuring a
significant impact in terms of publicizing the attack (Embar-Seddon,
2002: 1038).
Anonymity: It has already been noted that one of the greatest
challenges presented by cybercrime to law enforcement is the extent to
which the internet environment affords perpetrators a degree of
anonymity or disguise (Fleming and Stohl, 2000: 38, 46). Such
anonymity can be achieved by staging cyber-attacks through the use of
‘proxies’, where activities are routed through a range of intermediate
systems and servers, making it difficult to trace the point of origin of
the attack. It can also be achieved through identity theft, where the
perpetrator uses an identity stolen from an innocent third party (such as
an account access password) in order to stage an attack in that person's
name (identity theft is discussed further in Section 6.4.1).
Lack of regulation: One of the greatest problems in securing the
internet against potential cyber-attack is the absence of any centralized
and coordinated regulation of the virtual environment. The internet has
evolved as an open, decentred network, with ‘nodes’ distributed
worldwide across legislative and territorial boundaries. While this is
considered one of its great strengths, it also constitutes one of its
greatest weaknesses, in so far as there will inevitably be innumerable
weak points at which the security of the network can be compromised.
Moreover, while responsibility for national security falls to the state
and its agencies, the vast bulk of the world’s critical information
infrastructure is held in private hands (GAO, 2017: 1; Verton, 2003:
xxiii); this renders it well-nigh impossible for any government to effect
centralized control (Grabosky, 2003: 3).
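The proxy-chaining technique described under 'Anonymity' above can be illustrated with a minimal sketch (written in Python purely for illustration; the function name and hostnames are hypothetical, and real attacks route traffic through actual compromised servers or anonymity networks). The key point it models is that each relay, and the final target, observes only the address of the hop immediately before it:

```python
def route_through_proxies(origin, proxies, target):
    # Return the (apparent_source, receiver) pair for every hop on the path.
    # Each receiver logs only the address of the hop immediately before it,
    # so the target's logs show the last proxy rather than the true origin.
    hops = [origin] + proxies + [target]
    return [(hops[i], hops[i + 1]) for i in range(len(hops) - 1)]

path = route_through_proxies(
    "origin.example", ["proxy-a.example", "proxy-b.example"], "target.example"
)
# The target sees only the final proxy as the apparent source of the traffic;
# an investigator must unwind every intermediate relay to reach the origin.
assert path[-1][0] == "proxy-b.example"
```

This is why tracing the point of origin requires cooperation from every jurisdiction along the chain: each relay must be identified and its logs obtained in turn, and a single uncooperative hop breaks the trail.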
4.5 Rhetorics and Myths of
Cyberterrorism
The foregoing discussion paints a vivid picture of a technologically
dependent world which is ever-more vulnerable to terrorist groups who
might cause massive economic and social damage should they choose to
exploit the opportunities readily available. Perhaps even more so than any
other form of cybercrime, however, internet terrorism is bound up with, and
often obscured by, a great deal of rhetorical embellishment and myth-making. Terms like 'electronic Pearl Harbor', 'electronic Chernobyl' and
‘digital Armageddon’ have been used to describe possible technological
doomsday scenarios that may arise from a cyberterror attack (Stohl, 2006:
224). It has been suggested by more sceptical commentators that there is
little empirical evidence to warrant the level of concern currently being
generated over cyberterrorism, nor to justify the sweeping enhancement of
state powers that are being instituted in order to respond to the supposed
threat. Professor Dorothy Denning (recognized as a leading world authority
on internet security) states that ‘there are few if any computer network
attacks that meet the criteria for cyberterrorism’, and that there ‘is little
concrete evidence of terrorists preparing to use the internet as a venue for
inflicting grave harm’ (Denning, 2000b: 23–24). Donoghue (2003) goes
further and claims that ‘there has never been a recorded act of
cyberterrorism pre- or post September 11th’ (see also Buxbaum, 2002: 1).
There are numerous cases cited as examples of cyberterror (see, for
example, Embar-Seddon, 2002; S. Lawson, 2002; Verton, 2003). Upon
closer examination, however, such incidents turn out to be cases of either
hacktivism (which may or may not support the political perspectives of
known terrorist organizations [Denning, 2010: 197–201]); of opportunistic
disruption supported by no discernible political, social or other agenda; or
of information warfare perpetrated by one state against another (Libicki,
2007). The closest established case of cyberterrorism known publicly to
date concerns Ardit Ferizi, a hacker from Kosovo who compromised a retail
company’s system and obtained the names, passwords, email addresses and
other data of hundreds of thousands of people (Boyd, 2016). He then
combed through that data and handed over information associated with
‘1,351 military members and other government employees’ to the Islamic
State (Weiner, 2016). He was extradited from Malaysia and tried in the
United States. US Assistant Attorney General for National Security, John P.
Carlin, stated at the time that ‘this case represents the first time we have
seen the very real and dangerous national security cyber threat that results
from the combination of terrorism and hacking’ (Weiner, 2016). It should
be noted this incident did not involve the targeting of critical infrastructure
(Davidson, 2016). That said, establishing whether or not cyber-attacks by
terrorist organizations are in fact occurring and causing harm is becoming
more, not less, difficult in a climate of growing state secrecy. For instance,
in October 2001, the Bush administration supported a bipartisan
Congressional proposal to limit government disclosures about successful
cyber-attacks (Milone, 2003: 90, 101).
We have already identified those factors (such as action at a distance,
anonymity, etc.) that lead many government, computer security and media
commentators to perceive the internet as a desirable target for terrorists. We
can conversely see, however, that there are numerous reasons why terrorist
groups might be disinclined to turn their hands to virtual engagements.
First, many of those systems which regulate critical infrastructure (power,
air traffic control, stock markets, etc.) are largely isolated from the public
internet on private networks, so access is not nearly as straightforward as
might be supposed. While the prospect of al-Qaeda hackers misdirecting
aircraft toward mid-air collisions might make spectacular media copy and
mobilize public concern with national security issues, the likelihood of such
an action is slim, precisely because air traffic control systems operate in
isolation from the internet. That said, in the era of the Internet of Things
and the Industrial Internet, concerns have increased that critical
infrastructure may be linked to the internet in exploitable ways (Simon,
2017). Only time will tell if these worries are warranted. Second, it has
been suggested that the very virtuality and immateriality of the internet
make it an unsuitable target for those whose acts are intended to generate
terror. As Denning (2000a: 9) notes, ‘unless people are injured, there is ...
less drama and emotional appeal’, and images of the visible consequences
of bomb blasts and shootings are inevitably more visceral in their impact
than more abstract reports of damaged data and monetary losses (see also:
Rid, 2013: 17). Arguably the closest 'cyberterrorists' have come to causing
bodily harm is the previously mentioned case of Ardit Ferizi, in which ISIS
published the doxxed names on a 'kill list' meant to threaten US
government employees and military members. As yet, however, no actual
bodily harm has been shown to have resulted from the data theft and release
(Davidson, 2016). For this reason, Denning (2000a: 9) concludes that ‘for
now, the truck bomb poses a much greater threat than the logic bomb’ (see
also discussion in Green, 2001).
Given that there is at present little evidence to support claims of an
imminent cyberterrorist threat, we must ask why so much political energy,
institutional innovation and financial resources are being expended to
counter the perceived threat. One possible explanation is that current efforts
to institute counter-measures for cyberterrorist attacks are anticipatory in
character. That is to say, while there is deemed to be little likelihood of an
immediate attack of this kind against the critical infrastructure, the
possibility of such incidents manifesting in the future cannot be excluded;
therefore, a responsible government is duty-bound to take such preventative
measures as are possible. This would seem to chime with the position of the
UK government, whose initial threat assessment about cyberterrorist attacks
appeared considerably more realistic than that of their US counterparts.
David Blunkett, the former British Home Secretary, stated in 2002 that ‘The
risk is assessed to be low, but growing. It could change rapidly at any time
and our response will need to adjust to remain proportionate’ (UK
Parliament, 2002). This precautionary stance has been reiterated in the
recent UK Cyber Crime Strategy launched in 2011 (Home Office, 2011).
A more critical perspective, however, sees the current discourse on
cyberterrorism as fundamentally disproportionate; the scale of the response
cannot be explained in a ‘realist’ manner by looking to the actual, or
potential future, level of threat. Rather, the whole treatment of
cyberterrorism must be understood as a social construction. Furedi (2005)
notes that the category of ‘terrorism’ as a distinct and morally reprehensible
form of political violence has been understood by some scholars as a means
of delegitimating particular political struggles and thereby legitimating the
authority of the state and its monopoly over the use of political violence
(see also Oliverio, 1998). The same can be claimed for the current
discussion of cyberterrorism. On this view, characterizing those resisting
the hegemony or control of the state as ‘cyberterrorists’ serves to undermine
any popular support that such oppositional forces might enjoy, and to justify
further criminalization and social control of ‘suspect’ populations. It is in
this context that Vegh (2002: 1) discerns a growing pressure on 'anti-hegemonic forms of internet use' in the wake of 9/11; the language of
cyberterror enables authorities to re-label hackers and hacktivists as
‘information terrorists’, and to win popular and political support for
otherwise controversial controls over internet use, such as bans on the use
of encryption and steganography technologies and the monitoring of
electronic communications. In response to the measures instituted by the
Bush administration in the USA, Vegh (2002: 10) writes:
In the name of national security, the government now openly supports
the control and containment of these liberating technologies, even at
the price of curtailing the constitutional civil liberties it had so proudly
protected. It is now claimed, through news media reports, that
terrorists ... plan to carry out cyberattacks ... that will cripple the US
economy. All these technologies ... are now fully vilified in the war
against terror.
Going further, Wykes and Harcus (2010) argue that politicians and the
media have been complicit in creating a 'myth' of the terrorist threat post-9/11, one that draws upon Islamophobic stereotypes to present a
symbolically potent narrative of ‘good versus evil’. The threat of
cyberterrorism recuperates such stereotypical and dramatized
representations in a manner that over-plays the risk presented by such
activities and furnishes state power with a justification for accelerating
online surveillance and a corresponding erosion of civil liberties and
privacy.
It must be noted, however, that government is not a homogeneous entity,
but an array of institutions, each of which may well have its own agenda
and its own self-interest to pursue. Hence we may understand the
development of the discourse on cyberterrorism as a form of ‘moral
entrepreneurship’ (Becker, 1963), an attempt to generate public and
political concern via the media so as to secure sectional interests. It is in this
vein that McCullagh (2001), writing in the aftermath of the 9/11 attacks,
notes how 'some administration critics think the FBI and CIA are using potential
terrorist attacks as an attempt to justify expensive new proposals such as the
National Homeland Security Agency’. If true, then it would certainly not be
the first instance in which law enforcement agencies have constructed a
moral panic in order to secure greater powers and to capture greater
resources (see, for example, Hunt, 1987). Moreover, other actors, such as
those in the computer security industry, may also have a vested interest in
promulgating the idea of an urgent and imminent cyberterrorist threat (Yar,
2008a). Expenditure on cyber security is massive: one estimate claims that
$89.13 billion was spent on the procurement of information
security products and services globally in 2017 (Gartner, 2017a), while
another puts the information security market in excess of $120
billion (Morgan, 2018). State and private sector provisions to tackle
cyberterrorism are, in effect, driving a significant expansion of commercial
opportunities for computer security businesses (on the development of
commercial, for-profit cybercrime control see Chapter 10).
4.6 Alternative Conjunctions between
Terrorism and the Internet
The foregoing discussion has suggested that the threat of cyberterrorist
attack on critical infrastructures is more a case of strategically useful fancy
than hard fact. However, this does not necessarily mean that there are no
significant convergences between terrorist activities and the internet. In
contrast to the computer-focused crimes discussed thus far, it has been
argued that the internet plays a significant and growing role in computer-assisted terrorist offences. In other words, terrorist groups make use of the
internet in support of their conventional, terrestrially based activities. Such
uses of the internet can be seen to fall into a number of distinct types:
Communication and coordination: The internet may be used as an
efficient means of intra-organizational communication and
coordination of terrorist activities, through the use of technologies
such as email and bulletin boards (Shelley, 2003: 305; see also
Conway, 2002). Here, terrorist groups (like other organizations, be
they political, commercial or criminal) use the internet to help secure
their goals (Fleming and Stohl, 2000: 38). Use of online tools offers a
number of advantages over conventional communications: they enable
ease of communication on a transnational basis; they afford greater
anonymity; and they afford security. The last of these points is
considered particularly important, given the ready availability of so-called 'strong encryption' software (such as the free PGP or 'Pretty
Good Privacy’ developed by an American programmer, Phil
Zimmermann). Encryption (derived from the term cryptography,
meaning ‘hidden writing’) is a technique which enables
communications to be encoded prior to transmission so that they are
unreadable if intercepted; only the intended recipient has a ‘key’ which
enables the message to be decoded and restored to its original legible
form. Encryption technologies are now widely used in legitimate
internet exchanges, permitting the secure transmission of sensitive data
such as financial information and systems access passwords. This
same technology is allegedly used by terrorists and other organized
criminal groups to protect their communications from identification
and interception by security and law enforcement agencies. There are a
number of documented instances of terrorist groups using encryption
technologies. For example, the Aum Shinrikyo cult (which staged the
Sarin nerve gas attacks in Tokyo in 1995) stored their records on
computer in encrypted form; Ramzi Yousef, who participated in the
1993 bombing of the World Trade Center, was later discovered to have
used a laptop with encrypted files containing plans for further
bombings (Denning and Baugh, 1997: 1–2). The trial of a British man
for ‘preparing terrorist acts’ in 2011 revealed how he had used
sophisticated encryption methods to undertake communication with
Islamist radicals and al-Qaeda leaders (MacDonald and Bryan-Low,
2011). ISIS has also been said to use encrypted dark websites to
communicate (Wiese, 2017). Another variant on encryption is the use
of steganography (from the Greek meaning 'covered
writing’). Steganography works by hiding data within the digital code
used to construct images; unless one knows where to look, the hidden
data will be virtually undetectable. Choudhary (2012) claims that
terrorist groups are using steganographic techniques to hide messages
within images posted or transmitted via the internet. Such claims,
however, have proved controversial, with one group of US research
scientists claiming that a comprehensive study of web images ‘found
no evidence of steganography on the Net' around the time of al-Qaeda's attacks on the US (Vegh, 2002: 13). More recent claims have been
been made that ISIS and al-Qaeda have used steganography to
communicate through images (including pornographic images) on
websites like Reddit and eBay (Thomas, 2015: 762).
Propaganda, publicity and recruitment: As already noted, publicity (be
it about a group’s attacks, ideology, aims or demands) is an essential
component of terrorism. Terrorist groups have traditionally been
dependent upon established news media (newspapers, radio,
television) for dissemination of their messages (note, for example, al-Qaeda's use of the Arab satellite news channel Al Jazeera to broadcast
warnings, demands and statements from Osama bin Laden). However,
this arrangement creates, from the terrorists’ point of view, an
unwelcome dependence upon actors over whom they may have no
control, and who may be actively hostile towards the group’s agenda.
The internet offers an inexpensive means to address a worldwide
audience, bypassing the dependence on intermediary news
organizations. Numerous terrorist organizations now maintain their
own websites on the open or dark web, including: the Ku Klux Klan
and Army of God in North America; Hamas, Hezbollah and ISIS in the
Middle East; the Irish Republican Army in Europe; the Shining Path
(Peru) and Fuerzas Armadas Revolucionarias de Colombia (FARC) in
Latin America; and Aum Shinrikyo and Hizbul Mujahideen in Asia.
Such sites typically provide:
a history of the organization and its activities, a detailed review of
its social and political background, accounts of its notable
exploits, biographies of its leaders, founders, and heroes,
information on its political and ideological aims, fierce criticism
of its enemies, and up-to-date news. (Weimann, 2004: 4)
As well as targeting international public opinion and enemy publics,
such sites can be used to reach potential supporters and recruits.
Recruiters may also use interactive web technologies such as chat
rooms and bulletin boards in search of sympathetic individuals who
may prove potential future members and supporters (Weimann, 2004:
8).
Information gathering: The billions of pages of the internet constitute
a massive repository of potentially useful information for terrorist
groups. Large amounts of surprisingly sensitive information are
routinely made available on the internet by governments, public bodies
and businesses (especially in countries that have freedom of
information provisions in place intended to increase government
transparency and hence accountability). Systematic exploration of the
internet (known as ‘data mining’) can enable groups to gather
information on potentially exploitable targets, their location and so on
(such techniques are routinely used by law enforcement agencies in
their intelligence-gathering on terrorist groups and organized crime
groups). The UK government’s Countering International Terrorism:
The United Kingdom’s Strategy (CONTEST) claims that terrorist
groups are now using online map services such as Google Earth and
Street View to help plan attacks (Gardham, 2011). Hacking techniques
can also be used to access confidential information. For example,
Denning (2000a: 6) reports that the IRA hired hackers to obtain
personal details about law enforcement and security operatives in order
to formulate plans for an assassination programme. Though he was not
formally linked to a terror organization prior to this event, the
previously mentioned release of US government employees and
military members to ISIS by Ardit Ferizi strikes a similar chord.
Moreover, the internet can be seen as a freely accessible library of
technical information on methods that are useful for perpetrating
terrorist attacks. Documents readily available for download include
The Anarchist’s Cookbook and The Terrorist’s Handbook, which
include instructions for making explosives from household items such
as bleach; the Sabotage Handbook, which offers instructions on
planning assassinations and anti-surveillance methods; and the
Mujahedeen Poisons Handbook, with detailed information on
manufacturing homemade poisons and poison gases (Weimann, 2004:
9).
Fundraising and financing: The internet provides a valuable tool for
terrorist fundraising. Websites are used, for example, to post details of
bank accounts via which sympathizers can make payments to
organizations. The Web also provides a useful means for soliciting
donations. Internet-user demographics are used to identify potential
supporters who can then be approached via email. Solicitations are
typically made via a front organization (such as a charity) that can then
be used to channel finance to the terrorist organization (Weimann,
2004: 6–7). More generally, the expansion of internet banking makes it
more difficult for authorities to detect suspicious transactions
(Fitzgerald, 2003: 21), and its transnational nature enables prohibited
organizations to exploit international inconsistencies and gaps in the
legal regulation of finance (Shelley, 2003: 304). Assertions have also
been made that terrorist organizations have turned to cryptocurrency to
cover their financial tracks. In a recent case, a woman was arrested in
the US for using ‘bitcoin and other virtual currencies to launder
$85,000 and send it to ISIS’ (Saravalle, 2018). Finally, a range of
cybercriminal activities can be mobilized in order to raise direct
finance for groups’ activities; for example, online copyright theft and
‘piracy’ (discussed in Chapter 5) and internet frauds (discussed in
Chapter 6) can furnish valuable avenues for the high-profit and low-risk acquisition of monetary resources. Recent studies have presented
evidence for such connections between terrorist organizations,
organized crime groups and media ‘piracy’ (AACP, 2002; Robinson,
2015; TraCCC, 2001).
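The encode-and-decode cycle described under 'Communication and coordination' above can be sketched with a deliberately simplified symmetric cipher. This toy XOR scheme is illustrative only: it is not the hybrid public-key design that PGP actually uses, and it is trivially breakable in practice. What it shows is the core idea that the message is unreadable in transit and only a holder of the key can restore it:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR every byte of the message with the repeating key. Applying the
    # same operation again with the same key undoes it, so one function
    # serves as both the encoder and the decoder.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                     # secret shared only with the recipient
plaintext = b"meet at the usual place"
ciphertext = xor_cipher(plaintext, key)  # gibberish if intercepted in transit
assert xor_cipher(ciphertext, key) == plaintext  # key holder recovers the text
```

Real tools such as PGP avoid the obvious weakness here (a reused XOR key leaks patterns) by combining vetted public-key and symmetric algorithms, which is precisely why intercepted communications are unreadable to security agencies without the key.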
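The image-based steganography described above can likewise be sketched in miniature (the function names are illustrative, and a real tool would parse an actual image file format rather than a bare byte array). Each bit of the hidden message replaces the least-significant bit of one pixel value, a change of at most one unit of brightness and therefore invisible to the eye:

```python
def hide(pixels: bytes, message: bytes) -> bytearray:
    # Write each message bit into the least-significant bit of one pixel byte.
    out = bytearray(pixels)
    bits = [(byte >> j) & 1 for byte in message for j in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the bit
    return out

def reveal(pixels: bytes, length: int) -> bytes:
    # Read the least-significant bits back out and reassemble them into bytes.
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytes(range(256))             # stand-in for raw pixel data
stego = hide(cover, b"hidden")
assert reveal(stego, 6) == b"hidden"  # only a recipient who knows to look finds it
assert max(abs(a - b) for a, b in zip(cover, stego)) <= 1  # imperceptible change
```

The asymmetry this illustrates is what makes detection so difficult: a recipient who knows where and how to look extracts the message trivially, while an observer inspecting the image sees only statistically ordinary pixel data.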
All the above intersections suggest a clear convergence between terrorism
and the internet. This should not be surprising, since terrorists (just like
other social actors and organizations) are liable to adopt new technologies
where they offer strategic advantages and efficiencies in the pursuit of their
goals. What renders the use of the internet in this way particularly
challenging are the difficulties in monitoring and regulating its illicit use.
4.7 Cyberwarfare
Though the thrust of this chapter is dedicated to political forms of hacking
typically described as hacktivism or cyberterrorism, it is worth highlighting
another way that information security is exploited for political purposes. As
mentioned previously, governments across the world have become
increasingly concerned about potential vulnerabilities in critical
infrastructure. After all, the more a society becomes networked, the more
dependent it becomes on information infrastructure. While government
leaders and pundits wring their hands over the potential of ‘lone hackers’
and ‘cyberterrorists’ to exploit weaknesses in critical infrastructure, others
see computer networks as a new domain of warfare whereby nation states
can launch attacks against each other, a threat often described as cyberwarfare
or information warfare (Clarke and Knake, 2010: 44; Denning, 1999). Some
military and policy strategists have advised that adversarial states could
make use of information technology to spread propaganda, disrupt
communications, sow discord, conduct espionage, and sabotage operations,
with the most alarmist accounts prophesying crippling attacks that
could result in widespread loss of life and limb (for example, Clarke and
Knake, 2010: 64–68). Such accounts indicate that the future of warfare
could depend on information security. In response to such concerns,
multiple countries, including the US, UK, Russia, China, North Korea, Israel, and
others, have developed cadres of operatives dedicated to developing and
executing offensive and defensive operations in cyberspace. For example,
in the US, the Army, Navy, Air Force, National Security Agency, Cyber
Command (part of the US Department of Defense’s Strategic Command or
STRATCOM), Defense Advanced Research Projects Agency, and others
are all involved in developing and coordinating cyberwarfare capabilities
(Clarke and Knake, 2010: 39–44; Lindsay, 2013: 367; USDOD, 2015).
Some accounts argue that cyberwarfare may be particularly appealing to
countries lacking in military might, a notion called ‘asymmetric’ warfare
(Clarke and Knake, 2010: 50). Others argue that the resources, infrastructure
and expertise needed to conduct cyberwarfare mean that it will likely be the
province of ‘states with long-established and well-funded cyber warfare
programs’ (Lindsay, 2013: 388).
One of the most recent and noteworthy examples of cyberwarfare occurred
through the deployment of a ‘digital weapon’ commonly known as Stuxnet
(Zetter, 2014a). Development of Stuxnet began in 2006 under the code
name ‘Olympic Games’ as part of a US government effort to develop digital
weapons (ibid.: 194). Three years later, in 2009, the US was embroiled in
efforts to stop Iranian uranium enrichment. While Iran insisted that such
enrichment was necessary for developing domestic nuclear power
facilities, the US and its allies were worried that Iran was attempting to
create nuclear weapons. The US joined forces with Israel to further refine
their digital weapon and eventually unleashed what would become known
as Stuxnet (related pieces of malware, such as Flame, have since been
discovered). Stuxnet was a highly sophisticated and difficult-to-detect
malware program that involved two key components: the ‘delivery system’
and the ‘payload’ (Zetter, 2014a: 52). The former controls how the malware
transmits itself from system to system. Because Iran’s nuclear enrichment
facilities were kept separate from the internet, Stuxnet’s primary vector of
transmission was infected USB devices (ibid.: 93). Once on a
system, Stuxnet would check for the presence of Siemens software and
associated programmable logic controllers (PLCs) – devices used for
managing industrial equipment (ibid.: 52). If the malware did not find its
target, it would lie dormant and wait to be transferred. If it did, the payload
would then be deployed. Stuxnet would hijack the PLCs used to regulate
the enrichment centrifuges, making them fail at greater rates than expected,
while reporting falsified data to the technicians. As a result of Stuxnet, Iran
experienced significant delays in its uranium enrichment efforts, which
gave the US, Israel, and other countries more time to find diplomatic
solutions to the conflict.
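The check-then-deploy-or-stay-dormant logic described above can be caricatured in a few lines. This is a deliberately toy sketch, not Stuxnet’s actual code: every name (`has_target_plc`, `run_payload`, the dictionary keys) is hypothetical, and the sketch models only the general behaviour reported by Zetter (2014a).

```python
# Toy model of the 'delivery system'/'payload' split described above.
# All names are invented for illustration; no real malware logic here.

def has_target_plc(system):
    """Check whether the infected host runs the targeted control software."""
    return system.get("control_software") == "target-plc-suite"

def run_payload(system):
    """Degrade the controlled equipment while reporting normal data."""
    system["centrifuge_failure_rate"] = "elevated"
    system["reported_telemetry"] = "nominal"   # falsified data to operators
    return system

def infect(system):
    """Deploy the payload only if the target is present; else stay dormant."""
    if has_target_plc(system):
        return run_payload(system)
    system["status"] = "dormant"               # wait to be carried onward
    return system
```

The key design point is that the payload fires only in the targeted environment and leaves other hosts essentially untouched, which helps explain why the malware was so difficult to detect.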
Though Stuxnet has been linked to the US and Israel, neither country has
officially confirmed its involvement in the Iranian nuclear enrichment
attacks. This ambiguity points to a key issue surrounding cyberwarfare – its
often clandestine nature makes identifying, tracking and attributing
occurrences difficult. Even determining whether an attack was conducted by non-state or state actors can be nearly impossible in many cases. In
another example, Estonia was hit with a series of persistent and large-scale
DDoS attacks against many of its servers in 2007 (Brenner, 2016: 19–21).
The attacks came shortly after Estonian authorities relocated a statue which
many Estonians saw as emblematic of Soviet oppression, a removal that
many Russians regarded as disrespectful. Though the identity of the
aggressor was unconfirmed, the Estonian government initially blamed
Russia for the attack and called the event cyberwarfare. No definitive evidence exists linking Russian authorities to
the attacks and the Russian government denies involvement (Brenner, 2016;
Clarke and Knake, 2010: 11–21). Some fingers point toward loyal Russian
hacktivists as possible culprits (Herzog, 2011). Clarke and Knake (2010)
argue, however, that the scale and sophistication of such an attack would
have required resources that only a government like Russia’s could have
provided. Estonia eventually withdrew its accusations (Brenner, 2016).
Though concerns over cyberwarfare proliferate in public and political
discourse, scholars have argued that the threat posed by cyberwarfare may
be overstated. Rid (2013), for example, argues that the term cyberwarfare
itself is misleading. Typically, warfare is a violent affair of bullets and
bombs – the use of kinetic energy to accomplish state objectives. Thus the
term cyberwarfare implies that computer systems and networks are
compromised in a manner that results in harm to life and limb. Indeed, the
bleakest predictions of cyberwarfare foretell such scenarios where
cyberwarfare could cause serious damage to infrastructure – enough to
cause serious harm to persons – as well as compromise defensive
capabilities so significantly that technology-dependent military forces could
be more easily overwhelmed through more traditional military tactics.
Military and policy analyst Richard A. Clarke, in fact, goes as far as to
claim that cyberwarfare ‘may actually increase the likelihood of the more
traditional combat with explosives, bullets, and missiles’ (Clarke and
Knake, 2010: xiii). Scholars like Rid (2013), however, disagree and argue
that cyberwarfare strategies may actually reduce the violence involved in
other forms of warfare. As he explains:
First, at the technical high-end, weaponized code and complex
sabotage operations enable highly targeted attacks on the functioning
of an adversary’s technical systems without directly physically
harming the human operators and managers of such systems …
Secondly, espionage is changing: computer attacks make it possible to
exfiltrate data without infiltrating humans first in highly risky
operations that may imperil them … And finally subversion may be
becoming less reliant on armed direct action: networked computers
and smartphones make it possible to mobilize followers for a political
cause peacefully. In certain conditions, undermining the collective
trust and legitimacy of an established order requires less violence than
in the past, when the state may have monopolized the means of mass
communication. (Rid, 2013: xiv–xv, emphasis in original)
He thus argues that ‘war’ in this capacity is used more in a metaphorical
sense than in a literal one (ibid.: 9). He continues, asserting ‘lethal cyber
attacks, while certainly possible, remain the stuff of fiction novels and
action films. Not a single human being has ever been killed or hurt as a
result of a code-triggered cyber attack’ (ibid.: 13; see also Brenner, 2016:
3–4).
4.8 Summary
This chapter has explored the convergence between politics and the
internet. The first part examined the ways in which various forms of
political activism and protest have appeared online, and the ways in which
they have mobilized techniques originally developed by hackers. This
‘hacktivism’ offers a number of advantages over traditional terrestrial
protest activism, including the internet’s ability to bring together spatially
separated individuals, anonymity and protection from both state and private
security agents who might otherwise be able to disrupt protests. The second
part of the chapter focused upon the issue of cyberterrorism, something
which has acquired an intensified political and media profile in the wake of
the 9/11 attacks. It was noted how our increasing dependence on networks
of ICT apparently renders social, political and economic systems vulnerable
to technological disruption. Moreover, the internet’s relative anonymity, its
enabling of action at a distance, and its function as a force multiplier are
held to make cyber-attacks an appealing option for terrorists. However, it
was noted that there are a number of drawbacks in engaging in virtual as
opposed to terrestrial terrorist actions, which might disincline such
groups from going online to pursue their aims. Indeed, it was argued that
the empirical evidence offers little concrete support for the claim that
cyberterrorism is an immediate or significant threat. Rather, the current
upsurge in concern on this issue ought to be situated critically and
politically as a form of ‘moral entrepreneurship’ on the part of government,
law enforcement and/or the computer security industry in pursuit of their
sectional aims and interests. However, despite this deconstruction of the
cyberterrorist threat, this chapter examined the more prosaic but
nevertheless highly effective ways in which terrorists can and do use the
internet. Finally, this chapter concluded by exploring the use of computers
and networks by nation states to accomplish political objectives, a scenario
which seems more likely in today’s geo-political climate than
cyberterrorism by non-state actors.
Study Questions
What is the difference between hackers and hacktivists?
Is hacktivism a legitimate form of political protest, or a criminal activity that should be
discouraged?
What are the implications of increasing dependence upon information technology for national
security and social and economic stability?
Is cyberterrorism a ‘clear and present danger’ or merely a ‘phantom menace’?
Is it possible to curtail the uses of the internet by terrorist organizations for the purposes of
publicity, recruitment, information gathering or financing? If so, what effects might such
counter-measures have upon legitimate and legal uses of the internet?
Will nation-state cyberwarfare capabilities increase or reduce the violence associated with
conventional forms of warfare?
Further Reading
The best available accounts of hacktivism and online politics are Tim Jordan and Paul Taylor’s
book Hacktivism and Cyberwars: Rebels with a Cause? (2004) and Athina Karatzogianni’s
Cyber-Conflict and Global Politics (2009). Gabriella Coleman’s Hacker, Hoaxer,
Whistleblower, Spy: The Many Faces of Anonymous (2014) provides the most insightful
glimpse at the operations of Anonymous circa 2014. An exhaustive survey of issues relating to
critical information infrastructure protection policy can be found in Brunner and Sutter’s
International CIIP Handbook 2008/9 (2009). For the view of cyberterrorism as a serious
threat, see Verton’s Black Ice: The Invisible Threat of Cyber-Terrorism (2003). For a more
balanced assessment, Dorothy Denning’s writings provide a valuable point of reference, such
as her article ‘Terror’s web: How the internet is transforming terrorism’ (2010). An excellent
critique of the construction of cyberterrorism in political and media discourse can be found in
Sandor Vegh’s article ‘Hacktivists or cyberterrorists? The changing media discourse on
hacking’ (2002) and in Wykes and Harcus’ ‘Cyber-terror: Construction, criminalisation and
control’ (2010). Finally, useful overviews of terrorist uses of the internet in support of their
conventional activities can be found in Weimann’s www.terror.net, United States Institute of
Peace, Special Report 116 (2004) and in Shelley’s article ‘Organized crime, terrorism and
cybercrime’ (2003). Persons interested in information on cyberwarfare should check out
Thomas Rid’s Cyber War Will Not Take Place (2013) and, for a competing view of the
phenomenon, Richard A. Clarke and Robert K. Knake’s Cyber War: The Next Threat to
National Security and What to do About It (2010). In addition, Kim Zetter’s Countdown to
Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon (2014) likely provides
the most definitive account of the Stuxnet virus.
5 Virtual ‘Pirates’: Intellectual Property
Theft Online
Chapter Contents
5.1 Introduction
5.2 Intellectual property, copyright and piracy: An overview
5.3 Scope and scale of piracy activity
5.4 The history and growth of internet piracy
5.5 Who are the ‘pirates’?
5.6 The development of anti-piracy initiatives
5.7 Thinking critically about piracy statistics
5.8 Thinking critically about intellectual property rights
5.9 Summary
Study questions
Further reading
Overview
Chapter 5 examines the following issues:
the nature of intellectual property and piracy;
the extent and growth of piracy in relation to software, music and movies;
the relationship between technological change and internet piracy;
the relationship between youth and piracy;
the kinds of legal, enforcement and campaigning activities that have emerged in response to
piracy;
the problems with official estimates of the piracy problem;
the rhetorical and ideological dimensions of the piracy debate.
Key Terms
Copyright
Downloading
File sharing
Intangible goods
Intellectual property
Piracy
Property rights
5.1 Introduction
At first glance, the notion that the internet may be used as a conduit for
stealing property may strike the reader as somewhat improbable. After all,
we normally take ‘property’ to mean a variety of material goods; and when
criminologists talk about ‘property crimes’, they are usually referring to the
theft of everyday personal possessions such as automobiles, consumer
electronics, clothing, alcohol and tobacco and so on. The internet might
seem an unlikely tool for property theft, given that such material goods
cannot be transmitted via its electronic networks; by its very nature, the
internet deals in the transmission of information, the immaterial electronic
signals of digital communication. Nevertheless, the focus for this chapter
falls upon the ways in which the internet is now identified as an arena for
the theft of property and its worldwide distribution. The property in
question is of a very particular, but increasingly common, kind – what is
known as intellectual property. The most familiar kinds of intellectual
property include goods such as books, computer software, musical
recordings and motion pictures. In legal terms, such goods are owned by
their creators or manufacturers, who make a profit by selling copies to
consumers. In recent years, the internet has become implicated in the
massive increase in theft of such intellectual properties. Indeed, given that
such property itself is nothing more than forms of information (the
arrangement of words in a book, the sequence of codes making up a
program, the succession of images comprising a movie), the internet is the
most powerful medium ever for its theft and distribution; computers and the
internet enable such information to be almost instantaneously copied
without limit, and transmitted equally quickly anywhere in the world.
Consequently, the Web has become almost synonymous with the problem
of ‘intellectual property theft’, also commonly referred to as ‘copyright
theft’ or ‘piracy’. Today, some of the world’s leading industries claim that
they lose billions of dollars every year due to the theft of their intellectual
property via the internet. This chapter will explore what is known about
such ‘copyright crimes’ – how widespread they actually are, what is being
stolen, by whom, and at what cost. A brief history of digital piracy will be
covered starting from the 1970s to today. The chapter will also examine
how crime control and criminal justice initiatives are being mobilized to
challenge internet piracy worldwide. We will also turn a critical lens upon
the current debates about intellectual property theft, however, and consider
why there is an increasingly vocal movement to allow, rather than repress,
the free flow of information and cultural goods in the online environment.
5.2 Intellectual Property, Copyright and
Piracy: An Overview
To appreciate why the copying and distribution of information may be
considered a form of criminal activity we must establish what is meant by
terms such as piracy, intellectual property and copyright. Settling upon a
precise and internationally agreed definition of piracy has proved
remarkably difficult. All legal and economic uses of the term, however,
have their basis in intellectual property (IP) law and protection. IP law is
itself immensely complicated, spanning as it does national, regional and
international laws, treaties, conventions and directives, as well as judicial
rulings and other precedents. Moreover, as Bently and Sherman (2001)
note, it is a field in radical flux, the recent decades having seen far-reaching
revisions in IP rights such as the Trade-Related Aspects of Intellectual
Property Agreement (TRIPS), which was finalized under the auspices of the
World Trade Organization in 1994. While it is not necessary to revisit the
long history and detailed development of IP law in the present context,1 a
basic definition of IP, and especially of copyright, is necessary if we are to
grasp the current legal grounds upon which the problem of internet piracy is
constituted.
1 For such a detailed account of both the development and current
parameters of IP law, see Bently and Sherman (2001) and Letterman
(2001).
At its most fundamental, IP takes the form of so-called ‘intangibles’, such
as ideas, inventions, signs, information and expression. Whereas laws
covering ‘real’ property establish rights over ‘tangibles’, IP laws establish
proprietary rights over ‘original’ forms of intellectual production (Bently
and Sherman, 2001: 1–2; WIPO, 2001: 3). IP can take a number of
recognized forms: patents, trademarks, trade secrets, industrial designs and,
crucially for us, copyright. Copyright, in essence, establishes the holder’s
(e.g. an author’s) rights over a particular form of original expression
(WIPO, 2001: 40–41). Typical objects of copyright include writing, music,
paintings, drawings, audio-visual recordings and (most recently) computer
software. As the term suggests, copyright grants the holder rights over the
copying, reproduction, distribution, broadcast and performance of the
designated ‘work’ or content. In essence, the holder retains ownership of
the expression and the right to exploit it through, for example, licensing its
copying, distribution or performance in return for some financial
compensation (e.g. the payment of a royalty or fee). So, for example, if a
person purchases a CD recording of songs, they then have ownership over
the tangible object (the CD) but not of the musical content of the CD, whose
ownership remains with the copyright holder. Therefore, the purchaser is
legally prohibited from copying, distributing or performing the content
without authorization from the holder and the payment of some agreed fee.
In essence, piracy amounts to just such an infringement of copyright. In
current uses, piracy refers to the unauthorized copying and distribution
(sometimes, though not necessarily, for commercial gain) of copyrighted
content.2
2 A distinction is sometimes made between ‘piracy’ and ‘counterfeiting’,
where the latter refers to products infringing copyright that also replicate
the original product in terms of appearance, packaging, graphics and so on,
and can therefore be passed off as authorized copies (see BPI, 2002: 4;
Kounoupias, 2003: 3–4). Hence, while unauthorized copies of copyrighted
material may or may not be ‘counterfeit’ (replicate in appearance the
legitimate product), they are all ‘pirated’ in that they violate copyright. In
reality, the two categories are often combined or used in conjunction to
cover the range of copyright thefts (see, for example, the European
Commission Green Paper on counterfeiting and piracy – COM(98)569
Final (1998)).
5.3 Scope and Scale of Piracy Activity
According to the copyright industries, piracy (the unauthorized copying,
distribution and/or sale of copyrighted content) is approaching near-epidemic levels. There now exists a range of data charting (or at least
estimating) levels of piracy both globally and in particular nations and
regions. According to the International Chambers of Commerce (ICC) and
affiliate organizations, ‘counterfeiting and piracy’ amounted to $710–$917
billion, with digitally pirated goods accounting for up to $213 billion, based
on 2013 and 2015 data. The same report estimates that by 2022 such
activity will amount to between $1.9 and $2.81 trillion, and that digital
piracy will comprise as much as $856 billion of the total value (Frontier
Economics, 2017: 8). The major sectors affected are those producing
computer software, music, motion pictures and, most recently, books; each
will be considered in turn below.
In the area of musical recordings, the Recording Industry Association of
America (RIAA) claims that between 1999 and 2011, US revenues from
music sales dropped 53 per cent, from $14.6 billion to $7 billion, and
attributes this decline to the emergence of online file-sharing. It further
claims that between 2004 and 2009, 30 billion songs were illegally
downloaded from the internet, and that only 37 per cent of music consumed
by Americans is actually being paid for (RIAA, 2012). According to the
RIAA’s UK counterpart, the British Phonographic Industry (BPI), piracy
costs the British music sector £200 million annually (BPI, 2012). The
International Federation of the Phonographic Industry (IFPI) argues that the
knock-on effects of music piracy include a decline in investment in new
artists, a fall in the release of new music, and an associated loss of
employment within the industry as a whole (IFPI, 2011: 16–17). In 2015,
this organization also claimed that 20 per cent of internet users regularly
accessed unauthorized music files (IFPI, 2015). Similarly, recent estimates
by the UK’s Intellectual Property Office (IPO) indicate that 80 per cent of
all music downloaders consume music legally, indicating changes in
consumption patterns since the late 1990s and early 2000s (IPO, 2016: 19).
Further, in the era of paid digital downloads and streaming music, the IFPI
(2017: 11) now reports that for the first time since 1999, music industry
revenues have marginally increased since 2015. If the industries were
correct in their assertions that widespread digital piracy was responsible for
declining revenues since the late 1990s, then recent upticks may indicate
that the music industry has created business models which can compete
with pirated content.
In the area of motion pictures, the Motion Picture Association of America
(MPAA) claims that the US film industry is losing $25 billion per annum as
a result of piracy (Havocscope, 2012). An international anti-counterfeiting
law enforcement operation conducted in 2010 seized more than 142,000
‘pirated’ DVDs (Rockwell, 2012). One report has claimed that illegal
downloading of copyrighted content is so extensive that it takes up 23.8 per
cent of the global internet bandwidth (Price, 2013: 87). The UK film
industry, while dramatically smaller than its US counterpart, claims losses
in the region of £500 million per annum (Thorpe, 2011). In 2007, UK
authorities seized more than 2.8 million ‘fake’ DVDs, and the commercial
trade in the sale of such items is estimated to be worth £200 million a year
(FACT, 2008). The UK’s IPO has recently estimated that 24 per cent of
movie downloaders do so illegally (IPO, 2016: 19). Related to film piracy,
many consumers illegally download and watch television programming.
One report claims that 6.5 per cent of North American households access
illicit subscription television services resulting in an estimated $4.2 billion a
year in losses (Sandvine, 2017: 2–4). The previously mentioned IPO report
similarly finds that 20 per cent of downloaders of television programmes do
so illegally (IPO, 2016: 19).
The computer software industry claims the largest financial losses to piracy.
In a recent report, Business Software Alliance (BSA) claimed that more
than half of all software installed and used on systems worldwide was
unlicensed. They estimated that the value of such unlicensed software was
$52.24 billion (BSA, 2016: 6–7). The Asia Pacific region was said to have
the highest piracy rate, where 61 per cent of all software is claimed to be an
illegal copy; however, rates are also high for North America (17 per cent)
and Western Europe (28 per cent) (BSA, 2016: 6–7). Software piracy also
includes the unauthorized duplication and dissemination of video games.
Though data concerning prevalence of game piracy is difficult to come by,
one open survey conducted by PC Gamer magazine found that 35.8 per
cent of respondents claimed to have pirated video game software (Fenlon,
2016).
The fourth industry affected by illegal sharing of copyrighted content is
book publishing. Until recent years, the problem of book piracy was
considered relatively negligible, given that consumers were reluctant to
read on computer screens, generally preferring traditional printed material.
Though books are one of the few content areas where physical copies are
still purchased in greater quantities than digital ones, digital books occupy
increasing market shares today (IPO, 2017: 4). The last few years have also
seen a rapid adoption of dedicated ebook reading devices, which boast ‘e-ink’ screens that can closely approximate the look of printed text on an
electronic display; the same period has seen the appearance of portable
touchscreen tablet devices, as well as smartphones with ever larger and
sharper screens, which are also well-suited to function as reading devices.
In tandem with the growing uptake of such consumer devices, there has
been a rise in demand for free online access to copyrighted books. The
UK’s Intellectual Property Office’s most recent findings estimate that 17 per
cent of ebooks (14 million) are consumed illegally (IPO, 2017: 4). Related
to ebook piracy, academic publications have also been subject to copyright
infringement. Of note, the website Sci-Hub has emerged in recent years as a
free-to-access repository of scholarly publications. While most scholars
produce academic research publications with little to no compensation (it is
generally considered ‘part of the job’ for academics), access to many of
their published works is controlled by for-profit publishers. Websites like
Sci-Hub present a threat to that business model and have called into
question what it means to own scientific knowledge. As a result, the site,
started by a student at the Russian Academy of Sciences, Alexandra
Elbakyan, has been sued by Elsevier – one of the largest publishers of
academic literature globally (Graber-Stiehl, 2018). In another high-profile
case, Aaron Swartz – noted internet activist – was arrested for illegally
downloading troves of academic journal articles from JSTOR (Journal
Storage) through MIT’s systems. He was subsequently charged with
multiple crimes which could have resulted in a million dollars in fines, 35
years in prison, and other penalties. Advocates argue that this harsh
treatment of Swartz’s piracy activities contributed to his subsequent suicide
(see Steinmetz, 2016: 121).
5.4 The History and Growth of Internet
Piracy
A major factor in explaining this growth of internet piracy is the rapid
expansion of the internet itself. Worldwide, there are now an estimated 4.05
billion users (IWS, 2017). In North America, 95 per cent of the population
use the internet and penetration rates are also high in European countries
(84.6 per cent), Oceania (68.3 per cent) and Latin America and the
Caribbean (65.1 per cent) (IWS, 2017). The growth of broadband internet
access in particular enables users to quickly download large quantities of
digital content in compressed format; the 35 OECD member countries boast
more than 393 million fixed broadband subscriptions, with a further 1.3
billion subscriptions for wireless broadband access (OECD, 2017). As will
be detailed later, the internet abounds with ‘file-sharing’ sites that enable
home computer users to exchange the digitized content on their hard drives
(there are a number of readily available tools for bypassing copyright
protection systems, enabling movies and music to be converted into digital
files suitable for download). The scale of such file sharing online is hinted
at by the fact that over the course of its existence, the search engine Google
has received requests (so-called ‘takedown notices’) from copyright holders
to remove links to more than 3.5 billion URLs (at the time of writing) on
the grounds that they illegally hosted copyrighted content (Google, 2018).
And, as industry representatives admit, the ‘increasingly decentralized,
fragmented and anonymous’ nature of these services makes legal
rights enforcement a severely limited tool for curtailing file sharing
(Valenti, 2002). In addition, there are numerous sites that give free access to
unauthorized streaming video of the latest movie releases and television
shows.
The internet can be seen to offer a number of advantages over more
traditional methods for the illegal copying and distribution of copyrighted
content. First, it bypasses the costs involved in recording the software,
music or movies onto physical media such as CDs, DVDs, videotapes and
cassettes, since the content can be transmitted and stored directly using
computer hard drives. Second, while the distribution of hard copies is slow,
the distribution of content via the internet is rapid. Third, while giving or
selling such content to consumers in physical form entails making a copy
for each user, the internet allows an unlimited number of copies to be made
by consumers from a single copy hosted on the Web. Fourth, the worldwide
distribution of pirate content in the form of hard media entails certain risks,
such as the confiscation of shipments by customs officials or the detection
of attempts to sell such goods at markets, shops, car boot sales and suchlike.
In contrast, the internet bypasses border controls, and affords distributors
and sellers a degree of anonymity and protection. If their activities are
detected, identifying and apprehending the individuals responsible are
considerably more challenging in the online environment, where identity
can be more easily disguised. Moreover, by locating the point of
distribution in countries with little or no enforcement of copyright laws, the
distributors can effectively evade the reach of law enforcement agencies
elsewhere in the world, or at least make it very difficult indeed for them to
be brought before the law.
The growth of digital piracy takes place over a long historical arc. In fact,
the unauthorized duplication and distribution of intellectual property more
generally has a history that long predates the internet and computers (see
Johns, 2002). Yet, as previously discussed, the scope and scale of
unauthorized duplication has exploded in the digital era. Digital piracy’s
beginnings, however, were relatively humble. One of the earliest examples
of digital piracy involved a small group of computer enthusiasts called the
Homebrew Computer Club (HBCC) in the 1970s (Levy, 1984; Wozniak,
2006). This group was fascinated with early microprocessor computers like
the Altair 8800. During this period, a young Bill Gates and two colleagues
were developing a BASIC interpreter (software which allows for a
programming language to be used on a system) for the 8800 and accepted
pre-orders to help finance the project. The interpreter’s release was fraught
with delays, which frustrated its committed consumer base. A leaked copy
of the software eventually made its way to members of the HBCC who
eagerly reproduced copies for each other (Levy, 1984). When Gates heard
about this activity, he penned a strongly worded open letter published in the
HBCC newsletter that condemned these copiers, branding them as pirates.
The members of the HBCC did not see the situation in the same manner,
instead viewing the act as simply sharing information, a tension which
would endure between content owners and pirates through the ensuing
decades. In the 1980s and 1990s, pirated content was shared mostly among
small groups of computer enthusiasts through bulletin board systems
(BBSs), Usenet/newsgroups, FTP (file transfer protocol)/FXP (file
exchange protocol), and internet relay chat (IRC) (Cooper and Harrison,
2001; Meyer, 1989). While these media were frequently used for licit
purposes, they served as key methods of transmission for pirated digital
goods before the internet and the World Wide Web exploded in popularity
through the mid- to late-1990s (Mueller, 2016).
Digital piracy went mainstream in 1999 with the advent of Napster, a peer-to-peer (P2P) file-sharing system developed by Shawn Fanning. Napster
allowed users to make their music files available for others to download
and, in turn, those users could search for music files shared by all other
users in the system. Combined with the creation of the .mp3 file format –
which allowed music files to be significantly reduced in size without major
sacrifices to audio quality – Napster made digital music piracy easy and
seemingly ubiquitous. In 2001, Napster was shut down by court order as a
result of copyright claims made by the RIAA and others. A multitude of
alternative P2P services rose from the ashes, however, including Kazaa,
Limewire, eMule and BearShare, to name a few. Like the mythical Hydra,
cutting off one head only created more. These services took P2P beyond
music piracy to the reproduction of almost all forms of digital content.
These file-sharing services, too, had to dodge shutdown attempts from
copyright holders and industry organizations. A central problem they had to
overcome was centralization: if Napster’s servers were shut down, for
example, its users could no longer use the service at all. New file-sharing
protocols thus emerged that could
remain in operation if one point was taken offline. Perhaps the earliest of
such programs was Gnutella. In 2001, BitTorrent emerged, created by Bram
Cohen, and became a ‘game changer’ in digital piracy. It was a
decentralized P2P file-sharing method extraordinarily resilient to shutdown
attempts and legal challenge. A cornucopia of websites emerged dedicated
to facilitating piracy through BitTorrent including Demonoid,
KickassTorrents, and, most notoriously, The Pirate Bay.
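The resilience that decentralization affords can be illustrated with a highly simplified sketch (the peers and piece sets below are invented for illustration; this is not how the BitTorrent protocol works in detail). So long as every piece of a file is held by more than one peer, taking any single peer offline still leaves the complete file recoverable from the swarm:

```python
# Hypothetical illustration (invented peers and piece sets, not a real
# BitTorrent implementation): a file is split into numbered pieces held
# redundantly across a swarm of peers. Unlike a centralized service such
# as Napster, there is no single server whose removal kills availability.

PIECES = {0, 1, 2, 3}  # the complete file, as numbered pieces

# Each peer holds an overlapping subset of the pieces.
swarm = {
    "peer_a": {0, 1},
    "peer_b": {1, 2, 3},
    "peer_c": {0, 2, 3},
}

def file_available(peers):
    """The file is recoverable if the peers collectively hold every piece."""
    held = set().union(*peers.values()) if peers else set()
    return held == PIECES

print(file_available(swarm))  # True: the swarm holds all pieces

# Take any single peer offline: the remaining swarm still serves the file.
for downed in swarm:
    remaining = {p: held for p, held in swarm.items() if p != downed}
    print(downed, "offline ->", file_available(remaining))  # True each time
```

Contrast this with Napster’s design, where a single index server was a single point of failure: the court order that shut its servers down ended the service for every user at once.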
Content industries grew increasingly concerned over the perceived
epidemic of digital piracy, an anxiety that only intensified following the
advent of BitTorrent. Through measures discussed throughout this chapter,
content industries went on the offensive. For instance, between 2003 and
2008, the RIAA ‘filed, settled, or threatened legal actions against at least
30,000 individuals’ (EFF, 2008: 1). Twenty-three universities received
about 2,500 pre-litigation letters in 2007 from the RIAA that claimed
students were using university servers to illegally download music (Lessig,
2008). This campaign pressured numerous college students to settle
copyright infringement cases involving illegally downloaded .mp3s for
sums as high as $17,500 (EFF, 2008). The BSA has also heavily promoted
its ‘No Piracy’ program, through which individuals can be paid to report
co-workers using unauthorized copies of software (Mitchell, 2012). As will
be detailed later in this chapter, multiple industry organizations crafted
anti-piracy educational campaigns, providing industry propaganda for use
by school teachers to instruct children about the alleged dangers of piracy
(Yar, 2008b).
As previously mentioned, in addition to deploying legal remedies, content
industries have turned toward developing technological approaches to
protect copyrighted material. These measures include code-based,
hardware-based, or combined solutions. One common approach is the use
of activation codes for accessing software packages. This practice is of
growing significance, as software is increasingly being retailed online.
Sellers permit internet users to download the software, but it can only be
activated using a code that is provided once the potential user has made the
appropriate online payment for the program. By sharing such codes online,
however, pirates enable users to install and run the software without
making the necessary purchase payment to the copyright holder. Some copy protection
mechanisms are described as DRM or ‘digital rights management’
protocols. One form of DRM involves ‘always on’ protections. For
example, some companies require that users interested in playing a video
game have an active internet connection that can be used to validate the
authenticity of their copy of the game – is it ‘theirs’ and do they have
permission to use it? Some of these DRM technologies have proven controversial.
For example, the video game company Electronic Arts (EA) used always-on
DRM when it released SimCity in 2013, and many players were locked
out of the game they had legitimately purchased because they could not connect
to EA’s servers, a problem that persisted for days (Anthony, 2013). In
another instance, Sony included DRM protections on certain CDs which
covertly installed software on users’ computers to prevent the creation
of ‘burned’ or duplicate CDs (Brown, 2015). In addition to installing
software without permission, this DRM method opened users’ computers up
to malware attacks. Regardless of the copy protection approach taken, there
is a swarm of pirates and hackers dedicated to circumventing these
protections, though some (like always-on DRM) have proven more difficult
to crack than others. In fact, since the time of the HBCC, hackers and other
technologists have come together to circumvent copy-protection
mechanisms put in place by content owners to prevent duplication (a
process known as ‘cracking’) and disseminate software and other forms of
digital content. This intersection of hacking and piracy is often known as
the ‘warez scene’ or ‘the scene’ (‘warez’ is short for ‘softwares’) (Décary-Hétu et al., 2012).
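The activation-code approach described above can be sketched in miniature (a hypothetical illustration; real vendors use far more elaborate licensing systems, and the secret key and order IDs below are invented). The seller derives a code from each paid order using a secret signing key, and the installer accepts only codes that verify against that key:

```python
# Hypothetical sketch of an activation-code scheme (invented secret and
# order IDs; not any real vendor's licensing system).
import hashlib
import hmac

SECRET = b"vendor-signing-key"  # secret held by the seller

def issue_activation_code(order_id: str) -> str:
    """Seller side: derive an activation code from a paid order."""
    digest = hmac.new(SECRET, order_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def activate(order_id: str, code: str) -> bool:
    """Installer side: accept only codes the seller could have issued."""
    expected = issue_activation_code(order_id)
    return hmac.compare_digest(expected, code)

good = issue_activation_code("order-1001")
print(activate("order-1001", good))        # True: paid-for code
print(activate("order-1001", "1234abcd"))  # False: guessed code
```

The sketch also hints at why ‘cracking’ works: if the secret can be extracted from the installer, or the check in `activate` patched to always succeed, the protection collapses – precisely the kind of circumvention in which the warez scene specializes.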
To avoid being caught and prosecuted or sued, many pirates also employ
privacy-enhancing technologies like VPNs (virtual private networks),
whose use is thought to be on the rise (Larsson et al., 2012). These
services route users’ network traffic through intermediary servers to hide
users’ IP addresses, making it more difficult to trace who
is downloading and uploading copyrighted content. VPNs also provide a
way for pirates to bypass attempts by regulatory bodies to block access to
websites hosting or indexing pirated content. For example, a number of the
most popular piracy sites are blocked by UK ISPs pursuant to a court order
obtained by intellectual property rights holders. Many UK pirates have taken
to using VPNs to access pirated content from non-UK IP addresses. Pirates
have also continued to explore other technological avenues for sharing
pirated material. For instance, they have begun returning to Usenet, a
‘distribution system for the exchange of text-based messages, providing
services similar to a set of publicly archived email lists’ (Turner et al.,
2005). Despite the technology being decades old, it is considered by many
to be the most effective way to share content while maintaining privacy.
Other pirates have turned to filelocker services. These websites allow
persons to store content on their servers and share download links with
others, frequently through online discussion boards. A recent piracy
crackdown involved one such site, Megaupload, whose owner (Kim
Dotcom) was arrested in New Zealand through an international law
enforcement sting (Zetter, 2015). In the age of streaming services like
Netflix, Spotify and YouTube, pirates have increasingly turned toward
‘stream ripping’ or the conversion of streaming data into files which can be
stored locally. A myriad of programs and plug-ins exist, for example, which
allow users to extract videos from YouTube and save them on a hard drive,
allowing permanent access to the video even if it is later pulled from
YouTube’s services. Some piracy services have also fled to the dark web to
avoid shutdown attempts. In 2016, the torrenting site Kickass Torrents
transitioned to the Tor network, for example (Ernesto, 2016). Though
copyright holders and industry organizations have made significant efforts
to stymie piracy, each new measure seems to lead to new innovations on the
part of pirates.
5.5 Who are the ‘Pirates’?
One of the most notable features of internet piracy is its apparent ubiquity.
Far from being confined to a small class of ‘professional criminals’, piracy
activity appears to be socially widespread, and undertaken on a regular
basis by individuals who would otherwise consider themselves ‘law-abiding
citizens’. Participation in the illegal downloading of copyrighted materials
appears to span persons from various social classes and walks of life. Some
insight into the social distribution and engagement in internet piracy is
afforded by recent surveys and self-report studies; particularly noteworthy
among such findings is the disproportionate involvement of young people
in such offending.
A 2011 survey of 15,000 computer users across 32 countries found that 47
per cent ‘acquire software through illegal means most or all of the time’,
and that the ‘typical pirate’ is between 18 and 34 years of age (BSA, 2011).
A UK-based poll in 2004, conducted on behalf of the BSA, found that 44
per cent of 18–29-year-olds owned ‘pirated’ IP; the figure for the 30–50-year age group was 28 per cent, and 17 per cent for the over-50s. The
survey further found that ‘there is little stigma to owning counterfeit goods’
(Thomson, 2004). The PC Gamer reader survey similarly found that the
greatest rates of piracy among PC gamers were among persons below the
age of 30, with between approximately 33 and 46 per cent reporting
engagement in piracy (though, surprisingly, approximately 26 per cent of
persons over the age of 60 reported engaging in piracy as well)
(Fenlon, 2016). The findings of the UK government’s 2003 Crime and
Justice Survey reported that 9 per cent of people over 18 years old in the
UK admitted to committing ‘technology offences’ such as ‘illegally
downloading software or music’ (Budd et al., 2005: 25). A 2004 poll in the
USA found that ‘more than half of all 8–18 year olds have downloaded
music, a third have downloaded games and nearly a quarter have
downloaded software illegally from the internet’ (Snyder, 2004: 1). A
number of studies worldwide have found high levels of ‘softlifting’
(downloading copyrighted software from the internet) among college
students, and little weight attached to ‘legal and moral’ objections (see
discussion in Kini et al., 2003: 63–64). A further study also suggested that
younger people are considerably more likely to engage in such behaviour
than their older internet-using counterparts (Lee et al., 2004). Significant,
then, is the apparent inverse relationship between age and propensity to
commit copyright offences: the younger the age group, the more likely their
involvement. This relationship between youth and piracy has, as we shall
see, shaped in significant ways the development of anti-piracy campaigns
and initiatives.
Some research has examined the motivations for copyright infringement
given by pirates. While providing a comprehensive list of pirate motivations
is difficult – perhaps impossible – certain key motivations (aside from the
obvious ‘they want free stuff’ narrative) are worth highlighting. For
instance, some pirates claim that they download copyrighted material out of
a desire to try out content (e.g. Gopal et al., 2006; Mueller, 2016; Orland,
2017; Peitz and Waelbroeck, 2006). Referred to as ‘sampling’ or the
‘exposure effect’, the idea is that people may purchase content once they
have sampled or otherwise been exposed to said content without prior
purchase (Liebowitz, 1985; Peitz and Waelbroeck, 2006). A person may
thus download a video game illegally and, if they enjoy the game, they will
then purchase a legitimate version. In other words, content deemed to be of
sufficient quality will be rewarded through licit purchase. In addition, some
research finds that some pirates claim to pirate because they could not
otherwise afford the content through legitimate channels (Steinmetz and
Tunnell, 2013: 59), a motivation consistent with criminological strain
theory (Merton, 1938). Such financial strain may lead to the conclusion that
pirates are either cheap or simply thieves. While some pirates may simply
refuse to pay for any content, Steinmetz and Tunnell (2013) argue that so
much consumer content exists that demands attention from contemporary
consumers (TV shows, movies, music, video games, etc.) that it is difficult
for many to afford access to all of it. When combined with the sampling
motivation, the implication is that many pirates do purchase content
legitimately at least some of the time but can be discriminating about what
they spend their money on.
The need to be discriminating may be warranted as pirates are a kind of
hyper-media consumer. In fact, some research indicates that they consume
so much content that they legally purchase significantly more intellectual
property than non-pirates (Kantar Media, 2013; Karaganis and Renkema,
2013; Mueller, 2016). One study that involved phone surveys of a
randomized sample of 1,000 Germans and 2,303 Americans found that:
P2P file sharers, in particular, are heavy legal media consumers. They
buy as many legal DVDs, CDs, and subscription media as their non-file-sharing, Internet-using counterparts. In the US, they buy roughly
30% more digital music. They also display a marginally higher
willingness to pay. (Karaganis and Renkema, 2013: 5)
Another study, conducted by Kantar Media (2013), found that the top
20 per cent of copyright infringers ‘accounted for 11% of the legal content
consumed’ and ‘spent significantly more across all content types on
average’ than other pirates and non-pirates (2013: 3). In other words, the
people who engage in the most illicit content consumption also consume the
most legal content.
Some pirates claim to engage in illegal downloading because they subscribe
to a hackerish notion that ‘information wants to be free’ (Brand and Herron,
1985). Renowned hacker Richard Stallman (2002: 43) explains that this is
not ‘free’ in the sense of ‘free beer’ but free as in ‘free speech’. The
argument is that there exists a moral imperative to free content from the
confines of copy protection and copyright law – that ‘sharing is caring’ (Steinmetz and
Tunnell, 2013: 56). Though it would be easy to dismiss this ethos as a
technique of neutralization (appeal to higher loyalties) (Sykes and Matza,
1957), the sharing dynamic runs deep in certain sectors of digital piracy
culture, including a hierarchical social organization which emphasizes
sharing. In their study of Internet Relay Chat (IRC) piracy, Cooper and
Harrison (2001: 78–80) identify three categories of pirates: leeches (persons
who download without contributing new content), traders (who engage in
patterns of mutual exchange of pirated material) and citizens (those ‘willing
to give away files in order to benefit the community as a whole, sometimes
trading, sometimes leeching, sometimes providing things for the leechers to
consume’). As citizens are the most likely to contribute content freely, they
are also considered to be the most respected. Steinmetz and Tunnell (2013:
58) examined pirates on a BitTorrent forum and found a social hierarchy
similarly based on sharing, with leechers (those who download more than
they upload) being looked down upon. Conversely, persons who upload
content to other users or ‘seed’ more than they leech were viewed more
favourably.
Some pirates claim to be motivated by a disdain for contemporary business
models for selling intellectual property (Steinmetz and Tunnell, 2013: 59–
60). A person who wants to watch the latest TV shows, movies, games,
books, and other content may have to subscribe to multiple services like
HBO, Hulu and Netflix, order copies of media from companies like
Amazon for delivery (digitally or physically), or go to a brick-and-mortar
store to find their content. Piracy sites like The Pirate Bay may offer ‘one-stop shops’ where all or most of the desired content can be found and
queued for download. Some pirates thus express a sense that contemporary
business models have failed to ‘get with the times’ and make content as
convenient to access as possible. Piracy may thus be seen as an act of
convenience or even an act of resistance. In addition, some pirates ascribe
their activities to a loathing of content industries which they see as unfairly
compensating artists. They claim they would rather directly support artists
rather than pay a ‘middleman’ who may not appropriately pay artists for the
work. Linked to the ‘sharing is caring’ ethos, other pirates claim to
download content illegally out of protest against copyright laws which they
see as out of touch with the contemporary reality of digital content and the
internet.
Finally, some pirates are driven by the money that can be made through
facilitating the illegal consumption of copyrighted content. For example,
hosts of popular streaming websites for pirated content – like
ProjectFreeTV – may generate revenue from pay-per-click advertising.
Suppose a popular show is released on a streaming service like Netflix. The
enterprising pirate filches this content and immediately uploads it to a
content hosting service and adds a link to the show on ProjectFreeTV.
Interested viewers then click on these links to access (stream or download)
episodes, and each time they are also sent to an advertiser’s site via a pop-up
or a new tab opening. For each click, the original pirate might be paid a
fraction of a cent which, while not much on its own, accumulates rapidly if
thousands or tens of thousands of viewers access the content and click on
advertising links or are shown pop-ups (see Andy, 2013). There is also
money in the production or modification of set-top boxes tailored for illegal
streaming services. Kodi boxes have become popular in recent years as a
convenient and affordable way to stream television programmes and movies
in the comfort of one’s living room. Multiple add-ons (code designed to
expand the capabilities of the Kodi boxes) have been created that allow
users to access copyrighted content illegally. In fact, a small cottage
industry has developed around modifying these boxes to facilitate piracy –
making them ‘fully loaded’ – and reselling them (Barrett, 2017).
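The pay-per-click economics described above are easy to make concrete with invented figures (the per-click payout and click counts below are illustrative assumptions, not industry data):

```python
# Illustrative, invented figures for pay-per-click revenue: a fraction of
# a cent per click adds up quickly at scale.
payout_per_click = 0.003  # $0.003, i.e. three-tenths of a cent (assumed)
clicks_per_day = 50_000   # hypothetical audience for a popular show

daily_revenue = payout_per_click * clicks_per_day
print(f"${daily_revenue:.2f} per day")         # $150.00 per day
print(f"${daily_revenue * 30:.2f} per month")  # $4500.00 per month
```

Even on these modest assumptions, a single popular upload yields a meaningful income stream, which helps explain why such sites persist despite repeated takedowns.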
5.6 The Development of Anti-Piracy
Initiatives
The recent rise to prominence of internet piracy as a crime problem has
stimulated a range of responses from legislators, law enforcement agencies
and private sector organizations that represent the copyright industries. This
range of interventions will be considered below.
5.6.1 The Criminalization of Piracy
In the past, copyright violations were largely tackled through a range
of civil remedies available to copyright holders: for example, injunctions,
‘delivery up’ or the destruction of infringing articles, and the payment of
damages (Bently and Sherman, 2001: 1008–1023). Even where the law had
made provision for criminal prosecution of IP violations, there tended to be
few such actions: for example, between 1970 and 1980 there were fewer than
20 prosecutions for copyright offences in the UK (Sodipo, 1997: 228). This
may be attributed to a range of factors, including the relatively low priority
accorded to IP crimes by over-stretched and under-resourced criminal
justice agencies; the public concern and political emphasis on more visibly
harmful offences, such as street crime and violent crime; difficulties in
policing and intelligence gathering; and the reluctance of public prosecutors
to involve themselves in a notoriously complex and specialized domain of
law. Recent years, however, have seen moves to criminalize copyright
violations, bringing them increasingly under the sway of criminal sanctions.
This has taken two main forms.
First, there has been increased willingness to use existing criminal sanctions
against ‘pirates’, encouraged both by greater political sensitivity to IP rights
and their economic importance, and by a concerted application of pressure
through lobbying by the copyright industry. One key means of achieving
this has been the formation of industry organizations (such as the
Federation Against Copyright Theft (FACT) in the UK and the Alliance for
Creativity and Entertainment (ACE) in the US) that conduct investigations
and gather information on piracy activities, and lay complaints before
public prosecuting authorities. There have been a number of high-profile
piracy cases in which such procedures have led to criminal convictions
carrying substantial custodial sentences over the past 20 years: for example,
in 2002 a FACT investigation led to a four-year prison sentence for the
convicted ‘pirate’ (Carugati, 2003). In 2018, complaints from organizations
like the National Federation of Film Distributors (FNDF) led to a
Frenchman being sentenced to two years in prison and fined €83.6 million
in damages for running Streamiz, a popular streaming site for pirated
content (Andy, 2018). In the US, the Attorney General’s Office filed 59
cases of intellectual property violations in 2016 (USAG, 2016: 12).
At a second level, recent decades have seen the incorporation of additional
provisions for criminal sanctions into both international treaties and
national laws. At the national level we can note, for example, Section 107
of the Copyright, Designs and Patents Act (1988) in the UK, which places
local administrative authorities (such as Trading Standards departments)
under a duty to enforce criminal copyright provisions, and significantly
strengthens the penalties available in comparison to the previously existing
Copyright Act of 1956 (Bently and Sherman, 2001: 1031; Dworkin and
Taylor, 1989: 121–122). In the USA, the No Electronic Theft Act (1997)
makes provision for up to three years’ imprisonment for convicted ‘pirates’;
it also extends the applicability of sanctions beyond those engaging in
piracy for commercial gain, to include, for example, the not-for-profit
digital trading engaged in by file sharers (Drahos and Braithwaite, 2002:
185). The Digital Millennium Copyright Act (1998) also criminalizes the
circumvention of copy-protection technologies. At the international level,
Article 61 of the TRIPS agreement establishes a mandatory requirement for
signatories to make criminal provisions against commercial copyright
violations. ACTA was signed in 2011 by the USA, Australia, Canada,
Japan, Morocco, New Zealand, Singapore, Switzerland and South Korea
(with Mexico signing the Treaty the following year). ACTA effectively
extends the range of criminalization measures under previous treaties such
as TRIPS, for example by including criminal sanctions against a range of
very loosely defined offences such as ‘aiding and abetting’ and gaining an
‘economic advantage’ (Arthur, 2012). These measures proved controversial
in Europe: while the EU initially signed the Treaty in 2011, it was rejected
the following year after heated debate in the European Parliament (BBC
News, 2012). Despite this setback, we can nevertheless see that the
extension of available criminal sanctions and the greater willingness to
pursue them have, taken together, significantly reconfigured piracy as a
more serious offence in an attempt to stem its growth.
5.6.2 Legal Regulation for Content Control
Another recent legislative trend has used statutory measures to tackle piracy
by enabling the control of internet content, including the blocking of user
access to suspect websites. Such measures mobilize internet intermediaries
(such as ISPs and online service hosts) and hold them potentially liable for
breaches committed by their customers. For example, the UK’s Digital
Economy Act (DEA) 2010 introduced controversial measures that required
ISPs to send warning letters to customers who had been identified (by rights
holders) as breaching copyright; the receipt of three such warnings within a
year could result in users being blacklisted from internet access (Williams,
2012). Though these warning provisions were eventually repealed, ISPs
have been ordered to block access to popular file-sharing sites Newzbin2
and The Pirate Bay (TPB) (Whittaker, 2012); attempts to access either site
from the UK now direct the user to a page that bluntly states ‘Error – site
blocked’. In addition, the
Digital Economy Act of 2017 increased the maximum jail sentence for
piracy from two years to ten in an attempt to deter those who would post
and distribute copyrighted material (Rigg, 2017). Measures similar to those
of the DEA were introduced in France under the HADOPI (Haute Autorité
pour la diffusion des œuvres et la protection des droits sur internet) law in
2009 (Anderson, 2009). The measures in HADOPI that allowed for the
revocation of internet access following multiple infractions were repealed in
2013, though they were replaced with an automated system that would issue
fines instead (Datoo, 2013). In 2011, two Bills were proposed in the US
Congress, the Stop Online Piracy Act (SOPA) and the Preventing Real
Online Threats to Economic Creativity and Theft of Intellectual Property
(PROTECT IP) Act or PIPA. Both proposed measures that would allow
ISPs to be instructed to block users’ access to foreign-hosted sites that had
been identified as illegally hosting copyrighted content. These blocking
proposals proved controversial with the public, ISPs and search engine
providers, and both Bills faltered in the face of Congressional opposition
(as well as opposition from the Obama administration). Subsequently, the
Online Protection and Enforcement of Digital Trade Act (OPEN) was
introduced to the US Senate in December of 2011 and to the US House of
Representatives in January 2012, containing significant amendments to the
measures proposed in SOPA and PIPA (Franzen, 2012). Though introduced,
the bill was never voted upon and was thus cleared from the legislative
agenda. If passed, OPEN would have enabled the kind of access blocking
already enabled in Europe by the likes of the DEA and HADOPI, discussed
above.
5.6.3 Policing, Enforcement and Piracy
We have already noted the recent increase in policing and enforcement
activity in which industry organizations are playing a leading role. It is
worth noting here the proliferation of industry-financed ‘anti-piracy’
organizations whose raison d’être combines research, intelligence
gathering, policing, education and lobbying activities. The past two decades
have seen the creation of the Counterfeiting Intelligence Bureau (CIB), the
International Intellectual Property Alliance (IIPA), the International Anti-Counterfeiting Coalition (IACC), the Featured Artists’ Coalition, Fair Play
Canada, Stop Piracy, and the aforementioned ACE and FACT, as well as
numerous existing trade organizations that have established specialist
groups and initiatives to combat piracy (such as the Motion Picture
Association of America [MPAA], the Business Software Alliance [BSA]
and the Recording Industry Association of America [RIAA]). Such
organizations aim to assist in enforcement and investigation efforts by
engaging in a range of increasingly intensive policing activities. Where
public agencies have been reluctant to invest time and resources in tackling
IP violations, industrial and commercial interests have filled the void. In
addition to intelligence gathering and undercover operations, they have
attempted to bring IP crime into the criminal justice mainstream through,
for example, the appointment of specialist liaison personnel to ‘assist’ and
‘advise’ responsible agencies in the detection and prosecution of ‘copyright
theft’. Governments have themselves responded to concerted pressure from
these groups by establishing specialist units within criminal justice agencies
to address IP violations: for example, the UK now has the Organised Crime
Agency’s e-Crime unit and the National Technical Assistance Centre
(NTAC), while the US has the FBI’s Internet Fraud Complaint Centre (IFCC)
and ICE’s National Intellectual Property Rights Coordination Center (IPR
Center). INTERPOL has established an Intellectual Property Crimes College
for educating law enforcement agencies around the world about enforcing
IP laws, and there are concerted efforts at the EU level to strengthen
Europol’s powers of enforcement in the area of IP, including the Intellectual
Property Crime Coordinated Coalition (IPC3).
5.6.4 Anti-Piracy Education Campaigns
A fourth area of response has been the development and implementation of
‘anti-piracy education’ campaigns, involving both copyright industries and
public agencies. Particular focus has fallen upon young people, given their
apparently disproportionate involvement in illegal internet downloading,
copying and the distribution of copyrighted materials.
Though industry efforts have seemed to veer away from such campaigns in
light of changes in the media market and consumption patterns (e.g. ready
availability of streaming services like Netflix, Hulu and Amazon Prime),
the late 1990s and 2000s saw a spate of educational programmes produced
by umbrella organizations that represent various sectors of the copyright
industries (software, music and/or motion pictures). Such programmes
typically provided a range of materials, exercises and gaming activities that
were intended for use in the classroom, thereby incorporating anti-piracy into
the school curriculum. Programmes typically targeted younger children
between the ages of 8 and 13. For example, there was the Friends of Active
Copyright Education (FA©E) initiative of the Copyright Society of the
USA – their child-oriented programme was called Copyright Kids. Also
notable was the SIAA’s (Software & Information Industry Association)
CyberSmart! school programme. A third was the BSA’s (Business Software
Alliance) ‘Play It Cyber Safe’ programme featuring the cartoon character
Garret the Ferret, aka ‘The Copyright Crusader’. A fourth campaign was
produced by the Government of Western Australia’s Department of
Education and Training, and is called ‘Ippy’s Big Idea’. A fifth campaign
was the MPAA’s ‘Starving Artist’ schools’ road show. This was a role-playing game designed for school children that was taken ‘on tour’ in 2003
to 36,000 classrooms across the USA. The game invited students ‘to come
up with an idea for a record album, cover art, and lyrics’ (Menta, 2003).
Having completed the exercise, the students were told that their album was
already available for download from the internet, and were asked ‘how they
felt when they realized that their work was stolen and that they would not
get anything for their efforts’ (Menta, 2003). All such campaigns attempted
to create a moral consensus that unauthorized copying is a form of theft,
and as such is as immoral as stealing someone else’s material possessions;
moreover, they also targeted the children’s parents in an attempt to warn them
of the possible legal repercussions if their children are caught engaging in
piracy.
In addition to anti-piracy curricula, industry and umbrella organizations
crafted advertising campaigns designed to dissuade potential pirates. One of
the most notorious appeared in the early 1990s from the Software
Publishers Association (SPA) called ‘Don’t Copy that Floppy’ featuring a
rapper speaking out against unauthorized copying of software. The ad was
distributed to high schools on VHS cassettes for use in the classroom. In the
early 2000s, the UK released a series of anti-piracy ads featuring a singing
troubadour deriding a man for engaging in activities like buying
counterfeit goods and downloading pirated movies, dubbing the man a
‘Knock-Off Nigel’ (Parkes, 2012). In the late 2000s the MPAA sponsored
an advertisement which appeared in movie theatres and on DVDs.
Commonly referred to as the ‘You Wouldn’t Steal a Car’ or ‘Piracy, It’s a
Crime’ campaign, the ad features a series of crimes like purse-snatching,
shoplifting and carjacking, and asserts that the viewer would not commit these
offences. It then draws an equivalence between these forms of theft and the
downloading of copyrighted films, all set to a steady, driving beat.
These campaigns were all part of an industry-fuelled public awareness
campaign designed to stigmatize copyright infringement.
5.7 Thinking Critically about Piracy
Statistics
Thus far, we have followed official understandings of the piracy problem,
and considered the ways in which legal and crime control initiatives have
emerged as a response. We also need to examine, however, the discourses
of copyright industries and law enforcement agencies with a critical eye. As
noted in previous chapters, crime (that which is criminalized) is not a
politically neutral category; rather, it inevitably reflects patterns of interest
and relations of power in society. Therefore, we cannot simply take the facts
and figures about piracy at face value. Rather, we must reflect upon them as
social constructions that may represent a partial point of view, and which
may be used by powerful social groups in pursuit of their own interests and
agendas.
The figures for software and media piracy that currently circulate within
commercial/industrial/media/criminal justice discourses are typically the
product of a set of inferences and calculative techniques that are open to
methodological challenge. First, there is a problem in that estimates of
overall piracy levels and trends are often extrapolated from detection and
conviction rates. An obvious problem with this is that the levels of piracy
detected, and the number of criminal cases brought and convictions
secured, will depend upon the level of policing activity. Therefore, apparent
increases in the levels of piracy may, in fact, be the result of more
intensified enforcement activity, as greater attention from enforcement
agencies will inevitably lead to a greater proportion of offences being
uncovered. Further problems arise with other attempts to quantify the scope
and scale of the problem. The allocation of monetary measures to losses
from piracy, for example, often takes the ‘legitimate product price’ as a
baseline. When the MPAA calculates its figures for losses to film piracy, the
assumption is that each unauthorized copy deprives the copyright holder of
the sale of one unit of the legitimate product: for example, the loss of
revenue from a ticket to see the film in a movie theatre or the purchase of
an authorized Blu-Ray, DVD or streamed copy of the film. As Drahos and
Braithwaite (2002: 93–94, 97) point out, this is based upon the erroneous
assumption that every consumer of an unauthorized copy would have
chosen, had the pirate version not been available, to pay the price asked for
the authorized exhibition or distribution of that same film (see also Lessig,
2004: 71). Quite simply, it may well be that while consumers are prepared
to pay the lower prices demanded for pirated versions of software, music or
movies, they are not prepared to pay the higher prices attached to the
legitimate offering – there is no inevitability that the consumers of the
‘pirate’ would otherwise have paid to purchase an authorized copy.
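The methodological point can be made concrete with a simple calculation. The sketch below is purely illustrative and uses invented figures rather than any industry data; the ‘substitution rate’ stands for the assumed fraction of pirate copies that would otherwise have been legitimate purchases:

```python
# Hypothetical illustration of how piracy 'loss' estimates depend on the
# assumed substitution rate (the fraction of unauthorized copies that
# displace a real sale). All figures are invented, not drawn from MPAA/BSA data.

unauthorized_copies = 1_000_000   # estimated pirated copies in circulation
legitimate_price = 15.00          # price of an authorized copy

def estimated_loss(substitution_rate):
    """Revenue 'lost' if only this fraction of pirates would otherwise have paid."""
    return unauthorized_copies * substitution_rate * legitimate_price

# Industry-style assumption: every pirated copy is a lost sale
industry_estimate = estimated_loss(1.0)    # 15,000,000.0
# More cautious assumption: only 1 in 10 would have paid the full price
cautious_estimate = estimated_loss(0.1)    # 1,500,000.0
```

Under the industry-style assumption the ‘loss’ figure is ten times larger than under the cautious one, even though the underlying level of copying is identical; the headline number is thus driven as much by the chosen assumption as by the behaviour being measured.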
A more general problem arises from the official reliance on partial industry
sources for piracy figures. Industry bodies have a vested interest in
maximizing the figures (whether in the form of the number of copies, lost
revenues or profits, lost jobs or lost taxation for the public purse). The
larger the figures are, the greater the pressure that can be brought to bear on
legislators and enforcement agencies in the drive for more rigorous action
(the US government, for example, derives its figures for piracy directly
from those provided by industry bodies, with little in the way of
independent corroboration). As the CEBR (2002: 29) succinctly puts it,
there is ‘nothing preventing the companies from overstating their losses
through counterfeiting for lobbying purposes’. Yet these industry
‘guesstimates’ usually become reified into incontrovertible facts, and
provide the basis for further discussion and action on the ‘piracy problem’.
From the industry’s viewpoint, the inflation of the figures is the starting
point for a ‘virtuous circle’: high figures put pressure on legislators to
criminalize, and enforcement agencies to police more rigorously; the
tightening of copyright laws produces more copyright theft as previously
legal or tolerated uses are prohibited, and the more intensive policing of
piracy results in more seizures; these in turn produce new estimates
suggesting that the problem continues to grow unabated, which then
legitimates industry calls for even more vigorous action. What the true
underlying levels and trends in piracy might be, however, remains
inevitably obscured behind this process of socio-statistical construction.
5.8 Thinking Critically about Intellectual
Property Rights
The emergence of internet piracy as a crime problem depends in a
fundamental way upon the claim that people can, and ought to, have
property rights in relation to the expression of ideas. The very language of
piracy is emotive, as it draws an equivalence between copying and
bloodthirsty seafaring brigands. Yet we must consider the partial and one-sided character of this viewpoint. By no means all commentators, or
creators of music, software and so on, agree with the claim that such
copying practices are harmful, or that they should be made the subject of
criminal sanctions. Some of the critical counter-arguments to the
‘demonization of piracy’ (Litman, 2000) are considered below.
First, it can be noted that the criminalization of piracy turns upon the idea
that ‘property rights’ are something natural and unquestionable. This notion
of property rights is often associated with the philosopher John Locke
(1632–1704), who argued that property rights emerge when ‘man’ adds
‘his’ [sic] ‘honest labour’ to nature, generating a product over which ‘he’
has not only rights of possession and use, but also of transfer (Lemos,
1975). The property ‘he’ has created is ‘his’ to do with as ‘he’ pleases.
Historical and cross-cultural investigations, however, illustrate many
alternative conceptions of property. For example, Erich Kasten’s fieldwork
among the First Nation tribes of the Canadian Pacific Northwest reveals
conceptions of property radically different from those dominant in the West;
among these peoples, primacy is given to obligations to ancestors to
preserve and transmit the group’s shared heritage intact to future
generations, not to individuals claiming proprietary rights which can be
exploited for commercial gain (Kasten, 2004: 10; also Onwuekwe, 2004:
66). Other examples of ‘non-Western’ approaches to property include the
adat customary laws of Indonesia and the collectivization of Yoruba art in
Nigeria, to name but a few (Story, 2003: 795–796). This anthropological
diversity in conceptions of property has recently inspired in the West
alternative notions of IP, based on the idea of the ‘commons’. The commons
refers to ‘a wide variety of creations of nature and society that we inherit
freely, share and hold in trust for future generations’ (Bollier, 2003: 2). As
such ‘creative works’ are seen as constitutive of ‘our common culture’
rather than the private property of individuals or organizations (2003: 2).
Advocates of the ‘natural rights’ position may respond by mobilizing the
moral principle that the one who creates something by virtue of his or her
‘honest labour’ justly deserves to enjoy benefits from that labour. As Martin
(1998: 38) points out, however, while we may agree that the individual
concerned deserves some ‘fitting reward’, this in no sense entails that this
reward should comprise the full market value of that which is produced, nor
does it in any way entail that the individual should have a right to determine
under what conditions others may make use of what has been created (see
also Epstein, 2004; Hettinger, 1989: 34–35). The notion that products are
naturally the property of individuals, and that production naturally confers
rights to control subsequent uses, may be viewed as a myth which functions
primarily to ‘facilitate the increasing consolidation of ownership’ (McLeod,
2001: 2) by the copyright industries. In this way, Mueller (2016: 333)
argues that piracy may constitute a form of resistance against a shrinking
commons as more creative content is being subsumed under an increasingly
small number of corporations’ control.
A second way in which piracy is criminalized is by asserting a
straightforward equivalence between tangible (material) and intangible
(ideative) properties. So, for example, anti-piracy campaigns place great
emphasis upon the claim that ‘violating copyright laws ... is equivalent to
stealing’ (Play It Cybersafe, 2005: 3). By recourse to everyday conceptions
of ‘stealing’ and ‘theft’, and by drawing analogies between tangible and
intangible goods, copyright holders justify the criminalization of piracy.
This equation between tangible and intangible goods, however, can be seen
as unpersuasive and misleading. Material objects are limited by nature as to
their use – if one person is using an object, this means others cannot. As
Martin (1998: 30) puts it: ‘If one person wears a pair of shoes, no one else
can wear them at the same time’. A tangible object ‘can only be in one
place at a time’ (Kasten, 2004: 21), and only utilized by one party at a time.
This limit to use justifies the prohibition on unauthorized use – if you take
my shoes and wear them, I am denied their use. Intangibles are
fundamentally different, however. The particular expression of an idea can
be taken and used by someone else, but this in no way deprives the
possessor of the original expression – they still have the original and can
make full use of it. Therefore, in the case of digital content (music, movies,
software), it can be endlessly reproduced, but this does not entail
dispossessing anyone. Through copying there is no limit to sharing the
good, without encroaching upon any owner’s possession of their version of
the good (Bunz, 2003: 8; Steinmetz and Tunnell, 2013: 62). Hence it is
impossible to ‘steal’ copyrighted content in the way that it is possible to
steal a physical object. It can be argued that in instances of piracy, no one is
deprived of their property, but only of possible opportunities to exploit
proprietary control over forms of expression for commercial gain. As we
shall see below, whether or not individuals ought to in fact be accorded
such rights of control is a hotly contested issue.
A third way in which piracy is criminalized is by recourse to the idea that
individual authors, creators, artists and so on will be materially harmed by
the practice of piracy. Hence, for example, the aforementioned ‘Starving
Artist’ role-playing game, which links piracy to the impoverishment of
those who create cultural works. This notion of harm is backed up by
aggregate numerical claims, such as the BSA’s assertion that software
piracy has resulted in ‘more than 111,000 jobs lost’ (BSA, 2005: 1). A study
by TERA Consultants (2010: 5) similarly claimed that in 2008 the
European Union lost approximately 185,000 jobs to piracy. Yet the claim
that piracy does direct material harm to individual ‘creators’ can be seen to
rest upon a significant factual inaccuracy. In today’s cultural economy,
authors and artists seldom retain control over copyright but routinely assign
those rights to corporate entities who then have virtual carte blanche over
decisions as to the work’s commercial exploitation. In 2000, rock musician
Courtney Love launched what has been dubbed the ‘Love Manifesto’, a
critical reflection on ‘IP theft’, artists and the recording industry. Love
begins her ‘Manifesto’ thus: ‘Today I want to talk about piracy and music.
What is piracy? Piracy is the act of stealing an artist’s work without any
intention of paying for it. I’m not talking about Napster-type software …
I’m talking about major label recording contracts’ (Love, 2000). She goes
on to demonstrate how standard practice within the recording industry
deprives musicians of copyrights, and the monies advanced to artists are
largely recouped from them by the industry under the guise of expenses for
recording and promotion. As a consequence, the musicians see little return
from their efforts and, she opines, ‘the band may as well be working at a 7-Eleven’ (Love, 2000). The appeal to artists’ well-being as an anti-piracy
strategy is based upon the erroneous claim that royalties earned from
authoring provide anything like a viable living. Critic and activist Brian
Holmes points out that, of the 100,000 members of the world’s best
established composers’ rights organization, only 2,500 have a vote on
organization policy, since the privilege of voting is reserved for those who
earn more than US $5,000 per annum from their compositions; in fact, he
claims that only 300 members of the organization make a viable living from
royalties derived from copyright (Holmes, 2003: 13). It has also been
argued that piracy is in the financial interests of most recording artists; most
performers make their living from concert performance, and this is best
supported and promoted by having their music circulated as widely as
possible, including via copying. As musician Ignacio Escolar puts it, ‘Like
all musicians, I know that 100,000 pirate fans coming to my shows are
more profitable than 10,000 original ones’ (Escolar, 2003: 15).
The foregoing discussion is intended to draw attention to the ways in which
internet piracy, like other forms of criminalized cyber-activity, draws its
rhetorical power from some commonplace ideas and ideologies. As the
drive to clamp down on copying via the internet builds apace,
criminologists need to pause and reflect upon whether the claims about its
harmfulness are justified, and whether curtailing practices such as file
sharing is in the interests of consumers or artists, or just of those large
corporations who increasingly control access to cultural goods.
5.9 Summary
This chapter has explored what is perhaps the most common form of
internet-related crime, that of piracy or the illegal downloading of software,
music and movies. The growth of the internet, combined with the move to
digital media, has created a medium through which copyrighted content can
easily be duplicated and distributed worldwide. Copyright industries now
claim that illegal copying results in multi-billion dollar losses every year,
and threatens the jobs and livelihoods of those working in the creative
sectors of the economy. Young people in particular appear to be enthusiastic
participants in piracy activities, and have increasingly become the focus of
both legal action and anti-piracy education campaigns. A wide array of
industry-financed anti-piracy organizations now exists, and they are
increasingly involved in policing activities. Recent years have also seen
major legislative innovations at both national and transnational levels in an
effort to curtail copyright offences. Critical voices have been raised,
however, against what is perceived as the incremental criminalization of
cultural communication and exchange, and have argued in favour of the free
circulation of knowledge and ideas. The issue of piracy is consequently
becoming a political and ideological battleground on which the meanings of
crime and deviance are negotiated and contested.
Study Questions
What is intellectual property and what kinds of form does it take?
How does the internet facilitate piracy?
How have copyright industries, governments and law enforcement bodies responded to the
growth of illegal copying online?
How might we explain the high levels of involvement in such practices by young people?
Whose interests does the criminalization of copying serve? Is there a case for decriminalizing
piracy?
How might we rethink intellectual property rights in the internet age? Should we reinforce the
old model or adopt new ways of thinking about copyright?
Further Reading
An accessible introduction to intellectual property and IP law is the World Intellectual
Property Organization’s WIPO Intellectual Property Handbook: Policy, Law and Use (2001).
A comprehensive and detailed guide is Lionel Bently and Brad Sherman’s Intellectual
Property Law (2001). For discussions of copyright in relation to music, see Simon Frith and
Lee Marshall’s (eds) Music and Copyright (2004). For a critical take on IP generally, see Peter
Drahos and John Braithwaite’s excellent Information Feudalism: Who Owns the Knowledge
Economy? (2002); equally useful (and engaging) is Siva Vaidhyanathan’s book Copyrights
and Copywrongs: The Rise of Intellectual Property and How it Threatens Creativity (2003).
Similarly, the reader may enjoy Lawrence Lessig’s Free Culture: The Nature and Future of
Creativity (2004) as well as Michael Perelman’s Steal this Idea: Intellectual Property and the
Corporate Confiscation of Creativity (2002). For a reimagining of copyright law, refer to
Lawrence Lessig’s Remix: Making Art and Commerce Thrive in the Hybrid Economy (2008).
For the copyright industry perspective on piracy, consult the websites of the Motion Picture
Association of America (www.mpaa.org), the Recording Industry Association of America
(www.riaa.org) and the Business Software Alliance (www.bsa.org). For a critical analysis of
claims about the piracy problem, see Majid Yar’s ‘The global epidemic of movie “piracy”:
Crime-wave or social construction?’ (2005a) and ‘The rhetorics and myths of anti-piracy
campaigns: Criminalization, moral pedagogy and capitalist property relations in the
classroom’ (2008). Students interested in learning more about specific high-profile cases of
piracy and pirates should check out the 2017 documentary Kim Dotcom: Caught in the Web as
well as the 2013 documentary TPB AFK: The Pirate Bay Away from Keyboard.
6 Cyber-frauds, Scams and Cons
Chapter Contents
6.1 Introduction
6.2 Scope and scale of online fraud
6.3 Varieties of online fraud
6.4 Online fraud: perpetrators’ advantages and criminal justice’s
problems
6.5 Strategies for policing and combating internet frauds
6.6 Summary
Study questions
Further reading
Overview
Chapter 6 examines the following issues:
the nature and extent of online frauds;
the various forms taken by frauds, such as auction fraud, advance fee frauds and website
spoofing;
the ways in which the move to the internet proves advantageous for fraudsters and presents
problems for law enforcement and policing;
the emerging and ongoing strategies for combating internet fraud, along with their limitations.
Key Terms
Advance fee fraud
Identity theft
Internet auctions
Phishing
Social engineering
Spoofing
6.1 Introduction
Gottfredson and Hirschi (1990) make a valuable distinction between crimes
according to the means by which they are perpetrated: while some crimes
resort to force (e.g. breaking and entering or mugging), others proceed by
fraud or deception. Fraud, known more colloquially as ‘scams’ and ‘cons’, is
a far from new phenomenon. History is replete with examples of such
crimes, ranging from the large-scale and audacious to the small-scale and
petty (Maurer, 2000; Moore, 2000). In legal terms, fraud can be defined as:
‘A false representation by means of a statement or conduct made knowingly
or recklessly in order to gain material advantage’ (Martin, 2003: 211).
Frauds therefore part victims from their money or property by recourse to
misinformation and deception. One may be deceived as to the true value of
something that one is purchasing, as to its authenticity, as to whom it
actually belongs and so on. One may also be deceived as to who the
individual is with whom one does business: they may fraudulently present
themselves as someone else, as representatives of some legitimate
organization, as the holder of some occupational or professional role to
which they in fact have no claim, and so on. The deception furnishes the
critical tool with which the victim is induced to voluntarily part with their
money or goods (as opposed to being compelled to do so as in instances of
crimes of force). This chapter explores such offences taking place in online
environments, something that has proliferated rapidly – as one commentator
put it, the internet ‘can be viewed as a breeding ground for fraud’ (Fried,
2001: 1).
6.2 Scope and Scale of Online Fraud
As with many other kinds of cybercrime, analysis of internet fraud has been
hindered by a relative lack of systematic and/or official data. Fortunately,
tremendous inroads have been made recently in an attempt to measure
online fraud across various countries (see Button and Cross, 2017: 13–20).
Multiple countries, for example, have established agencies which document
reported fraud and other cybercrime incidents. One of the primary reporting
agencies is the USA’s Internet Crime Complaint Centre (IC3). The IC3
(formerly the Internet Fraud Complaint Centre [IFCC]) is an
organizational collaboration between the FBI and the National White Collar
Crime Centre (NW3C), whose primary role is to receive public reports of
cybercrimes and refer them to the relevant criminal justice agencies for
action. The IC3 publishes an annual report of online frauds, compiled from
the public complaints they receive.
In 2017, the IC3 received over 301,000 complaints, the vast majority of
which related to frauds of various kinds (other, much smaller categories of
complaints received related to issues such as computer intrusion and child
pornography) (IC3, 2018: 1). This figure represented a significant increase
over the preceding decade: in 2000 the IC3 received just short of 17,000
complaints, rising to 231,000 in 2005 (IC3, 2012: 6). This total, however,
was a slight decrease from earlier in the decade, as the IC3 documented
approximately 314,000 complaints in 2010 (IC3, 2012: 6). The financial
loss incurred by victims from these offences was estimated as totalling
more than $1.4 billion, almost a billion more than reported in 2010 (IC3,
2018: 1; 2012: 6). Such figures must of course be treated with caution, as
some (unknown) proportion of the increase between 2000 and 2017 may
well be attributable to greater reporting of offences, rather than solely to the
increase in their prevalence. On the other hand, the recent levelling-off of
complaints to IC3 may give the misleading impression that fraud incidents
and other cybercrimes are not increasing. Instead, victims may be more
likely to report frauds to third-party organizations (banks, online auction
websites, etc.) rather than file an official report. In total, the data thus gives
us an idea that online fraud is a significant issue, though gaining an accurate
picture of its scope or scale based on IC3 data alone is difficult.
Countries outside of the US have also begun developing official reporting
agencies that collect data on fraud. In Australia, the Australian Competition
and Consumer Commission’s (ACCC) Scamwatch and the Australian
Cybercrime Online Reporting Network (ACORN) – established in 2014 as
‘the central reporting agency for cybercrime within Australia’ – collect
national fraud reporting data (Button and Cross, 2017: 17). In 2017, they
received over 200,000 fraud reports, roughly on a par with the prior
year (Rickard, 2018: iii). That said, they had previously documented a 47
per cent increase in fraud reports from 2015 to 2016 (Rickard, 2017).
Losses in 2017 were estimated at $340 million AUD (Rickard, 2018: iii).
Canada also has an official reporting agency called the Canadian Anti-Fraud Centre (CAFC). As Button and Cross (2017: 18) explain, ‘the CAFC
is the central reporting agency for mass marketing schemes for victims
across Canada, but also those from outside where there is a Canadian link’.
In 2017, the organization reported over $110 million CAD lost to these frauds
across 71,793 complaints.1 The resulting average equals $1,536.77 lost per
complaint. In the UK, frauds can be reported to the Action Fraud
organization. This data is generally not reported on its own and is, instead,
combined with victimization survey data in regular statistical reports
published through the Office for National Statistics (described below).
1 Data for the Canadian Anti-Fraud Centre were acquired by directly
contacting the organization through their Facebook page.
In addition to official reporting data sources, multiple countries have begun
administering victimization surveys which measure online fraud
victimization (for an overview, see Button and Cross, 2017: 13–17).
Examples in the US include the National Crime Victimization
Survey’s 2015 identity theft supplement, which found that ‘17.6
million persons, or 7% of all US residents age 16 or older, were victims of
one or more incidents of identity theft in 2014’ (Harrell, 2015: 1) and the
2011 US Federal Trade Commission Consumer Fraud Survey that estimates
approximately 11 per cent of US adults were victims of consumer fraud in
2011 (Anderson, 2013: i). In the UK, there is the Office for National
Statistics Crime in England and Wales data. Their data on fraud
victimization found that, in 2017, 7 per cent of persons were targeted by fraud
across England and Wales, with 72 per cent of victims subjected to bank and
credit account fraud, 25.1 per cent to consumer and retail fraud, 1.9 per cent
to advance fee fraud and 1.1 per cent to other forms of fraud (ONS, 2018b:
Table E1).2 Further, this victimization data revealed that only 14 per cent of
frauds were reported to the police or the UK’s Action Fraud agency (ONS,
2018a: 9). This survey also explored why victims may not report data to the
Action Fraud centre, finding that 66 per cent of respondents were unaware
of this reporting agency, 10 per cent thought the incident would be ‘reported
by another authority’ and 8 per cent believed that the ‘incident was too
trivial or not worth reporting’ (ONS, 2018a: 11). The UK also regularly
publishes the National Fraud Indicator report. This report was originally
produced by the National Fraud Authority, but this body was dissolved in
2014. The report was renewed by the UK Fraud Costs Measurement
Committee (UKFCMC) and affiliated organizations in 2016 (UKFCMC,
2017: 2). In this report, they estimate that identity theft cost UK adults
approximately £1.3 billion. In Australia, victimization surveys are
administered by the Australian Bureau of Statistics (ABS). The data reveal
that between 2014 and 2015, 8.5 per cent of the population 15 or older were
victims of fraud with losses estimated to be approximately $3 billion AUD
(ABS, 2016a). Based on victimization data from these countries, Button and
Cross (2017: 15) argue that ‘a conservative figure of between 5 and 10 per
cent of the adult population falling victims to fraud would be appropriate’.
2 Specific percentage breakdowns across fraud categories in the Office of
National Statistics Crime in England and Wales survey were calculated
from their appendix tables for 2017, available at
https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/da
tasets/crimeinenglandandwalesappendixtables.
A wide assortment of online frauds exists, seemingly limited only by the
creativity of fraudsters. It would therefore be impossible to provide a
comprehensive overview of fraud varieties. The next section will therefore
cover select examples of major online frauds including internet auction
frauds, banking frauds, investment frauds, romance frauds and advance fee
frauds. Additionally, brief consideration will be given to a particular genre
of information security related fraud which emanates from hacker culture
called ‘social engineering’.
6.3 Varieties of Online Fraud
6.3.1 Internet Auction Frauds
A significant proportion of reported internet frauds involve online auction
sites such as eBay. Such online auctions have expanded rapidly over the
past two decades; eBay now has more than 170 million active buyers,
purchasing $88.4 billion worth of goods in 2017 (eBay,
2018: 1). Auction sites provide a forum for bringing together buyers and
sellers of a bewildering array of goods (cars, clothes, consumer electronics,
books, movies, music, antiques, art, collectibles such as stamps, coins and
other memorabilia, DIY equipment, home furnishings, sport and leisure
goods, concert and theatre tickets, office equipment, cosmetics, land and
properties, food and drinks, and so on). Internet auction frauds accounted
for 5.83 per cent of all fraud complaints received by the IC3 in 2014 (IC3,
2015). Roughly $19 million in losses were reported during this period as a
result of online auction frauds, equating to approximately $879 in losses per
complaint; however, this average figure disguises wide variations, with
some individuals being defrauded of just a few dollars, while others will
lose thousands or even tens of thousands.
In the past, auction sites have publicly denied that fraud is an extensive
problem. Examining recent eBay shareholder reports, however, indicates
that eBay may have changed their tune regarding auction fraud, having
stated that ‘failure to deal effectively with fraudulent activities on our
platforms would increase our loss rate and harm our business, and could
severely diminish merchant and consumer confidence in and use of our
services’ (eBay, 2017: 21). Thus while fraudulent auctions may comprise a
small minority of overall transactions on the site, this minority is
anything but insignificant for both users and auction websites. Varieties of
auction fraud are outlined below.
The ‘Fencing’ of Stolen Goods
Criminologists have argued that finding a viable market for stolen goods is
crucial for the success of criminal enterprises involving property theft.
Finding a buyer for stolen goods may, in fact, be more challenging than
perpetrating the theft itself. Sutton (1998: 1), a pioneer of the ‘market
reduction approach’ to crime control, claims that ‘success or failure to
convert stolen property into cash appears to play an important part in
whether they [thieves] continue to offend’. Therefore success in finding a
market for such goods may actively stimulate theft, whereas failure may act
as a disincentive, since previous thefts have failed to yield a financial
return. Traditionally, stolen goods have been ‘fenced’ locally, with buyers
being sought among friends, workplace colleagues, neighbourhood
acquaintances, and in pubs and clubs (Sutton et al., 2001). Transporting
stolen goods outside the locality runs the risk of interception by police,
customs and other authorities, as well as the problem of doing business with
people one does not know from previous personal contact. Moreover, these
locally focused markets provide sites for policing and surveillance:
pubs, clubs and street corners can be watched, local informants can alert
police to the presence of hawkers of stolen merchandise, and so on. The
development of internet auction sites, however, provides thieves with a
global market through which to sell stolen items to unsuspecting customers.
A growing number of such cases have been documented, some involving
large-scale organized thievery and sophisticated internet auction operations
for their disposal. Examples include:
a family from Chicago, Illinois regularly shoplifted over the course of
10 years and sold the stolen merchandise over eBay for an alleged total
of $4.2 million (CBS Chicago, 2014)
a woman from Dallas, Texas shoplifted from craft and hobby stores
between 2009 and 2013 and made between $400,000 and $1
million selling the goods on eBay (Marusak, 2015)
a group of amateur motorcycle thieves in Austin, Texas, who dismantled
stolen bikes and sold them online as spare parts for $15,000 (Cha, 2005)
a fraudster operating out of Roswell, Georgia worked with identity
thieves who would use fraudulent credit cards to purchase expensive
consumer electronics and other goods. He would purchase these items
from the thieves at 60 per cent of retail value and sell them over eBay.
He ran this con for over 10 years (USDOJ, 2013)
instances involving the internet auction of, among other stolen items,
antiquarian books, M29 air guns, high school band equipment, car
parts, dinosaur fossils and archaeological finds (ABC News Online,
2005; cnet, 2005; Scuka, 2002).
Non-delivery of Items
A common form of internet auction fraud reported by victims is the non-delivery of items for which they have paid. One study estimates that 0.62
per cent of auction buyers do not receive the goods for which they have
paid; a rate 62 times higher than that suggested by auction providers
themselves (Gavish and Tucci, 2008: 91). In such cases, the putative seller
advertises non-existent items for sale. Bidders are informed that they have
won the auction in which they have participated, and are requested to
forward the appropriate payment. However, they receive no goods in return.
Fraudulent ‘sellers’ typically register themselves with the auction site under
a bogus ID, thereby frustrating attempts by defrauded customers and law
enforcement agencies to bring them to book (Curry, 2005: 2). Looking
beyond internet auctions to all forms of fraud reported to the IC3,
non-payment and non-delivery were the most prevalent categories, with
84,079 complaints (IC3, 2018: 20).
Product Inauthenticity and Misrepresentation
Such misrepresentation can take a number of forms. For example, it may
involve attributing inflated retail values to items so as to represent the
auction prices as substantial savings; it may also involve misrepresenting
the condition of the item (Fried, 2001: 4), as when used goods are
advertised as new, or items in poor working condition are advertised as
fully functional. One of the most common forms of misrepresentation takes
the form of selling counterfeit goods that are advertised as authentic items
(Treadwell, 2011). There is growing evidence that auction sites are
extensively used for trading counterfeit DVDs, CDs and computer software
packages, as well as counterfeit clothing, perfumes and other items (Enos,
2000; Treadwell, 2011). Importantly, product misrepresentation also occurs
outside the context of online auction fraud; the IC3 (2018: 20) received
reports of 5,437 cases of product misrepresentation in 2017. The issue of
peddling counterfeit goods online expands beyond auction sites to other
online marketplaces. For example, the Amazon Marketplace, which allows
independent vendors to sell goods through the Amazon.com platform, has
become a hotbed for the distribution of counterfeited goods (Hern, 2018).
Shill Bidding
Shill bidding is the act of a seller bidding on their own items in order to
drive up the final price: the seller places false bids by either (a) using multiple
fake identities or aliases to place bids on their own items, or (b) arranging
for associates to place bids for the items with no intention of actually
purchasing them. Such ‘shilling’ serves to artificially force up the price of
the item, so as to maximize the money that can be extracted from legitimate
bidders (Fried, 2001: 4). Though precise estimates are difficult to come by,
early figures claimed that only 2 per cent of fraud complaints concern this
type of deception (Dolan, 2004: 12): it is very difficult for legitimate
bidders to detect whether other persons bidding for the same items
are genuine buyers or ‘shills’ for the seller. Consequently, the vast majority
of such frauds may well pass undetected by buyers, unlike cases of
misrepresentation of goods (which will be detected upon delivery) or cases
of non-delivery of purchased items. Therefore, shill bidding may well be far
more prevalent than reported incidents suggest.
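The detection difficulty described above can be illustrated with a toy heuristic: one weak (and inconclusive) shill signal is a bidder who repeatedly drives up one particular seller's auctions yet almost never wins. The sketch below is purely illustrative; the function name, data format and thresholds are our own assumptions, not a method drawn from the fraud literature cited here.

```python
from collections import defaultdict

def flag_possible_shills(bids, min_auctions=3, max_win_rate=0.1):
    """Flag bidder/seller pairs where the bidder joins many of one
    seller's auctions yet almost never wins -- a weak shill signal.

    `bids` is a list of (bidder, seller, auction_id, won) tuples.
    """
    stats = defaultdict(lambda: {"auctions": set(), "wins": 0})
    for bidder, seller, auction_id, won in bids:
        key = (bidder, seller)
        stats[key]["auctions"].add(auction_id)
        stats[key]["wins"] += int(won)

    flagged = []
    for (bidder, seller), s in stats.items():
        n = len(s["auctions"])
        if n >= min_auctions and s["wins"] / n <= max_win_rate:
            flagged.append((bidder, seller))
    return flagged

bids = [
    ("alice", "seller1", 1, False),
    ("alice", "seller1", 2, False),
    ("alice", "seller1", 3, False),  # always bidding, never winning
    ("bob",   "seller1", 1, True),   # ordinary buyer
]
print(flag_possible_shills(bids))    # → [('alice', 'seller1')]
```

Real platforms would combine many such signals (payment details, network addresses, bid timing), precisely because any single heuristic of this kind produces false positives.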
Other types of auction fraud involve methods such as: ‘fee stacking’, where
the seller adds on additional fees after the auction has been completed,
thereby increasing the agreed price for the item; failing to deliver the item,
giving the seller a refund but deducting some money on the grounds of
‘administration costs’; ‘buy and switch’, where a winning bidder will swap
the purchased item for a damaged but otherwise identical item they already
possess, and claim a refund – so that they will end up with both an
undamaged item and their money, while the seller is left with damaged
goods (Curry, 2005: 2–3).
Auction sites use various mechanisms that in theory make transactions
reliable and minimize the possibility of fraud. For example, eBay and other
sites invite sellers and buyers to leave feedback and ratings on the
transactions that they have undertaken: Was the buyer/seller with whom
they dealt satisfactory? Was the product as advertised and expected? Were
the goods delivered promptly? Feedback for each user is readily available to
others contemplating doing business with them, so they can check the
individual’s or company’s track record as reported by past customers. If a
buyer/seller has positive feedback, then this supposedly reassures others
that they are safe and reliable (Boyd, 2002). However, this feedback
mechanism is easily exploited: for example, the fraudster can register with
the site using multiple false identities and leave positive feedback for him- or herself, or arrange for accomplices to do the same. This then creates a
positive track record that will help encourage potential victims to enter into
a transaction. Moreover, in cases involving the sale of stolen goods, buyers
may have no reason to complain as they are likely to be blissfully unaware
that the item they have purchased at a bargain price is in fact stolen
property.
6.3.2 Advanced Fee Frauds or the Nigerian Scam
One of the oldest and best-known frauds or ‘cons’ goes by the name of ‘the
Spanish Prisoner’, so called because it dates back to the time of Francis
Drake, the Anglo–Spanish war and the Spanish Armada in the sixteenth
century. The fraud took the following form. A well-to-do young English
gentleman is approached by a man purporting to be an exiled Spanish
noble. He tells the Englishman he has been forced to flee his political
enemies at home. He then produces a portrait of a beautiful young woman,
saying that this is his sister, who is being held prisoner in Spain. The
nobleman says he needs the help of a principled Englishman to help him
arrange for his sister’s escape from her captors, since he was forced to flee
Spain without money and has no resources of his own. He then tells the
Englishman that his sister has in her possession a fortune in jewels and
gems, the family’s inherited riches. The man who is willing to help with the
sister’s escape will be given a substantial share of the jewels in return;
moreover, his beautiful sister will be ‘eternally grateful’ to the foreign
gentleman for helping to free her. The Englishman, dazzled by the prospect
of instant riches and the adoration of a beautiful Spanish noblewoman,
readily agrees to advance the Spaniard the funds he needs. Subsequently,
the Spaniard may well return on a number of occasions and relate how
further problems have arisen, and that more money is needed to ensure the
sister’s safe release. The Englishman continues to furnish further funds,
until he has no more to give; at this point, the fictitious Spanish nobleman
simply disappears, taking the Englishman’s money with him. This fraud
has, over the centuries, reappeared in various forms: for example, in the
early twentieth century, numerous wealthy Americans were defrauded in the
‘Mexican prisoner’ scam, a reworking of the Spanish con in a new setting.
In recent decades, the latest incarnation of this fraud has been the so-called
‘Nigerian Letter Scam’, otherwise known as the ‘419 fraud’ (named after
the relevant section of the Nigerian penal code prohibiting this kind of
activity) (USDS, 1997: 4). In the Nigerian fraud, the potential victim is
contacted by mail, fax and, more recently, via email. The communication
purports to come from a former or current government official or his son,
daughter or widow. It tells how the individual has a fortune (often ill-
gotten) in Nigerian bank accounts, but the money cannot be safely extracted
because of the turbulent political situation in the country. A trustworthy
foreigner is needed to whom the funds can be transferred. In return for
making his or her account available, the foreigner will be given a
substantial share of the millions of dollars that will be extracted from
Nigeria. However, since all the Nigerian’s resources are trapped in this
account, he or she needs the foreigner to advance some money to make the
transfer possible (hence the term ‘advance fee fraud’). A typical example of
a Nigerian scam email is presented in Figure 6.1. The victim may well give
substantial sums repeatedly to the fraudster, in the hope of becoming an
instant millionaire once the monies have been successfully released.
Needless to say, there are no millions, and having ‘taken’ the victim for
large sums, the ‘Nigerian’ simply disappears. Nigerian scams are not the
only kind of advanced fee fraud, however; a wide assortment of variations
of the con now exists. All are unified by a central element –
a person is promised a large reward in the future if money is sent to the
fraudster in the present.
Since the rapid expansion of the internet from the mid-1990s onwards, the
fraudsters have moved from using mail and fax to mass emailings as a
means of contacting their potential victims. Available data indicates
significant losses for victims as a result of these frauds. The ACCC
Scamwatch system (2018b) claims reported losses of $1,665,373 AUD in
2017 due to 419 scams, averaging $1,294 per victim. The IC3 (2018: 20–
21) estimated $57,861,324 in losses across 16,368 advanced fee
fraud cases (including 419 frauds), resulting in average losses per case of
$3,535. Such frauds constitute 5.4 per cent of crimes reported to IC3. The
ONS Crime in England and Wales survey reports 56,000 cases of advanced
fee fraud in 2017 or 96 per 100,000 in the population (ONS, 2018a: 21).
Interestingly, the ONS data shows a 53 per cent decrease in reported
advanced fee frauds from the previous year.
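The per-case averages quoted above follow directly from the reported totals. For instance, dividing the IC3's total advanced fee fraud losses by the number of reported cases reproduces the figure in the text:

```python
# Reproducing the IC3 (2018) advance fee fraud average from the totals
# given in the text: $57,861,324 in losses across 16,368 cases.
total_losses = 57_861_324   # USD
cases = 16_368

average_loss = total_losses / cases
print(f"Average loss per case: ${average_loss:,.0f}")  # → Average loss per case: $3,535
```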
6.3.3 Romance Scams
Online romance or dating scams appear to have become increasingly
commonplace. Such frauds exploit the rapid rise of online dating sites.
Typically, the fraudsters hook their victims by feigning romantic interest,
and then exploit the victims’ emotional investment in the ‘relationship’ to
procure monies and gifts. A commonplace strategy is to claim personal
hardships and family tragedies in order to pressure the victim into offering
financial ‘support’. Research indicates that persons ‘high in romantic
beliefs’ and having a ‘tendency toward idealisation of romantic partners’
were more likely to fall for romance scams (Whitty and Buchanan, 2012:
4). Whitty (2013: 670–671) provides an example of a romance scam which
should give the reader an idea of how intense and insidious these scams can
be:
Some of the women believed they were communicating with an
American soldier working in Iraq. They were told that the fake persona
was in love with them, about to finish their service and wanted to retire
by moving to the United Kingdom and marry the victim. In the process
of the move, they asked the victim if they would accept bags
containing personal goods for them to take care of, whilst they get
ready for their move. After acceptance, the victim received a phone
call from someone who claimed to be a diplomat and was looking after
her supposed lover’s bags. The victim was told that, given the flight
had been delayed, the diplomatic seals were about to expire and that
money was needed to ensure that the bags could still be sent through to
the victim. The victim sent money next to learn they were too late and
that the bags had been sent to customs and x-rayed, only to find that
there was money and gold in the bags. This led to a series of new
reasons why the victim was asked to pay money, including paying
money to the fake persona who claimed to have been wounded in Iraq.
Some women were asked to go to Ghana to sign documents so that
they could claim the money. When they arrived, they were kidnapped
and brought by men with guns to an isolated room and were left there
for a couple of days. Eventually, the men with guns showed them some
of the money and provided them with a document to sign and they
were then released. The women went back to the United Kingdom still
believing in the con and continued to give money until they were
informed by law enforcement that they had been victims of the
romance scam.
As these scams prey on victims’ emotions and longing for love and
companionship, they can be extraordinarily damaging. Financially, in 2017,
the IC3 (2018: 20–21) received more than 15,000 complaints from victims
of these and similar scams, who lost an average of $13,751 each, amounting
to a total of $211 million. The ACCC’s (2018a) Scamwatch statistics show
$20.5 million AUD losses in 2017 across 3,763 cases resulting in average
losses of $5,456. Beyond these substantial monetary damages, however, victims suffer
significant emotional distress, feeling by turns foolish and betrayed (Button
and Cross, 2017; Whitty and Buchanan, 2016). While the relationship may
have been a fraud, it often feels very real for the victims. They thus grieve
its loss and struggle to cope, an experience which is compounded by the
fact that family and friends often do not understand their trauma (Whitty
and Buchanan, 2016). Given these emotional dynamics, it is thought that there
are a huge number of such offences that go unreported; one study estimated
that there could be as many as 200,000 victims of dating scams annually in
the UK alone (Walker, 2011).
6.3.4 Phishing and Spoofing
One of the most recent types of internet fraud takes the form of bogus
emails and/or websites intended to induce victims to voluntarily disclose
passwords and other details that can then be used to access their bank and
credit card accounts. These take two forms, which are usually used in
combination to perpetrate the fraud: ‘phishing’ (from ‘fishing’) for
information via email and ‘spoofing’ via bogus versions of legitimate
financial services websites.
Phishing involves the mass distribution of emails, casting a wide net to
capture potential marks. While phishing can be used for an assortment of
frauds, it is perhaps most associated with frauds purporting to originate
from banks, credit card companies and e-sellers that attempt to trick victims
into handing over sensitive information like passwords, user names, social
security numbers, etc. (James, 2005). The more sophisticated variants of
these emails use logos and wording associated with the legitimate company
in order to cultivate the impression that the email is genuine (for a typical
example of a phishing email, see Figure 6.2). These emails may ask users to
confirm their account information and other details and send them back to
the fraudster directly. Often, however, the emails request that the
account holder visit the bank, credit card company or other website to
refresh their personal and other details in order to update their account. The
account holders will be provided with a hyperlink that supposedly takes
them to the company’s website where they can log on and enter the
appropriate information. The hyperlink will in fact direct them to a bogus or
spoofed website that looks identical to the legitimate company website
which account holders are accustomed to using. Here they will log on and
enter their details. The fraudsters therefore gain access to the users’
passwords and other security and authentication information (date of birth,
etc.), which can then be used to empty bank accounts or steal from credit
cards. In addition to incorporating spoofed webpages, phishing emails may
also act as a vehicle for delivering malware onto computers, tricking the
person into downloading a malicious attachment disguised as something as
innocuous as a document or photo file. The 2013 Target breach, for
instance, which compromised the credit card information of millions of
customers, started with a phishing email sent to a third-party contractor who
worked with the corporation (Smith, 2014).
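The redirection trick described above turns on the gap between a link's visible text and the address it actually points to. The minimal check below is purely illustrative; the domains are invented for the example, and real mail filters use far more elaborate analyses.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect (href, visible_text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Return links whose visible text looks like a URL but whose
    hostname differs from the one actually linked to."""
    parser = LinkExtractor()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        if "." in text and "/" not in text.split(".")[0]:
            shown = urlparse("//" + text.split("/")[0], scheme="http").hostname
            actual = urlparse(href).hostname
            if shown and actual and shown != actual:
                flagged.append((text, href))
    return flagged

# Invented example: the text shows one bank-like domain, the href another.
email_body = ('<p>Please visit <a href="http://secure-login.example-fraud.net">'
              'www.mybank.example.com</a> to update your details.</p>')
print(suspicious_links(email_body))
```

Running this prints the mismatched pair, flagging the link as suspect; a link whose visible text and target hostname agree would pass the check.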
Figure 6.1 Example of a ‘Nigerian Letter Scam’ email
Other kinds of phishing emails exist, such as ‘spear-phishing’, which
involves emails custom-tailored to the mark. For instance, a scammer
may pretend to be someone closely affiliated with the target to better gain
their trust and compromise their security. The idea is that a target is more
likely to click a link or download an illicit attachment if they believe it is
coming from a person they know. A related tactic is ‘whaling’, or spear-phishing that focuses on high-ranking targets within an organization, like a
CEO. These kinds of emails may not grab as many targets as a general
phishing approach, but they are more likely to be successful against a
specific target.
In addition to phishing through emails, fraudsters have also turned toward
the mass distribution of fraudulent messages through cellular text messages,
a tactic called ‘smishing’ (a blend of ‘phishing’ and SMS, the ‘short
message service’ protocol widely used for text messaging). Similar to
phishing emails, smishing messages will try to elicit information from
targets under false pretences and may ask users to visit spoofed websites on
their smartphones. Though the tactic is well worn, increased attention is
also being given to what many now call ‘vishing’ or phone-based phishing.
These cons often rely on phone number lists or random-digit dialing
strategies to call potential marks en masse. When a target picks up their
phone, a fraudster will jump on the line and generally attempt to trick the
recipient into purchasing goods or services under false pretences or merely
attempt to solicit information to be used in later crimes (such as identity
theft).
As internet usage has exploded over the past few decades, so too has the use
of phishing by fraudsters. By 1999, spoofing or ‘page-jacking’, wherein
web users are redirected to bogus but authentic-looking versions of genuine
websites, was thought to affect some 25 million pages, or 2 per cent of the
total sites then available on the internet (Grazioli and Jarvenpaa, 2000: 3).
According to the Anti-Phishing Working Group (APWG), there were over
50,000 ‘unique phishing websites’ detected on the internet for June 2017
alone (APWG, 2017: 3). They also report over 92,000 ‘unique phishing
email reports’ or phishing ‘campaigns’ from consumers for the same period
(2017: 3). The phishing and spoofing emails and websites identified in June
2017 hijacked the brands and credentials of 452 companies, such as banks
and e-retailers (2017: 3). The sophistication of these emails and sites is such
that even experienced internet consumers are usually unable to identify
them as fraudulent. For instance, Bakhshi et al. (2009) conducted a phishing
campaign at a university and found they could dupe 23 per cent of their
sample into surrendering sensitive information. In a similar study, Greening
(1996) found that 47 per cent of the students in his sample provided valid
passwords in response to a researcher-designed phishing inquiry. Jagatic et
al. (2007) designed an experimental study that tested the likelihood of
surrendering sensitive information based on perceived email senders. Their
study found that 72 per cent of students who thought a phishing email was
coming from an acquaintance fell for the scam. Only 16 per cent of students
who were sent phishing emails from strangers succumbed to the ruse.
Such frauds are effective and difficult to detect because they exploit the
ways in which trust is socially organized, in both terrestrial and virtual
settings. In our dealings with others, we normally make inferences as to
their trustworthiness on the basis of their appearance or ‘self-presentation’
(Goffman, 1969; Rutter, 2000: 4). If others conform to our typical
understandings of trustworthiness (i.e. they look and behave according to
our expectations of appropriate appearance and conduct), we take them to
be exactly what they claim to be. So, for example, if we encounter on the
street an individual dressed in a police uniform, and behaving according to
our expectations about how a police officer normally acts, we take it as
given that this person is in fact a police officer and not someone merely
pretending to be one. Fraudsters and con artists are adept at borrowing the
tools of self-presentation so as to convince others that they are someone
they are in fact not (Bergmann, 1969; Nash, 1976). Phishing and spoofing
effectively appropriate the markers we use to make judgements about
trustworthiness and authenticity (the logos, the brand names, the
‘professional’ appearance of the websites) in order to deceive us into
believing that they are legitimately what they purport to be. Only after the
victim has been defrauded does it become apparent that their trust in the
other’s appearances was fundamentally misplaced.
6.3.5 Investment Fraud
A further prominent form of online fraud involves the trading in stocks and
shares. The growth of online share trading has created unprecedented
opportunities for fraudsters to exploit.
Investment frauds can take a number of forms. The first and most
straightforward is the solicitation of investments in non-existent companies.
As early as 1996, the US Securities and Exchange Commission (SEC), the
stock market regulatory body, had brought several cases to court involving
such fraudulent online investment schemes (Ryder, 2011). Fraudsters
contact potential investors by means such as mass email, online investment
newsletters, bulletin boards and chat rooms (Beresford, 2003: 2). Another
common form of investment fraud is so-called ‘pump-and-dump’ schemes.
In such cases, the fraudster contacts the victim claiming to have inside
information about a stock-market listed company (usually a small enterprise
with a limited market capitalization) which he or she claims is about to ‘go
through the roof’; that is, its share values will imminently rise rapidly once
the ‘secret’ information goes public. The investor is induced to buy stock in
the company in expectation of a rapid and spectacular return. The sudden
purchasing of stock by victims pushes up the company’s share price
(‘pumping’ the stock). Once the price has risen, the fraudster ‘dumps’ his or
her own stock in the company in order to cash in on the temporary rise. The
stock then inevitably falls following the dumping, often to below the price
at which the victims purchased it, leaving them out of pocket, while the
fraudster reaps a healthy profit at their expense (Beresford, 2003: 2–3;
Fried, 2001: 6). Of the complaints about online fraud reported by the IC3
during 2017, investment scams entailed the highest mean dollar losses to
victims – $31,351 (IC3, 2018: 20–21). The most recent report from the
ACCC similarly states that ‘investment scams had the highest losses
reported in 2017 with reports to Scamwatch and ACORN exceeding $64.6
million [AUD]’ (ibid.: 2).
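The mechanics of a pump-and-dump scheme can be made concrete with some toy arithmetic (all figures below are invented for illustration): the fraudster accumulates stock cheaply, the victims' induced buying inflates the price, and the fraudster sells into that artificial rise before the inevitable collapse.

```python
# Hypothetical pump-and-dump arithmetic (all figures invented).
fraudster_shares = 100_000
buy_price = 0.50        # fraudster quietly accumulates stock
pumped_price = 2.00     # price after victims pile in on the 'tip'
post_dump_price = 0.30  # price after the fraudster dumps

# Fraudster: buys low, sells at the artificial peak.
fraudster_profit = fraudster_shares * (pumped_price - buy_price)

# A victim who bought 1,000 shares at the peak and is left holding them.
victim_shares = 1_000
victim_loss = victim_shares * (pumped_price - post_dump_price)

print(f"Fraudster's profit: ${fraudster_profit:,.2f}")  # $150,000.00
print(f"Victim's loss:      ${victim_loss:,.2f}")       # $1,700.00
```

The asymmetry is the point of the scheme: the fraudster's gain is funded entirely by the inflated prices paid by the victims, who are left holding near-worthless stock.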
Figure 6.2 Example of a ‘phishing’ email
6.3.6 Social Engineering
Though information security hacks may involve sophisticated computer
tricks to break or bypass security systems, sometimes the most lethal tool in
the security hacker’s toolkit is social engineering: the influence,
manipulation or deception of the people involved in information security.
This genre of fraud focuses on tricking people into handing over sensitive
information like login credentials, social security numbers and any other
details that could be used to compromise a person’s or organization’s
security (Button and Cross, 2017: 81; Drew and Cross, 2013; Hadnagy,
2014). As Thomas (2002: 62) explains, ‘oftentimes, social-engineering
skills will be the primary way in which hackers get system access’. For
many in the hacking and security fields, a common saying is that humans
constitute the weakest link in security; why bother cracking a password
when a person could likely have that information phished out of them?
As a term, social engineering predates contemporary information security
practices. It originally described an occupation that emerged in the late
1800s and early 1900s that portrayed humans and even society as similar to
machines that could be manipulated to operate more efficiently and
effectively (McClymer, 1980). The term was eventually adopted by phone
phreaks – predecessors to the contemporary hacker underground that
focused on exploring and manipulating telephone systems – sometime in
the 1960s or 1970s to describe their practice of convincing phone company
employees to facilitate their endeavours, usually under false pretences
(Lapsley, 2013). Phreaks were also known to use these skills to pull off a
wild assortment of pranks (McLeod, 2014). Since this time, the term has
become embedded into contemporary hacking and information security
parlance to describe any number of strategies for getting sensitive
information from people.
Many of the strategies used by ‘social engineers’ have been reviewed here
including phishing, vishing, smishing and others (for a more comprehensive
overview of strategies, see: Hadnagy, 2011, 2014; Mitnick and Simon,
2002). All of these tactics and others are brought to bear as nigh-indispensable tools for compromising security. They have also become a
central feature of hacker culture and the broader security scene. For
instance, DEF CON regularly hosts a social engineering village, multiple
books have been published dedicated to social engineering (for example,
Hadnagy, 2011, 2014; Mitnick and Simon, 2002) and security conferences
regularly feature talks regarding the vulnerabilities created by the human
element of information security. Some of the most well-known hacker tales
feature social engineering as well. For example, social engineering was
central to the crimes and stunts of (in)famous hacker Kevin Mitnick through
the 1980s and 1990s which involved phone phreaking, network intrusions,
data exfiltration and fleeing from the law (Mitnick and Simon, 2011). The
purpose of highlighting the phenomenon in this chapter is to point out that
forms of online fraud may be more than individual acts of deception and
manipulation. Fraud has become a firmly entrenched component of hacker
culture and the broader field of information security. Social engineering
helps us remember that issues we normally think of as being ‘technical’ in
nature – like security hacking – are just as often social and psychological as
well. The so-called ‘human factor’ is just as important to consider
(Leukfeldt, 2017).
6.4 Online Fraud: Perpetrators’
Advantages and Criminal Justice’s
Problems
The perpetration of frauds in online, rather than terrestrial, environments
offers a number of advantages to the fraudster, and a number of
corresponding difficulties for law enforcement agencies.
6.4.1 Anonymity and Disguise
From the point of view of fraudsters, the internet offers valuable
opportunities to disguise themselves and their identities. They can present
themselves under false aliases, or steal the identity of some other
unsuspecting and innocent person for the purposes of committing their
offences (i.e. identity theft: see Cabinet Office, 2002; Copes and Vieraitis,
2012; Gordon et al., 2004; Smith, 2010). They can effectively change their
personal and social characteristics, such as age, gender, ethnic group,
country of residence and so on. They can also adopt multiple fictitious
identities that can be used in frauds to present themselves as previous
satisfied customers in order to win the trust of their victims. Even when the
commission of a fraud has been detected, identifying the individual(s)
responsible may prove extremely difficult; the trail may lead to an innocent
third party whose identity has been stolen, or it may simply prove
impossible to find the party behind the aliases and false personae (Fried,
2001: 9). Additionally, fraudsters are increasingly hiding behind encrypted
networks to conduct their frauds and distribute stolen information (Button
and Cross, 2017: 85). The increasing popularity of cryptocurrencies like
Bitcoin may also coincide with fraudsters using these currencies to help
prevent their fraudulent transactions from being traced.
6.4.2 Jurisdiction
A second area in which the internet offers advantages for fraudsters is their
ability to exploit the problems of policing across national jurisdictions. The
identification of, and bringing criminal sanctions against, fraudsters located
outside the jurisdiction in which the victim is located are beset with
problems. First, it is time-consuming and costly for law enforcement
agencies, which are often struggling to address other crime and order issues that
take public and political priority over fraud. Second, taking such action
requires the cooperation of law enforcement agencies in other countries;
while there have recently been moves to institutionalize transnational
agreements for such cooperative investigation of cybercrime, they are still
under development. In addition, many scams may operate out of countries
with little interest in pursuing and prosecuting such crimes (Button and
Cross, 2017: 84). Third, websites such as online auction houses may prove
uncooperative, fearing that negative publicity about frauds may deter
customers, or that they may themselves be held legally liable for facilitating
the offences (as happened in June 2004 when the jewellers Tiffany & Co.
filed a suit against eBay for allegedly facilitating and participating in
counterfeiting and false advertising [Flahardy, 2004: 1]). Fourth, the ability
of particular countries to take effective action against online fraud through
regulation and legislation is severely hampered by the transnational
character of the offences. So, for example, in 2001 the Japanese National
Police (NPA) floated a range of legislative proposals in response to the
growing problems of internet auction fraud, particularly the sale of stolen
goods. The proposed laws, however, quickly came in for criticism. Gohsuke
Takama, a Japanese internet security expert, stated:
Applying one country’s local laws to any Internet industry will cause
problems – especially international jurisdiction issues – and the people
who put this legislation together clearly didn’t consider that. How can
a used-goods broker law in Japan be applied to an Internet auction
company which is registered outside of Japan, owned by non-Japanese,
has transactions processed through a credit card company located
outside of Japan, operates on web servers located outside of Japan, but
has Japanese pages? (quoted in Scuka, 2002)
This case exemplifies the problem of transnational regulation and policing
when it comes to internet fraud. Taken together, the above problems are
likely to severely hinder the possibility of a satisfactory resolution for the
victim. Hence it comes as no surprise that victimization surveys relating to
online fraud reveal high levels of dissatisfaction with the law enforcement
outcomes following the lodging of complaints; Dolan’s (2004: 16) survey
of victims of auction fraud found that 51 per cent were ‘not satisfied with
the response they received from law enforcement agencies’.
6.4.3 Under-reporting
As with other forms of cybercrime, there are significant issues related to the
under-reporting of offences. Victims of online frauds may be reluctant to
report their victimization for a number of reasons:
they may decide that the relatively small amounts of money involved
do not make pursuing the matter worthwhile
they may feel embarrassed about having been taken in by a fraudster
in cases where the scheme in which they participated involved
seemingly illegal activity (such as trading in shares on the basis of
insider information), they may fear that they themselves will be
charged
they may not know to whom to report the offence, or there may be no
authority which has been allocated responsibility
they may judge that there is little likelihood of a successful outcome,
especially if the person who has defrauded them is seemingly located
in another country.
All these factors may give fraudsters a sense of security in the expectation
that they are unlikely to suffer legal consequences and sanctions as a result
of their schemes. Moreover, fraud may be significantly under-reported by
organizations as the offenders may be organizational members or
employees, a problem commonly referred to as ‘insider threat’ (Collins et
al., 2016). In addition to fraud, threats posed by malign insiders can involve
intellectual property theft, disruption or destruction, embezzlement, insider
trading and other crimes (Williams et al., 2018: 3). In their recent study,
Collins and colleagues (2016: 6) found that 44 per cent of malicious insider
threats to organizations (those not involving espionage or unintentional
damage) involved fraud. Such frauds may go un- or under-reported because
they can be potentially embarrassing for these organizations. Such reporting
issues should not obscure, however, the fact that frauds and thefts by insiders can be significantly detrimental to organizations and employers, particularly
for small businesses (Kennedy and Benson, 2016).
6.4.4 Cost Efficiencies and Savings
A final factor to be considered is the considerable cost advantages garnered
by using the internet in fraud schemes. For example, while in the past
advance fee fraudsters would seek potential victims through the relatively
costly and time-consuming practice of posting letters and sending faxes,
now they are able to compose and distribute emails to tens of thousands of
internet users in a matter of minutes and at virtually no cost. Similarly,
those engaging in investment fraud no longer need to produce expensive,
glossy brochures to send to potential victims, or laboriously cold call
individuals via telephone; instead, emails and professional-looking
investment newsletters can be cheaply and quickly composed on any PC
and rapidly disseminated via free email accounts. Such factors considerably
reduce the start-up costs for frauds and massively extend the number of
potential victims that can be approached. The internet also has advantages
for those fencing stolen goods, as they no longer need to go to the expense
(and indeed risk) of transporting the goods to various locations where they
can be hawked; instead, they can be kept in storage until a sale has been
made online, when they can be sent via post at the expense of the purchaser.
6.5 Strategies for Policing and Combating
Internet Frauds
There are a number of emerging and potential strategies by which the
growing problem of internet fraud may be tackled. These are considered
below.
6.5.1 Legislative Innovation and Harmonization
There is a pressing need to update national fraud legislation in
order to take account of the innovative forms of fraud that occur in online
environments. Some of the crucial areas include the need to establish
appropriate criteria for electronic evidence, and for the redefinition of
evidentiary categories traditionally used in prosecuting fraud cases, such as
the understanding of what constitutes a ‘document’, ‘writing’, a ‘signature’
and so on (Smith and Urbas, 2001: 5). Many countries have taken concerted
steps in this direction. For example, recent measures have included the
Identity Theft Enforcement and Restitution Act of 2008 (US) and The
Fraud Act 2006 (UK). Others, however, lack the appropriate legal
instruments for effective action. There is an equally urgent need to
internationally harmonize such provisions in order to enable the prosecution
of internet fraudsters in cross-national cases. The establishment of
international treaties and conventions (such as the Council of Europe
Convention on Cybercrime) will assist in this regard, but there remains at
present a limited degree of symmetry between internet fraud provisions
worldwide.
6.5.2 Centralized Reporting and Coordination
As already noted, one of the difficulties facing the policing of internet fraud
has been the widespread absence of relevant agencies to which victims can
make a complaint. Fortunately, considerable inroads have been made in this regard, with many countries establishing their own fraud reporting centres. As
detailed previously in this chapter, such agencies include the Internet Crime
Complaint Center (IC3; US), Action Fraud (UK), Canadian Anti-Fraud
Centre (CAFC), Scamwatch (Australia) and the Australian Cybercrime
Online Reporting Network (ACORN). These kinds of central reporting
agencies should be encouraged as they offer a two-fold benefit. First, they
provide a centralized, national point of contact for complainants, enabling
reported frauds to be passed on to the most appropriate law enforcement
agency for investigation and action. Second, their recording activities fulfil
a strategic function, helping to detect trends and patterns in online fraud,
such as the appearance of new kinds of scams. Though improvements have
been made, more efforts are needed to create central reporting agencies
worldwide.
6.5.3 Specialized Training and Coordination in
Law Enforcement
One of the most pressing issues for the policing of online fraud is the lack of
specialized expertise and training. Resources need to be made available for
the recruitment and retention of staff with the necessary technical skills,
such as computer forensics (Smith and Urbas, 2001: 4). Similar problems
arise with the lack of adequate procedures for inter-agency cooperation and
coordination, at both national and transnational levels. In the latter area, the
establishment of international cybercrime law enforcement bodies, such as
Europol’s European Cybercrime Centre (EC3), marks a move toward a more focused partnership
between national-level agencies and investigations.
6.5.4 Technological Anti-fraud Innovation
A final area of action relates to the development of technologies to curtail
the perpetration of online fraud. Measures include:
innovations in user authentication technologies, such as the use of
biometrics and digital signatures (Wright, 1998: 10; W. Lawson, 2002:
2–3)
the use of web traffic analysis, visitor profiling, artificial intelligence
solutions, machine learning, data mining, big data and related solutions
to identify potential cases of fraud and possibly provide early warning
about suspicious and fraudulent patterns of use on websites such as
auction houses (Experian, 2002: 12; Raj and Portia, 2011)
schemes for website certification and endorsement, such as the
WebTrust programme pioneered in the USA (Smith, 2003: 16).
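The pattern-analysis approach mentioned above can be loosely illustrated with a toy sketch. The example below is purely illustrative: the data fields, threshold and logic are invented for this book, and production fraud-detection systems at large marketplaces are vastly more sophisticated. It flags auction listings whose prices deviate sharply from the norm for their category, a crude proxy for the ‘too good to be true’ offers often associated with auction fraud.

```python
# Illustrative sketch only: a toy statistical anomaly detector of the
# general kind described above. Field names and the z-score threshold
# are invented for the example.
from statistics import mean, stdev

def flag_suspicious(listings, z_threshold=3.0):
    """Return listings whose price deviates sharply (by z-score)
    from the average for their category."""
    by_category = {}
    for item in listings:
        by_category.setdefault(item["category"], []).append(item)

    flagged = []
    for items in by_category.values():
        prices = [it["price"] for it in items]
        if len(prices) < 3:
            continue  # too little data to judge a category
        mu, sigma = mean(prices), stdev(prices)
        if sigma == 0:
            continue  # all prices identical; nothing stands out
        for it in items:
            if abs(it["price"] - mu) / sigma > z_threshold:
                flagged.append(it)
    return flagged
```

A real system would of course combine many more signals (seller history, account age, shipping patterns, buyer complaints) and typically use trained models rather than a fixed statistical rule; the sketch merely conveys the underlying idea of flagging statistical outliers for human review.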
However, the deployment of any or all of the above cannot be seen as a
technological panacea, as each entails its own problems and limitations.
The use of more stringent authentication technologies can prove expensive
and may deter legitimate customers from using e-commerce facilities,
particularly as they raise concerns about excessive intrusion and
surveillance (van der Ploeg, 2000). For instance, the previously referenced
eBay stockholder report explicitly details the company’s concerns regarding
balancing quality of service with security:
Although we have implemented measures to detect and reduce the
occurrence of fraudulent activities, combat bad buyer experiences and
increase buyer satisfaction, including evaluating sellers on the basis of
their transaction history and restricting or suspending their activity,
there can be no assurance that these measures will be effective in
combating fraudulent transactions or improving overall satisfaction
among sellers, buyers, and other participants. Additional measures to
address fraud could negatively affect the attractiveness of our services
to buyers or sellers, resulting in a reduction in the ability to attract new
users or retain current users, damage to our reputation, or a diminution
in the value of our brand names. (eBay, 2017: 21–22)
Similarly, the deployment of web traffic analysis, profiling, artificial intelligence, machine learning, big data, and other techniques is often resource-intensive and may only be viable for the largest e-business sites
(such as Amazon, which already utilizes such systems). A problem with
certification and endorsement schemes is that there is no agreed standard by
which users can judge whether or not a site is legitimate: for example, by
2002 there were already some 20 different ‘Webseals’ or certification
schemes in circulation in Australia alone (Smith, 2003: 14–15). Moreover,
past experience with the use of technological e-crime countermeasures has
shown that determined offenders are highly adept at finding ways to
circumvent and exploit such safeguards. Consequently, technical solutions
may only offer a temporary respite from the problems of internet frauds.
6.6 Summary
This chapter introduced a range of fraudulent activities that are perpetrated
by using online facilities such as emails, websites and auction houses. Such
frauds can take a range of more or less sophisticated forms, and can prove
difficult to tackle. Problems encountered include the under-reporting of
victimization for a variety of reasons; the difficulties in identifying
perpetrators who usually make recourse to using false or stolen identities;
the lack of centralized and coordinated law enforcement agencies with
adequate resources to tackle the problem; and problems of jurisdiction
arising from the transnational character of many of the offences. It has been
noted that the move from the terrestrial to virtual environment offers many
advantages for fraudsters, encouraging them to participate in what can be an
increasingly lucrative and relatively low-risk form of criminal enterprise.
Strategies for responding to cyber-fraud have been explored, but it is clear
that there is no simple tool, technique or policy that can curtail fraudulent
activities online. As an ever-greater volume of commercial transactions is
being undertaken in online environments, so we can expect a corresponding
ongoing growth in cyber-frauds, scams and cons.
Study Questions
What is the distinctive nature of fraud, and how does it differ from other kinds of criminal
activity?
How do fraudsters exploit mechanisms of trust in order to successfully defraud their victims?
How might characteristics of the internet such as its global reach and online anonymity assist
fraudsters and frustrate law enforcement?
What are the drawbacks of turning to technology as a ‘quick fix’ for internet fraud?
What might internet users themselves do in order to reduce their risk of victimization?
Further Reading
A useful overview of the issues surrounding online fraud is provided by Russell Smith in
‘Identity theft and fraud’ (2010) and by Mark Button and Cassandra Cross in Cyberfrauds,
Scams and their Victims (2017). For more detailed and up-to-date information about the
patterns and levels of online fraud, see the IC3’s annual Internet Crime Report. For a
detailed study of auction frauds, see Kyle Dolan’s, ‘Internet auction fraud: The silent victims’
(2004). For a valuable overview of the challenges presented by large-scale but low-impact
frauds (so-called ‘micro frauds’), see David S. Wall’s, ‘Micro-frauds: Virtual robberies, stings
and scams in the information age’ (2010). Current data on trends in phishing and spoofing can
be found on the Anti-Phishing Working Group’s website at www.antiphishing.org, which also
contains an interesting archive of past emails. Monica Whitty and Tom Buchanan have
conducted ground-breaking research on romance scams which researchers should consider
(Whitty, 2013; Whitty and Buchanan, 2012, 2016). A valuable theoretical exploration of issues
relating to online trust is Jason Rutter’s ‘From the sociology of trust towards a sociology of “e-trust”’ (2000). Detailed discussion of legislative and technological counter-measures to
combat online fraud can be found in Russell Smith’s paper, ‘Travelling in cyberspace on a
false passport: Controlling transnational identity-related crime’ (2003). The journal Computer
Fraud and Security is a valuable resource for reports on the latest research and anti-fraud
initiatives as they emerge.
7 Illegal, Harmful and Offensive Content
Online From Hate Speech to ‘the
Dangers’ of Pornography
Chapter Contents
7.1 Introduction
7.2 Thinking about ‘hate speech’
7.3 Hate speech online
7.4 Legal, policing and political challenges in tackling online hate
speech
7.5 The growth and popularity of internet pornography
7.6 Legal issues relating to internet pornography
7.7 Summary
Study questions
Further reading
Overview
Chapter 7 examines the following issues:
the debate surrounding the criminalization of ‘hate speech’;
the proliferation of such speech online, and the legal and law enforcement challenges it
presents;
the prevalence of pornographic material online;
criminal justice issues around internet pornography, such as children’s access to sexually
explicit online content and the circulation of supposedly ‘extreme and violent pornography’.
Key Terms
BDSM
First Amendment rights
Hate speech
Indecency
Legal pluralism
Obscenity
Pornography
Trolling
7.1 Introduction
The growth of the internet has been hailed by many as a powerful medium
for free speech and open communication. The online environment enables
individuals and groups to share ideas, experiences and information in ways
previously impossible using older media. This free flow and exchange
are viewed as the lifeblood of democratic cultures (McGonagle, 2001: 21)
and the internet as an invaluable means by which citizens can bypass
restrictions on speech imposed by authoritarian and repressive
governments. The political and social value of such internet communication
is indicated by the burgeoning of online civic and community groups, and
by the rise of direct action movements pursuing goals such as global social
justice and human rights. The liberation of communication enabled by the
internet, however, also has much darker and problematic dimensions. The
celebratory attitude toward internet speech inevitably runs into difficulties
when confronted with online content that may be illegal, or deemed
offensive and harmful to individuals, communities or the social fabric. This
chapter explores the complex web of issues that arise in relation to illegal,
offensive and/or harmful content online. Two forms of online content will
be examined. First, there are the various forms of discriminatory ‘hate
speech’ which target individuals or groups on the grounds of their ‘racial’,
ethnic, religious, gender, sexual or other characteristics. Second, there are
those forms of (visual and written) representation that might be considered
sexually offensive, such as so-called ‘hardcore’ and violent pornography
(debates relating to child pornography and the sexualized representation of
minors will be considered separately in the next chapter). Such online
content raises a range of thorny issues. For example, in relation to both hate
speech and pornography, some forms of representation are illegal and some
are not; the same representation may be permissible in one country but not
in another; regulating access to such content online can prove difficult (as
when young children may be exposed to highly explicit adult sexual
content). Some may argue for the prohibition of such content on the grounds of harm, while others may resist such restriction in the name of
freedom of speech. Even where the content is illegal, the transnational
nature of the internet presents serious challenges for law enforcement.
Debates relating to offensive and harmful content are, in short, among the
most contentious issues relating to the internet. As we shall see, the legal
and social regulation of internet speech presents a variety of challenges that
are only just starting to be addressed.
7.2 Thinking about ‘Hate Speech’
The term ‘hate speech’ has emerged in recent decades, initially in response
to concerns about the ways in which the use of racist language contributes
to discrimination against, and incites hatred and violence towards, minority
ethnic individuals and groups. Concern about the social consequences of
such language emerged in a number of countries out of various historical
and political contexts. For example, in Germany the political and cultural
trauma of Nazism and anti-Semitism focused attention upon forms of
expression that might contribute towards the revival of racial intolerance
and hatred (Brugger, 2002; Timofeeva, 2003: 260); the issue was further
intensified from the 1970s onwards in Germany with the arrival of large
numbers of so-called Gastarbeiter or ‘guest workers’ from countries such
as Turkey. In the UK, the tensions and conflicts arising from extensive post-war immigration from British Commonwealth countries (especially from
the West Indies and South Asia) created political pressures to address the
discrimination and hostility directed towards ‘newcomers’ by segments of
the majority white population; similar developments occurred in France in
the context of migration from former French colonial countries in North
Africa (Bleich, 2003: 2–3). In the USA, the Civil Rights movement of the
1960s inspired a range of anti-discrimination measures aimed at realizing
the goal of social inclusion for African-Americans and overcoming a legacy
of prejudice emanating from the era of slavery (McCormack, 1998). From
the 1970s onwards, other social constituencies which had historically
suffered varying forms of discrimination, exclusion and violence also
started to campaign for action against demeaning, hateful and prejudicial
forms of representation. From these movements, there emerged an
understanding of hate speech that encompassed derogatory representations
not just on grounds of ‘race’ or ethnicity, but also including dimensions
such as religion, gender, sexual orientation and disability. From this context,
then, hate speech can perhaps best be understood as speech which
denigrates a person or group in a way that promotes violence or otherwise
creates a hostile environment on the basis of race, colour, creed, ethnicity,
sexual orientation, religion, nationality, immigration status, disability, or
similar characteristics.
Recognition of the need to take legal and institutional action against such
forms of expression has, however, been far from universal. So, for example,
recent years have seen in many Western countries a heated and acrimonious
debate about so-called ‘political correctness’ and the regulation of language
use (see, for example, Hughes, 1993; Schwartz, 2003). The pressure to
legally prohibit hate speech is founded on a perception that such utterances
are actively harmful to the individuals and communities that they target.
The harms issuing from hate speech can be understood in two distinctive
ways, each of which will be considered briefly below.
First, hate speech is seen as harmful in that it plays a pivotal role in inciting
and condoning acts of violence and discrimination against target
populations. So, for example, US law prohibits ‘those words that are
directed to inciting or producing imminent lawless action and are likely to
incite or produce such action’ (Steinhardt, 2000: 253; also Biegel, 2003:
330–331). Hence threats of violence ‘consisting of a serious expression of
intent to commit an imminent act of violence against an identified
individual or group, may be punished’ (Wendel, 2004: 1410). In the UK,
legal provision goes much further, with legislation such as the Race
Relations Act 1965 and the Public Order Act 1986 effectively prohibiting
the incitement of racial hatred and defamation on grounds of ethnic or
national origin, even in the absence of any likelihood of imminent lawless
action following the utterance (Steinhardt, 2000: 253, 258). Most recently,
the law has been extended under the Racial and Religious Hatred Act 2006, providing for criminalization of incitement to religious hatred;
this was deemed necessary to close the loophole where incitement to racial
hatred could be undertaken under the disguise of criticizing an individual’s
or group’s religious beliefs. Impetus for such change has been driven in part
by the rise of Islamophobia in many Western countries in the period since
terrorist attacks in New York, Madrid and London (Runnymede Trust,
1997; Sayyid and Vakil, 2011). These more wide-ranging provisions can be
seen to enshrine the view that hate speech lays the groundwork for violence
through its non-immediate but nevertheless significant impacts on social
beliefs by contributing to a culture of bigotry. Hate speech is seen here as
part of an interconnected array of mechanisms of subordination, both
symbolic and physical, which support each other (Lederer and Delgado,
1995: 5). Thus Tsesis (2002) draws attention to many historical
instances in which hate speech has legitimated and encouraged a climate in
which violence and abuse can take place: examples include the upsurge of
anti-Semitic language preceding the Nazi Holocaust and the dehumanizing
representation of Africans that supported the slave trade (Tsesis, 2002: 51–
52, 101–106). More recently, we can note the demonization of the Tutsi
ethnic group in Rwanda, which played an integral part in the 1994 genocide
that led to the massacre of 800,000 people by their fellow citizens from the
Hutu ethnic group (Wendel, 2004: 1404). In all these cases, the gradual
build-up of hate speech created a structure of beliefs that supported violence
on a near incomprehensible scale. It is the desire to forestall the
development of such a culture that legitimizes the restriction of hateful
speech and representation.
The second way in which the harmfulness of hate speech is understood
turns not so much on its ability to incite acts of violence and abuse, as upon
the view that such utterances are themselves intrinsically harmful to those
who are targeted. Patterns of abusive speech directed at individuals and
groups do damage to those targeted, undermining their selfhood and
violating their human dignity. Exposure to such speech can lead the victim
to internalize the insults directed at them, and produce negative
psychological effects such as feelings of humiliation and emotional distress
(McGonagle, 2001: 3–4; also Delgado, 1982). In one of the few empirical
studies of the impact of (racist and sexist) hate speech upon individuals,
Nielsen (2002) found that the experience of insults delivered by strangers in
public places left victims feeling fearful, intimidated and threatened.
Similarly, Delgado and Stefancic (2009: 362) identify a range of impacts
‘including depression, repressed anger, diminished self-concept, and
impairment of work or school performance’. From this perspective, insults
are themselves forms of injury, with effects every bit as real as those
resulting from acts of physical violence.
7.3 Hate Speech Online
Online hate speech is by no means a unique phenomenon, but rather an
extension of utterances (both written and verbal) that occur in the terrestrial
world. The expansion of the internet has seen an unwelcome proliferation
of such speech. This takes a number of forms, which are explored below.
The most noticeable online hate speech takes the form of websites (and
associated chat rooms and bulletin boards) established by organized
political groups. These are typically far right, ultra-nationalist, white
supremacist and neo-Nazi in orientation. In addition, the USA has seen an
extensive online presence of extreme Christian fundamentalists, anti-abortion and anti-government militia groups. Moreover, the rise of the so-called ‘Alt-Right’ – a strain of extreme political conservatism characterised
by implicitly or explicitly racist and white supremacist ideology – has made
itself glaringly visible online (Tait, 2018). Such sites variously contain
offensive and hateful representations of Blacks, Jews, Muslims, Arabs,
other people of non-European origin, women, LGBTQ individuals, and
persons with physical and mental disabilities. One of the earliest and most
notorious online hate websites is Stormfront. It was founded in 1996 by a
former Ku Klux Klan leader, Don Black, and became a focal point for
online white supremacy (Vysotsky and McCarthy, 2016: 451). Since the
founding of Stormfront, online hate sites have proliferated. The precise
number of such websites is difficult to gauge, as they often appear,
disappear and move location (Craven, 1998: 3–4; Schafer, 2002: 72).
Moreover, as there is no precise agreed definition of what might comprise
‘hateful content’, there is room for variation in whether or not a particular
site may or may not be included in different studies. It is clear, however,
that their number has grown rapidly over recent years. Ken Stern, a lawyer
specialising in anti-Semitism and extremism, claimed that in 1997 there
were some 300 such sites in the USA; by 1999, the Simon Wiesenthal
Center in Los Angeles had documented the existence of 600 ‘hate sites’ on
the internet (Whine, 2000: 235); this figure had risen to 11,500 by 2010
(Lohr, 2010). The Southern Poverty Law Center (SPLC) have identified the
existence of 954 different hate groups in the US (SPLC, 2018), the vast
majority of which will have an online presence and garner significant
numbers of followers on social media. Moreover, the circulation of such
speech online appears to have moved from marginal spaces associated with
extremist groups to more mainstream social media platforms such as
YouTube and Facebook. For example, Citron and Norton (2011) note the
existence of Facebook pages such as Kill a Jew Day and We Hate GAY
People, while YouTube has featured videos with titles such as How To Kill
Beaners, Execute the Gays and Murder Muslim Scum. There is some (albeit
limited) empirical research evidence that exposure to such hateful messages
can be ‘persuasive in changing inter-group attitudes’ and thereby reinforce
discrimination and prejudice (Hargrave and Livingstone, 2009: 169).
The use of the internet to disseminate messages offers a number of
advantages to hate groups. As for other political movements and
organizations, the Web provides extremists with an extremely efficient and
cost-effective means of communication, enabling actors with limited
financial and material resources to reach a potentially global audience
(Schafer, 2002: 71). Moreover, the degree of protection it affords speakers
(through anonymous postings, use of pseudonyms, etc.) makes it possible to
propagate hate speech with a significantly reduced risk of identification and
prosecution under anti-hate speech laws (Deirmenjian, 2000: 1021).
Further, given the generally unpalatable nature of their utterances, extremist
groups find it difficult to secure avenues to air their views through
conventional, mainstream media; the internet neatly bypasses this
dependence, granting such groups access to and control over the means of
communication. Finally, the internet enables extremists to tailor their
messages to appeal to particular target audiences who might be deemed
open to influence (Craven, 1998: 4). So, for example, white supremacist
organizations allow visitors to download ‘white power music’, typically
heavy metal accompanied by ‘violent, hateful and profane lyrics’ (Schafer,
2002: 78). Offering such material is seen as an effective way to capture the
attention of young people, providing a conduit for disseminating ideologies
of violence. In sum, as Don Black, former ‘Grand Dragon’ of the Ku Klux
Klan, states: ‘as far as recruiting, [the Internet has] been the biggest
breakthrough I’ve seen in the 30 years I’ve been involved in [white
nationalism]’ (quoted in Wolf, 2004: 4). In recent years, far-right and neo-Nazi groups have made extensive use of new social media to exchange
views and build support for ideologies that vilify and target minority
groups. For example, the far-right Norwegian bomber and spree killer
Anders Behring Breivik used the internet to make contact with like-minded
extremists across Europe and in the USA, and to disseminate his 1,500 page
anti-Muslim ‘manifesto’ prior to killing 77 people in July 2011 (Rayner,
2011). More recently, white supremacists associated with the so-called ‘Alt-Right’ have used online meme culture (an online culture heavily reliant on
transferring messages through in-jokes and viral images) to appeal to the
sensibilities of internet-savvy young white people such as through the
infamous ‘Pepe the Frog’ or the ‘Sad Frog’ meme which was used by
online white supremacists and misogynists (importantly, this meme is not
always used in a hateful manner and its creator, Matt Furie, has actively
fought its co-optation by hate groups [see, for example, Sommerlad, 2018]).
Such use of meme culture was apparent among white supremacists at the
‘Unite the Right’ protest rally in Charlottesville, Virginia in which one
person was killed by a white supremacist and 19 were injured (Coaston,
2018; Washington Post, 2017).
A second form taken by internet hate speech is the direct distribution of
such content to targets through methods like email messages. There have
been a number of such cases documented in the USA. In one instance,
Richard Machado, a student at the University of California, Irvine, sent an
email to 59 mainly Asian students. In his email, signed ‘Asian hater’, he
asserted that ‘I personally will make it my life career [sic] to find and kill
everyone one [sic] of you personally. OK?????? That’s how determined I
am...’ (Wolf, 2004: 6). He went on to demand that they leave the university
if they did not wish to be killed. The effect on his targets was pronounced:
‘many ... were prepared to arm themselves with pepper spray, became
suspicious of strangers, and refused to go out alone in the dark’
(Deirmenjian, 2000: 1020). In another case, 22-year-old Kingman Quon
sent threatening emails to hundreds of Latino employees at universities,
corporations and government agencies across the USA. He stated that
Latinos were ‘too stupid’ to gain entry to university or get jobs without the
help of ‘affirmative action’ (positive discrimination) programmes, and said
that he intended to ‘come down and kill them’ (Wolf, 2004: 8). In a third
case, 19-year-old Casey Belanger, a student at the University of Maine in
New England, posted a message on the University’s gay/lesbian/bisexual
internet discussion system (expletives deleted):
I hope you die screaming in hell ... you’d [sic] better watch your ...
back you little ... I’m [sic] gonna shoot you in the back of the ... head
... die screaming [name of student], burn in eternal ... hell ... I hate
gay/lesbian/bisexuals. (Kaplan and Moss, 2003: 8)
In all three of the above cases the offenders were successfully prosecuted,
as all had made threats of violence against specific individuals (that is to
say, their speech contained a ‘serious expression of intent to commit an
imminent act of violence against an identified individual or group’).
Another method of directly disseminating hate speech to targets online has
been social media like Facebook, Twitter and YouTube. For instance, before
he killed six people in a mass murder event in 2014, Elliot Rodger posted a
video on YouTube discussing his motivations for killing. This video was
filled with self-righteous anger stemming from a misogynistic sense of entitlement to sex: he claimed that women had ‘forced’ him to be celibate by refusing to have sex with him, called himself the ‘Supreme Gentleman’, and declared that he planned to ‘slaughter every single spoiled, stuck-up, blonde slut’ in the sorority house he targeted as part of his rampage
(Rodger, 2014). The video is not only a testimonial to his motivations, but
also a broader threatening message of hate directed at women more
generally. Rodger has since been celebrated within the ‘incel’ (a portmanteau of ‘involuntary celibate’) online subculture, which is composed of men who view themselves as wrongfully deprived of sex by women and who couch their discussions in misogynistic language. Incels have used venues like Reddit
(where they are now banned) and social media to spread their message
among each other as well as the broader public. Alek Minassian, the incel
who drove a van into a crowd of mostly women in Toronto, killing 10 and
injuring others, posted on Facebook a declaration that ‘the Incel Rebellion
has already begun! We will overthrow all the Chads [men viewed as
hoarding sex and women for themselves] and Stacys [women viewed as
failing to give sex to the incels]’ (Crilly et al., 2018). These are only a few
examples of social media being used to spread hate speech. Scores of other
examples exist targeting groups based on race, nationality, sexual
orientation, gender, religious affiliation, and other characteristics. The point is that social
media has become an important medium for the dissemination of hateful
and threatening speech, a problem social media companies have struggled
to deal with.
Social media has also been instrumental in the proliferation and
dissemination of online anti-Muslim speech. In fact, considering the current
global political situation, public fears of terrorism, and mass xenophobia,
anti-Muslim hate speech has proliferated online in addition to other forms
of hate speech directed against foreigners, migrants, and other groups
considered ‘outsiders’. Anti-Muslim speech in particular appears to be
mired in beliefs that Muslims pose a threat to the physical, social, political
and economic well-being of non-Muslims (Awan, 2016; Tell MAMA, 2016:
75–79). For instance, the Online Hate Prevention Institute analysed 50
Facebook pages which propagate anti-Muslim speech and described them
as falling within various categories including ‘Muslims as a security threat
or threat to public safety’, ‘Muslims as a cultural threat’, ‘Muslims as
economic threat’, ‘content dehumanising or demonising Muslims’, ‘threats
of violence, genocide and direct hate targeting Muslims’ and ‘hate targeting
refugees/asylum seekers’ (Oboler, 2013: 14). Similar to other forms of hate
speech, most of these utterances came from people on the far-right (Tell
MAMA, 2016: 67). Some commentators and academics have argued that
the proliferation of anti-Muslim speech – including speech that spread via
social media websites – coinciding with the 2016 US presidential election
and the withdrawal of the UK from the European Union may be associated
with increased terrestrial acts of violence and vandalism (Maza, 2017; Patel
and Levinson-Waldman, 2017; Roberts, 2017).
It is also unsurprising that social media has been used to propagate anti-Semitic or anti-Jewish hate speech. Anti-Semitism has a long history which
has found new life in the online sphere where hate speech and conspiracy
theories proliferate. In 2017, the World Jewish Congress, an international
organization of Jewish communities, conducted a study examining 382,000
anti-Semitic posts on social media (WJC, 2017). They found that most
online anti-Semitism took the form of ‘expressions of hatred’ which include
‘generalizations, names, obscenities, curses, or hate expressions’ (73 per
cent; ibid.: 19). Second most prominent was the use of anti-Semitic
symbols (40 per cent) followed by calls for violence (8 per cent). Seven per
cent of posts were categorized as ‘dehumanization’ which involves
‘accusing the Jews of injustice or attempting to prove by scientific, social,
or sociological methods that Judaism is a satanic and evil sect, or a
community that is trying to take over the world’ (ibid.: 31). Four per cent of
social media posts were Holocaust denials. The Anti-Defamation League
also conducted a study of anti-Semitism on Twitter (ADL, 2018). Though
the ADL notes that the methodology used makes statistically linking online
anti-Semitism to triggering events difficult, ‘impressionistic observations
based on the tweets that surfaced consistently’ in their samples indicate
that anti-Semitic content seems to coincide with current events like the
Harvey Weinstein sexual assault scandal. In other words, anti-Semitism is not only present and visible online but, like other forms of hate speech, tends to spike when emboldened by cultural and political circumstances.
7.4 Legal, Policing and Political
Challenges in Tackling Online Hate
Speech
The criminalization of hate speech is itself a controversial and divisive
topic, and there is no consensus that such speech ought indeed to be made
legally punishable. We have already encountered the arguments for
prohibiting such representations on grounds of both the directly and
indirectly harmful consequences they may have for those targeted.
However, there are numerous legal and political commentators who, while
condemning hate speech, oppose its criminalization. One line of argument
against such sanctions points out the difficulty in defining what is to count
as ‘hateful’, ‘degrading’ and ‘discriminatory’. Who is entitled to arbitrate
which utterances fall under such labels? Those subjected to criticisms by
others may use claims of ‘hatefulness’ to silence the critics, denying them
the right to exercise free speech. Political, journalistic, academic and artistic
censorship will inevitably result if all speech considered ‘hateful’,
‘degrading’ or ‘discriminatory’ by various social constituencies is silenced.
Therefore it is argued that banning hate speech, however offensive the
majority may find it, comes at too high a price, namely the loss of freedom
of opinion, expression and information (McGonagle, 2001: 21). Those who
oppose the prohibition of hate speech in general also wish to safeguard the
internet’s potential as a means of free expression. Therefore Steinhardt
(2000: 249) argues that: ‘Although “hate speech” and other vile utterances
can be found in cyberspace, censoring speech in this medium would go
against the free, open, anarchic and global nature of the Internet, and
severely impede its growth’. Consequently, those taking the anti-prohibition
stance argue that the ‘best antidote to hate speech is more speech’ which
aims to ‘help educate people, promote positive messages, and spread
truthful information’ (Wolf, 2004: 16). Cory Doctorow, author and
electronic rights advocate who works extensively with the Electronic
Frontier Foundation, has on many occasions argued that the ‘answer to bad
speech is more speech’ (Doctorow, 2008b: 2, 2010, 2013). He elaborates
that this approach is ideal but ‘not because bad speech isn’t harmful’. In an
interview with the American Library Association, he explains that,
Bad speech is fantastically harmful. At its worst, right ... my father, as
an infant, was in a building that was burned down in Poland ... with ...
in his mother’s arms, they waited out the fire and escaped it during a
pogrom that was brought about through bad speech. Bad speech leads
to bad acts, but our best check against bad speech isn’t censorship. Our
best check against bad speech is more speech, because censorship is an
ineffective remedy against bad speech, because it only impacts that
bad speech that the establishment doesn’t care to hear. And bad speech
that the establishment is okay with thrives in censorship regimes. And
the rubric of bad speech becomes a way to silence and suppress dissent
that would rebalance power in society. (Doctorow, 2013)
Doctorow’s point is that the costs of censoring speech are too high for a
civil society to allow.
Even where there is legal provision in place against hate speech, the global
nature of the internet presents significant difficulties to its effective
enforcement. What happens, for example, when the target or victim of hate
speech is in one country, the group or individuals responsible for that
speech are in another, and the online content is hosted on web servers
located in a third territory? Questions of jurisdictional responsibility for
dealing with hate speech in such situations pose serious challenges. Since
the mid-1990s, the EU has been taking steps to harmonize provisions
dealing with illegal and harmful content online, including the improvement
of international cooperation on enforcement across member countries
(Rorive, 2002: 10). The Council of Europe’s Convention on Cybercrime
was amended in 2003 to include an additional protocol to tackle the
‘dissemination of racist and xenophobic material through computer
systems’ (Van Blarcum, 2005). National authorities have responded, albeit
with varying degrees of urgency, to these European initiatives. For example,
in 1998 the UK government announced that its National Criminal
Intelligence Service (NCIS) was now ‘in close liaison with other countries
to combat Internet abuse’, including ‘acting against threatening abusive and
racist material’ (Whine, 2000: 246). This was followed in 2001 by the creation of the UK’s National Hi-Tech Crime Unit (NHTCU) in an attempt to further
enhance capacity in dealing with internet-based offences. However, a
combination of under-resourcing and excessive demands effectively led to
the demise of both units, which were absorbed into the Serious Organised
Crime Agency (SOCA) in 2006 (Jewkes, 2010: 541–542) and then the
National Crime Agency in 2013, a body with a remit much broader than
tackling internet-related offences.
Perhaps the greatest obstacle to the effective regulation and punishment of
hate speech online arises from the variation in hate crime laws across
national territories worldwide. Given the transnational span of the internet,
extremist groups can effectively bypass hate speech prohibitions in one
country by locating websites on servers in other territories where those
same restrictions do not apply (Deirmenjian, 2000: 1021). So, for example,
in the late 1990s the extremist British National Party (BNP) registered its
website in Tonga, enabling them to safely post material that would fall foul
of the UK’s anti-racism laws were the site hosted in Britain (Whine, 2000:
241). Similarly, the French extremist right-wing organization Charlemagne
Hammer Skinheads relocated their website to Canada after their France-based service was terminated by their ISP (2000: 241); although 13
individuals belonging to the group were charged with ‘promoting racial
hatred, uttering death threats, and desecrating a grave’ in the UK and
France, authorities there could not shut down the website due to its location in
Canada (Deirmenjian, 2000: 1021). A particularly dramatic illustration of
this problem is presented by the case of German anti-hate legislation. Due
to its unique historical experience, Germany is perhaps the nation with the
most stringent of all anti-hate speech laws, including the prohibition of Nazi
symbols (such as the swastika), Holocaust denial and defaming the memory
of the dead (Timofeeva, 2003: 260–263). However, there is nothing
preventing the burgeoning neo-Nazi movement in Germany from posting
such material in other territories.
A great number of hate sites emanate from the US where free speech enjoys
a legal protection matched in few other countries. The First Amendment to
the US Constitution prohibits lawmakers from encroaching upon the
freedom of expression, even where that speech is racist, sexist, homophobic
or otherwise discriminatory in character. The only exception to First
Amendment protection arises, as already noted, when the speech contains a
serious and imminent threat of violence against identifiable persons, or
directly incites others to commit specific criminal acts against those
persons. Anything ‘falling short of incitement to imminent violent action’
enjoys constitutional protection (Wendel, 2004: 1410). For those advocating
the legal restriction of hateful speech, the US Constitution gives a ‘virtually
unlimited license for hate speech’ (Tsesis, 2002: 180). For extremists
worldwide propagating such speech, the nature of US constitutional law,
when combined with the ease of transnational communication afforded by
the internet, inevitably provides something of a ‘safe haven’ from
prosecution under laws in their own countries.
Though the law has repeatedly run into the tension between supporting freedom of speech and regulating hate speech, these tensions also manifest in online culture, perhaps most notably through trolling.
Trolling, according to Mantilla (2015: 4), ‘consists of making online
comments or engaging in behaviors that are purposefully meant to be
annoying or disruptive’. It is related to the practice of ‘flaming’ which is ‘a
form of trolling where a person attacks someone else verbally through
insults, name-calling, or other forms of antagonism, often over hot-button
topics such as religion, politics, or sexism’ (ibid.: 7). Its origins can be
traced back to early online communities and reached maturity (for lack of a
better word) in the 2000s and 2010s through online message boards like
4chan and Reddit. While these forms of online speech are sometimes
relatively benign attempts to get under someone’s skin, they can also be
belligerent and vicious verbal or textual attacks qualifying as harassment.
Scholars have pointed out the political and pranksterish dimensions of
trolling, arguing that it subverts social expectations in a way that calls into
question our contemporary sensibilities and notions of ‘political
correctness’ in often humorous and provocative ways (Coleman, 2012,
2014; Phillips, 2015). Coleman (2014: 5) has also pointed out the key role
trolling tactics have played in the hacktivist activities of Anonymous like
their protests against the Church of Scientology which, in addition to other pranksterish methods, involved sending sex workers and unsolicited pizza deliveries to Church premises. At the same time, others have argued
that trolls may propagate hate speech and create a toxic environment for
marginalized populations (Jane, 2017b; Mantilla, 2015; Quinn, 2017). As
Jane (2017b: 85) notes, ‘there is no evidence to support the idea that a
flamer’s self-diagnosis of “joker” or “prankster” exempts their target from
fear, distress, or offence’. Trolling thus occupies the precarious space
between free speech and hate speech and between laughter and disaster.
One of the most (in)famous trolls to date, Andrew Auernheimer or ‘weev’,
provides a noteworthy case study in trolling and hate speech. His trolling
endeavours have included founding the ‘Gay Niggers Association of
America’, named after a Danish short film called ‘Gay Niggers from Outer
Space’, and an information security firm called ‘Goatse Security’, which
harks back to an early online prank where people would trick each other into
viewing a picture of a man significantly distending his rectum (Kushner,
2014; Weisman, 2014). These names were specifically chosen to antagonize
the sensibilities of others who found such words and images offensive.
These are only a few examples of what some would consider to be
outrageously lewd utterances and behaviour done for the purposes of
provocation. He has summarized his position as:
I am not a bigot except by the original definition of bigot in that I air
my opinion very loudly without the concern for others’ fucking cries
of help about ‘he’s saying something we don’t like to hear!’ That’s
actually the initial definition of bigot before it was associated with
racists – someone who loudly airs their opinion. And it says something
stark now that bigot is a pejorative term. (as featured in Weisman,
2014)
He was eventually incarcerated in 2012 for publicly revealing a major
security flaw in AT&T’s website, a prosecution seen by many online civil liberties advocates as an affront to free speech. His conviction was
eventually vacated in 2014. Upon release, he wrote an article for the white
supremacist site The Daily Stormer (which he helps run alongside its
founder, Andrew Anglin) called ‘What I Learned in Prison’ where he
declared himself a white supremacist, showed off a swastika tattoo, and
made numerous hateful statements about racial and ethnic minorities. In
2016, he exploited a vulnerability among networked printers and made
devices across the country churn out an anti-Semitic flyer (Franceschi-Bicchierai, 2016). Amidst these and other incidents, Auernheimer has continued to champion free speech as an underlying motivation. In short, he is perhaps the embodiment of the very tension between freedom of speech and hate speech that trolling represents. Trolling will be revisited in Chapter 9 (Section 9.3.1).
7.5 The Growth and Popularity of Internet
Pornography
The production of sexually explicit imagery dates back to the era of human
prehistory. One of the earliest depictions of the human form ever
discovered, the so-called ‘Venus of Willendorf’ (dated at 24,000–22,000 BC)
is a figurine of a woman with ‘large pendulous breasts ... prominent mons
pubis with a distinctly rendered vulva ... and well padded buttocks’; an
image, it has been suggested, of ‘explicitness and thinly veiled eroticism’
(Lane, 2001: 1–2). The later development of both artisanal technology and
trade in the ancient Greek and Roman worlds created a thriving commercial
use of sexual images, which have been found to adorn a bewildering array
of objects, including wine coolers, dinnerware, bronze bowls, vases,
domestic walls in the form of paintings and murals, and ‘even children’s
drinking bowls and plates’ (2001: 3–4). In the early modern era, the
development of printing from the mid-fifteenth century onwards was
rapidly followed by a proliferation of sexually explicit publications,
including books, pamphlets, posters and cartoons (Kutchinsky, 1992: 41;
Lane, 2001: 7). The Victorian era was also characterized by a thriving, if
largely clandestine, trade in pornography and erotica (Hyde, 1964; Mackie,
2004: 981). Visual representations began to overtake the more traditional
written form with the invention first of photography in the 1840s, then the
emergence of motion pictures in the 1890s; this move from verbal to visual
depiction helped to broaden the appeal of pornography beyond a literate
elite to embrace viewers and consumers from all social strata and walks of
life (Kutchinsky, 1992: 41–42; Zimmer and Hunter, 1999). Throughout the
twentieth century, the popularity of pornography continued to grow across
the Western world, in the form of adult movie theatres and magazines such
as Playboy, Penthouse, Hustler and a legion of others too numerous to
mention. Perhaps the first great technological innovation vis-à-vis
pornography during the last century was the introduction of mass-market
videos in the 1970s, which permitted viewers to enjoy pornographic films
in the privacy of their homes. This was later followed by the availability of
pornographic material via dedicated cable and satellite subscription
television channels. By the start of the new century, Americans were renting
an estimated $5 billion worth of adult videos per annum, and paying a total
of $150 million to view such movies on pay-per-view cable services (Lane,
2001: xv); worldwide, an estimated $20 billion was being spent every year
on adult videos, and $5 billion on cable viewing (IWF, 2004). By 2006, the
global pornography industry was estimated to be worth some $97 billion
(Ropelato, 2010). Unfortunately, recent figures for pornography industry
revenues and worth are hard to come by and available estimates should be
considered with a degree of scepticism (Tarrant, 2016). Regardless, all of
the above serves to illustrate two main points: first, that the appetite for
sexually explicit material appears to be a consistent feature of human
cultures; and, second, that its development has gone hand-in-hand with the
technological development of media and communication. Therefore, it
should come as no surprise that the circulation of pornography has been a
prominent feature of the internet since its inception.
Even in the early years of networked computing, the distribution of
pornographic images was prevalent. In the late 1970s and early 1980s,
programmers (mostly young men in university science, engineering and
computing departments) were zealously developing software enabling such
images to be transmitted, recomposed and viewed through Usenet systems.
By 1996, of the ten most popular Usenet groups, five were sexually
oriented, and one (alt.sex.net) attracted some 500,000 readers every day
(Lane, 2001: 66–67). However, it was with the massive expansion of the
internet resulting from the surge in home computing in the mid-1990s (see
Section 1.3) that internet pornography really took off. Amidst all the
anxiety generated by the ‘problem’ of online pornography, it is easy to
overlook the critical role that internet pornographers played in both the
technological and commercial developments of the medium. For many
years, internet pornographers were the only web entrepreneurs who were
actually making profits, providing clear evidence that e-commerce was a
commercially viable proposition. Moreover, many of the key technologies
enabling the delivery of digital content to internet shoppers (such as secure
credit card payment systems) were pioneered by pornographers, and
subsequently adopted by those selling more ‘conventional’ online content
such as music, software and movies, as well as e-shopping sites such as
Amazon (Bennett, 2001: 381). Though we can be certain that a huge
amount of pornographic material is available online, reliable estimates of
the availability and consumption of internet pornography are difficult to
come by. In a study commissioned by the US Department of Justice, Stark
(2007) analysed a government-secured random sample of websites indexed by
Google and MSN between 2005 and 2006 and found that approximately 1.1
per cent of websites featured ‘adult’ content. Though proportionally that
figure is relatively small, it still ‘amounts to hundreds of millions of adult
webpages’ (Stark, 2007: 13). Streaming pornography giant Pornhub (2018)
claims it received 28.5 billion visits in 2017, an average of 81 million visits per day. Further, it documents 4,052,543 videos uploaded to the site by year’s end, totalling approximately 68 years’ worth of pornography if viewed continuously. The work of Ogas and
Gaddam (2011: 14) also gives us a glimpse into the number of internet
searches conducted for pornographic content. They examined the searches
entered into Dogpile (a meta-search engine that looks for results across
Google, Yahoo! and other search engines) between July 2009 and July
2010, totalling 400 million searches, and estimated that approximately 13
per cent of searches entered were for pornographic content.
7.6 Legal Issues Relating to Internet
Pornography
There are, of course, many difficult definitional (and implicitly moral)
questions surrounding the issue of pornography. For example, what are the
boundaries (if any) between ‘art’, ‘erotica’ and ‘pornography’? Such
questions, fascinating and important though they are, will not provide the
focus of discussion here. Rather, the crucial criminological issue to be
addressed relates to the shifting and contested boundaries between
pornography and obscenity. It is the latter which is typically subjected to
formal legal prohibition and sanction, while interest in the former may
attract more informal condemnation as variously unhealthy, ‘filthy’, ‘dirty’,
immoral, unsavoury or deviant. The question facing those confronted with
the prevalence of pornography online tends to centre on determining which
kinds of representation cross the threshold into being obscene, and hence
are appropriate targets for legal action.
The definition of obscenity is itself subject to wide historical and
contemporary cross-cultural variations. A representation that in a more
liberal legal regime may be considered permissible may fall foul of
obscenity laws in another country. This issue is of particular relevance
when considering material on the internet, given the medium’s
transnational, border-spanning character. In British law (following the
Obscene Publications Act of 1959), ‘an article shall be deemed obscene if
its effect ... is, if taken as a whole, such as to tend to deprave and corrupt
persons who are likely, having regard to all relevant circumstances, to read,
see or hear the matter contained or embodied in it’ (Akdeniz, 1996: 236).
The Criminal Justice and Public Order Act 1994 updated this legislation to
take account of such material in an electronic format (Akdeniz and
Strossen, 2000). Two problems are readily apparent when considering these
provisions. First, terms such as ‘deprave’ and ‘corrupt’ are highly
ambiguous and indeterminate, and subject to a wide array of judgements.
Second, the attempt to extend the British definition of obscenity to
encompass online content is inherently problematic, given that it may be
‘published’ in other countries with significantly different legal provisions
relating to obscenity.
British obscenity laws have been most recently extended to counter the
production, circulation and possession of ‘extreme and violent
pornography’. What is at stake here is not so much the depiction of acts of
actual sexual violence against victims as the consensual performance of scenarios that ‘realistically’ depict sexual violence. This issue came to
prominence in the UK following the sexual assault and murder of Jane
Longhurst, a 31-year-old schoolteacher, at the hands of Graham Coutts. At
Coutts’ trial a concerted link was made between the murder and his
consumption of violent pornography online, including simulated
strangulation, rape and necrophilia. The media outcry following Coutts’
conviction, combined with a campaign led by the victim’s mother, resulted
in a petition with some 50,000 signatories being submitted to government,
calling for the banning of ‘extreme internet sites promoting violence against
women in the name of sexual gratification’. This led ultimately to
legislation criminalizing the possession of ‘extreme pornography’, making
such possession punishable by up to three years’ imprisonment (Jewkes,
2010: 527). Extreme pornography is defined (in the Criminal Justice and
Immigration Act 2008) as that which ‘portrays, in an explicit and realistic
way, any of the following: (a) an act which threatens a person’s life, (b) an
act which results, or is likely to result, in serious injury to a person’s anus,
breasts or genitals, (c) an act which involves sexual interference with a
human corpse’. The controversy aroused by this legislation centres on the
fact that it criminalizes the portrayal of these scenarios, even where they
are entirely fictionalized, provided they are deemed ‘realistic’. This has led
to criticism from various quarters, including those who campaign to end
sexual violence against women, some of whom have argued that blaming
pornography for acts of sexual assault serves to obscure the complex social
underpinning of such violence. Even more concerted criticism has emerged
from those invested in the BDSM (bondage, domination and
sadomasochism) ‘scene’, who object to the provisions on the grounds that
they conflate sexual violence with the consensual enactment of sexual
fantasy, the latter being central to BDSM ‘play’ (Harper and Yar, 2011).
Such controversies serve to illustrate the fact that achieving a social
consensus on what does or does not (or should or should not) constitute an
offence is no easy matter, and consequently the parameters of online
regulation will continue to be the subject of considerable disagreement.
Under US law, there is a distinction between obscene, indecent and
offensive speech (Akdeniz, 1996). The latter two enjoy constitutional
protection under the First Amendment (as discussed in the previous section
of this chapter). There have been attempts by both social conservatives and
radical feminist campaigners to remove the constitutional protection
traditionally granted to indecent and offensive pornographic materials.
Activists such as Catharine MacKinnon and Andrea Dworkin have argued
that all pornography harms women, in that it both objectifies women and
has a causal relation to sex crimes against women (MacKinnon, 1996).
Evidence in support of a direct link between pornography and sexual
violence, however, is inconclusive at best. For example, it has been pointed
out that if there was a direct relationship between pornography consumption
and sexual violence, we would expect that the widespread availability of
internet pornography would coincide with rising rates of sexual violence.
Instead, increases in pornography viewership have co-occurred with declining rates of sexual assault since the mid-1990s, as tracked by victimization data (Castleman, 2016). Though such a correlation is not definitive evidence, it does problematize the claimed direct link between pornography
and sexual violence. That said, some research supports a connection
between sexual violence perpetrated by men and their consumption of
pornography (for a useful overview, see DeKeseredy, 2016). Though a
direct causal effect may not be present, pornography – particularly content
featuring violent or degrading treatment of women – may contribute to and
help shape the perpetration of sexual violence by men. The relationship is
thus complicated.
Consequently, attempts to remove the First Amendment protection for
indecent and offensive speech have failed before the courts. Obscenity,
however, enjoys no such protection as established by the US Supreme Court
in Miller v. California (1973). According to the Court in what is now
known as the Miller test, material may be considered obscene under state
regulation if ‘“the average person, applying contemporary community
standards” would find that the work, taken as a whole, appeals to the
prurient interest’ in sex, ‘depicts or describes, in a patently offensive way,
sexual conduct specifically defined by the applicable state law’ and ‘lacks
serious literary, artistic, political, or scientific value’ (Miller v. California,
1973: syllabus). The provision for judgement of obscenity according to
‘community standards’ has enabled controls at a local level: for example,
prohibiting adult movie theatres, strip clubs and sex shops from setting up
in residential areas where such enterprises are unwelcome, forcing them to
locate their businesses in other physical locales. The application of such
provisions in the context of the internet, however, presents significant
difficulties: for example, explicit material is typically accessed not in
publicly available sites, but in the privacy of people’s own homes, and those
purveying the content are likely to be situated in locales (or even other
countries) where ‘community standards’ about what is ‘patently offensive’
may be very different. Therefore, how can this provision be applied in the
following (hypothetical, though not unlikely) case:1 an individual residing
in a highly conservative community in America’s so-called Bible Belt
views, say, ‘hardcore’ sadomasochistic (S&M) images and video provided
by an internet business located in sexually liberal San Francisco or
Amsterdam. Whose standards can, or ought to, be applied here? Those of
the community in which the consumer resides, but who are not in any way
exposed to his or her viewing habits? Those of the community in which the
provider or publisher resides? Or those of the community of online users
who frequent such sites? In short, the localized (and even national)
regulation of supposedly ‘obscene’ content encounters serious obstacles
when dealing with a medium that inherently spans geographical and legal
territories.
1 Lest this example be seen as an improbable occurrence, it is worth noting
that in the USA, 53 per cent of men belonging to the highly conservative
Christian Promise Keepers movement (committed to sexual abstinence and
virginity before marriage) admitted to having viewed online pornography
during the week before questioning (IFR, 2004).
The issue of legal pluralism (the variation in obscenity laws across different
countries) becomes even more exacting when we look beyond Western
nations which take a generally permissive view when it comes to sexual
representation, and consider those countries that have much stricter
definitions of obscenity. In Saudi Arabia, for example, internet access has
been channelled through a single control centre operated by the
government, enabling authorities to block access to pornographic sites,
which are held to breach ‘public decency’ (EFA, 2002; Freedom House,
2017). This, however, has led to a flourishing ‘underground’ industry
providing access to pornography sites for Saudis willing and able to pay for
explicit content as well as methods to circumvent content blocks (Procida
and Simon, 2003; Weisman, 2015). Further, Saudis interested in accessing
pornographic material may also turn toward circumvention tools like virtual
private networks to bypass censorship technologies (Freedom House,
2017). A case in India further highlights some of the cross-national
difficulties that can arise. In late 2004, a 17-year-old Indian male used his
mobile phone camera to video himself being given oral sex by his 16-year-old girlfriend. A few days later, the recording was posted for sale by 23-year-old student Ravi Raj Singh on the Indian auction site Baazee.com, a
subsidiary of eBay. In 2000, India had passed the Information Technology
Act, criminalizing the online transmission and sale of pornography.
Consequently, not only was the 17-year-old arrested, but also taken into
custody was Avnish Bajaj, the chief executive of Baazee.com. Bajaj’s arrest
sparked an international diplomatic incident, since he is a US citizen, and
American authorities attempted to intervene, including the then US
National Security Advisor, Condoleezza Rice (who was reportedly ‘furious’
over Bajaj’s treatment) (Haines, 2004a; Harding, 2004).
A further issue, one which has been of great public and political concern, is
the fear that minors will be exposed (either wittingly or unwittingly) to
sexually explicit adult-oriented content online. It has been noted that even
the most seemingly innocuous web searches often return results containing
content deemed unsuitable for children (Thornburgh and Lin, 2004: 43, 47–
48; Zimmer and Hunter, 1999: 3). That said, internet search providers have
made significant advances in creating content filters to prevent minors from
inadvertently accessing obscene content in their searches, though these
filters are often easily circumvented or disabled and they may not
completely block obscene material. Moreover, the practice of mass direct-marketing emails (so-called ‘spam’) exposes children to a range of offers
for highly explicit sexual content when checking their inboxes, often
disguised under seemingly innocent headings (such as ‘hi there’, ‘about
your request’ and so on) (Biegel, 2003: 80). It is the fear that children will
be inadvertently exposed to such material which fuelled what has been
described as ‘The Great Cyberporn Panic of 1996’, when mainstream media
produced a slew of articles about the danger posed to children by online
pornographic content (Zimmer and Hunter, 1999: 11; also Wilkins, 1997:
4). Concern over this issue continues to run high. A 2011 study in the USA
found that parents’ greatest concern about their children’s internet usage
was that they ‘were viewing inappropriate websites showing violence or
porn’ (Spectorsoft, 2011: 2). Such concern is not wholly without merit as
demonstrated by one recent survey which found that 9 per cent of children
ages 12–15 have seen ‘something of a sexual nature that made them feel
uncomfortable, either online or on their mobile phone’ (Ofcom, 2017: 164).
A further dimension, however, is added to this debate when we consider the
extent to which minors engage in the active and deliberate search for such
content. According to Ropelato (n.d.: 5), 90 per cent of 8–16-year-olds have
viewed pornography online. A UK-based survey of 9–19-year-olds found
that 10 per cent of this age group admit to deliberately visiting
pornographic sites (Livingstone et al., 2005: 13). A Canadian study of 13- and 14-year-olds estimated that 90 per cent of boys and 70 per cent of girls
had viewed pornography, primarily via the internet (Walter, 2010: 106–
107). Though some of these studies are dated, there is little reason to think
that age-based consumption patterns have changed in recent years. Such
activities effectively mean that minors are acquiring access to material
online that they would be legally prohibited from purchasing in more
conventional ‘terrestrial’ forms (such as pornographic magazines, videos
and DVDs).
Concerns about minors’ access to sexually explicit content have inspired a
number of legislative attempts to curtail children’s exposure to
pornography, especially in the USA. In 1996, Congress passed the
Communications Decency Act (CDA). Among the Act’s provisions were
measures to criminalize whoever:
by means of a telecommunications device, knowingly ... makes,
creates, or solicits, and initiates the transmission of, any comment,
request, suggestion, proposal, image, or other communication which is
obscene or indecent, knowing that the recipient of the communication
is under 18 years of age;
and further criminalizes:
the use of an ‘interactive computer service’ to ‘send’ or ‘display in a
manner available’ to a person under 18, ‘any comment, request,
suggestion, proposal, image, or other communication that, in content,
depicts or describes in terms patently offensive as measured by
contemporary community standards, sexual or excretory activities or
organs, regardless of whether the user of such service placed the call or
initiated the communication’. (Moorefield, 1997: 32)
In June 1997, however, the US Supreme Court ruled that the CDA was
unconstitutional, since the banning of any online service that displayed
‘indecent’ content that may be accessible by under-18s amounted to a
denial of access to that same material by adults, hence breaching their First
Amendment rights (Akdeniz and Strossen, 2000: 215–216; Zimmer and
Hunter, 1999: 12). The CDA having been struck down, Congress tried again
the following year, introducing the Child Online Protection Act (COPA) in
1998. This Act was less sweeping than the CDA, and made provision for
prohibiting ‘any communication for commercial purposes that is available
to any minor [those under 17] and that includes any material that is harmful
to minors’. ‘Harmfulness’ was defined as:
any communication, picture, image, graphic image file, article,
recording, writing, or other matter of any kind that is obscene or that
(a) the average person, applying contemporary community standards,
would find, taking the material as a whole and with respect to minors
is designed to appeal to, or is designed to pander to, the prurient
interest; (b) depicts, describes, or represents, in a manner patently
offensive with respect to minors, an actual or simulated sexual act or
sexual contact, an actual or simulated normal or perverted sexual act,
or a lewd exhibition of the genitals or post-pubescent female breasts;
and (c) taken as a whole, lacks serious literary, artistic, political, or
scientific value for minors. (Akdeniz and Strossen, 2000: 137)
It was hoped that this new legislation, focused more precisely upon specific
commercially available content directed towards young people, could
succeed where the CDA had failed. However, COPA was also deemed
unconstitutional by the Supreme Court in June 2004; one of the judges,
Justice Anthony M. Kennedy, justified the decision by stating that COPA
entailed ‘a potential for extraordinary harm and a serious chill upon
protected speech’ (ALA, 2004: 2). Subsequent appeals and further
proceedings failed to overturn the judgement, and in 2008 the Third Circuit
Court of Appeals upheld the position that COPA violates the First
Amendment, and enjoined all attempts to enforce the Act (Brenner, 2010:
448–449). The government did finally score a victory when the Supreme
Court upheld a third piece of legislation, the Children’s Internet Protection
Act (CIPA) in 2003. The Act requires ‘public libraries that rely on federal
funds for Internet use to install filtering devices on library computers to
protect children from ... pornography and obscenity’ (Sekulow, 2004: 5).
The remit of this law, however, is so limited that it leaves the majority of
avenues by which children might access adult pornographic material
untouched – applying as it does only to public (not private) libraries, and
only to those which use federal (rather than state, local, or private
charitable) funds for internet use.
The overall failure of statutory regulation in respect of children’s access to
sexually explicit content has led to an emphasis on voluntary regulation,
especially the use of internet filtering software (of the kind prescribed in the
CIPA) by ISPs and/or parents. There now exists a wide range of
commercially available filtering packages, such as Net Nanny, SpyAgent,
Qustodio, Witigo and Surfie. Such software contains databases of adult-oriented sites, and parents can use them to ‘lock’ web browsers’ access to
such sites; consequently, parents can allow their children to surf the internet
with the reassurance that youngsters will neither be unintentionally exposed
to pornographic material, nor will they be able to access such content
should they deliberately go looking for it. Web filters have come in for
considerable criticism as child-protection tools, however. First, they seem to
vary in quality as some filters may block as little as 40 per cent of adult
websites (Stark, 2007: 13) and may be relatively inconsequential in
protecting children from online sexual content (Przybylski and Nash,
2018). Hence use of filters may give parents a false sense of security, as
they will not shield children from all sexually explicit content. Second,
studies have shown that filtering software typically ‘over-filters’ and
thereby blocks access to socially acceptable, socially valuable and non-offensive content (EPIC, 1997). For instance, Stark’s (2007: 13) analysis of
web filters found that some of the more restrictive filters, while they may
block around 91 per cent of adult content, may also prevent access to 23 per
cent of ‘clean’ web pages. More serious and worrying are findings that
popular filters block access to many adolescent health information sites,
misrecognizing the discussion of sexual health issues (such as safe sex) for
pornography (Larkin, 2002: 1946). Third, while filtering software is at least
partially effective against sexually explicit internet websites, it has proven
to be completely ineffective in blocking access to explicit content via other
means, such as peer-to-peer file-sharing systems like Gnutella which often
provide unrestricted and free access to large quantities of pornographic
material. All of the foregoing limitations suggest that neither technological
nor legislative ‘fixes’ are likely to address parental and political concerns
over children’s access to online pornography, and that it may be time to
reconsider some of the prevalent assumptions about the inherently harmful
nature of exposure to such content.
Though previous regulatory failures have necessitated individual vigilance
and the use of filtering technologies, the UK introduced provisions in the
Digital Economy Act of 2017 which may be the most ambitious attempt to
prevent pornographic access among minors in the Western world. The law
requires pornographic websites to verify the age of their visitors, with the
objective of preventing minors from accessing such material. The
regulation considers any material to be pornographic if it was ‘produced
solely or principally for the purposes of sexual arousal’ (Digital Economy
Act, 2017). No single method of age verification is required under the law
and has thus left it up to the industry to develop standards (Burgess and
Clark, 2018). One solution is being offered by MindGeek – which owns
some of the biggest pornographic websites including Pornhub – is called
AgeID which will ‘require users to sign up with an email address and
password, then use a third party mechanism to prove they’re over 18’
(Cheshire, 2018). The agency tasked with overseeing this regulation is the
British Board of Film Classification (BBFC) which will have the authority
to levy fines as well as order ISPs to block websites and service providers
(including social networks) to discontinue service for violators (Rigg,
2018). Critics have argued, however, that this bill gives the government
unprecedented power to ‘block websites, en-masse, without court orders’
and is thus a threat to online liberties (Burgess and Clark, 2018). Concerns
have also been expressed that many age verification measures will be built
on centralized systems which could promote surveillance (Cheshire, 2018).
In addition, though the law will likely make it more difficult for minors to
access pornographic material through mainstream pornographic channels, it
is yet to be seen what impact it may have on accessing pornographic
materials through other methods such as peer-to-peer file sharing.
7.7 Summary
This chapter explored a range of issues relating to illegal, harmful, hateful
and offensive content online. Debates in this area generate particular
controversy, as attempts to control online expression inevitably come into
conflict with the freedom of speech. The issues considered here are
complicated, since what may be deemed ‘hateful’, ‘harmful’ or ‘offensive’
may or may not be illegal, depending upon the kind of expression (political,
sexual) and the legal provisions in different national territories. A particular
difference is notable between the USA and Western European nations.
While the former places a premium on the preservation of individual
freedom of expression, the latter tend to place a greater emphasis upon
social harmony, cultural inclusion and the dignity of minority groups.
Consequently, we can see a gap between American understandings of
prohibitable representations, be they hateful or obscene, and those prevalent
in Europe. This ‘legal pluralism’ is also apparent within Europe itself, with
significant differences on the prohibition of hate speech and the definition
of obscenity. If we extend our view to the global scene, even greater
divergences become clear. While hate speech, obscenity and pornography
are not phenomena in any way unique to the internet, their appearance
online intensifies regulatory problems, as the internet allows the circulation
of such representations to span territorial boundaries and cultural contexts,
with their different moralities about acceptable and unacceptable speech.
We can anticipate that issues relating to obscene, offensive and indecent
material will continue to be among the most intensively debated and
contested issues related to internet crime.
Study Questions
Why does the internet offer advantages to those wishing to propagate hateful ideas and
ideologies?
Ought such speech be banned from the internet, or are the dangers of political censorship too
great?
Is the popularity of pornography online a social problem?
Should young people have access to sexually explicit representations via the internet?
Should fantasy representations of ‘sexual violence’ be criminalized, or ought they be
recognized as a legitimate form of sexual self-expression?
Further Reading
Useful overviews about the arguments for prohibiting ‘hate speech’ are provided in L.
Nielsen’s article ‘Subtle, pervasive, harmful: Racist and sexist remarks in public as hate
speech’ (2002), in Alexander Tsesis’ book, Destructive Messages: How Hate Speech Paves the
Way for Harmful Social Movements (2002) and in Delgado and Stefancic’s book
Understanding Words That Wound (2004). From a perspective defending the freedom of
speech, see T. McGonagle’s ‘Wrestling (racial) equality from tolerance of hate speech’ (2001).
For an examination of this issue as related to the internet, see Barry Steinhardt’s ‘Hate speech’,
in Akdeniz et al. (eds), The Internet, Law and Society (2000), Chapter 12 of Stuart Biegel’s
book Beyond Our Control? Confronting the Limits of Our Legal System in the Age of
Cyberspace (2003), Antonio Roversi’s book Hate on the Net: Extremist Sites, Neo-fascism Online, Electronic Jihad (2008) and Jesse Daniels’ Cyber Racism: White Supremacy Online and
the New Attack on Civil Rights (2009). Daniels also provides a detailed overview of racism
online in her piece published in New Media & Society entitled ‘Race and racism in internet
studies: A review and critique’ (2013). To learn more about trolling, readers should check out
Whitney Phillips’ This Is Why We Can’t Have Nice Things: Mapping the Relationship between
Online Trolling and Mainstream Culture (2015). For an analysis of the relationship between
far-right extremism, trolling, journalism and hate speech, see Whitney Phillips’ report for Data
& Society entitled ‘The oxygen of amplification’ (2018). On the issue of pornography, a
valuable worldwide survey is provided by Richard Procida and Rita Simon in their book
Global Perspectives on Social Issues: Pornography (2003). Specific discussion of
pornography online can be found in David Bennett’s article ‘Pornography-dot-com:
Eroticising privacy on the internet’ (2001) and in Yaman Akdeniz and Nadine Strossen’s
‘Sexually oriented expression’, in Akdeniz et al. (2000). A fascinating account of the
development of cyber-pornography as big business is Frederick Lane’s Obscene Profits: The
Entrepreneurs of Pornography in the Cyber Age (2001). On young people’s access to
pornography online, see Dick Thornburgh and Herbert Lin’s ‘Youth, pornography, and the
internet’ (2004). The issues around the criminalization of ‘extreme and violent pornography’
are explored in Harper and Yar’s article ‘Loving violence? The ambiguities of BDSM imagery
in contemporary popular culture’ (2011). Finally, the reader should check out Ogi Ogas and
Sai Gaddam’s A Billion Wicked Thoughts: What the Internet Tells us About Sexual
Relationships (2011) for a general exploration of sexuality and sexual fetishes in online porn
consumption.
8 Child Pornography and Child Sex Abuse
Imagery
Chapter Contents
8.1 Introduction
8.2 Child pornography and the internet: images, victims and offenders
8.3 Legislative and policing measures to combat online child
pornography
8.4 Legal and policing challenges in tackling child pornography
8.5 The controversy over virtual child pornography and ‘jailbait’
images
8.6 Summary
Study questions
Further reading
Overview
Chapter 8 examines the following issues:
the problem of child pornography or child sex abuse imagery;
the circulation of child pornography on the internet;
evidence about the extent of the problem, and the characteristics of victims and offenders;
legal, criminal justice and regulatory attempts to tackle internet child pornography;
the challenges of eradicating such pornographic material, especially those raised by legal and
moral differences across different countries;
controversies over virtual and animated pornographic representations of ‘childlike’ characters
as well as those concerning the sexualization of non-pornographic images of adolescent
children.
Key Terms
Child
Child pornography
Hentai
Indecency
Jailbait images
Legal pluralism
Obscenity
Pornography
Second Life
Sexting
Virtual pornography
8.1 Introduction
Chapter 7 explored issues around the circulation of offensive and harmful
content online, focusing on hate speech, pornography and obscenity. This
chapter continues that exploration by examining the complex and
compelling issues related to online child pornography and child sex abuse
imagery. Online communications that entail the sexualized representation of
minors consistently feature high on the list of cybercrime problems, and
have become the target of concerted legal, political, mass media and public
attention. Unlike the contested issues of hate speech and ‘adult’
pornography considered in the preceding chapter, there is a much greater
legal and social consensus about the need to prohibit child pornography.
Considerable resources have therefore been committed to tackling the
problem, and there is generally a high degree of international cooperation
and coordination between police and other criminal justice agencies.
Nevertheless, there arise serious jurisdictional and practical issues when it
comes to enforcing anti-child pornography laws in the online environment.
8.2 Child Pornography and the Internet:
Images, Victims and Offenders
It would be fair to say that no other cybercrime issue has elicited the same
degree of anxiety as that over the circulation of sexual images of minors on the
internet. These are conventionally referred to as ‘child pornography’,
although many experts and commentators prefer to use the term ‘child
sexual abuse images’ (CSAIs) or ‘child exploitation material’ (CEM), so as
to highlight the direct harms to children often entailed in the production of
such materials. Whichever term we use, the very idea of such images
provokes revulsion among the overwhelming majority in society. It also
produces fear: fear that children are inevitably being coerced and abused in
the production of such images, and fear that the availability of such material
will stimulate or encourage viewers to seek out children for sexual
gratification. The child pornography issue has garnered even further public
attention due to a number of high-profile cases involving celebrities who
have been accused of and/or convicted for possession of indecent pictures
of children (e.g. rock stars Gary Glitter and Pete Townshend in the UK, and
Michael Jackson and Jared Fogle in the US). Child pornography recently
received widespread attention in the US and abroad following the arrest and
prosecution of Larry Nassar, a former USA Gymnastics national team
doctor and professor at Michigan State University. Nassar was accused of
sexually abusing hundreds of young women and a young man; in addition,
federal agents found around 37,000 child pornography images and
videos on hard drives seized from his home (Hinkley and LeBlanc, 2017).
As a result, he was tried and convicted on federal child pornography
charges and sentenced to 60 years in prison (Held, 2018). Though
sensational, such material is in fact a far from novel phenomenon. For
example, historians have documented the prevalence of both child
pornography and child prostitution from the Victorian era onwards (see, for
example, Brown and Barrett, 2002; Pearsall, 1993). As we shall later see,
however, the migration of such material onto the internet presents a whole
range of new problems for law making and law enforcement.
Child pornographic images or CSAIs do not comprise a homogeneous
category, as they vary significantly in terms of content. Consequently, they
have been classified according to different types or levels. For example,
Taylor et al. (2001) furnish a 10-fold typology of such images, ranging from
the ‘indicative’ (prima facie non-erotic images of children that are used
‘inappropriately’ for sexual gratification), through ‘nudist’ and ‘erotic
posing’, to the most extreme representations entailing assault, sadistic
violence and bestiality (cited in Jewkes and Andrews, 2007: 64). The
problem of CSAIs cannot be entirely separated from the problem of
interpersonal sexual victimization of minors for three main reasons. First, at
least some of the kinds of images under consideration necessitate actual acts
of sexual exploitation and abuse of children as part of their production.
They comprise a permanent record of actual acts of abuse, and such abuse
is a necessary ingredient of the production of these images (hence the use of
the term ‘child sex abuse images’, rather than the more unspecific term
‘child pornography’). Second, a significant proportion of those involved in
the circulation and consumption of such images may also be involved in
committing contact abuse against minors (Yar, 2010: 234), and the two
activities (sexually abusing minors and consuming sexualized imagery of
minors) may exist in a mutually enforcing dynamic (Wall, 2007: 114–115).
Third, even where consumers of such imagery are not directly involved in
committing contact offences against minors, the very demand for such
imagery incites others to produce it, thereby encouraging further acts of
abuse so that they can be photographed and filmed.
Gauging the precise extent of child pornography online is itself a difficult
task. While contemporary statistics concerning the quantity of child
pornography available on the internet are difficult to come by, reporting
agencies do give an indication as to which countries host the most child
pornography content. Until 2015 and 2016, the US was considered the most
prominent host of online child pornography in the world by reporting
agencies (INHOPE, 2016; IWF, 2017). Post-2016 figures, however, indicate
that such material is increasingly being hosted in European countries (65
per cent, including Russia and Turkey) (IWF, 2017: 4–5). Approximately 87
per cent of reported websites came from five countries in 2017 – the
Netherlands, the US, Canada, France and Russia (IWF, 2017: 19). An
indication of the pervasiveness of child pornography can also be gleaned
from examinations of peer-to-peer file sharing networks, of which child
pornographers make extensive use. For instance, one source claims there
are 116,000 requests for ‘child pornography’ every day on the Gnutella
peer-to-peer file-sharing network alone (Ropelato, 2010). A 2016 analysis
estimates that ‘about 3-in-10,000 Internet users worldwide were sharing
known CEM [child exploitation material]’ across the five P2P networks
analysed, though they admit that this may be an overestimation (Bissias et
al., 2016: 189). This study also estimates that 56 per cent of file-sharing
software installations that were linked to sharing child pornography-related
material came from four countries – Brazil, China, Mexico and the United
States (ibid.).
Data provided by a variety of internet monitoring organizations in recent
years indicates an increasing number of sites hosting such content being
reported by the public (IWF, 2017: 4–5; Quayle, 2010: 344–345). These
increasing figures, however, may more properly be an indication of greater
public awareness and willingness to file reports, and/or the availability of
more mechanisms through which authorities can be made aware of such
content. The reliability of such statistics may be further problematized
because there is an inevitable degree of subjective judgement involved in
deciding that an image of a minor is in fact ‘sexual’, ‘pornographic’,
‘indecent’ or ‘obscene’. Quayle and Taylor (2003: 3) note, for example, the
controversy over well-known photographer Sally Mann’s nude photographs of her own children: responses to these images have varied
from Time magazine’s selection of Mann as ‘Photographer of the Year’ to
her work being vilified as ‘criminal garbage’ and ‘incestuous’. This case
illustrates how determining (and so counting) images as child pornography
is not an entirely straightforward matter, and may result in over-estimation
based on the concerns of a worried public. There are other factors, however,
which might lead us to conclude that estimates of the amount of child
pornography online may significantly underplay its extent. Such
underestimation may arise in particular from the fact that a significant
portion of such material is likely to be hidden from direct public visibility
given its illegal nature. Very little child pornography (especially of the most
explicit kind) is likely to be openly visible or accessible via the internet.
Rather, it is likely to be secreted away behind password-protected sites that
can only be viewed by those who have been granted access. Relatedly,
darknets have also become a safe haven for purveyors of CSAIs which, by
design, are secretive and difficult to monitor and track (Moore and Rid,
2016: 21; Owen and Savage, 2015: 9). Hence inferences about the extent of
the problem, and about whether it is growing, have to be drawn from
known cases where the online sharing of such imagery has
been uncovered by the police and other investigating bodies (some of which
will be considered later in the chapter).
It is also worth noting what is known about the victims (those abused and
depicted) of child pornography. Recent estimates find that girls are the
most likely victims (INHOPE, 2016: 8; IWF,
2017: 4; Seto et al., 2018: 20, 22). Evidence also indicates that the vast
majority of children featured in such online material are Caucasian or
Asian, with very few images of Black children (INTERPOL, 2018: 40; Seto
et al., 2018: 22; Taylor et al., 2001: 96). The type of sexual abuse inflicted
upon victims also seems to vary, with approximately 33 per cent of child
pornography material showing victims subjected to sexual activity with
adults ‘including rape or sexual torture’, 21 per cent depicting children
involved in ‘non-penetrative sexual activity’ and 46 per cent involving
‘other indecent images not falling within the other categories’ (IWF, 2017:
5). Recent longitudinal research of child pornography images from 2002 to
2014 found ‘a trend toward more egregious sexual content over time’
indicating that victims were being subjected to harsher forms of sexual
abuse in recent years (Seto et al., 2018: 4). Some of the most extreme
variants to have emerged include the production and distribution of
‘hurtcore’ pornography (particularly on the darkweb) which features brutal
acts like torture, sexual assault, rape, and even murder inflicted on people
including children (Ormsby, 2018: 257–280). Studies also have found that
children are more likely to be abused by someone they know, such as family members or
acquaintances (Gewirtz-Meydan et al., 2018: 242; Mitchell and Jones,
2013; Seto et al., 2018: 23).
Considering the gravity and magnitude of the abuse that victims may be
subjected to, it is worth noting the effects that pornographic sexual
exploitation may have on victims. Children may experience a host of
negative outcomes as a result of sexual abuse including enduring feelings of
shame, humiliation, guilt and anger in addition to negative mental health
outcomes like ‘posttraumatic stress disorder, depression, anxiety,
aggression, and substance abuse’, though not everyone suffers from such
long term outcomes (Domhardt et al., 2015: 476). Victims of child
pornography, however, report additional maladies perhaps unique to this
particular form of exploitation. For instance, in a survey of 133 victims,
Gewirtz-Meydan and colleagues (2018) found that some victims
experienced guilt or shame during their victimization out of fear of who
might see the images and that they might be thought a willing participant in
their exploitation. Additionally, some victims experience significant
remorse and shame later in life as adults because, at the time of the abuse,
they participated willingly owing to ‘promises of fame and admiration’,
because the abusers made ‘them feel special’, or because they did not fully
understand the acts in which they were participating (ibid.: 246). Further,
the victimization may continue through adulthood because the images may
remain in circulation – adults exploited and abused as children in this
manner thus reported being ‘haunted’ by these images (ibid.: 246). At the
same time, some victims see a silver-lining in this form of abuse: it creates
a record of the abuse which (1) can be used against the perpetrator in court
and (2) validates their story in the event that others doubt their account.
Regardless, it is clear that child pornography may seriously and
detrimentally impact victims.
Criminal investigations of child pornography rings have also enabled
criminologists and psychologists to engage in offender profiling in an
attempt to identify common characteristics amongst such offenders. Some
studies have found, for example, that child pornography consumers may
suffer from psychological problems such as mood disorders, anxiety and
compulsive disorders, psychosexual issues, interpersonal problems and
cognitive distortions (Houtepen et al., 2014: 468–469; Quayle, 2010: 357–
358). The data from such studies, however, are often contradictory and the
studies themselves are subject to a variety of methodological problems that
may warrant a degree of scepticism about the findings (Jewkes and
Andrews, 2007: 66–67). Many studies have also offered typologies for child
pornography offenders. These typologies vary significantly but most are
organized around the relationships they maintain with children (i.e.
consume, trade and distribute child pornography exclusively v. use child
pornography in addition to engaging in contact offences with children), the
motivations for viewing the content (e.g. some may view child pornography
out of curiosity while others may view such material to satisfy fetishistic
urges or as part of an indiscriminate sexual palate), the types of collecting
behaviours performed (e.g. collecting through unsophisticated and unsecure
methods v. more sophisticated and secure methods), and their relationship
to the production and consumption process (i.e. consumers v. producers v.
distributors), to name some general patterns (for an overview, see Henshaw
et al., 2017: 420–423). Though potentially useful, such typologies suffer
from a lack of empirical validation and may thus constitute informed
conjecture rather than scientific fact (ibid.: 419). In terms of socio-demographic characteristics, the one clear pattern apparent from studies is
that the vast majority of offenders are Caucasian males (Adams and Flynn,
2017: 5; Houtepen et al., 2014: 467). Apart from these correlations with
gender and ethnicity, internet-oriented offenders of this kind appear to have
few common characteristics, coming from a wide range of socio-economic
backgrounds, with varying levels of educational attainment, and
engagement in different patterns of family and interpersonal life (Jewkes
and Andrews, 2007: 68–69). Both the widespread nature of online child
pornography consumption and the absence of clear predictive
characteristics for offenders present significant challenges for tackling this
problem (for an overview, see Babschishin et al., 2015; Henshaw et al.,
2017: 425–426; Houtepen et al., 2014).
Thus far, we have considered child pornography in which adult offenders
sexually exploit minors as victims. Recent years, however, have seen a
growing concern about patterns of potentially damaging sexual behaviour
that involve minors as both offenders and victims. A particularly prominent
instance of such conduct entails what Bryce (2010) calls ‘self-victimization’
behaviours, in which minors voluntarily engage in sexually oriented
electronic communication. The discussion has focused especially upon the
growing popularity of so-called ‘sexting’ (a neologism that combines the
terms sex/sexuality and texting). Sexting entails sending text messages or
similar electronic communications (primarily using portable devices such as
phones) which have an explicit sexual content. Young people may share
electronically with peers, friends or romantic partners self-generated sexual
content (Bryce, 2010: 334) such as nude or provocative photographs of
themselves taken with camera phones. A recent review and meta-analysis of
the literature found that studies report between 14.8 and 27.4 per cent of
young people engaging in sexting, with higher percentages being reported in
more recent analyses (Madigan et al., 2018: 327). Further, this research also found that
approximately 12 per cent of youths have forwarded sext messages without
the original sender’s consent while another 8.4 per cent admit to having
received forwarded sexts. Both the individuals who produce such images of
themselves and those who take possession of and/or further circulate these
images may be deemed guilty of an offence (provided the individual
depicted is legally a minor and the image is judged indecent). Concern
about this issue has led to at least 20 US state legislatures enacting
legislation to curtail sexting by minors (Hinduja and Patchin, 2015). In
some cases, the legislation makes it an offence for minors to transmit
sexualized images of themselves, while in others prohibitions are focused
upon other people who circulate or share such images. These laws have
been opposed by organizations such as the American Civil Liberties Union
(ACLU) who argue that they in effect turn minors into potential felons and
registered sex offenders for engaging in activity that is for the most part
‘consensual, intended to be innocuous, although naive’ (Alseth, 2010). The
dilemma raised by action against sexting is that it threatens to criminalize
those very persons that child pornography laws are supposedly intended to
protect.
In addition to sexting, some minors have also been involved in ‘live-streaming’ (broadcasting video and audio feeds in real time) themselves
engaged in sexual or sexualized activities. Such live-streaming can be
performed as a result of exploitative deception or coercion on the part of
child pornographers, a practice also known as ‘live-distant child abuse’ or
LDCA (Europol, 2017: 39; IWF, 2018b). In these scenarios, an abuser may
exploit a child into performing sexual acts alone or may engage in direct
contact-abuse for one or more viewers in real time over the internet. These
acts of exploitation can involve audience participation in the abuse through
viewer requests or demands (Europol, 2017: 39). Children, however, may
also live-stream nudity or sex acts as a form of self-victimization. For
example, they may use programs like Omegle or Chatroulette, which
randomly pair participants together for one-on-one interaction sessions that
may include audio-video feeds, to broadcast themselves engaged in various
sex-related acts. Though children may view such interactions as thrilling
transgressions, they may also be participating in sexually exploitative efforts
by abusers who may take advantage of the situation for sexual gratification.
Further, adult or child viewers may also ‘capture’ or record the interactions
for later use or distribution. Children may also be extorted by offenders into
performing live-streamed sex acts. For example, a 15-year-old Canadian
girl committed suicide after an offender ‘recorded a webcam chat’ where
she ‘showed her breasts’ and subsequently threatened to ‘broadcast the
video online unless she performs 3 more “shows”’ (Acar, 2016: 111).
Unfortunately, the abuser made good on his threat. The teen moved to a
new school, but the abuser created social media profiles to poison her new
friendship networks with information about the abuse, and she ultimately
took her own life. Of note, these chat services may also result in children
being subjected to the obscene or sexual activities of adults.
8.3 Legislative and Policing Measures to
Combat Online Child Pornography
Historically speaking, the problem of child sexual abuse has been grossly
neglected. It is only in relatively recent times that the nature and scale of the
problem have come to be publicly acknowledged and politically addressed.
Consequently, the past few decades have seen concerted legal efforts to
address child pornography in general, and internet pornography in
particular. In the UK, the Protection of Children Act (PCA) 1978 – later
extended by the Criminal Justice Act 1988 – makes it a criminal offence ‘to
take, distribute, exhibit, or possess even one “indecent” photograph of a
child’ (Akdeniz, 2001a: 6; ECPAT, 1997: 9; Edwards et al., 2010: 414). A
conviction for possession under these provisions can carry a prison sentence
of up to five years. In the USA, federal statutes prohibiting child
pornography were introduced in the form of the Protection of Children
Against Sexual Exploitation Act 1977 and the Child Protection and
Obscenity Enforcement Act 1988. Specific provisions relating to the
internet were introduced by the Communications Decency Act (CDA) 1996
and the Child Pornography Prevention Act (CPPA) 1996. While the CDA
has had some of its provisions declared unconstitutional by the Supreme
Court (see the discussion in Section 7.6), those components of the Act
dealing with obscene material (as opposed to merely ‘offensive’ adult
pornography) stand; consequently, online child pornography (which has
been ruled obscene) is effectively prohibited by this law. Similarly, while
the CPPA has run into difficulties with some of its provisions (such as those
relating to so-called ‘pseudo-photographs’, discussed in Section 8.5), the
general prohibition of pornographic representations of children stands
(Brenner, 2010: 447–451; Graham, 2000: 467–468). Other countries across
the world that have introduced laws specifically aimed at tackling child
pornography, both on- and offline, include the Netherlands, Denmark,
Switzerland, Sweden, Norway, Germany, France, Ireland, Austria, Canada,
Taiwan, the Philippines, Sri Lanka, South Africa, New Zealand, Australia,
Hong Kong, Iceland, Romania, Israel, Hungary, Honduras, Panama, Peru,
and the Slovak Republic to name a few (Akdeniz, 2001b; ECPAT, 1997: 9–
11; ICMEC, 2016: 18–42). At the international level, the EU has
extensively debated the need for some kind of common content regulation
when it comes to issues such as obscenity. Considerable cultural
divergence across member states, however, has made formal controls
difficult to realize. As Charlesworth (2000: 59) notes, there are ‘some
Member States, for example the UK, operating rigorous regimes of
censorship over depictions of sexual activity, while others, like the
Netherlands, prefer a rather more laissez-faire approach to their citizens’
proclivities in this area’ (see also: Kuipers, 2006). Consequently, he
suggests that:
Even in those areas of moral judgement where some degree of
consensus might reasonably be expected, such as the undesirability of
child pornography, the extent of that consensus does not appear to
extend to the uniform interpretation of subject matter, uniform
definition of offences or uniformity of punishment across the EU.
(Charlesworth, 2000: 59)
Greater progress has been made at the international level via the provisions
of the Council of Europe’s Convention on Cybercrime (previously
discussed in Section 3.6). Article 9 of the Convention requires all
signatories to make it an offence under domestic criminal law to offer,
distribute or procure child pornography via a computer system, and to
possess child pornography ‘in a computer system or on a computer-data
storage medium’ (CoE, 2001). As of May 2018, a total of 57 countries have
ratified and brought into force its provisions, although a large number have
‘reserved’ (decided to set aside or adjust) at least some elements of the
convention (CoE, 2018).
Another potential mechanism for establishing a global regime of regulation
relating to child pornography is the UN Convention on the Rights of the
Child. To date, 196 countries have ratified the Convention, the sole
exception being the US, though it has signed the Convention, indicating that
ratification may be forthcoming (UNICEF, 2018). Widespread ratification of
the convention, however, has done little to globalize standards of legal
protection for children from sexual exploitation when it comes to the
domestic laws of the signatory nations. A 2016 review of relevant laws in
196 countries, conducted by the International Centre for Missing and
Exploited Children (ICMEC, 2016: vi), claimed that only 82 countries
‘have sufficient legislation’ and that 35 have ‘no legislation at all
specifically addressing child pornography’. It further noted that 134
countries ‘do not provide for computer-facilitated offenses’ in their
legislation (ibid.). The ICMEC’s report, however, does not count countries
which ban all pornography outright, meaning that the number of countries
that prohibit child pornography – by default or by specific mandate – would
be higher.
The enforcement of anti-child pornography laws has been vigorously
pursued in many countries in recent years, and has been notable for the
degree of international cooperation involved. Few, if any, other forms of
cybercrime have received such concerted efforts to overcome barriers
between national law enforcement agencies, and to ensure that internet
offenders are brought before the law. In 1995, for example, police from
across Europe, the USA, South Africa and the Far East were involved in
‘Operation Starburst’, an investigation of a paedophile ring using the
internet to distribute child pornography; 37 men were arrested worldwide
(Akdeniz, 2001a: 8). Similarly, in 1998, investigators from the British
National Crime Squad, US Customs and INTERPOL coordinated an
operation against ‘The Wonderland Club’, an organized ring of paedophiles
spanning at least 33 countries, which had been using the internet to
distribute child pornography for a number of years. Across Europe and
Australia 47 suspects were arrested, along with 32 in the USA (Graham,
2000: 461–465). ‘Operation Ore’ (1999–2003) led to 3,744 arrests and
1,451 convictions in the UK of individuals who had subscribed to a US-based child
pornography website (Jewkes and Andrews, 2007: 62; Wall, 2007: 113); the
arrests were made possible by cooperation with US law enforcement
authorities who provided credit card details of UK residents who had
subscribed to the service. Between 2007 and 2011, ‘Operation Rescue’
investigated a Netherlands-based web forum called boylover.net, and to
date 184 arrests have resulted from the investigations, spanning numerous
countries including the UK, the USA, the Netherlands, Australia, New
Zealand, Italy, Spain and Thailand (CEOP, 2011). In 2013, almost 350
people were arrested around the world as part of a three-year international
effort spearheaded by the Toronto Police involving numerous law
enforcement agencies internationally as part of ‘Operation Spade’
(Neuman, 2013). Over 380 children were said to be rescued from
exploitation as a result (Toronto Police Service, 2013).
In 2014 and 2015, the FBI conducted one of the most legally controversial
child pornography sting operations to date, dubbed ‘Operation Pacifier’.
The agency seized ‘Playpen’, a major child pornography website operating
on the Tor network, which had hosted hundreds of thousands of images
uploaded and downloaded across 60,000 user accounts (Cox, 2016).
Claims were made that it was effectively one of, if not the, largest child
pornography site on the web. Rather than shut down the site, however, the
FBI operated the site for two weeks and infected connecting offender
computers with malware, thus allowing them to identify many users’ true IP
addresses (Cox, 2017). Though the operation resulted in the arrest and
prosecution of hundreds, critics argued that the operation was too expansive
in scope as it targeted ‘as many as 158,000 computers around the world’
with authority granted by a single warrant from a judge in Virginia (ibid.).
In addition, the case raises questions as to the legality and ethics of the FBI
operating a child pornography site – essentially perpetuating the abuse of
children – even if the ultimate objective is to arrest offenders. The
operation may also have exposed critical flaws in computer systems which
could be taken advantage of by other criminals (ibid.; Vaas, 2017).
A number of countries have also broadened attempts to combat child
pornography by involving ISPs and other internet intermediaries. In some
nations, this has taken the form of establishing ISPs’ legal liability for third-party content placed on sites that they host or provide access to. So, for
example, ISPs in both Germany and France have faced criminal charges of
providing access to child pornography (Akdeniz, 2001a: 14). Other
countries, such as the UK, have favoured self-regulation and cooperation
with ISPs. In such cases, child pornography has been targeted by
organizations such as the Internet Watch Foundation (IWF) which have
established ‘hotlines’ on which the public can report suspicious material
online; the IWF then passes the information on to ISPs so that they can
remove the webpages, and to the relevant policing agency so that they can
investigate with a view to bringing criminal charges against those
responsible for posting the offending material (Akdeniz, 2001a: 17–18;
Sutter, 2003: 75). In 2017, the IWF (2017: 4–5) received 132,636 such
reports via its internet hotline (an increase of 26 per cent over 2016), 61 per
cent of which were ‘confirmed as containing child sexual abuse images and
videos.’ Such hotlines now exist across Europe, the USA and elsewhere,
and are coordinated by associations such as Internet Hotline Providers in
Europe (INHOPE) which now has member organizations in 45 countries
(INHOPE, 2016). The US also maintains its own reporting agency, the
National Center for Missing & Exploited Children (NCMEC), which
operates a reporting hotline, the CyberTipline. The online adult pornography industry
has also involved itself in efforts to eradicate child pornography. The
Association of Sites Advocating Child Protection (ASACP), for example, is
an alliance of US-based adult pornographic content providers that
undertakes measures such as: self-regulation through approved membership
and certification of legitimate adult sites; informing member providers
about child protection laws; engaging in educational and outreach activities
directed toward government and the public; and providing a hotline through
which both adult content providers and consumers can report child
pornography identified on the internet (ASACP, 2018). Legislation has also
been developed in many countries requiring ISPs to retain and preserve data
that could be used as evidence in child pornography cases. As of 2016, 79
countries had laws which required data retention to facilitate child
pornography prosecution (ICMEC, 2016: vi).
8.4 Legal and Policing Challenges in
Tackling Child Pornography
There are a number of obstacles facing attempts to curtail online child
pornography, especially given that the production, distribution and
consumption of such material often span national territories. First, while
many countries have made efforts in recent years to combat child
pornography, there remain many others without adequate provision.
Second, even where provisions are in place, there may be significant
differences in definitions as to what actually constitutes child pornography
(Akdeniz, 1999; Taylor and Quayle, 2003). The UK’s PCA (Protection of
Children Act 1978), for example, prohibits ‘indecent’ images of children,
yet what precisely constitutes indecency is left vague and up to the courts to
decide. Other countries may ban explicit depictions of child sexual abuse
but may not prohibit material that ‘depicts children in a sexualised manner
or content’, sometimes referred to as child sexual exploitation material
(INTERPOL, 2018: 14). This raises the very real possibility that what in
one country may be deemed to fall within the bounds of anti-child
pornography laws will in another be deemed legally acceptable. Similarly,
different national legal regimes may prohibit different specific
pornography-related practices. For example, it is illegal in countries like
Belarus and Russia to produce and distribute child pornography, but
possession without intent to distribute is not criminalized (ICMEC, 2016: 36;
Newton, 2011; USSD, 2017a: 37–38). An analogous pattern of national
differences arises when we consider the legal punishments laid down for
child pornography related offences. To take an extreme cross-national
divergence, under US federal law, the production of child pornography may
have a minimum sentence of 15 years with a maximum of a life sentence
for repeat or violent offenders (USSC, 2012: 3); however, under Thai and
Estonian law, the maximum penalty for the same production and
distribution offences comprises a fine and no more than three years in
prison (USSD, 2016: 42; USSD, 2017b: 10). These differences in the
definition, scope and treatment of child pornography offences make it
particularly difficult to establish an international basis for acting effectively
against online pornography involving children. Moreover, even where a
legal provision is in place, the lack of technical expertise, funding and
police responsiveness to internet crime may seriously hamper efforts to
combat child pornography (Jewkes and Andrews, 2005).
A further issue, and one which deserves close attention in its own right, is
the problem of divergence as to who exactly counts as a ‘child’ for the
purposes of anti-pornography legislation. Under UK law, following the
Sexual Offences Act of 2003, a child is defined as any person less than 18
years of age (this replaced the previous provision of the PCA 1978, in
which a child was legally defined as a person under 16). This legal
definition of children as under 18 is common to anti-pornography laws in
many countries, including the USA, Canada, Ireland, and a host of others.
Other countries, however, have lower thresholds for material to be
considered child pornography. In Spain, Australia and Portugal, for
example, child pornography covers only those materials involving
depictions of under-16-year-olds (Krone, 2005: 1; Picotti and Salvadori,
2008: 35). Countries like Serbia and Estonia set their age limit even lower
at 14 years of age (USSD, 2017b: 10; 2017c: 24). Such differences create a
situation in which, for example, UK and US authorities are unable to act
against Spanish or Australian websites featuring sexually explicit images of
16- and 17-year-olds, as such images are permitted by Spanish and
Australian law, even though they are illegal in other countries in which they may be viewed
online. Problems related to the age of those appearing in pornographic
images are further generated by the fact that the producers, distributors and
consumers of such material may not necessarily be aware of the true age of
those shown. An illustrative case of this problem, from the pre-internet era,
is that of porn actress Traci Lords (real name Nora Louise Kuzma).
Lords/Kuzma started nude modelling in magazines such as Penthouse in
1983, and between 1984 and 1986 appeared in almost 100 hardcore movies,
becoming one of the world’s most famous porn stars. It was revealed in
1986, however, that at the start of her nude modelling career she had in fact
been only 15 years old, and 16 when she started to appear in hardcore
pornographic films. She had deceived magazine publishers and film
producers about her age by presenting a fake birth certificate and driver’s
licence. Subsequently, US authorities attempted to prosecute the owners of
X-Citement Video, the company that produced Lords’ first pornographic
feature, for manufacturing and distributing child pornography (however, the
authorities were undermined by the fact that even US customs had been
fooled by Lords’ fake passport, which she had used to travel to France to
make a pornographic film) (EFF, 2001). The legal status of those featured is
also made more difficult to assess since, as Lane (2001: 125) points out,
‘images of teens at or near their 18th birthday have been proven to be a
highly popular and lucrative subject for online pornography business’; there
are now thousands of websites which seek to attract visitors by promising
explicit pictures of ‘barely legal teens’. Supporting this point, Ogas and
Gaddam (2011: 16) found that youth-related keywords comprised 13.5 per
cent of all sexual searches in their analysis of online search-engine requests,
the most prominent by a wide margin. While some readers might find the
male sexual fascination with teenage women unsavoury and unwholesome,
the legal point is that the unwary producer, distributor or consumer might
completely inadvertently find him- or herself falling foul of anti-child
pornography laws (as did Traci Lords’ employers, co-workers, and millions
of fans across the world). This confusion is only exacerbated by the
difference in legal definitions of the child in different countries, with some
territories permitting sexual representations of those as young as 14. It
must, however, be borne in mind that this issue only applies to images of
persons who are clearly post-pubescent and sexually mature, and so could
easily be mistaken for individuals who have reached the age at which their
appearance in pornography is permitted by the laws of their country. The
largest proportion of child pornographic material online depicts those who
are clearly pre-pubescent, and therefore cannot be taken to be anything
other than minors (IWF, 2017: 4; Seto et al., 2018: 19).
The above problems are exacerbated by the practical challenges presented
for monitoring and policing the internet. Cybercrimes are, by all
indications, massively under-reported (with an estimated reporting rate of 10 per cent or less), and
only about 2 per cent result in successful prosecutions (Jewkes, 2010: 526).
As the number of such offences has grown rapidly, the reporting, recording
and prosecution rates have commensurately fallen, with existing criminal
justice systems ill-equipped to cope with the deluge. Detecting child sex
offences online (especially those related to child pornography) is made
especially difficult since only a small proportion of this material will
circulate through openly accessible internet sites – it is much more likely to
be secreted away behind firewalls and password-protected fora, such that
only those already involved with the relevant paedophile networks will be
aware of their existence and have access to them. Recent discussions of
cybercrime have noted the challenge posed by darknets, closed private
networks that are used for illegal activities such as illicit file sharing
(Ormsby, 2018). Such networks have also been linked to the circulation of
CSAIs, offering offenders a significant degree of protection from detection
by law enforcement agencies (Beckett, 2009). For example, one analysis
estimates that ‘child abuse content is the most popular type of content on
the Tor [The Onion Router] Dark Net’ (Owen and Savage, 2015: 9).
Offenders can further anonymise themselves by using a range of readily
available tools including proxy servers (that enable users to hide their IP
addresses), virtual private networks (VPNs), virtual private servers (VPSs)
and secure socket layer (SSL) tunnelling. It should be noted that not all use
of such tools is for criminal purposes; many users simply seek to maintain
privacy in a context where internet communications are subject to ever
more intrusive monitoring by states and corporations alike (Whittaker,
2000). Their ready availability, however, renders it increasingly difficult to
detect offending behaviour and to identify the perpetrators. Beyond
technical measures, child pornography perpetrators may also resort to
various non-technical means to avoid detection and apprehension, such as
using coded language or imagery – for instance, referring to child
pornography simply as ‘CP’. In addition to providing some cover for their
activities, such lingo also allows members to communicate amongst
themselves and signal in-group knowledge, as illustrated by a 2007 FBI
intelligence bulletin which describes various symbols used by paedophiles
to denote their victim preferences (FBI, 2007).
8.5 The Controversy Over Virtual Child
Pornography and ‘Jailbait’ Images
While it is safe to say that most people are revolted by the idea of the
pornographic exploitation of children, this chapter has demonstrated that
the issue presents many difficulties and there exist ‘grey areas’ regarding
what constitutes child pornography. This section addresses issues related to
child pornography that present additional challenges and ambiguities both
academically and legally. The first is the problem of so-called ‘pseudo-photographs’, ‘virtual’ pornography and pornographic illustrations. It is
widely acknowledged that ‘in most cases, child pornography is a permanent
record of the sexual abuse of an actual child’ (Akdeniz, 2001a: 6). This is
not the case, however, with pseudo-photographs and virtual pornography.
Pseudo-photographs comprise one or more non-obscene images that have
been digitally manipulated to produce a pornographic representation
involving a minor. These, for example, may take the form of a child’s face
transposed onto the image of a naked adult performing a sexual act – such
transpositions are commonplace in adult pornography found on the Web,
with the faces of popular celebrities (such as Emma Watson, Gal Gadot and
Scarlett Johansson) being digitally mapped onto images of nude models,
thereby providing their fans with a pictorial rendition of their fantasies of
seeing their favourite public figures in pornographic scenarios. They may
also comprise digital images which have been computer-generated ‘from
scratch’, and therefore no image of an actual child has been used in creating
the representation, a phenomenon that may accelerate with recent advances
in artificial intelligence in what is known as ‘deepfake porn’ (Cole, 2017,
2018). There has been a further proliferation of ‘simulated’ or ‘virtual’
child pornography, including cartoons, drawings and animations. Especially
popular is the variation of Japanese hentai animation known as lolicon (for
‘Lolita complex’) that features childlike females depicted in erotic and
sexually explicit scenarios (Matthews, 2011). Under UK law, following the
Criminal Justice and Public Order Act 1994, pseudo-photographs are
prohibited alongside real pornographic images of actual children. These
provisions were extended under The Coroners and Justice Act 2009 to
include pornographic ‘non-photographs’ (such as cartoons and animations)
in which the ‘predominant impression conveyed’ is that those depicted are
under 18 years of age (Backlash, 2010). The situation in the US, however, is
rather different. The CPPA (1996) also introduced such a prohibition, but in
2001 this provision was overturned by the US Supreme Court (Akdeniz,
2001c; see also Mota, 2003). In the US, the traditional justification for
excluding child pornography from First Amendment guarantees of free
expression has been the harm committed against a child during the
production of such images; that is, that ‘child pornography is a permanent
record of the sexual abuse of an actual child’, and as such the production of
such images entails a direct violation of criminal law against child sexual
and physical abuse. Since no such harm need be committed to an actual
child to produce pseudo-photographs and virtual photography, however, this
justification does not apply. Nor is it possible under US federal law to
appeal to the harms that might subsequently befall children, in that pseudo-images may encourage viewers to commit abusive acts against
real children; as already noted in the discussion of hate speech (Section
7.2), the harm criterion is justifiable grounds for prohibiting speech only if
it is ‘directed to inciting or producing imminent lawless action [or] likely to
incite or produce such action’. In the absence of any concrete evidence that
exposure to pornographic pseudo-photographs of children directly
contributes to such actions, it cannot be exempted from First Amendment
protection (Oswell, 2003: 5–6). The Prosecutorial Remedies and Other
Tools to End the Exploitation of Children Today (PROTECT) Act of 2003
did introduce provisions prohibiting computer-generated obscene images of
minors which are ‘indistinguishable from’ images of real children; however,
this provision explicitly excludes drawings or cartoons that do not meet the
‘indistinguishable’ criterion (Brenner, 2010: 450; Kornegay, 2006).
The issue of virtual child pornography has also arisen with respect to online
virtual reality environments such as Second Life and other massively multiplayer online role-playing games (MMORPGs). Launched in 2003,
Second Life is a virtual world in which ‘residents’ (users) can interact with
each other, and the environment, via virtual personae called ‘avatars’.
Online sexual encounters between residents, using their avatars, are a
popular activity (Wilson, 2009: 1130–1131). Controversy has arisen due to
the practice of some (adult) residents in using childlike avatars to engage in
virtual sexual activities – what is known as sexual ‘age play’ (Meek-Prieto,
2008). In one widely discussed case, a user calling him- or herself ‘Sasami
Wishbringer’ appeared as an ‘eight-year-old girl’ and offered ‘virtual sex’
to ‘adult’ avatars (Johnson and Rogers, 2009: 63). There is now
considerable argument, and uncertainty, about whether the use of such
avatars for ‘sex’ in virtual environments contravenes the laws around
obscene imagery, such as those introduced by the aforementioned Coroners
and Justice Act 2009. Linden Lab (the creators of Second Life), however,
effectively pre-empted any legal resolution of the matter when, in 2007,
they acted to prohibit all such ‘age play’, proclaiming that it ‘may violate
real-world laws in some areas’ and that such activity is ‘broadly abhorrent’
to the community of users (Duranske, 2007).
Other issues related to child pornography have also proven problematic in recent years, such as so-called ‘jailbait’ images. ‘Jailbait’ is a slang term
denoting adolescent minors viewed by some as mature in their physical
development and sexually desirable. The term is a boorish
acknowledgement that adults pursuing sexual interactions with minors –
generally young girls – below the ‘age of consent’ could face prison or jail
time in many jurisdictions if caught. Sex acts between an adult and a minor
below this age are thus often considered ‘statutory rape’ regardless of the
child’s perceived willingness to participate. Acknowledging the legal
barriers in place, most jailbait material generally does not feature teenage
minors engaged in explicitly sexual acts but the material is still circulated
and consumed for the purposes of sexual gratification. A host of websites
and even communities have emerged dedicated to distributing such content.
For instance, Reddit once hosted ‘r/jailbait’ until it was shut down after a
user crossed the boundary of ‘jailbait’ into child pornography by posting an
image explicitly depicting a sexual act with a minor (Caulfield, 2011;
Marantz, 2018).
In many countries including the US and UK, jailbait images – while
perhaps morally and culturally frowned upon – are considered legal because
the images are typically not made through direct sexual abuse or
exploitation of minors and are not usually considered to be obscene in the
typical sense. Though technically legal, they can certainly test the limits of
legal and moral sensibilities and may be associated with other law
violations. For example, while some of the images may be publicly posted
online by adolescents (most likely through social media), others may be
accessed through illicitly compromised user accounts. In addition, viewer
obsessions with some of the youths featured on these sites have led to
harassment and stalking (discussed further in Chapter 9), resulting in
significant harms for victims. Such crimes are evident in the case of Angie Varona who, in 2007 at the age of 14, posted pictures of herself in revealing clothing to an image hosting website (Karar and Effron, 2011).
Though not intended for public consumption, her account was
compromised and the images went viral. An intense following developed.
Her images were featured across multiple websites and online message
boards. Social media pages dedicated to her and her images also emerged.
In addition to shame and humiliation – both on- and offline – she was
harassed and gravely threatened with physical harm. As she has described,
people claimed ‘that I deserve everything that’s going to come for me, that
they’re going to rape me when they see me because I want it and because I
ask for it … Someone found out my address and everything … threatening
me, saying that they know where I live’ (as quoted in Karar and Effron,
2011). According to her, depression set in and she turned toward escapist
behaviours like substance abuse and running away. Years after it started, she
describes still living with the trauma of accidental sexual celebrity status,
feeling that it persists and will continue to haunt her (ibid.). Of note, despite
facing backlash over the subreddit r/jailbait, Reddit still hosts multiple
‘subreddits’ dedicated to Varona, including one given the tongue-in-cheek
title ‘r/AngieVaronaLegal’ at the time of writing.
All of the foregoing serves to illustrate the problems encountered when
trying to find nationally based legal solutions to what are inherently global
and transnational problems: as with some other child pornography-related activities, the divergences over whether or not pseudo-images and virtual images are to be prohibited create a situation in which producers,
distributors and consumers of the same images are subject to different legal
treatment according to the country in which they are located. This
inevitably obstructs attempts to generate a unified and effective strategy to
remove child pornography from the internet. Further, the existence of
jailbait images demonstrates that even otherwise non-explicit photos of
young persons can be sexualized by some and create significant issues for
minors, testing the limits of efforts to curtail the sexual abuse of children.
8.6 Summary
This chapter explored a range of issues relating to the circulation of child
pornography on the internet. While the exact extent of the problem is
difficult to establish, it is certainly significant, and the production of such
imagery often entails serious harms of sexual and physical abuse against
minors. While child pornography is not a phenomenon unique to the
internet, its appearance online intensifies regulatory problems, as the
internet allows the circulation of such representations to span territorial
boundaries and cultural contexts, with their different moralities about
acceptable and unacceptable communications. Problems with detecting
offending material, hidden away behind protective firewalls and passwords,
increase the challenge for law enforcement agencies. There have been
significant advances towards international harmonization and cooperation
in relation to tackling child pornography, reflecting the growing degree of
global moral consensus on this issue. Nevertheless, significant differences
remain over issues such as what counts as an ‘obscene’ or ‘indecent’ image
of a child, who counts as a ‘child’, whether or not artefacts such as
computer-generated images, pseudo-photographs, cartoon and animations
ought to be included within anti-pornography laws, and what kinds of
penalties should apply to those who produce, disseminate or possess
prohibited imagery.
Study Questions
What kinds of representations might be considered to comprise child pornography?
How can we establish the extent of such material online?
What are the challenges facing moves to tackle child pornography online?
How should we regulate issues concerning ‘jailbait images’?
How far should prohibitions on sexual representations of minors be extended? Should they
encompass drawings, cartoons, animations and other representations not involving a real child
in their production? Or sexting behaviours and live-streaming that young people engage in
voluntarily?
Further Reading
The debate about online child pornography is tackled in Philip Jenkins’s Beyond Tolerance:
Child Pornography Online (2001); Ian O’Donnell and Clair Milner’s Child Pornography:
Crime, Computers and Society (2007); and Ethel Quayle and Max Taylor’s Child
Pornography: An Internet Crime (2003). The legal responses to child pornography are
examined in Yaman Akdeniz, Internet Child Pornography and the Law: National and
International Responses (2008). Also valuable on this topic is the website www.cyber-rights.org, which contains up-to-date discussion and analysis of legal developments. Child
pornography on the darkweb is also discussed extensively in Eileen Ormsby’s The Darkest
Web (2018). Issues on the policing of child pornography are addressed in Yaman Akdeniz’s
Sex on the Net: The Dilemma of Policing Cyberspace (1999); Yvonne Jewkes and Carol
Andrews’ ‘Policing the filth: The problems of investigating online child pornography in
England and Wales’ (2005); and Majid Yar’s ‘Cyber-sex offences: Patterns, prevention and
protection’ (2009). INTERPOL has recently provided a relatively thorough overview of child
pornography issues in their report entitled Towards a Global Indicator on Unidentified Victims
in Child Sexual Exploitation Material (2018). The US State Department also reports the legal
status of child pornography across most countries in their annual human rights reports. On the
controversy over criminalizing sexting by young people, see Sarah Wastler’s ‘The harm in “sexting”? Analysing the constitutionality of child pornography statutes that prohibit the voluntary production, possession and dissemination of sexually explicit images by teenagers’
(2010). On the issue of criminalizing animated representation of children, see Matthews’
‘Manga, virtual child pornography, and censorship in Japan’ (2011). For opposing positions on
the prohibition of virtual child sex imagery in Second Life, see Meek-Prieto’s ‘Just age playing
around? How Second Life aids and abets child pornography’ (2008), and Johnson and Rogers’
‘Too far down the yellow brick road – cyber-hysteria and virtual porn’ (2009).
9 The Victimization of Individuals Online
Cyberstalking and Paedophilia
Chapter Contents
9.1 Introduction
9.2 The emergence of stalking as a crime problem
9.3 Cyberstalking
9.4 Online paedophilia
9.5 Thinking critically about online victimization: Stalking and
paedophilia as moral panics?
9.6 Summary
Study questions
Further reading
Overview
Chapter 9 examines the following issues:
the nature and development of stalking as a form of criminal offence;
the debate about online stalking or ‘cyberstalking’ as a growing crime problem;
the issue of online child sexual victimization;
the argument that current concerns about internet stalking and paedophilia are in fact moral
panics, and an unwarranted over-reaction to a marginal crime problem.
Key Terms
Child
Child pornography
Cyberstalking
Gendertrolling
Moral panic
Paedophilia
Revenge porn
Stalking
Virtual mob
9.1 Introduction
In Chapters 7 and 8 we explored criminological issues related to various
kinds of harmful and objectionable speech and representation that appear
online. Such speech typically takes the form of representations of
individuals and social groups that portray them in ways that can be seen as
offensive (such as the racist depiction of minority ethnic groups, the
sexualized representation of children and the degrading representation of
women in pornography). This chapter further explores criminological issues
related to online communication, but focuses upon somewhat different
phenomena. The online behaviours considered here comprise
communications that are directed by individuals at specific persons, and
which are unwelcome by their recipients and may contain abusive,
threatening or inappropriate words or images. The first problem examined
is that of so-called ‘cyberstalking’, the persistent and targeted harassment of
an individual via electronic communication such as email. The second issue
is the use of internet communications by paedophiles to contact children in
order to engage them in sexual abuse. Such behaviours may be purely
‘virtual’ in character, and restricted to the online environment. In some
instances, however, they may serve as an accompaniment to, or preparation
for, further physical harassment and victimization in direct interpersonal
encounters (such as the cyberstalker who combines email communications
with physical intrusion or assault upon his or her target, or the paedophile
who uses communications with minors in internet chat rooms to prepare or
‘groom’ victims for subsequent physical sexual abuse). Cyberstalking has
garnered increasing attention in recent years, as society has become more
attuned to problems of interpersonal harassment and victimization more
generally. The issue of online paedophilia has evoked even greater concern,
allied to worries about related child protection issues such as child
pornography and child abduction. In this chapter, the patterns and scope of
such online conduct will be discussed and their risks will be assessed. We
will also examine, however, critical perspectives that interpret the issue of
online victimization as symptomatic of a moral panic about the threat of
strangers, rather than reflecting any extensive risk of criminal victimization
as such.
9.2 The Emergence of Stalking as a Crime
Problem
Cyberstalking can be understood as an online variant of similar behaviours
that take place in other non-virtual contexts and environments. Therefore, it
will be necessary to situate cyberstalking in the context of stalking activities
more generally. The term ‘stalking’ first came to prominence in the 1980s
to describe ‘persistent harassment in which one person repeatedly imposes
on another unwanted communications and/or contacts’ (Mullen et al., 2001:
9). Stalking is held to be characterized by repeated behaviours including:
making phone calls to victims; variously sending them letters, gifts or
offensive material; following and watching the victim; trespassing on the
victim’s property; loitering near and approaching the victim; contacting and
approaching the victim’s family, friends and co-workers (Brewster, 2003:
213; McGuire and Wraith, 2000: 317). Stalking may be a prelude to serious
physical assault and even homicide (Fritz, 1995). The impact of stalking
upon its victims can be profoundly debilitating. Fear and anxiety are
commonplace reactions; victims may seek psychological treatment as a
consequence, lose time from work, and may even be unable to return to
work (Smith et al., 2017: 161; Stieger et al., 2008: 239; Tjaden, 1997: 2).
Victims may also be forced to make adaptations to their lives to try to evade
their stalker, sometimes referred to as ‘protective measures’. These can
include changing their telephone number, installing home security devices,
carrying personal safety devices, taking self-defence lessons, changing their
car, moving home, changing job, changing their surname and even moving
abroad (McGuire and Wraith, 2000: 323). Estimates about the social
prevalence of stalking vary. The US Bureau of Justice Statistics measured
stalking as part of their 2006 Supplemental Victimization Survey (part of
the National Crime Victimization Survey) which estimates that among
persons 18 or older, 1.5 per cent had been stalked (Catalano, 2012: 1).
Women were found to be more likely to be stalked than men (2.2 per cent
and 0.8 per cent respectively). The US Centers for Disease Control – as part
of their 2010–2012 National Intimate Partner and Sexual Violence Survey
which involved over 41,000 participants over the age of 18 in all 50 US
states – found that 15.8 per cent of women and 5.3 per cent of men reported
having experienced stalking in their lifetime where they ‘felt very fearful or
believed that [they] or someone close to [them] would be harmed or killed
as a result’ (Smith et al., 2017: 85, 90). In the UK, the 2017 Crime Survey
for England & Wales reports that 14.8 per cent of people between 16 and 59
years of age reported experiencing stalking since the age of 16, with those
numbers varying tremendously between women (20 per cent) and men (9.7
per cent) (ONS, 2018c). Additionally, 3.6 per cent of respondents claim to
have been stalked in the last year (4.6 per cent of women, 2.4 per cent of
men). One Austrian study (based on a survey of 2,000 women) claims an
annual prevalence rate of 1–4 per cent, with up to 18 per cent of women
being subjected to stalking over their lifetimes (Neuberger et al., 2011). In
Australia, ‘an estimated 1 in 6 women (17 per cent or 1.6 million) and 1 in
15 men (6.5 per cent or 587,000) experienced an episode of stalking since
the age of 15’ according to the 2016 Personal Safety Survey (ABS, 2016b).
The reader should note, at this point, that these statistics indicate that the
nature of stalking and harassment more generally is heavily gendered
(Kappeler and Potter, 2018: 105), a pattern that, as will be discussed,
persists in various forms in the online realm.
In the public mind, stalking has tended to be associated with the obsessive
pursuit of celebrities by psychologically disturbed fans. Well-known cases
of ‘star stalking’ have involved the likes of Jodie Foster, who was stalked
by John Hinckley; he went on to shoot President Ronald Reagan in 1981 in
an attempt to impress Foster. Others reportedly subjected to stalking include
Catherine Zeta-Jones, Mel Gibson, Gwyneth Paltrow, Halle Berry, Steven
Spielberg and Brad Pitt. Indeed, the first tailor-made anti-stalking
legislation, passed in California in 1991, was introduced in the wake of the
stalking and subsequent murder of actress Rebecca Schaeffer by a fan
(Mullen et al., 2001: 10; Spitzberg and Cadiz, 2002: 136). Lowney and Best
(1995: 39) found that in US media reports of stalking during 1989 and
1990, 69 per cent of stories focused on celebrity victims. Hollywood
movies have also linked stalking to celebrity in films such as The King of
Comedy (1983) and The Fan (1996), thereby reinforcing the connection
between fame and victimization among media audiences. Such ‘stranger
stalkings’, undertaken by individuals unknown to the victim, tend to be
characterized by ‘erotomania’. In such cases, the stalker is convinced that
he or she has a special and intimate relationship with the victim, that the
victim loves them and wishes to reciprocate their devotion, and the victim’s
words and actions (even if they seem to clearly signify disinterest and
rejection) are interpreted by the stalker as signs that they will be eventually
united (McGuire and Wraith, 2000: 320). While the stalker may present a
physical threat against the victim (as in the case of Rebecca Schaeffer), it is
just as likely that violence will be directed towards those the stalker
perceives as the enemies of their ‘beloved’; for example, in 1993 Günter
Parche, an obsessive devotee of tennis player Steffi Graf, stabbed her
sporting rival, Monica Seles.
During the 1990s, however, understandings of stalking began to shift away
from the focus upon celebrities to encompass a wider perception of the
phenomenon. Pivotal in this shift was research and campaigning undertaken
by feminist academics and women’s rights activists. Studies revealed that
stalkings by strangers constituted only a minority of all cases. Much more
prevalent were cases where the perpetrator (usually male) was the estranged
or former partner of the victim (usually female). Overall, around 75–80 per
cent of stalkers are estimated to be male, and around 75–80 per cent of all
victims are thought to be female (Smith et al., 2017; Spitzberg, 2002).1 This
clearly gendered pattern in stalking offences, combined with the
preponderance of incidents involving intimates, naturally led many
researchers to view stalking as primarily an extension of domestic violence
and abuse (see, for example, Marganski and Melander, 2018; Roberts and
Dziegielewski, 1996; Smith et al., 2017; Wright et al., 1996). Stalking by
estranged intimates may be motivated by a desire to punish the former
partner for having ended the relationship; alternatively, they may pursue the
former partner from ‘a desire to express their ongoing love and desire to
reconcile the relationship’, hoping that their persistence will demonstrate
their commitment (McGuire and Wraith, 2000: 318). Whatever the
motivation, such behaviour induces responses on the part of the victim
ranging from resentment, frustration and anger to anxiety and fear.
Moreover, domestic stalking is often linked to violent and abusive patterns
of behaviour, with about 80 per cent of women stalked by former partners
reporting having been physically assaulted during their relationship, and 31
per cent reporting incidents of sexual assault (Tjaden, 1997: 2). In addition
to stranger and domestic variants, typologies of stalking also identify a
number of other forms, such as ‘acquaintance stalking’ (where the victim is
harassed by someone known to them, but with whom they do not have any
close relationship, such as a colleague or neighbour) and ‘nuisance stalking’
(involving less invasive but nonetheless significant harassment) (Emerson
et al., 1998; Roberts and Dziegielewski, 1996).
1 This stalking figure was derived from estimates given in the CDC’s 2017
report concerning findings from their National Intimate Partner and Sexual
Violence Survey (Smith et al., 2017). These findings were initially
disaggregated between men and women. We aggregated these estimates to
create new estimates of the total number of men and women victimized by
men, women, or both in stalking. Based on these new estimates, we
examined percentages of population stalked by men, women, or both
finding that approximately 77 per cent of offenders were male, 17 per cent female, and 6 per cent of victims were stalked by both men and women. We engaged in the same aggregating process to determine that the proportions of total stalking victims were women (76 per cent) and men (24 per cent).
The rise of stalking as a crime problem over the past few decades can be
understood in two distinctive ways. First, it can be suggested that its
increasing prominence is a reflection of a massive increase in harassing
behaviours. Mullen et al. (2001: 12) identify a number of social changes
that might help explain such an increase. First, they point to a greater
instability and breakdown of relationships, which creates more situations in
which conflict between estranged partners can emerge. Second, they point
to the growing prominence of celebrity within our culture, which
encourages intimacy seeking on the part of fans who are actively invited to
form emotional connections with idealized stars. Third, they point to greater
social isolation that makes it more difficult for people to establish intimate
relationships, thereby creating the possibility of inappropriate behaviour
when seeking to find emotional and physical reciprocity as an answer to
loneliness. It has also been pointed out, however, that the behaviours
associated with stalking are not in any sense new; rather, it has been
described as ‘a new term for an old crime’ (McGuire and Wraith, 2000:
316). Clinically documented cases of erotomania and obsessional pursuit
date back at least as far as the eighteenth century (Mullen et al., 2001: 9).
Indeed, what today is discussed as stalking was once represented in
literature as an idealized form of romantic love. Therefore it may be that
such behaviour has not necessarily undergone a dramatic increase, but that
the ways in which we culturally think about interpersonal relationships have
changed, creating a social problem with behaviours that once were tolerated
or even admired. Pivotal in such a cultural shift has been the impact of the
women’s movement, which has drawn attention to the various forms of
psychological as well as domestic abuse perpetrated by male partners, and
has helped Western societies break with the older patriarchal assumption
about women as basically the ‘property’ of their husbands. Women’s rights
to govern the path of their relationships, including the right to terminate
them on their own terms, have sensitized us to those situations in which
such rights are violated by abusive or persistent former partners. Hence the
rise of stalking as a criminal category can be seen to owe much to shifting
perceptions about just what kinds of behaviour constitute a social problem
and public concern.
As already noted, the first legislation specifically aimed at criminalizing
stalking emerged in the USA in 1991. By 1992, 26 US states had introduced
similar legal provisions, and by 2000 all 50 US states had anti-stalking
provisions on the statute books (Kolarik, 1992: 35; Spitzberg and Cadiz,
2002: 128). In Australia, the state of Queensland passed an anti-stalking law
in 1993, quickly followed by the other states; New Zealand introduced
similar legislation in 1997 (Mullen et al., 2001: 10). Also in 1997, the UK’s
Protection from Harassment Act (PHA) came into effect. Like other anti-stalking laws, the PHA relies upon the mechanism of restraining orders to
prohibit stalkers from further contact with their victims. Under the PHA,
breach of an order may be punished with a custodial sentence of up to five
years (McGuire and Wraith, 2000: 321). Its protections were extended
under the Domestic Violence, Crime and Victims Act of 2004 which
extends the availability of restraining orders to protect victims from
stalking, even if the defendant has been acquitted (CPS, 2018). Restraining
orders, however, have been criticized from a number of angles. First, they
may be undermined by sporadic enforcement by police as violations of
these orders may not always lead to arrests (Fritz, 1995). In an early study
of police decision making, domestic violence and restraining orders, Kane
(2000: 576) found that while arrest rates were substantially higher when
restraining orders were in effect, arrests were still made in only 44 per cent of
cases when other aggravating factors were not present. Instead, police arrest
decisions may be based more on other ‘conditions where the offender
presented a high degree of risk to the victim’ (ibid.). Thus while restraining
orders may help, they are no guarantee of law enforcement custodial
intervention. Second, they may be ineffective in deterring the stalker,
especially where the individual suffers from a delusional disorder or other
serious mental health problems that impair judgement (McGuire and
Wraith, 2000: 321). As Goode (1995: 10) puts it, ‘The serious obsessive
stalker is unlikely to know or believe that his or her conduct is against the
law and is likely, even if he or she does know, to disregard that law’. Third,
the prohibition of ‘repeated harassing behaviour’ may be over-broad, and
potentially criminalize legitimate activity. Therefore Goode wonders about
the legal standing of individuals such as investigative journalists, protesters
and summons-servers. There are, he concludes, ‘any number of persistent
visitors and callers who, at worst, are a pest, perhaps even a criminal pest,
but should not be called stalkers and made subject to heavy criminal
penalties’ (1995: 7). The potential for the misapplication of such laws is
clearly illustrated by an Australian case in which a group of Aboriginal
youth were prosecuted for loitering in a shopping centre (1995: 7). Such
difficulties arise from the fact that stalking can comprise a wide range of
different behaviours, and consequently any legal prohibitions must be
framed in very broad terms; as a result, they inadvertently run the risk of
curtailing freedom of action and expression. Such concerns
notwithstanding, intensive media coverage of the problem continues to
drive efforts to further legislate against stalking: for example, in 2012 the
UK government extended existing harassment laws through the Protection
of Freedoms Act which created a specific offence of stalking (Johnson and
Cordon, 2012). Further, the Act also includes a provision for ‘stalking offences which are also racially and religiously aggravated’, thus accounting for the use of stalking as an instrument of hate-based harassment (CPS,
2018).
9.3 Cyberstalking
Cyberstalking has been defined as ‘the repeated use of the Internet, email or
related digital electronic communication devices to annoy, alarm, or
threaten a specific individual’ (D’Ovidio and Doyle, 2003: 10). It is
generally viewed as an extension of more familiar stalking behaviours that
may be undertaken through physical presence, telephone and mail. Nobles
and colleagues (2014: 991), for example, argue that ‘there is a population of
stalking victims and there is a smaller subset of those that are also
cyberstalking victims due to the special circumstance involving
technology’. Others, like Bocij et al. (2002: 3), however, see it as ‘an
entirely new form of deviant behaviour’. Perhaps, on a balanced view,
cyberstalking ought to be understood as a new variant of an existing pattern
of criminal conduct, one that exhibits both continuities and discontinuities
from its ‘terrestrial’ counterpart. Commentators such as Koch (2000) and
Best (1999) have suggested variously that cyberstalking is a trivial problem,
one whose incidence is rare, or may even be wholly imaginary (see also the
discussion in Bocij and McFarlane, 2002). Others have argued, however,
that internet stalking is both increasingly common and has serious effects
and consequences for those who find themselves victimized by an online
stalker.
Assessing the scope of cyberstalking incidents is rendered difficult due to
the relative paucity of systematic and reliable data deriving from large-scale
studies. Further methodological problems arise when we take into account
the varying definitions of cyberstalking used by different analysts, which
make it difficult to make cross-comparisons or to aggregate data. For
example, Bocij and McFarlane (2002: 26) define cyberstalking as only
those forms of harassment that ‘a reasonable person, in possession of the
same information, would think causes another reasonable person to suffer
emotional distress’. Setting aside the problem of who counts as
‘reasonable’, Bocij and McFarlane’s understanding is more restricted than
that of D’Ovidio and Doyle (2003, cited above), which includes behaviours
that merely annoy. Consequently, when examining such data as are
available, we cannot always be certain that different measures of
cyberstalking activity are necessarily based upon the same or similar
conceptions of what counts as part of the behaviour. Nevertheless, we can
note assessments and estimates that give some broad indication of how
widespread a problem cyberstalking might actually be.
These problems aside, major strides have been made in recent years in
measuring the extent of cyberstalking victimization and online harassment
more generally. In the US, the CDC’s National Intimate Partner and Sexual
Violence Survey, while not explicitly focusing on cyberstalking, gives some
indication as to the prevalence of associated behaviours. For instance,
between 13 and 14 per cent of women and men victimized by stalking
reported receiving unsolicited contact from their stalkers in the form of
‘unwanted emails, instant messages, [and] social media’ (Smith et al., 2017:
88, 91). One German study (based on an internet survey of more than 6,000
people) found a cyberstalking prevalence rate of 6.3 per cent (Dressing et
al., 2011). Perhaps the most comprehensive study of online harassment –
including cyberstalking – was recently undertaken by the Data & Society
Institute and the Center for Innovative Public Health Research involving a
‘nationally representative survey of 3,002 Americans 15 and older
conducted from May 17th through July 31st, 2016’ (Lenhart et al., 2016: 3).
This study found that 47 per cent of respondents reported experiencing
harassment online (ibid.: 3). Regarding cyberstalking specifically, the study
reports that ‘8% of American internet users have been cyberstalked to the
point of feeling unsafe or afraid’ (ibid.: 33–34). While useful, these data
should be viewed as provisional and indicative rather than conclusive as
nation-specific studies such as these cannot be unproblematically
extrapolated to other countries. That said, they give us at least some
glimpse into the possible prevalence of cyberstalking activity.
Evidence also exists that risk of cyberstalking victimization is not evenly
distributed across the population. According to the survey by Lenhart and
colleagues (2016: 4), while men and women were equally likely to be
harassed online, ‘women experience a wider variety of online abuse,
including more serious violations’ and also appear more likely to be
cyberstalked than men (10% and 5% respectively) (ibid.: 4, 34).
Additionally, when men did experience harassment behaviours – like being
threatened, having their personal information exposed, or being surveilled –
they were less likely to define these acts as harassment, indicating that
perceptions of harassment may be shaped by gender. Relatedly, ‘women were almost three
times as likely as men to say the harassment made them feel scared, and
twice as likely to say the harassment made them feel worried’ (ibid.: 4). In
their study, risk based on victims’ race and ethnicity appeared relatively
equal, with likelihood of stalking varying between 18 and 22 per cent across
whites, blacks and Hispanics (though, of these groups, blacks reported
being more likely to experience cyberstalking). In addition, younger
persons and lesbian, gay, or bisexual persons reported a greater frequency
of incidents. When age is factored in, young women in particular appear
vulnerable to cyberstalking. As the authors explain, ‘14% of internet users
under 30 years of age have been cyberstalked, including 20% of women
under 30’ (ibid.: 36). In another study, Nobles et al. (2014: 1006) found that
stalking and cyberstalking victims differed across both levels of education
as well as household income. Based on this finding, the authors suggest that
‘these demographic differences suggest support for the so-called “digital
divide,” a term used to characterize social inequality in access to
technologies, including the Internet’ (ibid.: 1006). While the results of a
single study seldom make for incontrovertible fact, the finding makes sense
when taken at face-value. Opportunities to be victimized by cyberstalking
may strongly correlate to how ‘wired’ someone is – the number of online
accounts they have, total time spent on the internet, and visibility online.
While technology use is becoming increasingly ubiquitous, there still exist
socio-demographic divisions in technology adoption and use based on class,
race and gender (as described in Section 1.3). While such connectivity
undoubtedly confers certain advantages, it may also correspond to greater
opportunities to be stalked online (see also Reyns et al., 2011). Nobles and
colleagues also predict that the increasing proliferation of technological
access in society may narrow the digital divide; if it does, we would expect
the demographic differences in victimization likelihood to dissipate.
Tactics employed in cyberstalking vary and limits on such behaviour seem
only circumscribed by the creativity of the perpetrator. Measurements of the
popularity of given methods are few and far between, however. Early
estimates examining cyberstalking behaviours may provide some insight.
For instance, one study examined data from The New York Police
Department’s Computer Investigation and Technology Unit (CITU) which
receives and investigates public complaints about cyberstalking as well as
other forms of computer-related crime like fraud, hacking and child
pornography. Of the cases reported to CITU between January 1996 and
August 2000, 42.8 per cent involved ‘aggravated harassment by means of a
computer or the Internet’ (D’Ovidio and Doyle, 2003: 12). According to the
data collected by the CITU, email abuse was involved in 92 per cent of
cases. Other forms of stalking included the use of instant messaging
services (apparent in 13 per cent of cases), chat rooms (8 per cent), message
boards (4 per cent) and websites (2 per cent) (D’Ovidio and Doyle, 2003:
12). Another early study of cyberstalking tactics involving an internet-based
survey by Bocij (2003) reveals a somewhat different pattern. The most
commonly reported form of harassment was that of threats and abuse in
internet chat rooms (47.62 per cent), followed by email (39.88 per cent) and
instant messaging (38.69 per cent). The different findings derived from
these two sources might be explained by the fact that they relate to different
periods (1996–2000 for the CITU, and 2003 for Bocij’s study). In the
intervening years, both instant messaging and chat rooms became more
common and more widely used, and the shift in modes of harassment might
reflect this change in availability. Alternatively, the differences might reflect
national differences in cyberstalking behaviours. The largest group among
Bocij’s respondents were from the UK (45.5 per cent), followed by the
USA (39.9 per cent), with the remainder resident in Canada (7.2 per cent)
and Australia (2.4 per cent) (for the remaining 5.4 per cent, no country of
residence was presented). Bocij’s study may also be unreliable because, as
he himself notes, there are some serious sampling problems related to the
methodology used, that of so-called ‘snowball sampling’. This entails the
initial distribution of the questionnaire to a small number of respondents
(five in Bocij’s case), who are each then requested to forward it to a number
of others, and those subsequent recipients are asked to do likewise, and so
on. Inevitably, respondents will recruit others known to them (such as
family, friends and colleagues) who are likely to have common social
characteristics (such as class background and occupation), and who are also
likely to share patterns and preferences when it comes to internet use (the
commonalities between respondents were clearly evident in the fact that 30.5
per cent had a degree-level education and 11.4 per cent had a postgraduate
qualification, rates much higher than those for the populations of the USA,
the UK, Australia and Canada generally). In short, while such studies are
suggestive and help to map out further avenues for inquiry, we must await
more systematic and scientific research before being able to draw more
definite conclusions about both the dominant forms of cyberstalking
behaviour and the perpetrators and victims of such offences.
Unfortunately, these early figures of cyberstalking behaviour seem almost
quaint in the era of social media, which has become an instrumental tool for
online surveillance, intrusion and harassment. The act of so-called
‘Facebook stalking’ or ‘social media stalking’ – the use of social media to
learn more about a person, often for the purposes of relatively benign
curiosity – has become relatively common among social media users
(MacDonald, 2018). The prevalence of such relatively benign activity should
not, however, obscure the use of social media for conducting intrusive and
persistent stalking and harassment. Social media can provide a smorgasbord of
information useful for stalkers including, but not limited to, media
preferences, hobbies, political views, education, employment history, social
relationships, events attended, group membership and even geo-location
data. Interestingly, the use of social media for stalking purposes is not
limited to ordinary users. In 2018, a Facebook engineer was fired from the company
for using his position to stalk women online (Weise, 2018). One news
media source claims that others have also been fired from the company in
the past for similar abuses including stalking ex-intimate partners (Cox and
Hoppenstedt, 2018). Though exact figures of the prevalence of the use of
social media for stalking are elusive, there is little denying that social media
can be a powerful tool for stalkers.
Like other forms of stalking, cyberstalking can take a tremendous toll on
victims (Dreßing et al., 2014). For these reasons, victims may turn to
various coping strategies to mitigate the impact of cyberstalking and reduce
contact with the offender. While not focusing exclusively on cyberstalking,
Lenhart and colleagues (2016: 5) report that 63 per cent of their survey
respondents who had experienced online harassment have taken measures
to protect themselves. Of those, 43 per cent ‘changed their contact
information’, 33 per cent ‘asked for help from a friend or family member,
law enforcement, or domestic violence resources’, 27 per cent ‘reported or
flagged content that was posted about them without their permission’, 27
per cent ‘disconnected from online networks and devices by abandoning
social media, the internet, or their cell phone’ (ibid.: 5). Victims also
reported experiencing some form of ‘isolation or disconnectedness’ as a
result of the aggression ranging from strained interpersonal relations to
shutting down online accounts (ibid.: 6). In their study of cyberstalking
victimization experiences among college students as well as participants in
two online cyberstalking support groups, Tokunaga and Aune (2017: 1461)
develop a seven-category taxonomy of tactics victims use to manage and
mitigate cyberstalking against them. For the purposes of brevity, we can
distil their taxonomy to five key categories, presented below:
1. ‘Ignore/avoidance’, where victims engaged in evasive tactics to avoid
contact or reduce exposure to cyberstalking behaviours. They may also
act like the behaviours ‘did not bother him or her’. The authors
indicate that these tactics were the most commonly reported within
their sample (ibid.: 1462).
2. Online presence management, which involves attempts to control the
availability of information and methods of communication available to
stalkers online. These may include changing social media privacy
settings, deleting accounts, placing blocks on the stalker when
possible, altering usernames, email addresses, or implementing
computer security programs. These methods are intended to
technologically create distance between stalker and victim and reduce
opportunities for harassment.
3. ‘Help seeking’ or victims turning toward other persons or
organizations – like law enforcement or internet service providers –
looking for assistance in thwarting the cyberstalker. This category also
includes maintaining a written record of cyberstalking to amass
evidence against the harasser.
4. ‘Negotiation/threat’ and ‘derogation’, which together constitute a battery of
strategies involving confronting the harasser with pleas, arguments,
insults, or threats as well as attempts to get revenge or retribution on
the stalker such as publicly shaming them on social media. The idea is
to attempt to directly dissuade the actor from continuing their
harassment.
5. ‘Compliance/excuses’ whereby victims ‘played along’, ‘falsely
disclosed about themselves’, or even ‘made excuses of not being able
to or wanting a relationship’. These tactics generally involve passive
attempts to placate the stalker without escalating the behaviours (even in
the hope that the stalker will simply go away).
Additionally, fear of cyberstalking may lead people to engage in
preventative measures like self-censorship online. Lenhart and colleagues
(2016) found that while 27 per cent of respondents assert that they censor
themselves online to avoid possible harassment, 41 per cent of young
women (15 to 29 years of age) report self-censorship behaviours as opposed
to 33 per cent of men in the same age group. Interestingly, one study found
that victims of cyberstalking, compared to stalking victims, were more
likely to engage in so-called ‘self-protective behaviors’ (Nobles et al., 2014:
1006).
One significant way in which cyberstalking appears to vary from stalking
more generally is that the former is more likely to remain mediated and at a
distance. In other words, it is less likely to entail direct, face-to-face
harassment in which perpetrator and victim are physically co-present. One
study, for example, found that around 26 per cent of cyberstalking cases
remained purely online rather than ‘spilling over’ into terrestrial contexts
(Dreßing et al., 2014: 63). This study also indicated, however, that ‘twelve
per cent of victims reported having been grabbed or held down by the
perpetrator, 8.8 per cent reported having been hit with the hand, and 3.8 per
cent reported having been attacked with objects’ (ibid.). While these
numbers are not insignificant, they do indicate that most cyberstalking cases
remain ‘at the level of inducing emotional distress, fear and apprehension’,
rather than resulting in physical harm to the victim (Ogilvie, 2000: 3). That
is to say, a majority of cyberstalking incidents remain at-a-distance and
mediated, although a significant minority appear to be combined with face-to-face, immediate and physical acts of violence and abuse. There is no
reason, however, why psychological harm should be treated any less
seriously than its physical counterpart, as the emotional suffering inflicted
by cyberstalking can be just as profound and acute. Moreover, there are a
number of recorded instances in which the virtual and terrestrial converge,
resulting in grave consequences to the well-being of the stalker’s target. In
one such case, a man from Los Angeles sought revenge upon a woman who
had spurned his romantic advances by posing as her in online chat rooms.
He posted the woman’s personal details, including her address, and claimed
that she was looking for men with whom she could act out her ‘rape
fantasies’. Subsequently, the woman suffered the terrifying ordeal of being
‘repeatedly awoken in the middle of the night by men banging on her front
door, shouting that they were there to rape her’ (Maharaj, cited in Ogilvie,
2000: 3). In another case, a young man, again in the USA, tracked down a
female former classmate and stalked her online. The two-year cycle
culminated in his driving to her workplace and shooting her as she was
getting into her car (Romei, 1999).
A second way in which cyberstalking appears to differ from terrestrial
stalking is in the prevalence of non-intimate and non-domestic incidents
online. It was noted earlier that assessments of stalking typically show that
the largest proportion of cases involve harassment by ex-intimates, and that
harassment by strangers is very rare. Comparisons of two recent major
surveys on stalking and cyberstalking, however, show differences between
the two in terms of offender–victim relationships. The CDC’s study of
stalking victimization – including methods associated with online stalking –
found that approximately 51 per cent of stalking offenders were current or
ex-intimate partners (importantly, women were much more likely to be
stalked by a current or former partner, similar to other forms of aggression
toward women) (Smith et al., 2017: 89, 93).2 In their study of online
harassment, however, Lenhart and colleagues (2016: 40) found that 32 per
cent of cyberstalking victims were harassed by current or ex-intimate
partners. Their study, however, is complicated by the fact that 20 per cent of
cyberstalking victims were unable to identify their stalker. As such, the
numbers between stalking and cyberstalking could be comparable if this
dark figure were uncovered. In addition, these divisions are not nearly as
prominent as those uncovered in prior research. For instance, in one early
study of cyberstalking, McFarlane and Bocij (2003) found that only 12 per
cent of victims reported being stalked by ex-intimates. Most common were
cases in which the victim and offender had only recently met in online
communications fora, such as chat rooms (33.3 per cent), followed by total
strangers (22 per cent) and acquaintances (16.7 per cent). It thus appears
that cyberstalking may facilitate stranger- or acquaintance-stalking perhaps
because, as some have argued, the feeling of anonymity afforded by internet
communication serves to lower inhibitions against aggressive, threatening
and anti-social interaction (Kabay, 1998; Taylor, 1999), thereby
encouraging web users to engage in harassment, or because of increased
opportunities to offend and diminished barriers for entry into stalking.
These divisions, however, may be diminishing over time as our on- and
offline lives become increasingly intertwined.
2 These figures for the estimated percentage of the population stalked by
relationship type were derived from the CDC’s 2017 report of findings from
the National Intimate Partner and Sexual Violence Survey (Smith et al.,
2017). These findings were initially disaggregated between men and
women. To determine total population estimates of stalking based on
relationship type, we had to aggregate population estimates between men
and women as these aggregated proportions were not given in the initial
report.
The issue of anonymity has much exercised commentators on
cyberstalking. It is widely noted that the ability of offenders to hide their
identities poses a major challenge for criminal justice agencies investigating
cyberstalking incidents (Bocij et al., 2002: 4; D’Ovidio and Doyle, 2003:
15–16; Ogilvie, 2000: 2). Cyberstalkers may draw from a host of
anonymity-protecting tools to protect themselves, including many of those
described throughout this book, like VPNs, proxy servers, TOR, and others.
Even though their use can make it extraordinarily difficult for law
enforcement to trace perpetrators, available evidence indicates that the use of
anonymizing technologies by cyberstalkers is relatively uncommon
(D’Ovidio and Doyle, 2003: 16; Hitchcock, 2003); consequently, the
problem of anonymous cyberstalking may have been overstated. It must be
borne in mind, however, that as anonymizing services become more widely
available and more familiar to web users, the possibility remains that the
use of such technologies in stalking cases will become more frequent.
9.3.1 Revenge Pornography, Upskirt Photography
and Virtual Mobs
Related to the issue of cyberstalking, recent events have generated
increased awareness of a phenomenon commonly referred to as ‘revenge
porn’ or ‘sextortion’, the online non-consensual distribution of nude
photographs or videos of people – mostly women – for the purposes of
retribution or humiliation by a former intimate partner (Marganski, 2018:
20). While the release of such photos or videos may be harmful enough for
victims, they may also be a part of a broader harassment and stalking
campaign against them that may involve extortion, defamation and active
attempts to ruin victim’s relationships with friends, family and even
employers (ibid.: 20). As a result, just like other forms of online harassment
and cyberstalking, sextortion can have major psychological and livelihood
impacts on victims, potentially exacerbated by the intensely intimate and
private nature of the content released (ibid.). To make matters worse,
websites have emerged dedicated to revenge pornography. One of the first
of such sites was Hunter Moore’s IsAnyoneUp.com, which allowed
offenders to not only post sexual photos of people but also disclose
identifying information including ‘full name, profession, social-media
profile and city of residence’ (Morris, 2012). In addition to running this
website, Moore pleaded guilty in court to paying an associate to ‘steal nude
photos from victims by accessing victims’ e-mail accounts through social
engineering’ (Geuss, 2015). While IsAnyoneUp has gone by the wayside,
revenge porn continues to proliferate online.
In light of public attention to the offence as well as the nature of harms
inflicted, various countries have moved to criminalize revenge porn. The
UK outlawed the practice through the Criminal Justice and Courts Act
2015 which makes it illegal to disclose ‘private sexual photographs and
films … without the consent of an individual who appears in the
photograph or film’ and ‘with the intention of causing that individual
distress’ and may result in a fine and up to two years in prison
(legislation.gov.uk, 2015). In the US, 39 states plus Washington DC have
laws on the books prohibiting revenge porn with other states entertaining
legislation, as of 16 April 2018 (CA Goldberg, 2018). While many states
use relatively consistent language in their prohibitions, some differences
exist. For instance, in New Jersey, law prohibits the non-consensual
disclosure of images ‘of another person whose intimate parts are exposed or
who is engaged in an act of sexual penetration or sexual contact’ (NJ Rev
Stat § 2C, 2013). Under this law, a person need not have malicious intent to
have violated the law. In North Carolina, on the other hand, one must have
disclosed the image with the intent to ‘coerce, harass, intimidate, demean,
humiliate, or cause financial loss to the depicted person’ or with the intent
to cause others to engage in such malicious ends (General Statutes § 14-190.5A, 2015). Thus some jurisdictions prohibit an act only if intent to
cause the stated harms can be proven, while others prohibit all such
disclosures regardless of intent. At the time of writing, efforts are underway
to criminalize revenge pornography at the federal level through the Ending
Nonconsensual Online User Graphic Harassment (ENOUGH) Act (Lahitou,
2017).3 Revenge pornography laws have not been unproblematic, however,
and some advocates argue that such laws may run foul of First Amendment
protections, as evinced by the recent decision of Texas’ 12th Court of
Appeals to overturn the state’s revenge porn law partly on the grounds that
‘the law defines revenge porn too broadly’, thereby potentially infringing on
freedom of speech (LeFebvre, 2018).
3 At the time of writing, the ENOUGH Act was still pending approval
through the US Legislature. Progress on the bill can be tracked at
https://www.congress.gov/bill/115th-congress/senate-bill/2162/all-info.
Recognizing the harms that can stem from such harassment, other countries
across the world (or some of their states/provinces) have also implemented
revenge porn laws including Australia, Canada, Brazil, France, Germany,
the Philippines, and Scotland, to name a few. In addition to criminal
remedies, multiple countries have allowed for civil litigation against
offenders. For example, Chrissy Chambers – a popular YouTube personality
– recently won a suit in the UK against her former boyfriend, who had
filmed them having sex without her knowledge or consent and then, two
years after they broke up and she began dating her current girlfriend, posted
the videos to a pornographic website (Kleeman, 2018). In
the US, a woman only described as ‘Jane Doe’ was awarded $6.4 million in
damages in ‘one of the biggest judgements ever in a so-called revenge porn
case’ (Hauser, 2018).
Related to the topic of revenge porn, multiple women have been subjected
to ‘creepshots’ or ‘upskirt photos’, terms which refer to the photographing
of ‘revealing, close-up images of women taken without their consent, often
in public’ by so-called ‘shooters’ (Cox, 2018). Websites, forums and other
venues have emerged online dedicated to hosting and disseminating these
invasive pictures. Motherboard journalists investigated a creepshot forum
and ‘found thousands of threads, many containing upskirt images of
potentially underage girls wearing school uniforms, women shopping, and
women on public transport, with the photographer often stalking the target
for extended periods of time’ (ibid.). Though the creation of ‘upskirt’
photographs of women has been practised long before the advent of the
internet, such activities have found a new life online where these photos can
be shared en masse and celebrated among their producers and consumers.
Further, it brings together a confluence of problematic behaviours including
harassment, stalking, obscenity and even child pornography. Interestingly,
the criminalization of ‘upskirt’ photographs in the US has only been
addressed in recent years as, traditionally, photos taken of clothed persons
in public spaces were – for the most part – protected (Marvin, 2015).
Multiple states have begun implementing laws to address this gap, however
(ibid.: 125–127). Efforts to criminalize such behaviour, however, have not
been uncontroversial. A Texas law which criminalized photographing
persons in public places if the photo was made without consent and made
‘with the intent to arouse or gratify the sexual desire of any person’ was
overturned for infringing on photographers’ First Amendment rights (Ex
parte Thompson, 2014). The court considered photography to be a form of
artistic expression and stated that ‘banning otherwise protected expression
on the basis that it produces sexual arousal or gratification is the regulation
of protected thought, and such a regulation is outside the government’s
power’ (ibid.). In the UK, the act is currently not an offence in England and
Wales but is prohibited in Scotland, though efforts are being made at the
time of writing to criminalize upskirt photography more widely (Groome,
2018). Other countries have similar prohibitions against upskirt
photography including Australia, India and New Zealand.
In addition to revenge porn and creepshots, a particularly intense form of
cyberstalking and online harassment has emerged in recent years known as
‘brigading’, ‘dogpiling’ or forming a ‘virtual mob’ (Lenhart et al., 2016:
24; Quinn, 2017: 52; Swinford, 2016). This is a form of harassment in
which offenders coalesce into an online mob and ‘dogpile’ onto the victim,
en masse, in a concentrated harassment campaign. These campaigns may
range from relatively benign, short-lived social media heckles from the
online ‘peanut gallery’ to intense, vicious and long-term crusades against
the victim involving continuous and multifarious breaches of privacy (like
doxing), personal threats and insults, relationship and employment sabotage, and various
forms of public shaming, to name a few. One of the most noteworthy cases
of brigading involved the independent video game developer, Zoë Quinn,
and the ‘GamerGate’ fiasco (previously discussed in Section 2.2.7). As
Quinn (2017: 10–14) explains in Crash Override, her autobiographical
account of her ordeal, her harassment started in 2014 after an abusive ex-boyfriend posted an irate, slanderous and mendacious diatribe to the
webforum Something Awful accusing her of misdeeds and slights including
infidelity and using sex to garner positive reviews for her games. Though
the forum administrators quickly took the post down, it set off a multi-year
cascade of events that would make Quinn’s life a veritable living hell. In
addition to other indignities and aggressions, she describes her experience
as follows:
I’ve been forced to watch as hackers tracked down almost everyone
I’ve ever known, including my father, high school classmates, and
former employers. I’ve read their lengthy discussions about how to
drive me to suicide and the merits of raping me versus torturing me
first and raping me afterward. I’ve watched major corporations bow to
them in fear while those with less resources take risks to do the right
thing and suffer for it. I’ve gritted my teeth as major conventions like
South by Southwest give these people panels, platforms where they
can whitewash what they’ve done to me, my family, my industry, and
totally unrelated bystanders. I’ve fought back against a near-cartoonish
rogues’ gallery of everyone from child pornographers to washed-up
pop-culture icons while people who should know better fell prey to
disinformation, lies, and logical fallacies. I watched in horror as too
many of my attackers used their attacks on me as a springboard into
the mainstream, from neo-Nazis radicalizing and recruiting young
blood to the far-right base of Donald Trump’s propaganda machine,
voting base, and closest advisers. (ibid.: 19)
While the internet is often celebrated for its potential to bring people
together, such community-building capacity can also be used for insidious
purposes. Virtual mobs can inflict tremendous harm on targets who struggle
to combat an onslaught of harassment and stalking which may cut across
legal jurisdictions and be facilitated by a veil of anonymity or quasi-anonymity. Quinn’s case is an example of what Mantilla (2015: 12) has
termed ‘gendertrolling’, a persistent and intensely misogynistic form of
harassment which she argues generally exhibits seven common traits
including that such attacks (1) ‘are precipitated by women asserting their
opinion online’ (we might add that just being visible online may be
enough), (2) ‘feature graphic sexualized and gender-based insults’, (3)
‘include rape and death threats – often credible ones – and frequently
involves IRL [in real life] targeting’, (4) ‘cross multiple social media or
online platforms’, (5) ‘occur at unusually high levels of intensity and
frequency’, (6) ‘are perpetuated for an unusual duration (months or even
years)’, and (7) ‘involve many attackers in a concerted and often
coordinated campaign’. On a more positive note, Quinn has worked to
make the best of a bad situation by becoming an advocate against online
harassment through her organization ‘Crash Override’.
9.4 Online Paedophilia
Concerns about the online victimization of individuals have been most
pronounced where they deal with the sexual victimization of children. Since
the 1980s, the US and the UK in particular have seen intense anxiety about
the problem of child abuse in general, and sexual abuse in particular.
Common media discussion of organized ritual abuse, stranger child
abduction and ‘recovered memories’ of repressed childhood abuse have
contributed incrementally to this heightened sense of children’s
vulnerability. This sensitivity has been further heightened by a number of
widely reported cases in which children have been abducted, abused and
then murdered by paedophiles, such as the case of Megan Kanka in the
USA and Sarah Payne in the UK. In the late 1990s, the focus upon
paedophilia turned increasingly towards the online environment. We have
already explored in the previous chapter the attention devoted to child
pornography. The problem of internet-oriented abuse is often discussed in
tandem with the circulation of obscene images of children; indeed, the two
issues are frequently treated as different facets of the same problem (see, for
example, Babchishin et al., 2015; Forde and Patterson, 1998; Stanley,
2002). It is claimed that many of those investigated for child pornography
offences have also participated in the actual sexual abuse of children.
Research by Seto and colleagues (2011: 135–136) involving two meta-analyses of research into child pornography and male ‘contact offenders’
found that ‘approximately 1 in 8 online offenders have a known contact
sexual offence history at the time of their index offence’ and that ‘the
prevalence was higher when self-report information was used, with
approximately half of the online offenders admitting to a contact sexual
offence’. A question remains, however, over the possible causal relationship
between these forms of behaviour: does interest in, and consumption of,
child pornography encourage or lead to child sexual abuse? Or is it that
those already disposed to abusive behaviour inevitably also take pleasure in
pornographic representations of similar activities? The figures cited above
imply that at least half of consumers of such images do not engage in any
actual physical abuse. This suggests that the relationship between child
pornography and child sex abuse is more complicated than is often
supposed (Stanley, 2002: 9). This complexity, however, is seldom explored
in public discussion, and the two phenomena are typically treated as
synonymous or continuous.
Online child sex abuse can be divided into that which remains virtual
(restricted to communicative abuses committed via the internet) and that
which serves as a preparatory method for later physical contact and abuse
(the so-called ‘grooming’ of potential physical abuse victims). Each will be
discussed in turn below.
UK-based research by Rachel O’Connell and her colleagues at the
Cyberspace Research Unit (CRU) claims to have uncovered a wide range of
‘cybersexploitation’ practices taking place in internet chat rooms, work
considered foundational within the scholarship on online child grooming
(Aitken et al., 2018: 1171). In such cases, adults or older adolescents with a
sexual interest in young children use online communication in order to
‘identify, deceive, coerce, cajole, form friendships with and also to abuse
potential victims’ (O’Connell, 2003: 2). This may entail, first, adults
engaging children in sexually explicit conversations. Examples include
asking questions of the child such as ‘Have you ever been kissed?’ or ‘Do
you ever touch yourself?’ (O’Connell, 2003: 7). Second, such behaviour
may entail the ‘fantasy enactment’ of sexual scenarios through online
conversation with a child. Third, it may entail what is described as ‘cyber-rape’, in which coercion and threats are used to force a child into acting out
the sexual scenarios proposed by the abuser. It has been suggested by many
commentators that adult abusers exploit the internet to disguise their
identity, posing as children or adolescents in order to win the friendship and
trust of their victims (Feather, 1999: 7; NCIS, 1999). Consequently,
children may be blissfully unaware that their internet ‘friend’ is in fact an
adult whose aim is to deceive them into participating in sexual conversation
and interaction, though more recent research by Aitken and colleagues
(2018: 1187) finds that the online grooming interactions they studied
indicated that ‘for the most part’ offenders appeared ‘to give their real ages,
and where deceit was employed, they merely represented themselves as
younger than their actual age, rather than a similar or younger age than the
victim’. The extent of age-based deceit employed by abusers thus remains
uncertain. O’Connell et al. claim that surveys of children’s online interaction
show that 53 per cent of chat room users aged between 8 and 11 reported
having had sexual conversations online (O’Connell et al., 2004: 5). It is
unclear, however, what proportion of such conversations involved an adult
interlocutor; indeed, given the ease with which disguise is possible, it may
be virtually impossible to determine whether a given conversation is an
adult–child interaction, or a case of child-to-child or teen-to-teen
‘cyberflirting’. Indeed, only 6 per cent of children aged 9–16 reported that
‘online conversations of a sexual nature were unpleasant or offensive’
(2004: 6). Putting the above points together, it may be suggested that the
vast majority of children’s online interactions of a sexual nature are nothing
more than explorations of a burgeoning curiosity about physical
relationships. Such curiosity is unremarkable, especially in the context of a
consumer culture that actively encourages children and adolescents to take
an interest in sex and sexuality (Durham, 2009; Hayward, 2013: 541–542).
This does not, however, imply that adult–child sexual interactions online
are a non-existent problem. In the aforementioned studies conducted by the
CRU, researchers registered themselves online under the guise of a child,
typically a girl aged between 9 and 12 years of age. Subsequently, a number
of self-professed adults engaged this ‘child’ in conversations of an
explicitly sexual kind. The ‘digilante’ (a portmanteau of ‘digital’ and
‘vigilante’) group Perverted Justice has used volunteers posing as children
online to document attempts by child abusers to solicit sex from minors.
The organization gathers evidence of such solicitation, often including an
arrangement for the abuser to meet the ‘child’ in person at a set location,
and hands it off to law enforcement officials. The UK’s Sexual Offences Act 2003 makes it a
criminal act to incite a child to engage in sexual activity, ‘such as, for
example, persuading children to take their clothes off, causing the child to
touch themselves sexually, sending indecent images of themselves, etc.’.
Therefore those online child–adult interactions in which the child may be
encouraged to engage in sexual acts are defined as ‘non-contact abuse’ and
punishable by up to 10 years’ imprisonment (CPS, 2017; O’Connell et al.,
2004: 6). A host of US federal and state laws also prohibit solicitation of
children for sexual purposes with similarly significant penalties possible for
offenders (Lewis et al., 2009).
The second area of online paedophile activity is linked to the commission
of offline contact abuse. Recent years have witnessed a growing concern
that paedophiles are using internet chat room contacts with children in order
to establish a relationship of apparent friendship and trust, which can then
be exploited to arrange face-to-face meetings in which sexual abuse can
take place. Abusers have also used the internet to communicate and share
knowledge, including through documents like the Pedophile’s Handbook, a
‘how-to’ guide passed around among child contact abusers, as well as
online forums and chat rooms (mostly on the darkweb) (Ormsby, 2018:
252–255). O’Connell (2003: 6–8) identifies a number of ‘stages’ to such
‘grooming’, starting with friendship formation (getting to know the child),
progressing to relationship formation (becoming the child’s ‘best friend’),
risk assessment (evaluating likelihood of being caught), leading to the
exclusivity stage (where intimacy, trust and secrecy are established). It is
only when such a bond has been formed that the paedophile will move on to
suggest sexual contact. Though research has generally been supportive of
these stages, evidence exists that grooming need not proceed in such a
linear fashion. These stages, in fact, may be fluid and offenders may engage
in activities, such as risk assessment, throughout the process (Aitken et al.,
2018: 1171–1172; Black et al., 2015: 141). Data from the Youth Internet
Safety Survey provide insights into the prevalence of the online sexual
solicitation of children. In the first wave, conducted between 1999 and
2000, the study found that 19 per cent of study participants had received
unwanted sexual solicitations (Mitchell et al., 2014: 4). The third wave of
the study, however, conducted in 2010 found that the number had declined
to 9 per cent (ibid.). In addition, so-called ‘aggressive solicitations’ –
those involving offline attempts at sexual solicitation (such as by mail or
telephone) or requests to make such contact –
remained relatively stable over time, oscillating between 3 and 4 per cent.
‘Distressing sexual solicitations’ where a child felt ‘very or extremely upset
or afraid’ were reported at 5 per cent in 2000 and had declined to 2 per cent
in 2010 (ibid.: 3–4).
In response to such findings and their associated public outcries, the UK’s
Sexual Offences Act 2003 introduced an offence of internet grooming,
‘designed to catch those aged 18 or over who undertake a course of conduct
with a child under 16 leading to a meeting where the adult intends to
engage in sexual activity with a child’ (Home Office, 2002: 25). Conviction
for this offence can result in a custodial sentence of up to five years. This
law was recently extended by Section 67 of the Serious Crime Act 2015,
which makes it a criminal offence for ‘anyone aged 18 or over to intentionally
communicate with a child under 16, where the person acts for a sexual
purpose and the communication is sexual or intended to elicit a sexual
response’ (MoJ, 2017). In the US, such grooming offences are prohibited
under Title 18 Section 2422 of the United States Code which prohibits
‘knowingly persuading, inducing, enticing or coercing an individual to
travel in interstate or foreign commerce with the purpose of engaging in
prostitution or any criminal sexual activity’ with stiffer penalties for those
soliciting persons under the age of 18, as well as Section 2425 which
penalizes ‘the use of the mail or facility of interstate or foreign commerce,
including a computer, to transmit information about a minor under the age
of 16 for criminal sexual purposes’ (OUSA, 2018a, 2018b). Under these
statutes, offenders may be fined in addition to receiving up to 15 years in
prison. Some states have also crafted legislation addressing such conduct.
The precise extent of such grooming activity is difficult to assess, as its
identification depends upon a judgement about an individual’s ultimate
intention to abuse. Often no physical abuse need actually be attempted to
secure a conviction, merely a perception that an individual intended to use a
meeting set up via the internet to engage in the sexual abuse of a minor. As
Bennion (2003: 3) points out:
The object is to catch adults who try to make friends with children so
as later to have sex with them. But how is that to be proved? If the
suspect goes on to carry out a sexual assault that can be charged as an
offence in itself, but there is then no need for the preliminary offence
of grooming. Where no assault later ensues, how can preliminary
grooming be established?
While implementation of this law may be beset with evidential dilemmas,
what is clear is the conviction among most politicians, child protection
organizations and members of the public that internet grooming is a real and
palpable threat that must be tackled through criminal legislation.
9.5 Thinking Critically about Online
Victimization: Stalking and Paedophilia as
Moral Panics?
Despite the wide-ranging consensus that online victimization is a serious
and growing threat, a number of academic commentators have been quick
to critically analyse its rise to prominence. Such analysts operate within a
perspective that views crime problems as social constructions. From this
viewpoint, it is quite possible for intense public, political and mass media
concern to centre on an issue or behaviour to an extent that is in fact quite
unwarranted, objectively speaking. A behaviour that is rare (or even non-existent) may come to be the focus of public anxiety, fear and panic. Hence
Joel Best (1999) examines how stalking was ‘discovered’ as a crime
problem during the course of the 1990s. Behaviour that, in previous times,
may well have evoked little concern or comment suddenly came to be seen
as an imminent and widespread threat. The extent of such incidents was
inflated by incrementally expanding the behaviours that were captured
under the evocative and sinister label of ‘stalking’. For example, D’Ovidio
and Doyle’s (2003) definition of cyberstalking (see Section 9.3) includes
within its ambit electronic communications intended to ‘annoy’ their
recipients. In all likelihood, the vast majority of people regularly experience
communications which annoy them: for example, being pestered by a
demanding colleague, friend, family member, or even the sales and
marketing departments of companies that repeatedly send spam mail
advertising their goods and services. If studies ask respondents whether
they have received repeated electronic communications that annoy them,
the large majority are likely to respond affirmatively. This then has the
effect of making stalking appear as a problem nearing epidemic
proportions. Moreover, analysts of moral panics point out the role played in
problem construction by interest groups who have a stake in promoting
public and political awareness of a particular issue. Hence Best notes how
the rise of stalking has been accompanied by the expansion of what he calls
a ‘victim industry’. This industry includes lawyers who represent those who
claim victimization; medical professionals and therapists who study, treat
and counsel victims; educators and academics who are drafted in to train
people and raise awareness about stalking and how to deal with it; and the
mass media for whom a novel crime wave makes for attention-grabbing
news coverage. Through the efforts of such actors, the newly minted social
problem can become institutionalized and the target of concerted political
and law enforcement attention (see: Kappeler and Potter, 2018: 93–114).
The case of paedophilia is, if anything, even more open to analysis as a
socially constructed moral panic. Over the past few decades, there have
emerged successive ‘scares’ about different forms of child abuse. As
Kappeler and Potter (2018: 63–91) explain, multiple factors contributed to
ongoing public anxiety over threats of abduction, abuse and exploitation of
children since the 1970s, such as the use of misleading statistics which
conflated ‘missing’ with ‘abducted’ children; moral entrepreneurs who
actively campaigned on or otherwise promoted awareness of these issues
(see: Becker, 1963); and highly publicized events involving dangers to
children (many of which were overstated, misleading, or outright false). For
example, in the early 1980s there were a number of cases in the USA and
Europe in which workers in child care centres and nurseries were accused
of organizing ritual, systematic and ongoing satanic sexual abuse of their
young charges, in some cases allegedly involving the drinking of human
blood, cannibalism and human sacrifice. Such allegations were
subsequently proven groundless, but not before they had generated
widespread panic among frightened parents and extremely damaging
criminal investigations of those accused (De Young, 2004). In the 1990s,
there occurred an outcry about child sex abuse when thousands of adults
started coming forward claiming to have recovered previously repressed
memories of childhood abuse at the hands of their parents and family
members. Many academic psychologists subsequently judged a great
proportion of such cases as instances of ‘false memory syndrome’, in which
individuals had been persuaded (or persuaded themselves) that they had
undergone abuse that in fact had not occurred (Loftus, 1996). From this
viewpoint, the apparent threat presented by paedophiles to children online
might be viewed as a new instalment of this pattern of over-reaction to the
risk of child victimization, especially when we consider that data from the
Youth Internet Safety Survey indicates that the ‘main source of unwanted
sexual solicitations was other youth and young adults under the age of 25,
not older adults stereotyped as “internet predators”’ (Mitchell et al., 2014:
4). Frank Furedi (2005: viii, 32–34) views the recent public response to
online paedophilia as symptomatic of what he calls our contemporary
‘culture of fear’, one which is beset by ‘obsessive preoccupations about
safety’. This ‘obsessive preoccupation’ has centred in particular upon the
threat to children at the hands of strangers who intend to variously abuse,
abduct and/or murder them. Parents have responded by becoming
pathologically suspicious of anyone who might come into contact with their
children, attributing to them the worst possible intentions. Children are
increasingly counselled that danger is lurking everywhere, and all strangers
present a threat to their well-being. This percolation of fear becomes clear
when we consider the responses of the children interviewed by Burn and
Willett (2003: 10–11) as part of an evaluation of a child-oriented internet-risk education campaign called Educanet. A group of 11-year-old girls had
the following to say about paedophiles and the internet:
Becky: Most people are perverts, innit [local expression – ‘isn’t it’]?
Claire: You know, like on the internet.
Daniella: There’s millions.
In such a cultural context, children as well as their parents are coming to
increasingly view all contact with strangers as a potential overture to sexual
abuse or worse (Furedi, 2001). Further, this fear extends to anxiety about online
predation. UNICEF (2016: 48) claims that an opinion poll of ‘more than
10,000 18-year-olds from 25 countries found that over 80 per cent of them
believe children are at risk of being sexually abused or manipulated online’.
In 2010, one survey of over 600 parents of school-age children found that the
greatest perceived risk to children among the respondents was kidnapping
(38 per cent) (as described in Kappeler and Potter, 2018: 75). Interestingly,
when asked about online risks, the same proportion listed the most
significant perceived risk as online predators. Some child welfare agencies
have expressed serious concerns over the effects on children’s health and
development wrought by over-protective parents who isolate their children
to keep them safe from predation (Furedi, 2005: 116). More reasonable
assessments, however, might point out that the risk of stranger abuse is
statistically almost marginal: for example, figures for police recorded crime
in England and Wales in 2016–7 reveal that 1,130 offences of ‘grooming’
were recorded by police over the preceding year, or approximately 11 per
100,000 persons under the age of 16, with no indication as to what
proportion of these incidents involved an online element (or even which
percentage involved strangers) (ONS, 2018b: A4). Interestingly, the most
recent data (for the period Jan–Dec 2017) shows a three-fold jump in
recorded incidents, to 3,323. While this may represent a sudden increase in
the actual number of such incidents, we hypothesise that it is more likely
the result of greater reporting as a result of heightened public attention to
the issue. In particular, this period saw intensive media coverage about the
activities of ‘grooming gangs’ in a number of British cities (including
Newcastle, Rotherham, Rochdale, Telford and Oxford), cases in which
groups of men had systematically groomed and then sexually abused
hundreds of vulnerable girls and young women. Additionally, concerted
criticism of social media companies’ supposedly inadequate steps to tackle
child-oriented sexual predation may have contributed to greater alacrity on
the part of those platforms when it comes to identifying and reporting
suspect behaviour (Yar, 2018). Relatedly, Kappeler and Potter (2018: 89)
isolate some facts about contemporary risks to children in the US: (1) ‘less
than one-hundredth of 1% of all missing children’ suffer the most feared
fate of abduction combined with murder or sexual abuse; (2) missing
persons and crimes against children have declined in recent decades; (3)
‘the falling rates of crimes against children combat the myth that the
Internet amplifies danger’; (4) ‘children are far more likely to be harmed by
people they know than by people they don’t’; and (5) most missing children
have problematic home lives and may ‘run away or are pushed out’ because
of ‘dysfunctional relationships’ or because ‘they are victims of abuse and
neglect’. Even in the 1980s and 1990s, such risks to children were wildly
overstated. In the UK, for instance, Furedi (2005: 110) points out that in the
10 years between 1983 and 1993, 57 children were murdered by strangers,
an average of five per year. In contrast, more than 2,000 children are killed
or seriously injured every year as a result of road traffic accidents in Great
Britain (UKDOT, 2017). Yet child murder at the hand of paedophiles has
attracted far greater media attention, public anxiety and political action than
issues relating to road safety, despite the fact that it is the latter that presents
by far the greater threat to children’s safety. Consequently, critics such as
Furedi call for a balanced reassessment of the risks that children actually
face, rather than surrendering to a wave of cultural panic that will inevitably
further erode people’s ability to establish relationships of openness, trust
and social bonds with their fellow citizens. The over-estimation of the
dangers of online victimization might simply serve to create a generation of
adults who view the internet as a dangerous no-go territory, rather than
embracing it as an invaluable opportunity for fostering social
communication, interaction and exchange.
9.6 Summary
This chapter has explored recent high-profile debates about the online
victimization of individuals in the forms of cyberstalking and internet
paedophilia. Numerous assessments indicate that these are serious and
growing threats to people’s online safety, and may well spill over into the
terrestrial world with tragic consequences for victims. However, critics have
argued that the risks of online victimization have been grossly overestimated and inflated, and that the dangers presented by internet stalkers
and paedophiles are in fact negligible. They also point to the damage that
can be caused to individuals, families and communities when the perceived
risks of such victimization reach a point where they induce individuals to
engage in extreme avoidance behaviour. It may well be that while online
victimization cannot simply be dismissed as a ‘non-problem’, the
disproportionate estimation of its likelihood may serve to undermine
people’s trust in and engagement with this new medium. The internet offers,
after all, unprecedented new opportunities for building social contacts and
relationships that transcend barriers of space and culture, enabling us to
resist processes of individualization and isolation that have come to
characterize Western societies. This medium’s potential as a basis for
creating social solidarity may yet be damaged if the fear of ‘stranger
danger’ entices us to withdraw from rather than embrace open
communication.
Study Questions
What are the apparent patterns of stalking behaviour on the internet?
How might they differ from cases of terrestrial stalking?
How do we distinguish between ‘dogpiling’ that should be considered public and free speech
and that which constitutes harassment?
What do you understand by ‘paedophilia’?
Is online paedophilia a serious threat to children’s safety?
Is there a ‘moral panic’ about online victimization?
Further Reading
A useful overview of the debate about stalking is provided in McGuire and Wraith’s article,
‘Legal and psychological aspects of stalking: A review’ (2000). The wide-ranging
cyberstalking research conducted by Paul Bocij and his colleagues is available online at
www.stalking-research.org.uk. For a critical look at stalking, Chapter 3 of Joel Best’s book,
Random Violence: How We Talk about New Crimes and New Victims (1999) is recommended.
The reader should also refer to the Data and Society Research Institute’s report entitled Online
Harassment, Digital Abuse, and Cyberstalking in America (Lenhart et al., 2016). For a first-hand account of virtual mob harassment, see Zoë Quinn’s Crash Override: How
Gamergate [Nearly] Destroyed my Life, and How We Can Win the Fight against Online Hate
(2017). Emma Jane’s Misogyny Online (2017b) and Karla Mantilla’s Gendertrolling: How
Misogyny Went Viral (2015) are also illuminating. On the issue of online paedophilia, the work
of the Centre of Expertise on Child Sexual Abuse is available for download at
https://www.csacentre.org.uk/. A helpful overview and consideration of online child
exploitation can be found in Jo Bryce’s (2010) ‘Online sexual exploitation of children and
young people’. A critical reading of this issue is offered by Frank Furedi in his books
Paranoid Parenting (2001) and Culture of Fear: Risk Taking and the Morality of Low
Expectation (2005). For more about moral panics and mythologies surrounding the crimes of
stalking and child predation, see Victor Kappeler and Gary Potter’s The Mythology of Crime
and Criminal Justice (5th edn) (2018).
10 Policing the Internet
Chapter Contents
10.1 Introduction
10.2 Public policing and the cybercrime problem
10.3 Pluralized policing: The involvement of quasi-state and non-state
actors in policing the internet
10.4 Privatized ‘for-profit’ cybercrime policing
10.5 Explaining the pluralization and privatization of internet policing
10.6 Critical issues about private policing of the internet
10.7 Summary
Study questions
Further reading
Overview
Chapter 10 examines the following issues:
the developing responses to cybercrime by public police authorities, at local, national and
transnational levels;
the crucial involvement of quasi-state and non-state actors in internet policing and regulation;
the emergence of a for-profit private sector that provides internet policing, security and crime-prevention technologies and services;
explanations for the pluralized and privatized character of internet policing;
the problems and challenges that the current configuration of internet policing brings in its
wake.
Key Terms
Digilantism
Governance
Pluralization
Police
Policing
Privatization
Scambaiter
10.1 Introduction
The term ‘policing’ is used to denote a wide range of regulatory practices
that serve to monitor social behaviour and ensure conformity with laws and
normative codes. It is most commonly associated with the police, a formally
organized institutional apparatus that is charged with upholding laws on
behalf of society and is ultimately directed by and accountable to the state.
The police have been centrally involved in responding to the problem of
cybercrime – although as we shall see, their efforts in this regard have
sometimes been criticized for being piecemeal and inadequate, and subject
to a number of challenges that prevent a thorough response to the scope and
scale of internet offences. It is important, however, to emphasize that
policing can be informal as well as formally organized, and can involve a
wide range of social actors and institutions other than the police themselves.
So the police are but one of the many agencies that undertake policing
across various walks of life. Indeed, it has been argued that such
‘pluralization’ and ‘fragmentation’ of policing is a hallmark of
contemporary systems of social control. This involvement of other actors in
policing activity is especially pertinent in the case of cybercrime, where we
can discern an especially high level of involvement in policing on the part
of private organizations, charities and internet users. In this chapter we will
consider the wide range of policing activities (both public and private) that
have emerged in response to cybercrime, and explore the problems,
challenges and tensions that arise as society attempts to grapple with a
range of new crime problems.
10.2 Public Policing and the Cybercrime
Problem
In contemporary society, the police are expected to fulfil not one but a range
of different roles and functions. Most obviously, there is the idea that police
should serve to prevent crimes from taking place (through, for example,
publicly visible patrolling that is supposed to act as a deterrent to would-be
offenders) and to maintain public order. Further, the police are expected to
detect crimes, investigate them by identifying offenders and gathering
evidence, and bring offenders to book so that they can be tried and (if
proven guilty) convicted and punished. These days, the police are also
supposed to act as a source of public information, advice and reassurance,
and to work pro-actively with citizens and communities to help tackle the
root causes of offending (Mawby, 2008). This diverse set of roles and
expectations in relation to terrestrial crime sometimes places the police in
the unenviable situation of having to balance different imperatives and work
effectively on many different fronts simultaneously. Confusion about the
precise role and limits of police activity might ensue, along with overstretch arising from ever-increasing demands and limited resources. This
situation can also be seen to apply in respect of cybercrime, which adds
some very specific problems of its own, making policing the internet a
particularly challenging task.
Policing demands brought about by the ‘computer revolution’ focused
initially upon relatively specialized tasks related to computer forensics,
where criminal investigations require the recovery, reconstruction,
collection, collation and authentication of digital evidence (Walden, 2010).
As computer-related crimes have become more commonplace, so the
demand for such expertise has increased. Such forensic work can be very
painstaking and time-consuming, with the demands exacerbated in cases
where computer data are protected by encryption technologies that must be
‘cracked’ before they become legible (see the discussion of encryption in
Chapter 11). As computers have become interconnected through networked
technologies, forensic investigation also has had to deal with
communications data which can provide crucial evidence of involvement in
online offences. For instance, many investigations hinge on tracking down
an IP address which can be linked to a particular suspect in a location but
offenders may obstruct such investigative efforts through the use of
anonymizing technologies like bulletproof hosting services, fast flux and
VPNs. Investigators also often rely on tracing electronic financial
transactions to find suspects, a task which has become more difficult in the
age of cryptocurrency (Ormsby, 2018: xxiv). Moreover, modern computer
storage devices are capable of holding huge amounts of data, and networks
are capable of transmitting huge bitstreams of data, so the sheer volume of
material that must be sifted, sorted and analysed presents a major challenge
in itself. These difficulties inevitably place significant demands upon police
resources, with such work requiring very specific expertise not available
amongst ‘rank and file’ officers who tend to feel they are not properly
equipped to handle such crimes (Burke et al., 2002). Digital forensics are
discussed further in Section 9.2.1.
In principle, the police might be expected to investigate the entire range of
internet-based offences that have been explored in preceding chapters:
hacking, the distribution of malicious software, piracy, frauds, thefts, hate
speech, stalking, and the circulation of offensive and illegal content such as
child pornography. In reality, however, the sheer volume of online offences
necessitates a significant degree of selectivity, with police resources being
targeted to those offences that are deemed most serious in terms of their
scale and the harms that are caused to victims. This selectivity relates to
what Wall (2007: 161) calls ‘the de minimis trap’ – referring to the dictum
that de minimis non curat lex, meaning that ‘the law does not deal with
trifles’. Given that the vast majority of cyber offences are low-impact in
nature, entailing minimal harm to many discrete victims, they tend to fall
below the threshold of seriousness that will activate involvement from the
police and other criminal justice agencies. As we shall see later in this
chapter, responsibility for addressing many such offences tends to fall with
actors outside the system of public policing, or is addressed through
technological measures aimed at prevention rather than reactive
investigation and prosecution. Police efforts in relation to cybercrime have
tended to focus mainly upon offences such as child pornography, hacking,
political offences related to terrorism, and interpersonal offences where the
risk of ‘real world’ harm to victims is deemed significant (Hinduja, 2004a;
Yar, 2013). These crimes, as Yar (2013: 494) describes, tend to fall within
certain ‘hierarchies of standing’ which ‘allocate differential significance to
crimes according to the victims, offenders and harms involved in them.’
The problem of selective police focus is further compounded by the fact
that many officers simply lack the requisite knowledge to understand
cybercrime issues and their harms (Boes and Leukfeldt, 2016: 186).
Officers may thus be unable to adequately assess the seriousness of
cybercrimes. Bond and Tyrrell (2018: 13), for
example, conducted a survey of police officers and staff in England and
Wales and found ‘a significant lack of understanding and confidence felt by
police when investigating and managing revenge pornography cases’. From
these findings, they argue that ‘inconsistent and incomplete knowledge is
likely to lead to ineffective management of cases as well as increased victim
dissatisfaction and suffering’ (ibid.: 13). Financial and technological
resource limitations may similarly curtail law enforcement involvement in
cybercrime investigations. Jewkes and Andrews (2005: 51) further suggest
that such limitations on resources and expertise force officers to focus on
‘“low-hanging fruit” that need fewer resources and may require less
complicated investigations in order to yield results’.
Additionally, police responses have been constrained not only by resource
limitations but also by the established culture and ethos of policing which
may be resistant to redefining its remit away from a traditional diet of
terrestrial and street-based offences, instead viewing cybercrime
investigations as not ‘real police work’ (Holt et al., 2015: 63; Jewkes and
Andrews, 2005: 53; Jewkes and Yar, 2008: 595; Nhan and Huey, 2013: 79,
88; Wall, 2007: 160–161). Relatedly, research also indicates that while
some officers welcome additional training for handling cybercrimes and the
creation of specialized local units, many officers are not so enthusiastic
(Bossler and Holt, 2012; Holt and Bossler, 2012; Holt et al., 2015: 58, 88).
As the scale of cyber offences has grown, and the risks of online
victimization have become more widely acknowledged, police forces have
incrementally developed their capacity to deal with these problems. If we
take the UK as an example, all of the territorially based police forces now
have some kind of specialist unit tasked with addressing at least
some variants of internet-based crimes. The extent of this provision, as well
as its organizational structure, will vary across forces. In many cases,
responsibility for tackling internet-based offences will be shared across
various units: for example, e-crime investigations may fall under the
purview of units dealing with issues such as fraud, financial crime, child
protection and sex offences, as well as falling to specialist computer crime
units (Jewkes, 2010: 540–541). At a national level, the perceived need for
centralized expertise and coordination of e-crime investigations at the turn
of the century gave birth to the UK’s National Hi-Tech Crime Unit
(NHTCU) in 2001 (subsequently absorbed into the Serious and Organised
Crime Agency [SOCA] when the latter was created in 2006). In 2009, the
Police Central e-Crime Unit (PCeU) was established. The unit, based in the
Metropolitan Police Service, provides specialist training and coordinates
investigative responses across various regional forces. The PCeU’s remit is
confined to ‘the most serious incidents’ relating to computer hacking,
malicious software, denial of service and fraud (PCeU, 2011). Both SOCA
and the PCeU were consolidated into the National Crime Agency (NCA), which
handles cybercrime cases through its National Cyber Crime Unit. The NCA
also hosts the Child Exploitation and Online Protection Centre (CEOP)
which undertakes investigation into child-related offences, as well as
providing education, training and advice. In addition to law enforcement
agents, CEOP’s personnel include experts from other organizations such as
child protection charities, internet providers and computer software
companies (e.g. Microsoft). Similar patterns of police provision are
apparent in other national contexts. For example, there has been a
significant growth of specialist computer crime units in the USA, located
variously at local, state and federal levels. According to research by
Hinduja and Schafer (2009: 285), however, this distribution is uneven, with
some large states (e.g. California) boasting numerous such units, while
others appear to have little or no specialist provision to tackle e-crime. In
examining the uneven distribution of these specialized units across police
organizations, Willits and Nowacki (2016: 118) found that ‘larger agencies
are more likely to have a cybercrime unit than smaller agencies, state-level
agencies are more likely to have a cybercrime unit than municipal and
county agencies, and agencies with increased advanced material technology
use and specialization are more likely to have cybercrime units than other
agencies’. In particular, organizational size appears to be a major
predictor of the adoption of cybercrime specialists within organizations
(Reaves, 2015; Willits and Nowacki, 2016; see discussion in Section 1.7).
Unfortunately, most
departments are relatively small, meaning that the vast majority of
departments in the country have no personnel or units specifically
designated for cybercrime investigations (Reaves, 2015). At the federal
level, investigative activities for various cybercrimes are dispersed across
multiple agencies including the congressionally funded National White
Collar Crime Center (NW3C), the Federal Bureau of Investigation (FBI)
and the Department of Homeland Security (DHS). Canada’s efforts to
police cybercrime are led by the Royal Canadian Mounted Police and
include their Integrated Technological Crime Units (ITCUs) and Technical
Investigation Services (TIS); in Australia, investigating cybercrime falls
under the remit of the Australian Federal Police (AFP) and their
Cybercrime Operations team. In addition to the creation of specialized
units, organizations have begun developing standardized training
programmes and guidelines for cybercrime investigators. To provide a few
examples, organizations like the Law Enforcement Cyber Center in the US
have been created to provide officers with a repository of resources
including links to training, details about recent developments in cybercrime
investigations and educational materials. The National White Collar Crime
Center (NW3C) provides both in-person and online cybercrime
investigation training courses for law enforcement personnel. In the UK, the
Association of Chief Police Officers (ACPO) created a Good Practice
Guide for Digital Evidence which provides guidance to officers for
conducting searches and seizures of computer technologies, analysing
digital evidence and other issues (ACPO, 2012). These are just a few
instances of the ways in which policing in various countries is being
reconfigured and adapted to respond to cybercrime threats.
As noted at various points in preceding chapters, one of the major law
enforcement challenges presented by cybercrime arises from the
transnational, border-spanning character of the internet. Though such
crimes are not wholly divorced from the local and territorial contexts in
which they originate (Lusthaus and Varese, 2017), the de-territorialized
nature of cybercrime presents particular difficulties for a system of policing
that has historically been organized on a territorial basis: police forces are
configured on a geographical basis and carry law enforcement
responsibilities for their own ‘patch’ (Boes and Leukfeldt, 2016: 188; Wall,
2007: 160). Consequently, cybercrimes have necessitated a significant
increase in cross-police and cross-jurisdictional cooperation and
coordination in order to effectively investigate globally distributed offences.
Police investigations into child pornography rings (such as operations
‘Starburst’, ‘Rescue’ and ‘Pacifier’ discussed in Section 8.3) epitomize this
transnational police cooperation, requiring as they do the involvement of
forces from many different countries. Policing cybercrime in this way,
however, can encounter numerous difficulties, including problems of
linguistic diversity, variations in policing cultures, asymmetries in available
resources and levels of expertise across different forces, as well as the
aforementioned problem of legal pluralism or statutory variations across
national systems. This latter issue has been addressed through attempts to
harmonize cybercrime law via international treaties and conventions, such
as the Council of Europe’s Convention on Cybercrime. The practical
problems of cooperation are being addressed through existing transnational
structures of political governance. For example, the EU created its own
hi-tech crime agency, the European Network and Information Security Agency
(ENISA), in 2004 to supplement the capacity already available through the
likes of Europol and INTERPOL. Further, in 2018, ENISA, the European
Defence Agency (EDA), Europol and the Computer Emergency Response
Team for the EU institutions, agencies and bodies (CERT-EU) signed a
‘memorandum of understanding’ to ‘establish a cooperation framework
between their organisations’ to combat cybercrime (Europol, 2018).
Additionally, in 2018 a group of EU member states, including Lithuania,
Estonia, France, Finland, the Netherlands, Spain and Romania, initiated
plans to create a ‘European Cyber Rapid Response Force’ (a further four
countries – Germany, Greece, Belgium and Slovenia – have joined the
agreement as
‘observers’) (CBR, 2018). The Association of Southeast Asian Nations
(ASEAN) has similarly established ASEANAPOL which, among other
charges, works to facilitate cooperation between partner countries in
combating cybercrime. Between 2015 and 2017, INTERPOL and the
Canadian government sought to increase the capacities of select Latin
American and Caribbean countries to investigate, prosecute and prevent
cybercrimes through the Cybercrime Capacity Building Project in Latin
America and the Caribbean (INTERPOL, 2015). The sorts of initiatives
noted here can be seen as part of an emerging (albeit ad hoc) transnational
‘architecture’ of public policing intended to furnish a more adequate
response to globalized cybercrime problems.
10.2.1 Digital Forensics
As previously mentioned, law enforcement has increasingly been tasked
with searching for, seizing and analysing digital data as evidence in
criminal cases, particularly those involving various kinds of cybercrimes.
The ‘collecting, analysing and reporting on digital data in a way that is
legally admissible’ constitutes the field of digital or computer forensics
(www.forensiccontrol.com, as quoted in Kävrestad, 2017: 4). While this
definition is simple and straightforward, gathering and analysing digital
data in a way that will be admissible in court is often easier said than done
and presents significant challenges for investigators and forensic
technicians. Evidence must be gathered in a manner that reduces the
chances that the data will be unduly altered (and thus rendered inadmissible
in many jurisdictions) (ibid.). For example, when seizing digital devices,
investigators may place them within ‘Faraday bags’ which help prevent the
devices from being accessed or altered remotely. Forensic technicians,
when looking through seized storage, may also analyse a copy of the data to
avoid tampering with the original (ibid.). There are thus significant
challenges confronting investigators involved in building cybercrime cases
regarding technical and legal know-how. As the Dude from The Big
Lebowski might say, there are a ‘lotta ins, lotta outs, lotta what-have-you’s’.
To help overcome these issues, some agencies have dedicated digital
forensics personnel or units to help investigators gather, secure and analyse
digital evidence.
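The practice of working on a copy while preserving the original can be illustrated with cryptographic hashing: a technician records a digest of the seized image, duplicates it, and verifies that the copy matches before analysis begins, so any later change is detectable. The following is a minimal sketch, not a description of any specific forensic suite; the file names are hypothetical:

```python
import hashlib
import shutil

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_working_copy(original: str, copy: str) -> str:
    """Duplicate a seized image and verify the copy is bit-identical.

    Returns the digest so it can be recorded alongside the chain of
    custody; raises if the copy does not match the seized original.
    """
    baseline = sha256_of(original)
    shutil.copyfile(original, copy)
    if sha256_of(copy) != baseline:
        raise RuntimeError("working copy does not match seized original")
    return baseline
```

In practice, forensic tools also record the baseline digest at the moment of seizure, so that the original itself can be shown in court not to have changed since acquisition.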
Despite these challenges, digital forensics has become increasingly
important for law enforcement agencies tasked with cybercrime
investigations, though, as previously noted, requisite resources and
expertise in the area are unevenly distributed across agencies. Officers
make use of multiple tools to facilitate digital evidence acquisition and
analysis. For instance, if investigators are looking for evidence of child
pornography possession, they may employ a program like RedLight, a
digital scanner which searches for images likely to be pornographic
‘through characteristics such as high concentrations of skin tone, and edges
indicative of humans’ (DFCSC, 2018a). They may also use programs like
Video Previewer which scans video files and creates a .pdf file featuring
snapshots from the file (DFCSC, 2018b). The idea is to allow law
enforcement to search these files while minimizing time spent watching
every frame. Other programs, like FieldSearch, allow for more
comprehensive searches of computers for evidence of possible crimes
including data on ‘Internet histories, images, multimedia files and results
from text searches’ (JTIC, 2018). FieldSearch creates reports for
investigators that may include file names, metadata (like dates created and
accessed), file location data, image previews, and other pieces of
information. Programs like OSForensics allow technicians to search for and
within files (including possibly deleted files) and locally stored emails,
recover login and password details, and chart system information, among
other functions. Other applications may allow law enforcement to crack
certain file and system encryption methods to access otherwise secure data
(see Chapter 11 for more on encryption). These are only a few examples of
digital forensics tools that may be employed by cybercrime investigators.
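The kind of file-metadata report generated by triage tools such as FieldSearch can be approximated in a few lines: walk a directory tree and record, for each file, its name, location, size and timestamps. This is a simplified illustration of the general technique, not the actual implementation of any tool named above:

```python
import os
import datetime

def metadata_report(root: str) -> list:
    """Walk a directory tree and collect basic file metadata,
    similar in spirit to the reports produced by triage tools."""
    report = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            report.append({
                "name": name,
                "location": dirpath,
                "size_bytes": st.st_size,
                # Modification and access times, as kept by the filesystem
                "modified": datetime.datetime.fromtimestamp(st.st_mtime).isoformat(),
                "accessed": datetime.datetime.fromtimestamp(st.st_atime).isoformat(),
            })
    return report
```

Note that simply running such a scan on original media would itself update access times; this is one reason technicians work from verified copies, as discussed above.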
10.3 Pluralized Policing: The Involvement
of Quasi-state and Non-state Actors in
Policing the Internet
As indicated in the introduction to this chapter, a distinctive feature of
internet policing is the extensive involvement of a wide range of quasi-state
and non-state actors and agencies. Despite the increasing attention to
cybercrimes on the part of public police forces, the greater part of internet
policing activity is, in fact, undertaken by organizations located in the
private, charitable and voluntary sectors, as well as by computer users
themselves who are ‘responsibilized’ to take measures that protect
themselves from victimization. We will consider here the involvement of a
range of such non-state actors in cybercrime policing, before moving on to
explain and evaluate these arrangements.
From its inception the culture of the internet has tended to favour self-regulation amongst users, not least because of a commitment to preserving
the internet as a space of open communication free from state censorship
and interference. Consequently, user communities have sought to institute
their own systems of (formal and informal) social control and sanctions in
order to address transgressive behaviour, rather than refer such problems to
public agencies as a matter of course. The distinctive regulatory architecture
of the internet has taken the form of a complex amalgam of dispersed
mechanisms, rather than simply emerging as a ‘top-down’ state control.
While statutory measures have been introduced over time to control internet
content, such formal measures are only one small part of the overall
apparatus that regulates internet activity. The preference for a self-regulatory system was instituted in the early years of the internet. For
example, it was the Clinton Administration that, in 1998, created the
Internet Corporation for Assigned Names and Numbers (ICANN) to take
responsibility for the management of internet domain names and IP
addresses, as well as promoting the operational stability of the internet
(ICANN, 2008; Mueller, 1999). ICANN is a not-for-profit corporation
comprising a wide range of ‘stakeholders’ including government
representatives, business organizations, independent consultants and
academics, and seeks to develop its policies ‘through bottom-up, consensus-based processes’. While critical questions have been asked about the extent
to which ICANN is able to be truly independent of governmental influence
(Kleinwaechter, 2003), the pluralistic nature of its composition is testimony
to the multi-party character of internet regulation. Moreover, ICANN’s
remit is fairly limited, being restricted to formal and structural aspects of
the internet as an operating environment, and it takes no responsibility for
managing or arbitrating on the substantive content of websites.
The regulation, policing or control over internet content (including the
prohibition or removal of content) is a deeply contested issue amongst
stakeholders and users. As noted in Chapter 1, the culture of early internet
activists was deeply imbued with both a ‘counter-cultural’ commitment to
the maximization of free speech and open communication, and a
corresponding suspicion of both governmental control and the motivations
of powerful corporations. Many of these activists expressed fears that the
internet would progressively fall under the sway of state censorship;
something that, alas, has been borne out by increasing state efforts
worldwide to block access to politically inconvenient content through use
of blocking and filtering technologies (see Deibert et al., 2007; Zittrain et
al. 2017). The defence of free speech, and of the open exchange of
knowledge and ideas, has been ardently defended by a number of
organizations. For example, the Electronic Frontier Foundation (EFF) was
established in 1990 by a coalition of ‘lawyers, policy analysts, activists, and
technologists’ in order to defend free speech online and combat attempts at
censorship by mounting legal challenges against both the US government
and large corporations (see www.eff.org; see also Gelman and McCandlish
(1998) for an example of the EFF’s public role). Similarly, the Electronic
Privacy Information Centre (EPIC) was set up in 1994 with a view to
protecting users’ privacy and defending their civil liberties in the online
environment (see http://epic.org; see also EPIC, 2007). As we shall see
below, however, the rise of various internet-based crime problems has
stimulated the growth of a wide range of extra-state organizations and
alliances aimed at monitoring the internet for objectionable content and
activities, and pursuing the prohibition and removal of such material.
Attempts to police the internet by non-state actors have emerged hand-in-hand with the proliferation of online crime problems and intensified public
awareness and concern over such crimes. Considerable attention is now
devoted to monitoring the internet for content that may variously be seen as
illegal or improper (such as child pornography, hate speech, and illegitimate
and unlicensed copyrighted content). For example, the Internet Watch
Foundation (IWF) was established in the UK in 1996 by an association of
internet service providers (ISPs), with government backing, although it
operates as a self-regulating charitable trust; its membership has
subsequently expanded to include operators of mobile telecommunications
services, content providers, filtering companies, search engine providers
and financial companies (see www.iwf.org.uk). Its initial brief was to
combat child pornographic material (or, as they prefer it, ‘child sexual
abuse images’); however, this remit was later expanded to cover both
criminally obscene (but non-child-oriented) content and instances of hate
speech (material inciting racial hatred). The IWF operates a ‘hotline’ to
which interested parties (be they representatives of organizations or
individual members of the internet-using public) can report illegal content.
The IWF produces a ‘blacklist’ of websites or pages it deems to contravene
relevant UK laws on child sex abuse, obscenity and race hatred, and this list
is used by many ISPs to block access to, or remove from the Web,
offending content. The IWF also produces a continually updated ‘hash list’
of illegal images which helps service providers remove child pornographic
images, monitors newsgroups for illegal content, and provides other services
to help reduce the proliferation of child exploitation imagery. As a result of
its operations the IWF claims to have reduced the proportion of child
pornographic content that is hosted in the UK from 18 per cent of the
worldwide total in 1997 to less than 1 per cent today (IWF, 2017: 4–5;
2018a). Other organizations pursuing similar roles include the Internet
Hotline Providers in Europe (INHOPE – founded in 1999) and the
Association of Sites Advocating Child Protection (ASACP – an alliance of
US-based adult pornographic content providers which undertakes measures
such as: self-regulation through approved membership and certification of
legitimate adult sites; informing member providers about child protection
laws; engaging in educational and outreach activities directed toward
government and the public; and providing a hotline through which both
adult content providers and consumers can report child pornography
identified on the internet [ASACP, 2018]).
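The ‘hash list’ approach mentioned above exploits the fact that a cryptographic digest fingerprints a file: a provider can compute the hash of uploaded content and compare it against a list of digests of known illegal material, without having to inspect or redistribute the material itself. A generic sketch of such blocklist matching (the list entry here is purely illustrative, not a real hash-list value):

```python
import hashlib

# A hypothetical blocklist of SHA-256 digests of known prohibited files.
BLOCKLIST = {
    hashlib.sha256(b"known prohibited file contents").hexdigest(),
}

def is_blocked(content: bytes) -> bool:
    """Return True if the content's digest appears on the blocklist."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST
```

Exact cryptographic hashing, as shown, only catches bit-identical copies; deployed systems therefore typically supplement it with perceptual hashing so that trivially altered versions of an image still match.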
The role of organizations such as the IWF remains controversial, however.
For example, in 2008 there emerged a dispute in the UK following a
decision by the IWF that a page on the free online encyclopedia
‘Wikipedia’ contravened child pornography laws, and arranged to have
access to said page blocked for those accessing the internet using UK-based
ISPs. The page in question dealt with the veteran German rock band The
Scorpions, and featured the cover of an album, Virgin Killer, with a
photograph of a naked pre-pubescent girl (BBC News, 2008a). Critics of the
IWF’s actions objected on a number of grounds: first, because by denying
users access to the offending page, the IWF were effectively blocking
access to its non-illegal text content as well as its potentially (yet unproven)
illegal imagery; second, the block on this one page had knock-on effects for
UK users of Wikipedia, and for some time they were unable to edit any of
the encyclopedia’s pages (Wikipedia, 2008); third, it has been argued that
the album in question has been freely available across Europe for some 30
years since its initial release (in 1976), and during that time no legal
challenge has been presented to it on the grounds of its illegality.
Consequently, the IWF has acted as ‘police, judge and jury’ in bypassing
due legal process and deciding unilaterally to censor internet content. Some
24 hours later, the IWF retracted its ban, stating that ‘in light of the length
of time the image has existed and its wide availability, the decision has been
taken to remove this webpage from our [black]list’ (BBC News, 2008b)
(UK-based readers can now view the offending image and judge for
themselves, at: http://en.wikipedia.org/wiki/Virgin_Killer). Critics may be
less than reassured by these developments, as the IWF’s volte-face may be
seen as something only forced upon them by the public outcry and negative
media coverage (Richmond, 2008). Moreover, it raises wider issues around
the private policing of the internet in so far as such organizations are not
inherently publicly accountable via democratic mechanisms, as would be
the policing decisions of public agencies. To surrender judgements about
what comprises the public interest to private and unaccountable groups may
be seen as a dangerous development that needs to be curtailed if the
freedom of online expression is to be effectively safeguarded.
Paralleling the emergence of organizations that police the Web for images
of child sexual abuse, recent years have also seen the establishment of
groups addressing hate speech online, and in the context of post-9/11
concerns about terrorism, groups that monitor the internet for extremist
Islamic sites that advocate terrorist attacks (see discussion in Chapter 7).
For example, the Jewish human rights organization The Simon Wiesenthal
Center has now been active for a number of years in monitoring the internet
for anti-Semitic hate websites and moving to have them shut down (see
www.wiesenthal.com). Similar activities are undertaken by the AntiDefamation League (see www.adl.org/); the American-Arab AntiDiscrimination Committee (ADC); Measuring Anti-Muslim Attacks
(MAMA or TellMAMA; see https://tellmamauk.org/) and The Southern
Poverty Law Center (which monitors and exposes online hate speech by
far-right groups through its Intelligence Report, as well as offering hate
speech education and training; see https://www.splcenter.org/). Such groups
work through a variety of strategies, ranging from exposing instances of
online hate speech, through pressuring ISPs to remove such content, to
mounting legal challenges against those who post such material online.
A third area that has seen an upsurge of non-state policing activity is that
undertaken by groups representing the interests of commercial intellectual
property (IP) rights holders, especially copyright in areas such as music,
motion pictures and computer software. The proliferation of illegal file
sharing and downloading of copyrighted content has provoked IP rights
holders to invest in the creation of a veritable slew of organizations
dedicated to policing the internet for such violations (see the discussion in
Section 5.6.3).
Recent years have highlighted an additional kind of non-state social control
and crime fighting online, a form of vigilantism involving individuals or
groups acting as sleuths to help solve crimes and catch criminals. Such
endeavours are often described through terms like ‘online vigilantism’, ‘e-vigilantism’, ‘cyber-vigilantism’ and ‘digilantism’ (Huey et al., 2012; Jane,
2017a: 3; Nhan et al., 2017). Jane (2017a: 3) defines digilantism as
‘politically motivated (or putatively politically motivated) practices outside
of the state that are designed to punish or bring others to account, in
response to a perceived or actual dearth of institutional remedies’. Some
digilantes may attempt to assist law enforcement investigations (Huey et al.,
2012). In 2013, for instance, Reddit users (‘Redditors’) attempted to
identify the culprit behind the Boston Marathon bombing (Nhan et al.,
2017). Shortly after the event, thousands of threads appeared on the site
dedicated to investigating the bombing and identifying suspects, with many
users feeding the insights they gleaned back to police investigators (ibid.:
350). Other
cases may involve individual actors seeking justice against their aggressors
as evidenced in the case of Alanah Pearce, a gaming journalist who
received multiple sexual harassment messages over social media (Jane,
2016). Rather than go to the cops, she used information available on these
platforms to identify the perpetrators and, as they were minors, she found
their mothers and made them aware of their sons’ behaviour. Further, a host
of online groups have emerged dedicated to digilantism like Perverted
Justice, which targets online child sexual abusers (discussed previously in
Section 9.4) and 419 Eaters, so-called ‘scambaiters’ who attempt to
uncover, identify, shame, or even punish online fraudsters.
Like other forms of vigilantism, digilantism can go awry even when such efforts are well-intentioned. There is always a chance that
digilante investigations could result in erroneous conclusions. In the
previous example of the Boston Marathon digilantes, some Redditors
‘mistakenly identified Sunil Tripathi, a missing American student, as a
perpetrator’ and ‘as a result Tripathi’s mother began receiving threatening
messages’ (Nhan et al., 2017: 354). The mistake was realized when his
body was ‘found in the Providence River; the young student had apparently
killed himself before the bombings occurred’ (ibid.: 354). The legitimacy of
digilantism may also be a matter of perspective. As Jane (2016: 289) points
out, ‘what might seem like fair and reasonable street justice from one
viewpoint may well resemble vengeance-motivated lynching or even
sadism from another’. She uses GamerGate as one example of such
subjectivity – GamerGaters may have seen themselves as crusaders against
attacks on free speech and ethics in journalism but many outside observers
saw their actions as constituting an unjustified and misogynistic virtual
lynch mob. Acts of digilantism, according to Jane (ibid.: 290), can also
‘become worse than the ill supposedly being addressed’, and such
campaigns may be co-opted by malevolent actors for malicious ends. As
indicated in the GamerGate scenario, digilante activities may also
perpetuate forms of inequality and oppression against marginalized groups
including, but not limited to, sexism and racism (Jane, 2017b; Nakamura,
2014).
10.4 Privatized ‘For-Profit’ Cybercrime
Policing
We turn now to consider another way in which the policing of the internet
has been taken on by non-state actors, namely the commercial (for-profit)
provision of goods and services aimed at crime prevention, detection and
resolution. The private sector now provides a wide range of IT security
products and services on a commercial basis. These products and services
are variously concerned with: safeguarding the integrity and operation of
computer systems; controlling access to systems; and protecting the data
content of systems from theft, unauthorized disclosure and alteration. The
provision of such security measures takes an ever-expanding array of forms,
including:
security services, such as consultancy on threat analysis, systems
security design and implementation, contingency planning and disaster
recovery (Nugent and Raisinghani, 2002: 7)
design and provision of software for user authentication and
controlling access (e.g. password systems, smart cards and most
recently biometrics such as fingerprinting and face recognition)
(Halverson, 1996: 9; Nugent and Raisinghani, 2002: 8; Smith, 2006;
Wright, 1998: 10)
design and provision of software for countering ‘hacking’ or
unauthorized intrusion (e.g. firewalls, intrusion detection systems,
early warning systems) (Grabosky and Smith, 2001: 40; Grow, 2004:
84)
design and provision of software for detecting and eradicating
‘malicious software’ (e.g. viruses, worms and Trojan horses)
provision of systems for safeguarding confidential, proprietary and
business-sensitive data (e.g. encryption software to enable secure
financial transactions over public networks such as the internet, and
technologies preventing the unauthorized copying and reproduction of
copyright-protected digital content such as software, music and motion
pictures) (Rassool, 2003: 5–6; Vaidhyanathan, 2003: 176–177)
training for organizations and their employees in using and
implementing security systems and procedures.
The overall financial scale of this industry is difficult to determine with
precision, depending on how narrowly or broadly cybercrime policing and
control is defined. Nevertheless, we can glean some preliminary indications
of its extent and growth. As mentioned in Section 4.5, one current estimate
puts global spending on information security services and products at
around $89.13 billion for 2017 (Gartner, 2017a). According
to their estimates, expenditures were highest in the area of security services
($57.7 billion) followed by infrastructure protection ($17.5 billion),
network security equipment ($11.7 billion), consumer security software
($4.75 billion), and identity access management ($4.67 billion). Another
claims that the global information security market was worth $3.5 billion in
2004 and currently exceeds $120 billion (Morgan, 2018). Relatedly,
governments are increasing their spending on information security,
including the procurement of security products and services. For instance,
the non-profit organization Taxpayers for Common Sense
(2017) estimates that the US federal government spent approximately $28
billion on cybersecurity in 2016, a $20.5 billion increase since 2007. Other
national governments are making significant investments in cybersecurity
as well. In 2016, the UK government pledged to increase its cybersecurity spending by £1.9 billion (Flinders, 2016). As part of its national Cyber Security Strategy (or ‘the Strategy’), Australia is set to spend approximately AUD$630 million over the next ten years, which includes investments in
cybersecurity research, development, implementation and the procurement
of cybersecurity services for government agencies and even small
businesses (Brangwin, 2016).
From an individual consumer perspective, the greatest investment in
internet policing technologies falls in the areas of preventive software. As
noted in Chapter 7, concerns about children’s access to sexually explicit and
violent content have encouraged parents to purchase internet filtering
software. Similarly, fears about online financial fraud, identity theft and
malicious software (viruses, worms and the like) have served to stimulate a
massive market for software applications that scan for and eliminate
viruses, block attempts at phishing and generally protect computer systems
behind firewalls (McGraw, 2006). Almost no personal computer user is
now without a package such as Avast, Microsoft Defender, Norton Internet
Security, Kaspersky, DriveSentry, McAfee Anti-Theft, Webroot, Trend
Micro and so on. Spurred on by recent major information security breaches, many companies, including LifeLock, IdentityForce and IdentityGuard, now offer credit monitoring and identity theft protection/notification services to consumers worried about criminals exploiting their sensitive information. These are just a few examples of the scale of the commercial sector that has emerged to privately meet the demand for internet security and protection.
10.5 Explaining the Pluralization and
Privatization of Internet Policing
The fact that internet policing takes a pluralized form, involving a wide
range of non-state and commercial actors, should not come as a surprise.
Researchers have been exploring for some decades the ways in which
policing in Western societies appears to be increasingly ‘pluralized’,
‘privatized’, ‘fragmented’ or otherwise dispersed across a widening array of
actors who work in partnership with (or parallel to) the state’s crime control
agencies. The explanations for this shift are varied, but analysts broadly
agree that there have been significant changes in the provision of policing
and crime control, marked by its ‘commodification’ in tandem with a
‘responsibilisation’ in which the burden of crime control is shifted toward
non-state agencies and individuals (Muncie, 2005: 37, 39). In this context,
non-state actors are encouraged to protect themselves against the threat of
criminal victimization via market mechanisms, such that they become
consumers of security goods and services, rather than citizens who have a
right to expect protection from the state as part of a social contract
(Bowling and Foster, 2002: 981–982). Of course, the provision of security
as a consumer commodity is not a new development per se. As Churchill
(2016) indicates, the security industry has existed for centuries. Because the internet is diffuse in nature, spanning the jurisdictional boundaries that often limit traditional police forces, increasing offenders’ potential for anonymity, and obscuring the nature of many offences, it is perhaps unsurprising that such pluralization has been especially pronounced in the organization of internet policing.
prevention falls largely (albeit not exclusively) upon the potential victims,
and they are typically expected to access such provision through the
purchase and contracting of security goods and services. Likewise, the
provision of crime prevention and detection constitutes an ever-expanding
array of market opportunities for private security providers. We must situate
these developments in internet policing within the context of systemic
political–economic changes, specifically the emergence of new modes of
social coordination, or governance, as part of the transition to a neo-liberal
mode of capitalism.
The concept of ‘governance’ has become one of the most oft-cited yet
contested social scientific concepts in recent years (Lee, 2003: 3). On our
understanding, governance refers to a specific mode of social coordination
and ordering, one which moves away from government by a centralized
state apparatus towards a more heterodox, self-organizing network of actors
situated within market and civil societal, as well as governmental and quasi-governmental, spheres (Rhodes, 1997; see also Kjaer, 2004; Stoker, 2008).
Some scholars have referred to such governance as ‘nodal governance’ or a
‘form of governance wherein the government is not the central authority
within a cooperative, but just one of the actors involved’ (Boes and
Leukfeldt, 2016: 198; see also Nhan, 2010). In the move to governance, the
state increasingly eschews the hierarchical, top-down delivery of social
order, instead externalizing responsibility onto extra-governmental actors,
while maintaining an interest in ‘steering’ such activity through the
formulation of policy goals and agenda-setting (Crawford, 2006; also
Jessop, 2003, 2004). We see the emergence of this mode of social
governance as integrally tied to the transition to a neo-liberal regime of
capitalist accumulation and regulation (Jessop, 2002). The supposed
failures of Keynesian economics and state welfarism to ensure either
sustained economic growth or the achievement of public welfare goals
(including crime reduction) paved the way for a ‘de-centring’ of the state.
This move found its political articulation in the rise of the New Right, with
its insistence that the competitive mechanisms of the market are inherently
more efficient than state bureaucracies in the delivery of social goods; that
the scope of state expenditure needs to be ‘rolled back’ in order to reduce
the burdens of taxation which inhibit economic growth; that the state
provision of social goods undermines individual self-reliance and
responsibility, and curtails freedom of choice. Consequently, there has
emerged a range of state strategies in keeping with the neo-liberal project,
including the liberalization and deregulation of markets, and privatization
of the public sector (Jessop, 2002: 461). In the domain of policing and
crime control, the displacement of responsibility onto extra-governmental
actors (businesses, voluntary organizations and individuals) has the
advantage of externalizing costs and relieving public agencies such as the
police from the burden of responsibility for an ever-widening array of crime
control tasks.
We suggest, then, that the emergence and rapid growth of the commercial
sector in internet crime control must, in the first instance, be situated within
the context of the neo-liberal regime of governance. As Wall (2001c: 174)
notes, while the public police may resent the consolidation of the private
sector in cybercrime policing, ‘resource managers appear happy not to
expend scarce resources on costly investigations’. The character of this
tension becomes apparent if we consider the limited resources the police
can make available for tackling the rapid rise in computer-related crimes.
As noted in Chapter 1, state-based cybercrime policing initiatives such as
the UK’s National Cybercrime Unit and the EU’s ENISA are supported
with very limited resources as compared to other areas of police activity. As
Wall (2007: 183) observes, ‘the public police mandate prioritises some
offending over others’, and the public and political focus upon ‘street
crimes’ and ‘conventional crimes’ (crimes of violence, robbery, drug-related offences, etc.) diverts resources away from the policing of computer
crime. In such a situation, the police have little choice but to accept the
incursion of commercial organizations into computer crime control.
10.6 Critical Issues about Private Policing
of the Internet
In this final section, we will consider some important problems that may
follow from the development of private policing of the internet. Two issues
in particular will be considered here, those of accountability and equity.
With respect to accountability, we have already noted that critics have
raised doubts (as in the recent case of the IWF and Wikipedia) about the
warrant that private actors have (or ought to have) in policing the internet.
The history of public policing in Western democracies has typically been a
balancing act between upholding the authority of the police as agents of the
state in enforcement of its laws, and ensuring their accountability to the
public to whom they are ultimately responsible. Thus mechanisms have developed over time to ensure that police are subject to public scrutiny and must provide justification for their decisions (be they
decisions to act, or not to act, in response to particular crime problems and
situations). For example, in England and Wales each police force is now
accountable to an elected Police and Crime Commissioner. Through such
mechanisms the public interest can be upheld and the public voice on
policing decisions heard. No such mechanisms, however, exist where
private policing is concerned (Loader, 1997, 2000; Loader and Walker,
2004, 2006). Any individual or group can potentially issue themselves with
a brief to police the internet according to their own interests, priorities or
moral convictions (thus, as we have already seen, members of particular
ethnic and religious minorities police the internet for what they deem
instances of hate speech; women’s rights activists do likewise with
pornography; those interested in child protection tackle child sexual abuse
images; and IP rights holders search for and take action against what they
deem the illegal exploitation of their properties). Inevitably, conflicts can
and do arise about whether the actions of such groups are actually in the
wider public interest (as opposed to any given group’s sectional interest),
and when controversial decisions are taken (as in the IWF’s decision to
block access to Wikipedia) there is no direct mechanism through which they
can be held accountable. Therefore the proliferation of private policing
brings with it a ‘democratic deficit’ and a potential ‘legitimacy gap’, in so
far as private actors are considerably harder to oversee and regulate.
The second salient issue is that of equity. The principle of equity holds that
freedom from criminal predation, and protection from it, is the right of all
citizens. Hence the system of public policing aspires to the principle that it
serves all citizens equally, and the protection offered by the police should in
no way reflect individuals’ social or economic standing. The system can
and does of course fall far short of this principle in practice, favouring some
sections of society over others when it comes to protection from crime, but
the principle nevertheless remains a standard by which the effectiveness of
public policing is judged, and serves as grounds for criticism when it falls
short of its aspiration. The privatized (especially market-mediated)
provision of policing works on a quite different principle, however. Since
its goods and services are commodities that are to be purchased, the market
system inevitably discriminates amongst its potential users (i.e. customers)
according to their ability to pay for the goods and services in question
(Bayley and Shearing, 1996: 593–595; Burbidge, 2005). Therefore the
internet user who wishes to protect him- or herself from computer viruses,
the theft of sensitive personal data via hacking, and so on, will only be able
to do so in so far as he or she can afford to buy the software that provides
this protection. This situation threatens to entrench a hierarchy of inequality
when it comes to protection from internet-based crimes, with the most
financially secure enjoying the greatest online security, while the
economically disadvantaged come to occupy the most risk-intensive
positions in the online world. This can be seen as a further extension of the
problem of ‘digital divides’ (van Dijk and Hacker, 2003; Warschauer,
2004), the social exclusion from new information and communication
technologies that serves only to exacerbate existing patterns of social
inequality. If the most socially disadvantaged cannot make full and effective
use of new communication technologies for fear of criminal victimization,
this will inevitably place them at a further social, economic and political
disadvantage, since the accrual of agency in all these spheres of action is
increasingly mediated by electronic systems such as the internet.
10.7 Summary
This chapter has considered the strategies for policing the internet that have
emerged in response to cybercrime problems. We see that state-led
initiatives have attempted to reform policing at local, national and
transnational levels. Investments have been made in developing the
expertise needed to more effectively respond to the challenge of
cybercrime, and we have seen increasing levels of cooperation and
coordination between police forces across the world. Public policing
responses remain inadequate, however, in light of the scope, scale and
spread of cybercrime, and limited by resource constraints as well as other
pressures that tend to prioritize terrestrial crime problems. Consequently,
policing the internet has evolved in such a way that a wide range of non-state and private commercial actors is extensively involved in matters of
internet regulation and control. This pluralization and privatization can be
connected to wider trends in the reconfiguration of policing, associated with
a shift from state-based government to networked governance in matters of
social control. Yet this move to dispersed, multi-lateral internet policing
brings in its wake its own challenges and difficulties, not least those
associated with accountability and equity in the provision of security.
Study Questions
What are the crucial differences between ‘the police’ and ‘policing’?
What kinds of developments in public policing are apparent as a response to cybercrime?
What are the factors that limit public police responses to cybercrime?
In what ways are non-state actors involved in policing the internet?
What role is played by private companies in internet policing?
How can we explain the pluralized and privatized character of internet policing?
What problems might this configuration of policing generate? How might they be overcome?
Further Reading
Further consideration of police responses to cybercrime can be found in Yvonne Jewkes’
‘Public policing and internet crime’ (2010) and in Chapter 8 of David Wall’s book
Cybercrime: The Transformation of Crime in the Information Age (2007). The rise of private
cybercrime policing is examined in Majid Yar’s ‘The private policing of internet crime’
(2010). Matthew Williams offers an insightful account of how communities of internet users
regulate and control the online environment in his chapter ‘The virtual neighbourhood watch:
Netizens in action’ (2010). Readers may also be interested in a book-length treatment of
Thomas J. Holt, George W. Burruss and Adam M. Bossler’s survey research of police
departments titled Policing Cybercrime and Cyberterror (2015). Johnny Nhan also provides an
interesting study of the nodal nature of contemporary cybercrime investigations in Policing
Cybercrime: A Structural and Cultural Analysis (2010). For those interested in the adoption of
technology by law enforcement beyond the realm of cybercrime investigations, check out
Sarah Brayne’s study of the Los Angeles Police Department’s use of surveillance technologies
entitled ‘Big data surveillance: The case of policing’ (2017).
11 Cybercrimes and Cyberliberties
Surveillance, Privacy and Crime
Control
Chapter Contents
11.1 Introduction
11.2 From surveillance to dataveillance: The rise of the electronic web
11.3 The development of internet surveillance
11.4 The dilemmas of surveillance as crime control: The case of
encryption
11.5 Summary
Study questions
Further reading
Overview
Chapter 11 examines the following issues:
the development of surveillance as a tool for social and crime control;
the development of internet surveillance;
the resistance that surveillance efforts have encountered;
the tensions between surveillance and liberty apparent in conflicts over encryption technology.
Key Terms
Anonymity
Big Data
Data brokers
Dataveillance
Encryption
Metadata
Privacy
Surveillance
11.1 Introduction
It is clear from the discussion in the preceding chapters that the internet
carries with it significant new risks of criminal victimization, and thus
presents some pressing challenges for legislators and criminal justice
agencies. Attempts to police the internet for the purposes of crime control
also raise serious dilemmas and dangers, however. Central here is the
tension between surveillance and the monitoring of online activities, on the
one hand, and the need to protect users’ privacy and confidentiality, on the
other. Law enforcement agents need to be able to identify offenders and collect
evidence of online crimes. Offenders, however, are able to exploit
anonymity and disguise themselves and their activities from prying eyes.
Privacy enhancing technologies (e.g. anonymous web-surfing services and
encryption software) may be used to thwart policing. Consequently, the
imperative for criminal justice actors is to reduce the potential abuse of
internet privacy by greater surveillance and monitoring of people’s
activities, which inevitably invades the privacy and confidentiality of online
communications. From the perspective of many users, however, there is
great unease at the prospect of their every move and utterance being subject
to intrusion and examination. Critics of internet surveillance point out that
we risk losing control over the personal information that circulates on the
Web – the content of our emails, the sites we visit, our financial and other
sensitive information. While authorities argue that there is a legitimate need
to access such details in the course of law enforcement, the potential for
abuse is all too apparent. For example, political dissidents and others
suspected of sympathies opposed to the state may find themselves the
subject of electronic surveillance, an issue which has come increasingly to
the fore in the wake of 9/11 and the ‘war on terror’. Moreover, those
selfsame privacy enhancing technologies that offenders may use to evade
detection and prosecution are at the same time essential for protecting
legitimate users from criminal victimization. Thus encryption technologies
are now routinely used to secure sensitive information such as credit card
details, thereby protecting such data from interception and abuse by
unauthorized third parties, as evinced by the transition to encrypted chip-and-PIN systems by credit card companies in the mid-2010s to help protect
consumer information after major data breaches like the one that hit Target
in 2013. As a result, reducing public access to online privacy protection in
the name of policing may leave us less, not more, secure from criminal
threats (as with attempts to outlaw public access to and use of sophisticated
encryption techniques). At the present time, we are witnessing an intense
struggle over the balance between online surveillance and privacy; while
authorities move towards greater monitoring in order to tackle organized
criminals, terrorists, paedophiles, stalkers and so on, civil libertarians
encourage users to evade such invasion of privacy by making greater
recourse to privacy enhancing tools. Efforts to secure society against the
threat of cybercrime thus carry with them far-reaching consequences for the
future of online freedom itself.
11.2 From Surveillance to Dataveillance:
The Rise of the Electronic Web
The past few decades have seen a massive growth in academic studies of
surveillance and its social implications. Indeed, ‘surveillance studies’ now
constitutes an inter-disciplinary specialism in its own right, with attendant
journals, conferences and research programmes. This attention to
surveillance first emerged in the 1970s, greatly influenced by Michel
Foucault’s ([1977] 1991) ground-breaking analysis of ‘panopticism’, those
modern techniques by which individuals and social groups were subjected
to perpetual visual monitoring. For Foucault, surveillance was a key
instrument with which modern societies disciplined and controlled
populations; observation and inspection were used as a basis for judgement
and intervention, enabling supposedly deviant or problematic individuals
(such as the ‘criminal’, the ‘sick’ and the ‘insane’) to be exposed to a
regime of corrective intervention aimed at ‘normalizing’ them. Foucault
and others have observed how, with the emergence of modern society, a
whole host of institutional sites appeared, each utilizing surveillance as a
basis for training minds and bodies to conform to dominant expectations
about proper, decent, healthy and productive behaviour. Such sites included
the prison, the hospital, the school, the reformatory, the insane asylum and
the industrial factory. More recently, however, analysts such as Deleuze
(1995) have claimed that the use of surveillance for corrective or
normalizing interventions (disciplining) is on the wane. Instead, we are now
moving into a ‘society of control’, in which surveillance functions as a
means of managing populations through mechanisms of inclusion and
exclusion (see also Diken and Lausten, 2002; Rose, 2000). Monitoring now
informs judgements about who is to be included in social spaces, and who
is to be exiled to its margins, refused entry through the locked doors that
protect gated communities, apartment blocks, airports, shopping malls,
banks, workplaces and leisure complexes. Closed-circuit television (CCTV)
operators and private security guards zealously watch over our movements,
barring those who are deemed unwelcome, illegitimate, undesirable or
suspicious (Davis, 1990; McCahill, 2001). The massive growth in use of
video monitoring in particular (what Lyon [1994] calls the ‘electronic eye’)
has seemingly created a society in which we are always being watched,
examined and judged (Norris and Armstrong, 1999).
Historically, the development of surveillance has taken shape around the
visual monitoring of physical persons as they move and act in a variety of
terrestrial settings (the street, the workplace, the shop, the transit area,
public transport, etc.). With the appearance of networked electronic
communications, however, it has become possible for individuals to engage
in a variety of behaviours and exchanges in spaces in which they are not
physically present; social life now increasingly takes place in what Castells
(1996) calls the ‘space of flows’, those virtual terrains created by webs of
electronic communication which do not exist in any place in particular. The
internet represents the most prominent and widely used such virtual space.
Surveillance in this arena consequently takes a rather different form from its
terrestrial counterpart. Rather than monitoring the physical presence and
bodily actions of subjects, internet surveillance observes and collects the
digital footprints that all online activities leave in their wake. These
surveillance technologies thus take advantage of a core feature of computer
and network technologies – they ‘informate’ or, as Shoshana Zuboff (1984:
10) explains, ‘activities, events, and objects are translated into and made
visible by information’. In other words, these technologies convert people,
experiences, processes, etc. into data which can be presented, stored,
recalled, redistributed and analysed. All web users, whether they know it or
not, leave a ‘data trail’, the electronic record of their mouse clicks and
keystrokes, the websites they have visited, the searches they have run, the
materials they have downloaded, the personal information they have
entered, the words and images they have sent via email, and so on. Because
such data is often generated as a by-product of people’s online activities,
such information has been described as ‘data exhaust’ (Zuboff, 2015: 79).
From such dataveillance (Gandy, 1993) it is possible to construct a digital
double or simulation of an individual and her or his activities, without ever
having to engage in physical observation (Bogard, 1996; Poster, 1990).
From a web user’s online activities it is possible to discover and track,
among other things, his or her consumer choices and preferences; sexual
orientation, fantasies and fetishes; political opinions and sympathies;
professional interests and career aspirations; personal associations,
friendships and intimate relationships (King, 2001: 40–41; also Huey and
Rosenberg, 2004: 603; Zuboff, 2015). Such data are now routinely gathered, collated, analysed, bought and sold by businesses.
One’s digital reconstruction can have a profound impact on access to social
goods, resources and opportunities. For example, the aggregation of such
data can be used in deciding whether or not we are given credit, loans and
other financial services, or whether or not we will be insured against death,
injury or accident. Beyond the commercial uses of these data, we can note
the ways in which state and other security agencies utilize electronic
monitoring to determine whether or not we present a risk to the economic or
political order (organized criminals, terrorists) or to society and morality
(paedophiles, pornographers, stalkers). Indeed, it has been suggested that
the use of electronic databases is becoming ever-more central to the entire
apparatus of criminal justice, with computerized information processing
being used both to construct offenders and mediate punishment (Aas, 2005;
Brayne, 2017; Eubanks, 2018; Pattavina, 2004). While advocates of both
business and law enforcement may be fairly sanguine about such
developments, seeing them as necessary and desirable, many other
observers express alarm about the ways in which surveillance can or is
being used to sort, classify, control and exclude.
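The logic of dataveillance described above, assembling a ‘digital double’ purely from a subject’s data trail without any physical observation, can be sketched in a few lines. The clickstream events and category rules below are invented for illustration only:

```python
# Illustrative sketch of dataveillance: inferring a profile (a 'digital
# double') from a user's click trail alone. Both the event log and the
# keyword-to-category rules are hypothetical.
from collections import Counter

clickstream = [
    {"user": "u1", "url": "news.example/politics/election"},
    {"user": "u1", "url": "shop.example/fishing-rods"},
    {"user": "u1", "url": "news.example/politics/debate"},
]

# Crude inference rules: a URL keyword implies an interest category.
CATEGORY_RULES = {
    "politics": "political interest",
    "fishing": "hobby: angling",
    "shop": "consumer activity",
}

def build_profile(events, user):
    """Tally inferred interest categories from one user's click trail."""
    profile = Counter()
    for event in events:
        if event["user"] != user:
            continue
        for keyword, category in CATEGORY_RULES.items():
            if keyword in event["url"]:
                profile[category] += 1
    return dict(profile)
```

Here `build_profile(clickstream, "u1")` yields a tally suggesting political interests, a hobby and consumer activity, a toy version of the consumer, political and lifestyle inferences that commercial dataveillance draws at scale.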
11.3 The Development of Internet
Surveillance
As Gillmor (2004: 211) notes, the original architecture of the internet made
no provision for tracking who had visited particular websites, or what they
had done there. As the Web developed into a commercial medium, however,
the collection of user information became quickly institutionalized. In 1996,
the manufacturers of Netscape Navigator (the most popular web browser in
use at the time) introduced ‘cookie’ files (also adopted by Internet Explorer
and other browsers). Cookies are small files created every time a user visits
a website. The files contain a record of the user’s activities at the site, and
the cookie is then sent to the user’s computer where it is stored. When the
user next visits the site, the cookie is sent back to the host system, enabling
identification of the returning visitor. In this way, websites are able to build
up a record of an individual’s activities (Denning, 1999: 86). Concerns have
long been raised about the invasiveness of cookies as they ‘allow a Web site
to record the user’s comings and goings, usually without his knowledge or
consent’ (Fischer-Hübner, 2000: 178; also Hinduja, 2004b: 47). Information
collected via cookies is but one (albeit important) dimension of a growing
effort to accumulate and exploit users’ details for commercial and
competitive advantage.
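The cookie mechanism just described can be sketched with Python’s standard library: a website issues an identifier via a ‘Set-Cookie’ response header, the browser stores it, and returns it in a ‘Cookie’ header on later visits, allowing the site to recognize the returning visitor. The `visitor_id` name and value below are hypothetical:

```python
# A minimal sketch of the cookie mechanism, using Python's stdlib.
from http.cookies import SimpleCookie

# 1. The server issues an identifier on the user's first visit.
response = SimpleCookie()
response["visitor_id"] = "abc123"   # hypothetical tracking identifier
response["visitor_id"]["path"] = "/"
header = response.output()          # e.g. 'Set-Cookie: visitor_id=abc123; Path=/'

# 2. On a later visit, the browser sends the stored value back in its
#    'Cookie' header; the server parses it to identify the visitor.
request = SimpleCookie()
request.load("visitor_id=abc123")
returning_visitor = request["visitor_id"].value   # 'abc123'
```

It is this quiet round trip, repeated across every site visit, that allows a website to ‘record the user’s comings and goings’ without any visible sign to the user.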
Because of the informating qualities of computer and network technologies,
there is no shortage of data exhaust produced by internet users. The
accumulation, collation, analysis and distribution of such data is now big
business in today’s economy, and companies have devised ingenious
methods of tracking online user behaviour. Whether it be for making
predictions about consumer behaviour, distributing targeted advertising,
producing market evaluations, conducting risk assessments, swaying public
opinion, or other uses, many companies covet the kinds of insights that such
data and their analysis can provide. As part of this digital ‘gold rush’,
companies have emerged which make gathering and accumulating troves of
data on consumers and users their primary business model. Google provides
perhaps the most notable example of a company which collects and
monetizes user data. The company started as a search engine and has since
expanded to provide a host of services and products to users including the
Android smartphone operating system, Google Maps, the Chrome web
browser, Google Play, YouTube, among others. Many of these products and
services are provided for little to no cost for the end user and enjoy
significant market shares. For instance, the Google search engine held approximately 75 per cent of the market as of 2017 (Search Engine Journal, 2018), and Google Chrome was the most popular web browser as of July 2018
(https://www.w3counter.com/globalstats.php). Rather than charging for these products and services, Google makes most of its revenue from
advertising. Alphabet, Google’s parent company, reported over $110 billion
in revenue in 2017 of which $95 billion was from advertising through
Google products and services (Alphabet, 2018). While the company gathers
user data to create a customized and ‘smart’ experience for its users, this
data also allows Google to create targeted advertising and provide other
data-intensive services for businesses. This means that the company, as part
of its business model, routinely surveils its user base and monetizes their
information.
Contemporary information capitalism (or ‘surveillance capitalism’ [Zuboff,
2015]) has also coincided with the rise of ‘information brokers’ or ‘data
brokers’ who specialize in creating datasets of information about
individuals, often from a variety of sources, which can then be sold or
licensed to other companies. The scope and scale of the datasets amassed by
some of these firms is staggering, sometimes being referred to as ‘Big
Data’. In their 2014 report to Congress, the US Federal Trade Commission
examined nine major data brokerage firms and found that ‘one of the nine
data brokers has 3,000 data segments for nearly every US consumer’,
giving an indication of the tremendous volume of data in which these firms
deal (Ramirez et al., 2014: iv). The report also provides an ‘illustrative list’
of the kinds of data such companies collect (ibid.: B-3–B-6) including
names, location data, social security numbers, family ties, birth dates,
demographic information, criminal records, voting registration, social
media usage, housing information, leisure activities, media preferences,
credit information, insurance data, travel information, spending and
consumption habits, among others. According to Roderick (2014: 730),
commercial data brokerage ‘is now a US$300b-a-year industry’, though the
sheer scale and complexity of the industry makes precise valuation difficult.
This commercial market in personal information means that potentially
anyone, anywhere, can discover such details about us without our
knowledge or consent, and use it for whatever purposes they wish.
Concern about the collection and exploitation of personal data has further
intensified with the rapid growth of new social media platforms which are
filled with user-generated content about individuals’ lives, relationships,
activities, interests and preferences (Yar, 2012b). Indeed, the business
model of social media companies such as Facebook is based upon their
ability to offer marketing and advertising opportunities to third parties,
utilizing the detailed personal information collected from users’ online
activities (Sullivan, 2012). In 2017, the company reported an average of 1.4
billion daily active users and over $40 billion in revenue which mostly
stemmed from advertising (Facebook, 2018). Twitter similarly derives
significant earnings from advertising and reported $2.4 billion in revenue in
2017 (Twitter, 2018). While social media platforms regularly gather data on
user behaviours, relations and preferences for the purposes of advertising,
such data are often also collected by other parties, even where service
providers take steps to manage users' personal information carefully. In fact, data
brokerage firms routinely scour social media platforms for user data. For
example, in May 2010, the US-based media research firm Nielsen was
accused of such ‘data scraping’ from the health-related social network
PatientsLikeMe. Nielsen was found to have been using automated software to
copy messages from private forums where users discussed their health
problems, including depression, bipolar disorder and other serious mental
health issues. It is thought that Nielsen subsequently sold such data to
clients (e.g. drug companies) as a form of consumer intelligence (Angwin
and Stecklow, 2010). Inevitably, such cases have led to numerous criticisms
about the extent to which users’ personal data are being utilized without
their explicit knowledge. Perhaps the most notorious example in recent
memory of third party companies exploiting scavenged social media data is
the case of Cambridge Analytica. Among other misdeeds, the firm, along
with its parent company SCL, obtained from Cambridge University academic
Aleksandr Kogan, and retained, data on up to 87 million Facebook users who
had not given their permission (Cadwalladr and Graham-Harrison, 2018; Kang
and Frenkel, 2018). It has been alleged that these data may have been used
for controversial 'psychographic targeting' of advertisements, perhaps most
notably on behalf of the 2016 Donald Trump presidential campaign (Valdez, 2018).
The collectors, sellers and consumers of such information may suggest that
alarm is unnecessary since adequate protection against abuses is provided
by data protection and privacy laws. Many countries now have such
provisions in place. One of the most significant attempts to legislate
consumer privacy protections in recent times is the European Union’s
General Data Protection Regulation (GDPR). Implemented in 2018, the
GDPR is an ambitious and far-reaching attempt to regulate the use of
personal information for commercial or professional activities that builds
from and extends the protections provided in the EU’s Data Protection
Directive as well as the Directive on Privacy and Electronic
Communications (the ‘ePrivacy’ Directive). Wired journalist Arielle Pardes
(2018) explains that the law,
gives people the right to ask companies how their personal data is
collected and stored, how it’s being used, and request that personal
data be deleted. It also requires that companies clearly explain how
your data is stored and used, and get your consent before collecting it
… People can also object to personal data being used for certain
purposes, like direct marketing.
In other words, the GDPR significantly increases privacy protections by
requiring companies to adopt a privacy-by-default stance and by giving
individuals greater control over how their information is used. Importantly,
because it covers all EU citizens, the law indirectly regulates the use of
data by actors outside the EU. As Tiku (2018)
explains,
The law protects individuals in the 28 member countries of the
European Union, even if the data is processed elsewhere. That means
the GDPR will apply to publishers like WIRED; banks, universities,
much of the Fortune 500; the alphabet soup of ad-tech companies that
track you across the web, devices, and apps; and Silicon Valley tech
giants.
As such, businesses around the world have had to adjust to the rule,
including companies like Google and data brokerage firms like Acxiom
(ibid.). Other governments and regulatory bodies have adopted privacy
protection measures as well. For example, the UK's Data Protection Acts
1984 and 1998 regulate the unauthorized disclosure of personal information
to third parties, provisions which were updated in 2018 to conform with the
GDPR (even though the UK is in the process of leaving the EU and its
associated legal order and institutions) (Gov.uk, 2018), while the Criminal
Justice and Public Order Act 1994 made 'procuring disclosure of, and
selling, computer-held information' a criminal offence. Similar provisions
exist in Canada (under the Personal Information
Protection and Electronic Documents Act [PIPEDA] 2001), Australia (with
some provisions of the 1988 Privacy Act applying to certain businesses),
and other countries. In the US, however, individual data protection laws
tend to be more piecemeal. For instance, federal protections have been
extended to all ‘individually identifiable health information’ held and
processed by most health care-related organizations and providers through
the Health Insurance Portability and Accountability Act of 1996 (HIPAA)
including measures for the securing of electronic records (USDHHS, 2003).
Credit information also enjoys some limited protections through the Fair
Credit Reporting Act (FCRA), which gives individuals the ability to
correct inaccuracies in their credit reports (Rieke et al., 2016: 16). Children
under the age of 13 are given some legal protections against data gathering
under the Children's Online Privacy Protection Act of 1998 (COPPA)
which, among other provisions, requires that parents give consent before
their children's data are collected, that parents be able to request access to
and deletion of data about their children, and that companies provide
reasonable security measures to protect such data (FTC, 2015). The Federal Information
Security Modernization Act (FISMA) also requires the maintenance of
informational security policies and practices recommended by the National
Institute of Standards and Technology (NIST), though these requirements
only apply to federal agencies. In 2012, the Obama Administration
proposed the Consumer Privacy Bill of Rights as ‘a blueprint for privacy in
the information age’ which, ideally, would have led to significant legislative
changes at the federal level (Obama, 2012). Unfortunately, legislation
realizing the vision of this blueprint has yet to be passed.
Instead, individual states have been left to implement consumer privacy
protections like those recently adopted by California (Hautala, 2018).
Critics may point out a number of problems with a reliance upon such
statutory measures in order to protect web users’ privacy, however. First,
regulations against selling personal information generally do not cover
situations in which the user has ‘consented’ to share their information; data
acquired via the use of cookies and similar monitoring software are
generally held to have been provided with consent, and so are not covered
by law (though some laws, like the EU’s PECR, require consent before
using cookies to gather information). Second, there is an inevitable problem
of legal pluralism: nothing prevents websites and information services
located in non-regulated or under-regulated territories from collecting and
selling such information. Such differences in legal provisions are clear
when comparing Europe with the USA. As Rieke and colleagues (2016: 15)
note:
In practice, the US and EU have very different approaches to privacy.
In the US, the starting assumption is that processing of data is
permitted, whereas in the EU, all personal data processing needs a
legal justification and is subject to a set of interlinked obligations for
transparency, fairness and lawfulness. In contrast to the US, in the EU,
data privacy enjoys a status as a fundamental right. The result is a
broad and comprehensive legal posture. In the US, privacy law lacks a
clear source of moral authority, except where the Constitution
addresses privacy with respect to government actors. The result is a
body of US privacy law that has been characterized as ‘haphazard and
riddled with gaps’.
Inevitably, leaving user privacy to business self-regulation and consumer
choice will lead to greater space for the exploitation of personal information
where it is commercially advantageous to do so. Additionally, repositories
of information held by data brokers and other companies may not
necessarily be secure. Such data stores can and have been hacked using
stolen passwords or by exploiting weaknesses in systems’ security. For
instance, the credit reporting agency and data brokerage company Equifax
was breached in 2017, exposing the data of approximately 143 million
Americans, including personally identifying information (Gallagher, 2018).
The collection and storage of such data may present additional risks as well.
In its 2014 report, the US Federal Trade Commission expressed concern
that services used to conduct ‘people searches’ (searches of information
about specific individuals) could ‘be used to facilitate harassment, or even
stalking, and may expose domestic violence victims, law enforcement
officers, prosecutors, public officials, or other individuals to retaliation or
other harm’ (ibid.: vi). They also argued that storing such data, particularly
indefinite storage, may create other security risks – like allowing identity
thieves to access enough data to ‘predict passwords, challenge questions, or
other authentication credentials’ for online accounts (ibid.: vi).
A second important dimension of internet monitoring relates to surveillance
and data collection by governmental agencies. This is not always strictly
separated from the commercial trade in personal data. For example, data
brokerage firms can offer data search and analysis tools to law enforcement
and government agencies, such as the Accurint series of search tools
(offered by a company that originally operated as ChoicePoint before being
acquired and rebranded as LexisNexis Risk Solutions) (Rieke et al., 2016: 34). As of
2011, the company claimed that the Accurint for Law Enforcement service
was ‘used by over 4,000 federal, state and local law enforcement agencies
across the country’ (LexisNexis, 2011). US law enforcement and
intelligence services, like the FBI, CIA and NSA, may use information
brokerage databases for sourcing basic intelligence about US citizens as
well as foreign nationals. Up until 2014, the US Federal Government spent
$61.7 million on ChoicePoint services alone (FRD, 2014: 57).
Beyond the acquisition of data from brokerages, governments around the
world have engaged in various forms of internet surveillance and ‘data
mining’. For instance, following the example of the US after the 9/11
terrorist attacks (discussed further below), the UK passed the Anti-Terrorism, Crime and Security Act in 2001, which has considerably eroded
data protection regimes while enhancing state agencies’ ability to acquire
and retain communications data for ‘national security’ purposes (Henning,
2001). In 2012, the UK government courted further controversy when it
announced legislative proposals (in the form of the Communications Data
Bill) which would have enabled security services to monitor web users’
patterns of email communication, social media usage and web browsing
(Townsend, 2012). While this Bill foundered in the face of opposition, in
2016 Parliament passed the Investigatory Powers Act. This new Act
effectively legalised mass surveillance and data collection by the police and
security services that had been going on, without oversight, for many years
(MacAskill, 2016). In 2018, however, key elements of the Act were deemed
by the courts to be in breach of EU law, forcing the government to
undertake further amendments (Cobain, 2018). China perhaps operates one
of the most comprehensive domestic internet surveillance and censorship
regimes, run through programmes like 'Golden Shield' and 'the Great
Firewall' which permit the government to monitor and block both domestic
and foreign internet traffic (Economy, 2018). In 2015, the Chinese government also
implemented the ‘Great Cannon’ which ‘is able to adjust and replace
content as it travels around the internet’ to promote messages more
favourable to the ruling party (ibid.).
The US has perhaps one of the most storied histories regarding internet
surveillance programmes. Federal protections for individuals, such as the
Privacy Act 1974, have been amended under the PATRIOT Act following
the 9/11 attacks (discussed in Section 4.3), allowing federal agencies greatly
increased access to private and personal information. As the American Civil
Liberties Union (ACLU, n.d.) explains, the PATRIOT Act greatly increases
US authority to conduct the following:
Records searches. It expands the government’s ability to look at
records on an individual’s activity being held by third parties (like data
brokers) (Section 215)
Secret searches. It expands the government’s ability to search private
property without notice to the owner (Section 213)
Intelligence searches. It expands a narrow exception to the Fourth
Amendment that had been created for the collection of foreign
intelligence information (Section 218)
‘Trap and trace’ searches. It expands another Fourth Amendment
exception for spying that collects ‘addressing’ information about the
origin and destination of communications, as opposed to the content.
Importantly, as part of these changes to search and seizure laws, the
PATRIOT Act expanded the Foreign Intelligence Surveillance Act (FISA)
of 1978 which originally allowed wiretaps and similar searches without
probable cause when gathering foreign intelligence (ibid.). Under the
PATRIOT Act, these searches could be conducted domestically as long as
one end of the communication involved a foreign person, thus creating a
significant legal grey area with regard to the probable cause requirement of
the Constitution's Fourth Amendment protection against unreasonable
searches and seizures. Moreover, while the Privacy Act regulates government databases,
its provisions do not apply to the private sector, thereby creating a situation
in which agencies can bypass restrictions by sourcing data from data
brokers and the like.
As part of this expansion of government power, the Pentagon launched its
‘Total Information Awareness’ project in 2003, with a budget of $54
million, in order to integrate information technologies for collecting,
collating and cross-matching intelligence data (Stratford, 2003: 13). In
response to privacy concerns, the US Congress eventually discontinued
support for the programme in 2003. A report the following year, however,
found data mining to be ‘ubiquitous’ among government departments, with
52 federal agencies running some 1,999 projects aimed at assembling
information about ‘terrorists’, ‘terrorist activities’ and ‘terror suspects’
(TIMJ, 2004: 7–8). Particularly worrying is the fact that many of the
surveillance programs operated by US agencies are subject to relatively
little oversight and accountability (though some limited improvements have
occurred in recent years [Pincus, 2014; Siddiqui, 2015]). Additionally, prior
to implementation of the PATRIOT Act, the FBI operated an email content
search system originally known as Carnivore, later renamed the less
menacing DCS-1000 and followed by the more expansive DCS-3000 (Wired
Staff, 2006). The software could be installed on an ISP’s system, enabling
the Bureau to ‘wiretap’ emails (Stratford, 2003: 12). The Combating
Terrorism Act (passed just two days after the 9/11 attacks) allowed
intelligence services to use the system without the need for any judicial
approval (Coriou, 2002: 5); nor was it possible for ISPs to contest an order
from the FBI (a so-called ‘national security letter’) before a judge (Swartz,
2004: 6).
The US government has also operated the highly secretive ECHELON
project which has now been running for over 40 years, and involves a
partnership between intelligence agencies in the USA, the UK, Canada,
Australia and New Zealand (Campbell, 1988: 10; Denning, 1999: 171;
Matney, 2015). Originally designed to intercept and eavesdrop upon
international telephone calls using automated voice recognition systems,
ECHELON now embraces the searching of worldwide email (as well as
fax) traffic for keywords that will trigger an alert. The governments
involved have long denied the system’s existence, hiding behind appeals to
national security and official secrets. Over time however, various
disclosures have brought to light the alarming extent of ECHELON’s
abilities. Its network includes ‘satellite interception stations, microwave
ground stations, spy satellites, radio listening posts, and secret facilities that
tap directly into land-based telecommunications networks’ (Denning, 1999:
172). Its massive processing capacity is thought to give ECHELON the
ability to examine ‘almost every telephone call, fax transmission and email
message transmitted around the world daily’ (Poole, 2000). A report
published by the European Parliament, following an inquiry into the
ECHELON system, noted how it was being used for purposes well beyond
those most people would associate with national security (e.g. anti-terrorism
operations). In particular, the report highlighted the ways in which
economic espionage had become incorporated into the system’s routine
activities, gathering secret and confidential information on US businesses'
foreign competitors (Furnell, 2002: 262). It claimed that the purpose of the
system was ‘to intercept private and commercial communications, and not
military communications’ (EP, 2001: 9). Authorities in Japan, France and
Brazil have complained that ECHELON has been used to spy on trade
negotiations and business deals, and the intercepted details have been
passed on to US-based competitors. It has also been reported that
monitoring has been extended to political and other organizations who
operate within the law, but whose agenda may be critical of the US and
other governments: these include the environmentalist group Greenpeace,
the human rights organization Amnesty International, and the development
charity Christian Aid (Sykes, 1999: 173). Add to this the lack of any
democratic oversight of the system’s operations and a situation clearly
exists in which abuses of privacy laws may be perpetrated by states against
their own and other nations’ citizens on an ongoing basis (Campbell, 2015).
In 2013, Edward Snowden, an employee for the US National Security
Agency (NSA), leaked to journalists the details of a massive surveillance
apparatus operated by the US government. The leak revealed the NSA was
harvesting massive troves of data on both US citizens and foreign nationals
including ‘metadata’, or information about data (for example, instead of
recording a call, details about who was called and the duration of the call
might be recorded). The programme that has received the most attention
is PRISM, or the Planning Tool for Resource Integration, Synchronization
and Management. PRISM allows the agency to access the servers of
companies and services like Yahoo!, Google, Facebook, YouTube, Skype,
AOL and Apple for user data including ‘email, video and voice chat,
videos, photos, voice-over-IP (Skype, for example) chats, file transfers,
social networking details, and more’ though ‘the extent and nature of the
data collected from each company varies’ (Greenwald and MacAskill,
2013). Using the powers afforded to it by the FISA and PATRIOT acts, the
NSA used PRISM ‘to obtain targeted communications without having to
request them from the service providers and without having to obtain
individual court orders’ (ibid.). Another programme called ‘XKeyscore’
allows the NSA to ‘wiretap’ a user and gather data ‘including the content of
emails, websites visited and searches, as well as their metadata’
(Greenwald, 2013). These are just two examples of the surveillance
capabilities revealed in the Snowden leaks. The NSA's surveillance efforts
also involved coordination with the UK's Government
Communications Headquarters (GCHQ) to crack the encryption used by
many online companies and services to facilitate surveillance (encryption
will be discussed further in the following section) (Ball et al., 2013). The
mass surveillance measures revealed by Snowden proved intensely
controversial. Civil liberties groups, technologists and others argued that
these programmes were overly intrusive and expansive and, as such,
violated privacy rights. There was also international outrage as the
surveillance measures meant that the US was actively spying on huge
portions of the world’s population. Others argued that such surveillance is
necessary for the purposes of national security. Though the US government
has claimed that at least 50 terrorist threats were thwarted by the
surveillance regime, little or no evidence exists to support these assertions
(Elliot and Meyer, 2013; Isikoff, 2015; McLaughlin, 2015).
As evident through the NSA controversy, the current trend clearly conforms
to the wider development of a ‘network of policing’ in which extra-state
actors are co-opted into the ever-expanding surveillance apparatus. Tech
companies, for example, regularly receive requests from government
agencies for information. As explained in the opening paragraph of a report
from the Harvard Law Review (2018: 1722):
Facebook received 32,716 requests for information from US law
enforcement between January 2017 and June 2017. These requests
covered 52,280 user accounts and included 19,393 search warrants and
7,632 subpoenas. In the same time period, Google received 16,823
requests regarding 33,709 accounts, and Twitter received 211 requests
regarding 4,594 accounts. Each company produced at least some
information for about eighty per cent of requests … Of the many
conclusions that one might draw from these numbers, at least one thing
is clear: technology companies have become major actors in the world
of law enforcement and national security.
To restate, these figures are for US law enforcement requests alone. The
numbers increase dramatically when worldwide requests from governments
are considered. US law, like the Communications Assistance for Law
Enforcement Act (CALEA) of 1994, often requires companies to design
products and services in ways that accommodate intelligence and law
enforcement requests for surveillance. Some
companies, however, have done more than what was required to facilitate
government surveillance. For example, between 2001 and 2008, AT&T
actively cooperated with the NSA in such a manner that it was clear the
company ‘went above and beyond to facilitate government surveillance
without much concern over the legality of that surveillance’ (Harvard Law
Review, 2018: 1726). Some companies, however, have actively resisted
such requests (ibid.: 1727). Tech companies’ reluctance in the face of these
demands may be traced to two eminently pragmatic concerns. First, their
inclusion within the state’s policing efforts may set them against their
customers, who may refuse subscription to or purchase of services if they
know that their information and activities may come to the attention of
security agencies. Relatedly, the facilitation of expansive or even illegal
surveillance may harm their public image, which could have implications
not just for customer relations, but also for investors. Second, the retention
of vast quantities of data relating to every user’s every move will require a
massive increase in data storage capacities, thus entailing considerable extra
expense.
11.4 The Dilemmas of Surveillance as
Crime Control: The Case of Encryption
The controversy over the encryption of internet communication in many
ways exemplifies the tensions and dilemmas raised by the contradictory
imperatives of state surveillance and user privacy. The issue of encryption
was briefly discussed in Section 4.6. It was noted that encryption (along
with allied methods of encoding information such as steganography) can be
used to protect electronic communications from inspection by third parties.
Documented uses of encryption by those involved in criminal activities
include the encoding of email communications, of files stored on computers
and computer networks, and of information posted on publicly accessible
websites (Denning and Baugh, 2000: 106). Use of such techniques has been
encountered by investigations of organized crime, terrorist groups, and
internet paedophile and child pornography rings (Ormsby, 2018; Rogerson
and Pease, 2004: 3). The inability to overcome criminals' use of technology,
like encryption, to avoid surveillance and apprehension has been called the
‘Going Dark’ problem (Comey and Yates, 2015).
Encryption basically works by reordering data (e.g. in a text file) according
to a pattern specified by a ‘key’. A key is a string of digital code or ‘bits’
(ones and zeros). Without access to this key, one does not know how to
rearrange the data contents of a file back to their original order so as to
make them legible; in their encrypted form, the data will look like just a
random and nonsensical mass of characters. It is possible to discover the
encryption key by ‘brute force’ – using a computer to run through all the
possible combinations of ones and zeros until the correct order is hit upon
that will allow the data contents to be rearranged in their proper form. The
ability to do so, however, will depend crucially upon the amount of
computing power available and the length of the key – adding a single digit
to the key length will double the number of possible combinations that have
to be tried. In 1998, researchers at the US-based Electronic Frontier
Foundation built a purpose-built machine (the EFF DES Cracker) that could
try out around 100 billion keys per second; using this machine, they
successfully cracked an encryption key 56 bits in length in less than three
days (Denning and Baugh, 2000: 118). Such cracking can
be easily avoided by simply increasing the length of the encryption key
used, however: a key of 128 bits
is totally unfeasible to break ... by brute force, even if all the
computers in the world are put to the task. To break one in a year
would require, say, 1 trillion computers (more than 100 computers for
every person on the globe), each running 10 billion times faster than
the EFF DES Cracker. (Denning and Baugh, 2000: 118)
Even if today’s computational prowess were brought to bear on the issue,
128-bit encryption is virtually unbreakable through brute force attacks and
the problem is only exacerbated with the steady adoption of 256-bit
encryption standards. Because of the difficulty (or even impossibility) of
brute forcing many cryptographic standards, other methods are often relied
upon which exploit weaknesses in the cryptographic algorithms and
software.
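The mechanics described above can be illustrated with a toy sketch (our own illustration, not drawn from the sources cited): a message is 'encrypted' by XORing it with a short repeating key, and an attacker recovers the key by brute force, trying every candidate until a known fragment of plaintext (a 'crib') appears. Note how each additional key bit doubles the search space.

```python
from itertools import product

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same key again decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, key_len: int, crib: bytes):
    # Try all 256 ** key_len candidate keys; accept one whose decryption
    # begins with the known plaintext fragment (the 'crib').
    for candidate in product(range(256), repeat=key_len):
        key = bytes(candidate)
        if xor_encrypt(ciphertext, key).startswith(crib):
            return key
    return None

message = b"attack at dawn"
secret_key = b"\x2a\x17"          # a 16-bit key: only 65,536 possibilities
ciphertext = xor_encrypt(message, secret_key)

found = brute_force(ciphertext, key_len=2, crib=b"attack")
print(found == secret_key)        # prints True

# Each extra bit doubles the work: a 56-bit key has 2**56 (about 7.2e16)
# candidates, so at 100 billion keys per second a full search takes roughly
# eight days, with the key found on average after searching half the space.
```

Real ciphers such as AES scramble data far more thoroughly than this repeating XOR, but the brute-force arithmetic is the same: the attacker's cost is governed entirely by the key length.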
Many widely used computer applications now include encryption tools as
standard; for example, Microsoft’s Office software currently offers 256-bit
encryption for its Word documents, email messages and so on. Similarly,
Adobe’s Acrobat software (used to create PDFs) now comes equipped with
256-bit encryption. So called ‘public key’ methods of encryption, like
Pretty Good Privacy (PGP), are freely available for download from the
internet but because of their particular design require larger key lengths to
avoid brute force attacks, such as key lengths in the neighbourhood of 3072
bits or higher. Regardless, such powerful tools for data protection mean that
‘encryption, properly implemented, can be nearly foolproof’ (Reitinger,
2000: 135). Both business and individual internet users, concerned about
the possibility that their competitor or foreign governments may intercept
sensitive communications, are making increasing recourse to using such
tools to protect their data from prying eyes.
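The asymmetry underlying public key systems such as PGP can be sketched with a deliberately tiny 'textbook RSA' example (toy primes, our own illustration, wholly insecure): anyone holding the public key can encrypt, but decryption requires a private exponent that an eavesdropper could only recover by factoring the modulus, which is trivial here but believed infeasible at real key sizes.

```python
# Textbook RSA with toy primes: for illustration only, wholly insecure.
p, q = 61, 53
n = p * q                      # public modulus (3233); factoring n breaks the scheme
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

message = 65                   # a message encoded as an integer smaller than n
ciphertext = pow(message, e, n)     # anyone can encrypt with the public pair (e, n)
recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt

print(recovered == message)         # prints True
```

This is why public key systems need keys of 3072 bits or more: their security rests on mathematical problems like factoring, against which attacks much faster than brute force exist, so the keys must be far longer than those of symmetric ciphers to offer comparable protection.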
The concern aroused by encryption on the part of states and law
enforcement agencies has resulted in an ongoing battle over the past decade
to subject encryption technologies to a variety of forms of statutory
regulation. In fact, governments and intelligence agencies like the NSA
have sought to control access to and use of digital cryptography methods
for decades (Levy, 2001). In France, for example, the mid-1990s saw
attempts to institute a public ban on so-called ‘strong encryption’, with the
state arguing that only those with something illegal to hide need have access
to such security tools. The state also attempted to introduce a so-called ‘key
deposit’ system, where all manufacturers of encryption software would
have to provide the police with a key allowing data scrambled using their
programs to be deciphered by the authorities (Akdeniz, 1997). The
proposals were finally withdrawn as a result of pressure from the EU, which
wanted to promote the use of communication security technologies as part
of its blueprint for developing the information economy within Europe. The
respite was short-lived, however, and in the wake of the 9/11 attacks, the
passing of the Law on Everyday Security in November 2001 introduced
mandatory requirements for encryption tool providers to surrender codes
enabling the authorities to read encrypted messages (Coriou, 2002: 7). A
similar trajectory emerged in the UK, with the government proposing in its
Electronic Communications Bill (1998) to allow law enforcement agencies
to demand keys from encryption users, with failure to comply carrying a
‘presumption of guilt’ and resulting in a two-year custodial sentence (FIPR,
1999: 2). The proposal was dropped from the final draft of the Bill
following concerted opposition from many quarters. The proposals
requiring surrender of a decryption key were revived, however, and
successfully seen into law in the Regulation of Investigatory Powers Act
(RIPA) in 2000 (which has since been expanded in 2003, 2005, 2006 and
2010). Disclosure can be demanded ‘(a) in the interests of national security;
(b) for the purpose of preventing or detecting crime; or (c) in the interests of
the economic well being of the United Kingdom’ (RIPA, Section 49(2),
cited in Walker, 2001). Convictions for failure to disclose carry a maximum
penalty of 2 years imprisonment, although this increases to 5 years if the
case involves either national security or indecency involving a child
(Chatterjee, 2012). Government opposition to end-to-end encryption
erupted again in 2017, as then-Home Secretary Amber Rudd voiced
criticism of law enforcement’s inability to access communications on
platforms such as WhatsApp that use such encryption (McCarthy, 2017).
Proposals to compel communications companies to give real-time access to
user content appear to have foundered, however, after the government’s
failure to secure an outright parliamentary majority in the surprise general
election of that year. The response in the USA has been, if anything, even
more concerted. Throughout the second half of the twentieth century, the
NSA and similar agencies controlled the use of digital cryptography
predominantly through secrecy – cryptography details were largely kept
under lock-and-key and only certain external organizations were allowed to
work with cryptography under agreements with the US government (Levy,
2001). Technologists and mathematicians, however, began exploring
cryptography and developed publicly accessible encryption methods (ibid.).
In response to mounting pressure and anxiety about the emergence of public
encryption which would obstruct US intelligence and law enforcement data
gathering abilities, the NSA pushed for a ‘key escrow’ system in the late
1980s which was argued to be ‘a solution that not only could give the
unprecedented protection of strong crypto to the masses’ but ‘would also
preserve the government’s ability to get a hold of the original plaintext
conversations and messages’ (Levy, 2001: 229). In other words, the NSA
wanted to build a backdoor into the encryption to allow government access
to otherwise secure data. The desire to control encryption continued through
the Clinton Administration in the early 1990s and the US government
promoted a raft of measures which would have placed draconian limits on
the use of encryption and given law enforcement agencies wide-ranging
powers for interception and decryption. One example was the proposal for
the Clipper Chip, which was to be installed in all telephones and
computers and ‘would enable the government to decode any electronic
communication’ (Sykes, 1999: 160). The move was defeated due to
resistance from computer manufacturers and civil liberties organizations.
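The escrow principle behind proposals like Clipper can be illustrated with a minimal sketch (the 80-bit key size and the two-agent split are modelled loosely on public descriptions of the scheme; this is not the actual protocol). The device key is divided into two random-looking shares, each held by a separate escrow agent; neither share alone reveals anything about the key, but combining both recovers it:

```python
import os

# Illustrative 80-bit device key (the Clipper design used 80-bit Skipjack keys).
unit_key = os.urandom(10)

# Split the key into two shares via XOR: share_a is random, and share_b is
# whatever value XORs with share_a to give back the key.
share_a = os.urandom(len(unit_key))
share_b = bytes(k ^ a for k, a in zip(unit_key, share_a))

# Each escrow agent holds one share; in isolation, each is indistinguishable
# from random noise. An authorized party holding both can recombine them:
recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
print(recovered == unit_key)  # True
```

The sketch makes the policy dispute concrete: the cryptography itself is uncompromised, but whoever can compel disclosure of both shares can decrypt everything the device ever protected.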
Authorities argued that such interception capabilities would only be used to
track the communications of drug dealers and terrorists, and that the privacy
of ordinary US citizens would be respected. Yet, as well-known internet
freedom campaigner John Perry Barlow pithily quipped, ‘trusting the
government with your privacy is like trusting a Peeping Tom to install your
window blinds’ (cited in Sykes, 1999: 160). The years through the mid- and
late 1990s saw repeated attempts to introduce legislation which would
variously require manufacturers to supply keys, build secret ‘back doors’
into encryption software, ban foreign sales of encryption-capable programs,
and ban strong encryption altogether. As in other countries, the tide finally
turned in favour of law enforcement and against computer privacy activists
in the aftermath of the 9/11 attacks. The PATRIOT Act of 2001 instituted no
specific additional measures against encryption but, as already noted,
greatly increased law enforcement’s scope for data collection and
interception without the safeguards of judicial review or appeal. The
provisions of the Act were renewed by Congress in 2011, extending them
until at least 2019. Even today, US law enforcement agencies like the FBI
argue for controls of access to encryption to combat the aforementioned
‘Going Dark’ problem (Comey and Yates, 2015). For instance, a major
legal battle unfolded between the FBI and Apple after the 2015 mass
shooting which occurred in San Bernardino, CA. After the shooting, the
FBI seized the shooter’s smartphone and obtained a court order requiring
Apple to help the FBI crack the encryption on the device to facilitate their
investigation. Citing concerns that helping the US government crack
encryption in this case would set a precedent that authoritarian countries
like China and Russia could abuse, Apple refused to comply with the order.
The FBI, on the other hand, argued that failure to comply essentially made
the devices ‘warrant-proof’ (Selyukh, 2016). The case ignited debate over
the tension, endemic to encryption, between civil liberties such as privacy
and the demands of security. Much of the fervour died down after the FBI successfully
cracked the phone by paying $900,000 to professional hackers who
discovered a ‘previously unknown software flaw’ (Nakashima, 2016).
It can be argued that efforts to restrict or regulate the use of encryption (as,
for example, through key escrow schemes or building ‘back doors’ into the
software) may ultimately be counter-productive in tackling crime and
terrorism. Criminal organizations and actors, aware of the authorities’
ability to access data encrypted using commercial software, are liable to
make recourse instead to tools for which no back doors or master keys exist
(or even privately commission software tailored to their needs).
Consequently, the access acquired by law enforcement will prove
ineffective in countering encryption used by professional criminals; its only
use will be to enable surveillance of legitimate organizations and
individuals.
The encryption issue has thrown up clear tensions not only between
governments and their citizens, but also within governments themselves.
While those agencies concerned with crime and national security have
pushed for restrictions on non-governmental use of encryption, those
departments dealing with commerce, trade, industry and technology have
been encouraging business use of those selfsame technologies. The
perceived necessity for such use arises from the need to establish greater
information security. Public engagement with e-commerce (online
purchasing, electronic banking, etc.) has been limited by concerns over the
security of sensitive data that might be intercepted and exploited by third
parties such as hackers and fraudsters. As a result, building greater
protection for customer information, and thus greater confidence in the
integrity of internet business systems, is perceived as key for the continued
growth of the e-commerce sector. Thus, in its 2017 report on ‘the cyber
threat to UK business’, the government’s National Cyber Security Centre
(NCSC) argues that:
The cyber threat to UK business is significant and growing … This
threat is varied and adaptable. It ranges from high volume,
opportunistic attacks … to highly sophisticated and persistent threats
involving bespoke malware designed to compromise specific targets
… The rise of internet connected devices gives attackers more
opportunity … The past year has been punctuated by cyber attacks on
a scale and boldness not seen before. (NCSC, 2017: 4)
It goes on to advise that, in order to mitigate this threat, businesses need to
invest in ‘technical controls’ such as ‘access control, boundary firewalls and
internet gateways, malware protection, patch management and secure
configuration’, and to employ ‘best practice encryption’ (ibid.: 18–19). In
addition, recent events have highlighted the need to ensure that
cryptographic measures meet current standards, as evinced by the breach
of MyFitnessPal, a fitness app run by Under Armour, which resulted
in the data of 150 million users being compromised (Bradley, 2018). While
encryption was used to protect the breached passwords, some were
protected with dated methods which could be more easily cracked.
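The difference between ‘dated’ and current password protection can be shown with a brief sketch (the password and iteration count here are illustrative, not drawn from the breach). A single unsalted digest such as SHA-1 can be tested at billions of guesses per second on commodity hardware, whereas a salted, deliberately slow key-derivation function makes each individual guess expensive:

```python
import hashlib
import os

password = b"correct horse battery staple"  # illustrative only

# Dated approach: one fast, unsalted SHA-1 digest. Identical passwords
# always yield identical hashes, so attackers can use precomputed lookup
# tables and test candidate passwords at very high speed.
weak = hashlib.sha1(password).hexdigest()

# Current approach: a per-user random salt plus a deliberately slow,
# iterated key-derivation function (PBKDF2 here; bcrypt and Argon2 are
# common alternatives). The salt defeats precomputed lookups, and the
# high iteration count slows every individual guess.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(len(weak), len(strong))  # 40 hex characters vs 32 raw bytes
```

The stored record for the ‘strong’ case would include the salt and iteration count alongside the derived hash, so that the same computation can be repeated at login.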
advocacy of information security measures, including the use of data
encryption, has also been apparent at the international level. The
Organisation for Economic Co-operation and Development (OECD), in its
Recommendation Concerning Guidelines for Cryptography Policy, argues
that cryptography techniques ‘are critical to the development and use of
national and global information and communications networks and
technologies, as well as the development of electronic commerce’ (OECD,
1997). The OECD goes on to recommend that national encryption controls
must respect the rights to privacy, information security, and the needs and
demands of business. Thus we can see a basic tension between the agendas
of different authorities, with some advocating the widespread use of
encryption, and others seeking to curtail it in the name of crime control and
policing.
Another source of resistance to anti-encryption laws is the computer
security industry. This sector, which provides computer security goods and
services to commercial organizations, communities and individuals, has
grown apace in recent years (as discussed in Chapter 10). Research
estimates placed global revenues from the sale of security software alone at
$22.1 billion in 2015 (Gartner, 2016). The sector also furnishes security
services such as threat analysis and risk assessment; secure systems design
and implementation; and training for organizations and their employees in
using and implementing security systems and procedures (Gartner, 2017a;
Halverson, 1996: 9; Nugent and Raisinghani, 2002: 7–8; Rassool, 2003: 5–
6; Vaidhyanathan, 2003: 176–177; Wright, 1998: 10). Naturally, computer
security firms have a vested interest in the increasing uptake of information
security technologies such as encryption, and the legal restriction of such
technologies would prove highly damaging to future prospects for economic growth in this
market. Therefore, it comes as no surprise that information security
providers have made common cause with privacy advocates and civil
libertarians in lobbying to block bans and excessive controls on public
encryption. This alliance has continued to frustrate law enforcement in its
hopes to effectively outlaw or severely limit encryption technologies in the
name of crime fighting.
11.5 Summary
This chapter has dealt with the dilemmas and tensions that have arisen from
expanded efforts to institute greater surveillance and monitoring in response
to cybercrime. From the perspective of law enforcement agencies, the
ability to use anonymity, disguise and cryptographic encoding is a serious
impediment in tackling internet crime. States have taken this claim
seriously by instituting ever-greater monitoring and interception of online
communications. It is clear, however, that such measures have breached the
limits of rights to privacy that many citizens of liberal-democratic countries
might reasonably expect. The wave of legislative innovation in the wake of
the 9/11 attacks has merely served to amplify state security agencies’ ability
to monitor citizens, with, in many cases, severely limited accountability and
democratic controls. This has resulted in ongoing critique of and resistance
to internet surveillance by activists, and greater use of privacy enhancing
tools by internet users. The tension between surveillance and privacy
enhancement also appears in so far as the same tools that can be exploited
for criminal activity can also be used to afford users protection from
criminal victimization. This is nowhere clearer than in the case of
encryption, which can be viewed simultaneously as a serious threat to law
enforcement efforts and as an indispensable resource for businesses and
individuals to protect themselves from crime. This situation leads to
competing agendas between different state agencies, with the imperatives of
economic growth and law enforcement clashing when it comes to public
access to encryption tools. The outcome of such contestations and
negotiations in the coming years will have profound consequences for the
social development of the internet, its availability as an arena for criminal
enterprise, and its accessibility as a space in which civic cultures and social
relations may develop.
Study Questions
In what ways are our day-to-day uses of the internet subject to surveillance?
Are such surveillance measures justified by claims that they are necessary for tackling
cybercrime?
Can citizens trust states when it comes to their privacy?
Why is the regulation of privacy-enhancing technologies such as encryption so difficult to
address? Is it possible to meet the demands and needs of all social constituencies at the same
time?
Is encryption a ‘silver bullet’ when it comes to privacy?
If it comes to a choice between security and liberty, on which side should we err?
Further Reading
On the growth of surveillance in modern societies, see David Lyon’s Surveillance After
September 11 (2003) along with Roy Coleman and Michael McCahill’s Surveillance and
Crime (2010). On the development of informational surveillance, or dataveillance, Oscar
Gandy’s The Panoptic Sort: A Political Economy of Personal Information (1993) and
Bogard’s The Simulation of Surveillance: Hypercontrol in Telematic Societies (1996) provide
excellent and provocative reading. On the privacy implications of contemporary surveillance
practices, Charles Sykes’ The End of Privacy (1999) provides a stimulating and accessible, if
disturbing, assessment. Prominent surveillance scholar, Gary Marx, also summarizes much of
his foundational thinking and provides new insights in the area of surveillance in Windows into
the Soul: Surveillance and Society in an Age of High Technology (2016). Valuable and non-technical overviews of the encryption debate can be found in Dorothy Denning’s
Information Warfare and Security (1999) and Bruce Schneier’s Secrets and Lies: Digital
Security in a Networked World (2000). For an overview of the history of cryptography and its
political dimensions, see Steven Levy’s Crypto: How the Code Rebels Beat the Government –
Saving Privacy in the Digital Age (2001). Readers interested in data brokers may want to
check out the US Federal Trade Commission’s report entitled Data Brokers: A Call for
Transparency and Accountability (Ramirez et al., 2014) and Leanne Roderick’s article entitled
‘Discipline and power in the digital age: The case of the US consumer data broker industry’
(2014). Finally, the websites of the Electronic Privacy Information Center (www.epic.org) and
the Electronic Frontier Foundation (www.eff.org) are invaluable resources for critical analysis
of developments in the area of internet surveillance and privacy.
12 Conclusion: Looking Toward the Future
of Cybercrime
Chapter Contents
12.1 Introduction
12.2 Future directions in cybercrime
12.3 Considerations for civil liberties
12.4 Concluding thoughts
Study questions
Further reading
Overview
Chapter 12 examines the following issues:
The implications for future cybercrimes stemming from the Internet of Things, biological
implants, cloud-computing, and state and state-sponsored hacking and propaganda;
Additional considerations for tensions between attempts to regulate harmful online behaviours
and civil liberties like freedom of speech and privacy.
Key Terms
Bio-hacking
Cloud-computing
Fake news
Privacy fatigue
Privacy paradox
12.1 Introduction
As argued by its architects and advocates, the internet promises openness,
decentralization, relative anonymity and scalability. Like the wishes granted
by W.W. Jacobs’ ‘The Monkey’s Paw’, however, these promises may have
unintended or even paradoxical consequences. The previous chapters
explored numerous cybercrimes enabled by the internet and related
technologies including malware creation and distribution, unauthorized
system access, website defacements, cyberterrorism, state-sponsored
hacking, intellectual property piracy, online hate speech, online
pornography, cyberstalking and online child pornography, to name a few.
Also explored were various perils and pitfalls resultant from attempts to
combat cybercrime including those stemming from legal gaps, insufficient
law enforcement resources and expertise, jurisdictional limitations and civil
liberties issues related to freedom of speech and privacy. While avoiding
sensationalist speculation, the current chapter looks toward the future,
ruminating on some emergent issues for the field of cybercrime. Namely,
we explore the criminal potential of the Internet of Things, network-capable
implants, cloud-computing, and state-sponsored hacking and
misinformation campaigns. We also revisit and reflect further on persistent
civil liberties issues in the field, specifically those surrounding freedom of
speech and online privacy.
12.2 Future Directions in Cybercrime
12.2.1 Internet of Things
Amidst the proliferation of internet-connected devices, the Internet of
Things promises to play an instrumental role in cybercrime and security in
the coming years. We hurriedly connect to our networks all manner of
in-home devices, like thermostats, refrigerators, light fixtures, security
systems and cameras, that offer to ease our burdens and enhance our lifestyles. We
not only invite these devices into our homes, but also increasingly carry
IoT devices on our person in the form of smartphones, smartwatches and
fitness trackers. These devices further increase the amount of data we
distribute across the internet, including details about our daily activities,
health and preferences, making us more vulnerable to surveillance and data
breaches (Bradley, 2018; Greenwald and MacAskill, 2013; Ramirez et al.,
2014; Yar, 2012b; Zuboff, 2015). In our haste, however, we may
inadvertently introduce major vulnerabilities into our networks as many of
these devices are poorly secured (Greengard, 2015: 24). We have already
seen how such vulnerabilities can be exploited to wage major DDoS
attacks, such as those evinced by the Mirai botnet (Graff, 2017).
Additionally, an increasing percentage of our critical infrastructure is being
linked to the internet including power, water, natural gas,
telecommunications and related systems. In particular, this dimension of the
Internet of Things may play a bigger role in conflicts between nation states
in the coming years as evinced by recent reports that the Russian
government was involved in ‘infiltrating and probing’ the US electrical grid
(Newman, 2018).
12.2.2 Biological Enhancements: Networking the
Body
The trend toward the proliferation of network-connected devices is unlikely
to slow in the near future. In fact, such devices may only become more
integrated into our day-to-day lives as scientists, engineers, and so-called
‘bio-hackers’ develop implantable technologies with networking
capabilities, presenting potential new threats. Some devices are relatively
rudimentary like radio frequency identification (RFID) chips implanted into
hands (Grauer, 2018). Others, however, are relatively sophisticated
including smart orthopaedic and heart implants (O’Connor and Kiourti,
2017; Varma and Ricci, 2013). While these devices may provide significant
medical advantages to patients or help techno-utopians and cyborg-enthusiasts realize their dreams of an enhanced future, they may also
present new opportunities for criminal activity. For instance, the hacker and
security researcher Barnaby Jack found a vulnerability in certain
pacemakers that would allow him to wirelessly connect to nearby devices
and shut them off or force them to set off a high-voltage charge (Alexander,
2013). He had previously uncovered a way to hack nearby insulin pumps to
deliver fatal doses (Goodin, 2011). Other researchers have found potentially
lethal vulnerabilities in implantable cardiac devices as well (Marin et al.,
2016). Though malicious implant compromises are likely a minor risk at
present, such risks may grow over time as medical technology and bio-hacking advance.
12.2.3 Cloud Computing
Similarly, cloud-based technologies present new opportunities for
cybercrime activities. In essence, ‘The Cloud’ is a method whereby data can
be stored on an external server and accessed from other locations through
the internet. Cloud computing can provide different types of services to
users including (1) ‘software as service’ or the ability to ‘use the provider’s
applications running on cloud infrastructure’, (2) ‘platform as service’ or
the ability ‘to deploy onto the cloud infrastructure’ the user’s ‘own
applications without installing any platform or tools on their local
machines’, and (3) ‘infrastructure as service’ or the provisioning of
‘processing, storage, networking, and other fundamental computing
resources’ (Hashizume et al., 2013: 3). These forms of data storage and
management provide immense convenience as users can access their data
from almost anywhere with an internet connection without having to lug
around storage media. It also means, however, that their data is on the
internet and, if not properly secured, it could be accessed by others. For
example, in 2014 the Apple iCloud accounts of multiple female celebrities
like Jennifer Lawrence and Kate Upton were compromised, resulting in the
release of private nude photos (Marganski, 2018: 20). Another recent
example involves Timehop, a service that helps users to nostalgically revisit
old social media posts. The company suffered a significant data breach
compromising the data of 21 million users (including names, phone
numbers, email addresses, gender and date of birth) when ‘someone
accessed a database in Timehop’s cloud infrastructure that was not
protected by two-factor authentication’ (Ha, 2018). As more data migrates
to cloud storage, these kinds of intrusions and breaches may only become
more common.
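The two-factor authentication missing in the Timehop case commonly takes the form of time-based one-time passwords (TOTP, RFC 6238): a server and a user’s device each derive a short-lived code from a shared secret and the current time, so a stolen password alone is not enough to log in. A minimal sketch follows (the base32 secret is a placeholder, not a real credential):

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    # to a short decimal code.
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    # RFC 6238: the counter is simply the current 30-second time step.
    return hotp(base64.b32decode(secret_b32), int(time.time()) // interval)

print(totp("JBSWY3DPEHPK3PXP"))  # a fresh 6-digit code every 30 seconds
```

Because each code expires within seconds and depends on a secret that never crosses the network at login, an attacker who has ‘accessed a database’ of passwords still cannot authenticate without the second factor.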
12.2.4 State-sponsored Hacking and Propaganda
As previously indicated, the future of cybercrime will likely include greater
involvement by state and state-sponsored actors. The events surrounding
the 2016 US Presidential campaign demonstrate the evolving nature of
online government espionage and social media propaganda strategies.
During this period, Russian agents used the internet to sow discord amidst
the American electorate (Matishak, 2018). Taking advantage of the
relatively superficial ‘click-and-share’ method by which users generally
consume information over social media, a Russian ‘troll farm’ called the
‘Internet Research Agency’ spread disinformation about candidates, the
government, political protests, and almost any other matter related to
American political discourse, most notably in the form of ‘fake news’.
Though fake news is by no means a new issue, it has taken on a new life in
the age of social media. The Russian actors also compromised the security
of the Democratic National Committee and the Democratic Congressional
Campaign Committee, exfiltrating data that was potentially used to damage
the campaigns of key Democrats (ibid.). The problem was exacerbated by
non-state opportunists who capitalized on the discord by creating and
distributing their own fake news to generate web traffic and advertising
revenue for themselves (Altheide, 2018: 214). Though it is perhaps an
oversimplification, many people fell for these posts because they aligned
with their own political biases and appeared to be shared by people
they knew on social media. Such propaganda exploited the promotional
algorithms used by companies like Facebook and Google (Altheide, 2018:
217–221) and the tendency of many users to digest social media content
superficially, perhaps because they are too busy and distracted to
question it and conduct independent verification.
State-sponsored security breaches and disinformation campaigns are likely
to become a more significant part of the contemporary political landscape
as countries like the US, UK, China, North Korea, Russia, Israel and others
increase their digital capabilities. At the same time, we should expect that
non-state actors and organizations will similarly turn toward information
security as a means to resist governments, including hacktivists and ‘proto-state hackers’ – hackers operating within a non-state ‘militant organization’
– like the electronic division of the Palestinian Islamic Jihad, a group
dedicated to the armed resistance of Israel, including its surveillance
apparatus (Skare, 2018: 2).
These are only a few examples of emergent cybercrime issues. Just as
computer and networking technologies are in a constant state of flux, so too
are the pathways and opportunity structures for cybercrime. Focus on
change, however, makes it easy to forget that there is some stability in the
field as well. Many of the cybercrime issues described in the first edition of
this book back in 2006 are just as relevant today. As the old saying goes, the
more things change, the more they stay the same. Forms of fraud,
exploitation, sabotage, espionage, propaganda, theft and harassment predate
the internet and will endure on- and offline for years to come. Additionally,
our attempts to curtail such behaviours will likely result in similar conflicts
of social values and civil liberties – a case of ‘same song, different verse’. It
is toward emergent verses in these conflicts we now turn.
12.3 Considerations for Civil Liberties
12.3.1 Freedom of Speech
Just as cybercrimes will continue in some form or fashion for the
foreseeable future, so too will the range of related civil liberties issues
stemming from attempts to combat these offences. In particular, this chapter
revisits two tensions touched upon in this book and reflects on their future
implications for the study of cybercrime, security and regulation. The first
is the conflict between freedom of speech and the regulation of speech and
content. Chapters 7 and 8 highlighted difficulties in regulating hate speech
and obscene content (including child pornography) in a way that also
preserves freedom of speech. Each country adopts different standards
through varying approaches with the US likely providing the most
vociferous defences of free speech under its First Amendment. Today, we
see tensions in freedom of speech resurfacing through controversial debates
surrounding the contemporary Alt-Right, anti-feminist groups, and white
supremacists, which are blossoming among certain online communities
(Nagel, 2017). Though free speech is a core value for liberal democracies,
members of the far-right often also deploy free speech rhetoric as a rallying
cry for their causes and a flak jacket to protect them from condemnation
(Hoffmann-Kuroda, 2017; Nagel, 2017; Uyehara, 2018). Yet, this rhetoric
may encourage violence. Take, for example, the 2017 Alt-Right/white
supremacist ‘Unite the Right’ rally in Charlottesville, VA, where counter-protesters were attacked with a moving vehicle, leaving one person dead
(Heather Heyer) and 19 injured (Heim, 2017; Heim et al., 2018). Here, the
far-right protestors brandished traditional white supremacist and neo-Nazi
iconography, including hate symbols and language employed in online
subcultures like ‘kek’ – a derivation of ‘lol’ from World of Warcraft which
became a favourite term of online trolls (see: Neiwert, 2017). Though
perhaps we can quibble about whether such speech caused such violence, it
is difficult to deny that such speech coincides with racial violence. The
question remains – much as it has always lingered over the issue – should
such hate speech be restricted? The problem is unlikely to go away in the
near future.
Free speech issues also continue to emerge regularly around obscene
material like pornography. As previously explained in Chapters 7 and 8,
many see a clear need to regulate forms of pornography such as child
pornography, but widespread disagreement persists on what constitutes child
pornography and what other sorts of obscene material should be regulated –
tensions which are unlikely to be resolved any time soon. For many, the
underlying issue is distinguishing between protected forms of expression
and those that are harmful. Perhaps more pressing for contemporary times,
though, is the problem of ‘fake news’ which has resulted in multiple cries
for social media platforms and even the government to regulate the spread
of such disinformation (for example, Dreyfuss, 2018; Rankin, 2018). It has
also become a cautionary tale about the perils of irresponsible journalism
and its potential to amplify manipulative disinformation (Phillips, 2018).
Critics, however, caution that efforts to curtail disinformation and
propaganda may limit free speech (for example, Priday, 2018). The problem
raises difficult questions, many of which are familiar to the problem of
speech regulation and have no simple answer:
Who gets to decide which speech is offensive?
How do we differentiate between ‘fake news’ and articles which
present legitimate alternative political views?
Should freedom of speech protect deceptive speech?
Should such speech be censored? Or is the solution simply ‘more
speech’ and education?
These questions, and others, will only continue to drive debates around
online civil liberties and the regulation of online communications and
content.
While it has received little coverage in the previous
chapters, it is worth highlighting that free speech values and rhetoric have
historically been deployed by hackers and pirates. For instance, in 2001, a
Russian security researcher named Dmitry Sklyarov gave a talk at DEF
CON about a vulnerability in Adobe’s eBook copy protections. He was
charged with violating the Digital Millennium Copyright Act for publicly
disclosing the vulnerability. As Cory Doctorow (2008a: 10) explains, ‘The
FBI threw him in the slam for thirty days. He copped a plea, went home to
Russia, and the Russian equivalent of the State Department issued a blanket
warning to its researchers to stay away from American conferences, since
we’d apparently turned into the kind of country where certain equations are
illegal’. For many hackers and technologists, code is seen as speech which
deserves to be shared widely and freely (Stallman, 2002) and people should
be free to talk about technology, code and security freely as well (for more
information about coders’ rights and reporting in the US, see EFF, 2018).
And since code is analogous to speech, some hackers believe that people
should also be free to play with code as they wish. There
is thus disagreement about whether code should be treated as speech or property,
and this disagreement is a key schism between hackers and the organizations
and institutions which attempt to control and regulate technology. A similar
tension is evident among pirates who often – though not always – resist
notions of proprietary ownership of intellectual property and, instead, view
content as speech which should be shared freely (Steinmetz and Tunnell,
2013). Multiple authors have explored the role of classical liberalism in
informing the values and politics of online subcultures including hackers,
pirates and other technologists (for example, Coleman, 2017; Fuchs, 2013;
Jordan, 2001; Steinmetz, 2016). The point is not to say that these liberal
values are a problem (we quite enjoy freedom of speech, thank you very
much); rather, it is that interpretations of these values may vary and conflict
– including those held by hacker groups, pirates and other technologists and
those who wish to regulate code, computers and networks. Values like
freedom of speech have been and continue to be instrumental in the creation
of computer technologies and digital infrastructure but they may also be
sites of conflict as well, now and for years to come, including in the area of
cybercrime.
12.3.2 Online Privacy
The second major civil liberties issue that will continue to dominate public
and political debates surrounding cybercrime was highlighted throughout
Chapter 11 but deserves further consideration: the tension between privacy
and surveillance (for security or profit). The Snowden leaks were a
watershed moment, revealing the scope and scale of governmental
surveillance and increasing awareness about online privacy. Further, recent
major data breaches have potentially made the public more aware of the
dangers of corporate data collection and retention. Despite awareness and
resistance, however, both governmental and corporate surveillance
programs will continue for the foreseeable future, be it through programs like
the US’s PRISM or Google’s predictive data mining. We can
expect resistance to such surveillance regimes to persist, as well as attempts
to criminalize the avoidance of surveillance through means such as the
adoption of cryptographic technologies.
In the age of corporate or government surveillance, data breaches, identity
theft and other threats to personal data, one can easily see why activists
continue to advocate for public vigilance against privacy threats. Yet it is
difficult to fully participate in contemporary social, political and cultural
life without regularly surrendering sensitive personal information, be it
through social media, online shopping, filling out government forms, or
even simply browsing the web. In fact, while people may express
favourable attitudes toward strong privacy protections, many regularly
surrender their information through casual internet use – a notion some
have termed the ‘privacy paradox’ (Kokolakis, 2017). It can be difficult to
be an engaged digital citizen and maintain control over one’s privacy
without constant vigilance about the kinds of data one may be releasing. As
a result, some internet users may succumb to ‘privacy fatigue’ or ‘a sense of
weariness toward privacy issues, in which individuals believe that there is
no effective means of managing their personal information on the Internet’
(Choi et al., 2018: 42). In the area of cybercrime, the implication is that
criminal opportunities online may arise not only from technological
or human error but also from apathy and resignation in the face of the
seemingly Sisyphean task of safeguarding one’s sensitive information.
12.4 Concluding Thoughts
Looking forward, there is plenty of work ahead for both scholars and
practitioners in the areas of cybercrime and information security. The image
given thus far may be one of futility – that fighting these crimes is like
cutting the heads off the Hydra. There is reason for optimism, however.
Governments, corporations, online communities and the general public are
actively grappling with these issues. Though sometimes messy and
controversial, the fact that such conversations are occurring is promising.
As a society, we continue to innovate in our measures to reduce both the
social harms caused by cybercrime and those created by attempts to
curtail such offences. Some battles are lost. Others, however, are won. At
the very least, the future promises to keep us busy. In this spirit, to borrow
from the late Albert Cohen (1993), we’d like to raise a glass and propose a
toast, ‘to criminals and crime!’
Study Questions
What advantages could criminals gain by exploiting biological implants and would these
advantages ever be enough to warrant concern about widespread victimization?
Is the convenience offered by cloud computing worth the potential for major data breaches?
How does the value of free speech continue to inform contemporary debates over online
regulations and cybercrime?
In today’s informatized society, what does privacy actually mean? And is privacy dead?
Further Reading
There are a host of books not mentioned in previous chapters that interested students and
scholars may find informative as they continue to think about issues surrounding cybercrime,
security and society. Readers interested in the capacity of media, like the
internet, to influence communication and narrative framings should check
out Neil Postman’s Amusing
Ourselves to Death (1985) and Marshall McLuhan’s Understanding Media (1964). Though
both of these works predate the modern internet, their insights are useful for thinking about
how the internet may influence contemporary discourse. Readers interested in exploring more
about how we, as a society, think about technology and the internet may find Majid Yar’s The
Cultural Imaginary of the Internet (2014) to be a useful starting point. For more information
about the adjacent and informative fields of the Sociology of Technology
and Science and Technology Studies, consider starting with John Law and
Wiebe Bijker’s Shaping
Technology/Building Society: Studies in Sociotechnical Change (1992), Bruno Latour’s
Reassembling the Social: An Introduction to Actor-Network Theory (2007) and Shoshana
Zuboff’s In the Age of the Smart Machine: The Future of Work and Power (1989).
Glossary
ADVANCE FEE FRAUD
Also known as the ‘Nigerian Letter Scam’ and the ‘Spanish Prisoner’,
this fraud typically works by offering the victim large financial returns
in return for assistance in extracting monies which are supposedly held
captive in bank accounts in another country. An ‘advance fee’ is
requested from the victim to assist with the extraction, but he or she
never gets to see a share of the (in reality non-existent) millions
promised.
ANONYMITY
The ability to engage in social action and communication without
having one’s identity made available to others.
ANONYMOUS
A loosely organized collective of hacktivists that emerged in 2003;
from 2008 it has been especially active in organizing hacks in support
of a range of causes including freedom of speech, resistance to internet
surveillance and resistance to corporate capitalism and militarism.
ANTI-GLOBALIZATION MOVEMENT
A broadly based social protest movement that emerged during the
1990s in resistance to economic globalization and its control by
Western governments and corporations.
BDSM
Bondage, Domination and Sadomasochism. Fetishes that focus upon
the enactment or performance of domination, submission, punishment
and obedience as a source of sexual gratification.
BIG DATA
A term which can refer to massive datasets, often gathered from
multiple sources, or the specific use of massive datasets for the
purposes of surveillance, targeted advertising and other predictive
endeavours.
BIO-HACKING
A form of hacking focusing on enhancing the human body through
augmentation.
BOT
A ‘robot network’ of computers whose security defences have been
compromised by hacking, and which can be controlled by an external
party.
CHILD
Developmental stage in the early years of life. Who counts as a child,
and which distinctive characteristics are associated with being a child
(childhood) are subject to great variation across cultures and history.
CHILD PORNOGRAPHY
Representations featuring a child or children depicted in an explicitly
sexualized manner and/or engaging in sexual activity. A child may
variously be defined for the purposes of child pornography as (a) a
pre-pubescent, or (b) an individual who may have physically matured
but who is still under the legal age of majority. Given legal differences
across countries, any given article may or may not be deemed to be
child pornography dependent on how the age of legal majority or
sexual maturity is defined.
CLOUD COMPUTING
The provision of storage and processing power by companies to
provide users with software, infrastructure and platforms accessible
from multiple locations and synced in near real-time.
CONTENT ANALYSIS
A form of research that involves the systematic and in-depth analysis
of secondary documents (i.e. historical documents, news articles,
magazine articles, television shows, etc.) for themes and patterns.
COPYRIGHT
Those property rights associated with ‘original’ expressions, be they in
visual, spoken, written, audio or other forms. The possession of
copyright over an expression entitles the holder to control its copying
and distribution.
CRACKING
A generally derogatory term used to describe activities associated with
‘hacking’ in its second sense, that of unauthorized access to computer
systems.
CRIME
Any act that contravenes the law and is subject to prosecution and
punishment by the state.
CRYPTOCURRENCY
Cryptographically secured forms of digital currency. While they may
be used legitimately, they are popular among online offenders because
they are difficult to trace.
CYBERCRIME
Any criminal activity that takes place within or by utilizing networks
of electronic communication such as the internet.
CYBERSPACE
The interactional space or environment created by linking computers
together into a communication network.
CYBERSTALKING
Stalking that takes place via online communication mechanisms such
as email, chat rooms, instant messaging and discussion lists.
CYBERTERRORISM
Terrorist activity that targets computer networks and information
systems for attack.
CYBERWARFARE
The exploitation of computers and computer networks by nation states
for the purposes of disrupting, disabling, or destroying an adversarial
state’s systems, including critical infrastructure.
DATA BROKER
A company or firm that specializes in creating datasets of information
about individuals, often from a variety of sources, which can then be
sold or licensed to other companies.
DATAVEILLANCE
Surveillance that focuses not upon the visual or other tracking of the
physical individual, but on collecting, collating and analysing data
about the individual’s activity, often in the form of electronic records.
DENIAL OF SERVICE
An attack on a networked computer or computers that disrupts normal
operations to such an extent that legitimate users can no longer access
their services. Attacks of this nature which involve the use of multiple
computer systems directed toward a target are referred to as distributed
denial of service (DDoS) attacks.
DETERRENCE THEORY
A theory of crime which focuses on the use of punishments to deter
future criminal behaviours by offenders or would-be offenders.
DEVIANCE
Behaviour that may not necessarily be criminal, but which may
transgress collective norms about appropriate or acceptable behaviour.
While crimes are subject to formal, legal sanctions, those actions
deemed deviant may be subject to informal sanctions such as social
stigmatization and denunciation.
DIGILANTISM
A portmanteau of ‘digital’ and ‘vigilantism’ which refers to actions
taken by persons online to punish, or otherwise hold accountable,
individuals for perceived slights or crimes outside of legitimate
institutional methods of redress.
DIFFERENTIAL ASSOCIATION
A criminological concept, developed by Edwin Sutherland, which
holds that young people in particular learn how to engage in deviant
and criminal activity through their association with others who are
similarly inclined.
DOWNLOADING
The act of copying digital material posted online into the storage
medium on one’s computer.
DRIFT AND NEUTRALIZATION THEORY
A theory of crime which holds that criminals and non-criminals are
oftentimes indistinguishable from one another and that individuals
tend to ‘drift’ in and out of criminal activity and patterns. It also
explores the use of techniques of neutralization which are said to
permit deviants/criminals to violate mainstream norms and values.
E-COMMERCE
Market economic activity undertaken via the internet or similar
electronic communication networks.
ENCRYPTION
Techniques and tools associated with encoding or scrambling data in
such a way as to render these incomprehensible to others not in
possession of a ‘key’ that is needed to decipher the data into their
original legible form.
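The key-based principle described in this entry can be sketched in a few
lines of code. The example below uses a toy XOR cipher purely for
illustration; it offers no real security, and production systems rely on
vetted algorithms such as AES rather than anything like this.

```python
# Illustrative only: a toy XOR cipher showing that encrypted data is
# illegible without the key, and recoverable with it.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key. Because XOR is its own
    # inverse, applying the function twice with the same key restores
    # the original data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at noon"
key = b"secret"

ciphertext = xor_cipher(message, key)    # scrambled; unreadable without the key
recovered = xor_cipher(ciphertext, key)  # legible again with the key

assert recovered == message
assert ciphertext != message
```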
ETHNOGRAPHY
An in-depth form of research where the researcher immerses
themselves within the population under study in an attempt to
understand the world from their perspective.
EXPERIMENTAL DESIGN
A form of research where, in its simplest form, the researcher divides
their sample population into two or more groups with one acting as a
‘control’ group receiving no special treatment and an ‘experimental’
group which has some treatment administered to them. Ideally, any
differences on measured outcomes between the two groups should be a
result of the treatment exclusively.
FAKE NEWS
News that is purposefully designed to misinform its audience.
FEMINIST CRIMINOLOGY
A perspective in criminology that focuses on the role of gender in
crime, victimization and criminal justice.
FILE SHARING
The practice of allowing others to make copies of files stored on a
computer via downloading. The practice is generally associated with
the sharing of music, movies, images and software via websites
dedicated to such copying.
FIRST AMENDMENT RIGHTS
Those rights to unobstructed expression guaranteed by the First
Amendment to the Constitution of the United States of America.
GENDER
That aspect of male–female differences which is attributable to social
upbringing and cultural codes, rather than to any innate sexual or
biological differences between men and women.
GENDERTROLLING
A term developed by Karla Mantilla (2015) which denotes an intense
form of harassment waged against women online.
GLOBALIZATION
The social, economic, political and cultural processes in which local
and national spatial limits on interaction are overcome, and so come to
span the globe.
GOVERNANCE
Concept used to denote a change in regimes of social ordering,
entailing a move from ‘top-down’ state-led control to a ‘network’ of
actors and institutions (both public and private) who engage in
coordinated action and partnership.
HACKING
A term with two distinctive meanings. First, the act of creative
problem solving when faced with complex technical problems and,
second, illicit and usually illegal activities associated with
unauthorized access to, or interference with, computer systems.
HACKTIVISM
Political activism and social protest that use hacking tools and
techniques.
HATE SPEECH
Any form of speech or representation that depicts individuals or
groups in a derogatory manner with reference to their ‘race’, ethnicity,
gender, religion, sexual orientation, or physical or mental disability in
such a manner as to promote or provoke hatred.
HENTAI
Sub-genre of Japanese comic books (Manga) and animation (Anime)
focused on the depiction of explicit sexual scenarios.
HIDDEN CRIME
Criminal acts that tend to go largely unobserved, unremarked and
unrecorded in official assessments and measures of criminal activity.
IDENTITY THEFT
The theft of personal information and biographical details, enabling
another party to pass themselves off as the victim. Identity theft is
typically used to facilitate a variety of frauds.
INDECENCY
Any form of representation, expression or action (especially of a
sexual nature) which may be held to be potentially offensive to the
sensibilities of some or most of a society’s members. In many (though
not all) Western democracies, expressions that may be considered
indecent are nevertheless tolerated and not subject to strict legal
prohibition.
INFORMATION SOCIETY
A stage of socio-economic development in which the importance
previously allocated to the production of material goods and resources
is superseded by the centrality of knowledge and information in
economic activity.
INTANGIBLE GOODS
Goods over which an individual or other legally recognized entity (e.g.
a company) has rights of possession, but which do not take a
materially tangible form.
INTELLECTUAL PROPERTY
Property that takes the form of ideas, expressions, signs, symbols,
designs, logos and similar intangible forms.
INTERNET
The publicly accessible network of computers that emerged in the
1970s and came to span the globe by the late 1990s.
INTERNET AUCTIONS
Online marketplaces enabling individuals and businesses to post a
wide variety of items for sale.
INTERNET OF THINGS
The integration of internet connectivity into consumer, industry and
related devices for the purposes of providing additional functionality.
INTERVIEWS
A form of data gathering within the social sciences that involves
conversing with research participants in a purposeful manner designed
to glean insights into their perceptions, experiences, values, beliefs and
so on and documenting those conversations.
JAILBAIT IMAGES
Images depicting adolescent minors sexualized by their viewers. The
word is a recognition that, though an adult viewer may find the subject
attractive, actual sexual interaction would be against the law.
LABELLING THEORY
A theory of crime which focuses on societal reactions to crime,
particularly the social process involved in stigmatizing a person as a
‘criminal’ and the potential outcomes such a label may generate. It
also explores the processes by which persons come to identify
themselves as insiders within criminal groups.
LEGAL PLURALISM
The differences in legal regulations and prohibitions apparent across
different states.
MALICIOUS SOFTWARE OR ‘MALWARE’
A general term for a variety of computer codes (such as viruses, logic
bombs and Trojan horses) which are designed to disrupt or interfere
with a computer’s normal operation.
MASCULINITY
The dominant cultural norms, ideas, values and expectations
associated with being male. These can include norms about sexual
orientation and behaviour, aggression and violence, legitimate interests
and aspirations, emotional expressivity, dress and body language, and
a host of similar social attributes.
METADATA
Information about data. For instance, metadata from phone calls would
not include the content of the call itself, but might include the parties
involved, duration and location data of the callers.
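The phone-call example in this entry can be made concrete with a small
sketch. All values below are invented for illustration: the words spoken
are the content, while the record describing the call is the metadata.

```python
# A hypothetical phone-call record (all values invented) illustrating
# the content/metadata distinction.
call_content = "Hello, can we reschedule to Friday?"

call_metadata = {
    "caller": "+44 20 7946 0000",
    "recipient": "+44 20 7946 0001",
    "start_time": "2019-03-01T14:05:00Z",
    "duration_seconds": 312,
    "cell_tower": "LON-0042",  # coarse location of the caller
}

# Metadata alone reveals who spoke to whom, when, for how long and
# roughly where, without disclosing a single word of the conversation.
assert call_content not in str(call_metadata)
```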
MORAL PANIC
An unwarranted or excessive reaction to a perceived problem of crime,
deviance or social disorder, often produced by representations in the
mass media.
NFC
‘Near Field Communication’. A technology used extensively in the
latest generation of ‘smartphones’ and mobile devices that permits the
wireless transmission of data over short distances, enabling such
devices to be used for making instantaneous payments and financial
transactions.
OBSCENITY
A notoriously slippery term used to denote representations,
expressions or actions (often of a sexual nature) that are held to be
generally offensive and therefore unacceptable by society at large.
Obscenity is almost invariably subject to legal prohibition and formal,
criminal sanctions. Just what constitutes an ‘obscenity’ is, however,
deeply contested, and subject to profound variation across cultural
contexts and to change over time.
OFFICIAL STATISTICS
Measures of the scope, scale and nature of criminal offending
compiled and published by the state and allied criminal justice
agencies.
PAEDOPHILIA
The sexual attraction of adults towards children of either sex.
PHISHING
The fraudulent practice of sending emails to individuals that purport to
come from a legitimate internet retailer or financial service. The aim of
phishing is to persuade the victim to voluntarily disclose sensitive
information, such as bank account and credit card details, which can
then be exploited to defraud the individual concerned.
PIRACY
A popular term for copyright violations; the unauthorized copying,
distribution or sale of informational goods over which some party
claims to possess proprietary rights.
PLURALIZATION
A term often used in relation to policing and crime control activities, in
order to denote a shift in which such activities are increasingly
undertaken by a wide range of actors and agencies in addition to the
police.
POLICE
A formally organized institutional apparatus that is charged with
upholding laws on behalf of society and is ultimately directed by and
accountable to the state.
POLICING
A wide range of activities that serve to monitor and control social
behaviour. Policing may be undertaken by official state-sanctioned
bodies (such as the police), by private organizations, by communities
or individuals.
PORNOGRAPHY
Visual or written representations of a sexually explicit nature, whose
primary aim or use is to stimulate sexual excitement.
PRIVACY
The right to be left alone; freedom from observation and interference
from others.
PRIVACY FATIGUE
A sense of exhaustion and resignation stemming from a perceived
futility in efforts to protect private information.
PRIVACY PARADOX
A term denoting the tendency of people who care about online privacy
to also regularly give away private information through everyday
internet usage.
PRIVATIZATION
A term used to describe a process in which functions previously
fulfilled by the state are turned over to a range of private (often
commercial) actors.
PROPERTY RIGHTS
Legally institutionalized rights to own and control goods.
RADICAL CRIMINOLOGY
A theoretical perspective in criminology that focuses on the role of
class and the political economy in crime, victimization, criminal
justice and criminalization rooted within the Marxist theoretical
tradition.
RANSOMWARE
A form of malware that involves limiting the functionality of a
computer system and a demand from the offender for a ransom
payment. Generally assurances are made by the offender that
functionality will be restored upon successful payment.
RATIONAL CHOICE THEORY
A theory of crime which focuses on the decision making process
engaged in by people who choose to engage in criminal activities. It
generally argues that people are rational decision makers who will
make cost–benefit decisions to engage in crime based on both
objective and subjective criteria.
RECORDING OF CRIME
The process through which reported incidents are officially classified
and recorded as instances of crimes.
REPORTING OF CRIME
The process through which incidents of criminal victimization are
reported to authorities such as the police or to other organizations and
bodies who monitor crime levels.
REPRESENTATIONS OF CRIME
The constructions and depictions of crime and criminals that circulate
within mass media, political discourse and official accounts of the
‘crime problem’.
REVENGE PORN
Also known as ‘non-consensual pornography’, it involves the online
distribution, without consent, of nude photographs or videos of people
– mostly women – for the purposes of retribution or humiliation.
ROUTINE ACTIVITIES THEORY
A theory of crime that focuses on how changes in the routine activities
of everyday life influence opportunities for people to engage in crime.
In particular, changes in routine activities may alter the likelihood of
motivated offenders, suitable targets, and an absence of capable
guardians to be brought together in time and space.
SCAMBAITER
One who engages in a form of digilantism by attempting to ‘scam the
scammers’ – shaming, punishing, or revealing the crimes of fraudsters
through often deceptive or coercive means.
SCRIPT KIDDIE
A derogatory term used in hacker circles to denote a relatively
unskilled computer user who mainly relies on premade programs or
scripts to launch their computer-based intrusions or sabotages.
SECOND LIFE
An online virtual world that enables users (called ‘residents’) to
inhabit a simulated environment and interact with other users via
virtual personae called ‘avatars’.
SELF-CONTROL THEORY
A theory of crime which holds that people commit crime because of an
inability to regulate their impulses or because the environment fails to
properly corral their behaviours.
SEXTING
A neologism combining the words ‘sex’ and ‘texting’, used to describe
sexually explicit messages and images exchanged between individuals
using text messaging services on mobile devices.
SOCIAL ENGINEERING
A term used to describe the influence, manipulation or deception of the
people involved in information security. Originates from phone phreak
and hacker culture.
SOCIAL INCLUSION AND SOCIAL EXCLUSION
The duality in which some individuals and groups are economically,
politically and culturally incorporated within the social order, and
others are excluded from full participation within it.
SOCIAL LEARNING THEORY
A theory of crime which argues that crime is a learned behaviour. It
gives particular attention to the role of peer groups as instrumental in
transferring pro-criminal knowledge, values and beliefs.
SPOOFING
The fraudulent practice of establishing facsimiles of legitimate
websites to which victims can be directed and where they will
unknowingly surrender sensitive information such as bank details,
credit card numbers and account passwords.
STALKING
Repeated harassing or threatening behaviour, in which an offender
persistently contacts, follows, approaches, threatens or otherwise
subjects a victim to unwelcome attentions.
STRAIN THEORY
Theories of crime which focus on the role of social and environmental
factors in creating stress and pressure on individuals that may push
them into criminal activity.
SUBCULTURE
The notion of a ‘culture within a culture’; a group which has its own
distinctive norms, codes and values, which are used to unite its
members and differentiate the group from others in ‘mainstream’
society or rival groupings.
SURVEILLANCE
The systematic observation and monitoring of people and places as a
tool for effecting greater control over behaviour.
SURVEYS OF CRIME AND VICTIMIZATION
An alternative to official crime statistics, criminal victimization
surveys ask members of the public about their experiences of crime,
including many that, for a variety of reasons, may not have been
reported to the police or other authorities.
TECHNIQUES OF NEUTRALIZATION
A concept developed by criminologists Graham Sykes and David
Matza to describe those strategies which youth use to dissociate
themselves from dominant moral codes about acceptable behaviour,
and which serve to loosen inhibitions and assuage guilt, thereby
freeing them to engage in criminal and/or deviant acts.
TERRORISM
A notoriously slippery and contested term; in its most conventional
sense, it denotes the use of violence or the threat of violence in the
pursuit of political ends.
TRANSNATIONAL CRIME AND POLICING
Criminal activities that span the borders of national territories, and
crime prevention and control initiatives designed to tackle or reduce
such offences.
TROJAN HORSES
Malicious software programs which are infiltrated into computers
disguised as benign applications or data.
TROLLING
The act of challenging or irritating a person or a group, often through
the sardonic use of inflammatory rhetoric or images.
VIRTUAL MOB
Also known as ‘brigading’ or ‘dogpiling’, it is a form of harassment
where offenders coalesce into an online mob and ‘dogpile’ onto the
victim, en masse, in a harassment campaign.
VIRTUAL PORNOGRAPHY
Pornographic representations (either images or animations) that are
entirely computer generated.
VIRUSES
Pieces of computer code that can ‘infect’ computer systems causing
disruption to their normal operation.
WAR ON TERROR
Rhetorical and political response among Western governments in the
wake of the 11 September 2001 (9/11) attacks on the United States, which
adopts a highly aggressive and ‘pro-active’ stance in identifying,
capturing and disabling actual, suspected and potential terrorists, along
with those who are perceived to sympathize with or support their
goals.
WEBSITE DEFACEMENT
The activity of altering the code organizing a website so as to change
the visible screen content.
WORM
Malware that replicates and spreads across a computer network.
ZOMBIE
An internet-connected computer that has been compromised by
hacking, such that it can be controlled by an external party. Zombie
computers are typically linked together into a larger network of
similarly compromised machines so as to create a ‘botnet’ or ‘robot
network’.
References
AACP (Alliance Against Counterfeiting and Piracy) (2002) Proving the
Connection: Links between Intellectual Property Theft and Organised
Crime. London: AACP.
Aas F (2005) Sentencing in the Information Age: From Faust to Macintosh.
London: GlassHouse Press.
ABC News Online (2005) ‘Seller charged over stolen goods on eBay’, 2
February. Available at:
https://web.archive.org/web/20070810111829/www.abc.net.au/news/newsitems/200502/s1294115.htm (accessed 16 July 2018).
ABS (Australian Bureau of Statistics) (2016a) 4528.0 – Personal Fraud,
2014–15. Available at: www.abs.gov.au/ausstats/abs@.nsf/mf/4528.0
(accessed 24 April 2018).
ABS (Australian Bureau of Statistics) (2016b) Experience of Stalking –
Personal Safety Survey 2016. Report. Available at:
www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/4906.0~2016
~Main%20Features~Experience%20of%20Stalking~28 (accessed 31
May 2018).
Acar KV (2016) Sexual extortion of children in cyberspace. International
Journal of Cyber Criminology 10(2): 110–126.
ACCC (Australian Competition & Consumer Commission) (2018a) Dating
& Romance. Available at: www.scamwatch.gov.au/types-of-scams/dating-romance (accessed 25 April 2018).
ACCC (Australian Competition & Consumer Commission) (2018b)
Nigerian Scams. Available at: www.scamwatch.gov.au/types-of-scams/unexpected-money/nigerian-scams (accessed 25 April 2018).
ACLU (American Civil Liberties Union) (n.d.) Surveillance under the
USA/PATRIOT Act. Available at: www.aclu.org/other/surveillance-under-usapatriot-act (accessed 25 July 2018).
ACPO (Association of Chief Police Officers) (2012) ACPO Good Practice
Guide for Digital Evidence. Available at: www.digital-detective.net/digital-forensics-documents/ACPO_Good_Practice_Guide_for_Digital_Evidence_v5.pdf
(accessed 18 July 2018).
Adams W and Flynn A (2017) Federal Prosecution of Commercial Sexual
Exploitation of Children Cases, 2004–2013. Report No. NCJ 250746,
Washington, DC: Bureau of Justice Statistics. Available at:
www.bjs.gov/content/pub/pdf/fpcsecc0413.pdf (accessed 21 May 2018).
Adey P (2004) Secured and sorted mobilities: Examples from the airport.
Surveillance and Society 1(4): 500–519.
ADL (Anti-Defamation League) (2018) Quantifying Hate: A Year of Anti-Semitism on Twitter. Report, New York, NY: Anti-Defamation League.
Available at: www.adl.org/resources/reports/quantifying-hate-a-year-of-anti-semitism-on-twitter (accessed 1 August 2018).
Adler PA (2003) Wheeling and dealing: An ethnography of an upper-level
drug dealing and smuggling community. In D Harper and HM Lawson
(eds) The Cultural Study of Work. Lanham, MD: Rowman and
Littlefield.
Agnew R (1992) Foundation for a general strain theory of crime and
delinquency. Criminology 30(1): 47–88.
Aitken S, Gaskell D and Hodkinson A (2018) Online sexual grooming:
Exploratory comparison of themes arising from male offenders’
communications with male victims compared to female victims. Deviant
Behavior 39(9): 1170–1190.
Akdeniz Y (1996) Computer pornography: A comparative study of the US
and UK obscenity laws and child pornography laws in relation to the
Internet. International Review of Law, Computers and Technology 10(2):
235–262.
Akdeniz Y (1997) UK Government Policy on Encryption. Available at:
https://ssrn.com/abstract=41701 or http://dx.doi.org/10.2139/ssrn.41701
(accessed 30 July 2018).
Akdeniz Y (1999) Sex on the Net: The Dilemma of Policing Cyberspace.
London: South Street Press.
Akdeniz Y (2000) Child pornography. In Y Akdeniz, C Walker and D Wall
(eds) The Internet, Law and Society. Harlow: Longman.
Akdeniz Y (2001a) Governing Pornography and Child Pornography on the
Internet: The UK Approach. Available at: www.cyber-rights.org/documents/us_article.pdf (accessed 16 July 2018).
Akdeniz Y (2001b) International Developments Section of Regulation of
Child Pornography on the Internet. Available at: www.cyber-rights.org/reports/interdev.htm (accessed 16 July 2018).
Akdeniz Y (2001c) United States Section of Regulation of Child
Pornography on the Internet: Cases and Materials Related to Child
Pornography on the Internet. Available at: www.cyber-rights.org/reports/uscases.htm (accessed 16 July 2018).
Akdeniz Y (2008) Internet Child Pornography and the Law: National and
International Responses. Aldershot: Ashgate.
Akdeniz Y and Strossen N (2000) Sexually oriented expression. In Y
Akdeniz, C Walker and D Wall (eds) The Internet, Law and Society.
Harlow: Longman.
Akdeniz Y, Walker C and Wall D (eds) (2000) The Internet, Law and
Society. Harlow: Longman.
Akers RL (1991) Self-control as a general theory of crime. Journal of
Quantitative Criminology 7(2): 201–211.
Akers RL (1998) Social Learning and Social Structure. Boston, MA:
Northeastern University Press.
Akers RL and Jensen G (2006) The empirical status of social learning
theory of crime and deviance: The past, present, and future. In FT
Cullen, J Wright and KR Blevins (eds) Taking Stock: The Status of
Criminological Theory. New Brunswick, NJ: Transaction Publishers.
ALA (American Library Association) (2004) ALA Intellectual Freedom
Committee Report to Council. Available at:
https://web.archive.org/web/20061204135852/http://www.ala.org/ala/oif/
ifgroups/ifcommittee/ifcinaction/ifcreports/ifcreportac04.pdf (accessed
16 July 2018).
Alexander W (2013) Barnaby Jack could hack your pacemaker and make
your heart explode. Vice, 25 June. Available at:
www.vice.com/en_us/article/avnx5j/i-worked-out-how-to-remotely-weaponise-a-pacemaker (accessed 14 August 2018).
Allen N (2011) Teenager arrested in UK as FBI raids Anonymous hacking
group, The Telegraph, 19 July. Available at:
www.telegraph.co.uk/news/worldnews/northamerica/usa/8648863/Teena
ger-arrested-in-UK-as-FBI-raids-Anonymous-hacking-group.html
(accessed 16 July 2018).
Almeida M and Fernandez K (2009) Microsoft, HSBC, Sony, Coca-Cola,
New Zealand hacked, zone-h, 21 April. Available at: www.zone-h.org/news/id/4708 (accessed 16 July 2018).
Almeida M and Mutina B (2011) Defacements statistics 2010, zone-h, 6
January. Available at: www.zone-h.org/news/id/4737 (accessed 16 July
2018).
Alphabet (2018) Alphabet Inc. Form 10-K for the Fiscal Year Ended
December 31, 2017. Available at:
https://abc.xyz/investor/pdf/20171231_alphabet_10K.pdf (accessed 30
July 2018).
Alseth B (2010) Sexting and the law: Press send to turn teenagers into
registered sex offenders, ACLU Washington, 24 September. Available at:
www.aclu-wa.org/blog/sexting-and-law-press-send-turn-teenagers-registered-sex-offenders (accessed 16 August 2018).
Altheide DL (2018) Capitalism, hacking, and digital media. In A Scribano,
FT Lopez, and ME Korstanje (eds) Neoliberalism in a Multi-Disciplinary
Perspective. Basingstoke, UK: Palgrave Macmillan, pp. 203–227.
Anderson C (2012) Makers: The New Industrial Revolution. New York,
NY: Crown Publishing.
Anderson KB (2013) Consumer Fraud in the United States, 2011: The Third
FTC Survey. Washington, DC: US Federal Trade Commission.
Anderson N (2009) France passes harsh anti-P2P three-strikes law (again),
Ars Technica, 15 September. Available at: http://arstechnica.com/tech-policy/2009/09/france-passes-harsh-anti-p2p-three-strikes-law-again/
(accessed 16 July 2018).
Andy (2013) Making money from movie streaming sites, an insider’s story,
TorrentFreak, 19 October. Available at: https://torrentfreak.com/making-money-from-movie-streaming-sites-an-insiders-story-131019/ (accessed
1 March 2018).
Andy (2018) Pirate site admin sentenced to two years prison & €83.6
million damages, TorrentFreak, 21 February. Available at:
https://torrentfreak.com/pirate-site-admin-sentenced-to-two-years-prison-e83-6-million-damages-180221/ (accessed 16 July 2018).
Angwin J and Stecklow S (2010) ‘Scrapers’ dig deep for data on Web, The
Wall Street Journal, 12 October. Available at:
http://online.wsj.com/article/SB10001424052748703358504575544381288117888.html (accessed 16 July 2018).
Anthony S (2013) DRM deceit: SimCity doesn’t need to be always online,
says Maxis developer, ExtremeTech, 13 March. Available at:
www.extremetech.com/gaming/150598-drm-deceit-simcity-doesnt-need-to-be-always-online-says-maxis-developer (accessed 21 February 2018).
APWG (Anti-Phishing Working Group) (2017) Phishing Activity Trends
Report 1st Half 2017. Available at:
http://docs.apwg.org/reports/apwg_trends_report_h1_2017.pdf (accessed
25 April 2018).
Arthur C (2012) ACTA goes too far, says MEP, The Guardian, 1 February.
Available at: www.guardian.co.uk/technology/2012/feb/01/acta-goes-too-far-kader-arif (accessed 16 July 2018).
ASACP (Association of Sites Advocating Child Protection) (2018) About
ASACP. Available at: www.asacp.org/index.php?content=aboutus
(accessed 23 May 2018).
AVG Technologies (2011) Cybercrime Futures. Available at:
www.avg.com.au/files/media/Future%20Poll_Cybercrime_Futures_FIN
AL_2011–09–16.pdf (accessed 16 July 2018).
Awan I (2016) Islamophobia on social media: A qualitative analysis of the
Facebook’s walls of hate. International Journal of Cyber Criminology
10(1): 1–20.
Babchishin KM, Hanson RK and VanZuylen H (2015) Online child
pornography offenders are different: A meta-analysis of the
characteristics of online and offline sex offenders against children.
Archives of Sexual Behavior 44(1): 45–66.
Bachmann M (2008) What Makes Them Click? Applying the Rational
Choice Perspective to the Hacking Underground. PhD Thesis, University
of Central Florida. Available at: http://stars.library.ucf.edu/etd/3790/
(accessed 16 July 2018).
Bachmann M (2010) The risk propensity and rationality of computer
hackers. International Journal of Cyber Criminology 4(1/2): 643–656.
Bachmann M (2011) Deciphering the hacker underground: First quantitative
insights. In TJ Holt and BH Schell (eds) Corporate Hacking and
Technology-Driven Crime: Social Dynamics and Implications. Hershey,
PA: IGI Global.
Backlash (2010) Dangerous Cartoons Act. Available at:
www.backlash.org.uk/the-law/dangerous-cartoons-act/ (accessed 16 July
2018).
Bakhshi T, Papadaki M and Furnell S (2009) Social engineering: Assessing
vulnerabilities in practice. Information Management & Computer
Security 17(1): 53–63.
Ball J (2012) The Guardian’s open 20: Fighters for internet freedom. The
Guardian, 20 April. Available at:
www.guardian.co.uk/technology/2012/apr/20/twenty-fighters-open-internet (accessed 16 July 2018).
Ball J, Borger J and Greenwald G (2013) Revealed: How US and UK spy
agencies defeat internet privacy and security. The Guardian, 6
September. Available at: www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security (accessed 25 July 2018).
Bandura A (1977) Social Learning Theory. Englewood Cliffs, NJ: Prentice-Hall.
Banks J (2018) Radical criminology and the techno-security-capitalist
complex. In KF Steinmetz and MR Nobles (eds) Technocrime and
Criminological Theory. New York, NY: Routledge.
Barak G (2012) Theft of a Nation: Wall Street Looting and Federal
Regulatory Colluding. Lanham, MD: Rowman & Littlefield.
Barak G, Leighton P and Flavin J (2010) Class, Race, Gender, & Crime (3rd
edn). Lanham, MD: Rowman & Littlefield.
Barrett B (2017) The little black box that took over piracy. Wired, 27
October. Available at: www.wired.com/story/kodi-box-piracy/ (accessed
28 February 2018).
Baum K (2006) Identity Theft, 2004. Report, Washington, DC: US
Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/it04.pdf (accessed 29 November 2017).
Baum K (2007) Identity Theft, 2005. Report, Washington, DC: US
Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/it05.pdf (accessed 29 November 2017).
Bayley D and Shearing C (1996) The future of policing. Law and Society
Review 30(3): 585–606.
BBC News (2001a) Code red cost tops $1.2bn, BBC News, 1 August.
Available at: news.bbc.co.uk/2/hi/business/1468986.stm (accessed 16
July 2018).
BBC News (2001b) ‘Mafiaboy’ hacker jailed, BBC News, 13 September.
Available at: news.bbc.co.uk/2/hi/science/nature/1541252.stm (accessed
16 July 2018).
BBC News (2004) UK and US to act on web sex sites, BBC News, 9 March.
Available at: news.bbc.co.uk/2/hi/uk_news/3545147.stm (accessed 16
July 2018).
BBC News (2008a) Wikipedia child image censored, BBC News, 8
December. Available at: news.bbc.co.uk/1/hi/uk/7770456.stm (accessed
16 July 2018).
BBC News (2008b) IWF backs down in Wiki censorship, BBC News, 9
December. Available at: news.bbc.co.uk/1/hi/technology/7774102.stm
(accessed 16 July 2018).
BBC News (2012) ACTA: Controversial anti-piracy agreement rejected by
EU, BBC News, 4 July. Available at: www.bbc.co.uk/news/technology-18704192 (accessed 16 July 2018).
BBC News (2018) Lauri Love case: Hacking suspect wins extradition
appeal, BBC News, 5 February. Available at: www.bbc.com/news/uk-england-42946540 (accessed 27 June 2018).
Becker GS (1968) Crime and punishment: An economic approach. Journal
of Political Economy 76(2): 169–217.
Becker H (1963) Outsiders: Studies in the Sociology of Deviance. New
York: Free Press.
Beckett A (2009) The dark side of the Internet, The Guardian, 26
November. Available at:
www.guardian.co.uk/technology/2009/nov/26/dark-side-internet-freenet
(accessed 16 July 2018).
Belknap J (2015) The Invisible Woman: Gender, Crime, and Justice (4th
edn). Stamford, CT: Cengage Learning.
Bennett D (2001) Pornography-dot-com: Eroticising privacy on the
Internet. The Review of Education/Pedagogy/Cultural Studies 23: 381–
391.
Bennion F (2003) The UK Sexual Offences Bill: A Victorian spinster’s
view of sex. Common Law 12(1): 61–66.
Bently L and Sherman B (2001) Intellectual Property Law. Oxford: Oxford
University Press.
Benzmüller R (2017) Malware numbers of the first half of 2017, G Data
Security Blog, 24 August. Available at:
www.gdatasoftware.com/blog/2017/07/29905-malware-zahlen-des-ersten-halbjahrs-2017 (accessed 11 January 2018).
Bequai A (1999) Cyber-crime: The US experience. Computers and Security
18(1): 16–18.
Beresford A (2003) Securities/Investment Fraud. Washington, DC: National
White Collar Crime Centre.
Berg B (2009) Qualitative Research Methods for the Social Sciences (7th
edn). Boston, MA: Allyn & Bacon.
Bergmann J (1969) The original confidence man. American Quarterly
21(3): 560–577.
Bernard TJ, Snipes JB and Gerould AL (2016) Vold’s Theoretical
Criminology (7th edn). New York, NY: Oxford University Press.
Berr J (2017) ‘WannaCry’ ransomware attack losses could reach $4 billion.
CBSNews, 16 May. Available at: www.cbsnews.com/news/wannacry-ransomware-attacks-wannacry-virus-losses/ (accessed 11 January 2018).
Best J (1999) Random Violence: How We Talk about New Crimes and New
Victims. Berkeley, CA: University of California Press.
Best J (2003) European cybercrime squad gets green light. ZDNet News, 24
November. Available at: www.zdnet.com/article/european-cybercrime-squad-gets-green-light/ (accessed 16 July 2018).
Biegel S (2003) Beyond Our Control? Confronting the Limits of Our Legal
System in the Age of Cyberspace. Cambridge, MA: MIT Press.
Bissias G, Levine B, Liberatore M, Lynn B, Moore J, Wallach H and Wolak
J (2016) Characterization of contact offenders and child exploitation
material trafficking on five peer-to-peer networks. Child Abuse &
Neglect 52: 185–199.
Black PJ, Wollis M, Woodworth M and Hancock JT (2015) A linguistic
analysis of grooming strategies of online child sex offenders:
Implications for our understanding of predatory sexual behavior in an
increasingly computer-mediated world. Child Abuse & Neglect 44: 140–
149.
Bleich E (2003) Race Politics in Britain and France: Ideas and
Policymaking since 1960. Cambridge: Cambridge University Press.
Blevins KR and Holt TJ (2009) Examining the virtual subculture of johns.
Journal of Contemporary Ethnography 38(5): 619–648.
Boase J and Wellman B (2001) A plague of viruses: Biological, computer
and marketing. Current Sociology 49(6): 39–55.
Bocij P (2003) Victims of cyberstalking: An exploratory study of
harassment perpetrated via the Internet. First Monday 8(10). Available at:
www.ojphi.org/ojs/index.php/fm/article/view/1086/1006 (accessed 16
July 2018).
Bocij P and McFarlane L (2002) Cyberstalking: Genuine problem or public
hysteria? Prison Service Journal 140: 32–35.
Bocij P, Griffiths M and McFarlane L (2002) Cyberstalking: A new
challenge for criminal justice. The Criminal Lawyer 122: 3–5.
Boellstorff T (2010) Coming of Age in Second Life: An Anthropologist
Explores the Virtually Human. Princeton, NJ: Princeton University Press.
Boes S and Leukfeldt ER (2016) Fighting cybercrime: A joint effort. In RM
Clark and S Hakim (eds) Cyber-Physical Security. Berlin, Germany:
Springer, pp. 185–203.
Bogard W (1996) The Simulation of Surveillance: Hypercontrol in
Telematic Societies. Cambridge: Cambridge University Press.
Bollier D (2003) Preserving the Commons in the information order. In
WSIS World Information: Knowledge of Future Culture. Vienna: Institut
für Neue Kulturtechnologien.
Boman JH and Freng A (2018) Differential association theory, social
learning theory, and technocrime. In KF Steinmetz and MR Nobles (eds)
Technocrime and Criminological Theory. New York, NY: Routledge.
Bond E and Tyrrell K (2018) Understanding revenge pornography: A
national survey of police officers and staff in England and Wales. Journal
of Interpersonal Violence, Online First. DOI:
https://doi.org/10.1177/0886260518760011.
Bonger W (1916) Criminality and Economic Conditions. Boston, MA:
Little, Brown.
Bossler AM and Holt TJ (2012) Patrol officers’ perceived role in
responding to cybercrime. Policing: An International Journal 35(1): 165–
181.
Bowcott O (2017) Lauri Love would be at high risk of killing himself in
US, court told. The Guardian, 29 November. Available at:
www.theguardian.com/uk-news/2017/nov/29/lauri-love-high-risk-killing-himself-alleged-us-government-hacking-extradition (accessed 12 January
2018).
Bowker A (1999) Juveniles and computers: Should we be concerned?
Federal Probation: A Journal of Correctional Philosophy and Practice
63(2): 40–43.
Bowling B and Foster J (2002) Policing and the police. In M Maguire, R
Morgan and R Reiner (eds) The Oxford Handbook of Criminology (3rd
edn). Oxford: Oxford University Press.
Boyd A (2016) Hacker pleads guilty in first case of cyber terrorism.
Military Times, 15 June. Available at:
www.militarytimes.com/2016/06/15/hacker-pleads-guilty-in-first-case-of-cyber-terrorism/ (accessed 30 January 2018).
Boyd J (2002) In Community we trust: Online security communication at
eBay. Journal of Computer-Mediated Communication 7(3). Available at:
https://doi.org/10.1111/j.1083-6101.2002.tb00147.x (accessed 16 July
2018).
BPI (British Phonographic Industry) (2002) Market Information 184.
London: BPI.
BPI (British Phonographic Industry) (2012) Reducing online copyright
infringement. Available at:
https://web.archive.org/web/20120712071913/www.bpi.co.uk/our-work/policy-and-lobbying/article/second-article.aspx (accessed 16 July
2018).
Bradley T (2018) Security experts weigh in on massive data breach of 150
million. Forbes, 30 March. Available at:
www.forbes.com/sites/tonybradley/2018/03/30/security-experts-weigh-in-on-massive-data-breach-of-150-million-myfitnesspal-accounts/#3e2522463bba (accessed 25 July 2018).
Brady PQ, Randa R and Reyns BW (2016) From WWII to the World Wide
Web: A research note on social changes, online ‘places,’ and a new
online activity ratio for routine activities theory. Journal of
Contemporary Criminal Justice 32(2): 129–147.
Brand S and Herron M (1985) ‘Keep designing’: How the information
economy is being created and shaped by the hacker ethic. Whole Earth
Review 46: 44–52.
Brangwin N (2016) Cybersecurity. Report, Canberra, Australia: Parliament
of Australia. Available at:
www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliam
entary_Library/pubs/rp/BudgetReview201617/Cybersecurity (accessed
20 July 2018).
Brayne S (2017) Big data surveillance: The case of policing. American
Sociological Review 82(5): 977–1008.
Brenner S (2010) Recent developments in US internet law. In Y Jewkes and
M Yar (eds) Handbook of Internet Crime. Cullompton: Willan.
Brenner SW (2016) Cyberthreats and the Decline of the Nation-state. New
York, NY: Routledge.
Brewster M (2003) Power and control dynamics in prestalking and stalking
situations. Journal of Family Violence 18(4): 207–217.
Brook C (2015) Author behind ransomware Tox calls it quits, sells
platform. ThreatPost, 4 June. Available at: https://threatpost.com/author-behind-ransomware-tox-calls-it-quits-sells-platform/113151/ (accessed
12 January 2018).
Brown A and Barrett D (2002) Knowledge of Evil: Child Prostitution and
Child Sexual Abuse in Twentieth Century England.
Cullompton/Portland, OR: Willan.
Brown B (2015) Sony BMG Rootkit Scandal: 10 Years Later.
NetworkWorld, 28 October. Available at:
www.networkworld.com/article/2998251/malware-cybercrime/sony-bmg-rootkit-scandal-10-years-later.html (accessed 21 February 2018).
Brugger W (2002) The treatment of hate speech in German constitutional
law. German Law Journal 40(1). Available at:
https://static1.squarespace.com/static/56330ad3e4b0733dcc0c8495/t/56b
936b5ab48def04c00afda/1454978742214/GLJ_Vol_04_No_01_Brugger.
pdf (accessed 7 July 2018).
Brunner E and Sutter M (2009) International CIIP Handbook 2008/9.
Zurich: Swiss Federal Institute of Technology.
Bryce J (2010) Online sexual exploitation of children and young people. In
Y Jewkes and M Yar (eds) Handbook of Internet Crime. Cullompton:
Willan.
BSA (Business Software Alliance) (2005) Fact Sheet: The Cyber Frontier
and Children. Available at:
https://web.archive.org/web/20060329085409/http://www.playitcybersafe.com/pdfs/cyberfrontier-children.pdf (accessed 16 July 2018).
BSA (Business Software Alliance) (2011) Nearly Half the World’s PC
Users Acquire Software Illegally Most or all the Time, BSA Reports.
Available at: www.bsa.org/news-and-events/news/news-archive/2011/en-09072011-ipsos?sc_lang=en (accessed 16 July 2018).
BSA (Business Software Alliance) (2016) Seizing Opportunity Through
License Compliance: BSA Global Software Survey, May 2016. Available
at: http://globalstudy.bsa.org/2016/downloads/studies/BSA_GSS_US.pdf
(accessed 20 February 2018).
Budd T, Sharp C and Mayhew P (2005) Offending in England and Wales:
First Results from the 2003 Crime and Justice Survey, Home Office
Research Study 275. London: Home Office.
Bunz M (2003) Remember: Mimesis is a form of creativity. In World
Information: Knowledge of Future Culture. Vienna: Institut für Neue
Kulturtechnologien.
Burbidge S (2005) The governance deficit: Reflections on the future of
public and private policing in Canada. Canadian Journal of Criminology
and Criminal Justice 47(1): 63–86.
Burgess M and Clark L (2018) The UK wants to block online porn. Here’s
what we know. Wired, 8 May. Available at:
www.wired.co.uk/article/porn-block-ban-in-the-uk-age-verifcation-law
(accessed 22 May 2018).
Burgess RL and Akers RL (1966) A differential association-reinforcement
theory of criminal behaviour. Social Problems 14(2): 128–147.
Burgess-Proctor A (2006) Intersections of race, class, gender, and crime:
Future directions for feminist criminology. Feminist Criminology 1(1):
27–47.
Burke A, Sowerbutts S, Blundell B and Sherry M (2002) Child
pornography and the Internet: Policing and treatment issues. Psychiatry,
Psychology and Law 9(1): 79–84.
Burn A and Willett R (2003) ‘What Exactly is a Paedophile?’: Children
Talking About Internet Risk. Available at:
http://sites.uclouvain.be/rec/index.php/rec/article/viewFile/4761/4491
(accessed 16 July 2018).
Burruss GW, Bossler AM and Holt TJ (2012) Assessing the mediation of a
fuller social learning model on low self-control’s influence on software
piracy. Crime & Delinquency 59(8): 1157–1184.
Bush GW (2003) Securing the Homeland: Strengthening the Nation.
Available at:
www.dhs.gov/sites/default/files/publications/homeland_security_book.pdf (accessed 14 February 2018).
Button M and Cross C (2017) Cyberfrauds, Scams and their Victims. New
York, NY: Routledge.
Buxbaum P (2002) Cyberterrorism: A Sorry Excuse for Government
Intervention. Available at:
https://web.archive.org/web/20070208121024/http://searchcio.techtarget.
com/originalContent/0,289142,sid19_gci866647,00.html (accessed 16
July 2018).
CA Goldberg (2018) State Revenge Porn Laws. Available at:
www.cagoldberglaw.com/states-with-revenge-porn-laws/ (accessed 1
June 2018).
Cabinet Office (2002) Identity Fraud: A Study. London: Cabinet Office.
Cadwalladr C and Graham-Harrison E (2018) Revealed: 50 million
Facebook profiles harvested for Cambridge Analytica in major data
breach. The Guardian, 17 March. Available at:
www.theguardian.com/news/2018/mar/17/cambridge-analyticafacebook-influence-us-election (accessed 26 July 2018).
Campbell D (1988) Somebody’s listening. New Statesman, 12 August: 10–
12.
Campbell D (2015) Global spy system ECHELON confirmed at last – by
leaked Snowden files. The Register, 3 August. Available at:
www.theregister.co.uk/2015/08/03/gchq_duncan_campbell/ (accessed 31
July 2018).
Capeller W (2001) Not such a neat net: Some comments on virtual
criminality. Social and Legal Studies 10: 229–242.
Carter D (1997) ‘Digital democracy’ or ‘information aristocracy’?
Economic regeneration and the information economy. In B Loader (ed.)
The Governance of Cyberspace: Politics, Technology and Global
Restructuring. London: Routledge.
Carugati A (2003) Interview with MPAA President Jack Valenti,
Worldscreen.com, February. Available at:
http://worldscreen.com/interviewscurrent.php?filename=203valenti.txt
(accessed 16 July 2018).
Cassell J and Cramer M (2008) High tech or high risk: Moral panics about
girls online. In T McPherson (ed.) Digital Youth, Innovation, and the
Unexpected. Cambridge, MA: MIT Press.
Castells M (1996) The Rise of the Network Society. New York, NY:
Blackwell.
Castells M (1998) The Information Age: Economy, Society and Culture:
Vol. 3, End of Millennium. Malden, MA: Blackwell.
Castells M (2002) The Internet Galaxy: Reflections on the Internet,
Business, and Society. Oxford: Oxford University Press.
Castleman M (2016) Evidence mounts: More porn, LESS sexual assault.
Psychology Today, 14 January. Available at:
www.psychologytoday.com/us/blog/all-about-sex/201601/evidence-mounts-more-porn-less-sexual-assault (accessed 3 May 2018).
Catalano S (2012) Stalking victims in the United States – Revised.
Washington, DC: US Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/svus_rev.pdf (accessed 29 November
2017).
Caulfield P (2011) Reddit dumps controversial ‘jailbait’ section after user
posts child porn. Daily News, 12 October. Available at:
www.nydailynews.com/news/reddit-dumps-jailbait-section-kiddie-porn-post-article-1.962693 (accessed 24 May 2018).
CBR (Computer Business Review) (2018) Lithuania leads seven EU
countries in forming a cybersecurity response team. CBR Online, 28
June. Available at: www.cbronline.com/news/eu-cybersecurity (accessed
25 July 2018).
CBS Chicago (2014) Family Accused of Selling Stolen Goods for $4m on eBay.
Ebay. CBS Chicago, 5 March. Available at:
http://chicago.cbslocal.com/2014/03/05/family-accused-of-selling-stolen-goods-for-4m-on-ebay/ (accessed 25 April 2018).
CBS San Francisco (2016) Anonymous Hacks Pro-ISIS Twitter Accounts,
Infusing them with Gay Pride, Porn. CBS San Francisco, 15 June.
Available at: http://sanfrancisco.cbslocal.com/2016/06/15/anonymous-hacks-pro-isis-twitter-accounts-infusing-them-with-gay-pride-porn/
(accessed 23 January 2018).
CCIPS (Computer Crime and Intellectual Property Section) (2010)
Prosecuting Computer Crimes. Washington, DC: Office of Legal
Education. Available at: www.justice.gov/sites/default/files/criminal-ccips/legacy/2015/01/14/ccmanual.pdf (accessed 12 January 2018).
CEBR (Centre for Economics and Business Research) (2002) Counting
Counterfeits: Defining a Method to Collect, Analyse and Compare Data
on Counterfeiting and Piracy in the Single Market, Final report for the
European Commission Directorate-General Single Market. London:
CEBR.
CEOP (Child Exploitation and Online Protection Centre) (2011) Hundreds
of Suspects Tracked in International Child Abuse Investigation. CEOP,
16 March. Available at:
web.archive.org/web/20150920003526/http://ceop.police.uk/Media-Centre/Press-releases/2011/HUNDREDS-OF-SUSPECTS-TRACKED-IN-INTERNATIONAL-CHILD-ABUSE-INVESTIGATION/ (accessed
16 July 2018).
CERT/CC (2002) CERT Advisory CA-2001-23 Continued Threat of the
‘Code Red’ Worm. CERT, 17 January. Available at:
http://seclists.org/cert/2001/17 (accessed 16 July 2018).
Cha A (2005) Police Find that on eBay Some Items are a Real Steal. Duluth
News Tribune.com, 8 January. Available at:
https://web.archive.org/web/20050204192229/http://www.duluthsuperior.com:80/mld/duluthsuperior/10597328.htm (accessed 16 July 2018).
Charlesworth A (2000) The governance of the internet in Europe. In Y
Akdeniz, C Walker and D Wall (eds) The Internet, Law and Society.
Harlow: Longman.
Chase M, Mulvenon J and Hachigian N (2006) Comrade to comrade
networks: The social and political implications of peer-to-peer networks
in China. In J Damm and S Thomas (eds) Chinese Cyberspaces:
Technological Changes and Political Effects. New York, NY: Routledge,
pp. 57–89.
Chatterjee B (2012) Fighting child pornography through UK encryption
law: A powerful weapon in the law’s armoury. Child & Family Law
Quarterly 24: 410–427.
Cheshire T (2018) Porn giant MindGeek denies planning to snoop on UK
viewers. Skynews, 27 January. Available at:
https://news.sky.com/story/porn-watchers-will-have-to-prove-they-are-over-18-under-new-laws-11224093 (accessed 16 July 2018).
Chism, KA and Steinmetz KF (2018) Technocrime and strain theory. In KF
Steinmetz and MR Nobles (eds) Technocrime and Criminological
Theory. New York, NY: Routledge.
Choi H, Park J and Jung Y (2018) The role of privacy fatigue in online
privacy behavior. Computers in Human Behavior 81: 42–51.
Chokoshvili D (2011) The Role of the Internet in Democratic Transition:
Case Study of the Arab Spring. Budapest: Central European University.
Available at: www.etd.ceu.hu/2011/chokoshvili_davit.pdf (accessed 16 July 2018).
Choudhary K (2012) Image steganography and global terrorism.
International Journal of Scientific & Engineering Research 3(4): 1–12.
Churchill D (2016) Security and visions of the criminal: Technology,
professional criminality and social change in Victorian and Edwardian
Britain. British Journal of Criminology 56: 857–876.
Cimpanu C (2017) Student hacks high school, changes grades, and sends
college applications. BleepingComputer, 3 December. Available at:
www.bleepingcomputer.com/news/security/student-hacks-high-school-changes-grades-and-sends-college-applications/ (accessed 11 January 2018).
Citron D and Norton H (2011) Intermediaries and hate speech: Fostering
digital citizenship for our information age. Boston University Law
Review 91: 1–18.
Clark L (2017) The National Crime Agency is sending hackers to rehab.
Wired, 26 July. Available at: www.wired.co.uk/article/national-crime-
agency-prevent-uk-hackers-jake-davis (accessed 1 August 2018).
Clarke RA and Knake RK (2010) Cyber War: The Next Threat to National
Security and What to do about It. New York, NY: Ecco.
Clarke RV and Cornish DB (1985) Modeling offenders’ decisions: A
framework for research and policy. Crime and Justice 4: 147–185.
Clough B and Mungo P (1992) Approaching Zero: Data Crime in the
Computer Underworld. London: Faber & Faber.
Cloward R and Ohlin L (1961) Delinquency and Opportunity: A Theory of
Delinquent Gangs. London: Routledge.
Cnet (2005) Man charged with selling dinosaur fossils on eBay. Cnet, 27
March. Available at: www.cnet.com/news/man-charged-with-selling-dinosaur-fossils-on-ebay/ (accessed 23 July 2018).
Coaston J (2018) The Alt-Right is going on trial in Charlottesville. Vox, 8
March. Available at: www.vox.com/2018/3/8/17071832/alt-right-racists-charlottesville (accessed 22 May 2018).
Cobain I (2018) UK has six months to rewrite snooper’s charter, high court
rules. The Guardian, 27 April. Available at:
www.theguardian.com/technology/2018/apr/27/snoopers-charter-investigatory-powers-act-rewrite-high-court-rules (accessed 27 July
2018).
CoE (Council of Europe) (2001) Convention on Cybercrime. Available at:
http://conventions.coe.int/Treaty/EN/Treaties/html/185.htm (accessed 16
July 2018).
CoE (Council of Europe) (2012) Convention on Cybercrime. Available at:
http://conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=185&CL=ENG.
CoE (Council of Europe) (2018) Chart of Signatures and Ratifications of
Treaty 185. Available at: www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185/signatures?p_auth=GuBHMUgq (accessed
29 May 2018).
Cohen A (1955) Delinquent Boys: The Culture of the Gang. New York:
Free Press.
Cohen AK (1993) The Social Functions of Crime. Available at:
www.asc41.com/Photos/Cohen_Albert_withPoem.html (accessed 15
August 2018).
Cohen L and Felson M (1979) Social change and crime rate trends: A
routine activity approach. American Sociological Review 44: 588–608.
Cohen S (1972) Folk Devils and Moral Panics. London: MacGibbon.
Cole S (2017) AI-assisted fake porn is here and we’re all fucked.
Motherboard, 11 December. Available at:
https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn (accessed 29 May 2018).
Cole S (2018) People are using AI to create fake porn of their friends and
classmates. Motherboard, 26 January. Available at:
https://motherboard.vice.com/en_us/article/ev5eba/ai-fake-porn-of-friends-deepfakes (accessed 25 May 2018).
Coleman C (2003) Securing cyberspace – new laws and developing
strategies. Computer Law and Security Report 19(2): 131–136.
Coleman C and Moynihan J (1996) Understanding Crime Data.
Buckingham: Open University Press.
Coleman GE (2012) Phreaks, hackers, and trolls. In M Mandiberg (ed.) The
Social Media Reader. New York, NY: NYU Press, pp. 99–119.
Coleman GE (2014) Hacker, Hoaxer, Whistleblower, Spy: The Many Faces
of Anonymous. New York, NY: Verso.
Coleman GE (2017) From internet farming to weapons of the geek. Current
Anthropology 58(S15): S91–S102.
Coleman R and McCahill M (2010) Surveillance and Crime. London: Sage.
Collins M, Theis M, Trzeciak R, Strozer J, Clark J, Costa D, Cassidy T,
Albrethsen M and Moore A (2016) Common Sense Guide to Mitigating
Insider Threats (5th edn). Pittsburgh, PA: Software Engineering Institute.
Comey JB and Yates SQ (2015) Going dark: Encryption, technology, and
the balances between public safety and privacy. FBI.gov, 8 July.
Available at: www.fbi.gov/services/operational-technology/going-dark
(accessed 25 July 2018).
Commander X (2016) Behind the Mask: An Inside Look at Anonymous.
Los Gatos, CA: SmashWords.
Connell R (1987) Gender and Power. Cambridge: Polity Press.
Conway M (2002) What is cyberterrorism? Current History 101(659): 436–
442.
Cooper J and Harrison DM (2001) The social organization of audio piracy
on the Internet. Media, Culture & Society 23: 71–89.
Copes H and Vieraitis L (2012) Identity Thieves: Motives and Methods.
Boston, MA: Northeastern University Press.
Coriou L (2002) The Internet on Probation. Available at:
www.cdt.org/files/security/usapatriot/020900rwb.pdf (accessed 16 July
2018).
Cornish DB and Clarke RV (1986) Introduction. In DB Cornish and RV
Clarke (eds) The Reasoning Criminal: Rational Choice Perspectives on
Offending. New York, NY: Springer-Verlag.
Cornish DB and Clarke RV (1987) Understanding crime displacement: An
application of rational choice theory. Criminology 25(4): 933–947.
Couts A (2011) Anonymous attacks BART for cell phone censorship.
Digital Trends, 15 August. Available at:
www.digitaltrends.com/mobile/anonymous-attacks-bart-for-cell-phone-censorship/ (accessed 29 January 2018).
Cox J (2016) The FBI’s ‘unprecedented’ hacking campaign targeted over a
thousand computers. Motherboard, 5 January. Available at:
https://motherboard.vice.com/en_us/article/qkj8vv/the-fbis-unprecedented-hacking-campaign-targeted-over-a-thousand-computers
(accessed 4 June 2018).
Cox J (2017) DOJ, FBI executives approved running a child porn site.
Motherboard. Available at:
https://motherboard.vice.com/en_us/article/bjg9j4/doj-fbi-child-pornography-sting-playpen-court-transcripts (accessed 19 November
2018).
Cox J (2018) Inside the private forums where men illegally trade upskirt
photos. Motherboard, 8 May. Available at:
https://motherboard.vice.com/en_us/article/gykxvm/upskirt-creepshot-site-the-candid-forum (accessed 16 July 2018).
Cox J and Hoppenstedt M (2018) Sources: Facebook has fired multiple
employees for snooping on users. Motherboard, 2 May. Available at:
https://motherboard.vice.com/en_us/article/bjp9zv/facebook-employees-look-at-user-data (accessed 4 June 2018).
CPS (Crown Prosecution Service) (2017) Rape and Sexual Offences – Chapter 2: Sexual Offences Act 2003. Available at:
www.cps.gov.uk/legal-guidance/rape-and-sexual-offences-chapter-2-sexual-offences-act-2003-principal-offences-and (accessed 22 July
2018).
CPS (Crown Prosecution Service) (2018) Stalking and Harassment.
Available at: www.cps.gov.uk/legal-guidance/stalking-and-harassment
(accessed 11 June 2018).
Craven J (1998) Extremism and the internet (EMAIN). The Journal of
Information, Law and Technology. Available at:
https://warwick.ac.uk/fac/soc/law/elj/jilt/1998_1/craven/ (accessed 16
July 2018).
Crawford A (2006) Networked governance and the post-regulatory state?
Steering, rowing and anchoring the provision of policing and security.
Theoretical Criminology 10(4): 449–479.
Creswell JW (2014) Research Design: Qualitative, Quantitative, and Mixed
Methods Approaches (4th edn). Thousand Oaks, CA: Sage.
Crilly R, Guly C and Molloy M (2018) What do we know about Alek
Minassian, arrested after Toronto van attack. The Telegraph, 25 April.
Available at: www.telegraph.co.uk/news/2018/04/24/do-know-alekminassian-arrested-toronto-van-attack/ (accessed 22 May 2018).
Criminal Justice and Public Order Act (1994) c. 33, Part XIII, Section 161.
Available at: www.legislation.gov.uk/ukpga/1994/33/section/161/1996-03-31 (accessed 24 July 2018).
Critcher C (2003) Moral Panics and the Media. Buckingham: Open
University Press.
Crozier B (1974) A Theory of Conflict. London: Hamish Hamilton.
CSI (Computer Security Institute) (2011) 2010/2011 Computer Crime and
Security Survey. San Francisco, CA: Computer Security Institute.
CSI/FBI (2003) CSI/FBI Computer Crime and Security Survey. San
Francisco, CA: Computer Security Institute.
Cullen FT, Pratt TC, Miceli SL and Moon MM (2002) Dangerous liaison?
Rational choice theory as the basis for correctional intervention. In AR
Piquero and SG Tibbetts (eds) Rational Choice and Criminal Behavior:
Recent Research and Future Challenges. New York, NY: Routledge.
Curran J (2010) Reinterpreting internet history. In Y Jewkes and M Yar
(eds) Handbook of Internet Crime. New York, NY: Willan Publishing.
Curry S (2005) Online Auctions: The Bizarre Bazaar. Available at:
www.scambusters.org/onlineauctions.pdf (accessed 16 July 2018).
D’Ovidio R and Doyle J (2003) A study on cyberstalking. FBI Law
Enforcement Bulletin 72(3): 10–17.
Daniels J (2009) Cyber Racism: White Supremacy Online and the New
Attack on Civil Rights. Lanham, MD: Rowman & Littlefield.
Daniels J (2013) Race and racism in Internet studies: A review and critique.
New Media & Society 15(5): 695–719.
Datoo S (2013) France drops controversial ‘Hadopi Law’ after spending
millions. The Guardian, 9 July. Available at:
www.theguardian.com/technology/2013/jul/09/france-hadopi-law-anti-piracy (accessed 26 February 2018).
Davidson J (2016) ISIS threatens feds, military after theft of personal data.
The Washington Post, 31 January. Available at:
www.washingtonpost.com/news/federal-eye/wp/2016/01/31/isis-threatens-feds-military-after-theft-of-personal-data/?
utm_term=.5541211a93e9 (accessed 30 January 2018).
Davis M (1990) City of Quartz: Excavating the Future of Los Angeles.
London: Verso.
De Young M (2004) The Day Care Ritual Abuse Moral Panic. Jefferson,
NC: McFarland.
Décary-Hétu D, Morselli C and Leman-Langlois S (2012) Welcome to the
scene: A study of social organization and recognition among warez
hackers. Journal of Research in Crime and Delinquency 49: 359–382.
DEF CON (2018) Frequently Asked Questions About DEF CON. Available
at: www.defcon.org/html/links/dc-faq/dc-faq.html (accessed 12 January
2018).
Deibert R, Palfrey J, Rohozinski R and Zittrain J (eds) (2007) Access
Denied: The Practice and Policy of Global Internet Filtering. Cambridge,
MA: MIT Press.
Deirmenjian J (2000) Hate crimes on the Internet. Journal of Forensic
Sciences 45(5): 1020–1022.
DeKeseredy WS (2016) Pornography and violence against women. In CA
Cuevas and CM Rennison (eds) The Wiley Handbook on the Psychology
of Violence. Malden, MA: Wiley-Blackwell, pp. 501–516.
Deleuze G (1995) Postscript on the societies of control. In G Deleuze
Negotiations. New York: Columbia University Press.
Delgado R (1982) Words that wound: A tort action for racial insults,
epithets, and name-calling. Harvard Civil Rights-Civil Liberties Law
Review 17: 133–181.
Delgado R and Stefancic J (2004) Understanding Words That Wound.
Boulder, CO: Westview Press.
Delgado R and Stefancic J (2009) Four observations about hate speech.
Wake Forest Law Review 44: 353–370.
Delio M (2001) Why worm writer surrendered. Wired Magazine, 14
February. Available at: www.wired.com/2001/02/why-worm-writer-surrendered/ (accessed 16 July 2018).
DeMarco J (2001) It’s not just fun and ‘war games’ – juveniles and
computer crime. US Attorney’s Bulletin 49: 48–54.
Denning D (1991) Hacker ethics. In Proceedings of the 13th National
Conference on Computing and Values. New Haven, CT: Research Centre
on Computing and Society.
Denning D (1999) Information Warfare and Security. New York: Addison-Wesley.
Denning D (2000a) Cyberterrorism. Available at:
http://palmer.wellesley.edu/~ivolic/pdf/Classes/Handouts/NumberTheoryHandouts/Cyberterror-Denning.pdf (accessed 16 July 2018).
Denning D (2000b) Cyberterrorism, testimony before the Special Oversight
Panel on Terrorism Committee on Armed Services US House of
Representatives, 23 May. Available at: www.stealth-iss.com/documents/pdf/CYBERTERRORISM.pdf (accessed 16 July
2018).
Denning D (2010) Terror’s web: How the Internet is transforming terrorism.
In Y Jewkes and M Yar (eds) Handbook of Internet Crime. Cullompton:
Willan.
Denning D and Baugh, Jr. WE (1997) Encryption and Evolving
Technologies: Tools of Organized Crime and Terrorism, Working Group
on Organized Crime. Washington DC: National Strategy Information
Center.
Denning D and Baugh W (2000) Hiding crimes in cyberspace. In D Thomas
and B Loader (eds) Cybercrime: Law Enforcement, Security and
Surveillance in the Information Age. London: Routledge.
DFCSC (Digital Forensics and Cybersecurity Center) (2018a) RedLight –
Pornography Scanner. South Kingstown, RI: The University of Rhode
Island. Available at: https://dfcsc.uri.edu/research/redLight (accessed 13
July 2018).
DFCSC (Digital Forensics and Cybersecurity Center) (2018b) Video
Previewer. South Kingstown, RI: The University of Rhode Island.
Available at: https://dfcsc.uri.edu/research/videoPreviewer (accessed 13
July 2018).
DHS (Department of Homeland Security) (2018a) Budget-in-Brief: Fiscal
Year 2019. Available at:
www.dhs.gov/sites/default/files/publications/DHS%20FY19%20BIB.pdf
(accessed 14 February 2018).
DHS (Department of Homeland Security) (2018b) FY 2018 Budget in
Brief. Available at:
www.dhs.gov/sites/default/files/publications/DHS%20FY18%20BIB%20Final.pdf (accessed 30 January 2018).
Digital Economy Act (2017) Available at:
www.legislation.gov.uk/ukpga/2017/30/part/3/enacted (accessed 22 May
2018).
Diken B and Laustsen CB (2002) Zones of indistinction – security, terror,
and bare life. Space and Culture. Available at:
http://journals.sagepub.com/doi/abs/10.1177/1206331202005003009
(accessed 16 July 2018).
Dima B (2012) Top 5: Corporate losses due to hacking. Hot for Security, 17
May. Available at: www.hotforsecurity.com/blog/top-5-corporate-losses-due-to-hacking-1820.html (accessed 16 July 2018).
DiMarco H (2003) The electronic cloak: Secret sexual deviance in
cyberspace. In Y Jewkes (ed.) Dot.cons: Crime, Deviance and Identity on
the Internet. Cullompton: Willan.
Doctorow C (2008a) Content: Selected Essays on Technology, Creativity,
Copyright, and the Future of the Future. San Francisco, CA: Tachyon
Publishing.
Doctorow C (2008b) Little Brother [PDF version]. New York, NY: Tor.
Available at: https://craphound.com/littlebrother/Cory_Doctorow_-_Little_Brother.pdf (accessed 2 May 2018).
Doctorow C (2010) The real cost of free. The Guardian, 5 October.
Available at: www.theguardian.com/technology/blog/2010/oct/05/free-online-content-cory-doctorow (accessed 2 May 2018).
Doctorow C (2013) The Library: ‘It’s a Good Daddy–Daughter Afternoon’,
interview given at the American Library Association. Available at:
www.ilovelibraries.org/author-video/cory-doctorow-0 (accessed 2 May
2018).
Dolan K (2004) Internet auction fraud: The silent victims. Journal of
Economic Crime Management 2(1): 1–22.
Domhardt M, Münzer A, Fegert JM and Goldbeck L (2015) Resilience in
survivors of child sexual abuse: A systematic review of the literature.
Trauma, Violence, & Abuse 16(4): 476–493.
Donner CM, Marcum CD, Jennings WG, Higgins GE and Banfield J (2014)
Low self-control and cybercrime: Exploring the utility of the general
theory of crime beyond digital piracy. Computers in Human Behavior 34:
165–172.
Donoghue A (2003) Cyberterror: Clear and present danger or phantom
menace? ZDNet, 8 December. Available at:
www.zdnet.com/article/cyberterror-clear-and-present-danger-or-phantom-menace/ (accessed 16 July 2018).
Dowland P, Furnell S, Illingworth H and Reynolds P (1999) Computer
crime and abuse: A survey of public attitudes and awareness. Computers
and Security 18(8): 715–726.
Dr K (2004) Hacker’s Tales: Stories from the Electronic Front Line.
London: Carlton Publishing.
Drahos P and Braithwaite J (2002) Information Feudalism: Who Owns the
Knowledge Economy? London: Earthscan.
Dreßing H, Bailer J, Anders A, Wagner H and Gallas C (2014)
Cyberstalking in a large sample of social network users: Prevalence,
characteristics, and impact upon victims. Cyberpsychology, Behavior,
and Social Networking 17(2): 61–67.
Dressing H, Anders A, Gallas C and Bailer J (2011) Cyberstalking:
Prevalence and impact on victims. Psychiatrische Praxis 38(7): 336–341.
Drew JM and Cross C (2013) Fraud and its PREY: Conceptualising social
engineering tactics and its impact on financial literacy outcomes. Journal
of Financial Services Marketing 18(3): 188–198.
Dreyfuss E (2018) Facebook’s fight against fake news keeps raising
questions. Wired, 20 August. Available at:
www.wired.com/story/facebook-fight-against-fake-news-keeps-raising-questions/ (accessed 15 August 2018).
Drezner DW (2004) The global governance of the Internet: Bringing the
state back in. Political Science Quarterly 119(3): 477–498.
Dunn M and Wigert I (2004) International CIIP Handbook: An Inventory
and Analysis of Protection Policies in Fourteen Countries. Zurich: Swiss
Federal Institute of Technology.
Dupont B (2016) The polycentric governance of cybercrime: The
fragmented networks of international cooperation. Cultures & Conflits
(2): 95–120.
Duranske B (2007) Second Life pornography allegations draw international
press attention. Virtually Blind, 9 May. Available at:
http://virtuallyblind.com/2007/05/09/ageplay-second-life-worldwide/
(accessed 16 July 2018).
Durham M (2009) The Lolita Effect: The Media Sexualization of Young
Girls and What We Can Do About It. London: Duckworth.
Dworkin G and Taylor RD (1989) Blackstone’s Guide to the Copyright,
Designs and Patents Act 1988. London: Blackstone.
eBay (2017) Annual Report 2017. Available at:
http://files.shareholder.com/downloads/ebay/6171987215x0x977008/81EA0A42-11FF-4D30-860B-18D1A5DC3C5D/2017_eBay_AnnualReport.pdf (accessed 25 April 2018).
eBay (2018) eBay Inc. Reports Fourth Quarter and Full Year 2017 Results.
Available at:
http://files.shareholder.com/downloads/ebay/6222900089x0x970140/C1C54245-B6C1-4344-8428-5251DADBBDB7/EBAY_News_2018_1_31_Earnings.pdf (accessed 25 April 2018).
Economy EC (2018) The great firewall of China: Xi Jinping’s internet
shutdown. The Guardian, 29 June. Available at:
www.theguardian.com/news/2018/jun/29/the-great-firewall-of-china-xi-jinpings-internet-shutdown (accessed 25 July 2018).
ECPAT (End Child Prostitution, Child Pornography and Trafficking of
Children for Sexual Purposes) (1997) Child Pornography: An
International Perspective. Available at: http://www.crime-research.org/articles/536/ (accessed 20 November 2018).
Edwards L, Rauhofer J and Yar M (2010) Recent developments in UK
cybercrime law. In Y Jewkes and M Yar (eds) Handbook of Internet
Crime. Cullompton: Willan.
EFA (Electronic Frontiers Australia) (2002) Internet Censorship: Law and
Policy Around the World. Available at:
www.efa.org.au/Issues/Censor/cens3.html#sau (accessed 16 July 2018).
EFF (Electronic Frontier Foundation) (2001) Online Censorship and Free
Expression. Available at:
https://web.archive.org/web/20061208175154/www.eff.org//Censorship/?f=gottesman_lords_video_case.article (accessed 16 July 2018).
EFF (Electronic Frontier Foundation) (2008) RIAA v. The People: Five
Years Later. Available at: www.eff.org/files/eff-riaa-whitepaper.pdf
(accessed 23 May 2016).
EFF (Electronic Frontier Foundation) (2018) Coders’ Rights Project
Vulnerability Reporting FAQ. Available at:
www.eff.org/issues/coders/vulnerability-reporting-faq (accessed 14
August 2018).
Einstadter WJ and Henry S (2006) Criminological Theory: An Analysis of
its Underlying Assumptions (2nd edn). Lanham, MD: Rowman &
Littlefield.
Elliott J and Meyer T (2013) Claim on 'attacks thwarted' by NSA spread
despite lack of evidence. ProPublica, 23 October. Available at:
www.propublica.org/article/claim-on-attacks-thwarted-by-nsa-spread-despite-lack-of-evidence (accessed 25 July 2018).
Ellis R, Fantz A, Karimi F and McLaughlin EC (2016) Orlando shooting:
49 killed, shooter pledged ISIS allegiance. CNN, 13 June. Available at:
www.cnn.com/2016/06/12/us/orlando-nightclub-shooting/index.html
(accessed 23 January 2018).
Embar-Seddon A (2002) Cyberterrorism: Are we under siege? American
Behavioral Scientist 45(6): 1033–1043.
Emerson R, Ferris K and Brooks-Gardner C (1998) On being stalked.
Social Problems 45(3): 289–314.
ENISA (European Network and Information Security Agency) (2012)
ENISA Budget 2012. Available at: www.enisa.europa.eu/about-enisa/accounting-finance/files/enisa-budget-2012 (accessed 16 July 2018).
ENISA (European Network and Information Security Agency) (2018)
Accounting & Finance. Available at: www.enisa.europa.eu/about-enisa/accounting-finance (accessed 3 January 2018).
Enos L (2000) Yahoo! sued for auctioning counterfeit goods. E-Commerce
Times, 29 March. Available at:
www.ecommercetimes.com/story/2849.html (accessed 16 July 2018).
EP (European Parliament) (2001) Report on the Existence of a Global
System for the Interception of Private and Commercial Communications
(ECHELON Interception System), Final, A5–0264/2001, 11 July.
Brussels: European Parliament.
EPIC (Electronic Privacy Information Center) (1997) Faulty Filters: How
Content Filters Block Access to Kid-friendly Information on the Internet.
Washington, DC: EPIC. Available at: www.epic.org/reports/filter_report.html
(accessed 16 July 2018).
EPIC (Electronic Privacy Information Center) (2007) Privacy and Human
Rights: An International Survey of Privacy Laws and Developments.
Washington, DC: EPIC.
Epstein R (2004) Liberty Versus Property? Cracks in the Foundations of
Copyright Law, John M. Olin Law and Economics Working Paper No.
204. Available at:
https://web.archive.org/web/20051025122042/www.law.uchicago.edu/Lawecon/index.html (accessed 17 July 2018).
Ernesto (2016) Kickass Torrents enters the dark web, adds official Tor
address. TorrentFreak, 7 June. Available at:
https://torrentfreak.com/kickasstorrents-enters-the-dark-web-adds-official-tor-address-160607/ (accessed 26 February 2018).
Escolar I (2003) Please pirate my songs. In WSIS World Information:
Knowledge of Future Culture. Vienna: Institut für Neue
Kulturtechnologien.
Esen R (2002) Cyber crime: A growing problem. Journal of Criminal Law
66(3): 269–283.
Eubanks V (2018) Automating Inequality: How High-Tech Tools Profile,
Police, and Punish the Poor. New York, NY: St. Martin’s Press.
European Commission (1998) Combating Counterfeiting and Piracy in the
Single Market, Green Paper 2COM(98)569 Final. Brussels: EC.
Europol (2017) Internet Organised Crime Threat Assessment. Available at:
www.europol.europa.eu/sites/default/files/documents/iocta2017.pdf
(accessed 25 May 2018).
Europol (2018) Four EU Cybersecurity Organisations Enhance
Cooperation. Available at: www.europol.europa.eu/newsroom/news/four-eu-cybersecurity-organisations-enhance-cooperation (accessed 17 July
2018).
Evans K (2004) The significance of virtual communities. Social Issues 2(1):
1474–2918.
Ex parte Thompson PD-1371-13 (2014) Available at:
https://caselaw.findlaw.com/tx-court-of-criminal-appeals/1678295.html
(accessed 4 June 2018).
Experian (2002) Internet Fraud: A Growing Threat to Online Retailers.
London: Experian.
Exum ML (2002) The application and robustness of the rational choice
perspective in the study of intoxicated and angry intentions to aggress.
Criminology 40(4): 933–966.
Facebook (2018) Facebook Reports Fourth Quarter and Full Year 2017
Results. Available at:
https://s21.q4cdn.com/399680738/files/doc_financials/2017/Q4/Q4-2017-Press-Release.pdf (accessed 23 July 2018).
FACT (Federation Against Copyright Theft) (2008) FACT Seizure Figures
and Analysis. Available at:
https://web.archive.org/web/20111224004323/http://www.fact-uk.org.uk:80/site/media_centre/dvd_seiz_0405.htm (accessed 17 July
2018).
Fafinski S, Dutton WH and Margetts H (2010) Mapping and Measuring
Cybercrime. Oxford: Oxford University Internet Institute.
FAS.org (2003) Statement for the Record of Robert S. Mueller, III,
Director, Federal Bureau of Investigation, on War on Terrorism Before
The Select Committee on Intelligence of The United States Senate,
Washington, DC. Available at:
https://fas.org/irp/congress/2003_hr/021103mueller.html (accessed 1 January 2018).
FBI (Federal Bureau of Investigation) (2007) Symbols and Logos Used by
Pedophiles to Identify Sexual Preferences. Available at:
https://file.wikileaks.org/file/FBI-pedophile-symbols.pdf (accessed 24
May 2018).
Feather M (1999) Internet and Child Victimisation, paper presented at the
Children and Crime: Victims and Offenders conference, Brisbane,
Australia, 17–18 June.
Felson M (1998) Crime and Everyday Life (2nd edn). Thousand Oaks, CA:
Pine Forge Press.
Fenlon W (2016) PC Piracy Survey results: 35 percent of PC gamers pirate.
PC Gamer, 26 August. Available at: www.pcgamer.com/pc-piracy-survey-results-35-percent-of-pc-gamers-pirate/ (accessed 20 February
2018).
Ferrell J (1993) Crimes of Style: Urban Graffiti and the Politics of
Criminality. Boston, MA: Northeastern University Press.
Finer C and Nellis M (eds) (1998) Crime and Social Exclusion. London:
Blackwell.
FIPR (Foundation for Information Policy Research) (1999) Unprecedented
Safeguards for Unprecedented Capabilities. Available at:
www.fipr.org/publications/hoover.pdf (accessed 17 July 2018).
Fischer-Hübner S (2000) Privacy and security at risk in the global
information society. In D Thomas and B Loader (eds) Cybercrime: Law
Enforcement, Security and Surveillance in the Information Age. London:
Routledge.
Fitch C (2004) Crime and Punishment: The Psychology of Hacking in the
New Millennium. Bethesda, MD: SANS Institute.
Fitzgerald V (2003) Global Financial Information, Compliance Incentives
and Terrorist Funding. Available at:
https://web.archive.org/web/20040711234949/http://users.ox.ac.uk:80/%7Entwoods/Fitzgerald.pdf (accessed 17 July 2018).
Fitzpatrick A (2012) Police arrest 25 anonymous hackers in four countries.
Mashable, 28 February. Available at:
http://mashable.com/2012/02/28/anonymous-arrest/ (accessed 17 July
2018).
Fiveash K (2012) Virgin media site goes tits-up in Pirate Bay payback
attack. The Register, 9 May. Available at:
www.theregister.co.uk/2012/05/09/virgin_media_website_anonymous/
(accessed 17 July 2018).
Flahardy C (2004) Tiffany and Co. cracks down on eBay counterfeiters.
Corporate Legal Times 14(154): 1–2.
Flavin J (2004) Feminism for the mainstream criminologist: An
invitation… In BR Price and NJ Sokoloff (eds) The Criminal Justice
System and Women: Offenders, Prisoners, Victims, & Workers (3rd edn).
New York, NY: McGraw-Hill.
Fleming P and Stohl M (2000) Myths and Realities of Cyberterrorism,
paper presented at the International Conference on Countering Terrorism
Through Enhanced International Cooperation, 22–24 September,
Courmayeur, Italy. Available at:
www.comm.ucsb.edu/faculty/mstohl/Myths%20and%20Realities%20of%20Cyberterrorism.pdf (accessed 17 July 2018).
Flinders K (2016) UK government re-announces £1.9bn cyber security
spend. ComputerWeekly.com, 1 November. Available at:
www.computerweekly.com/news/450402098/UK-government-re-announces-19bn-cyber-security-spend (accessed 20 July 2018).
Forde P and Patterson A (1998) Paedophile internet activity. Trends and
Issues in Criminal Justice 97. Canberra: Australian Institute of
Criminology.
Foucault M ([1977] 1991) Discipline and Punish: The Birth of the Prison.
Harmondsworth: Penguin.
Franceschi-Bicchierai L (2015) The hacktivist group released a long list of
names and social media profiles of alleged racists. Motherboard, 5
November. Available at:
https://motherboard.vice.com/en_us/article/kb7eyv/anonymous-hackers-officially-dox-hundreds-of-alleged-kkk-members (accessed 24 January
2018).
Franceschi-Bicchierai L (2016) A hacker made 'thousands' of internet-connected printers spit out racist flyers. Motherboard, 27 March.
Available at: https://motherboard.vice.com/en_us/article/gv5ykw/hacker-weev-made-thousands-of-internet-connected-printers-spit-out-racist-flyers (accessed 16 May 2018).
Franzen C (2012) The OPEN Act introduced: Can it kill SOPA and PIPA?
Talking Points Memo, 19 January. Available at:
http://idealab.talkingpointsmemo.com/2012/01/the-open-act-introduced-can-it-kill-sopa-and-pipa.php (accessed 17 July 2018).
FRD (Federal Research Division) (2014) Federal Government Strategic
Sourcing of Information Products and Services, Report. Washington, DC:
Library of Congress. Available at:
www.loc.gov/flicc/publications/FRD/Strategic-Sourcing_2014Q4_Final[1].pdf (accessed 30 July 2018).
Fream A and Skinner W (1997) Social learning theory analysis of computer
crime among college students. Journal of Research in Crime and
Delinquency 34(4): 495–518.
Freedom House (2017) Freedom on the Net 2017: Saudi Arabia Country
Profile. Available at: https://freedomhouse.org/report/freedom-net/2017/saudi-arabia (accessed 3 May 2017).
Fried R (2001) Cyber scam artists: A new kind of .con. Available at:
https://web.archive.org/web/20170809045518/http://www.crime-scene-investigator.net/CyberScam.pdf (accessed 17 July 2018).
Frith S and Marshall L (2004) Music and Copyright. Edinburgh: Edinburgh
University Press.
Fritz J (1995) A proposal for mental health provisions in state anti-stalking
law. Journal of Psychiatry and Law Summer: 295–318.
Frontier Economics (2017) The Economic Impacts of Counterfeiting and
Piracy. Available at:
https://cdn.iccwbo.org/content/uploads/sites/3/2017/02/ICC-BASCAP-Frontier-report-2016.pdf (accessed 20 February 2018).
FTC (Federal Trade Commission) (2015) Complying with COPPA:
Frequently Asked Questions. Available at: www.ftc.gov/tipsadvice/business-center/guidance/complying-coppa-frequently-askedquestions (accessed 31 July 2018).
Fuchs C (2013) The Anonymous movement in the context of liberalism and
socialism. Interface: A Journal for and About Social Movements 5(2):
345–376.
Furedi F (2001) Paranoid Parenting. London: Penguin/Alan Lane.
Furedi F (2005) Culture of Fear: Risk Taking and the Morality of Low
Expectation (rev. edn). London: Continuum.
Furnell S (2002) Cybercrime: Vandalizing the Information Society. London:
Addison-Wesley.
Furnell S (2005) Computer Insecurity: Risking the System. New York:
Springer.
Fyfe N (2001) Geography of Crime and Policing. Oxford: Blackwell.
G Data Security Labs (2015) G Data Security Labs Malware Report.
Available at:
https://public.gdatasoftware.com/Presse/Publikationen/Malware_Reports/G_DATA_PCMWR_H1_2015_EN.pdf (accessed 26 October 2017).
G Data (2016) G Data Mobile Malware Report: Threat Report: H1/2016.
Available at:
https://file.gdatasoftware.com/web/en/documents/whitepaper/G_DATA_Mobile_Malware_Report_H1_2016_EN.pdf (accessed 11 January 2018).
Gallagher S (2012) High orbits and slowlorises: Understanding the
anonymous attack tools. Ars Technica, 16 February. Available at:
http://arstechnica.com/business/2012/02/high-orbits-and-slowlorises-understanding-the-anonymous-attack-tools/2/ (accessed 17 July 2018).
Gallagher S (2018) Equifax breach exposed millions of driver’s licenses,
phone numbers, emails. ArsTechnica, 8 May. Available at:
https://arstechnica.com/information-technology/2018/05/equifax-breach-exposed-millions-of-drivers-licenses-phone-numbers-emails/ (accessed 23 July 2018).
GAO (United States Government Accountability Office) (2017) Critical
Infrastructure Protection: DHS Risk Assessments Inform Owner and
Operator Protection Efforts and Departmental Strategic Planning.
Available at: www.gao.gov/assets/690/688028.pdf (accessed 30 January
2018).
Gandy O (1993) The Panoptic Sort: A Political Economy of Personal
Information. Boulder, CO: Westview Press.
Gardham D (2011) Terrorists are harnessing hi-tech communications,
government warns. The Telegraph, 12 July. Available at:
www.telegraph.co.uk/news/uknews/terrorism-in-the-uk/8633311/Terrorists-are-harnessing-hi-tech-communications-government-warns.html (accessed 17 July 2018).
Gartner (2016) Gartner Says Worldwide Security Software Market Grew
3.7 Percent in 2015. Available at:
www.gartner.com/newsroom/id/3377618 (accessed 26 July 2018).
Gartner (2017a) Gartner Forecasts Worldwide Security Spending Will
Reach $96 Billion in 2018, up 8 Percent from 2017. Available at:
www.gartner.com/newsroom/id/3836563 (accessed 18 July 2018).
Gartner (2017b) Gartner Says 8.4 Billion Connected 'Things' Will be in
Use in 2017, up 31 Percent from 2016. Available at:
www.gartner.com/newsroom/id/3598917 (accessed 15 December 2017).
Gavish B and Tucci C (2008) Reducing internet auction fraud.
Communications of the ACM 51(5): 89–97.
Geis F (2000) On the absence of self-control as the basis for a general
theory of crime: A critique. Theoretical Criminology 4(1): 35–53.
Gelman R and McCandlish S (1998) Protecting Yourself Online: An
Electronic Frontier Foundation Guide. New York: Harper Collins.
General Statutes § 14-190.5A (2015) Available at:
www.ncleg.net/Sessions/2015/Bills/House/PDF/H792v6.pdf (accessed 1
June 2018).
Gerrits R (1992) Terrorists’ perspectives: Memoirs. In D Paletz and A
Schmid (eds) Terrorism and the Media. Newbury Park, CA: Sage.
Gewirtz-Meydan A, Walsh W, Wolak J and Finkelhor D (2018) The
complex experience of child pornography survivors. Child Abuse &
Neglect 80: 238–248.
Geuss M (2015) Hunter Moore of 'IsAnybodyUp' notoriety sentenced to 30
months in prison. Ars Technica, 3 December. Available at:
https://arstechnica.com/tech-policy/2015/12/hunter-moore-of-isanybodyup-notoriety-sentenced-to-30-months-in-prison/ (accessed 1
June 2018).
Gibbs J (1975) Crime, Punishment, and Deterrence. New York, NY:
Elsevier Scientific.
Gibson W (1984) Neuromancer. London: HarperCollins.
Gibson W (2013) William Gibson: Live from the NYPL [Q&A Session],
The New York Public Library. Available at: www.youtube.com/watch?v=ae3z7Oe3XF4 (accessed 12 December 2017).
Gillmor D (2004) We the Media. London: O’Reilly.
Glenny M (2011) Darkmarket: Cyberthieves, Cybercops, and You. New
York, NY: Alfred A. Knopf.
Goffman E (1969) The Presentation of Self in Everyday Life.
Harmondsworth: Penguin.
Goldsmith A and Brewer R (2015) Digital drift and the criminal interaction
order. Theoretical Criminology 19(1): 112–130.
Goode E and Ben-Yehuda N (1994) Moral Panics: The Social Construction
of Deviance. Oxford: Blackwell.
Goode M (1995) Stalking: Crime of the ‘90’s? Criminal Law Journal 19(1):
1–10.
Goodin D (2011) Insulin pump hack delivers fatal dosage over the air. The
Register, 27 October. Available at:
www.theregister.co.uk/2011/10/27/fatal_insulin_pump_attack/ (accessed
14 August 2018).
Google (2018) Google Transparency Report. Available at:
www.google.com/transparencyreport/removals/copyright/ (accessed 28
June 2018).
Gopal R, Bhattacharjee S and Sanders GL (2006) Do artists benefit from
online music sharing? Journal of Business 79: 1503–1534.
Gordon G, Willox N, Rebovich D, Regan T and Gordon J (2004) Identity
fraud: A critical national and global threat. Journal of Economic Crime
Management 2(1): 3–47.
Gordon S and Ford R (2003) Cyberterrorism? Cupertino, CA: Symantec.
Gottfredson M and Hirschi T (1990) A General Theory of Crime. Palo Alto,
CA: Stanford University Press.
Gov.uk (2018) Data Protection. Available at: www.gov.uk/data-protection
(accessed 24 July 2018).
Graber-Stiehl I (2018) Science’s Pirate Queen. The Verge, 8 February.
Available at: www.theverge.com/2018/2/8/16985666/alexandra-elbakyan-sci-hub-open-access-science-papers-lawsuit (accessed 20
February 2018).
Grabosky P (2001) Virtual criminality: Old wine in new bottles? Social and
Legal Studies 10: 243–249.
Grabosky P (2003) Cyberterrorism. Available at:
https://web.archive.org/web/20031209025720/www.alrc.gov.au/reform/summaries/82.htm (accessed 18 July 2018).
Grabosky P and Smith R (2001) Telecommunication fraud in the digital
age: The convergence of technologies. In D Wall (ed.), Crime and the
Internet. London: Routledge.
Graff GM (2017) How a dorm room Minecraft scam brought down the
Internet. Wired, 13 December. Available at: www.wired.com/story/mirai-botnet-minecraft-scam-brought-down-the-internet/ (accessed 18
December 2017).
Graham W (2000) Uncovering and eliminating child pornography rings on
the internet: Issues regarding and avenues facilitating law enforcement’s
access to ‘Wonderland’. Law Review of Michigan State University –
Detroit College of Law 2: 457–484.
Grauer Y (2018) A practical guide to microchip implants. Ars Technica, 3
January. Available at: https://arstechnica.com/features/2018/01/a-practical-guide-to-microchip-implants/ (accessed 14 August 2018).
Grazioli S and Jarvenpaa S (2000) Perils of internet fraud: An empirical
investigation of deception and trust with experienced internet consumers.
IEEE Transactions on Systems, Man, and Cybernetics 30(4): 395–410.
Green J (2001) The myth of cyberterrorism: There are many ways terrorists
can kill you – computers aren’t one of them. The Washington Quarterly
Online. Available at:
https://web.archive.org/web/20030206231732/www.washingtonmonthly.com/features/2001/0211.green.html (accessed 18 July 2018).
Greenberg A (2014) Hacker lexicon: What is the dark web? Wired
Magazine, 19 November. Available at: www.wired.com/2014/11/hacker-lexicon-whats-dark-web/ (accessed 6 December 2017).
Greengard S (2015) The Internet of Things. Cambridge, MA: The MIT
Press.
Greening T (1996) Ask and ye shall receive: A study in ‘social
engineering’. ACM SIGSAC Review 14(2): 8–14.
Greenwald G (2013) XKeyscore: NSA tool collects ‘nearly everything a
user does on the internet’. The Guardian, 31 July. Available at:
www.theguardian.com/world/2013/jul/31/nsa-top-secret-program-online-data (accessed 25 July 2018).
Greenwald G and MacAskill E (2013) NSA Prism program taps in to user
data of Apple, Google and others. The Guardian, 7 June. Available at:
www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data
(accessed 25 July 2018).
Grierson J (2018) Website linked to cyber-attacks against UK banks is shut
down. The Guardian, 25 April. Available at:
www.theguardian.com/technology/2018/apr/25/webstresser-org-website-cyber-attacks-uk-banks-shut-down (accessed 27 June 2018).
Griffin A (2015) Anonymous group takes down ISIS website, replaces it
with a viagra ad along with message to calm down. Independent, 26
November. Available at: www.independent.co.uk/life-style/gadgets-and-tech/news/anonymous-group-takes-down-isis-website-replaces-it-with-viagra-ad-and-message-to-calm-down-a6749486.html (accessed 23
January 2018).
Griffin A (2017) Equifax hack: Huge scale of cyber attack means that
many people will have had personal details stolen without knowing.
Independent, 8 September. Available at:
http://www.independent.co.uk/life-style/gadgets-and-tech/news/equifax-hack-credit-card-safety-security-am-i-part-of-it-in-what-to-do-latest-millions-a7935831.html (accessed 16 January 2018).
Groome I (2018) Criminal act? What is upskirting, what is the law around
the photos and will it become a criminal offence. The Sun, 22 March.
Available at: www.thesun.co.uk/news/5623843/upskirting-meaning-definition-photos-criminal-offence-uk/ (accessed 4 June 2018).
Grossman W (2001) From Anarchy to Power: The Net Comes of Age. New
York, NY: New York University Press.
Grow B (2004) Software. Business Week 21: 84.
The Guardian (2004) Sasser worm hits up to 1m computers. Available at:
www.theguardian.com/technology/2004/may/04/security.business
(accessed 23 July 2018).
Guitton C (2012) Criminals and cyber attacks: The missing link between
attribution and deterrence. International Journal of Cyber Criminology
6(2): 1030–1043.
Gupta M, Kumar P and Rao HR (2011) Who is guarding the doors: Review
of authentication in e-banking. In R Sharman, SD Smith and M Gupta
(eds) Digital Identity and Access Management: Technologies and
Frameworks. Hershey, PA: IGI Global.
Ha A (2018) Timehop admits that additional personal data was
compromised in breach. TechCrunch, 11 July. Available at:
https://techcrunch.com/2018/07/11/timehop-data-breach/ (accessed 15
August 2018).
Hadnagy C (2011) Social Engineering: The Art of Human Hacking.
Indianapolis, IN: Wiley.
Hadnagy C (2014) Unmasking the Social Engineer: The Human Element of
Security. Indianapolis, IN: Wiley.
Hafner K and Lyon M (1996) Where the Wizards Stay Up Late: The
Origins of the Internet. New York, NY: Simon & Schuster.
Hagan J and Petersen R (eds) (1994) Inequality and Crime. Stanford, CA:
Stanford University Press.
Haines L (2004) Ebay India boss cuffed in porn vid scandal. The Register,
20 December. Available at:
www.theregister.co.uk/2004/12/20/ebay_india_scandal/ (accessed 18
July 2018).
Halbert D (1997) Discourses of danger and the computer hacker.
Information Society 13: 361–374.
Hall C (2016) Infamous Lizard Squad attacks on Sony, Microsoft lead to
federal charges. Polygon, 7 October. Available at:
www.polygon.com/2016/10/7/13202218/infamous-lizard-squad-attacks-on-sony-microsoft-lead-to-federal-charges (accessed 23 January 2018).
Hall S, Critcher C, Jefferson T, Clarke J and Roberts B (1978) Policing the
Crisis: Mugging, the State, and Law and Order. London: Macmillan.
Halliday J (2011) David Cameron considers banning suspected rioters from
social media. The Guardian, 11 August. Available at:
www.guardian.co.uk/media/2011/aug/11/david-cameron-rioters-social-media (accessed 18 July 2018).
Halverson G (1996) As internet booms, so do hacker-proofing measures.
Christian Science Monitor 88(144): 9.
Hamblen M (1999) Clinton commits $1.46B to fight cyberterrorism. CNN,
26 January. Available at:
https://web.archive.org/web/20030206023421/http://www.cnn.com:80/TECH/computing/9901/26/clinton.idg/ (accessed 18 July 2018).
Hamm M (2004) The USA PATRIOT Act and the politics of fear. In J
Ferrell, K Hayward, W Morrison and M Presdee (eds) Critical
Criminology Unleashed. London: GlassHouse.
Hammond P (2016) Foreword. In HM Government National Cyber Security
Strategy. Available at:
www.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf (accessed 30 January
2018).
Hampson N (2012) Hacktivism: A new breed of protest in a networked
world. Boston College International and Comparative Law Review
35(2): 511–542.
Harding L (2004) Delhi schoolboy sparks global porn row. The Guardian,
21 December. Available at:
www.theguardian.com/technology/2004/dec/21/newmedia.schoolsworld
wide (accessed 18 July 2018).
Hargrave A and Livingstone S (2009) Harm and Offence in Media Content:
A Review of the Evidence (2nd edn). Bristol: Intellect Books.
Harper S and Yar M (2011) Loving violence? The ambiguities of BDSM
imagery in contemporary popular culture. In A Karatzogianni (ed.),
Violence and War in Culture and the Media: Five Disciplinary Lenses.
London: Routledge.
Harrell E (2015) Victims of Identity Theft, 2014. Washington, DC: US
Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/vit14.pdf (accessed 29 November 2017).
Harrell E and Langton L (2013) Victims of Identity Theft, 2012.
Washington, DC: US Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/vit12.pdf (accessed 29 November 2017).
Harring SL (1983) Policing a Class Society: The Experience of American
Cities, 1865–1915. New Brunswick, NJ: Rutgers University Press.
Harrison L (2000) Web filters force Beaver College to change name. The
Register, 23 November. Available at:
www.theregister.co.uk/2000/11/23/web_filters_force_beaver_college/
(accessed 18 July 2018).
Harvard Law Review (2018) Chapter 1: Cooperation or resistance?: The
role of tech companies in government surveillance. Harvard Law Review
131: 1722–1741. Available at: https://harvardlawreview.org/wp-content/uploads/2018/04/1722-1741_Online.pdf (accessed 25 July 2018).
Harvey D (1989) The Condition of Postmodernity. Oxford: Blackwell.
Hashizume K, Rosado DG, Fernández-Medina E and Fernandez EB (2013)
An analysis of security issues for cloud computing. Journal of Internet
Services and Applications 4(5): 1–13.
Hassan K (2012) Making sense of the Arab Spring: Listening to the voices
of Middle Eastern activists. Development 55(2): 232–238.
Hauser C (2018) $6.4 million judgment in revenge porn case is among
largest ever. The New York Times, 11 April. Available at:
www.nytimes.com/2018/04/11/us/revenge-porn-california.html (accessed
1 June 2018).
Hautala L (2018) California’s new data privacy law the toughest in the US.
CNet, 29 June. Available at: www.cnet.com/news/californias-new-data-privacy-law-the-toughest-in-the-us/ (accessed 24 July 2018).
Havocscope (2012) United States movie piracy losses. Available at:
https://web.archive.org/web/20110506003822/http://www.havocscope.com:80/united-states-movie-piracy-losses/ (accessed 18 July 2018).
Hawn M (1996) Fear of a hack planet: The strange metamorphosis of the
computer hacker. The Site, 15 July.
Hay C and Meldrum R (2010) Bullying victimization and adolescent self-harm: Testing hypotheses from General Strain Theory. Journal of Youth
and Adolescence 39: 446–459.
Hay C, Meldrum R and Mann K (2010) Traditional bullying, cyberbullying,
and deviance: A general strain theory approach. Journal of Contemporary
Criminal Justice 26(2): 130–147.
Hayward K (2013) ‘Life Stage Dissolution’ in Anglo-American advertising
and popular culture: Kidults, lil’ Britneys and middle youths. The
Sociological Review 61: 525–548.
Hayward K and Young J (2004) Cultural criminology: Some notes on the
script. Theoretical Criminology 8(3): 259–273.
Heim J (2017) Recounting a day of rage, hate, violence and death. The
Washington Post, 14 August. Available at:
www.washingtonpost.com/graphics/2017/local/charlottesville-timeline/?noredirect=on&utm_term=.d56bc9636fb5 (accessed 14 August 2018).
Heim J, Barrett D and Natanson H (2018) Man accused of driving into
crowd at Charlottesville ‘Unite the Right’ rally charged with federal hate
crimes. The Washington Post, 27 June. Available at:
www.washingtonpost.com/local/man-accused-of-driving-into-crowd-at-unite-the-right-rally-charged-with-federal-hate-crimes/2018/06/27/09cdce3a-7a20-11e8-80be-6d32e182a3bc_story.html?utm_term=.d63b342734b7 (accessed 15
August 2018).
Held A (2018) Michigan State University reaches $500 million settlement
with Nassar abuse victims. National Public Radio, 16 May. Available at:
www.npr.org/sections/thetwo-way/2018/05/16/611624047/michigan-state-university-reaches-500-million-settlement-with-nassar-abuse-victi
(accessed 21 May 2018).
Henning VH (2001) Anti-Terrorism, Crime and Security Act 2001: Has the
United Kingdom made a valid derogation from the European Convention
on Human Rights. American University International Law Review 17: 1263.
Henry S (1990) Degrees of Deviance: Student Accounts of their Deviant
Behavior. Salem, WI: Sheffield Publishing.
Henshaw M, Ogloff JRP and Clough JA (2017) Looking beyond the screen:
A critical review of the literature on the online child pornography
offender. Sexual Abuse 29(5): 416–445.
Hern A (2015) Lenovo website hacked and defaced by Lizard Squad in
Superfish protest. The Guardian, 26 February. Available at:
www.theguardian.com/technology/2015/feb/26/lenovo-website-hacked-and-defaced-by-lizard-squad-in-superfish-protest (accessed 11 January
2018).
Hern A (2018) Amazon site awash with counterfeit goods despite
crackdown. The Guardian, 27 April. Available at:
www.theguardian.com/technology/2018/apr/27/amazon-site-awash-with-counterfeit-goods-despite-crackdown (accessed 1 May 2018).
Herzog S (2011) Revisiting the Estonian cyber attacks: Digital threats and
multinational responses. Journal of Strategic Security 4(2): 49–60.
Hettinger E (1989) Justifying intellectual property. Philosophy and Public
Affairs 18(1): 31–52.
Higgins GE (2007) Digital piracy, self-control theory, and rational choice:
An examination of the role of value. International Journal of Cyber
Criminology 1(1): 33–55.
Higgins GE, Fell BD and Wils AL (2006) Digital piracy: Assessing the
contributions of an integrated self-control theory and social learning
theory using structural equation modelling. Criminal Justice Studies
19(1): 3–22.
Higgins GE, Wolfe SE and Marcum CD (2008) Music piracy and
neutralization: A preliminary trajectory analysis from short-term
longitudinal data. International Journal of Cyber Criminology 2(2): 324–
336.
Himanen P (2001) The Hacker Ethic: A Radical Approach to the
Philosophy of Business. New York, NY: Random House, Inc.
Hinduja S (2004a) Perceptions of local and state law enforcement
concerning the role of computer crime investigative teams. Policing: An
International Journal of Police Strategies & Management 27: 341–357.
Hinduja S (2004b) Theory and policy in online privacy. Knowledge,
Technology, and Policy 17(1): 38–58.
Hinduja S (2006) Music Piracy and Crime Theory. New York: LFB
Scholarly Publishing.
Hinduja S (2007) Neutralization theory and online software piracy: An
empirical analysis. Ethics and Information Technology 9: 187–204.
Hinduja S (2012) General strain, self-control, and music piracy.
International Journal of Cyber Criminology 6(1): 951–967.
Hinduja S and Patchin JW (2015) State Sexting Laws: A Brief Review of
State Sexting and Revenge Porn Laws and Policies. Cyberbullying
Research Center. Available at: https://cyberbullying.org/state-sexting-laws.pdf (accessed 21 May 2018).
Hinduja S and Schafer J (2009) US cybercrime units on the World Wide
Web. Policing: An International Journal of Police Strategies and
Management 32(2): 278–296.
Hine C (2000) Virtual Ethnography. Thousand Oaks, CA: Sage.
Hinkley JA and LeBlanc B (2017) Ex-USA gymnastics doctor Larry Nassar
sentenced to 60 years in federal child pornography case. IndyStar, 7
December. Available at:
www.indystar.com/story/news/crime/2017/12/07/ex-usa-gymnastics-doctor-larry-nassar-sentenced-60-years-federal-child-pornography-case/932660001/ (accessed 21 May 2018).
Hirschi T (1969) Causes of Delinquency. Berkeley, CA: University of
California Press.
Hirschi T (2004) Self-control and crime. In RG Baumeister and KD Vohs
(eds) Handbook of Self-Regulation: Research, Theory, and Applications.
New York, NY: Guilford.
Hitchcock J (2003) Cyberstalking and law enforcement. Police Chief
Magazine 70(12). Available at:
www.theiacp.org/Portals/0/pdfs/responsetovictims/RESOURCES_DOCUMENTS/TS_Links/20_Spring2003_CriticalResponse.pdf (accessed 18
July 2018).
Hoffman-Kuroda L (2017) Free speech and the Alt-Right. Qui Parle:
Critical Humanities and Social Sciences 26(2): 369–382.
Hollin C (2002) Criminological psychology. In M Maguire, R Morgan and
R Reiner (eds), The Oxford Handbook of Criminology (3rd edn).
Oxford: Oxford University Press.
Hollinger RC and Lanza-Kaduce L (1988) The process of criminalization:
The case of computer crime laws. Criminology 26(1): 101–126.
Holmes B (2003) The Emperor’s Sword: Art under WIPO. In WSIS World
Information: Knowledge of Future Culture. Vienna: Institut für Neue
Kulturtechnologien.
Holt TJ (2007) Subcultural evolution? Examining the influence of on- and
off-line experiences on deviant subcultures. Deviant Behavior 28(2):
171–198.
Holt TJ (2017) On the value of honeypots to produce policy
recommendations. Criminology & Public Policy 16(3): 739–747.
Holt TJ and Bossler AM (2012) Predictors of patrol officer interest in
cybercrime training and investigation in selected United States police
departments. Cyberpsychology, Behavior, and Social Networking 15(9):
464–472.
Holt TJ and Bossler AM (2016) Cybercrime in Progress: Theory and
Prevention of Technology-Enabled Offenses. New York, NY: Routledge.
Holt TJ and Copes H (2010) Transferring subcultural knowledge on-line:
Practices and beliefs of persistent digital pirates. Deviant Behavior 31(7):
625–654.
Holt TJ, Burruss GW and Bossler AM (2010) Social learning and cyber-deviance: Examining the importance of a full social learning model in the
virtual world. Journal of Crime and Justice 33(2): 31–61.
Holt TJ, Bossler AM and May DC (2012) Low self-control, deviant peer
associations and juvenile cyberdeviance. American Journal of Criminal
Justice 37: 378–395.
Holt TJ, Burruss GW and Bossler AM (2015) Policing Cybercrime and
Cyberterror. Durham, NC: Carolina Academic Press.
Holt TJ, Brewer R and Goldsmith A. (2018) Digital drift and the ‘sense of
injustice’: Counter-productive policing of youth cybercrime. Deviant
Behavior, Online First. DOI: 10.1080/01639625.2018.1472927.
Home Office (2002) Home Office Annual Report 2001–2. London: HMSO.
Home Office (2011) Cyber Crime Strategy. London: HMSO.
Home Office (2015) Serious Crime Act 2015: Fact sheet: Overview of the
Act. Available at:
www.gov.uk/government/uploads/system/uploads/attachment_data/file/415943/Serious_Crime_Act_Overview.pdf (accessed 17 January 2018).
Hopwell L (2012) Hack exposes Google Wallet PIN. ZDNet, 9 February.
Available at: www.zdnet.com/hack-exposes-google-wallet-pin-1339331400/ (accessed 18 July 2018).
Houtepen JABM, Sijtsema JJ and Bogaerts S (2014) From child
pornography offending to child sexual abuse: A review of child
pornography offender characteristics and risks for cross-over. Aggression
and Violent Behavior 19: 466–473.
Huey L and Rosenberg R (2004) Watching the web: Thoughts on expanding
police surveillance opportunities under the cyber-crime convention.
Canadian Journal of Criminology and Criminal Justice 46(4): 597–606.
Huey L, Nhan J and Broll R (2012) ‘Uppity civilians’ and ‘cyber-vigilantes’: The role of the general public in policing cyber-crime.
Criminology & Criminal Justice 13(1): 81–97.
Hughes R (1993) Culture of Complaint: Fraying of America. Oxford:
Oxford University Press.
Hui K, Kim SH and Wang Q (2017) Cybercrime deterrence and
international legislation: Evidence from distributed denial of service
attacks. MIS Quarterly 41(2): 497–523.
Hunt A (1987) Moral panic and the moral language in the media. British
Journal of Sociology 48: 629–648.
Hutchings A (2013) Hacking and fraud: Qualitative analysis of online
offending and victimization. In K Jaishankar and N Ronel (eds) Global
Criminology: Crime and Victimization in a Globalized Era. Boca Raton,
FL: CRC Press.
Hwang J, Lee H, Kim K, Zo H and Ciganek AP (2016) Cyber neutralisation
and flaming. Behaviour & Information Technology 35(3): 210–224.
Hyde H (1964) A History of Pornography. New York: Farrar Straus Giroux.
Hyde S (1999) A few coppers change. The Journal of Information, Law and
Technology (JILT) 2. Available at:
https://warwick.ac.uk/fac/soc/law/elj/jilt/1999_2/hyde/ (accessed 18 July
2018).
ICANN (Internet Corporation for Assigned Names and Numbers) (2008)
Available at: www.icann.org (accessed 18 July 2018).
IC3 (Internet Crime Complaint Center) (2012) 2011 Internet Crime Report.
Washington, DC: NW3C. Available at:
https://pdf.ic3.gov/2011_IC3Report.pdf (accessed 18 July 2018).
IC3 (Internet Crime Complaint Center) (2015) 2014 Internet Crime Report.
Washington, DC: Federal Bureau of Investigation. Available at:
https://pdf.ic3.gov/2015_IC3Report.pdf (accessed 18 July 2018).
IC3 (Internet Crime Complaint Center) (2018) 2017 Internet Crime Report.
Washington, DC: Federal Bureau of Investigation. Available at:
https://pdf.ic3.gov/2017_IC3Report.pdf (accessed 7 July 2018).
ICMEC (International Centre for Missing & Exploited Children) (2016)
Child Pornography: Model Legislation & Global Review. Available at:
www.icmec.org/wp-content/uploads/2016/02/Child-Pornography-Model-Law-8th-Ed-Final-linked.pdf (accessed 23 May 2018).
IFPI (International Federation of the Phonographic Industry) (2011) IFPI
Digital Music Report 2011: Music at the Touch of a Button. London:
IFPI. Available at: www.ifpi.org/content/library/DMR2011.pdf (accessed
19 July 2018).
IFPI (International Federation of Phonographic Industries) (2015) Digital
Music Report 2015: Charting the Path to Sustainable Growth. London:
IFPI. Available at: www.ifpi.org/downloads/Digital-Music-Report-2015.pdf (accessed 20 February 2018).
IFPI (International Federation of Phonographic Industries) (2017) Global
Music Report 2017: Annual State of the Industry. London: IFPI.
Available at: www.ifpi.org/downloads/GMR2017.pdf (accessed 20
February 2018).
Imperva Incapsula (2016) Imperva Incapsula Bot Traffic Report. Available
at: www.incapsula.com/blog/bot-traffic-report-2016.html (accessed 17
January 2018).
INHOPE (2016) Facts, figures & trends, 2016. Available at:
http://www.inhope.org/Libraries/2016/Facts_and_figures_2016.sflb.ashx
(accessed 19 November 2018).
INTERPOL (2015) INTERPOL Cybercrime Capacity Building Project in
Latin America and the Caribbean. Available at:
www.interpol.int/INTERPOL-expertise/Multi-year-programmes/Cyber-Americas/INTERPOL-Cybercrime-Capacity-Building-Project-in-Latin-America-and-the-Caribbean/Assessment-visits (accessed 20 July 2018).
INTERPOL (2018) Towards a Global Indicator on Unidentified Victims in
Child Sexual Exploitation Material – Technical Report. Available at:
www.interpol.int/Media/Files/Crime-areas/Crimes-against-children/Technical-report-Towards-a-Global-Indicator-on-Unidentified-Victims-in-Child-Sexual-Exploitation-Material-February-2018 (accessed
23 May 2018).
IPO (Intellectual Property Office) (2016) Online Copyright Infringement
Tracker: Latest wave of research Mar 16 – May 16. Available at:
www.gov.uk/government/uploads/system/uploads/attachment_data/file/546223/OCI-tracker-6th-wave-March-May-2016.pdf (accessed 28 February 2018).
IPO (Intellectual Property Office) (2017) Online Copyright Infringement
Tracker. Available at:
www.gov.uk/government/uploads/system/uploads/attachment_data/file/628704/OCI_-tracker-7th-wave.pdf (accessed 20 February 2018).
Isikoff M (2015) NSA program stopped no terror attacks, says White House
panel member. NBCNews, 2 November. Available at:
www.nbcnews.com/news/world/nsa-program-stopped-no-terror-attacks-says-white-house-panel-flna2D11783588 (accessed 25 July 2018).
ITU (International Telecommunications Union) (2008) ITU Global
Cybersecurity Agenda: High Level Experts Group Global Strategic
Report. Geneva: ITU. Available at:
www.itu.int/en/action/cybersecurity/Documents/gca-chairman-report.pdf
(accessed 19 July 2018).
ITU (International Telecommunications Union) (2017) ICT Facts and
Figures 2017. Available at: www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2017.pdf (accessed 19 July
2018).
IWF (Internet Watch Foundation) (2004) Significant Trends 2003. London:
IWF.
IWF (Internet Watch Foundation) (2017) Annual Report 2017. Available at:
www.iwf.org.uk/sites/default/files/reports/2018-04/IWF%202017%20Annual%20Report%20for%20web_0.pdf (accessed 23 May 2018).
IWF (Internet Watch Foundation) (2018a) Our History. Available at:
www.iwf.org.uk/what-we-do/why-we-exist/our-history (accessed 20 July
2018).
IWF (Internet Watch Foundation) (2018b) Trends in Online Child Sexual
Exploitation: Examining the Distribution of Captures of Live-Streamed
Child Sexual Abuse. Available at:
www.iwf.org.uk/sites/default/files/inline-files/Distribution%20of%20Captures%20of%20Live-streamed%20Child%20Sexual%20Abuse%20FINAL.pdf (accessed 25
May 2018).
IWS (Internet World Stats) (2017) Internet Usage Statistics. Available at:
www.internetworldstats.com/stats.htm (accessed 20 February 2018).
IWS (Internet World Stats) (2018) Internet Growth Statistics. Available at
www.internetworldstats.com/emarketing.htm (accessed 20 November
2018).
Jackson M (2000) Keeping secrets: International developments to protect
undisclosed business information and trade secrets. In D Thomas and B
Loader (eds) Cybercrime: Law Enforcement, Security and Surveillance
in the Information Age. London: Routledge.
Jagatic TN, Johnson NA, Jakobsson M and Menczer F (2007) Social
phishing. Communications of the ACM 50(10): 94–100.
James L (2005) Phishing Exposed. Rockland, MA: Syngress.
Jane EA (2016) Online misogyny and feminist digilantism. Continuum:
Journal of Media & Cultural Studies 30(3): 284–297.
Jane EA (2017a) Feminist digilante responses to a slut-shaming on
Facebook. Social Media + Society, Online First: 1-10. DOI:
10.1177/2056305117705996.
Jane EA (2017b) Misogyny Online. Thousand Oaks, CA: Sage.
The Jargon File (2018) Available at: http://catb.org/jargon/html/ (accessed
11 January 2018).
Jarvis L, Macdonald S and Whiting A (2015) Constructing cyberterrorism
as a security threat: A study of international news media coverage.
Perspectives on Terrorism 9(1): 60–75.
Jefferson T (1997) Masculinities and crime. In M Maguire, R Morgan and
R Reiner (eds) The Oxford Handbook of Criminology. Oxford: Oxford
University Press.
Jenkins P (2001) Beyond Tolerance: Child Pornography Online. New York:
New York University Press.
Jessop B (2002) Liberalism, neoliberalism, and urban governance: A state-theoretical perspective. Antipode 34(4): 452–472.
Jessop B (2003) Governance and metagovernance: On reflexivity, requisite
variety, and requisite irony. In H Bang (ed.) Governance as Social and
Political Communication. Manchester: Manchester University Press.
Jessop B (2004) Hollowing out the nation-state and multilevel governance.
In P Kennett (ed.) A Handbook of Comparative Social Policy.
Cheltenham: Edward Elgar.
Jewkes Y (2010) Public policing and internet crime. In Y Jewkes and M
Yar (eds) Handbook of Internet Crime. Cullompton: Willan.
Jewkes Y (2015) Crime and Media (3rd edn). London: Sage.
Jewkes Y and Andrews C (2005) Policing the filth: The problems of
investigating online child pornography in England and Wales. Policing
and Society 15(1): 42–62.
Jewkes Y and Andrews C (2007) Internet child pornography: International
responses. In Y Jewkes (ed.) Crime Online. Cullompton: Willan.
Jewkes Y and Yar M (2008) Policing cybercrime. In T. Newburn (ed.)
Handbook of Policing. New York, NY: Willan (pp. 580–605).
Jewkes Y and Yar M (2010) The Handbook of Internet Crime. Cullompton:
Willan.
Johns A (2002) Pop music pirate hunters. Daedalus 131(2): 67–77.
Johnson M and Rogers K (2009) Too far down the yellow brick road –
Cyber-hysteria and virtual porn. Journal of International Commercial
Law and Technology 4(1): 61–70.
Johnson W and Cordon G (2012) New anti-stalking laws expected.
Independent, 8 March. Available at:
www.independent.co.uk/news/uk/politics/new-antistalking-laws-expected-7544904.html (accessed 19 July 2018).
Jordan T (2001) Language and libertarianism: The politics of cyberculture
and the culture of cyberpolitics. The Sociological Review 49(1): 1–17.
Jordan T (2008) Hacking: Digital Media and Technological Determinism.
Malden, MA: Polity Press.
Jordan T and Taylor P (1998) A sociology of hackers. The Sociological
Review 46(4): 757–780.
Jordan T and Taylor P (2004) Hacktivism and Cyberwars: Rebels With a
Cause? London: Routledge.
Joseph J (2003) Cyberstalking: An international perspective. In Y Jewkes
(ed.) Dot.cons: Crime, Deviance and Identity on the Internet.
Cullompton: Willan.
JTIC (Justice Technology Information Center) (2018) FieldSearch.
Available at: https://justnet.org/fieldsearch/fieldsearch.html (accessed 13
July 2018).
Kabay M (1998) Anonymity and Pseudonymity in Cyberspace:
Deindividuation, Incivility and Lawlessness Versus Freedom and
Privacy. Paper presented at the Annual Conference of the European
Institute for Computer Anti-virus Research, Munich, 16–18 March.
Kane RJ (2000) Police responses to restraining orders in domestic violence
incidents: Identifying the custody-threshold thesis. Criminal Justice and
Behavior 27(5): 561–580.
Kang C and Frenkel S (2018) Facebook says Cambridge Analytica
harvested data of up to 87 million users. The New York Times, 4 April.
Available at: www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html (accessed 31 October 2018).
Kantar Media (2013) OCI tracker benchmark study ‘Deep Dive’ analysis
report. Prepared for Ofcom. Available at:
http://stakeholders.ofcom.org.uk/binaries/research/telecoms-research/online-copyright/deep-dive.pdf (accessed 9 June 2016).
Kaplan J and Moss M (2003) Investigating Hate Crimes on the Internet.
Washington, DC: Partners Against Hate.
Kappeler VE and Potter GW (2018) The Mythology of Crime and Criminal
Justice (5th edn). Long Grove, IL: Waveland Press.
Karaganis J and Renkema L (2013) Copy culture in the US & Germany.
New York, NY: The American Assembly, Columbia University.
Available at: http://piracy.americanassembly.org/wp-content/uploads/2013/01/Copy-Culture.pdf (accessed 22 November 2018).
Karar H and Effron L (2011) Angie Varona: How a 14-year-old unwillingly
became an internet sex symbol. ABCNews, 9 November. Available at:
https://abcnews.go.com/Technology/angie-varona-14-year-unwillingly-internet-sex-symbol/story?id=14882768 (accessed 24 May 2018).
Karatzogianni A (ed.) (2009) Cyber-Conflict and Global Politics. London:
Routledge.
Kassner M (2015a) Anatomy of the Target data breach: Missed
opportunities and lessons learned. ZDNet, 2 February. Available at:
www.zdnet.com/article/anatomy-of-the-target-data-breach-missed-opportunities-and-lessons-learned/ (accessed 8 December 2017).
Kassner M (2015b) Symantec exposes Butterfly hacking group for
corporate espionage. TechRepublic, 6 August. Available at:
www.techrepublic.com/article/symantec-exposes-butterfly-hacking-group-for-corporate-espionage/ (accessed 16 January 2018).
Kasten E (2004) Ways of owning and sharing cultural property. In E Kasten
(ed.) Properties of Culture – Culture as Property. Berlin: Dietrich Reimer.
Kävrestad J (2017) Guide to Digital Forensics: A Concise and Practical
Introduction. Cham, Switzerland: Springer.
Kennedy JP and Benson ML (2016) Emotional reactions to employee theft
and the managerial dilemmas small business owners face. Criminal
Justice Review 41(3): 257–277.
Kennedy M (2012) Gary McKinnon will face no charges in UK. The
Guardian, 14 December. Available at:
www.theguardian.com/world/2012/dec/14/gary-mckinnon-no-uk-charges
(accessed 12 January 2018).
Kigerl AC (2015) Evaluation of the CAN SPAM Act: Testing deterrence
and other influences of e-mail spammer legal compliance over time.
Social Science Computer Review 33(4): 440–458.
King L (2001) Information, society and the panopticon. The Western
Journal of Graduate Research 10(1): 40–50.
Kini R, Pamakrishna H and Vijayaraman B (2003) An exploratory study of
moral intensity regarding software piracy of students in Thailand.
Behaviour and Information Technology 22(1): 63–70.
Kjaer AM (2004) Governance. London: Routledge.
Kleeman J (2018) YouTube star wins damages in landmark UK ‘revenge
porn’ case. The Guardian, 17 January. Available at:
www.theguardian.com/technology/2018/jan/17/youtube-star-chrissy-chambers-wins-damages-in-landmark-uk-revenge-porn-case?CMP=fb_gu (accessed 1 June 2018).
Kleinwaechter W (2003) From self-governance to public-private
partnership: The changing role of governments in the management of the
Internet’s core resources. Loyola of Los Angeles Law Review 36(3):
1103–1126. Available at: http://digitalcommons.lmu.edu/llr/vol36/iss3/2/
(accessed 19 July 2018).
Klockars CB (1974) The Professional Fence. New York, NY: Free Press.
Koch L (2000) Cyberstalking Hype. Available at:
https://web.archive.org/web/20000902130158/https://www.zdnet.com/zdnn/stories/comment/0,5859,2577187,00.html (accessed 31 July 2018).
Kokolakis S (2017) Privacy attitudes and privacy behaviour: A review of
current research on the privacy paradox phenomenon. Computers &
Security 64: 122–134.
Kolarik G-L (1992) Stalking laws proliferate, but critics say constitutional
flaws also abound. American Bar Association Journal, November: 35–
36.
Kolias C, Kambourakis G, Stavrou A and Voas J (2017) DDoS in the IoT:
Mirai and other botnets. Computer 50(7): 80–84.
Kornegay J (2006) Protecting our children and the constitution: An analysis
of the ‘virtual’ child pornography provisions of the PROTECT Act of
2003. William and Mary Law Review 47(6): 2129–2167.
Kounoupias N (2003) Copyright Piracy Disruption, Deterrence and
Destruction: Targeting the Traders in Fakes. Paper presented at the Anti-Counterfeiting Group (ACG) conference, London, 15 May.
Kovacich G (1999) Hackers: Freedom fighters of the 21st century.
Computers and Security 18(7): 573–576.
Kraska P and Brent JJ (2011) Theorizing Criminal Justice: Eight Essential
Orientations (2nd edn). Long Grove, IL: Waveland Press.
Kraska PB and Neuman WL (2008) Criminal Justice and Criminology
Research Methods. Boston, MA: Pearson Education, Inc.
Kravets D (2011) Patriot Act turns 10, with no signs of retirement. Wired,
26 October. Available at: www.wired.com/threatlevel/2011/10/patriot-act-turns-ten/ (accessed 19 July 2018).
Krone T (2005) Child Exploitation, Australian Institute of Criminology
High Tech Crime Brief 2. Canberra: Australian Institute of Criminology.
Available at: https://aic.gov.au/publications/htcb/htcb002 (accessed 19
July 2018).
Kruttschnit C (2016) 2015 presidential address to the American Society of
Criminology. Criminology 54(1): 8–29.
Kubitschko S (2015) Hackers’ media practices: Demonstrating and
articulating expertise as interlocking arrangements. Convergence: The
International Journal of Research into New Media Technologies 21: 388–
402.
Kuipers G (2006) The social construction of digital danger: Debating,
defusing and inflating the moral dangers of online humor and
pornography in the Netherlands and the United States. New Media &
Society 8(3): 379–400.
Kushner D (2014) We all got trolled. Matter, 22 July. Available at:
https://medium.com/matter/the-martyrdom-of-weev-9e72da8a133d
(accessed 16 May 2018).
Kutchinsky B (1992) Pornography, sex crime and public policy. In S Gerull
and B Halstead (eds) Sex Industry and Public Policy. Canberra:
Australian Institute of Criminology.
Lacey N (2002) Legal constructions of crime. In M Maguire, R Morgan and
R Reiner (eds) The Oxford Handbook of Criminology (3rd edn). Oxford:
Oxford University Press.
Ladegaard I (2017) We know where you are, what you are doing, and we
will catch you: Testing deterrence theory in digital drug markets. British
Journal of Criminology, Online First. DOI: 10.1093/bjc/azx021.
Lahitou J (2017) What is the ENOUGH Act? Lawmakers are pushing to
criminalize revenge porn with a new bill. Bustle, 28 November.
Available at: www.bustle.com/p/what-is-the-enough-act-lawmakers-are-pushing-to-criminalize-revenge-porn-with-a-new-bill-6337236 (accessed
1 June 2018).
Lane F (2001) Obscene Profits: The Entrepreneurs of Pornography in the
Cyber Age. London: Routledge.
Langton L (2011) Identity Theft Reported By Households, 2005–2010.
Washington, DC: US Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/itrh0510.pdf (accessed 29 November
2017).
Langton L and Planty M (2010) Victims Of Identity Theft, 2008.
Washington, DC: US Department of Justice. Available at:
https://www.bjs.gov/content/pub/pdf/vit08.pdf (accessed 29 November
2017).
Lapsley P (2013) Exploding the Phone: The Untold Story of the Teenagers
and Outlaws who Hacked Ma Bell. New York, NY: Grove Press.
Larkin M (2002) Pornography-blocking software may also block health
information sites. The Lancet 360: 1946.
Larsson S, Svensson M and de Kaminski M (2012) Online piracy,
anonymity and social change: Innovation through deviance.
Convergence: The International Journal of Research into New Media
Technologies 19(1): 95–114.
Latour B (2007) Reassembling the Social: An Introduction to Actor-Network Theory. New York, NY: Oxford University Press.
Law J and Bijker W (1992) Shaping Technology/Building Society: Studies
in Sociotechnical Change. Cambridge, MA: MIT Press.
Lawson S (2002) Information Warfare: An Analysis of the Threat of
Cyberterrorism Towards the US Critical Infrastructure. Bethesda, MD:
SANS Institute.
Lawson W (2002) Too Many Passwords. Available at:
www.icdri.org/biometrics/Too%20Many%20Passwords.htm (accessed 19
July 2018).
Lea J and Young J (1984) What Is to Be Done about Law and Order?
Harmondsworth: Penguin.
Lederer L and Delgado R (1995) Introduction. In L Lederer and R Delgado
(eds) The Price We Pay: The Case Against Racist Speech, Hate
Propaganda, and Pornography. New York: Hill and Wang.
Lee G, Low DR, Wiley J and Thirkell P (2004) Internet Pirates:
Generational Attitudes Towards Intellectual Property Online. ANZMAC
2004: Marketing Accountabilities and Responsibilities, held at
Wellington, New Zealand, 29 November–1 December.
Lee M (2003) Conceptualizing the New Governance: A New Institution of
Social Coordination. Paper presented at the Institutional Analysis and
Development Mini-Conference, 3 and 5 May, Workshop in Political
Theory and Policy Analysis, Indiana University, Bloomington, Indiana,
USA.
Lee TB (2013) How a grad student trying to build the first botnet brought
the Internet to its knees. The Washington Post, 1 November. Available at:
www.washingtonpost.com/news/the-switch/wp/2013/11/01/how-a-grad-student-trying-to-build-the-first-botnet-brought-the-internet-to-its-knees/?utm_term=.41855f99cfba (accessed 11 January 2018).
LeFebvre R (2018) Texas court rules 2015 revenge porn law is
unconstitutional. Engadget, 20 April. Available at:
www.engadget.com/2018/04/20/texas-court-revenge-porn-law-unconstitutional/ (accessed 1 June 2018).
Legislation.gov.uk (2015) Criminal Justice and Courts Act 2015. Available
at: www.legislation.gov.uk/ukpga/2015/2/part/1/crossheading/offences-involving-intent-to-cause-distress-etc/enacted (accessed 12 June 2018).
Leigh D and Harding L (2011) WikiLeaks: Inside Julian Assange’s War on
Secrecy. New York, NY: Public Affairs.
Lemert E (1951/2010) Social pathology. Republished in H Copes and V
Topalli (eds) Criminological Theory: Readings and Retrospectives. New
York, NY: McGraw-Hill.
Lemos R (1975) Locke’s theory of property. Interpretation 5: 226–244.
Lenhart A, Ybarra M, Zickuhr K and Price-Feney M (2016) Online
Harassment, Digital Abuse, and Cyberstalking in America. Data &
Society Research Institute. Available at:
www.datasociety.net/pubs/oh/Online_Harassment_2016.pdf (accessed 31
May 2018).
Lenk K (1997) The challenge of cyberspatial forms of human interaction to
territorial governance and policing. In B Loader (ed.) The Governance of
Cyberspace: Politics, Technology and Global Restructuring. London:
Routledge.
Lessig L (2004) Free Culture: The Nature and Future of Creativity. New
York, NY: Penguin.
Lessig L (2008) Remix: Making Art and Commerce Thrive in the Hybrid
Economy. New York, NY: Penguin.
Letterman GG (2001) Basics of International Intellectual Property Law.
New York: Transnational.
Leukfeldt R (ed.) (2017) Research Agenda: The Human Factor in
Cybercrime and Cybersecurity. The Hague, The Netherlands: Eleven
International Publishing.
Levi M (2008) White-collar, organised and cyber crimes in the media:
Some contrasts and similarities. Crime, Law and Social Change 49(5):
365–377.
Levy S (1984) Hackers: Heroes of the Computer Revolution. New York,
NY: Doubleday.
Levy S (2001) Crypto: How the Code Rebels Beat the Government –
Saving Privacy in the Digital Age. New York, NY: Penguin.
Lewis M, Miller P and Buchalter AR (2009) Internet Crimes Against
Children: An Annotated Bibliography of Major Studies. Washington,
DC: National Institute of Justice. Available at: www.loc.gov/rr/frd/pdf-files/internet-report-2.pdf (accessed 5 June 2018).
LexisNexis (2011) Case Studies: Lexisnexis Accurint for Law
Enforcement. Available at:
www.lexisnexis.com/government/solutions/casestudy/accurintle.pdf
(accessed 24 July 2018).
Leyden J (2011) English Defence League site pulled offline after
defacement. The Register, 11 February. Available at:
www.theregister.co.uk/2011/02/11/edl_defacement/ (accessed 31 October
2018).
Libicki M (2007) Conquest in Cyberspace: National Security and
Information Warfare. New York: Cambridge University Press.
Liebowitz SJ (1985) Copying and indirect appropriability: Photocopying of
journals. Journal of Political Economy 93(5): 945–957.
Lilley P (2002) Hacked, Attacked and Abused: Digital Crime Exposed.
London: Kogan Page.
Lindsay JR (2013) Stuxnet and the limits of cyber warfare. Security Studies
22(3): 365–404.
LISS (2017) About the Panel. CentERdata Institute for Data Collection and
Research. Available at: www.lissdata.nl/about-panel (accessed 19
October 2017).
Litman J (2000) The Demonization of Piracy. Paper presented at the 10th
Conference on Computers, Freedom and Privacy, Toronto, 6 April.
Available at: www-personal.umich.edu/~jdlitman/papers/demon.pdf
(accessed 19 July 2018).
Littlewood A (2003) Cyberporn and moral panic: An evaluation of press
reactions to pornography on the Internet. Library and Information
Research 27(86): 8–18.
Livingstone S, Bober M and Helsper E (2005) Internet Literacy Amongst
Children and Young People. Findings from the UK Children Go Online
Project. Available at:
https://web.archive.org/web/20061212014717fw_/http://personal.lse.ac.uk:80/bober/UKCGOonlineLiteracy.pdf (accessed 19 July 2018).
Lloyd IJ (2011) Information Technology Law (6th edn). Oxford: Oxford
University Press.
Loader B (ed.) (1997) The Governance of Cyberspace: Politics, Technology
and Global Restructuring. London: Routledge.
Loader I (1997) Thinking normatively about private security. Journal of
Law and Society 24(3): 377–394.
Loader I (2000) Plural policing and democratic governance. Social and
Legal Studies 9(3): 323–345.
Loader I and Sparks R (2002) Contemporary landscapes of crime, order and
control: Governance, risk and globalization. In M Maguire, R Morgan
and R Reiner (eds) The Oxford Handbook of Criminology (3rd edn).
Oxford: Oxford University Press.
Loader I and Walker N (2004) State of denial? Rethinking the governance
of security. Punishment and Society 6(2): 221–228.
Loader I and Walker N (2006) Necessary virtues: The legitimate place of
the state in the production of security. In J Wood and B Dupont (eds)
Democracy, Society and the Governance of Security. Cambridge:
Cambridge University Press.
Loeber R and Stouthamer-Loeber M (1986) Family factors as correlates and
predictors of juvenile conduct problems and delinquency. In M Tonry
and N Morris (eds) Crime and Justice: An Annual Review of Research
(vol. 7). Chicago, IL: University of Chicago Press.
Loftus E (1996) The Myth of Repressed Memory: False Memories and
Allegations of Sexual Abuse. New York: St. Martin’s Press.
Lohr S (2011) Online hate sites grow with social networks. New York
Times, 16 March. Available at:
https://bits.blogs.nytimes.com/2010/03/16/online-hate-sites-grow-with-social-networks/ (accessed 20 July 2018).
Lopatin E and Bulavas V (2017) Miners on the rise. SecureList/Kaspersky
Labs, 12 September. Available at: https://securelist.com/miners-on-the-rise/81706/ (accessed 11 January 2018).
Love C (2000) Love Manifesto. Available at:
www.reznor.com/commentary/loves_manifesto1.html (accessed 19 July
2018).
Lowney K and Best J (1995) Stalking strangers and lovers: Changing media
typifications of a new crime problem. In J Best (ed.) Images of Issues:
Typifying Contemporary Social Problems. New York: Aldine de Gruyter.
Lusthaus J and Varese F (2017) Offline and local: The hidden face of
cybercrime. Policing: A Journal of Policy and Practice, Online First.
DOI: 10.1093/police/pax042.
Lyon D (1994) The Electronic Eye: The Rise of Surveillance Society.
Oxford: Polity Press.
Lyon D (2003) Surveillance after September 11. Oxford: Blackwell.
MacAskill E (2016) ‘Extreme surveillance’ becomes UK law with barely a
whimper. The Guardian, 19 November. Available at:
www.theguardian.com/world/2016/nov/19/extreme-surveillance-becomes-uk-law-with-barely-a-whimper (accessed 27 July 2018).
MacDonald A and Bryan-Low C (2011) UK case reveals terror tactics. Wall
Street Journal, 7 February. Available at:
www.wsj.com/articles/SB10001424052748704570104576124231820312632 (accessed 19 July 2018).
MacDonald K (2018) Social media spying is turning us into a stalking
society. The Guardian, 13 February. Available at:
www.theguardian.com/commentisfree/2018/feb/13/social-media-spying-stalking (accessed 4 June 2018).
MacEwan N (2008) The Computer Misuse Act 1990: Lessons from its past
and predictions for its future. Criminal Law Review. Available at:
http://usir.salford.ac.uk/15815/7/MacEwan_Crim_LR.pdf (accessed 19
July 2018).
Mackie G (2004) Publishing notoriety: Piracy, pornography and Oscar
Wilde. University of Toronto Quarterly 73(4): 980–990.
MacKinnon C (1996) Only Words. Cambridge, MA: Harvard University
Press.
Madigan S, Ly A, Rash CL, Van Ouytsel J and Temple JR (2018)
Prevalence of multiple forms of sexting behavior among youth. JAMA
Pediatrics 172(4): 327–335.
Maguire M (2002) Crime statistics: The ‘data explosion’ and its
implications. In M Maguire, R Morgan and R Reiner (eds) The Oxford
Handbook of Criminology (3rd edn). Oxford: Oxford University Press.
Maimon D, Alper M, Sobesto B and Cukier M (2014) Restrictive deterrent
effects of a warning banner in an attacked computer system. Criminology
52(1): 33–59.
Maimon D, Kamerdze A, Cukier M and Sobesto B (2013) Daily trends and
origin of computer-focused crimes against a large university computer
network. British Journal of Criminology 53: 319–343.
Mansfield-Devine S (2011) Anonymous: Serious threat or mere annoyance?
Network Security 2011(1): 4–10.
Mantilla K (2015) Gendertrolling: How Misogyny Went Viral. Santa
Barbara, CA: ABC-CLIO, LLC.
Marantz A (2018) Reddit and the struggle to detoxify the Internet. The New
Yorker, 19 March. Available at:
www.newyorker.com/magazine/2018/03/19/reddit-and-the-struggle-to-detoxify-the-internet (accessed 24 May 2018).
Marcum CD, Higgins GE and Ricketts ML (2014a) Juveniles and cyber
stalking in the United States: An analysis of theoretical predictors of
patterns of online perpetration. International Journal of Cyber
Criminology 8(1): 47–56.
Marcum CD, Higgins GE and Ricketts ML (2014b) Sexting behaviors
among adolescents in rural North Carolina: A theoretical examination of
low self-control and deviant peer association. International Journal of
Cyber Criminology 8(2): 68–78.
Marcum CD, Higgins GE, Ricketts ML and Wolfe SE (2014c) Hacking in
high school: Cybercrime perpetration by juveniles. Deviant Behavior
35(7): 581–591.
Marganski AJ (2018) Feminist theory and technocrime. In KF Steinmetz
and MR Nobles (eds) Technocrime and Criminological Theory. New
York, NY: Routledge.
Marganski A and Melander L (2018) Intimate partner violence
victimization in the cyber and real world: Examining the extent of cyber
aggression experiences and its association with in-person dating
violence. Journal of Interpersonal Violence 33(7): 1071–1095.
Marin E, Singelée D, Garcia FD, Chothia T, Willems R and Preneel B
(2016) On the (In)security of the Latest Generation Implantable Cardiac
Defibrillators and How to Secure Them. ACSAC ’16, 5–9 December.
Available at: www.esat.kuleuven.be/cosic/publications/article-2678.pdf
(accessed 15 August 2018).
Martin B (1998) Information Liberation. London: Freedom Press.
Martin E (ed.) (2003) Oxford Dictionary of Law (5th edn). Oxford: Oxford
University Press.
Maruna S and Copes H (2005) What have we learned from five decades of
neutralization research? Crime and Justice 32: 221–320.
Marusak J (2015) Woman sentenced for selling stolen items on eBay.
Charlotte Observer, 14 May. Available at:
www.charlotteobserver.com/news/local/crime/article20969706.html
(accessed 25 April 2018).
Marvin JT (2015) Without a bright-line on the green line: How
Commonwealth v. Robertson failed to criminalize upskirt photography.
New England Law Review 50: 119–143.
Marx G (2016) Windows into the Soul: Surveillance in an Age of High
Technology. Chicago, IL: University of Chicago Press.
Mathiason J (2009) Internet Governance: The New Frontier of Global
Institutions. New York: Routledge.
Matishak M (2018) What we know about Russia’s election hacking.
Politico, 18 July. Available at:
www.politico.com/story/2018/07/18/russia-election-hacking-trump-putin-698087 (accessed 14 August 2018).
Matney L (2015) Uncovering ECHELON: The top-secret NSA/GCHQ
program that has been watching you your entire life. Techcrunch, 3
August. Available at: https://techcrunch.com/2015/08/03/uncovering-echelon-the-top-secret-nsa-program-that-has-been-watching-you-your-entire-life/ (accessed 31 July 2018).
Matthews C (2011) Manga, virtual child pornography, and censorship in
Japan. In CAEP (ed.) Applied Ethics: New Wine in Old Bottles?
Sapporo: Hokkaido University Center for Applied Ethics and Philosophy.
Matza D (1964) Delinquency and Drift. New York, NY: John Wiley &
Sons, Inc.
Maurer D (2000) The Big Con: The Story of the Confidence Man and the
Confidence Trick. London: Arrow.
Mawby R (2008) Models of policing. In T Newburn (ed.) Handbook of
Policing (2nd edn). Cullompton: Willan.
Maza C (2017) Trump’s speech causes more anti-Muslim hate crimes than
terrorism, study shows. Newsweek, 16 November. Available at:
www.newsweek.com/trump-speech-anti-muslim-hate-crime-terrorism-study-713905 (accessed 29 June 2018).
McAfee and CSIS (2013) The Economic Impact of Cybercrime and
Cyberespionage. Center for Strategic and International Studies. Available
at: https://csis-prod.s3.amazonaws.com/s3fs-public/legacy_files/files/publication/60396rpt_cybercrime-cost_0713_ph4_0.pdf (accessed 26 October 2017).
McCahill M (2001) The Surveillance Web: The Rise of Visual Surveillance
in an English City. Cullompton: Willan.
McCandless D (2004) Anatomy of a virus. The Guardian, 5 February.
Available at:
www.theguardian.com/technology/2004/feb/05/viruses.security
(accessed 20 July 2018).
McCarthy B (2002) New economics of sociological criminology. Annual
Review of Sociology 28: 417–442.
McCarthy K (2017) UK ministers to push anti-encryption laws after
election. The Register, 25 May. Available at:
www.theregister.co.uk/2017/05/25/uk_to_push_antiencryption_laws_after_election/ (accessed 27 July 2018).
McClymer JF (1980) War and Welfare: Social Engineering in America,
1890–1925. Westport, CT: Greenwood Press.
McConchie A (2015) Hacker cartography: Crowdsourced geography,
OpenStreetMap, and the hacker political imaginary. ACME: An
International E-Journal for Critical Geographies 14: 874–898.
McConnell International (2000) Cyber Crime … and Punishment? Archaic
Laws Threaten Global Information. Available at:
www.iwar.org.uk/law/resources/cybercrime/mcconnell/CyberCrime.pdf
(accessed 20 July 2018).
McCormack T (1998) Multiculturalism, racism and hate speech. Institute
for Social Research Newsletter 13(1). Available at:
www.math.yorku.ca/ISR/newsletter/multi_racism_hate.htm (accessed 20
July 2018).
McCourt T and Burkart P (2003) When creators, corporations and
consumers collide: Napster and the development of on-line music
distribution. Media, Culture and Society 25(3): 333–350.
McCracken G (1988) The Long Interview: Qualitative Research Methods
Series, Vol. 13. Newbury Park, CA: Sage.
McCullagh D (2001) Bin Laden: Steganography master? Wired, 7 February.
Available at: www.wired.com/news/politics/0,1283,41658,00.html
(accessed 20 July 2018).
McFarlane L and Bocij P (2003) An exploration of predatory behaviour in
cyberspace: Towards a typology of cyberstalkers. First Monday 8(9).
Available at: www.firstmonday.org/ojs/index.php/fm/article/view/1076
(accessed 20 July 2018).
McGonagle T (2001) Wrestling (racial) equality from tolerance of hate
speech. Dublin University Law Journal 23: 21–54.
McGraw G (2006) Software Security: Building Security In. Boston, MA:
Addison-Wesley Professional.
McGuire B and Wraith A (2000) Legal and psychological aspects of
stalking: A review. The Journal of Forensic Psychiatry 11(2): 316–327.
McKinnon G (2016) Theresa May saved my life – now she’s the only hope
for the Human Rights Act. The Guardian, 15 November. Available at:
www.theguardian.com/commentisfree/2016/nov/15/theresa-may-saved-my-life-human-rights-act (accessed 12 January 2018).
McLaughlin J (2015) US mass surveillance has no record of thwarting large
terror attacks, regardless of Snowden leaks. The Intercept, 17 November.
Available at: https://theintercept.com/2015/11/17/u-s-mass-surveillance-has-no-record-of-thwarting-large-terror-attacks-regardless-of-snowden-leaks/ (accessed 25 July 2018).
McLeod K (2001) Owning Culture: Authorship, Ownership, and
Intellectual Property Law. New York: Peter Lang.
McLeod K (2014) Pranksters: Making Mischief in the Modern World. New
York, NY: NYU Press.
McLuhan M (1964) Understanding Media. New York, NY: New American
Library.
Meek-Prieto C (2008) Just age playing around? How second life aids and
abets child pornography. North Carolina Journal of Law and Technology
9: 88–109.
Meikle G (2002) Future Active: Media Activism and the Internet. New York,
NY: Routledge.
Menn J (2013) Trillion-dollar global hacking damages estimate called
exaggerated. Reuters, 22 July. Available at: www.reuters.com/article/us-hacking-estimate/trillion-dollar-global-hacking-damages-estimate-called-exaggerated-idUSBRE96L0M920130722 (accessed 11 January 2018).
Menta R (2003) New from the RIAA: Let’s Play Starving Artist. Available
at:
https://web.archive.org/web/20040507063440/www.p2pnet.net/article/8214 (accessed 20 July 2018).
The Mentor (1986) The conscience of a hacker. Phrack Inc. 1(7): Phile 3 of
10. Available at: http://phrack.org/issues/7/3.html#article (accessed 20
July 2018).
Merton RK (1938) Social structure and anomie. American Sociological
Review 3: 672–682.
Messerschmidt J (1993) Masculinities and Crime. Lanham, MD: Rowman
and Littlefield.
Messner SF and Rosenfeld R (2013) Crime and the American Dream (5th
edn). Belmont, CA: Wadsworth.
Meyer GR (1989) The Social Organization of the Computer Underground,
Masters Thesis, Northern Illinois University. Available at:
www.g2meyer.com/cu/ComputerUnderground.pdf (accessed 21 February
2018).
Miller v. California, 413 US 15 (1973) Available at:
https://supreme.justia.com/cases/federal/us/413/15/ (accessed 16 May
2018).
Miller J (2002) The strengths and limits of ‘doing gender’ for
understanding street crime. Theoretical Criminology 6: 433–460.
Miller J (2003) Feminist criminology. In MD Schwartz and SE Hatty (eds)
Controversies in Critical Criminology. Cincinnati, OH: Anderson
Publishing.
Miller V (2011) Understanding Digital Culture. London: Sage.
Mills E (2012) Two hackers plead guilty to LulzSec attacks on Web sites.
Cnet.com, 25 June. Available at: www.cnet.com/news/two-hackers-plead-guilty-to-lulzsec-attacks-on-web-sites/ (accessed 20 July 2018).
Milone M (2003) Hacktivism: Securing the national infrastructure.
Knowledge, Technology and Policy 16(1): 75–103.
Minor WW (1981) Techniques of neutralization: A reconceptualization and
empirical examination. Journal of Research in Crime and Delinquency
18(1): 295–318.
Mitchell KJ and Jones LM (2013) Internet-facilitated Commercial Sexual
Exploitation of Children. Durham, NH: Crimes Against Children
Research Center. Available at:
http://unh.edu/ccrc/pdf/Final_IFCSEC_Bulletin_Nov_2013_CV262.pdf
(accessed 21 May 2018).
Mitchell KJ, Jones LM, Finkelhor D and Wolak J (2014) Trends in
Unwanted Online Experiences and Sexting: Final Report. Durham, NH:
Crimes Against Children Research Center. Available at:
https://scholars.unh.edu/cgi/viewcontent.cgi?article=1048&context=ccrc
(accessed 5 June 2018).
Mitchell S (2012) The BSA’s ‘nauseating’ anti-piracy tactics. Alphr, 14
March. Available at: www.alphr.com/news/enterprise/373567/the-bsas-nauseating-anti-piracy-tactics (accessed 23 May 2016).
Mitchell WJ (1995) City of Bits: Space, Place and the Infobahn.
Cambridge, MA: MIT Press.
Mitnick K and Simon WL (2002) The Art of Deception: Controlling the
Human Element of Security. Indianapolis, IN: Wiley.
Mitnick K and Simon WL (2011) Ghost in the Wires: My Adventures as the
World’s Most Wanted Hacker. New York, NY: Little, Brown.
Miyawaki R (1999) The Fight against Cyberterrorism: A Japanese View.
Washington, DC: Centre for Strategic and International Studies.
MoJ (Ministry of Justice) (2017) New Crackdown on Child Groomers
Comes into Force. Available at: www.gov.uk/government/news/new-crackdown-on-child-groomers-comes-into-force (accessed 11 June
2018).
Moon B, McCluskey JD and McCluskey CP (2010) A general theory of
crime and computer crime: An empirical test. Journal of Criminal Justice
38: 767–772.
Moore L (ed.) (2000) Con Men and Cutpurses: Scenes from the Hogarthian
Underworld. London: Allen Lane.
Moore D and Rid T (2016) Cryptopolitik and the darknet. Survival 58(1):
7–38.
Moore R and McMullan EB (2009) Neutralizations and rationalizations of
digital piracy: A qualitative analysis of university students. International
Journal of Cyber Criminology 3(1): 441–451.
Moorefield J (1997) Legal update: Communications Decency Act of 1996.
Boston University Journal of Science and Technology Law 13: 32–38.
Morgan S (2018) 2018 Cybersecurity Market Report, Cybersecurity
Ventures. Available at: https://cybersecurityventures.com/cybersecurity-market-report/ (accessed 18 July 2018).
Morris A (2012) Hunter Moore: The most hated man on the Internet.
Rolling Stone, 13 November. Available at:
www.rollingstone.com/culture/news/the-most-hated-man-on-the-internet-20121113 (accessed 1 June 2018).
Morris RG and Blackburn AG (2009) Cracking the code: An empirical
exploration of social learning theory and computer crime. Journal of
Crime & Justice 32(1): 1–34.
Morris RG and Higgins GE (2009) Neutralizing potential and self-reported
digital piracy: A multitheoretical exploration among college students.
Criminal Justice Review 34(2): 173–195.
Mota S (2003) The US Supreme Court addresses the Child Pornography
Prevention Act and Child Online Protection Act in Ashcroft v. Free
Speech Coalition and Ashcroft v. American Civil Liberties Union. Federal
Communications Law Journal 55(1): 85–98.
Mueller G (2016) Piracy as labour struggle. Communication, Capitalism &
Critique 14: 333–345.
Mueller M (1999) ICANN and internet governance: Sorting through the
debris of ‘self-regulation’. Info – The Journal of Policy, Regulation and
Strategy for Telecommunications 1(6): 497–520.
Mullen P, Pathé M and Purcell R (2001) Stalking: New constructions of
human behaviour. Australian and New Zealand Journal of Psychiatry 35:
9–16.
Muncaster P (2016) UK cybercrime prosecutions rise by a third.
InfoSecurity Magazine, 23 May. Available at: www.infosecurity-magazine.com/news/uk-cybercrime-prosecutions-rise-by/ (accessed 17
January 2018).
Muncie J (1999) Youth and Crime: A Critical Introduction. London: Sage.
Muncie J (2005) The globalization of crime control – the case of youth and
juvenile justice: Neo-liberalism, policy convergence and international
conventions. Theoretical Criminology 9(1): 35–64.
Murray C (1984) Losing Ground. New York: Basic Books.
Nagle A (2017) Kill All Normies: Online Culture Wars from 4Chan and
Tumblr to Trump and the Alt-Right. Hampshire, England: Zero Books.
Nagin D (1998) Criminal deterrence research at the outset of the twenty-first
century. Crime and Justice 23: 1–42.
Nakamura L (2014) ‘I WILL DO EVERYthing That Am Asked’:
Scambaiting, digital show-space, and the racial violence of social media.
Journal of Visual Culture 13(3): 257–274.
Nakashima E (2016) FBI paid professional hackers one-time fee to crack
San Bernardino iPhone. Washington Post, 12 April. Available at:
www.washingtonpost.com/world/national-security/fbi-paid-professional-hackers-one-time-fee-to-crack-san-bernardino-iphone/2016/04/12/5397814a-00de-11e6-9d36-33d198ea26c5_story.html?utm_term=.aa93955c45b9 (accessed 25 July
2018).
Nash J (1976) Hustlers and Con Men: An Anecdotal History of the
Confidence Man and His Games. New York: Evans.
Naughton J (2000) A Brief History of the Future: The Origins of the
Internet. London: Phoenix.
NCA (National Crime Agency) (2017) National Crime Agency Annual
Report and Accounts 2016–2017. Available at:
www.nationalcrimeagency.gov.uk/publications/814-national-crime-agency-annual-report-2016-17/file (accessed 5 January 2018).
NCIS (National Criminal Intelligence Service) (1999) Project Trawler:
Crime on the Information Highways. Available at: www.cyber-rights.org/documents/trawler.htm (accessed 20 July 2018).
NCSC (National Cyber Security Centre) (2017) The Cyber Threat to UK
Business 2016–17. London: HMSO.
Neal D (2011) Anonymous will rein in Facebook ‘Fawkes virus’. The
Inquirer, 11 November. Available at:
www.theinquirer.net/inquirer/news/2124367/anonymous-rein-facebooks-fawkes-virus (accessed 20 July 2018).
Neiwert D (2017) What the kek: Explaining the Alt-Right ‘deity’ behind the
‘meme magic’. Southern Poverty Law Center, 8 May. Available at:
www.splcenter.org/hatewatch/2017/05/08/what-kek-explaining-alt-right-deity-behind-their-meme-magic (accessed 15 August 2018).
Neuberger F, Schönberger S and Raml R (2011) Stalking and health – an
Austrian prevalence study. Gesundheitswesen 73(4): 74–77.
Neuman S (2013) Police arrest hundreds in global child porn sting. National
Public Radio, 14 November. Available at: www.npr.org/sections/thetwo-way/2013/11/14/245248716/police-arrest-hundreds-in-global-child-porn-sting (accessed 23 May 2018).
Neustar (2017) Worldwide DDoS Attacks & Cyber Insights Research
Report: Taking Back the Upper Hand From Attackers. Available at:
https://hello.neustar.biz/201705-Security-Solutions-DDoS-SOC-Report-LP.html?utm_medium=press%20release&utm_source=internal-press-release&utm_campaign=201705-securitysolutions-ddos-ddossocreportpr&ucid=70139000001bbz5aam (accessed 15 January
2018).
Newman G and Clarke R (2003) Superhighway Robbery: Preventing e-Commerce Crime. Cullompton: Willan.
Newman LH (2017) The worst hacks of 2017. Wired, 31 December.
Available at: www.wired.com/story/worst-hacks-2017/ (accessed 11
January 2018).
Newman LH (2018) The worst cybersecurity breaches of 2018 so far.
Wired, 9 August. Available at: www.wired.com/story/2018-worst-hacks-so-far/ (accessed 15 August 2018).
Newton P (2011) Russia: Where keeping child porn is legal. CNN, 2 June.
Available at:
http://edition.cnn.com/2011/WORLD/europe/06/02/russia.child.porn/
(accessed 20 July 2018).
Nhan J (2010) Policing Cyberspace: A Structural and Cultural Analysis. El
Paso, TX: LFB Publishing.
Nhan J and Huey L (2013) ‘We don’t have these laser beams and stuff like
that’: Police investigations as low-tech work in a high-tech world. In S
Leman-Langlois (ed.) Technocrime, Policing, and Surveillance. New
York, NY: Routledge (pp. 79–90).
Nhan J, Huey L and Broll R (2017) Digilantism: An analysis of
crowdsourcing and the Boston Marathon bombings. British Journal of
Criminology 57(2): 341–361.
Nielsen L (2002) Subtle, pervasive, harmful: Racist and sexist remarks in
public as hate speech. Journal of Social Issues 58(2): 265–280.
NJ Rev Stat § 2C (2013) Available at: https://law.justia.com/codes/new-jersey/2013/title-2c/section-2c-14-9 (accessed 1 June 2018).
Nobles MR, Reyns BW, Fox KA and Fisher BS (2014) Protection against
pursuit: A conceptual and empirical comparison of cyberstalking and
stalking victimization among a national sample. Justice Quarterly 31(6):
986–1014.
NOP/NHTCU (National Hi-Tech Crime Unit) (2002) Hi-Tech Crime: The
Impact on UK Business. London: NHTCU.
Norris C and Armstrong G (1999) The Maximum Surveillance Society: The
Rise of CCTV. Oxford: Berg.
Norton (2018) 2017 Norton Cyber Security Insights Report Global Results.
Mountain View, CA: Symantec. Available at:
www.symantec.com/content/dam/symantec/docs/about/2017-ncsir-global-results-en.pdf.
NUA Internet Statistics (2003) ‘How many online?’ Available at:
www.nua.ie/surveys/how_many_online/ (accessed 24 January 2018).
Nugent J and Raisinghani M (2002) The information technology and
telecommunications security imperative: Important issues and drivers.
Journal of Electronic Commerce Research 3(1): 1–14.
O’Connell R (2003) A Typology of Cybersexploitation and Online
Grooming Practices. Preston: Cyberspace Research Unit.
O’Connell R, Price J and Barrow C (2004) Cyber Stalking, Abusive Cyber
Sex and Online Grooming: A Programme of Education for Teenagers.
Preston: Cyberspace Research Unit.
O’Connor C and Kiourti A (2017) Wireless sensors for smart orthopedic
implants. Journal of Bio- and Tribo-Corrosion 3: 20.
O’Donnell I and Milner C (2007) Child Pornography: Crime, Computers
and Society. Cullompton: Willan.
Obama B (2012) Consumer Data Privacy in a Networked World: A
Framework for Protecting Privacy and Promoting Innovation in the
Global Digital Economy. Washington DC: The White House. Available
at:
https://web.archive.org/web/20160321094303/https://www.whitehouse.gov/sites/default/files/privacy-final.pdf (accessed 24 July 2018).
Oboler A (2013) Islamophobia on the Internet. Caulfield South, Victoria,
Australia: Online Hate Prevention Institute. Available at:
www.scribd.com/document/190416797/Islamophobia-on-the-Internet-The-growth-of-online-hate-targeting-Muslims#fullscreen&from_embed
(accessed 29 June 2018).
OECD (Organisation for Economic Cooperation and Development) (1997)
OECD Guidelines for Cryptography Policy. Available at:
www.oecd.org/sti/ieconomy/guidelinesforcryptographypolicy.htm
(accessed 25 July 2018).
OECD (Organisation for Economic Cooperation and Development) (2017)
OECD Broadband Statistics. Available at:
http://www.oecd.org/sti/broadband/1.1-TotalBBSubs-bars-2017-06.xls
(accessed 20 February 2018).
Ofcom (2017) Children and Parents: Media Use and Attitudes Report.
Available at:
www.ofcom.org.uk/__data/assets/pdf_file/0020/108182/children-parents-media-use-attitudes-2017.pdf (accessed 11 June 2018).
Office of the United States President (2000) Defending America’s Cyberspace: National Plan for Information Systems Protection. Washington,
DC: The White House.
Ogas O and Gaddam S (2011) A Billion Wicked Thoughts: What the
Internet Tells Us About Sexual Relationships. New York, NY: Dutton.
Ogilvie E (2000) Cyberstalking: Trends and Issues in Criminal Justice, No.
166. Canberra: Australia Institute of Criminology.
Oliverio A (1998) The State of Terror. Albany, NY: SUNY Press.
OMB (Office of Management and Budget) (2017) The President’s Budget
for Fiscal Year 2017. Available at:
https://obamawhitehouse.archives.gov/omb/budget (accessed 15
December 2017).
ONS (Office for National Statistics) (2018a) Crime in England and Wales:
Year ending September 2017. Available at:
www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/articles/overviewoffraudandcomputermisusestatisticsforenglandandwales/2018-01-25/pdf (accessed 24 April 2018).
ONS (Office for National Statistics) (2018b) Crime in England and Wales:
Appendix Tables. Available at:
www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/datase
ts/crimeinenglandandwalesappendixtables (accessed 6 June 2018).
ONS (Office for National Statistics) (2018c) Stalking: Findings from the
Crime Survey for England and Wales. Available at:
https://tinyurl.com/y7kvysft (accessed 31 May 2018).
Onwuekwe C (2004) The commons concept and intellectual property rights
regime: Whither plant genetic resources and traditional knowledge?
Pierce Law Review 2(1): 65–90.
Orland K (2017) EU study finds piracy doesn’t hurt game sales, may
actually help. ArsTechnica, 26 September. Available at:
https://arstechnica.com/gaming/2017/09/eu-study-finds-piracy-doesnt-hurt-game-sales-may-actually-help/ (accessed 28 February 2018).
Ormsby E (2018) The Darkest Web. Sydney, Australia: Allen & Unwin.
Oswell D (2003) When images matter: Internet child pornography, forms of
observation and an ethics of the virtual. Information, Communication &
Society 9(2): 244–265.
OUSA (Offices of the United States Attorneys) (2018a) Use of Interstate
Facilities to Transmit Information About a Minor (18 USC. 2425).
Available at: www.justice.gov/usam/criminal-resource-manual-1992-use-interstate-facilities-transmit-information-about-minor-18-usc (accessed 5
June 2018).
OUSA (Offices of the United States Attorneys) (2018b) Coercion and
Enticement (18 USC. 2422). Available at:
www.justice.gov/usam/criminal-resource-manual-2001-coercion-and-enticement-18-usc-2422 (accessed 5 June 2018).
Owen G and Savage N (2015) The Tor Dark Net, Global Commission on
Internet Governance. Available at:
www.cigionline.org/sites/default/files/no20_0.pdf (accessed 23 May
2018).
Paletz D and Vinson C (1992) Introduction. In D Paletz and A Schmid (eds)
Terrorism and the Media. Newbury Park, CA: Sage.
Pardes A (2018) What is GDPR and why should you care? Wired, 24 May.
Available at: www.wired.com/story/how-gdpr-affects-you/ (accessed 24
July 2018).
Parkes M (2012) Making plans for Nigel: The industry trust and film piracy
management in the United Kingdom. Convergence: The International
Journal of Research into New Media Technologies 19(1): 25–43.
Parkin S (2017) Keyboard warrior: The British hacker fighting for his life.
The Guardian, 8 September. Available at:
www.theguardian.com/technology/2015/jul/16/british-man-lauri-love-accused-hacking-us-government-computer-networks-arrested (accessed
12 January 2018).
Passas N (1990) Anomie and corporate deviance. Contemporary Crises
14(2): 157–178.
Passas N (2000) Global anomie, dysnomie, and economic crime: Hidden
consequences of neoliberalism and globalization in Russia and around
the world. Social Justice 27(2): 16–44.
Patchin JW and Hinduja S (2011) Traditional and nontraditional bullying
among youth: A test of General Strain Theory. Youth & Society 43(2):
727–751.
Patchin JW and Hinduja S (2016) Deterring teen bullying: Assessing the
impact of perceived punishment from police, schools, and parents. Youth
Violence and Juvenile Justice, Online First. DOI:
10.1177/1541204016681057.
Patel F and Levinson-Waldman R (2017) The Islamophobic Administration.
New York, NY: Brennan Center for Justice. Available at:
www.brennancenter.org/publication/islamophobic-administration
(accessed 29 June 2018).
Paternoster R (2010) How much do we really know about criminal
deterrence? The Journal of Crime, Law & Criminology 100(3): 765–824.
Pattavina A (ed.) (2004) Information Technology and the Criminal Justice
System. London: Sage.
Pavlich G (2001) Critical genres in radical criminology in Britain. British
Journal of Criminology 41: 150–167.
PCeU (2011) PCeU – Police Central e-Crime Unit. Available at:
https://web.archive.org/web/20111213152854/www.met.police.uk/pceu/
(accessed 20 July 2018).
Pearsall R (1993) The Worm in the Bud: The World of Victorian Sexuality.
London: Pimlico.
Pearson J (2017) At least 1.65 million computers are mining cryptocurrency
for hackers so far this year. Motherboard, 12 September. Available at:
https://motherboard.vice.com/en_us/article/vb74j3/at-least-165-million-computers-are-mining-cryptocurrency-for-hackers-so-far-this-year
(accessed 11 January 2018).
Peitz M and Waelbroeck P (2006) Piracy of digital products: A critical
review of the theoretical literature. Information Economics and Policy
18: 449–476.
Perelman M (2002) Steal this Idea: Intellectual Property and the Corporate
Confiscation of Creativity. New York, NY: Palgrave Macmillan.
Philippsohn S (2001) Trends in cybercrime – an overview of current
financial crimes on the Internet. Computers and Security 20: 53–69.
Phillips W (2015) This is Why We Can’t Have Nice Things. Cambridge,
MA: MIT Press.
Phillips W (2018) The oxygen of amplification. Data & Society. Available
at: https://datasociety.net/output/oxygen-of-amplification/ (accessed 29
June 2018).
Picotti L and Salvadori I (2008) National Legislation Implementing the
Convention on Cybercrime – Comparative Analysis and Good Practices.
Strasbourg, France: Directorate General of Human Rights and Legal
Effects. Available at:
www.coe.int/t/dg1/legalcooperation/economiccrime/cybercrime/TCY/DOC%20567%20study2-dversion8%20provisional%20(12%20march%2008).PDF (accessed 20
July 2018).
Pincus W (2014) Oversight board says NSA data mining puts citizens’
privacy at risk but sees no abuse. The Washington Post, 14 July.
Available at: www.washingtonpost.com/world/national-security/oversight-board-says-nsa-data-mining-puts-citizens-privacy-at-risk-but-sees-no-abuse/2014/07/14/04e232e0-06e7-11e4-bbf1-cc51275e7f8f_story.html?noredirect=on&utm_term=.c71bf00f697f
(accessed 31 July 2018).
Platt T, Frappier J, Ray G, Schauffler R, Trujillo L, Cooper L, Currie E and
Harring S (1977) The Iron Fist and the Velvet Glove: An Analysis of the
US Police (3rd edn). San Francisco, CA: Crime and Social Justice
Associates.
Play It Cybersafe (2005) Parents’ and Teachers’ Guide. Available at:
https://web.archive.org/web/20050514191702/www.playitcybersafe.com/pdfs/TG-copyrightCrusader-2005.pdf (accessed 20 July 2018).
Poole P (2000) ECHELON: America’s Secret Global Surveillance Network.
Available at:
https://web.archive.org/web/20000815053709/http://fly.hiwaay.net/~pspoole/echelon.html (accessed 20 July 2018).
Pornhub (2018) 2017 Year in Review. PornHub Insights, 9 January.
Available at: www.pornhub.com/insights/2017-year-in-review (accessed
2 May 2018).
Poster M (1990) The Mode of Information: Post-Structuralism and Social
Contexts. Cambridge: Polity.
Postman N (1985) Amusing Ourselves to Death: Public Discourse in the
Age of Show Business. New York, NY: Basic Books.
Price D (2013) Sizing the piracy universe. NetNames. Available at:
www.netnames.com/assets/shared/whitepaper/pdf/netnames-sizing-piracy-universe-FULLreport-sept2013.pdf (accessed 27 February 2018).
Priday R (2018) Fake news laws are threatening free speech on a global
scale. Wired, 5 April. Available at: www.wired.co.uk/article/malaysia-fake-news-law-uk-india-free-speech (accessed 15 August 2018).
Prigg M (2004) eBay scam could cost buyers millions. Evening Standard,
15 October. Available at: www.standard.co.uk/news/ebay-scam-could-cost-buyers-millions-7235515.html (accessed 31 October 2018).
Procida R and Simon R (2003) Global Perspectives on Social Issues:
Pornography. Lanham, MD: Lexington Books.
Protection of Children Act (1978) Available at:
www.legislation.gov.uk/ukpga/1978/37 (accessed 20 July 2018).
Przybylski AK and Nash V (2018) Internet filtering and adolescent
exposure to online sexual material. Cyberpsychology, Behavior, and
Social Networking 21(7): 405–410.
PWC (PriceWaterhouseCoopers) (2015) US Cybersecurity: Progress
Stalled: Key Findings from the 2015 US State of Cybercrime Survey.
Available at: www.pwc.com/us/en/increasing-it-effectiveness/publications/assets/2015-us-cybercrime-survey.pdf
(accessed 11 January 2018).
PWC (PriceWaterhouseCoopers) (2016) Global Economic Crime Survey.
Available at: www.pwc.com/gx/en/economic-crime-survey/pdf/GlobalEconomicCrimeSurvey2016.pdf (accessed 26 October
2017).
Quayle E (2010) Child pornography. In Y Jewkes and M Yar (eds)
Handbook of Internet Crime. Cullompton: Willan.
Quayle E and Taylor M (2003) Child Pornography: An Internet Crime.
London: Routledge.
Quinn Z (2017) Crash Override: How GamerGate [Nearly] Destroyed my
Life and How We Can Win the Fight Against Online Hate. New York,
NY: Public Affairs.
Quinney R (1977/2010) Class, state, and crime: On the theory and practice
of criminal justice. Republished in H Copes and V Topalli (eds)
Criminological Theory: Readings and Retrospectives. New York, NY:
McGraw Hill.
Raj SBE and Portia AA (2011) Analysis on Credit Card Fraud Detection
Methods. International Conference on Computer, Communication and
Electrical Technology – ICCCET2011, 152–156. Available at:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5762457
(accessed 26 April 2018).
Ramirez E, Brill J, Ohlhausen MK, Wright JD and McSweeny T (2014)
Data Brokers: A Call for Transparency and Accountability. Washington,
DC: Federal Trade Commission. Available at:
www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf (accessed 23 July 2018).
Rankin J (2018) Tech firms could face new EU regulations over fake news.
The Guardian, 24 April. Available at:
www.theguardian.com/media/2018/apr/24/eu-to-warn-social-media-firms-over-fake-news-and-data-mining (accessed 15 August 2018).
Rasch M (2003) Foreword. In D Verton (ed.) Black Ice: The Invisible
Threat of Cyber-Terrorism. Emeryville, CA: McGraw-Hill/Osborne.
Rassool RP (2003) Antipiracy – Trends and technology (a report from the
front). Available at
https://www.researchgate.net/publication/242389770_ANTIPIRACY__TRENDS_AND_TECHNOLOGY_A_REPORT_FROM_THE_FRONT
(accessed 20 November 2018).
Raustiala K and Sprigman C (2012) How much do music and movie piracy
really hurt the US economy? Freakonomics Blog, 12 January. Available
at: http://freakonomics.com/2012/01/12/how-much-do-music-and-movie-piracy-really-hurt-the-u-s-economy/ (accessed 20 July 2018).
Rayner G (2011) Norway killer Anders Behring Breivik emailed
‘manifesto’ to 250 British contacts. The Telegraph, 26 July. Available at:
www.telegraph.co.uk/news/worldnews/europe/norway/8664159/Norway-killer-Anders-Behring-Breivik-emailed-manifesto-to-250-British-contacts.html (accessed 20 July 2018).
Reader R (2016) 3 years after Aaron Swartz’s death, here’s what’s
happened to Aaron’s law. Mic, 11 January. Available at:
https://mic.com/articles/132299/3-years-after-aaron-swartz-s-death-here-s-what-s-happened-to-aaron-s-law#.3PaUWkh2D (accessed 12 January
2018).
Reagle J (2017) Naïve meritocracy and the meanings of myth. Ada: A
Journal of Gender, New Media, and Technology 11. Available at:
http://adanewmedia.org/2017/05/issue11-reagle/ (accessed 12 January
2018).
Reaves BA (2015) Local Police Departments, 2013: Personnel, Policies,
and Practices. US Department of Justice. Available at:
www.bjs.gov/content/pub/pdf/lpd13ppp.pdf (accessed 8 December
2017).
Reckless WC (1961) A new theory of delinquency and crime. Federal
Probation 25: 43–46.
Reiman J and Leighton P (2012) The Rich Get Richer and the Poor Get
Prison (10th edn). Boston, MA: Pearson.
Reiss AJ (1951) Delinquency as the failure of personal and social controls.
American Sociological Review 16: 196–207.
Reitinger P (2000) Encryption, anonymity and markets: Law enforcement
and technology in a free market world. In D Thomas and B Loader (eds)
Cybercrime: Law Enforcement, Security and Surveillance in the
Information Age. London: Routledge.
Reyns BW (2018) Routine activity theory and cybercrime: A theoretical
appraisal and literature review. In KF Steinmetz and MR Nobles (eds)
Technocrime and Criminological Theory. New York, NY: Routledge.
Reyns BW, Henson B and Fisher BS (2011) Being pursued online:
Applying cyberlifestyle-routine activities theory to cyberstalking
victimization. Criminal Justice and Behavior 38(11): 1149–1169.
Rhodes R (1997) Understanding Governance: Policy Networks,
Governance, Reflexivity and Accountability. Buckingham: Open University
Press.
RIAA (Recording Industry Association of America) (2012) Scope of the
Problem. Available at:
https://web.archive.org/web/20110513225745/www.riaa.com/physicalpiracy.php?content_selector=piracy-online-scope-of-the-problem (accessed
20 July 2018).
Richmond S (2008) The Internet Watch Foundation must learn from the
Wikipedia debacle. Daily Telegraph, 10 December. Available at:
https://web.archive.org/web/20081211053729/http://blogs.telegraph.co.uk/shane_richmond/blog/2008/12/10/the_internet_watch_foundation_must_learn_from_the_wikipedia_debacle (accessed 20 July 2018).
Rickard D (2017) Targeting Scams: Report of the ACCC on Scams Activity
2016. Australian Competition & Consumer Commission. Available at:
www.accc.gov.au/system/files/1162%20Targeting%20Scams%202017_FA1.pdf (accessed 24 April 2018).
Rickard D (2018) Targeting Scams: Report of the ACCC on Scam Activity
2017. Australian Competition & Consumer Commission. Available at:
www.accc.gov.au/system/files/F1240_Targeting%20scams%20report.PDF (accessed 28 June 2018).
Rid T (2013) Cyber War Will Not Take Place. Oxford: Oxford University Press.
Rid T (2016) Rise of the Machines. New York, NY: W. W. Norton &
Company.
Rieke A, Yu H, Robinson D and von Hoboken J (2016) Data Brokers in an
Open Society. London: Open Society Foundation. Available at:
www.opensocietyfoundations.org/sites/default/files/data-brokers-in-an-open-society-20161121.pdf (accessed 24 July 2018).
Rigg J (2017) How the Digital Economy Act will come between you and
porn. Engadget, 5 May. Available at:
www.engadget.com/2017/05/03/digital-economy-act-explainer/
(accessed 26 February 2018).
Rigg J (2018) UK delays mandatory age verification on porn sites.
Engadget, 12 March. Available at: www.engadget.com/2018/03/12/uk-delays-porn-age-verification/ (accessed 22 May 2018).
Roberts A and Dziegielewski S (1996) Assessment typology and
intervention with the survivors of stalking. Aggression and Violent
Behaviour 1(4): 359–368.
Roberts R (2017) Hate crime targeting UK mosques more than doubled in
past year, figures show. The Independent, 9 October. Available at:
www.independent.co.uk/news/uk/home-news/hate-crime-muslims-mosques-islamist-extremism-terrorism-terror-attacks-a7989746.html
(accessed 29 June 2018).
Robertson D (2016) The Nilson Report. Available at:
www.nilsonreport.com/upload/content_promo/The_Nilson_Report_1017-2016.pdf (accessed 26 October 2017).
Robinson B (2015) IP piracy & developing nations: A recipe for terrorism
funding. Rutgers Law Record 42: 42–80.
Rockwell M (2012) Two-month anti-counterfeit operation nets goods in 43
countries. GSNMagazine, 22 February. Available at:
www.gsnmagazine.com/node/25687?c=border_security (accessed 20
July 2018).
Roderick L (2014) Discipline and power in the digital age: The case of the
US consumer data brokerage industry. Critical Sociology 40(5): 729–
746.
Rodger E (2014) Elliot Rodger’s retribution. YouTube. Available at:
www.youtube.com/watch?v=zExDivIW4FM (accessed 22 May 2018).
Rodionova Z (2017) Cyber security report: Hacking attacks on UK
businesses cost investors £42bn. The Independent, 12 April. Available at:
www.independent.co.uk/news/business/news/cyber-hacking-attack-cost-uk-business-investors-ftse-companies-lose-120-million-a7678921.html
(accessed 16 January 2018).
Rogers M (2000) A New Hacker Taxonomy. Available at:
www.cerias.purdue.edu/homes/mkr/hacker.doc (accessed 20 July 2018).
Rogerson M and Pease K (2004) Privacy, Identity and Crime Prevention.
Available at:
https://web.archive.org/web/20070403200805/www.foresight.gov.uk/previous_projects/Cyber_Trust_and_Crime_Prevention/Reports_and_Publications/Privacy_Identity_and_Crime_Prevention/Michelle_Ken.pdf
(accessed 20 July 2018).
Romei S (1999) Net firms led killer to victim. The Australian, 4–5
December: 19–22.
Ropelato J (n.d.) Internet Pornography Statistics. Available at:
http://www.ministryoftruth.me.uk/wp-content/uploads/2014/03/IFR2013.pdf (accessed 20 November 2018).
Ropelato J (2010) Internet Pornography Statistics. Available at:
https://web.archive.org/web/20110121041013/http://internet-filter-review.toptenreviews.com/internet-pornography-statistics.html (accessed
20 July 2018).
Rorive I (2002) Strategies to tackle racism and xenophobia on the Internet –
where are we in Europe? International Journal of Communications Law
and Policy 7: 1–10.
Rose N (2000) Government and control. British Journal of Criminology 40:
321–339.
Roversi A (2008) Hate on the Net: Extremist Sites, Neo-fascism On-line,
Electronic Jihad. Aldershot: Ashgate.
Ruffin O (2004) Hacktivism, From Here to There. Presentation given at the
Yale Law School CyberCrime and Digital Law Enforcement Conference.
Available at: www.arifyildirim.com/ilt510/oxblood.ruffin.pdf (accessed
23 January 2018).
Runnymede Trust (1997) Islamophobia: A Challenge for Us All. London:
Runnymede Trust.
Rutter J (2000) From the Sociology of Trust Towards a Sociology of ‘e-Trust’.
Available at: http://jasonrutter.co.uk/wp-content/uploads/2015/03/Rutter-2001-From-the-Sociology-of-Trust-Towards-a-Sociology-of-E-trust.pdf
(accessed 20 July 2018).
Ryder N (2011) Financial Crime in the 21st century: Law and Policy.
Cheltenham: Edward Elgar Publishing.
Sandvine (2017) 2017 Global Internet Phenomena – Spotlight: Subscription
Television Piracy. Available at:
www.sandvine.com/hubfs/downloads/reports/internet-phenomena/sandvine-spotlight-subscription-television-piracy.pdf
(accessed 28 February 2018).
Saravalle E (2018) Bitcoin can help terrorists secretly fund their deadly
attacks. Fox News, 9 January. Available at:
www.foxnews.com/opinion/2018/01/09/bitcoin-can-help-terrorists-secretly-fund-their-deadly-attacks.html (accessed 30 January 2018).
Sayyid S and Vakil A (eds) (2011) Thinking Through Islamophobia: Global
Perspectives. London: Hurst.
Schafer J (2002) Spinning the web of hate: Web-based hate propagation by
extremist organizations. Journal of Criminal Justice and Popular Culture
9(2): 69–88.
Schell BH and Holt TJ (2010) A profile of the demographics, psychological
predispositions, and social/behavioral patterns of computer hacker
insiders and outsiders. In TJ Holt and BH Schell (eds) Corporate Hacking
and Technology-Driven Crime: Social Dynamics and Implications.
Hershey, PA: IGI Global.
Schell BH and Melnychuk J (2010) Female and male hacker conference
attendees: Their autism-spectrum quotient (AQ) scores and self-reported
adulthood experiences. In TJ Holt and BH Schell (eds) Corporate
Hacking and Technology-Driven Crime: Social Dynamics and
Implications. Hershey, PA: IGI Global.
Schell BH, Dodge JL and Moutsatsos SS (2002) The Hacking of America:
Who’s Doing It, Why, and How. Westport, CT: Quorum Books.
Schmid A (1993) The response problem as a definition problem. In A
Schmid and R Crelinsten (eds) Western Responses to Terrorism. London:
Frank Cass.
Schneier B (2000) Secrets and Lies: Digital Security in a Networked World.
New York, NY: Wiley.
Schwartz H (2003) The Revolt of the Primitive: An Inquiry into the Roots
of Political Correctness. Piscataway, NJ: Transaction.
Schwendinger H and Schwendinger J (2014) Who Killed the Berkeley
School? Struggles over Radical Criminology. Brooklyn, NY: Thought
Crimes.
Scuka D (2002) Auctions booming, but so are the crooks. Japan Inc., April.
Available at:
https://web.archive.org/web/20030706184614/www.japaninc.net/article.php?articleID=776 (accessed 20 July 2018).
Search Engine Journal (2018) Meet the 7 most popular search engines in
the world. Search Engine Journal, 7 January. Available at:
www.searchenginejournal.com/seo-101/meet-search-engines/ (accessed
23 July 2018).
Sekulow J (2004) Pornography on the net. Issues in Science and
Technology, Spring: 5.
Selman D and Leighton P (2010) Punishment for Sale: Private Prisons, Big
Business, and the Incarceration Binge. Lanham, MD: Rowman &
Littlefield.
Selyukh A (2016) A year after San Bernardino and Apple-FBI, where
are we on encryption? NPR, 3 December. Available at:
www.npr.org/sections/alltechconsidered/2016/12/03/504130977/a-year-after-san-bernardino-and-apple-fbi-where-are-we-on-encryption
(accessed 25 July 2018).
Seto MC, Buckman C, Dwyer RG and Quayle E (2018) Production and
Active Trading of Child Sexual Exploitation Images Depicting Identified
Victims. National Center for Missing & Exploited Children. Available at:
www.missingkids.com/content/dam/ncmec/en_us/Production%20and%2
0Active%20Trading%20of%20CSAM_FullReport_FINAL.pdf (accessed
23 May 2018).
Seto MC, Hanson RK and Babchishin KM (2011) Contact sexual offending
by men with online sexual offenses. Sexual Abuse: A Journal of
Research and Treatment 23(1): 124–145.
Shadish WR, Cook TD and Campbell DT (2002) Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Belmont, CA:
Wadsworth.
Shane S, Perlroth N and Sanger DE (2017) Security breach and spilled
secrets have shaken the NSA to its core. New York Times, 12 November.
Available at: www.nytimes.com/2017/11/12/us/nsa-shadow-brokers.html
(accessed 16 January 2018).
Shelley L (2003) Organized crime, terrorism and cybercrime. In A Bryden
and P Fluri (eds) Security Sector Reform: Institutions, Society and Good
Governance. Baden-Baden: Nomos.
Shelley M (1999) Frankenstein. Ware: Wordsworth.
Shields R (ed.) (1996) Cultures of the Internet: Virtual Spaces, Real
Histories, Living Bodies. London: Sage.
Siddiqui S (2015) Congress passes NSA surveillance reform in vindication
for Snowden. The Guardian, 3 June. Available at:
www.theguardian.com/us-news/2015/jun/02/congress-surveillance-reform-edward-snowden (accessed 31 July 2018).
Silic M, Barlow JB and Back A (2017) A new perspective on neutralization
and deterrence: Predicting shadow IT usage. Information &
Management, Online First. DOI:
https://doi.org/10.1016/j.im.2017.02.007.
Simon T (2017) Critical Infrastructure and the Internet of Things. Center
for International Governance Innovation and Chatham House. Available
at:
www.cigionline.org/sites/default/files/documents/GCIG%20no.46_0.pdf
(accessed 2 February 2018).
Singel R (2009) Is the hacking threat to national security overblown?
Wired, 3 June. Available at:
www.wired.com/threatlevel/2009/06/cyberthreat/ (accessed 20 July
2018).
Singleton RA and Straits BC (2017) Approaches to Social Research (6th
edn). New York, NY: Oxford University Press.
Sinrod E and Reilly W (2000) Cyber-crimes: A practical approach to the
application of federal computer crime laws. Santa Clara Computer and
High Technology Law Journal 16(2): 1–53.
Skare E (2018) Digital surveillance/militant resistance: Categorizing the
‘proto-state hacker’. Television & New Media, Online First: 1–16. DOI:
10.1177/1527476418793509.
Skibell R (2002) The myth of the computer hacker. Information,
Communication and Society 5: 336–356.
Skibell R (2003) Cybercrime and misdemeanors: A reevaluation of the
Computer Fraud and Abuse Act. Berkeley Technology Law Journal 18:
909–944.
Smallridge JL and Roberts JR (2013) Crime specific neutralizations: An
empirical examination of four types of digital piracy. International
Journal of Cyber Criminology 7(2): 125–140.
Smith H (2014) Massive Target hack traced back to phishing email.
Huffington Post, 12 February. Available at:
www.huffingtonpost.com/2014/02/12/target-hack_n_4775640.html
(accessed 30 April 2018).
Smith R (2003) Travelling in Cyberspace on a False Passport: Controlling
Transnational Identity-related Crime. Paper presented at the British
Criminology Conference Selected Proceedings, Keele, July 2002.
Available at: www.britsoccrim.org/volume5/004.pdf (accessed 20 July
2018).
Smith R (2006) Biometric solutions to identity-related cybercrime. In Y
Jewkes (ed.) Crime Online. Cullompton: Willan.
Smith R (2010) Identity theft and fraud. In Y Jewkes and M Yar (eds)
Handbook of Internet Crime. Cullompton: Willan.
Smith R and Urbas G (2001) Controlling Fraud on the Internet: A CAPA
Perspective. Canberra: Australian Institute of Criminology.
Smith SG, Chen J, Basile KC, Gilbert LK, Merrick MT, Patel N, Walling M
and Jain A (2017) The National Intimate Partner and Sexual Violence
Survey (NISVS): 2010–2012 State Report. Atlanta, GA: National Center
for Injury Prevention and Control, Centers for Disease Control and
Prevention.
Snyder F (2001) Sites of criminality and sites of governance. Social and
Legal Studies 10: 251–256.
Snyder M (2004) Pirates of the 21st century. NY Metro Parents, 21
November. Available at: www.nymetroparents.com/article/Pirates-of-the-21st-Century-Teaching-Kids-about-Cyberethics (accessed 20 July 2018).
Söderberg J (2008) Hacking Capitalism: The Free and Open Source
Software Movement. New York, NY: Routledge.
Solon O (2017) Marcus Hutchins: Cybersecurity experts rally around
arrested WannaCry ‘hero’. The Guardian, 11 August. Available at:
www.theguardian.com/technology/2017/aug/11/marcus-hutchins-arrested-wannacry-kronos-cybersecurity-experts-react (accessed 11
January 2018).
Sommerlad J (2018) Pepe the Frog creator sues InfoWars for breach of
copyright. The Independent, 7 March. Available at:
www.independent.co.uk/life-style/gadgets-and-tech/news/pepe-the-frog-infowars-copyright-lawsuit-alt-right-alex-jones-matt-furie-white-supremacists-neo-a8244001.html (accessed 22 May 2018).
Sodipo B (1997) Piracy and Counterfeiting: GATT TRIPS and Developing
Countries. London: Kluwer.
Spectorsoft (2011) Survey reveals top parental concerns about kids online.
Available at:
https://web.archive.org/web/20140309011739/http://www.spectorsoft.com/mkt/survey.pdf (accessed 29 June 2018).
Spitzberg B (2002) The tactical topography of stalking victimization and
management. Trauma, Violence and Abuse 3(4): 261–288.
Spitzberg B and Cadiz M (2002) The media construction of stalking
stereotypes. Journal of Criminal Justice and Popular Culture 9(3): 128–
149.
Spitzer S (1975) Toward a Marxian theory of deviance. Social Problems 22:
638–651.
SPLC (Southern Poverty Law Center) (2018) The Year in Hate. Available
at: www.splcenter.org/news/2018/02/21/year-hate-trump-buoyed-white-supremacists-2017-sparking-backlash-among-black-nationalist (accessed
21 May 2018).
Stalder F (1998) The Logic of Networks: Social Landscapes vis-a-vis the
Space of Flows. Available at: www.ctheory.net/articles.aspx?id=263
(accessed 20 July 2018).
Stallman R (2002) Free Software, Free Society: Selected Essays of Richard
M. Stallman. Boston, MA: Free Software Foundation.
Stanley J (2002) Child abuse and the Internet. Journal of the Health
Education Institute of Australia 9(1): 5–27.
Stark PB (2007) The Effectiveness of Internet Content Filters. Washington,
DC: US Department of Justice. Available at:
www.stat.berkeley.edu/~stark/Preprints/filter07.pdf (accessed 16 May
2018).
Steffensmeier D and Allan E (1996) Gender and crime: Toward a gendered
theory of female offending. Annual Review of Sociology 22: 459–487.
Stein D (2014) A spectre is haunting law and society: Revisiting radical
criminology at UC Berkeley. Social Justice 40: 72–84.
Steinhardt B (2000) Hate speech. In Y Akdeniz, C Walker and D Wall (eds)
The Internet, Law and Society. Harlow: Longman.
Steinmetz KF (2016) Hacked: A Radical Approach to Hacker Culture and
Crime. New York, NY: NYU Press.
Steinmetz KF and Gerber J (2015) ‘It doesn’t have to be this way’: Hacker
perspectives on privacy. Social Justice 41(3): 29–51.
Steinmetz KF and Nobles MR (2018) Introduction. In KF Steinmetz and
MR Nobles (eds) Technocrime and Criminological Theory. New York,
NY: Routledge.
Steinmetz KF and Tunnell KD (2013) Under the pixelated Jolly Roger: A
study of on-line pirates. Deviant Behavior 34: 53–67.
Stepanova E (2011) The Role of Information Communication Technologies
in the ‘Arab Spring’. PONARS Eurasia Policy Memo No. 159, May.
Available at: http://pircenter.org/kosdata/page_doc/p2594_2.pdf
(accessed 20 July 2018).
Sterling B (1994) The Hacker Crackdown: Law and Disorder on the
Electronic Frontier. Harmondsworth: Penguin.
Stieger S, Burger C and Schild A (2008) Lifetime prevalence and impact of
stalking: Epidemiological data from Eastern Austria. The European
Journal of Psychiatry 22(4): 235–241.
Stohl M (2006) Cyber terrorism: A clear and present danger, the sum of all
fears, breaking point, or patriot games? Crime, Law and Social Change
46: 223–238.
Stoker G (2008) Governance Theory and Practice: A Cross-Disciplinary
Approach. London: Palgrave Macmillan.
Story A (2003) Burn Berne: Why the leading international copyright
convention must be replaced. Houston Law Review 40(3): 763–801.
Stosh B (2015) Hundreds of Israeli sites hacked for Anonymous #OpIsrael.
FreedomHacker.net, 7 April. Available at:
https://freedomhacker.net/hundreds-of-israeli-sites-hacked-for-anonymous-opisrael-3922/ (accessed 11 January 2018).
Stratford J (2003) Internet Surveillance: Recent US Developments.
Available at: www.iassistdata.org/sites/default/files/iqvol273stratford.pdf
(accessed 20 July 2018).
Stretesky PB, Long MA and Lynch MJ (2013) The Treadmill of Crime:
Political Economy and Green Criminology. New York, NY: Routledge.
Sullivan M (2012) Facebook IPO: Why your data is worth $104 billion. PC
World, 18 May. Available at:
www.pcworld.com/article/255783/facebook_ipo_why_your_data_is_worth_104_billion.html (accessed 20 July 2018).
Sutherland EH (1939) Principles of Criminology (3rd edn). Philadelphia, PA:
Lippincott.
Sutherland EH (1947) Principles of Criminology (4th edn). Philadelphia, PA:
Lippincott.
Sutherland E and Cressey D (1974) Principles of Criminology (9th edn).
Philadelphia, PA: Lippincott.
Sutherland EH, Cressey DR and Luckenbill DF (1992) Principles of
Criminology (11th edn). Lanham, MD: General Hall.
Sutter G (2003) Don’t shoot the messenger? The UK and online
intermediary liability. International Review of Law, Computers and
Technology 17(1): 73–84.
Sutton M (1998) Handling Stolen Goods and Theft: A Market Reduction
Approach. Home Office Research Study No. 69. London: Home Office.
Sutton M, Schneider J and Hetherington S (2001) Tackling Theft with the
Market Reduction Approach. Home Office Crime Reduction Research
Series Paper 8. London: Home Office.
Swartz N (2004) Patriot Act provision ruled unconstitutional. The
Information Management Journal, November/December: 6.
Swinford S (2016) Internet trolls who whip up ‘Virtual Mobs’ or post
photoshopped images face jail. The Telegraph, 10 October. Available at:
www.telegraph.co.uk/news/2016/10/09/internet-trolls-who-whip-up-virtual-mobs-or-post-photoshopped-im/ (accessed 1 June 2018).
Sykes C (1999) The End of Privacy. New York, NY: St. Martin’s Press.
Sykes G and Matza D (1957) Techniques of neutralization: A theory of
delinquency. American Sociological Review 22: 664–670.
Tait A (2018) ‘Let’s Colonise MySpace!’: Inside the alt-right’s Internet. The
New Statesman, 13 January. Available at:
www.newstatesman.com/science-tech/2018/01/let-s-colonise-myspace-inside-alt-right-s-internet (accessed 21 May 2018).
Tanczer LM (2016) Hacktivism and the male-only stereotype. New Media
& Society 18(8): 1599–1615.
Tannenbaum F (1938/2010) Crime and the community. Republished in H
Copes and V Topalli (eds) Criminological Theory: Readings and
Retrospectives. New York, NY: McGraw-Hill.
Tarrant S (2016) The Pornography Industry: What Everyone Needs to
Know. Oxford: Oxford University Press.
Taxpayers for Common Sense (2017) Cyber Spending Database. Available
at: www.taxpayer.net/national-security/cyberspending-database/
(accessed 18 July 2018).
Taylor M and Quayle E (2003) Child Pornography: An Internet Crime. New
York, NY: Brunner-Routledge.
Taylor M, Quayle E and Holland G (2001) Child pornography: The Internet
and offending. Recherche sur les Politiques 2(2): 94–100.
Taylor P (1999) Hackers: Crime in the Digital Sublime. London: Routledge.
Taylor P (2000) Hackers – cyberpunks or microserfs? In D Thomas and B
Loader (eds) Cybercrime: Law Enforcement, Security and Surveillance
in the Information Age. London: Routledge.
Taylor P (2003) Maestros or misogynists? Gender and the social
construction of hacking. In Y Jewkes (ed.) Dot.cons: Crime, Deviance
and Identity on the Internet. Cullompton: Willan.
Taylor P (2004a) Hacktivism – resistance is fertile? In C. Sumner (ed.) The
Blackwell Companion to Criminology. Oxford: Blackwell.
Taylor P (2004b) Keyboard protest: Hacktivist spiders on the Web. In J
Carter and D Moreland (eds) Anti-Capitalist Britain. Cheltenham: New
Clarion Press.
Tcherni M, Davies A, Lopes G and Lizotte A (2016) The dark figure of
online property crime: Is cyberspace hiding a crime wave? Justice
Quarterly 33(5): 890–911.
Tell MAMA (Measuring Anti-Muslim Attacks) (2016) A Constructed
Threat: Identity, Prejudice and the Impact of Anti-Muslim Hatred: Tell
MAMA Annual Report 2016. Available at: https://tellmamauk.org/wp-content/uploads/2017/11/A-Constructed-Threat-Identity-Intolerance-and-the-Impact-of-Anti-Muslim-Hatred-Web.pdf (accessed 29 June 2018).
TERA Consultants (2010) Building a Digital Economy: The Importance of
Saving Jobs in the EU’s Creative Industries. Available at:
www.teraconsultants.fr/en/issues/Report-for-the-International-Chamber-of-Commerce-European-Study-on-the-Impact-of-Piracy-in-the-EU
(accessed 26 February 2018).
Testa A, Maimon D, Sobesto B and Cukier M (2017) Illegal roaming and
file manipulation on target computers. Criminology & Public Policy
16(3): 689–726.
Thomas D (2002) Hacker Culture. Minneapolis, MN: University of
Minnesota Press.
Thomas DR, Beresford AR and Rice A (2015) Security metrics for the
Android ecosystem. SPSM’15 Proceedings of the 5th Annual ACM CCS
Workshop on Security and Privacy in Smartphones and Mobile Devices:
87–98.
Thomas D and Loader B (2000) Introduction. In D Thomas and B Loader
(eds) Cybercrime: Law Enforcement, Security and Surveillance in the
Information Age. London: Routledge.
Thomas G (2015) Gideon’s Spies: The Secret History of the Mossad. New
York, NY: Thomas Dunne Books/St. Martin’s Griffin.
Thompson I (2017) Hackers nick $60m from Taiwanese bank in tailored
SWIFT attack. The Register, 11 October. Available at:
www.theregister.co.uk/2017/10/11/hackers_swift_taiwan/ (accessed 16
January 2018).
Thomson I (2004) Britain becoming a nation of pirates. Computer Reseller
News, 8 September. Available at:
https://web.archive.org/web/20050126025419/www.crn.vnunet.com/news/1157189.
Thornberry TP and Krohn MD (2000) The self-report method for
measuring delinquency and crime. Criminal Justice 2000 4: 33–83.
Report given to the National Institute of Justice (No. NCJ 182411).
Thornburgh D and Lin H (2004) Youth, pornography and the Internet.
Issues in Science and Technology Winter: 43–48.
Thorpe V (2011) Illegal movie downloads ‘threaten the future of British
film market’. The Guardian, 13 March. Available at:
www.guardian.co.uk/film/2011/mar/13/illegal-downloads-threaten-british-film (accessed 20 July 2018).
Tiku N (2018) Europe’s new privacy law will change the web, and more.
Wired, 19 March. Available at: www.wired.com/story/europes-new-privacy-law-will-change-the-web-and-more/ (accessed 24 July 2018).
TIMJ (2004) US Government still mining data. The Information
Management Journal, July/August: 7–8.
Timofeeva Y (2003) Hate speech online: restricted or protected?
Comparison of regulations in the United States and Germany. Journal of
Transnational Law and Policy 12(2): 253–286.
Tjaden P (1997) The crime of stalking: How big is the problem? National
Institute of Justice Research Review, November: 1–12.
Tokunaga RS and Aune KS (2017) Cyber-defense: A taxonomy of tactics
for managing cyberstalking. Journal of Interpersonal Violence 32(10):
1451–1475.
Toronto Police Service (2013) Project Spade Saves Children. Available at:
www.torontopolice.on.ca/modules.php?op=modload&name=News&file=article&sid=7171 (accessed 23 May 2018).
Townsend M (2012) Security services to get more access to monitor emails
and social media. The Guardian, 28 July. Available at:
www.guardian.co.uk/technology/2012/jul/28/security-services-emails-social-media?newsfeed=true (accessed 20 July 2018).
TraCCC (Transnational Crime and Corruption Centre) (2001) Transnational
Crime, Corruption and Information Technology. Washington, DC:
TraCCC.
Treadwell J (2011) From the car boot to booting it up?: eBay, online
counterfeit crime, and the transformation of the criminal marketplace.
Criminology & Criminal Justice 12(2): 175–191.
Trump DJ (2017) Presidential Executive Order on Strengthening the
Cybersecurity of Federal Networks and Critical Infrastructure (Executive
Order No. 13800). Available at: www.whitehouse.gov/presidential-actions/presidential-executive-order-strengthening-cybersecurity-federal-networks-critical-infrastructure/ (accessed 14 February 2018).
Tsesis A (2002) Destructive Messages: How Hate Speech Paves the Way
for Harmful Social Movements. New York: NYU Press.
Tudor A (1989) Monsters and Mad Scientists: A Cultural History of the
Horror Film. Oxford: Blackwell.
Tunnell KD (2002) The impulsiveness and routinization of decision-making. In AR Piquero and SG Tibbetts (eds) Rational Choice and
Criminal Behavior: Recent Research and Future Challenges. New York,
NY: Routledge.
Turgeman-Goldschmit O (2011) Identity construction among hackers. In K
Jaishankar (ed.) Cyber Criminology: Exploring Internet Crimes and
Criminal Behavior. Boca Raton, FL: CRC Press.
Turkle S (1984) The Second Self: Computers and the Human Spirit.
London: Granada.
Turkle S (1995) Life on the Screen: Identity in the Age of the Internet. New
York: Simon and Schuster.
Turner TC, Smith MA, Fisher D and Welser HT (2005) Picturing Usenet:
Mapping computer-mediated collective action. Journal of Computer-Mediated Communication 10(4): 1048.
Twist J (2003) Cracking the hacker underground. BBC News, 14
November. Available at:
http://news.bbc.co.uk/2/hi/technology/3246375.stm (accessed 20 July
2018).
Twitter (2018) Twitter Announces Fourth Quarter and Fiscal Year 2017
Results. 8 February. Available at:
http://files.shareholder.com/downloads/AMDA-2F526X/6347880716x0x970887/F821AE20-8D83-4246-B8EC-E34A75D64C34/Q4_2017_Earnings_Press_Release.pdf (accessed 23
July 2018).
Uhlig R (2004) New email virus ‘is biggest threat yet’. The Telegraph, 28
January. Available at:
www.telegraph.co.uk/news/worldnews/1452853/New-email-virus-is-biggest-threat-yet.html (accessed 23 July 2018).
UKDOT (United Kingdom Department for Transport) (2017) Reported
Road Casualties by Road User Type and Severity: Great Britain [Excel
Spreadsheet]. Available at:
www.gov.uk/government/uploads/system/uploads/attachment_data/file/665201/ras30001.ods (accessed 6 June 2018).
UKFCMC (UK Fraud Costs Measurement Committee) (2017) Annual
Fraud Indicator 2017. Available at: www.croweclarkwhitehill.co.uk/wp-content/uploads/sites/2/2017/11/Annual-fraud-indicator-2017.pdf
(accessed 24 April 2018).
UK Parliament (2002) Hansard 377 (75). Available at:
www.publications.parliament.uk/pa/cm200102/cmhansrd/vo020108/text/
20108w61.htm (accessed 20 July 2018).
UNICEF (United Nations Children’s Emergency Fund) (2016) Annual
Report 2016. New York, NY: UNICEF. Available at:
www.unicef.org/publications/files/UNICEF_Annual_Report_2016.pdf
(accessed 5 June 2018).
UNICEF (United Nations Children’s Emergency Fund) (2018) Convention
on the Rights of the Child – FAQs and Resources. Available at:
www.unicef.org/crc/index_30225.html (accessed 25 May 2018).
UNODC (United Nations Office on Drugs and Crime) (2018a) Cybercrime
Repository. Available at: www.unodc.org/cld/v3/cybrepo/ (accessed 5
January 2018).
UNODC (United Nations Office on Drugs and Crime) (2018b) Global
Programme on Cybercrime. Available at:
www.unodc.org/unodc/en/cybercrime/global-programme-cybercrime.html (accessed 5 January 2018).
USAG (United States Attorney General’s Office) (2016) United States
Attorneys’ Annual Statistical Report. Available at:
www.justice.gov/usao/page/file/988896/download (accessed 26 February
2018).
USAODM (US Attorney’s Office – District of Massachusetts) (2011)
Alleged Hacker Charged with Stealing over Four Million Documents
from MIT Network. Available at:
https://web.archive.org/web/20120526080523/http://www.justice.gov/usao/ma/news/2011/July/SwartzAaronPR.html (accessed 12 January 2018).
USDHHS (US Department of Health & Human Services) (2003) Summary
of the HIPAA Privacy Rule. Available at:
www.hhs.gov/sites/default/files/privacysummary.pdf (accessed 24 July
2018).
USDOD (United States Department of Defense) (2015) The DoD Cyber
Strategy. Available at:
www.defense.gov/Portals/1/features/2015/0415_cyber-strategy/Final_2015_DoD_CYBER_STRATEGY_for_web.pdf (accessed
7 February 2018).
USDOJ (US Department of Justice) (2013) eBay Fraudster Sentenced for
Fencing Stolen Property. Available at: www.justice.gov/usao-ndga/pr/ebay-fraudster-sentenced-fencing-stolen-property (accessed 25
April 2018).
USDOJ (US Department of Justice) (2016) United States Attorneys’
Annual Statistical Report. Washington DC: US Department of Justice.
Available at: www.justice.gov/usao/page/file/988896/download
(accessed 4 December 2017).
USDS (US Department of State) (1997) Nigerian Advanced Fee Fraud.
Department of State Publication 10465. Washington, DC: US State
Department.
USSC (United States Sentencing Commission) (2012) Report on the
Continuing Impact of United States v. Booker on Federal Sentencing –
Part C: Child Pornography Offenses. Available at:
www.ussc.gov/sites/default/files/pdf/news/congressional-testimony-and-reports/booker-reports/2012-booker/Part_C11_Child_Pornography_Offenses.pdf (accessed 24 May
2018).
USSD (United States State Department) (2016) Thailand 2016 Human
Rights Report. Washington, DC: US State Department. Available at:
www.state.gov/documents/organization/265588.pdf (accessed 24 May
2018).
USSD (United States State Department) (2017a) Belarus 2017 Human
Rights Report. Washington, DC: US State Department. Available at:
www.state.gov/documents/organization/277387.pdf (accessed 24 May
2018).
USSD (United States State Department) (2017b) Estonia 2017 Human
Rights Report. Washington, DC: US State Department. Available at:
www.state.gov/documents/organization/277405.pdf (accessed 24 May
2018).
USSD (United States State Department) (2017c) Serbia 2017 Human Rights
Report. Washington, DC: US State Department. Available at:
www.state.gov/documents/organization/277459.pdf (accessed 29 May
2018).
Uyehara M (2018) How free speech warriors mainstreamed white
supremacists. GQ, 8 May. Available at: www.gq.com/story/how-free-speech-warriors-mainstreamed-white-supremacists (accessed 15 August
2018).
Vaas L (2017) It is not OK to break the law to catch criminals, judge rules.
Naked Security, 8 June. Available at:
https://nakedsecurity.sophos.com/2017/06/08/it-is-not-ok-to-break-the-law-to-catch-criminals-judge-rules/ (accessed 4 June 2018).
Vaidhyanathan S (2003) Copyrights and Copywrongs: The Rise of
Intellectual Property and How it Threatens Creativity. New York: New
York University Press.
Valdez A (2018) Everything you need to know about Facebook and
Cambridge Analytica. Wired, 23 March. Available at:
www.wired.com/story/wired-facebook-cambridge-analytica-coverage/
(accessed 26 July 2018).
Valenti J (2002) If You Cannot Protect What You Own, You Don’t Own
Anything. Report presented to the US Senate Committee on Commerce,
Science and Transportation, on behalf of the MPAA, 28 February.
Available at:
https://web.archive.org/web/20040208090728/www.mpaa.org/jack/2002/
2002_02_28b.htm (accessed 21 July 2018).
Van Blarcum C (2005) Internet hate speech: The European framework and
the emerging American haven. Washington and Lee Law Review 62:
781–829.
Van der Ploeg I (2000) Biometrics and Privacy: A Note on the Politics of
Theorizing Technology. Available at:
https://web.archive.org/web/20040113140713/www.bmg.eur.nl/smw/publications/vdp_01.pdf (accessed 21 July 2018).
Van Dijk J and Hacker K (2003) The digital divide as a complex and
dynamic phenomenon. The Information Society 19: 315–326.
Varma N and Ricci RP (2013) Telemedicine and cardiac implants: What is
the benefit? European Heart Journal 34(25): 1885–1895.
Vassil K, Solvak M, Vinkel P, Trechsel AH and Alvarez RM (2016) The
diffusion of internet voting. Usage patterns of internet voting in Estonia
between 2005 and 2015. Government Information Quarterly 33(3): 453–
459.
Vegh S (2002) Hacktivists or cyberterrorists? The changing media discourse
on hacking. First Monday, 7 October. Available at:
http://firstmonday.org/ojs/index.php/fm/article/view/998 (accessed 21
July 2018).
Verton D (2002) The Hacker Diaries: Confessions of Teenage Hackers.
Berkeley, CA: McGraw-Hill/Osborne.
Verton D (2003) Black Ice: The Invisible Threat of Cyber-Terrorism.
Emeryville, CA: McGraw-Hill/Osborne.
Voiskounsky A, Babeva J and Smyslova O (2000) Attitudes towards
computer hacking in Russia. In D Thomas and B Loader (eds)
Cybercrime: Law Enforcement, Security and Surveillance in the
Information Age. London: Routledge.
Vysotsky S and McCarthy AL (2016) Normalizing cyberracism: A
neutralization theory analysis. Journal of Crime and Justice 40(4): 446–
461.
Walden I (2010) Computer forensics and the presentation of evidence in
criminal cases. In Y Jewkes and M Yar (eds) Handbook of Internet
Crime. Cullompton: Willan.
Wales E (2001) Global focus on cybercrime. Computer Fraud and Security
2001(1): 6.
Walker C (2001) Encryption and the Regulation of Investigatory Powers
Act 2000. Paper presented at the 16th BILETA Annual Conference, 9–10
April, Edinburgh, Scotland.
Walker C, Wall D and Akdeniz Y (2000) The Internet, law and society. In Y
Akdeniz, C Walker and D Wall (eds) The Internet, Law and Society.
Harlow: Longman.
Walker J (2012) Unmasking anonymous. ITNOW 54(2): 28–29.
Walker P (2011) Online dating scams dupe 200,000, study finds. The
Guardian, 28 September. Available at:
www.guardian.co.uk/lifeandstyle/2011/sep/28/online-dating-scams-study
(accessed 21 July 2018).
Wall D (2001a) Cybercrimes and the Internet. In D Wall (ed.) Crime and the
Internet. London: Routledge.
Wall D (2001b) Maintaining order and law on the Internet. In D Wall (ed.)
Crime and the Internet. London: Routledge.
Wall D (ed.) (2001c) Crime and the Internet. London: Routledge.
Wall DS (2007) Cybercrime: The Transformation of Crime in the
Information Age. Cambridge: Polity.
Wall DS (2008a) Cybercrime and the culture of fear: Social science
fiction(s) and the production of knowledge about cybercrime.
Information, Communication & Society 11(6): 861–884.
Wall DS (2008b) Cybercrime, media, and insecurity: The shaping of public
perceptions of cybercrime. International Review of Law, Computers, and
Technology 22: 45–63.
Wall D (2010) Micro-frauds: Virtual robberies, stings and scams in the
information age. In T Holt and B Schell (eds) Corporate Hacking and
Technology-Driven Crime: Social Dynamics and Implications. Hershey,
PA: IGI Global.
Wall DS (2012) ‘The Devil drives a Lada’: The social construction of
hackers as cyber-criminals. In C Gregoriou (ed.) Constructing Crime:
Discourse and Cultural Representations of Crime and ‘Deviance’.
Basingstoke, UK: Palgrave Macmillan.
Walter N (2010) Living Dolls: The Return of Sexism. London: Virago.
Wark M (1997) Cyberhype: The packaging of cyberpunk. In A Crawford
and R Edgar (eds) Transit Lounge. North Ryde, NSW, Australia:
Craftsman House, pp. 154–157.
Warnick BR (2004) Technological metaphors and moral education: The
hacker ethic and the computational experience. Studies in Philosophy
and Education 23: 265–281.
Warschauer M (2004) Technology and Social Inclusion: Rethinking the
Digital Divide. Boston, MA: MIT Press.
Washington Post (2017) Deconstructing the symbols and slogans spotted in
Charlottesville. Washington Post, 18 August. Available at:
www.washingtonpost.com/graphics/2017/local/charlottesville-videos/?
utm_term=.c6f953df73c9 (accessed 22 May 2018).
Wasik M (2000) Hacking, viruses and fraud. In Y Akdeniz, C Walker and D
Wall (eds) The Internet, Law and Society. Harlow: Longman.
Wastler S (2010) The harm in ‘sexting’?: Analyzing the constitutionality of
child pornography statutes that prohibit the voluntary production,
possession, and dissemination of sexually explicit images by teenagers.
Harvard Journal of Law and Gender 33: 687–702.
Webber C and Vass J (2010) Crime, film and the cybernetic imagination. In
Y Jewkes and M Yar (eds) Handbook of Internet Crime. Cullompton:
Willan.
Webster F (2003) Theories of the Information Society (2nd edn). London:
Routledge.
Weimann G (2004) www.terror.net. United States Institute of Peace, Special
Report 116. Washington, DC: USIP.
Weiner R (2016) Hacker who sent ‘kill list’ of US military personnel to
ISIS: ‘I feel so bad’. The Washington Post, 23 September. Available at:
www.washingtonpost.com/local/public-safety/hacker-who-sent-kill-list-of-us-military-personnel-to-islamic-state-i-feel-so-bad/2016/09/23/dc0ba0ea-8196-11e6-b002-307601806392_story.html?
utm_term=.167b6a7f0d18 (accessed 30 January 2018).
Weise E (2017) Terrorists use the dark web to hide. USA Today, 28 March.
Available at: www.usatoday.com/story/tech/news/2017/03/27/terrorists-
use-dark-web-hide-london-whatsapp-encryption/99698672/ (accessed 30
January 2018).
Weise E (2018) Facebook fires engineer in possible online stalking case
involving Tinder. USA Today, 3 May. Available at:
www.usatoday.com/story/tech/news/2018/05/03/facebook-fires-engineer-possible-online-stalking-case-tinder/577697002/ (accessed 4 June 2018).
Weisman C (2015) Why porn is exploding in the Middle East.
Salon/Alternet, 15 January. Available at:
www.salon.com/2015/01/15/why_porn_is_exploding_in_the_middle_east_partner/ (accessed 22 May 2018).
Weisman VL (2014) The Hacker Wars [documentary]. Available at:
www.youtube.com/watch?v=ku9edEKvGuY (accessed 16 May 2018).
Wendel W (2004) The banality of evil and the First Amendment. Michigan
Law Review 102: 1404–1422.
West C and Zimmerman DH (1987) Doing gender. Gender & Society 1:
125–151.
Whine M (2000) Far right extremists on the Internet. In D Thomas and B
Loader (eds) Cybercrime: Law Enforcement, Security and Surveillance
in the Information Age. London: Routledge.
White J (1991) Terrorism: An Introduction. Pacific Grove, CA:
Brooks/Cole.
Whittaker R (2000) The End of Privacy: How Total Surveillance is
Becoming a Reality. New York: The New Press.
Whittaker Z (2012) ‘Censorship creep’: Pirate Bay block will affect one-third of UK. ZDNet, 27 June. Available at:
www.zdnet.com/blog/london/censorship-creep-pirate-bay-block-will-affect-one-third-of-uk/5235 (accessed 21 July 2018).
Whitty MT (2013) The scammers persuasive techniques model. British
Journal of Criminology 53: 665–684.
Whitty MT and Buchanan T (2012) The Psychology of the Online Dating
Romance Scam. Leicester, UK: University of Leicester. Available at:
www2.le.ac.uk/departments/media/people/monicawhitty/Whitty_romance_scam_report.pdf (accessed 25 April 2018).
Whitty MT and Buchanan T (2016) The online dating romance scam: The
psychological impact on victims – both financial and non-financial.
Criminology & Criminal Justice 16(2): 176–194.
Wiener N (1948) Cybernetics: Or Control and Communication in the
Animal and the Machine. New York, NY: John Wiley & Sons.
Wikipedia (2008) Internet Watch Foundation and Wikipedia. Available at:
http://en.wikipedia.org/wiki/Internet_Watch_Foundation_and_Wikipedia
(accessed 21 July 2018).
Wilkins J (1997) Protecting our children from internet smut: Moral duty or
moral panic? The Humanist 57(5): 4.
Williams C (2012) Ofcom presses ahead with Digital Economy Act piracy
crackdown. The Telegraph, 26 June. Available at:
www.telegraph.co.uk/technology/news/9356094/Ofcom-presses-aheadwith-Digital-Economy-Act-piracy-crackdown.html (accessed 21 July
2018).
Williams M (2010) The virtual neighbourhood watch: Netizens in action. In
Y Jewkes and M Yar (eds) Handbook of Internet Crime. Cullompton:
Willan.
Williams M, Levi M, Burnap P and Gundur RV (2018) Under the corporate
radar: Examining insider business cybercrime victimization through an
application of Routine Activities Theory. Deviant Behavior. DOI:
10.1080/01639625.2018.1461786.
Willits D and Nowacki J (2016) The use of specialized cybercrime policing
units: An organizational analysis. Criminal Justice Studies 29(2): 105–
124.
Wilson R (2009) Sex play in virtual worlds. Washington and Lee Law
Review 66: 1127–1174.
Wilson T, Maimon D, Sobesto B and Cukier M (2015) The effect of a
surveillance banner in an attacked computer system. Journal of Research
in Crime and Delinquency 52: 829–855.
Wilson WJ (1996) When Work Disappears. New York: Knopf.
WIPO (World Intellectual Property Organization) (2001) WIPO Intellectual
Property Handbook: Policy, Law and Use, WIPO publication no. 489(E).
Geneva: WIPO.
Wired Staff (2006) DCS-3000 is the FBI’s new Carnivore. Wired, 27 April.
Available at: www.wired.com/2006/04/dcs3000-is-the-/ (accessed 25
July 2018).
WJC (World Jewish Congress) (2017) The Rise of Anti-Semitism on Social
Media: Summary of 2016. New York, NY: World Jewish Congress.
Available at: http://vigo.co.il/wp-content/uploads/sites/178/2017/06/AntiSemitismReport2FinP.pdf
(accessed 1 August 2018).
Wolf C (2004) Regulating Hate Speech qua Speech is Not the Solution to
the Epidemic of Hate on the Internet. Paper presented at the OSCE
Meeting on the Relationship Between Racist, Xenophobic and Anti-Semitic Propaganda on the Internet and Hate Crimes, Paris, 16–17 June.
Available at:
https://web.archive.org/web/20041219004928/http://www.adl.org/osce/osce_legal_analysis.pdf (accessed 21 July 2018).
Woo H, Kim Y and Dominick J (2004) Hackers: Militants or merry
pranksters? A content analysis of defaced web pages. Media Psychology
6: 63–82.
Wozniak S (2006) iWoz. New York, NY: W. W. Norton & Company.
Wright H (1998) Biometrics: The next phase in network security. NZ
Infotech Weekly, 28 September: 10.
Wright MF and Li Y (2012) Kicking the digital dog: A longitudinal
investigation of young adults’ victimization and cyber-displaced
aggression. Cyberpsychology, Behavior, and Social Networking 15(9):
448–454.
Wright J, Burgess A, Laszlo A, McCrary G and Douglas J (1996) A
typology of interpersonal stalking. Journal of Interpersonal Violence
11(4): 487–502.
Wykes M and Harcus D (2010) Cyber-terror: Construction, criminalisation
and control. In Y Jewkes and M Yar (eds) Handbook of Internet Crime.
Cullompton: Willan.
Xatrix.org (2014a) Ladderhouse.Org Defacement. Available at:
www.xatrix.org/def.php?d=ladderhouse.org (accessed 31 July 2018).
Xatrix.org (2014b) Lowellchamber.Org Defacement. Available at:
www.xatrix.org/def.php?d=lowellchamber.org (accessed 31 July 2018).
Yar M (2005a) The global epidemic of movie ‘piracy’: Crime-wave or
social construction? Media, Culture and Society 27(5): 677–696.
Yar M (2005b) The novelty of cybercrime: An assessment in light of
routine activity theory. European Journal of Criminology 2(4): 407–427.
Yar M (2005c) Computer hacking: Just another case of juvenile
delinquency? The Howard Journal of Criminal Justice 44(4): 387–399.
Yar M (2008a) Computer crime control as industry: Virtual insecurity and
the market for private policing. In KF Aas, HO Gundhus and HM Lomell
(eds) Technologies of InSecurity: The Surveillance of Everyday Life.
New York, NY: Routledge.
Yar M (2008b) The rhetorics and myths of anti-piracy campaigns:
Criminalization, moral pedagogy and capitalist property relations in the
classroom. New Media and Society 10(4): 605–623.
Yar M (2009) Cyber-sex offences: Patterns, prevention and protection. In K
Harrison (ed.) Dealing With High-Risk Sex Offenders in the Community:
Risk Management, Treatment and Social Responsibilities. Cullompton:
Willan.
Yar M (2010) The private policing of internet crime. In Y Jewkes and M
Yar (eds) Handbook of Internet Crime. Cullompton: Willan.
Yar M (2011) A New Age of Hackers. Reading: PCTools.
Yar M (2012a) Virtual utopias and dystopias: The cultural imaginary of the
Internet. In K Tester and M Hviid-Jacobsen (eds) Utopia and Social
Theory. Aldershot: Ashgate.
Yar M (2012b) E-Crime 2.0: The criminological landscape of new social
media. Information and Communications Technology Law 21(3): 207–
219.
Yar M (2013) The policing of Internet sex offences: Pluralised governance
versus hierarchies of standing. Policing and Society 23(4): 482–497.
Yar M (2014) The Cultural Imaginary of the Internet. New York, NY:
Palgrave Macmillan.
Yar M (2018) A failure to regulate? The demands and dilemmas of tackling
illegal content and behaviour on social media. The International Journal
of Cybersecurity Intelligence and Cybercrime 1(1): 5–20.
Young J (1971) The role of the police as amplifiers of deviancy, negotiators
of reality and translators of fantasy. In S Cohen (ed.) Images of
Deviance. Harmondsworth: Penguin.
Young K (1998) Internet addiction: The emergence of a new clinical
disorder. CyberPsychology and Behavior 1(3): 237–244.
Young K, Pistner M and O’Mara J (1999) Cyber disorders: The mental
health concern for the new millennium. CyberPsychology and Behavior
2(5): 475–479.
Yu S (2013) Digital piracy justifications: Asian students versus American
students. International Criminal Justice Review 23(2): 185–196.
Zetter K (2010) TJX hacker gets 20 years in prison. Wired, 25 March.
Available at: www.wired.com/threatlevel/2010/03/tjx-sentencing/
(accessed 21 July 2018).
Zetter K (2014a) Countdown to Zero Day: Stuxnet and the Launch of the
World’s First Digital Weapon. New York, NY: Crown Publishers.
Zetter K (2014b) The year’s worst hacks, from Sony to celebrity nude pics.
Wired, 23 December. Available at: www.wired.com/2014/12/top-hacks-2014/ (accessed 11 January 2018).
Zetter K (2015) Judge rules Kim Dotcom can be extradited to US to face
charges. Wired, 22 December. Available at:
www.wired.com/2015/12/kim-dotcom-extradition-ruling/ (accessed 24
May 2016).
Zimmer E and Hunter D (1999) Risk and the Internet: Perception and
Reality. Available at:
https://web.archive.org/web/20000815105816/www.copacommission.org
/papers/webriskanalysis.pdf (accessed 21 July 2018).
Zittrain J, Faris R, Noman H, Clark J, Tilton C and Morrison-Westphal R
(2017) The Shifting Landscape of Global Internet Censorship.
Cambridge, MA: Internet Monitor, Berkman Klein Center for Internet &
Society, Harvard University. Available at:
https://dash.harvard.edu/bitstream/handle/1/33084425/The%20Shifting%
20Landscape%20of%20Global%20Internet%20Censorship%20Internet%20Monitor%202017.pdf?sequence=5 (accessed 17 July
2018).
Zuboff S (1984) In the Age of the Smart Machine: The Future of Work and
Power. New York, NY: Basic Books.
Zuboff S (2015) Big other: Surveillance capitalism and the prospects of an
information civilization. Journal of Information Technology 30: 75–89.
Index
Acar KV, 180
accountability, 230–231
acquaintance stalking, 195
Action Fraud, 130–131, 146
activism. See hacktivism
Acxiom, 240
Adler PA, 46
Adobe, 247
advanced fee frauds, 134–136, 138
Advanced Research Projects Agency Network (ARPANET), 7–8
Agnew R, 31
Aitken S, 208
Akdeniz Y, 186
Akers RL, 29–30
al-Qaeda, 3, 98
algorithms, 257
Allan E, 38
Alliance for Creativity and Entertainment (ACE), 118, 120
Alseth B, 179
American-Arab Anti-Discrimination Committee (ADC), 225
American Civil Liberties Union (ACLU), 179, 242–243
Amnesty International, 244
Andrews C, 219
Anglin A, 162
Anna Kournikova (worm), 66–67
anonymity
child pornography and, 185–186
cyberstalking and, 203, 206
cyberterrorism and, 94, 97–98
hacking and, 65, 70
hate speech and, 156
internet and, 14, 23
online fraud and, 143
piracy and, 111
surveillance and, 235
Anonymous, 77, 83, 85, 86, 87, 88
Anti-Counterfeiting Trade Agreement (ACTA), 118–119
Anti-Defamation League (ADL), 159, 225
anti-Muslim hate speech, 158
Anti-Phishing Working Group (APWG), 139
anti-piracy initiatives, 113, 117–121
anti-Semitic hate speech, 159, 225
anti-stalking legislation, 196
Anti-Terrorism, Crime and Security Act (2001), 242
AntiTrust (film), 58
Apple, 248–249
Arab Spring, 83, 84
Army of God, 98–99
ASEANAPOL, 221
Asperger’s Syndrome, 69
Assange J, 87, 88
Association of Chief Police Officers (ACPO), 220
Association of Sites Advocating Child Protection (ASACP), 183, 224
Association of Southeast Asian Nations (ASEAN), 221
asymmetric warfare, 101
AT&T, 245
auction sites, 131–133
Auernheimer A, 162
Aum Shinrikyo, 98–99
Aune KS, 201
Australia
anti-piracy initiatives in, 121
crime and victimization surveys in, 43
cyberterrorism and, 92
internet policing in, 220, 228
online fraud in, 130, 131
stalking and cyberstalking in, 196
surveillance in, 240
Australian Bureau of Statistics (ABS), 131
Australian Competition and Consumer Commission (ACCC), 130,
135, 136, 141
Australian Cyber Security Centre (ACSC), 43
Australian Cybercrime Online Reporting Network (ACORN), 130,
140, 146
Australian Federal Police (AFP), 220
autism spectrum disorder, 69
Bachmann M, 27, 73
Bahrain, 84
Bajaj A, 167
Bakhshi T, 139
bandwidth theft (leeching), 60
Banks J, 39, 40–41
Barlow JP, 88, 248
Baugh W, 246
Bay Area Rapid Transit (BART), 86
BDSM (bondage, domination and sadomasochism), 165–166
Beccaria C, 24, 26
Becker GS, 26
Becker H, 36–37, 46, 54, 97
Bedworth P, 69
Belanger C, 157
Belarus, 184
Bennion F, 209–210
Bentham J, 24, 26
Bently L, 108
Bequai A, 75
Berg B, 46
Berkeley School of Criminology, 39
Bernard TJ, 23, 32
Berners-Lee T, 88
Best J, 194, 197–198, 210
Biegel S, 74
Big Data, 238
bio-hacking, 256
Bissias G, 176
BitTorrent, 112–113
Black D, 156, 157
Blackhat (film), 65
Blevins KR, 47
Blunkett D, 96
Bocij P, 197, 198, 199–200, 203
Bollier D, 123
Boman JH, 29
Bond E, 219
bondage, domination and sadomasochism (BDSM), 165–166
Bonger W, 39
Bossler AM, 28, 33–34, 46
botnets, 60
Braithwaite J, 122
Brazil, 176, 244
Breivik AB, 157
Brent JJ, 24
Brewer R, 33
brigading, 205–206
British Board of Film Classification (BBFC), 170
British National Party (BNP), 161
British Phonographic Industry (BPI), 109
Bryce J, 179
Buchanan T, 136
Budd T, 115
Bureau of Justice Statistics (BJS), 17–18, 194
Burgess M, 170
Burgess RL, 29
Burn A, 211
Business Software Alliance (BSA), 110, 113, 114–115, 120, 121, 124
Button M, 130, 131
Cambridge Analytica, 239
Cameron D, 3
Canada, 130, 168, 176, 220, 240
Canadian Anti-Fraud Centre (CAFC), 130, 146
Carlin JP, 95
Carnivore, 243
Castells M, 236
Center for Innovative Public Health Research, 198
Centers for Disease Control and Prevention (CDC), 194, 198, 202
Central Intelligence Agency (CIA), 242
Centre for Economics and Business Research (CEBR), 122
CERN (European Organization for Nuclear Research), 8
CERT Australia, 43
Chambers C, 204–205
Chaos Computer Club, 70
Charlemagne Hammer Skinheads, 161
Charlesworth A, 181
Chase M, 88
Child Exploitation and Online Protection Centre (CEOP), 219–220
child exploitation material (CEM), 175. See also child pornography
Child Online Protection Act (COPA) (US, 1998), 169
child pornography
challenges in tackling, 183–186
freedom of speech and, 259
images, victims and offenders in, 175–180
jailbait images and, 188–189
legislative and policing measures for, 180–183, 221, 224, 226
virtual pornography and, 186–187, 188–189
See also online paedophilia
Child Pornography Prevention Act (CPPA) (US, 1996), 180–181, 187
Child Protection and Obscenity Enforcement Act (US, 1988), 180
child sexual abuse images (CSAIs), 175–176. See also child
pornography
Children’s Internet Protection Act (CIPA) (US, 2003), 169
Children’s Online Privacy Protection Act (COPPA) (US, 1998), 240–
241
China, 176, 242
Chism KA, 30
Choi H, 261
ChoicePoint, 242
Choudhary K, 98
Christian Aid, 244
Christian Promise Keepers movement, 167n1
Churchill D, 229
Citron D, 156
civil liberties, 258–261
Clark L, 170
Clarke RA, 101–102
Clarke RV, 26
Classical School of criminology, 24–25. See also neo-classical
criminology
cloud computing, 256–257
Code Red (worm), 64
Cohen A, 261
Cohen B, 112
Cohen L, 27
Coleman GE, 162
Collins M, 144
Combating Terrorism Act (US, 2001), 243
Communications Assistance for Law Enforcement Act (CALEA) (US,
1994), 245
Communications Decency Act (CDA) (US, 1996), 19, 168–169, 180
Computer Emergency Response Team, 221
Computer Fraud and Abuse Act (CFAA) (US, 1986), 74–75, 76, 90
Computer Investigation and Technology Unit (CITU), 199
computer-mediated communication (CMC), 14
Computer Misuse Act (CMA) (1990), 75, 76, 77
computer security industry, 250
Computer Security Institute (CSI), 43
computer viruses, 63
Connell R, 70
containment theory, 34
content analysis, 47–48
CONTEST, 99
convenience samples, 45–46
Convention on Cybercrime (Council of Europe), 77, 146, 160, 181,
221
cookies, 237–238
Cooper J, 116
Copes H, 32
copyright, 107–108. See also piracy
Copyright Act (1956), 118
Copyright, Designs and Patents Act (1988), 118
Copyright Society of the USA, 121
copyright theft. See piracy
The Core (film), 57, 65
Cornish DB, 26
Coroners and Justice Act (2009), 187
Council of Europe, 77, 146, 160, 181, 221
counterfeiting, 108n2. See also piracy
Counterfeiting Intelligence Bureau (CIB), 120
Countering International Terrorism: The United Kingdom’s Strategy
(CONTEST), 99
Coutts G, 3, 165
Cox J, 205
cracking, 113
credit card fraud, 44
creepshots, 205
Crime and Justice Survey, 115
crime and victimization surveys, 43, 45, 130–131
Crime Survey for England & Wales, 43, 194
Criminal Justice Act (1988), 180
Criminal Justice and Courts Act (2015), 204
Criminal Justice and Immigration Act (2008), 165
Criminal Justice and Public Order Act (1994), 165, 187, 240
criminological theory
overview of, 15–16
classical and neo-classical criminology, 23–28
drift and neutralization theory, 32–34, 68, 116
feminist criminology, 37–39
labelling theory, 35–37, 54
radical criminology, 39–41
self-control theory, 34–35
social learning theory, 28–30, 72
strain theory, 30–31, 115
critical criminology, 40
critical information infrastructure (CII), 91–92
critical infrastructure (CI), 91
Cross C, 130, 131
cryptocurrency, 60, 100, 218
CSO (magazine), 43
Cullen FT, 26
Cult of the Dead Cow (cDc), 83, 87–88
cultural criminology, 40
culture of fear, 211
Curran J, 8
Cyber Crime Strategy, 96
cyber-deceptions and thefts, 13
cyber-pornography, 13
Cyber Security Strategy, 92, 228
Cyber Security Survey, 43
cyber-trespass, 13
cyber-vigilantism, 226
cyber-violence, 13
cyberbullying, 25, 31
cybercrime
as challenging for criminal justice and policing, 16–19
civil liberties and, 258–261
criminological theory and, 15–16
definitions and categories of, 10–13
as different or new, 13–14
drift and neutralization theory and, 33–34
feminist criminology and, 38–39
future directions in, 255–258
key questions and answers on, 5–6
labelling theory, 35–37, 54
legislation and, 42–43
mass media representations of, 4–5, 55–58, 65
measures of, 41–46, 130–131
opportunities for, 9–10
perceptions of, 3–5
perceptual deterrence theory and, 25–26
radical criminology and, 40–41
rational choice theory and, 26–27
research methods and, 46–48
routine activities theory and, 27–28
self-control theory and, 35
social learning theory and, 30
strain theory and, 30–31
See also policing; specific crimes
cyberliberties, 235. See also surveillance
cybernetics, 10–11
cybersexploitation, 207–208
Cyberspace Research Unit (CRU), 207–209
cyberstalking, 193, 197–206, 210–213. See also stalking
cyberterrorism
advantages of internet attacks and, 93–94
computer-assisted terrorist offences and, 97–100
concept and threat of, 3, 83, 88–92
legislation and, 90
rhetorics and myths of, 94–97
cyberwarfare, 100–102
The Daily Stormer (website), 162
dark web, 10, 98–99, 114
Data & Society Institute, 198
data brokerage firms, 242
data mining, 99, 242
Data Protection Act (1984), 240
Data Protection Act (1998), 240
data protection and privacy laws, 239–242
dataveillance, 237
deep web, 10
deepfake porn, 186
DEF CON, 66, 72, 142, 259
Defense Advanced Research Projects Agency (DARPA), 7
Deirmenjian J, 157, 161
Deleuze G, 236
Delgado R, 155
Delio M, 67
DeMarco J, 71
Demon Seed (film), 56
denial of service attacks, 62–63, 66. See also distributed denial of
service (DDoS) attacks
Denning D, 61–62, 90, 94, 95, 99, 244, 246
Department of Homeland Security (DHS), 92, 220
deprivation, 15–16
deterrence theory, 25. See also perceptual deterrence theory
deviance, 12
Die Hard II (film), 56
differential association theory, 28–29, 72
differential reinforcement, 29
digilantism, 208, 226
digital divide, 199, 231
digital drift, 33–34
Digital Economy Act (2010), 119
Digital Economy Act (2017), 119, 170
digital forensics, 221–222
Digital Forensics and Cybersecurity Center (DFCSC), 222
Digital Millennium Copyright Act (US, 1998), 118
digital rights management (DRM) protocols, 113
Directive on Privacy and Electronic Communications (ePrivacy
Directive), 240
disguise, 143
distributed denial of service (DDoS) attacks, 3, 25, 63, 85, 101–102,
255
Doctorow C, 160, 259
dogpiling, 205–206
Dolan K, 144
Domestic Violence, Crime and Victims Act (2004), 196–197
Domhardt M, 178
Donner CM, 35
Donoghue A, 94
D’Ovidio R, 197, 198, 199, 210
Dowland P, 4
doxxing, 87
Doyle J, 197, 198, 199, 210
Drahos P, 122
Dreßing H, 202
Drezner DW, 19
drift and neutralization theory, 32–34, 68, 116
Dunn M, 91–92
Dworkin A, 166
e-commerce, 9
e-democracy, 84
e-vigilantism, 226
eBay, 131–132, 134, 143, 147
ECHELON, 243–244
Educanet, 211
Egypt, 84
Elbakyan A, 110
Electronic Arts (EA), 113
Electronic Disturbance Theatre (EDT), 85
Electronic Frontier Foundation (EFF), 88, 113, 160, 223–224
Electronic Privacy Information Centre (EPIC), 224
Elsevier, 110
email bombs, 86
encryption technologies
cyberterrorism and, 96, 98
policing and, 218, 222
surveillance and, 235, 245, 246–250
English Defence League (EDL), 86
Equifax, 241–242
equity, 230, 231
Escolar I, 125
Esen R, 62
Estonia, 84, 101–102, 184
ethnography, 46–47
European Defense Agency (EDA), 221
European Network and Information Security Agency (ENISA), 17, 18,
146, 221, 230
European Parliament (EP), 244
European Union (EU), 118–119, 181, 221, 239–240, 241, 247. See
also Convention on Cybercrime (Council of Europe)
Europol, 17, 146, 221
Ex Machina (film), 56
Ex parte Thompson (2014), 205
experimental designs, 47
Explotion (hacker), 71
Facebook, 239
Fair Credit Reporting Act (FCRA) (US, 1970), 240
Fair Play Canada, 120
fake news, 257
false memory syndrome, 211
The Fan (film), 194
Fanning S, 112
Fawkes Virus (worm), 87
Featured Artists’ Coalition, 120
Federal Bureau of Investigation (FBI)
child pornography and, 182
crime and victimization surveys and, 43
internet policing and, 220
online fraud and, 129
piracy and, 120
surveillance and, 242, 243, 248–249
Federal Information Security Modernization Act (FISMA) (US, 2014),
241
Federal Trade Commission, 130
Federation Against Copyright Theft (FACT), 118, 120
fee stacking, 134
Felson M, 27, 72
feminist criminology, 37–39
‘fencing’ of stolen goods, 132–133, 143
Ferizi A, 95
Ferrell J, 46
FieldSearch, 222
filelocker services, 114
Fischer-Hübner S, 238
Fitch C, 72
flaming, 161–162
Flavin J, 37
force multiplier, 93
Foreign Intelligence Surveillance Act (FISA) (US, 1978), 243, 244
Foster J, 194
Foucault M, 236
419 Eaters, 226
France
anti-piracy initiatives in, 119
child pornography in, 176, 183
hacktivism in, 85
hate speech in, 154, 161
surveillance in, 244, 247
Frankenstein (Shelley), 56
Fraud Act (2006), 145–146
freedom of speech, 258–260
Freng A, 29
Fried R, 129
Fuerzas Armadas Revolucionarias de Colombia (FARC), 98–99
fundraising and financing, 99–100
Furedi F, 96, 211, 212–213
Furnell S, 76
Gaddam S, 164, 185
Gallagher S, 241–242
GamerGate, 38–39, 226
Gartner Inc., 60
Gates B, 112
gendertrolling, 206
General Data Protection Regulation (GDPR), 239–240
General Strain Theory, 31
Gerber J, 48
Germany, 153–154, 155, 161, 183
Gewirtz-Meydan A, 178
Ghost Sec, 86
Gibson W, 11
Gillmor D, 237
Gjoni E, 38–39
Glenny M, 69
Global Anomie Theory, 31
Global Economic Crime Survey (2016), 44
Global Programme on Cybercrime, 17
globalization, 3–4
Gnutella, 112, 170, 176
Goldsmith A, 33
Gonzales A, 61
Gonzalez W, 71
Goode M, 197
Google, 111, 238, 240
Gottfredson M, 34–35, 129
governance, 229
Government Communications Headquarters (GCHQ), 245
Grabosky P, 14, 67
Graf S, 195
Graff GM, 63
Greenberg A, 10
Greengard S, 10
Greening T, 139
Greenpeace, 244
Greenwald G, 244–245
Guardian (newspaper), 88
Guitton C, 25
Guzman O de, 3
Ha A, 257
hacker ethic, 53–54, 68
Hackers (film), 58, 65
hackers and hacking
activities of, 58–64. See also distributed denial of service (DDoS)
attacks; malicious software (malware)
definitions of, 53–55
freedom of speech and, 259–260
legislation and, 73–77
motivations and characteristics of, 67–73
myths and realities of, 64–67
piracy and, 113
representations of, 55–58, 65
research on, 46
state and state-sponsored actors in, 257–258
See also political hacking
hacktivism, 83, 84–88, 89, 94–95. See also Anonymous
HADOPI (Haute Autorité pour la diffusion des oeuvres et la protection
des droits sur internet), 119
Hall S, 40
Hamas, 98–99
Harcus D, 96–97
Hargrave A, 156
Harrell E, 130
Harrison DM, 116
Harvard Law Review (journal), 245
Hashizume K, 256
hate speech, 153–162, 225, 258–259
Hauser C, 205
Hawn M, 58
Hayward K, 26
HBGary, 87
Health Insurance Portability and Accountability Act (HIPAA) (US,
1996), 240
hegemonic masculinity, 70
Henry S, 32–33
Hezbollah, 98–99
Hinckley J, 194
Hinduja S, 31, 220
Hirschi T, 34–35, 67, 129
Hizbul Mujahideen, 98–99
Holt TJ
on cybercrime, 17–18
drift and neutralization theory and, 33–34
on hacking, 46
routine activities theory and, 28
self-control theory and, 35
social learning theory and, 30
virtual ethnography and, 47
Homebrew Computer Club (HBCC), 112, 113
honeypots, 25–26
Hunt A, 55
hurtcore pornography, 177
Hutchings A, 27
Hutchins M, 64
ICE, 120
iCloud, 257
Identity Theft Enforcement and Restitution Act (US, 2008), 74, 145–
146
incel subculture, 158
India, 167
Industrial Internet, 95
information capitalism, 238–239
Information Technology Act (India, 2000), 167
Institute for Global Communications (IGC), 86
Institutional Anomie Theory, 31
intangible goods, 108
Integrated Technological Crime Units (ITCUs), 220
intellectual property. See piracy
Intellectual Property Crime Coordinated Coalition (IPC3), 120
intellectual property (IP), 107–108, 123–125, 225–226
Intellectual Property Office (IPO), 109–110
International Anti-Counterfeiting Coalition (IACC), 120
International Centre for Missing and Exploited Children (ICMEC),
181
International Chambers of Commerce (ICC), 109
International Federation of the Phonographic Industry (IFPI), 109
International Intellectual Property Alliance (IIPA), 120
International Narcotics and Law Enforcement Affairs (INL), 17
internet
brief history and analysis of, 7–10
free speech and, 153
use of and penetration rates, 111
internet activism. See hacktivism
internet addiction disorder, 69
internet auction frauds, 131–134
Internet Black Tigers, 86
Internet Corporation for Assigned Names and Numbers (ICANN), 223
Internet Crime Complaint Centre (IC3), 129–130, 131–132, 133, 135,
136, 140, 146
Internet Fraud Complaint Centre (IFCC), 120, 129
Internet Hotline Providers in Europe (INHOPE), 183, 224
Internet of Things (IoT), 9–10, 59–60, 63, 95, 255–256
Internet Research Agency, 257
internet service providers (ISPs), 8
Internet Watch Foundation (IWF), 177, 183, 224–225
INTERPOL, 17, 120, 182, 221
interviews, 46
Investigatory Powers Act (2016), 242
investment fraud, 140
Iran, 101
Irish Republican Army (IRA), 98–99
Islamic State (ISIS), 86, 98–99
Islamophobia, 154–155
Israel, 101
The Italian Job (film), 65
Jack B, 256
Jacobs WW, 255
jailbait images, 188–189
Jane EA, 162, 226
JANET (Joint Academic Network), 8
Japan, 143, 244
Jarvis L, 4
Jensen G, 29
Jewkes Y, 77, 219
Jordan, 84
Jordan T, 88
jurisdiction, 143–144
Justice Technology Information Center (JTIC), 222
Kane RJ, 196–197
Kanka M, 207
Kantar Media, 116
Kaplan J, 157
Kappeler VE, 211, 212
Karaganis J, 116
Kasten E, 123, 124
Kennedy AM, 169
The King of Comedy (film), 194
Knake RK, 101–102
Koch L, 197–198
Kodi boxes, 117
Kogan A, 239
Kraska P, 24
Kraska PB, 47–48
Krohn MD, 45
Kruttschnitt C, 37–38
Ku Klux Klan, 87, 98–99, 156
Kubitschko S, 70
Kubrick S, 56
Kuwait, 84
labelling theory, 35–37, 54
Ladegaard I, 25
Lane F, 163, 185
Larsson S, 31
Law Enforcement Cyber Center, 220
leaking, 87
learning theories, 72
Lebanon, 84
leeching (bandwidth theft), 60
left realism, 40
legal pluralism, 167
Lemert E, 36
Lenhart A, 198–199, 200–201, 202–203
Levi M, 4–5
Levy S, 53
LexisNexis, 242
Libya, 84
Linden Lab, 187
Lindsay JR, 101
Linux, 88
live-distant child abuse (LDCA), 179–180
live-streaming, 179–180
Livingstone S, 156
Lizard Squad, 63
Loader B, 5, 12
Loader I, 45
Locke J, 123
Longhurst J, 3, 165
Longitudinal Internet Studies for the Social Sciences, 46
Lords T, 184–185
Love Bug (worm), 3, 64
Love C, 124–125
Love L, 77
Lowney K, 194
LulzSec, 88
Lyon D, 236
MacEwan N, 76
Machado R, 157
MacKinnon C, 166
Mafiaboy (hacker), 63
Maimon D, 25–26, 28
malicious software (malware)
cryptocurrency mining and, 60
cyberwarfare and, 101
hacktivism and, 87
measures of, 44
policing and, 228
tools for, 66
types of, 63–64
Mann S, 177
Manning C, 87
Mantilla K, 161, 206
Marganski AJ, 37, 38–39
Martin B, 123–124
Martin E, 129
Maruna S, 32
masculinity, 69–70
Matrix (films), 58
Matza D, 32, 68
Maxim (hacker), 61
May T, 77
McCarthy B, 26
McConchie A, 73
McCracken G, 46
McCullagh D, 97
McFarlane L, 198, 203
McKinnon G, 77
Measuring Anti-Muslim Attacks (MAMA), 225
memes, 157
Menta R, 121
The Mentor (hacker), 68
Merton RK, 30–31
Messner SF, 31
Mexico, 85, 176
Microsoft, 247
Miller J, 38
Miller v. California (1973), 166
Milone M, 84
Minassian A, 158
MindGeek, 170
Minecraft (video game), 3
mining, 60
Mirai (botnet), 3, 63, 64, 255
Mitnick K, 55, 69, 76, 142
Monitoring the Future, 45
Moore H, 204
Moorefield J, 168
moral panics, 5, 97, 210–213
Morocco, 84
Morris A, 204
Morris R, 63
Moss M, 157
Motion Picture Association of America (MPAA), 109, 120, 121, 122
Mr. Robot (television series), 58
Mueller G, 124
Mueller R, 3
Mullen P, 193, 196
massively multiplayer online role-playing games (MMORPGs), 187
MyDoom (virus), 64, 93
Napster, 112
NASA (National Aeronautics and Space Administration), 87
Nassar L, 175
National Center for Missing & Exploited Children (NCMEC), 183
National Crime Agency, 13, 17, 69, 77, 160, 219
National Crime Squad (NCS), 182
National Crime Victimization Survey, 130, 194
National Crime Victimization Survey (NCVS), 43
National Criminal Intelligence Service (NCIS), 160
National Cyber Crime Unit (NCCU), 13, 17, 219
National Cyber Security Centre (NCSC), 249
National Cybercrime Unit, 230
National Deviancy Symposium, 39
National Federation of Film Distributors (FNDF), 118
National Fraud Authority, 131
National Hi-Tech Crime Unit (NHTCU), 17, 160, 219
National Infrastructure Advisory Council (NIAC), 91
National Infrastructure Security Co-ordination Centre (NISCC), 91
National Institute of Standards and Technology (NIST), 241
National Intellectual Property Rights Coordination Center, 120
National Longitudinal Study of Adolescent to Adult Health (Add
Health), 45
National Science Foundation (NSF), 8
National Security Agency (NSA), 242, 244–245, 248
National Technical Assistance Centre (NTAC), 120
National White Collar Crime Centre (NW3C), 129, 220
National Youth Survey – Family Study (NYSFS), 45
NATO (North Atlantic Treaty Organization), 86
near field communication (NFC) chips, 59
neo-classical criminology, 24–28. See also routine activities theory
Netherlands, 176, 181
Netscape Navigator, 237
Neuman WL, 47–48
Neustar, 63
neutralization theory. See drift and neutralization theory
New Zealand, 196
Nhan J, 226
Nielsen, 239
Nielsen L, 155
Nigerian scam, 134–136, 138
No Electronic Theft Act (US, 1997), 118
Nobles MR, 12, 199
Norton H, 156
Nowacki J, 220
NSFNET (National Science Foundation Network), 8
nuisance stalking, 195
Oboler A, 158
Obscene Publications Act (1959), 165
obscenity, 164–166, 181. See also pornography
Occupy, 85
O’Connell R, 207–209
Ofcom, 168
Office for National Statistics (ONS), 130, 135–136. See also Crime
Survey for England & Wales
Office of Homeland Security, 91
official statistics, 41–42, 130–131
Ogas O, 164, 185
Ogilvie E, 202
Oman, 84
online banking, 9, 99–100
online fraud
perpetrators’ advantages and, 142–145
policing and, 145–147, 226, 228
scope and scale of, 44, 129–131
varieties of, 131–142, 138, 141
Online Hate Prevention Institute, 158
online paedophilia, 193, 206–213
online privacy, 235, 260–261. See also anonymity; surveillance
online vigilantism, 226
OnTheFly (hacker), 66–67
Operation Ore, 182
Operation Pacifier, 182
Operation Rescue, 182
Operation Spade, 182
Operation Starburst, 182
Organisation for Economic Cooperation and Development (OECD),
250
‘Organised Crime Agency’, 120
OSForensics, 222
Owen G, 185
Palestinian Islamic Jihad, 258
Palestinians, 86
Parche G, 195
Pardes A, 240
Passas N, 31
pathological fallacy, 72
PatientsLikeMe, 239
PATRIOT Act. See USA PATRIOT Act
Payne S, 207
PC Gamer (magazine), 110, 115
peacemaking criminology, 40
Pearce A, 226
peer-to-peer (P2P) file-sharing systems, 111, 112–113, 170–171, 176
perceptual deterrence theory, 25–26
personal and social control theory, 34
Perverted Justice, 208, 226
phishing, 137–140, 141, 142
phone phreaks, 142
phreaks, 142
Pinsent Masons, 75–76
piracy
anti-piracy initiatives and, 113, 117–121
concept of, 9, 107–108
critical approach to, 121–123
drift and neutralization theory and, 33, 116
freedom of speech and, 259, 260
history and growth of, 111–114
measures of, 44
‘pirates’ and, 114–117
scope and scale of, 109–110
strain theory and, 31, 115
Police and Crime Commissioners, 230
Police and Justice Bill (2006), 75
Police Central e-Crime Unit (PCeU), 17, 219
policing
challenges of, 16–19
critical issues about private actors in, 230–231
pluralization and privatization of, 228–230
privatized for-profit actors and, 227–228
public actors and, 217–222
quasi- state and non-state actors and, 222–226
political correctness, 154
political hacking, 9, 83–84. See also cyberterrorism; cyberwarfare;
hacktivism
Poole P, 244
Pornhub, 164, 170
pornography, 153, 163–171, 259. See also child pornography; revenge
pornography
postmodern criminology, 40
Potter GW, 211, 212
President’s Commission on Critical Infrastructure Protection (PCCIP),
91
President’s Critical Infrastructure Protection Board (PCIPB), 91
PRISM (Planning Tool for Resource Integration, Synchronization and
Management), 244
privacy, 235, 260–261. See also anonymity; surveillance
Privacy and Electronic Communications Regulations (PECR), 241
privacy paradox, 260–261
programmable logic controllers (PLCs), 101
Prosecutorial Remedies and Other Tools to End the Exploitation of
Children Today (PROTECT) Act (US, 2003), 187
Protection from Harassment Act (PHA) (1997), 196
Protection of Children Act (PCA) (1978), 180, 183
Protection of Children Against Sexual Exploitation Act (US, 1977),
180
pseudo-photographs, 186–187
Public Order Act (1986), 154
publicity, 91, 98–99
pump-and-dump schemes, 140
punishment, 24–27
Quayle E, 177
Quinn Z, 38–39, 205–206
Quinney R, 39–40
Quon K, 157
Race Relations Act (1965), 154
radical criminology, 39–41
radio frequency identification (RFID) chips, 256
Ramirez E, 238
ransomware, 63–64
Rasch M, 93
rational choice theory, 25, 26–27
Reagan R, 194
Reagle J, 70
reasoned offender approach, 26
Reckless WC, 34
Recording Industry Association of America (RIAA), 109, 112, 113,
120
recording of crime, 41–42, 130–131
recruiting, 99
Reddit, 226
RedLight, 222
Regulation of Investigatory Powers Act (RIPA) (2000), 247–248
regulatory arbitrage, 77
Reiss AJ, 34
Reitinger P, 247
Renkema L, 116
reporting of crime, 41–42, 130–131
resistance, 84–88
restraining orders, 196–197
restrictive deterrence, 25–26
revenge pornography, 203–205
Reyns BW, 27
Rice C, 167
Rid T, 7, 102
Rieke A, 241
Roderick L, 239
Rodger E, 158
romance scams, 136–137
Ropelato J, 168
Rosenblueth A, 10–11
Rosenfeld R, 31
routine activities theory (RAT), 15, 25, 27–28
Royal Canadian Mounted Police, 220
Rudd A, 248
Ruffin O, 83
Russia, 101–102, 176, 184
Rwanda, 155
Saravalle E, 100
Sasser (worm), 64
Saudi Arabia, 167
Savage N, 185
scambaiters, 226
Scamwatch, 130, 135, 136–137, 140, 146
Schaeffer R, 194
Schafer J, 156, 220
Schmid A, 90
Sci-Hub, 110
script kiddies, 67
Second Life (virtual reality environment), 187
Securities and Exchange Commission (SEC), 140
Seles M, 195
self-control theory, 34–35
self-report surveys, 45–46
self-victimization, 179–180
Semi-Automatic Ground Environment (SAGE) system, 7
Serious Crime Act (2015), 75, 209
Serious Organised Crime Agency (SOCA), 17, 160, 219
Serious Organised Crime and Police Act (2005), 154
Seto MC, 177, 207
sexting, 179
sextortion, 203–205
Sexual Offences Act (2003), 184, 208, 209
Shadish WR, 47
Shelley M, 56
Sherman B, 108
shill bidding, 133–134
Shining Path, 98–99
ShmooCon, 69
Silk Road, 10
Simon Wiesenthal Center (SWC), 156, 225
Simon WL, 55
Singh RR, 167
Sir Dystic (hacker), 67
Sklyarov D, 259
smishing, 139, 142
Smith R, 67
Snowden E, 244–245, 260
Snyder M, 115
social bond theory, 34
social constructionism, 35
social contract, 24
social engineering, 141–142
social exclusion, 9, 16, 231
social identity, 14
social learning theory, 28–30, 72
social media, 156, 158–159, 200
social practices, 7, 9–10
social structure–social learning theory (SSSL), 29–30
softlifting, 115
Software & Information Industry Association, 121
Software Publishers Association (SPA), 121
Sony, 113
Southern Poverty Law Center (SPLC), 156, 225
Spain, 86
spam emails, 25, 168
Sparks R, 45
spear-phishing, 139
Spitzer S, 39
spoofing, 62, 137, 139–140
Sri Lanka, 86
stalking, 193–197, 202. See also cyberstalking
Stallman R, 54, 116
Stark PB, 164, 170
state-sponsored hacking and propaganda, 257–258
statutory rape, 188
Stefancic J, 155
Steffensmeier D, 38
steganography, 96, 98
Steinhardt B, 154, 159–160
Steinmetz KF
content analysis and, 48
on hacking, 72, 73
on neutralization, 33
on piracy, 115, 116
radical criminology and, 40–41
on strain theory, 30, 31
on use of ‘cyber,’ 12
Sterling B, 65–66, 69, 73
Stern K, 156
stolen goods, 132–133, 143
Stop Piracy, 120
Stormfront, 156
strain theory, 30–31, 115
Strano Network, 85
streaming services, 114
strong encryption software, 98, 247
Stuxnet, 101
subcultures, 32, 72, 158
surveillance
concept of, 235–237
development of, 237–246
dilemmas of, 246–250, 260–261
surveillance capitalism, 238–239
Sutherland EH, 28–29
Sutton M, 132
Swartz A, 76, 110
Sykes C, 248
Sykes G, 32, 68
symbolic interactionism, 35, 38
Syria, 84
Takama G, 143
take-down notices, 111
Tamil Tigers, 86
Tannenbaum F, 35–36
Taxpayers for Common Sense, 227–228
Taylor et al. (2001), 176
Taylor M, 177
Taylor P
on hacking, 46, 53, 55, 68, 69, 70
on hacktivism, 83, 88
TeaMp0isoN, 86
Technical Investigation Services (TIS), 220
TERA Consultants, 124
Terminator (films), 56
terrorism, 9, 88–90. See also cyberterrorism
Terrorism Act (2000), 90
Testa A, 47
Thailand, 184
Thomas D, 5, 12, 70, 73, 142
Thornberry TP, 45
Tiffany & Co., 143
Tiku N, 240
Timehop, 257
Tokunaga RS, 201
Torvalds L, 88
Trade-Related Aspects of Intellectual Property Rights Agreement (TRIPS),
108, 118
Tripathi S, 226
Trojan horses, 63
trolling, 161–162
TruSecure, 66
Tsesis A, 155
Tunisia, 84
Tunnell KD, 26, 31, 33, 115, 116
Turgeman-Goldschmit O, 36–37, 46
Turkle S, 70
Twitter, 239
2001: A Space Odyssey (film), 56
Tyrrell K, 219
UK Fraud Costs Measurement Committee (UKFCMC), 131
under-reporting, 144–145
UNICEF, 212
‘Unite the Right’ rally (Charlottesville, VA), 157, 258–259
United Nations (UN), 17
United Nations Convention on the Rights of the Child (UNCRC), 181
United Nations Office on Drugs and Crime (UNODC), 17, 18–19
United States
anti-piracy initiatives in, 118, 119, 120
child pornography in, 176, 180–181, 183, 184, 187
cyberterrorism and, 90–92, 93
cyberwarfare and, 100–101
freedom of speech in, 258–259
hate speech in, 154, 155–157, 161, 258–259
internet policing in, 220, 223, 227–228
online fraud in, 129–130
online paedophilia in, 206–207, 209
pornography in, 163, 166–167, 168–170
stalking and cyberstalking in, 194, 196, 198, 199, 204, 205
surveillance in, 240–246, 248–249
See also specific acts
United States Customs and Border Protection, 182
United States Department of Justice, 42, 74–75
United States Department of State, 17
upskirt photos, 205
US State of Cybercrime Survey (previously the Cybersecurity Watch
Survey and E-Crime Watch Survey), 43, 59
USA PATRIOT Act (US, 2001), 74, 90, 242–243, 244, 248
Usenet, 114
Van Blar