Law, Innovation and Technology
ISSN: 1757-9961 (Print) 1757-997X (Online) Journal homepage: https://www.tandfonline.com/loi/rlit20
To cite this article: Karen Yeung (2011) 'Can We Employ Design-Based Regulation While Avoiding Brave New World?', Law, Innovation and Technology 3(1), 1–29, DOI: 10.5235/175799611796399812
To link to this article: https://doi.org/10.5235/175799611796399812
Published online: 07 May 2015.
(2011) 3(1) Law, Innovation and Technology 1–29
Can We Employ Design-Based Regulation
While Avoiding Brave New World?
Karen Yeung*
… there’s always soma to calm your anger, to reconcile you to your enemies, to make you
patient and long-suffering. In the past you could only accomplish these things by making a
great effort and after years of hard moral training. Now, you swallow two or three half-gramme
tablets, and there you are. Anybody can be virtuous now. You can carry at least half your
morality about in a bottle. Christianity without tears—that's what soma is.
Mustapha Mond, Resident Controller for Western Europe
(Aldous Huxley, Brave New World (1932))
INTRODUCTION
Aldous Huxley’s Brave New World has been aptly described as ‘one of the great
philosophical thought experiments’,1 depicting a world in which people are totally
designed and shaped by their rulers, utterly lacking in individual creativity and where
relationships and pleasures are shallow and superficial. Given the accelerating growth of our
scientific knowledge, the technological mastery employed by the rulers in Brave New
World now seems far from fanciful. Indeed, one of the greatest attractions of utilising
technology to tackle social problems lies in its potential to achieve its behavioural
objectives with 100 per cent effectiveness and in circumstances where design is self-enforcing, so that no human intermediation is required to secure compliance with the
desired standards.2 Why bother with human station attendants to ensure that passengers
hold valid tickets when a computer can sell tickets and authorise entry onto station platforms efficiently, without error or the need to break for lunch? Although few might regret the loss of jobs for law enforcement officials, the turn to 'design' as an instrument for implementing social goals has worrying implications for liberty, autonomy and responsibility.

* Director of TELOS; Professor of Law, King's College London, UK. I am grateful to Roger Brownsword, Timothy Endicott, Liz Fisher, Duncan Lockett-Yeung and Eloise Scotford for comments on earlier drafts. Any errors remain my own.
1 Jonathan Glover, Choosing Children (Oxford University Press, 2006) 94.
2 Jonathan Zittrain, 'Tethered Appliances, Software as Service, and Perfect Enforcement' in Roger Brownsword and Karen Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Hart Publishing, 2007).
Huxley’s dystopian vision resonates throughout many of the objections raised in
various debates concerning the legitimacy of employing technological means for specific
social purposes. For example, criminologists warn that the use of situational crime
prevention techniques that employ situational stimuli to guide conduct towards lawful
outcomes may express a lack of respect for individuals as rational agents capable of acting
on the basis of reasons.3 In a similar vein, ICT lawyers have highlighted how easily
software code can be used and abused by those in authority whilst escaping public notice.4
And controversy rages throughout bioethical debates concerning the moral acceptability
of employing technologies for human enhancement, with opponents claiming that it will
give rise to an invidious form of eugenics (albeit ‘liberal’ in nature) even when undertaken
voluntarily by mentally competent, informed adults rather than driven by any state
sponsored programme of social improvement.5 These apparently disparate critiques are
united in their concern that deliberate attempts to shape social outcomes through
technological means (whether by individuals, the state or commercial actors) could
seriously destabilise our moral and social foundations.
On the other hand, the benefits of technology in the form of valued social goods,
including improved health, longevity, safety, security, efficiency and productivity, are not
lightly to be dismissed. Accordingly, an important challenge for any society committed to
maintaining the freedom of individuals to pursue their own view of the right and the
good (subject to appropriate constraints) is to identify whether (and if so, on what
conditions) technological measures may be legitimately employed in pursuit of social
goals. This paper makes small steps towards meeting this challenge by examining the use
of ‘design’ as an instrument of regulation. For this purpose, I adopt the cybernetic
understanding of regulation proposed by Christopher Hood and his colleagues, who
identify three components to any control system—a standard-setting element, some
means by which information about the operation of the system can be gathered, and
some provision for modifying behaviour to bring it back within the acceptable limits of
the system's standards.6 Although technology can be used for monitoring (CCTV being the most ubiquitous example in the UK) and sanctioning purposes (such as an electric fence which administers an electric shock whenever an object comes into contact with it), design-based (or 'architectural') instruments are employed at the standard-setting stage of the regulatory cycle, consisting of technical measures intended to influence actions by shaping, structuring or reconfiguring the practical conditions or preconditions for action.7

3 eg RA Duff and S Marshall, 'Benefits, Burdens and Responsibilities: Some Ethical Dimensions of Situational Crime Prevention' in Andrew von Hirsch, David Garland and Alison Wakefield (eds), Ethical and Social Perspectives on Situational Crime Prevention (Hart Publishing, 2000) 17.
4 See Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999).
5 The literature is enormous. Some well-known discussions include President's Council on Bioethics, Beyond Therapy: Biotechnology and the Pursuit of Happiness (Washington, DC, 2003); Francis Fukuyama, Our Posthuman Future (Profile, 2002); Jürgen Habermas, The Future of Human Nature (Polity, 2003); Michael J Sandel, The Case Against Perfection (Harvard University Press, 2007); and Erik Parens (ed), Enhancing Human Traits (Georgetown University Press, 1998).
Although social planners have employed design since ancient times to fashion spaces,
places, artefacts and processes with the aim of discouraging behaviour deemed
undesirable, such as the filling in of burial shafts within Egyptian pyramids to prevent
would-be looters from accessing the treasures locked within, advances in knowledge
in the biological and neurological sciences now mean that design-based regulation is no
longer limited to the design of inanimate objects, but can also be directly embedded into
biological organisms, including plants, animals and human beings.8 For example, in order
to raise levels of industrial production and productivity, plants can be engineered to
provide greater resistance to particular diseases or predators, the growth rate of animals
bred for food production can be accelerated, and psychotropic drugs which enhance
human cognition are now available. Design-based approaches may seek to achieve their
desired aim in a variety of ways. Some aim to alter the surrounding conditions for action
in order to encourage the desired behavioural response (eg speed humps prompt drivers
to reduce their speed in order to avoid damage to their vehicle and discomfort to
passengers). Others seek to mitigate the adverse effects of harm-generating activities (eg air
bags installed in motor vehicles aim to reduce the severity of injuries to occupants that
would otherwise arise). However, the primary focus of this paper is on design-based
instruments that seek to prevent the harm-generating activity from occurring.
Preventative approaches to design may seek either to reduce the likelihood of the
event deemed undesirable or to prevent the undesired action occurring altogether. For
example, motor vehicles can be designed to encourage the wearing of seatbelts by issuing
a warning signal when passengers have not belted up, or they can be designed to prevent
the engine from starting unless all occupants are securely belted.9 Technologies of the
latter kind, which seek to design out all opportunities for regulatees to behave other than in the manner dictated by the technology, can range from the highly sophisticated, such as automated ticket machine readers controlling entry onto station platforms, through to quite simple devices, such as concrete bollards preventing vehicles from entering pedestrian zones. One leading legal scholar uses the term 'techno-regulation' to describe these instruments,10 and I focus upon them as a springboard for my examination. This is not to suggest that other design-based regulatory techniques are unimportant. But it is the action-forcing character of techno-regulation that makes it a particularly powerful form of control, and that appears to pose the most serious threats to individual liberty and moral freedom. By focusing on techno-regulation, a class of design-based instruments that are constructed on the basis of a common approach to their underlying design mechanics, rather than on a particular type of technology or character of the design target, I hope to probe beneath the surface of scholarly concerns about architectural approaches to social control that have been raised in various social and technological contexts.

My analysis begins by briefly examining some of the concerns expressed about the use of design-based approaches in particular contexts by scholars from several disciplines. I will suggest that underlying these apparently disparate concerns is a set of anxieties about the potential of design-based regulation to undermine the social foundations that are necessary for moral agency and responsibility. This leads on to an examination of the way in which techno-regulation alters the social conditions for moral decision-making. I do this by employing a simple thought experiment. I imagine two contrasting regulatory approaches that a road safety authority might employ in order to reduce the number of motor vehicle accidents arising from drivers running through red lights. The first involves traditional legal regulation in the form of a legal prohibition backed by a penal sanction, whilst the second employs automatic braking technology that brings vehicles to a halt when a red light is encountered. The impact of these two measures on three individuals of contrasting (and stylised) moral dispositions is then explored: one individual from each end of the moral spectrum (the bully and the Good Samaritan respectively) and an ordinary fallible individual who generally tries to act morally but occasionally fails to do so. By comparing the implications of techno-regulation with traditional legal regulation, both aimed at pursuing the same regulatory goal, their differential implications for the social foundations of moral decision-making are drawn into sharper focus.

6 Christopher Hood, Henry Rothstein and Robert Baldwin, The Government of Risk (Oxford University Press, 2001) 23.
7 Lee Tien, 'Architectural Regulation and the Evolution of Social Norms' (2004) 9 International Journal of Communications Law & Policy 1, 3.
8 Scholars of Science and Technology Studies (STS) have long recognised the significance of the design of material objects on social behaviour. See eg Madeline Akrich, 'The De-Scription of Technical Objects' in Wiebe Bijker and John Law (eds), Shaping Technology (MIT Press, 1992) 205–24; Jaap Jelsma, 'Innovating for Sustainability: Involving Users, Politics and Technology' (2003) 16 Innovation 103.
9 The use of circumvention techniques by users may blunt or even nullify the effectiveness of techno-regulatory measures. For example, in order to avoid the immobilising effect of seatbelt ignition locks mandated in all new passenger vehicles in 1973, some drivers in the US learned how to disable the ignition lock, while others would fasten their seatbelt before sitting on top of it. Strong public opposition to automatic seatbelt and other safety devices eventually prompted Congress to pass legislation which prohibited the relevant regulatory agency (the National Highway Traffic Safety Administration) from requiring either ignition locks or continuous buzzer warnings for more than eight seconds: see Committee for the Safety Belt Technology Study, Buckling Up: Technologies to Increase Seat Belt Use, Special Report 278 (National Academy of Sciences, Washington, DC, 2004).
10 Roger Brownsword, 'Code, Control, and Choice: Why East is East and West is West' (2005) 25 Legal Studies 1; Roger Brownsword, Rights, Regulation, and the Technological Revolution (Oxford University Press, 2008) 241.

Drawing on the insights of moral and legal philosophy, I will suggest that techno-regulation may
threaten two conceptions of responsibility. First, by rendering the law enforcement process
otiose, it weakens our commitment to one of the most important institutional processes
through which we express our ‘basic responsibility’ as rational beings who can be called
upon to offer an explanation of our reasons for action. Secondly, techno-regulation’s
action-forcing character may erode moral freedom. Should this erosion of moral freedom
be a cause for concern? No, provided that agents continue to have an adequate range of
opportunities to act in ways they judge to be morally correct or desirable. However, it is
the cumulative effect of techno-regulation installed by regulators acting independently
across a wide range of social contexts that might spell moral disaster, if it leads to such a
severe reduction in moral freedom that opportunities for ascriptions of moral
responsibility lose their meaning.
Hence section III of the paper proceeds by considering whether it is possible for an
autonomy-respecting society to reap the benefits of techno-regulation without destroying
the social foundations upon which moral freedom and responsibility rest. Unlike many
critics of architectural forms of regulation, who fear that by embracing design-based
measures we will proceed down the slippery slope towards Brave New World, I offer a
potentially more optimistic scenario. My cautious optimism stems from an understanding
of the relationship between technology and moral responsibility as complex and
contingent, so that the emergence of new, increasingly powerful technologies that could
be used for regulatory purposes does not necessarily lead to an overall diminution in
moral freedom and responsibility. By suggesting that a middle path is possible, I will argue
that moral freedom can be understood as a vital social good which bears the hallmarks
of what is referred to by ‘new institutionalists’ as a common pool resource. Hence
regulators might legitimately utilise techno-regulatory measures that ‘consume’ this
resource if proper care is taken to ensure that the health and vitality of the underlying
resource system (the ‘moral commons’) is maintained. Provided that such measures are
employed ‘sustainably’, then they may be legitimately employed in pursuit of valued social
goods, particularly the prevention of harm.
Once we have obtained a clearer understanding of the moral implications of techno-regulation for our social foundations, we can then attempt to construct a legal framework
which will help to ensure that design-based techniques are utilised legitimately in ways
that both sustain our moral resources and respect constitutional principles. Although
identifying the detailed content and contours of such a framework is beyond the scope
of this paper, it is likely to employ multiple criteria. Accordingly, section IV of the paper
provides a brief sketch of the kinds of criteria that any plausible framework is likely to
encompass, including an assessment of the legitimacy of the state’s intended purpose,
the likely benefits of the technology, whether it conforms to the ethical commitments of
the moral community in question (particularly where those commitments have
constitutional status), and whether the anticipated benefits justify the resulting erosion
of individual liberty and moral freedom. In the concluding section I draw together the
threads of my argument. It must be emphasised that my aim is to open up various lines
of inquiry that may deepen our understanding of this important phenomenon, raising
further questions which require more extensive research and reflection rather than
offering any definitive solutions.
I. TECHNO-REGULATION AND ITS DISCONTENTS
For regulators, the attractions of 'techno-regulation' (design-based instruments that force regulatees to act in the desired manner) can be readily identified: it promises to achieve the desired objective with perfect effectiveness, so that once the technology is installed, no further enforcement action by regulators or cooperation on the part of regulatees is needed. Yet the self-enforcing character of techno-regulation has generated a range of
concerns, some focused on its efficacy and effects, whilst others focus on its potential to
jeopardise important values. In order to achieve its intended ends, techno-regulation
entails complete reliance on the accuracy and precision of the design ‘rules’ thereby
adopted. Given that it is impossible to construct perfect linguistic rules due to the
indeterminacy of language, the infinite variability of the social contexts in which they are
applied, and the plurality of possible ways in which those contexts can be interpreted, it
would be naive to expect that design-engineers could overcome these limitations simply
by translating them into technological form.11 For example, Justice Michael Kirby has
warned that internet filters designed to prohibit access to materials considered ‘harmful
to minors’ may inadvertently prevent access to a range of legitimate materials, such as
lawful erotic material, or discussions about censorship.12 Even if these design challenges
could be overcome, design can have adverse and unanticipated indirect effects. For
example, urban geographers have demonstrated how the design of cities, including
both macro land use patterns and the internal design of buildings, discriminates against
the physically disabled by not accounting for their mobility requirements.13
But it is not the bluntness of design or its practical impact that worries lawyers,
criminologists and applied ethicists. They worry about the threats that techno-regulation
poses to democratic, constitutional and ethical values. Thus, Lawrence Lessig has argued
that the use of architecture to regulate behaviour in cyberspace undermines several
important principles of democratic governance, fearing that state-sponsored code-based regulation may lack transparency and erode public accountability.14 Others worry that architecture erodes accountability by removing, or at least significantly reducing, the extent to which individuals may raise objections to the application of regulatory standards in particular cases by appealing to the discretion and judgement of enforcement officials, whilst locking in inappropriately defined standards.15 Internet scholars have also highlighted how the foundational architecture of the internet profoundly affects the balance of power between governments and the governed, warning that 'closed' architectures (those that leave no room for user creativity and discretion, akin to the action-forcing character of techno-regulation) allow authoritarian and libertarian governments alike to enforce their wills much more easily than they can at present, without the knowledge of their citizens, much less their consent and cooperation.16

11 The classic exposition of the indeterminacy of rules can be found in HLA Hart, The Concept of Law (Clarendon, 1961). For a helpful discussion of the problems with rules in implementing regulatory goals, see Julia Black, Rules and Regulators (Clarendon, 1997). For a discussion of technological design difficulties, see Jonathan Zittrain, The Future of the Internet (Penguin, 2009) 114–15; and Karen Yeung, 'Towards an Understanding of Regulation by Design' in Brownsword and Yeung (n 2) 90–95.
12 Michael Kirby, 'New Frontiers: Regulating Technology by Law and "Code"' in Brownsword and Yeung (n 2) 367.
13 Brendan Gleeson, 'A Place on Earth: Technology, Space, and Disability' (1998) 5 Journal of Urban Technology 97.
Criminologists have also offered critiques of architectural regulation, drawing
attention to moral concerns arising from the re-emergence of so-called ‘situational crime
prevention’ techniques that seek to channel behaviour in ways that reduce the occurrence
of criminal events through the use of situational stimuli to guide conduct towards lawful
outcomes, preferably in ways that are unobtrusive and invisible to those whose conduct
is affected.17 Although some commentators have focused on the impact of such
techniques on criminal behaviour, pointing to the risk that such techniques may displace
crime rather than eliminate it, others worry about the ‘expressive signatures’ associated
with such approaches.18 In particular, the use of architecture to reduce crime can be
interpreted as failing to signify respect for individuals, implying that people are incapable
of responding to appeals to moral reason or exercising self-control and restraint.19 In a
related vein, medical devices designed to prevent medical professionals from harming
patients in specific ways could signal distrust in professionals, depriving them of their
ability to exercise professional judgement with regard to the best means of securing
patient safety in individual cases.20
Fears about the implications of employing design-based approaches to prevent
conduct deemed undesirable (or, put differently, to encourage conduct deemed desirable)
dominate debate among bioethicists concerning the moral acceptability of using
technologies to enhance human capacities. Although most of this discussion is concerned
with the individual use of such technologies, given that attempts by the state to sponsor
and implement technological programmes to ‘improve’ the physical and moral character
of its citizens is now widely viewed as repugnant, Allen Buchanan has recently argued that the quest for economic growth is likely to result in state support for the use of human enhancement technologies that improve industrial productivity, even if not in the overt form of eugenics employed in the USA and many northern European countries (including Sweden, Denmark and Germany) throughout the early twentieth century.21 One claim often made by those opposed to the use of technologies for the purposes of human enhancement, even by mentally competent individuals, is that it entails the cultivation of inauthentic virtues. Underpinning these concerns is a fear that it will result in a loss of moral responsibility: it is the technology, rather than the moral character of the relevant individual, that is responsible for the resulting display of virtue, and hence the individual is not an appropriate candidate for moral praise. For example, in a provocative paper, Savulescu and Sandberg explore the moral acceptability of using so-called 'love drugs' to reduce the risk of infidelity between couples in their post-reproductive years.22 They argue that although individuals who remain faithful to their partners solely through an act of will, unaided by technological enhancements, might be especially commendable, it does not follow that ingesting the drug to reduce the risk of infidelity should be regarded as morally unacceptable.23

14 Lessig (n 4) 135.
15 Yeung (n 11) 79.
16 Zittrain (n 11) 103.
17 David Garland, 'Ideas, Institutions and Situational Crime Prevention' in von Hirsch et al (n 3) 1.
18 Karen Yeung and Mary Dixon-Woods, 'Design-based Regulation and Patient Safety: A Regulatory Studies Perspective' (2010) 71 Social Science and Medicine 502.
19 Duff and Marshall (n 3) 17.
20 Yeung and Dixon-Woods (n 18).
Roger Brownsword’s objections to techno-regulation in general reach further still.
For him, techno-regulation entails more than a loss of moral responsibility; in a direct and
unmediated way it excludes moral responsibility. Even if the demands of good governance
can be satisfactorily accommodated, he worries that techno-regulation strikes at the heart
of fundamental notions of respect and responsibility by forcing individuals to act in the
manner dictated by techno-regulatory design.24 Hence Brownsword worries that within
a techno-regulatory environment, individuals are no longer morally responsible agents to
be credited with acts that respect others and to be blamed where they fall short of moral
requirements.25 His concerns point to the root causes that lie at the heart of the preceding
critiques, by suggesting that techno-regulation threatens the basic foundations that enable
a moral community to exist. While the preceding critiques arise from a wide and diverse
range of social, technological and scholarly domains, they share an underlying anxiety
that the use of design-based approaches for controlling human conduct could threaten
the moral and social foundations to which individual freedom, autonomy and responsibility are anchored.

21 Allen E Buchanan, Beyond Humanity? (Oxford University Press, 2011) 50. For a brief history of eugenics, see Allen E Buchanan, Dan W Brock, Norman Daniels and Daniel Wikler, From Chance to Choice: Genetics and Justice (Cambridge University Press, 2000) 27–60.
22 Julian Savulescu and Anders Sandberg, 'Neuroenhancement of Love and Marriage: The Chemicals Between Us' (2008) 1 Neuroethics 31. They draw upon laboratory findings showing that the insertion of genes and the administration of the chemicals oxytocin and vasopressin can successfully convert monogamous voles into non-monogamous ones and vice versa. Because the same chemicals and genes are present in primates, including humans, it is possible that their physiological effects are similar.
23 Savulescu and Sandberg (n 22) 40.
24 Brownsword 2005 (n 10) 17.
25 Ibid, 18.

Fears expressed by ICT lawyers with regard to code-based controls in
cyberspace can be understood as rooted in concerns about the capacity of citizens, both
individually and collectively, to hold their governors to account. In essence, their
objections rest on an understanding of responsibility as a social practice that is grounded
in a dynamic relationship between individuals and the legal institutions that govern them.
Likewise, worries expressed by criminologists regarding the use of situational crime prevention technologies focus directly on the way in which such approaches betray an understanding of individuals as incapable of bearing responsibility for their actions, rather than as rational beings who can be trusted to display consideration for the interests of others and who are capable of responding to and acting on the basis of moral and prudential reasons. And opponents of human enhancement technologies contend that reliance on technological means to cultivate virtue diminishes individual responsibility for moral virtue.
Taken together, these critiques help us to identify more clearly the basis of our
intuitive horror when encountering Huxley’s Brave New World, where social stability
arises not from individuals exercising moral self-restraint, but because its inhabitants
have been so technologically designed and conditioned that they cannot but act in ways
deemed desirable by their controller. They highlight the need to acquire a better
understanding of the way in which technological forms of regulation affect the social
foundations that are not merely conducive to, but necessary for, moral responsibility.
Accordingly, sections II and III will seek to interrogate the implications of techno-regulation for moral agency and responsibility by examining the ways in which
techno-regulation affects the social foundations for moral decision-making. To do this,
we need to explore what it means to be a moral agent and why moral responsibility
matters, and it is to this task that I first turn.
II. TECHNO-REGULATION AND ITS THREATS TO MORAL AGENCY AND RESPONSIBILITY
Moral Responsibility
To say that an agent is morally responsible for something, say an action, omission or
attitude, is to say that the agent is worthy of a particular kind of reactive attitude—praise,
blame or something akin to these—for having performed it.26 For the purposes of this
exploration, morality is understood as an 'informal public system applying to all rational persons, governing behaviour that affects others, and has the lessening of evil or harm as its goal'.27 Thus, whether a given action is considered morally required, prohibited, permitted or desired will depend upon how that action affects the interests of others. Such ascriptions are only possible and meaningful if individuals enjoy moral freedom: the freedom to make and act upon their own judgement concerning what morality requires or prohibits. Judgements of moral responsibility require, first, that the relevant action is properly understood as having been caused by a moral agent28—that is, one who has the normative competence and ability to act on the basis of moral reasons and be so guided and can thus be held responsible for the action under consideration.29 Second, others must be vulnerable to having their legitimate interests harmed by the action. Both of these conditions must be met before moral agency and moral responsibility can arise. This rather abstract claim can be readily illustrated by imagining a community in which one or other of these conditions is absent. In a world where individuals have no freedom for independent moral judgement and action, they cannot be held morally responsible for any of their actions. Likewise, in a world where individuals are utterly invulnerable to harm, any action taken against them by others, however violent or aggressive, cannot be characterised as morally wrongful.

26 Neil Levy, 'The Good, The Bad and the Blameworthy' (2005) 1 Journal of Ethics & Social Philosophy 4; A Eshleman, 'Moral Responsibility', Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/moral-responsibility (accessed 8 June 2011); Joseph Raz, 'Responsibility and the Negligence Standard' (2010) 30 Oxford Journal of Legal Studies 1, 13–15.
Responsibility can, however, be understood in various senses. We are responsible for
many actions which merit neither praise nor blame. Ascriptions of moral responsibility
can be understood as an expression of what Gardner refers to as an individual's basic responsibility: the ability of an agent to explain herself, to offer an account of herself, at the time she is confronted by her accusers. Basic responsibility is crucial to
our sense of being in the world.30 It is fundamental to our identity as rational agents, that
is, as creatures who act on the basis of reason and who, as individuals, want our lives to
make rational sense, to add up to a story not only of what but also of whys.31 As Isaiah
Berlin put it:
27 Eshleman, ibid.
28 This discussion is concerned with morality in a normative rather than a descriptive sense, as it is understood by secular philosophers rather than more religiously influenced philosophers. This conception of morality conforms to what Raz describes as 'morality in the narrow sense', which 'is meant to include only all those principles which restrict the individual's pursuit of his personal goals and his advancement of his self-interest. It is not "the art of life", ie the precepts instructing people how to live and what makes for a successful, meaningful and worthwhile life.' Joseph Raz, The Morality of Freedom (Oxford University Press, 1986) 213.
29 Stephen J Morse, 'Excusing and the New Excuse Defenses: A Legal and Conceptual Review' (1998) 23 Crime and Justice 331, 344.
30 Raz (n 26) 14.
31 John Gardner, 'The Mark of Responsibility (With a Postscript on Accountability)' in Michael W Dowdle (ed), Public Accountability (Cambridge University Press, 2006) 233.
Can We Employ Design-Based Regulation While Avoiding Brave New World?
When I say that I am rational, at least part of what I mean is that it is my reason that
distinguishes me as a human being from the rest of the world. I wish, above all, to be conscious
of myself as a thinking, willing, active being, bearing responsibility for my choices and able to
explain them by reference to my own ideas and purposes.32
The importance of moral responsibility derives from that of basic responsibility and, to
this extent, it can be understood as a mark of basic responsibility.33
These concepts help us to interrogate techno-regulation’s implications for moral
decision-making. Critical to this exploration is an understanding of a morally responsible
agent as one who acts on the basis of moral reasons, that is, reasons for action that pertain
to the interests of others. Accordingly, the following analysis examines how technoregulation affects the decision-making context in which moral agency operates. It
proceeds by way of a thought experiment, imagining the impact of a simple technoregulatory device aimed at reducing the number of motor vehicle accidents on three
individuals with stylised moral dispositions. By comparing a traditional legal approach,
(ie one that employs legal rules backed by sanctions) with a techno-regulatory approach,
to pursue a collective goal that could conceivably be employed in contemporary
industrialised societies (ie the reduction of motor vehicle accidents and injuries), their
differing implications for moral freedom, moral agency and moral responsibility are
drawn into sharper focus.
The Case of Accident-Prevention Technology
Thus let us imagine that a road safety authority wishes to bring about a reduction in the
number and severity of motor vehicle accidents. Based on convincing evidence that many
accidents are caused by drivers’ inadvertent failure to stop at red signals, the authority is
considering two measures: the first entails passing a law that creates a criminal offence of
failure to stop at a red light; the second entails installing simple traffic management
technology that automatically brings all motor vehicles to a halt on encountering a red
signal. Although the primary purpose of the technology is to prevent drivers
unintentionally running red lights, it also prevents drivers from intentionally running red
lights. The following discussion considers how these two policy measures affect both
moral responsibility and basic responsibility by examining their likely impact on three
individual drivers of contrasting dispositions, whom we can describe as a bully (exemplified
by Alex, the teenage thug in Anthony Burgess’s novel, A Clockwork Orange),34 a good
32 Isaiah Berlin, 'Two Concepts of Liberty' in Henry Hardy and Roger Hausheer (eds), The Proper Study of Mankind (Pimlico, 1998) 191, 202.
33 Gardner (n 31) 233; see Gary Watson, 'Two Faces of Responsibility' in Gary Watson (ed), Agency and Answerability (Clarendon, 2004) 261.
34 Anthony Burgess, A Clockwork Orange (William Heinemann, 1962).
Samaritan (exemplified by Eric, a devout Christian steadfastly endeavouring to do what he considers morally right in the eyes of God on every occasion, as portrayed in
the film Chariots of Fire),35 and an ordinary person (let us call him John, a fairly unexceptional man in full-time employment who enjoys commonplace leisure pursuits such as watching television and drinking beer with friends). For this purpose, it is assumed that all three individuals live in a community
which bears a striking resemblance to that of Britain in 2011.
Legal Prohibitions vs Techno-Regulation
Once it becomes a criminal offence for drivers to run red lights, each driver is legally
obliged to bring his vehicle to a stop when approaching a red light. Whether or not a
driver discharges this legal obligation is a matter for each driver to determine, since each
individual remains entirely free to act in accordance with his own judgement. Since Alex
takes pleasure in driving his car through red lights, he typically ignores the legal
prohibition, preferring to run the risk of being punished for violating the law rather than
curb his reckless behaviour. Legal prohibition has no impact on Alex’s moral freedom or
responsibility, save in so far as the imposition of a legal duty to stop at red lights might
serve to reinforce the moral obligation to do so in ordinary circumstances. In contrast, if
automatic braking technology is introduced, this course of action will no longer be open
to him. Does this mean that Alex then ceases to be a morally responsible agent? Clearly,
Alex’s liberty to act as he wishes has been restricted. Yet he has not lost the capacity to
choose to do right or wrong, for there continue to be a wide range of other means he
could employ to harm other road-users—he could shoot at them with a firearm, hurl
objects at them, or use other physical means of attack. What he cannot now do is run his
car through a red light in order to intimidate and injure them. Alex has also lost the ability
to injure others by a particular means, ie the freedom to use his car as a weapon. But he
retains the choice or ability to harm others. Alex’s reasons for running red lights were
not based on any belief that such activity was morally correct or permissible; he did so
simply because the activity gave him pleasure. So while he might complain that the
technology now prevents him from deriving pleasure from running red lights, he cannot
complain that it has diminished his moral agency. Although it is no longer meaningful to
make a moral judgement about Alex’s conduct when his car stops at red lights, his moral
agency otherwise remains unaffected.36 Alex may no longer be a moral agent in a very narrow and specific sense—that is, in relation to the action of his vehicle when it approaches a red light—but he remains a moral agent for all other purposes.
35 Written by Colin Welland, directed by Hugh Hudson, 1981.
36 I am grateful to Timothy Endicott for helping me to clarify my thoughts on this issue. Cf Brownsword 2005 (n 10).
How do the two measures compare when confronted by the good Samaritan, who
always endeavours to help those in need? Consider a situation in which Eric, whilst
driving, sees an elderly pedestrian collapsed on the pavement who urgently requires
medical attention. Eager to provide assistance, Eric immediately parks his vehicle, picks
up the pedestrian and considers how he can best provide assistance. If a law is introduced
which prohibits drivers from running red lights, Eric remains free to act upon his own
judgement and proceed through red lights when he deems it reasonably safe in order to
get to a hospital as quickly as possible. If he chooses to proceed through red lights, he
thereby exposes himself to the possibility of criminal punishment. However, there are
several points throughout the legal process that provide him with an opportunity to offer
an excuse for his action based on his injured passenger’s overriding need for urgent
medical attention. First, he may seek to persuade the prosecuting authority not to bring
charges against him. If that fails, then he may seek to persuade the court that his action
was justified so that it would be unfair to find him guilty of an offence. If this second
approach is unsuccessful, Eric might nevertheless seek to persuade the court at the
sentencing stage that any punishment should be withheld or nominal in magnitude,
thereby acknowledging the moral propriety of his action. In other words, the legal process
provides a formal system within which a community can call upon individuals to offer
an account of their actions and, as we shall see, this has direct implications for basic
responsibility.37
But if the techno-regulatory approach is adopted, this course of action is no longer
possible. Let us assume that if Eric proceeds to the nearest hospital, his car will
automatically stop at all the red lights en route and he is therefore unlikely to arrive at the
hospital in time to save the pedestrian’s life. In these circumstances, the effect of the
technology is to alter the social context in which Eric must exercise his moral judgement.
The estimated journey time to hospital is likely to be considerably longer than it would
otherwise have been. But Eric still has a range of other options. He might, for example,
attempt to transport the injured pedestrian to a nearby GP surgery (assuming it is closer)
in the hope that a duty doctor might be able to provide urgent medical assistance. Or he
might consider that the pedestrian’s best hope is to call emergency services to attend to
him in his present location. So Eric still has considerable latitude in determining the right
course of action and thus retains some measure of moral freedom. And even if Eric
decides that he should nevertheless attempt to transport the injured pedestrian to hospital (where
the most comprehensive medical assistance would be available), what we cannot do is
blame Eric for failing to get to the hospital more quickly, for he lacked full and direct
control of his vehicle. But we can still evaluate his motivations as morally praiseworthy
in attempting to assist the pedestrian in need of urgent medical care.
37 Gardner (n 31).
Finally, how do the two proposals affect the morality of John’s action? Like other
ordinary people, John generally tries to do the right thing, and this attitude informs his
general approach to driving. He endeavours to respect the rules of the road and therefore
he generally takes action to bring his car to a halt on encountering a red signal. But, like
other ordinary people, John is not infallible. On one or two occasions he has failed to
stop at red signals. Sometimes this is entirely accidental: John's foot slips off the brake so that it fails to engage despite his conscious attempt to apply it, or the sun shines directly into his eyes and he misinterprets the traffic signal
as yellow or green. In these circumstances the legal prohibition will not result in any
change to his conduct, since his intention is to stop at red lights, but he accidentally fails
to implement his intention, either through error, inadvertence or plain bad luck. Like
Eric, John might attempt to persuade the prosecutor to refrain from pressing charges
against him, or seek to persuade the court to find him not guilty of any criminal offence,
on the basis that his violation should be excused.38 In contrast, the technological measure
will make it impossible for John to run red lights unintentionally. Indeed, this is the
paradigm case of accidental action that the technology seeks to prevent.
Does the removal of opportunities for accidental harm have any moral import?
Although the matter is open to debate, it is difficult to argue that John’s moral agency is
adversely affected in any significant sense. John’s explicit intention is to stop at the red
signal. But, like all other humans, he does not have perfect mastery over his physical
movements and the ways in which his intended physical exertions interact with their
surrounding environment. The effect of the technology is to interfere with his physical
agency, but it does not interfere with his moral agency. That said, prior to the introduction
of automatic braking technology, John very occasionally succumbed to the temptation to
proceed through a red light when driving late at night and no other cars or pedestrians
were in view. So by excluding the possibility of running red lights, the installation of
automatic braking technology would remove the temptation to run red lights
intentionally. Hence we could no longer praise John for his law-abiding self-restraint if
we observed him stopping at red signals. To that extent, the technology narrows the scope
for passing moral judgement on John’s actions and reduces opportunities for him to
exercise self-restraint in respecting the rules of the road. But a wealth of other
opportunities would remain for John to engage in morally correct action and cultivate
moral virtue in his daily life.
These thought experiments illustrate how the two regulatory instruments operate and reveal their contrasting moral implications. Traditional regulation in the form of legal
38 The strength of John's excuse will be considerably weaker than the excuse offered by Eric: although John's excuse is based on his genuine attempt to discharge his legal duty (by intending to stop at a red light) and might arguably be regarded as not morally blameworthy, Eric's action was undertaken in order to provide urgent assistance to a person in need to whom he owed no legal duty of care and is therefore clearly worthy of moral praise.
prohibitions backed by sanctions keeps intact each individual's moral agency and capacity
for moral responsibility. While the legal prohibition provides individuals with a reason for
action, each individual retains her freedom to determine whether or not to act on the
basis of those reasons. In all circumstances, both ordinary and exceptional, drivers remain
free to act on the basis of their own moral judgement, in the full knowledge that driving
through red signals amounts to a breach of legal duty for which they may be punished.
However, the process by and through which the law is formally enforced provides various
opportunities for an agent to account for her actions as justified or excused, and thereby
benefit from the favourable exercise of prosecutorial or judicial discretion. In contrast,
techno-regulation operates on an entirely different basis, providing no equivalent
mechanism for calling agents to account for the morality of their actions. Instead, by
technologically preventing drivers from undertaking action deemed undesirable, agents
are deprived of the freedom to determine for themselves whether the excluded action is
the right thing to do.
These scenarios also illustrate how automatic braking technology reduces the extent
to which moral judgements can be made about a driver’s conduct when stopping at red
lights. Within a traffic management system that automatically brings cars to a halt at red
signals, drivers no longer ‘cause’ their vehicle to stop at red signals. They cannot therefore
be regarded as ‘responsible’ for the actions of their vehicle when approaching traffic lights.
But neither Eric, Alex nor John has lost the ability to control himself: except in relation to
his vehicle, the freedom and agency of each is left entirely intact. So each driver’s agency
has been eroded only to the extent that he lacks control over the actions of his vehicle
when approaching red lights. Techno-regulation can also diminish the scope of an agent’s
moral freedom in circumstances where it prevents agents from engaging in action that
they would otherwise deem morally correct or desirable, illustrated by Eric’s assessment
that it would be morally right to run red lights when safe to do so in order to transport
his injured passenger to hospital. But even then, Eric’s moral agency is arguably left intact,
for he still has an adequate range of options that he might pursue in seeking to do the right
thing. The technology has altered the social context in which he must exercise his moral
judgement, but his moral agency is only partially curtailed.
Techno-Regulation’s Implications for Responsibility
By altering the decision-making context in which agents operate, techno-regulation
significantly enhances the security and safety of potential victims, and appears to offer
considerable advantages over its more traditional legal counterpart. But this enhanced
security comes at a price. Something is lost when we turn from law to design as a solution
to social ills. First, it precludes the application of legal enforcement procedures and
thereby bypasses an important institutional mechanism through which our basic
responsibility is expressed. Second, it may reduce the scope of drivers' moral freedom,
at least in circumstances where the technological measure prevents agents pursuing a
course of action that would otherwise be available and which they might in some
circumstances judge to be morally desirable or required. The significance of these losses
is explored more fully in the discussion that follows.
(i) The Erosion of Basic Responsibility
Although Brownsword’s critique is focused on the implications of techno-regulation for
moral responsibility, the preceding hypotheticals demonstrate how techno-regulation may
directly threaten basic responsibility, thereby challenging our self-understanding as
individuals capable of reasoned action and communication. Gardner argues that, in calling people to account through the legal process and in according rights to certain people to bring others to account (the plaintiff in a civil suit, the prosecution in a criminal case), the law provides a powerful institutional mechanism through which a structured explanatory dialogue is fairly and publicly conducted. On Gardner's account, the courtroom
struggle is not merely of instrumental value arising from the due enforcement of law, but
also a site in which the intrinsic value of basic responsibility is instantiated, enabling the
general community to demand an account from the accused person whilst providing the
accused with a formal and public opportunity to offer an account of her action.
In characterising the public trial as a formal process of reasoned communication
between rational beings, Gardner’s analysis reflects, in a deeper, thicker sense, concerns
expressed by ICT lawyers that design-based approaches to regulation deny individuals a
right to appeal against the application of standards embodied in design. These concerns
share common ground with Owen Fiss’s powerful critique of informal dispute settlement
processes. Fiss argues that it is through judicial adjudication that judges, acting on behalf
of the public, interpret, explicate and give force to the values embodied in authoritative
legal texts and bring reality into accord with them.39 When parties to a legal dispute settle
out of court, this communicative dialogue and process of affirmation of the community’s
legal standards is left wanting. In a similar vein, because techno-regulation gives agents no option but to act in the manner dictated by design, behaviour that might otherwise properly be subjected to collective scrutiny simply cannot occur. As a consequence, the community has no occasion
to demand an explanation for that behaviour. In contrast, when conduct is alleged to be
in violation of legal standards, the legal process provides a formal opportunity for the
community to reflect upon, and publicly affirm, its commitment to those standards,
whilst agents are given an opportunity to explain the reasons motivating their action.
How we evaluate the seriousness of this loss associated with a turn to techno-regulation
is beyond the scope of this paper. I suspect, however, that this question cannot meaningfully be answered in the abstract, divorced from a particular social context in which the
39 Owen M Fiss, 'Against Settlement' (1984) 93 Yale Law Journal 1073.
law enforcement process is avoided. Rather, the following discussion focuses on the loss
of moral freedom that a turn towards design-based forms of regulation may entail.
(ii) The Erosion of Moral Responsibility
By restricting an agent’s freedom to pursue of a course of action that she would otherwise
have judged morally desirable or required, techno-regulation may undermine moral
freedom and responsibility. It is important to emphasise, however, that the mere fact that
a technological measure restricts certain kinds of action and thereby restricts general
liberty does not necessarily entail a reduction in moral freedom and responsibility. An
automatic cut-off switch built into electric lawnmowers to prevent the blades activating
unless continual pressure is applied to the handle does not erode the user’s moral freedom
or responsibility. It is only in circumstances where the actions closed off by techno-regulation could plausibly be regarded as morally desirable or required that moral
freedom is restricted. However, as Eric’s case demonstrates, a technological measure that
simply closes off action that an agent would otherwise deem morally appropriate need not
jeopardise moral agency if individuals have an adequate range of other opportunities to
act in ways they judge morally desirable or required. In other words, the erosion of moral
freedom arising from a single techno-regulatory intervention viewed in isolation may be
relatively minor and pose no serious threat to moral agency.
But techno-regulatory measures do not, in practice, stand alone. If there is a wholesale
shift in favour of techno-regulatory means for implementing public policies, their
aggregate effects could be enormous. So, while we might not object to the installation of
automatic locking systems that employ DNA analysis and verification before authorising
entry to the nation’s intelligence-gathering headquarters, we may have serious
reservations about plans to roll out these security measures to safeguard access to all
government offices and facilities. The feared scenario is ‘death by a thousand cuts’ rather
than a single catastrophic event. If the wholesale adoption of techno-regulation results in
such a significant reduction in moral freedom that regulatees no longer have an adequate
range of alternative opportunities for moral action, then they can no longer be considered
moral agents. By profoundly altering the social setting in which we demand (require)
certain conduct from one another and respond adversely to one another’s failure to
comply with these demands, a systematic and widespread shift toward techno-regulation
could, in theory, seriously threaten the social foundations that are necessary for moral
agency and freedom to exist.40 Such a community may be safe and stable, but, like
Huxley’s Brave New World, it cannot be described as a moral community. Is it possible for
a community to employ techno-regulation whilst preserving its moral foundations? In the
following section, I will suggest that the erosion of moral freedom wrought by techno-regulation need not lead to moral collapse if it is employed 'sustainably', in ways that enable the community to benefit from the social goods offered by techno-regulation without unacceptably eroding its moral foundations. In making this claim, I draw upon the insights of a body of social scientific research that is concerned with understanding the problems of collective action that arise in seeking to preserve and maintain natural resources that may be jeopardised by excessive human exploitation. I will suggest that environments in which agents enjoy an adequate range of opportunities for moral action can be understood as a social resource that is, like many natural resources, vulnerable to degradation due to problems of collective action. The work of new institutional theorists suggests that, in certain circumstances, problems of collective action may be resolved through concerted human action, and it is on the strength of their work that I suggest that the moral risks which techno-regulation poses need not undermine our moral foundations if it is employed appropriately. Accordingly, it is to problems of natural resource degradation and management that I first turn.
40 Watson (n 33).
III. SUSTAINING MORAL FREEDOM AS A COLLECTIVE GOOD
The Tragedy of the Commons
Contemporary analyses of natural resource depletion problems have been significantly
shaped by the ‘tragedy of the commons’ metaphor employed by ecologist Garrett Hardin
in his discussion of human population growth, and it provides a helpful starting point for
understanding the problems that arise from the erosion of moral freedom wrought by
techno-regulation. Hardin employed the metaphor of a grazing commons, a pasture that
is open to all and from which no one can be excluded.41 Each herder receives a direct
benefit from grazing his animals on the commons but only bears a share of the cost
resulting from overgrazing. Accordingly, each herder, acting rationally, will graze as many
of his animals as possible, without regard to the degradation thereby caused. The
cumulative impact of each herder’s actions on the commons produces ‘tragic’ results:
Each man is locked into a system that compels him to increase his herd without limit—in a
world that is limited. Ruin is the destination toward which all men rush, each pursuing his
own best interests in a society that believes in the freedom of the commons.42
The so-called ‘commons dilemma’ arises because people’s short-term selfish interests
conflict with long-term group interests and the common good. It exemplifies a problem
Hardin’s Tragedy of the Commons framework was subsequently conceptualised as a ‘prisoner’s dilemma’:
RM Dawes, ‘Formal Models of Dilemmas in Social Decision Making’ in MF Kaplan and S Schwartz (eds),
Human Judgment and Decision Processes: Formal and Mathematical Approaches (Academic Press, 1975).
42 Garrett Hardin, ‘The Tragedy of the Commons’ (1968) 162 Science 1243, 1244.
41
of collective action: such problems occur when the combined effect of individuals’
rational action leads to outcomes that are sub-optimal from the perspective of their
common welfare. Although problems of collective action were identified as early as
Aristotle,43 several contemporary models for analysing and resolving collective action
problems have been offered by social scientists of various stripes.44 The tragedy of the
commons generates a collective action problem owing to the special characteristics of the
relevant resource, referred to in the more technical literature as ‘common pool
resources’.45 Unlike ordinary goods, it is either impossible or extremely costly to exclude
others from access to the good.46 Common pool resources are not, however, ‘pure public
goods’ because consumption of the pooled resource is rivalrous. Because a common pool
resource system generates a finite supply of resource units, one person’s appropriation
from the pool diminishes the quantity of resource units available to others, slightly
degrading the overall health and quality of the resource system and thus exposing it to
problems of overcrowding or overuse. So, for example, while information is a pure public
good—your use of my chocolate cake recipe does not prevent me (or anyone else) from using it—the ocean is a common pool resource: by extracting fish from the ocean, I
diminish the number of fish available to others.
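The payoff logic underlying Hardin's tragedy—each herder captures the whole benefit of an additional animal while bearing only his share of the degradation cost—can be sketched in a toy calculation. All figures below (ten herders, a benefit of 1 and a shared cost of 3 per animal) are illustrative assumptions of mine, not numbers drawn from Hardin:

```python
# Toy sketch of Hardin's commons dilemma. The parameter values are
# hypothetical, chosen only to satisfy the condition C/N < B < C under
# which adding an animal is individually rational but collectively ruinous.

N = 10        # number of herders sharing the pasture
B = 1.0       # private benefit to a herder of grazing one more animal
C = 3.0       # total degradation cost that one more animal imposes, shared by all N

private_net = B - C / N   # what the deciding herder gains per extra animal
social_net = B - C        # what the community as a whole gains per extra animal

print(f"private net payoff per extra animal: {private_net:+.2f}")
print(f"social net payoff per extra animal:  {social_net:+.2f}")

# Because the private payoff stays positive, each herder keeps expanding his
# herd; because the social payoff is negative, every addition lowers total
# welfare. Capped at 50 extra animals for illustration.
herd, welfare = 0, 0.0
while private_net > 0 and herd < 50:
    herd += 1
    welfare += social_net
print(f"after {herd} extra animals, collective welfare has fallen by {-welfare:.1f}")
```

With these assumed numbers the private payoff per animal is +0.70 while the social payoff is −2.00, so each herder rationally adds animals even though every addition makes the group worse off: the arithmetic core of the 'tragedy'.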
Hardin’s analysis portrayed users of common pool resources as locked in a dilemma
from which they could not extract themselves, so that some kind of external, centralised
intervention was needed to avoid the tragedy of overuse or destruction. However, the
pioneering work of Elinor Ostrom and her colleagues (sometimes referred to as ‘new
institutionalists’)47 has demonstrated, drawing upon careful empirical observation as the
basis for theoretical development, that, contrary to conventional analysis, the social
dilemma generated in commons contexts need not lead to tragedy and is capable of
resolution that avoids overuse or destruction. In particular, the new institutionalists
demonstrate that the human-environment interaction which is part of the commons
problem is open to reflection and deliberation and, at least in certain conditions, can be
influenced and changed in order to sustain the shared resource system for the benefit of
all users.48
43 Aristotle, Politics, Bk II, ch 3.
44 Mancur Olson, The Logic of Collective Action: Public Goods and the Theory of Groups (Harvard University Press, 1965).
45 Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990).
46 Elinor Ostrom, Roy Gardner and James Walker, Rules, Games and Common-Pool Resources (University of Michigan Press, 1994).
47 James G March and Johan P Olsen, 'The New Institutionalism' (1984) 78 American Political Science Review 734; P Cammack, 'The New Institutionalism' (1992) 21 Economy and Society 397. For a helpful outline of various strands of new institutionalism, see Julia Black, 'New Institutionalism and Naturalism in Socio-Legal Analysis: Institutionalist Approaches to Regulatory Decision Making' (1997) 19 Law and Policy 51.
48 Ostrom (n 45).
Although Hardin’s tragedy of the commons theory was concerned with the
preservation of natural resources, it has also been applied to problems concerning the
management and governance of social and political resources, including human-made
structures such as irrigation systems, bridges and highways, more abstract socio-technical
resources such as radio frequencies, computer-processing units available on mainframe
computers, budget allocations of government and corporate treasuries,49 the findings of
scientific research undertaken at universities, and, more recently, socially constructed
cultural resources50 (including ideational resources available on the internet),51 public
utility infrastructure,52 and even the public-interested policies of regulatory agencies.53 In
the following discussion, I argue that moral freedom can be understood as a common
pool resource that is vulnerable to degradation and erosion from techno-regulatory
measures. Accordingly, the work of the new institutionalists can help us to understand the
nature and extent of the problem of moral erosion, and to identify whether (and if so
how) such problems might be resolved.
The ‘Moral Commons’ as a Common Pool Resource System
In order to apply conventional common pool resource theory to the problem of moral
erosion, it is first necessary to demonstrate that the moral freedom which techno-regulation may endanger can be characterised as a common pool resource. For this
purpose, we must identify and understand the key variables that contribute to the health,
viability and dynamics that occur within the relevant common pool resource system,
including: the relevant ‘resource system’; the ‘resource units’ which are generated by the
resource system; the ways in which resource units are ‘appropriated’ when resource units
are withdrawn from the common pool; and whether any ‘providers’ (those who arrange
for the provision of the relevant resource system) can be identified.54 In the present
context, the moral freedom available in an autonomy-respecting community can be
understood as the relevant resource, providing the space in which moral agency and
responsibility are both possible and practically realisable. While it is rather awkward to
refer to moral freedom in terms of ‘units’, in the way that we might refer to natural
49 William Blomquist and Elinor Ostrom, ‘Institutional Capacity and the Resolution of a Commons Dilemma’ (1985) 5 Policy Studies Review 383.
50 Michael J Madison, Brett M Frischmann and Katherine J Strandburg, ‘Constructing the Commons in the Cultural Environment’ (2010) 95 Cornell Law Review 657.
51 John Cahir, ‘The Withering Away of Property: The Rise of the Internet Information Commons’ (2004) 24 Oxford Journal of Legal Studies 619.
52 R Kunneke and M Finger, ‘The Governance of Infrastructures as Common Pool Resources’, unpublished paper presented at the Fourth Workshop on the Workshop (WOW4), Bloomington, 2009.
53 William W Buzbee, ‘Recognising the Regulatory Commons: A Theory of Regulatory Gaps’ (2003) 89 Iowa Law Review 1.
54 See Ostrom (n 45) ch 2.
resources such as fish, game or atmospheric gases, the extent to which moral freedom is
present in any community is typically a matter of degree: just as communities differ in the
extent to which public architecture and technology permit members to enjoy general
freedom to act as they wish, so they also vary in the extent to which technology allows and
enables individuals to enjoy moral freedom to act in accordance with their own assessment
of what morality requires or prohibits.
Unless some minimum threshold of moral freedom is available within the community,
providing agents with an adequate range of opportunities for moral action, it cannot
be a moral community. As Watson puts it, ‘holding people responsible is not just a matter
of the relation of an individual to her behaviour, it also involves a social setting in which
we demand (require) certain conduct from one another and respond adversely to one
another’s failure to comply with these demands’.55 Where this minimum threshold of
moral freedom obtains, that is, in social settings capable of sustaining meaningful moral
agency and responsibility, the socio-technical foundations of that community can be
understood as a common pool resource system—a ‘moral commons’ which provides
space for moral action and interaction that sustains the moral community to which it
gives life.56 Unlike naturally occurring common pool resource systems, such as water
reservoirs, wildlife reserves and public grazing commons, the moral commons is a socially
constructed resource system, ultimately grounded in the bond of trust between both the
governors and the community they govern, and between individual members of the
community, but whose size, shape and quality are affected by the dynamic and complex
socio-technical interaction that unfolds as new technologies are taken up and applied.
The resource unit thereby generated consists of the moral freedom that individuals
share jointly with others when acting in accordance with their own moral judgement.
An agent’s enjoyment of moral freedom does not, however, have any adverse effect on
the underlying moral commons, for it does not detract from any other member’s
enjoyment of their common moral freedom. Just as individuals who keep to the paved
footpath that crosses a manicured lawn can enjoy its delights without degrading its
quality, individuals who exercise their moral agency do no damage to the moral commons
that they inhabit. Even when individuals engage in morally wrongful behaviour, such
actions do not thereby reduce the opportunities for moral action available to others: for
example, Alex’s refusal to obey red signals does not prevent any other driver from obeying them. From a regulator’s perspective, however, the moral freedom available within a moral
commons is a potential resource upon which it could draw in pursuing its regulatory
goals. So, for example, a highway authority might wish to introduce the automatic braking
technology referred to above in order to enhance road safety. While such a measure might
55 Watson (n 33).
56 Note that moral freedom is not a ‘public good’ in the economic sense, for this requires two conditions: non-excludability and non-rivalrous consumption. Although the first condition applies to the moral commons, the second does not, since there is a limit to the total amount of consumption of the good.
reduce road traffic injuries caused by drivers accidentally failing to stop at red signals, it
does so by reducing the liberty of drivers in ways that could, as Eric’s case demonstrates,
erode their moral freedom and degrade the quality of the underlying moral commons.
A ‘commons dilemma’ arises because, although a small degradation of the moral
commons may not be a serious matter when considered in isolation, the aggregate effect
of a series of small erosions may lead to its long-term degradation and even destruction.
However, if common pool resources are utilised ‘sustainably’ so that the rate at which
resource units are appropriated does not exceed the rate at which they are regenerated,
then the overall health of the resource system can be preserved for the benefit of all users
and the common good.57 In the context of the moral commons, this means that, as some technologies are employed in ways that restrict moral freedom, existing restrictive technologies must be abandoned, or new technological applications must emerge, in ways that expand moral freedom and enrich the moral commons. The most straightforward cases of
regulatory action that extend the range of opportunities for moral action (thereby
expanding the moral commons) occur when action is taken to dismantle or disable
existing techno-regulatory measures. For example, just as the removal of ticket inspectors
from the entrance to station platforms enhances the moral freedom of individuals, who
thereby acquire greater freedom to board trains without a ticket, so too does the removal
of automated entrance barriers to train platforms that do not permit entry without a
valid ticket. However, the scope and vitality of the moral commons is affected not only
by the action-forcing technology of regulators, but also by a wide range of social, political
and technical variables, including the adoption of technology by private actors, all of
which may affect the nature and extent of moral freedom available within any given
community in complex and often unpredictable ways.
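The sustainability condition underlying this dilemma (that the rate at which resource units are appropriated must not exceed the rate at which they are regenerated) can be made concrete with a deliberately simple numerical sketch. The following toy model is purely illustrative: the function and all parameter values are invented for the example and measure nothing about any actual commons.

```python
# Toy model of a common pool resource: each period, appropriation draws the
# stock down while the system regenerates at a fixed rate. All numbers are
# illustrative only; the stock is floored at zero once exhausted.

def simulate(stock, appropriation_per_period, regeneration_per_period, periods):
    """Return the resource stock recorded after each period."""
    history = []
    for _ in range(periods):
        stock = max(0.0, stock - appropriation_per_period + regeneration_per_period)
        history.append(stock)
    return history

# Sustainable use: appropriation does not exceed regeneration,
# so the stock never falls below its starting level.
sustainable = simulate(stock=100.0, appropriation_per_period=2.0,
                       regeneration_per_period=2.0, periods=50)

# Unsustainable use: each appropriation is only slightly larger (2.5 drawn
# against 2.0 regenerated), yet the aggregate effect exhausts the stock.
unsustainable = simulate(stock=100.0, appropriation_per_period=2.5,
                         regeneration_per_period=2.0, periods=500)

print(sustainable[-1])    # stock preserved: 100.0
print(unsustainable[-1])  # stock exhausted: 0.0
```

On these illustrative numbers, the sustainable pattern preserves the stock indefinitely, while the unsustainable pattern, despite each individual appropriation being only marginally larger, eventually destroys the resource: the aggregate effect of a series of small erosions.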
For example, the development and availability of the automobile to the general
population greatly enhanced individuals’ freedom of action, providing a radical new,
highly flexible and individualised form of transport. Although this had a profoundly
liberating and expansive impact on the general liberty of individuals, it also opened up
new opportunities for causing harm to other road users, eventually resulting in the
passage of laws to regulate the competence of drivers and to impose new legal obligations on them.58 A similarly radical, albeit rather different, dynamic can be seen in
the development of the internet, which vastly extended individual freedom of action,
enabling those with access to the internet to interact in new, unexpected and powerful
ways. But it also enabled the creation of new forms of wrongdoing, providing new ways
in which individuals and communities might be harmed or exploited. It also provided a
new vehicle through which the state could exert control over its citizens, including
surveillance of their movements in cyberspace and opening up new opportunities for
57 Ostrom (n 45).
58 See Susan Brenner, Law in an Era of ‘Smart’ Technology (Oxford University Press, 2007) ch 5.
state censorship through the operation of filtering and blocking software. In other words,
the emergence of the internet radically altered the contours of the moral commons,
significantly expanding its size in many respects, but also allowing for the possibility of
significant shrinkage in others. This is not to assert that all new technologies affect the
moral commons simply by extending the range of actions and opportunities that become
available for individual use and enjoyment. So, for example, although the creation of a
new kind of ornamental plant variety might provide new opportunities for gardening
enthusiasts to augment their gardens, it has no effect on the range of opportunities for
moral action. In other words, while the emergence of new technologies may enhance
general liberty, it is only when such technologies can be applied in ways that may harm
the interests of others that the scope of moral freedom is affected.
These illustrations suggest that moral freedom can be understood as a renewable
resource, at least to the extent that alterations to the moral commons may arise from
actions and events that expand, rather than restrict, opportunities for moral action.
Accordingly, if the complex socio-technical environment and political culture in which
the moral community operates is able to facilitate regeneration of the moral commons,
then it is at least theoretically possible to imagine that it could be utilised sustainably in
ways that permit the adoption of techno-regulatory instruments without permanently
damaging its foundations. Hence the challenges facing a moral community that wishes
to reap the potential benefits of techno-regulation without jeopardising its moral and
political foundations are twofold. The first is to identify which techno-regulatory
measures should be granted the ‘right to graze’ upon the moral commons. Here the
challenge is to construct a principled basis for identifying which individual techno-regulatory measures qualify as morally legitimate. The second challenge is to devise and
maintain institutional mechanisms for making and enforcing decisions that will ensure
that the moral commons is utilised sustainably in ways that avoid its degradation and
over-exploitation. Although detailed exploration of these two tasks is beyond the scope
of this paper, I will offer some brief reflections on the first challenge in order to highlight
the importance of law’s contribution.
IV. WHICH DESIGN-BASED REGULATORY
MEASURES QUALIFY FOR ADMISSION?
If the moral commons is to be utilised sustainably, then some kind of ‘admissions policy’
is required so that only design-based regulatory proposals that qualify as legitimate can
gain access. Once the moral commons is understood as a vital resource system that is
vulnerable to over-exploitation, the need to utilise it wisely and sustainably becomes
apparent. Hence only measures which offer substantial social benefits will qualify for
admission, and even then, some might nevertheless be excluded if the diminution of
moral freedom that their implementation entails is considered excessive. As the following
sketch demonstrates, these assessments are likely to raise difficult questions of degree for
which there will be no precise, objective measure. But the impossibility of making precise
evaluations does not detract from the importance and value of engaging in public and
reasoned evaluation, and there are likely to be some areas in which core guiding principles
can be identified. So, for example, some design-based measures can be readily dismissed
as illegitimate in seeking to further purposes which are clearly improper. Winner famously argues that the low-hanging overpasses on Long Island in New York were designed by Robert Moses to prevent blacks, as the predominant users of buses, from gaining access to Jones Beach: the buses were too tall to pass under the overpasses, unlike the cars largely owned by whites.59 While the discriminatory impact of the Long Island overpasses might not have been intentional,60 some more recent technological measures are
employed for the explicit purpose of adversely discriminating against particular social
groups, such as the ‘mosquito’, a device used by some property owners to deter the
presence of teenagers by sending out a high-pitched buzzing sound that only young
people can hear, causing them considerable irritation and discomfort.61
By contrast, measures intended to prevent harm to others appear to qualify as socially
valuable in a direct and straightforward manner, particularly those aimed at preventing
harm to human health and safety. If the risk and severity of the harm thereby prevented
are considerable, and the resulting erosion to moral freedom is relatively minor, then the
relevant measure would readily qualify as legitimate and thus worth adopting. Consider,
for example, the introduction of raised footpaths alongside roads to provide pedestrians
with safe passage by preventing vehicles from occupying the three feet of highway that
would otherwise be accessible. Although raised footpaths restrict drivers’ freedom of
action and might, in exceptional circumstances (such as those faced by Eric and other
good samaritans), reduce their freedom to act in accordance with their moral judgement,
such a measure substantially enhances the safety and security of pedestrians (although
cars can, of course, still mount the pavement, but this becomes considerably more
difficult). No one would plausibly suggest that a local authority that installs raised
pavements is guilty of unacceptably threatening the foundations of a moral community.
Indeed, a local authority which failed to install simple protective architectural measures
where there was an established history of injury to pedestrians from motor vehicle
accidents might be vulnerable to claims of failing to discharge its legal duty of care to
road users.62 Similarly, the installation of pedestrian zones in busy city centres aimed at
59 Langdon Winner, ‘Do Artifacts Have Politics?’ (1980) 109 Daedalus 123.
60 See Bernward Joerges, ‘Do Politics Have Artefacts?’ (1999) 29 Social Studies of Science 411.
61 The UK Children’s Commissioner has called for a ban on the device: see ‘Kids Commissioner Calls for Ban on Mosquito, Ultra-Sonic Anti-Teen Device’ The Times, 12 February 2008.
62 cf Stovin v Wise [1996] AC 923.
creating a safer environment for pedestrians utilising city centre amenities is unlikely to
be considered morally problematic.
While the raising of pavements or the pedestrianisation of city centres provide simple
and effective design-based solutions to the problem of pedestrian safety, it would be
unwise to regard them as typical. Part of their simplicity and appeal lies in the strong
consensus that surrounds the nature and severity of the harm which the technology aims
to prevent, the fairly trivial loss of moral freedom that they entail, and the absence of any
tangible harm associated with their use. Although motorists might complain about the
inconvenience associated with the loss of the three feet of highway that would otherwise
have been available to them (in the case of raised pavements) or the loss of direct vehicular
access through the city centre (in the case of pedestrianised city centres), no reasonable
person would regard such inconveniences as seriously harmful to the interests or well-being of drivers. But as technology becomes increasingly powerful and sophisticated,
moral disagreement concerning what counts as harm is likely to be the norm rather than
the exception.63 Consider, for example, a proposal intended to protect potential victims
of sexual assault through the ‘treatment’ of ‘high risk’ individuals (ie those considered to
possess a predisposition for engaging in unwanted sexually aggressive behaviour) using
libidinal suppressants (‘chemical castration’), thereby eliminating their sexual desire and
function.64 Although there is no doubt that the harm to victims of sexual assault is very
serious indeed, a technological fix of this nature is likely to provoke claims that the
proposed treatment is seriously harmful and therefore illegitimate. The way in which the
relevant harm is characterised will depend, however, upon one’s ethical outlook. So, for
example, supporters of human rights would reject any such measure on the basis that it
involves a serious violation of an individual’s rights unless full and informed consent has
been provided.65 For those who adhere to the view that human dignity should never be
compromised, even informed consent might not suffice if the administration of
libidinal suppressants is seen as instrumentalising the person and thereby undermining
human dignity.66 In contrast, utilitarians might argue that such a measure is, on balance,
socially valuable because the diminution of pleasure and well-being experienced by the
technologically altered individuals is outweighed by the avoidance of pain and suffering
of potential victims.
Even if a particular measure can successfully avoid the charge that it causes harm, its
legitimacy will also depend on establishing that the resulting moral risk occasioned by the
63 For a discussion of competing interpretations of the harm principle see Roger Brownsword, ‘Cloning, Zoning and the Harm Principle’ in Sheila AM McLean (ed), First Do No Harm (Ashgate, 2005).
64 See K Harrison and B Rainey, ‘Suppressing Human Rights? A Rights-Based Approach to the Use of Pharmacotherapy with Sex Offenders’ (2009) 29 Legal Studies 47.
65 Ibid. For ECHR purposes, the relevant rights are likely to include the right to respect for family and private life (Art 8), the right to found a family (Art 12), and the right to freedom from torture, inhuman or degrading treatment (Art 3).
66 Ibid.
technology is justified by its benefits. Some form of ‘mediating principle’ is therefore
required so that the expected benefits of the measure can be traded off and evaluated
against its attendant moral risks. If the degradation of the moral commons is taken
seriously, then a simple cost-benefit principle will not suffice, for it fails to give due weight
to the fundamental importance of our moral foundations. On this view, the adoption of
a proportionality principle, or even a precautionary approach, might be considered more
suitable, although neither is problem-free.67 Thus, even in circumstances where there is
broad consensus that a particular design-based measure is intended to alleviate a clearly
identified harm, and is not itself harmful, the resulting benefit must be of sufficient
magnitude and importance to warrant granting the measure access to ‘graze’ upon the moral
commons. How to determine this magnitude and importance is a matter for serious
debate among regulatory scholars, legal philosophers and public lawyers. Rather than a
legal formula, this will require a thicker legal principle that takes proper account of the
foundational importance of moral freedom and responsibility, and how these are legally
and politically embedded in the fabric of our social institutions and governance
mechanisms.
CLOSING REFLECTIONS
As our technological knowledge and power advances, we can expect regulators to turn to
design-based measures to tackle social problems. Although various scholars have
identified ways in which design-based policy instruments might undermine democratic
and moral values, this paper has examined how ‘techno-regulation’, design-based controls
that are configured such that regulatees have no choice but to act in accordance with the
desired regulatory pattern, may threaten the social foundations necessary to ground basic
responsibility and its expression in the form of moral responsibility. In so doing, it draws
together critiques from various disciplines concerned with the legitimacy of employing
technological forms of control in specific contexts as well as contributing to a body of
literature concerned with the legitimacy of regulatory decisions which has hitherto tended
to focus on the substantive decisions and goals of regulatory officials rather than the
means by which those goals are pursued.68 Using accident prevention technology as an
67 The literature in relation to both is extensive and adopts multiple discourses, not all of which are germane to an analysis of moral legitimacy. See eg Per Sandin, Martin Peterson, Sven Ove Hansson, Christina Rudén and André Juthe, ‘Five Charges Against the Precautionary Principle’ (2002) 5 Journal of Risk Research 287; Alec Stone Sweet and Jud Mathews, ‘Proportionality Balancing and Global Constitutionalism’ (2008) 47 Columbia Journal of Transnational Law 73. Grappling with the problems that these principles raise in relation to the purposes of this paper is a significant and separate task.
68 eg Giandomenico Majone, ‘The Regulatory State and its Legitimacy Problems’ (1999) 22 West European Politics 1; Fritz Scharpf, Governing in Europe: Effective and Democratic (Oxford University Press, 1998); Cosmo Graham, ‘Is there a Crisis in Regulatory Accountability?’ in Robert Baldwin, Colin Scott and Christopher Hood (eds), A Reader on Regulation (Oxford University Press, 1998).
illustration, I have argued that, although techno-regulation does not necessarily compromise moral agency, it can erode moral freedom. Although an isolated techno-regulatory
measure may appear benign, the cumulative effect of techno-regulatory action by a range
of regulators acting independently across a variety of social contexts might ultimately
lead to such a significant erosion of moral freedom that meaningful moral agency can no
longer be sustained.
By drawing on ‘tragedy of the commons’ theory, I have argued that it is at least
theoretically possible for a community to enjoy the benefits of techno-regulation without
destroying the social foundations upon which moral freedom and responsibility rest.
Because the foundations of our moral environment can be understood as a collective
common pool resource system, regulators can legitimately draw upon the moral freedom
sustained by the underlying moral commons in pursuing the benefits of techno-regulation, provided that care is taken to ensure that the health and vitality of the
commons is maintained. In other words, if techno-regulatory instruments are implemented ‘sustainably’, then they can be employed in pursuit of valued social goods,
particularly the prevention of harm, without jeopardising our moral foundations.
My argument is dependent, however, upon establishing and maintaining a workable
framework of principles, supported by appropriate institutional arrangements, that serve
to safeguard the health of the moral commons, including its sustainable use. Such
arrangements will need to provide a means for principled scrutiny of design-based
measures in order to assess whether they can lay claim to legitimacy. Although I have not
attempted to identify the scope and content of these principles, I have outlined various
considerations that any acceptable framework will need to address. These include
identifying whether the purpose of the measure is legitimate, evaluating whether the
anticipated benefits are of sufficient magnitude to justify the erosion of moral freedom
according to a suitable mediating principle, and assessing whether the measure meets the
ethical and democratic standards of the particular moral community in question. So, for
example, moral communities committed to human rights will need assurance that the
design-based measure will operate in ways that respect those rights.69 Although some
measures readily qualify as legitimate, such as the installation of raised pavements to
ensure the safe passage of pedestrians, as design-based regulatory measures become
increasingly powerful and sophisticated in line with our advancing technological
expertise, controversy and contestation are likely to become common and deeply felt. I
envisage that the relationship between this framework and design-based regulatory
instruments would be similar to that which arises between the rule of law and legal rules
and institutions. Although there is considerable disagreement as to the precise content and
69 According to Brownsword’s proposed analytical framework, an assessment of the moral legitimacy of a regulator’s approach to regulating consists of two stages: (1) Does it threaten the moral commons? (2) Does it threaten a specific community’s moral values (human rights and human dignity)? This paper addresses the first stage of the assessment: see Brownsword 2008 (n 10).
function of the rule of law, there is nonetheless strong consensus about certain core
requirements such that laws that fail to conform to them cannot lay claim to legitimacy.
In a similar vein, design-based instruments, including action-forcing design, which fail to
conform to a ‘rule of design’ cannot lay claim to legitimacy, despite their effectiveness in
bringing about some collectively desired outcome.
My argument should not, however, be interpreted as suggesting that design-based
approaches to regulation, including techno-regulation, are invariably a good idea. In many
contexts, agent-based approaches may be more effective and legitimate than systems-based approaches, which often include design-based measures.70 Consider, for example,
a traffic management experiment in the Dutch city of Drachten, in which roads serving
45,000 people are ‘verkeersbordvrij’: free of nearly all road signs.71 Drachten is one of
several European test sites for a traffic planning approach called ‘unsafe is safe’. The city
has removed its traffic signs, parking meters, and even parking spaces. The only rules are
that drivers should yield to those on their right at an intersection, and that parked cars
blocking others will be towed. The result so far is, apparently, counterintuitive—a
dramatic improvement in road safety. Without signs to obey mechanically, people are
forced to drive more mindfully—operating their cars with more care and attention to
the surrounding circumstances. They communicate more with pedestrians, cyclists and
other drivers using hand signals and eye contact. They see other drivers rather than other
cars.
The Drachten experiment is useful in alerting regulators to the possibility of seeking
modification of techno-regulatory proposals in ways that preserve scope for agent
discretion, allowing many of the intended benefits to be reaped without removing an
agent’s freedom of action. Such a strategy might be particularly valuable in circumstances
where there is reasonable disagreement about the right standard or the right action. So,
for example, the automated braking facility installed in motor vehicles could allow
the driver the freedom to disable the automatic safety mechanism. If such an override
facility were available to Alex, Eric and John, this would enable them to run red lights
intentionally, allowing scope for Alex to use his car as an instrument of intimidation,
enabling Eric to use his car to help save his passenger’s life, and providing opportunities
for John to exercise moral self-restraint. In this way, users retain the freedom to engage
in intentional action—whether it be judged as morally wrongful, permissible or required.
The resulting intervention would, of course, no longer qualify as ‘techno-regulation’ as
drivers are no longer compelled to conform to the technologically designed standard. A
safety override facility could also help to sharpen rather than blunt moral judgement, by
rendering the line between unintended and intended action more visible. In order to
70 Richard Ashcroft, ‘The Ethics of Governance of Medical Research: What does Regulation have to do with Morality?’ (2003) 1 New Review of Bioethics 41.
71 I have taken this example from Zittrain (n 11) ch 6 (‘The Lessons of Wikipedia’).
deactivate the safety device, discrete, conscious action on the part of the user is required,
leaving a clear evidential trail and thereby precluding the user from claiming that her
action was unintended. Thus, if Alex chooses to override the automatic braking facility
in order to run his car through red lights to harass pedestrians, he cannot plead that his
conduct was accidental, because his car could not have proceeded through
a red light unless he had deliberately disabled the automatic safety facility. Equally, Eric
retains the freedom to override the automatic braking facility in order to arrive at the
hospital more quickly, but in circumstances where he can plead that his action was
justified by the urgency of the need to assist his dying passenger. And, whether he likes it
or not, John will still have to wrestle with his moral conscience in deciding whether to run
a red light when no one else is about. In other words, an override facility may sharpen rather than blunt moral judgement by delineating the distinction between unintended and intended actions, whilst preserving many of the benefits of the safety-enhancing technology by preventing accidental harms; whether a particular intentional action should be morally evaluated as worthy of praise or blame will, however, remain open to moral disagreement and debate.
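The override facility just described rests on three design features: the safety default engages automatically, deactivation requires a discrete and conscious act, and every such act leaves an evidential trail. A minimal sketch of this logic follows; it is a hypothetical illustration only, and the class, method and log names (OverridableBrake, override, event_log) are invented for the example rather than drawn from any real vehicle system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverridableBrake:
    """Hypothetical sketch of an automatic braking facility with a driver override.

    By default the facility forces a stop at a red signal. The driver may
    deliberately disable it, but every override is logged, so a later claim
    that running the light was accidental is precluded.
    """
    override_active: bool = False
    event_log: list = field(default_factory=list)

    def override(self, reason: str) -> None:
        # A discrete, conscious act by the driver, recorded with a timestamp
        # and the driver's stated reason, leaving a clear evidential trail.
        self.override_active = True
        self.event_log.append(
            (datetime.now(timezone.utc).isoformat(), "OVERRIDE", reason))

    def at_red_signal(self) -> str:
        # Without an override the car is simply stopped; with one, the driver
        # proceeds intentionally, and the intentional act is itself logged.
        if self.override_active:
            self.event_log.append(
                (datetime.now(timezone.utc).isoformat(), "PROCEEDED", None))
            return "proceeded intentionally"
        return "stopped automatically"

# Eric deliberately disables the brake to rush his passenger to hospital.
car = OverridableBrake()
assert car.at_red_signal() == "stopped automatically"  # the default forced stop
car.override(reason="medical emergency")
assert car.at_red_signal() == "proceeded intentionally"
assert len(car.event_log) == 2  # the evidential trail of his deliberate acts
```

The design choice the sketch illustrates is that intentionality is made legible by the mechanism itself: the same log entry that records Eric's justified override would record Alex's harassing one, leaving the moral evaluation, but not the question of intent, open to debate.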
Although lawyers have been actively involved in debates about the use of technology
for the purposes of monitoring and surveillance, little attention has been devoted to the
use of technology as a general regulatory instrument at the level of standard-setting.72
Yet Huxley’s Brave New World has no need for CCTV, digital tracking technology and
other like technologies for monitoring its inhabitants, for it is governed entirely by techno-regulatory means. By demonstrating how the moral freedom and responsibility which
are crucial to our self-understanding as rational beings may be threatened by the
widespread use of techno-regulation, this paper helps us to understand why Brave New
World is one which any autonomy-respecting community would steadfastly wish to avoid.
While the use of design-based measures to shape social behaviour is far from new,
developments in biotechnology, nanotechnology and neurotechnology, combined with
ever-expanding computing capabilities, are likely to generate opportunities for much
more precise, targeted, invasive and permanent forms of regulatory control than anything
that has gone before. Accordingly, serious reflection and discussion concerning these
developments by scholars from a wide range of disciplines, including law, are both
pressing and necessary, lest we find ourselves in the position of the slowly boiling frog:
unlike a frog plunged into boiling water, which immediately jumps out, one placed in
cold water that is slowly heated will sink into a tranquil stupor, and before long, with a
smile on its face, it will unresistingly allow itself to be boiled to death.
72 Various criminologists were, however, actively engaged in academic critiques of situational crime prevention techniques in the late 1990s. See eg Duff and Marshall (n 3).