Reconstructing the public domain in the wake of Bayh-Dole

Alain Pottage
I
How has the Bayh-Dole Act changed the character or orientation of university-based
research?1 From one perspective, the answer is relatively simple. The
implementation of the Act has dissolved a traditional culture of basic science: ‘For a
century or more, the white-hot core of American innovation has been basic science.
And the foundation of basic science has been the fluid exchange of ideas at the
nation's research universities. It has always been a surprisingly simple equation:
Let scientists do their thing and share their work – and industry picks up the spoils.
Academics win awards, companies make products, Americans benefit from an ever-rising standard of living. That equation still holds, with the conspicuous exception of
medical research. In this one area, something alarming has been happening over the
past 25 years: Universities have evolved from public trusts into something closer to
venture capital firms. What used to be a scientific community of free and open
debate now often seems like a litigious scrum of data-hoarding and suspicion’.2 This
complaint about the end of basic (biomedical) science shades into a somewhat
different argument about the effects of privatization on the culture of ‘university
research norms’. Starting from the premise that the Bayh-Dole Act has created the
conditions for a productive balance between privatization and the public domain,3
this line of argument proposes that we should now reflect on how to adjust the terms
and implementation of the legislation so as to conserve the culture of university
research norms, but only to the extent that the latter actually work to enhance the
1 There are some empirical resources for that question, but studies of the xx are just resources; much depends on the interpretation of the evidence.
2 Clifton Leaf, ‘The law of unintended consequences’ Fortune, 19 September, 2005.
3 ‘[R]esearch universities have developed informal policies that attempt to preserve certain elements of traditional research norms while incorporating some of the development-promoting aspects of property rights. Specifically, university policies privatize discoveries that are likely to have specific commercial uses while leaving in the public domain other discoveries that may have a variety of future research uses, may be necessary for the development of many different commercial products, or may be difficult to utilize effectively without access to other discoveries’ (Arti K Rai, ‘Regulating scientific research’ (1999) 94 Northwestern University Law Review, xx-152, at p 144).
The difference between these two perspectives is quite interesting. The argument
from basic science adopts the theory of the differentiation of basic science and
applied research that informed the representation of the 19th century German
research university, which emerged as a reaction to the technocratic form of the
polytechnic or the technical university.4 Plainly, there was a utilitarian justification
for the cultivation of science for science’s sake, and one of the proofs of that
justification may have been the quality of the patents in chemical inventions that
were compulsorily acquired from German corporations by the Allies under the terms
of the Treaty of Versailles. The utilitarian justification is, if anything, even plainer
in the constitution of basic science in the United States. Before the Second World
War, there was no such thing as basic science in the sense of university-based,
publicly-funded, research: ‘Until Hiroshima, the average American’s conception of
the “scientist at work” was either the self-taught Thomas Edison or a white-coated
industrial research chemist – not an academic at all’.5 In Vannevar Bush’s vision of
things, the economically disinterested science of the research university was
something that could be made useful, but this broader utility was supposed to be a
spin-off from the non-utilitarian ethos of science itself. By contrast, the argument
that we should conserve university research norms to the extent that they are
‘efficient’ in enhancing the public domain has already given up on this ideal of
science for science’s sake. First, it effectively reduces the practice of a science to a set
of norms. Second, it reduces the broad assemblage of norms described by Merton to a
headline orientation toward the public rather than private. Third, it construes this
preference as a more or less efficient (and tunable) instrument for translating
science into technology. The effect is to turn the once differentiated (if only ideally)
sphere of basic science into an ‘upstream’ resource for downstream production.
4 Which had themselves been established (in France and Germany), in the wake of the French revolution, in place of the old corporate or guild form of the university as a universitas magistrorum et scholarium.
5 Steve Fuller, ‘The road not taken. Revisiting the original new deal’ in Philip Mirowski & Esther-Mirjam Sent, Science Bought and Sold. Essays in the Economics of Science (Chicago: University of Chicago Press, 2002) 444-462, at p 444.
The theme of innovation dissolves cultural and institutional specificities into a single,
economically calculable, process.
As one would expect, this understanding of science is expressed even more
affirmatively in the argument in favor of the Bayh-Dole project; quite simply, the
effective transfer of title to inventions to universities has benefited the public by
giving institutions and their researchers the necessary incentives to license
concepts6 that would otherwise have remained unexploited.7 This premise is restated
by Birch Bayh (and two other significant actors in the story of the Bayh-Dole Act) in
a commentary on the Federal Circuit decision in Stanford v Roche: ‘While royalties
resulting from successful commercialization are reinvested in campus research,
paying associated technology transfer costs, and in rewarding inventors, those are
not the primary goals of Bayh-Dole. Bringing new products into the marketplace
where they benefit the public through providing enhanced health, safety, and the
realization of better living standards – as well as the promotion of economic growth
– is the real objective’.8 Again, the understanding is that science matters as a public
good only to the extent that it functions as an upstream input into innovation. Take
the argument that ‘[t]he Bayh-Dole Act unleashed the previously untapped potential
of university inventions, allowing them to be turned from disclosures in scientific
papers into products benefiting the taxpaying public’.9 What if scientific papers were
actually the medium of a knowledge culture that was valuable as such, and also,
perhaps, as a resource for a differentiated culture of technology? Three decades of
STS scholarship, and a much more long-standing line of economic argument, have
problematized this distinction, but it may be that the story of Bayh-Dole should
prompt us to return to the question of what ‘basic science’ actually is.
6 Most university inventions are at the proof of concept stage; see Jerry G Thursby & Marie C Thursby, ‘The disclosure and licensing of university inventions’, NBER Working Paper No W9734.
7 The same point is made in the amicus brief filed in the name of Birch Bayh for Stanford v Roche: ‘Although the federal patent policies preceding the Bayh-Dole Act led to the creation of thousands of patentable inventions, the vast majority of those inventions remained on government shelves, unlicensed, undeveloped, and unable to benefit the public. Of more than 28,000 patents owned by the federal government, only four percent were licensed for development’ (ref).
8 Birch Bayh, Joseph P Allen & Howard W Bremer, ‘Universities, inventors, and the Bayh-Dole Act’ (2009) 3:24 Life Sciences Law & Industry 1-5, at p 3.
9 Bayh, Allen & Bremer, ‘Universities, inventors, and the Bayh-Dole Act’, at p 1.
If we take the biography of the Bayh-Dole Act as a case study in the construction of
the public domain, then what do we mean by the public domain? More precisely, in
what sense does the ‘public domain’ overlap with ‘basic science’, and in what sense
is the public domain supposed to be ‘open’? My suggestion is that the sense of the public domain that informs the broad legal debate about the effects of Bayh-Dole
depends on a deep archetype of nature as the paradigm of a public resource. The
same archetype informs intuitions about the distinction between discovery and
invention, or abstraction and concreteness, that is so central to the debate
surrounding the Bilski and Myriad Genetics cases. For reasons that we know very
well, the figure of ‘nature’ is even more problematic now than it was in the late 18th
or early 19th century. One might perhaps see the emergence of open source synthetic biology as a reaction to the effects of the Bayh-Dole Act, and hence as an exercise in
reconstructing the public domain, but synthetic biology has gone yet further in
dissolving the figure of nature and the old sense of the public domain.
II
First, we should notice the sense in which more skeptical takes on Bayh-Dole
presuppose the role of the university, either as a corral of ‘basic science’ or as an
institutional platform for ‘traditional research norms’. Both modes of skepticism
take the university to be one of two figures of the public, the first being the res
publica of the university itself – the community, institution, or normative ethos –
and the second the public domain as the fund or constituency on behalf of which the
university is supposed to produce and culture ‘basic science’, or to exercise its
‘traditional research norms’. The argument puts two ‘publics’ in play; crudely, the
proposition is that it is by cultivating the ‘public’ in the narrower sense of the
university that we best serve the ‘public’ in the broader sense of the public domain.
This diffraction of the public into different avatars or instantiations, or between
means and ends, or principal and agent, goes to the question of what we mean by
openness or ‘publicity’ in science. ‘Openness’ is not an abstract quality, but an effect
of how particular sub-publics generate and practice ‘science’ in the interests of the
broader public. So, for example, if we say that ‘[i]n order for basic research truly to
be in the public domain, it must be performed by entities that refrain from claiming
any significant property rights in such research’,10 then the crucial question is how
science is actually held in the institution of the university. Even if we take it that
there is more to Mertonian norms than ‘vocabularies of justification’,11 is the
university really just a platform for these norms? Can the agency of the university
be explained in terms of a specific normative orientation?
The institution has a specific historical trajectory and sociological density, and the
story of Bayh-Dole could be seen as just an interesting episode in that trajectory.
Indeed, the question of the constitution of the res publica of the university is crucial
to deciphering the effects of the Bayh-Dole Act. Historically, the practice of waiving
Federal title to inventions by means of IPAs (Institutional Patent Agreements) may
have been much more influential than many commentators allow. Elizabeth Popp
Berman has suggested that the passage of the Bayh-Dole Act and its subsequent
implementation within the university was prepared and facilitated first by a ‘skilled
actor’ working in the NIH in the 1960s and 1970s, and then by the rapid
institutionalization of university patent offices or technology transfer offices in the
same period.12 Within the space of xxx decades, the creative use of IPAs had fostered
10 Rai, ‘Regulating scientific research’, at p 144.
11 Ref Michael Mulkay.
12 See Elizabeth Popp Berman, ‘Why did universities start patenting? Institution-building and the road to the Bayh-Dole Act’ (2008) 38:6 Social Studies of Science 835-871. The argument starts from the empirical observation that university patenting ‘increased almost as rapidly in the 12 years leading up to Bayh-Dole (by about 250%) as it did in the 12 years following the Act (by about 300%)’ (at p 836). Berman suggests that the origins lie in the practice adopted by the Department of Health, Education and Welfare, the parent of the NIH, which in the 1950s began waiving title to inventions in cases where the grantee could show that it had the necessary administrative competence to develop the invention. Initially the device of the Institutional Patent Agreement, which was introduced in the 1950s, was used less restrictedly, albeit on a case-by-case basis, and when, in the 1960s, the approach hardened and IPAs were routinely denied, the NIH’s patent counsel, Norman Latker, began to advocate the use of streamlined IPAs as a way of transferring technology to the market. A minor scandal over the non-development of (proto-)inventions generated within the NIH’s medicinal chemistry program in the late 1960s facilitated Latker’s strategy, and, even though relatively few IPAs were issued, the form of the instrument functioned as a kind of ‘proto-institution’, that is, ‘a new practice, rule, or technology that is narrowly diffused and only weakly entrenched, but that has the potential to become widely institutionalized’ (at p 846). The issuing of IPAs fostered the evolution of the function of the university patent administrator. When the first conference of university patent officers was held at Case Western Reserve University in 1974, it had 118 participants. When in 1978 the newly-appointed Secretary for the HEW reversed policy and decided to review all applications for IPAs, leaving some 30 inventions in limbo, it was a patent administrator who brought this situation to the attention of Senator Birch Bayh. And, in an interview given in 2005, Bayh identified this factor as a crucial impetus for the movement toward legislation: ‘Discoveries were lying there, gathering dust. So the taxpayers weren't being protected. We'd spent $30 billion in research for ideas that weren't helping anybody’ (cited in Clifton Leaf, ‘The law of unintended consequences’ Fortune, 19 September, 2005). On the history of university patenting, see also Grischa Metlay, ‘Reconsidering renormalization: stability and change in 20th-century views on university patents’ (2006) 36:4 Social Studies of Science, 565-597. Cf Rebecca Henderson, Adam B Jaffe & Manuel Trajtenberg, ‘Universities as a source of commercial technology: a detailed analysis of university patenting, 1965-1988’ (1998) 80:1 Review of Economics and Statistics 119-127.
the creation of a broad administrative platform of technology transfer expertise: ‘As
patent administration became a job description, and particularly as actual patenting
or technology transfer offices were formed, individuals and groups were created who
had interests in perpetuating the practice as well’.13 It is not surprising that one of
the most upbeat retrospective appraisals of ‘Bayh-Dole at 30’ was offered in 2010 by
the Association of University Technology Managers, which created a dedicated
website for the anniversary: www.b-d30.com. Keeping in mind the question of the
constitution of the university, there are two crucial points here.
First, even if we think about the university as a platform for ‘research norms’, then
we need to bear in mind that the norms of administrators may not be entirely
congruent with those of scientists.14 Looking ahead to the example of synthetic
biology, we can find an illustration of the potential tensions between technology
transfer practices and the preferences of university researchers. Kenneth Oye and
Rachel Wellhausen refer to the case of Adam Arkin, named as the co-inventor of a
patent relating to ‘a system and method for simulating operation of biochemical
systems’, who says that he was pressured by Stanford to apply for what he
characterized as ‘an example of an outrageously broad IPR claim’.15 Second, and
13 Berman, ‘Why did universities start patenting?’, at p 853.
14 See generally (2010) 81:3 Journal of Higher Education 243-249 (Special issue on ‘Norms in Academia’). See also David H Guston, ‘Stabilizing the Boundary between US Politics and Science: The Rôle of the Office of Technology Transfer as a Boundary Organization’ (1999) 29:1 Social Studies of Science 87-111.
15 The authors observe that ‘as the parts, methods, and design principles that constitute synthetic biology take on significant commercial value, conflict between technology licensing offices wishing to privatize intellectual property and researchers seeking to strengthen the intellectual commons will only increase’ (Kenneth A Oye & Rachel Wellhausen, ‘The intellectual commons and property in synthetic biology’ in Markus Schmidt et al (eds), Synthetic Biology. The Technoscience and its Societal Consequences (Dordrecht: Springer, 2009) 121-140, at pp 137-138).
more to the point, the agency of the university cannot be construed as an effect of
legislative or normative cultures. Two or three decades of STS and ANT scholarship
have revealed the sense in which norms – textual, verbal, or tacit normative
propositions – are the elements of broader networks or assemblages, and the
articulation of the assemblage shapes its constituent elements. As Stanley Fish once
observed, texts are not self-implementing; they come to life only in a skein of social
dispositions, practices, technologies; the same is obviously true of tacit normative
expectations. So talk of norms is meaningless unless we give some account of their
social or material felicity conditions.16 How are norms (formal or informal) wired,
mediated, communicated, and translated into practice? More precisely, how has the
textual material of the Bayh-Dole Act been integrated into the multiple form of the
university?17 How is a public made?
III
The question of the university raises the broader question of what we mean by openness
or ‘publicity’. One approach to these questions is offered by Chris Kelty’s study of
free software culture, which centers on the characterization of the geek community
as a ‘recursive public’:
Recursive publics are publics concerned with the ability to build, control,
modify, and maintain the infrastructure that allows them to come into being in
the first place and which, in turn, constitutes their everyday practical
commitments and the identities of the participants as creative and
autonomous individuals.18
16 The short cut is to look at statistical data or the xx to say that they are more or less efficacious, but this is the xx
17 On the university as multiplicity, see Dirk Baecker, ‘A systems primer on universities’ (2011), at: http://www.dirkbaecker.com/Universities.pdf
18 Chris Kelty, Two Bits. The Cultural Significance of Free Software (Durham NC: Duke University Press, 2008), at p 7. This formulation is amplified further on in the book: ‘Why recursive? I call such publics recursive for two reasons: first, in order to signal that this kind of public includes the activities of making, maintaining, and modifying software and networks, as well as the more conventional discourse that is thereby enabled; and second, in order to suggest the recursive “depth” of the public, the series of technical and legal layers—from applications to protocols to the physical infrastructures of waves and wires—that are the subject of this making, maintaining, and modifying. The first of these characteristics is evident in the fact that geeks use technology as a kind of argument, for a specific kind of order: they argue about technology, but they also argue through it. They express ideas, but they also express infrastructures through which ideas can be expressed (and circulated) in new ways’ (at p 29).
A recursive public is made by the strategic activity of materializing its own
conditions of existence. In the case of free software culture this means that
participants selectively pattern the medium of the Internet so as to generate the
technical, material, and discursive codes that enable their communications about
‘code’. So the Internet is not already a public domain; it is the medium or
environment out of which a recursive public creates itself. In a sense, a recursive
public simply is this ongoing process of self-creation. Bearing in mind the broad
theme of Bayh-Dole, Kelty’s notion of ‘recursive publics’ brings out some essential
points about terms such as ‘open’, ‘public’, or ‘basic’.
First, a public is not an entirely spiritual thing; it has to be embodied, materialized, or
wired into a medium of existence. The recursive public of the free software
movement might be a normative community, but this ‘ethical’ existence is
maintained by an ongoing reconstruction of the socio-technical medium of the
Internet. As information theorists observe, the content of a medium is always
another medium. What is the medium of existence of the classical public domain? Is
it the research university as a contemporary version of the universitas magistrorum
et scholarium? Is it the medium of text and its associated apparatus of publication,
archiving, and distribution? Whatever the answer, the public domain cannot simply
be knowledge or information; it has to include the socio-historical felicity conditions
and material media that are presupposed by the communication of information.
Second, to the extent that the public domain is identified with a specific normative
ethos, then these norms – whether formal or informal, macro or micro – also have to
be wired, materialized, and communicated by some means. This has implications for
an analysis of the ability of legislation to ‘engineer’ or ‘re-engineer’ a public domain.
Norms do not function instrumentally; they have felicity conditions whose
contingencies have been amply characterized in the sociology of law.19
Third, putting the first and second points together, a public exists within an
assemblage or network in the classical STS sense. And Kelty’s notion of the
recursive public specifies how a public exists within a network. The process that
constitutes a recursive public is a process of internal differentiation in which a
particular public detaches itself from the medium in which it remains immanent.20
One might say that a recursive public is one of the ways in which a network becomes
present to itself.
Fourth, one of the most crucial implications of this process of differentiation or
detachment is that publics are always – structurally, operationally – closed. The
differentiation of the public from within a medium necessarily creates a distinction
between what is inside and what is outside, and a recursive public ongoingly creates
that distinction by reconstructing the medium that defines what qualifies as
appropriate (or perhaps just recognizable) communication. One can join a recursive
public, but only if one plays by its discursive and non-discursive ‘rules’. For the
recursive public of the free software movement, openness really means inclusiveness,
but the movement of inclusion is always centripetal. The ‘source’ of open source is
vivified through closure, selectivity, and reduction. The same is true of all processes
of knowledge formation. In terms of Bayh-Dole, and broader debates about the
constitution of the public domain, the point might be that instead of beginning with
normative ideals of openness we should pay more attention to the modalities of
closure that condition openness.
19 See especially Niklas Luhmann, Law as a Social System (Oxford: Oxford University Press, 2007).
20 Having revealed the sociality of networks, ANT is now taking on the task of characterizing this mode of differentiation; for a statement of this intent see Bruno Latour, ‘Note brève sur l’écologie du droit saisie comme énonciation’, in Frédéric Audren & Laurent de Sutter (eds), Cosmopolitiques 8. Pratiques cosmopolitiques du droit, 34-40.
Fifth, if media do not of themselves constitute publics – the Internet is just the
medium from out of which the public of the free software movement fashions itself,
and print was just a medium from which diverse modern publics emerged – then the
broader point is that publics are multiple. There is no such thing as a singular public
domain, only a medium or set of media that are folded into a public sphere by
different modes of recursive politics.
All of this means that in reflecting on the extent or openness of the public domain,
we should pay closer attention, first, to the material and socio-technical conditions of
existence of publics, and second, to the way that these conditions are folded or
‘closed’ into publics.
IV
In a chapter of his popular economic history, David Landes traces ‘the invention of
invention’ back to the mediaeval period, and to a specifically European culture of
labor, time, science, and, above all, enterprise: ‘Enterprise was free in Europe.
Innovation worked and paid, and rulers and vested interests were limited in their
ability to prevent or discourage innovation’.21 The truth, however, is that invention
in the sense of patent law emerged only in the late 18th century, precisely because
‘vested interests’ had until then worked to keep ‘ideas’ embedded in matrices of
patronage, territory and corporate interest.22 Indeed, one might say that invention in
the modern sense only really got going when ‘novelty’ became a routine and effective
criterion of bureaucratic practice (rather than an ad hoc adjudicative determination), and when invention became an industry of the kind that is associated with Thomas Edison; when, that is to say, the course of innovation was construed as a succession of technical solutions to technical problems, when
21 David S Landes, The Wealth and Poverty of Nations. Why Some Are So Rich And Some So Poor (New York: Norton & Co, 1999), at p 59.
22 See Alain Pottage & Brad Sherman, Figures of Invention. A History of Modern Patent Law (Oxford: Oxford University Press, 2010), chapter 2, and Mario Biagioli, ‘Patent republic. Representing inventions, constructing rights and authors’ (2006) 73 Social Research 1129-1172.
inventors were primed to recognize ‘solutions’ when they happened,23 and when the
process of invention itself became as industrialized or serialized as any of its
products. This industrial sense of invention was affirmed by the legal criterion of
novelty as the concept that relayed invention to invention: ‘While the patents-as-privileges regime was primarily concerned with the novelty of an invention in a
certain place, early US patent law started to conceive of novelty in terms of the
difference between a patent and another that preceded it’.24 Given that inventions
necessarily mobilized nature (in the shape of diverse forces and materials), how did
the chain of novelty replicate itself without extracting anything from nature?
First, this question was new to the late 18th century. The concern to define the public
domain was a direct reflex of the concern to define the newly-forged right of the
inventor. Before then, what could the public have been if not the interest
represented by a patron or sovereign?25 How could ‘basic science’ be disembedded
from networks of patronage and local or tacit knowledge any more easily than
‘invention’? If a patent was a species of private right, and if the justification of that
right was contractual – the inventor had to give something to the public that it did
not already have – then the question of what the public already had became
pertinent. So too did the question of how applications of natural forces or materials
could add to nature without taking from it. How did the relaying of innovation to
innovation pass through nature without diminishing it? Crucially, the modern sense
of the public domain in patent law is a legacy of the way that this question was
answered.
23 Commenting on Daguerre’s discovery of the photochemical effects of silver iodide because mercury happened to be in the same cabinet as a preparation of iodized silver, Friedrich Kittler observes that ‘if an accidental effect like the one that occurred in Daguerre’s cupboard had taken place 200 years earlier, …the whole matter would have fallen flat again simply because no one would have captured, stored, recorded, and exploited it as a natural technology. …Daguerre thus represent[s] the beginning of an epoch where the duration, the reproducibility, and practically even the success of inventions – and in the end that means historically contingent effects – are guaranteed’ (Friedrich Kittler, Optical Media (Cambridge: Polity Press, 2010), at p 130). Kittler observes (also at p 130) that ‘I hope it is clear how much chemistry must have historically already taken place in order that iodized silver and quicksilver could be accidentally placed in the same cupboard’.
24 Biagioli, ‘Patent republic’, at p 1142.
25 Notwithstanding references to the public interest.
The core of the modern sense of the public domain was worked out in the arcane 19th
century theory of the ‘principle’ of a machine. To recapitulate a history that is more
fully set out elsewhere,26 the patent act of 1793 had characterized the subject matter
of machine patents as the ‘principle’ of a machine, and, in interpreting these
provisions, courts in the US drew on English cases, notably that of Boulton & Watt v
Bull, to say that in effect a principle was a kind of ‘neither/nor’. The principle of a
machine was defined as a kind of double negative: neither the principle of nature or
science that was applied by the machine, nor the material form or configuration of
the machine. Or, in other terms, the principle of a machine could be identified by
distinguishing between two senses of the term ‘principle’: ‘the principle so embodied
and applied and the principle of such embodiment and application, are essentially
distinct; the former being a truth of exact science, or a law of natural science, or a
rule of practice; the latter a practice founded upon such truth, law, or rule’.27 What
was a ‘principle of embodiment’ as distinct from an embodied principle of nature?
One answer was proposed by George Ticknor Curtis in his mid-century treatise.
Curtis explained that human intervention in nature took effect by reorganizing
matter, by ‘placing its particles in new relations’.28 But if that were all that
invention involved, then the invention itself would be a material embodiment rather
than the ‘principle’ of a material embodiment. The patentable ‘principle’ of a
machine patent consisted in the agency or operation of the material machine rather
than its material form or configuration: ‘the form or arrangement of matter is but
the means to accomplish a result of a character which remains the same, through a
certain range of variations of those means’.29 Yet, given that the functioning of any of
these classical machines necessarily employed natural forces or principles, how
could one say that a machine patent did not enclose natural forces or properties that
were already in the public domain? Curtis’s argument was that the operation of the
26 See Pottage & Sherman, Figures of Invention, esp chapter 4.
27 Wintermute v Redington 30 F. Cas. 367 (C.C.N.D. Ohio 1856).
28 ‘Over the existence of matter itself [man] has no control. He can neither create nor destroy a single matter of it; he can only change its form, by placing its particles in new relations, which may cause it to appear as a solid, a fluid, a gas’ (Curtis, A Treatise on the Law of Patents for Useful Inventions (Boston: Little & Brown, 1849) at p xxv).
29 Curtis, at p 17.
machine elicited a certain effect from nature; a new machine ‘call[ed] into effect
some latent law, or force, or property’.30 So the ‘principle’ of a machine – the ‘thing’
protected by a patent and specified in an infringement action – was the action of the
machine in eliciting a specific inflection of physical or mechanical forces.
With the question of public domains in mind, there are two important points about
this sense of the invention as the operation of eliciting an effect from nature. First,
in the language of classical patent doctrine, a machine patent related to the mode of
operation of a machine, or, more precisely, ‘a new mode of operation, by means of
which a new result is obtained’.31 A machine patent related to the machine as a
means in itself, as the functioning of a machine rather than its ultimate end or
function. The difficult – but crucial – exercise here was to imagine the functioning of
a machine not just as the articulation of its moving parts but more importantly as
the way in which the machine provoked a specific effect from nature. The patent did
not enclose natural forces or properties as such but only the means by which those
forces or properties were cajoled into producing a particular effect. And, again, the
means in question was not the material configuration of the machine but the ‘idea of
means’32 that it expressed. In that sense machines embodied ideas, but ideas
qualified as inventive by virtue of their capacity to instruct machines to elicit effects.
The upshot of this theory was, first, that the public domain or ‘storehouse’ of nature
remained undiminished. Other inventors were free to design machines that elicited
different effects from the same forces or properties, or even machines that elicited
the same effect by way of a different idea of means. Second, the process of invention
was indebted to but not entirely immersed in nature. Different technologies for
eliciting effects from nature could succeed each other, differentiated and relayed by
the criterion of novelty, so that innovation remained a differentiated and in some
sense self-producing process.
30 Curtis, 1849, at p xxvi.
31 Winans v Denmead 56 U.S. 330, 341 (1854).
32 See generally Robinson 1890.
This sense of the public domain has already been rendered problematic, if not
entirely unworkable, by the emergence of biotechnological and software inventions.
In the case of software inventions the old distinction between the ‘idea of means’ and
the effect elicited from nature disappears. Concepts formed in relation to the
medium of energy do not necessarily work in relation to the medium of information.
And it is now a trite observation that biotechnologies invent the distinction between
nature and culture rather than taking it as their premise.33 But synthetic biology
goes further in reinventing invention. Prospectively, one might say that the effect of the technological and commercial evolution of gene synthesis technologies will be to
generate a medium that entirely uncouples the notion of the public domain from any
representation of nature, or from any distinction between basic science and
technology.
V
The progress of synthetic biology is likely to be shaped by the economics of gene
synthesis. Prospective accounts of synthetic biology – is it possible to write about
synthetic biology in any register other than that of futurity? – look forward to the
invention of DNA printers that will be able to ‘print out’ designed sequences on
demand.34 There are some good economic reasons for suggesting that such devices
are unlikely to be central to the practice of synthetic biology, but the point of the
image is that the automated technologies that are involved in assembling, correcting
and proofreading DNA are becoming increasingly refined, reliable, and cheap
(relatively speaking):35 ‘It seems plausible that …commercial gene synthesis could
33 Ref Marilyn Strathern.
34 For a current example of ‘bioprinting’, see Wired: http://www.wired.com/rawfile/2010/07/gallery-bio-printing/
35
Incidentally, this process of evolution has implications for the collection of standardized
DNA parts that is often taken as the headline feature of synthetic biology as open source
science. In the early days of the Biobricks initiative the DNA parts were stored as wet DNA,
and these material molecules were carefully packaged up as sets and distributed to
participants in the annual iGEM competition. As Biobrick parts increase in number and as
DNA synthesis becomes more affordable, the parts could be stored as sequence information
in electronic databases rather than ‘wet’ parts in registries, and gene synthesis could be
outsourced to commercial operators.
reach the same level of convenience as for synthetic oligos: a cost and time on a par
with overnight shipping. When this condition is met, much of the work currently
done to manipulate DNA in research labs will be outsourced. Instead of cloning into
vectors stored in those labs, custom or standard vectors could simply be
resynthesized on demand’.36 The argument is that the availability of cheap synthetic
DNA has become an economic and technological factor in its own right:
[T]he increasing availability of gene sequencing creates more and larger
electronic gene databases. This drives demand for protein-expression systems,
directed evolution and metabolic engineering, which creates demand for
synthetic biology technologies and tools.37
New opportunities might be phrased as challenges – ‘a paradoxical gap exists
between our ability to synthesize and our ability to design valuable novel
constructs’38 – but the basic point is that the technical evolution of oligo synthesis is
turning DNA into the kind of medium that is ‘designable’ in a way that collapses the
division between nature and invention that was essential to the old theory of the
public domain.39 One might say that biotechnology had already achieved the same
effect, but synthetic biology promises to turn life into a truly digital medium.
The enterprise of synthetic biology is usually presented as a hierarchy of operations
of different orders of scale. In ascending order of abstraction: first, the engineering
and characterization of parts (promoters or open reading frames), second, the
assembly of parts into genes, third, the relaying of genes to make pathways or
36 Peter A Carr & George M Church, ‘Genome engineering’ (2009) 27:12 Nature Biotechnology
1151-1162, at p 1156. This may not be the whole story. The current costs of single-gene
custom synthesis are still prohibitive for many university laboratories, even before taking
into account the fact that much of the action in synthetic biology will lie in combinatorial
swapping, and hence increased costs of synthesis.
37 Mike May, ‘Engineering a new business’ (2009) 27:12 Nature Biotechnology 1112-1120, at p 1113, referencing the research of John Bergin.
38 Peter A Carr & George M Church, ‘Genome engineering’ (2009) 27:12 Nature Biotechnology 1151-1162, at p 1151.
39 And, incidentally, this quality of ‘writeability’ or ‘designability’ has been characterized as an effect of the Bayh-Dole Act (see Richard Hogrefe, ‘A short history of oligonucleotide synthesis’, available at: http://www.trilinkbiotech.com/tech/oligo_history.pdf).
devices, and, finally, the design and assembly of parts into genomic ‘software’. From
the perspective of the molecular biologist, the challenges posed by these operations
have to do with physical assembly and operational contextualization. For those who
do not already presume the future development of DNA synthesis, ‘the limit of what
synthetic biology can achieve is becoming determined by our ability to physically
assemble DNA’.40 So, for example, many Biobricks parts are not usable because they
contain scar sequences that complicate the process of assembly. Turning to
contextualization, the point is, first, that synthetic genomes have to be booted, and,
second, that the operation of synthetic genes or genomes will be conditioned by
contexts that remain opaque.41 But for those who have taken up the prospective
theory of synthetic biology, the limitations of physical assembly are likely soon to be
overcome.42 Indeed, from this perspective assembled DNA is already of less interest
than the things that one can do with it: ‘Although the assembly of large DNA
circuits is presently a technological challenge, and is therefore valuable, the relative
value of that assembled DNA is quite small. Of much greater value are the
molecules or behaviors specified by those sequences: networks that enable
computation or fabrication, enzymes that facilitate processing plants into fuels or
fine chemicals, proteins and other molecules that serve as therapeutics and
antibiotics’.43 Ultimately, then, the presentation of synthetic biology abstracts from
the materialities of assembly and operation to the technique of design.
The practice of synthetic biology is scheduled to become an exercise in computer-aided design: ‘Once natural enzymatic and regulatory modules are adapted, refined
and measured, they can be combined – at the drawing console – with a high degree
40 Tom Ellis, Tom Adie, & Geoff S Baldwin, ‘DNA assembly for synthetic biology: from parts to pathways and beyond’ (2011) 3 Integrative Biology 109-118, at p 110.
41 ‘[F]or biological systems especially, the background environment is still very incompletely understood when contrasted with other disciplines, such as electronics design. Though a given genome sequence may be known, the functions of many predicted proteins typically remain unknown and the relationships between known functions completely unmapped’ (Carr & Church, ‘Genome engineering’, at p 1159).
42 ‘The long-term expectation in this area is that increasingly available DNA synthesis will make some of the current assembly restrictions unnecessary, and that new or modified standards will develop to take advantage of those resources’ (Carr & Church, ‘Genome engineering’, at p 1154).
43 Robert Carlson, ‘The changing economics of DNA synthesis’ (2009) 27:12 Nature Biotechnology 1091-1094, at p 1093.
of abstraction (ideally with intuitive graphics) while increasingly sophisticated
computational methods handle “lower level” steps’.44 The prototypes for this kind of
software platform already exist, in the guise of programs such as Gene Designer and
Clotho, which set out precisely the kind of ‘drawing console’ that we look forward to.
As Adrian Mackenzie points out, the ambition is to concentrate into a software
interface an entire repertoire of technologies, each of which might once have
addressed ‘nature’ in the mode of experimentation or provocation, but which now
collectively serve as instruments for ‘shap[ing] things across multiple scales and
locations’.45 In such an interface, biological techniques and materials are rendered entirely in software; the operations transacted in these ‘development environments’ consist of
‘browsing lists of components, cutting and pasting, dragging and dropping
components on screen, applying various commands to selected components, and then
ordering the DNA construct via a commercial web service’.46 So, ‘[in] the compressed
space of the software interface, the history of molecular biology as a series of
technical accomplishments is re-rendered as an expanding tree of menu options’.47
Of course there are some continuities with established programs of biological
science. Synthetic biology will involve doing what molecular biologists have been
doing for decades, that is, experimenting with combinatorial swappings.48 One of the
44 Peter A Carr & George M Church, ‘Genome engineering’ (2009) 27:12 Nature Biotechnology 1151-1162, at p 1154.
45 Adrian Mackenzie, ‘Design in synthetic biology’ (2010) 5:2 Biosocieties 180-198, at p 183.
46 Mackenzie, ‘Design in synthetic biology’, at p 188.
47 Mackenzie, ‘Design in synthetic biology’, at p 189.
48 Hence the representation of synthetic biology as a mode of ‘accelerated evolution’, or as a continuation of the ‘ancient manipulation and testing of billion-base-pair DNA systems [that] is evident in the diversity of dog breeds and agricultural species relative to their wild ancestors’ (Carr & Church, ‘Genome engineering’, at p 1151). Biotechnology had already suspended the process of evolution, and synthetic biology, or more precisely the medium of synthetic DNA, takes things further. Some fifteen years ago, Hans-Joerg Rheinberger observed that molecular biology had acquired the capacity to ‘invent’ biological reality: ‘What is new about molecular biological writing is that we have now gained access to the texture – and hence the calculation, instruction, and legislation – of the human individual’s organic existence – that is, to a script that until now it has been the privilege of evolution to write, rewrite and alter. What Darwin called “methodical” or “artificial selection” has barely scratched the surface of this script in the last 10,000 years. For, in a sense, artificial selection itself was still nothing more than a specific human mode of natural evolution. This has now gone; and with its disappearance, natural evolution has come to an end. Molecular biology will come to invent biological reality’ (Hans-Joerg Rheinberger, ‘Beyond nature and culture: A note on medicine in the age of molecular biology’ (1995) 8:1 Science in Context, 249-263, at p 252). Synthetic biology now invents a reality that is no longer, or not necessarily, biological.
virtues of the decreasing cost of gene synthesis is that it might eventually allow
synthetic biology designers to design and assemble constructs (within the framework
of the drawing console) combinatorially. Designers would be able to order tens or
hundreds of variants on the same construct, each of which might have (say) a
different substitution of a promoter or ORF at the same location. The resulting
constructs could then be compared for performance in making biofuels or
pharmaceuticals. Perhaps, too, some artifacts of synthetic biology are more like
science of an older kind; for example, one might say of the ‘manipulation of genomes
by constructing, deleting, and to some extent reorganizing components’, that this is
not yet ‘design’.49
But the basic program of synthetic biology (prospectively construed) is already
suggested by one of the first exercises in construction, namely, the operation of
‘refactoring’ that is described in Drew Endy’s first practical papers on synthetic
biology.50 In the case of software, the technique of refactoring involves editing or
rewriting code so as to alter its structure but not its performance. In synthetic
biology it involves translating or transcribing biological materials into a new
medium: ‘Without substantially altering any biological function, refactoring readies
a specific biological substance for wider participation in processes of design,
modification, standardization and experimentation’.51 The technique of refactoring
pointed the way toward what is scheduled to happen when gene synthesis really
takes off; the agency of biological materials will be reconstructed in the medium of
synthetic DNA, which will in turn be amenable to combinatorial design of an
intensity and effect that is quite novel. The effect of transcription or reconstruction
49 ‘These tend to be proof-of-principle reports pushing the limits of scale – often asking, How much can this cell tolerate? – but not of design’ (Carr & Church, ‘Genome engineering’, at p 1153).
50 Leon Chan, Sriram Kosuri & Drew Endy, ‘Refactoring bacteriophage T7’ (2005) 1:1 Molecular Systems Biology: ‘A system that is partially understood can continue to be studied in hope of exact characterization. Or, if enough is known about the system, a surrogate can be specified to study, replace, or extend the original. Here, we decided to redesign the genome of a natural biological system, bacteriophage T7, in order to specify an engineered biological system that is easier to study and manipulate’.
51 Adrian Mackenzie, ‘Design in synthetic biology’ (2010) 5:2 Biosocieties 180-198, at p 190.
is to grant DNA a new potentiality; not the potentiality that it had in vivo,52 nor
even the potentiality that was actualized by the opportunistic interventions of
recombinant DNA technologies, but the potentiality that emerges from the digitized
medium of synthetic DNA and the sociality that infuses that medium.53 Katherine
Hayles54 resists the argument that the computer is ‘the ultimate solvent that is
dissolving all other media into itself’,55 and argues instead that code is involved in
relations of ‘intermediation’ with analogue media, notably texts, bodies, and
consciousnesses. The point is that the world is not entirely digital; it consists in the
difference between the digital and analogue, and is animated by the ‘complex
feedback loops [that] connect humans and machines, old technologies and new,
language and code, analogue processes and digital fragmentations’.56 But, if we start
from the perspective of Kelty’s notion of ‘recursive publics’, then these relations
of ‘intermediation’ are simply the medium from which a recursive public produces
itself. What kind of recursive public will synthetic biology – whether open source or
commercial – fashion for itself?
VI
The emergence of synthetic biology has raised questions that are similar to those
posed by the Bayh-Dole Act in relation to biomedical research. For example, a
comparison of the views of scientists engaged in synthetic biology suggests that
there is no shared understanding as to where to draw the line between what should
be private and what should be common. Although there is ‘widely shared agreement
52 Though of course the very fact of observation changes what exists only in vivo.
53 One should be careful here; for example, the practice of bioinformatics involved a similar ‘medial effect’ (see Adrian Mackenzie, ‘Bringing sequences to life: how bioinformatics corporealizes sequence data’ (2003) 22 New Genetics and Society, 315-332). What is new about synthetic biology is the totality of the medium.
54 N Katherine Hayles, My Mother Was a Computer. Digital Subjects and Literary Texts (Chicago: University of Chicago Press, 2005), at p 31.
55 Cf Friedrich Kittler, Gramophone, Film, Typewriter (Stanford: Stanford University Press, 1999), at pp 1-2: ‘Inside computers everything becomes a number: quantity without image, sound, or voice. …With numbers, everything goes. Modulation, transformation, synchronization; delay, storage, transposition; scrambling, scanning, mapping – a total media link on a digital base will erase the very concept of medium’.
56 Hayles, My Mother Was a Computer, at p 31.
on the need for common ownership of infrastructure, including registries of parts for
basic research and education, standards for performance and interoperability, and
design and testing methods’, there is also divergence as to whether the design tools
needed to turn these infrastructural elements into synthetic biology applications
should or should not be proprietary.57 From one perspective the progress of
techniques of design and assembly, and the future of synthetic biology as an
information science, might depend on an effective ‘commons’,58 but as synthetic DNA
becomes a relatively low-grade commodity, this is precisely the area in which
economic value is likely to be concentrated. But perhaps the real point of introducing
the case of synthetic biology into the debate is to take it as the endpoint of a process
in which Bayh-Dole is implicated.
That endpoint is nicely formulated by Adrian Mackenzie in his analysis of synthetic
biology as a novel design practice: ‘biological work’ becomes ‘a process that is no
longer primarily concerned with experiment and knowledge production, but with the
organization of work, production and innovation’.59 In a sense this endpoint is
already prefigured in both skeptical and affirmative accounts of Bayh-Dole. For
example, the analysis of the effects of the Act in terms of a flow of upstream and
downstream research already imagines science less as a practice of ‘experiment and
knowledge production’ than as a mode of ‘work, production and innovation’.
Scientific knowledge is noticeable or relevant only to the extent that it has the
potentiality to act as an upstream resource in a flow that is conceptually mapped
backwards, from products to potential, and that is mapped in terms of an
instrumental logic of efficient innovation. Whereas the old theory (or mythology) of
basic science and its technological applications imagined separate spheres (or
publics, perhaps) with traversable boundaries, even the skeptical economic analysis
57 See Oye & Wellhausen, ‘The intellectual commons and property in synthetic biology’, at pp 137-138.
58 ‘[T]he vision is to decouple the characterization of pathway components by specialists from the end user’s ability to search and build pathways from these data. The data generated by end users building and testing pathways could then be incorporated, extended and searched by other users, allowing the pathway data set to grow’ (Travis S Bayer, ‘Transforming biosynthesis into an information science’ (2010) 6 Nature Chemical Biology 859-861, at p 859).
59 Mackenzie, ‘Design in synthetic biology’, at p 189.
of the effects of Bayh-Dole has implicitly given up on the notion of science as a public
in its own right. The evolution of synthetic biology might or might not prove that
premise to be mistaken.