What is the matter with e-Science? – thinking aloud about informatisation in knowledge creation

Paul Wouters
Networked Research and Digital Information (Nerdi), NIWI-KNAW
The Royal Netherlands Academy of Arts and Sciences
PO Box 95110, 1090 HC Amsterdam, The Netherlands
T 3120 4628654  F 3120 6658013
http://www.nerdi.knaw.nl
paul.wouters@niwi.knaw.nl

This long quotation is a future scenario taken from a rather old article (De Jong & Rip, 1997). It describes a possible future in which "discovery environments" have fully developed. Scenarios like this, though usually less well written, are now quite popular as a way of making the case for e-science. By staying rather close to what is presently technically possible, these scenarios avoid the stigma of science fiction and are able to operationalise the technical challenges into real-world computer science problems. I think they capture the sometimes rather grandiose future visions that are dominant in the e-science movement.

In this presentation I wish to address the question: why should we study e-science, and if we wish to do so, how can we conceptualise it in a somewhat productive way? E-science is generally defined as the combination of three developments: the large-scale sharing of computational resources, the provision of access to massive, distributed and heterogeneous datasets (in the order of tera- to petabytes), and the use of digital platforms for collaboration and communication. The "e" stands not in the first place for "electronic" but for "enhancement". The core idea of the e-science movement is that knowledge production will be enhanced by the combination of pooled human expertise, data and sources, and computational and visualisation tools. I see it as significant that most of this is still promise rather than practice. But it is a promise with a very material financial and infrastructural embodiment, not something to lightheartedly dismiss as the next fad in science policy. E-science is a discursive construction at the interface of technoscientific practices, computer technology design and science policy, which is being embodied in material infrastructures, in a demand for new sociotechnical skills in research, and in a pressure on existing scientific and scholarly practices. The justification is the promise. The promise is a new version of the old idea of the World Brain of H. G. Wells, which also played an important role in the construction of the internet.

We should not think that the promise limits itself to computational research. The UK is developing a large e-social science programme. In the Netherlands there is a modest move to create something called e-humanities. The e-science community is interested in the sociology of science in order to change habits and structures that may hamper the further development of e-science. STS might be instrumental in the spread of e-science across the board. This means that, apart from funding opportunities for new case studies, there may be an interesting intellectual challenge for STS.

I see four different types of approach to and conceptualisation of e-science. I guess somehow they are all represented in this panel. First, of course, it seems obvious that the analysis of e-science as a political movement might be very interesting. Second, since very broadband computer networks and distributed storage and computing facilities seem to be at the heart of e-science, there is scope for technology assessment studies. Michael Nentwich's book Cyberscience (Nentwich, 2003) is a beautiful example of this approach.
It tends to lead to very inclusive, encyclopaedic descriptions of all sorts of developments that seem relevant to cyberscience or e-science. More importantly, this approach, by the nature of its description, tends to reify the phenomenon of e-science. In the end e-science is everywhere, difficult to escape and difficult to be sceptical about. Here the third approach might come to our rescue: the trusted case study, zooming in not on the technology with its promises but on hands-on scientific practice. And suddenly, e-science seems to be nowhere anymore. We see the usual tinkering of scientists, who easily make use of both online and offline resources, mixing heterogeneous stuff as they have always done. The local is everything. Whether an e-science resource is being used or not is determined in the local context in which all the supposedly global stuff is recontextualised. The case study might of course focus on the creation of a particular e-science project, for example the Virtual Observatory, a clear case of the application of the social shaping of technology.

If it is the case that the local is connected to other localities through networks, a fourth approach seems promising: to focus on those networks and connections. It might overcome the inherent myopia of local case studies while avoiding the totalising perspective of a unifying drive towards e-science. Several theoretical and methodological frameworks have been developed that seem relevant to this fourth take on e-science: the virtual ethnography developed by Christine Hine, actor-network theory, and perhaps the combination of network analysis with the social shaping of technology. I think that a very interesting candidate here is the notion of the "epistemic object" as it has been developed by Rheinberger (Rheinberger, 1997), in two ways. It seems very productive for analysing some specific features of e-science, as I try to explain in this paper. But it may also give us a good theoretical drive to study e-science. In using the epistemic object to deconstruct and analyse e-science, e-science might help us to find out in more detail, and across a wider spectrum of epistemic cultures, how epistemic objects actually work, how they can and, more importantly, cannot be recombined with each other, and how they are transmuted into technical objects and vice versa.

This relates to digitisation, signification and spaces of representation. If the epistemic object is a digital representation, if many if not all technical objects in research practice are digital, and if the epistemic objects of other fields are also both digital and available, there seems to be no techno-material barrier anymore to the endless recombination of epistemic objects. This is the promise of e-science. Nevertheless, there will be many obstacles left. Many recombinations will not be possible, and much of the promise of e-science will turn out to be not even desirable. These failures of e-science, which may come to the surface in a very naked sense (no longer hidden by the noise of material impossibilities), seem to me extremely promising for a deeper understanding of the nature of epistemology in the STS sense of the word, that is, of the culture of knowledge. This is, I think, a very good reason, and possibly the best, to study e-science and take it seriously. We would turn the failures (which are not the same as the controversies) into our main epistemic object with respect to e-science.

Perhaps some more detailed explanation is possible. Rheinberger localises the epistemic object in an experimental system; others have put it in other contexts.
I think the concept can be applied across the board of knowledge creation if we focus on the circulation of reference that is at the heart of both scientific and scholarly research. For Rheinberger the objectivity of science is generated by chains of representation in which every referent turns into a representation as soon as we focus on it. Representation loses its referential meaning. Epistemic objects are bundles of traces: "The activity of scientific representation is to be conceived as a process without 'referent' and without assignable 'origins'."

Rheinberger distinguishes two elements of experimental systems. The first is the research object, which he calls the "epistemic object": it embodies that which is not yet known. The second is the set of experimental conditions in which the research objects are embedded, which he calls the "technical objects". Within a particular experimental system both types of elements "are engaged in a non-trivial interplay, intercalation, and interconversion, both in time and in space. The technical conditions determine the realm of possible representations of an epistemic thing; and sufficiently stabilized epistemic things turn into the technical repertoire of the experimental arrangement" (Rheinberger, 1997, p. 29).

So what is the difference between a technical object and an epistemic one? It is functional rather than structural; there is no "essence" here. "We cannot once and for all draw such a distinction between different components of a system. Whether an object functions as an epistemic or a technical entity depends on the place or 'node' it occupies in the experimental context." (Rheinberger, 1997, p. 30). The main functional criterion is that epistemic objects are generators of questions. A technical object is stabilised and is first and foremost "an answering machine". "In contrast an epistemic object is first and foremost a question-generating machine" (Rheinberger, 1997, p. 32). "What is significant about representation qua inscription is that things can be represented outside their original and local context and inserted into other contexts. It is this kind of representation that matters." (Rheinberger, 1997, p. 106)

Thinking about e-science, this is especially interesting because the very core of what many e-science projects aim for is the decontextualisation of objects and their subsequent recontextualisation on the fly and in any context. How is this made possible? By metadata, a rather dull word for information that should describe the "meaning" of the object or data in such a way that other machines and humans can make use of those objects or data in contexts that might have been unthinkable at the moment of their production. Metadata are representations of the original context of epistemic objects in such terms that new contexts can be created for these objects to generate new questions. The main trick that should do this work is not simply querying the epistemic object in its new context, but basically the reconfiguring of new epistemic objects by the recombination of already existing ones in new contexts (or, in Rheinberger's terms, new technical objects). This means of course that technical objects can also be turned into epistemic objects and the other way around. Seen in this perspective, the whole e-science business is nothing new at all, except for the scale and ease with which objects can be interchanged, since they are already digital representations.
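To make the role of metadata slightly more concrete, here is a minimal, purely illustrative sketch, not drawn from any actual e-science project: a dataset that travels together with a machine-readable description of its original context, so that it can be picked up and recombined in settings its producers never anticipated. The field names and the recontextualise helper are hypothetical, invented only to make the idea of decontextualisation and recontextualisation tangible.

# Purely illustrative sketch (hypothetical field names): a dataset carrying a
# metadata record that describes its original context, so that it can be reused
# ("recontextualised") in contexts unforeseen at the moment of production.

dataset = {
    "values": [0.12, 0.37, 0.55],  # the object itself, as a digital representation
    "metadata": {
        "producer": "survey team",                 # who made it, in what setting
        "instrument": "optical telescope, 2.5 m",  # how it was made
        "units": "apparent magnitude",             # how to read the numbers
        "coordinate_system": "J2000",              # how to align it with other data
        "licence": "free for scholarly reuse",     # under what conditions
    },
}

def recontextualise(data, required_fields):
    """Check whether a dataset carries enough contextual description to be
    reused in a new research setting; return it if so, complain if not."""
    missing = [f for f in required_fields if f not in data["metadata"]]
    if missing:
        raise ValueError(f"cannot recontextualise, missing metadata: {missing}")
    return data

# A new, unforeseen context needs only the metadata, not the original producers:
reused = recontextualise(dataset, ["units", "coordinate_system"])
print(reused["metadata"]["units"])

Whether such a record is ever enough is of course exactly the open question; the sketch only illustrates that, once both the object and its context are digital representations, recombination becomes a matter of matching descriptions rather than of moving material things.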
By representing the whole universe of relevant stuff (both technical and epistemic objects) as digital objects, the interconversion is indeed seamless (apart from the hard work behind the scenes and the hard work of producing the material conditions for the whole business of digital representation). But then again: isn't scale and ease all that matters? I think this perspective gives both a way of speaking about e-science that is sufficiently different from "actors' speak" to be interesting (for them and for us), and a way of formulating a research agenda that has the potential of critically interrogating the very notion of e-science (Woolgar, 1988) while at the same time studying it "in vivo/silico".

References

De Jong, H., & Rip, A. (1997). The computer revolution in science: steps towards the realization of computer-supported discovery environments. Artificial Intelligence, 91(2), 225-256.
Nentwich, M. (2003). Cyberscience: Research in the Age of the Internet. Vienna: Austrian Academy of Sciences Press.
Rheinberger, H.-J. (1997). Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube. Stanford, CA: Stanford University Press.
Woolgar, S. (1988). Science: The Very Idea. London: Tavistock.