ACTIVIST PHILOSOPHY OF TECHNOLOGY: ESSAYS 1989-1999

Paul T. Durbin

Contents

Chapters
1. INTRODUCTION: A GENUINELY PRAGMATIC PHILOSOPHY OF TECHNOLOGY
2. PHILOSOPHY OF TECHNOLOGY: RETROSPECT AND PROSPECT
3. HOW TO DEAL WITH TECHNOSOCIAL PROBLEMS
4. SOME POSITIVE EXAMPLES
5. BIOETHICS AS SOCIAL PROBLEM SOLVING
6. ENGINEERING ETHICS AND SOCIAL RESPONSIBILITY
7. COMPARATIVE PERSPECTIVES
8. PHILOSOPHY OF SCIENCE AND SOCIAL RESPONSIBILITY
EPILOGUE

Introduction

Many authors of collections of essays, often under the prodding of editors or publishers, attempt to turn a set of essays into a coherent book. Unfortunately, judging from reviews of the results, it is almost always clear to the reader where the seams are. No matter how much effort is made to the contrary, it is always clear to an astute reader -- and especially to readers in a specialized field who are already familiar with a good bit of the work in question -- where a particular essay begins and ends, as well as how it does or doesn't mesh well with what precedes or follows. So I am not even going to try that here. What I offer here and in three separate volumes is a collection of essays, and I won't try to hide the fact. I think there is coherence, at least in overall point of view, but I don't claim more than that. I don't want, however, to ignore totally what it is that those seeking a coherent volume are looking for; they want what they are publishing or editing to be a real book. To solve the problem, I am going to try something that I haven't seen done elsewhere. I am going to treat this set of essays as if I were editing the essays of someone else. There are examples to follow there; indeed, practically every famous author, after his or her death, is the object of such a venture.
A disciple, a family member, a literary executor or someone with similar interests decides that the thoughts spread over time of that author merit publication in a single volume or set. I'm not claiming that my writings deserve such treatment -- not before or even after my death -- but it's a model to be followed. What I have done for that purpose is to collect essays that were written over a period of twenty years and put them together as a reflection of my developing point of view over that period. At first I tried to combine everything in a single volume; now I treat three sets of essays as three separate volumes. I will leave it to the reader to decide whether my approach was a good choice or not. But for me, it is the only honest option open to me, short of starting over and writing three new books, beginning to end. And that admittedly more difficult chore, even if I were to undertake it, would not actually reflect a developing point of view as well as I think these sets of essays do. Here is the way I have collected my essays, introducing each one as if I were introducing the essays of someone else.

This volume is introductory throughout. It includes the essays that I first put together as "Activist Essays in Philosophy of Technology: 1989-1999," and put on my Philosophy Department website as far back as 2000. However, in this revision, I eliminate two of the essays from that set, because they reappear in different versions later and because they fit better under a later theme. The separate second set of essays, "Activist Philosophy of Technology: Essays 1999-2009," will take two steps forward: Part One attempts to elaborate on and enlarge the first four chapters here, trying to develop the original foundations in as clear a fashion as I can at this point in my life.
Part Two attempts to advance some central themes in this first set of essays, deepening my insights about the need for a broader social responsibility perspective in a whole range of professional fields. One way it does that is to add examples of new fields not covered here. At one point, I thought about including a Part Three in that second set. Instead, I have now created a third volume, which introduces the most topical part of my recent essays, with a focus on the related themes of sustainable development and globalization.

This first set of essays begins with an introduction that has a history. A long time ago, in my contribution to the conference I set up at the University of Delaware in 1975 (Durbin, 1978), which would lead to the founding of the Society for Philosophy and Technology, I had argued for an opening in the would-be field for an American Pragmatist approach. I based my version more on the thought of George Herbert Mead than on the better-known John Dewey. It took almost twenty years for me to produce a book along those lines: my Social Responsibility in Science, Technology, and Medicine (1992). Then, a short time later, when Carl Mitcham, my primary collaborator in establishing SPT, was editing -- along with Leonard Waks -- a volume of Research in Philosophy and Technology on the topic, "technology and social action," they invited me to do a lead essay. What I wrote, "In Defense of a Social-Work Philosophy of Technology," borrows shamelessly from early chapters of Social Responsibility. So the history of the development of my thinking on what I see as the best approach to a philosophy of technology -- an activist approach as legitimate philosophizing -- shows a continuity from the very beginning in 1975 to almost the end of the century, in 1999.
This first chapter here appeared, as the leadoff essay, "In Defense of a Social-Work Philosophy of Technology," in Carl Mitcham, ed., Research in Philosophy and Technology, volume 16: Technology and Social Action (1999).

Chapter 1
INTRODUCTION: A GENUINELY PRAGMATIC PHILOSOPHY OF TECHNOLOGY

Without apology, in this chapter I espouse a piecemeal, public-interest-activism approach to philosophy of technology. It is modeled after the social ethics of G. H. Mead (1934, 1936, 1964a, 1964b) and John Dewey (1929, 1935, 1948). As I have said elsewhere (Durbin, 1992), that may not satisfy many philosophers, but the situation reminds me of the old saying of Winston Churchill: A piecemeal approach to social problem solving may seem the worst sort of ethics for our technological age -- except for all the rest.

"Professional ethics," in one form or another, has become something of a mainstream activity, both in certain segments of academe and in certain circles within professional associations. Conferences involving an amazing array of professional disciplines and associations have been held at the University of Florida, and there is an Association for Practical and Professional Ethics, based at Indiana University, that runs regular meetings -- equally well attended -- every year. Carl Mitcham (1998?) and Leonard Waks (Mitcham and Waks, 1997) have lamented the fact (as they see it) that this growing body of literature includes all too few explicit references to the centrality of technology in generating the problems that applied and professional ethics practitioners address. Mitcham and Waks admit that biomedical ethics, engineering ethics, and computer ethics often, perforce, address issues related to technology and particular technological devices -- computers themselves, but also artificial intelligence, etc., in the case of computer ethics.
But, Mitcham and Waks complain, "the technological" in these cases is all too often subordinated to the ethical (often to very traditional ethics) rather than transforming ethics. I believe there is something to be said for the Mitcham/Waks complaint. However one defines technology -- whether in terms of new instrumentalities or devices or processes, or in terms of so-called "technoscience" (that peculiar admixture of science and engineering and other technical expertise with capitalism or modern governance so common in our era) -- the phenomena associated with contemporary technologies or technological systems ought to have a central place in contemporary discourse. And that means they should have such a place in ethical and legal discourse -- and therefore also in the discourse of those philosopher/ethicists concerned with real-world issues in our technological society.

In this book, I take it for granted that academic ethicists have at least made a beginning in taking note of technosocial problems. What I advocate is that they should take greater notice of these issues. And I am urging them to do so in an activist fashion. Some philosophers have claimed that academic ethicists have a special claim to contribute to the solution of the sorts of technosocial problems I have in mind. I dispute that claim if it assumes that philosophers can claim a special expertise in these areas. In my opinion, we are all involved in technical decisions: the experts who are involved directly with them, those who hire or otherwise deploy the experts, citizens directly or indirectly impacted by the decisions, and the entire democratic citizenry who pay the taxes that support the ventures or benefit the corporations involved in them in myriad ways or who must often pay (not only through taxation) for the foul-ups so often associated with large technological undertakings (and not only with technological disasters).
Technical expertise is often central to the creation of technosocial problems -- but also to their solution or at least remediation. Corporate or governmental expertise is also involved. Citizens can become experts, but they continue to have a legitimate democratic voice when they do not. Philosophers in general, and ethicists in particular, often gain their own expertise -- most commonly in arriving at legal or political or social consensus on technosocial issues. But no one -- none of the actors in these complicated issues -- has any more expertise than he or she does in his or her own limited area of focus. We are all involved, together, in the sorts of decisions (and often the lack of considered decisions) that I have in mind. What I focus on in this book is the help that philosopher/ethicists can contribute in the search for solutions to technosocial problems -- but especially to how they can do a better job of it than they have done so far.

Ralph Sleeper (1986) has interpreted Dewey's philosophy as fundamentally meliorist. I like that. Sleeper's contrast of Dewey with Martin Heidegger and Ludwig Wittgenstein seems to me especially instructive. According to Sleeper (p. 206), Heidegger and Wittgenstein "have none of Dewey's concern regarding the practice of philosophy in social and political criticism." Earlier in his book (p. 7), Sleeper had noted how this "accounts for [Dewey's] . . . pervasive sense of social hope. It accounts for . . . his dedication to the instruments of democratic reform; his historicism and his commitment to education; his theological agnosticism and his lifelong struggle to affirm the 'religious' qualities of everyday life." I suspect it is clear to anyone who has read Dewey carefully that the sorts of problems Dewey wanted to attack with his transformed, meliorist philosophy are very similar to those dealt with by leading advocates of an ethics of technology.
Mead did not live nearly as long as Dewey, and the social problems to which he addressed his equally meliorist philosophy were those of just the first three decades of the twentieth century. That was before the high-technology period of "post-industrialism" or the "scientific-technological revolution," as it was called in the pre-1989 Communist Bloc. But the spirit of Mead's philosophy is the same as Dewey's. And, as seems to me often to have been the case, Mead was clearer than Dewey when it came to stating the theoretical underpinnings of their shared approach. According to Mead (1964, p. 266): "The order of the universe that we live in is the moral order. It has become the moral order by becoming the self-conscious method of the members of a human society. . . . The world that comes to us from the past possesses and controls us. We possess and control the world that we discover and invent. . . . It is a splendid adventure if we can rise to it." In other words, societies acting to solve their problems in a creative fashion are by definition ethical.

Traditional definitions of ethics are inadequate, Mead thought, and he grounded his social-action approach on this inadequacy. This is emphasized by Hans Joas (1985, p. 124) in a recent reinterpretation of Mead: "[Mead] and Dewey developed the premises of their own ethics through criticism of utilitarian and Kantian ethics." Specifically, according to Joas, "In Mead's opinion, the deficiencies of utilitarian and Kantian ethics turn out to be complementary: 'The Utilitarian cannot make morality connect with the motive, and Kant cannot connect morality with the end.'" Utilitarians, who base their view on people's self-interest (according to Mead), fail to provide an adequate grounding for altruistic social action. Kant, on the other hand (again according to Mead), fails to see that the right way to do one's duty is not predetermined; it must be worked out in a social dialogue or struggle of competing values.
In both Dewey and Mead, ethics is not a set of guidelines or a system but the community attempting to solve its social problems in the most intelligent and creative way its members know how. In a technological world, ethics is community action attempting to solve urgent technosocial problems.

I believe one can make a positive defense of a social ethics of technology. What this means for me is to demonstrate that there is some hope that some of the major social problems of our technological age are in fact being solved. A recent study of reform politics and public interest activism (McCann, 1986, p. 262) says just that:

"Throughout the [United States], myriad progressive groups have been mobilizing and acting on behalf of crucial issues largely outside the glossy mainstream of media politics: the variety of church, campus, and community organizations mobilized around issues of U.S. policy in South Africa and Central America as well as nuclear arms policy; the increasingly effective women's and gay-rights movements; the growing numbers of radical ecologists and advocates of "Green Party" politics; the renewed efforts to mobilize blacks, ethnics, and the multitude of the poor by Rev. Jesse Jackson and others; the diverse experiments of working people both in and out of labor unions to reassert themselves; and the legions of intellectuals committed to progressive economic and social policy formulation -- all have constituted elements of an increasingly dynamic movement to build an eclectic base of progressive politics in the nation."

This puts the case for progressive reform generally. Here, I want to concentrate on the contributions that contemporary philosophers, including academic philosophers, might make to the solution of technosocial problems. In an earlier book (Durbin, 1992), I concentrated on the kinds of reforms technically-trained professionals might be able to bring about.
I took up specific examples, focusing on seven of ten representative types of technosocial problems. Part two of that book addressed general problems, such as education, health care, and politics. Part three focused on problems specifically related to technology: biotechnology, computers, nuclear weapons and nuclear power, and problems of the environment. In each case, I tried to show that no real reform is likely to take place unless technical professionals are willing to go beyond what is demanded by their professions to get involved with activist groups seeking to bring about more fundamental change. I made the same claim with respect to academic philosophers generally but also with respect to philosophers of technology. What I do in this book is expand on this challenge to my fellow philosophers. How philosophers of technology might contribute, within our intellectual climate today, I do not take up again until chapter 7.

In my earlier book, before launching into a demonstration of how the approach might work out in practice, I felt a need to provide a sample case. What I chose for this purpose was the case of professionals attempting to deal with problems of families in our technological world. There we see clearly displayed the combined power (if they get involved in activist ways) and weakness (if they do not) of that set of professionals most people would see as likely to get involved in activism in our culture. What I hoped to show by this means was a pattern: trained professionals -- in this case, social workers and other "helping" professionals -- who attempt to deal with the problems they are trained to address are helpless to get their professional goals accomplished if they do not go beyond mere professional work, if they do not get involved in activist coalitions with people outside their professions. In the rest of that book, I tried to show this same pattern with respect to technical professionals.
In this book I focus mainly on philosophers, assuming that the other activists are still active. In a nutshell, this is my claim here. There are a great many social problems in our technological world. Many ethical solutions have been proposed. But in the end none of them seems as likely to be a solution as an approach like that of Mead and Dewey that would urge philosophers to work alongside other activists in dealing with the real problems that face us. Other ethics-of-technology approaches might also work, but in my view that can only happen if their practitioners become as actively involved as Mead and Dewey were.

Why should anyone accept this social work model of philosophy of technology? Clearly they should not do so on the authority of Dewey and Mead -- let alone on mine. At this point, an early reviewer of the manuscript complained that I do not develop a detailed theory or program of activist philosophy of technology. At first I was taken aback; why would anyone call for a theory of what is basically an anti-theoretical approach? But a moment's reflection made me sensitive to the complaint -- though I still resist its thrust. I claim here only to be following the lead of Dewey and Mead. In my opinion, Dewey has already produced an excellent defense of activism in his Reconstruction in Philosophy (1920; 2d ed., 1948) and even a program of sorts in Liberalism and Social Action (1935). The incredible extent of Dewey's activism is documented in Bullert (1983). Mead, for his part, felt no need to provide either a theory or a program; he simply viewed his deep involvement in a variety of causes and political activities in and around Chicago as an expected extension of his philosophical commitment. (See Feffer, 1993, chapters 9-13; note that Feffer is highly critical of the impact of Mead's interventions.)
I am not here claiming to update Dewey's defense of activism for our own time; the reader who is interested enough can go back to Reconstruction in Philosophy and Liberalism and Social Action -- or, for that matter, to Dewey's The Quest for Certainty (1929) or A Common Faith (1934). I actually prefer Mead's attitude, that activism simply follows from a commitment to pragmatism. But if people are not going to be persuaded on the basis of authority, they need an argument. And a fully satisfying argument is difficult to come by.

No one could be persuaded on the basis of a rigorously compelling logical argument -- certainly not on the basis of a claim that it is contradictory, in the literal sense, to defend ivory tower solutions for real-world problems. Dewey and Mead opposed the academicizing of twentieth-century philosophy, but they did so precisely because they thought that philosophy has almost always, down through the centuries, been linked to the attempt to solve real-life problems. No more than that. Neither is any factual argumentation likely to be totally compelling. There might be a social philosophy or a political philosophy argument, but nothing of that sort is likely to be genuinely decisive. Mead and Dewey offered historical arguments, but I doubt that they really expected academic philosophers to be persuaded. In the end, it seems to me that what it comes down to is a social responsibility argument -- a demonstration of the urgency of social problems in our technological world combined with the opportunity that exists to do something about these urgent problems.
In the list of (classes of) problems I referred to as a touchstone in my earlier book, some of the issues have the urgency of sheer survival -- e.g., nuclear proliferation or worldwide ecological collapse -- and others are related to fears about the survival of human values in the face of genetic engineering or possible new advances in applications of artificial intelligence or "smart" programming of computerized systems that escape human control. But others are keyed to threats to the good life in a democratic society: technoeconomic inequities or disparities between rich and poor (nations or individuals); hazards of technological workplaces or extreme boredom in high-technology jobs or widespread technological unemployment even among highly trained professionals; extreme failures of schools -- including universities and professional schools -- to prepare their graduates (or dropouts) for the jobs that need doing today, or for a satisfying and effective political/civic life; the widely-recognized but also confusing health care crisis; even technological and commercial threats to the arts and traditional high culture. Such a list, as a generalized list of classes of contemporary problems, cannot even begin to hint at the urgency I have in mind. It is genuinely felt problems, of numbers of people in local communities everywhere throughout modern society, that will be compelling. People motivated to do something about particular local problems do not look kindly on an academic retreat to the ivory tower. But what I would stress is not people's disfavor; I would emphasize the opportunity such issues represent for philosophers to get involved. And some have gotten involved; that is the other half of my argument (or sermon). In my earlier book, I offered several examples. 
The first was related to a very technical aspect of contemporary philosophy of science -- as academic a field as there could possibly be -- and has to do with philosophical interpretations of artificial intelligence. Quite a few philosophers of science have simply jumped on the bandwagon in this field, defending even the most extreme anti-humanistic claims of the artificial intelligence community. But some philosophers (e.g., Hubert Dreyfus, 1992, and John Searle, 1992) have gained a certain notoriety as opponents of exaggerated claims for artificial intelligence. I do not address this kind of contribution at all in this book. However, in chapter 8, I try to show that even those forbiddingly academic folks, philosophers of science, can make at least limited social contributions.

While I find the work that academic philosophers have done on artificial intelligence interesting, I am not overwhelmed by the contributions that others think that academic philosophers can make. Thomas Perry (1986) claims that certain philosophers (Perry mentions Judith Thomson, Thomas Scanlon, James Rachels, and Jeffrey Reiman) have thrown "increasing light on the privacy problem" (p. xiii; presumably in discussions of issues such as abortion and euthanasia). Certainly many applied ethicists have made contributions to public debate on such issues, but my claim is that they do not necessarily thereby contribute to the solving of social problems. To do that (as one example), they would have to join with others to bring about real reform. I look at two examples here, bioethics (in chapter 5) and engineering ethics (chapter 6).

Returning to the possibility of direct contributions by academic philosophers to the solution of social problems, I here add some other examples not included in the earlier book. (They include nuclear waste disposal, the regulation of toxic products more generally, and environmental ethics broadly, among others. They are taken up in chapter 4.)
A second (still academic) example has to do with work on encyclopedias and other integrative publishing ventures, as well as integrative teaching programs in colleges and universities. Here, a small number of philosophers have exempted themselves from the normal promotion-ladder process in academia -- often against extreme pressure not to get involved -- to devote themselves to integration work. One example is the work of the editors of volumes such as the Encyclopedia of Bioethics (1978 and 1995). Similar projects in other fields help solve our social problem of intellectual fragmentation by bringing together, in a coherent whole, the work of specialist scholars in a vast array of fields -- a task for which thousands of students, not to mention physicians and other healthcare workers and their patients (in the bioethics example), ought to be as grateful as for the original specialist scholarly expertise. Similarly, a small but important band of interdisciplinarily-inclined philosophers have worked with others to establish integrative programs that help otherwise bewildered, career-oriented undergraduates to see some connections in the facts (and specialist hypotheses) they are so pressured to absorb. (See Marsh, 1988; Klein, 1990; and Edwards, 1996.) A third example has to do with philosophers who have ventured completely outside their academic roles, joining with others in ethics committees, technology assessment commissions, and so on. The best known example is the small group of bioethicists who worked with the two U.S. national commissions which, in the 1970s and 1980s, studied the regulation of human biomedical and behavioral research. By their own admission (see Beauchamp and Childress, 1989, pp. 
13-14; Brock, 1987, and Weisbard, 1987), these philosophers discovered that their abstract theories helped them very little toward reaching consensus on controversial issues; for that they had to devise a set of principles of lesser generality that almost all the commissioners could agree on. The resulting guidelines do not, strictly speaking, solve problems in the practice of medicine and related areas of professional practice; only the participants in local controversies can do that (and even then only partially and temporarily). But the influence of the philosophers on the commissions, and of the resulting commission guidelines on practice, seems to have had an overall social benefit. And this continues today, with (U.S.) Presidential commissions on cloning and similar ventures.

A final example among possibilities for philosophical activism I take directly from the conclusion of my earlier book. The final way I have said (Durbin, 1992) that contemporary philosophers can contribute to the modern world is as what I would call secular preachers -- advocates of vision in the solution of social, political, and cultural problems. I had in mind philosophers like Albert Borgmann in Technology and the Character of Contemporary Life (1984) and Crossing the Postmodern Divide (1992). Bruce Kuklick, in The Rise of American Philosophy (1977), maintains that this role has come largely to be scorned by academic philosophers after the rise of philosophical professionalism. I believe Kuklick is, for the most part, correct; but I also believe that the small number of philosophers who still feel called upon to play this role are not necessarily out of the philosophical mainstream. Another recent American philosopher who has been perceived as playing this cultural role is Richard Rorty (1979, 1982, 1989) -- though he tends to look to literary figures rather than philosophers for such cultured vision.
Presumably, in this dichotomy, he would think of himself as more a literary figure, an essayist, rather than a philosopher -- at least in the narrow academic sense. On the other hand, many critics -- and I include myself among them -- do not see Rorty as sufficiently activist in the Mead/Dewey sense. Rorty would exercise his culture-criticism -- especially his criticism of the contemporary culture of academic philosophy -- exclusively at the intellectual level. And even at that level, some critics have accused him of lacking the conviction that a preacher, even a secular preacher, needs.

One of Rorty's defenders, Konstantin Kolenda (1990), attempts to address these criticisms -- of Rorty's lack of a "philosophically serious social activism" like that of Dewey (see Richard Bernstein, 1980a, 1980b, 1987), or of lacking a democratic liberalism with specific content (see Cornel West, 1985 and 1989). Kolenda appeals to the political credo that Rorty proposed in response to West's goading. But, strangely, neither Kolenda nor Rorty relates this credo to activist attempts to see it put into practice -- though, very recently, Rorty (1998) has made something of a move in that direction. (On Rorty, see also Saatkamp, 1995.) I would not commend secular preaching, whether Borgmann's or Rorty's, if it were not connected to activism. Intellectual discourse unrelated to specific solutions for real and urgent problems is no better outside than inside the academy.

Some concluding notes: I would not want anyone to think that I have provided, here, anything like a comprehensive list of all -- or even a representative sample -- of the philosophical work in the United States in which philosophers have joined in activist crusades to solve urgent technosocial problems. Even Michael McCann (1986), in his broader-ranging summary of progressive activists, had to resort to generality when he referred to "legions of intellectuals committed to progressive economic and social policy formulation."
Perhaps "legions" exaggerates, if one is applying the claim to philosopher/activists, but surely there are many more of them than the "ivory tower" stereotype would suggest -- and surely there are more than I am personally aware of, especially given that much activism is buried in group efforts on local issues. These activists are, as often as not, the proverbial unsung heroes. Moreover, I would not want anyone to think that I approve of any and all activism(s), philosophical or other. Not all activism is good. All voices have the right to be heard in a democracy, but voices of groups that work to undercut this very democratic freedom -- indeed, voices of groups that are not positively committed to expanding democracy, to the removal of power structures or social structures that keep some groups down -- seem to me to be abusing the freedom they claim to be exercising. What I (along with Mead and Dewey) want is for philosophers to join with progressive activists, with those who are consciously fighting for the expansion of social justice and the elimination of unjust inequities.

As I said earlier, it is going to be very difficult to offer an argument that will persuade very many academic philosophers. So my appeal, in the end, is to the overwhelming urgency of technosocial problems, large and small, local, national, and international. I am just happy that some philosophers, recognizing this urgency, have joined with progressive groups in trying to solve the problems. Here I argue that there should be more.

Chapter 2
PHILOSOPHY OF TECHNOLOGY: RETROSPECTIVE AND PROSPECTIVE VIEWS

I next turn to three additional foundational chapters: this first one invites Albert Borgmann -- in my opinion the best North American interpreter of Martin Heidegger, representing the earliest dominant tradition in philosophy of technology -- to take his message about the importance of "focal things and practices," sorely threatened in technological society, a step forward into activism.
The section begins with a history of the practical aims of many early philosophers of technology, then turns to how these aims might best be implemented -- in Borgmann's case, by activism to expand the impact of his favored small groups devoted to such "focal" practices. The paper first appeared in a volume devoted to Borgmann's work, Technology and the Good Life? (2000), edited by Eric Higgs, Andrew Light, and David Strong.

Philosophers have become interested in technology and technological problems only recently -- though Karl Marx in the nineteenth century as well as Plato and Aristotle in the classical period had paid some attention either to technical work or to its social implications. Within recent decades, among North American philosophers paying significant attention to technology, Albert Borgmann (1984, 1992) holds a special place because of the originality of his call to citizens of technological society, urging them to rethink the way they live. What I want to argue, in my brief historical remarks here, is that Borgmann's work might appear to be at least partially misguided -- at least it might appear so to philosophers like myself who are primarily concerned with technosocial problems -- unless it is interpreted in a special way.

A Retrospective

The perspective I bring to these brief historical remarks reflects my practical (or "praxical" would be better) bent. In that, I differ with others who have recently summarized the history of philosophy of technology in the United States (Mitcham 1994; Ihde 1993). For me, the primary concerns about technology that gave rise to philosophy of technology were practical -- even political. Philosophers and social commentators were worried about negative impacts of nuclear weapons systems, chemical production systems, the mass media and other (dis)information systems (among others) on contemporary life in the Western world -- including negative impacts on the environment and on democratic institutions.
And typically they wanted to do something -- preferably politically -- about the situation. Among the first broadly philosophical works to say to those early philosophers of technology (myself included) that this might be a difficult struggle was the translation into English, in 1964, of Jacques Ellul's The Technological Society. There Ellul spelled out what he called the "essentials" of a "sociological study of the problem of technology." (The word he actually uses is "Technique" -- a hypostatizing term for the sum of all techniques, all means to unquestioned ends.) According to Ellul, Technique is the "new milieu" of contemporary society, replacing the old milieu, nature; all social phenomena today are situated within it rather than the other way around; all the beliefs and myths of contemporary society have been altered to the core by Technique; individual techniques are ambivalent, intended to have good consequences but contributing at the same time to the ensemble of Technique; so that, for instance, psychological or administrative techniques are part of the larger Technique, and no particular utilization of them can compensate for the bad effects of the whole. All of this leads to Ellul's overall characterization: there can be no brake on the forward movement of the artificial milieu, on Technique as a whole; values cannot change it, nor can the state; means supplant ends; Technique develops autonomously. This was the Ellul most of us knew in the 1960s when we first started reflecting philosophically on technology. More knowledgeable students of Ellul, however, saw this as merely Ellul's warning -- a warning about what Technique (technology?) demands if we do not heed it and act decisively. But how can we act, given Ellul's pessimistic conclusions? What these Ellulians say we missed was the dialectical nature of Ellul's thinking.
Every sociological warning was matched by a theological promise; more particularly, The Technological Society was intended (they say) to be read in tandem with The Ethics of Freedom (1976). According to one of these scholars: "Ellul's intention is to attempt to make . . . [the absolute] freedom [of Christian revelation] present to the technological world in which we live. In so doing, he hopes to introduce a breach in the technical system. It is Ellul's view that in this way alone are we able to live out our freedom in the deterministic technological world that we have created for ourselves" (Wenneman 1990, p. 188). This reading of Ellul seems to have been, at that time, limited almost exclusively to a group of Ellul's fellow conservative Christians (see Ellul 1972) -- a group already influenced by some of Ellul's sources in Kierkegaard and so-called existential theology (Garrigou-Lagrange 1982). Some of these same religious critics of technology were influenced, at the same time, by translations of works of Martin Heidegger into English. But in the 1960s this did not, to any great extent, reflect Heidegger's concerns about technological society. At the opposite end of the political spectrum, we were influenced, in the late 1960s and early 1970s, by the writings of Herbert Marcuse (especially One-Dimensional Man, 1964) -- the widely acclaimed "guru of the New Left." Where Marcuse's neo-Marxism seemed to differ from the dire warnings of Ellul's pessimism about technology was in its offering of a possible solution to technosocial problems. Marcuse and other neo-Marxists were, in some ways, as pessimistic as Ellul. No amount of liberal democratic politics, they said, could get at the roots of technosocial problems. But there was a way out: to challenge the technoeconomic system as a whole. 
(Marcuse was explicit that this meant challenging, not only the capitalist technoeconomic system of the West, but also its imitator, the "bureaucratic socialist" technoeconomic system of the Soviet Union and its satellites.) Only a wholesale revolutionary challenge to the political power of technocapitalists and quasi-capitalistic bureaucratic socialists could do the trick; it was (he thought) possible to deal with technosocial problems, but all at once and not one at a time. The means was revolutionary consciousness-raising -- and, at least for a time, Marcuse (1972) saw the vehicle as the student uprisings, worldwide, in the late 1960s. (After the New Left faded, Marcuse found hope in the radical feminist movement -- but in the end he seems to have lost all hope, matching Ellul's pessimism of the right with a deep pessimism of the left; see Marcuse 1978.) Between these extremes -- in our philosophical consciousness at the time -- loomed a liberal-centrist hope. Daniel Bell, a sociologist (others would say a social commentator) rather than a philosopher, had already announced The End of Ideology (1962) (presumably it was the end of ideologies of the right as well as the left). Now he came forward to announce The Coming of Post-Industrial Society (1973) -- a society in which experts, including technical experts, offered the hope of solving technosocial problems. Bell was not, however, an unalloyed optimist. As much as he believed that non-ideological technocratic expertise could solve at least our major problems, just that much did he also worry about the "rampant individualism" of our culture. One of his best known books (Bell 1976) -- which also influenced those of us trying to fashion a philosophical response to technosocial problems at that time -- was an exhaustive documentation of the anarchy of cultural modernism in the twentieth century.
Bell did not, like Ellul, counsel a return to traditional religion as an anchor for a world adrift, but he did maintain that technological managerialism could not save us if there were no cultural standards -- if thinkers in the late twentieth century could not solve our "spiritual crisis." So the first philosophers of technology in the United States, in the late 1960s and early 1970s, had a variety of approaches to turn to in the search for solutions to such technosocial problems as nuclear war and environmental destruction -- techno-philosophies of the right, left, and center. In the next decade -- from the late 1970s until the mid-1980s -- the picture became more complex, but a political spectrum remained a useful lens through which to view the fledgling philosophy of technology scene. Langdon Winner's influential Ellul-inspired book, Autonomous Technology (1977), might suggest the contrary. Early in the book Winner says: "Ideological presuppositions in radical, conservative, and liberal thought have tended to prevent discussion of . . . technics and politics." About liberals, Winner says: "[The] new breed of [liberal] public-interest scientists, engineers, lawyers, and white-collar activists [represent] a therapy that treats only the symptoms [and] leaves the roots of the problem untouched. . . ." On what later came to be called neoconservatism, he has this to say: "The solution [Don K.] Price offers the new polity is essentially a balancing mechanism, which contains those enfranchised at a high level of knowledgeability and forces them to cooperate with each other . . . [as] a virtuous elite . . . in the new chambers of power. . . ." And about Marxist radicals of the time (before the fall of the Soviet Union): "The Marxist faith in the beneficence of unlimited technological development is betrayed. . . . To the horror of its partisans, it is forced slavishly to obey [technocapitalist] imperatives left by a system supposedly killed and buried."
And Winner (1977, 277) concludes: "It can be said that those who best serve the progress of [an unexamined] technological politics are those who espouse more traditional political ideologies but are no longer able to make them work." But this is not the whole of Winner's story. He makes these points, in fact, in a book devoted to a different sort of technological politics -- an "epistemological Luddism" that would set out, explicitly, to examine the goals of large technological enterprises in advance, and would hold them to lofty democratic standards. In subsequent books (1986, 1992), Winner has been even more explicit about this, and -- though he is still generally viewed as a technological radical -- he has come, more and more, to espouse participatory-democracy movements as the solution to particular technosocial problems. More devoted Ellulians of this period were not explicitly political, but their religious philosophies were most compatible with a theological conservatism. (See Hanks 1984; Lovekin 1991; and Vanderburg 1981). At the opposite end of the political spectrum from these conservative Christians, other neo-Marxists carried on Marcuse's critique of technology even after the decline of the New Left. Philosopher Bernard Gendron's Technology and the Human Condition and historian David Noble's America by Design: Science, Technology, and the Rise of Corporate Capitalism both appeared in 1977. Both echoed aspects of Marcuse's critique even when they did not explicitly cite him. It would be over a decade before an explicitly neo-Marcusean philosophy of technology would appear, in Andrew Feenberg's Critical Theory of Technology (1991). It makes explicit the arguments that continued to predominate in neo-Marxist critiques of technology in the late 1970s and 1980s -- right up to the demise of Soviet Communism. (See Gould 1988; Feenberg's book actually appeared after the official disavowal of Communism in Russia.)
It was at this stage that Heideggerianism entered the philosophy of technology debate in the United States. (See Heidegger 1977.) I will not deal with that influence here except in terms of three avowed neo-Heideggerians. Hans Jonas was, at the time, the best known of the three. His magnum opus, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, was not translated from the German in which he composed it (though he had been a professor at the New School for over twenty years) until 1984. But he had already published an influential essay, "Toward a Philosophy of Technology," in the Hastings Center Report in 1979. And he was already well known in the 1970s for his "heuristics of fear" in the face of such technological developments as bioengineering: "Moral philosophy," he said, "must consult our fears prior to our wishes to learn what we really cherish" in an age of unbridled technological possibilities. Don Ihde (beginning with Technics and Praxis, 1979, and Existential Technics, 1983), with his downplaying of some Heideggerian influences in favor of a Husserlian phenomenology, may seem to be an exception to my political reading of this decade in philosophy of technology. But in later works -- especially Technology and the Lifeworld (1990) -- Ihde has espoused an environmental activism that could only be implemented politically. At this point, while mentioning Ihde's later environmentalism, I want to digress for a moment. During the second decade of the development of philosophy of technology in the United States, there developed a parallel tradition of reflection on technology.
What I have in mind is environmental ethics, since a significant portion of the literature in that field touches on negative impacts on the environment of particular technological developments: the nuclear industry and electric power companies; the chemical industry; agriculture using pesticides, herbicides, and chemical fertilizers; the automobile; and so on. Without going into these issues -- and making no claims about natural affinities between philosophy of technology and environmental ethics -- it seems fair, here, to point out how strong the political dimension is in environmental ethics. And I am not just thinking of radical environmentalism, eco-feminism, or similar approaches; almost all of environmental ethics, it seems to me, is and ought to be political. Finally, we come to Albert Borgmann and his 1984 neo-Heideggerian book, Technology and the Character of Contemporary Life. I have argued elsewhere (and will not repeat those arguments here; see Durbin 1988 and 1992) that Borgmann's proposals for the reform of our technological culture -- his appeal to "focal things and practices" -- are an implicit appeal to expand focal communities. That is, they presuppose at least educational activism and probably political activism. Furthermore, the communitarian followers of Robert Bellah, who have found in Borgmann's writings an eloquent statement of goals they are striving for in our bureaucratized and technologized culture (see Bellah's comment on the dust jacket of Borgmann 1992), are clearly committed to a social movement. Many view that movement as neo-conservative, a charge that has also been leveled at Borgmann; but accepting that assessment is not a necessary concomitant of seeing Borgmann's work as having political implications.
In this retrospective, I have concentrated on two decades -- roughly the mid-sixties to the mid-eighties -- and I have made a deliberate choice to emphasize contributions to philosophy of technology that reflect a commitment to the solving of technosocial problems, typically by political means of one sort or another. There were, of course, other contributions to the development of the philosophy of technology in those years; I have myself, in fact, chronicled those other developments elsewhere (Durbin 1994) under two headings that do not emphasize the politics of technology, "The Nature of Technology in General," and "Philosophical Studies of Particular Technological Developments." However, even in many of the books I mention in that survey -- books that do not seem to have a political slant -- it is easy to perceive the political orientations of their authors. In any case, it is the political thrust of philosophy of technology that renders urgent the critical point I want to make in the second half of this chapter. A Prospective View: The Future of Philosophy of Technology: In Social Responsibility in Science, Technology, and Medicine (1992), I discuss several ways in which philosophers might follow the lead of a number of activist technical professionals who have, in recent decades, been working to achieve beneficial social change. Some of the ways I list are academic: clarifying issues, or helping to move academic institutions in positive directions. Some of the ways involve working outside academia -- for example, on ethics or environmental or technology assessment committees. But, in addition, I join the lament of those decrying the loss of "public intellectuals" or "secular preachers" -- a modern counterpart to the scholar-preachers who provided moral leadership to earlier generations of American society on issues such as slavery or child labor or injustices against workers.
The example I mention in my earlier book -- of a recent philosopher/secular preacher -- is Albert Borgmann. Especially in Crossing the Postmodern Divide (1992), he is explicit about playing the role of a public intellectual. I feel that the need for vision is so great in our culture of fragmented specialized knowledge that it is time to welcome philosopher-preachers back into the mainstream. Their numbers have been exceedingly small since the death of John Dewey, but we might hope for a resurgence now. Bringing about such a happy eventuality, however, will not be easy. Public intellectuals, visionaries, secular preachers, academic activists of any sort are going to have a very difficult time in our technological culture. The philosophers and social commentators I listed in my retrospective, above, did sometimes make a public impression. Ellul was widely hailed as the first thinker to awaken American intellectuals to the dangers of technology; Marcuse's critique of technology was widely influential among student radicals and others in the New Left; and Bell served as the favorite target of abuse for those same radicals. In the next decade, Winner and Ihde were (and are) ubiquitous speakers and panelists, and both also have influenced graduate students. Ellulianism has spread slowly and continues to be influential in much the same circles as in the late 1960s. Jonas left few disciples, but his influence in biomedical circles -- in particular in the Hastings Center, itself very influential -- was strong. As I mentioned earlier, much attention has been paid to Albert Borgmann's contributions to philosophy of technology. Whatever may be Borgmann's influence on others, whatever influence he may have that extends into the future, there remain good reasons to question the lasting influence of the other philosophers of technology that I have mentioned. Some may think it quaint of me even to include Marcuse and Bell.
Will that be the same fate, in twenty years, of Winner and Ihde and Jonas? Though an Ellulian school has persisted for twenty-five years, so far it has produced no other thinker of note. Then there is the issue of impact -- of solutions for key technosocial problems. No one can say that ideas of Ellul or Winner or Ihde or Jonas -- or, for that matter, of neo-Marxists -- have not had some influence on activists who have had success on particular issues. I would think, in particular, of Winner's influence on Richard Sclove, with his Loka Institute and FASTnet activist electronic mail network. But probably, of all those mentioned, it is philosophers in the environmental ethics community who have had the greatest and most direct impact on particular solutions for major technosocial problems. So, if I think back to why most of us early philosophers of technology got involved, in the sixties, seventies, and eighties -- and if I am right that what motivated the great majority of us were concerns over major technosocial disasters such as nuclear proliferation and widespread environmental degradation -- then I believe I am not being unrealistic in saying that the field has not had the impact that I personally hoped it would. For the most part, it has not even had a great impact in academia. What I want to talk about now is why this is so. The key, it seems to me, is to be found in the phrase, "in our technological culture." I have always had problems with Ellul's characterizations of "technological society" in the abstract. But a description with much the same thrust -- and which is both more neutral and can be tied down to specific observations in ways I find difficult with Ellul -- is available in the sociological work of Peter Berger (and colleagues) (especially 1966 and 1973).
Berger sometimes (1966) refers to his work as sociology of knowledge; at other times he describes his basic method as phenomenological (1973, acknowledging a special debt to the "phenomenology of everyday life" of Alfred Schutz 1962). He is also indebted to Karl Marx (though not to doctrinaire Marxists), to George Herbert Mead (1934), and, in a special way, to Max Weber. What Berger proposes is that we describe our culture in terms of a spectrum of degrees of "modernization," with no particular culture or society prototypically "modern." What (to Berger and colleagues) makes any particular culture "modernized" is two things: its dependence on technological production, and its administration by means of bureaucracy. (Nearly all of Berger's ideas about bureaucracy seem to come from Weber -- see Gerth and Mills 1958 -- and sociologists influenced by Weber.) Thus, the more technologized and bureaucratized a culture is, the more it makes sense to call it "modernized." And this allows comparisons both over time -- historically -- and cross-culturally, as between more and less modernized societies even at the present time. (Berger and colleagues do not like to refer to particular societies as "underdeveloped," but they think it less offensive to refer to some as less modernized.) With this characterization as his basis, Berger is able to identify key (he even says "essential") characteristics of workers in technological production facilities (including agriculture), as well as of citizens in a bureaucratized society -- which characteristics carry over into a rigidly compartmentalized private life. For example, "modern" individuals play several roles in both work and private life; they have many anonymous social relations; they see themselves as units in very large systems; and so on. It extends as well into the "secondary carriers" of modern consciousness -- the media in the broadest sense and mass education.
The latter both prepare young people for life in such a society and reinforce the "symbolic universe" that gives it meaning -- and they do so in ways decidedly different from those in non-modernized societies. Furthermore, many people in less modernized societies envy the lifestyles of those in more modernized societies, though they often do not realize what a price -- in terms of values and lifestyles -- living in a modernized society exacts. I admit that there are many similarities between this account and Ellul's indictment of ours as a society controlled by "Technique." (Both Berger and Ellul were influenced by Weber.) The difference, for me, lies in the attitudes of the two. Ellul views technicized society as an unmitigated disaster, inimical to human freedom. Berger simply sets out a framework to understand our society -- and he remains open to various forms of resistance to modernization, in both modernized and less modernized societies (though he does not think it realistic to expect societies to return to a romanticized premodern past). The way I see all of this impinging on the potential for philosophers of technology to have an impact on society is that they (we) must do so within what Berger calls the "secondary carriers" of modernization: that is, we must exert our influence either through the media or through education. And these are, by definition, oriented toward fostering modernization, not criticizing it. Almost all the impacts I mentioned, above, with respect to the philosophers of technology I listed, have been made through the media -- through book publishing, magazine articles, lectures (mostly) on the academic circuit, occasionally in interviews on radio or television. And we all know both the audience limitations of academic media and the ephemeral character of the impacts of the mass media. Today's "hot" book is tomorrow's remaindered book.
The handful of books by academics that have had or are likely to have any lasting impact are just that, a handful -- in technology-related areas, probably no more than the works of Lewis Mumford (1934, 1967, 1970) and Rachel Carson (1962). For most of us, there is little hope that our writings will have that kind of lasting impact -- even if we manage to make a momentary impression in intellectual circles. Similarly for education. Any lasting impact via mass education must come through influencing teachers and textbooks, and everyone knows by now how bureaucratized both textbook publishing and the public schools are. If we think instead of teaching the teachers, of influencing the next generation, then the impact will be by way of training graduate students; and the regimentation of graduate education is hardly conducive to producing reformers, social critics, activists who will change technological society for the better. It can happen. Some of the most critical of our current crop of philosophers of technology have survived the worst evils of contemporary graduate education in philosophy. But it is not easy, and the scholar who expects to exert a lasting impact on society via that route is almost by definition not a person who is thinking about real changes in society. Conclusion: What should we conclude from this retrospective and prospective? Abstractly, it would seem there are four possibilities. Some people will scoff. I had unrealistic hopes in the first place, they will say. Philosophy's aims should be much more limited -- limited, for instance, to analyzing issues, leaving policy changes to others (to the real wielders of power whose efforts might be enlightened by the right kind of philosophical speculations); or limited to critiquing our culture (following Hegel) after its outlines clearly appear and it fades into history, imperfect like all other mere human adventures. Others will go to the opposite extreme. I set my sights too low, they will say.
We must still hold out for a total revolution. The injustices of our age, as well as its ever-increasing depredations of planet Earth, demand this. Still others are likely merely to lament the fate to which technological anti-culture has doomed us; we must resign ourselves to the not-dishonorable role of being lonely prophetic voices crying out against our fate. Then there is my own conclusion, a hope -- following John Dewey (1929, 1935, 1948) -- that we will actually do something about the technosocial evils that motivated us in the first place. That, in simple terms, we will abandon any privileged place for philosophy, joining instead with those activists who are doing something about today's problems -- and, to some extent, succeeding in limited ways in particular areas (see McCann, 1986, as well as Durbin, 1992). Albert Borgmann might be read as endorsing any one of these options: limiting philosophy's scope to analyses of technology (however large-scale, Hegel-like those analyses might be); or offering radical, even revolutionary alternatives to a device-dominated culture, really hoping that a revolution will come about; or merely lamenting our sad, commodity-driven fate, our culture's wasting of its true democratic heritage. But I hope he would, with me, endorse the fourth option. We might, no matter how weak our academic base, still manage to succeed in conquering particular technosocial evils one at a time. And environmental ethicists, one of the positive examples I list in chapter 4, may be showing us the best way -- precisely because they do not try to succeed alone, but join with other environmental activists, fighting every inch of the way. Chapter 3 HOW TO DEAL WITH TECHNOSOCIAL PROBLEMS This chapter compares and contrasts my Meadian and Deweyan activist approach with various approaches to an ethics of technology, claiming that if any of them is going to have any impact, it must be by joining forces with real-world activists.
The paper was originally given as the vice-presidential keynote address at the 1997 Society for Philosophy and Technology international conference in Dusseldorf, Germany. In 1997, I participated in a conference on technology and the future of humankind. Some of the concerns that make that issue topical have to do with possibilities of altering human nature, either genetically or by substituting artificial for human intelligence. Stated another way, the concerns have to do with whether or not we humans can control, or continue to control, the dangerous technologies of genetic manipulation and artificial intelligence. A third concern of many at that conference was another issue of control, controlling technology's negative impacts on the environment. One traditional way in which humans have attempted to control dangerous techniques and technologies is by formulating ethical guidelines for the behavior of technical workers. From the classical age of Greek philosophy through the Middle Ages, the primary way of doing this was to define all technical workers as inferior, subordinating them to the supposedly wise leadership of certain members of a leisure class with the breadth of vision to decide issues (especially issues of justice) in a reasonable fashion (Medina, 1993). Martha Nussbaum, in her book, The Fragility of Goodness (1986), has admirably summarized Greek debates about how best to do this — debates pitting Plato against popular thinkers whose arguments he summarizes (and challenges) in the Protagoras, and pitting Aristotle against Plato. Nussbaum ends up siding with Aristotle, and her reasons for doing so can be helpful in dealing with our concerns here. Much of her book focuses on Greek tragedies — of Aeschylus, Sophocles, and Euripides — rather than on the arguments of the philosophers. 
And her favoring of Aristotle's view of ethics, in the end, is at least partly motivated by a belief that his views better capture what was best in the Greek culture of the classical period. The fundamental issue is revealed in Nussbaum's subtitle, Luck and Ethics in Greek Tragedy and Philosophy, and here is her opening summary: "It was evident to all the thinkers with whom we shall be concerned that the good life for a human being must to some extent, and in some ways, be self-sufficient, immune to the incursions of luck. . . . "This book will be an examination of the aspiration to rational self-sufficiency in Greek ethical thought: the aspiration to make the goodness of a good human life safe from luck through the controlling power of reason." And Nussbaum ends this way: "Our own Aristotelian inquiry cannot claim to have answered our original questions [about luck and ethics] once for all in favor of an Aristotelian ethical conception. . . . [But Euripides'] Hecuba leaves us with an appropriate image for [the] further work [that needs to be done]. In place of the story of salvation through new arts [the Protagoras], in place of the stratagems of the hunter and the solitary joy of the godlike philosopher [Plato], we are left with a new (but also very old) picture of deliberation and of writing. We see a group of sailors, voyaging unsafely. They consult with one another and take their bearings from that rock, which casts . . . its shadow on the sea." Nussbaum clearly thinks that an Aristotelian ethic, which does not try to escape from but incorporates the uncertainties that luck brings into our lives, is still a useful guide in our modern age, where we attempt to protect ourselves from bad luck (and natural forces) by technological means. Were she asked, Nussbaum would probably go further, and say that an Aristotelian ethic can also help us to deal with the untoward consequences of those very technological means, when they escape from human control.
(Nussbaum deals only glancingly with the "big two" among modern ethical theories, Kant's theory of categorical imperatives, and Utilitarianism; but she clearly believes that Aristotelianism is superior to those theories as well.) One aspect of Nussbaum's discussion that links her reflections to contemporary technological concerns is her discussion of techne (she often seems to prefer "craft" to "technique" or "art" as her favored translation) as a means of dealing with tuche or luck. She heads her discussion of Plato's Protagoras, "A Science of Practical Reasoning," with this quote: "Every circumstance by which the condition of an individual can be influenced, being remarked and inventoried, nothing . . . [is] left to chance, caprice, or unguided discretion, everything being surveyed and set down in dimension, number, weight, and measure" (Jeremy Bentham, Pauper Management Improved). A short time later, Nussbaum summarizes the myth of Prometheus: "These proto-humans (for their existence is so far more bestial than human) would soon have died off, victims of starvation, overexposure, the attacks of stronger beasts. Then the kindness of Prometheus (god named for the foresight and planning that his gifts make possible) granted to these creatures, so exposed to tuche, the gift of the technai. House-building, farming, yoking and taming, metal-working, shipbuilding, hunting; prophecy, dream-divination, weather-prediction, counting and calculating; articulate speech and writing; the practice of medicine . . . with all these arts they preserved and improved their lives. Human existence became safer, more predictable; there was a measure of control over contingency." The connection with Bentham's modern faith in "dimension, number, weight, and measure" as means of improving the human lot could not be clearer.
Except that Nussbaum's project, in this chapter and later, is to show that the ethics first of Plato and then of Aristotle offers a better, more reasonable control of human misfortunes than scientific-technological means — including Bentham's utilitarian reforms. Nussbaum's focus is on tuche or (bad) luck in ancient Greece, though she clearly thinks the lessons to be learned there are relevant for the ages. What means, on the other hand, have recent thinkers explicitly proposed for dealing with misfortunes associated with modern science and technology? I think they can be summed up under four broad categories (as long as we are willing to entertain the possibility of overlaps): 1. Technology Assessment: This has been the technical experts' method of choice. It has a great many variations, both in design and in execution, but a brief and generic summary is possible. One textbook, which attempted to summarize the state of the art at the beginning of the popularity of the technology assessment movement in the USA (in the 1970s), organizes the method around ten strategies: 1. problem definition; 2. technology description; 3. technology forecast; 4. social description; 5. social forecast; 6. impact identification; 7. impact analysis; 8. impact evaluation; 9. policy analysis; and 10. communication of results (Porter, et al. 1980). This bare-bones skeleton can easily mask the extraordinary difficulties involved. Any sort of forecasting is difficult, and technological forecasting is no easier. One leader in the field, Joseph Coates, is quoted as identifying not just first-order and second-order consequences of a new technology (TV), but third-, fourth-, fifth-, and sixth-order consequences! And so the problems or difficulties mount. In actual assessments — for instance, by the Office of Technology Assessment of the U.S. 
Congress, during the roughly twenty years of its official existence — impact analysis often ended up being restricted to economic impact assessments using the economists' technique of cost-benefit analysis (sometimes risk-cost-benefit analysis). Even aside from the obvious difficulty of quantifying costs and benefits in monetary terms — along with the further difficulty of quantifying people's values or choices in the same terms — this approach is fraught with other difficulties. For example, deciding what count as internal or external costs (externalities); settling on a discount rate for future costs; leaving ultimate decisions to officials who can ignore everything said in the assessment; etc. And of course the obvious problem already mentioned, that of reducing everything to economic choices and values, is absolutely fundamental. Some authors have attempted to put an ethical coloring on the method, linking it or even equating it with an ethical utilitarianism (usually with value assignments transcending the purely monetary). Others, worried about the limitations of utilitarianism as a defensible ethical system, have attempted to maintain its broad outlines but correct its fundamental limitations by making non-consequentialist assumptions, such as (especially) egalitarian rules of justice, which would trump some consequentialist assessments (Shrader-Frechette, 1985 and 1991). Other expertocrat assessors have attempted to make other compromises between consequentialist assessments and ethical rules — and one of these will be described later. Still others have eschewed any appeal to ethics, claiming to leave any alleged inequities arising from expert assessors' judgments to the democratic political process for rectification (Florman, 1981). This amounts to a compromise, not with ethical rules for the control of technologies, but with politics as the preferred method. 2. 
Proposals for Ethical Rules as Limits on Technology (or Particular Technologies): I have, before (Durbin, 1992), considered a short list of four or five ethical approaches to the control of technological problems. In addition to Shrader-Frechette (just mentioned), I listed Hans Jonas, some Heideggerians, and some Ellulians. To that list, I would now add Carl Mitcham (who has recently added ethical concerns to his metaphysical concerns) and also Hans Lenk. Jonas is the best known (see especially his 1984) ethics-of-technology advocate, on the basis of his avowedly post-Kantian "categorical imperative of fear or caution" in the face of such new human powers as biotechnology. Neo-Heideggerian (or post-Heideggerian) Albert Borgmann (1984, 1992) is less concerned with new moral rules than he is with "focal things and practices" that offer a counter to the consumerist Zeitgeist of our technological age. Others have seen similarities between this approach and the new communitarianism in ethics (see Bellah, et al., 1986). Ellulians, often conservative Christians but not necessarily so (see Hottois, 1984, 1988), offer something akin to religious existentialism as a reply to the excesses of technology — a kind of "just say no" resistant attitude (see Wenneman, 1990). Mitcham, in his more metaphysical writings (see his 1994), has always seemed to favor a humanistic/romantic resistance against the "engineering approach" to problem solving; this resistance clearly borrows from Ellul and is similar to Borgmann's approach. But now that he has explicitly taken it upon himself to produce a "high-tech ethics" (forthcoming), he is more willing to preach a gospel of cooperation between engineers and technical experts, on one hand, and humanistic and other critics, on the other — along with ordinary citizens concerned about controlling technology's bad effects. 
The common theme in all of these approaches is that what we need to depend on for the control of technology is moral rules, or good moral character, or exemplary moral behavior (perhaps especially on the part of technical experts). Hans Lenk (e.g., 1987 and 1991) carries this approach to an extreme with his proposal that we acknowledge the multiple levels of individual and collective contributions to technological activities and assign specific (kinds and levels of) responsibilities to each (to the extent possible). In this venture, he has found willing listeners in the Verein Deutscher Ingenieure, the main German engineering professional society (see Lenk, 1992 and 1997). I want to pause a moment now to look at what our chances would be if we adopted this approach to controlling biotechnology or expert systems — including such feared negative consequences as cloned or otherwise genetically engineered superhumans. The key here (as in chapter 2, above) is to be found in the fact that ours is a technological culture (see Berger and colleagues, 1966 and 1973). "Modernized" cultures — in spite of claims put forward by postmodern critics — continue to be dominated by the twin features of technological production (often, today, supertechnologized in terms of computerization and automation) and bureaucracy (also almost always computer-supported today). This leads to consequences for individual and collective lifestyles in high-tech societies — separation of work from private life, numerous scripted roles in both, etc. — but also to the fact that "modernized" cultural values are transmitted by what Berger and colleagues call "secondary carriers." These include, especially, education — typically for a long time and to a high level if one is to contribute productively — and the mass media, including today the electronic media. 
So today, if one wants new ethical rules to have an influence on large numbers of the expert citizens and workers who might have some hope of controlling the computerized milieux in which they work (and, often, play), as well as such dangerous new technologies as bioengineering, it must be in one of two ways. One way is to intervene in the technical education of experts in the appropriate fields — in our sample cases, computers, biotechnology, and ecology. For the most part, reform proposals of this kind have recommended ethics courses for future computer scientists and biotechnologists. (I am not aware of very many cases where environmental ethics is a requirement for future ecologists or environmental studies students — though an environmental policy program I work in does strongly recommend a course in environmental ethics.) I have been involved, directly or indirectly, in at least two such programs, and I have enjoyed working with future computer programmers and future biomedical scientists (who will, in fact, be doing biomedical engineering). These are bright and eager students with extremely promising careers. And an ethics course may have some impact on their professional work in the future — but only if it is taught as an invitation to ongoing continuing education, to lifelong learning. If a student does no more than learn a few rules now, those rules are almost certainly going to be too general to help in the future in problematic situations; if, on the other hand, students practice now for future problematic situations, and — when real problems arise — if they relearn again and again, in ever more detail, how really applicable rules help in really controversial situations, then an ethics course may help (some). 
Similar problems arise with respect to the other way we might have an influence, through the media: publishing books and articles, disseminating ethics case decisions to larger audiences in professional societies, occasionally getting ethics issues (and, implicitly, ethical guidelines) into mass-media publications or broadcasts or even movies or TV shows, and so on. As philosophers, we have been trained to believe strongly in the power of the word, written or spoken — or broadcast (imaginatively as opposed to the banality of most broadcasting). But we should be very realistic here. If we consider our greatest preachers of ethical rules for technology to have been philosophers like Hans Jonas, then we need to do our homework to find out just how many people have actually read Jonas's writings, writings of others influenced by him, and so on. And of course we need to go further and ask how many (of the crucial people we want to reach) have actually heeded his rules of caution. In my experience, the numbers here are even more discouraging than the numbers reached by ethics education for technical experts; almost none of the young computer professionals or biotechnologists I have known (even if they took one of my classes), and even fewer of their coworkers and managers (when I have talked with them later) have ever so much as heard of Hans Jonas — or Albert Borgmann, Carl Mitcham, etc. The philosophical voice today is a muted voice, and most of the philosophers that I know are extremely wary of those popularizations of ethical rules or ideas that occasionally find their way into broadcasts or media productions that do reach larger audiences. Do we really want our deepest concerns about cloning to be dealt with through "Jurassic Park"? On the other hand, do BBC-type considerations of these same issues actually have an impact on the behavior of the biotechnology professionals we want to reach? 3. 
Radical Politics: Worries about the inadequacy of preaching do-good rules — as well as an almost complete assurance that, if left to their own risk/cost/benefit calculations, technical professionals (devotees of "virtual reality," of cloning, or of further depredations of the environment in the name of "sustainable development") will always favor more of the same rather than controls on their work — have led others to the conclusion that the only effective way to control technological developments that we consider undesirable must be political. I have already mentioned Samuel Florman — who is, properly speaking, an advocate of unfettered technological advance . . . until it generates public controversy, when the appropriate way to deal with it (Florman says) is through public hearings and other administrative mechanisms of the modern liberal-democratic polity. This, however, is a far cry from the views I have in mind here. Many advocates of political, as opposed to ethical, control of technology have been Marxists or neo-Marxists. One of the best known is the historian, David Noble (especially 1977 and 1984). In America by Design (1977), Noble concentrates on documenting the rise of science-based technocapitalism. The politics of control is muted there, mostly a short reference at the end to the "labor trouble," "personnel problems," and "politics" that technocorporate managers and their sympathizers fear as obstacles to the continuing advance of corporate capitalism. Forces of Production (1984) is a little more political, as it focuses on further developments of technocapitalism fueled by automation; Noble says this at the end: "Certainly it is of the utmost importance that working people — including engineers and scientists — have belatedly begun to confront technology as a political phenomenon" (p. 350). 
But it is in a series of articles (1983) that Noble is most explicit about a call to a neo-Luddism on the part of workers displaced by automation and similar "advances"; they should, he says, "seize control of their workplaces." Noble then expands on this idea: "The real challenge posed by the current technological assault is for us to become able to put technology not simply in perspective but aside, to make way for politics. The goal must not be a human-centered technology but a human-centered society" (1983, p. 92). A little more philosophical than Noble, and with a more cooperative approach to politics (but still neo-Marxist) is Andrew Feenberg (1991 and 1995). He reinterprets Marxian thought in a direction that plays down any determinism, economic or technological. He also claims that the "unequal distribution of social influence over technological design" — keeping it in the hands of experts for the advantage of the managerial classes — is an injustice (1995, p. 3). And his fundamental proposal for reform is a democratization of the workplace, with workers cooperating wherever possible with those enlightened managers who have paid attention to calls for social responsibility and environmental concern (1991, pp. 190 and 195). If carried through to its conclusion, this sort of reform might be every bit as radical as the one Noble proposes, but in Feenberg's gentler phrasing, it sounds less confrontational. And it should be noted that Feenberg is making his proposal consciously after and in light of the fall of Communism in the old East Bloc. (Since my purpose here is to talk about controls on technologies — or technological excess — I see no need here to mention one other political philosophy. It would give a complete green light to any and all technological developments, either on laissez-faire principles or on the capitalist principle that the market should decide everything.) 4. 
Progressive Activism: Conservatism, neo-conservatism, nineteenth-century liberalism, twentieth-century "moderate" liberalism, socialism or radicalism — these do not exhaust the stops along the political spectrum, even a spectrum of political attempts to control bad effects of technological development. More than once I have argued that what we need — to bring particular technologies under control — is a combination of radical unmasking of status quo myths together with progressive politics (Durbin, 1995). But my most consistent stance has been to leave out the radical part and simply advocate progressive activism (Durbin, 1992 and 1997). And progressive activism is what I would advocate here as the most effective means of controlling particular technologies, whether biotechnology or runaway computer technologies or technological developments that threaten to undermine any progress that has been made toward sustainability on environmental issues. Elsewhere I have argued that, because there are a number of activist groups already working to avoid excesses in biotechnology developments, philosophers (along with other humanists or critical academics of various sorts) ought to join forces with these activists in trying to bring under control particular new biotechnologies, one at a time. Similarly for excesses in the implementation and dissemination of computers — in overautomation, surveillance, databanks, etc. — where activists are already at work and philosophers can do a great deal of good by joining forces with them (Durbin, 1992, chapters 7 and 8). On the environment, I have argued against both ecologists who refuse to become activists (on alleged "pure science" grounds) and philosophers who would turn environmental ethics into an academic game (Durbin, 1992, chapter 10). 
And I have gone further, to suggest that if there is to be sustainable development, it can only come about if we focus on individual development efforts in particular locales and, more important, if in those local efforts all the relevant parties can be persuaded to get involved in an effort aimed at balanced compromise. Some of the partisans will always favor development at the expense of other interests; others will demand a cessation of all development efforts; and a whole range of voices in between will favor other interests. Getting all of them to work together is seldom possible, but getting enough of them to pull together and counter both extremes is at least occasionally possible. And where this happens, there can be some approximation of sustainability — sometimes by slowing or even stopping a particular development initiative, but sometimes also by allowing a particular development to proceed with adequate concern for the local environment and adequate consideration given to justice for those most often made to suffer in the name of development, namely, poor workers and their families (Durbin, 1997). In my opinion, these are the lessons to be learned from the philosophical school of American Pragmatism — especially from William James, George Herbert Mead, and John Dewey, but also from their recent disciples in philosophy of technology, such as Larry Hickman (1990). Conclusion: An astute reader of Dewey (or Hickman) might wonder, at this point, why I started this essay with Martha Nussbaum defending Aristotelian ethics as the best means of dealing with bad luck, including the ill effects of technological development. I might, on another occasion, make the case that Dewey and Hickman have misread at least some parts of Aristotle — that an Aristotelian practical politics and social ethic could be made compatible with Deweyan activism. But that is not necessary here. 
It is enough to note Nussbaum's main point in the passages I quoted at the beginning of this chapter. The way to deal with the evils of the world, the mischances of ill fortune or the excesses of blind advocates of technological advances, is not to escape to some Platonic heaven, hoping to leave bad luck behind. Nor should we attempt to calculate and quantify all risks and costs, hoping that some magical technology assessment will provide political and managerial decision makers with all the "objective" facts and risk assessments they need to make wise decisions for our technological society — as though that were ever the path to democratic control of technologies. No, like the sailors in Euripides' Hecuba whom Nussbaum describes, we need to remain in our boat in the midst of the stream, trying the best way we can — philosophers, other academics and experts of all kinds, and activist citizens — to steer a course that will most likely, but never certainly, get us where we want to go. We may, of course, capsize; but we are more likely to achieve our goals by steering an activist middle course than by following some ideal ethical plan or some spuriously concrete risk assessment. If other philosophers of technology insist on trying to devise the ethics of technology, or if they attempt to perfect the ideal risk/cost/benefit assessment for each particular technology under consideration — all I would insist on is that their efforts are not likely to lead to any practical controls on particular technological developments unless they join with us activists in the middle of the stream. 
Chapter 4 SOME POSITIVE EXAMPLES This chapter, which was written to introduce American Pragmatism (in a broader sense) to a European audience, provides a half dozen positive examples -- beginning with Larry Hickman's insertion of Dewey's thought into the very center of philosophy of technology controversies and going all the way to practical politicking on technology issues in Internet bulletin boards associated with Richard Sclove. Under the title, "Pragmatismo y tecnologia," it was originally written, by invitation, for the Spanish journal, Isegoria: Revista de Filosofia Moral y Politica, in 1995. When I was invited to produce this survey of recent work on "Pragmatism and Technology," I decided (Durbin, 1995) to focus on a small handful of philosophical contributions that approach the understanding and control of contemporary science-based technology pragmatically. I further limited myself to contributions of North American philosophers. I here repeat that survey, with the aim of expanding on what I said in chapter 1: some philosophers have made positive contributions to the solution of technosocial problems. However, as I said earlier (1995), I think that lessons can be learned from contrasting the decidedly real-world pragmatism of some North American philosophers with more abstract, theoretical, or foundational critiques of modern technology by European philosophers. Some European philosophers that I would propose for contrast are Gilbert Hottois (especially in his Le paradigme bioéthique: Une éthique pour la technoscience [the bioethics paradigm: an ethic for technoscience], 1990), Hans Jonas (especially Das Prinzip Verantwortung [the principle of responsibility], 1979), and José Sanmartín (especially Los nuevos redentores: Reflexiones sobre la ingeniería genética, la sociobiología y el mundo feliz que prometen [the new redeemers; reflections on genetic engineering, sociobiology, and the happy world they promise], 1987). 
I do not summarize the work of these authors here. Each is well known generally (Jonas, of course, did most of his philosophizing in his later years in the USA), and each is especially well known in a particular European culture -- French, German, or Spanish, respectively. I do not here make any explicit comparisons and contrasts. What I do offer is a summary of some key North American contributions -- leaving the actual comparisons and contrasts to the reader. It can be said, preliminarily, that Hottois, Jonas, and Sanmartín all subject recent technologies -- and most especially genetic engineering -- to fundamental, even foundational, critiques. Jonas's critique, the first and best known of the three, is explicitly linked to a post-Kantian new categorical imperative based on our fears of recent technology's unprecedented expansion of human powers. Hottois goes even further, appealing to an ethical impulse deeper than any particular traditional ethical approach -- which ethical impulse (he argues) is fundamentally threatened by "technoscientific" (especially genetic?) threats to what it means to be human (or ethical, at the root level). Sanmartín's views explicitly reject any appeal to such "deep" philosophical reflections, but he still insists on fundamental transformations of social norms (making them more responsive, for example, in the case of genetic testing, to the basic rights of those tested). All of these philosophers are interested -- it is even fair to say, are passionately interested -- in practical changes in our way of life in a technoscientific world. But, to the North American philosophers to be discussed, none of their approaches, however practical, is pragmatic. Here I need to pause to mention some meanings of "pragmatism" and "pragmatic." 
So far as I know, Immanuel Kant was the first philosopher to use the adjective pragmatische; this was in one of his titles, Anthropologie in pragmatischer Hinsicht, 1798 (and it may have been no more than a stylistic variant on the praktischen of the Kritik der praktischen Vernunft, 1788). In the two centuries since, "pragmatic" -- and, later, "pragmatism" -- have had many different meanings in the philosophical literature. These range from "pragmatics," as the third subdivision of formal semantics (syntax, semantics, and pragmatics), to the names of particular philosophical traditions -- whether European (for example, Giovanni Papini in Italy and Edouard Le Roy in France) or North American. In addition to the variety of philosophical uses, the term "pragmatic" also has more than one usage in ordinary, everyday language. Some people are said to be pragmatic in a good sense -- they manage to get a great many things done efficiently; while other usages are more pejorative: "He is (merely) pragmatic, but she has a longer-range view of things." And so on. Here, I reserve the term "pragmatism" for the school of American Pragmatists (especially John Dewey and George Herbert Mead), including recent disciples. "Pragmatic" is the adjective I use to describe the work of some philosophers who, without being Pragmatists in that sense, nonetheless follow Dewey's advice, pitching in and working directly with non-philosophers to solve particular social problems -- here, problems of our high-technology contemporary society. 1. Larry Hickman's John Dewey's Pragmatic Technology (1990): I begin my survey with this book for several reasons. The first and most obvious is that its subject is John Dewey, the philosopher most people think of first when discussing American Pragmatism. A second reason is that Larry Hickman successfully reintroduces Dewey's voice -- mostly neglected until now -- into recent debates, European and American, about contemporary technology. 
Still another reason is that, of all Dewey disciples writing today, Hickman is most sympathetic to the kinds of European approaches to problems of technology I have mentioned for purposes of contrast. Scholarship on Dewey in North American philosophical circles in recent decades has mushroomed (see, for example, Morris and Shapiro, 1993; and Westbrook, 1991). Hickman acknowledges this, leaning (for instance) very often on the fine intellectual biography of Dewey by Ralph Sleeper, The Necessity of Pragmatism: John Dewey's Conception of Philosophy (1986) -- where Sleeper concludes that the one consistent theme that unites all of Dewey's contributions is meliorism: the claim, namely, that philosophy both ought to and does contribute to the improvement of the human condition. What Hickman contributes to this flood of recent Dewey studies is the claim that, for Dewey, philosophy (rightly understood) and technology (understood as problem solving within the context of real-life conflicting social values) are identical. Two quotes summarize Hickman's arguments. The first: "Inquiry was reconstructed by Dewey as a productive skill whose artifact is knowing. He argued that knowing is characterizable only relative to the situations in which specific instances of inquiry take place, and that it is an artifact produced in order to effect or maintain control of a region of experience. . . . Knowing is thus provisional . . . [and] the goal of inquiry is not epistemic certainty, as it has been taken to be by most of the philosophical tradition since Plato" (p. xii). And the second: "Of the three giants of twentieth-century philosophy -- Wittgenstein, Heidegger, and Dewey -- only Dewey took it as his responsibility to enter into the rough-and-tumble of public affairs, and only Dewey was able to construct a responsible account of technology" (p. xv). 
I would modify this last quote in only one way: Hickman does not mean to say that these other major twentieth-century philosophers should not have taken on these responsibilities. Hickman's book can stand up to philosophical criticisms on its own, but I want here, parenthetically, to provide two argument sketches that show why the Dewey/Hickman model is a particularly good one for philosophers of technology -- and philosophers generally. First, a social-scientific argument based on contributions of Dewey's collaborator, G. H. Mead: There simply is no intellectually satisfying alternative to the Pragmatists' sociology-of-knowledge challenge to all versions of epistemology (and their behavioral-psychology parallels). As Mead argued with respect to scientific knowing (see 1964) -- and as he and Dewey argued elsewhere with respect to all forms of human knowing -- even behavioral scientists who may think they are confirming individualistic stimulus-response models of knowing in their laboratories are and must be involved in a group process (confirmation). Similarly, all knowledge claims are group-specific and goal-directed -- not mere reactions to external stimuli (whether ideas, sensations, or anything else of that sort, whether proposed by philosophers or behavioral scientists) -- and the goals are always related to living meaningfully within the relevant group. The fullest elaboration of this argument is provided in Peter Berger and Thomas Luckmann's The Social Construction of Reality (1966). That remarkable book is an excellent summary of philosophical theories converging on the Pragmatist point of view (see the book's notes), but the book's subtitle, A Treatise in the Sociology of Knowledge (together with explicit claims made by the authors), indicates that their primary intent is to provide an empirically testable, sociological account of how real knowing actually takes place in real life. 
Second, a phenomenological argument: Since the social science argument is controversial, a second, quasi-philosophical argument may be in order. It is best exemplified in another work of Peter Berger, The Homeless Mind: Modernization and Consciousness (1973). There Berger and co-authors defend a view that "the sociology of knowledge always deals with consciousness in the context of a specific social situation" (p. 16). Here is a summary of their phenomenological method: "Although consciousness is a phenomenon of subjective experience, it can be objectively described because its socially significant elements are constantly being shared with others. Thus the sociology of knowledge, approaching a particular situation, will ask: What are the distinctive elements of consciousness in this situation? How do they differ from the consciousness to be found in other situations? Which elements of consciousness [i.e., of particular consciousnesses] are essential or intrinsic, in the sense that they cannot be 'thought away'?" (p. 14). To summarize, the point of these Mead-inspired arguments supporting the Dewey/Hickman thesis is that all knowledge claims are made in specific social contexts, and these contexts cannot be "thought away." In much more elaborate form, the attacks of Hubert Dreyfus (1992) and John Searle (1992) against artificial intelligence make the same sorts of assumptions. Dewey maintains explicitly in Reconstruction in Philosophy (1948) that: "Philosophy grows out of, and in intention is connected with, human affairs." And Dewey goes on: "[This] means more than that philosophy ought in the future to be connected with the crises and the tensions in the conduct of human affairs. For it is held [here] that in effect, if not in profession, the great systems of Western philosophy all have been thus motivated and occupied." 
It would appear to be pure vanity if I were to list here, as a second example, my Social Responsibility in Science, Technology, and Medicine (1992). But I do believe that my book pushes Hickman's version of a Deweyan philosophy one step further than Hickman has explicitly gone. I argue there -- and I am continuing my argument here -- that philosophers ought to follow Dewey's maxim to the letter. They should, explicitly, "in profession," go beyond academic professionalism and get involved ("progressively") in the crucial issues of the day. My argument presupposes a degree of confidence that something in fact can be achieved -- and is being achieved -- through these activist efforts. This approach has led me to describe mine (see chapter 1, above) as a "social work model" of good philosophizing -- a characterization I think Dewey and Mead might have approved. The argument I offer in its favor is not philosophical in any academic sense. It assumes the urgency of the social problems that have drawn most of my colleagues into philosophy of technology -- from environmental catastrophes, to major biotechnology threats, to widespread computer-based invasions of privacy. Problems of this sort have always bothered philosophers of technology, from Karl Marx and neo-Marxists to Martin Heidegger and neo-Heideggerians, plus a whole range of younger philosophers in the Society for Philosophy and Technology and elsewhere -- including applied ethicists. Based on the urgency of the problems and the ineffectual character of the ethical responses of most of these philosophers, my argument (such as it is) is simple: only progressive social activism seems to offer any hope of solving any of these urgent problems, even limitedly and temporarily. Not all contemporary North American philosophers claiming to be followers of Dewey would subscribe to this argument, but at least some would be sympathetic (see West, 1989). 
I turn next to three philosophers who do not claim to be Deweyans but who have done what Dewey proposes; that is, they have become activists, deeply involved with other activists in dealing with major contemporary technosocial problems (in North America, for the most part). 2a. Kristin Shrader-Frechette's Burying Uncertainty: Risk and the Case against Geological Disposal of Nuclear Waste (1993): Almost since the beginning of her philosophical career, Kristin Shrader-Frechette has been involved with a variety of technology assessment and environmental impact assessment commissions, first at the state level and then at higher and higher levels up to the Federal level in Washington, D.C. Indeed, I think it is a fair guess to say that no North American philosopher has been involved in more such committees. In some ways this is strange, because, since the appearance of Nuclear Power and Public Policy (1980; discussed below), Shrader-Frechette has often been accused of being not only anti-nuclear but anti-technology in general -- a charge she has repeatedly felt that she has to combat. But several characteristics -- the fairness of her arguments, the expertise that she brings to discussions, and the fact that she always tries to make a positive contribution -- keep getting her invited back again and again. The latest book, Burying Uncertainty (in many ways the most detailed of her books), is a good example of all of these qualities. Four-fifths of the book constitute her critique of the major plan to bury nuclear wastes deep in a mountain in Nevada.
The critique includes many by-now-familiar features of her arguments: the risk assessments used to justify the plan are faulty because they hide certain value judgments; the subjective risk assessments used are in fact mistaken in many cases; faulty inferences are drawn from these faulty assessments; there are fatal but unavoidable uncertainties in predictions of the geological suitability of the site; and the entire venture violates an American sense of fair play and equity, especially with regard to the people of the state of Nevada. These are her conclusions. The arguments in support of them are meticulous, even-handed, and unemotional in every case. This does not mean, of course, that they have been or will be viewed as such by Federal officials, including scientists, especially bureaucrats in the Department of Energy with vested interests in pushing the official project to completion; she has even been heckled when presenting her arguments in their presence. A second notable point is that Shrader-Frechette knows what she is talking about; indeed, her knowledge of both geology and the risk assessment process is remarkable in a philosopher in these days of academic specialization -- though her critics, naturally, maintain that some of her geological claims are irrelevant and that her accounts of particular risk assessments are biased against official government experts. One bias Shrader-Frechette does not attempt to hide is in favor of equity; she has even given one of her more general studies a subtitle that underscores this bias: Risk and Rationality: Philosophical Foundations for Populist Reforms (1991). This might make her sympathetic toward some aspects of Dewey's progressivism, but the social philosopher she invokes most often is John Rawls and his contractarian, neo-Kantian theory of justice as fairness.
What typifies Shrader-Frechette's approach more than anything, however, and what clearly makes her a welcome addition to any discussion (including the discussion, here, of how to deal fairly with the urgent problem of finding a place to put highly toxic nuclear wastes), is her insistence on being more than just a critic. She feels it necessary to make a positive contribution to the discussion; as she says, one purpose of the book is "to provide another alternative to the two current options of either permanently disposing of the waste or rendering it harmless" (p. 2). Admittedly providing only a sketch (one-fifth of the book versus the four-fifths critiquing current policy as epistemologically faulty and ethically unfair), what Shrader-Frechette argues for, in place of permanent disposal, is placing "high-level radwastes in negotiated (with the host community [or communities]), monitored, retrievable, storage facilities" for at least a hundred years. It is too early to tell whether Shrader-Frechette's book will have any impact, whether on blindered Department of Energy scientists and officials, or on public officials more generally -- or even on the educated public (except perhaps in Nevada). But one thing is clear now: if a philosopher were to choose to follow Dewey's advice, to get involved actively in trying to solve some urgent technosocial problem like the disposal of nuclear wastes, he or she would have to search far and wide for a better model than Kristin Shrader-Frechette as she makes her case in this book. 2b. Shrader-Frechette's Nuclear Power and Public Policy: The Social and Ethical Problems of Fission Technology (1980): This earlier venture into the epistemological/methodological fallacies of nuclear policy, along with its ethical inequities, is clearly more strident than Burying Uncertainty.
There is already all the care -- to get the facts right, to deal with risk assessors on their own terms (even when pointing out their errors), and to argue carefully and meticulously -- that one finds later. Also, as later, the ultimate aim is to make an equity-based ethical claim; but here it is reduced to little more than a dozen pages. And, though Shrader-Frechette, when she wrote this book, already had an exemplary record of working with assessment teams, this early venture does not show the same degree of care as the later one when it comes to understanding and appreciating the motives and feelings of her opponents. 2c. Shrader-Frechette's Science Policy, Ethics, and Economic Methodology (1985): About midway between Nuclear Power and Burying Uncertainty, Shrader-Frechette broadened the scope of her critique, taking on the fallacies and hidden assumptions of a whole host of technology and environmental-impact assessments. Science Policy is an extended critique of risk/cost/benefit analysis, the most widely used methodology in these various assessments. In this book, Shrader-Frechette points out general and specific problems, and she makes an extended case for what she calls regional equity -- avoiding, where possible, imposing risks or costs on people in particular geographical regions. In this middle book of the three mentioned here, Shrader-Frechette clearly moves toward providing positive alternatives to the methodologies she has criticized. She offers two: an ethically-weighted version of risk/cost/benefit analysis, and a technology tribunal -- a public procedure for weighting equitably the competing values that different scientists bring to their risk/benefit analyses.
Shrader-Frechette is here, then, clearly moving toward the positively collaborative attitude so much in evidence in Burying Uncertainty -- though perhaps the generality of the argument, focusing on a variety of assessments, probably dooms the book to have less of an impact than the later book. (Nuclear Power may have had more of an impact, though it also gave more ammunition to opponents accusing her of being anti-technology.) 3. Carl Cranor's Regulating Toxic Substances: A Philosophy of Science and the Law (1993): This is another exceedingly careful, fair, and open-minded critique of prevailing practice in another area of risk assessment: the legal control of human exposure to toxic substances. As Cranor says explicitly, his book is "not a wholesale evaluation or critique" of either the scientific process of assessing risks of toxic substances or the legal procedures for lessening or controlling the risks. What the book does offer is an argument for strengthening administrative -- as opposed to tort/liability -- procedures for dealing with control of toxic substances; more particularly: "I argue that present assessment strategies, as well as some recommended by commentators, both of which are temptingly inspired by the paradigm of research science -- the use of careful, detailed, science-intensive, substance-by-substance risk assessments -- paralyze regulation" (p. 10). Cranor's approach thus parallels Shrader-Frechette's in applying both philosophy of science and ethics (along with, in Cranor's case, philosophy of law) approaches to a major technosocial problem. Cranor, however, comes across as much less critical, much more sympathetic toward the risk assessment scientists and bureaucratic regulators than Shrader-Frechette. Like Shrader-Frechette, Cranor has been deeply involved with actual practitioners.
His acknowledgments mention a University of California Toxic Substances Research and Training Program, a University of California/Riverside Carcinogen Risk Assessment Project, the U.S. Office of Technology Assessment, and the California Environmental Protection Agency -- not to mention the office of U.S. Congressman George E. Brown, Jr., then chairman of the Committee on Science, Space, and Technology. Cranor worked for a year as a Congressional Fellow in Congressman Brown's office, and Brown supplies a warm endorsement in a preface to the book. Here, then, is another excellent philosopher-model for anyone who would follow Dewey's get-involved advice -- though Cranor's mode of philosophizing is even farther from Dewey's than is Shrader-Frechette's. 4. Richard Sclove's "FASTnet" and "Scishops" Networks: One more example of philosophical activism is the work of political philosopher and Internet guru Richard Sclove, especially in his bulletin boards, "FASTnet" and "Scishops." In his philosophical writings, Sclove (1997) has argued for populist technological design, attempting to counter the near-universal claim in our culture that technical design is a matter exclusively for experts. He has collected dozens of examples of citizens not only contributing to large-scale technical design projects but initiating them and leading the experts throughout the design and construction process. Sclove admits that these efforts have often been thwarted, and projects that began democratically have ended up being as anti-democratic as other large-scale technological developments. But his anti-expertism case is strong. However, the part of Sclove's work I am emphasizing here is his electronic-mail networks, FASTnet and Scishops.
Many contributions in their early days added to Sclove-like examples of scientific and engineering activism in "science shops" -- scientific/technical experts helping activist groups unable otherwise to afford the scientific expertise needed to counter corporate and governmental power -- and similar community science (and technology) projects. But as the newly-Republican U.S. Congress began in 1995 to threaten serious cutbacks in science funding, funding for the Office of Technology Assessment and the Environmental Protection Agency, and so on, FASTnetters and Scishoppers quickly joined the widespread lobbying on the Internet (and elsewhere) against these cutbacks. Other e-mail networks, such as Sci-Tech-Studies (otherwise dominated by fairly esoteric academic discussions of the nature and role of the field of Science and Technology Studies), also provided opportunities for activist philosophers (and other academics) to get involved. Not all of this electronic chatter added up to significant political counter-power -- or even serious real-world activism -- but there seems little doubt that some people in Congress did experience at least a small groundswell of citizen pressure that might end up having some lasting influence. Of course, the Republicans, under the leadership of Newt Gingrich and his allies, had already mastered electronic politicking; so perhaps the best one can say is that FASTnet, Scishops, Sci-Tech-Studies, and similar efforts only amounted to a partially successful counterforce. Nonetheless, this provides another example of a way in which academic philosophers could get involved fruitfully in activist efforts to solve technosocial problems. It seems to me that this is another excellent example of philosophical activism, one toward which Dewey might have had much sympathy. 5.
Another set of activist philosophers can be found among the ranks of environmental ethicists: That new field has drawn a number of philosophers, though by no means all of them are activists. There has even been a small controversy in the journal Environmental Ethics (see Hargrove, 1984, and Lemons, 1985) about whether or not philosophical environmental ethicists ought to be activists. In my opinion (Durbin, 1992b), a dichotomy separating philosophers worrying about academic "professional standards" from those who venture outside academia to work with activists on the solution of urgent environmental problems would be a disaster. In any case, a reasonably large number of environmental philosophers have chosen the activist path. (See, for example, Naess, 1989; Paehlke, 1989; Marietta and Embree, 1995; and Light and Katz, 1995.) Nor does this mean that they must give up on academic respectability -- provided that that does not lead them to forget the urgency of particular local environmental crises, not to mention the overwhelming urgency of such global environmental issues as upper-atmosphere ozone depletion, the threat of global warming, worsening industrial pollution in countries committed to rapid industrial growth in previously unspoiled parts of the world, nuclear proliferation with attendant problems of wide dispersion of nuclear wastes, and so on and on. Conclusion: I have here surveyed only five philosophers or groups of philosophical activists, but it seems to me that they represent a uniquely North American approach that is an interesting subset of American philosophers of technology. Almost from the beginning of the United States, North Americans have been accused of being peculiarly practical, even anti-theoretical. This can hardly be a fair criticism anymore, if one observes standard contributions to the philosophical literature on science and technology today.
But at least some North American philosophers would not have taken the claim as a criticism in the first place. They -- we -- would take it as a compliment. The world faces urgent social problems today, many of them linked to science and technology. Why not at least try to get in there with other activists and help solve these problems? Chapter 5 BIOETHICS AS SOCIAL PROBLEM SOLVING Chapters 5, BIOETHICS AS SOCIAL PROBLEM SOLVING, and 6, ENGINEERING ETHICS AND SOCIAL RESPONSIBILITY, are a matched set. Here in Chapter 5, I take the disarray of contemporary bioethics theories as an invitation to see that the most important work of philosophers doing bioethics is done in collaboration with medical experts and others on ethics and research ethics committees, especially at the local level. The paper was originally written, at the invitation of the editors, John Monagle and David Thomasma, for Health Care Ethics: Critical Issues for the 21st Century (1998). What I offer here are some philosophical reflections on work done roughly in the last quarter of the twentieth century in bioethics. (See, among other texts, Arras and Rhoden, 1989; Beauchamp and Childress, 1994; Beauchamp and McCullough, 1984; Beauchamp and Walters, 1989; Edwards and Graber, 1988; Jonsen, Siegler, and Winslade, 1998; Levine, 1991; Mappes and Zembaty, 1986; Monagle and Thomasma, 1997; Munson, 1992.) I offer the reflections in the spirit of American Pragmatism -- not as represented recently by Richard Rorty (1979, 1982, 1991), but in the older, progressive tradition of John Dewey (1929, 1934, 1948) and George Herbert Mead (1964), with some reference to the still older views of William James (1897).
Bioethics Philosophically Construed: Robert Veatch (1989) quotes a representative, Russell Roth, of the American Medical Association as saying it is not up to philosophers but to the medical profession to set its moral rules: "So long as a preponderance of the providers of medical service -- particularly physicians -- feel that the weight of the evidence favors the concept that the public may be better served -- that the greatest good may be best accomplished -- by a profession exercising its own responsibility to the state or to someone else, then the medical profession has an ethical responsibility to exert itself in making apparent the superiorities of [this] system" (p. 155). Veatch cites this claim in a book that places it in a broader context, within a framework (p. 146) of "different systems or traditions of medical ethics . . . including the Hippocratic tradition, various Western religions, ethical systems derived from secular philosophical thought, and ethics grounded in philosophical and religious systems of non-Western cultures" -- e.g., China and India, but also the old Soviet Union and Islamic countries. Nonetheless, Veatch takes it to be obvious that any such profession-related or parochial or denominational system of medical or health care ethics requires "critical thinking" about "how an ethic for medicine should be grounded" (p. 146). Far and away the most popular summation of this foundational approach is provided in Tom Beauchamp's and James Childress's Principles of Biomedical Ethics (1994 [and later editions]). As a critic of the approach, Albert Jonsen (1990) puts the matter, the first edition of the Beauchamp and Childress book filled a vacuum in the early years of the bioethics movement: it "provided the emerging field of bioethics with a methodology" that was in line with "the [then] currently accepted approaches of moral philosophy" and thus "could be readily taught and employed by practitioners" (p. 32).
Jonsen goes on with a neat summary: "That method consisted of an exposition of the two major 'ethical theories,' deontology and teleology, and a treatment of four principles, autonomy, nonmaleficence, beneficence, and justice, in the light of those theories." Jonsen then adds: "The four principles have become the mantra of bioethics, invoked constantly in discussions of cases and analyses of issues" (p. 32). While Jonsen is critical of the Beauchamp-Childress approach, he recognizes that it is reflective of "currently accepted approaches in moral philosophy." As witness to this, two other popular textbooks, addressed to wider ranges of applied or professional ethics, can be cited. Michael Bayles, in Professional Ethics (1989), provides what was once probably the most widely used single-author textbook for professional ethics generally. Like Beauchamp, Bayles is a utilitarian, but his approach can be adapted easily to any other ethical theory. Bayles endorses a general rule: "When in doubt, the guide suggested here is to ask what norms reasonable persons [generally, not just in the professions] would accept for a society in which they expected to live" (p. 28). He goes on, however, with this pithy summary of what comes next: "There are several levels of justification. An ethical theory is used to justify social values. These values can be used to justify norms. The norms can be either universal (applying to everyone) or role related (applying only to persons in the roles). Roles are defined by norms indicating the qualifications for persons occupying them and the type of acts they may do, such as represent clients in court. Norms can then be used to justify conduct" (p. 28). This exactly parallels the model used by Beauchamp and Childress. Joan Callahan's Ethical Issues in Professional Life (1988), while perhaps not as popular as Bayles's textbook once was, is a popular anthology.
It is perhaps most notable for its dependence on the notion of "wide reflective equilibrium." As her sources, Callahan cites John Rawls, Norman Daniels, and Kai Nielsen, but she could as easily have cited dozens of other philosophers espousing one version or another of what Kurt Baier calls the "moral point of view." Here is how Callahan's somewhat wordy summary of the approach begins: "Things are much the same in ethics [as in science]. We begin with our 'moral data' (i.e., our strongest convictions of what is right or wrong in clear-cut cases) and move from here to generate principles for behavior that we can use for decision making in cases where what should be done is less clear" (p. 10). This lays out the top-down, theory to decision approach. Then Callahan says: "But, as in science, we sometimes have to reject our initial intuitions about what is right or wrong since they violate moral principles we have come to believe are surely correct. Thus, we realize we must dismiss the initial judgment as being the product of mere prejudice or conditioning rather than a judgment that can be supported by morally acceptable principles." This is the application part, but Callahan immediately adds the other pole in the dynamic equilibrium: "On the other hand, sometimes we are so certain that a given action would be wrong (or right) that we see we must modify our moral principles to accommodate that judgment." This exactly reflects Beauchamp and Childress: "Moral experience and moral theories are dialectically related: We develop theories to illuminate experience and to determine what we ought to do, but we also use experience to test, corroborate, and revise theories. If a theory yields conclusions at odds with our ordinary judgments -- for example, if it allows human subjects to be used merely as means to the ends of scientific research -- we have reason to be suspicious of the theory and to modify it or seek an alternative theory" (1994, pp. 15-16). Jonsen (1990, p. 
34) believes that the term "theory" here is being used very loosely, but if we employ different terms and talk simply about different approaches to ethics, it is clear that some authors have opted for other approaches to bioethics that they think are more congruent with their experiences. A notable example is the team of Edmund Pellegrino and David Thomasma (1981, 1988), who say they base their approach on Aristotle and phenomenology -- but mostly on good clinical practice (1981, p. xi). In one of their books devoted to the foundations of bioethics, Pellegrino and Thomasma (1988) summarize their approach: "Our moral choices are more difficult, more subtle, and more controversial than those of [an earlier] time. We must make them without the heritage of shared values that could unify the medical ethics of [that] era. Our task is not to abandon hope in medical ethics, but to undertake what [Albert] Camus called 'the most difficult task of all: to reconsider everything from the ground up, so as to shape a living society inside a dying society.' That task is not the demolition of the edifice of medical morality, but its reconstruction along three lines we have delineated: (1) replacement of a monolithic with a modular structure for medical ethics, with special emphasis on the ethics of making moral choices in clinical decisions; (2) clarification of what we mean when we speak of the good of the patient, and setting some priority among the several senses in which that term may be taken; and (3) refurbishing the ideal of a profession as a true 'consecration'" (p. 134). The Pellegrino and Thomasma approach has much in common with the virtue ethic of Alasdair MacIntyre (1981, 1988). And the more recent of the two Pellegrino and Thomasma foundations books culminates in what they call "a physician's commitment to promoting the patient's good."
This updated version of a Hippocrates-like oath has an overarching principle -- devotion to the good of the patient -- and thirteen obligations that are said to flow from it. These range from putting the patient's good above the physician's self-interest through respecting colleagues in other health professions and accepting patients' beliefs and decisions to "embody[ing] the principles" in professional life (1988, pp. 205-206). While admitting that such an oath is not likely to meet with general acceptance "given the lack of consensus on moral principles" today, Pellegrino and Thomasma end with this plea: "We invite our readers to consider this amplification of our professional commitment as a means of meriting the trust patients must place in us and as a recognition of the centrality of the patient in all clinical decisions" (1988, p. 206). The Pellegrino and Thomasma reference to the lack today of a consensus on moral principles hints at a fundamental problem for bioethics. What are concrete decisionmakers to do if, as seems almost inevitable, defenders of conflicting approaches to bioethics cannot reach agreement? If those attempting to justify particular ethical decisions cannot themselves reach a decision, are we unjustified in the meantime in the decisions that we do make? Beauchamp and Childress (1994, p. 46) attempt to play down this issue, at least as regards utilitarian and deontological theories: "The fact that no currently available theory, whether rule utilitarian or rule deontological, adequately resolves all moral conflicts points to their incompleteness." Admitting that there are many forms of consequentialism, utilitarianism, and deontology, as well as approaches that emphasize virtues or rights, they conclude by defending a process -- which they say "is consistent with both a rule-utilitarian and a rule-deontological theory" -- rather than an absolute theoretical justification (p. 62).
Not all bioethicists are satisfied with this treatment of theoretical disagreement. H. Tristram Engelhardt (1986, 1991), in particular, has devoted much time and energy to arriving at a more satisfying solution. He begins his daunting effort to provide a true foundation for bioethics (1986, p. 39) with a framework: "Controversies regarding which lines of conduct are proper can be resolved on the basis of (1) force, (2) conversion of one party to the other's viewpoint, (3) sound argument, and (4) agreed-to procedures." Engelhardt then demolishes the first three as legitimate foundations for the resolution of ethical disagreement, beginning with the easiest: "Brute force is simply brute force. A goal of ethics is to determine when force can be justified. Force by itself carries no moral authority" (p. 40). Engelhardt then attacks any assumed religious foundation for the resolution of moral controversy, calling "the failure of Christendom's hope" to provide such a foundation, either in the Middle Ages or after the Reformation, a major failure. He then adds, "This [religious] failure suggests that it is hopeless to suppose that a general moral consensus will develop regarding any of the major issues in bioethics" (p. 40). Engelhardt then turns to properly philosophical hopes: "The third possibility is that of achieving moral authority through successful rational arguments to establish a particular view of the good moral life." But he adds immediately: "This Enlightenment attempt to provide a rationally justified, concrete view of the good life, and thus a secular surrogate for the moral claims of Christianity, has not succeeded" (p. 40). The evidence for this Engelhardt had supplied earlier -- and it parallels the obvious disagreements among schools of thought referred to by Beauchamp and Childress. This leaves only the fourth possibility: "The only mode of resolution is by agreement. . . . One will need to discover an inescapable procedural basis for ethics" (p.
41). This may sound like Beauchamp's and Childress's retreat to process, but Engelhardt wants to make more of it than that. "This [procedural] basis, if it is to be found at all, will need to be disclosable in the very nature of ethics itself." "Such a basis appears to be available in the minimum notion of ethics. . . . If one is interested in resolving moral controversies without recourse to force as the fundamental basis of agreement, then one will have to accept peaceable negotiation among members of the controversy as the process for attaining the resolution of concrete moral controversies" (p. 41). This, Engelhardt says, should "be recognized as a disclosure, to borrow a Kantian metaphor, of a transcendental condition . . . of the minimum grammar involved in speaking rationally of blame and praise, and in establishing any particular set of moral commitments" (p. 42). The generally poor reception that Engelhardt's foundational efforts have received (see Moreno, 1988, and Tranoy, 1992, among others) -- as opposed to the wide recognition he has received for particular contributions to the discussion of concrete controversies -- could suggest that there might be something fundamentally wrong about the search for ultimate ethical justification in bioethics. This suggestion leads to the final group of authors to be mentioned in these reflections on philosophical bioethics. Albert Jonsen (1990, p. 34), mentioned earlier as a critic of the Beauchamp and Childress approach, says this: "In light of the diversity of views about the meaning and role of ethical theory in moral philosophy, we need not be surprised at the confusion in that branch of moral philosophy called 'practical' or (with a bias toward one view of theory) 'applied ethics.'" Jonsen goes on: "Authors who begin their works with erudite expositions of teleology and deontology hardly mention them again when they plunge into a case."
"It is this that the clinical ethicists notice and that leads some of them to answer the theory-practice question by wondering whether it is the right question and whether the connection between these classic antonyms is not just loose or tight, but even possible or relevant" (p. 34). Two of the authors Jonsen is referring to are himself and Stephen Toulmin, in The Abuse of Casuistry (1988), where (Jonsen says) they argue for an approach in which bioethicists should "wrestle with cases of conscience . . . [where they will] find theory a clumsy and rather otiose obstacle in the way of the prudential resolution of cases" (1990, p. 34). Jonsen likens this to deconstructionism in literary studies and the critical legal studies approach in philosophy of law; he is also explicit, in another place (1991), about the rhetorical nature of the casuistic approach. Without saddling these other authors with casuistry as the approach, Jonsen (1990) also puts his and Toulmin's critique of applied ethics within the recent tradition of anti-theorists headed by Richard Rorty (1979, 1982, 1991) and Bernard Williams (1985). (In a review of The Abuse of Casuistry, John Arras [1990] adds Stuart Hampshire [1986] and Annette Baier [1984].) In short, recent bioethics, philosophically construed, is a confusing battleground, with contributions from absolute foundationalists to case-focused rejectors of theory and a variety of approaches in between (or all around). Bioethics More Broadly Construed: It should be remembered -- for purposes of this chapter but more generally -- that bioethics has never been exclusively or even primarily a philosopher's affair. Indeed, it could be claimed that philosophers are and ought to be outsiders to the real communities making the important bioethical decisions (Churchill, 1978, pp. 14-15). One of the earliest calls for the post-World War II medical research community to police itself ethically came from a physician, Henry K.
Beecher, writing in the Journal of the American Medical Association (1966) and the New England Journal of Medicine (1966) -- both regular sources of bioethics commentary right down to the present. Beecher's calls for reform were followed up by sociologists: for example, Bernard Barber et al., Research on Human Subjects: Problems of Social Control in Medical Experimentation (1973), and Renée Fox, Experiment Perilous: Physicians Facing the Unknown (1974). Historians also became interested -- see, for example, James Jones, Bad Blood: The Tuskegee Syphilis Experiment (1993). Celebrated cases also did a great deal to coalesce the field, from Karen Quinlan and Elizabeth Bouvia to Jack Kevorkian, from Baby Doe to Baby M., from celebrated heart transplant cases to proposals for mandatory testing for the AIDS virus (see Pence, 1994). What even the briefest reflection on these cases reminds us is how bioethics involves patients, families, hospital administrators, lawyers and judges, government officials, and even the public at large. And public involvement reminds us, further, that significant numbers of commissions have been involved, from the local level -- e.g., the New York State Task Force on Life and the Law -- to the national level -- the (U.S.) National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, or a Netherlands Government Committee on Choices in Health Care -- to the international level, for example, the Draft Report of the European Forum of Medical Associations. Philosophers have, obviously, been involved in setting up prestigious bioethics institutes. But the institutes themselves are important parts of the bioethics community, with impressive numbers of non-philosophers on their mailing lists.
And physicians (e.g., Willard Gaylin at the Hastings Center, along with many others) and lay people (the Kennedy family supporting the Kennedy Institute) have also played major roles. For me, the most proper locus of bioethics decision-making is in typically small local groups of physicians, nurses, administrators, lawyers, and local public officials -- all together with patients and their families -- wrestling with specific cases and issues within their own communities. This shows up already in one of the earliest bioethics textbooks, that of Samuel Gorovitz et al. (1976, with six co-editors and at least another half dozen people directly involved). And this small-group focus continues right down to the present, most notably in the incredible diversity of ethics committees and other groups that have sprung up in hospitals and all sorts of health care institutions since the promulgation of the Reagan Administration's Baby Doe regulations and the enactment of the (U.S.) Patient Self-Determination Act in 1991. (On bioethics committees, see McCarrick, 1992, pp. 285-305.) Philosophical bioethicists, it seems to me, do some of their best work in these groups, as they work collectively to solve local cases and issues and to formulate policies for their own institutions. Pragmatic Reflections on Philosophical Bioethics: William James (1897, see 1987, p. 520) -- faced at the end of the nineteenth century with much the same sort of disagreement about the foundations of ethics that there is a hundred years later about the foundations of bioethics -- summed up the situation this way: "Various essences of good have thus been found and proposed as bases of the ethical system.
Thus, to be a mean between two extremes; to be recognized by a special intuitive faculty; to make the agent happy for the moment; to make others as well as him happy in the long run; to add to his perfection or dignity; to harm no one; to follow from reason or universal law; to be in accordance with the will of God; to promote the survival of the human species." But, James says, none of these has satisfied everyone. So what he thinks we must do is treat them all as having some moral force and go about satisfying as many of the claims as we can within the limit of knowing that we can never satisfy all of them at once. "The guiding principle for ethical philosophy," James concludes, must be "simply to satisfy at all times as many demands as we can." And, following this rule, society has, historically, striven from generation to generation "to find the more and more inclusive [moral] order" -- and has, James thinks, done so successfully, gradually eliminating slavery and other evils tolerated in earlier eras (p. 623). In many ways this sounds like Engelhardt's condition of the possibility of ethical discourse, but James would never accept Engelhardt's characterization of the approach as Kantian-transcendental. It is simply a procedural rule for particular communities of ethical truth-seekers attempting to find a satisfactory concrete solution for particular problems -- in a process that must inevitably go on and on without end. Concrete ethical solutions are not dictated by an abstract commitment to the conditions of ethics, but must be worked out arduously through struggle and competing ideals. John Dewey was as opposed to transcendental foundations as James. In the mood of The Quest for Certainty (1929), Dewey would probably have been bemused -- and also angry -- at the persistent academic search for an ultimate foundation for our practical decisions in bioethics.
But in the more open spirit of Reconstruction in Philosophy (1948) Dewey would have attempted to see how the "principled" approach (e.g., of Beauchamp and Childress) is "in effect, if not in profession" (in Dewey's words) "connected with human affairs." In that book, Dewey continues his attack on "ethical theory" as "hypnotized by the notion that its business is to discover some . . . ultimate and supreme law"; instead, he proposes that ethics be reconstructed so that we may "advance to a belief in a plurality of changing, moving, individualized goods and ends, and to a belief that principles, criteria, laws are intellectual instruments for analyzing individual or unique situations" (pp. 162-163). In A Common Faith (1934), Dewey adds that community efforts to solve social problems progressively can generate an attitude akin to religious faith that makes social problem solving a meaningful venture. And in Liberalism and Social Action (1935), Dewey tries to lead the way in applying his approach to the "confusion, uncertainty, and conflict" that marked his times -- just as the bioethics community is attempting to do with respect to the confusions, uncertainties, and conflicts that arise in health care today. George Herbert Mead (1964, p. 266), an opponent of both utilitarian and Kantian approaches to ethics, offers in place of those (he thinks) inadequate systems a positive formulation of what ethics should mean: "The order of the universe that we live in is the moral order. It has become the moral order by becoming the self-conscious method of the members of a human society. . . . The world that comes to us from the past possesses and controls us. We possess and control the world that we discover and invent. And this is the world of the moral order." Then Mead adds: "It is a splendid adventure if we can rise to it."
If we pay attention to these American Pragmatists, I think that what we can say about bioethics in the last quarter of the twentieth century is that philosophers contribute most when they contribute to the progressive social problem solving of particular communities. Some do this, admittedly, at the national (President's Commission) or even international level (e.g., philosophical advisors to the World Health Organization), but even in those cases they do so as members of groups made up of physicians, lawyers, and other concerned citizens. And most do so at the local level -- where, in Mead's words, they are only being truly ethical if they are contributing to the progressive social problem solving (case resolution, policy formulation, etc.) of some particular group in which they represent only one voice, and a small one at that. Some Lessons: Does this self-awareness on the part of philosophers as to their limited role in bioethics suggest any lessons for us? The most obvious lesson is humility. Philosophers can and do help to clarify issues (sometimes even answers), but the real moral decisions in bioethics, for the most part, are made by others. Another lesson has to do with the urgency of the real-world problems that bioethics faces -- which are, after all, what got philosophers involved in the first place. Medicine and the health care system generally -- including those parts of it that operate in open or covert opposition to the entrenched power of physicians and hospitals -- face enormous problems today, from rampant inflation and calls for rationing to the questioning of the very legitimacy of high-technology medicine. All the while, doctors and nurses, etc., must continue to face life and death issues every day -- from calls for active euthanasia to the AIDS crisis -- not to mention the daunting task of caring for ordinary ills of ordinary people who, with increasing frequency, cannot pay for their medical care.
It is probably inevitable, given the structure of philosophy today as an academic institution, that philosophical bioethicists will continue narrow technical debates among themselves about ultimate justifications of bioethical decisions. But academicism and careerism in bioethics should be recognized for what they are -- distractions (however necessary, for some purposes) from the real focus of bioethics. Beyond these lessons for philosophers, does American Pragmatism have any lessons to provide to the bioethics community more generally? Probably only this: that we should all heed James's call for tolerance and openness to minority views. Bioethics has come a long way in just twenty-five or so years. Significant consensus has been achieved on issues from informed consent to be a research subject to the importance of asking patients what they want done -- if anything, especially of a high-technology sort -- in their last weeks and days and hours. But equally significant issues remain -- as they always will in a society open to change. And all of us, from the smallest local bioethics group to the international community, ought to remain open to change. As William James (1877, see 1967, p. 625) said: "Every now and then . . . someone is born with the right to be original, and his revolutionary thought or action may bear prosperous fruit. He may replace old 'laws of nature' by better ones; he may, by breaking old moral rules in a certain place, bring in a total condition of things more ideal than would have followed had the rules been kept." Chapter 6 ENGINEERING ETHICS AND SOCIAL RESPONSIBILITY The essay that occupies this slot is a little different from the others here. It was an essay I volunteered on my own initiative -- to the Bulletin of Science, Technology, and Society (1997). I had ended ten years of teaching engineering ethics and was turning my attention full-time to teaching bioethics, including a stint at Jefferson Medical College in Philadelphia.
Having done the survey of bioethics -- Chapter 5, above -- I thought a similar survey of engineering ethics was in order. As in the previous section, I offer here philosophical reflections on roughly twenty-five years of work on engineering ethics in the USA. (For other countries, see Lenk and Ropohl, 1987, and Mitcham, 1992.) My comments fall into three parts. In the first I discuss efforts of philosophers to contribute to the field. In the second, I focus on the contributions of engineers. And in the third, where I focus on the social responsibility aspect, I consider possibilities for fruitful collaboration. Philosophers and Engineering Ethics: In the early 1970s, engineering ethics seemed to be a promising field for philosophers to enter -- along with the new field of bioethics, which had recently supplanted the old field of medical ethics, as well as business ethics and several other branches of what was coming to be called applied or professional ethics. Technology was being widely criticized. There were a number of scandalous cases or emerging issues associated with engineering and related areas of applied science. Old codes of ethics were seen as in need of updating and better enforcement. And some philosophers, perhaps especially those associated with technology and society programs in academia, thought they saw interesting issues ripe for conceptual analysis. Besides, it was a time of retrenchment in the graduate education of philosophers, so there seemed to be opportunities for employment in engineering-related settings. My view is based on some experience with these efforts, but in any case common sense should tell us that there are several possible roles for philosophers to play when it comes to examining ethics and engineering. One can, for instance, play the role of external gadfly, where "external" refers to a position entirely outside the engineering community (see Churchill, 1978).
This community, as I am defining it here, ought to include not only engineers in the strict sense but engineering managers and technicians as well as many other related technical workers -- from chemists and applied physicists to econometricians engaged in technological planning or forecasting. (On the other side, to philosophical critics I would add quite a few critics who are not professional philosophers -- religious or other humanistic critics, literary critics, journalists and other non-academics, including laypersons who have taken it upon themselves to learn enough about engineering and technology to be responsible critics.) I have argued throughout this book and elsewhere (Durbin, 1992) that progressive social activism is the most likely solution for the major social problems facing our technological world. I made my earlier appeal to technical professionals, urging them to join in with other social activists in seeking such solutions. Here I acknowledge the leadership of the antitechnology gadflies I would ask the technical professionals to work with. It is also possible to play the role of internal gadfly, within engineering (or research-and-development) institutions; some people consider this to be the proper role of the philosopher (or humanist critic) with respect to the engineering or any other professional community (see Baum, 1980). According to this view, one can be part of an ethics case review panel, or of a technology assessment team, or be a philosopher/professor of engineering ethics in an engineering school, and play the role of gadfly every bit as effectively as -- perhaps even more effectively than -- someone from the outside. It is also possible, finally, to serve on one of these committees without thinking of oneself as a stranger or gadfly. Philosophers, for example, have been asked to help revise codes of ethics.
Some also (and occasionally religious ethicists do this as well) serve as laypersons on ethics review panels for engineering (and other) professional societies. Nor should we forget the efforts of philosophers to elucidate concepts associated with engineering ethics (Baum, 1980, pp. 47-48 and 61-72) or to write engineering ethics textbooks (see Johnson, 1991; Martin and Schinzinger, 1990; and Harris, Pritchard, and Rabins, 1999). What can we conclude about these efforts of North American philosophers over the past quarter century? I will try to summarize the results by looking at what happened at gatherings associated with the most ambitious project to be undertaken in the United States -- the National Project on Philosophy and Engineering Ethics, directed by Robert J. Baum. The first stages of the development of this project have been well described by one of Baum's colleagues, Albert Flores (1977). He starts by pointing out conflicts that persist for individual engineers even if they conscientiously follow their society's code of ethics; legal challenges to professional societies' activities; and thorny ethical issues associated with doing engineering in foreign cultures -- in short, he recognizes that there are "serious issues that challenge the professional engineer's commitment to acting as a true professional." Then Flores asks himself whether anything might be done to help solve these problems and says this: "One plausible suggestion is that since these questions clearly raise moral and ethical issues, it seems reasonable to expect some helpful guidance from scholars and academics with competence in ethical theory." The National Endowment for the Humanities agreed and provided funding for a multiyear project in which engineers would learn something about academic ethical theory, philosophers would learn more about engineering, and philosopher-engineer teams would develop ethics projects of various sorts.
An outstanding example of one of these projects is the textbook, Ethics in Engineering (1990), by philosopher Mike Martin and engineer Roland Schinzinger. Another feature of the National Project on Philosophy and Engineering Ethics was a series of national conferences, beginning with one at Rensselaer Polytechnic Institute in 1979. Rachelle Hollander, a philosopher who is also the program manager for the agency of the National Science Foundation that funded the second and third national conferences, has described the second conference, held at the Illinois Institute of Technology in 1982. Hollander (1983) focuses on philosophical contributions: "Philosophers . . . develop[ed] abstract principles on which engineering obligations could rest. One presentation attempted to ground engineers' whistleblowing rights in more general moral rights to behave responsibly, while yet another developed an argument that engineers are morally required to act on the basis of a principle of due care, requiring those who are in a position to produce harm to exercise greater care to avoid doing so." But Hollander also points out how these abstract principles were challenged at the conference, not only by engineers but by other philosophers. And she ends her report with a summary of some other disagreements -- "There was, for example, considerable discussion about whether whistleblowing is ever justified, about the [conflicting] loyalty that engineers owe the public, their clients, [and] their employers," and so on -- along with recommendations for the future. Among these, Hollander points out how important social (as opposed to but encompassing individual) responsibility is; that risk assessment is a social problem; and that engineers, engineering educators, other educators, and a whole host of other actors must cooperate in solving such social problems.
The third national conference (and so far the last) was held in Los Angeles in 1985, and it picked up on Hollander's (and others') focus on the concrete problem of risk assessment. The proceedings of the conference were edited by Albert Flores and published under the title, Ethics and Risk Management in Engineering (1989). Almost half of the contributions, following the earlier pattern, are by engineers. But philosophers and other critics outside the engineering community have interesting things to say in the volume. Deborah Johnson argues on moral grounds that government needs to have a role in dealing with the risks associated with toxic wastes; Thomas Donaldson appeals to well known ethical theories to raise doubts about whether international standards can be established to regulate such risks; and Kristin Shrader-Frechette argues that all risk assessments necessarily involve value judgments. In addition, Sheila Jasanoff discusses the differences between ethical and legal analyses of risk issues, while Carl Cranor focuses on the legal mechanisms -- the law of torts and regulatory law -- that currently control social responses to exposures to toxic substances and similar technological risks. These are worthy contributions to the literature, both of engineering ethics and of (applied) philosophy, and these same authors have produced several books extending their contributions (see Cranor, 1992; Jasanoff, 1986; and Shrader-Frechette, 1991). But if we look beyond the three national conferences to the general body of philosophical literature in this period, one thing is overwhelmingly clear. Nothing approximating the pronounced movement of philosophers into the field of bioethics ever occurred; there simply was no groundswell of philosophers moving into engineering ethics. A diligent perusal of The Philosopher's Index from 1975 right up to the present reveals only a handful of articles and even fewer books on any aspect of ethics in relation to engineers.
In spite of early promise, (philosophical) engineering ethics remained stagnant while bioethics boomed -- indeed, engineering ethics very nearly disappeared from the philosophical literature. No key concepts paralleling the so-called mantra of bioethics (see Chapter 5, above) -- autonomy, beneficence, non-maleficence, and justice -- have ever been put forward. Philosophers have written introductory textbooks, and contributed articles or chapters to anthologies (see, for example, the contributions to Johnson, 1991), but nothing even remotely approximating the attempts of bioethicists to provide philosophical foundations for their field (see Engelhardt, 1986 and 1991) has emerged. I know most of the philosophers involved in engineering ethics, and, by these remarks, I mean no disparagement of their efforts. But I believe all of us who had high hopes in the 1970s for the development of philosophical engineering ethics have been deeply disappointed. Engineers and Engineering Ethics: Is the record any less disappointing on the other side of the fence -- among engineers, scientists in government and industry, think-tank technical experts, etc.? Well, it happens that the American Association for the Advancement of Science -- about halfway through the period under review here -- conducted a survey of engineers' and scientists' ethics activities and published the results in a report (Chalk, Frankel, and Chafer, 1980). The stated objectives of the report included documenting the ethics activities of the AAAS-related societies surveyed; the codes of ethics and other formal principles adopted; significant issues neglected; and recommendations for the future.
Four engineering societies reported on are the American Society of Civil Engineers (approximately eighty thousand members), the American Society of Mechanical Engineers (with roughly the same number), the National Society of Professional Engineers (about the same), and the Institute of Electrical and Electronics Engineers (more than double the size of the others). All have active ethics programs, with differing levels of staffing, based in part on a code of ethics and enforcement procedures. Few allegations of ethics violations are reported as being investigated and even fewer lead to sanctions -- though in a handful of cases members have been expelled. The electrical engineers, shortly before the report was issued, had initiated a formal program, with some funding, to support whistleblowing and similar activities. And NSPE regularly publishes, in Professional Engineer, case reports and decisions of its judicial body. The American Chemical Society, another large technical group whose members often work with engineers in large technology-based corporations, is also reported on. It too has an active ethics program, but one that seems most often to concentrate on allegations of unethical or unfair employment practices. Only a handful of the organizations discussed in the AAAS report replied that they spend much time or effort on "philosophical" tasks -- defining and better organizing ethics codes or principles. More work than before goes into education, increasing ethical sensitivity in the workplace, and providing better enforcement procedures. The need for this last item, though it is important (and might lead to more enforcement proceedings), would seem not to have a high priority considering the small number of investigations the societies are actually conducting. The recommendations of the AAAS report will be summarized below.
One can follow more recent developments in Professional Ethics Report (since 1988), another venture of the American Association for the Advancement of Science -- this time, under the auspices of its Committee on Scientific Freedom and Responsibility and Professional Society Ethics Group. This quarterly newsletter provides regular updates on the activities of member societies -- including all the major and some minor engineering societies and numerous other scientific and technical societies. In general, the activities reported -- including new or updated codes of ethics, more rigorous enforcement and/or more equitable investigation procedures -- are simply an extension, with modest increases, of the activities discussed in the earlier report. There are regular reports on new legislation and court decisions, and there is even an occasional review of a book that contributes to the advancement of thinking about professional ethics. Activity on the enforcement front is best followed in the continuing series of case presentations, and quasi-judicial decisions, that appear regularly in Professional Engineer. Even if the incremental improvements reported in Professional Ethics Report, and the greater sensitivity to ethics issues displayed in Professional Engineer (and similar sources), continue into the future, we cannot expect a great deal from these efforts. The recommendations of the 1980 AAAS report (mentioned earlier) included the following.
In addition to heightened sensitivity and more enforceable rules, as well as better and more frequently utilized investigational procedures, the other recommendations were: better definition of principles and rules; recognition of the inevitable conflict between employee efforts to protect the public and employer demands; more publicity for sanctions imposed; coordination of ethics efforts of the various professional societies and inclusion of ethics efforts of such other institutions as corporations and government agencies; benchmarks for judging when ethics efforts have succeeded; and full-scale studies, including full and complete histories of cases. Very few of these laudable ventures seem yet even to be contemplated, and there is little to suggest that very many of the recommendations will be carried out. In general, the ethics activities of the professional societies have been more successful than the efforts of philosophers to help out in the process, but there are still glaring weaknesses. As one example, the ethics activities of the professional societies -- however much publicity they sometimes receive -- still represent a small, almost infinitesimal part of the activities of engineering and other technical societies. Meanwhile, allegations of unethical or negligent behavior on the part of technical professionals seem to be increasing dramatically. Possibilities for Engineer-Philosopher Cooperation: If we turn from limited successes in the enforcement of ethics violations within the professional technical communities to broader concerns of social responsibility, there may be some hope for improvements -- but only if there can be greater cooperation between engineers, other technical professionals, and non-engineers (including applied ethicists) interested in improving the situation. 
Among critics of engineering, there are several well known philosophers, historians, and other critics who harp on the shortcomings in the system of professional sanctioning of unethical, negligent, or incompetent engineers (and other technical professionals). Langdon Winner (1990), while criticizing the case approach to education in engineering ethics, says this: "Ethical responsibility now involves more than leading a decent, honest, truthful life, as important as such lives certainly remain. And it involves something much more than making wise choices when such choices suddenly, unexpectedly present themselves. Our moral obligations must now include a willingness to engage others in the difficult work of defining the crucial choices that confront technological society. . . . Any effort to define and teach engineering ethics which does not produce a vital, practical, and continuing involvement in public life must be counted not just a failure, but a betrayal as well" (p. 64). With respect to some of the earliest efforts of the engineering professional societies to adopt codes of ethics, the historian Edwin Layton has -- in The Revolt of the Engineers: Social Responsibility and the American Engineering Profession (1971) -- amply demonstrated that, while individual engineers were genuinely motivated to improve engineers' behavior, their activities were quickly co-opted by powerful leaders and turned into defensive rhetoric to enhance the public image of the newly-developing large corporations -- and their allies, the newly powerful engineering professional societies.
In a similar vein, Layton's fellow historian, David Noble, in America by Design: Science, Technology and the Rise of Corporate Capitalism (1977), argues that these same powerful engineering leaders throughout the twentieth century have worked hand-in-hand with other governmental, educational, and social leaders -- in the name of "progressivism" -- to shore up a threatened capitalism, using not only codes of ethics but the promise of "neutral" science and technology, to keep nascent workers' movements in check. Finally (among these social critics), Carl Mitcham (1991) maintains that engineering in the modern sense is driven by an ideal of efficiency, and any external values that might be said to influence it -- political or legal, social, cultural, even economic values -- must, if they are to be really influential, be stated in input-output terms or must be translatable into other sorts of quantitative formulations. In Mitcham's view, this almost necessarily sets up a tension between engineering values and such non-technical ideals as living in harmony with nature, following otherworldly or transcendental ideals, or even making deontological ethical judgments about limits on human activities (including engineered systems but also almost any other type of social organization or group activity in a technological world). On the other side of the fence, even Samuel Florman (1976, 1981) -- as staunch a defender of the "existential pleasures of engineering" against the profession's antitechnology critics as there is -- admits that current-day engineering education plus a number of recent historical and cultural trends have conspired to produce a fairly conservative and non-imaginative engineering community today. In Florman's words, "The unpleasant truth is that today's engineers appear to be a drab lot. It is difficult to think of them as the heirs of the zealous, proud, often cultured, and occasionally eloquent engineers of the profession's Golden Age" (1976, p. 92).
These criticisms, even if they are taken to be indicative of real problems, should not preclude discussion of potential areas of collaboration between engineers and critics in order to improve the situation. With respect to possible contributions from the side of philosophers and critics, we can anticipate that some philosopher/engineering ethicists will continue to contribute to the ongoing reform efforts of the engineering and other technical professions. Engineers seem still to want the help of philosophers (along with lawyers) in rethinking, revising, and coordinating their codes of ethics. Ethicists (some from academia, others religious ethicists) continue to be invited to be members of ethics review panels, technology assessment teams, and similar committees and commissions. And engineering ethicists are often members of business and professional ethics organizations attempting to improve the climate in corporations, government agencies, and other large, bureaucratic institutions. In addition, it seems clear that the handful of philosophers writing books on ethical concepts related to engineering -- as well as the somewhat larger number writing about risk assessment and environmental ethics -- will continue their efforts. On the other hand, among those drawn to the more critical, gadfly approach, I think there are even greater opportunities and challenges -- but only under certain conditions. First, among engineers and other technical professionals, it must be recognized that with increasing technical advances come greater social responsibilities. In an earlier book (Durbin, 1992), I have mentioned several specific areas of technological activity that have direct bearing on society -- for example, biotechnology, computers, nuclear power (nowadays often concerns over nuclear wastes), and technological developments with a negative impact on the environment.
In these and other areas, I believe that engineers and other technical professionals (e.g., computer experts, environmental engineers) have a duty to society to deal effectively with any problems that are directly related to their work. At the very least, they have an obligation to cooperate with government regulatory agencies legally mandated to solve these problems. Too often, technical professionals view regulators as a nuisance and a bother rather than as collaborators in a joint effort to deal with what the public, and their elected representatives, perceive as social problems -- even, in some cases, as catastrophes. To these specific areas of technological concern, I would add (again see Durbin, 1992) three others that are at least indirectly related to technical expertise -- cries for educational reform (including cries for technological literacy on the part of the public) and for health reform (where at least some of the myriad problems are related to the continual introduction of new drugs and technologies -- and the large number of technical personnel required to make them effective), as well as problems associated with the mass media. In this last area, again, technical professionals often seem readier to complain about alarmism than to cooperate in getting out technically accurate news about new technical ventures, including the social problems that too often accompany them. In my opinion, all seven of these areas of social concern -- and I would include under those broad headings a great many local instantiations of the problems -- demand social responsibility on the part of individual technical professionals, on the part of their professional societies, and (often especially) on the part of the organizations in which they work. 
As for the philosophers and other humanistic and lay critics of science and technology, I see their principal obligation -- in this context -- as one of displaying a much greater spirit of cooperation, rather than confrontation, than is normally the case. If the social critics of technology really want to do something about technosocial problems, it behooves them to work cooperatively with technical experts -- not to mention with corporate and government officials. Conclusion: To sum up, I believe that the recent history of engineering ethics in the USA is not a happy one. Philosophical engineering ethics has turned out to have an extremely limited impact in academia. And the efforts of engineers and their professional societies are too limited in both scope and impact. With Robert Baum and Albert Flores -- in their original hopes for the National Project on Philosophy and Engineering Ethics -- I believe that the way to go is through collaborative efforts involving philosophers and engineers. But I would qualify my optimism about the approach by saying that its success depends on significant behavioral changes. The engineers and their professional societies need to broaden their outlook, moving beyond a focus on individual misconduct to broader social responsibilities, and also to welcome a broader range of people into the dialogue. On the other hand, philosophers, social critics, reporters and editors, environmental activists (and so on) need to be less confrontational and more willing to dialogue. Together, I am convinced, we can hope to solve some of the more pressing social issues facing our technological society. This seems to me a better definition of engineering ethics than a definition that focuses mainly on individual engineers' and technical professionals' potential misconduct. 
And actions based on the new focus might, in the next twenty-five years, see engineering ethics make a significantly greater impact on society than has been the case in the last twenty-five years. Chapter 7 COMPARING PHILOSOPHY OF TECHNOLOGY WITH OTHER SCIENCE AND TECHNOLOGY FIELDS This essay was prepared for a prestigious international conference, under the auspices of the International Academy of the Philosophy of Science, held in Karlsruhe, Germany, in 1997. I had been asked to talk about philosophy of technology, as represented in the Society for Philosophy and Technology, to a skeptical audience. The title for the conference was "Advances in Philosophy of Technology?" Note the question mark at the end. The essay fits here because I ended it with a challenge to defenders of all the fields compared to get involved beyond academia, to help improve our technosocial world, if they really wanted to make an advance. Has philosophy of technology, in whatever sense, made any advances? This was the central theme of a conference held in Karlsruhe in 1997. My contribution there addressed the narrower question of whether there had been any advances in North American philosophy of technology in the previous fifteen years. Attempting to answer this question, I discovered -- and reported on -- quite a few recent books and a few journal articles. In spite of this seemingly significant flood of publications, however, critics question whether any significant advances are being made in these admittedly numerous books and articles. I begin my recapitulation of my contribution to that conference with Joseph Pitt, past president of the Society for Philosophy and Technology. He quotes friends of his in the Society for History of Technology as reacting with horror to a proposal for a joint meeting: "Oh, no! Those SPT people hate technology. Further, they know nothing about technology" (Pitt, "Philosophy of Technology, Past and Future," 1995). 
Philosophers of technology, in this view, have certainly not been making any advances -- at least, not any advances that would mean anything to people outside the would-be field. This raises the obvious question: What counts as a genuine advance in technology studies? And the view or thesis that I want to defend here is this: In all respects except one, advances in the philosophy of technology are approximately equal, in their progressiveness, to progress in the fields with which those advances have been negatively contrasted -- namely, the philosophy of science and social studies of science and technology. (The one exception is important, since I consider it the most important area of advance.) In my conclusion, I make some comments about all of these fields, including philosophy of technology, contrasting academic with real-world social progress (that one exception). Advances in North American Philosophy of Technology I begin with the best evidence there is to support a claim that there have been advances in the philosophy of technology in the USA and Canada. To support such a claim, I point to the work of the North American philosophers who traveled to the first international conference of the Society for Philosophy and Technology in Bad Homburg in 1981 and whose papers were printed in the proceedings volumes, Technikphilosophie in der Diskussion (1982), and Philosophy and Technology (1983) -- both edited by Friedrich Rapp and myself. At least six of the North Americans invited to Bad Homburg can be cited in support of the claim that there are continuing advances, right up to the present. I have in mind Stanley Carpenter, Don Ihde, Alex Michalos, Carl Mitcham, Kristin Shrader-Frechette, and Langdon Winner. (I set aside my own case for now, not out of modesty but because I want to make a separate point at the end.) 
To these six can be added one other philosopher at Bad Homburg, Bernard Gendron -- not in terms of his own later work but viewing his paper as a springboard to the later development of that part of the environmental ethics movement that has a close relationship to technological issues -- and Albert Borgmann, who was not at Bad Homburg, but whose thought has undergone development in ways that have led people to say that his work represents the first real tradition in North American philosophy of technology (see chapter II, above). Stanley Carpenter went to Bad Homburg at least partly on the basis of a book that he had coedited (with Alan Porter, Alan Roper, and Fred Rossini), A Guidebook for Technology Assessment and Impact Analysis (1980). At the conference, Carpenter's contribution was listed under the technology assessment heading, but his interests were already oriented toward environmental concerns, and focused particularly on ways in which an "alternative" or "appropriate" technology is necessary if the ecosystem is to be preserved. Carpenter has not so far produced another book after Bad Homburg, but he has been a regular participant in the series of Society for Philosophy and Technology international meetings that continues today. For instance, at the 1993 SPT conference near Valencia, Spain, Carpenter presented a paper, "When Are Technologies Sustainable?" Again, at the 1996 conference in Puebla, Mexico, his topic was similar: "Toward Refined Indicators of Sustainable Development." Don Ihde had also written a book on philosophy of technology before Bad Homburg, Technics and Praxis: A Philosophy of Technology (1979), but his case differs from that of Carpenter in two respects: he has written several more books, and he is the editor of a philosophy of technology book series published by Indiana University Press. 
The first book published in that series, Larry Hickman's John Dewey's Pragmatic Technology (1990), shows that Ihde was not interested, in the series, in pushing his own phenomenological approach to philosophy of technology, but was open to a variety of approaches. Ihde's own approach does show up in his later books, Existential Technics (1983), Consequences of Phenomenology (1986), and Technology and the Lifeworld: From Garden to Earth (1990) -- even in his Philosophy of Technology: An Introduction (1993), though that textbook does present other views. In general, one can say that Ihde's development is a matter of greater depth and clarity in his phenomenological analysis, though Technology and the Lifeworld gives more than a passing nod to the centrality of environmental concerns. Alex Michalos talked about technology assessment at Bad Homburg, but he had been invited at least in part because of his editing of the journal, Social Indicators Research, which is devoted in large part to quality-of-life measurements in our technological culture. Michalos has continued these efforts in a massive way, with his five-volume North American Social Report (1980-1982) and his four-volume Global Report on Student Well-Being (1991-1993), and with regular contributions to all sorts of conferences devoted to various aspects of measuring the quality of life today. Carl Mitcham's contribution to the Bad Homburg proceedings focused on what he called "the properly philosophical origins" of modern technology, as opposed to the more commonly discussed social or economic or scientific origins. 
And this metaphysical/religious approach to the understanding of technology both reflected Mitcham's earlier work -- in the two volumes he compiled with Robert Mackey, Bibliography of the Philosophy of Technology (1973, which cites other approaches but gives heavy emphasis to the metaphysical/religious), and Philosophy and Technology: Readings in the Philosophical Problems of Technology (1972; reprinted with revised bibliography, 1983) -- and presaged his later work, Thinking through Technology: The Path between Engineering and Philosophy (1994). Many reviewers have applauded this as Mitcham's masterpiece and as the first true summary of the development of the field. Kristin Shrader-Frechette's first major work, Nuclear Power and Public Policy, appeared in 1980. In later books, she has addressed Risk Analysis and Scientific Method (1985) and Risk and Rationality (1991). These and others of her publications are always masterpieces of clarity and precision -- no matter whether the risk analysts she attacks appreciate her criticisms or not. In my opinion, Shrader-Frechette's most interesting book to date is Burying Uncertainty: Risk and the Case against Geological Disposal of Nuclear Waste (1993). There all her skills as an analyst and arguer are on display as much as ever; and the comprehensiveness of her survey of arguments on all sides is admirable. But what makes me admire the book more than anything else -- and more than her earlier contributions -- is her new-found awareness of how enormous the pressure is in technical communities to ignore, and resist, the force of her arguments, no matter how clear and convincing (see chapter IV, above). Langdon Winner's contribution to the Bad Homburg conference, "Techne and Politeia: The Technical Constitution of Society," follows up on his themes in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (1977). 
A typically Winnerian gem of an essay, "Techne and Politeia" was used many times in many arenas, and shows up in Winner's later collection of essays, The Whale and the Reactor: A Search for Limits in an Age of High Technology (1986). It is probably Winner more than any other single author whom historians and sociologists of technology love to hate, and he has returned the favor in "Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology" (1991), his presidential address at the 1991 SPT conference in Puerto Rico. Bernard Gendron's Bad Homburg paper, "The Viability of Environmental Ethics," suggests another progressive path in the history of the philosophy of technology in the last fifteen years. In 1989 and 1992, Eric Katz published two excellent annotated bibliographies of environmental ethics in Research in Philosophy and Technology (volumes 9 and 12), and the theme of volume 12 is Technology and the Environment. Many younger philosophers associated with SPT have taken up this theme, notably David Rothenberg, in Hand's End: Technology and the Limits of Nature (1993) -- where Rothenberg argues against setting up any opposition between human, including technological, civilization and nature; David Strong, in Crazy Mountains: Learning from Wilderness to Weigh Technology (1995; here Strong tries to heed Rothenberg's message but ends up seeing many more positive features in natural wilderness than in today's consumer-oriented technological society); and Eric Katz (again), in Nature as Subject: Human Obligation and Natural Community (1997). There Katz argues against applying traditional ethical theories to environmental problems and argues instead, as the right approach, for a more radical "moral justification for the central policies of environmentalism" in terms of "the direct moral consideration and respect for the evolutionary processes of nature" (p. xvi). 
Katz has also teamed up with Andrew Light in the editing of Environmental Pragmatism (1996) -- a collection dear to my heart because the essays collected generally argue that we should go beyond theoretical debates to a discussion of real environmental issues and even more toward attempts to work out (with others) solutions for real environmental problems. Albert Borgmann was not at Bad Homburg, but his thought has been viewed by some as the only contribution to philosophy of technology that has given rise to its own tradition or school of thought. Borgmann published Technology and the Character of Contemporary Life, his neo-Heideggerian manifesto, in 1984. This was followed by Crossing the Postmodern Divide in 1992. David Strong's Crazy Mountains, mentioned earlier, is an explicit attempt to apply Borgmann's theses in an effort to arrive at a philosophy of wilderness in the midst of -- and as confronting -- technological culture. In 1995, a group of Borgmann disciples convened a conference, "Workshop on Technology and the Character of Contemporary Life," in Jasper National Park in Canada. Approximately twenty philosophers attended -- some disciples, some critics -- and Borgmann concluded the meeting with a thoughtful reply to his critics and some reflections on the future of philosophy of technology. The organizers still hope to publish a volume based on the proceedings, but nothing has been decided yet. Comparative Perspectives Everything I have summarized so far in support of a claim that there have been advances in North American philosophy of technology since Bad Homburg is, actually, preparatory to the question I want to address in this paper. It should be obvious that there has been progress in the field of philosophy of technology in some sense. But exactly what do we mean when we speak of "advances," whether in the philosophy of technology or in any other similar field today? 
Is it just a matter of a continuing stream of new books and new journal articles published? I want to address this issue comparatively, by way of a comparison and contrast with developments in the philosophy of science and the sociology of science and technology. First, however, we need some definitions of what it may mean to speak of advancing or making progress in any academic field. Discussing the rise of analytical philosophy in the early twentieth century, Bertrand Russell (1945, p. 834) once claimed that, using logical techniques, analytical philosophy is "able, in regard to certain problems, to achieve definite answers" (in contrast with older philosophical approaches); in this respect, Russell claimed, analytical philosophy's methods "resemble those of science." Like scientific advance, Russell was assuming, there can be similar philosophical progress, with one contribution building on others, and so on. In the United States at least, this has become the ideal of academic progress, with one article in a "leading" journal in a "cutting-edge" field worth more, in terms of merit and reward, than any other kind of publication -- except possibly a "major" book reviewed (favorably) in all those leading journals. However, once this academic standard of progress was extended, by departmental committees and deans, to almost every field of higher learning, it began to come under attack. An early and vituperative version can be seen in Jacques Barzun's Science: The Glorious Entertainment (1964). These critics maintain that, when the standard is applied in humanities fields such as literature, history, and the arts -- and many of the critics lump philosophy together with other humanistic disciplines -- it is totally inappropriate. 
The only measuring rod we can use in these fields (and, as we will see below, later post-modern critics now say this is true even in the sciences) is greater and greater originality, especially in terms of persuading whatever are perceived to be the relevant audiences. A few transcendentalist metaphysicians and theologians object to both the strict (progressive) academic standard and the much broader "originality" (postmodern?) standard as retrogressive chasing after increasingly trivial minutiae. The only real progress moves in the opposite direction, toward more and more comprehensive syntheses -- ever closer approaches to truth or beauty or goodness (sometimes capitalized as Truth, Beauty, and Goodness). Such Hegel-like synthesizers are, I admit, rare today; but there are "right-side-up" dialectical materialist neo-Hegelians and others who insist on real social progress as the only appropriate standard. (I will return to this at the end of the chapter.) Finally, still others insist on what I would call an Aristotelian model, recognizing that academic fields are divided along disciplinary lines, each with its own standards. At least some of the sciences may meet the standard criterion of progress within limited domains, but most intellectual endeavors can make only "intensive" or "qualitative" progress, providing no more than a deeper appreciation of, or new insights into, old truths, traditional arts and crafts, and so on. We can now ask whether, in the past twenty years or so, there has been progress, in any of these senses, in philosophy of technology or in such allegedly more progressive fields as the philosophy of science and the sociology of science and technology. Philosophy of Science I take as my starting point for comparison here the (U.S.) Philosophy of Science Association's collaborative volume, Current Research in Philosophy of Science (1979), edited by Peter Asquith and Henry Kyburg. 
Two articles in the book are illustrative: Noretta Koertge's "The Problem of Appraising Scientific Theories" (pp. 228-251) and Ronald Giere's "Foundations of Probability and Statistical Inference" (pp. 503-533). Koertge says, "Philosophers of science [especially Popperians] have made considerable progress in providing clear accounts of how to appraise the content and the test record of a theory" -- and the series of citations she lists may seem impressive to at least sympathetic readers (though Koertge also adds immediately, "They have had much less success in explicating complicated mixed appraisals" -- p. 246). Giere says, "The development and consolidation of the 'subjective' Bayesian account of statistical inference during the past twenty-five years has been a remarkable intellectual achievement" (p. 508). This, however, must be balanced against Giere's claim less than a decade later, in what can only be called a philosophical "conversion" to "naturalized epistemology": "My skepticism [has] progressed to the point that I now believe there are no special philosophical foundations to any science [or, in the example above, statistical inferences in science]. There is only deep theory, which, however, is part of science itself. And there are no special philosophical methods for plumbing the theoretical depths of any science" (Explaining Science: A Cognitive Approach, 1988, p. xvi). As evidence of the current state of philosophy of science in the USA, I can cite two recent books: Robert Klee's Introduction to the Philosophy of Science: Cutting Nature at Its Seams (1997), and Joseph Rouse's Engaging Science: How to Understand Its Practices Philosophically (1996). Klee's exciting and challenging introductory survey of everything that has happened in the philosophy of science since the 1930s ends with a chapter on the realism-antirealism debate. At the end, Klee says, "I have never tried to hide from the reader my realist leanings" (p. 
239), and the main sources he appeals to are articles by Ian Hacking (1983), Richard Boyd (1984), and Richard Schlagel (1991). Antirealists referred to are Bas van Fraassen, in his The Scientific Image (1980), and Larry Laudan and Arthur Fine in articles included in Jarrett Leplin's Scientific Realism (1984). Though Klee seems to be up-to-date in his sources, an attentive reader will note that the articles cited are not much more recent than Current Research (1979); and the mere fact that Klee ends with a debate as old as that on realism versus antirealism should give one pause. Even when (in another chapter) Klee cites a clearly progressive claim -- in Wesley Salmon's "Four Decades of Scientific Explanation" (1989) -- the reader can quickly check Joseph Hanna's "An Interpretive Survey of Recent Research on Scientific Explanation" in Current Research and see that Salmon has added little new in the intervening decade. And Hanna admits that there has been only limited ("intra-paradigmatic") progress within several different and competing approaches. Rouse's book is, if anything, even more troublesome for anyone claiming that recent philosophy of science has been progressive. Rouse mounts a detailed attack not only on realism but also on its opponents -- he discusses in detail Larry Laudan (1984), Dudley Shapere (1984), Richard Miller (1987), and Peter Galison (1987), not to mention Arthur Fine (1986), who is analyzed and critiqued in chapter after chapter, and a whole raft of social constructionists, but particularly Harry Collins (1992) -- all in the name of "cultural studies of science," with a heavy dependence on such feminist critics of science as Donna Haraway. Though Rouse is extremely careful about uses and misuses of the label "postmodernist," his book is intended to be a contribution to the right kind of postmodernist critique of scientific progress claims. 
Deans and promotion committees are likely to continue to accept publication in Philosophy of Science and similar journals as unquestionable evidence of contributions to the advancement of philosophy of science. But as soon as anyone actually reads the articles published there, he or she will see that their authors have no illusions that the field is any longer even cohesive, much less progressive in the narrow sense. From Sociology of Science to Sociology of Scientific Knowledge (SSK) According to one source (Gaston, 1980), sociology of science as a subspecialty within sociology only dates back to the 1950s. From the mid-fifties until 1980, the field was dominated by one giant figure, Robert K. Merton -- though his On the Shoulders of Giants (1965) is an eloquent defense of the claim that intellectual originators, no matter how creative they may seem, always owe enormous debts to those who have gone before them. Between the 1950s and the late 1970s, almost all sociologists of science felt that they owed a major debt to Merton. His model of objective science as requiring the sharing of information, mutual criticism, disinterestedness, and universalism (disregarding social characteristics in the recognition of the importance of contributions to science) became the basis of other sociologists' research. As Gaston summarizes the situation: "The model of a social system of science in which scientists pursue knowledge in a social environment, hoping and expecting to receive recognition for their original contributions, provides a multitude of research questions -- what has come to be called 'Mertonian' sociology of science" (Gaston, 1980, p. 475). This approach continues to have its followers -- most notably in the various forms of the Science Citation Index and cognate series -- but hardly anyone today thinks of this tradition when referring to advances in social approaches to the study of science. 
In 1979, Bruno Latour and Steve Woolgar published Laboratory Life: The Construction of Scientific Facts, and a new tradition was launched. One of its principal aims was to undercut the Mertonian model and the positivist philosophy that was perceived to lie at its core. Since then, the "sociology of scientific knowledge" -- as the field was renamed to emphasize its focus on the actual doing of scientific work rather than on allegedly authoritative products of successful scientific work -- has been perceived by almost everyone in science and technology studies as one of the most prolific, rapidly advancing fields in all of academia. Joseph Rouse dates the revolution from the so-called "Edinburgh Strong Programme," associated especially with the names of Barry Barnes (1974) and David Bloor (1976), and he goes on to list the fragments of later social constructivism as including "Bath relativism, ethnographic studies, discourse analysis, actor/network theory, and constitutive reflexivity" (Rouse, 1996, p. 1). But he and nearly every other commentator treats constructivism as an advancing -- if not monolithic -- field. Indeed, nearly everyone who is not unalterably opposed to it (see Gross and Levitt, 1994) thinks of the constructivist school(s) as advancing at an amazing pace. What I want to do here is contrast later with earlier stages of one of these strands, laboratory studies. If we date this subspecialty in constructivist studies from Latour and Woolgar's Laboratory Life (1979), it is fairly easy to demonstrate that there have been a large number of later developments building on earlier ones. 
In Karin Knorr Cetina's summary in the Handbook of STS (1995), the developments extend Latour and Woolgar's examples, from Eisenstein (1979) on the printing press as a social agent of change, to Amann and Knorr Cetina (1990) on image interpretations in molecular biology, to Henderson (1991) on computer graphics, to Hirschauer (1991) on sex-change surgery -- to broader sets of examples in Lynch's Art and Artifact in Laboratory Science (1985) and Latour's Science in Action (1987). (See Knorr Cetina, 1995, p. 155.) Indeed, it sometimes seems that any adequate list would be too long to summarize. (Knorr Cetina tries, in her 1995 survey.) It would take a churlish critic to deny that there has been progress here -- and I have not even referred to advances in actor/network theory and similar approaches. Nonetheless, even Knorr Cetina as the loyal chronicler of these advances admits that her favored approach, laboratory studies, has its limits. The most important ones she lists have to do with their microscopic focus on individual laboratories rather than on consensus building among larger groups of scientists; and with their failure to account for larger societal contexts that influence laboratory life (Knorr Cetina, 1995, pp. 161-162). And of course this does not even mention criticisms by jealous defenders of science's progressivism (Gross and Levitt, 1994), who view what is alleged to be progress here as no more than an ever-broadening smear campaign against more and more hardworking scientists. In concluding this section, it seems fair to say that advances in laboratory studies continue right down to the present; but it is also fair to say that such studies have their limits and their critics. 
Social Constructivist Studies of Technology Moving closer to a direct parallel to philosophy of technology, several sociologists (and sociologically-oriented historians) in the mid-1980s extended their constructivist studies, in an explicit way, to the study of technology -- usually, of particular technologies. It was this group of scholars whom Winner was attacking in his paper, "Upon Opening the Black Box and Finding It Empty" (1991). And representatives of this school have fought back. (See Bijker, 1993, and Aibar, 1996.) Wiebe Bijker, in his summary of developments in the field in the Handbook of STS (1995), traces its roots to Thomas Hughes, the historian, in his masterly study, Networks of Power: Electrification in Western Society, 1880-1930 (1983). Hughes then combined with Bijker and Trevor Pinch to edit the book that others often list as the beginning of the new tradition, The Social Construction of Technological Systems (1987). That does not leave much time for a great deal of development between 1987 (or even 1983) and Bijker's summary (1995). Nonetheless, people do perceive the constructivist study of technological systems as a rapidly advancing field. But what kind of advance has there been? Bijker and John Law, in Shaping Technology/Building Society (1992), offer an answer. According to them, technology studies had earlier been "fragmented": "There are internalist historical studies; there are economists who are concerned with technology as an exogenous variable; more productively, there are economists who wrestle with evolutionary models of technical change; there are sociologists who are concerned with the 'social shaping' of technology; and there are social historians who follow the heterogeneous fate of system builders" (p. 11). 
By the end of the book -- which summarizes the evidence in a somewhat heterogeneous collection of essays, though written by leading figures in the field -- Bijker and Law conclude that a "first step" has been taken in understanding "that technical questions are never narrowly technical, just as social problems are not narrowly social" (p. 306). Back in the introduction, Bijker and Law had summarized the progress made so far: "The last five years has seen the growth of an exciting new body of work by historians, sociologists, and anthropologists, which starts from the position that social and technical change come together, as a package, and that if we want to understand either, then we really have to try to understand both" (p. 11). In short, all that Bijker and Law are claiming as advances in the new field so far is that there has been a "development of an empirically sensitive theoretical understanding of the processes through which sociotechnologies are shaped and stabilized" (p. 13). But everyone knows that theoretical arguments are never-ending, and if there is to be any progress in this new field it will show up in detailed studies that confront theory with evidence. And Hughes had already displayed that process admirably, in Networks of Power, in 1983. So where do we stand at this point in our comparative survey? The new sociology of scientific knowledge, especially laboratory studies, comes closest to the ideal of science-like progress, with one article building on others in continuous advance. Paradoxically, however, these studies are narrow and limited, and defenders of science maintain that, cumulatively, they serve to undermine scientific progress and give comfort to the enemies of science. Studies in the new social constructivist approach to technology have so far seen only theoretical advances -- and every new theoretical formulation is met with challenges, even within the field. 
Philosophy of science today is a battleground, fragmented and splintered not only into subspecialties, but also setting modernists against postmodernists in seemingly endless variations. So what started out as the most progressive of science studies fields, in the narrow sense, now shows advances only in specialty areas and within particular paradigms. Citation indices document all of these advances, along with advances in the sciences themselves, but nearly everyone treats them as raw data awaiting a theoretical interpretation. And what about philosophy of technology? I think the evidence I displayed at the outset supports the claim that this field is just about as progressive (or lacking in progress in the narrow sense) as any of the comparator fields discussed here. Conclusion Are there, then, no advances in science and technology studies -- or at least none that go beyond qualitative change? I believe that real though limited progress has been made during the years surveyed here, but it is not in the academic sense implicit in the conference title, "Advances in the Philosophy of Technology." To make this point, I can quote Bijker and Law at the end of Shaping Technology/Building Society (1992): "When things go wrong, it may not make much sense to blame technologies. Neither does it necessarily make sense to blame people, nor even . . . economic systems. . . . If we want to make sense of [technological] horrors -- and more important, do something about them -- . . . what we urgently need is a tool kit . . . for going beyond the immediate scapegoats and starting to grapple with and understand the characteristics of heterogeneous systems" (p. 306). To which I would say amen, but especially to the phrase, "more important, do something about them." Surely we do need theoretical advances, but even more surely we need to make more progress in solving the real-world problems of our technological society. 
In the very first volume of Research in Philosophy and Technology (1978), I argued for a social action approach to philosophy of technology (following the lead of the American Pragmatist philosophers, George Herbert Mead and John Dewey). I repeated that call to action at the Bad Homburg meeting. And I made my most extensive appeal in Social Responsibility in Science, Technology, and Medicine (1992). I believe that progressive activists have been making progress in solving technosocial problems (see McCann, 1986), and there is no reason why philosophers and other academics cannot join with them. At Bad Homburg, I quoted German colleagues, Hans Lenk and Günter Ropohl: "The multidisciplinary and systems-like interlocking of techn(ologi)cal problems requires . . . the interdisciplinary cooperation of social science experts and generalists, . . . systems analysts and systems planners. Philosophy has to accept the challenge of interdisciplinary effort. . . . It has to step out of the ivory tower of restricted and strictly academic philosophy" (Durbin, 1983, p. 2). But we must take this plea quite literally, and cooperate not merely with other experts; we must also cooperate with all sorts of citizens of good will who are seeking progressive solutions for serious contemporary social problems. And we must hope that philosophers of science and academic philosophers of technology and sociologists of science and students of the social construction of technology will do likewise. It is important to understand sociotechnologies, but it is more important to do something about the social problems associated with them.

Chapter 8

PHILOSOPHY OF SCIENCE AND SOCIAL RESPONSIBILITY

An astute reader will have noted that, unlike my Social Responsibility book, which was addressed in large part to technical professionals other than philosophers, here I have been primarily addressing my invitation to activism to fellow philosophers.
As I came to the end of this collection, I decided to include an essay I had done -- as a keynote address for a conference labeled "Discovering New Worlds" in 1993 in Puerto Rico -- that was intended to take on the most difficult of all tasks related to that invitation to philosophers. In a book devoted to inviting philosophers to join in technosocial activism, academic philosophers of science would seem to be a most unreceptive audience. It is not that philosophers of science think that nothing they have to say is relevant to social responsibility. Alex Michalos (1984), though initially reluctant, did end up — in a widely used summary of philosophy of science — finding areas of relevance to social responsibility. And many of the most traditional positivist philosophers of science (as explicitly stated by Reichenbach, 1951) saw their role as defenders of the objectivity of science, which they simply assumed was progressive or socially beneficial. I am aware of only one famous foray by a philosopher of science into social activism of a sort: Michael Ruse's serving as an expert witness in the 1981-1982 "creation science" trial in Arkansas; and most philosophers of science at the time thought of that foray into activism as a disaster (see LaFollette, 1982, and Ruse, 1982). In my experience, most philosophers of science today — even as the field has become hopelessly fragmented (see the previous chapter and Durbin, 1994) — are satisfied to argue with one another in about as inbred a fashion as the most academic of academicians. Even so, I want here to issue a call to activism to them as much as any other philosophers. One often hears, in philosophy of science as well as other intellectual circles, nasty put-downs of opponents. Philosophers of science opposed to Thomas Kuhn do not simply object to what they see as his relativism; they get angry about the matter. (See, for a humorous though serious example, Laudan, 1990.)
Similarly, Joseph Pitt (1990), reviewing the introductory textbook, Philosophy of Technology, does not just object to Frederick Ferre's approach; he feels the need to use "harsh words." And the same phenomenon occurs everywhere in scientific and technological literature, with attacks on others as quacks or charlatans, as "just plain wrong," and so on. (See Radner and Radner, 1982.) I understand the passion for truth, the insistence on rooting out error, that motivates these exchanges. I also understand the motives of recent skeptics who challenge the grounds on which people take their stand in making such judgments. The issue turns on whether one thinks it is or is not possible to discover the truth or some warranted-assertability approximation to the truth. I want to sidestep that issue here. The key word in doing so is "discover." People often claim to have discovered the truth or to have uncovered an error or a mistake. I do not want to focus on such (alleged) discoveries, but on discovering, on the process of discovery, on the doing of science (including biomedical science, science in an engineering context, and other areas of technical work). In that arena, it seems to me, there is much more room for a cooperative attitude, for working together, for seeking commonalities. Focusing on this, with respect to science and technology policy, emphasizes that these noble endeavors are, above all, human projects. They only work well if the individuals involved share motives and knowledge bases, communicate, critique one another's work constructively, and generally collaborate in a common enterprise. And if this is done well — scientists have always assumed — society will benefit, at least in the long run. Here I want to shorten that long run, and to focus on possible contributions of philosophers of science rather than the scientists they study.
Some Samples from the Literature on Discovering

Given the general inclination of philosophers to concentrate on warranted assertions, it might come as something of a surprise to discover how much has been written, in recent decades, on the discovery process. There is, of course, the now-vast literature in what is sometimes called the "sociology of scientific knowledge," and I will refer here to some authors in that tradition (those traditions). But I refer to a variety of authors from other traditions as well. I have chosen just a small sample, but I have tried to make it representative of the whole field of science, from abstract mathematics through the physical and natural sciences to engineering.

a. Mathematics

The first major figure to emphasize the discovering process was the mathematician George Polya, beginning with his popular and influential handbook, How to Solve It (1957 [1945]). In 1954, Polya published a two-volume study, Mathematics and Plausible Reasoning, where he says: "Mathematics is regarded as a demonstrative science. Yet this is only one of its aspects. Finished mathematics presented in a finished form appears as purely demonstrative, consisting of proofs only. Yet mathematics in the making resembles any other human knowledge in the making. You have to guess a mathematical theorem before you prove it; you have to guess the idea of the proof before you carry through the details. You have to combine observations and follow analogies; you have to try and try again. The result of the mathematician's creative work is demonstrative reasoning, a proof; but the proof is discovered by plausible reasoning, by guessing. If the learning of mathematics reflects to any degree the invention of mathematics, it must have a place for guessing, for plausible inference" (p. vi). This is the pattern I want to emphasize here: what Polya does not say explicitly is that the plausibility/discovery feature is most often found in social collaboration.

b.
High-Energy Physics

The sociological historian Andrew Pickering is well known as one of the most extreme advocates of the so-called "strong programme" in sociology of science. He says (Pickering, 1984, p. 12) the key to his analysis of the dynamics of research traditions in high-energy physics is a theme he calls "opportunism in context": "Research strategies," he says, "are structured in terms of the relative opportunities presented by different contexts for the constructive exploitation of the resources available to individual scientists." Some of the limited resources that constrain practice are material, such as major pieces of equipment available only at certain laboratories. But Pickering focuses even more on theoretical resources: "The most striking feature of the conceptual development of HEP [high-energy physics] is that it proceeded through a process of modelling or analogy." Then Pickering points out: "Two key analogies were crucial to the establishment of the quark-gauge theory picture. As far as quarks themselves were concerned, the trick was for theorists to learn to see hadrons as quark composites, just as they had already learned to see nuclei as composites of neutrons and protons, and to see atoms as composites of nuclei and electrons. As far as the gauge theories of quark and lepton interactions were concerned, these were explicitly modelled upon the already established theory of electromagnetic interactions known as quantum electrodynamics." Pickering then recognizes the role of educational background: "The point to note here is that the analysis of composite systems was, and is, part of the training and research experience of all theoretical physicists. Similarly, in the period we will be considering, the methods and techniques of quantum electrodynamics were part of the common theoretical culture of HEP."
And he concludes with a reference to analogy as a crucial part of the story: "Thus expertise in the analysis of composite systems and, albeit to a lesser extent, quantum electrodynamics constituted a set of shared resources for particle physicists. And, as we shall see, the establishment of the quark and gauge-theory traditions of theoretical research depended crucially upon the analogical recycling of those resources into the analysis of various experimentally accessible phenomena" (Pickering, 1984, p. 12). Even physics, that most "objective" of fields (according to the traditional view), Pickering is saying, is determined or at least constrained by numerous background educational and social pressures.

c. The Plate-Tectonics Revolution in Geology

Ronald Giere has been recognized for two decades as a leader in philosophy of science focusing on the foundations of probability and statistical inference. Giere (1988, p. xvi) now thinks his earlier approach was mistaken: "My skepticism [has] progressed to the point that I now believe there are no special philosophical foundations to any science. There is only deep theory, which, however, is part of science itself. And there are no special philosophical methods for plumbing the theoretical depths of any science. There are only the methods of the sciences themselves." It was at least partly Giere's study of the fairly recent revolution in geology that led him to this point. In a series of articles in the early 1980s (e.g., Giere, 1984), and more particularly in his book, Explaining Science: A Cognitive Approach (1988), Giere has focused on the plate-tectonics revolution in geology to illustrate his new Quine-inspired "naturalized epistemology" of science. He concludes his account in Explaining Science (1988, p.
277) this way: "An evolutionary model of science grounded on natural, cognitive mechanisms removes any need to feel apologetic in the face of the obvious fact that the approach to a scientific issue adopted by individual scientists often seems more determined by the accidents of training and experience than by an objective assessment of the available evidence." Giere then makes an explicit reference to common sense: "This is just what one should expect of normal cognitive agents. What sorts of models any individual will regard as most promising or appropriate will of course be strongly influenced by which sorts of models have been learned first and used most." And he denies that there is anything wrong with this: "This is not irrationality or anything of the sort. It is normal human behavior, and scientists are normal human beings. Nor does this imply a relativist view of science. The right kinds of interactions among scientists favoring different approaches, together with extensive interactions with nature (mediated by appropriate technology), can produce widespread agreement on the best available approach." According to Giere (1988, p. 277), "That is the lesson of the 'revolution' in geology for those who would seek to understand how science works."

d. Genetics

Under this heading, I want to cite two examples. The first is the obvious place to start, with James Watson's story in The Double Helix (1968). Watson begins that famous account with a preface: "Here I relate my version of how the structure of DNA was discovered. In doing so I have tried to catch the atmosphere of the early postwar years in England, where most of the important events occurred. As I hope this book will show, science seldom proceeds in the straightforward logical manner imagined by outsiders" (Watson, 1968, p. ix).
Watson proceeds to illustrate his point: "Instead, its steps forward (and sometimes backward) are often very human events in which personalities and cultural traditions play major roles. To this end I have attempted to re-create my first impressions of the relevant events and personalities rather than present an assessment which takes into account the many facts I have learned since the structure was found. Although the latter approach might be more objective, it would fail to convey the spirit of an adventure characterized both by youthful arrogance and by the belief that the truth, once found, would be simple as well as pretty." Watson even points out how pettiness can be involved, while making an allusion to common sense: "Thus many of the comments may seem one-sided and unfair, but this is often the case in the incomplete and hurried way in which human beings frequently decide to like or dislike a new idea or acquaintance" (Watson, 1968, p. ix). Watson's account, which caused a stir among interpreters of science at the time, is now over twenty years old. But others have continued to pursue the path that he sketched out in historiography as well as in molecular biology. Recently Karin Knorr-Cetina, a leader among the new breed of sociologists of science, has (together with Klaus Amann) carried out a fascinating series of studies on image analysis, one of the keys to the discovery of the structure of DNA by Watson and Francis Crick, and especially Rosalind Franklin. Knorr-Cetina and Amann (1990, p. 259) begin one report of their recent studies with the observation that, "Philosophers, historians, and sociologists of science have long considered writing to be a central part of scientific activities." They admit this is as it should be but add: "Yet from within scientific inquiry, the focus of many laboratory activities is not texts, but images and displays." 
Knorr-Cetina and Amann then concentrate on talk about images in the laboratory related to four environments: laboratory practice, invisible physical reactions, the image as it will appear in future publications, and case precedents in the field. Knorr-Cetina and Amann (1990, p. 281) conclude: "Image surface calculations, reconstructions of events in the test tubes of the lab, and remedial actions designed to transform badly turned-out pictures into showcases of data exemplify the type of work performed when technical images are inspected in the laboratory. Suffice it to add that many autoradiographs or other images are not just inspected once, but give rise to several image-related conversations, and many images are internally related by being predecessors or successors of others. Thus, instead of looking more or less directly at the laboratory and glossing the invisible processes therein, participants looked first at other pictures and let themselves be guided to these processes by the appearance of the pictures." Knorr-Cetina and Amann make their point explicitly: "The example illustrates that there are variations on the procedures that participants combine in image dissection" (p. 281). And it is clear that they think this process is at work in all sorts of image analyses in biomedical research.

e. The Life Sciences Generally

My example here may be less appropriate than others in this listing. David Hull's Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science (1988) reinforces fairly traditional sociological accounts of science as a reward system (see Gaston, 1984), and it claims to describe natural science as a whole rather than just the life sciences. But the marvelously detailed accounts of competition that Hull includes focus on biologists as the communities of scientists he knows best. As his title indicates, Hull's book concentrates on the real-life process of science, not some philosopher's abstraction.
Hull ends with the claim that his book was intended as the fulfillment of Thomas Kuhn's and Stephen Toulmin's earlier projects. Here is a sample of Hull's rhetoric (1988, p. 7) on the way scientists compete: "In science, 'weasel words' serve an important positive function. They buy time while the scientists develop their positions. It would help, one might think, if scientists waited until they had their views fully developed before they publish, but this is not how the process of knowledge development in science works. Science is a conversation with nature, but it is also a conversation with other scientists. Not until scientists publish their views and discover the reactions of other scientists can they possibly appreciate what they have actually said." Hull concludes this paragraph with an almost astounding concession, coming from a fairly traditional philosopher of science: "No matter how much one might write and rewrite one's work in anticipation of possible responses, it is impossible to avoid all possible misunderstandings, and not all such misunderstandings are plainly 'misunderstandings.' Frequently scientists do not know what they intended to say until they discover what it is that other scientists have taken them to be saying. Scientists show great facility in retrospective meaning-change" (Hull, 1988, p. 7).

f. Natural Science Generally

Under this heading, I cite three authors. i) The first, Daniel Rothbart, does not depart much from traditional approaches in philosophy of science; the article I quote from (Rothbart, 1984) is filled with references to reference, meaning, and "semantic field theory." But Rothbart's conclusion -- that metaphor is "an essential aspect of scientific reasoning" -- is, even he says, at odds with one of the deepest prejudices positivistically inclined philosophers of science have held, namely, that metaphors, though indispensable in science, are always ultimately explicable in literal terms. Rothbart's conclusion (p.
611) is based on the treatment of several examples using this model: "The function of metaphoric projection is to reorganize the semantic field by introducing new saliencies into the field by highlighting some features and eliminating others. New attributes are formed and can be directly beneficial when a conventional field of concepts fails to permit certain desirable features to emerge. . . . "If metaphor forms the basis of concept formation, then conceptual problem solving is in many cases fundamentally metaphoric. Assuming that a conceptual problem is some weakness within the system of concepts, the gain from metaphor is expansion of the range of possible features attributable to the subject. This range was apparently too limited with the subject's own semantic field. When the primary subject is juxtaposed with prototypes from an alternative field, the metaphoric projection causes a reformulation of the network of similarities and differences." Rothbart then makes his own explicit contrast with standard philosophy of science: "Although metaphoric projection would not by itself validate a given hypothesis, it becomes a matter of rational preference for scientists to reformulate problematic concepts through metaphor. Its epistemic value arises from expansion of available similarity features" (Rothbart, 1984, p. 611). ii) My second example here is the well-known physicist, physics educator, and historian of physics, Gerald Holton (see Holton, 1978, 1988). He has introduced into the history of science literature a tool he calls thematic analysis. Holton claims that a small number of themata — typically antithetical dyads such as atomicity/continuum or analysis/synthesis, but also an occasional triad such as constancy/evolution/catastrophic change — play an extraordinarily large role in explaining major discoveries in the history of science. Holton prefaces one of his studies (1978, p.
vii) this way: "Considering the progress made in the sciences themselves over the past three centuries, it is remarkable how little consensus has developed on how the scientific imagination functions. Speculations concerning the processes by which the mind gathers truths about nature are among the oldest and still most prolific and controversial cognitive productions. Unless the inevitable distortion of near perspective is misleading me, it appears that only in the relatively recent period have proposals been made that have long-range promise. "The chief aim of this book is to contribute concepts and methods that will increase our understanding of the imagination of scientists engaged in the act of doing science." Later Holton says: "A finding of thematic analysis that appears to be related to the dialectic nature of science as a public, consensus-seeking activity is the frequent coupling of two themata in antithetical mode, as when a proponent of the thema of atomism finds himself faced with the proponent of the thema of the continuum. . . . The persistence in time, and the spread in the community at a given time, of these relatively few themata may be what endows science, despite all its growth and change, with what constant identity it has. The interdisciplinary sharing of themes among various fields in science tells us something about both the meaning of the enterprise as a whole and the commonality of the ground of imagination that must be at work" (1978, pp. 10-11). This imaginative constancy — of competing themes or paradigms — is very different from any positivist continuity of ever-better theories to account for theory-independent data, new or old. iii) It is the philosopher Nicholas Rescher, however, who has gone farthest along these lines in his interpretation of the nature of science. I have in mind especially Rescher's book, Dialectics: A Controversy-Oriented Approach to the Theory of Knowledge (1977). Rescher makes a complex case for his view.
I cite here only a few short passages from his concluding chapter: "This final chapter will explore the prospects of devising a disputational model for scientific inquiry. The basic idea of such a model is to cast the innovating scientist in the role of an advocate who sets out to propound and defend a certain thesis" (p. 110). Rescher contrasts this with progressive claims about scientific evidence, while at the same time denying any claim that his view would ignore the role of evidence: "Such an approach to scientific inquiry by no means denies the crucially important role of the standard considerations regarding the nature of scientific evidence. . . . "Experimentation plays a central role in this probative process. The devising of experiments to probe a theory at its weakest points, experiments which might — if their eventuation is suitably negative — throw serious doubt upon its claims, comes to be an objective that proponent and opponent share in common. This is so because counter-indicative experimental findings are a powerful, indeed virtually decisive weapon in the opponent's armory. And on the other hand, the favorable issue of such an experimental test is a strong asset to the proponent's case" (Rescher, 1977, p. 112). Finally, Rescher relates his view to claims that Thomas Kuhn and others have made about historical controversies over scientific evidence: "Such a dialectical-disputational model of the process of scientific inquiry has many attractive features in accounting for the actual phenomenology of scientific work. Not only does it explain the element of competition that all too plainly characterizes the actual modus operandi of the scientific community. It accounts also for the 'Planck phenomenon' . . . which envisages an old school of stubborn resistance to scientific innovation that is never conquered in the course of progress but simply bypassed" (Rescher, 1977, p. 113).

g.
Engineering

Billy Koen, an engineer, is one of the few authors of any kind — including historians, philosophers, and social scientists (for others, see Downey, Donovan, and Elliot, 1989) — who has discussed the thinking processes involved in actual engineering practice. He starts his little book on the subject, Definition of the Engineering Method (1985), with an acknowledgement that almost nothing has been written about engineering method, in contrast to scientific method. But a major theme throughout Koen's book, and his final conclusion, is that everyone is an engineer in the sense that he or she must "develop, learn, discover, create and invent the most effective and beneficial heuristics" or problem-solving techniques to deal with life. Engineers are just very important examples of social problem solvers in a world dominated by technology. With respect to engineers, Koen finds that their practice revolves around two things: heuristic problem solving techniques, and, under this heading, an insistence on using only what is state-of-the-art. After defining the engineering method in these terms, Koen (1985, p. 41) feels he must take one final step: "Defining a method does not tell how it is to be used. We now seek a rule to implement the engineering method. Since every specific implementation of the engineering method is completely defined by the heuristic it uses, this quest is reduced to finding a heuristic that will tell the individual engineer what to do and when to do it." In a controversial conclusion, Koen seems to reduce engineering to something close to personal whim: "My Rule of Engineering is in every instance to choose the heuristic for use from what my personal [state of the art] takes to be the [state of the art] representing the best engineering practice at the time I am required to choose" (Koen, 1985, p. 42).
But Koen does not mean anything subjective when he says this; he thinks this sort of engineering state of the art is as common, in a group of engineers at a given time, as the parallel "common practice" in medicine.

h. Technology Assessment

Arriving at satisfactory engineering solutions — even the best solutions under the circumstances, given a particular state of the art — does not complete the picture when it comes to technological practice, however. Too often, as we know sadly enough, technological developments turn out to have unexpected environmental, social, or political consequences. In order to deal with these in an orderly and, one hopes, anticipatory fashion, another technique has been developed — technology assessment — to aid in the formulation of technology policy or, more generally, policies for our technological world. Technology assessment, along with its most common feature, risk/cost/benefit analysis, can be seen as a way of providing decision makers in government or industry with reasonably objective grounds for their decisions. This is the final arena I want to refer to in which actual practice differs significantly from idealized models. Helen Longino, who has recently gained recognition for her novel social approach to scientific knowledge (Longino, 1990), had earlier looked at how a real-life technology assessment works. The specific case she reviews is the workings of the National Research Council's Committee on Biological Effects of Ionizing Radiation [BEIR], relative to the nuclear generation of electricity. Longino (1985, p. 184) concludes that, "The pressure from regulatory and other agencies to have an answer to questions about radiation hazards may force scientists to compromise even in the absence of adequate grounds for consensus."
And she contrasts this with the alleged aims of such commissions to provide objective grounds for decision makers: "I used to think of this debate as a nice illustration of a view I have developed elsewhere -- that scientific objectivity is a function of the social character of science -- just because the debate is focused on the background or auxiliary assumptions (the risk models) mediating between hypotheses and data. I am less sanguine about this today. Certainly the behavior of the National Academy's panel does not meet conditions for objectivity, such as openness to criticism and alternate views. Not only does it attempt to impose consensus where there clearly is none, but the debate is skewed by the exclusion of points of view such as [John] Gofman's [anti-nuclear views]" (Longino, 1985, p. 184).

Postmodern Interpretations

What is going on here? Some people have taken the appearance of these discussions of real-life scientific and technological practice to signal a wholesale rejection of objectivity in science. One of the strongest statements of this point of view is to be found in Gayle Ormiston and Raphael Sassower's Narrative Experiments: The Discursive Authority of Science and Technology (1989). Tracing their sources to W.V. Quine, Thomas Kuhn, Paul Feyerabend, Richard Rorty, Michel Foucault, Jacques Derrida, and Jean-Francois Lyotard (along with Ludwig Wittgenstein and John Austin), Ormiston and Sassower (1989, pp. 16-17) say that: "Instead of locating authority in a particular genre or discursive mode, our identification of discursive displacement attempts to show how the fabrication and deployment of rules is pertinent to any interpretive experiment. In order to talk about the dissemination of authority, we have used two rules — 'use creates' and 'all learning is recollection' — rules legitimated by their use alone.
The use of these rules demonstrates the impossibility of fixing in any permanent fashion the boundaries and limits that constitute cultural matrices." This claim is made in the context of a still stronger one: "The metadisciplinary perspective of this text offers a critically comprehensive overview of the cultural and humanistic context of science and technology. Such an account is not concerned to provide a hierarchical ordering of science, technology, and the humanities. Instead, it attempts to undermine any such ordering by demonstrating how science, technology, and the humanities develop in concert with one another; they are mutually constitutive of one another and their culture. Science, technology, and traditional humanistic studies, then, are modes of one another" (Ormiston and Sassower, 1989, p. 14). Few authors have gone as far as this in claiming that science and technology are humanistic enterprises (just as, for Ormiston and Sassower, the reverse is true), with all that that entails. But their sources, especially Feyerabend, Derrida, Lyotard, and Rorty, have clearly tried to undermine the authority of science in our culture. Does this imply relativism? In a delightful recent attack on relativism, Larry Laudan (1990) constructs a dialogue involving a relativist in debate with a positivist (who makes many references to writings of Carl Hempel), a realist (whose favorite author seems to be Hilary Putnam, in some of his writings), and a pragmatist (Laudan himself). Laudan (p. xi) says he had to work hard to make his relativist "clever and argumentatively adept," and he notes how two of his chief sources for the position, Quine and Kuhn, try to resist the relativist label. But Laudan is convinced that all these authors — and he would probably now add Ormiston and Sassower to his list — are relativists, if not explicitly then by implication. 
One recent philosopher who is explicitly relativist — as well as being as clever and argumentatively adept as Laudan could want — is Joseph Margolis in Pragmatism without Foundations: Reconciling Realism and Relativism (1988). Margolis first selects out of all the meanings of relativism one that he can defend. It is a relativism that rejects foundationalism in all fields, including science, as incompatible with what we know about the limits of human knowing. With foundationalism also goes transcendentalism, though Margolis is careful to defend the human possibility of deriving certain transcendent truths within historical contexts. Enough of that for now, however. As I said earlier, I do not want to get into the relativism debate here. My message is a much less ambitious one than that.

A More Modest Interpretation: The Art of Doing Science

All the talk, among the authors I have quoted, about the discovery process in science (and engineering) reminds me of the work I did in an earlier book, Logic and Scientific Inquiry (Durbin, 1968). What I focused on there, influenced by Norwood Russell Hanson's Patterns of Discovery (1958; and other writings, e.g., 1961), was a "logic" of discovery. Nonetheless, I ended with a non-logical characterization of the scientific discovery process as a loose set of patterns for the resolution of conflict within research communities that in some ways anticipated the formulations of Ronald Giere, David Hull, and especially Nicholas Rescher. I believe I was on the right track in that book (based on my doctoral dissertation) even though my motivation at the time may seem surprising; it was to follow up on leads in Aristotle and the Aristotelian tradition in order to arrive at a better understanding of the processes of modern scientific discovery.
I would like here to return to the Aristotelian tradition once again for some commonsense hints about how to interpret what the authors quoted here so extensively are saying about the discovery process in science (and engineering). In an obscure and seldom-noted passage buried in his systematization of Aristotle's thoughts on art (Summa theologiae I-II, q. 57, art. 3), Thomas Aquinas points out that even deductive reasoning requires art or artfulness or craftiness. And this is exactly what we have seen George Polya claim, hundreds of years later. Although Aquinas has many other things to say about art, some of which may be helpful in interpreting technology — even modern technology (see Durbin, 1981) — his treatment of the subject is vague and abstract. However, there is another obscure and seldom-noted sentence in Aquinas (in the article cited above) that may suggest how we can be more concrete in interpreting the practice of science (and engineering). What I have in mind is a passage in which Aquinas equates art with prudence in two areas, ship navigation and military tactics. Aquinas is generally quite concrete about what prudence entails (see Summa II-II, qq. 47-56), though I will not here go into all the details. Among other things, he says that intellectual practice (in any field) requires the obvious: a good memory, quick wit, good reasoning skills, and so on. He adds that a lack of good judgment is a fault, along with lack of hard work, precipitation, lack of foresight and circumspection, failure to consult with and learn from the experience of others, and especially changing data and failing to bring the process to a timely conclusion. (Some of these points are already made in Aristotle's Nicomachean Ethics, book VI, but Aquinas is also systematizing a great many authors, including Cicero.) I would not want to be misunderstood here. In this matter, I am not treating these classical authors as authorities. They are simply codifying common sense.
It should be obvious to anyone who reflects on any sort of intellectual practice in any intellectual community that such virtues are to be encouraged and such bad habits avoided. I refer to Aristotle and Aquinas only because I first noticed these commonsense hints in their works; others could just as easily infer them from any of the authors quoted above — from George Polya, fifty years ago, to the most recent sociologists of science doing anthropology-like studies of laboratory life. Furthermore, there is an aspect of the matter that is not adequately emphasized by Aristotle and Aquinas. It is the fact that intellectual activity, especially of a scientific sort, almost always takes place in communities of scientists, engineers, and other technical workers. And they have elaborate sets of beliefs, values, procedures, and techniques. One of my favorite philosophers, the American Pragmatist George Herbert Mead, is as clear as anyone on what this implies. Attacking all sorts of individualist epistemologies, from David Hume to Immanuel Kant to G. W. F. Hegel to Bertrand Russell, Mead uses Russell's sense-data theory to argue that science could never be built up by the accumulation of individual scientists' experiences. Instead, these experiences must arise within a world taken for granted by the scientist's community — a world filled with particular meanings, assuming certain laws to be true and to have been arrived at using certain methods, and so on (see Mead, 1964 [1917]; see also Kuhn, 1970, pp. 176 and 182-185).

Conclusion

The conclusion I would draw from all of this is that discovery, the process of discovery in science and other technical fields, is as much a matter of discourse, of intellectual give-and-take, of cooperation (as well as competitiveness), as in other intellectual communities. There are differences, of course, but I am here focusing on commonalities.
Furthermore, the communities of scientific and technological researchers exist within broader intellectual communities; for instance, within universities and interrelated professional societies, not to mention publishing houses, the media, and so on. In my view, we should take advantage of these commonalities and foster the awareness -- not only among students but among administrators of all kinds, public officials, and the public at large -- that science and technology are above all collaborative enterprises. In short, competitiveness surely has a place in science and technology, and it can be easily understood how such competition often gets out of hand. But a focus on discovering, on the process of discovery, on the day-to-day life of practicing scientists shows how much of the scientific and technological enterprise is a matter of intellectual teamwork -- at its best undertaken out of a sense of social responsibility and for the benefit of humankind. When I first wrote that last qualifier (Durbin, 1994 [actually written in 1993]), about social responsibility, I was not thinking explicitly about this book and its appeal to philosophers to get involved in activism on technosocial issues. But the connection between the social-melioration rhetoric of scientists and my appeal here — for philosophers of science to follow scientists' example — seems appropriate in the present context. What I have emphasized throughout the book is that philosophers — and perhaps especially my fellow philosophers of technology — ought to profess the same noble goals, of serving humankind, as scientists do. And this seems to me all the more urgent, for both scientists and philosophers, in our "age of technology," with its myriad social problems.

EPILOGUE

Throughout this book-length set of essays, I have discussed how some philosophers in the applied philosophy community have gotten involved in activism, and others — perhaps the larger percentage — have not.
In the two chapters where this issue is most directly engaged — chapter 5 on bioethics and chapter 6 on engineering ethics — I noted a significant difference. Philosophical bioethicists, as a group, often seem to want to make an impression on fellow philosophers in academia. Some get more respect for this than others. But where nearly all philosophers who write about bioethical issues get the most respect is in their socially responsible work on ethics committees or commissions in a great variety of institutional settings. Some of the best work has been done on national and international commissions, but all except the most resolutely academic of us who do work in bioethics belong to some sort of health or healthcare committee in a hospital or similar setting. And what I maintain is that that is where the best and most socially responsible work is being done. Philosophers involved with engineering ethics — and such kindred areas as computer ethics or the ethics of biotechnology — are a different story. The vast majority of real-world engineering ethics is done by engineers — and especially by a relatively small group of engineers who have a special concern to uphold the good name of the profession. Some work with sanctioning committees of technical professional organizations; others with licensing boards; and so on. Only occasionally do such agencies and groups call upon philosophers for help. When they do, it is true, they may be referred to those few philosophers in the USA who have gained renown as contributors to academic engineering ethics; but, in those cases, what they are often looking for is what one might call legal advice, advice on how to improve codes of ethics or similar matters. Except for technology assessment boards (where these still exist) or environmental impact assessment boards, few of these group activities wrestle directly with the kinds of thorny cases that healthcare ethics committees do.
So the activism of academic philosophers involved in engineering ethics is less direct than that of bioethicists — or so I have argued in chapter 6. With respect to the ethics of technology more broadly, the story is different once again — and the record much more diverse. As I argued in chapter 3, there is a great diversity here among scholars explicitly identifying themselves as philosophers of technology. Some explicitly join the engineering ethicists or computer ethicists and attempt to provide better codes of ethics for practitioners — or, in a slight move toward activism, try to teach future engineers (for example) who enroll in engineering ethics classes, some of which are required courses in engineering (or computer science, or biotechnology, etc.) programs. Others eschew ethical preaching — as they perceive the matter — and are only willing to contribute by serving as fellow experts in such ventures as technology assessment or environmental impact assessment teams. Still others think that real change in the ethos of engineering and other technical professions can only be brought about by political means — ranging from moderate to progressive to radical. Finally, there are the secular preachers on the "technology question" — philosophers like Martin Heidegger or Jacques Ellul or Langdon Winner or Albert Borgmann — who attempt to deal, in one fashion or another, with technological culture as a whole. In my opinion, all too few of these avowed ethicists of technology have any chance of impacting our technological society (or particular high-technology societies, such as the United States or the European Community, or the nations of the Pacific Rim) unless — following the guidance of the American Pragmatists G. H. Mead and John Dewey — they get involved with progressive activists attempting to deal with particular technosocial evils in particular locales. 
(Even if they wish to have an impact on, say, global warming, real reform work must begin with particular industrial or governmental agencies, usually starting in particular countries.) In short, what I have argued here is that philosophers of technology, of whatever stripe, will need to become activists if they expect to have any impact on the real-world technosocial problems that they say they are concerned about. But what about the academic philosophy community more generally? A careful reader of chapter 8 might conclude that, in spite of my examples there, I really do not have much hope that academic philosophers of science are likely to become activists in large numbers. It is too easy to read the ones cited there as exceptions. And in any case they are not talking about activism on the part of the scientists they describe — only about the human, social, and collaborative aspects of the scientific discovery process itself. In fact, such a careful reader would be right; I do not expect to persuade many philosophers of science. Nor do I have much hope for the academic epistemologists with whom philosophers of science have so much in common. Because of perceived links between applied ethics and academic ethics more generally, some people tell me I ought to expect more from the academic ethics community. After all, ethics is, by definition, other-directed. If we have moral obligations (no matter how abstractly rationalized), surely they are obligations to other persons — or, in deference to social responsibilities, some would add today, to animals or to the biosphere. All of this is surely true, but what critics of academic ethics in the twentieth century — and John Dewey was among the earliest — have been most concerned about is academic ethicists' resolutely theoretical stance. Richard Rorty (1998, pp.
130-131) has caught the spirit of this critique in his lament about graduate education in analytical philosophy in the last several decades: "As philosophy became analytic, the reading habits changed. . . . Fewer old books were read, and more recent articles. . . . Romance, genius, charisma . . . have been out of style in anglophone philosophy for several generations. I doubt that they will ever come back into fashion, just as I doubt that American sociology departments will ever again be . . . centers of social activism." But, some will say, countering Rorty, philosophy in the USA has become amazingly diverse in the last decade or so (see Mandt, 1986). What, say, of philosophers of artificial intelligence, of Continental philosophers, of philosophers of art, of social and political philosophers, and so on and on? Are all of those good people — or all of them who are not activists in the best tradition of American Pragmatism — to be branded, too, with a charge of anti-activism? Surely not, or I would never have written this book. Among all these philosophers — well-intentioned for the most part even when their focus seems mostly to be on their next academic promotion — are to be found the philosophers who are already doing progressive work with activists outside the academy or whom I would like to enlist in the adventure. My only complaint, in that respect, has to do with any academicism — any needless worry about doing "real philosophy" — that would keep more of them from venturing outside the walls of academe to get involved in the pressing social ills that vex our technological world. It may be that we face no more urgent problems today than citizens have in any earlier age, but surely our world does have problems, very serious problems; and surely those involved in trying to solve them have a right to expect us philosophers to play our part.
What I hope is that, in the twenty-first century, more philosophers will heed this call than have in the twentieth century.

Sources:

1. Ludus Vitalis, volume XV, number 27, 2007, pp. 195-197.

2. "Paul Durbin," chapter 5 in Philosophy of Technology: 5 Questions, ed. Jan-Kyrre Berg Olsen and Evan Selinger (Automatic Press, 2007), pp. 45-54.

3. Status unknown; sent 15 July 2008; written in response to invitation for new Ludus Vitalis forum.

4. This review essay was not written specifically for publication, but I did hope it might lead to a panel at the SPT 2009 conference in Twente, the Netherlands.

5. "Ethics and New Technologies." In F. Adams, ed., Ethical Issues for the Twenty-First Century. Charlottesville, VA: Philosophy Documentation Center. Pp. 37-56.

6. Note sent to Ludus Vitalis editor at centro.lombardotoledano@gmail.com: This paper has a complex history. I prepared it first for a World Congress of Philosophy panel in 2003, but that panel was later cancelled. I then presented a small portion of the paper at the Society for Philosophy and Technology international conference in Delft, Holland, in 2005, but the brief version was turned down for inclusion in the proceedings volume of that conference. Next I included significant parts within a chapter in my Philosophy of Technology: In Search of Discourse Synthesis (2007, in Techne: see spt.org). But I still think the whole paper, as originally conceived and exploratory as it is, has merit; and I also think Ludus Vitalis might be a good place for it.

7. To appear in D. Goldberg and I. van de Poel, eds., Proceedings Volume from Workshop, "Engineering Meets Philosophy." London: Springer.

8. To appear in International Science Reviews 33:3 (2008): 225-235?

9. ACM Ubiquity, Vol. 8, Issue 45, November 13 – November 19, 2007.

10. To appear in special number guest-edited by Arun Tripathi (general editor is Karamjit Singh Gill), as "Ethics and Aesthetics of Technologies," in the AI & Society journal; see details at http://www.springerlink.com/content/102816.

11. To appear in book from Universidad Politecnica de Madrid, "Estudios de la ciencia, tecnologia y sociedad en la investigacion y la formacion."

12. This appeared in Problemy Ekorozwoju (Problems of Sustainable Development), 3:2, 2008, in both English and Polish. Note: This essay appeared originally in Jorge Martinez Contreras, Raul Gutierrez Lombardo, and Paul T. Durbin, eds., Tecnologia, Desarrollo Economico y Sustentabilidad (special number of Ludus Vitalis, Mexico City, 1997), and only minor cosmetic changes have been made here.

13. This paper was prepared, by request, for publication in the next issue of Problemy Ekorozwoju, 4:1, to appear in October.

14. I gave a talk on this theme at the University of Barcelona in May of 2008. It probably came as close as anything I had done up to that point in putting all the pieces of my approach together. Unfortunately, in doing so, it became more than a little unwieldy. So, to do a better job, I decided to build a "globalization" essay around the skeleton of that talk. Then I decided to put together this new set of my activist essays, with the result that there are here a great many repetitions of earlier essays; in one case, the wholesale incorporation of essay 12, above, within this one. Still, I keep it here as a coherent piece. Anyone worried about too much repetition can just skip a repeat -- possibly checking off in his or her head the earlier essay repeated here.