Chapter 8: Anti-realism from facts about science

The problem of induction and the underdetermination problem are difficulties arising from philosophical considerations. However, there are also many empirical considerations about the nature of science that can lead to anti-realism.

Argument from theory change

The most convincing argument against scientific realism is the argument from theory change. One recent version is Laudan's pessimistic meta-induction.

Argument from theory change:
The history of science shows that numerous theories were once believed to be true, yet have eventually been discarded. These discarded theories posited the existence of entities which, it turns out, do not exist. Consequently, we have good reason to think that the unobservable entities posited by our current theories do not exist either. Therefore, realism is not empirically adequate.

This is an argument for atheism about unobservable entities, not just agnosticism.

Approximate truth

Does the adoption of a new theory imply the complete falsity of the previous theory? E.g., Newtonian physics vs. relativity.

Most realists want to account for theory change while keeping the idea that predictive and explanatory success give us good reason to think that a theory is true. Therefore, they say that theories are not perfectly true, but only approximately true. Approximate truth is also called verisimilitude.

Despite many efforts, no satisfying definition of approximate truth has been found. However, verisimilitude matters not just for science, but for many ordinary statements we take to be true.
►E.g., “It is noon”, “The Earth is spherical”, Newtonian mechanics.

The problem with approximate truth is that, in the absence of a precise definition, it becomes too permissive and relative to be really useful, just like the notion of similarity. (Verisimilitude = similar to the world.) Anything is approximately true to some extent, just as any two things are both similar and dissimilar in an infinity of ways.

Still, we cannot really do without approximate truth, so we cannot reject scientific realism on the basis that the notion of approximate truth is too vague.

To partially explain what it means for a statement to be approximately true, many realists use the notion of reference. A term successfully refers if there is something, or some things, which are picked out by it. Laudan (1981) argues that a scientific theory cannot be even approximately true if its central theoretical terms do not refer to anything.

Sense and reference

Sense: the ideas and descriptions associated with the term.
Reference: the thing or things the term is used to talk about.

Example:
Term: ball
Sense: a round, somewhat elastic thing, of various colours, that is used to play some games, and so on.
Reference: my basketball, Dona’s red ball, Rex’s old tennis ball, and so on.

Some terms have a sense, but no reference:
►E.g., Bigfoot, unicorn, the present king of France, etc.

Some terms have changed their sense, but keep the same reference:
►“Whale” went from a kind of fish to a kind of mammal, but still refers to the same animals. This means that when we talk about whales, we are talking about the same things as our ancestors, because our terms have the same reference.

There are two ways to specify the reference of a term:
By giving the sense
►E.g., triangles are closed three-sided and three-angled shapes.
By giving an ostensive definition
►To answer the question “What is a triangle?” by pointing at a triangle and saying “This!”.
However, theoretical/unobservable terms cannot be given an ostensive definition, because ostension works only for observable terms. The reference of such terms is fixed by their sense, as defined by the theory of which they are part.
►E.g., the reference of the term ‘electron’ is fixed by the theory of electrons, so that ‘electron’ refers to small, negatively charged entities orbiting the nuclei of atoms, and so on.

Problem with the reference of scientific terms

The problem with this is that any change in theory will change the sense, and therefore the reference, of our theoretical terms. Kuhn pointed out that there have been a lot of theory changes in the history of science. Thus, it follows that modern scientists are just not talking about the same entities as past scientists who used a different theory.
►E.g., atoms, species, mass, etc.

Putnam’s response

In response to this problem, Putnam (1975) advanced the idea of the division of linguistic labour. The idea is that many terms (e.g., gold, elm, French Spaniel) have a precisely delineated reference, which only a few experts know, but that does not prevent ordinary people from using the terms – even if most people cannot reliably discriminate between samples. So, how are ordinary people able to refer correctly?

Putnam’s causal theory of reference

Putnam’s answer is that the reference of natural kind terms (e.g., water, gold, electron) depends not on the description associated with a term, but on the cause lying behind the term’s use.

Causal theory of reference: the reference of natural kind terms is fixed by whatever causes the experiences that give rise to the use of the terms.

On the plus side, Putnam’s theory allows for continuity of reference across theory changes.
►E.g., ‘electron’ has always referred to whatever causes the phenomena that prompted its introduction, such as the conduction of electricity in metals.
On the negative side, this theory makes successful reference too easy.

The pessimistic meta-induction

1. There have been many empirically successful theories in the history of science that have subsequently been rejected and whose theoretical terms do not refer according to our best current theories.
►Some examples:
The crystalline spheres of ancient and medieval astronomy
The humoral theory of medicine
The effluvial theory of static electricity
Catastrophist geology
The phlogiston theory of chemistry
The caloric theory of heat
The vital force theories of physiology
The electromagnetic ether
The optical ether
The theory of circular inertia
Theories of spontaneous generation
2. Our best current theories are no different in kind from those discarded theories, and so we have no reason to think they will not ultimately be replaced as well.
3. By induction, we have positive reason to expect that our best current theories will be replaced by new theories, according to which some of the central theoretical terms of our best current theories do not refer.
Therefore, we should not believe in the approximate truth or the successful reference of the theoretical terms of our best current theories.

Realist responses to the pessimistic meta-induction

A) Restrict realism to mature theories

Realists attack premise 2 of the argument by saying that current theories are not really like most theories on Laudan’s list. Current theories are mature, and so we should consider only mature theories when making the induction.
A theory is mature when it meets requirements such as coherence with the basic principles of theories in other domains, and possession of a well-entrenched set of basic principles which define the domain of the science and the appropriate methods for it, and which limit the sort of theories that can be proposed.

Realists argue that current theories are mature:
- They incorporate the law of conservation of energy, and the idea that all matter is built out of the chemical elements (hydrogen, oxygen, carbon, etc.)
- They use a common system of measurement units (metre, kilogram, amp, volt, degree, etc.)
- Theories in one domain are often used as background theories in other domains.

B) Restrict realism to theories with novel predictive success

Realists also attack premise 1 of Laudan’s argument, by saying that his notion of empirical success is too permissive. Mere empirical adequacy to the observations obtained so far is not constraining enough; we also need novel predictive success. Mere empirical adequacy is not enough, because the right empirical consequences can always be added to a theory in an ad hoc manner. Also, if there are two incompatible but equally empirically adequate theories, we cannot be realist about both.

The underlying issue is about what counts as confirmation of a theory by the evidence. Predictionists say that only new evidence confirms theories. Explanationists say that only the explaining of known facts confirms theories. An intermediate position between the predictionist and explanationist extremes is that both kinds of evidence provide support for theories, but that novel predictions are especially compelling.
►E.g., Fresnel’s theory of light as transverse waves (1818) and conical refraction.

But what does novelty mean exactly?

Temporal novelty

Temporal novelty: when a prediction is about a phenomenon that has not yet been observed.
Problem: this seems arbitrary, because whether a prediction is novel would depend on a historical accident. When exactly in time someone first observes a phenomenon entailed by a theory may have nothing to do with how and why the theory was developed.

Epistemic novelty

Epistemic novelty: when a phenomenon was not known to the scientist before constructing the theory that predicts it.
Problem: it seems that whether or not a scientist knows about a given phenomenon is not really important, as long as the theory was not purposely built to account for this phenomenon. Yet, we cannot refer to the intentions of the scientist, for this would introduce a psychological dimension contrary to the objectivity required to give novelty epistemological strength.

Use novelty

Use novelty: a prediction is use-novel if the scientist did not explicitly build this result into the theory or use it to set the value of some parameter crucial to its derivation, and if no other theory predicts it. (Leplin, 1997)
Problems:
A) A theory cannot have any novel success if we know nothing of the reasoning of the scientist who made it, even if it successfully predicts phenomena we did not know about.
B) If we know all the phenomena in one domain, then regardless of how much more explanatory, simple, unified, or whatever else a new theory is, we can never regard it as true (since it can have no novel predictive success).

It seems there is no perfect notion of novelty, but novelty still remains a strong motivation for being some sort of scientific realist, for realism explains why theories sometimes produce predictions of new types of phenomena which are then observed.
Counter-examples to the no-miracles argument

The no-miracles argument is the realist claim that only scientific realism explains why science enjoys great instrumental and predictive success. The pessimistic meta-induction is an attack on the no-miracles argument to the effect that science is in fact not so successful: even empirically adequate theories eventually end up being rejected. The realists’ reply to the meta-induction is that most theories on Laudan’s list are not mature or have no novel predictive power. But even if we restrict Laudan’s list according to these criteria, there remain some examples of abandoned theories that nevertheless enjoyed empirical success, thus undermining the no-miracles argument.
►E.g., the ether theory of light, the caloric theory of heat.

1. There are examples of theories that were mature and had novel predictive success, but whose central theoretical terms do not refer according to our best current theories.
2. Successful reference of its central theoretical terms is a necessary condition for a theory’s approximate truth.
3. Thus, there are examples of theories that were mature and had novel predictive success but which are not approximately true.
4. Hence, approximate truth and successful reference of central theoretical terms are not necessary conditions for the novel predictive success of scientific theories.
Therefore, the no-miracles argument is undermined. If approximate truth and successful reference do not explain the novel predictive success of theories, there is no reason to think that this predictive power is explained by realism.

Realist responses to the counter-examples

1) Develop an account of reference according to which the relevant abandoned theoretical terms refer after all.

Using something like Putnam’s causal theory of reference, realists can say that rejected terms still refer after all.
►E.g., ‘ether’ in fact refers to the electromagnetic field.
However, this may make reference trivial. Were Aristotle and Newton really referring to geodesic motion in a curved space-time?

2) Restrict realism to those theoretical claims about unobservables that feature in an essential way in the derivation of novel predictions.

The parts of past theories that have been abandoned were not responsible for their predictive success. The parts that were essential, like theoretical laws and mechanisms, have been kept. We should only be realist about the essential parts of theories, not the idle ones. (Kitcher, 1993; Psillos, 1999)
However, the notion of ‘essential’ is too vague and risks becoming ad hoc. Also, it disconnects the success of science from the truth of its theoretical terms. This is a dangerous strategy for a realist, as it goes against the spirit of the no-miracles argument.

Skip 8.2 Multiple models

Idealisation

Nancy Cartwright suggests another form of anti-realism based on the fact that our theories are so abstracted from the real world that they become false. (How the Laws of Physics Lie, 1983)

Cartwright distinguishes two senses of “idealisation”: idealisation and abstraction.

Idealisation: the theoretical or experimental manipulation of concrete circumstances to minimise or eliminate certain features.
►E.g., real surface -> frictionless plane
The laws obtained through idealisation are approximately true and can be used directly in some cases.
►E.g., used directly for very smooth surfaces

Abstraction: subtraction of concrete facts about objects, and elimination of interfering causes.
The laws obtained through abstraction cannot even be approximately true, because relevant causal features have been subtracted, and the laws are thus not about concrete situations. Rather, abstracted laws are only true of an abstract model. If, like the traditional view (i.e., the covering law model), we take these abstract or fundamental laws to be genuine claims about reality, they will be false. In other words, the laws of physics lie.
►E.g., the law of gravitation states what happens to bodies upon which no other forces are acting; but there are no such bodies in the actual universe, and so strictly speaking it cannot be true of anything.

Structural realism

The strongest argument for realism is the no-miracles argument, and the strongest argument for anti-realism is the pessimistic meta-induction. John Worrall (1989) suggests structural realism as a middle ground between the two, pointing out that when there is theory change, the old entities are often discarded, while the mathematical structure is preserved.

Worrall argues that we should not accept a full scientific realism implying that the nature of things is correctly described by the metaphysical and physical content of our best theories. Instead, we should adopt the structural realist emphasis on the mathematical or structural content of our theories. This avoids the force of the meta-induction by not committing us to belief in the entities postulated by the theory. Yet it does not make the success of science – especially the novel predictions – seem miraculous, on the grounds that it is the structure of a theory that describes the world.

Structural realism has become quite popular recently.
►E.g., Worrall (1989), Redhead (1996), Stein (1989).
However, this position is still unclear:
Is it an epistemological view to the effect that we should only believe in the structure, and stay agnostic about the entities, or
Is it a metaphysical view to the effect that the only thing theories are intended to describe is the structure?

Big Conclusions: Scientific Method

Science seems to work, yet we were unable to find a precisely defined scientific method. None of these is adequate:
Inductivism
Falsificationism
Social constructivism / Kuhn’s view

Still, we came a long way: from Bacon’s view of science as an almost purely mechanical process involving little more than the collection of data, to a dynamic view of science involving the use of multiple strategies (induction, selective falsification, IBE), free of constraints at the discovery stage, and influenced by historical and social contexts.

Is science rational? Science is rational if anything is. The lack of proof for the rationality of induction and IBE does not mean that they are irrational. Historical and sociological influences on science are good reasons to pay attention to the context in which a theory has been formulated, yet they do not mean that anything goes.

Big Conclusions: Realism Debate

Scientific realism is not obviously true. Antirealism can account for any empirical success a realist approach can. In fact, an antirealist might even act as if she were a realist, if it gives an appreciable edge, but the realist can never act as if she were an antirealist.

But neither is scientific realism obviously false. There is nothing incoherent in the idea that science is getting closer to the truth. It represents a natural extension of metaphysical realism and of our notion of truth.