Chapters from Problems in the Philosophy of Science
by John Watling

Contents:
Author’s Note
Knowledge as a Guide to Action: the Future.
Knowledge as a Guide to Action: the Past.
Invalid to Truth, Valid to Probability?
Trial and Error
The Classical Theory of Probability
Bayes’ Theorem
Absolute Space: Newton’s Argument

Author’s Note

Each of these chapters is intended to be complete in itself but to contribute to larger arguments which, I’m afraid, will not always be apparent. I have put related chapters together, where relations exist, but there will be other chapters on the topics dealt with here, often standing between these. For example, there will be other chapters on issues in induction, in probability and in space. Issues not discussed here at all will figure largely in the book, for example, knowledge, causal explanation, teleological explanation and time.

Knowledge as a Guide to Action: the Future.

It is often held that when scientific knowledge enables us to achieve our ends it does so by enabling us to foretell what will and what will not happen. David Hume, for example, takes that for granted in his Enquiry Concerning Human Understanding, for he offers an explanation of how experience leads us to form correct expectations concerning the future and considers he has explained how it is that we are able to adjust means to ends and how it is that our species subsists. ‘Custom, then,’ he says, ‘is the great guide of human life,’ and, since custom is his explanation of how we come to form correct expectations, he there takes it for granted that if we can foretell we can guide our actions. What he most often asserts is that foreknowledge is essential for the ability to guide our actions, and that, at least, seems correct. If we do not know what will happen, how can we plan our actions to take advantage of it or to minimize its ill effects?

Let us consider a simple example of such a calculation. Suppose I wish to reach a certain town and decide to catch a bus. I see one approaching a stop a little distance away. If I run, I can reach the stop by the time the bus does, but will it stop? It seems important to know, for, if it is going to stop, I can run and reach the stop in time. If it isn’t, I shall only waste my breath by running.

That reasoning is fallacious. Suppose the truth is that the bus is not going to stop. Does that show that it would be useless to run? Not at all. Perhaps, if I were to run, the driver would see me running and stop. The fact that the bus is not going to stop is quite consistent with that. Again, suppose the truth is that the bus is going to stop. That does not show that it would be sensible to run, so as to be at the stop in time. Perhaps, if I were to run, the driver would see me running and drive past. The fact that something will happen does not mean that I can plan my actions to take advantage of it; the fact that it won’t, does not mean that I have to do without it. What I need to know is not whether the bus will stop, but whether it would stop, if I were to run.

The bus driver is a human agent but that is not what makes the reasoning fallacious. The fact that the surface of a bog will remain flat for the next hour is not something I can take advantage of to walk across it. It will remain flat but, if I were to walk into it, it would not.
Perhaps the reasoning is fallacious because drivers and soft wet ground are sensitive things. Surely, knowledge of the future states of insensitive things is something upon which we can base our plans? The ebb and flow of the tide is a good example. If, having hired a boat upon a tidal river, I know the time at which the tide will turn, then I may be able to row quite a long way down the river, counting upon turning with the tide and having its help in both directions. Without that knowledge I could not venture so far, for fear of a hard row back against the current. Here is a fact that I seem to be able to take advantage of. However, just as before, what I need to know is when the tide would turn if I were to row down the river on it and, just as before, that fact does not follow from a fact concerning when it will turn. If I were to justify my plan by the fact concerning the time at which the tide will turn, my reasoning would be as fallacious as before. There is, indeed, something I know about the river, that its current is not much affected by small boats rowing on its surface, which, together with my knowledge of when the tide will turn, does enable me to justify my plan. This fact, which I can take advantage of, is not merely about the future of the river, for it is about its insensitivity: it is the fact that, whether or not my plan were put into effect, the tide would turn at the time at which it will turn. Therefore, although when I am dealing with insensitive things knowledge of their future is of great value, it is of no use by itself. Moreover, although of great value, it may not be indispensable; it may be possible to discover what their future would be without employing any knowledge of what it will be.

Of course, knowledge of what will or will not happen in the future, combined with knowledge that a plan is one I shall adopt, is relevant to the assessment of the plan. That is why I can look back later and, knowing that I did adopt the plan and that I did not achieve what I set out to achieve, judge the plan a bad one. However, when I assess a plan in order to decide whether to adopt it, the information that I shall adopt it is lacking.

The fallacy that I have been trying to expose is that of arguing from what will happen to what would happen if. There is a particular form of that argument, equally fallacious, which sometimes leads people to think that a knowledge of what will happen enables us to guide our actions. They sometimes think that if they know that one or other of two things will happen, that both won’t fail, then they can conclude that by preventing one of them they can ensure the other. That, too, is a delusion. The fact that one or other of them will happen is quite consistent with the fact that neither would if one were prevented. Suppose it is a fact that either I shan’t run or the bus will stop. It cannot be that this implies that by running I can stop the bus, for it is itself implied by the fact that the bus will stop and that fact does not imply that my running would stop it. The fact that the bus will stop is quite compatible with the fact that my running would ensure that it did not. People fail to notice examples like that and think only of ones like this, that a certain piece of iron will not be heated without expanding.
That, of course, is implied by the fact that either it will not be heated or will expand and seems to them to justify the conclusion that by heating the piece of iron they can make it expand. That conclusion is true, but that fact is no reason for it. Perhaps they know that the conclusion is true, but the knowledge concerning the future of the piece of iron does not enable them to prove it. Of course, those who accept this implication must feel uncomfortable about the other conclusion that ought, on this form of argument, equally to follow, that I can ensure that the piece of iron will remain cool by preventing it from expanding. They can hardly feel confident that that is true.

Sometimes the words we use to speak of what will happen are so close to those we use to speak about what would happen if, that it is difficult to avoid confusing the one with the other. Suppose that I am going to miss the bus. Then it is difficult not to agree that whatever I do I shall miss the bus. However, the words ‘Whatever I do, I shall miss the bus’ seem to express the very same conclusion that might have been expressed by the words ‘Whatever I were to do, I should miss the bus’, words which express something which implies that there is nothing I can do to catch the bus. Yet the fact that I shall miss the bus does not imply that there is nothing I can do to catch it. What has gone wrong? The trouble lies in the ambiguity of the words ‘Whatever I do, I shall miss the bus’. They might be taken to mean that every one of the things I shall do will be followed by my missing the bus. Taken in that sense, they express something which follows from the fact that I shall miss the bus but does not imply that I cannot catch it. They might, however, be taken to mean that every one of the things I can do, not just the things I shall do, would be followed, if I were to do it, by my missing the bus. That is a very different thing. It does not follow from the fact that I shall miss the bus but does imply that I cannot catch it. To know that, among all the things I shall do, none will be followed by my catching the bus is to know only that I shall miss it. To know that, among all the things I can do, none would be followed by my catching it is to know more. If we confuse these two ways of taking the words, then we may be deceived into thinking that the fact that I shall miss the bus implies that I cannot catch it.

There is another confusion which arises over the word ‘if’ which can make these fallacious forms of inference seem valid. Someone who argues from the premise that either he won’t run or the bus will stop together with the further premise that he will run can rightly conclude that the bus will stop. He might put this by saying ‘Either I shan’t run or the bus will stop. So, since I shall run, the bus will stop.’ If he wasn’t sure whether or not he was going to run, he might express the validity of the same argument, without embracing the conclusion, by saying ‘Either I shan’t run or the bus will stop. So, if I run, the bus will stop.’ Since the argument is valid, it must be right for him to say that but, in saying that, isn’t he arguing from the premise that either he won’t run or the bus will stop to the conclusion that if he runs the bus will stop?
Therefore, from the validity of argument of the form ‘One or other of these things is true, the first isn’t, so the second is’ we seem to have established the validity of argument of the form I have been declaring fallacious, and which is indeed fallacious, ‘One or other of these things is true, so, if the first weren’t true, the second would be’. Where is the mistake here?

It is natural to assume that, when we express an argument, the words with which we follow the word ‘so’ are words we employ in the expression of our conclusion. In fact we often employ those words in other ways. We do so in the expression of the first of the two arguments just considered, for that was expressed ‘Either I shan’t run or the bus will stop, so, since I shall run, the bus will stop.’ The words ‘since I shall run’ follow the word ‘so’ but, far from entering into the expression of the conclusion of the argument, they introduce one of its premises. To believe otherwise, you would have to accept that words such as ‘since I shall run, the bus will stop’ could serve to express a conclusion that someone might draw. They could not, for the word ‘since’ introduces an argument, not an opinion which might be arrived at on the basis of argument. Now the word ‘if’, in the second of the two arguments, has the same function as the word ‘since’ in the first: it introduces a premise, not a conclusion. The conclusion is that the bus will stop and the argument to that conclusion is valid. Once we recognize that the word ‘if’ can be used, not to express a conclusion but to add a further premise, there is no temptation to identify the valid argument ‘Either this or that, so, if not this, that’ with the invalid argument ‘Either this or that, so, if this were not true, that would be.’

There are two quite different things that the word ‘if’ can be used to do. One is to introduce a further premise, or, perhaps it would be better to say, to indicate a further premise that someone might be in a position to introduce because he knew it to be true. The other is to express a conclusion that may enable us to assess a plan of action, a conclusion of the form ‘If that were true, this would be.’

Unfortunately, it is not always possible to tell from the words someone employs which of the two arguments he intends. If he says ‘Either you won’t run or the bus will stop, so, if you run, the bus will stop’, there is nothing about his words to enable us to judge whether he is trying to convince us that running for the bus would be a good plan on the inadequate grounds that either we shan’t run or the bus will stop, grounds which might hold only because we are not going to run, or whether he is quite reasonably pointing out that if we know that we shall run he has information to contribute that would allow the two of us, putting our knowledge together, to complete a proof that the bus will stop. There is, however, a grammatical question we can put to him to decide the matter. Could what he intends be expressed equally well with the verb following the word ‘if’ in the subjunctive mood? If it could, then he intends the invalid argument. The sentence ‘Either you won’t run or the bus will stop, so, if you were to run, the bus would stop’ could not express the valid argument from the premise that either you won’t run or the bus will stop together with the premise, supposed true, that you will run to the conclusion that the bus will stop.
If what he intends could not be expressed in that way, then he intends the valid argument. The subjunctive mood for its verb is a sign that the word ‘if’ is being used in a certain way, to express the sort of thing we need to know in order to assess our plans, but the absence of the subjunctive mood, that is, the presence of the indicative mood, is not a sign that the word is not being used in that way. That is why the grammatical question may be required.

It seems that we must recognize, corresponding to these two ways of using the word ‘if’, two different things which are both commonly called ‘supposing’. In one, we argue from a supposition in order, perhaps, to determine its truth or in order to determine what conclusions we could reach if we discovered it to be true. In the other, we argue to the effects and other consequences of the supposition. In supposition of the first kind, every truth is relevant. For example, once we have found something to be true that contradicts the supposition then the purpose of supposing it has been achieved and the supposition can be rejected. In supposition of the second kind, many truths are irrelevant. I have been arguing that many truths about what will happen in the future are.

The effects and other consequences of a supposition, which is what we investigate in supposition of the second kind, are what we need to know in order to assess the wisdom of bringing something about, of not preventing it, or of preventing it. It is what we need to know in order to decide, not whether something is true, but whether it would be a good thing if it were. Therefore the knowledge we seek when we make suppositions of the second kind is the most important knowledge we can have. If there is no such knowledge, then we never assess plans or act for reasons and, if science does not provide such knowledge, it is of no help in any practical matter. There cannot be knowledge of this kind if there are not facts concerning what would be true if, facts of conditional form. Science cannot provide such knowledge if it is no part of the business of science to investigate such facts. It is surprising, therefore, to find that many people who have thought about these matters have concluded that there are no such facts and no such knowledge.

I find it evident that we do act for reasons and do assess plans. I find it evident that science does provide information which enables us to devise means of achieving ends we could not otherwise achieve. So I argue that there must be such knowledge and such facts. It is important, however, not to ignore the case against them. Perhaps it isn’t strong enough to stand against the evidence to which I have just pointed but it must be cogent to have impressed people so much. For the moment, I want to insist only upon the importance of questions of the form ‘What would happen if ...?’ They are important because, if they can’t be answered, no other questions have any importance at all.

Knowledge as a Guide to Action: the Past.

To assess a plan we need to know, not what will happen, but what would happen if the plan were carried out. Is it equally true that we need to know, not what has happened, but what would have happened were the plan now carried out? The words ‘What would have happened if the plan were now carried out?’ have an awkward ring and some people reject the idea that they express any question at all.
It is not easy to say why the awkwardness arises but there are good reasons against treating it as a ground for rejecting such questions. Perhaps the awkwardness arises from the natural expectation that when someone begins ‘If that were done’ he is about to make a remark about what would follow. That would show the question to be unusual but not to be unacceptable.

Perhaps, again, the words ‘What would have happened if the plan were adopted?’ have an awkward ring because the question they express seems to have the same answer as the question ‘What has happened?’ That might lead people to identify the two questions, to forget about the more complicated one and to employ the simpler one in its place. If that neglect is how the awkwardness of the words arises, it is an awkwardness we should face, for there is good reason to doubt whether the neglect is justified. Is what would have happened if a plan were now adopted always what has happened? Is it, if what has happened prevents the adoption of the plan? When a bus has not stopped and its not stopping prevents my getting on, it would seem to follow that, if I were to get on, the bus would, earlier, have stopped. What would have happened if I were to get on and what did happen would not be the same.

That argument can be given a slightly different form. Although a bus has not stopped, someone who did not know whether it did or not might say, with truth, that if the bus had not stopped I would not now get on. Argument by contraposition, of the form ‘Since, if this were not true, that would not be, if that were true, this would be’, is commonly held valid. If such argument is valid, then it may be concluded that if I were to get on, the bus would have stopped. Once again, what would have happened if I were to get on and what did happen would not be the same.

Perhaps, yet again, the awkwardness arises because, when we wonder whether to adopt a plan, we are concerned largely with the effects of adopting it. To ask about the effects of a prospective action upon events that have already taken place does seem absurd, for it suggests that an action might affect what has already happened. However, it is not true that when we consider adopting a plan we are concerned only with its effects. There are circumstances which we know our plan will not affect, as the time at which the tide will turn is not affected by my rowing down the river, but concerning which we nevertheless need to know how they would stand if we were to adopt the plan. Circumstances before the moment for which action is planned will, of course, not be affected by our plans, but there will be much the same reasons for wishing to know how they would stand if the plan were adopted. The circumstances following the adoption of the plan may, indeed, arise from circumstances preceding it, as the turn of the tide arises from the entry of a tidal wave into the river mouth, so that knowledge concerning the preceding circumstances seems important. It seems possible that the feeling of awkwardness does arise from such a suggestion of causation but, if it does, the feeling is a misleading one.

I am inclined to conclude that these questions cannot be rejected and may be important. If that is so, then the problem posed at first, ‘Which do we need to know in order to assess a plan, what has happened or what would have happened if the plan were carried out?’ is a real one. How should it be answered?
Suppose that I am on a beach from which it is dangerous to swim during the third hour after high tide, that the tide was high two hours ago and that there is a life guard on duty who would prevent me swimming if I attempted to do so. Would swimming now be a good idea? If I were to swim now, would I be in danger? If it is correct to argue from what has happened, then the answer is that swimming would be a bad idea and that if I were to swim now I would be in danger. The tide was high two hours ago and swimming in the third hour after high tide is dangerous. If, on the other hand, it is correct to argue from what would have happened earlier if the plan were adopted, then swimming would be a good idea and if I were to swim I would be safe, for if I were to swim the tide would not have been high two hours ago. Thanks to the life guard, I cannot swim during the third hour after high tide.

The truth is, I think, that swimming now is a bad idea, for swimming now would put me in danger, but that, if I were to swim now, the tide would not have been high two hours ago so that I should be safe. In other words, swimming now would put me in danger but if I were to swim now I should not be in danger. Certainly, my answer seems a very paradoxical one. Can the appearance of paradox be removed? The paradox is present because the question whether swimming now would put me in danger seems to rest upon the question whether if I were to swim now I should be in danger, yet I have suggested that the former is true and the latter false.

In the last chapter I assumed that what we need to know in order to assess a plan was what would happen if the plan were now adopted. That assumption seems an eminently sensible one. What could be wrong with it? This might be: that there is a distinction among ‘what would happen if’ questions, some being what we need to know in order to assess a plan, others not. I shall attempt to show that to be the case.

When I considered the example about my trying to catch a bus I argued that the fact that the bus was going to stop did not show that if I were to run to catch it I should get on it. The fact that the bus was going to stop was consistent with the possibility that if I were to run to catch it I would not. I want now to emphasize that, in the example, it is true that if the bus were going to stop and I were going to run for it, then I would succeed in getting on. It must be, therefore, that that fact, together with the fact that the bus is going to stop, does not imply that if I were to run for the bus I would succeed in getting on. Although if I were to run and it were to stop I would succeed in getting on and although it is going to stop, it may be that if I were to run I would not succeed. Argument of that form, ‘if this and that were true, the other would be; that is true, so if this were true the other would be,’ is called argument by exportation. Here it is the premise ‘It is going to stop’ that suffers exportation. Such argument must in general be invalid, since this example of it certainly is.

Now consider the swimming example once more. If it were two hours after high tide and I were to swim, I should be in danger. Is it correct to argue by exportation that, since it is two hours after high tide, if I were to swim I should be in danger?
I suggest that this example of exportation is as invalid as the other and that, although it is true that the tide was high two hours ago and true that if the tide had been high two hours ago and I were to go swimming now I should be in danger, it is not true that if I were to go swimming now I should be in danger. On the other hand, I admit that the conclusion that swimming now would put me in danger can be validly drawn. How can argument to the latter, causal, conclusion be valid if argument to the former is not? That would be possible if, when we ask whether swimming now would put me in danger, which is a question about causation, about what effects swimming now would have, we were asking whether, if things had been as they were and I were to swim, I should be in danger, for that is not at all the same thing as asking whether if I were to swim I should be in danger. The answer to the former question is yes. The tide was high two hours ago, so that if things had been as they were and I were to go swimming I should be in danger. The answer to the second is no, since, if I were to swim, the tide would not have been high two hours ago. Since exportation is invalid for conditionals, these two answers are compatible with each other and together compatible with the fact that the tide was high two hours ago.

If that is right, then although exportation does not hold for conditionals it does hold for causation. Obviously, that is what we always assume. When we want to know what effects an action would have, we look at the situation as it has been and as it is. We argue that, since the currents are strong, going swimming would be a dangerous thing to do. Confronted with a taut rope, we conclude that, since it is taut, a jerk at one end would produce a jerk at the other. We do not conclude that, since if it were loose and jerked at one end the other end would not move, a jerk at one end would not produce a jerk at the other. We argue that, since the atmosphere is present, a suction pump will draw water from a well that is not too deep. We do not argue that, since if the atmosphere were not present and a suction pump were operated no water would rise, a suction pump will not draw water from a well.

Peter Downing, who was the first person to call attention to this difficulty concerning the relationship between conditionals and causation, drew a different lesson from it. His conclusion in the example of the high tide would be, I think, that the presence of the life guard does not show that if I were now to swim the tide would not have been high and, not showing that, does not show that if I were now to swim I should not be in danger. His view is that antecedents concerning times earlier than the moment for which action is planned can be validly exported from a conditional, those concerning times later than that moment cannot. That suggestion, that the validity of an argument involving conditionals should depend upon the temporal order of the events with which the antecedents are concerned, struck many people as a great implausibility. Not everything implausible is false, but the suggestion does attribute to conditionals in general peculiarities which might be thought to belong only to those concerning causation. For that reason a resolution deriving from the nature of causal conditionals seems better. By a causal conditional I mean one concerning what an event or action, if it occurred, would produce or prevent.
If causal conditionals have the character I have suggested, then it cannot be that any later event produces or influences an earlier one, however firmly the two may be linked together. There is a link, it might be thought, that allows us to argue from the fact that a lighted match has been thrown into a tank containing a half and half mixture of air and petrol vapour to the conclusion that an explosion will follow. That same link allows us to argue from the absence of an explosion to the conclusion that no lighted match was thrown into a tank containing a half and half mixture of air and petrol vapour. The fact that the link enables us to make inferences as surely in one direction as in the other leaves many people puzzled about why we accept that throwing a match into such a tank would cause an explosion but would not for a moment accept that the absence of an explosion would prevent a match having earlier been thrown into such a tank.

However, if, as I have suggested, the causal question is not ‘What would happen if a match were thrown into such a tank?’ but ‘What would happen if, things having been as they were until now, a match were now thrown into such a tank?’ and, correspondingly, not ‘What would happen if no explosion were to occur?’ but ‘What would happen if, things having been as they were until now, no explosion were now to occur?’, then when we ask what the effect of throwing the match would have been we do not assume that no explosion would occur, whereas when we ask what the effect of there being no explosion would have been we do assume that a match would have been thrown.

To see this clearly, consider the situation in which a match was thrown into such a tank and an explosion followed. If my suggestion is correct, then it is not true that the absence of an explosion at the time at which the explosion occurred would have prevented a match being thrown earlier, since it is not true that if things had been as they were until the time of the explosion and no explosion had occurred, then no match would have been thrown. A match was thrown, so, if things had been as they were until the time of the explosion, a match would have been thrown. That explains why we accept that throwing a match into such a tank would cause an explosion but would not for a moment accept that the absence of an explosion would prevent a match having been thrown.

The suggestion fits well with the way we argue about causation and the way we assess plans. Probably anyone who saw that to ask what would happen if a plan were now adopted is one thing and that to ask what would happen if things had been as they have been and the plan were now adopted is another would choose the latter as the right question for investigating the consequences of adopting the plan and for assessing it. Unfortunately, it is difficult to explain why that should be the right question or to prove that it is. Perhaps it is correct, when considering what effects an event or action would now have, to consider what would happen if it were now to follow the things that have happened, but why would it not be more sensible to consider what would happen if it were now to follow the things that would have happened if it were now to happen? If the former is correct, it is easy to see that a later event cannot prevent an earlier one, but without an explanation of why it is correct, there is no explanation of why it cannot.
Indeed it seems as plausible to explain why we ask about what would happen if, things having been as they have been, the event were to happen by pointing to the fact that the event could not affect how things have been, as it does to explain why the event could not affect how things have been by pointing to the fact that in order to investigate the effects of an event we ask, not what would have happened, but what would have happened had things been as they have.

That observation suggests that there might be another way of resolving the problem set in the example of the high tide, that of maintaining that to assess a plan the knowledge we need is always knowledge of its effects, of what its adoption would produce and what it would prevent. To answer that question, it might be held, what has and what has not happened and what will and will not happen are relevant, if the adoption of the plan would not affect them; if it would, they are not relevant. The fact that the tide was high two hours ago is relevant to the question whether swimming now would be a good idea because swimming now could not affect the time of the last high tide. The fact that the bus will stop may be irrelevant to the question whether running to catch it would be a good idea because running to catch it may affect the career of the bus. The fact that the tide will turn at five o’clock is relevant to the plan of rowing down the river because rowing down the river could not affect that occurrence. That resolution of the problem would introduce the temporal relationships between the things that have happened or will happen, on the one hand, and the moment at which the plan is to be adopted or the event to occur, on the other, only indirectly. The fact that later events cannot affect earlier ones would be invoked to explain why facts concerning what has already happened are relevant to the assessment of a plan, while those concerning what will happen may not be. Some facts concerning what will happen later, those concerning happenings that could not be affected by the plan, will be as relevant as those concerning what has already happened.

However, this approach to the problem, sensible as it seems, faces a dilemma. If causal questions are questions concerning what would happen if, then the problem posed by the example of swimming from the dangerous beach arises, for that is an example of an event, the tide’s having been high two hours ago, that could not be affected by the plan of going swimming but which is, nevertheless, irrelevant to what would happen if the plan were adopted, irrelevant because, if I were to swim, the tide would not have been high. Yet the idea that causal questions are not questions about what would happen if is difficult to accept. Even those philosophers, such as David Hume and John Stuart Mill, whose accounts of causation exclude all consideration of what would happen if, assume, when off guard, that that is what they are about. It is to that dilemma that the suggestion that causal questions are a special sort of ‘what would happen if’ question, of the ‘what would happen in present circumstances if’ form, offers a solution.

Invalid to truth, valid to probability?

When induction is subjected to criticism, it is very common and very natural to respond that, although such argument can never show an hypothesis to be true, it can sometimes show one to be probable.
The suggestion is that inductive conclusions can only be of the form ‘It is probable that …’ or ‘It is probable to such and such a degree that …’ but that such conclusions can be validly drawn. Two immediate difficulties arise. First, it is with the hypothesis itself that an experimental enquiry is concerned. If to conclude that an hypothesis is probable is not to conclude that it is true, then the suggestion has the consequence that no experimental enquiry answers the question to which it is addressed. It answers another question, one with obscure relevance to the original. Second, if there are valid arguments to probability conclusions, then there are valid arguments to non-probability conclusions. Their existence arises from the incompatibility between the conclusion that it is probable that something is so and the conclusion that it is probable that the same thing is not so. That incompatibility has the consequence that experimental evidence implying the former probability excludes experimental evidence implying the latter. For example, the conclusion that there are swans on the Serpentine and all are white contradicts the conclusion that there are swans on the Serpentine and all are black yet, if induction can yield probability conclusions, the observation that there are swans and only white swans on the lake in Regents Park might make the first conclusion probable. It would then be, not merely probable, but certain that not all the swans on the lake in Kew Gardens were black. If they were, then the same inductive argument would show the second conclusion to be probable and a contradiction would have been established: the contradiction that it was both probable and improbable that there are swans and only white swans on the Serpentine.

These difficulties do not face only such a defence of induction. They face any attempt to rescue a doubtful argument by claiming that, although it cannot establish truth, it can establish probability. The first difficulty shows that valid arguments to probability conclusions would be a doubtful substitute for arguments to unqualified conclusions, the second that, if an argument fails to establish the truth of an hypothesis, it will fail to establish its probability.

Some people who have noticed the existence of the argument that presents the second difficulty have alleged that its appearance of force is misleading and rests upon a misunderstanding of probability. According to J.M. Keynes’s view of that concept, the conclusion that something is probable, reached from one piece of evidence, might be perfectly compatible with the conclusion that the same thing is improbable, reached from another. The one probability no more excludes the other than the distance of a place from me excludes its nearness to you. Keynes held that, just as the distances of places, near and far, are, of their nature, relative to people, so the probabilities of conclusions were, of their nature, relative to evidence. A probability conclusion cannot shake off the evidence upon which it appears to be based but remains a probability relative to that evidence.

This view of Keynes’s must not be confused with the almost uncontentious view that a piece of evidence often supports, or favours, a conclusion that it fails to establish, so bearing to it a relationship of support which is naturally thought of as a probability relationship.
It is undoubtedly possible for two things to be true, one telling in favour of an hypothesis, the other telling against it. If you have been dealt a hand at whist in which every card is apparently a club, then that evidence of your eyes tells very strongly in favour of the conclusion that every card is indeed a club. The fact that there is that evidence and that it does so tell is perfectly compatible with the existence of another piece of evidence, that the pack was shuffled thoroughly and dealt in the correct way, telling very strongly in the opposite direction. That does not at all show that if one person on one piece of evidence, the evidence of his eyes, reaches the conclusion that it is probable that the hand is all clubs and another person, on the other piece, the evidence of the thorough shuffling and correct dealing, the conclusion that it is probable that it is not, their conclusions are compatible. Reached from evidence pointing in opposite directions, is it plausible that they do not contradict?

English usage fits badly with Keynes’s view. We do express a relationship when we say that evidence makes a conclusion probable but, unless our words are greatly misleading, it is the relationship which holds between any two things when one makes the other so. That relationship is not a probability at all. It is common to all sorts of facts having nothing to do with probability, such as the unfortunate one that vibration and jolting make hovercraft rides on rough seas unpleasant. Moreover, what is made so, is so. Therefore if there are probability judgments of a relational kind, to the effect that evidence makes a conclusion probable, then there are judgments of a non-relational kind, to the effect that the conclusion is probable. The same thing holds of hypothetical probability judgments. When we say that, if certain things are true, a speculation is probable, how can we resist the conclusion, when we discover that those things are true, that the speculation is probable? Even the rather artificial expression Keynes makes use of when he speaks of a conclusion being probable on our new evidence, improbable on our old, implies the existence of non-relational probabilities. Presumably, ‘It is probable on that evidence’ means ‘To judge by that evidence it is probable’ which might, in turn, be explained as ‘Were I to judge by that evidence, I would conclude it probable’ or ‘Were anyone to judge by that evidence, he ought to conclude it probable’. What I say I would conclude and what I say anyone ought to conclude is a non-relational probability. Relational probabilities, probabilities that are relationships, can be accurately expressed only by employing terms such as ‘supports’ or ‘tells in favour of’ and avoiding the words ‘probable’ and ‘probability’ altogether.

Appearances, however, in this case linguistic usages, are often misleading about logic. No doubt, as I shall argue presently, we often argue from evidence to a probability conclusion and express our argument in such words as ‘This evidence makes it probable that that is true’, but much the same words are regularly employed to express probability relationships, not arguments at all. The mathematical calculus of probability provides rules for reasoning between propositions concerning probabilities, propositions commonly expressed in such terms as ‘The probability of a six, given that the dice has fallen with an even numbered side up, is one third’.
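For a fair die, which the example takes for granted, the figure just quoted, and the pair of figures in the three-premise argument that follows, are simply the standard conditional probability ratios; the calculation below is a reader’s check, not part of the author’s text:

\[
P(\text{six} \mid \text{even}) = \frac{P(\text{six})}{P(\text{even})} = \frac{1/6}{3/6} = \frac{1}{3},
\qquad
P(\text{six} \mid \text{even and greater than two}) = \frac{P(\text{six})}{P(\text{even and greater than two})} = \frac{1/6}{2/6} = \frac{1}{2},
\]

where $P(\text{six and even}) = P(\text{six})$, because a six is itself even, and likewise for the stronger condition.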
If non-relational probabilities were intended, as the words unambiguously suggest, then the validity of the following evidently invalid argument would be inescapable. Consider three premises:

The probability of his having thrown a six, given that he has thrown an even number, is one third.
The probability of his having thrown a six, given that he has thrown an even number greater than two, is one half.
He has thrown an even number greater than two.

The third premise gives us that he has thrown an even number and that he has thrown an even number greater than two. Therefore, that third premise together with the first implies that the probability of his having thrown a six is one third and, together with the second, that the probability of his having thrown a six is one half. The probability of his having thrown a six cannot be both one third and one half, so the premises imply a contradiction and must be mutually incompatible.

In fact, the three premises are perfectly compatible and the argument I have just presented is invalid. It can be invalid only if the expression of the first two premises is misleading. If the first, despite being expressed with the word ‘given’, is not indeed an hypothetical, but asserts a probability relationship, perhaps that of telling in favour of to degree one third, between the proposition that he has thrown an even number and the proposition that he has thrown a six, and if the second is understood likewise, then the three premises tell of a piece of evidence which supports a conclusion to a certain degree and of which a part supports the conclusion to a lesser degree. There is no contradiction there, yet these premises are commonly expressed in a way that unambiguously suggests that they contradict. Since common usage misrepresents these probabilities, may not Keynes be right despite the fact that, if he is, all probabilities are misrepresented?

The consideration that refutes Keynes’s view, the fact that it cannot accommodate, is that we employ probability terms to express our conclusions from evidence. If we did not, probability considerations could have no relevance to the defence of induction as valid argument. We argue: ‘All the swans I have ever seen have been white, so it is probable that all swans are white.’ On Keynes’s view, if such conclusions exist, they must be probabilities relative to evidence. It is easy to see that they can be no such thing by considering how unsatisfactory an expert would be who, when asked ‘How probable is it?’, replied that evidence from one source told strongly in its favour, from another, strongly against it, from a third, less strongly against it and so on. If all probabilities are relative to evidence and the probabilities in the conclusions we seek are not, then the probabilities we seek cannot exist at all. Keynes must reject the client’s question as illegitimate, deny that such questions exist and admit that, on his view, there are no probability conclusions to be reached from evidence.

It might seem, and has often been suggested, that there is a relational probability the expert could state which would answer the question put to him. He could give the probability on all his evidence taken together. The suggestion seems a plausible one, for it is in the light of that relational probability that the expert would answer, if he was willing to answer at all, but cannot be right.
If it were, then in answering the question it would be pointless for him to try to obtain evidence he did not yet have, an attempt that might be the only sensible thing to do. The probability on all the evidence he possesses seems better fitted to be an answer, not to the question concerning the matter he was consulted about, but to one about him, ‘How probable was it to him at that time?’, a question which might interest an historian seeking to explain or assess his action in giving the advice he gave.

However, the idea that something could be probable for one person but improbable for another, or probable for a person at one time but improbable for him at another, is refuted by the conclusion at which we have just arrived, that people acquire evidence in order to answer a probability question and that different evidence does not imply a different question. If various people ask ‘Is it probable?’, they are all asking the same question. It would not be right to give one answer to one, another to another. The probability on his evidence is not the probability to him, for the probability to him is the same as the probability to everyone else.

It may be, of course, that something seems probable to one person and not to another. Perhaps the probability on the evidence he has answers the question ‘How probable does it seem to him?’ It does not. A body of evidence may support a conclusion without the person having the evidence seeing that it does so, just as it may prove a conclusion without his seeing that it does. The probability on his evidence does not even answer the question whether it ought to seem probable to him, for that depends upon whether he ought to have seen that his evidence supported it and the argument from the evidence might be so complicated that he could hardly be expected to have seen it.

Keynes’s view that all probability judgments are relative to evidence is mistaken. The probability conclusions we seek to reach from evidence are not relative to that evidence. Different pieces of evidence, when brought to bear upon the same probability question, will sometimes lead to different and incompatible answers to it. The proof that if inductive arguments are valid to probability conclusions, they are valid to non-probability conclusions, stands unshaken.

Yet what account of probability conclusions can be given, if they are not weaker conclusions that may follow when the unqualified conclusions do not? The word ‘probable’ is employed to argue to a conclusion and to admit that the argument is no proof. To say ‘probably’ is to admit invalidity. Therefore the proposal that induction, or any other argument, is valid to probability conclusions is a contradiction in terms: it is the proposal that inductive argument is valid to conclusions that are invalidly drawn. Once this is realised, the other difficulty that confronted the proposal we have been considering also disappears. The difficulty was that if probability conclusions were weaker than the hypothesis under investigation, then their acceptability as the outcome of the investigations would be dubious. It disappears because to argue ‘Since the swans on Regents Park lake are all white, it is probable that the swans on the Serpentine are all white’ is to conclude that the swans on the Serpentine are all white, not to conclude something less. Again, the fact that probability conclusions from different but compatible pieces of evidence sometimes contradict is no longer surprising.
To speak of a conclusion as a probability conclusion is to admit that it is invalidly drawn from the evidence, however strongly we may think that that evidence supports it. There is nothing surprising in two invalidly drawn conclusions contradicting one another however mutually consistent the premises. If we rely on arguments that are less than perfect, as, of course, we often need to do, we must accept the possibility of being led astray.

This insight, however, reveals a new way of defending induction. May it not be that some inconclusive arguments are better than others, that some are good, some bad? If so, perhaps inductive argument, although proving nothing, is good argument nevertheless. These claims demand investigation.

Trial and Error

When we are searching for something, it may be that each place is one in which we have reason to look, even though we have little indication that it is the right place, or that the thing exists at all. If the thing does exist and is in that place, we shall find it and the possibility of finding it there may make it worth looking. That reason holds even when we cannot look everywhere, or cannot be sure that we have looked everywhere.

John Stuart Mill described such a situation as one in which we lack a method of discovery but possess a method of proof. We have no means of knowing where to look but, once we do look in the right place, the evidence of our eyes proves that the thing is there. Of course, if we can investigate all the possibilities, we have a method of discovery. Even if we can do no more than eliminate possibilities, provided that we know that the thing exists and that we can be sure that we have investigated all the possibilities, we have a method of discovery, since eliminating all but one establishes that one. An enquiry guided by no method of discovery is, in one sense of the word, an experimental one: it involves trials for which failure is at least as likely as success.

The concession that no method of discovery is available in the search for laws of nature is sometimes held to meet the criticisms of induction. According to that conception, science proceeds by investigating such possibilities as chance or genius suggests. These possibilities, the hypotheses, are put to observational test to discover whether experience tells for or against them. That, I think, is what people often have in mind when they speak of the hypothetico-deductive method. It employs induction as a method of proof but not as a method of discovery.

One may be forgiven for failing to see how this could be thought to meet the criticisms. An investigator who employs the method relies upon induction, for he treats observations that verify consequences of an hypothesis as evidence in its favour. He may demand many observations before treating them as evidence but, if he is to reach any but negative conclusions, he must make an inductive inference. Since it is upon the validity of such inference that criticism falls, what has the concession achieved? If induction is invalid, how can it provide a method of proof? It must be that those who are impressed by the hypothetico-deductive method, which might, more frankly, be called the hypothetico-inductive method, are concerned more about the inconsistency of induction than about its validity, for it is that inconsistency which disqualifies it as a method of discovery.
According to the inductive principle, many inconsistent hypotheses are supported by the evidence, so the evidence points out no hypothesis, not even a wrong one, as a law of nature.

After a fashion, the hypothetico-deductive method avoids this difficulty. An investigator employs it by choosing one hypothesis, deducing consequences from it and making an observation to see if those consequences are true. If they are, he places increased confidence in his choice. There are, of course, other hypotheses, inconsistent with the one he has chosen to investigate, with those same consequences, yet he does not take his observations to be evidence for those hypotheses, nor place increased confidence in them. Most of them he has never thought of; those he has he is ignoring for the time being. He will ignore them for ever if his experiments continue to bear out the deductions he makes from the one he has chosen. Had he chosen a different hypothesis, he would have taken it alone to be supported. It seems, then, that the principle of proof he is employing is not that an observation supports any hypothesis of which it is a consequence but that it supports any hypothesis which is the only hypothesis he is engaged in testing of which it is a consequence. This principle is consistent. It does not imply that the same evidence supports several hypotheses, for if several hypotheses are under test, none of them is supported. That is how the hypothetico-deductive method escapes the criticism of inconsistency.

The principle is consistent but not plausible. If the truth of a consequence is not evidence, the addition of the fact that the hypothesis is the only one under investigation will not yield something that is. What is more, the principle we have been considering was framed for a single investigator; if we frame one that all can make use of, we lose the consistency just gained. One investigator might deduce a consequence from one hypothesis that another deduced from a rival hypothesis and observation might show them both that the consequence was true. Evidence in favour of an hypothesis is evidence against another incompatible with it, yet the hypothetico-deductive method recommended generally would imply that the observation favoured each of the two. The method cannot be recommended to more than one person.

The fact that it may be worthwhile investigating a possibility when there is little or no evidence that it is true demands some departure from the maxim that we concern ourselves only with probabilities but does not lead to a non-inductive scientific method. The fact that it may be worthwhile not merely investigating, but acting upon, a possibility for which we have little evidence, requires a more radical departure from that maxim and suggests a further departure, more radical again. There are often considerations that are not evidence in favour of a possibility but that make it sensible to act upon it. If, reaching home without a key, we cannot remember whether we locked the front door, then the fact that it is a short way to the front door but a long way to the back makes it sensible to go to the front, to act as we would if we were sure the front door were open, although, of course, the fact is no evidence at all that the front door is open. That is an application of the rule for fault finding, test first for those faults which are easiest to test for.
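The saving that rule promises can be put in figures, invented purely for illustration and no part of the text: suppose there are just two equally likely possible faults, one taking a minute to check and the other ten, and suppose we stop as soon as the fault is found. The first check is always made and the second is needed only half the time, so checking the quick one first costs on average

\[ 1 + \tfrac{1}{2} \times 10 = 6 \text{ minutes}, \]

while checking the slow one first costs on average

\[ 10 + \tfrac{1}{2} \times 1 = 10\tfrac{1}{2} \text{ minutes}. \]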
That rule is a sensible one, not because accessible things are most likely to go wrong, but because following it will probably save time and trouble. These examples do not suggest an alternative to relying upon induction, for the reason for acting upon the possibilities is that of arriving at evidence for or against them. However, even in these examples, finding evidence is not the whole reason for the course of action. Trying the front door will enable us to go in by it, if it is open, and locating the fault may enable us to remove it.

In other examples, finding evidence for or against the possibility is no part of the reason for acting upon it. It may be wise to treat a valuable object as fragile even when we have good reason to think it is not, but treating it as fragile will prevent us from finding out whether it is. This example, again, is of no help in suggesting a non-inductive method, since acting upon the improbability, far from furthering knowledge, completely frustrates it. What is needed is a course of action which may bring knowledge that a possibility holds, but, if so, will not do it by providing evidence that it does. Adopting the possibility as one of our beliefs, if only for a trial period, may be the course of action we seek. That is what I had in mind when I spoke of a yet more radical departure from the maxim that we should concern ourselves only with probabilities.

When we look for the scissors, there may be no chance that they are where we look, but they may be there and, if they are, we shall find them. When we try to find the law governing events in which we are interested, there may be no chance that the hypothesis we have lit upon is true, but it may be. However, if it is, we seem little better off, for what evidence can we have that it is? Is there such a thing as finding a true hypothesis as there is such a thing as finding the scissors? There is: it is coming to an unshakeable, or, as the pragmatist American philosopher C. S. Peirce put it, stable, belief in the hypothesis. If, when we light upon a true hypothesis, we accept it and adopt an attitude which will never lead us to relinquish it, then, perhaps by pure chance, we have taken possession of it, we have found it.

The method of trial and error offers the possibility of such an unshakeable belief in a true hypothesis. If we adopt the method of accepting an hypothesis that has occurred to us and not relinquishing it unless we find experimental evidence which contradicts it, then, if we have chosen a true hypothesis, we shall never, so long as appearances do not deceive us and so long as we stick to the method, give it up, for, since it is true, experiment will never go against it. If, in addition, we adopt the method of rejecting our hypothesis if experiment does go against it, as it may do if our hypothesis is false, then our belief in a false hypothesis will not be unshakeable. We may be encumbered with it for a long time, perhaps for ever, but it may be that we shall be faced with experimental evidence against it.

The positive and negative parts of the method are equally important. If we are liable to relinquish an hypothesis for no reason at all, for reasons, such as fashion, which have nothing to do with truth or falsity, on grounds from which it does not follow that the hypothesis is false, or on grounds about which we are not certain, then our belief in a true hypothesis will not be unshakeable.
If we do not reject hypotheses when experiment goes against them, or if we accept hypotheses which do not have experimental consequences, or if we close our senses to all observation, then our belief in a false hypothesis may be unshakeable. The advantage the method offers, that, if we come by a true hypothesis, we shall keep it, while, if we become encumbered with a false hypothesis, we have at least a chance of getting rid of it, would then be lost. Two objections at once present themselves. First, although experiment will not tell against a true hypothesis it may appear to do so, since things sometimes appear as they are not. Similarly, an experiment that tells against a false hypothesis may not appear to do so. However, if the evidence of our senses is reliable, although not perfectly reliable, then, by following the method, we can make it unlikely that we shall give up a true hypothesis once we have one. Again, even if we sometimes misread experiments, the chance will still remain that we shall be able to free ourselves Problems in the Philosophy of Science 26 from a false belief. Second, if trial and error is to leave us holding a true belief, not merely working upon a true hypothesis, the method must be that of entering into a trial belief, not merely of making a trial working assumption. Unfortunately, although it is possible to try out a plan of action that one does not believe will succeed, it is doubtfully possible to try believing what one does not believe. That is because it is doubtfully possible to hold a belief from any motive other than that of believing what is true, if, indeed, that is a motive. The seventeenth century French philosopher Pascal recommended belief in God on the grounds that a believer stood to gain much if his belief were true and to lose little if it were false, but it is not possible that someone should believe in God from that motive. Someone impressed by Pascal’s reasoning may wish he believed in God or he may believe in God for other reasons and congratulate himself on this advantage of his belief, but he cannot have the belief from that motive. Similarly, if one does not believe an hypothesis, one cannot believe it in order to conduct a trial and reap the profits the trial may bring. If that is so, then, although someone may proceed according to the trial and error method, adopting beliefs without evidence, retaining them unless proved false but rejecting them if proved false, no one can be moved to do so by the fact, or the fiction, if that is what it is, that the method may yield knowledge of laws of nature. If the method is a good one, it is sensible to point out that those who pursue it may gain and cannot lose, but it is not sensible to recommend it on that account. This criticism can be met only by the admission that scientific knowledge lies, not in belief in laws, but in action upon them. If trial and error, as so described, were a method which it is possible and rational to adopt, that would explain the widespread conviction that induction is valid. When people suppose themselves to proceed by induction they single out, from all the hypotheses for which, according to the inductive principle, an experiment provides evidence, the one they accept which is, perhaps, the only one they have thought of. They erroneously conclude that the experiment provides evidence in favour of that hypothesis and their confidence in it is strengthened. Someone who proceeds by trial and error retains his belief in an hypothesis unless experiment goes against it. 
He does this on the grounds that it makes it very likely that he will retain any true belief he has come by and gives him a chance of ridding himself of any false ones. The reasoning is different, but the result is the same: if experiment does not go against an hypothesis we retain it. Unfortunately, Problems in the Philosophy of Science 27 although the trial and error method involves no invalid inferences, as the inductive method does, its justification is no better. There is no reason to suppose that we worsen our chances of coming to a stable true belief by relinquishing beliefs that we hold before we come across experimental disproof of them. Indeed, the hypothesis we reject may be true, so that we shall have missed a chance if we let it go, but, equally, it may not be true, and in retaining it we may miss the chance of the truth of the matter which would, perhaps, have been the very next thing we thought of. The negative part of the trial and error method, the rejection of hypotheses that have received experimental disproof, may seem beyond criticism, but the philosophy of the method, that there may be good reason for accepting an improbable hypothesis, suggests investigating the idea that there may be good reason for accepting a proven falsehood. Perhaps one knew that something couldn’t be true, but, if nearly all its consequences were, would it not have been better to have continued to accept it? Of course, if one’s only aim is to come to the truth on this question, rejecting the hypothesis is the only course and this part of the method is justified. However, one might despair of that aim and seek the truth on other matters, or one might despair of it for the time being and postpone it. Those purposes might be furthered by accepting a false, or even a contradictory, hypothesis. ‘I know it to be false but I believe it true’ seems an impossible attitude but it is, perhaps, not much worse than the one upon which trial and error rests: ‘I don’t believe it but I believe it in the hope of learning from that belief’. The admission, unavoidable if trial and error is to be counted a possibility, that scientific knowledge lies not in belief in laws but in action upon them, points to a way in which it is possible to accept an hypothesis one knows to be false. That was exactly Hume’s attitude to the existence of an external world, that is, to the existence of a world of things independent of our own minds: he knew that there was not such a world but couldn’t help believing that there was. He believed that there were conclusive reasons against an external world, yet he held, too, not only that soon after being convinced by those reasons we should relapse into the belief, but that it was good that we should. Without the belief in an external world we should be unable to make the vast number of true predictions upon which our lives depend, we should lack the foresight which enables us to avoid disaster. "Nature has not left this matter to his choice, and has doubtless Problems in the Philosophy of Science esteem’d it an affair of too great importance to be trusted to our uncertain reasonings and speculations." Hume held, then, that it was all-important that we should believe an hypothesis that could be proved a contradiction. Similar examples have arisen in science. Niels Bohr recommended a theory of the structure of the hydrogen atom which he admitted to have contradictory consequences. 
Of course, such recommendations can be made only in the absence of a non-contradictory hypothesis with the same true consequences as the contradictory one, but such a non-contradictory hypothesis may be beyond our power to devise. Unlike induction, trial and error is a method, not merely a pretence at one. However, it is not a method which there is any good reason to adopt. 28 Problems in the Philosophy of Science The Classical Theory of Probability The mathematicians who made the first studies of probability in the seventeenth and eighteenth centuries based their calculations upon the idea that the probability of an event was the proportion, among all the different ways things might turn out, of those which involved the event. That fundamental idea has become known as the Classical theory of probability. If it is correct, then there is a logical connection between the value of the proportion among the possibilities and the value of the probability; each may be deduced from the other. When someone discounts his chance of winning a raffle on the grounds that any one of a thousand tickets may win, only one of which is his, he deduces his small chance from the small proportion. The converse inference is less noticed but quite common. Someone who has taken a collection of objects from a box and finds it difficult to replace them so that the box will again close may conclude that, of all the ways the objects may be arranged, only a few allow the box to close. He has argued from a low probability to a low proportion. When objects are easy to pack, that is, he concludes, because almost any arrangement of them will fit, when difficult, because hardly any will. Mendel arrived at his theory of inheritance in bi-sexual plant reproduction by the same inference. When he found that the probability that two individual pea plants should have a tall offspring was high, he concluded that many of the different constitutions offspring of those parents might possess led to tallness, few to shortness. In the light of such inferences he framed his theory of the dual character of plant constitutions, of how the constitutions of parent plants restrict those possible for their offspring, and of how the constitution of a plant determines its other characteristics, in such a way that the proportion, among all the constitutions possibly arising from a fertilization, which would produce an offspring of a certain character always equalled the value which his breeding experiments led him to attribute to the probability of that fertilization yielding such an offspring. The Classical theory was developed in the study of games of chance and it is sometimes thought correct there, if nowhere else. What is the probability that when a penny is thrown twice it will fall heads twice? It seems plausible to argue in the following way. There are four 29 Problems in the Philosophy of Science 30 possibilities, heads followed by heads, heads followed by tails, tails followed by heads and tails followed by tails. Heads turns up twice in one of these four ways, so the probability is one quarter. In other studies, it is not difficult to find problems to which the theory gives the wrong answers. Of all the directions in which a stone released from the hand might move, straight down is only one. Therefore, on the theory, the probability that the stone will fall straight down is vanishingly small, whereas, in reality, it is very great. 
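The fundamental idea can be put compactly in modern notation. Writing P(E) for the probability of an event E, the Classical theory identifies

P(E) = \frac{\text{number of ways things might turn out which involve } E}{\text{number of all ways things might turn out}},

so that, for the penny thrown twice, with the four ways \{HH,\ HT,\ TH,\ TT\} and only HH involving two heads, P(\text{heads twice}) = \tfrac{1}{4}.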
There is a consideration, however, that makes it doubtful whether either of these answers, the right one about the penny or the wrong one about the stone, is the only answer the theory gives to these problems. The eighteenth century mathematician D’Alembert deduced a different answer for the probability of heads on each of two successive throws of a penny. He argued that there were three, not four, possibilities: heads followed by heads, heads followed by tails, and tails. Once a tail had been thrown on the first throw, the issue was decided, there was no need to throw again. Heads turns up twice in one of these three ways, so the probability is one third. Everyone disagreed with D’Alembert, yet his reasoning was as sound as theirs. If two coins are thrown, things must turn out in one of his three ways, just as they must in one of the more usually recognised four. Again, things cannot turn out in more than one of his three ways, any more than they can turn out in more than one of the four. He counted ‘tails with the first throw and either heads or tails with the second’ as one way, while most people count it as two, but there is nothing in the notion of a way things might turn out, or of a possibility, that makes it right to count that as two ways, or two possibilities, rather than one. The fault lies, not in either way of counting, but in the Classical theory, which allows both. According to the theory, both answers are right and, since if one is right the other is not, both wrong. Is there a common sense interpretation of the statement of the Classical theory which perverse people like D’Alembert refuse to recognise and which makes the customary answer right and his answer wrong? If the theory included the condition that every possibility of the form ‘either this or that’ should be counted as two possibilities, ‘this’ and ‘that’, then his answer would be wrong, for he counted ‘Either tails with the first throw and heads with the second or tails with the first throw and tails with the second’ as one possibility, not two. People have suggested inserting this condition into the theory in the hope that it will remove the contradiction D’Alembert’s Problems in the Philosophy of Science 31 argument revealed. If ways, or possibilities, cannot be counted, perhaps non-disjunctive ways, or non-disjunctive possibilities, can be. A number of difficulties arise. First, our judgments of probability do not always accord with this condition. Suppose we are following someone along a road; he is out of sight when the road branches into three, two leading up the side of a hill, the third continuing along the valley. When we reach the branch we cannot see which road he has taken. What is the probability, on the evidence that he has taken one of the roads, that he has gone along the valley? There are three roads altogether, one and only one of which he must have taken, only one leads along the valley, so the proportion and the probability sought is one third. Again, he must have gone either up the hill or along the valley, so, of the two things he might have done, one takes him along the valley and the proportion and the probability sought is one half. Unexpanded, the theory gives both answers. There are three ways, one of which he must take, of which going along the valley is one. There are two ways, one of which he must take, of which going along the valley is one. 
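The inconsistency can be displayed in one place. Counting ways in the two rival fashions, the same rule of proportion gives, for the penny and for the branching road,

P(\text{heads twice}) = \tfrac{1}{4} \text{ from } \{HH,\ HT,\ TH,\ TT\} \quad\text{or}\quad \tfrac{1}{3} \text{ from } \{HH,\ HT,\ T\},

P(\text{valley}) = \tfrac{1}{3} \text{ from the three roads} \quad\text{or}\quad \tfrac{1}{2} \text{ from } \{\text{up the hill},\ \text{along the valley}\}.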
Once the condition that disjunctive possibilities, of the form ‘either this or that’, are to count as two is included, the theory gives the answer of one third, not the answer of one half. In this example, however, there is no feeling that that is the answer common sense would give. The answer that he is as likely to have gone up the hill as stayed in the valley seems just as sensible as the answer that he is as likely to have taken one road as to have taken either of the others. The new condition may yield a consistent theory but does it yield a correct one? Second, although, on the new condition, it is easy to see that D’Alembert’s way of arguing cannot be correct, it is not so obvious that the customary way of arguing is correct, for how can we be sure that the four customarily recognised possibilities are not of the ‘either…, or…’ form? They are not formulated in that way, but neither need D’Alembert’s be. His third possibility may be stated as ‘tails on the first throw’ and, if it is right for his opponents to insist that this conceals the fact that it is a disjunction of ‘tails on the first throw and heads on the second’ and ‘tails on the first throw and tails on the second’, why is it not right for D’Alembert to insist that the possibility, customarily counted as one, of ‘heads on the first throw and tails on the second’ ought to be stated so as to make clear that it is a disjunction of two possibilities, ‘heads on the first throw with the sovereign’s nose pointing east and tails on the second’ and ‘heads on the first throw with the sovereign’s nose pointing west and tails on the second’? The insistence on non-disjunctive possibilities accords with our feeling that D’Alembert’s argument is wrong but quite fails to accord with our feeling that the customary argument is right. Third, the four customarily recognised possibilities will have to be divided again and again before non-disjunctive possibilities are reached. The danger arises that when they are reached we shall find that they are infinite in number. If they are, the Classical theory will yield very paradoxical probabilities. Consider the simple example of one line crossing another of finite length. Suppose one of the lines is divided into four equal parts and has a left-hand end and a right-hand end. What is the probability, given that the other line crosses it, that it crosses in the left-hand quarter rather than in the remaining three quarters? There are four ways the crossing can occur, for it can occur in any of the four quarters, so the probability, on the Classical theory, might be held to be one quarter. However, those four ways are not non-disjunctive ways, for falling in the left-hand quarter is falling either in its left-hand half or in its right-hand half. To reach non-disjunctive ways we must consider the points at which the line might cross and if space is infinitely divided, a possibility that the Classical theory must face, there are an infinite number of them. Then we shall have to calculate the proportion, among all the infinite number of points on the whole line, of those that lie in the left-hand quarter. There is a simple and convincing proof, discovered by Georg Cantor, that there are as many points on a short line as on a long one. Stand the first quarter of our line up, so that it is at an angle to the rest, draw a straight line back from the right-hand end through the new position of the left-hand end and onwards. Choose a point on this new line left of the left-hand end.
By drawing straight lines through the chosen point across both the long line and the standing up short line, every point on the long line can be paired with some point on the short and every point on the short paired with some point on the long, leaving no point on either line unpaired with a point on the other. Since this can be done, there must be the same number of points on each. Cantor adopted the simple criterion that two collections contain an equal number of things if the things in each can be paired with the things in the other without leaving anything in either collection unpaired, an unequal number if this cannot be done. That criterion is always adopted for collections of a finite number of things. Cantor’s innovation was to apply it to those containing an infinite number. If this is right, then, although not all the points of the line lie in the Problems in the Philosophy of Science 33 left-hand quarter, there are as many in the left-hand quarter as there are in the whole line, and as many in the left-hand quarter as in the remaining three quarters. Therefore the proportion of all the non-disjunctive ways in which the crossing could take place which lie in the left-hand quarter is unity and the probability of the crossing taking place in the left-hand quarter is unity. It is just as high as the probability of it occurring in the right-hand three quarters and just as high as the probability that it will occur anywhere on the line. These results are absurd. Evidently the Classical theory needs a further restriction: the proportion of ways will only equal the probability when the number of ways is finite. Since there is the danger that the condition that the ways be nondisjunctive will mean that their total number is infinite, these two conditions together may well bring it about that, on the Classical theory, no event has any probability at all, or, at least, that many events thought to have probabilities do not. The infinitely large and the infinitely small present many difficulties in science and those difficulties can sometimes be overcome by turning from a consideration of what is true of, say, a collection of infinitely many things to consider what is true of all collections of finitely many things, no matter how large those collections may be. Perhaps that change of subject would help here, for, although dividing the line into parts never gives non-disjunctive ways in which the second line can cross the first, that would not matter if, however finely the line were divided, the proportion of the divisions that fell within the left-hand quarter remained one quarter or if, by making the division fine enough, that proportion could be brought as close to one quarter as we pleased. Unfortunately, neither of these holds. There is a way of dividing the line which keeps the proportion at one quarter: divide the line into equal parts, divide it again into a larger number of equal parts, and so on. However far that division is taken, the proportion of parts in the left-hand quarter remains one quarter. Unfortunately, there are many ways of dividing the line in which the proportion is not one quarter and which, when continued, do not bring it nearer to one quarter. For example, if the left-hand quarter is divided into two parts, the second quarter from the left into four, the third into eight and the fourth into sixteen, the proportion of parts falling in the lefthand quarter is one fifteenth. 
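The figure of one fifteenth is simple arithmetic: the four quarters are cut into two, four, eight and sixteen parts respectively, so the proportion of parts in the left-hand quarter is

\frac{2}{2+4+8+16} = \frac{2}{30} = \frac{1}{15}.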
If the division is continued by dividing each of those parts into two, the proportion remains one fifteenth and it does so however often this process is repeated. The change of subject provides no answer to the problems which infinite divisibility poses for the Problems in the Philosophy of Science 34 Classical theory. Supporters of the Classical theory have not given up easily. They have a resort which we have not considered: they can ignore Cantor’s proof that there are as many points in a short line as in a long one and continue to hold the common sense view that exactly a quarter of all the points of a line fall into a quarter of its length. From the fact that the left-hand quarter of the line is one quarter of the length of the whole line they can conclude that in one quarter of the ways in which it may be made the crossing falls in the left-hand quarter. So they can arrive at a probability of one quarter, an answer which agrees with common sense. Of course, in order to reach this satisfactory conclusion they have to dismiss a simple and compelling proof, one with which they can find no fault except that it tells against their theory. However, we are often confronted with arguments which we know cannot be proofs, because we know that their conclusions are false, but in which there is no evident fault. Zeno’s arguments against the possibility of motion are the best known examples. Admittedly, the possibility of motion, from which we conclude that Zeno’s arguments cannot be proofs, is a better rooted conviction than the truth of the Classical theory of probability but the example does show, if showing is needed, that arguments in which there is no evident fault can be faulty. If Cantor’s argument could be set aside without further difficulties arising it would, perhaps, be right to ignore it. The conclusion of his argument, however, lies at the foundations of geometry, and the French mathematician Joseph Bertrand demonstrated that when the Classical theory is applied with this assumption to problems concerning space, inconsistencies once more emerge. His most notorious proof of this, known as Bertrand’s paradox, involves rather complex geometry. The following example, based upon this and concerning the problem we have already discussed of the intersection of two lines, makes the same point more simply. Just as, according to the common sense assumption, there are only a quarter as many points in a quarter of a line as there are in the whole of it, so there are only a quarter as many directions within an angle of ninety degrees as there are in one of three hundred and sixty. Consider again the probability of one line intersecting another within the quarter of its length lying at the left-hand end. Choose a point close to the second line and consider the angle that the line subtends at that point. Each point at which the crossing might fall determines a direction from the chosen point so that the number of ways the crossing might fall equals the number of Problems in the Philosophy of Science 35 these directions and the probability of the crossing falling within the left-hand quarter of the line will be the proportion of them which lies within the angle subtended by the left-hand quarter. That proportion, on the assumption that has been made to enable the Classical theory to be applied to this problem, is the proportion in size of the angle subtended at the chosen point by the left-hand quarter and the angle subtended there by the whole line. 
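How far the two proportions can differ may be seen from a particular placement of the chosen point, taken here only for illustration: directly opposite the midpoint of the line, at a perpendicular distance of half the line’s length. With that placement the whole line subtends an angle of 2\arctan 1, about 1.571 radians, while the left-hand quarter subtends \arctan 1 - \arctan\tfrac{1}{2}, about 0.322 radians, so that

\text{the length proportion} = \tfrac{1}{4} = 0.25 \quad\text{while}\quad \text{the angle proportion} \approx \frac{0.322}{1.571} \approx 0.20.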
This proportion, therefore, provides a second way of calculating the probability, the first being the proportion between the length of the left-hand quarter and the length of the whole line. These two proportions, of angles and of lengths, are not equal. Three lines from the chosen point which divide the whole line into four equal parts do not divide the angle it subtends into four equal parts. They would do so if the line lay on the arc of a circle with its centre at the chosen point, but it does not, it is a straight line. Therefore, with the assumption, the Classical theory implies both of two different values for the probability sought: one quarter and some fraction less than one quarter. To put the same thing in another way, the assumption and the Classical theory together contradict the simple geometrical fact that three lines dividing an angle into four equal parts do not divide a straight line crossing them into four equal parts. By ignoring Cantor’s proof, we have arrived at a false geometry. Since, without the assumption that Cantor’s proof can be ignored, the Classical theory gives absurd results for the probability, the theory must be weighed against this simple geometrical fact. There seems no help but to reject it and accept that a proportion, even of non-disjunctive possibilities, does not establish an equal probability. These difficulties arise only if space is infinitely divisible, but this is a possibility that the Classical theory cannot dismiss. It is worth considering why it is that equally serious consequences do not ensue if space is not infinitely divisible, so that there are a finite number of points on a line. A quarter of a line may then have only a quarter as many points as the whole of it and a quarter of an angle only a quarter as many directions as the whole. The reason that two inconsistent probabilities cannot be calculated is that there will be directions from the chosen point on which no point of the line will lie, so that the number of points and the number of directions cannot be taken to be the same. This can be illustrated with the space of the chess board as it is used in chess. That space is not infinitely divisible, for, if two pieces are not next to one another they cannot be less than one square apart. Consider one corner of the board. There are Problems in the Philosophy of Science 36 three obvious directions from that corner in which the side, the bottom and the diagonal of, the board lie. In play, a rook can move along either of the first two, a bishop along the third. There are two more, less obvious, directions which lie between the diagonal and the side and between the diagonal and the bottom. There are straight lines following those directions and a knight can move along them, but a piece can be placed in those directions from the corner only on some of the ranks and files of the board, not on all. The second rank, for example, has no square on it lying in the direction mid-way between that of the diagonal and that of the side. That is why a knight’s move skips either a rank or a file. If space is not infinitely divisible then there must be similar restrictions upon the placing of an object upon a line. It is sometimes said, in defence of the Classical theory, that when it gives two answers for a probability, that is because there are two different problems. 
In the example of the intersecting lines, the length proportion would give the right value if the crossing arose from rolling a rod over the line without allowing it to twist; the angle proportion would be the right value if it arose from spinning a needle about the chosen point. However, it is not true that the Classical theory does not provide an answer until the problem has been filled out in one or other of these two ways. So long as the problem is stated fully enough to enable the possibilities to be determined, the theory gives an answer. It may be a fault of the theory that it allows probabilities on very slender evidence to be calculated, but it is in the nature of the theory to do that. On the theory you cannot refuse to address the problem of how likely it is that a penny will fall heads on the grounds that you have not been told what kind of penny is in question. If that information has not been included, then you must, according to the theory, consider what kinds of penny there might possibly be. That may make the question difficult to answer, but does not mean that the problem is one that the theory cannot solve. Certainly, D’Alembert seems wrong and his opponents right. In the absence of other satisfactory explanations, it seems likely that that is because, without noticing it, we introduce probability assumptions for which, according to the Classical theory, there should be no need and which D’Alembert, quite correctly, refuses to introduce. It seems likely that we take the answer of one quarter to be correct because we make the assumption that the four ways in which the two throws might turn out, heads followed by heads, heads followed by tails, tails followed by heads and tails followed by tails, are equally probable. On that assumption, the three ways in which the throws might turn out, heads followed by heads, heads followed by tails and tails, are not equally probable, so that the answer of one quarter for the probability of heads followed by heads will be justified and the answer of one third for that probability will not be. If this is our argument, then we are not employing the Classical theory, or anything like it, for we are deducing probabilities, not from a knowledge of the possibilities alone, but from a knowledge of the probabilities. The lesson to be learnt from D’Alembert’s argument would be that the probabilities cannot be deduced from the possibilities, that the Classical theory is an illusion and that the conclusions apparently reached by applying it can be justified only by the additional information that each of the possibilities is as probable as each of the others. That would make it easy to explain why the theory succeeds so well with games of chance but not so well with other problems. Pennies are often so made and so used that the two possibilities, heads and tails, are equally probable. Cards are often shuffled and dealt so that a particular card is as likely to be dealt to one person as to another. When we are given no indication of the probabilities, as we are not in the example of following someone along a road, or when we know that the different possibilities are not equally probable, as in the example of a stone released from the hand, we can reach no conclusion. If D’Alembert’s argument convinces us that probability conclusions require probability premises, we might, nevertheless, allow that something survives of the converse arguments which invoke the Classical theory.
The person who finds it difficult to pack objects into a box could not conclude that few of the ways in which the objects might be arranged allow the box to be closed, since there are many equally good answers to the question ‘In how many ways may the objects be arranged?’, but we might allow him to be justified in concluding that few of the equally probable ways in which the objects might be arranged allow the box to be closed. We might allow this much to the Classical theory: that all probabilities arise from a preponderance, or lack of preponderance, of a finite number of equal probabilities. Ought we to allow this? Is it right to conclude, from the existence of a probability, that there is a finite number of different equally probable ways in one or other of which things must turn out? Admittedly, if there are more than two equally probable ways, then a number of unequally probable ways must exist. For example, if heads followed by heads is as likely as each of the other three ways a coin thrown twice might fall, then heads followed by heads, on the one hand, and not heads followed by heads, on the other, are two ways in which the throws might turn out, one much less probable than the other. There is no such simple demonstration that if there is a number of unequally probable ways, there must be a number of equally probable ones. Packing the objects into the box might be difficult because, of all the arrangements one might possibly think of and which one is equally likely to think of, only a very few allow the box to close, but it might be difficult because, although of all the arrangements one might think of most allow it to close, each of those arrangements is very unlikely to occur to one. It might be that if one particular object is put in upside down, then almost any arrangement of the others will fit them in, whereas, if that object is put in the right way up, there are very few arrangements of the others that can be made at all. If it is also true that one is very unlikely to think of putting that object upside down, then it will be difficult to arrange the objects even if most of the arrangements one might make would fit them in. The arguments of the Classical theory work in neither direction. The probabilities cannot be deduced from the possibilities nor can the existence of a set of equally probable possibilities be deduced from the probabilities.
Bayes’ Theorem.
There is a theorem of the probability calculus, named after an eighteenth century English mathematician, Thomas Bayes, which has been held to show that the consequences of an hypothesis, if verified, provide evidence for its truth. Consider three propositions, A, B and C. According to the theorem, if the probability of A on B and C together is several times its probability on C alone, then the probability of B on A and C together will be several times its probability on C alone. In whatever ratio B increases the probability given by C to A, A will increase the probability given by C to B. Now consider B to be some universal hypothesis, A to be a consequence of it and C to be some other piece of evidence. Since A is a consequence of B, its probability on B and C together is as high as can be, unity, perhaps many times its probability on C alone, which may be low.
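In modern notation, writing P(X \mid Y) for the probability of X on evidence Y, the theorem just stated is the ratio form of Bayes’ theorem,

\frac{P(B \mid A \text{ and } C)}{P(B \mid C)} = \frac{P(A \mid B \text{ and } C)}{P(A \mid C)},

and when A is a consequence of B, so that P(A \mid B \text{ and } C) = 1, it gives P(B \mid A \text{ and } C) = \dfrac{P(B \mid C)}{P(A \mid C)}.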
If so, then, according to Bayes’ theorem, the probability of the universal hypothesis B, on the evidence of its consequence, A, together with C, will be many times its probability on C alone. Verifying that consequence, A, will provide evidence upon which the hypothesis is many times more probable than before. Of course, the new probability, though greater, may still be less than one half, in which case it would be better called an improbability; nevertheless the verification of the consequence will have provided increased probability, made the hypothesis less unlikely than before. If Bayes’ theorem is true, inductive argument seems valid. This argument is persuasive. It convinced, amongst others, the mathematician Laplace and the economist J. M. Keynes. Keynes pointed out that if, without inductive evidence, the probability of an hypothesis were zero, then the multiplying effect of inductive evidence would fail to raise the probability, having, so to speak, nothing to work on. He claimed the validity, therefore, not of inductive argument, but of argument from inductive premises together with some assumption that would ensure to the hypothesis in question a probability greater than zero. He suggested a principle, his principle of limited independent variety, which was intended to ensure that no hypothesis was faced with an infinite number of rivals. Whether there could be an assumption which achieved this for any hypothesis that an investigator might suggest seems doubtful; so does the value of an inductive method which requires so strong an unjustified assumption. There is no excuse, though, for entering into those questions, since a more careful consideration of this Problems in the Philosophy of Science 40 application of Bayes’ theorem would have shown Laplace and Keynes that the justification of induction it seems to provide is an illusion. Suppose a man is to have two shies, first at one coconut and then at another. Whatever his chances of hitting them, provided he has some chance and provided that success with the first throw does not rule out success with the second, Bayes’ theorem shows that the conclusion that he will knock both down is more probable on the evidence that he will make two throws and knock the first down than on the evidence, by itself, that he will make two throws. So the theorem seems to show that the evidence that he has knocked the first down supports the conclusion that he will knock both down. If the theorem did show that, it would show that knocking the first down supported knocking the second down; for what supports a conclusion supports the consequences of that conclusion. We seek support for generalisations in order to argue from them. However, Bayes’ theorem does not show that the evidence that he will knock the first coconut down raises the probability that he will knock the second down. Knocking the first down is not implied by knocking the second down, as it is by knocking both down, so a premise for the inverse argument is missing. What is more, just as the theorem shows that knocking down the first raises the probability of knocking down both, so it shows that, to the same degree, it raises the probability of knocking down the first and not knocking down the second. Therefore, unless the theorem is selfcontradictory, it does not show that knocking down the first at all supports knocking down the second and so does not show that knocking down the first supports knocking down both. 
If Bayes’ theorem shows that the consequences of an hypothesis support that hypothesis, it shows just as conclusively that they do not. The step which gives rise to this contradictory result is the seemingly obvious one from ‘knocking down the first gives raised evidence to knocking down both’ to ‘knocking down the first supports, or is evidence in favour of, the conclusion that both will be knocked down’. It is tempting to think that what gives increased support supports, that what adds to the evidence for a conclusion is evidence for that conclusion; in fact, giving increased support lacks an important characteristic of support : it is not necessarily transmitted to consequences. Evidence may raise the probability of a conclusion without raising the probability of every consequence of that conclusion; it may do that because the probability of some of the consequences may, without that evidence, Problems in the Philosophy of Science 41 have already been as high as that of the conclusion becomes on the evidence. An obvious example of this arises over two throws of a balanced penny. There, a head on the first throw raises the probability of a head on both from a quarter to a half, but it does not raise the probability of a head on the second throw at all, for that is a half whether or not the first throw is a head. The same is true of a million and one throws of a penny. A head on each of the first million throws raises the probability of a head on all million and one from very little to a half, but it does not raise the probability of a head on the million and first, for that is a half merely on the evidence that it will take place. Evidence can raise the probability of a conclusion without raising the probability of all its consequences because a low probability for one of its consequences will mean a low probability for the conclusion. Evidence for that one will raise the probability of the conclusion without at all raising the probability of any of the other consequences. The conclusion that a pig will fly tomorrow morning and rain will fall tomorrow afternoon has a very low probability; its probability is much greater on the evidence of our own eyes tomorrow morning that a pig is flying, but that evidence does not support the conclusion. If it did, it would support the other consequence, that rain will fall in the afternoon. It is this possibility, that the probability of a conclusion may be low because of the low probability of only some of its consequences, that explains Bayes’ theorem. In all the examples we have considered so far, except perhaps the last, the conclusion has had a low probability, or no better than a half, even when the evidence which raises its probability is included. Of course, probability-raising evidence does not support the conclusion even when it raises its probability very high, perhaps to unity. The augmented evidence does support the conclusion but the augmenting evidence does not. For example, the evidence that a penny has fallen heads, added to the evidence that the penny will be thrown once and then thrown again, raises the probability that the penny will fall heads and then be thrown again. It does not support that conclusion, since it does not support the consequence that the penny will be thrown again, but the augmented evidence, that the penny will be thrown once, fall heads and then be thrown again, does support that conclusion. Indeed, it entails it. 
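The balanced-penny case that carries this argument can be summarised in the same notation. Writing H_1 and H_2 for heads on the first and second throws,

P(H_1 \text{ and } H_2) = \tfrac{1}{4}, \qquad P(H_1 \text{ and } H_2 \mid H_1) = \tfrac{1}{2}, \qquad \text{yet}\quad P(H_2 \mid H_1) = \tfrac{1}{2} = P(H_2),

and, to just the same degree, P(H_1 \text{ and not } H_2 \mid H_1) = \tfrac{1}{2}, raised from \tfrac{1}{4}.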
Laplace and Keynes forgot that evidence for a conclusion is evidence, just as strong, for each of its consequences. Had they not, they would have noticed that their belief that Bayes’ theorem validated induction was inconsistent with something they, along with everyone else, assumed to be true: that the result of a large number of throws of a penny provides no evidence for the results of future throws. Induction would lead us to infer a head, rather than a tail, after a million throws of a penny had been made and a million heads had turned up. If Bayes’ theorem validated induction, it would lead us to that same expectation; that is contrary to the actual consequence of the theorem, which is that, after a million heads, a head is no more and no less likely than after a million tails. When John Stuart Mill discussed the syllogism, he claimed that there cannot be an argument from ‘All men are mortal’ to ‘If the Duke of Wellington is a man, he is mortal’. There cannot be an argument to what has already been asserted, and the premise that all men are mortal comprises the conclusion that if the Duke is a man, he is mortal. When we seem to argue from a generalisation to one of its instances, our true argument, Mill thought, lies concealed: it is an argument from our evidence for ‘All men are mortal’, the mortality of Aristotle, Alexander and countless more men, to the mortality of the Duke. Argument proceeds, when seen truly, from particular instances to particular instances and we should not allow ourselves to be forced ‘by the arbitrary fiat of logicians’ to follow the ‘high priori road’ through the generalisation. There is no reason why we should be forced to march up a hill and then march down again. As we shall see, there have been logicians since Mill who claim that we can only march down, creating some puzzlement about how we got to the top, but the illusory arguments based upon Bayes’ theorem demonstrate the value of Mill’s contention. Once we see clearly that, whether it proceeds through a generalisation or not, inductive argument proceeds from particular to particular, from the evidence for a generalisation to each of its consequences, we cannot be taken in by these misuses of Bayes’ theorem.
Absolute Space: Newton’s Argument.
When we say that someone has never travelled far in his whole life we mean that he has never travelled far from his native town or never moved much with respect to any of the chief features on the surface of the earth. That is certainly consistent with his having, along with everyone and everything else on the earth, radically changed his position within the solar system. His movement on the earth is one thing, his movement in the solar system, another. These notions of movement and position are relative notions, for they are movements or positions on or in or with respect to something else or some group of other things. Similarly, the notions of direction, speed and acceleration must, very often, be relative notions. These relative notions involve relations. The relative notion of position on the earth involves the relation of position on, a relation which enters into many other notions of relative position. The relative notion of speed on the road involves the relation of relative speed which, again, enters into many other relative notions of speed. Is there, besides the various relative positions of a thing, something else we might think about: its position?
Are there, that is, besides the relative notions of position, direction, speed and acceleration on or in things or groups of things, other, non-relative, often called absolute, notions? Grammar seems to allow their existence, for it is perfectly grammatical to speak of someone’s position or speed without mentioning anything else. Perhaps, however, grammar only seems to allow it; perhaps the appearance arises because grammar allows elliptical, that is, clipped, expressions of position and speed which leave the missing term of the relationship to be understood. Grammatical discomfort does arise over a non-relative form of one of the things we must specify when we give a position, that is, distance. It is awkward to reply ‘You’ve said how far it is from here and how far it is from the church and how far it is from a large number of other places, but you haven’t said how far it is.’ ‘How far is it?’ is grammatically awkward once relational interpretations, in particular ‘How far is it from here?’, have been ruled out. The legitimacy and popularity of these ellipses undoubtedly arise because, rather than thinking of the position relationships between the different objects in innumerable pairs, we choose one object and think of the relationships of the innumerable others to that same one. There is no need to specify a term of a relationship when that term is always the same. Another instance of that way of thought, in which we grasp a very large number of pair-relationships by considering the relationship of all things to a chosen one, arises in tonal music. According to one account, to hear a piece of music in a key is to fasten upon the pitch relationship between each note of the piece and a chosen note, not upon, or not with such emphasis upon, the relationship between adjacent notes when neither is the chosen note. In the development of astronomy, the distinction between relative and absolute spatial notions was forced upon everyone’s attention by the controversy over the Ptolemaic theory that the sun and the planets circle the earth and the Copernican theory that the earth and the planets circle the sun. In that controversy people often distinguished apparent movements from real movements. Relative movements often mislead us, but the distinction between relative and absolute is not that between apparent and real. The relative movement of the sun to the earth as the sun rises is certainly real; it is what it perhaps gives rise to, an apparent absolute movement of the sun, that may be unreal. It is unreal if Copernicus was right that the sun is not in absolute motion. I say that the sun’s movement relative to the earth perhaps gives rise to its apparent absolute movement because, although the sun undoubtedly appears to move, it may be that it appears to move relative to the earth and that it does not appear to move absolutely. The argument which people are tempted to bring against that suggestion, that if that were the appearance then it should also be that the earth appears to move relative to the sun, I think invalid. If, in our perceptions of movement, we have adopted the earth as our base, then we shall perceive a relative movement between the sun and the earth as a movement of the sun and not as a movement of the earth. How we perceive a relative movement depends not only upon the relative movement itself but upon our choice of base.
That choice, which introduces the asymmetry, must be influenced by factors of the sort investigated in Gestalt psychology such as the comparison in size between the moving objects. Probably the great angular length of the earth’s horizon compared to that of the diameter of the sun would compel our choice of the earth as a base for movement even if it had not been our habitual one, but size does not always have that effect. A ship far off may seem to move past an island although the ship appears large and the island no more than a Problems in the Philosophy of Science 45 point of rock. The difference between a relative movement and an apparent movement to which it may give rise can be supported, if support is needed, by considering that it is possible to perceive two things to be in relative movement without either seeming to be in absolute movement or, again, that an apparent movement may arise, not from a relative movement, but from an arrangement of things which are at rest relative to one another and to everything else in sight. For example, the lines in a pattern drawn upon a piece of paper held still may appear to move. However, although a relative movement is not at all the same thing as an apparent movement, there is another sense of the word ‘apparent’, one in which to call something apparent is not to imply that it is unreal, in which relative movement is plausibly held to be apparent, but absolute movement, if it exists at all, never apparent but always hidden from our senses. On that view, relative position, direction, movement and acceleration are detectable by our senses but their absolute counterparts, if they exist, are not. Newton took that view, as many others have done, also insisting that absolute movement was a possibility and did exist. Anyone who does that ought to feel some doubts about accepting that the sun, or anything else, could appear to us to be in absolute motion. Perhaps, although it is puzzling, things that we could never see to be so can present the visual appearance of being so. Railway lines, for example, perhaps present the visual appearance of being parallel and of meeting although we could never see them to be both parallel and meeting. Parallel lines cannot meet and what we see to be so is so. However, the appearance of absolute movement, on the view Newton held, is more puzzling for, on that view, absolute movement is a possibility and does occur, unlike the meeting of parallel lines, so that something that can and does occur and can appear to occur cannot, nevertheless, be seen to occur. So far as I know, Newton did not face that problem. Someone who did might meet it by arguing that what is often taken to be the appearance of absolute motion is in fact something else; perhaps, as I have suggested, it might be held to be motion relative to a chosen, but neglected, base. The clarification produced by the dispute over the Ptolemaic and Copernican theories may have prompted the thoughts that led some philosophers, notably Leibniz and Berkeley, to reject altogether the notions of absolute position, movement and direction. It was the success of Newton’s mechanics, which gives one absolute notion, that of acceleration, a central part in the Problems in the Philosophy of Science 46 explanation of the action of material things upon one another, which prevented that rejection becoming general. Isaac Newton presented his mechanics in the Principia or Mathematical Principles of Natural Philosophy. 
The notions of absolute place and absolute motion, along with other fundamental notions such as those of mass and inertial force, are defined at the beginning of the work. In a discussion of his definitions, the Scholium to definition VIII, he puts forward the view I have already mentioned, that absolute motion is not open to the senses as relative motion is, remarks that bodies apparently at rest are not always truly at rest and takes up the question of how the true, in fact the absolute, motion of a body is to be discovered. He shows how this may be done by means of an example that has become classical, his bucket experiment. It is only one example of many he could have given for, he says, the treatise that follows deals with that problem and with the converse problem of what can be discovered once the absolute motions are known. ‘For to this end it was that I composed it.’ To perform the bucket experiment you tie a rope to a beam so that its top end cannot turn and hang a bucket half full of water from it. Then, keeping the bucket the right way up, you turn it round and round upon the rope until it has been raised appreciably, for the rope shortens as it twists, and release it with a helping spin to speed its unwinding. When you do that, the weight of the bucket with the water in it causes the rope to unwind and spin the bucket. At first, the bucket spins round the water it contains but gradually the water takes up the rotation of its container until they are spinning together. As that happens, the surface of the water assumes a hollow shape, depressed at the centre and raised at the rim. Newton argues that the hollow shape proves ‘the true and absolute circular motion of the water’. The hollow shape water assumes when rotating in a vessel can be demonstrated with rather less trouble by stirring water in a cup and, if that is the only observation needed to establish absolute circular movement, the simpler experiment would serve. The point of the bucket and rope procedure is to show that the hollow shape is most pronounced, not when the water and the bucket are in relative motion, for when the bucket is spinning and the water is not the surface is quite flat, but when they are spinning together. That, Newton says, shows that the hollow shape does not depend upon the movement of the water relative to the surrounding bodies. In fact, Newton has two arguments. One is that the hollow shape taken up by the water when it is spinning proves its true and absolute circular motion. The other is that the flat Problems in the Philosophy of Science 47 shape taken up by the water when it is not spinning and the bucket is, and the hollow shape taken up by the water when both water and bucket are spinning, together prove the true and absolute circular motion of the water. The first argument, if it is valid, owes its validity to a principle of his mechanics, one that relates force and acceleration. The second argument is more ambitious, for it attempts a justification of that principle, at least as far as showing that it could not be replaced by one relating force and some merely relative acceleration. Someone’s assessment of the first argument will depend upon his assessment of Newton’s mechanics, a theory that enjoyed unparalleled respect, and of its application to the experiment. Someone’s assessment of the second argument will depend upon whether it occurs to him that there are other relative movements of the spinning water besides that to the bucket. 
Newton’s experimental proof that the hollow shape does not arise from movement relative to the bucket quite fails to show that it does not arise from some relative movement. There are countless others from which it might come, many of them increasing as the hollow shape develops. Newton himself points out the existence of movement relative to the fixed stars, without noticing its serious consequences for his experiment, and Ernst Mach, writing at the end of the nineteenth century, pointed out that possibility again along with others such as movement relative to the body of the earth and stressed, what is true, that Newton’s experiment discounts neither of them as the source of the hollow shape. Let us, however, return to the first argument, Newton’s contention that the hollow shape, by itself, demonstrates the absolute circular motion of the water. Newton’s statement of his argument is short. The ascent of the water up the sides of the bucket reveals ‘its endeavour to recede from the axis of motion’. That endeavour, in turn, shows the absolute circular motion of the water. Presumably Newton did not intend the word ‘endeavour’ to be taken literally but held that the ascent of the water revealed the push that each particle of the water was receiving from the other parts, a push outwards which, when opposed by the rigid sides of the bucket, resulted in a push upwards. Such a push, I take him to have reasoned, must exist or the water’s weight would have kept its surface flat. Explaining the hollow shape as the result of the force exerted by each part of the water upon the other parts implies that the water is divided up into a large number of parts acting independently. That is also implied by Newton’s conclusion that the water is in absolute circular motion. He does not intend that the water as a whole is in Problems in the Philosophy of Science 48 circular motion, as it would be if someone standing still in absolute space swung the bucket round his body, but that the parts of the water are in circular motion around some axis. The hollow shape, then, reveals these outwards forces which the particles of the water are exerting. How is it that these outward forces reveal the absolute circular movements of the particles exerting them? Again, Newton does not say, but there must be two steps: from the outward forces exerted by the particles to their own inward acceleration and from their inward acceleration to their circular movement. The first step must involve the notion of inertial force which he explains in Definition III of the Principia. That force, unlike the one to which present day physicists give the same name, is exerted by a body upon another that interferes with its movement. When a body exerts an inertial force in a certain direction, it is itself absolutely accelerated in the opposite direction. When one body strikes against another, it pushes against it and is itself slowed. The other body is accelerated and pushes back on the first. Newton assumes that the outward push exerted by the particles of water is an inertial force, so that the particles of water must themselves be accelerated in the direction opposite to their push. They must be accelerated inwards just as, when a body strikes another and pushes it forwards it is itself slowed, that is, accelerated backwards. 
It is not, of course, possible to observe that the push is an inertial one; for all the hollow shape tells us, the push the particles exert might arise from an electrical or magnetic push upon the particles themselves and not from their movement. However, since the shape grows as the rotation grows, Newton’s assumption is reasonable enough. If it is right, then there must be an inward absolute acceleration of all the particles of the water towards the line in absolute space along which the centre line of the bucket lies. I think he must have taken this to imply that each particle remains in steady absolute acceleration towards the centre of the bucket. If that were so, it would follow that the particles were in absolute circular movement around the centre line of the bucket, for the particles do not approach nearer to that line and circular movement around a point is the only movement in which a thing is steadily accelerated towards that point without ever getting nearer to it.

The argument is complete, but the final step is invalid. If Newton’s principle of inertial force is correct, then, at any one moment, all the particles of the water are in absolute acceleration towards the same line in absolute space. It is a line along which, at that moment, the centre line of the bucket lies. At a later moment, the particles will also be accelerated towards some line in absolute space and, of course, it will be a line along which, at the later moment, the centre line of the bucket lies. It need not be the same line at the later moment as at the earlier, for the bucket may have moved. If it is not, then it will not be true that there is a focus of acceleration of all the particles of water towards which none of them is moving. All we observe is that they are not moving nearer to the centre of the bucket; we do not observe that they are not moving nearer to, or further from, that line in absolute space along which, at any given moment, the centre of the bucket lies and towards which at that moment they are accelerating. If the particles are moving rapidly past the points towards which they are accelerating, then each of them may pursue a wavy line through absolute space and none need be in absolute circular motion.

It is strange that Newton took no account of this possibility since, as George Berkeley pointed out in his essay on motion, he himself believed the bucket to have a movement compounded of its movement round the earth’s axis and the movement of that axis round the sun. That movement of the bucket, however, is an accelerated movement, not merely the uniform one we are considering, and would prevent the surface of the water in the bucket from assuming a perfectly symmetrical hollow shape. It must be that Newton was, excusably, idealising his experiment and its conclusion, arguing that he observed an almost symmetrical hollow shape and could conclude that the particles were in almost circular absolute movement. In fact he could conclude no such thing. Although Newton’s experiment does not prove absolute circular movement, it does, if his mechanical principles are correct, prove something concerning absolute spatial quantities. It proves the absolute acceleration of the water particles and it proves that any two of them are in absolute spin.
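The wavy alternative can be made concrete with a minimal sketch. Suppose, purely for illustration (nothing in the text fixes this), that the line of the bucket’s axis drifts through absolute space at a constant velocity u, so that at time t the axis passes through the point (ut, 0). A particle of the water, carried round the axis at distance r with angular speed ω, then moves along

\[
\mathbf{x}(t) = \bigl(ut + r\cos\omega t,\; r\sin\omega t\bigr), \qquad
\ddot{\mathbf{x}}(t) = -\,\omega^{2}\bigl(r\cos\omega t,\; r\sin\omega t\bigr),
\]

so at every moment it is accelerated, with magnitude ω²r, towards the point where the axis then is, and it remains at the constant distance r from the axis; yet its path through absolute space is a wavy line, a trochoid, and not a circle.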
If the particles are moving in wavy lines, the lines pursued by particles at opposite ends of a diameter will intersect in such a way that at one time one of them will lie to the absolute north of the other, assuming north to be an absolute direction, while at a later time the other will lie to the north of it. Perhaps such spin was what Newton intended, although spinning does not imply circular movement. That is easy to see by considering the water in an eddy carried along by a fast river. The water particles in the eddy are moving in a circle relative to the other water in the river and they are spinning relative to that water; they are moving in a wavy line, not a circle, relative to the river banks, yet they will be spinning relative to the bank we stand on since now one particle in the eddy, now another, is nearest to that bank. The particles in the eddy remain the same distance apart; things that are moving together or moving apart can spin through almost two right angles without either of them moving in a curve or being otherwise accelerated. Two cars do that when they meet and pass one another at steady speeds in straight lines, as the sketch at the end of this passage makes explicit.

Newton’s bucket experiment is, in short, just one example to which Newton’s principles relating force to absolute acceleration may be applied. It proves less than he claims. It certainly does not, as I pointed out at first, eliminate rival theories of mechanics which associate forces, not with absolute accelerations, but with relative ones. Mach suggested a theory in which forces are associated with any acceleration relative to another massive body. Since the distant stars make up nearly all the mass of the massive bodies we know of, Mach’s theory has the consequence that forces are invariably associated with accelerations relative to the ‘heaven of the fixed stars’, as Berkeley called it, which Newton must have taken as the perceptible representative of his absolute space. The experiment proves less than Newton claims, for neither it, nor any experiments like it, can establish his mechanics. If anything is to be proved, Newton’s mechanical principles must be taken into the premises. Once that has been done, the absolute acceleration of the water particles does follow and, if that exists, absolute movement and absolute position must exist too. It is Newton’s mechanics which supports the existence of absolute space.

Newton’s argument for and the opposing arguments against absolute space do not meet each other directly, for Newton argues that things sometimes are absolutely accelerated while the opposing arguments seek to discredit the notion of absolute acceleration. If Newton is right that things sometimes are absolutely accelerated, then the difficulties people find with the notion must have answers, but Newton does not provide them. That does not mean that his argument is no proof, only that, if it is one, it may be unsatisfying. It will be unsatisfying to those who reject the absolute notions, for, if it shows their conclusion to be wrong, it does so without revealing any fault in their reasoning. Proofs are often of that kind. There is often reasoning we know to be mistaken only because quite unrelated but simple and evidently correct reasoning leads to an opposite conclusion. If we are convinced that Newton’s theory is true, we must accept the absolute notions and, perhaps, persist with the attempt to remove the difficulties people have found with them.
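The relative spin of the two passing cars, mentioned above, can also be put in symbols; a minimal sketch, with equal speeds v and a lateral separation d assumed purely for illustration. Let the cars move along parallel straight lines,

\[
\mathbf{x}_{A}(t) = (vt,\; 0), \qquad \mathbf{x}_{B}(t) = (-vt,\; d), \qquad
\mathbf{x}_{B}(t) - \mathbf{x}_{A}(t) = (-2vt,\; d).
\]

Neither car is accelerated, yet the direction from the one to the other swings from nearly parallel to the road, through broadside on, to nearly the opposite direction as t runs from large negative to large positive values: a turn through almost two right angles, which is the relative spin in question.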
However, if the difficulties with the absolute notions are serious and remain without answers, there is little reason to accept Newton’s mechanics. A mechanical theory derived from his by replacing the notion of absolute acceleration with that of acceleration relative to some postulated but otherwise unidentified object would escape the difficulties while providing equally good explanations and having exactly the same consequences for relative accelerations, movements and positions. Of course, every mention of absolute position, movement or acceleration must be replaced by position, movement and acceleration relative to the same single postulated object.

It might, indeed, be argued that Newton’s theory is intelligible only because we treat space itself as an object to which other objects may be related and understand absolute position, movement and acceleration as position, movement and acceleration relative to space. Newton himself does that when he speaks of the movement of a body in a ship as arising partly from its movement in the ship, partly from the movement of the ship on the earth and partly from the movement of the earth in immovable space. That way of speaking avoids one difficulty, that of a non-relational notion of movement, only to fall into another, that of treating space as if it were something in space. Newton’s phrase ‘in immovable space’ makes that difficulty quite explicit. That difficulty, in its turn, is removed by postulating an otherwise unidentified object and replacing mention of relationships to space by mention of relationships to it. This is not merely a matter of substituting for the word ‘space’ an expression referring to the postulated object, but it is simple enough. For example, the question whether the earth is situated in infinite space would be replaced by the question whether there is a limit to how far the earth can move from the postulated object. The derived theory has the advantage that Mach’s theory can be seen as one suggestion for the identity of the object. Mach identifies it as the centre of gravity of all massive things. The existence of this alternative to Newton’s theory means that his bucket experiment does not provide even indirect support for the notions of absolute position, movement and acceleration.
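The substitution just described can be put schematically; the symbols are mine and stand only for the idea, not for any formula in the text or in Mach. Write x_O for the position of the postulated object. Wherever Newton’s theory speaks of the absolute acceleration of a body at position x, the derived theory speaks instead of

\[
\frac{d^{2}}{dt^{2}}\bigl(\mathbf{x} - \mathbf{x}_{O}\bigr),
\]

and Mach’s suggestion amounts to identifying the postulated object with the centre of gravity of all massive things,

\[
\mathbf{x}_{O} = \frac{\sum_{i} m_{i}\,\mathbf{x}_{i}}{\sum_{i} m_{i}}.
\]

Since only quantities relative to the postulated object then appear, the derived theory keeps, as the argument above requires, exactly Newton’s consequences for relative positions, movements and accelerations.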