MAGIC, SCIENCE AND RELIGION
Geoffrey Thomas
1. INTRODUCTION
This is the first of five lectures forming the ‘philosophy’ component of the
course and covering the following topics:
1. The demarcation problem
2. Laws and explanation
3. Sceptical challenge (1) : the problem of induction (Hume)
4. Scientific realism and progress
5. Sceptical challenge (2) : Kuhn & paradigm shifts
The philosophy component of MSR locks into PPH in two ways:
In the first place, within MSR it sketches in outline ‘what is this thing called
“science” ?’ It looks at the logical structure of science, while the historical
component traces the emergence and hegemony of science in its distinctive
modern forms; and the political component considers the cultural
dimension, the ways in which science has been seen (e.g.) as a
distinguishing feature of Western civilisation.
Secondly, it provides a bridge to ‘Problems of Explanation and Interpretation’
(‘PEI’). The science we fix on in MSR is natural science, the study of natural
phenomena : the kind of things done in physics, chemistry, biology and their
hybrids of bio-chemistry and the rest. A central question in PEI is how far
human agency and society can or should be studied by the same aims and
methods as the natural sciences.
Just to draw out a thread. There is an ambiguity in our use of the term,
‘science’. If by ‘science’ we have in mind the German idea of Wissenschaft –
the systematic and precise investigation of a subject-matter, where the nature
of the subject-matter determines the degree of precision attainable – then e.g.
history is a science and political science is a science. Not all subject-matters
allow of the same degree of precision. Aristotle saw this long ago
(Nicomachean Ethics, I.3) :
Our discussion will be adequate if its degree of clarity fits the subject-matter;
for we should not seek the same degree of exactness in all sorts of argument
alike, any more than in the products of different crafts. … [T]he educated
person seeks exactness in each area to the extent that the nature of the
subject allows (Aristotle, 1985 Irwin tr., Hackett : 3-4).
In the English-speaking world ‘science’ generally has a narrower connotation.
Science is the ordered knowledge of natural phenomena and of the relations
between them. The operative model, as suggested just above, is the natural
sciences which are distinguished by two main features. The first centres on
the collection of facts and observations in quantitative terms. To spell
that out just a bit in a standard model, science involves :

• abstraction (looking at phenomena in groups or classes under specific characteristics and interrelations rather than at particular items in their full circumstantiality)
• precise measurement and quantification of phenomena
• hypotheses (claims) that have empirical (observable) consequences which can be checked/ confirmed through the experimental control of phenomena and the manipulation of variables
• answerability to one main criterion of success – what Mary Hesse has termed the ‘pragmatic criterion of predictive success’ (Mary Hesse, ‘Theory and Value in the Social Sciences’, Action and Interpretation, ed. C. Hookway & P. Pettit, Cambridge, 1978 : 4) – with a view, of course, to manipulation and control of the subject-matter.
The list could be added to and refined, but this combination of elements
constitutes the predominant – or at least an influential – scientific model or
image. Not all natural sciences embody the elements to an equal degree ;
experimental control of phenomena and the manipulation of variables play a
smaller part in biology than in physics or chemistry and are virtually
non-existent in some areas of evolutionary biology. As well, the model does
not add up to a logic of scientific discovery. You cannot do good science just
by assembling and activating these elements. The model cannot tell you how
to hit on a fruitful, illuminating hypothesis. Moreover, creative science may in
its initial stages ignore one or more of the elements.
The second feature of the natural sciences is the assumption – a continuity
between the ancient Greeks and modern science – that ‘the universe is a
systematic and ordered place and that every observation, no matter how
unexpected, is capable of being fitted into a rational hypothesis which it is
within our intellectual capacity to discover, if not immediately, then in due
course when we have acquired the necessary data’ (Magnus Pyke, The
boundaries of science, 1961 : 9). If we are assuming system and order then
(it’s natural to suppose) we are assuming a law-governed or lawlike realm of
phenomena within which the ordered knowledge of natural phenomena and
of the relations between them is to be gained.
2. CONNECTION OF TOPICS
The way we have talked about science so far suggests that it is a specialised
and in fact rather special activity. If so, the least we can try to do is to mark off
genuine from pseudo-science, or genuine science from other legitimate
activities such as philosophy. Mainstream philosophy in the form of
metaphysics, after all, also aims to deliver a picture of the world as a
systematic and organised place. But nobody supposes it’s science. What I
am talking about here is the so-called demarcation problem.
If we assume in science a law-governed or lawlike realm of phenomena, then
we need to probe the nature of scientific laws and their role in scientific
explanation. This is, then, the next topic : laws and explanation.
But all is not plain sailing. There is a rock-bottom problem about the rationality
of assuming that there is a law-governed or lawlike realm of phenomena. This
problem is Hume’s problem of induction. It can be seen as a sceptical
challenge to science.
Next up, when we speak of the collection of facts and observations in
quantitative terms, we need to consider whether successive scientific
theories draw closer and closer to the truth – whether science answers to
something deeper in reality than (merely) the ‘pragmatic criterion of predictive
success’. The view that science yields truth, that it maps onto and faithfully
depicts (‘corresponds with’) an objectively existing real world, is labelled
scientific realism. It’s a
popular view and we must consider its credentials. It readily goes along with
another view, namely that science is incremental or cumulative - and
progressive. Newton knew more and better than Aristotle or Descartes;
Einstein knew more and better than Newton. Newton himself said that he had
seen further by standing on the shoulders of giants. Here we can turn to a
second sceptical challenge. Thomas Kuhn, an influential philosopher of
science, does not accept that science is cumulative. He believes that certain
ruptures occur in the history of science – ‘scientific revolutions’ – which
involve what he calls ‘paradigm shifts’. What paradigm shifts mean, among
other things, is that Aristotelian, Newtonian, and Einsteinian physics work
within such radically different frameworks of assumptions that their results are
‘incommensurable’. Facts do not accumulate; paradigms get replaced.
3. THE DEMARCATION PROBLEM
Recall a couple of items from our characterisation of science above :

• hypotheses (claims) that have empirical (observable) consequences
• which can be checked/ confirmed through the experimental control of phenomena and the manipulation of variables
Karl Popper suggested that what distinguishes science from metaphysics [for
which we can read ‘philosophy’] and pseudo-science is not confirmation but
refutation – the possibility of falsifying a claim. Here is a useful statement by
Theodore Schick.
(http://www.csicop.org/si/9703/end.html) :
By construing science as the attempt to falsify rather than verify hypotheses,
Popper thought that he could avoid the problem of induction and distinguish
real science from pseudoscience. The success of a test does not entail the
truth of the hypothesis under investigation. But, he believed, the failure of a
test does entail its falsity. So if science is viewed as a search for refutations
rather than confirmations, the problem of induction drops out and the mark of
a scientific theory becomes its ability to be refuted. Thus we have Popper's
famous demarcation criterion: a theory is scientific if it is falsifiable. If
there is no possible observation that would count against it, it is not
scientific.
More details next week. In the meantime check out :
AF Chalmers, What is This Thing Called Science ?, 2nd ed., 1982, 38-49, 60-67.
C Hempel, Philosophy of Natural Science, chs. 2-3, 1966, 3-32.
KR Popper, ‘Science : Conjectures and Refutations’, Conjectures and
Refutations, 5th ed., 1974, 33-65.
S Psillos, ‘Underdetermination Undermined’, Scientific Realism, 1999, 162-182.
H Sankey, ‘The Theory-Dependence of Observation’, Cogito, 13, 1999, 201-6.
GLT : 01 March 2006
4. THE DEMARCATION PROBLEM (cont’d)
Sir Karl Popper (1902-94) is notable for a famous answer to this problem –
the problem of distinguishing genuine science from pseudo-science and
philosophy. Briefly, he argues that the hallmark of a scientific theory is that
it is (not confirmable but) falsifiable by observation and experiment.
Confirmationists and falsificationists alike assume that a theory can be tested
against data. Two problems arise : (1) data, in the form of observations, may
themselves be theory-laden; (2) with regard to falsificationism, if the Quine-
Duhem thesis is right then any theory can accommodate any recalcitrant
evidence. (If this sounds too flip, check out §5.3.1 below.)
Primary reading:
AF Chalmers, What is This Thing Called Science ?, 2nd ed., 1982, 38-49, 60-67.
C Hempel, Philosophy of Natural Science, chs. 2-3, 1966, 3-32.
KR Popper, ‘Science : Conjectures and Refutations’, Conjectures and
Refutations, 5th ed., 1974, 33-65.
S Psillos, ‘Underdetermination Undermined’, Scientific Realism, 1999, 162-182.
H Sankey, ‘The Theory-Dependence of Observation’, Cogito, 13, 1999, 201-6.
5. POPPER’S FALSIFICATIONISM
5.1. CONFIRMATIONISM
Well, what’s wrong with confirmationism ? In confirming a hypothesis or
theory we deduce certain consequences from it; and observationally we find
those very consequences. We have evidence in favour of the hypothesis,
which is thus confirmed. What’s the problem ?
Logic
To begin, confirmationism seems to involve the fallacy of ‘affirming the
consequent’.
A scientific theory might be confirmed in the following way :
If Einstein’s theory is true then light rays passing close to the sun are
deflected. Careful measurement reveals that they are deflected.
Therefore Einstein’s theory is true (Patrick Shaw, Logic and its Limits,
1981 : 162).
As Shaw points out, this argument is fallacious. It is an example of the fallacy
of affirming the consequent. My hypothesis is (say) that it is raining :
If p then q
q
----------
p
If it is raining then the pavement is wet
The pavement is wet (confirms the hypothesis)
It is raining
But the same consequence (‘The pavement is wet’) is consistent with
hypotheses quite different from ‘It is raining’, e.g. ‘A main drain has fractured’
or ‘Vandals have been splashing pedestrians with a hose-pipe’.
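The invalidity of the schema can be checked mechanically. The sketch below (illustrative only, not part of Shaw’s discussion) enumerates every truth-value assignment to p and q and picks out the case where both premises hold but the conclusion fails :

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# Search for a counterexample to: "if p then q; q; therefore p".
# A counterexample is an assignment where both premises are true but the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)] : the pavement is wet, but it is not raining
```

The single counterexample (p false, q true) is exactly the burst-drain / hose-pipe situation : the consequence obtains while the hypothesis is false.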
Impossibility of complete or conclusive verification
There is another problem about confirmationism. For a hypothesis to be
completely or conclusively confirmed, recourse must be had to a complete set
of relevant observations. But any such set is impossible to make. Take
Boyle’s Law as an example of a hypothesis :
For a fixed amount of gas (fixed number of molecules) at a fixed temperature,
the pressure and the volume are inversely proportional.
E.g. if you squeeze a balloon [pressure increases], it gets smaller [volume
decreases]. Complications aside – mainly that Boyle is postulating ideal
conditions – the problem for the confirmationist is that Boyle is offering a
generalisation covering the behaviour of all gases, past, present and future –
and there is no possibility of making a complete set of relevant observations.
Just piling up confirmatory instances gets us nowhere. Twenty trillion
confirmations go nowhere towards showing that all gases behave as Boyle
says. We can never close the gap between the number of confirmatory
observations we have made and the number necessary to complete the set of
relevant observations.
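Boyle’s Law itself is simple arithmetic. A minimal sketch, with a made-up constant chosen purely for illustration :

```python
# Boyle's Law for a fixed amount of ideal gas at fixed temperature: P * V = k (constant).
# The value of k below is hypothetical, chosen only to make the arithmetic visible.
k = 100.0  # pressure * volume, in (kPa * litres), say

def volume(pressure):
    """Volume predicted by Boyle's Law at a given pressure."""
    return k / pressure

# Doubling the pressure halves the volume:
print(volume(50.0))   # 2.0 litres
print(volume(100.0))  # 1.0 litre
```

However many pressure-volume pairs we check against this relation, the law quantifies over all gases at all times, so the set of confirming observations can never be completed.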
Robert Boyle (1627-91)
5.2 SPELLING OUT FALSIFICATIONISM
But, as Popper put it, one contrary instance can refute a hypothesis. If we
find one instance of a mass of gas behaving contrary to Boyle’s Law then
Boyle’s hypothesis has been refuted. And if there is no possibility of refuting a
theory, because it is consistent with every possible observation, then it is not
scientific (or ‘empirical’, as he also says). This is Popper’s basic idea.
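Popper’s schema here is modus tollens : if theory T then observation O; not-O; therefore not-T. Unlike affirming the consequent, this is a valid form, as a brute-force truth-table check (again purely illustrative) shows :

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# Search for a counterexample to modus tollens: "if t then o; not o; therefore not t".
# A counterexample would have both premises true and the conclusion ("not t") false, i.e. t true.
counterexamples = [
    (t, o) for t, o in product([True, False], repeat=2)
    if implies(t, o) and not o and t
]
print(counterexamples)  # [] : no counterexample exists, so the refutation schema is valid
```

This logical asymmetry is the core of falsificationism : no finite stock of confirmations verifies a universal hypothesis, but one genuine contrary instance refutes it.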
Note that refutation need not entail total abandonment of the hypothesis. The
result may be simply that the hypothesis is untenable in its present form and
needs to be refined.
Let Popper talk for himself before we proceed (Conjectures and Refutations,
33-9) :
The problem which troubled me at the time was neither, "When is a theory
true?" nor, "When is a theory acceptable?" My problem was different. I
wished to distinguish between science and pseudo-science; knowing very
well that science often errs, and that pseudo-science may happen to
stumble on the truth.
I knew, of course, the most widely accepted answer to my problem: that
science is distinguished from pseudo-science—or from "metaphysics"—by
its empirical method, which is essentially inductive, proceeding from
observation or experiment. But this did not satisfy me. On the contrary, I
often formulated my problem as one of distinguishing between a genuinely
empirical method and a non-empirical or even a pseudo-empirical
method—that is to say, a method which, although it appeals to
observation and experiment, nevertheless does not come up to scientific
standards. The latter method may be exemplified by astrology, with its
stupendous mass of empirical evidence based on observation—on
horoscopes and on biographies.
But as it was not the example of astrology which led me to my problem I
should perhaps briefly describe the atmosphere in which my problem
arose and the examples by which it was stimulated. After the collapse of
the Austrian Empire there had been a revolution in Austria: the air was full
of revolutionary slogans and ideas, and new and often wild theories.
Among the theories which interested me Einstein’s theory of relativity was
no doubt the most important. Three others were Marx’s theory of history,
Freud’s psycho-analysis, and Alfred Adler’s so-called "individual
psychology."
There was a lot of popular nonsense talked about these theories, and
especially about relativity (as still happens even today), but I was fortunate
in those who introduced me to the study of this theory. We all—the small
circle of students to which I belonged—were thrilled with the result of
Eddington’s eclipse observations which in 1919 brought the first important
confirmation of Einstein’s theory of gravitation. It was a great experience
for us, and one which had a lasting influence on my intellectual
development.
The three other theories I have mentioned were also widely discussed
among students at that time. I myself happened to come into personal
contact with Alfred Adler, and even to cooperate with him in his social
work among the children and young people in the working-class districts of
Vienna where he had established social guidance clinics.
It was during the summer of 1919 that I began to feel more and more
dissatisfied with these three theories—the Marxist theory of history,
psycho-analysis, and individual psychology; and I began to feel dubious
about their claims to scientific status. My problem perhaps first took the
simple form, "What is wrong with Marxism, psycho-analysis, and individual
psychology? Why are they so different from physical theories, from
Newton’s theory, and especially from the theory of relativity?"
To make this contrast clear I should explain that few of us at the time
would have said that we believed in the truth of Einstein’s theory of
gravitation. This shows that it was not my doubting the truth of these other
three theories which bothered me, but something else. Yet neither was it
that I merely felt mathematical physics to be more exact than the
sociological or psychological type of theory. Thus what worried me was
neither the problem of truth, at that stage at least, nor the problem of
exactness or measurability. It was rather that I felt that these other three
theories, though posing as sciences, had in fact more in common with
primitive myths than with science; that they resembled astrology rather
than astronomy.
I found that those of my friends who were admirers of Marx, Freud, and
Adler, were impressed by a number of points common to these theories,
and especially by their apparent explanatory power. These theories
appeared to be able to explain practically everything that happened within
the fields to which they referred. The study of any of them seemed to have
the effect of an intellectual conversion or revelation, opening your eyes to
a new truth hidden from those not yet initiated. Once your eyes were thus
opened you saw confirming instances everywhere: the world was full of
verifications of the theory. Whatever happened always confirmed it. Thus
its truth appeared manifest; and unbelievers were clearly people who did
not want to see the manifest truth; who refused to see it, either because it
was against their class interest, or because of their repressions which
were still "un-analysed" and crying aloud for treatment.
The most characteristic element in this situation seemed to me the
incessant stream of confirmation, of observations which "verified" the
theories in question; and this point was constantly emphasized by their
adherents. A Marxist could not open a newspaper without finding on every
page confirming evidence for his interpretation of history; not only in the
news, but also in its presentation—which revealed the class bias of the
paper—and especially of course in what the paper did not say. The
Freudian analysts emphasized that their theories were constantly verified
by their "clinical observations." As for Adler, I was much impressed by a
personal experience. Once, in 1919, I reported to him a case which to me
did not seem particularly Adlerian, but which he found no difficulty in
analysing in terms of his theory of inferiority feelings, although he had not
even seen the child. Slightly shocked, I asked him how he could be so
sure. "Because of my thousandfold experience," he replied; whereupon I
could not help saying: "And with this new case, I suppose, your
experience has become thousand-and-one-fold."
What I had in mind was that his previous observations may not have
been much sounder than this new one; that each in its turn had been
interpreted in the light of "previous experience," and at the same time
counted as additional confirmation. What, I asked myself, did it confirm?
No more than that a case could be interpreted in the light of the theory.
But this means very little, I reflected, since every conceivable case could
be interpreted in the light of Adler’s theory, or equally of Freud’s. I may
illustrate this by two very different examples of human behaviour: that of a
man who pushes a child into the water with the intention of drowning him;
and that of a man who sacrifices his life in an attempt to save the child.
Each of these two cases can be explained with equal ease in Freudian
and Adlerian terms. According to Freud the first man suffered from
repression (say, of some component of his Oedipus complex), while the
second man had achieved sublimation. According to Adler the first man
suffered from feelings of inferiority (producing perhaps the need to prove
to himself that he dared to commit some crime), and so did the second
man (whose need was to prove to himself that he dared to rescue the
child). I could not think of any human behaviour which could not be
interpreted in terms of either theory. It was precisely this fact—that they
always fitted, that they were always confirmed—which in the eyes of their
admirers constituted the strongest argument in favour of these theories. It
began to dawn on me that this apparent strength was in fact their
weakness.
With Einstein’s theory the situation was strikingly different. Take one
typical instance—Einstein’s prediction, just then confirmed by the findings
of Eddington’s expedition. Einstein’s gravitational theory had led to the
result that light must be attracted by heavy bodies (such as the sun),
precisely as material bodies were attracted. As a consequence it could be
calculated that light from a distant fixed star whose apparent position was
close to the sun would reach the earth from such a direction that the star
would seem to be slightly shifted away from the sun; or, in other words,
that stars close to the sun would look as if they had moved a little away
from the sun, and from one another. This is a thing which cannot normally
be observed since such stars are rendered invisible in daytime by the
sun’s overwhelming brightness; but during an eclipse it is possible to take
pictures of them. If the same constellation is photographed at night one
can measure the distances on the two photographs, and check the
predicted effect.
Now the impressive thing about this case is the risk involved in a
prediction of this kind. If observation shows that the predicted effect is
definitely absent, then the theory is simply refuted. The theory is
incompatible with certain possible results of observation—in fact with
results which everybody before Einstein would have expected. This is
quite different from the situation I have previously described, when it
turned out that the theories in question were compatible with the most
divergent human behaviour, so that it was practically impossible to
describe any human behaviour that might not be claimed to be a
verification of these theories.
These considerations led me in the winter of 1919–20 to conclusions
which I may now reformulate as follows.
1. It is easy to obtain confirmations, or verifications, for nearly every
theory—if we look for confirmations.
2. Confirmations should count only if they are the result of risky
predictions; that is to say, if, unenlightened by the theory in
question, we should have expected an event which was
incompatible with the theory—an event which would have refuted
the theory.
3. Every "good" scientific theory is a prohibition: it forbids certain
things to happen. The more a theory forbids, the better it is.
4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often
think) but a vice.
5. Every genuine test of a theory is an attempt to falsify it, or to refute
it. Testability is falsifiability; but there are degrees of testability;
some theories are more testable, more exposed to refutation than
others; they take, as it were, greater risks.
6. Confirming evidence should not count except when it is the result of
a genuine test of the theory; and this means that it can be
presented as a serious but unsuccessful attempt to falsify the
theory. (I now speak in such cases of "corroborating evidence.")
7. Some genuinely testable theories, when found to be false, are still
upheld by their admirers—for example by introducing ad hoc some
auxiliary assumption, or by re-interpreting the theory ad hoc in such
a way that it escapes refutation. Such a procedure is always
possible, but it rescues the theory from refutation only at the price
of destroying, or at least lowering, its scientific status. (I later
described such a rescuing operation as a "conventionalist twist" or
a "conventionalist stratagem.")
I may perhaps exemplify this with the help of the various theories so far
mentioned. Einstein’s theory of gravitation clearly satisfied the criterion of
falsifiability. Even if our measuring instruments at the time did not allow us
to pronounce on the results of the tests with complete assurance, there
was clearly a possibility of refuting the theory.
Astrology did not pass the test. Astrologers were greatly impressed, and
misled, by what they believed to be confirming evidence—so much so that
they were quite unimpressed by any unfavourable evidence. Moreover, by
making their interpretations and prophesies sufficiently vague they were
able to explain away anything that might have been a refutation of the
theory had the theory and the prophesies been more precise. In order to
escape falsification they destroyed the testability of their theory. It is a
typical soothsayer’s trick to predict things so vaguely that the predictions
can hardly fail: that they become irrefutable.
The Marxist theory of history, in spite of the serious efforts of some of
its founders and followers, ultimately adopted this soothsaying practice. In
some of its earlier formulations (for example in Marx’s analysis of the
character of the "coming social revolution") their predictions were testable,
and in fact falsified. Yet instead of accepting the refutations the followers
of Marx re-interpreted both the theory and the evidence in order to make
them agree. In this way they rescued the theory from refutation; but they
did so at the price of adopting a device which made it irrefutable. They
thus gave a "conventionalist twist" to the theory; and by this stratagem
they destroyed its much advertised claim to scientific status.
The two psycho-analytic theories were in a different class. They were
simply non-testable, irrefutable. There was no conceivable human
behaviour which could contradict them. This does not mean that Freud
and Adler were not seeing certain things correctly; I personally do not
doubt that much of what they say is of considerable importance, and may
well play its part one day in a psychological science which is testable. But
it does mean that those "clinical observations" which analysts naïvely
believe confirm their theory cannot do this any more than the daily
confirmations which astrologers find in their practice. And as for Freud’s
epic of the Ego, the Super-ego, and the Id, no substantially stronger claim
to scientific status can be made for it than for Homer’s collected stories
from Olympus. These theories describe some facts, but in the manner of
myths. They contain most interesting psychological suggestions, but not in
a testable form.
At the same time I realized that such myths may be developed, and
become testable; that historically speaking all—or very nearly all—
scientific theories originate from myths, and that a myth may contain
important anticipations of scientific theories. Examples are Empedocles’
theory of evolution by trial and error, or Parmenides’ myth of the
unchanging block universe in which nothing ever happens and which, if we
add another dimension, becomes Einstein’s block universe (in which, too,
nothing ever happens, since everything is, four-dimensionally speaking,
determined and laid down from the beginning). I thus felt that if a theory is
found to be non-scientific, or "metaphysical" (as we might say), it is not
thereby found to be unimportant, or insignificant, or "meaningless," or
"nonsensical." But it cannot claim to be backed by empirical evidence in
the scientific sense—although it may easily be, in some genetic sense, the
"result of observation."
(There were a great many other theories of this pre-scientific or pseudo-scientific character, some of them, unfortunately, as influential as the
Marxist interpretation of history; for example, the racialist interpretation of
history—another of those impressive and all-explanatory theories which
act upon weak minds like revelations.)
Thus the problem which I tried to solve by proposing the criterion of
falsifiability was neither a problem of meaningfulness or significance, nor a
problem of truth or acceptability. It was the problem of drawing a line (as
well as this can be done) between the statements, or systems of
statements, of the empirical sciences, and all other statements—whether
they are of a religious or of a metaphysical character, or simply pseudo-scientific. Years later—it must have been in 1928 or 1929—I called this
first problem of mine the "problem of demarcation." The criterion of
falsifiability is a solution to this problem of demarcation, for it says that
statements or systems of statements, in order to be ranked as scientific,
must be capable of conflicting with possible, or conceivable, observations.
5.3 ASSESSMENT
Pragmatic
http://www.music-cog.ohio-state.edu/Music829C/Notes/Popper.critique.html :
Chalmers notes:
An embarrassing historical fact for falsificationists is that if their
methodology had been strictly adhered to by scientists then
those theories generally regarded as being among the best
examples of scientific theories would never have been
developed because they would have been rejected in their
infancy. (Chalmers, What is this thing called Science ?, 2nd ed.,
1982 : 66).
The assumption here is that the history of science provides a valuable test of
methodological principles. If a modern methodology cannot account for past
"successes" then the methodology must be false.
Popper takes issue with this view. He counters that the progress of science
might have been faster if historical figures had been falsificationists.
Methodology doesn't necessarily have to account for history.
Can one falsify a ‘might have been’ ?
5.3.1 Quine-Duhem thesis
The Quine-Duhem thesis holds that if a ‘falsifying’ observation is made, it is
impossible to determine whether the theory is false – or the observation is
false.
http://www.csicop.org/si/9703/end.html :
hypotheses have testable consequences only in the context of certain
background assumptions. If a test fails, it is always possible to maintain the
hypothesis in question by rejecting one or more of the background
assumptions.
Let the theory or hypothesis be that all swans are white. We can take this as
an example of a scientific theory or hypothesis, even though real-life scientific
theories or hypotheses involve more or less complex relationships between
variables of the kind we met with in Boyle’s Law.
http://www.music-cog.ohio-state.edu/Music829C/Notes/Popper.critique.html :
Now consider the problem raised by Duhem and Quine. Suppose that an
observer observes a black swan. Duhem and Quine would note that this
observation is consistent with falsifying any one of the following statements:
Theory: "All swans are white."
Observation conditions: "The lighting was appropriate for accurate color observation."
Observer disposition: "The observer is reliable."
Observer language: "The observer understands the word `black'."
Observer state: "The observer is not white/black color blind."
Observer character: "The observer is not prone to make jokes."
Definitional: "This animal is a swan."
Definitional: "This color is black."
Situational: "The feathers have not been painted/dyed black."
Methodology: "Falsificationism is a good methodology."
Although it is indeterminate which statement is false, the observation is
nevertheless valuable in constraining the possibilities.
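The logical point can be put in a toy sketch : the observable prediction follows only from the hypothesis taken together with background assumptions, so a failed test falsifies the conjunction without identifying the false conjunct. (The statements listed are illustrative, drawn from the swan example.)

```python
# Toy Quine-Duhem sketch: a test bears on the hypothesis only in conjunction
# with background assumptions. A contrary observation shows that at least one
# conjunct is false, but logic alone does not say which.
hypothesis = "All swans are white."
auxiliaries = [
    "The lighting was appropriate for accurate color observation.",
    "The observer is reliable.",
    "This animal is a swan.",
    "The feathers have not been painted/dyed black.",
]

observation_contradicts_prediction = True  # a black swan is reported

if observation_contradicts_prediction:
    suspects = [hypothesis] + auxiliaries
    # The falsity is distributed over the whole conjunction:
    print(f"At least one of {len(suspects)} statements is false.")
```

Note that the hypothesis itself is only one suspect among five; rejecting any auxiliary instead would preserve it.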
A falsificationist might point out that, in principle, one can resolve which
hypothesis is incorrect by carrying out further falsifying experiments. For
example, the above issues can be addressed by experimentally testing
various supplementary hypotheses: E.g.
Hypothesis: "The observer understands the word `black'."
Experiment: Show different color chips to the experimenter and observe descriptive language.

Hypothesis: "The observer is reliable."
Experiment: Send another observer to make observations.

Hypothesis: "The feathers have not been painted/dyed black."
Experiment: Pluck out some feathers and observe whether they grow back as black in color.

Hypothesis: "This animal is not a swan."
Experiment: Try to breed this swan with another swan. If there are offspring, then the statement "this animal is not a swan" is false.
In this last case, notice the "reversing" of the original hypothesis -- "This
animal is a swan." Biologists define a species as a breeding population that
cannot breed with other populations. Since an animal can fail to breed for
other reasons (infertility, etc.), successful breeding of the swan falsifies the
reverse statement: "This animal is not a swan."
The problem with the falsificationist’s reply is that ‘testing various
supplementary hypotheses’ raises just the same problems; the Quine-Duhem
thesis applies to them in turn.
The Quine-Duhem thesis can appear to be mere intellectual slipperiness and tergiversation. But
it is deeper than that. Two quotes, one from Duhem and the other from Quine, may give
the thesis extra depth for you :
Duhem :
The physicist can never subject an isolated hypothesis to experimental test, but
only a whole group of hypotheses; when an experiment is in disagreement with
his prediction, what he learns is that at least one of the hypotheses constituting
this group is unacceptable and ought to be modified; but the experiment does
not designate which one should be changed (P. Duhem, The Aim and Structure of
Physical Theory, tr. P.P. Wiener, Princeton : Princeton University Press, 1954 : 187.
Original text (French) : La théorie physique, son objet et sa structure, 1906.)
W.V.O. Quine :
The totality of our so-called knowledge or beliefs, from the most casual matters of
geography and history to the profoundest laws of atomic physics or even of pure
mathematics and logic, is a man-made fabric which impinges on experience only
along the edges. Or, to change the figure, total science is like a field of force whose
boundary conditions are experience. A conflict with experience at the periphery
occasions readjustments in the interior of the field. Truth values have to be
redistributed over some of our statements. Re-evaluation of some statements entails
re-evaluation of others, because of their logical interconnections - the logical laws
being in turn simply certain further statements of the system, certain further elements
of the field. Having re-evaluated one statement we must re-evaluate some others,
which may be statements logically connected with the first or may be the statements
of logical connections themselves. But the total field is so underdetermined by its
boundary conditions, experience, that there is much latitude of choice as to
what statements to re-evaluate in the light of any single contrary experience. No
particular experiences are linked with any particular statements in the interior of the
field, except indirectly through considerations of equilibrium affecting the field as a
whole (Quine, From a Logical Point of View, 1961 : 42-3).
ENDNOTE
1. APPROACHES TO THE CRITIQUE OF POPPER’S FALSIFICATIONISM
http://www.stephenjaygould.org/ctrl/gardner_popper.html :
A Skeptical Look at Karl Popper
by Martin Gardner
The following essay was published in Skeptical Inquirer (2001).
"Sir Karl Popper / Perpetrated a whopper / When he boasted to the world that
he and he alone / Had toppled Rudolf Carnap from his Vienna Circle throne."
—a clerihew by Armand T. Ringer
Sir Karl Popper, who died in 1994, was widely regarded as England's greatest
philosopher of science since Bertrand Russell, indeed a philosopher of
worldwide eminence. Today his followers among philosophers of science are
a diminishing minority, convinced that Popper's vast reputation is enormously
inflated. I agree. I believe that Popper's reputation was based mainly on his
persistent but misguided efforts to restate common-sense views in a novel
language that is rapidly becoming out of fashion. Consider Popper's best
known claim: that science does not proceed by "induction"—that is, by finding
confirming instances of a conjecture — but rather by falsifying bold, risky
conjectures. Confirmation, he argued, is slow and never certain. By contrast,
a falsification can be sudden and definitive. Moreover, it lies at the heart of
the scientific method.
A familiar example of falsification concerns the assertion that all crows are
black. Every find of another black crow obviously confirms the theory, but
there is always the possibility that a non-black crow will turn up. If this
happens, the conjecture is instantly discredited. The more often a conjecture
passes efforts to falsify it, Popper maintained, the greater becomes its
"corroboration," although corroboration is also uncertain and can never be
quantified by degree of probability. Popper's critics insist that "corroboration"
is a form of induction, and Popper has simply sneaked induction in through a
back door by giving it a new name. David Hume's famous question was "How
can induction be justified?" It can't be, said Popper, because there is no such
thing as induction!
There are many objections to this startling claim. One is that falsifications are
much rarer in science than searches for confirming instances. Astronomers
look for signs of water on Mars. They do not think they are making efforts to
falsify the conjecture that Mars never had water.
Falsifications can be as fuzzy and elusive as confirmations. Einstein's first
cosmological model was a universe as static and unchanging as Aristotle's.
Unfortunately, the gravity of suns would make such a universe unstable. It
would collapse. To prevent this, Einstein, out of thin air, proposed the bold
conjecture that the universe, on its pre-atomic level, harbored a mysterious,
undetected repulsive force he called the "cosmological constant." When it
was discovered that the universe is expanding, Einstein considered his
conjecture falsified. Indeed, he called it "the greatest blunder of my life."
Today, his conjecture is back in favor as a way of explaining why the universe
seems to be expanding faster than it should. Astronomers are not trying to
falsify it; they are looking for confirmations.
Falsification may be based on faulty observation. A man who claims he saw a
white crow could be mistaken or even lying. As long as observations of black
crows continue, they can be taken in two ways: as confirmations of "all crows are
black," or disconfirmations of "some crows are not black." Popper recognized
— but dismissed as unimportant — that every falsification of a conjecture is
simultaneously a confirmation of an opposite conjecture, and every
conforming instance of a conjecture is a falsification of an opposite
conjecture.
Consider the current hypothesis that there is a quantum field called the Higgs
field, with its quantized particle. If a giant atom smasher some day, perhaps
soon, detects a Higgs, it will confirm the conjecture that the field exists. At the
same time it will falsify the opinion of some top physicists, Oxford's Roger
Penrose for one, that there is no Higgs field.
To scientists and philosophers outside the Popperian fold, science operates
mainly by induction (confirmation), and also and less often by disconfirmation
(falsification). Its language is almost always one of induction. If Popper bet on
a certain horse to win a race, and the horse won, you would not expect him to
shout, "Great! My horse failed to lose!"
Astronomers are now finding compelling evidence that smaller and smaller
planets orbit distant suns. Surely this is inductive evidence that there may be
Earth-sized planets out there. Why bother to say, as each new and smaller
planet is discovered, that it tends to falsify the conjecture that there are no
small planets beyond our solar system? Why scratch your left ear with your
right hand? Astronomers are looking for small planets. They are not trying to
refute a theory any more than physicists are trying to refute the conjecture
that there is no Higgs field. Scientists seldom attempt to falsify. They are
inductivists who seek positive confirmations.
At the moment the widest of all speculations in physics is superstring theory.
It conjectures that all basic particles are different vibrations of extremely tiny
loops of great tensile strength. No superstring has yet been observed, but the
theory has great explanatory power. Gravity, for example, is implied as the
simplest vibration of a superstring. Like prediction, explanation is an important
aspect of induction. Relativity, for instance, not only made rafts of successful
predictions but explained data previously unexplained. The same is true of
quantum mechanics. In both fields researchers used classical induction
procedures. Few physicists say they are looking for ways to falsify superstring
theory. They are instead looking for confirmations. Ernest Nagel, Columbia
University's famous philosopher of science, in his Teleology Revisited and
Other Essays in the Philosophy and History of Science (1979), summed it up
this way: "[Popper's] conception of the role of falsification . . . is an
oversimplification that is close to being a caricature of scientific procedures."
For Popper, what his chief rival Rudolf Carnap called a "degree of
confirmation"—a logical relation between a conjecture and all relevant
evidence—is a useless concept. Instead, as I said earlier, the more tests for
falsification a theory passes, the more it gains in "corroboration." It's as if
someone claimed that deduction doesn't exist, but of course statements can
logically imply other statements. Let's invent a new term for deduction, such
as "justified inference." It's not so much that Popper disagreed with Carnap
and other inductivists as that he restated their views in a bizarre and
cumbersome terminology.
To Popper's credit he was, like Russell, and almost all philosophers,
scientists, and ordinary people, a thoroughgoing realist in the sense that he
believed the universe, with all its intricate and beautiful mathematical
structures, was "out there," independent of our feeble minds. In no way can
the laws of science be likened to traffic regulations or fashions in dress that
vary with time and place. Popper would have been as appalled as Russell by the
crazy views of today's social constructivists and postmodernists, most of them
French or American professors of literature who know almost nothing about
science.
Scholars unacquainted with the history of philosophy often credit Popper for
being the first to point out that science, unlike math and logic, is never
absolutely certain. It is always corrigible, subject to perpetual modification.
This notion of what the American philosopher Charles Peirce called the
"fallibilism" of science goes back to ancient Greek skeptics, and is taken for
granted by almost all later thinkers.
In Quantum Theory and the Schism in Physics (1982) Popper defends at
length his "propensity theory" of probability. A perfect die, when tossed, has
the propensity to show each face with equal probability. Basic particles, when
measured, have a propensity to acquire, with specific probabilities, such
properties as position, momentum, spin and so on. Here again Popper is
introducing a new term which says nothing different from what can be better
said in conventional terminology.
In my opinion Popper's most impressive work, certainly his best known, was
his two-volume The Open Society and Its Enemies (1945). Its central theme,
that open democratic societies are far superior to closed totalitarian regimes,
especially Marxist ones, was hardly new, but Popper defends it with powerful
arguments and awesome erudition. In later books he attacks what he calls
"historicism," the belief that there are laws of historical change that enable
one to predict humanity's future. The future is unpredictable, Popper argued,
because we have free wills. Like William James, Popper was an indeterminist
who saw history as a series of unforeseeable events. In later years he liked to
distinguish between what he called three "worlds"—the external physical
universe, the inner world of the mind, and the world of culture. Like Carnap
and other members of the Vienna Circle, he had no use for God or an
afterlife.
Karl Raimund Popper was born in Vienna in 1902 where he was also
educated. His parents were Jewish, his father a wealthy attorney, his mother
a pianist. For twenty years he was a professor of logic and scientific method
at the London School of Economics. In 1965 he was knighted by the Crown.
I am convinced that Popper, a man of enormous egotism, was motivated by
an intense jealousy of Carnap. It seems that every time Carnap expressed an
opinion, Popper felt compelled to come forth with an opposing view, although
it usually turned out to be the same as Carnap's but in different language.
Carnap once said that the distance between him and Popper was not
symmetrical. From Carnap to Popper it was small, but the other way around it
appeared huge. Popper actually believed that the movement known as logical
positivism, of which Carnap was leader, had expired because he, Popper,
had single-handedly killed it!
I have not read Popper's first and only biography, Karl Popper: The Formative
Years (1902-1945), by Malachi Haim Hacohen (2000). Judging by the reviews
it is an admirable work. David Papineau, a British philosopher, reviewed it for
The New York Times Book Review (November 12, 2000). Here are his harsh
words about Popper's character and work:
By Hacohen's own account, Popper was a monster, a moral prig. He
continually accused others of plagiarism, but rarely acknowledged his own
intellectual debts. He expected others to make every sacrifice for him, but did
little in return. In Hacohen's words, "He remained to the end a spoiled child
who threw temper tantrums when he did not get his way." Hacohen is ready
to excuse all this as the prerogative of genius. Those who think Popper a
relatively minor figure are likely to take a different view.
When Popper wrote "Logik der Forschung," he was barely thirty. Despite its
flawed center, it was full of good ideas, from perhaps the most brilliant of the
bright young philosophers associated with the Vienna Circle. But where the
others continued to learn, develop and in time exert a lasting influence on the
philosophical tradition, Popper knew better. He refused to revise his
falsificationism, and so condemned himself to a lifetime in the service of a
bad idea.
Popper's great and tireless efforts to expunge the word induction from
scientific and philosophical discourse have utterly failed. Except for a small but
noisy group of British Popperians, induction is just too firmly embedded in the
way philosophers of science and even ordinary people talk and think.
Confirming instances underlie our beliefs that the Sun will rise tomorrow, that
dropped objects will fall, that water will freeze and boil, and a million other
events. It is hard to think of another philosophical battle so decisively lost.
Readers interested in exploring Popper's eccentric views will find, in addition
to his books and papers, most helpful the two-volume Philosophy of Karl
Popper (1970), in the Library of Living Philosophers, edited by Paul Arthur
Schilpp. The book contains essays by others, along with Popper's replies and
an autobiography. For vigorous criticism of Popper, see David Stove's Popper
and After: Four Modern Irrationalists (the other three are Imre Lakatos,
Thomas Kuhn, and Paul Feyerabend), and Stove's chapter on Popper in his
posthumous Against the Idols of the Age (1999) edited by Roger Kimball. See
also Carnap's reply to Popper in The Philosophy of Rudolf Carnap (1963),
another volume in The Library of Living Philosophers. Of many books by
Popperians, one of the best is Critical Rationalism (1994), a skillful defense of
Popper by his top acolyte.
http://en.wikipedia.org/wiki/Falsificationism :
Naïve falsification
Falsifiability was first developed by Karl Popper in the 1930s. Popper noticed
that two types of statements are of particular value to scientists. The first are
statements of observations, such as 'this is a white swan'. Logicians call
these statements singular existential statements, since they assert the
existence of some particular thing. They can be parsed in the form: There is
an x which is a swan and x is white.
The second type of statement of interest to scientists categorizes all
instances of something, for example 'All swans are white'. Logicians call
these statements universal. They are usually parsed in the form: For all x, if x
is a swan then x is white.
Scientific laws are commonly supposed to be of the second type. Perhaps the
most difficult question in the methodology of science is: how does one move
from observations to laws? How can one validly infer a universal statement
from any number of existential statements?
Inductivist methodology supposed that one can somehow move from a series
of singular existential statements to a universal statement. That is, that one
can move from 'this is a white swan', 'that is a white swan', and so on, to a
universal statement such as 'all swans are white'. This method is clearly
logically invalid, since it is always possible that there may be a non-white
swan that has somehow avoided observation. Yet some philosophers of
science claim that science is based on such an inductive method.
Popper held that science could not be grounded on such an invalid inference.
He proposed falsification as a solution to the problem of induction. Popper
noticed that although a singular existential statement such as 'there is a white
swan' cannot be used to affirm a universal statement, it can be used to show
that one is false: the singular existential observation of a black swan serves to
show that the universal statement 'all swans are white' is false - in logic this is
called modus tollens. 'There is a black swan' implies 'there is a non-white
swan' which in turn implies 'there is something which is a swan and which is
not white', hence 'all swans are white' is false, because that is the same as
'there is nothing which is a swan and which is not white'.
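The asymmetry this paragraph describes can be illustrated in a few lines of Python (my own sketch, not from the original text): a single counterexample falsifies the universal statement, while any number of white swans leaves it merely unrefuted.

```python
# The universal statement 'all swans are white', checked over a set of
# observed swans, and its falsification by a single black swan.

def all_swans_white(observed_swans):
    return all(colour == "white" for colour in observed_swans)

observations = ["white", "white", "white"]
assert all_swans_white(observations)      # consistent so far, but not proven:
                                          # unobserved swans remain

observations.append("black")              # one counterexample...
assert not all_swans_white(observations)  # ...suffices to falsify the universal
```

No run of the first check, however long, establishes the universal for unobserved swans; the second check settles its falsity at once, which is the logical point behind modus tollens here.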
Although the logic
of naïve falsification is valid, it is rather limited. Popper drew attention to these
limitations in The Logic of Scientific Discovery, in response to anticipated
criticism from Duhem and Carnap. W. V. Quine is also well-known for his
observation in his influential essay, "Two Dogmas of Empiricism" (which is
reprinted in From a Logical Point of View), that nearly any statement can be
made to fit with the data, so long as one makes the requisite "compensatory
adjustments". In order to logically falsify a universal, one must find a true
falsifying singular statement. But Popper pointed out that it is always possible
to change the universal statement or the existential statement so that
falsification does not occur. On hearing that a black swan has been observed
in Australia, one might introduce the ad hoc hypothesis, 'all swans are white
except those found in Australia'; or one might adopt another, more cynical
view about some observers, 'Australian ornithologists are incompetent'. As
Popper put it, a decision is required on the part of the scientist to accept or
reject the statements that go to make up a theory or that might falsify it. At
some point, the weight of the ad hoc hypotheses and disregarded falsifying
observations will become so great that it becomes unreasonable to support
the base theory any longer, and a decision will be made to reject it.
Falsificationism
In place of naïve falsification, Popper envisioned science as evolving by
the successive rejection of falsified theories, rather than falsified
statements. Falsified theories are to be replaced by theories which can
account for the phenomena which falsified the prior theory, that is, with
greater explanatory power. Thus, Aristotelian mechanics explained
observations of objects in everyday situations, but was falsified by Galileo’s
experiments, and was itself replaced by Newtonian mechanics which
accounted for the phenomena noted by Galileo (and others). Newtonian
mechanics' reach included the observed motion of the planets and the
mechanics of gases. Or at least most of them; the size of the precession of
the orbit of Mercury wasn't predicted by Newtonian mechanics, but was by
Einstein's general relativity. The Youngian wave theory of light (i.e., waves
carried by the luminiferous ether) replaced Newton's (and many of the
Classical Greeks') particles of light but in its turn was falsified by the
Michelson-Morley experiment, whose results were eventually understood as
incompatible with an ether and was superseded by Maxwell's electrodynamics
and Einstein's special relativity, which did account for the new phenomena. At
each stage, experimental observation made a theory untenable (i.e., falsified
it) and a new theory was found which had greater 'explanatory power' (i.e.,
could account for the previously unexplained phenomena), and as a result
provided greater opportunity for its own falsification.
Naïve falsificationism is an unsuccessful attempt to prescribe a rationally
unavoidable method for science. Falsificationism proper, on the other hand, is
a prescription of a way in which scientists ought to behave as a matter of
choice.
Popper's swan argument
One notices a white swan; from this one
can conclude:
At least one swan is white.
From this, one may wish to infer that:
All swans are white.
However, to prove this, one must find all the swans in the world and verify that
they are white.
As it turns out, not all swans are white. By finding a black swan, one has
falsified the statement all swans are white; it is not true.
Formal logical arguments
The falsification of theories occurs through modus tollens, via some
observation. Suppose some theory T implies an observation O:

T → O

An observation conflicting with O, however, is made:

¬O

So by modus tollens,

¬T
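That this inference pattern is valid can be checked mechanically by enumerating its truth table; here is a short Python sketch (my addition, not part of the original text):

```python
# Verify that modus tollens is valid: in every row of the truth table
# where both premises (T implies O) and (not O) are true, the conclusion
# (not T) is true as well.
from itertools import product

def implies(p, q):
    # material implication: false only when p is true and q is false
    return (not p) or q

valid = all(
    (not t)                          # the conclusion must hold...
    for t, o in product([True, False], repeat=2)
    if implies(t, o) and not o       # ...in every row where both premises hold
)
assert valid
```

Only the row T = False, O = False satisfies both premises, and in that row the conclusion ¬T is true, so the argument form is valid.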
The criterion of demarcation
Popper proposed falsification as a way of determining if a theory is scientific
or not. If a theory is falsifiable, then it is scientific; if it is not falsifiable, then it
is not science. Popper uses this criterion of demarcation to draw a sharp line
between scientific and unscientific theories. Some have taken this principle to
an extreme to cast doubt on the scientific validity of many disciplines (such as
macroevolution and cosmology). Falsifiability was one of the criteria used by
Judge William Overton to determine that 'creation science' was not scientific
and should not be taught in Arkansas public schools.
In the philosophy of science, verificationism (also known as the verifiability
theory of meaning) held that a statement must be in principle empirically
verifiable in order to be both meaningful and scientific. This was an essential
feature of the logical empiricism of the so-called Vienna Circle that featured
such philosophers as Moritz Schlick, Rudolf Carnap, Otto Neurath, and Hans
Reichenbach. After Popper, verifiability came to be replaced by falsifiability as
the criterion of demarcation. In other words, in order to be scientific, a
statement had to be, in principle, falsifiable. Popper noticed that the
philosophers of the Vienna Circle had mixed two different problems, and had
accordingly given a single solution to both of them, namely verificationism. In
opposition to this view, Popper emphasized that a theory might well be
meaningful without being scientific, and that, accordingly, a criterion of
meaningfulness may not necessarily coincide with a criterion of demarcation.
His own falsificationism, thus, is not only an alternative to verificationism, it is
also an acknowledgment of the conceptual distinction that previous theories
had ignored.
Falsifiability is a property of statements and theories, and is itself neutral. As a
demarcation criterion, it seeks to take this property and make it a base for
affirming the superiority of falsifiable theories over non-falsifiable ones as a
part of science, in effect setting up a political position that might be called
falsificationism. Much that would be considered meaningful and useful,
however, is not falsifiable. Certainly non-falsifiable statements have a role in
scientific theories themselves. The Popperian criterion provides a definition of
science that excludes much that is of value; it does not provide a way to
distinguish meaningful statements from meaningless ones.
It is nevertheless very useful to know if a statement or theory is falsifiable, if
for no other reason than that it provides us with an understanding of the ways
in which one might assess the theory. One might at the least be saved from
attempting to falsify a non-falsifiable theory, or come to see an unfalsifiable
theory as unsupportable.
Criticism
Thomas Kuhn’s influential book The Structure of Scientific Revolutions
argued that scientists work within a conceptual paradigm that determines the
way in which they view the world. Scientists will go to great lengths to defend
their paradigm against falsification, by the addition of ad hoc hypotheses to
existing theories. Changing one's 'paradigm' is not easy, and only through
some pain and angst does science (at the level of the individual scientist)
change paradigms.
Some falsificationists saw Kuhn’s work as a vindication, since it showed that
science progressed by rejecting inadequate theories. More commonly, it has
been seen as showing that sociological factors, rather than adherence to a
strict, logically obligatory method, play the determining role in deciding which
scientific theory is accepted. This was seen as a profound threat to those who
seek to show that science has a special authority in virtue of the methods that
it employs.
Imre Lakatos attempted to explain Kuhn’s work in falsificationist terms by
arguing that science progresses by the falsification of research programs
rather than the more specific universal statements of naïve falsification. In
Lakatos' approach, a scientist works within a research program that
corresponds roughly with Kuhn's 'paradigm'. Whereas Popper rejected the
use of ad hoc hypotheses as unscientific, Lakatos accepted their place in the
development of new theories.
Lakatos also brought the notion of falsifiability to bear on the discipline of
mathematics in Proofs and Refutations. The long-standing debate over
whether mathematics is a science depends in part on the question of whether
proofs are fundamentally different from experiments. Lakatos argued that
mathematical proofs and definitions evolve through criticism and
counterexample in a manner very similar to how a scientific theory evolves in
response to experiments.
Paul Feyerabend examined the history of science with a more critical eye,
and ultimately rejected any prescriptive methodology at all. He went beyond
Lakatos’ argument for ad hoc hypotheses, to say that science would not have
progressed without making use of any and all available methods to support
new theories. He rejected any reliance on a scientific method, along with any
special authority for science that might derive from such a method. Rather, he
claimed, ironically, that if one is keen to have a universally valid
methodological rule, anything goes would be the only candidate. For
Feyerabend, any special status that science might have derives from the
social and physical value of the results of science rather than its method.
Following from Feyerabend, the whole "Popper project" to define science
around one particular methodology—which accepts nothing except itself—is a
perverse example of what he supposedly decried: a closed circle argument.
The Popperian criterion itself is not falsifiable.
Moreover, it makes Popper effectively a philosophical nominalist, which has
nothing to do with empirical sciences at all.
Although Popper's claim of the singular characteristic of falsifiability does
provide a way to replace invalid inductive thinking (empiricism) with deductive,
falsifiable reasoning, it appeared to Feyerabend that doing so is neither
necessary for, nor conducive to, scientific progress.
Case Studies
Multiple universes from the Anthropic Principle and the existence of intelligent
life (see SETI) beyond Earth are potentially non-falsifiable ideas. They are
"true-ifiable" because they are potentially detectable. Lack of detection does
not mean other universes or non-human intelligent life does not exist; it only
means they have not been detected. Yet, both of these ideas are generally
considered scientific ideas. Some suggest that an idea need be only one of
the two, falsifiable or "true-ifiable", not both, to be considered a scientific idea.
From scientists
Many actual physicists, including Nobel Prize winner Steven Weinberg and
Alan Sokal (Fashionable Nonsense), have criticized falsifiability on the
grounds that it does not accurately describe the way science really works.
Take astrology, an example most would agree is not science. Astrology
constantly makes falsifiable predictions -- a new set is printed every day in the
newspapers -- yet few would argue this makes it scientific.
One might respond that astrological claims are rather vague and can be
excused or reinterpreted. But the same is true of actual science: a physical
theory predicts that performing a certain operation will result in a number in a
certain range. Nine times out of ten it does; the tenth time the physicists blame
a problem with the machine -- perhaps someone slammed the door too hard
or something else happened that shook the machine. Falsifiability does not
help us decide between these two cases.
In reality, of course, theories are used because of their successes, not
because of their failures. As Sokal writes, "When a theory successfully
withstands an attempt at falsification, a scientist will, quite naturally, consider
the theory to be partially confirmed and will accord it a greater likelihood or a
higher subjective probability. ... But Popper will have none of this: throughout
his life he was a stubborn opponent of any idea of 'confirmation' of a theory,
or even of its 'probability'. ... [but] the history of science teaches us that
scientific theories come to be accepted above all because of their
successes."
Some examples
Claims about verifiability and falsifiability have been used to criticize various
controversial views. Examining these examples shows the usefulness of
falsifiability by showing us where to look when attempting to criticise a theory.
Non-falsifiable theories can usually be reduced to a simple uncircumscribed
existential statement, such as there exists a green swan. It is entirely possible
to verify that the theory is true, simply by producing the green swan. But since
this statement does not specify when or where the green swan exists, it is
simply not possible to show that the swan does not exist, and so it is
impossible to falsify the statement.
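The asymmetry between verifying and falsifying an uncircumscribed existential can be mirrored in code (an illustrative sketch of my own): one sighting verifies "there exists a green swan", but no finite list of non-green swans can refute it, since the green swan may simply be somewhere we have not looked.

```python
# The uncircumscribed existential 'there exists a green swan': a single
# positive instance verifies it, but a record of non-green swans, however
# long, leaves it open rather than refuting it.

def verified(observed_swans):
    return any(colour == "green" for colour in observed_swans)

assert not verified(["white", "black", "white"])  # unverified, yet not refuted
assert verified(["white", "green"])               # one instance settles it
```

The universal statement of the earlier sections has exactly the reverse profile: one counterexample refutes it, and no finite list of confirming instances proves it.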
That such theories are unfalsifiable says nothing about either their validity or
truth. But it does assist us in determining to what extent such statements
might be evaluated. If evidence cannot be presented to support a case, and
yet the case cannot be shown to be indeed false, not much credence can be
given to such a statement.
Mathematics
Mathematical and logical statements are typically regarded as unfalsifiable,
since they are tautologies, not existential or universal statements. For
example, "all bachelors are male" and "all green things are green" are
necessarily true (or given) without any knowledge of the world; given the
meaning of the terms used, they are tautologies.
Proving mathematical theorems involves reducing them to tautologies, which
can be mechanically proven as true given the axioms of the system, or by
reducing the negation to a contradiction. Mathematical theorems are
unfalsifiable, since this process, coupled with the notion of consistency,
eliminates the possibility of counterexamples—a process that the philosophy
of mathematics studies in depth as a separate matter.
How a mathematical formula might apply to the physical world, however (as a
model), is a physical question, and thus testable, within certain limits. For
example, the theory that "all objects follow a parabolic path when thrown into
the air" is falsifiable (and, in fact, false; think of a feather—a better statement
would be: "all objects follow a parabolic path when thrown in a vacuum and
acted upon by gravity", which is itself falsified when considering paths that are
a measurable proportion of the planet's radius).
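The contrast can be illustrated numerically. The following sketch (my own; the masses and drag coefficient are invented) integrates simple projectile motion with and without air drag : in a vacuum the path is the drag-free parabola, but a light, high-drag object falls well short of the parabolic range, which is exactly the kind of observation that falsifies the unqualified claim.

```python
def trajectory(v0x, v0y, mass, drag_coeff, dt=0.001, g=9.81):
    """Euler-integrate 2D motion with linear drag; return horizontal range."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    while y >= 0.0:
        ax = -(drag_coeff / mass) * vx      # drag opposes horizontal motion
        ay = -g - (drag_coeff / mass) * vy  # gravity plus vertical drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

vacuum_range = trajectory(10.0, 10.0, mass=0.1, drag_coeff=0.0)      # pure parabola
feather_range = trajectory(10.0, 10.0, mass=0.005, drag_coeff=0.01)  # 'feather'
print(vacuum_range > feather_range)  # True: the feather deviates from the parabola
```

The numbers here are placeholders; the point is only that the universal claim yields observable predictions that a single deviant case can refute.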
Ethics
Many philosophers have held that claims about morality (such as "murder is
evil" and "John was wrong to steal that money") are not part of scientific
inquiry; their function in language is not even to state facts, but simply to
express certain moral sentiments. Hence they are not falsifiable.
Theism
On the view of some, theism is not falsifiable, since the existence of God is
typically asserted without sufficient conditions to allow a falsifying observation.
If God is a transcendental being that can escape the realm of the observable,
claims about God's non-existence cannot be supported by a lack of
observation. It is quite consistent for a theist to agree that the existence of
God is unfalsifiable, and that the proposition is not scientific, but to still claim
that God exists. This is because the theist claims to have presentable
evidence that verifies the existence of God. This is, of course, a matter of
interest for anyone who places stock in witnesses who claim to have seen
God or ideas like natural theology--the argument from design and other a
posteriori arguments for the existence of God. (See non-cognitivism.)
However, arguments relating to God's alleged actions, and eye-witness
accounts of them, rather than to God's existence, may be falsifiable. See
nontheism for further information.
Conspiracy theories
Some so-called "conspiracy theories," at least as defended by some people,
are essentially unfalsifiable because of their logical structure. Conspiracy
theories usually take the form of uncircumscribed existential statements,
alleging the existence of some action or object without specifying the place or
time at which it can be observed. Failure to observe the phenomenon can
then always be the result of looking in the wrong place or looking at the wrong
time. Conspiracy theorists can, and often do, defend their position by claiming
that lying and other forms of fabrication are, in fact, a common tool of
governments and other powerful players and that evidence suggesting that a
conspiracy did not occur has been fabricated.
Economics
Many viewpoints in economics are often accused of not being falsifiable,
mainly by sociologists and other social scientists.
The most common argument is made against rational expectations theories,
which work under the assumption that people act to maximize their utility.
However, under this viewpoint, it is impossible to disprove the fundamental
theory that people are utility-maximizers. The political scientist Graham T.
Allison, in his book Essence of Decision, attempted to both quash this theory
and substitute other possible models of behavior.
Historicism
Theories of history or politics which allegedly predict the future course of
history have a logical form that renders them neither falsifiable nor verifiable.
They claim that for every historically significant event, there exists an
historical or economic law that determines the way in which events
proceeded. Failure to identify the law does not mean that it does not exist, yet
an event that satisfies the law does not prove the general case. Evaluation of
such claims is at best difficult. On this basis, Popper himself argued that
neither Marxism nor psychoanalysis was a science, although both made such
claims. Again, this does not mean that theories of these types are
necessarily invalid. Popper considered falsifiability a test of whether theories
are scientific, not of whether theories are valid.
Memetics
The model of cultural evolution known as memetics is as yet unfalsifiable,
as its practitioners have been unable to determine what constitutes a single
meme, and more importantly, what determines the survival of a meme. For
the theory to be falsifiable, more exact accounts of this are needed, as
currently every outcome of cultural evolution can be explained memetically by
suitable choice of competing memes. This does not, however, mean that all
epidemiological theories of social and cultural spread are unscientific, as
some of them have (mostly due to smaller scope) more exact terms of
transmission and survival.
Solipsism
In philosophy, solipsism is, in essence, non-falsifiable. Solipsism claims that
the Universe exists entirely in one's own mind. This can straightforwardly be
seen not to be falsifiable, because whatever evidence one might adduce that
is contrary to solipsism can be, after all, dismissed as something that is "in
one's mind." In other words, there is no evidence that one could possibly
adduce that would be inconsistent with the proposition that everything that
exists, exists in one's own mind. This view is somewhat similar to Cartesian
scepticism, and indeed, Cartesian scepticism has been rejected as
unfalsifiable as well by many philosophers.
Physical laws
The laws of physics are an interesting case. Occasionally it is suggested that
the most fundamental laws of physics, such as "force equals mass times
acceleration" (F=ma), are not falsifiable because they are definitions of basic
physical concepts (in the example, of "force"). More usually, they are treated
as falsifiable laws, but it is a matter of considerable controversy in the
philosophy of science what to regard as evidence for or against the most
fundamental laws of physics. Isaac Newton's laws of motion in their original
form were falsified by experiments in the twentieth century (e.g., the anomaly
of the motion of Mercury, the behavior of light passing sufficiently close to a
star, the behavior of a particle being accelerated in a cyclotron, etc), and
replaced by a theory which predicted those phenomena, General Relativity,
though Newton's account of motion is still a good enough approximation for
most human needs. In the case of less fundamental laws, their falsifiability is
much easier to understand. If, for example, a biologist hypothesizes that, as a
matter of scientific law (though practising scientists will rarely actually state it
as such), only one certain gland produces a certain hormone, then the
hypothesis is falsified as soon as someone discovers an individual without the
gland but with the hormone occurring naturally in their body.
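The logic of this falsification can be sketched in a few lines (the records are hypothetical and the gland/hormone set-up is a placeholder) : one counterexample suffices.

```python
def falsified(records):
    """Return the first counterexample to 'only gland G produces hormone H', or None."""
    for person in records:
        if not person["has_gland"] and person["has_hormone"]:
            return person   # a single such case falsifies the universal claim
    return None             # no counterexample found (not the same as proof)

records = [
    {"name": "A", "has_gland": True,  "has_hormone": True},
    {"name": "B", "has_gland": True,  "has_hormone": False},
    {"name": "C", "has_gland": False, "has_hormone": True},  # the counterexample
]
print(falsified(records)["name"])  # C
```

Note the asymmetry: returning None never proves the hypothesis, whereas one positive return refutes it outright.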
The range of available testing apparatus is also sometimes an issue - when
Galileo showed Roman Catholic Church scholars the moons of Jupiter, there
was only one telescope on hand, and telescopes were a new technology, so
there was some debate about whether the moons were real or possibly an
artifact of the telescope or of the type of telescope. Fortunately, this type of
problem can usually be resolved in a short time, as it was in Galileo's case, by
the spread of technical improvements. Diversity of observing apparatus is
quite important to concepts of falsifiability, because presumably any observer
with any appropriate apparatus should be able to make the same observation
and so prove a thesis false.
References
Karl Popper, The Logic of Scientific Discovery (New York: Basic Books,
1959).
Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of
Chicago Press, 1962).
Paul Feyerabend, Against Method (London: New Left Books, 1975).
2. POPPER AND NEWTON
http://plato.stanford.edu/entries/popper/ :
As Lakatos has pointed out, Popper's theory of demarcation hinges quite
fundamentally on the assumption that there are such things as critical tests,
which either conclusively falsify a theory, or give it a strong measure of
corroboration. Popper himself is fond of citing, as an example of such a
critical test, the resolution, by Adams and Leverrier, of the problem which the
anomalous orbit of Uranus posed for nineteenth century astronomers. Both
men independently came to the conclusion that, assuming Newtonian
mechanics to be precisely correct, the observed divergence in the elliptical
orbit of Uranus could be explained if the existence of an eighth, as yet
unobserved outer planet was posited. Further, they were able, again within
the framework of Newtonian mechanics, to calculate the precise position of
the ‘new’ planet. Thus when subsequent research by Galle at the Berlin
observatory revealed that such a planet (Neptune) did in fact exist, and was
situated precisely where Adams and Leverrier had calculated, this was hailed
by all and sundry as a magnificent triumph for Newtonian physics: in
Popperian terms, Newton's theory had been subjected to a critical test, and
had passed with flying colours. Popper himself refers to this strong
corroboration of Newtonian physics as ‘the most startling and convincing
success of any human intellectual achievement’. Yet Lakatos flatly denies that
there are critical tests, in the Popperian sense, in science, and argues the
point convincingly by turning the above example of an alleged critical test on
its head. What, he asks, would have happened if Galle had not found the
planet Neptune? Would Newtonian physics have been abandoned, or would
Newton's theory have been falsified? The answer is clearly not, for Galle's
failure could have been attributed to any number of causes other than the
falsity of Newtonian physics (e.g. the interference of the earth's atmosphere
with the telescope, the existence of an asteroid belt which hides the new
planet from the earth, etc). The point here is that the
‘falsification/corroboration’ disjunction offered by Popper is far too logically
neat: non-corroboration is not necessarily falsification, and falsification of a
high-level scientific theory is never brought about by an isolated observation
or set of observations. Such theories are, it is now generally accepted, highly
resistant to falsification. They are falsified, if at all, Lakatos argues, not by
Popperian critical tests, but rather within the elaborate context of the research
programmes associated with them gradually grinding to a halt, with the result
that an ever-widening gap opens up between the facts to be explained, and
the research programmes themselves. (Lakatos, I. The Methodology of
Scientific Research Programmes, passim). Popper's distinction between the
logic of falsifiability and its applied methodology does not in the end do full
justice to the fact that all high-level theories grow and live despite the
existence of anomalies (i.e. events/phenomena which are incompatible with
the theories). The existence of such anomalies is not usually taken by the
working scientist as an indication that the theory in question is false; on the
contrary, he will usually, and necessarily, assume that the auxiliary
hypotheses which are associated with the theory can be modified to
incorporate, and explain, existing anomalies.
GLT : 23 March 2006
See also §7.2 below : Popper on hypothetical-deductive method.
Geoffrey Thomas
Geoffrey.thomas2@btinternet.com
6. LAWS AND EXPLANATION
Time now to make a start on our second topic : laws and explanation.
The basic issue is whether for every valid singular explanation in science
there is a covering law. In other words, is there implicit in, and underpinning,
every valid explanation a generalisation which is both lawlike and true ?
Note carefully : I am taking ‘lawlike’ in the sense of ‘essentially generalisable’.
There is another sense, perfectly okay for other purposes, in which a
generalisation is lawlike if it approximates to a law.
Primary reading :
C Hempel, Philosophy of Natural Science, ch. 5, 1966, 47-69.
M Stanford, ‘Explanation : the State of Play’, Cogito, 5, 1991, 172-5.
J Trusted, ‘Inadequacies of the DN Model’, Inquiry and Understanding, 1987,
123-9.
6.1 SINGULAR EXPLANATION
Then what is a singular explanation ? Well, take a singular causal
explanation. After a talk with the fire brigade I might say, observing the burnt-out shell of a building, ‘The short-circuit caused the fire’. This is quite different
in logical form from a statement like, ‘drunken driving causes accidents’,
which is explicitly lawlike. In the burnt-out building example I am offering to
explain a particular fire in terms of a specific cause at a given time and place.
No generalisations – no lawlike claims – appear to be involved. This is a
singular causal explanation. It would of course need to be supplemented to
make it a serious explanatory contender : e.g. ‘The short-circuit, occurring in a
building where there was no sprinkler system and where no night security
staff were on duty, caused the fire’. Even so, is there a universal
generalisation implicit in this explanation ? If we accept the principle of the
uniformity of nature, ‘same cause, same effect’, then we seem committed to
saying ‘And if the exact or relevantly similar conditions were repeated,
another short-circuit would cause another fire’. Said another way, if A causes
B, then isn’t there a covering law by which, if the same conditions are
repeated, another A-type event will cause another B-type event ?
Donald Davidson (1917 - 2003), one of the most respected and influential
American philosophers of the 20th century, claimed that :
… it does not follow that we must be able to dredge up a law if we know a
singular causal statement to be true; all that follows is that we know there
must be a covering law (Davidson, ‘Causal Relations’, Causation, ed. E. Sosa
& M Tooley, 1993 : 84).
6.2 NATURE OF LAWS
But we are helping ourselves to the idea of law here. What is a law ?
Standard answer : a law is a statement of a relationship between phenomena
which is both essentially generalisable and true. A generalisation as such,
even if true, need not be essentially generalisable. For instance, ‘All the people in
this room are less than 6 feet 4 inches tall’. This is a true generalisation. But it
is not essentially generalisable, because it does not entail counterfactuals. If
‘All the people in this room are less than 6 feet 4 inches tall’ were a lawlike
statement then it would support the claim, ‘If anybody were to be in this room,
they would be less than 6 feet 4 inches tall’. (A counterfactual is a conditional
statement of which the antecedent – the first bit – isn’t fulfilled. E.g., ‘If the
butter were heated then it would melt’. But the butter hasn’t been heated; its
being heated is contrary-to-fact. The whole statement is therefore a
counterfactual.) The statement ‘All the people in this room are less than 6 feet
4 inches tall’ is, in the jargon, an accidentally true generalisation, not a law.
Problems lurk in this appeal to counterfactuals in the specification of laws.
Simply said, ‘If anybody were to be in this room, they would be less than 6
feet 4 inches tall’ and ‘If the butter were heated then it would melt’ are
themselves understandable only as essentially generalisable statements. So
we’ve appealed to counterfactuals to help distinguish essentially
generalisable statements, and we now find we need essentially generalisable
statements to help distinguish counterfactuals.
But let’s suppose that something like this account of laws is right. Some
philosophers of science would add that a law is precise (cf. the precise
measurement and quantification of phenomena as part of the ‘narrow’ view
of science; see Introduction, §1), admits no exceptions (it has no escape
clauses or ceteris paribus (‘other things equal’) provisos), has empirical
(observable) consequences, and is confirmed by its instances (also from
the paradigm).
This is not the only possible concept of a scientific law but it has been a highly
influential one. See also the section ‘Physical laws’ above.
6.3 DEDUCTIVE-NOMOLOGICAL EXPLANATION : BASIC STATEMENT
Just having the concept presented to us in this way, we don’t know for a fact
that there are any scientific laws in this sense. But let’s suppose it. Then we
need to have some idea of how such laws would, might or should enter into
scientific explanations. This is where the work of Carl Hempel fits in.
Carl G. Hempel (1905-1997)
Carl Hempel was one of the most influential philosophers of science in the
mid-20th century. His ideas retain a good deal of currency; and one
contribution in particular, his deductive-nomological model of explanation, has
served to secure his continuing reputation. Although offered initially as a
model specifically of scientific explanation, the model has also been thought
(not least by Hempel himself) to be applicable to the social sciences and to
history. The label, ‘deductive-nomological’, is mildly alarming but the basic
idea is straightforward.
Take something that needs to be explained. This might be that X, a piece of
metal, expanded. Call this ‘E’, the ‘explanandum’ or occurrence for which we
have to give an explanation; and ‘X expanded’ is the explanandum-sentence,
the sentence that describes this event. In deductive-nomological (‘DN’)
explanation, this sentence is deduced (hence the ‘deductive’ part of the label)
from sentences stating a set of laws or universal generalisations (hence the
‘nomological’ part, from Greek nomos = ‘law’) and relevant circumstances,
otherwise known as ‘initial conditions’. In formal terms :
L1, L2, …, Ln (Laws)
C1, C2, …, Cn (Initial conditions)
-------------------------------------------
E (Explanandum = thing to be explained)
The laws assert ‘Always (or necessarily) if (C1, C2 .…Cn), then E’. To translate
this into a crudely simplified example. A piece of metal, X, has expanded and
we want to explain why :
L. All metals expand when heated (roughly : because under heating the
atoms of a metal start to move faster and to move around more when they
have more energy, and so to displace neighbouring atoms and to expand the
space occupied)
C. X is a piece of metal and X was heated
-------------------------------------------------------------------------------------------------------
E. X expanded
Deductive, because E. follows logically from L. and C.; and nomological,
because L is a law.
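As a toy rendering of the schema (my own sketch, not Hempel's formalism), the deduction can be written out in code : the explanandum-sentence is obtained from the law plus the initial conditions, with nothing left to chance.

```python
def law_all_metals_expand_when_heated(is_metal, was_heated):
    """L : always, if x is a metal and x is heated, then x expands."""
    return is_metal and was_heated

# C : initial conditions for the particular piece of metal X
X_is_metal, X_was_heated = True, True

# E : the explanandum follows deductively from L and C
X_expanded = law_all_metals_expand_when_heated(X_is_metal, X_was_heated)
print(X_expanded)  # True : 'X expanded' is deduced, not merely suggested
```

Like the lecture's own example, this is a dummy illustration: nothing hangs on its scientific verisimilitude, only on the deductive shape.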
The DN model requires an explanation to include at least one law; and in this
example only one ‘law’ has been cited, ‘All metals expand when heated’. In
practice, several laws – ‘covering laws’, as Hempel calls them - may be
involved. More than that, genuine scientific laws are more nuanced in their
statement than this kind of crude generalisation. Ohm’s Law is more typical
(‘the current in a circuit varies directly as the electromotive force and inversely
as the resistance’) or Boyle’s Law (‘For a fixed amount of gas [fixed number of
molecules] at a fixed temperature, the pressure and the volume are inversely
proportional’). But nothing depends for our purposes on the verisimilitude of
the example, which is merely a dummy illustration.
6.4 DEDUCTIVE-NOMOLOGICAL EXPLANATION : REFINEMENTS
Hempel makes a number of elucidations, most of which are implicit in what
we have so far seen :
1. The explanandum must be a logical consequence of the explanans.
2. The explanans must contain general laws that genuinely feature in
the explanation; i.e. they must be essential to deriving the
explanandum.
3. The relevant general laws may be subsumed under (explained by) higher-level laws or theories.
4. The explanans must have empirical content – it must be open to
confirmation or (with Popper) falsification.
5. The sentences constituting the explanans must be true.
6. DN explanation is not necessarily causal explanation, in this sense:
if we take a necessitarian view of causation, such that causes
necessitate their effects (Hume, as we will see, takes a different view),
then Hempel is not committed to the causal character of DN
explanations. The general laws may simply describe exceptionless (in
our experience) regularities. Remember Moritz Schlick’s remark that
the function of laws is to ‘describe’ and not to ‘pre-scribe’.
Hempel does not assume that DN explanation is the actual form of all (good)
scientific explanation. He recognises among other things ‘Inductive-Statistical’ (‘IS’) explanation. This is a model for the explanation of
indeterministic events. The argument, which must involve a lawlike statement,
leads to the conclusion that the explanandum was extremely likely. E.g. 95%
of swans in the UK are white; this is a UK swan; it is highly likely that this
swan is white. Note that you cannot deduce that the swan is white.
In other words Hempel recognised the possibility of using probabilistic or
statistical laws. Such laws will not have the form ‘Always (or necessarily) if
(C1, C2 .…Cn), then E’ but rather ‘Probably, if (C1, C2 .…Cn), then E’. But then
we lose the deductive structure of the explanation. In a deductively valid
argument the conclusion cannot be false if the premises are true. But with
‘Probably …’, the conclusion is not necessarily true, given the premises.
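A small simulation (with invented numbers, taking the 95% figure as given) brings out the difference : the IS ‘conclusion’ is merely very probable, and individual cases can still come out otherwise.

```python
import random

random.seed(0)
P_WHITE = 0.95  # assumed frequency, as in the swan example

def observe_swan():
    # invented sampling model: each UK swan is white with probability 0.95
    return "white" if random.random() < P_WHITE else "black"

samples = [observe_swan() for _ in range(10_000)]
freq_white = samples.count("white") / len(samples)
print(round(freq_white, 2))   # roughly 0.95

# The probabilistic 'law' licenses a likely conclusion, not a deduction:
# some sampled swans are nonetheless not white.
print("black" in samples)     # True
```

This is why the deductive structure is lost: no run of the premises guarantees the conclusion for any single swan.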
Another form of explanation is that of elliptic or partial explanations,
‘explanation sketches’ as Hempel calls them :
http://www.philosophy.ubc.ca/faculty/savitt/phil460/hempel.htm
The explanations one finds in textbooks or other places rarely conform
exactly to the schemas (D) and (P) above. The schemas are models or ideals or
rational reconstructions. Explanations may fall short of the ideal in virtue of
being :
1. Elliptically formulated – that is, gappy or enthymematic
2. Sketchy – Not merely gappy but only a pointer or “promissory
note” towards a real explanation
3. Partial – Only some general aspect of the explanandum (fact)
is actually explained or derived from the explanans. (Freudian
slip example)
http://humanities.byu.edu/rhetoric/Figures/E/enthymeme.htm
Enthymeme : The informal method of reasoning typical of rhetorical
discourse. The enthymeme is sometimes defined as a "truncated syllogism"
since either the major or minor premise found in that more formal method of
reasoning is left implied. The enthymeme typically occurs as a conclusion
coupled with a reason. When several enthymemes are linked together, this
becomes sorites [a chain of enthymemes : GT].
Example
We cannot trust this man, for he has perjured himself in the past.
In this enthymeme, the major premise of the complete syllogism is missing:
1.Those who perjure themselves cannot be trusted. (Major premise - omitted)
2.This man has perjured himself in the past. (Minor premise - stated)
3.This man is not to be trusted. (Conclusion - stated)
6.5 PROBLEMS
There are three main problems :
1. DN-style covering laws are a scientific fiction
2. The problem of irrelevance
3. The problem of asymmetry
6.6 LAWS AS FICTIONS
In How the Laws of Physics Lie, Oxford, 1983, Nancy Cartwright doubts
whether anything like total accuracy is possible in the formulation of a
scientific law. Which means that any candidate for a scientific law is likely to
be inaccurate and therefore false. Her view is presented by Karen Fox
(http://www.nasw.org/users/kfox/cart.htm):
I will begin by discussing the basic--and most extreme--position presented by
Cartwright: that the laws of physics are inherently false. She draws a
distinction between two types of laws in physics: the phenomenological and
the fundamental. The former, she says, describe the way things work; the
latter explain why they work that way. She has no quarrel with
phenomenological laws, only fundamental ones. "I think we can allow that all
sorts of statements represent facts of nature, including the generalizations
one learns in biology or engineering. It is just the fundamental explanatory
laws that do not truly represent."
Cartwright says that these fundamental laws, which attempt to explain entire
classes of phenomena, never provide accurate predictions of what happens
in any given system in nature. Yes, we can get awfully close if we build a very
precise model and protect it from the outside world, but in real life the
equations don't apply.
No scientist could successfully deny this, and I'll draw upon my own
experience as an example. In my high school physics classes, we were
taught--à la Newton--that a falling rock will accelerate towards the earth at a
rate of 32 feet per second per second. A little later on in the year, it's
mentioned that, actually, this number is not accurate in real life: air friction
gets in the way. Once in college, we learned that everything we'd learned so
far was untrue: even that description of gravity neglected subtleties like the
spinning of the earth and the height above sea level; a valid prediction of the
acceleration requires incorporating these new variables. A couple years later
our professor showed us that these laws too were false. General relativity--with its gravitational fields and fluctuating space-time--must be incorporated to
truly predict the acceleration of our falling rock. And finally, graduate school
teaches that this is still an incomplete understanding of gravity and we must
now include a variety of accessories from gravitons to string theory.
The punch line is not that physics education needs to be revamped but that
even in the final "correct" version of the theories, the laws of physics simply
do not yield dead-on predictions of a falling rock's acceleration.
For example, scientists to this day run experiments to determine the exact
value of G, the fundamental gravitational constant regulating how strongly two
bodies will attract each other and whether or not a falling rock will indeed
accelerate at 32 ft per second per second. Such experiments are invariably
performed deep in the basements of buildings, far away from any
disturbances, and yet the experiments have been compromised by a deer
wandering 15 yards outside or the water table rising in the ground around the
foundation. No two different experiments have yet yielded the same results
for the number. Numerous examples of this inconstancy in physics
experiments exist. The number of forces acting on a system is too great
to understand the sum of their effects perfectly. Scientists take this for
granted and incorporate uncertainties and perturbations right into their
equations. The information coming out of a mathematical prediction is only
expected to be very close to the final outcome--not exact. In fact, when
equations do yield perfect predictions on the first run of an experiment,
scientists tend to be wary and assume that a mistake has been made.
In other words, we can never secure accuracy – we can never get the
equations right, never foresee or quantify all the variables – and so we have
no covering laws if truth and precision are hallmarks of scientific laws.
6.7 PROBLEM OF IRRELEVANCE
This makes the criticism that an ‘explanation’ can fulfil the DN criteria – i.e.,
can be set out in perfect DN form – and yet be irrelevant to the
explanandum. Wesley Salmon has produced the following example :
L. No man who takes birth control pills becomes pregnant
C1. Rod takes birth control pills
C2. Rod is a man
---------------------------------------------------------------------------------
E. Rod has not become pregnant.
The law in this case is irrelevant to the explanandum. A relevant law would be
something like, ‘No man can become pregnant’, though relevance would here
be bought at the price of doubtful truth in the new age of biology.
6.8 PROBLEM OF ASYMMETRY
The example of the barometer has been used to make a different kind of
criticism, that the DN model can allow us to reverse the explanatory order.
This is the so-called problem of asymmetry brought out as follows :
L. Whenever the barometer falls rapidly, a storm is approaching
C. The barometer is falling rapidly
-----------------------------------------------------------------------------------------
E. A storm is approaching
It seems odd to accept that the falling of the barometer explains the approach
of the storm, however reliable an indication it might be of a storm to come.
Rather it is the approaching storm that explains the falling of the barometer.
My own view is that these two criticisms – of irrelevance and asymmetry – are
dust in the balance. It is no doubt unwelcome to realise that the DN model
admits such defective explanations. But there is an unavoidably formal
element in the philosophy of science. Hempel is trying to define the logical
form of a good scientific explanation, not to fill the form with content. He
cannot insure against stupidity. It is no more a shortcoming of DN
explanation that defective content can be fed into it than it is of a computer
program when it produces rubbish (‘garbage in, garbage out’) or of a rule of
logic such as modus ponens (‘if p then q; p; therefore q’) when we choose to
translate ‘p’ and ‘q’ into sentences that have no sensible (i.e. good scientific)
connection with each other.
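The point about modus ponens can be checked mechanically : the form is valid whatever content ‘p’ and ‘q’ are later given. A brute-force truth-table sketch (my own illustration, not from the notes) :

```python
from itertools import product

def implies(p, q):
    # material conditional: 'if p then q'
    return (not p) or q

# check every truth-assignment to p and q
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p   # keep only assignments where both premises hold
)
print(valid)  # True: no assignment makes both premises true and the conclusion false
```

Validity of form is thus a property checkable independently of content; feeding in scientifically silly sentences for ‘p’ and ‘q’ is no fault of the rule, just as it is no fault of the DN schema.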
7. HUME’S PROBLEM OF INDUCTION
Tonight we focus on induction.
We have looked at the role of scientific laws in scientific
explanation, how they might logically fit into explanations. But we still have to
examine what, if anything, is the rational basis for accepting such laws in the
first place. If a regularity has held in the past, what reason does this give for
assuming that it will continue in the future ? This question defines ‘Hume’s
problem of induction’.
Primary reading :
AJ Ayer, ‘The Legacy of Hume’, Probability and Evidence, 1972, 3-26.
D. Hume, A Treatise of Human Nature, 1739-40, Book I, Part III, Sections VI
& XII.
A Fisher, ‘Reichenbach on Induction’, Cogito, 7, 1993, 209-10; Cogito, 8,
1994, 53-4.
7.1 DEDUCTION AND INDUCTION
There are broadly two types of argument : deductive and inductive.
In a deductively valid argument, the conclusion cannot be false if the
premises are true; the conclusion is really just a restatement, a
reprocessing, of the information contained in the premises. In the time-honoured example :
All men are mortal (premise)
Socrates is a man (premise)
-----------------------
Socrates is mortal (conclusion)
If the premises actually are true, the argument is not only valid but sound.
Soundness is not the same thing as validity. For instance, the following is a
valid argument :
This room currently contains three crocodiles
A crocodile is a rat
------------------------------------------------------------
This room currently contains three rats
If the premises are true, the conclusion must be true; the argument has a
valid logical form. But of course the premises are not true; the argument is not
sound.
The following argument is both valid and sound :
All triangles are three-sided plane figures
All three-sided plane figures have three internal angles
----------------------------------------------------------------------
All triangles have three internal angles
By contrast, inductive arguments are never deductively valid. If I say of the
luckless Joe:
He has a severe heart condition
He never takes exercise
He eats a great deal of fat
He drinks alcohol heavily
He will not change his habits
--------------------------------------------
He will not live longer than 5 years
this is (at least on the surface) a perfectly sensible line of reasoning but the
premises do not guarantee the conclusion; the conclusion could be false even
though the premises are true. Perhaps a wonder-cure will become available,
enabling Joe to live for 10 years in spite of his unhealthy life style. Or perhaps
he has a totally exceptional physical constitution that allows him to survive for
many years to come – a real-life Father Jack Hackett. The conclusion
involves a risk; it goes beyond the data.
Induction and probability are two sides of the same coin. In an inductively
strong argument, the conclusion is unlikely to be false if the premises
are true. The premises provide good evidence for the conclusion; they give
significant support to it, as in my argument just now about the heart case, but
they do not guarantee the conclusion. The conclusion ‘goes beyond’ the
premises, makes a claim which is larger than the information contained in the
premises.
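One classical way of making the link between induction and probability precise – standard since Laplace, though not part of the handout – is the rule of succession: given a uniform prior over the unknown frequency, after n A-type events all followed by B-type events, the probability that the next one will be is (n + 1)/(n + 2). The support grows with the evidence but never reaches 1:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: with a uniform prior over the unknown
    frequency, the probability that the next instance resembles the observed
    ones is (s + 1) / (n + 2) -- high, but never 1."""
    return Fraction(successes + 1, trials + 2)

for n in (1, 10, 100, 1000):
    p = rule_of_succession(n, n)     # n A-type events, every one followed by a B
    print(n, p, float(p))            # e.g. 100 -> 101/102: strong support, no guarantee
```

However large n becomes, the conclusion 'the next A will be followed by a B' still goes beyond the premises: its probability is high but short of certainty.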
7.2 MORE ABOUT INDUCTION
A further characterisation of these probabilistic arguments is that induction is
inference from the observed to the unobserved, on the assumption that
unobserved instances resemble observed ones. This is the widest
characterisation of induction, wider than (but clearly including) inference from
the past to the future and inference from the particular to the universal.
Four asides :
1. Karl Popper excludes induction from science. In Popper’s view,
science employs (or should employ) the hypothetico-deductive
method. Take a group or class of phenomena under specific
characteristics and interrelations. Then from a scientific hypothesis (an
educated guess) or a theory (a systematically related set of
statements, including a covering law) about that group or class,
together with statements of ‘initial conditions’, various ‘basic
statements’ or empirical consequences are logically deduced (cf.
Hempel, §2.5). These basic statements are compared with the results
of experiments. ‘If this decision is positive...then the theory has, for the
time being, passed its test: we have found no reason to discard it. But
if the decision is negative, or in other words, if the conclusions have
been falsified, then their falsification also falsifies the theory from which
they were logically deduced’ (Popper, Logic of Scientific Discovery,
1959, 33). No mention of induction – of inference from the observed to
the unobserved - in any of this. It is hard to see, however, how
scientific inquiry could repudiate induction altogether. As Hilary
Putnam remarks : ‘If there were no suggestion at all that a law which
has to withstand severe tests is likely to withstand further tests, such
as the tests involved in an application or attempted application, then
Popper would be right; but then science would be a wholly unimportant
activity’.
2. Mathematical induction – a method of proving that all integers have
a certain property by proving a base clause and a recursion clause – is
entirely separate from induction as characterised here.
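The contrast can be made vivid in code (my own illustration). Spot-checking instances of a formula by computer is itself 'inductive' in Hume's sense; a proof by mathematical induction, by contrast, deduces P(n+1) from P(n) once and for all:

```python
def holds(n):
    """P(n): the sum 1 + 2 + ... + n equals n*(n+1)/2."""
    return sum(range(1, n + 1)) == n * (n + 1) // 2

# Base clause: P(0).
assert holds(0)

# Checking instances up to 1000 is empirical spot-checking, not a proof.
# The mathematical proof supplies the recursion clause deductively:
#   sum(1..n+1) = sum(1..n) + (n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2
assert all(holds(n) for n in range(1001))
print("P(n) verified for n = 0..1000")
```

The recursion clause in the comment is what does the real work: together with the base clause it covers every integer, which no finite run of instance-checks can.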
3. Aristotelian epagoge is usually translated as ‘induction’. In Posterior
Analytics, II 19, Aristotle refers to epagoge as the grasp of essences
(universals as embodied in particulars) and of fundamental necessary
truths such as the law of non-contradiction (for any proposition P, it is
not the case that both P and not-P) from just a brief exposure to
examples. So, for instance, by the exercise of nous, as Aristotle calls
the relevant intellectual faculty, I might perceive that it is the essence
of a three-sided plane figure to have three internal angles.
4. We can distinguish between :
1. the justification of induction, the grounds of its general reliability
2. specific rules for making particular kinds of inductive inference,
given the general reliability of induction (see e.g. John Stuart Mill,
A System of Logic, 1843, III.8).
It was David Hume (1711-76) who called the justification of induction into
question in a major way, though there was some anticipation in Sextus
Empiricus (a late Greek sceptical philosopher, circa 200 CE : Outlines of
Scepticism, II. 204). Hume treats induction as a non-rational product of the
association of ideas. It just is a feature of the human mind, according to him,
that we make predictions by the association of ideas when we have observed
regularities. Rationality does not, and cannot, enter the picture.
David Hume (1711-1776)
7.3 HUME’S PROBLEM OF INDUCTION
In An Enquiry Concerning Human Understanding [‘EHU’], 1748, Section IV,
Part I, Hume denies the possibility of a rational justification of induction.
Hume’s main interest was in induction as inference from the past to the
future.
We need to put just a bit of philosophical machinery in place before we begin
to consider Hume’s critique. Perhaps ‘machinery’ isn’t quite the right word,
because we’re going to refer to a certain instrument :
• Hume’s Fork
Hume was an epistemologist, a theorist of knowledge. There are only two
sources of knowledge in Hume’s view (A Treatise of Human Nature, 1739-40
[‘T’], I.3.1; EHU, IV):
• relations of ideas
• matters of fact
Relations of ideas are the realm of the analytic, of logical or conceptual truths.
Matters of fact are empirical truths derived from observation, from sense
experience – ‘synthetic’ truths as Kant was later to call them in contrast to the
analytic. Hume’s Fork is his methodological rule, which he uses to devastating
sceptical effect across a wide terrain, that no belief amounts to knowledge
unless it falls into one or other of these categories. His charge against
induction is that we cannot know it to be justified because it cannot be
justified either through relations of ideas or through matters of fact.
Then let’s get on with the critique. We commonly assume the principle of
induction, that the unobserved will resemble the observed - the future will
resemble the past, the general the particular. Let’s talk about events, and
take an example in which all A-type events have been followed by B-type
events. This regularity is generally taken to provide good grounds for
supposing that the next A-type event will be followed by a B-type event. But :
1. There is no logical connection between events (‘relations of ideas’). The
conclusion never follows logically that if one event (or set of events) has
occurred then another must follow. To say that one event has occurred but
not the other does not involve a logical contradiction. The occurrence of one
event never entails the occurrence of another in the way that ‘It is red’ entails
‘It is coloured’ or ‘All human beings are mortal’ and ‘All Greeks are human
beings’ together entail ‘All Greeks are mortal’.
2. Nor can we perceive - establish through sense-perception on the basis of
experience - any necessary connection between events, any matter of fact
proposition that there is a binding link between events (‘matters of fact’). All
we can perceive is one event occurring before, at the same time as, or after
another - or at any rate types of event regularly correlated in these ways.
3. The only supporting, justifying evidence we have, if event A has occurred
and we expect event B, is that B-type events have regularly followed A-type
events in our past experience. Hume talks of ‘constant conjunction’, the
regular association or correlation of one type of event with another.
4. But plainly this correlation will not save us from the logical connection
problem that we met in 1.:
A-type events have always been followed by B-type events in the past
--------------------------------------------------------------------------
The next A-type event will be followed by a B-type event
is not logically valid; it lacks deductive validity.
5. We might try to make it valid by putting in an extra premise (a bridge
principle):
(1) A-type events have always been followed by B-type events in the past
(2) Nature is uniform (regular) so that what has always followed in the past
will always follow in the future
----------------------------------------------------------------------------
(3) The next A-type event will be followed by a B-type event
To put (2) more formally :
UN (principle of the uniformity of nature) : If a regularity R (in the present
case, all A-type events are followed by B-type events) holds in my
experience, then it holds in nature generally, or at least in the next instance.
Hume himself does not make this move. It was made by John Stuart Mill in
A System of Logic, 1843, III.4.21. I think Hume’s instincts were sound here,
because there’s an obvious question …
6. How are we to justify reliance on UN, the principle of the uniformity of
nature ?
7. UN is itself a proposition. Can it be established logically ? Its ‘if … then’
claim (‘if a regularity R holds in my experience, then it holds in nature
generally, or at least in the next instance’) is not an entailment. The claim
does not register a logical connection. If it is known at all, it is known on the
basis of experience.
8. But UN is a claim about unobserved matters of fact, so it goes beyond
experience. It is a claim, in part, about the future. So we cannot know it on the
basis of experience.
9. Well but, can we rely on it because it has been reliable in the past ? This
appears to be the only remaining possibility. A problem of circularity arises.
We are attempting to justify the principle of induction, the assumption that the
future will resemble the past. But UN features as a premise in that attempt.
To say that UN will continue to be reliable because it has been reliable in the
past is to assume the principle of induction. Said another way, it is patently
circular to try to justify the principle of induction by appeal to UN if UN itself is
going to be supported by appeal to the principle of induction. Hence there can
be no non-circular appeal to UN. (PF Strawson, An Introduction to Logical
Theory, 1952, ch. 9.)
10. The conclusion cannot be avoided : induction lacks rational justification.
The attempt to justify the principle of induction relies on UN as a premise but
UN can only be supported by circular appeal to the principle of induction.
See ‘Endnote : Hume’s Problem of Induction’ for schematic statement of
argument.
Note carefully : Hume is not criticizing our habit of making inductive
inferences. This habit is perfectly natural to human beings. Hume even offers
specific rules for making particular kinds of inductive inference (T, I.3.15). His
philosophical point is to question – in fact, to deny – the rational status of
inductive inference.
7.4 REICHENBACH’S ‘SOLUTION’
The range of responses to Hume’s problem of induction has been huge, from
Kant down to Karl Popper and beyond. One of the most interesting, in my
view, is that of the philosopher of science, Hans Reichenbach (1891-1953).
See Alec Fisher.
The key to Reichenbach’s argument is this. Either UN holds (in scientifically
relevant respects) or it doesn’t. If it does hold, then induction will work. If it
doesn’t hold, then induction won’t work (except by occasional fluke) but then
nothing else – no other predictive method - will work either in a random
universe. So : induction will either work and is to be preferred (if UN holds) or
it won’t work (if UN doesn’t hold) but nothing else will work any better. It is
either the best method for projecting the future (given UN) or no worse than
any alternative (in the absence of UN).
I think this is the right response to Hume but note carefully that it isn’t really a
solution to Hume’s problem of induction. It does nothing to show that we can
safely infer the future from the past, the unknown from the known, but it does
suggest a rational strategy in face of the problem.
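Reichenbach's dominance argument can be illustrated with a toy simulation (my own sketch, under simplifying assumptions: binary events, an inductive predictor that projects the most frequent past outcome, and a counter-inductive rival). In a uniform world induction succeeds and the rival fails; in a random world neither does better than chance:

```python
import random

def inductive(history):
    """Project the observed regularity: predict the most frequent past outcome."""
    return max(set(history), key=history.count) if history else 1

def counter_inductive(history):
    """A rival method: predict the opposite of the inductive projection."""
    return 1 - inductive(history)

def score(predictor, world, n=2000, seed=0):
    """Fraction of correct predictions over n events generated by `world`."""
    rng = random.Random(seed)
    history, hits = [], 0
    for _ in range(n):
        nxt = world(rng)
        hits += (predictor(history) == nxt)
        history.append(nxt)
    return hits / n

uniform_world = lambda rng: 1                 # nature is uniform: B always follows A
random_world  = lambda rng: rng.randint(0, 1) # no regularity at all

print(score(inductive, uniform_world))         # 1.0: induction works if UN holds
print(score(counter_inductive, uniform_world)) # 0.0: the rival fails
print(score(inductive, random_world))          # roughly 0.5
print(score(counter_inductive, random_world))  # roughly 0.5: nothing beats chance
```

This matches the shape of Reichenbach's point exactly: induction dominates its rivals without our having to know in advance which kind of world we are in.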
ENDNOTE : TABULAR SUMMARY OF HUME’S PROBLEM OF INDUCTION
1. A-type events have always been followed by B-type events in our
experience.
2. The next A-type event will be followed by a B-type event.
PROBLEM : how to justify the inference from 1. to 2. By what right do we
assume – project - that what has been the case in the past will continue to be
the case in the future ? Problem of reliability of induction – inference of the
future from the past, the unknown from the known.
First answer …
DEDUCTION (1) : we can deduce 2. from 1; 1 logically implies 2. INVALID : 1
does not logically imply 2. It’s logically possible for 1. to be true while 2. is
false.
Second answer …
PERCEPTION : we can perceive connections between A-type and B-type
events, so when the next A-type event occurs we will be able to perceive its
connection with a B-type event. FALSE : we cannot perceive connections
between events.
Third answer …
DEDUCTION (2) : we can secure deductive validity for our projection by
introducing premises invoking the uniformity of nature (a suggestion by JS
Mill):
(1) A-type events have always been followed by B-type events in the past.
(2) Nature is uniform (regular) so that what has always followed in the past
will always follow in the future.
----------------------------------------------------------------------------
(3) The next A-type event will be followed by a B-type event.
VALID BUT CHALLENGEABLE ON GROUNDS OF CIRCULARITY AS
ASSUMING RELIABILITY OF INDUCTION – see below.
Fourth answer … full circle
INDUCTION : We have just helped ourselves to the principle of the uniformity
of nature. But the objection can be put : how do we know that nature will
continue to be uniform ? All that we actually know (at most) is that nature has
been uniform in the past. By what right do we project that uniformity into the
future ? We are assuming that what has been the case in the past will
continue to be the case in the future : but the justification of this assumption is
exactly the problem with which we began. So our proceeding is CIRCULAR.
We have assumed the reliability of induction in order to justify induction.
23 March 2006
MAGIC, SCIENCE AND RELIGION
Geoffrey Thomas
Geoffrey.thomas2@btinternet.com
8. SCIENTIFIC REALISM AND PROGRESS
A standard view of science is that it is incremental and progressive. Newton
knew more and better than Aristotle or Descartes; Einstein knew more and
better than Newton. Newton himself said that he had seen further by standing
on the shoulders of giants. This expresses the ‘progressive’ perspective.
Scientific realism is the view that successive scientific theories draw closer
and closer to the truth, that science answers to something deeper in reality than
(merely) the ‘pragmatic criterion of predictive success’. Science is
conducted in language but it corresponds with extra-linguistic reality – it
matches the independently existing real world.
Thomas Kuhn’s pioneering work, The Structure of Scientific Revolutions
(1962, 2nd ed., 1970) offers an account of paradigm change,
incommensurability, and scientific revolutions which casts doubt on the
standard view – or has been widely taken to do so.
Thomas Kuhn (1922-96)
8.1 KUHN : PARADIGM CHANGE, INCOMMENSURABILITY, AND SCIENTIFIC REVOLUTIONS
Kuhn sees the history of science as one of alternating periods of ‘normal’ and
‘revolutionary’ science. Normal science (in some particular field) is
characterised by the dominance of a single paradigm (in that field). When a
single paradigm is in possession, disagreements are marginal; and scientific
inquiry is mainly taken up with puzzle-solving within the paradigm. In periods
of revolutionary science, paradigms are overthrown. That’s a slightly selective
account, because Kuhn also recognises periods of what he calls ‘pre-
paradigm’ science and periods of ‘insecurity’ in which a paradigm is beset
with anomalies (cases it cannot readily handle).
On the definition of a paradigm, see Howard Sankey, ‘Kuhn’s Model of
Scientific Theory Change’, Cogito, 1993 : 19. Kuhn himself identifies the
following elements (http://en.wikipedia.org/wiki/Paradigm). A paradigm is a set
of beliefs and assumptions that fixes the scope and limits of :
1. what is to be observed and scrutinized,
2. the kind of questions that are supposed to be asked and probed for
answers in relation to this subject,
3. how these questions are to be put,
4. how the results of scientific investigations should be interpreted.
It’s easier to get the hang of what a Kuhnian paradigm is from examples than
from formal definition, on which Kuhn was not strong.
So, for example, the Aristotelian paradigm took a teleological view of nature,
seeing certain forms of development as proper, true to the essential identity
of a thing. Along these lines, it is proper e.g. for an acorn to develop into an
oak tree, it is proper for a human being to develop into an agent whose
emotions are moderated by reason. So Aristotelian science allowed questions
about the proper, perfected form of something that was fully developed. No
such questions are allowable in Newtonian mechanics or in relativity theory or
evolutionary biology.
On the overthrow of a paradigm, see Sankey, op. cit. : 21-2. When a paradigm is
overthrown, there is in Kuhn’s famous phrase, a paradigm shift or
conceptual revolution :
Copernicus’ heliocentric theory replaces geocentric theory of Ptolemy
Newtonian mechanics replaces Cartesian cosmology
Lavoisier’s oxygen theory replaces phlogiston theory of Stahl
Einstein’s relativity theory replaces Newtonian physics
For these and other examples, see Paul Thagard, Conceptual Revolutions,
1992 : 6.
A standard view of science, I said above, is that science is incremental and
progressive; this is the accretion theory of scientific growth. Kuhn mounts a
challenge to this theory. In his view – more strongly present in his early than
in his later work – paradigms are incommensurable and there is no rational
choice between them. Paradigms shift; science is not cumulative, because
there is no common measure in terms of which to calculate ‘internal’
improvement from one paradigm to another – improvement in their theoretical
terms or in their handling of the same observational data. See Sankey, 22.
In a famous example, due to NR Hanson, ‘we cannot mean what someone
living in the age of Ptolemaic astronomy meant by saying “I see the sun rise”
because even the perceptual notion of a sunrise has been affected by the
shift from Ptolemaic to Copernican astronomy’ (H. Putnam, Realism with a
Human Face, 1990 : 126).
Claudius Ptolemaeus (‘Ptolemy’) flourished 127-145 CE, Alexandria
Nicolas Copernicus (1473-1543)
Although Kuhn does not accept that science is cumulative, nevertheless
he does think it is progressive. There is such a thing as scientific progress,
because while ‘internally’ incommensurable, different paradigms, and the
theories or hypotheses falling under them, can be scaled in terms of external
common and enduring cognitive values and historically do show progress
in terms of those values. Sankey mentions the relevant values on op. cit.
page 19; we are talking about values such as consistency, good fit with the
data (empirical accuracy), depth, fruitfulness, congruence with received
general theories, and simplicity.
This is a hotly debated element in Kuhn’s account. In his later work he still
maintained that scientific terms have different meanings between different
paradigms, but he gave more importance to common and enduring
cognitive values.
Kuhn’s basic position is roughly this :
(1) The definition of a theoretical term, e.g. ‘mass’, involves other theoretical
terms : mass is a measure of the quantity of matter in an object, expressed in
terms of the object’s degree of resistance to having its motion changed
[inertial mass] or in terms of the effect that a gravitational field has on it
[gravitational mass]. This definition connects the concept of mass with that of
motion, gravitational field, etc. To use and understand it we’re involved in a
network theory of meaning.
(2) Therefore, e.g., ‘mass’ does not have the same meaning in Newton and
Einstein, because the network is different.
(3) So Newton’s physics cannot be absorbed by Relativity Theory, because
e.g. what one theory asserts about mass is not denied by the other : they are
not referring to the same thing.
(4) This is to say that the two theories are incommensurable; there isn’t a
common measure for their claims because they are not making claims about
the same thing. (Compare this example : there is a room with 100 books. If
claim X = ‘there are 55 blue books, and 45 non-blue books’ and claim Y =
‘there are 30 science books and 70 novels’, then the difference between their
‘theoretical’ terms – classification in terms of colour and classification in terms
of contents – means that X and Y are incommensurable claims. There’s no
significant sense in which they’re rival. ‘Mass’ in Newton and ‘mass’ in
Einstein are just as different as ‘colour’ in X and ‘contents’ in Y, with the same
result of incommensurability.) This has implications for scientific realism :
‘We may ... have to relinquish the notion, explicit or implicit, that changes of
paradigm carry scientists and those who learn from them closer and closer to
the truth’ (Kuhn, op. cit., 1970 : 170). (Why only ‘may’ ?)
(5) Incommensurability also means that we cannot use observations to decide
between theories from different paradigms. This is because the sentences
used to describe the observations would have different meanings – would
contain theoretical terms with different meanings – between the two theories.
(6) It is not a fair corollary of Kuhn’s critique of scientific progress that science
is irrational : ". . . I do not for a moment believe that science is an intrinsically
irrational enterprise. . . I take this assertion not as a matter of fact, but rather
of principle. Scientific behavior, taken as a whole, is the best example we
have of rationality" (Kuhn, "Notes on Lakatos," in R.C. Buck & R.S. Cohen,
eds. In Memory of Rudolf Carnap, Boston Studies in the Philosophy of
Science 1971, 8: 143-144).
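The room-of-books analogy in (4) can be made concrete (a toy construction of my own; any assignment with the stated totals would do). Both claims hold of the very same collection, yet neither contradicts, confirms, or measures the other, because they classify along cross-cutting dimensions:

```python
# The same 100 books, described under two cross-cutting classifications.
# The particular assignment is arbitrary; only the totals matter.
books = [{"colour":   "blue"    if i < 55 else "red",
          "contents": "science" if i < 30 else "novel"}
         for i in range(100)]

# Claim X classifies by colour; claim Y classifies by contents.
claim_X = (sum(b["colour"] == "blue" for b in books) == 55
           and sum(b["colour"] != "blue" for b in books) == 45)
claim_Y = (sum(b["contents"] == "science" for b in books) == 30
           and sum(b["contents"] == "novel" for b in books) == 70)

print(claim_X, claim_Y)   # both True of the same collection: no common measure,
                          # no significant sense in which X and Y are rivals
```

On Kuhn's view, 'mass' in Newton and 'mass' in Einstein stand to one another as 'colour' and 'contents' do here, with the same result of incommensurability.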
8.2 COMMENTS ON KUHN
I’d make three points:
1. I agree with Hilary Putnam that, in the sunrise example, ‘We can say
what Ptolemaic astronomy was trying to explain, and we can give a
good description of how it went about explaining it’ (Putnam, ibid.).
Yes: in a sense ‘sunrise’ had a different meaning for the Ptolemaics =
(roughly) the first appearance of the sun, each day, on its circling of the
earth. Copernican astronomy offers a different interpretation of, and
gives a different meaning to, ‘sunrise’, because it precisely doesn’t
assume that the sun circles the earth. But there is enough overlap of
meaning between the theories to justify our saying that what both
theories are trying to do is to explain the first appearance of the sun
(the heavenly body, white and circular in appearance, that is our main
source of light) above the horizon each day. That statement doesn’t
itself presuppose either the Ptolemaic or the Copernican theory and it
enables us to compare them pretty well in their rival accounts of that
phenomenon.
2. There can be testable differences of prediction – of observation –
between theories. Allow the point : Newtonian physics and Relativity
Theory differ over the meaning of ‘mass’; let’s concede that they’re not
talking about the same thing. But the theories precisely are inter-
checkable. The General Theory of Relativity predicts that light coming
from a strong gravitational field will shift its wavelength to larger values
(the so-called ‘red shift’). This is totally inconsistent with Newtonian
physics. So the theories can be compared; they are not
incommensurable in respect of this prediction. If you’re worried that
‘strong gravitational field’ is a theoretical term reintroducing
incommensurability, re-run the example on ‘near the sun’ (a
reapplication of the ‘overlap of meaning’ point above). For further
discussion of examples, see ENDNOTE.
3. Common and enduring cognitive values enable comparison. This
is a point that Kuhn allows when he lists a number of ‘external’ criteria
in terms of which one theory can be better than another : ‘accuracy of
prediction; the balance between esoteric and everyday subject matter;
and the number of different problems solved’ (Chalmers, What is this
thing called Science ?, 2nd ed., 1982 : 109). So Kuhn admits an idea of
scientific progress. But what he offers with one hand, he takes away
with the other. For he tells us that these criteria are values of which the
specification ‘must, in the final analysis, be sociological or
psychological. It must, that is, be a description of a value system, an
ideology, together with an analysis of the institutions through which that
system is transmitted and enforced’ (Chalmers, ibid; Kuhn in I. Lakatos
& A. Musgrave, Criticism and the Growth of Knowledge, 1974 : 21).
‘There is no standard higher than the assent of the relevant
community’ (Chalmers, ibid.; Kuhn, op. cit., 1970 : 94). Common and
enduring cognitive values re-introduce commensurability through the
back door.
ENDNOTE
http://csep10.phys.utk.edu/astr161/lect/history/einstein.html :
They [i.e. Newton's theory of gravitation and the theory of gravitation implied
by the General Theory of Relativity] make essentially identical predictions as
long as the strength of the gravitational field is weak, which is our usual
experience. However, there are three crucial predictions where the two
theories diverge, and thus can be tested with careful experiments.
The orientation of Mercury's orbit is found to precess in space over time, as
indicated in the adjacent figure [GT : see website] (the magnitude of the effect
is greatly exaggerated in this figure). This is commonly called the "precession
of the perihelion", because it causes the position of the perihelion to move.
Only part of this can be accounted for by perturbations in Newton's theory.
There is an extra 43 seconds of arc per century in this precession that is
predicted by the Theory of General Relativity and observed to occur (a
second of arc is 1/3600 of an angular degree). This effect is extremely small,
but the measurements are very precise and can detect such small effects
very well.
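The quoted 43 arcseconds can be checked against the standard general-relativistic perihelion-advance formula, Δφ = 6πGM/(c²a(1−e²)) per orbit. The sketch below is my own back-of-envelope check using standard published values for the constants, not part of the source text:

```python
import math

# Relativistic perihelion advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.32712e20      # Sun's gravitational parameter G*M, m^3/s^2
C      = 2.99792458e8    # speed of light, m/s
A      = 5.7909e10       # Mercury's semi-major axis, m
E      = 0.2056          # Mercury's orbital eccentricity
PERIOD = 87.969          # Mercury's orbital period, days

per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / PERIOD
arcsec = per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(round(arcsec, 1))   # ~43.0 arcseconds per century, matching the quoted figure
```

The agreement between this tiny calculation and the observed anomalous precession is one of the classic confirmations of General Relativity over Newtonian theory.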
Einstein's theory predicts that the direction of light propagation should be
changed in a gravitational field, contrary to the Newtonian predictions.
Precise observations indicate that Einstein is right, both about the effect
and its magnitude. A striking consequence is gravitational lensing.
The General Theory of Relativity predicts that light coming from a strong
gravitational field should have its wavelength shifted to larger values (what
astronomers call a "red shift"), again contrary to Newton's theory. Once
again, detailed observations indicate such a red shift, and that its
magnitude is correctly given by Einstein's theory.
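The size of the gravitational red shift can likewise be estimated from the standard weak-field approximation, Δλ/λ ≈ GM/(rc²). The figures below are my own illustration for light leaving the Sun's surface, using standard values:

```python
# Weak-field gravitational red shift for light escaping the Sun's surface.
GM_SUN = 1.32712e20      # Sun's gravitational parameter G*M, m^3/s^2
R_SUN  = 6.957e8         # solar radius, m
C      = 2.99792458e8    # speed of light, m/s

z = GM_SUN / (R_SUN * C**2)   # fractional wavelength shift, Delta(lambda)/lambda
print(z)                      # ~2.1e-6: tiny, but measurable in solar spectral lines
```

A shift of about two parts in a million is small, but – as with the perihelion precession – modern spectroscopy is precise enough to detect it and to confirm Einstein's predicted magnitude.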
GLT : 27 April 2006