A. INTRODUCTION
The reciprocal relationship of epistemology and science is of noteworthy kind.
They are dependent upon each other. Epistemology without contact with science
becomes an empty scheme. Science without epistemology is - insofar as it is thinkable
at all - primitive and muddled. However, no sooner has the epistemologist, who is
seeking a clear system, fought his way through to such a system, than he is inclined to
interpret the thought-content of science in the sense of his system and to reject
whatever does not fit into his system. The scientist, however, cannot afford to carry
his striving for epistemological systematic that far. He accepts gratefully the
epistemological conceptual analysis; but the external conditions, which are set for him
by the facts of experience, do not permit him to let himself be too much restricted in
the construction of his conceptual world by the adherence to an epistemological
system. He therefore must appear to the systematic epistemologist as a type of
unscrupulous opportunist: he appears as realist insofar as he seeks to describe a world
independent of the acts of perception; as idealist insofar as he looks upon the concepts
and theories as the free inventions of the human spirit (not logically derivable from
what is empirically given); as positivist insofar as he considers his concepts and
theories justified only to the extent to which they furnish a logical representation of
relations among sensory experiences. He may even appear as Platonist or Pythagorean
insofar as he considers the viewpoint of logical simplicity as an indispensable and
effective tool of his research.
A. Einstein
The predominantly inductive methods appropriate to the youth of science are
giving place to tentative deduction. ... There is no inductive method which could lead
to the fundamental concepts of physics.
A. Einstein
... concentration on induction has very much hindered the progress of the
whole inquiry into the postulates of scientific method.
B. Russell
Unlike the old epistemologists, we seek no firmer basis for science than
science itself; so we are free to use the very fruits of science in investigating its roots.
It is a matter, as always in science, of tackling one problem with the help of our
answers to others.
W.V. Quine
Towards a better methodology of science, science itself will lead the way.
D. Miller
Dick's method is this. You write down the problem. You think very hard. ( He
[Gell-Mann] shuts his eyes and presses his knuckles parodically to his forehead.) Then
you write down the answer.
M. Gell-Mann on the method of R. Feynman
Contents
Hume
Kant
Probabilism
Popper
The present approach
References
Hume
We are determined by custom alone to suppose the future conformable to the
past.
D. Hume
Hume's work continues to challenge the contention on which a realist outlook
on science is based, i.e. that the method of science can deliver a growing body of
objective, albeit approximate and fallible, knowledge. The realist contention
presupposes: (a) the possibility of a modicum of objective, albeit corrigible,
knowledge of particulars; and (b) that the method, presumed rational, can bridge the
gap between knowledge of such particulars and deeper knowledge, i.e. knowledge of
wider scope or greater generality; knowledge that would make rational (hence
successful) prediction and retrodiction possible regarding states of affairs beyond
known particulars.
Humean scepticism about this contention is traceable to Hume's classical
empiricist conception of the natal mind qua tabula rasa. Hume's analysis of the
workings of such a mind may perhaps be briefly depicted thus: The 'ideas' of such a
mind derive from experienced 'impressions', and the 'association of ideas' - which
gives rise to the notion of cause and effect - derives from experienced 'constant
conjunctions' of 'impressions'. Such repeated experiences form customs and habits,
which, rather than reason, determine (or, at any rate, make us feel that they determine)
mind's expectations. Thus mind appears to be but a passive vehicle for experiences
which have (or could have) a determining effect on it, and it is not clear how, if at all,
those experiences relate to an external reality, even assuming that reality to be the
source of those experiences. Thus the notion that our ideas and their association have
objective counterparts could be but a projection onto an external reality of internal
mental processes - a projection that may be due to repetitions of experiences inducing
mind to expect external reality to mimic, and to continue to mimic, its own
experiences.
Hume's analysis suggests that our minds could be but passive deterministic
mechanisms compelled to project their 'ideas' and their 'associations' onto an external
reality. But there is neither an empiric nor logical (valid inferential) justification for
those projections. We may thus be led astray by our 'senses and experience' (Hume,
1975, p. 19) in supposing a reality, composed of particulars, which engender
experienced 'impressions', which in turn engender 'ideas'; and whose spatial
contiguities and temporal successions engender experienced 'conjunctions', which in
turn engender 'associations of ideas'. And even if a reality of particulars is granted - even if we suppose our 'ideas' to have objective counterparts - we may still be led
astray by experienced 'conjunctions' of particulars - which engender 'associations of
ideas' - in supposing that those 'associations' have objective counterparts that
instantiate objective law-like correlations, or 'necessary connections'. We may be so
led astray because we do not actually experience any causal linkage that might be
responsible for the experienced 'constant conjunctions' - we may, at best, experience
only spatial contiguities and temporal successions. Thus neither experience - the
empiricist conception of the source of all knowledge - nor logic, can legitimate the
mental leap to, or inference of, the idea of objective causal processes, hence of law-likeness or law-like generality in an external reality. The source of that idea is to be
sought in our customs and habits, formed by our repeated experiences of 'impressions'
and their 'conjunctions'. Mind may thus be habituated by its experiences to perform
logically invalid projections of empirically wanting 'impressions' and their
'conjunctions'. In short, we may be completely conditioned to project our
psychological "states" onto an external reality, as well as onto other equally
determined minds. And when we do so onto a posited external reality we have no
good reason to suppose that the posited objects of our projections, should they exist,
conform to them. Thus the alleged "laws of nature", which if sound might legitimate
our projections, may be no more than codifications of our customs and habits.
Accordingly, all of "objective knowledge", including "knowledge" of particulars, may
be but a chimera.
In today's parlance, the Humean stance suggests that even if an external reality
of particulars is granted, then our projections, to which we may be completely
compelled, amount to invalid inductive inferrals of general statements about that
reality from statements based on experiencing some of its particulars; moreover, resort
to the probability notion cannot obviate that invalidity. Thus on the assumption that
only a rational method can lead to general objective knowledge, which implicates the
posit that objective reality is more or less rationally structured, scepticism is warranted
towards the very idea of the possibility of such knowledge.
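Schematically (a purely illustrative rendering in standard notation, not Hume's own), the projection at issue has the deductively invalid form

\[ Fa_1 \wedge Fa_2 \wedge \dots \wedge Fa_n \;\;\not\vdash\;\; \forall x\, Fx , \]

where the a_i stand for the particulars experienced hitherto and F for the experienced regularity: no finite conjunction of instance-statements entails the corresponding universal generalisation, and adding further instances does not alter that.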
If we take the complete determination of mind thesis to be sound, and if we
suppose that Hume's analysis of the problem of human knowledge could have been
effected only by a mind not entirely determined, then we can only conclude that the
thesis is refuted by Hume effecting the analysis. For we must then suppose that whilst
performing the analysis, Hume's mind was, contrary to the thesis, not completely
determined. It would follow that the outcome of his analysis is not just another
determined mental "state" projected onto other determined minds. Thus if we are to
take Hume's results seriously, then we need to take them as not being themselves
subject to Humean scepticism.1 That is the way they are taken here. For leaving aside
his conception of mind, Hume does point to an authentic projection problem, i.e.
given that, both in and out of science, we are pervasively engaged in projections,
which of them, if any, are rationally warranted? Linked to this problem is an
explanatory one, i.e. how to account for the often amazing successes of the sciences,
particularly of the physical and biological sciences? How is it that there is such a gulf
between the successes of scientific projections and those of non-scientific ones?
Three points emerging from the Humean analysis need stressing:
1. In an apparent attempt to deflect the full impact of his scepticism, Hume (1739 and
1748) resorts to the idea of a determining "nature", according to which "natural
reason" operates, but only within the bounds of, or in the service of, 'common life',
which for Hume apparently includes intellectual pursuits such as mathematics,
empirical science, and an empirically oriented metaphysics; disciplines which he
apparently thought possible, notwithstanding mind's determination. Thus from his
perspective, the ability to perform his analysis may not be a contradiction of the
complete determination of mind stance; and his "solution" to his problem amounts to
the advice to act as if there is no problem, since there is nothing that can be done
about it, given our minds' "natural determination".
(1) The analysis indicates that the problem of objective knowledge is
inextricably intertwined with the problem of the validation, or legitimation, of our
projections;
(2) Suppose we grant an objective reality composed of particulars, some of
which are known. Suppose further that this reality is sufficiently rationally ordered or
structured - a necessary supposition if it is held that only a rational method can lead to
its comprehension. Then the mismatch between the rational ordering of reality and
inferrals that are inescapably invalid warrants scepticism regarding the view that the
outcomes of such inferrals can be law-like. For to hold (b) above - that the method can
bridge the gap between knowledge of particulars and more general knowledge - is to
suppose that the inferential mode of the method is appropriate for the bridging
(Popper, 1972, Ch. 9). It follows that even if knowledge of particulars were possible,
our attempts to acquire more general knowledge are likely to be fallacious, since,
according to Hume, we appear to be compelled to an invalid projective mode of
reasoning. Hence any such knowledge cannot be rationally upheld; and
(3) The concepts of space and time - in their physical sense, not in Hume's
sense of being but the way in which we experience 'impressions' - have a key role in
the Humean challenge. For any move from statements about particulars (which could
be statistical samples), based on experiences (or observations, or detections) at some
"here and now", to general statements about unexperienced states of affairs,
unavoidably implicates a spatio-temporal projection from that "here and now" unto
some spatio-temporal expanse that is either in the past or future of that "here and
now". (Although in statistical cases, the move from samples to larger populations may
in some cases implicate only a temporal projection.) And clearly any conceptual move
from some conceptual "here and now" unto a spatio-temporal expanse not within that
"here and now" implicates a spatio-temporal projection. Indeed, any conceptual
generalisation of particulars involves, whether explicitly or implicitly, a spatio-temporal projection. The validation of such projections is thus the fundamental issue.
The Humean challenge to scientific realists is thus to provide plausible
accounts of their presuppositions about two matters: (a) that there is the possibility of
a modicum of objective knowledge about particulars; and (b) that there is the
possibility of general, including deep, objective knowledge, arrived at by rational
means, given a satisfactory account of (a); knowledge that could legitimate our
projections. An alternative brief expression of Hume's challenge to rationalists generally is: give me a good (non-inductivist-based) reason, or reasons, for our projections, in and out of science; for, as we shall see, giving an account of the possibility of such reasons is inextricably intertwined with giving an account of the possibility of general
objective knowledge. Now resort to evolutionary biology can provide an account of
(a), albeit not to the satisfaction of a die-hard Humean, who might well regard such a
move as skirting the issue - as we shall see. An account of (b) ought to have two parts.
First, we need a descriptive part that would address the question of how the gap
between knowledge of particulars and more general knowledge could have been
bridged hitherto, thus providing the ground for a possible account of the development
of science. Secondly, we need a normative part that would address the question of
how that gap ought rationally to be bridged; and hence, by implication, how it could
have been so bridged. Ideally, the two parts should coalesce, which would suggest that
the bridging was accomplished the way it ought to have been accomplished, i.e.
ideally, the actual method of science would, by and large, meet rational desiderata. My
chief concern is whether Popper's methodological outlook - seen here as the one that
best depicts and combines descriptive elements and normative (rational) desiderata - is
up to the task of meeting the challenge as regards (b). Although that outlook could in
principle meet that challenge, it is, at present, in an apparent impasse. This study
suggests that that impasse can perhaps be resolved by attending to the role that
symmetry hypotheses may have in tests and applications (both explanatory and
pragmatic) of their embedding foundational physical theories. Before turning to
Popper, however, it is important to have a brief look at two other influential attempts
to meet the Humean challenge - Kantianism and probabilism.2
2. On the variety of responses to Hume see (Watkins, 1984, Part One; Miller, 1994, Ch. 5).
Kant
The theoretical attitude here advocated is distinct from that of Kant only by the
fact that we do not conceive of the "categories" as unalterable (conditioned by the
nature of the understanding) but as (in the logical sense) free conventions. They
appear to be a priori only insofar as thinking without the positing of categories and of
concepts in general would be as impossible as is breathing in a vacuum.
A. Einstein
...the evidence seems to favour a naturalistic rather than a transcendental
explanation of a priori elements in experience.
A. Shimony
Ideas do, at times, develop lives and powers of their own and, like
Frankenstein's monster, act in ways wholly unforeseen by their begetters, ...
I. Berlin
Kant's transcendental method leads to an outlook which breaks decisively with the
classical empiricist conception of the natal mind qua tabula rasa. On this view, our minds
come equipped with just the sort of conceptual structures as to enable them to effect a
rational synthesis of their common phenomenal experiences. Thus they contain forms of
intuition (space and time) and synthetic a priori categories (Euclidean geometry,
deterministic causality, etc.); intuitions and categories which our faculties of sensibility
and understanding (which presumably jointly make up our rational faculty or reason or
intellect) impose or project, respectively, on commonly experienced phenomena (thus,
presumably, including on phenomena that have been metricated by commonly accepted
standards). Apparently, the contention is that we possess a "pure intuitive base"
(comprising the forms of intuition and the categories) which our intellect can resort to in
order to effect synthetic a priori judgements that are objective, in the sense of being valid
a priori, but only for commonly experienced phenomena, i.e. for a world of 'phenomena'
or appearances. That world is thus knowable (rationally comprehensible) and deterministic
(given the imposed category of deterministic causality). But the world 'in and of itself', the
'noumenal' world, is unknowable and indeterministic.3 Thus Kant exhibits the possibility
of valid, hence law-like, knowledge of an interpersonal phenomenal world; knowledge,
capable of making rational sense of experiences which anyone (of sound mind) can
partake in. We are thus so constructed as to be able to rationally synthesize interpersonal
experiences. Accordingly, I take it that, in so far as such experiences are concerned, the
spatio-temporal projectibility of that knowledge is rationally warranted. However,
authentic objective knowledge cannot be had. As Shimony (1993b, p. 185) observed,
Kant's stance, '...undermines the legitimacy of all the inferences upon which critical
realism relies in order to achieve indirect knowledge of the existence and properties of
things in themselves.' Kant's stance is clearly a retreat from Hume's challenge to realists.
In the face of Hume's argumentation and consequent scepticism about the possibility of
authentic objective knowledge, Kant asserts that such knowledge is indeed not a
3. But see Popper (1972, Chs. 7 & 8); Schilpp (1974, pp. 1063-1064); and Shimony
(1993a, pp. 21-61).
possibility. Nonetheless, the distinction between a knowable deterministic phenomenal
reality and an unknowable indeterministic noumenal one does yield an internally coherent
outlook, which saves the possibility of human enlightenment from Hume's scepticism - and indeed from any scepticism which flows from a commitment to a complete
determination of mind stance (i.e. where that determination could stem from whatever
source, and not just from Hume's 'customs' and 'habits'). For Kant's posit of an
indeterministic noumenal reality suggests the possibility of "free" agents capable of
exercising two forms of rationality: one theoretical and one practical; the former leads to
scientific knowledge of an interpersonal deterministic phenomenal reality (which would
account for the apparently miraculous successes of Newton's deterministic physics, of
Kant's day); the latter leads to a universal ethical imperative (which although failing to
take account of possible consequences is nonetheless founded on the laudable idea of
'universalisability'). Kant's achievement is thus to point to the possibility of human
enlightenment via the use of a posited universal faculty of reason. From today's
perspective, however, Kant's conception of that possibility is no longer viable, for it
suggests two intertwined notions neither of which appears reasonable today (as will
become clear from the discussion further on): that human rationality has no naturalist
(hence no scientific) account, and that it is incapable of revealing the character of reality
'in and of itself'. Those views are inherent in the metaphysical idea that reality consists of
two very diverse parts; one, scientifically comprehensible via mental structures meant to
be valid a priori for that purpose, the other, not open to scientific scrutiny.
Kant's response to Hume amounts to turning Hume's scepticism about the
possibility of authentic objective knowledge into a doctrine that indeed no such
possibility exists; a doctrine highly influential on subsequent Western philosophical
thought, including the thought of some scientists.4 Today, that influence is perhaps
best epitomized in the views of internalist frameworks (Wittgensteinian 'forms of
life'), which the Kantian doctrine may well have helped spawn - notwithstanding the
obvious clash between internalisms and Kant's universalism.
Be that as it may, a non-doctrinal, or non-a priorist, critique of Kant, as well
as of Hume, can only be based on the empirical sciences as they have since
developed. From Kantian as well as Humean perspectives, such naturalization of
epistemology is undoubtedly circular; but from a naturalistic point of view it is but to
step outside untestable metaphysical frameworks. Anticipating the discussion further
on, in the light of evolutionary biology and of 20th-century physics the Kantian
stance is untenable. Evolutionary biology and its offshoots indicate that all the
variety of species possess a modicum of authentic objective knowledge, whether they
be aware of it or not. And physics suggests - via the empiricization of physical
geometry by general relativity - that we can reach, both intellectually and empirically,
beyond Kant's rigid framework (sect. F). Moreover, the view that 20th-century
physics, in particular, could be the outcome of such a framework is extremely
unlikely. For the evidence suggests that the conceptual structures that make
mathematical sense of - that rationally synthesize - our experiences of relativistic and
quantum phenomena are considerably removed from our intuitive base. The
indications are that the mathematical languages - concepts, formalisms, and
4. E.g. Bohr's (1934, p. 19) stance on the aim of physics, '... not to disclose the real
essence of phenomena but only to track down ... relations between the manifold
aspects of experience…'; where the "real essence" is presumably the unknowable
"noumena" "causing" the phenomena.
techniques - which made possible physics in general, and 20th-century physics in
particular, were invented rather than being "given". And it seems highly unlikely that
these unpredictable inventions (Popper, 1982) would be valid a priori for either the
rational synthesis of our phenomenal experiences - in the sense of providing
necessary rules or connections for such a synthesis - or for the acquisition and
expression of authentic objective knowledge. Physics is largely a product of the
ingenious application of ingenious mathematical inventions to explanatory problems
posed by our experiences of naturally occurring, and artificially produced, physical
phenomena, which have been metricated. Now whilst intuition undoubtedly had a
very important role in both the inventing process and in the application of its
products, satisfaction, on the part of the foundational physical theories, of the
common constraints - CC: Coherence, Parsimony, and Hamilton's Principle (HP) - dominated the development of physics. It is hardly surprising therefore that
interpretative problems should have arisen regarding the outcomes of the application
of mathematical inventions to explanatory problems of the physical world and that
these problems should have become more severe in 20th-century physics. For
interpretation is strongly dependent on the senses that concepts acquired in their pre-scientific and non-scientific usage; senses to which we are, undoubtedly, highly habituated (as Hume thought), and which are concomitants of our intuitive visualizability (Miller, 1986 and 1991). But this visualizability is but a product of
evolutionary and historical processes in relation to which relativistic and quantum
phenomena were irrelevant. The ordinary senses linked to our ordinary notions (e.g.
"wave" and "particle") are thus unsuited to provide a view of physical reality beyond
that of our phenomenological level. Not surprisingly, therefore, the epistemic efficacy and interpretative utility of ordinary language have conspicuously broken down in relation to relativistic and quantum phenomena. All this suggests that, insofar as physical reality is concerned, a sharp cognitive role ought to be assigned to intuition only in relation to the minuscule part of physical reality in which our biological
evolution and historical development transpired. And although our mathematical
inventions transcend our intuitions, their origins may be significantly infused with
them. It would thus not be surprising to find their cognitive role to be circumscribed
as well, especially in relation to deep levels of physical reality (sect. G).
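For readers who want the abbreviation unpacked, Hamilton's Principle (HP) referred to above is the standard stationary-action requirement (stated here in its textbook form, not in any formulation specific to this study):

\[ \delta S \;=\; \delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt \;=\; 0 , \]

where L is the Lagrangian of the system and the variation is taken over paths with fixed endpoints.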
Kant's attempt to rebut Hume's scepticism resorts to "pure" (a priori valid)
intuitive foundations, which allegedly enable our intellect to make rational sense of
our common phenomenal experiences, but do not enable it to deliver authentic
objective knowledge. However, the development of the physical sciences since Kant
indicates, firstly, that an epistemology based on "intuitive foundations" - indeed on
any "foundations" - is misconceived; and, secondly, that we should expect to find
limits to "intuitive understanding". The limits suggest only the poverty of our
intuition, and of ordinary language linked to it, to penetrate beyond appearances; they
do not vitiate a realist view of physics. A physical formalism may grasp the domain it
is intended to be about much better than we can grasp it, e.g. no one would claim that
our intuitive grasp, say with the aid of Feynman diagrams, of what a formalism has to
tell about some phenomena, is what actually goes on in the domain of that formalism.
It is, after all, such formalisms that we generally bring into contact with their intended
respective domains (notwithstanding apparent obstacles which allegedly bar such
contacts - as we shall see), not our interpretations of them; and our assessment of the
formalisms is based on such contacts. Thus Popper's (1972, p. 191) revision of Kant
is apt. Whereas Kant held: 'Our intellect does not draw its laws from nature, but it
imposes its laws upon nature.', Popper suggests: 'Our intellect does not draw its laws
from nature, but tries - with varying degrees of success - to impose upon nature laws
which it freely invents.'
Probabilism
The calculus of probability is incompatible with the conjecture that probability
is ampliative (and therefore inductive).
K. Popper
There is no probabilistic induction. Human experience, in ordinary life as well
as in science, is acquired by fundamentally the same procedure: the free, unjustified,
and unjustifiable invention of hypotheses or anticipations or expectations, and their
subsequent testing.
K. Popper
According to the Bayesian view, scientific and indeed much of everyday
reasoning is conducted in probabilistic terms.
C. Howson and P. Urbach
The attempt to meet the Humean challenge [to part (b) of the realist
contention, regarding the possibility of general objective knowledge arrived at by
rational means] by demonstrating the validity of probabilistic induction failed, i.e. it
could not be carried out without resort to 'synthetic presuppositions' (Howson and
Urbach, 1993, p. 72; Howson, 2000; Popper, 1977, Chs. VIII & X; Watkins, 1984,
Ch.2); although the attempt to demonstrate its invalidity also failed (Howson and
Urbach, 1993, pp. 395-398; Redhead, 1985; Elby, 1994; Rivadulla, 1994; Landsberg
and Wise, 1988; Cussens, 1996; Miller, 1994, Ch.3).5 This situation leaves only one
possibility open in the probabilist context: a personalist Bayesian retreat which
relinquishes any attempt at a probabilistic rationalisation of the hypothesis-reality
relation. Instead, the attempt is to provide a probabilistic rationalisation of the mind-hypothesis relation (Howson and Urbach, 1993; Shimony, 1993c). However, the
Bayesian retreat is not like the Kantian one, where mind's "nets" are capable of
capturing only phenomenal reality; the Bayesian approach may be interpreted to
suggest that the enquiry in question will in the long run lead to a consensus of beliefs
in what could be the truth, provided the mind-hypothesis relation is governed by the
probability calculus plus conditionalization based on ever more objective evidence.
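For concreteness, the conditionalization rule appealed to here can be rendered in a minimal sketch, with illustrative symbols (H for a hypothesis, E for newly accepted evidence) rather than the authors' own notation:

\[ P_{\mathrm{new}}(H) \;=\; P_{\mathrm{old}}(H \mid E) \;=\; \frac{P_{\mathrm{old}}(E \mid H)\, P_{\mathrm{old}}(H)}{P_{\mathrm{old}}(E)} , \]

so that the hoped-for long-run consensus is a matter of repeated updating on increasingly objective shared evidence, whatever the initial priors may have been.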
5. On the inadequacy of a purely formal approach, i.e. of the use of the probability
calculus to "validate" either inductivism or inductive scepticism, see Gemes (1997).
Gemes points out that Popper's use of the probability calculus to argue for his
inductive sceptical stance is based on the claim that all law-like universal
generalisations have a prior probability of zero. Gemes then argues that this claim
depends crucially on the unacceptable a priori supposition that there are an infinite
number of distinct instances that could instantiate any such law. But Gemes goes on to
show that, 'One can accept that some law-like universal generalisations have a non-zero prior without accepting any inductivist claim.' (p. 130) But given that all physical
laws and theories have potentially an infinite number of distinct instances which could
instantiate them within their respective domains, it would be inappropriate to assign a
non-zero prior to them.
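The zero-prior claim that Gemes targets can be illustrated by a simple limiting argument (a sketch under the stated supposition of infinitely many distinct potential instances; symbols are illustrative): for every n,

\[ P(\forall x\, Fx) \;\le\; P(Fa_1 \wedge \dots \wedge Fa_n) \;=\; \prod_{i=1}^{n} P\!\left(Fa_i \mid Fa_1 \wedge \dots \wedge Fa_{i-1}\right) , \]

and if each conditional factor is bounded above by some fixed c < 1, the right-hand side tends to 0, forcing the prior of the universal generalisation to 0. Whether the factors are so bounded, and whether infinitely many distinct instances should be admitted at all, are precisely the substantive suppositions at issue, not theorems of the calculus.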
Now leaving aside the problems which may or may not beset Bayesianism,6
probabilism of any variety - qua algorithm for either discovering, or for yielding
quantitative assessments of hypotheses, whether deterministic or statistical - appears
not to have been practiced widely in the sciences hitherto, and perhaps not at all
practiced in the core sciences (physics, chemistry, and biology);7 which generally
underpinned the development of the others, by providing a background epistemological
and methodological perspective. To regard probabilist methodology, therefore, as
descriptive of general past methodological practice in the sciences is to entertain at
least one of three possibilities: (1) Scientists generally used the calculus, or its Bayesian
theorem, for, say, the evaluation of hypotheses, but did not report their results; (2) There
exists a Black-Box in the mind (Kaplan, 1989), which performs the necessary
calculations when mind's reasoning is rational - calculations, the outcomes of which are
valid a priori for the mind-hypothesis relation, and perhaps also for the hypothesis-reality relation; and (3), probabilism in its Bayesian version is a '... well-grounded...'
scientific (descriptive) hypothesis (Howson and Urbach, 1993, p. 164), about the way in
which the mind, perhaps one that aims at maximising its "utility", handles its "beliefs"
about available hypotheses. It seems to me that (1) can be dismissed outright, because
had scientists used the calculus for the evaluation of hypotheses then they would have
reported such activities; that (2) is a Kantian-like thesis; and that (3) is '... intrinsically
too implausible...',8 because it suggests that the operation of the rational mind conforms
to the prescriptive Bayesian algorithm, whether in all its quantitative detail or even only
in outline, which is an hypothesis bordering on the Black-Box idea. And if quantum
effects have anything to do with mind's functioning and/or with the functioning of
physical reality, as is likely even in respect of mind's functioning, then the point about
quantum probabilities not being classical (sect. E.), reinforces that implausibility - but
see Howson and Urbach (1993, pp. 420-423).
Any form of probabilism, moreover, which holds that its practice leads to true
(or even only approximately true) hypotheses, implicates the metaphysical posit that
the structure of the reality under study conforms to the structure of the probability
calculus, or, alternatively, that the consistency attendant with the practice of a
probabilist methodology is mirrored in the reality at issue, or as Giere put it (1996, p.
S184), '... that the structure of the world mirrors the structure of set theory in a fairly
straightforward manner.' This view of the matter is reinforced by the failure to
demonstrate the validity of probabilistic induction. But even a retrenched probabilism,
which is only about rationalisation of the mind-hypothesis relation, needs the above
6. See e.g., Savage, 1967; Shimony, 1967; Black, 1985; Gillies, 1990; Urbach, 1991;
Howson and Urbach, 1993, Ch. 15; Juhl, 1993; Miller, 1994, pp. 125-133; Brown,
1994; Wayne, 1995; Cousins, 1995; Earman, 1996; Phil. Sci. 64, n.2, 1997; Black,
1998; Wheatherson, 1999; Kruse, 1999; and Corfield and Williamson, 2003.
7. Thus Shimony (1993c, p. 222), who is a 'tempered personalist', acknowledges that '...scientists whose thinking about nature is judicious and orderly do not usually try to weigh their tentative commitments quantitatively'. Admittedly, probabilistic methods were and are used in areas marginal to the development of science, where detailed objective conditions are unknown, and where well-corroborated explanatory theories, of either deterministic or statistical character, are not available, as in medicine, insurance, decision theory, etc. But this does not affect the point at issue.
8. The expression is due to Howson and Urbach (1993, p. 164) who suggest that a '
...well-grounded... ' hypothesis ought not to be '... intrinsically too implausible... '.
realist posit, because otherwise it is not clear why we ought to heed that particular
rationalisation; and linking the rationalisation to maximising one's utility does not
obviate the need for that metaphysical posit. For such maximisation is dependent on
the success (or relative success) of the science or hypothesis in question, and that
success presumably does not come out of the blue. Nor does the proposal of 'A
Nonpragmatic Vindication of Probabilism' (Joyce, 1998) - according to which the
most accurate system of degrees of belief is one that conforms to the calculus - obviate the need for the metaphysical realist posit regarding the axioms, for accuracy
too does not, presumably, come out of the blue. Implicit in any probabilist strategy,
therefore, there lurks (or ought to lurk) a metaphysical realist posit; unless, of course,
the particular sort of consistency attendant with probabilist practice is arbitrarily
valued for its own sake.
Thus the metaphysical posit, or leap of faith, implicated (implicitly or
explicitly) by probabilism is much more complex than the critical rationalist one, i.e.
that the structure of the reality in question is rational (or orderly) to the extent (or in a
manner) that it conforms to the contradictoriness principle, so that modus tollens can
be used as a tool for bringing empiric criticism to bear on hypotheses about that
reality. It is perhaps reasonable to suppose, therefore, that the critical rationalist posit
is more likely to hold. The point is that if what we are after is a true account of reality,
and thereby also of our experiences, then there is no more parsimonious alternative to
the conjectural metaphysical critical rationalist posit - the adjunct to logical
consistency, which is clearly an invaluable aid in the search for truth. But even given
this posit, the details of how deep objective knowledge could have come about via the
critical rationalist approach need to be spelled out. However, if it is held that the
mind is so structured that we have no choice in the matter but to practice probabilism
in relation to our hypotheses, then unless the then implicated complex metaphysical
posits about mind and reality obtain, our epistemic quest is led astray by mind's
determination - as Hume thought, albeit not on the ground at issue here. But if we do
have a choice, then it is not clear why we ought to be rational in the probabilist sense,
unless the implicated complex posit about the reality at issue holds - which would
suggest that probabilism's promising note could be cashed in.9 Moreover, to expect
scientists, particularly in the core sciences, to constrain their discovery and assessment
of hypotheses to formal quantitative procedures, however "rational", is surely
hopelessly unrealistic, even if the utility - whether epistemic or pragmatic - of such
procedures could be unambiguously demonstrated. Thus I do not think that
probabilism can plausibly meet either the descriptive or normative part of the Humean
challenge: it cannot provide a plausible account of how the gap between knowledge of
particulars and more general knowledge could have been bridged hitherto; and as a
normative proposal especially in respect of the core sciences, it is simply unrealistic,
regardless of the status of its claims.
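For reference, the critical rationalist's logical tool invoked earlier in this paragraph is nothing more exotic than modus tollens applied to a theory and a refuted prediction; schematically (illustrative symbols, not the author's notation):

\[ T \rightarrow O, \quad \neg O \;\;\vdash\;\; \neg T , \]

where T is the hypothesis under test and O an observation-statement it entails; the critical rationalist posit is only that the reality at issue is orderly enough, in the sense of conforming to the contradictoriness principle, for this deductive step to bear on hypotheses about it.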
9. As one observer (Schwartz, 1962) put it in relation to misguided applications of mathematics in science, but which may also be pertinent as regards the use of the probability calculus for the assessment of hypotheses, especially in the core sciences: 'Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example.' Thus what the author called the 'single-mindedness, literal-mindedness and simple-mindedness' of mathematics may on occasion lead astray,
whether in science or in methodology.
Nonetheless, although probabilism qua method cannot meet the Humean
challenge, it can serve, for example in its Bayesian form, as an account of the method,
in the form of its "rational reconstruction" - one of a number of such possible
reconstructions that may elucidate methodological practices. But if such an account is
also to constitute a normative proposal for the use of the reconstruction, then it ought
to come equipped with a rationale that would suggest how its application could lead a
science to a successful development - successful in the sense of successful predictions
and retrodictions, and hence projections. For without such a rationale, it is, once
again, not clear why one should implement the proposal in future practice, although
even with such a rationale there is no guarantee that the implementation will lead to
success. But if that rationale is to be sound, then it must be anchored in the reality that
the science is about. And in the case of probabilism, such a rationale can be had only
via the metaphysical realist posit that the axioms of the calculus hold in that reality.
We need to keep in mind that the soundness of any inference touching on hypotheses
in which formal considerations have a central role depends crucially on the implicit
metaphysical underpinning of the formal machinery holding in the reality of the
hypothesis involved. Such inferences could, therefore, be misleading, because the
relevant metaphysics may not hold in the reality at issue. Thus, contrary to
appearances, the need of a metaphysic has not been banished. I have two exemplary
cases in mind in which the tension between attempting to do without metaphysics
whilst also having a rationale for the methodology or outlook is poignant; in both
cases resort to the probability calculus is thought to accomplish the task of providing a
rationale without implicating metaphysics.
Consider Jeffreys' (1961) stance: On the one hand he writes (p. 8), 'The general
rules [of induction] are a priori propositions, accepted independent of experience
[since otherwise circularity looms], and making by themselves no statement about
experience. Induction is the application of the rules to observational data.' And (p. 16),
'Even if people disagree about which is the more probable alternative, they agree that
the comparison has a meaning. We shall assume that this is right. The meaning,
however, is not a statement about the external world; it is a relation of inductive logic.'
On the other hand, Jeffreys regards himself as a realist who accepts (p. 54), '... that there
is an external world, which would still exist if we were not available to make
observations, and that the function of scientific method is to find out properties of this
world.' But how is a method to yield knowledge of a world if the meaning of
comparisons of alternatives based on the method has no relation to that world, i.e. if
the comparisons (or relations) the logic of the method exhibits - comparisons that flow
from the primitive postulates of the logic - do not hold in that world? There are two
possibilities. One, the primitive rules are held in an apriorist sense: they are a priori valid either for the world 'in and of itself', or they are a priori valid in a Kantian sense
for the synthesis of experience alone, in which case they cannot deliver objective
knowledge. The other alternative is that they are held to be appropriate conjecturally
for the world in question notwithstanding their a priori character, which is a
metaphysical realist position regarding the rules. If the Kantian alternative is eschewed
then we are left either with an apriorist realist stance, or a conjectural metaphysical
realist stance, on the rules. Jeffreys, however, opts for a Bayesian approach.
Consider next van Fraassen's constructive empiricist stance. Van Fraassen adopts
what he takes to be the defining characteristics of the empiricist tradition (2002, p.37):
'(a) A rejection of demands for explanation at certain crucial points and (b) a strong
dissatisfaction with explanations (even if called for) that proceed by postulation.' Now
these characteristics are but arbitrary stipulations,10 which, if accepted, ought to
engender 'a strong dissatisfaction' with science, given that science itself provides
explanations 'that proceed by postulation'. However, be that as it may, the
characteristics of the empiricist tradition naturally lead to the core of van Fraassen's
stance (1985, p. 247), '... that the empirical adequacy of an empirical theory must
always be more credible than its truth.' Now prima facie this proposition appears
sound, given that empirical adequacy (in his sense of "'observability'" rather than what
has hitherto been observed)11 can in part be observed, whilst truth is an unobservable
conjecture. But, of course, there is an immediate problem, which besets the empiricist
stance as much as it besets the realist one (a problem best exhibited by Jeffreys
(1961); it is discussed in sect. B), i.e. given any theory, we can in principle construct
an infinity of alternative ones, all of which would have the same empiric adequacy (in
either of the above senses) as the theory in question, but the predictive and retrodictive
consequences of each of which would diverge from those of the given theory. It is
possible to obviate this problem, as it touches on the empiricist stance, by adopting a
"theory of belief", as van Fraassen (1985, p. 247) does: a '... theory of belief - or,
better, of opinion - which ... makes probability theory the logic of judgement
(epistemic judgement, the judgements that constitute the state of opinion).'12 But given
that probability theory rests on a priori axioms plus rules,13 adopting the theory to be
the logic of epistemic judgement is not in keeping with the empiricist premiss (ibid, p.
258), '... that experience is the sole legitimate source of information about the world
...'. Now van Fraassen is not a naïve empiricist (2002, p. 134), ' Our concept of
experience is multifaceted, and the single word harbours a cluster of notions.' And he
is well aware (ibid, p. 125): 'That even the most modest bit of theory goes beyond the
10. This view is very much in line with van Fraassen's own observations (2002, pp. 47-48): 'A philosophical position can consist in a stance (attitude, commitment, approach, a cluster of such - possibly including some propositional attitudes such as beliefs as well)'; and, '… if empiricism is a stance, its critique of metaphysics will be based at
least in part on something other than factual theses: attitudes, commitments, values,
goals.'
11. The term "empiric adequacy" is here linked to adequacy limits actually detected hitherto. Thus conjectural and projective judgements about the comparative overall empiric adequacy of a sequence of comparable hypotheses are based on their comparative adequacy limits, as detected hitherto. This differs markedly from van Fraassen's (1985, p. 296) 'empirical adequacy', which is linked to '... limits of
observability...', where those limits are meant to be explicated by the science of the
day, itself taken to be but "empirically adequate". Thus the 'limits of observability' are
not '...the mere factual limits to what has been observed...'. (See also the discussion
further below.)
12. Adopting a "theory of belief" is in line with the view that (van Fraassen 2002, p.
62): 'Stances do involve beliefs and are indeed inconceivable in separation from
beliefs and opinion.' It seems to me, however, that the critical rationalist stance can,
and does, manage without "beliefs and opinion", whatever they may be. Thoughts and
posits are neither "beliefs" nor "opinions".
13. The probability calculus can be regarded either as an extension of deductive logic, or as a system of logic in its own right, having a set of axioms which closely, but not quite, mirror those of deductive logic (Howson, 2000). In either case the calculus clearly rests on a priori axioms which provide its rational credential.
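By way of illustration only, one standard presentation of such axioms is Kolmogorov's (Howson (2000), cited above, works instead with a logic-based formulation, but either way the calculus rests on axioms of this a priori character):

\[ P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \ \text{for mutually exclusive } A, B , \]

together with the ratio definition of conditional probability, \( P(A \mid B) = P(A \cap B)/P(B) \) for \( P(B) > 0 \); none of these is itself an empirical statement about the world.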
experience we have so far, and our reports on our experience are infected with such
theory…' (hence judgements of 'empiric adequacy' must also be so infected!).14 Thus a
non-naïve notion of "experience" is bound to be ambiguous. Ambiguity, however, is
seen as a strength (ibid, 145), and thus apparently the empiricist premiss is held onto
(ibid, 120 & 136), notwithstanding its apparent naïveté.
But once we take probability theory to be the logic of epistemic judgement we
have abandoned that empiricist premiss, however interpreted. Moreover, scientists,
especially in the core sciences, do not generally keep to that premiss, if only because to
do so would seriously hamper their creativity. Instead, they conjecture hypotheses - an
imaginative and ingenious activity, which may in part be guided by novel data (should it
be available), but also by existing as well as novel background knowledge (e.g. novel
principles) - and generally interpret them realistically. And in order to discriminate
between them they test them against their predictive consequences, which must stand up
to data (parts of which may turn out to be the available novel data), linked, albeit
indirectly, to the empiricist's 'legitimate source'. Thus outcomes of the faculty of "experience" check on outcomes of the faculty of "imagination", and they do so with the aid of another outcome of the "imagination": the logical relation of modus tollens.
Clearly, although "experience" has a central role in such methodological practice it is
not the only faculty with such a role. Hence, it may not be the '…sole legitimate source
of information about the world.' Now whether or not this sort of methodological
practice can be rationally upheld (given the problem referred to above regarding the
logical possibility of an infinity of alternative hypotheses to the one of interest) - the
chief topic of this study - it is clear that the practice involves scientists going way
beyond the empiricists' "legitimate source"; it is also clear that the practice does not
involve the probability calculus. Ideally, from an empiricist point of view, scientists
ought to arrive at their hypotheses merely from the empiricist's "legitimate source"; but
however they arrive at them, according to constructive empiricism, they ought,
presumably, to turn to probability theory for discriminating between them, in particular for
selecting one from an infinity of hypotheses of the same empiric adequacy. Presumably,
they should do so on the ground that the theory constitutes 'the logic of [epistemic]
judgement'; a view arguable, qua rationality requirement, given the theory's rational
credential (see note 10).
However, the use of probability theory to guide epistemic judgements introduces
degrees into methodological practice, degrees which may have no objective relevance to
the hypotheses in question, notwithstanding the apparent rational credential of
probability theory. Thus without some metaphysical realist posit in respect of that
14. In a scientific context, the naïve empiricist view would entertain seriously, qua
scientific knowledge, only phenomenal experiences such as pointer readings,
apparently chaotic traces on photographic plates, and the like; phenomenal
experiences, which become data, vital for judgements of empiric adequacy, only after
being processed with the help of background knowledge, some of which almost
certainly does not originate solely in "experience", for it may include such products of
the "imagination" or "creativity" as mathematical formalisms, principles, constraints,
etc. Thus, generally, the data, hence also judgements of empiric adequacy, are
themselves laden with theoretical items that are unlikely to stem merely from the
empiricists' 'legitimate source'. Moreover, the data obtained from various domains of
the physical sciences remarkably match, a phenomenon that would be expected only if
the items realists posit actually existed.
theory, the rationale for taking it to be the 'logic of [epistemic] judgement' could only be
its purely formal rational credential, even if the hypotheses in need of judgements are
seen as but empirically adequate. But why be rational in the probabilist sense and
remain baffled by the often amazing empiric adequacy of science?15 Indeed, if (ibid, p.
92), '… rationality is but bridled irrationality …', why be rational at all, why be bridled?
The posit that probabilistic degrees are somehow mirrored in the domain at issue - a
posit which may well be sound in respect of some domains - appears thus to be a
necessary ingredient, qua rationale, of probabilist methodology, whether seen as
yielding truthlike or but empirically adequate hypotheses. Thus the retreat from truth to
empiric adequacy does not obviate the need for metaphysics. Perhaps probabilistic
methodological practice could, at least in principle, proceed without metaphysical
posits (we shall never know, if only because such posits implicitly underpin the axioms
of the probability calculus), with truth being replaced by an enhancement of empiric
adequacy as the aim of science; or in its methodological version (ibid, p.198): '… the
aim is only to construct models in which observable phenomena can be embedded.'
Thus apparently the aim is but to save the phenomena with the aid of models, which
again raises the problem of how to discriminate between an infinity of alternative
models capable of doing the job. But leaving that aside, that aim is indeed in accord
with the view that (ibid, p. 63): 'Science is a paradigm of rational inquiry. To take it as
such is precisely to take up one of the most central attitudes in the empiricist stance. But
one may take it so while showing little deference to the content of any science per se.'
[my italics] But then what point is there in participating in the 'paradigm of rational
inquiry', or indeed in being interested in its outcomes? And how does the view of
'showing little deference to the content of any science' sit with the following two
observations? (ibid., p. 188): 'Science, whether understood with the scientific realist or with the empiricist, provides us with a world picture…', and (ibid., p. 195): 'I see
objectifying inquiry as the sine qua non of the development of modern science and its
incredible, breathtaking achievements in our increasing knowledge of nature.' [my
italics] How many scientists would be content with 'showing little deference to the
content' of their science? That would hardly be a very effective approach to their work;
it would be somewhat akin to medical researchers aiming to discover symptoms without
attending to their causes.
15. Non-realist accounts (accounts not centered on intrinsic contents or structures of hypotheses,
which could make them truthlike) of scientific success (including those of van Fraassen - 1980) are
easy targets for the realist, who is bound to inquire further as to the reason the non-realist account
fits the success of the hypotheses in question. In the case of hypotheses arbitrarily constructed to
"save the phenomena" - like Ptolemaic astronomy - that reason will be clear. But ever since
Copernicus, most scientists laboured hard to find successful hypotheses that satisfied the guiding
constraints of their particular discipline; and in this creative discovery process they almost
invariably resorted to, and were assisted by, realistic models (even if only in the guise of thought
experiments). It would be baffling, indeed miraculous, if those particular hypotheses - especially
the physical theories of the 20th century with their counterintuitive contents - were but successful
instruments for "saving the phenomena". But that is not to say that the no-miracles or naturalist
argument, per se, for the possibility of the realist case, is sound, from a critical rationalist
perspective, for it is tinged either by inductivism or apriorism or both. However, this study suggests
(sect. B) that physics may have accorded scientific status to the naturalist outlook (discussed
below), in the context of the CC, and in respect of the physical sciences.
Contrary to van Fraassen (1985, p. 258), scientific realists are not, or need not be,
'... baffled by the idea that our opinion about the limits of perception should play a role
in arriving at our epistemic attitudes toward science.' Those limits do indeed exist in a
variety of senses - range, sensitivity, selectivity, etc. - and they ought indeed to play a crucial role in our 'epistemic attitudes', for they suggest the very restricted character of perceptual knowledge. But what scientific realists are or could be baffled about is taking those limits to circumscribe the epistemic quest, and thus to abandon the attempt to
acquire deep representational knowledge; to do so is to take no heed of evolutionary
biology (the truthlike character of which is here admittedly, albeit not arbitrarily,
supposed - as we shall see), which suggests that 'the limits of perception' were
circumstantially conditioned, because the sense organs delivering our perceptions are
products of a process the blind "aim" of which is other than the acquisition of truth. As
Dawkins (2004, p. 53) points out, 'You might think that sense organs would be shaped
to give us a "true" picture of the world as it "really" is. It is safer to assume that they
have been shaped to give us a useful picture of the world, to help us to survive.' Of
course, from a realist perspective, the idea that knowledge delivered by sense organs is
of survival utility presupposes that that knowledge is to some modest degree true of the
particular niche in which that utility holds. But, given its circumstantial origins, and its
niche dependence, that degree of truthful knowledge can hardly be regarded as
epistemically privileged; indeed it expectedly differs from species to species. Thus
implicit in the naturalist outlook is the view that knowledge delivered by the sense
experience of any species has no objective preferential epistemic status. Accordingly, in
the particular human case, although knowledge delivered by our sense experience may
well constitute our truthlike epistemic start (it would have been that at least in our
evolutionary past), it does not follow, '... that [our] experience is the sole legitimate
source of information about the world ...' [my italics]. And, if experience were to be the
'sole legitimate source' then how did we legitimately get beyond that source, if indeed
we did do so? It seems that to regard "experience", however interpreted, to be the 'sole
legitimate source of information about the world', and to take the 'limits of perception'
to guide 'our epistemic attitudes toward science', without considering the import of
evolutionary theory as regards both those views, is bound to lead to a misconstrual of
both science and its method. This appears to be the case with constructive empiricism,
for it cannot consistently admit a realist view of evolutionary theory, accepting only its
empiric adequacy.16 But then that adequacy is rationally unaccountable, inviting any
account, with all its attendant consequences. Evolutionary theory, realistically
interpreted, can account for how, in addition to acquiring the faculty of sense
experience, we could also have acquired faculties of imagination, creativity, and of
critical argumentation; faculties which, jointly with experience, could have led to
objective knowledge way beyond that delivered by our percepts.
Consider two further points of van Fraassen:
16. This approach is in line with the defining characteristics of the empiricist tradition
(noted above). It is also in line with the view (van Fraassen, 2002, p. 140): '… we
must admit into our epistemology a place for emotion, or analogous to emotion, in our
description of rationally endorsable changes in view.' On the other hand, to take the
empiricist premiss seriously is not in line with another observation of van Fraassen
(ibid., p. 139): 'Rationality will consist not in having a specially good starting point
but in how well we criticize, amend, and update our given condition.' This view is
more in line with critical rationalism, and with Bayesianism.
(1985, p. 256): ' ... I define empirical adequacy in terms of observability and
point to science as the source of information about what is observable.' But a science - presumably the science of the day - regarded as but empirically adequate, i.e. taken
to relate only to some set of data deemed obtainable, and deemed accountable by the
science in question, can tell us only that the events, e.g. pointer readings, which, with
the help of background knowledge, lead to the data, are observable. Thus, as regards
'what is observable', a science seen as but empirically adequate can only point to
events, the sources of which remain in the dark. But it is, I think, a tall order to expect
most scientists, who are naturally curious, to leave it at that, without enquiring how it is
that the pointer on the dial stops where it does, rather than in myriads of other possible
places. It seems to me that most scientists would regard a science construed in a
constructive empiricist sense hardly worth pursuing. Of course, van Fraassen (1985, p.
257) is right in that, 'We do not ... have a divine spectator who can tell us what is
really going on.' But we can posit 'what is really going on' in order to grasp what our
hypotheses may be telling us - as most scientists do. Without such metaphysical posits
(which often become in time testable posits), the empiric adequacy of scientific
hypotheses - empiric adequacy in either sense, that of van Fraassen's 'observability', or
as used here: observed hitherto - could miraculously have no relation to 'what is really
going on'. So constructive empiricism ends up (or ought to end up if it were not bound
by its arbitrary stipulations, above), entertaining the possibility of miracles, thereby
placing itself outside the bounds of the naturalist outlook.17 But that is not to suggest
that the no miracles or naturalist argument per se is adequate for the realist case (see
note 15).
(1985, p. 296): 'Why should the limits of observability, rather than the mere
factual limits to what has been observed, be given a role in determining our epistemic
commitment? … I feel very sure of two points: that this is how it is in fact in science
and that this is how an empiricist ought to say that it ought to be.' But that is hardly the
way it is in science, given that most scientists entertain seriously realist interpretations
that go way beyond not just 'the factual limits to what has been observed' but also
beyond whatever the 'limits of observability' may be. 18 And if that is the way it ought
to be, then we need to know the 'limits of observability'. But to know those limits
requires knowing 'what is observable', and, as indicated above, a science seen from a
constructive empiricist perspective is not very illuminating on that issue. 'What is
observable' as well as the 'limits of observability' do indeed depend on the state of the
sciences and of the state of instrumental technology of the day (including the science
of sight and of its interaction with what is seen), but that is a view that can reasonably
17
It seems to me that this is the case for all non-realist stances. For their logic
implicates the view that the notion of physical state has no objective counterpart
(presumably, not even when such a state is prepared in the laboratory); a view which
relegates talk in physics to be about nothing at all, with the exception of talk about the
adequacy or not of theoretical structures in respect of outcomes of measurements. But
then given that those outcomes, particularly the corroborative successful ones, have no
good account, they are perhaps best relegated to the realm of the miraculous.
18
The following observation of van Fraassen is relevant here (2002, p. 225): 'It is
important (but only important) that the articulated view of science is exemplified by
real science to some significant (but only significant) extent and that it succeeds well
(but only well) in "rationalizing" real scientific practice.' It does not seem to me that
constructive empiricism fits that bill.
be held only if those sciences and the technology are interpreted realistically; an
interpretation which could license the projection from 'the mere factual limits to what
has been observed' to the 'limits of observability' (as indicated by the sciences
realistically construed), or, alternatively, from the empiric adequacy of an hypothesis
observed hitherto (which tells us about its performing limits hitherto, i.e. across a
minuscule spatio-temporal expanse) to its empiric adequacy across its entire domain,
wherever and whenever that domain is realised. Now scientists do make such
projections, but they generally base them (if only implicitly) on their realist
perspective of the sciences of the day, which can license those projections because
that perspective is a more encompassing projection than the ones in need of a license.
The question addressed here is whether such realist perspectives could be rationally
upheld; or, alternatively, whether the prima facie rationally unwarranted projective
practices could have a rational underpinning, in at least physics and its related
sciences. But "rational" is not understood here quite in the sense of van Fraassen who,
after his description of the general limits of empiricism (1985, p. 253), '... experience
can give us information only about what is both observable and actual.', goes on to
suggest, 'We may be rational in our opinions about other matters [I take that to include
posits about 'what is really going on'] ... but any defence of such opinions must be by
appeal to information about matters falling within the limits of the deliverances of
experience.' The present study does attempt to defend the realist case, or the
apparently rationally unwarranted projective practices, '… by appeal to information
about matters falling within the limits of the deliverances of experience'. But, not
surprisingly, the realist case, being metaphysical, necessitates for its defence some
metaphysical input, as we shall see. But such an approach, and its tentative outcome,
may, nonetheless, be deemed "rational"; especially if the attempt is but to suggest the
possibility of the realist case, i.e. of how a sequence of increasingly truthlike physical
theories could have come about, given the core standard methodological practices in
physics.
Everyone - realists, Kantians, probabilists, empiricists, and instrumentalists - would, of course, agree that science is about accounting for outcomes of
measurements or observations or experiences, which can be publicly authenticated.
Realists insist, however, that it is, and rightly ought also to be, about accounting for
how it is that those outcomes are what they are. The realist stance has important
advantages over non-realist ones, which the latter cannot quite match without,
implicitly, at least, implicating one or another realist posit. Firstly, a psychological
advantage: realist posits motivate the curious. Secondly, an epistemic advantage: an
ability to account for scientific success. Thirdly, a pragmatic and explanatory
advantage, to do with scientific predictions and retrodictions, respectively. Realist
thought can confer foresight on its practitioners: for only an analysis of systems - such
as the sun, the earth, the atom, or whatever - founded on realist views of available
theoretical structures about the systems, can lead to rational anticipation of future
events, i.e. of what might happen to the systems (with or without our interactions with
them), and of how such happenings might relate to human needs (think, e.g. of atomic
and solar energy, earthquakes, etc.). This sort of anticipation is but a continuation of
anticipations of adaptive value acquired in pre-scientific contexts, via direct
interactions of humans with their environment (e.g. a child's acquisition of the
anticipation that touching a hot stove is dangerous). Mutatis mutandis as regards the
explanatory advantage of the realist stance (e.g. in attempts to account for cosmic
evolution, etc.). Finally, a methodological or heuristic advantage: e.g., in viewing the
history of the quantum concept from 1900 to circa 1925 one cannot but be struck by
the consideration that Einstein's heuristic approach of 1905 - of considering the
concept as if it represented something real, notwithstanding Maxwell's field
description of electromagnetic radiation - was pivotal in the development of quantum
theory, from both an experimental and theoretical perspective.19 Non-realists - holding
that physics is only about accounting for outcomes - should have been content with a
purely numerical (instrumentalist) approach. Mutatis mutandis in the case of
accounting for, say, tracks in bubble chambers by positing particles of particular masses
and charges, etc. Realist accounts of how it is that an outcome is what it is in a
particular instance may further facilitate progress by leading to the generalization of
such accounts and/or to their application to other cases; generalizations and/or
applications which often tell whether the realist posits could be sound. But regardless
of whether such posits are sound, realist thought is generally of great heuristic value if
only because it could (and often does) engender a fruitful (from the viewpoint of
further advances, however construed) critical dialogue as to which hypothesis, among
a set of competing empirically equivalent ones, may better capture some truth beyond
the mere data. In the light of such advantages it is hardly surprising that, apparently,
the vast majority of scientists are addicted to realist thinking about the whole of
science, including deep theories. It seems to me that it would take a miracle to get
most scientists to adopt thought processes which would conform to this or that non-realist outlook, whatever the alleged merits of such an outlook might be; and
notwithstanding that the advantages of the realist stance, whether singly or jointly,
admittedly do not constitute a sound basis for it. (They do not constitute a sound,
deductive/empiric, basis, because, whilst they are conducive to human desiderata,
attempts to infer the realist stance from them are tinged with either circularity, or
inductivism, or apriorism, or all three; as in the case of an inference from success to
truth - see below.)
Jeffrey's stance and that of van Fraassen indicate, I think, that attempts to
construct a methodological strategy devoid of realist posits are misguided from both a
descriptive and prescriptive point of view. (Note that constructive empiricism may be
seen not merely as an outlook or stance on science but also as a methodological
strategy, qua attempt to suggest that the discovery and assessment activities in
science can be done without metaphysical posits, and hence that such posits are redundant
for a view of science.) Moreover, if a methodological strategy is to be equipped with a
rationale for its adoption - a rationale that is more than a purely formal rationality
requirement - then realist posits about mind or reality, or both, are unavoidable. For
only such posits can suggest a relation between the strategy and either mind or reality
(which could be but phenomenal reality) or indeed both; a relation neither testable nor
demonstrable, but which can, nonetheless, account, successfully or not, for the
expected utility (epistemic, pragmatic, etc.) of the strategy, and which could thus serve
as a rationale for its implementation. Thus implicitly, at least, methodology is
inextricably intertwined with metaphysics. More generally, formal approaches to
philosophical problems, i.e. the use of a calculus or theorem, for either
methodological or metaphysical aims, respectively, cannot obviate implicating some
metaphysics, because all such approaches necessarily rest on a set of untestable
19
Einstein (1905) suggested that Planck's hypothesis can be disengaged from his
black-body theory, and he went on to derive his more general quantum hypothesis
using thermodynamic considerations, see e.g., Klein (1967) and Farmelo (2002).
axioms - from which the calculus or theorem flows - plus a set of rules, in a way that
implicates, at least implicitly, some metaphysics about the relation of the axioms to
mind or reality or both. It is for this reason that Kripke's contention is sound: 'There is
no mathematical [nor empiricist] substitute for philosophy'.20 Contrary to van Fraassen
(1980, p.82), 'Philosophy of science is not metaphysics... Philosophy of science can
surely stay nearer the ground.', philosophy of science requires both philosophy (or
metaphysics) and science, because the analysis of, and attempts at resolving,
philosophical problems to do with science necessitates both; indeed, often, especially
in important cases leading to novel discoveries, the analysis, and attempts at resolving,
scientific problems, also necessitates both (could Einstein have arrived at his relativity
theories without a philosophical analysis of the concepts of space and time? And what
sense can be made of those theories merely from their empiric adequacy?). Somewhat
analogously, methodology cannot do without both normative and descriptive parts (as
we shall see below in the discussion of Popper's methodology).
Given the metaphysical underpinnings of grand methodological
reconstructions aimed at grand methodological unification, their descriptive soundness
and explanatory and normative (prescriptive) utility are far from clear, however
instructive they may otherwise be. In this, their fate is somewhat similar to that of the
grand philosophical systems of the past. For example, consider the Bayesian account
of the practice of not repeating the same experiment ad infinitum. The account does
indeed suggest the Bayesian rationality of the practice (Urbach, 1981), just as Popper's
account suggests its Popperian rationality (Popper, 1972, p. 240; Miller, 1994, pp. 32-35). But it is surely not the case that if it is possible to show that this or that practice,
or this or that episode in the development of science, may be fitted into this or that
methodological schema, then we are licensed to infer that that practice, or that
episode, is actually the outcome of the prescriptive rationality of that schema. The
possibility of showing that scientific knowledge could have been acquired via this or
that methodological schema does not license the inference that it was in fact so
20
Cited in Levin (1997, p. 256). Levin discusses Putnam's resort to the Lowenheim-Skolem theorem in an attempt to argue against metaphysical realism. In sect. B of this
study, Noether's theorem is made use of in an attempt to exhibit the possibility of the
scientific realist case. It needs stressing, however, that unlike Bayes's theorem, used
for methodological aims, and unlike the Lowenheim-Skolem theorem, used for a
metaphysical aim, Noether's theorem, used here also for a metaphysical aim, pertains
directly to testable theories of specific sorts: those satisfying the CC, specifically HP.
The theorem exhibits links - generally the links are both necessary and sufficient - between specific predictions of such a theory and specific symmetrical features of the
theory. This suggests that we could have distinct empiric access to those symmetries
that are covered by the theorem. But to hold that we do have such access - an essential
posit in the approach of this study to scientific realism - implicates the metaphysical
posit that the formal links, as indicated by the theorem between symmetries and the
theory predictions they generate, are objective features of the theory's real domain.
Thus the use of Noether's theorem in the attempt to bolster the scientific realist stance
also implicates a metaphysical posit. Nonetheless, since the theories to which
Noether's theorem is applicable are also testable and well corroborated via predictions
other than those to do with the theorem, we may have good reasons to think that the
formal links indicated by the theorem could reflect objective relations between
objective symmetries and the predictions they generate (Sect. B).
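By way of a minimal illustration of the sort of link the theorem exhibits (an illustration only, added here for concreteness, and not itself part of the argument of Sect. B; the symbols are schematic): for a theory derivable from a Lagrangian $L(q, \dot{q}, t)$, invariance under time translation, i.e. $\partial L/\partial t = 0$, entails the conservation, along the theory's trajectories, of the energy function:
$$\frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0, \qquad E \equiv \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L.$$
Here a specific symmetry (time-translation invariance) is tied to a specific testable prediction (energy conservation); it is links of this kind that could afford the distinct empiric access to symmetries spoken of above.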
acquired. Just how it was acquired depends on actual practices and on certain
metaphysical posits holding. Hence, whatever the descriptive, explanatory, and
normative force of methodological schemas may be, the fact that none is a testable
scientific hypothesis is crucial to their comparative assessment. Thus all available
schemas - confirmationism, falsificationism, Bayesianism, Kuhnian paradigmism,
constructive empiricism, abductivism (Fetzer, 1981 and 1998),21 bootstrapping
(Glymour, 1980; Worrall, 1982), the methodology of scientific research programs
(Howson, 1976; Laymon, 1978), etc. - might indeed be grafted onto some aspects or
episodes of the history of science, and given the "rationality" of the preferred schema,
whatever it may be grafted onto will appear to have a "rational" foundation.
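To take the Bayesian example mentioned above in a toy form (a schematic sketch of the general idea only, not Urbach's own treatment): suppose a hypothesis H entails outcome e of a given experiment, so that $P(e \mid H) = 1$, whilst $P(e \mid \neg H) = q < 1$ for each independent repetition; then, with prior $P(H) = p$, after $n$ repetitions all yielding e,
$$P(H \mid e^n) = \frac{p}{p + (1-p)\,q^n},$$
so that the posterior approaches 1 geometrically, and the increment contributed by each further repetition of the very same experiment rapidly becomes negligible - whence the Bayesian rationality of not repeating it ad infinitum.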
Two questions then arise: (1) Which schema best models the core
methodological practices, at least in the core sciences? Those are practices that could
have dominated the development of science generally and which might also conform
to our knowledge of the much less articulated practices that could have dominated the
pre-scientific stage of the development of human knowledge; and (2) Are these
practices rational? My reply to (1) is, broadly, Popper's critical rationalism. My
reasons for this choice will become clearer by and by, but the primary one may be
briefly stated: should the dominant method in science, or those aspects of
methodological practices that dominate developments in science, be inductively
grounded (we may, I think, safely assume that they are not aprioristically grounded),
then, given the logical invalidity of induction, the inescapable conclusion would have
to be that the practices that dominate developments in science are not rational, and are
thus incapable of being oriented (intentionally or otherwise) towards acquiring
objective knowledge of an objective reality, presumed to be rationally structured.
This unpalatable conclusion would have to be accepted were it not for the fact that the
evidence indicates, I think, that, hitherto, the dominant method, in both the scientific
and pre-scientific contexts, was not inductively grounded; and, moreover, that formal
inductive procedures, which might conceivably have rationalized - whether in an
objectivist or subjectivist sense - the presumed dominant inductive practices, have
been only sparingly used in the sciences, and perhaps not at all in the core sciences.
The evidence further suggests that all scientific hypotheses do, sooner or later,
21
Abductivism (Fetzer, 1998) is meant to provide '... a more adequate reconstruction
of science than do its "inductivist", "deductivist" and "hypothetico-deductive"
alternatives.' (p. 1) With regard to Popper's deductivism, it is meant to give a better
account of "acceptance", which is crucial; for '... otherwise, science cannot attain the
aim of discovering general principles that can be applied for the purposes of
explanation and of prediction.' (pp. 21-22) I agree that Popper's account of
"acceptance", in terms of his notion of corroboration, is inadequate, as I try to indicate
further in this section. But the abductivist solution to this problem, which is in effect
the induction problem, is obtained by making IBE the centerpiece of abductivism.
Thus abductive inference is meant to complement IBE. But given that IBE is only a
species of inductivism, it is not at all clear how abductivism could be '...the strongest
solution to the problem of induction that empirical procedures are able to provide.' (p.
52) The problem of "acceptance", for purposes of application both explanatory and
pragmatic, is to provide a good rationale for such "acceptance". That is what this study
attempts to do.
encounter their non-white swan.22 Thus it seems to me that the method that dominated
the development of scientific, as well as pre-scientific, knowledge is not inductively
grounded; in particular, that the development of physics was dominated by
deductively grounded tests. This view does not deny that informal inductive thought
has had a significant role in the development of the sciences, especially at low levels
of a science and especially in discovery contexts; nor is it to deny that formal
inductive procedures are now being used, apparently with benefit, in some of the non-core sciences. Given this response to (1) it is important to explore whether a positive
reply can be given to (2), i.e. are the dominant core methodological practices in the
sciences rational from the viewpoint of the critical rationalist outlook? Thus I do not
suppose that, either in or out of science, the thought processes of scientists conform in
any detail to any of the available rational reconstructions: I do not suppose that
scientists are, implicitly or explicitly, Popperians, Bayesians, etc. But I do suppose
that, at least in the core sciences, there are more or less standard practices, chief among
them the testing of hypotheses and their projection for either explanatory or
pragmatic aims; projections which unavoidably implicate the Humean projection
problem, i.e. do we have a good (deductive-empiric-based) rationale for them, one
that would render their practice rational? (The projection problem has hitherto been
stressed in relation to pragmatic applications linked to the predictive use of
hypotheses, but it also arises in explanatory applications generally linked to their
retrodictive use; indeed, in the case of most hypotheses it arises even in their tests - as
we shall see in sect. B.) My main aim is thus to enquire whether and under what sort of
presuppositions those key practices, which appear to have dominated developments in
the sciences, merit the accolade of being rational in the light of Popper's methodology.
For only if they do merit this accolade in that methodology can we regard the
development of science to be a truthlike ascent, given that that methodology is
broadly descriptive (or at least more descriptive than available alternatives, as we
shall see), and given the posit that the worlds which the sciences address are more or
less rationally structured.
My chief concern is with physics, but an analysis of the methods and content
of that science has clear implications for the other sciences. The practice of not
repeating the same experiment - other than to check existing results by increasing the
severity of the tests - is intimately linked up with the projection problem. For I take it
that this practice is not an outcome of scientists adhering to this or that
methodological schema, but rather an outcome of the fact that they, like most
humans, implicitly hold some form of the Principle of the Uniformity of Nature
(PUN), which may perhaps be traceable to our inductivist Humean habits or
intuitions, and ultimately to our evolutionary beginnings. But if evolution chiseled us
22
The case of the swans is thus much more pertinent to science than the case of the
ravens. Be that as it may, encountering non-white swans generally occurs outside the
domain of validity of the hypothesis in question, thus helping to indicate the bounds of
that validity. There are, of course, the most advanced hypotheses in their respective
fields which have as yet not come up against their non-white swan. But in such cases
there generally are good non-inductive grounds for thinking that in time they will (not
even Einstein thought that general relativity is the last word on gravitation). Although,
prima facie, this situation points towards the pessimistic induction view regarding the
development of science, this study suggests the contrary view, that that development
could be a truthlike ascent.
to hold this uniformity intuition it has also ensured that we are capable of learning
from the mistakes which that intuition often leads to; this capability is seen here to
dominate the learning process. In physics, the uniformity intuition takes the form of an
hypothesis that, ceteris paribus, the mechanism or effect responsible for the outcomes
of an experiment is invariant in respect of spatio-temporal location; and if the
explanatory hypothesis accounting for the effect is to take account of that intuition,
then it must be invariant under appropriate space and time transformations. In
Popper's outlook, this invariance supposition is prima facie irrational (invalid),
because prima facie its rationale is either inductivist or apriorist or both. But could
physicists have inadvertently stumbled - via whatever means, including inductivist
ones - upon a good rationale based on distinct empiric scrutiny? It is suggested here
that this may indeed have occurred, albeit within a context that has hitherto guided the
development of physics: the CC (sect. B). Thus this study suggests how this context
could have led, perhaps uniquely, to a good rationale for not repeating experiments, as
well as leading to deep objective physical knowledge: for one of the elements of this
context appears uniquely capable of enabling what amounts to positive selection of
hypotheses, effected by deductive-empiric means, which would have the further effect
of validating an otherwise invalid operation of customary negative selection on the
selected hypothesis.
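Schematically, and only by way of illustration of the invariance form which the uniformity intuition takes in physics (the symbols are illustrative): a law expressed as an equation whose form makes no explicit reference to absolute position or absolute time, say
$$F\big(\phi, \partial_x \phi, \partial_t \phi, \ldots\big) = 0,$$
retains exactly the same form under the replacements $x \to x + a$ and $t \to t + \tau$; so that, ceteris paribus, an experiment governed by such a law would be expected to yield the same outcomes wherever and whenever it is performed.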
The upshot is that two distinct selection mechanisms, both effected by
deductive-empiric means, appear to operate concomitantly but distinctly in respect of
two distinct traits of hypotheses that satisfy the guiding context - the foundational
physical theories. Positive selection appears to operate in respect of their comparative
extent of projectibility, and negative selection appears to operate in respect of their
comparative scope. Their positive selection leads to the disqualification of their
hitherto empirically equivalent aberrant alternatives and to the validation of the
operation of negative selection pressure on them. It turns out that negative selection
alone, even were it to be effected via unproblematic tests, could not single out the
hypothesis of interest, which is, from a realist perspective, the truthlike hypothesis, the
one that is successfully projectable, within its domain. Thus, from a critical rationalist
perspective, the posit that this guiding context could enable a positive selection
mechanism, effected by deductive-empiric means, is a necessary concomitant to the
supposition that it could have delivered deep objective knowledge. Thus the study
suggests how the actual guiding context in use in physics could have delivered deep
objective knowledge by apparently bringing aspects of the physicists' implicitly held
PUN under distinct empiric scrutiny. The practice of not repeating the same
experiment could thus have acquired, unawares on the part of physicists, a good
rationale, which could have also served as a rationale for the projectibility - hence
applicability for either explanatory or pragmatic aims - of the explanatory hypothesis
across its domain.
Among such traditional alleged epistemic virtues as "simplicity", "regularity",
"fertility", etc., some form of simplicity has often been thought to be the best means
for singling out the hypothesis of interest. Popper's resort to simplicity, qua paucity of
adjustable parameters, is a case in point, although he did not acknowledge that that
constitutes a positive reason for the choice of that hypothesis. Nor did he acknowledge
that the proposal fails to obviate the induction/apriorist problem (sect. B). Indeed, the
use of any such traditional guide is either inductively laden or else amounts to
apriorism, or both. From a critical rationalist perspective, such approaches are thus
inappropriate. The presently proposed context is unique in two senses. Firstly, it is the
one in actual use in physics; and secondly, it appears capable of enabling a form of
positive selection, effected by deductive-empiric means, which could validate an
otherwise invalid negative selection mechanism. If our interest is to enquire into the
possibility that today's physics constitutes objective knowledge, then we need to look
at the context in actual use in physics, and try to discern whether and how that context
could have delivered such knowledge; rather than imposing on such an enquiry a
context taken from one or another "rational reconstruction". This study is thus an
attempt to enquire whether the actual core practices in physics - in particular, the bare
bones of those practices, namely the testing and projection of hypotheses – could
possibly stand up to rational considerations; for only if they do could they have
delivered objective knowledge, given the rationalist posit that physical reality is
rationally structured. Such an enquiry requires not only taking account of actual
practice but also considering the methodological context in which it takes place and
analysing the content and structure of its products, for practice and its products
generally interact (e.g. if a product is unsuccessful then an alternative is sought). We
need to look at methodological practices, as well as their products, as they are, or as
best we can discern them to be, and consider whether these practices and their
interaction with their products, conform to the dictates of reason, independently of the
thought processes or lack of such which may have engendered them.
The scientific community appears to harbour at least two conflicting attitudes
towards philosophy of science. According to one, it is, '...just about as useful to
scientists as ornithology is to birds.'23 The other is expressed by Weinberg (1992, p.
29) thus: 'We could use help from professional philosophers in understanding what it
is that we are doing, but with or without their help we shall keep at it.' If philosophers
are to heed the plea for help, then they need to pay more attention to two admonitions
of the same general import: the first comes from Einstein (1933, p. 270), 'If you want
to find out anything from the theoretical physicists about the methods they use, I
advise you to stick closely to one principle: don't listen to their words, fix your
attention on their deeds.';24 the second comes from Cushing (1989, p. 17), ' ... science
is what scientists have done, not what a philosopher tells us the scientists meant to do,
were really doing, or should have done.' Of course, what scientists do, and what to
make of the outcomes of their activities, is not all that clear; hence Weinberg's plea for
help. But philosophical help is perhaps more likely to be effective if it takes a greater
account of actual practice, its context, and its products. One of the most significant
achievements of Popper is to have brought the philosophy of science closer to actual
methodological practices and their outcomes (at least as regards the core sciences),
notwithstanding that normative considerations may have been foremost in his mind;
23
Weinberg (1987) attributes the expression to a remark whose source he forgot.
24
Einstein (1933, pp. 270-271) qualified his admonition both in relation to himself,
'...on the ground that it may, after all, be of interest to know how one who has spent a
lifetime in striving with all his might to clear up and rectify its fundamentals looks
upon his own branch of science.', and more generally, 'At a time like the present, when
experience forces us to seek a newer and more solid foundation, the physicist cannot
simply surrender to the philosopher the critical contemplation of the theoretical
foundations; for, he himself knows best, and feels more surely where the shoe
pinches.' (Einstein, 1936, p. 290) Notwithstanding these apt qualifications, however,
Einstein's admonition is worth heeding for anyone interested in the method practiced
in physics and in interpreting its outcomes.
which would suggest that we may expect to find the method of science and its
outcomes to be by and large attuned with rational desiderata.
Popper
... the central problem of epistemology has always been and still is the problem
of the growth of knowledge. And the growth of knowledge can be studied best by
studying the growth of scientific knowledge.
K.Popper
... uncritical empiricists and positivists have been misled by the generally
excellent working of our decoding apparatus... the apparatus and its excellence are the
result of "natural selection";... we should not be here if the apparatus were much
worse.
K.Popper
... reality has its invariants, and its organic life, and with it "knowledge"
becomes possible provided the invariants play - locally - a sufficiently prominent role
for the organisms to "adapt" themselves to them.
K. Popper
...cats and dogs assuredly learn from experience, but without a symbolic
language.
K.Popper
...it has to be admitted that Popper's thesis accords well with a great deal of the
history of science and its applications. As he puts it ... he is doing little more than
elaborating the old saying that we learn by our mistakes, which, though trite, is
manifestly true of much of the development of science, and of human knowledge in
general.
T.E. Burke
Popper (1977, p. 420) depicts Hume's stance thus: 'The fundamental doctrine
which underlies all theories of induction is the doctrine of the primacy of repetitions.
Keeping Hume's attitude in mind, we may distinguish two variants of this doctrine.
The first (which Hume criticised) may be called the doctrine of the logical primacy of
repetitions. According to this doctrine, repeated instances furnish a kind of
justification for the acceptance of a universal law. (The idea of repetition is linked, as
a rule, with that of probability.) The second (which Hume upheld) may be called the
doctrine of the temporal (and psychological) primacy of repetitions. According to this
second doctrine, repetitions, even though they should fail to furnish any kind of
justification for a universal law and for the expectations and beliefs which it entails,
nevertheless induce and arouse these expectations and beliefs in us, as a matter of fact
- however little 'justified' or 'rational' this fact (or these beliefs) may be.' Popper then
proceeds to give cogent arguments suggesting that both of the above doctrines are
untenable. He thus finds himself in agreement with Hume on the first logical doctrine
and in disagreement on the second psychological one: he accepts the logical invalidity
of induction and that probabilistic arguments cannot validate it, whilst rejecting
Hume's psychological thesis regarding mind's habituation - via repetitions - to an
inductive mode of thought. Popper's assent to Hume's logical stance, and his dissent
from Hume's psychological one, indicates how he could have come to the central
plank of his methodology, which, if sound, dislodges the apparent clash within
methodology between the demands of logic (normative aspect) and the psychology of
the learning process (descriptive aspect): the '... principle of transference from logic
to psychology--the principle that what is true in logic must, by and large, be true in
psychology.' (Schilpp, 1974, p. 1024) Hence, in its learning activity, the mind - a
product of Darwinian selection - operates, 'by and large', in a logically valid manner.25
Thus, '... I see in science ... the conscious and critical form of an adaptive method of
trial and error. ...we (from the amoeba to Einstein) learn from our mistakes.' (Schilpp,
1974, p. 1147) Accordingly, the method by which the epistemic ascent has taken
place neither is nor need be inductive; rather, its dominant feature is eliminative:
weeding out of error or falsity either by nature alone or in more advanced stages by the
critical mind and nature combined. This view leads Popper to reject the notions of
verification and confirmation and to replace them by falsification and corroboration as
being the more relevant concepts for methodology.
On the face of it, Popper's stance does indeed suggest the in principle
possibility of objective knowledge - including deep objective knowledge - brought
about by rational means, i.e. by some form of a Darwinian mechanism. (As indicated
above, the rationality of the means is required by the rationalist presupposition that the
reality in question is so structured that only a rational method is appropriate for its
exploration. From a critical rationalist point of view, this means that the reality must
satisfy the contradictoriness principle.) But the details of the mechanism are missing
and the impasse in Popper's outlook is traceable to their absence. One way of
characterising that impasse is to begin with Popper's depiction of the dilemma
inherent in the inductivist stance (1977, p. 254): '...if we try to turn our metaphysical
faith in the uniformity of nature and in the verifiability of theories into a theory of
knowledge based on inductive logic, we are left only with the choice between an
infinite regress and apriorism.' But this dilemma has its counterpart in critical
rationalism in the following guise: either corroboration is indicative of truthlikeness
(Tr), in which case it has 'predictive import' (more generally projective, hence also
retrodictive, import), albeit not 'predictive [nor retrodictive] content', and is thus
inductive in character, or it is not indicative of Tr, in which case it has no 'predictive
[projective] import' or 'inductive [projective] aspect'.26 But if the latter is the case then
25
Popper's point may perhaps be interpreted thus: learning about reality is of adaptive
utility and in that learning activity the mind is "by and large" rational, willy-nilly,
because it has so been chiseled by natural selection, given that reality is "by and large"
rationally ordered. Hence, to be rational is, "by and large", of adaptive utility. But this
suggests that an ability to choose to be rational or irrational, as circumstances may
warrant, may also contribute to adaptability. We may therefore posit that the chiseling
has also effected a degree of "free choice", at any rate in respect of being rational or
irrational. But once the attribute of "free choice" was there it could have been
exercised - again, depending on circumstances - in respect of aspects of life that were
largely of mind's own making, i.e. ethical behaviour and aesthetic creativity,
notwithstanding that they too may have originated in the evolutionary process.
26
See Popper (1981a, pp. 101-103; 1972, Ch.10); Schilpp (1974, pp.1027-1030); and
Salmon (1988). The impasse in Popper's outlook may also be gleaned from Watkins'
(1996 and 1995a), from which Watkins' own approach to progress and rationality in
science can be discerned.
there is no good rationale for the use - application or projection - of the predictive and
retrodictive content of scientific hypotheses within their respective domains for either
pragmatic or explanatory aims. For if corroboration has no projective import then it is
not clear what reason there is for preferring any well or even best corroborated hypothesis
to any alternative hypothesis, however bizarre that hypothesis may be.27 Hence, there
can be no account of the successes of either predictive or retrodictive applications.
(Popper's eschewal of such accounts is discussed below.) Nor is it of any help, as
regards the problem of the lack of a rationale for projections, to point to the formal
universal character of scientific hypotheses, which would suggest that the hypothesis
should be applicable across its reality, wherever and whenever its domain in that
reality is realised. For what good reason (or deductive/empiric indication) do we have
of the applicability of the hypothesis within its domain, if corroboration has no
projective import? This question acquires particular poignancy upon the realisation
that given any scientific hypothesis it is in principle possible to construct an infinity of
alternative hypotheses all of which would be empirically equivalent hitherto, but the
predictions and retrodictions of each of which would diverge from those of the given
hypothesis, and for all of which one could also claim universality (sect. B). The view
that corroboration has no projective import is the Achilles heel of Popper's
methodology, if only because it is not in line with practice. Scientists do successfully
project hypotheses for both explanatory and pragmatic aims, within the bounds of
what they take to be their real domains. And in so doing, they treat corroborations as
having limited projective imports. Moreover, the idea that corroborations have no
projective import undermines metaphysical realism, committed to the view that
scientific hypotheses are truthlike, and hence projectable within their respective
domains. Thus the pragmatic and explanatory advantages of the realist stance (pointed
to above), to do with predictive and retrodictive projections, are also undermined. Of
course, metaphysical realism may be held onto even if it is held that corroborations
have no projective import - notwithstanding that corroborations relate to objective
sources - but the defence of the realist case would then rest solely on its being
conducive to human desiderata (as discussed above), which, as indicated, does not
constitute a sound basis for it, i.e. its defence would then fall entirely outside any
direct (i.e. non-circular and non-inductive/apriorist) involvement with what van
Fraassen (1985, p. 253) calls, 'deliverances of experience.', notwithstanding that it can
account for those 'deliverances'. Admittedly, the approach pursued here in defence of
the realist case - which leads to the view that corroborations could have limited
projective imports - does not stay entirely within '… the limits of the deliverances of
experience.', since it does invoke some metaphysical realist posits. But, as we shall
see, those posits relate specifically to the possibility of the distinct testability of
distinct symmetric features of physical theories; a possibility which indicates just how
the realist contention of a progression of increasingly truthlike physical theories could
have been achieved by inadvertently submitting their posited truthlike character to the
distinct control of 'deliverances of experience'. The realist contention could thus have
27
The problem of corroboration having no projective import has led one critic to
conclude, 'Popper's scepticism is profound. Corroboration does not alleviate it. In the
context of this scepticism the point of preferring corroborated theories remains
obscure, while the virtue of highly corroborable theories seems to be purely
instrumental.' (O'Hear, 1982, p. 42)
been brought into the purview of such 'deliverances'; hence the possibility of the
realist case.
However, leaving aside the present approach, there is another way, within a
Popperian context, of attempting to circumvent the unpalatable consequences of the
view that corroboration has no projective import: it is to downgrade the significance
of corroboration, stressing falsification instead. On that view (Miller, 1994, p. 6), 'The
central idea of falsificationism is that the purpose of empirical investigation is to
classify hypotheses as false, not to assist ... in classifying them as true.', and, '... if the
aim of science is truth alone, then it is on truth alone that the methods of science
should be concentrated.' I take that to mean that the function of tests is to eliminate the
false hypotheses from those on the table. Should one of these resist elimination for the
time being, then it would be reasonable - because one can do no better - to regard it
tentatively as true and to use it for explanatory and pragmatic aims. Thus the function
of tests is to bring empiric criticism to bear on a set of competing hypotheses in order
to tentatively demarcate those which may be false from the one that may be true. But
this move fails to solve the problem satisfactorily for two intertwined reasons that mix
logical and methodological considerations:
Firstly, as noted above, if there is one hypothesis that resists elimination, then
we can in principle construct an infinity of incompatible alternatives, all of which
would have equally resisted elimination hitherto. On grounds of logic, we cannot
dismiss all these alternatives, however aberrant they might be. And as we shall see,
attempts to obviate the problem by resorting to notions such as 'simplicity', 'physical
necessity', 'logically weaker' (Watkins, 1991), etc. revert either to inductivism or
apriorism or both.
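To illustrate schematically (the illustration is meant only to fix ideas, and its symbols are hypothetical): if the surviving hypothesis H asserts that a quantity obeys $f(t) = g(t)$ for all times $t$, then for any time $T$ later than all tests performed hitherto, and for any non-vanishing function $\Delta(t)$, the alternative
$$f(t) = g(t) \ \ \text{for } t \le T, \qquad f(t) = g(t) + \Delta(t) \ \ \text{for } t > T,$$
agrees with H on every observation made so far, yet diverges from it in its projections; and since $T$ and $\Delta$ can be varied without limit, infinitely many such hitherto unrefuted rivals are constructible.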
Secondly, there would be no trial and error process of the sort that would have
a realistic chance of leading to the truth - especially in cases where the object of
enquiry is complex - unless on the way to that unknowable and unrecognisable truth
the apparently refuted guesses are truthlike, in the sense of having some approximate
relation to the truth, or, alternatively, unless the corroboration of those guesses is
indicative of their Tr, and hence has some projective import. And the idea that one
could stumble upon the true hypothesis of a reality as complex and multi-levelled as
the physical world appears to be by a purely eliminative process, is surely entirely
unrealistic; particularly in view of the possibility of an infinity of alternatives at each
step of the process. A graduated stepwise process of truthlike guesses that could point
the way towards the truth, and in the selection of which some positive element is
involved, seems essential for deep objective progress in physics. Thus the fallback to a
purely negative rationale for the use of the best tested hypothesis available seems to be
a desperate move, at any rate in so far as physics is concerned. Moreover, physicists,
as well as scientists and engineers in the physical sciences generally, use hypotheses
successfully within their respective domains, for either explanatory or pragmatic aims,
notwithstanding that the hypotheses have been refuted and are thus less corroborated
than their best corroborated successors, and notwithstanding that such refuted
hypotheses are incompatible with their rivals, including with their best corroborated
rival.28 And if corroboration is not indicative of Tr, then there is no good account of
28
Miller (1994) writes: 'Falsificationists ... think that a hypothesis need submit to tests
only after it has been admitted to science. If it fails any of the tests to which it is put,
then it is expelled, removed from science;' (p. 7), and, 'For since the aim of science is
just the truth, we shall want to retain anything that might be true, but nothing that
cannot be true.' (p. 8) If being "expelled" means that the hypothesis is no longer used,
or that it is no longer a topic of enquiry, then this view is in clear conflict with
practice, in at least physics. Nor is the following view in accord with practice, 'One
may note, for example, as did Popper ... that science often operates as though it aims
not at truth but merely at close approximations to the truth. In such circumstances a
hypothesis will not be discarded as soon as it is falsified, but only when it has been
shown to approximate the truth worse than does some rival hypothesis.' (p. 10)
Newtonian physics has not been discarded; it has continued to be both used within its
domain and scrutinised for its soundness in respect of a whole variety of measures,
notwithstanding the general view among physicists that quantum and relativistic
physics are closer to the truth. But perhaps by being "expelled" Miller has in mind no
longer being considered to be candidates for the full truth.
such successes. Further, if corroboration is not indicative of Tr and thus has no
'predictive [projective] import', then it is essentially but hitherto observed empiric
adequacy, because, ceteris paribus, empiric adequacy is the central consideration in a
'critical discussion' regarding theory assessment on which an assessment of the
theory's scope is based. And observed empiric adequacy could, in principle, serve just
as well as Tr qua regulative notion, or guide for progress, and as a criterion of whether
progress has been achieved. For although Popper held metaphysical realism to be
important for methodology, it is, in principle, not indispensable (Schilpp, 1974, p.
996). Nor does turning metaphysical realism into an explanatory conjecture - whereby
Tr would account for either corroborative or explanatory success or both - succeed in
resolving the problem. For implicit in such a move is that Tr is inferred from such
successes - or, equivalently, it is to take success to be indicative of Tr - and that move
is tinged, by Popper's own account, with a "whiff" of inductivism (Schilpp, 1974, pp.
1192-1193, note 165b). Moreover, such accounts of success appear to be in Popper's
sense '...almost circular or ad-hoc' (Popper, 1981a, p. 192). This logical point is
reinforced by historical cases of successful hypotheses now regarded to be devoid of
Tr, e.g. the phlogiston theory of combustion, the caloric theory of heat, etc. Such
considerations plus the model mediation problem (touched on below) suggest that as a
general proposition the posited link between either corroborative or explanatory
success and Tr is rightly suspect, notwithstanding its plausibility, especially with
respect to hypotheses of low theoreticity, and notwithstanding that without such a link,
the successes of the physical hypotheses of high theoreticity appear miraculous.29
Popper was undoubtedly aware of the problem: hence his suggestion that
corroboration may be regarded to be indicative of Tr, notwithstanding that this too
'...involves a whiff of inductivism...' (Watkins, 1991, p. 345). In the light of this
impasse, it is puzzling to find Miller assert (1994, p.196), '...the absence of an
adequate objective theory of verisimilitude hardly contradicts falsificationism, and it
does not detract at all from its claim to have solved the problem of induction.' Of
course, this absence does not contradict falsificationism, but it certainly detracts from
the tenability of its central claim.
29
Discussions of the relation between success and truth are ongoing - see, e.g.
Magnus (2003), and references therein.
Falsifications are indeed central to the method of science, notwithstanding
their inconclusivity, for they not only suggest the falsity of the hypothesis vis-à-vis its
complete reality, but they also help exhibit the bounds of a domain within that reality,
in respect of which the hypothesis might be truthlike, in virtue of having a similarity
relation to the true theory of its reality - as we shall see in sect. B. But if that idea were
to be the case, then corroboration (in the sense of corroborative success) could be
indicative of that truthlikeness, which could account for it, and hence it could have
projective import in respect of a limited domain. Practice is in line with this picture,
whilst falsificationism without corroboration qua indicator of Tr is not. Thus,
overcoming the impasse in Popper's outlook requires bringing it more in line with
practice, by focusing on that practice and on the contents of its outcomes.
Indeed, Popper himself points the way to just such a resolution of the impasse.
In discussing the replacement of metaphysical statements expressing faith in the PUN
by methodological principles, he states, 'One attempt to replace metaphysical
statements of this kind by principles of method leads to the "principle of induction",
supposed to govern the method of induction, and hence that of the verification of
theories. But this attempt fails, for the principle of induction is itself metaphysical in
character.' (Popper, 1977, pp. 253-254) His alternative is: '...the principle of the
uniformity of nature is here replaced by the postulate of the invariance of natural
laws, with respect to both space and time... it is [thus] part of our definition of natural
laws if we postulate that they are to be invariant with respect to space and time;...the
"principle of the uniformity of nature" can... [thus] be regarded as a metaphysical
interpretation of a methodological rule - like its near relative, the "law of causality" '
(Popper, 1977, p. 253). But, prima facie, this replacement of the PUN by this
methodological rule or postulate does nothing to meet the Humean challenge to
scientific rationality and its concomitant realism - understood here to be the demand
that there be a good positive rationale for the projection or application of scientific
hypotheses within their respective domains, where such a rationale would be based on
distinct and valid critical empiric scrutiny, i.e. on valid tests; tests that could effect
contacts between singular hypotheses and their test-phenomena, in consequence of which
their attendant projection and model mediation problems could be resolved via
deductive-empiric means. Prima facie, the replacement does not meet this challenge,
because, firstly, most scientific hypotheses do not explicitly satisfy Popper's postulate,
and, secondly, even when they do, the postulate does not appear to be under
appropriate scrutiny, given that hypotheses are tested en bloc, and given the Duhem-Quine problem. Thus the methodological rule, or postulate, does no more than
demand that hypotheses ought to be so constructed as to satisfy space and time
translation invariance. But, firstly, the demand is generally not implementable
explicitly, and, secondly, even when it is, it achieves, prima facie, no more than to
introduce metaphysical inductivist invariance hypotheses into, and in respect of, the
host hypothesis in question; prima facie, the invariance hypotheses are inductivist
because they are, apparently, not under distinct empiric control, given that the host
hypothesis is tested en bloc. Thus Popper's postulate or methodological rule, when
implementable, is prima facie no less metaphysical and inductive (projective) than its
metaphysical interpretation, i.e. the PUN, which it is meant to replace. Popper's resort
to a definition in an attempt to resolve the impasse is hardly in keeping with his sound
view on the poverty of definitions.30 Accordingly, the utility of definitions is to be
sought solely in their consequences (Popper, 1977, p. 55). But, prima facie, there
appear to be no consequences that would resolve the problem. Mere methodological
30
'Definitions are either abbreviations and therefore unnecessary, though perhaps
convenient. ... or they are Aristotelian attempts to "state the essence" of a word, and
therefore unconscious conventional dogmas.' (Schilpp, 1974, p. 981)
postulation, amounting to but stipulation, cannot itself resolve the issue. Nonetheless,
the demand that natural laws 'are to be invariant with respect to space and time' is a
step in the right direction; firstly because it mirrors practice in respect of foundational
physical theories, and secondly, because it suggests that a resolution of the problem in
relation to those theories could be had, provided the relevant invariance hypotheses,
which would render the theories 'invariant with respect to space and time', were under
distinct and valid empiric control. Such control would raise the possibility of the
circumvention of the induction (projection)/apriorist problem in respect of both the
invariance hypotheses as well as their host theory. This study suggests that that
possibility is (or may be) an actuality, albeit within a context specified by the CC,
which the foundational theories satisfy. If that suggestion is sound, then the projection
problem in the rest of physics as well as in the other physical and biological sciences
could have also found a resolution.
The broad outline of Popper's stance is compelling if we wish to entertain the
possibility of a growing body of objective albeit fallible and approximate scientific
knowledge, arrived at largely by rational deductive means. But the stance fails to
provide an account of just how scientific hypotheses could be truthlike, or,
alternatively, it fails to suggest a possible source of their posited Tr; a failure that
accounts for the inability of the stance to provide a good rationale for the application
(projection) of scientific hypotheses within their respective domains, in either
pragmatic or explanatory contexts. And the lack of such a rationale means that the
stance cannot account for the successes of such applications. The defect in Popper's
stance is not a minor one, for it undermines his realist interpretation of scientific
knowledge (O'Hear, 1982, pp. 90-96). Truthlike knowledge ought to be projectable
within some limited domain, and that projectibility ought to be rationally warranted.
Without such a warrant, i.e. without having a good indication of the projectibility of
an hypothesis within its domain, we have no good indication of the possibility that the
hypothesis is truthlike. The suggestion here will be, however, that this defect in
Popper's stance is due to insufficient attention to the details of the learning process
in the evolutionary and scientific contexts; and hence that the defect may be rectified
by a greater naturalization of Popper's outlook, in particular its greater naturalization
to evolutionary biology and to physics.
It may be instructive to begin, therefore, by looking into the source of the
impasse in Popper's outlook. That source can be located in an understandable
dilemma: how to balance two essential but often conflicting desiderata of a
methodology: that it be both normative and descriptive; normative without losing sight of actual practice, e.g. the unavoidable practice of projecting or applying hypotheses for either explanatory or pragmatic purposes, a practice in need of a good
rationale. Tilting too much in the normative direction runs the risk of ending up with a
utopian methodology that has little chance of being either enlightening or of any use.
Tilting too much in the descriptive direction - in the sense of noting, describing, and
perhaps explaining what appears to be scientific practice - runs the risk of turning
methodology into an empirical science, an approach that leaves no room for normative
criticism. There are indications that Popper was aware of this dilemma and that he
tried to come to terms with it. Thus we can find quite a number of passages in his work that attest to a tension between the attempt to be both normative and descriptive
(Popper, 1977, p. 50 and p.199; 1983, pp. 188-189 and p. 223; Schilpp, 1974, p. 966
and p.1036). In particular, he stressed that methodology cannot do without
conventional but non-arbitrary decisions - decisions that generally, albeit not
universally, just happen to be part and parcel of actual practice: e.g. accepting
falsifiability as the demarcation criterion between science and non-science, and
regarding instances of apparent refutations as falsifications, notwithstanding the unavoidable inconclusiveness of such instances; ceteris paribus, preferring the best tested, best corroborated, hypothesis from a set of rival hypotheses; not accounting for reproducible regularities in terms of an accumulation of accidents; and, at least in physics, ensuring that hypotheses are space and time translation invariant; etc. Thus
much of what appears to be purely normative in Popper's methodology coincides with
actual general practice.31 Nonetheless, prima facie, the following lines suggest an
unambiguous resolution of the dilemma in favour of a largely normative methodology.
'A naturalistic methodology [methodology qua empirical science, hence possibly
descriptive] ... has its value... But what I call "methodology" should not be taken for
an empirical science... [he goes on to dispense with the principle of induction] ... not
because such a principle is as a matter of fact never used in science, but because I
think that it is not needed; that it does not help us; and that it even gives rise to
inconsistencies. Thus I reject the naturalistic view. It is uncritical.' (Popper, 1977,
pp. 52-53) [my italics]; 'For logical reasons, the hypothesis must come before the
observation; and I suggest that what is true for logical reasons is actually true for the
organism - both for its nervous system and for its psychology.' (Popper and Eccles,
1977, p. 435); 'I assert that I have an answer to Hume's psychological problem which
completely removes the clash between the logic and the psychology of knowledge.'
(Schilpp, 1974, pp. 1019-1020) The answer to that problem is, '... the principle of
transference from logic to psychology - the principle that what is true in logic must, by
and large, be true in psychology.' (Schilpp, 1974, p. 1024)
Thus, Popper's outlook is, apparently, based on a proposition that is either held
to be a normative principle based on an a priori valid truth - in which case we must
again suppose a "black box", which, notwithstanding apparent induction, effects, by
and large, a logically valid quasi-Darwinian mechanism - or it is held to be an
empirical conjecture, testable in principle. If it is read in the former sense, then Popper
has come down on the side of a 'by and large' normative methodology based on an a
priori valid truth. But a priori validity he explicitly rejects (Popper, 1981a, p. 24 and
p. 92). On the other hand, if it is read in the latter empirical sense, then Popper has
come down on the side of a 'by and large' descriptive naturalized methodology and
epistemology. Of course, this need not mean that we derive methodological norms
from methodological practice or from scientific facts or from the content of scientific
theories; rather, we allow these practices, facts, and theories to inform our methodology
and epistemology. Thus the ambiguity remains. However, by the time Popper
formulated his 'principle of transference', he had moved considerably towards a
naturalism based on a Darwinian evolutionary biology. Thus, as I see it, the principle
should be read as not only a normative proposal, but also as a descriptive conjectural
hypothesis - an hypothesis that suggests itself in view of evidence stemming from the
biological and neuro-psychological sciences as well as from methodological practices
in the sciences. This evidence is, of course, in conflict with the evidence suggesting
that the mind is prone to hold some form of the PUN, and thus prone to inductivist
31
For some examples of Popper's advocacy of bold hypotheses in practice, see Rohrlich (1996a). For an example of the practice of hypothetico-deductivism in chemistry, see Brush (1999). And on how Popper's view of reduction is close to practice, see Scerri (1998 and 1999).
projections. But then the mind could be capable of both sorts of dispositions
depending on the particular situation it happens to be in within its milieu or niche, a
view that would be more in line with the evolutionary picture. The term 'by and large'
in Popper's principle suggests that he was well aware of that possibility, and hence
that he did not hold an either/or stance on the issue, as Hume apparently did.
Be that as it may, although Popper's methodology, at least as regards practices
in the core sciences, appears more descriptive than the available alternatives, as I see
it, the source of its impasse is that its tilt is still strongly in the normative direction. To
get out of the impasse requires its greater naturalization; indeed, it requires
naturalization of the very notion of Tr - as we shall see. Popper deplored
naturalization if that meant turning methodology into an empirical science - methodology qua scientific study of practice - since that could render normative
criticism superfluous. (See above quotations.) But naturalization, in the sense of
allowing both scientific practice and scientific knowledge to inform on what may or
may not be sound practice, which could lead to successful science, is hardly likely to
turn methodology into an empirical science. Thus one would not expect naturalization,
in this latter sense, to be a problem for critical rationalism. After all, learning from the
sciences about how we learn, and about what sort of practices have led to successful
science hitherto, knowledge of which could facilitate further progress, would be but
another example of the bootstrap or feedback character of the growth of human
knowledge. Nonetheless, naturalization to the sciences could lead to an account of
scientific success, and as such naturalization does appear to clash with Popper's
eschewal of attempts at such accounts (Popper, 1981a, pp. 23, 98, and 204; Schilpp,
1974, pp. 1041 and 1026-1027). But why did Popper spurn such attempts,
notwithstanding that he himself indulged in them (as one sees in some of the
quotations above), and notwithstanding that the chief rationale for a realist view of
science, to which Popper was attached, is that it can account for the success of
science? Presumably, because '... any theory of knowledge which, in explaining why
we are successful, allows us to predict that we shall continue to be successful [which
would mean that the theory has 'predictive import' and hence an 'inductive aspect'],
explains and predicts too much.' (Popper, 1983, p. 60) But if Popper's stance is sound
regarding the impossibility of predicting '... by rational or scientific methods, the
future growth of our scientific knowledge.' (Popper, 1976, pp. v-vii) (as is surely
likely, even if his arguments for it should not be sound; Urbach, 1978), then that
would rule out a scientific prediction, one emanating from a scientific theory, of the
continued success of our efforts at making further epistemic progress. Nonetheless, it
would not rule out a metaphysical guess of continued success - a guess that would
stem from a metaphysical theory of knowledge, or rather from metaphysical
conjectures, which, although based on the sciences, are not themselves scientific. The
scientific basis means that the reasons given for the conjectures have withstood
critical scrutiny. Hence, such a theory of knowledge could still count as "rational"
notwithstanding that, admittedly, it would have a metaphysical predictive, or
projective, import, but no predictive, or projective, content. Such a theory could not be
predicting and explaining 'too much' since it could only yield untestable conjectures
regarding the possibility of scientific success hitherto, and in our future. This is
because today's sciences are likely to be able to point only to metaphysical responses
to today's methodological and epistemological problems. And to tentatively accept
such pointers from the critically based sciences is hardly being uncritical. Indeed,
Popper himself often used such pointers (Popper, 1981b,1983,1984; Schilpp, 1974,
pp. 1112-1114).
So long as the sciences remain critically based, naturalization to the sciences
does not mean an end to a critically based metaphysical and normative methodology
and epistemology. Nor need naturalization constitute a reversion to an observationally
centered positivism, since the scientifically based pointers are in no sense
"observables". From a critical rationalist perspective, one that eschews both
inductivism and apriorism, naturalization to the sciences is unavoidable. For if, as
Popper held (1977, pp. 51-52), '... the main problem of philosophy is the critical
analysis of the appeal to the authority of "experience"...', then leaving that analysis,
and indeed any analysis of matters to do with the learning process, solely with
philosophy, is bound to lead to inductivist and/or apriorist "solutions". The only
alternative is to accord to the method, content, and structure of the sciences a dominant role in such analyses. Admittedly, to do so requires a convention, but it is a
convention that yields to the convention on which science is based: the only
convention that is openly fallibilist and under the control of universal interpersonal
reason and experience, and hence non-arbitrary. The case for naturalization is well
made by Shimony (1993d, p. 323): '... there is no reason to believe that the human
intellect is endowed a priori with all the methodological tools it needs for
investigating the natural world. Experience is needed not only to learn substantive
truths about nature, but also to learn how to learn.' To remove any positivist taint from
Shimony's observation, we may interpret "experience" à la Popper (1977, p. 52), i.e.
'...as the method of empirical science.' And we need not follow Quine all the way to
appreciate his main point (1981, p. 72): 'Naturalism does not repudiate epistemology,
but assimilates it to empirical psychology. Science itself tells us that our information
about the world is limited to irritations of our surfaces, and then the epistemological
question is in turn a question within science: the question how we human animals can
have managed to arrive at science from such limited information. Our scientific
epistemologist pursues this enquiry and comes out with an account that has a good
deal to do with the learning of language and with the neurology of perception. ...
evolution and natural selection will doubtless figure in this account, and he will feel
free to apply physics if he sees a way.' [my italics] Here the "assimilation" is not to
empirical psychology but to evolutionary biology and to physics. However, we need to
keep in mind that, 'All sciences interlock to some extent; they share a common logic
and generally some common part of mathematics, even when nothing else.' (Quine,
1981, p. 71)
Although naturalization to the sciences is hardly likely to lead to a scientific
theory of knowledge, it may lead to a metaphysical one, capable of giving a
metaphysical account of our past successes - an account, that is, of what may have
made those successes possible. And whilst such an account of past success would not
enable us to predict future success, it may provide clues as to how to proceed towards
further success. Such clues, moreover, may also suggest how we could have acquired
good reasons, albeit inconclusive ones, for the application (projection) of scientific
hypotheses. Popper (Schilpp, 1974, p. 1114) grants the possibility of 'inconclusive
reasons' in the form of experiences leading to the acceptance or rejection of basic
observational statements - reasons traceable to evolutionary biology. And, of course,
he acknowledges that we may have 'critical reasons', but not positive justificatory
ones, for making choices between competing hypotheses (1983, p. 20). But as to a
reason for the pragmatic application of hypotheses, his suggestion is, '... it is perfectly
reasonable to act on the assumption that it [the future] will, in many respects, be like
the past, and that well-tested laws will continue to hold (since we have no better
assumption to act upon).' (1972, p. 56) But if the reasonableness of that projective
assumption rests only on the idea that we have no better assumption that could guide
our actions, then all manner of projective assumptions could be on an equal footing,
however bizarre they might be. It is not clear then why we ought to regard that
particular projective assumption as privileged compared to any other possible
assumption, unless, of course, the reasonableness of that projective assumption is
based on past experience. But then we have not escaped the inductivist/apriorist
dilemma (Burke, 1988, p. 212). Indeed, this "solution" is not far removed from what
follows from Hume's own stand: act as if there is no problem involved, since we have
no choice in the matter anyway.32 But perhaps Popper's stance is best expressed in the
following lines: '... in spite of the "rationality" of choosing the best-tested theory as a
basis of action, this choice is not "rational" in the sense that it is based upon good
reasons for expecting that it will in practice be a successful choice: there can be no
good reasons in this sense, and this is precisely Hume's result.' (1981a, p. 22); and,
'...for rational action we need criticism... positive reasons are neither necessary nor
possible.' (Schilpp, 1974, p. 1041)33 So Popper is in agreement with Hume that good
positive reasons are not possible, and Popper's response to the projection problem this
poses is to say that they are not necessary. But some such reason is necessary if the
impasse in critical rationalism is to be resolved. Indeed, Popper's use of simplicity qua
selection mechanism amounts to a positive reason, but it fails to obviate the impasse,
for it implicates at least a whiff of apriorism (sect. B). However conjectural a
simplicity-based approach might be, simplicity of any form per se remains a human
desideratum, unless its consequences can be shown to obviate the induction/apriorist
problem via deductive-empiric means. What is required to obviate the impasse in
critical rationalism is a positive selection mechanism brought about by deductive-empiric means; a positive selection mechanism that operates distinctly from, but in
conjunction with, customary negative selection, and which can deliver positive, albeit
inconclusive, reasons for the projectibility of scientific hypotheses. Indeed, it turns out
that customary negative selection is illegitimate, from a critical rationalist point of
32
As Strawson (1958, p. 21) observed, 'If it is said that there is a problem of induction, and that Hume posed it, it must be added that he solved it.'
33
Miller (1994, pp. 51-52) eschews both positive and negative reasons: 'Although
there are such things as good arguments, and it is these that the rationalist tries to
provide, there are no such things as good reasons; that is, sufficient or even partly
sufficient favourable (or positive) reasons for accepting a hypothesis rather than
rejecting it, or for rejecting it rather than accepting it, or for implementing a policy, or
for not doing so.' But is not a good reason also a good argument, and vice versa? Even allowing the distinction, I fail to see its significance. For are not good arguments
based on good reasons? Do we not argue for the falsity of an hypothesis on the ground
that we have acquired a falsifying instance of it? And whether or not the independent
issue of the "sufficiency" of reasons and of arguments can be satisfactorily explicated,
we know that there are no conclusive falsifications and corroborations. The issue is,
therefore, what to make of such inconclusive reasons. Thus Watkins (1995, p. 615)
was right to point out that if Miller's stand on this issue is right then his book would
have been more accurately entitled, 'What Remains of Critical Rationalism when the
Rationalism is Dropped'. (More on Miller's stand in sect. B., note 42)
view, without a positive selection mechanism that can legitimate it. This study
suggests that just such a mechanism could be in operation, albeit explicitly only in
respect of the foundational physical theories, which satisfy the CC. Nonetheless, the
approach has implications for the projection problem in the rest of physics as well as
in the other physical and biological sciences. This study is thus an attempt to show
that in these sciences we may have good rationales - rationales based on distinct and
valid scrutiny - for expecting that our application of hypotheses within their respective
domains, for either explanatory or pragmatic gain, will be successful. To discern such
possible rationales necessitates looking into the reasons that may be responsible for
the varied successes of these sciences hitherto; and that requires looking into past
scientific practice as well as the content and structure of its outcomes.
But in resorting to the sciences to meet the Humean challenge, are we not
begging the issue? For Hume questions the possibility of even a modicum of objective
knowledge required for any kind of development that could lead to objective science.
From a Humean perspective - mind being compelled to inductive, or projective,
procedures whose validity cannot be demonstrated - it is indeed begging the issue.
From a naturalistic standpoint, however, the situation looks more like that indicated
by Shimony (1993e, p. 268): 'Tentative epistemological starting points, derived from
the life-world, are indeed essential in order to begin the enterprise of systematic
inquiry, but these starting points are subject to refinement and correction in the light
of subsequent discoveries. There is a dialectical interplay between epistemology and
the substantive content of the empirical sciences. This dialectic does not commit a
petitio principii, in the sense that what eventually is purported to be demonstrated has
actually been presupposed. It must be acknowledged that the dialectic process is not
algorithmic, and one has no guarantee that the succession of corrections and
refinements will converge. To this extent, the naturalistic epistemology ... may fail to
satisfy all the desiderata of the protophysicists and of other constructive
epistemologists. However, this shortcoming is more than compensated for by
openness, freedom from rigid commitments, capability of self-criticism, and possible
attunement to the natural world.'34 Whilst the dialectic between epistemology and the sciences does indeed 'not commit a petitio principii', the naturalist outlook, in the
context of which the dialectic transpires, does presuppose the PUN and hence the idea
of laws of nature and hence the idea of the impossibility of unnatural (miraculous)
events. Indeed this presupposition is implicit in every predictive (pragmatic) and
retrodictive (explanatory) application of scientific hypotheses, as well as in tests of
most hypotheses, and is thus part and parcel of scientific practice and of the scientific
realist outlook. However, the present study suggests that physics may have found a
way of distinctly and validly corroborating this naturalist presupposition in respect of
the domains of its foundational theories; a corroboration with likely implications for
the domains of the rest of physics and of the other physical and biological sciences,
and for the possibility of the scientific realist stance (sect. B).
34
See also Shimony (1993f, pp. 79-91), and O'Hear (1994).
The present approach
I begin on familiar naturalistic ground (see, e.g. Rosenberg, 1996). Whatever
further details of the bio-evolutionary process may be discovered, its account by the
Darwinian molecular-genetic synthesis is taken to be fundamentally sound: the
account is regarded as a well corroborated, truthlike, scientific (falsifiable)
hypothesis.35 Given that context, it is further posited that within each species, an
organism's comparative "grasp" - whether conscious or not - of its entire situation,
including that of its species in its niche, would make a difference as regards its
comparative expected number of offspring or its comparative chances of survival and
reproduction. Thus the assumption is that variations in this "grasp" are always present
and that natural selection operates, perhaps derivatively, on such epistemic variations.
Alternatively: if the principle of natural selection can be expressed thus: 'If a is better
adapted than b in environment E, then (probably) a will have greater reproductive
success than b in E.' (Brandon, 1990, p. 11), then my posit is that among the items that
figure in being 'better adapted' is the possession - whether via encoded genes or
otherwise - of comparatively more truthlike (more epistemically fitting) bits of
information or knowledge.36 Thus in selecting the 'better adapted' natural selection
would, generally, concomitantly also select the epistemically or cognitively 'better
adapted' among organisms (hence also among genes in the genetic pool) of a species,
and perhaps also among varieties of similar species occupying the same niche.37 In
such a setting it would have been of adaptive utility for organisms and species to
acquire novel bits of correct, or truthlike, information about their niche. Such
acquisitions could have been outcomes of sequences of both passive and active
(whether blindly or consciously active) interactions between organisms and their
niche, as well as between species and their niche. And since such acquisitions are
unlikely to be uniformly spread, variations in cognitive fitness, on which natural
selection could operate, would be maintained. Further, over the longer run in the
course of human evolution mutations have undoubtedly led to an increase in
'information content' (Dawkins, 2004, p. 119). Thus in both the short and long runs of
human evolution low level epistemic developments could have been closely
intertwined with adaptationist requirements, leading to small truthlike epistemic
advances. Now whilst through most of that evolutionary period information would
35
By the Darwinian molecular-genetic (Crick-Watson genetic) account I mean: '...that
the evolution of the biosphere, subsequent to the establishment of the genetic code, is
governed by the principles of heredity and variation and the laws of physics, and is
constrained by biological and environmental boundary and initial conditions, but not
constrained otherwise: within these constraints let happen what happens.' (Shimony,
1993g, p. 229). See also Brandon (1990) and Meehl (1983).
36
Shimony (1993g) claims, rightly, I think, that the principle of natural selection is
'derivative rather than fundamental'. But that is compatible with Brandon's neat
expression of the principle, understood as a scientific hypothesis. See also Clarke
(1996). Clarke points out that 'Humans do not reason logically, but adaptively' (p.27).
But if the human environment is "sufficiently" logically structured, then to reason
adaptively is, at least to some extent, to reason logically.
37
On a non-Popperian view of the possibility that natural selection acted
concomitantly on bio and epistemic fitness in the Pleistocene period, see Clarke
(1996).
have been held only intersubjectively, with the appearance of a more developed
consciousness among distinct species (Popper and Eccles, 1977; Newman, 1996), and
of distinct languages - qua '... biological adaptation to communicate information. ...'
(Pinker, 1994, p. 19) - rudimentary objective representational knowledge, in the form of expressions, signals, and verbal pronouncements, would have become a possibility.38
In short, whilst, '...there is [indeed] no direct way of moving from evolutionary
workings to truth.' (O'Hear, 1994, p.127), I nonetheless posit that insofar as low level
epistemic developments are concerned in an evolutionary context there is a link,
however indirect, between the Tr of expectations or perceptions or hypotheses, held
blindly or consciously by organisms and by species, and their comparative
evolutionary or reproductive success. Hence the possibility of a modicum of truthlike
progress in the epistemic evolution of the human species, in the course of its bio-evolution.
Apparently, only the human genetic structure contained the potential for
allowing the evolutionary process to reach the stage of a degree of consciousness able
to exploit language not merely in a communicative and descriptive sense but also in a
critical argumentative one (Popper, 1977, p. 55 and Sober, 1981). This would have
meant that within the human species biological and epistemic evolution would have
no longer been intertwined to the degree that they are posited here to have been
earlier, since a selective/retention sort of mechanism, somewhat like, but also notably
different from, a Darwinian one (Watkins, 1995b; Smithurst, 1995), could now
operate on variations of epistemic fitness - as regards both description and
communication - on the part of knowledge systems within specific cultures as well as
between cultures. The posit here is that this linguistic critical mechanism came to dominate the human epistemic ascent, thereby freeing it from its previous intimate link with bio-evolution. It would thus have been possible for the ascent to "take off" at a rate much faster than before, and much faster than the rate of continued bio-evolution. Galileo's stress on the mathematical language and on the experimental
method would have sharpened this critical dominating mechanism, providing a further
boost to the ascent. We can thus imagine an epistemic developmental process, capable
of producing increasing amounts of objective physical knowledge, in which both a
Lamarckian - instructive or repetitive or inductive - and a Darwinian - critical or
selective - mechanism play important roles, but with the Darwinian one dominating
the process by exercising a controlling impact on it. Given the supposition that
physical reality is sufficiently logically structured so as to allow for its exploration by
rational means, the inductivist mechanism, being invalid, could not in the "long run"
dominate, but it could both generate expectations, or hypotheses, and be instrumental
in their transmission - hypotheses which one way or another would eventually find
themselves pitted against the controlling impact of reality.
The above evolutionary stance suggests two posits. The first is that
intersubjective sense-mediated mind-world interactions deliver a modicum of
objective knowledge, albeit fallible and laden by theory such as arithmetic, geometry, etc.
That knowledge is therefore corrigible. It is quantitative knowledge about meter
readings, tracks in bubble chambers, and the like; knowledge that forms the "raw data"
of physics. This posit, which presupposes metrication of the world, is essential for an
38
On Popper's account of the development and function of language, of learning, etc.,
see, e.g. Popper (1981a, pp. 266-267); Schilpp (1974, p. 1064); and Popper and Eccles (1977, pp. 48-49, 56-60, and 137).
evolutionary epistemology of science. Its truth depends on the truth of an analogous
conjecture drawn from the above evolutionary picture: that an organism's mind-world
interaction, with the world un-metricated, yields a modicum of objective, albeit
fallible and laden by theory such as general background knowledge, including genetically
stored knowledge. That knowledge is thus corrigible qualitative knowledge. The truth
of this hypothesis would account for our being here at all, as well as for our agreement
about the content of the "raw data".39 Now the "raw data" generally require processing
with the aid of more theoretical and technological background knowledge - including
the available basic data of physics - before they can become either an available
'empirical basis' in tests of this or that hypothesis or an additional part of the basic
data.40 Thus the 'empirical basis' as well as the basic data are fallible and theory laden.
However, a critical correction mechanism operates on both their fallibility - on their
errors - and on their ladenness - on the background knowledge. Thus the "raw data"
rarely gets into the public arena since experimentalists generally report only the
processed data. But they do report how the data was obtained: the experimental "set-up". It is then open to anyone to check the reported data by repeating the experiment
39
If quantitative "raw data" is to be objective in the sense of reflecting the world 'in
and of itself' - i.e. if it is to be objective in the sense of not being merely human-species-relative - then our metrication of the world must reflect, at least to some
extent, its real features. For only if that is the case would the world be mathematically
comprehensible to us. Now our metrication consists in the use of our metrical
standards for the measurement of length and time intervals and of differences in mass.
In the final analysis, however, all physical measurements depend on the detection,
directly or indirectly, of length intervals. Thus we implicitly suppose that a
measurement of such an interval actually reflects an objective feature of the physical
world. We can argue for the reality of such intervals on the ground that our spatial
sense appears to be able to distinguish roughly between differently sized spatial
intervals, and if there were nothing to correspond to such estimates then we would not
be here. Accordingly, our metrication of space adds only precision and interpersonal
uniformity to such estimates of length intervals. But given the mathematical character
of physical hypotheses, a realist view of them must suppose that metrication is not just
a matter of precision and interpersonal uniformity but also a necessary precondition
for making mathematical sense of the physical world and of our experience of it. And
that this supposition is so because physical reality is fundamentally metrically
structured: it possesses an intrinsic metric as regards at least length. Accordingly, our
metrication must be such as to be able to latch on to this metric, with the result that
the outcome of our measurements should be integral multiples of it. Now if there is
indeed an intrinsic metric of length then we would expect it to relate to some physical
constants. Thus it may well relate to Planck's foundational unit of length given in
terms of three physical constants or approximate constants: that of gravitation (g), quantum of action (h), and light velocity in vacuo (c), i.e. L = (gh/c^3)^(1/2) ~ 10^-33 cm.
This would suggest that on this scale the possible discrete character of space,
implicated by an intrinsic metric of length, needs to be taken seriously (Butterfield and
Isham, 2001). However, such discreteness need not mean that the metric can only be
captured mathematically and that mathematical languages are necessarily privileged
for the description of physical reality (sect. H).
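By way of a numerical check of the preceding estimate (an illustrative sketch only), the length can be computed directly from rounded values of the constants; both the formula as written, with the quantum of action h, and the conventional form, with the reduced constant, give a length of order 10^-33 cm:

```python
# Numerical check of the footnote's formula, using rounded values of the constants.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34          # Planck constant (quantum of action), J s
hbar = h / (2 * math.pi)
c = 2.998e8            # light velocity in vacuo, m/s

L_h = math.sqrt(G * h / c ** 3)        # formula as written in the footnote, with h
L_hbar = math.sqrt(G * hbar / c ** 3)  # conventional Planck length, with hbar

print(f"(Gh/c^3)^(1/2)     = {L_h * 100:.2e} cm")     # ~ 4.1e-33 cm
print(f"(G hbar/c^3)^(1/2) = {L_hbar * 100:.2e} cm")  # ~ 1.6e-33 cm, i.e. ~ 10^-33 cm
```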
40
On the problem of the 'empirical basis', see Popper (1977, Ch. 1, sect.); Watkins (1984, Ch. 7, and 1991); and Haack (1991).
either with a similar set-up or with a radically different one, using different
techniques. Such repetitions are generally done of crucial experiments - e.g.
measurements of c - and of experiments yielding doubtful data - data that appear to
refute some of the background knowledge. But the repetitions are generally performed
with the aid of a much improved technological and theoretical background knowledge,
the outcome of intertwined learning processes in technology and theory, processes
which may be roughly modelled by a trial and error elimination schema. The result is
that the repeated experiments "see" better, often in the sense of both scope and
accuracy, and thus constitute more severe tests than their predecessors. In this manner
technological and theoretical advances lead to improved past experiments as well as
novel ones. Consequently, the 'empirical basis' and the basic data are often revised and
augmented, with serendipity playing an important role in the augmentation (Andel,
1994). But since no experiment is likely to "see" perfectly, no outcomes of
measurements are likely to reflect their objective counterparts perfectly. Thus the
'empirical basis' and the basic data may be expected to contain only corrigible
approximations to actual values of quantities, relations, etc. The development of
physics can be pictured therefore as the outcome of two mutually dependent bootstrap
or feed-back processes, one to do with technology dependent experimentation, the
other with theorizing.
As indicated above, the objectivity of the "raw data" as well as the consensus
on its content have an account traceable to evolutionary biology. Thus the rationale for
the tentative acceptance of the "raw data" (qua objective, albeit fallible, and corrigible
knowledge), hence of the physical empirical basis, hence of physics (but also of the
empirical basis of science generally, hence of science), rests on the posited Tr of
evolutionary biology, on a naturalism based on the posited Tr of the Darwinian
hypothesis. Popper appears to have come round to some such position on the
empirical basis from an earlier consensus-plus-repeatability stance, notwithstanding
his earlier opposition to naturalization (Schilpp, 1974, p. 1112). This shift in Popper's
stance on the 'empirical basis' goes hand in hand with his shift towards a greater
naturalization of his outlook, leading to his change of mind regarding the status of
Darwin's theory: he accepted its scientific credentials after initially regarding it as
but a 'metaphysical research programme' (Popper, 1978). From a Humean point of
view, no form of a scientifically based naturalism or anything else can solve the
projection problem. From that perspective the only "solution" is Hume's own stance:
mind's "natural" compulsion to an invalid form of reasoning means that there is no
choice but to think and act as if no rationality problem is involved. In that context any
alternative approach begs the question. But a naturalism linked to evolutionary
biology is admittedly internally circular since the posited Tr of that theory depends on
the posited Tr of its empirical basis and vice versa. The circularity is not vicious,
however, because a critically based, non-arbitrary, intersubjective convention
regarding the empirical basis breaks the impasse. The convention is not arbitrary or
uncritical because with the aid of the other sciences the empirical basis of
evolutionary biology, and hence the theory itself, are subject to a critical correction
mechanism operating on both their fallibility and ladenness - e.g. dating the
palaeontological evidence with the aid of techniques obtained from the other sciences.
Thus, with the help of the other sciences, both the theory and its empirical basis are
under critical empiric control - a situation that exhibits both the bootstrap character of
the development of science and the character of scientific rationality. The only
alternatives to such a conjecturalist naturalism, admittedly resting on a conventionally
but not uncritically based decision, are incoherent (self-refuting - as we shall see)
relativisms, and untestable and non-demonstrable foundationalisms.41
The second posit suggested by the evolutionary stance is that in the
evolutionary context a link exists between the degree of descriptive fitness of
knowledge bits (or their Tr) and their adaptive utility to organisms possessing them.
The better the former, the better the latter. But can we extend the notion of such a link
- which would allow the projective inference from success to Tr - to the historical
period when the epistemic ascent "took off", more or less on its own? When a
linguistic-argumentative mechanism, somewhat similar to the Darwinian one, began
to play a significant role in epistemic developments? When a Lamarckian mechanism
could have had a much more effective cumulative role in this process, i.e. when
adaptive utility no longer exercised immediate direct control over the process? Even if
this extension is granted, can it be further extended to modern science, when not only
did adaptive utility generally have no immediate effect on the developing process, but when that process came to enable human society to interfere with and perhaps even halt
human bio-evolution? When conventional, but non-arbitrary, decisions were
unavoidable for doing science? Could the posited link between descriptive fitness and
adaptive utility in the evolutionary context still hold between descriptive fitness and
instrumental or pragmatic success in the historical pre-scientific period, and between
descriptive fitness and corroborative success in the scientific context? Is it still the
case in science, as posited here to have been the case in the evolutionary context, that
notwithstanding the ambient complexities, '... the world does, in evolutionary fashion,
select some theories and kill off others in which there is no predictive profit?' (O'Hear,
1994, p. 133). And if this is indeed the case as regards hypotheses of low level
theoreticity, is it still the case on a level of theoreticity exemplified by the
foundational physical theories?
Perhaps a better way of expressing the problem is this: could we have acquired
objective knowledge that goes way beyond the knowledge encoded in our
evolutionary and historically conditioned ordinary notions? If so, how? Or are our
biological determinations and historical-sociological conditionings so constraining as
to exclude the possibility of deep objective knowledge? To counter such scepticism
we need first to accept the view that the human mind, with its critical and creative
faculties, is a product of natural selection, and hence a real, indeed controlling,
participant in human activities, albeit within limits set by neuro-physiological and
environmental constraints. Secondly, we need to exhibit in some greater detail just
how deep objective knowledge could have come about, notwithstanding biological
and historical constraints. Otherwise the "miraculous" phenomena of scientific
41
Attempts to provide a foundationalist basis for science cannot obviate the need for a
convention regarding that basis. But a convention merely to underpin an infallible
basis is bound to be arbitrary and leads to the abandonment of its criticism as well as
to the abandonment of the commitment to allow the critical method free rein. Admittedly, the logical status of the rationalist commitment may not differ from that of the foundationalist one (but see Miller, 1994, Ch. 4). There is, however, an
epistemic difference. The rationalist commitment is not to an infallible basis, but
rather to a fallible method of inquiry, which uses, hence must answer to, rational
thought plus intersubjective sensory experience, leading to fallible outcomes. It is thus
a commitment uniquely rational from the point of view that it leads to outcomes that
are perpetually open to criticism.
successes and of the successes of applied science have no good account. The former
have irreversibly transformed our view of the cosmos and of our place in it, and the
latter have radically changed the material conditions of life. All this, accomplished in
a very small part of the 5000 years of recorded history, is in need of an account no less
than Newton's falling apple; although, given the nature of the problem, the account
can only be metaphysical.
But to suppose that the way in which objective knowledge could have grown
in the evolutionary context could have continued in the historical pre-scientific and in
the scientific contexts, is to suppose that the learning methods in these later contexts
are sharpened mechanisms for a continuation of nature's way of effecting objective
epistemic progress, with the difference that instrumental success replaces adaptive
utility in the pre-scientific context, and corroborative success replaces it in the
scientific context. Now I posit that the adaptive utility or adaptive success of some
knowledge bit in the evolutionary context would depend on two aspects of that bit: (1)
the extent of the class of phenomena it captures, i.e. its scope, termed here its
integrative generality, and (2) the extent of its projectibility, termed here its projective
generality, which demarcates the expanse of its niche (or domain). Thus "epistemic
fitness" acquires two senses.42 Clearly both senses matter as regards the adaptive
utility of the bit of knowledge. Think of a signal communicating danger, or location of
food, etc. The extent to which the signal encapsulates the class of phenomena to do
with what it is meant to communicate, i.e. its scope, clearly matters to its success, but
so does the extent of its projectibility, for if that was nought, the signal would be
useless. Thus the success of the communication depends on the projectibility of the
signal, and that requires that the relation between the signal and the terrain it would
need to traverse should be more or less uniform or invariant. For if there were an
obstacle, for example, along the path of the signal, which the signal failed to take
account of, then the signal would not be projectable across the path, and consequently
would not reach its destination. The extent of the signal's projectibility would thus
demarcate the bounds of its niche. Thus the more an organism's descriptive knowledge
comprised knowledge of the invariance or non-invariance of that knowledge across
the relevant reality, the greater would have been the adaptive utility of its knowledge.
Accordingly, evolutionary biology suggests that we ought to regard the Tr of an
hypothesis to be a composite of two components with respect to the two aspects of its
domain: its scope and its expanse.
Given the posit about a link between epistemic and bio-fitness in the
evolutionary context, it seems reasonable to suppose that natural selection would have
operated distinctly but concomitantly on the two aspects of knowledge bits, their
integrative and projective generalities, in the sense that an effect (as regards size) on
42
This is to be contrasted with the bio-genetic notion of "fitness" qua expected
number of offspring or qua probabilities of survival and of reproduction. However,
the posit here is that, in the evolutionary context, there is a long term link between
epistemic and biological fitness. Hence considerations of epistemic fitness can either
be subsumed in an analysis of bio-fitness or discussed independently. Thus in analogy
with bio-fitness it might be possible to consider a probabilistic analysis of the notion
of epistemic fitness, qua differential descriptive efficacy, that would determine, in a
given context, the chances of survival of a bit of knowledge in the sense of its further
use as well as its partial "reproduction", in the sense of it leading to its improved
modification.
one would, over the long run, have a concomitant effect on the other, and vice versa.
Moreover, over a long enough period, the outcome of such concomitant but distinct
selection is likely to be that the two aspects become correlated, or that they would go
hand in hand: the more integrative a bit of knowledge, the more projectable it would
be, and vice versa. Figuratively, the wider the "net" in the sense of scope, the wider its projectibility (applicability) may be expected to be, and vice versa, in line with our
intuition. It would follow that the most truthlike bit of knowledge would be most
truthlike in the sense of both scope and projectibility. This would suggest that if the
learning methods of the pre-scientific and scientific contexts mimic nature in the way
in which objective epistemic progress is effected (as Popper claimed), then they would
have had to bring under concomitant but distinct empiric control the fitness of
hypotheses, in the sense of scope and of projectibility. We may also expect that such
control should bring about a correlation between the two components of epistemic
fitness. Now it is clearly possible that in the pre-scientific context instrumental
success could have functioned as a selection mechanism, acting concomitantly but
distinctly, on both the scope and projectibility fitness of available competing
hypotheses, since success in the sense of scope would have been noted locally and
then the hypotheses - mostly in the form of "know-how" - would have been applied
across the domain of the social setting involved. And we may expect that, in time, the
larger the success in the sense of scope would have been, the larger that domain would
have become. Whilst such a process could have advanced the epistemic ascent to
some extent, it clearly could not have done so greatly, since instrumental success in a
pre-scientific context requires that the knowledge underpinning it possess only a
modicum of truth about phenomenal reality. But if we posit a deeper reality that could
account for the phenomenal one, then the possibility of deep knowledge of this deeper
reality depends on whether the method of science could have effected such
knowledge. The core of this method is the experimental testing procedure delivering
chiefly falsifying and corroborating instances on which assessments of the
comparative corroborative successes of competing hypotheses are based.43 Thus if the
method of science mimics nature in the way in which objective epistemic progress
could be effected then the corroborative success of a scientific hypothesis would need
to reflect concomitant but distinct empiric control exercised over its scope and
projectibility fitness. And if such control is indeed present, then the extent to which an
hypothesis is fit in its scope and projectibility ought to go hand in hand. If we then
take the comparative Tr of an hypothesis, in a sequence of comparable hypotheses, to
be a composite of its comparative scope and projectibility fitness - a composite of the
comparative extent to which the hypothesis captures phenomena in its reality, plus the
comparative extent to which it captures projectibility (invariant) features in that reality
- then these two components of its Tr ought to be correlated, in the sense that the
better one the better the other, and vice versa. Given such a correlation, scope and
projectibility fitness could independently reflect comparative Tr.
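The posited long-run correlation between the two components of epistemic fitness can be pictured with a crude toy model - an illustrative sketch only, not the selection mechanism proposed in this study: a population of "knowledge bits", each carrying a scope value and a projectibility value, reproduces with small variations, and reproductive success is assumed to grow with both traits. The population means of the two traits then rise together over the generations, so that, across the run, greater mean scope goes hand in hand with greater mean projectibility.

```python
# Crude toy model: a population of "knowledge bits", each with a scope value s and a
# projectibility value p; reproductive success is assumed to grow with both traits
# (weight = s * p), and offspring inherit the traits with small random variation.
import random

random.seed(0)
POP, GENS, MUT = 200, 300, 0.05

# initial population: modest, uncorrelated scope and projectibility values
pop = [(random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)) for _ in range(POP)]

means = []  # (mean scope, mean projectibility) per generation
for _ in range(GENS):
    weights = [s * p for s, p in pop]                      # selection on both traits
    parents = random.choices(pop, weights=weights, k=POP)  # heredity...
    pop = [(max(0.0, s + random.gauss(0.0, MUT)),          # ...with variation
            max(0.0, p + random.gauss(0.0, MUT))) for s, p in parents]
    means.append((sum(s for s, _ in pop) / POP, sum(p for _, p in pop) / POP))

# correlation of the two trait means across the generations
xs, ys = [m[0] for m in means], [m[1] for m in means]
n = len(means)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
print("correlation of mean scope and mean projectibility over the run:",
      round(cov / (sx * sy), 3))
```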
43
Since falsifying instances are more telling about the bounds of the domain of an
hypothesis than are verifying (or indeed corroborating) instances, they are more
significant from the point of view of further progress, provided that scientists are not
"immunised" to refutations. Thus, although, given the inconclusive hence tentative
character of both falsifying and verifying instances, the logical asymmetry between
falsification and verification is indeed lost in practice, there exists nonetheless an
important methodological asymmetry between falsifications and verifications.
But could the outcomes of tests of scientific hypotheses (their corroborative
successes), outcomes that have no direct immediate link with either instrumental
success or adaptive utility, reflect distinctly their scope and projectibility,
notwithstanding that all tests unavoidably take place at some spatio-temporal
locality? If we could test locally the projectibility of an hypothesis, and if such tests
corroborated the projectibility supposition, then we could be in possession of a good
rationale for the projectibility - explanatory and pragmatic applicability - of the
hypothesis within its domain (given that the extent of projectibility demarcates the bounds
of its domain), notwithstanding the local character of the test. This possibility
amounts to being able to discern in the outcomes of local tests distinct indications of
projectibility, a possibility which, it is suggested here, may obtain in respect of the
foundational physical theories. On this view, the projection problem pertaining to
these theories, and perhaps also pertaining to all hypotheses that are mathematically
linked to them, could have been obviated.
How then could that possibility have come about? One thing is clear: if the
idea is to be tenable, then the corroborative successes of the foundational theories
would need to be outcomes of valid tests - tests in which their respective domains
could have their say about the theories. But such tests require that the theories make
appropriate contact with their domains, with both phenomena and invariant features in
their domains. Now there are three major logical considerations that cast doubt on the
view that such contacts are obtained: the theory-ladenness of the empirical basis, the
Duhem-Quine problem, and the Humean-inspired projection or underdetermination
problem. It turns out that a projection problem arises not only in the application of
all hypotheses, but generally also in their tests. This problem challenges the idea that
the hypothesis of interest confronts the test-phenomenon singularly, as required if the
test is to be valid. Moreover, there is an additional underdetermination problem - particularly in 20th-century physics - which also challenges the view that deep
physical theories make contact with their purported level of physical reality. This
problem arises because it is generally possible to provide empirically and
mathematically equivalent, but what may be interpreted as ontologically incompatible, alternative formulations of one and the same theory: quantum theory is
a striking example (Rohrlich, 1996b). The chief concern here is the Humean
underdetermination problem, but a resolution of this problem could also be relevant
for this second underdetermination, or non-uniqueness, problem (sect. B).
The relevance of the first two problems - theory-ladenness and Duhem-Quine - has, I think, been exaggerated in the philosophical literature. As indicated above,
from an evolutionary point of view, the theory-ladenness problem in science is but a
manifestation of the bootstrap character of the epistemic ascent in the scientific
context, and ladenness in science is ultimately traceable back to an evolutionary
context. And if scientists had taken seriously what Quine (1981, p. 71) himself
described as, '...the uninteresting legalism ... to think of our scientific system of the
world as involved en bloc in every prediction.', then there could have been no
development of science. Quine continues: 'More modest chunks suffice, and so may
be ascribed their independent empirical meaning, nearly enough, since some
vagueness in meaning must be allowed for in any event.' In practice, methodological
compartmentalization - abstraction, idealization, modelling, on the theoretical level,
and quasi-isolation plus ingenious techniques on the experimental level - made
scientific advances possible.
Compartmentalization is the only possible realistic response to the admittedly
valid holistic thesis, and it generally leads to the evaluation of the hypothesis of
interest in relative isolation. Indeed, it is suggested here that in the context of the
practice of compartmentalization and of the CC, the Duhem-Quine problem, as it
pertains to tests of physical symmetries embedded in foundational physical theories,
could have been circumvented by a characteristic attribute of the theories themselves:
for their embedded symmetries generate distinct predictions on the part of their host theory, apparently making the symmetries distinctly testable. Thus in physics, at least, the
theory-ladenness and Duhem-Quine problems need not seriously undermine the
validity of tests.44 Hence neither do they vitiate the possibility that advances in physics
could instantiate objective progress. But by what mechanism could such progress have
come about? The present tentative response to this question stems from attending to
the projection problem as it arises in tests of the foundational theories - a problem
that, prima facie, brings into question the validity of their tests.
However, no discussion of the problems facing scientific realism can evade
the model mediation problem, which, like the projection problem, challenges a realist
conception of both physical theories and the unification program in physics (sect. I).
The problem arises chiefly in explanatory applications of physical theories, because
whilst those theories are true of their models, they may not be true (nor, from the
present perspective, truthlike) of reality in the "raw": that reality is too
"complex" to conform to the theoretical models, and our attempts to mediate between
those models and complex phenomena via phenomenological models may be
ineffective. Thus whilst the theories may indeed be integrative (of predecessor
theories, laws, and hence of phenomena), and thus appear to explain, this apparent
explanatory power may not stem from their alleged descriptive trait, which
unavoidable model mediation may vitiate. It would follow that explanatory power
may not be indicative of truth, nor of Tr, but merely of efficient classification. A
realist conception of the unification program may thus be misconceived. And so
could be the IBE thesis (which is in any case suspect since it fails to resolve
underdetermination problems), because the theories cannot cope with reality in all its
complexity. And if the theories are indeed neither true nor truthlike then there is
clearly no good rationale for their pragmatic application, even within their respective
domains. However, the mediation problem arises also in tests of physical theories,
because the quasi-isolated phenomena used in such tests, although much simpler than
complex phenomena in applications, may still not conform to the models of the
theories - a situation particularly poignant in tests of 20th-century physical theories,
whose purported referents are much further from our phenomenological base than
those of classical theories. Consequently, the theories may not make appropriate
contact with the test-phenomena, and hence the validity of their tests is in doubt. Thus
although the model mediation challenge is not based on logical considerations, it has
the same effect as regards the validity of tests of deep theories as the logically based
challenges to the validity of such tests. Indeed, the two challenges - the Humean one
and the one due to model mediation - cast doubt on the validity of all tests that are
dependent on prediction or retrodiction; tests in which the attempt to establish contact
between the hypothesis under test and the phenomenon used in the test necessarily
44
On the Duhem-Quine problem see Popper (1972, pp. 238-239); Franklin et al. (1989); Franklin (1986 and 1990); Shimony (1993h); Hardcastle (1994); Culp (1995);
Culler (1995); Weinert (1995); and Lautrup and Zinkernagel (1999).
requires a prediction or retrodiction, drawn from the hypothesis. (Tests of physical
theories are of this sort, but generally not tests of entities, laws, and distributions -
sect. B.) Such tests necessarily involve the projection of the hypothesis across its
test-intervals, thereby implicating a projection problem across such intervals. The
projection problem challenges the validity of the posit that the hypothesis of interest
confronts the test-phenomenon singularly, and model mediation challenges the
validity of the posit that contact between that hypothesis and the test-phenomenon is
possible, even if that theory were the only one confronting the phenomenon. The
projection and model mediation problems thus suggest independently that tests may
be invalid, because the necessary contact between the hypothesis and its test
phenomena, for valid tests to have taken place, may not be achieved.
If any of these problems challenging the validity of tests is to be obviated to
any degree whatever, then some sort of intervention will clearly be required. The
intervention could be of a theoretic or experimental character, or both. But the
interventions need not be intentionally directed at the problem in question: they could
be unintended consequences of intentions and practices that may have no apparent
bearing on the problem. Such situations often arise in the case of interventions that
lead to the circumvention or mitigation of the theory-ladenness and Duhem-Quine
problems, interventions that are part and parcel of scientific practice. But they may
also arise in the case of the projection and model mediation problems as they appear
in tests of the foundational theories. Indeed, this study suggests that in the context of
the CC, the projection problem as it arises in tests of the foundational theories may
have been inadvertently obviated to an extent appropriate for each theory via the
distinct and valid testability of the similar (in form) but also diverse (in content)
symmetric-structures of the theories (the composite of their embedded testable
symmetries and asymmetries). Those structures are interpreted here to be the sources
of the comparative projective generality, and hence possibly of the comparative Tr of
comparable theories. The study also suggests that this possible circumvention of the
projection problem could have a positive bearing on the model mediation problem.
In line with the evolutionary considerations discussed above, there are two
distinct generality traits that may be associated with an hypothesis: generality as to
scope and generality as to projectibility. If we interpret the comparative similarity of
the symmetric-structure of a theory to the true symmetric-structure of the true theory
to be an indicator of the theory's comparative projectibility, and hence possibly
comparative Tr, then in the case of the foundational physical theories the
consequences of the two generality traits may be distinctly discernible in their
corroborative successes. For a sequence of such comparable theories, these distinct
traits appear correlated, as a realist view of physical symmetries would suggest. It
follows from this possibility that the projection problem, as it arises in explanatory
and pragmatic application of these theories and of hypotheses in part reducible to
them, could have been obviated to the extent that that is rationally possible. For the
distinct and valid corroborations of the respective symmetric-structures of the
theories, corroborations that are distinctly discernible within the overall outcomes of
tests of the theories, could constitute good rationales for such applications within the
respective adequacy limits or domains of the theories, as well as for applications of
hypotheses in part reducible to them within their respective domains. If the projection
problem, as it arises in tests of the foundational theories, has indeed been so
circumvented, then the challenge to the validity of those tests stemming from the
model mediation problem may have also been met. If this is at all sound, then there
would be a critical rationalist-based account of the successes of physical laws and
theories as well as of the gradation of the successes of comparable physical laws and
theories, and perhaps also of the successes of the physical and biological sciences
generally, and of the gradation of their successes. The development of physics, and
perhaps that of the physical and biological sciences generally, could thus be outcomes
of a critical rationalist method.
In the light of this possibility, Popper's methodological approach, although
perhaps largely motivated by normative considerations, may be seen to be a broadly
sound description of the scientific method, particularly, but by no means exclusively,
as that method has been practiced hitherto in the core sciences.45 It is broadly
descriptive because the following claims are incontrovertible: (a) scientists do not
generally either discover or assess hypotheses algorithmically; (b) although there is
generally no algorithmic assessment, hypotheses do get severely scrutinised, both
empirically and theoretically, and the outcomes of such critical scrutiny are
overwhelmingly accepted by the relevant scientific community; and (c) this
acceptance is willy-nilly tentative in an objective sense, since any hypothesis can
always be overturned given the perpetual conjectural character of science. But, as
indicated above, there is much more to the way in which Popper's methodology is
descriptive. The core of the method, at least in the core sciences, is, therefore,
regarded here to be the experimental testing procedure plus a deductivism in the form
of the use of modus tollens for truth transmission and falsity re-transmission, leading
to inconclusive hence tentative corroborations and falsifications. The testing procedure
is also the chief catalyst in attempts to link up hypotheses with their test-phenomena.
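Schematically, and only by way of illustration (the notation is mine, not part of the formulation above), the deductive core may be set out as follows. Where T is the theory under test, A the auxiliary assumptions, and p a prediction or retrodiction deduced from them, a failed prediction licenses, via modus tollens,

\[
\big[(T \wedge A) \rightarrow p\big] \wedge \neg p \;\vdash\; \neg (T \wedge A),
\]

so that falsity is re-transmitted from the prediction to the premises (though, as the Duhem-Quine problem stresses, not to T alone); whereas truth is transmitted only from premises to conclusions, so that a successful prediction leaves T unrefuted - tentatively corroborated - but still conjectural.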
This methodological core had a dominating, albeit not a determining, impact on all
levels of the development of science. But this core could not do without mediating
norms, interventions, critical theorizing and non-arbitrary decisions, as well as
interventions via calculations, models, ingenious experimental techniques (Hacking,
1983, Part B), and ingenious (intuitive?) judgements as to what may or may not be
explanatorily and/or causally relevant (Hitchcock, 1995; Dowe, 1995). All this
mediation, however, points only to the fallibility of the method. It does not vitiate its
rationality. The mediation also suggests that we should not expect to capture the entire
method within some formal algorithmic straitjacket. Thus the theories that dominated
the development of physics - classical mechanics, classical electrodynamics, special
relativity, general relativity, quantum theory, and now quantum field theory - could
not have been arrived at via any sort of algorithm, although 'heuristic guidelines' have
undoubtedly had a very important role in their discoveries (Redhead, 1975; French
and Kamminga, 1993; Arntzenius, 1995). Creative discoveries, in and out of science,
appear to be products of a rationally inexplicable element of ingenuity, albeit within a
45
That Popper himself regarded his methodology as being not just normative but also
descriptive may be seen from the following lines: 'The thesis that the criterion [of
testability] here proposed actually dominates the progress of science can easily be
illustrated with the help of historical examples. The theories of Kepler and Galileo
were unified and superseded by Newton's logically stronger and better testable theory,
and similarly Fresnel's and Faraday's by Maxwell's. Newton's theory, and Maxwell's,
in their turn, were unified and superseded by Einstein's. In each such case the progress
was towards a more informative and therefore logically less probable theory: towards
a theory which was more severely testable because it made predictions which, in a
purely logical sense, were more easily refutable.' (1972, p. 220).
context of some guiding constraints.46 And although experimental results, particularly
in the form of refutations, may engender novel theoretical discoveries - e.g. the
black-body radiation measurements' refutation of classical electrodynamics, which led,
through Planck's work, to the beginnings of Q.T. - perceived theoretic deficiencies in
the available knowledge can also engender such discoveries - e.g. Einstein's perception
of the deficiencies of the physics at the beginning of the 20th century, leading to his
relativity theories.47 But
ingenuity also figures importantly in scientific appraisals, often via mediations of the
above sort - hence the inconclusiveness of objective selection even in science. And
although the discovery/appraisal distinction is not clear cut, it is sound and important
for understanding the development of science (Siegel, 1980; Woodward, 1992).
Scientific ingenuity involved in scientific discoveries and appraisals makes the
difference between an intentionally goal-directed evolutionary epistemic process and
one driven by adaptive and/or instrumental success. As regards methodological norms,
therefore, there is much to be learned from a study of successful practice and an
analysis of its products; leading to what Post (1971, p. 218) aptly called, '... a rationale
of scientific discovery, over and above mere trial and error.' [my italics]
However, in explanatory applications - and a fortiori in pragmatic ones -
straightforward deductivism, in the sense of direct deductive subsumption under a
foundational theory, is admittedly possible only with respect to models of extremely
simple phenomena; and even there, ingenious mediations are generally involved. But
that is to be expected, given complexity (sect. E). Complexity generally requires
abandonment of the D-N explanatory model in favour of one or another somewhat
ad-hoc explanatory scheme. But in the context of the present stance - where Tr is not
inferred from success, whether explanatory or corroborative - the fact that
foundational theories cannot be made directly deductively relevant to complex
phenomena does not tell against the idea that they could be truthlike, if only obliquely,
even with respect to such phenomena (sect. I.). Be that as it may, the covering law
account of explanation, linked to a syntactic view of theories, and the model theoretic
account, linked to a semantic view (van Fraassen, 1980), can both be involved in
scientific explanation (Churchland and Hooker, 1985, p. 301; Hughes, 1992, pp. 255-258).
A theory may thus be seen as a formal structure allowing both a syntactic and a
semantic interpretation, either of which may be used in an attempt to clarify the
theory's explanatory role.
The stance is unaffected by the true observation that experiments are often
done without any idea of testing an hypothesis (Hacking, 1983, pp. 149-166). For once
the outcome of an experiment is available, scientists seek to account for it regardless
of the motivation of the experimenter. And if no account can be had from available
hypotheses, then they try to conjecture a novel hypothesis. In this way, experimental
results, not only in the form of refutations, may stimulate and guide theoretical
46
For a variety of views on this problem see Simon (1992); Zahar (1983); Gillies
(1996); and Korb and Wallace (1997).
47
In particular, his perception that classical electrodynamics cannot accommodate both
the constancy of c and the relativity principle. Einstein was guided here by both an
empiric result and a theoretic desideratum (Schilpp, 1969, pp. 53-57). On further
examples of theoretical considerations leading to progress see Post (1971).
developments.48 And the outcome of an experiment done without any test in mind
could in time become a test of an hypothesis, the discovery of which either succeeded
the experiment or preceded it.49 There is also the possibility that an experiment
designed to test one hypothesis becomes in time a test, even a crucial test, of a very
different hypothesis (Hacking, 1983, pp. 237-238).50
From this perspective, inductivist and apriorist modes of thought would have
had significant roles in the development of science - roles which may have been
positive in some cases and negative in others. Psychological factors of all sorts, as
48
E.g. the measured Lamb shift and the anomalous magnetic moment of the electron
being instrumental in arriving at a renormalized quantum electrodynamics. See also Weinert
(1995).
49
This point raises the question of the relative value, in respect of an hypothesis, of available
evidence which the hypothesis is able to accommodate, and novel corroborating evidence
which has been unearthed in consequence of novel predictions of the hypothesis (Howson,
1988; Lipton, 1990; Lange, 2001). From a critical rationalist point of view, the latter kind of
evidence ought to count - and in practice is generally taken to count - for more than the former
kind, because accommodation could easily be the outcome of ad-hoccery, whilst novel
corroborating evidence could be indicative of the ability, on the part of the hypothesis, to
withstand severe empiric criticism; and thus suggestive of enhanced empiric content. An
hypothesis may, of course, generate both sorts of predictions, e.g. G.R., which could account
for known discrepancies between predictions stemming from N.T.G. and evidence, as well as
engendering quite novel predictions (see sect. F). But, given the absence of ad-hoccery, the
issue of the relative value of accommodated and predicted evidence does not seem to me to be
all that significant. What is much more significant in practice, I think, is the overall
comparative performance, or comparative scope, of an hypothesis.
50
In view of the stance outlined above, consider van Fraassen's pronouncement (1985,
p. 279): '... it is ... merely trivial, indeed banal, to say that the evidence exerts
"negative control": a theory seen to conflict with the evidence is ruled out. This point
is simply irrelevant to those who do not believe the evidence, and it follows trivially
from the principle to maintain ordinary logical consistency for those who do. (It does
so even on empiricist grounds, if acceptance of a theory entails belief that it is
empirically adequate!) So, if that point were the core of methodology, emperor Popper
would be wearing no clothes.' Firstly, the point is indeed 'irrelevant to those who do
not believe the evidence'. But overwhelmingly, and without adherence to any theory of
belief, scientists tentatively accept published evidence for purposes of testing
hypotheses as well as for other purposes; and if they are sceptical whether the
evidence merits acceptance, they check on it by redoing the relevant experiment.
Secondly, the point is indeed the core of the method; hence the emphasis the
"emperor" put on it. To have done so may seem trivial and banal to some, significant
to others; significant, if one accepts that just such '"negative control"', exercised by an
objective reality with and without human intervention, actually dominated a
trial-and-error process that led to the development of human knowledge regarding the subject
matters dealt with in today's sciences. Nor is it surprising that that should have been
the case, given the bounds of the human situation in both the prescientific and
scientific stages of the development of that knowledge. The idea that an algorithmic
procedure - whether administered intentionally or by a mental "black box" - could
have had a significant role in that development, or that it could do so in the future,
seems to me illusory. So the emperor may have been wearing some clothes after all.
well as pure chance and serendipity, would undoubtedly have had similar roles.
Sociological factors too would have had such roles (notwithstanding that physics and
the other sciences are largely products of key dominating ideas invented or discovered
by individual scientists in the course of their use of reason and evidence to see their
way through the problems facing their science of the day).51 Such factors in the
development of science may increase the number of blind alleys - Ptolemaic
astronomy, the phlogiston hypothesis, etc. - which are, in any case, to be expected.52
But the realist stance does not require that the ascent be linear, only that there be a
controlling, even if fallible, mechanism that could extricate the ascent from such
alleys; a mechanism that could thus dominate developments in the longer run. Some
such mechanism has been in operation in both the evolutionary and pre-scientific
periods, and it has been immensely sharpened with Galileo's introduction of the
experimental method and much further sharpened since then. It is thus difficult to see
how it can be maintained that there is no more truth in the products of longer-term
scientific developments than there is in the knowledge of pre-scientific cultures, or
that there is no more truth in Newtonian physics than there is in Aristotelian physics,
etc. 53
But some such idea is implicated by today's internalist outlooks, willy-nilly,
notwithstanding that their apparent progenitor - Wittgenstein of the Philosophical
Investigations - may not have had that in mind (Putnam, 1990; Shimony, 1993a and
1993h; Friedman, 1998). Thus Kuhn (1969, p. 206): 'I do not doubt, for example,
that Newton's mechanics improves on Aristotle's and that Einstein's improves on
Newton's as instruments for puzzle solving. But I can see in their succession no
coherent direction of ontological development.' Kuhn (1969, pp. 171-173) also
suggests that we ought to see epistemic evolution not as an approach to truth, but
rather as a process that has no goal, in analogy with Darwinian bio-evolution. But
whilst in the case of the bio-evolutionary process truth is indeed not a goal in any
direct sense - although as indicated above low level epistemic advances may have
been intertwined with adaptationist requirements - in the case of the epistemic
evolutionary process truth (or approximate truth) could certainly play a crucial role,
via the preference for the most successful hypotheses, given that truth (or approximate
truth) can account for such successes (notwithstanding the projection problem such an
51
On the role of sociological and psychological factors in the development of Q.T. see
Cushing (1994); and on the role of serendipity in the development of science see van
Andel (1994).
52
Incidentally, an account of the possibility of objective scientific progress ought to
be able to explain why it is that some very successful hypotheses turned into blind
alleys. From the viewpoint of the present stance, the answer to that will become
evident in sect. B, i.e. those sorts of hypotheses are outside the context of the CC; they
neither satisfy them nor are they linked to hypotheses that do. In sect. B, we will also
see that even hypotheses (foundational theories) that do satisfy the CC can be dead
ends in a developmental strand. Such cases are attributed here either to the
incongruous symmetric-structures of such theories or to their failure to take account of
fundamental invariants, or both.
53
It could be that Q.T. may lead us to the view that on the sub-quantum level physical
reality is more in tune with Aristotle's physics than with Newtonian physics (sect. E,
note 28). But then we would have arrived at that view via a progressive learning
process.
account raises). Perhaps in an attempt to mitigate the full impact of his earlier views,
Kuhn tellingly explains (retracts?) (Horwich, 1993, p. 336), 'The point is not that laws
true in one world may be false in another but that they may be ineffable, unavailable
for conceptual or observational scrutiny. It is effability, not truth, that my view
relativizes to worlds and practices.' But does not relativization of effability implicate
the relativization of truth? For if the cognitive knowledge of one "world" is not open
to conceptual and observational scrutiny from another "world" - using, of course,
concepts and methods of that other "world" - then truth is, and must remain, bottled up
in the different "worlds", or lexicons. Only this interpretation seems to be in accord
with Kuhn's stance, 'That claims to that effect [claims regarding objective epistemic
progress, hence claims regarding the nonrelativity of effability] are meaningless is a
consequence of incommensurability.' (Horwich, 1993, p. 330) 54 But is the ineffability
claim, a consequence of the incommensurability thesis, descriptively tenable?
Consider the following examples. Science-based medicine is ineffable within
alternative medical cultures or practices, but the reverse is not the case: alternative
medicines have been under the scrutiny of science-based medicine for some time now.
Similarly, quantum electrodynamics is ineffable in the "world" of classical
electrodynamics, but again not the reverse: classical electrodynamics can be made
sense of from the perspective of quantum electrodynamics, etc. Thus, whatever the
logical credentials of the incommensurability and ineffability theses (it is admittedly
the case that successor and predecessor "worlds" are incompatible), they are not in
accord with experience - just as Berkeley's idealism is not in accord with Johnson's
experience of kicking a stone. Of course, one-way effability does not mean that we
have truthlike progress, but it does challenge the contention that the realist claim is
meaningless, because the best account of that one-wayness is truthlike progress.
In their attempt to vitiate the possibility of such progress, relativists raise
another argument. Scientific culture, they say, is not exempt from a necessity common
to all cultures, according to which each community of practitioners is "committed" to
its methods and its tenets. And since there can be no Archimedean point that would
allow judgements about the relative cognitive worthiness of different methods and
tenets, the method and tenet of scientific culture cannot possibly be cognitively
privileged. But if there can be no such point, then the relativist method (e.g. the use of
the argument just presented) and tenet must itself be cognitively on a par with the
others. Consequently, relativism is incoherent, for it must claim a privileged position
for its stance, on the one hand, whilst on the other, it cannot abandon the claim that no
such position is possible.55 This leaves open the possibility, not excluded by the
common necessity above, that the method and tenet of scientific culture - empiric and
theoretic scrutiny plus a posited unifying explanatory scheme, realistically interpreted
- are cognitively privileged, if only because the "commitment" to them is uniquely not
absolutist; in that the scientific community does not claim that the "worlds" its method
and tenet deliver - "worlds" effable from the perspective of their successors, but not
vice-versa - are incontrovertible. This absence of closure in the learning process, its
perpetual openness to revision, is but a continuation of the naturalist trial and error
approach; and from that naturalist perspective it is of immense cognitive utility - it
54
See also Miller (1991); Sankey (1993); and Shimony (1993d and 1993i).
55
Suspect too is a constructivism the central claim of which is that reason is never
compelling. For if that is the case, then it is also not compelling as regards the
constructivist argument.
could clearly have had such utility in all stages of cognitive developments. It follows
from that openness that, 'Rational discussion and critical thinking are not like the more
primitive systems of interpreting the world; they are not a framework to which we are
bound and tied. On the contrary, they are the means of breaking out of the prison - of
liberating ourselves.' (Popper, 1983, p. 155) It is the liberating effect of these 'means'
that makes it possible to allow nature to dictate - admittedly within some conjectural
guiding context of our choosing - the sort of "worlds" we can tentatively accept. 56
Empiric and theoretic scrutiny amounts to scrutiny with respect to empiric
fitness (in the light of experimental outcomes), and theoretic soundness (in the light of
logical, mathematical, conceptual, symmetrical, and metaphysical desiderata). One
needs only to browse in physical and related journals to note that such scrutiny of
available physical "worlds", at times directed specifically at their "paradigms", is,
whilst not a matter of routine, not uncommon. Moreover, physicists who engage in
such activities generally also do "normal physics", in the "worlds" being scrutinised,
as well as in others. But the activity of "normal science", whether theoretic or
experimental, is, willy-nilly and regardless of intentions and motivations, also a
critical one, because it can always uncover problems for existing "paradigms";
problems - theoretic deficiencies or empiric anomalies - which the community of
practitioners cannot and does not ignore - a striking example is the black-body
radiation anomaly that confronted Planck. Be that as it may, scientific
scrutiny, combined with the drive towards better understanding and unification, have
led to spectacularly new "worlds" in physics, e.g. those that arose in consequence of
the scrutiny of Aristotelian physics and of Newtonian-Maxwellian physics, "worlds"
that had a dominating impact on the development of physics. But not even such
radical developmental steps quite conform to Kuhn's model of change, if only because
predecessor and successor "worlds" (or their attempted formal descriptions) invariably
share some significant structural aspect(s). Admittedly, there can be no guarantee that
scientific activities lead to objective knowledge, let alone objective progress: that the
new "worlds", founded on somewhat but not entirely new "kinds", are at all truthlike,
let alone nearer to a posited truth. But Kuhn's internalist outlook, his stress on
"incommensurability" and its complement "normal science", does not seem to be a
56
Implicit in Popper's stance is the posit that the human mind has reached a stage in
its evolutionary and historical development such that, under favorable conditions, it is
not entirely determined by its biological origins, sociological conditionings (which
tend to engender their particular confining customs and habits, over and above the
Humean ones), and the interaction between them; for the possibility of a choice to
participate in 'rational discussion and critical thinking' holds only for minds that are
already in part "free", or autonomous (in the sense of Kant's Enlightenment). Such a
choice then becomes 'the means of breaking out of the prison': of further self-liberation.
See also Redhead (1995, Ch. 1). On whether Critical Rationalism, and its
naturalist underpinning, can avoid the tu quoque charge as regards "commitment", see
Miller (1994, Ch.4). But whether or not it can, the rationalist "commitment" differs
significantly from the others - as indicated here and in Sect. B. The suggestion, on the
part of rationalism, of the possibility of approximate objective knowledge arises out of
this difference. Internalism denies that possibility, and with it the possibility of
objective evidence. The search for evidence in any field whatever is thus pointless,
except, of course, in the interest of internal puzzle solving. There can thus be no
matter of fact about any event, only diverse internalist perspectives on it.
good description of what goes on in physics. (Indeed, the very concept of "normal
physics" is dubious - Leggett, 1992). And, I suspect, the same holds, more or less, and
mutatis mutandis, in the other sciences; and perhaps even in scientific technology, and
in pre-scientific contexts, i.e. in cases where the common measure is generally
instrumental success. It seems that diverse societies have been comparing each other's
knowledge systems since the dawn of history, for their relative instrumental efficacy,
and in later stages of historical development also for their relative intellectual quality.
Thus, diverse epistemic "worlds" do not appear to be the conceptually
(semantically) and observationally (factually) closed structures (frameworks) that
relativists imagine them to be (Popper, 1994; Schilpp, 1974, pp. 1144-1148). It is true
that there are many scientists who do only "normal science" in one particular "world"
of their science, whether before, during, or after a revolutionary period in that science.
But the reasons for that, I suspect, are a host of circumstances external to science, e.g.
specialization, and not that the "paradigm" of that "world", or of the science to which
it belongs, is all that constricting, or that there is a lack of problems facing it, or that
it is, as alleged, unavailable to critical scrutiny. In no case are practitioners of "normal
science" 'bound and tied' by the "paradigm" of their science.57
Kuhn's challenge is directed at the idea that successor "worlds" could be more
truthlike than their predecessors. The challenge is based on the doctrine that
successors and predecessors are incommensurable, and hence mutually ineffable.
Realists can point out, however, that this ineffability is unidirectional: predecessor
"worlds" are effable in the context of their successors, but not the other way round. A
good account of this one-way effability is the posit that successors are more truthlike
than their predecessors. But prima facie this account runs into the projection problem,
via IBE. Moreover, the projection problem, and independently the model mediation
problem, bring into doubt the validity of tests of at least physical theories - as we shall
see in sect. B. (Of course, the projection problem also suggests that there may be no
good rationale for the explanatory and pragmatic applicability of any hypothesis, and
the model mediation problem suggests that there may be no good rationale for the
pragmatic applicability of at least physical theories, notwithstanding their integrative
efficacy.) Further, the theory-ladenness problem, and, independently, the wholistic
thesis, cast doubt on the validity of tests of any hypothesis. Thus each of these
problems presents a specific challenge, with respect to some or all scientific
hypotheses, to the central presupposition of scientific realism - the validity of tests.
This presupposition is a sine qua non for the realist case, since it is tests that effect
empiric scrutiny. But notwithstanding the alleged flawed character of tests, they do
appear to have a dominating impact on scientific developments, developments that led
to progressively more successful hypotheses. Thus whilst Kuhn's incommensurability
57
The likely origin of Kuhn's misconceptions may be gauged from the following lines
of a recent study of Kuhn's work (Caneva, 2000, p. 119): 'Kuhn's faith in the central
importance of incommensurability both to the progress of science and to its
philosophical understanding was ineradicably rooted in the central experience of his
development as a philosophically attuned historian of science. Indeed, the likeness of
that experience to a religious epiphany - he called it a 'revelation' - helps to explain its
tenacious hold on his imagination. In the end, incommensurability represented to
Kuhn the cardinal insight of his life's work: if the discontinuities it entailed did not
chime well with the historical record of science, then it was history that would have to
go.' [My italics].
thesis flies in the face of the routine practice of empiric and theoretic comparisons of
incompatible "worlds" - comparisons made possible by the effability of predecessors
in the context of their successors - as well as in the face of the important role such
comparisons have had in scientific developments, the theses or problems challenging
the validity of tests fly in the face of the success of such tests, qua indispensable
mechanism for bringing about more successful hypotheses. If we then posit that the
increased successes of successor hypotheses are due to their enhanced Tr, then the
question arises: how could a rationally flawed mechanism or practice lead to more
truthlike hypotheses, even if only in the longer run? (Recall the posit that only rational
means can bring about objective knowledge.) Thus realists of either variety,
inductivist or deductivist, need to show how scientists could have, either intentionally
or inadvertently, circumvented, at least in part, the problems affecting tests, thereby
providing an account of the indispensable role tests could have had in bringing about
the realist's posit of objective epistemic progress. And Popperian realists - who need
not be concerned with inductively arrived at conclusions, pessimistic or otherwise
(Psillos, 1996) - need to show that possibility, without resort to either inductivist or
apriorist arguments. Only an indication of how the problems affecting the testing
procedure could have been significantly overcome, can suggest how objective
knowledge could have grown in the sciences, notwithstanding whatever authenticity
there may be in Kuhn's analysis of that growth.
I have briefly indicated above how the problems of theory-ladenness and
Duhem-Quine wholism could have been largely circumvented. It remains to consider
whether the projection problem in tests could have also found a resolution, even if
only a partial one, and whether that resolution is of relevance to the projection
problem in applications, and to the model mediation problem in tests and applications.
The projection problem arises because given any scientific hypothesis there is always
an in principle possibility of an infinity of hitherto empirically equivalent 'alternative
modes of representation' - to use Hacking's phrase (1983, p. 143) - the projective
implications of which differ. Thus the problem is to select the possibly true or
truthlike representation, for use in projective (explanatory and pragmatic) applications.
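A schematic, Goodman-style illustration of such hitherto empirically equivalent but projectively divergent alternatives (the construction is the standard one, offered here only to fix ideas; it is not part of the argument above): where E(x) reads 'x is examined before t_0', consider

\[
H_1:\ \forall x\, F(x), \qquad
H_2:\ \forall x\,\big[(E(x) \rightarrow F(x)) \wedge (\neg E(x) \rightarrow \neg F(x))\big].
\]

Both hypotheses fit all the evidence gathered up to t_0, yet their projections beyond t_0 contradict one another; it is alternatives of the H_2 type that, in practice, parsimony disqualifies.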
Hacking (1983, p. 143) points out that, 'None of the traditional values - values still
hallowed in 1983 - values of prediction, explanation, simplicity, fertility, and so forth,
quite do the job'. Indeed they don't, if by doing the job is meant being able to show
that such values, whether singly or jointly and however interpreted, could single out
the possibly truthlike representation from among the alternatives without encountering
the induction/apriorist dilemma; or, alternatively, being able to exhibit, within a
critical rationalist context, the possibility of the realist case in respect of the
hypothesis of interest with the aid of such traditional values. But from a realist
perspective, the central issue is not whether this or that value (tool) could do the job,
but rather whether the methodological tools in actual use could have done the job, and
if so, how? Now the chief methodological tool in use is the testing procedure, thought
to be operating only on the hypothesis of interest. But the in principle possibility of a
plethora of hitherto empirically equivalent alternatives to that hypothesis calls into
question the validity of the testing procedure itself, since it makes it unclear as to
which hypothesis is actually being tested. The testing procedure, however, does not
operate in a methodological void, nor can it be expected to do so if it is to lead to
truthlike hypotheses. For prior to testing, the discovery process itself is generally
guided by a set of constraints, tailored to the requirements of the science in question.
Thus parsimony is apparently universally practiced in all the sciences. Parsimony does
the job of disqualifying hitherto empirically equivalent but non-parsimonious
alternatives to the hypothesis of interest. It thus effects the selection of that hypothesis.
Parsimony is, however, patently laden with the induction/apriorist problem. (Of
course, any human activity, value, constraint, etc., including the testing procedure
itself, could be regarded to be laden with this problem. But this is to stretch such
ladenness to absurdity.) Parsimony, therefore, effects the selection of the hypothesis of
interest, but, not surprisingly, without validating the testing procedure, or,
alternatively, it "validates" the testing procedure solely in relation to the hypothesis of
interest, but, from a critical rationalist point of view, it does so with invalid means.
Thus, from that point of view, the selection is illegitimate (invalid). Could the practice
of parsimony be rationally underpinned by another practice? It is suggested here that
that possibility is indeed present in physics, and since the other physical and biological
sciences are in part reducible to physics, parsimonious practice may also be implicitly
rationally underpinned in those sciences.
Thus the CC are imposed on the foundational theories of physics in the very
process of their discovery, forming the guiding context in such discoveries. In addition
to parsimony, theoreticians demand that the theory be mathematically coherent.58
They further demand that the theory satisfy HP.59 It turns out that the demand to
satisfy HP restricts the sort of phenomena the theory can precisely be about, i.e.
adiabatic phenomena; and, in inertial, pre-general-relativistic physics, the demand
imposes the basic chronogeometric symmetries on the theory in such a manner that
they are apparently open to distinct and valid empiric scrutiny. (All this is discussed in
sections B and C. The case of G.R. is more problematic; it is discussed separately in
Sect. F., and is not taken account of in the remarks here.) HP is thus interpreted to be
an expression of the realist posit about theories meant to describe adiabatic
phenomena, and as linking that posit to testable fundamental invariant traits, which
such a theory would need to satisfy. Given their distinct and valid testability, those
invariant traits could have had the following three consequences: (1) They could have
validated the testing procedure, solely in respect of the theory of interest, and they
could have done so with valid, deductive-empiric, means. From a critical rationalist
point of view, such validation would amount to the positive selection of that theory
via valid means, and the disqualification of the in principle possible non-parsimonious
alternatives, since they are untestable; this would have had the effect of rationally
underpinning parsimonious practice and thereby accounting for the success of the
practice. (2) The validation of the testing procedure solely in respect of the theory of
interest means that that theory alone could have been validly refuted, as well as validly
corroborated within its adequacy limits, thereby exhibiting its standing in relation to
58
This demand fits in with the rationalist-realist posit that physical reality is a single
unit, rationally structured. It is presupposed that the mathematical language is well
suited, perhaps uniquely suited, to express that posit - see sect. H.
59
Whilst satisfaction of this demand - in its present form, at any rate - was inadvertent
on the part of Newton, it and the other two demands of the CC have become part and
parcel of the method of 20th-century theoretical physics, but the origins and practice
of all three demands undoubtedly date back considerably earlier. Thus mathematical
coherence dates back to at least the 17th C, parsimony is traceable to Ockham's razor
(14th C), and versions of HP (1833), in the form of a principle of least action, date
back to the 18th century in the work of Lagrange, Euler, and Maupertuis (Bynum,
Browne, Porter, 1981, p. 146).
its actually available alternatives. Thus positive selection via valid means, which can
validate the testing procedure, appears to be a necessary condition for valid negative
selection to operate, leading to increasingly corroborable theories. And (3), the theory
embedded basic chronogeometric symmetries could have imparted to the positively
selected theory some spatio-temporal projective generality, which could have been
under distinct and valid empiric control. This idea is extended here to the composite
of all apparently distinctly and validly testable theory embedded symmetries and
asymmetries, the composite forming the symmetric-structure of a theory. The apparent
distinct and valid testability of theory embedded symmetries could thus have effected
positive selection for projectibility and, in line with (2), enabled valid negative
selection for scope. Thus, for a sequence of comparable theories satisfying the CC, the
symmetric-structure of a theory could be an indicator of its comparative projectibility,
and hence of its comparative Tr vis-à-vis a posited final true goal theory (of our
universe at, or near, the Big Bang), satisfying the CC, and possessing the true
(original) symmetric-structure of physical reality. Truthlikeness (for a sequence of
comparable theories satisfying the CC) thus acquires the sense of symmetric-structure-likeness
in respect of the true symmetric-structure of the true theory - with the likeness
relation referring to likeness of common symmetric form (suggesting continuity), as
well as to likeness of uncommon symmetric content, i.e. of uncommon extent and kind
of symmetricity (not least because symmetricity is theory-context dependent),
suggesting a discontinuous stepwise approach to the true symmetric-structure and
thus to the true theory - with individual symmetries constituting components of the
truthlikeness of their embedding theory; importantly, comparative extents of
symmetricity are roughly discernible. Testable symmetries and asymmetries could
thus have empirically guided physics towards increasingly truthlike theories, given the
preference for the most successful, best corroborated, theories, within the setting of
the CC, and given that Tr can account for comparative success. But there is no longer
a need to resort to the projective step of inferring Tr from success.
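For orientation only, and without prejudging the precise sense in which HP figures in the CC (that sense is developed in sections B and C): in its textbook least-action form (cf. note 59), HP requires that the physically realized motion render the action stationary,

\[
S[q] \;=\; \int_{t_1}^{t_2} L\big(q(t), \dot q(t), t\big)\, dt, \qquad \delta S = 0;
\]

and, by Noether's theorem, every continuous symmetry of S - invariance under time translation, spatial translation, rotation, and so on - corresponds to a conserved quantity (energy, momentum, angular momentum). This standard connection indicates one route by which theory embedded symmetries can acquire the distinct empirical consequences appealed to above.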
The upshot of seeing the possible role of physical symmetries in this way is
that we may, unawares, have good rationales for the posit of the Tr of the
foundational theories (of at least inertial physics), and hence for their explanatory and
pragmatic applications (projections) within their respective domains. The rationales
stem from those parts of the corroborative successes of the theories, to do with their
embedded testable symmetries, interpreted here to be hypotheses about the
projectibility of their embedding theories across specific features of their domains.
Those corroborative successes could thus have projective imports for the relevant
theories, with respect to their limited domains. Rationales of this sort are essential for
an evolutionary epistemic process driven by rational means towards increasingly
truthlike hypotheses - in contrast to the bio-evolutionary process, which has no
rationales for its products, given that natural selection acts blindly, being oblivious of
any aim. A realist view of physical hypotheses other than the foundational theories,
and a realist view of the physical and biological sciences generally, is indicated by
their partial reducibility to the foundational theories of inertial physics, or to parts of
them; because such reducibility suggests that positive selection via valid means could
operate on them as well. Physicists may thus have inadvertently - as a consequence of
their preference for the best corroborated theory, within a given field, and within the
confines of the CC - found an internal resolution of the induction/apriorist problem, as
it relates to the physical and biological sciences; a resolution which could also be of
relevance to the model mediation problem in those sciences (Sect. I).
Among Hacking's observations one stands out (1983, p. 145): 'In physics there
is no final truth of the matter, only a barrage of more or less instructive
representations. ... Realism and anti-realism scurry about, trying to latch on to
something in the nature of representation that will vanquish the other. There is nothing
there'.60 This study suggests, however, that whilst there is indeed 'no final truth of the
matter', nonetheless, there may be something there for the realist case, in the attempted
deep representations of physics, but it is admittedly not something that can 'vanquish
the other'.
60
For a critique of Hacking's realist stance on entities, see Reiner and Pierson (1995).
References for Sect. A: Introduction
Andel, van, P. (1994) 'Anatomy of the Unsought Finding', BJPS 45, 631-648.
Arntzenius, F. (1995) 'A Heuristic for Conceptual Change', Phil. Sci. 62, 357-369.
Black, M. (1985) 'Making Intelligent Choices', Dialectica 39, 19-34.
Black, R. (1998) 'Chance, Credence, and the Principal Principle', BJPS 49, 371-385; esp. pp.
381-382; and Phil. Sci. 64, n. 2, (1997).
Bohr, N. (1934) Atomic Theory and the Description of Nature (Cambridge:
CUP)
Brandon, R.N. (1990) Adaptation and Environment (Princeton: Princeton University Press)
Brown, H.I. (1994) 'Reason, Judgement and Bayes's Law', Phil. Sci. 61, 351-369.
Brush, S.G. (1999) 'Dynamics of Theory Change in Chemistry: Part 1, The Benzene Problem
1865-1945', Stud. Hist. Phil. Sci., 30, 21-79.
Burke, T.E. (1988) 'Science as Conjecture and Refutation', in G.H.R. Parkinson (ed), An
Encyclopaedia of Philosophy (London: Routledge and Kegan Paul), pp. 205-224.
Butterfield, J. and Isham, C. (2001) 'Spacetime and the philosophical challenge of quantum
gravity', sect. 2.1.2., in C. Callender and N. Huggett (eds), Physics Meets Philosophy at the
Planck Scale (Cambridge: CUP).
Bynum, W.F., Browne, E.J., Porter, R (eds.), (1981) The History of Science (London: The
Macmillan Press)
Caneva, K.L. (2000) 'Possible Kuhns in the History of Science: Anomalies of
Incommensurable Paradigms', Stud. Hist. Phil. Sci., 31, 87-124.
Clarke, M. (1996) 'Darwinian Algorithms and Indexical Representation', Phil. Sci. 63, 27-48.
Cousins, R.D. (1995) 'Why isn't every physicist a Bayesian?', Am. J. Phys. 63, 398-410.
Culler, M. (1995) 'Beyond Bootstrapping: A New Account of Evidential Relevance', Phil.
Sci. 62, 561-579.
Culp, S. (1995) 'Objectivity In Experimental Enquiry: Breaking Data Technique Circles',
Phil. Sci. 62, 438-458.
Cushing, J.T. (1989) 'The Justification and Selection of Scientific Theories', Synthese 78, 1-24.
Cushing, J.T. (1994) Quantum Mechanics: Historical Contingency and the Copenhagen
Hegemony (Chicago: University of Chicago Press)
Cussens, J. (1996) 'Deduction, Induction and Probabilistic Support', Synthese 108, 1-10.
Dawkins, R. (2004) A Devil's Chaplain (London: Phoenix).
Dowe, P. (1995) 'Causality and Conserved Quantities: A Reply to Salmon', Phil. Sci. 62,
321-333.
Earman, J. ([1992] 1996) Bayes or Bust? A Critical Examination of Bayesian Confirmation
Theory (Cambridge, MA: MIT Press)
Einstein, A. (1905) 'On a Heuristic Point of View Concerning the Production and
Transformation of Light', Annalen der Physik 17, 132-148. English translation in The
Collected Papers of Albert Einstein, 2, (Princeton: PUP), 86-103.
Einstein, A. (1933) 'On the Method of Theoretical Physics', The Herbert Spencer lecture,
Oxford; Reprinted in A. Einstein, Ideas and Opinions (N.Y.: Crown Publishers, MCMLIV),
pp. 270-276.
Einstein, A. (1936) 'Physics and Reality', The Journal of the Franklin Institute, 221, 3,
reprinted in A. Einstein, Ideas and Opinions (N.Y.: Crown Publishers, MCMLIV), pp. 290-323.
Elby, A. (1994) 'Contentious Contents: For Inductive Probability', BJPS 45, 193-200.
Farmelo, G. (2002) 'A Revolution with No Revolutionaries' in G. Farmelo (ed), It Must be
Beautiful, Great Equations of Modern Science (London: Granta Books, 2002)
Fetzer, J.H. (1981) Scientific Knowledge (Dordrecht: D. Reidel)
Fetzer, J.H. (1998) 'Inference To The Best Explanation', preprint (A paper presented at a
meeting of the BSPS on 8.6.1998.)
Fraassen, van, B.C. (1980) The Scientific Image (Oxford: Clarendon Press)
Fraassen, van, B.C. (1985) 'Empiricism in the Philosophy of Science', in P.M.
Churchland and C.A. Hooker (eds), Images of Science (Chicago: University of
Chicago Press), pp. 245-308.
Fraassen, van, B.C. (2002) The Empirical Stance (New Haven:Yale U. P.)
Corfield, D. and Williamson, J. (eds) (2003), Foundations of Bayesianism (Dordrecht:
Kluwer Academic Publishers)
Franklin, A. (1986) The Neglect of Experiment (Cambridge: Cambridge University
Press)
Franklin, A. (1990) Experiment, Right or Wrong (Cambridge: Cambridge University
Press)
Franklin, A., et al., (1989) 'Can a Theory-Laden Observation Test the Theory?', BJPS
40, 229-231.
French, S. and Kamminga, H. (eds), (1993) Invariance and Heuristics: Essays in
Honour of Heinz Post, Boston Stud. Phil. Sci. 148 (Dordrecht: Kluwer)
Friedman, M. (1998) 'On the Sociology of Scientific Knowledge and its Philosophical
Agenda', Stud. Hist. Phil. Sci., 29, 239-271.
Gemes, K. (1997) 'Inductive Skepticism and the Probability Calculus I: Popper and
Jeffreys on Induction and the Probability of Law-Like Universal Generalisations',
Phil. Sci. 64, 113-130.
Giere, R.N. (1995) 'The Skeptical Perspective: Science without Laws of Nature', in F.
Weinert (ed), Laws of Nature (Berlin: Walter de Gruyter), pp. 120-138.
Giere, R.N. (1996) 'Scientific Inference: Two Points of View', in Proceedings of the
1996 Biennial Meeting of the Philosophy of Science Association, Part II, edited by L.
Darden, pp. S180-S184.
Gillies, D. (1990) 'Bayesianism versus Falsificationism', Ratio III, 82-98.
Gillies, D. (1996) Artificial Intelligence and Scientific Method (Oxford: Clarendon
Press).
Glymour, C. (1980) Theory and Evidence (Princeton: Princeton University Press).
Haack, S. (1991) 'What is "the problem of the Empirical Basis", and Does Johnny
Wideawake Solve It?', BJPS 42, 369-389.
Hacking, I. (1983) Representing and Intervening (Cambridge: Cambridge University
Press)
Hardcastle, V.G. (1994) 'The Image of Observables', BJPS 45, 585-597.
Hitchcock, C.R. (1995) 'Salmon on Explanatory Relevance', Phil. Sci. 62, 304-320.
Horwich, P. (ed), (1993) World Changes: T. Kuhn and the Nature of Science
(Cambridge: MIT Press)
Howson, C. (ed), (1976) Method and Appraisal in the Physical Sciences (Cambridge:
Cambridge University Press)
Howson, C. (1988) 'Accommodation, Prediction and Bayesian Confirmation Theory',
PSA 2, 381-392.
Howson, C. and Urbach, P. ([1989], 1993) Scientific Reasoning, The Bayesian
Approach (Chicago: Open Court)
Howson, C. (2000) Hume's Problem: Induction and the Justification of Belief (Oxford: OUP)
Hughes, R.I.G. ([1989] 1992) The Structure and Interpretation of Quantum
Mechanics (Cambridge, MA: Harvard University Press).
Hume, D. (1975 [1748/1751]) Enquiries Concerning Human Understanding and
Concerning the Principles of Morals (Oxford: Clarendon Press).
Hume, D. (1739) A Treatise of Human Nature.
Hume, D. (1748) Enquiry Concerning Human Understanding.
Jeffreys, H. (1961) Theory of Probability (Oxford: Clarendon Press)
Joyce, J.M. (1998) 'A Nonpragmatic Vindication of Probabilism', Phil. Sci. 65, 575-603.
Juhl, C. (1993) 'Bayesianism and Reliable Scientific Inquiry', Phil. Sci. 60, 302-319.
Kaplan, M. (1989) 'Bayesianism Without The Black Box', Phil. Sci. 56, 48-69.
Klein, M.J. (1967) 'Thermodynamics in Einstein's Thought', Science 157, 509-516.
Korb, K.B. and Wallace, C.S. (1997) 'In Search of the Philosopher's Stone: Remarks on
Humphreys and Freedman's Critique of Causal Discovery', BJPS 48, 543-553.
Kuhn, T.S. ([1962] 1970) The Structure of Scientific Revolutions (Chicago: University of
Chicago Press)
Ladyman, J. (2000) 'What's Really Wrong with Constructive Empiricism? Van Fraassen
and the Metaphysics of Modality', BJPS 51, 837-856.
Landsberg, P.T. and Wise, J. (1988) 'Components of Probabilistic Support: The Two
Proposition Case', Phil. Sci. 55, 402-414.
Lange, M. (2001) 'The Apparent Superiority of Prediction to Accommodation as a Side
Effect: a Reply to Maher', BJPS 52, 575-588.
Lautrup, B. and Zinkernagel, H. (1999) 'g-2 and the Trust in Experimental Results', Stud.
Hist. Phil. Mod. Phys. 30, 85-110.
Laymon, R. (1978) 'Review of C. Howson (ed), Method and Appraisal in the Physical
Sciences (Cambridge: CUP)', Phil. Sci. 45, 318-322.
Leggett, A.J. (1992) 'On the Nature of Research in Condensed-State Physics', Foundations
of Physics 22, 221-233.
Levin, M. (1997) 'Putnam on Reference and Constructible Sets', BJPS, 48, 55-67.
Lipton, P. (1990) 'Prediction and prejudice', Intl. Stud. Phil. Sci. 4, 51-65.
Magnus, P.D. (2003) 'Success, Truth and the Galilean Strategy', BJPS 54, 465-474.
Meehl, P.E. (1983) 'Consistency Tests in Estimating the Completeness of the Fossil
Record: A Neo-Popperian Approach to Statistical Paleontology', in J. Earman (ed),
Testing Scientific Theories, Minn. Stud. in the Phil of Sci X, (Minneapolis: University of
Minnesota Press), pp. 413-473.
Miller, A. I. ([1984] 1986) Imagery in Scientific Thought (Cambridge, MA.: MIT Press)
Miller, A.I. (1991a) 'Imagery and meaning, the cognitive science connection', Intl. Stud.
Phil. Sci. 5, 35-48.
Miller, A.I. (1991b) 'Have incommensurability and causal theory of reference anything to
do with actual science? - incommensurability, no; causal theory, yes', Intl. Stud. Phil. Sci.
5, 97-108.
Miller, D. (1994) Critical Rationalism (Chicago: Open Court)
Newman, D.V. (1996) 'Emergence and Strange Attractors', Phil. Sci. 63, 245-261.
O'Hear, A. ([1980] 1982) Karl Popper (London: Routledge)
O'Hear, A. (1994) 'Knowledge in an evolutionary context', Intl. Stud. Phil. Sci. 8,125-138.
Pinker, S. (1994) The Language Instinct (London: Penguin)
Popper, K.R. ([1957] 1976) The Poverty of Historicism (London: Routledge & Kegan Paul)
Popper, K.R. ([1959] 1977) The Logic of Scientific Discovery (London: Hutchinson &
Co.)
Popper, K.R. ([1963] 1972) Conjectures and Refutations (London: Routledge and Kegan Paul)
Popper, K.R. ([1972] 1981a) Objective Knowledge, An Evolutionary Approach
(Oxford: Clarendon Press)
Popper, K.R. (1978) 'Natural Selection and the Emergence of Mind', Dialectica 32, 339-355.
Popper, K. (1981b) 'The Rationality of Scientific Revolutions', in I. Hacking (ed),
Scientific Revolutions (Oxford: OUP)
Popper, K.R. (1982) The Open Universe, An Argument for Indeterminism (Totowa N.J.:
Rowman and Littlefield)
Popper, K.R. (1983) Realism and the Aim of Science (London: Hutchinson)
Popper, K.R. (1984) 'Evolutionary Epistemology', in J.W. Pollard (ed), Evolutionary
Theory: Paths into the Future (London: John Wiley), pp. 239-255.
Popper, K.R. (1994) 'The Myth of the Framework', in M.A. Notturno (ed), The Myth
of the Framework (London: Routledge), pp.33-64.
Popper, K.R. and Eccles, J.C. (1977) The Self and Its Brain, An Argument for
Interactionism (Berlin: Springer)
Post, H. (1971) 'Correspondence, Invariance and Heuristics: In Praise of Conservative
Induction', Stud. Hist. Phil. Sci., 2, 213-255.
Psillos, S. (1996) 'Scientific Realism and the "Pessimistic Induction"', Phil. Sci. 63
(Proceedings) S306-S314.
Putnam, H. (1990) Realism with a Human Face (Cambridge, MA: Harvard University
Press)
Quine, W.V. (1981) Theories and Things (Cambridge, MA: Harvard University Press)
Redhead, M.L.G. (1975) 'Symmetry in Intertheory Relations', Synthese 32, 77-112.
Redhead, M. (1985) 'On The Impossibility of Inductive Probability', BJPS 36,185-191.
Redhead, M. (1995) From Physics to Metaphysics (Cambridge: Cambridge University
Press)
Reiner, R. and Pierson, R. (1995) 'Hacking's Experimental Realism: An Untenable
Middle Ground', Phil. Sci. 62, 60-69.
Rivadulla, A. (1994) 'Probabilistic Support, Probabilistic Induction and Bayesian
Confirmation Theory', BJPS 45, 477-483.
Rohrlich, F. (1996a) 'The Unreasonable Effectiveness of Physical Intuition: Success
While Ignoring Objections', Found. of Phys. 26, 1617-1626.
Rohrlich, F. (1996b) 'Scientific Realism: A Challenge to Physicists', Found. of Phys.
26, 443-451.
Rosenberg, A. (1996) 'A Field Guide to Recent Species of Naturalism', BJPS 47, 1-29.
Salmon, W.C. (1988) 'Rational Prediction', in A. Grünbaum and W.C. Salmon (eds),
The Limitations of Deductivism (Berkeley: University of California Press)
Sankey, H. (1993) 'Kuhn's Changing Concept of Incommensurability', BJPS 44, 759-774.
Savage, L.J. (1967) 'Difficulties in the Theory of Personal Probability', Phil. Sci. 34,
305-310.
Scerri, E. R. (1999) 'Response to Needham', Intl. Stud. Phil. Sci., 13, 185-192.
Scerri, E.R. (1998) 'Popper's naturalized approach to the reduction of chemistry', Int.
Stud. Phil. Sci. 12, 33-44.
Schilpp, P.A. (ed), ([1949] 1969) Albert Einstein: Philosopher Scientist I (La Salle,
Ill.: Open Court).
Schilpp, P.A. (ed), (1974) The Philosophy of Karl Popper (La Salle: Open Court, Part
II), pp. 1063-1064.
Schwartz, J. (1962) 'The Pernicious Influence of Mathematics on Science', in E.
Nagel, P. Suppes and A. Tarski (eds.), Logic, Methodology and Philosophy of Science
(Stanford: Stanford University Press), pp. 356-360.
Shimony, A. (1967) 'Amplifying Personal Probability Theory', Phil. Sci. 34, 326-332.
Shimony, A. (1993a) 'Reality, causality, and closing the circle', in his Search for a
Naturalistic World View I (Cambridge: CUP), pp. 21-61.
Shimony, A. (1993b) 'Physical and philosophical issues in the Bohr-Einstein debate',
in his Search for a Naturalistic World View II, (Cambridge: CUP), pp.171-187.
Shimony, A. (1993c) 'Scientific inference' and 'Reconsiderations of inductive
inference', in his Search for a Naturalistic World View I (Cambridge: CUP), pp. 183-300.
Shimony, A. (1993d) 'On Martin Eger's "A tale of two controversies"', in his Search
for a Naturalistic World View I (Cambridge: CUP), pp.321-328.
Shimony, A. (1993e) 'Toward a revision of the protophysics of time', in his Search for
a Naturalistic World View, II (Cambridge: CUP), pp. 255-270.
Shimony, A. (1993f) 'Perception from an evolutionary point of view', in his Search for
a Naturalistic World View, I (Cambridge: CUP), pp. 79-91.
Shimony, A. (1993g) 'The non-existence of a principle of natural selection', in his
Search for a Naturalistic World View, II (Cambridge: CUP), pp. 228-247.
Shimony, A. (1993h) 'Is observation theory-laden? A problem in naturalistic
epistemology', in his Search for a Naturalistic World View, I (Cambridge: CUP), pp.
92-116.
Shimony, A. (1993i) 'Comments on two epistemological theses of Thomas Kuhn', in
his Search for a Naturalistic World View, I (Cambridge: CUP), pp. 301-318.
Siegel, H. (1980) 'Justification, Discovery and the Naturalizing of Epistemology', Phil.
Sci. 47, 297-321.
Simon, H.A. (1992) 'Scientific Discovery as Problem Solving', Intl. Stud. Phil. Sci. 6.
Smithurst, M. (1995) 'Popper and the Scepticism of Evolutionary Epistemology, or,
What Were Human Beings Made For?', in A. O'Hear (ed) (1995), Karl Popper:
Philosophy and Problems, (Cambridge: Cambridge University Press), pp.207-223.
Sober, E. (1981) 'The Evolution of Rationality', Synthese 46, 95-120.
Strawson, P.F. (1958) 'On Justifying Induction', Philosophical Studies IX, pp. 20-21.
Urbach, P. (1978) 'Is Any of Popper's Arguments Against Historicism Valid?', BJPS
29, 117-130.
Urbach, P. (1981) 'On the Utility of Repeating the "Same" Experiment', Australasian
Journal of Philosophy 59, 151-162.
Urbach, P. (1991) 'Bayesian Methodology: Some Criticisms Answered', Ratio IV, 170-184.
Watkins, J. (1984) Science and Scepticism (Princeton: Princeton University Press)
Watkins, J. (1991) 'Scientific Rationality and the Problem of Induction: Responses to
Criticisms', BJPS 42, 343-368.
Watkins, J. (1995a) 'What Remains of Critical Rationalism when the Rationalism is
Dropped', Review of D. Miller (1994) Critical Rationalism (Chicago: Open Court),
BJPS 46, 610-616.
Watkins, J. (1995b) 'Popper and Darwinism', in A. O'Hear (ed) (1995), Karl Popper:
Philosophy and Problems, (Cambridge: Cambridge University Press), pp.191-206.
Watkins, J. (1996) 'Popperian Ideas on Progress and Rationality in Science', American
Philosophical Association, Eastern Division Meeting.
Wayne, A. (1995) 'Bayesianism and Diverse Evidence', Phil. Sci. 63, 111-121.
Weatherson, B. (1999) 'Begging the Question and Bayesians', Stud. Hist. Phil. Sci., 30,
687-697.
Kruse, M. (1999) 'Beyond Bayesianism: Comments on Hellman's "Bayes and
Beyond"', Phil. Sci., 66, 165-174.
Weinberg, S. (1987) 'Newtonianism, reductionism and the art of congressional
testimony', Nature 330, 433-437.
Weinberg, S. (1992) Dreams of a Final Theory (N.Y.: Pantheon Books)
Weinert, F. (1995) 'The Duhem-Quine thesis revisited', Intl. Stud. Phil. Sci. 9, 147-156.
Weinert, F. (1995) 'Wrong Theory - Right Experiment: The Significance of the
Stern-Gerlach Experiments', Stud. Hist. Phil. Mod. Phys. 26, 75-86.
Woodward, J.F. (1992) 'Logic of Discovery or Psychology of Invention?',
Foundations of Physics 22, 187-203.
Worrall, J. (1982) 'Broken Bootstraps', Erkenntnis 18, 105-130.
Zahar, E. (1983) 'Logic of Discovery or Psychology of Invention', BJPS 34, 243-261.