
The “No Miracles Argument” and the “Base Rate Fallacy”1
In their (2004), Magnus and Callender introduce the distinction between what they
call “retail” arguments for scientific realism, that is, arguments about specific kinds of
things such as neutrinos, and “wholesale” arguments, that is, arguments about all or most
of the entities posited in our best scientific theories (321). They conclude that wholesale
arguments are hopeless – that “the great hope for realism and anti-realism lies in retail
arguments that attend to the details of particular cases.” (336) For the most part they
consider the so-called “no miracles argument” and proceed by taking themselves to show
that such an argument commits what is known as the “base rate fallacy”. Here I will
agree with their pessimistic assessment of prospects for wholesale arguments but urge
that the base rate fallacy functions in this discussion only to force a charitable restatement
of the no miracles argument over its familiar statement. Since the “base rate fallacy”
appears to have attracted some attention as a knock-down case against the “no miracles
argument”, it will be worthwhile to get a clearer view of the relation between these two
kinds of considerations.
According to the no miracles argument it would be a miracle to get all the
astonishingly accurate and unexpected predictions that we do from our successful
theories if the theories weren’t true. So we are supposed to conclude that they are true, or
at least approximately true. In particular we are supposed to conclude that the things to
which such theories purport to refer must be taken to exist.
We don’t need any fancy statistics to see the alleged problem with this argument.
Consider the following exchange:
Realist: It would be a miracle to have the occurrence of all the predictions of the
atomic theory tabulated by Perrin if there were no atoms. So we must conclude that there
are atoms and that their properties are pretty accurately described by the atomic theory.
(19th century) Anti-realist: Frankly, the whole idea that there should be atoms,
things so fabulously small that they could not possibly be seen, itself is just astonishing. I
think it would be a greater miracle if there were atoms than the miracle of nature
conspiring to produce these striking regularities without the mechanism of atoms.
Choose your own poison. I’ll take the data without atoms as the less painful horn of this
dilemma.
Does this response show that the no miracles argument is a fallacy? Well,
perhaps, as originally stated. But with just a smidgen of charity we can take the intended
form of argument to have been all along: If there is evidence that can be explained or
predicted on the basis of a theory but is otherwise very surprising, and if it would be less
surprising for the theory to be true, then conclude that the theory is true.2
[Footnote 1: Thanks to Craig Callender, Branden Fitelson, and Peter Lipton for helpful comments.]
The foregoing exchange between the realist and antirealist is a qualitative
rendering of the appeal to the “base rate fallacy” and the last paragraph gives the
appropriate response. Here is the way it plays out with the statistics. (I follow the
example used by Magnus and Callender.)
Suppose there is a disease with a frequency in the population of .001, and a test
for the disease that is certain to be positive if the subject has the disease but also exhibits
a false positive rate of .05. If ‘h’ stands for the hypothesis that the person to whom the
test is administered has the disease and ‘e’ for the result that the test is positive, we have
P(h) = .001, P(e/h) = 1 and P(e/~h) = .05. Question: If someone tests positive, what is the
probability that they have the disease? Apparently a lot of smart people will say (around)
.95. But if 1000 people are tested, on average 1 will test positive because they have the
disease and 50 will yield a false positive. So the rate of people who have the disease
among those who test positive is very close to .02.
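The arithmetic here can be checked directly with Bayes' theorem. A minimal sketch in Python (the variable names are mine, not from the text):

```python
# Base-rate example: disease frequency .001, certain positive if diseased,
# false-positive rate .05.
p_h = 0.001             # P(h): prior frequency of the disease
p_e_given_h = 1.0       # P(e/h): test is certain to be positive given the disease
p_e_given_not_h = 0.05  # P(e/~h): false-positive rate

# Total probability of a positive test, then Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # prints 0.02, not the intuitive .95
```

This is just the 1-in-51 counting argument from the text restated: one true positive against roughly fifty false positives per thousand people tested.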
It is claimed, now, that the same mistake is being made in the no miracles
argument. Let’s continue to work with the example of the “atomic hypothesis”. Perrin
tabulated an astonishing catalogue of verified predictions, e, of the atomic hypothesis, h.
P(e/h) = 1, we may suppose. P(e/~h) on anybody’s story is going to be very low. It
would be a “miracle” to have e if h were false. So we are supposed to conclude that h is
true. But this case has the same form as the disease example. One’s intuition is that the
evidence makes it likely that the subject has the disease/likely that the atomic hypothesis
is true. In both cases the evidence does raise the probability of the truth of the
hypothesis. But in the disease case this probability is still not high. So no reason has
been given for concluding that the evidence makes the atomic hypothesis likely to be
true.
Let’s look at the structure of these arguments in a little more detail. Talk of “it
would be a miracle if” is a metaphor that we can express more flexibly with “it
would be extremely surprising if”, an attitude that in turn can be described with a
probability function: 1/P(e) can be used as a measure of how surprising it would seem if
e turned out to be true.
Reexpression with a probability measure facilitates comparison with the medical
example. ‘e’ describes a positive test result in the medical example and the successful
evidence in the case of a scientific theory or theories. In the medical example ‘h’ names
the hypothesis that the person tested has the disease. In the case of science, if we are
concerned with a retail argument, ‘h’ names some specific scientific hypothesis or theory
– we can think of the “atomic hypothesis” as a concrete example; or, if we are thinking
about the wholesale version of the argument, h will refer collectively to our “mature”
theories. In both cases we take P(e/h) to be very high – we can think of it as P(e/h) = 1.
In both cases we also take P(e/~h) to be low.
[Footnote 2: I’ll leave it to the scholars to inspect the copious literature on the no miracles argument to determine whether or not this was really the intended argument all along.]
The fallacious thinking in the medical case goes like this. If h, e is very likely – it
would not be at all surprising. If ~h, e is unlikely – it would be very surprising. If ~h, e
would be, well maybe not a miracle, but not what one would have expected. So conclude
h. This is an instance of a rule of inference that appeals to what are called likelihoods,
the numbers P(e/h) and P(e/~h): If the likelihood ratio P(e/h)/P(e/~h) is high, conclude h.
In the medical case it is easy to see how to rectify the situation, for it is easy to see
that in every case if P(e/h) and P(~h) are both close to 1, P(h/e) is close to 1 iff P(e/~h)
<< P(h). In the medical case P(e/~h) > P(h), so P(h/e) is not close to 1. Earlier I
suggested that this condition, or a qualitative analog, can be used to shore up the statement
of the no miracles argument: If one is discussing an antecedently unlikely theory (low P(h)),
and if one has evidence that is to be expected on the basis of the theory (high P(e/h)) but
exceedingly surprising otherwise (very low P(e/~h)), and if in addition the degree of
surprise assigned to the evidence on the assumption that the theory is false is much
greater than the degree of surprise associated with the truth of the theory (P(e/~h) <<
P(h)), conclude that the theory is true, or that it merits a high degree of belief. Indeed,
once stated, I don’t think that excess charity is required to suppose that this last condition
was tacitly intended all along by advocates of the no miracles argument.
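The biconditional above – with P(e/h) and P(~h) both close to 1, P(h/e) is close to 1 iff P(e/~h) << P(h) – is easy to verify numerically. A small sketch, using the medical case's numbers and a second set of hypothetical values of my own choosing:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(h/e) computed by Bayes' theorem from the prior and the two likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Medical case: P(e/~h) = .05 > P(h) = .001, so the posterior stays far from 1.
print(posterior(0.001, 1.0, 0.05))     # ~0.02

# When instead P(e/~h) << P(h), the posterior approaches 1.
print(posterior(0.001, 1.0, 0.00001))  # ~0.99
```

The high likelihood ratio P(e/h)/P(e/~h) is the same in spirit in both calls; what differs is how P(e/~h) compares to the base rate P(h), and that comparison is what settles the posterior.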
On this evaluation, it would hardly seem that the no miracles argument is an
instance of the base rate fallacy, though comparison to cases in which the fallacy is
committed shows us how better to state the argument. But once we recognize this
refinement in statement we also see a further potential objection: The refined statement
makes clear that the no miracles argument must appeal to P(h), the “base rate”, better
known as the “prior probability” of the hypothesis, theory, or theories under
consideration. How is this prior probability to be known? Magnus and Callender argue
briefly, but cogently, that the case is hopeless for the no miracles argument on any
interpretation of P that requires the use of statistics to justify the prior, P(h). What is the
relevant reference class of hypotheses? How are the hypotheses to be individuated?
Even if such questions could be answered, the statistics could be done only if we had
some independent way of determining whether individual theories get into the reference
class, that is whether they are true, likely to be true, or worthy of belief; and then the no
miracles argument would be superfluous. (328). On any interpretation of P on which
statistics would be required to justify the prior, any version of the no miracles argument
presented as a wholesale argument will be hopeless. What I don’t think Magnus and
Callender make quite clear is that this conclusion holds just as well if the argument is
applied retail. If we must rely on counting theories to determine the prior for the atomic
hypothesis, what will the relevant reference class be? And how will we determine
whether or not a theory qualifies by way of truth or high confirmation or failure thereof?
So if there is any way for this kind of argument to get a purchase, the priors will
have to be interpreted in the tradition of subjective degrees of belief, as a representation
of the agent’s unjustified epistemic inclinations before the evidence is brought to bear, an
option that Magnus and Callender acknowledge. (328) Many still reject any such appeal:
“Crazy” priors will lead to crazy conclusions, so conclusions that are based on unjustified
priors can’t, it is held, count as rational. But in this post-foundationalist age most have
acknowledged that epistemic arguments have to start from substantive unjustified
premises. One must, at some stage, have unjustified prior opinions about how the
evidence is to be brought to bear. Priors interpreted as subjectively held prior opinions
are the Bayesian subjectivists’ way of incorporating this conclusion.
On the original formulation of the no miracles argument there was no apparent
appeal to priors. So, it would seem we must conclude, comparison with the base rate
fallacy cases has forced a restatement on which we can see, in this regard, that the no
miracles argument is weaker than it had previously appeared. I don’t think that such an
evaluation is right. In the medical case we suppose that we have an objectively
determined value for P(e/~h), the rate of false positives. But how is the value of P(e/~h)
to be determined in the case of, say, the atomic hypothesis, in the original formulation of
the no miracles argument? What is the “correct” surprise value, degree of belief, or
probability of the evidence on the supposition just that the atomic hypothesis is false? In
the case where h is a scientific hypothesis, theory, or range of theories, this value is as elusive
as the prior, P(h), just as much a quantity underwritten only by the agent’s unjustified
prior attitudes. This is the problem known among Bayesians as the problem of the
“catchall” hypothesis, the problem of one’s attitude towards the evidence on the
assumption that all of the explicitly proposed ways of accounting for, predicting, or
explaining the evidence should be false. The no miracles argument, as originally stated,
had to appeal, in this respect, to unjustified prior opinion. The need to appeal to
unjustified prior opinion was really an element in the argument pattern all along.
I conclude that the appeal to the base rate fallacy plays the limited role of forcing
us to clarify the statement of the no miracles argument. Either one rejects a pattern of
argument for appealing to unjustified prior attitudes, or one does not. If the former, one will reject the
no miracles argument outright for its appeal to P(e/~h), or the qualitative counterpart, the
prior opinion that without the truth of the theory under test the predicted evidence “would
be a miracle”. And if the argument pattern is rejected outright, there is no need to appeal
to the considerations of the base rate fallacy. Or, if unjustified prior attitudes are
admitted, then in particular, appeal to the unjustified prior, P(h), is admissible. On this
option consideration of the base rate fallacy has performed the useful work of showing
that a proper statement of the no miracles argument requires appeal to P(h) as well as to
P(e/~h), but not that it counts as a fallacy.
For reasons Magnus and Callender give, when viewed as a statistical argument,
whether retail or wholesale, the no miracles argument is a howler. But not by running
afoul of the base rate fallacy. If unjustified priors are admitted, P(e/~h) and P(h) equally,
none of the problems so far discussed apply. However, as Magnus and Callender briefly
mention, on a subjectivist approach the wholesale/retail distinction can make an
important difference: “In the present wholesale case, however, where the entire fate of
realism or anti-realism seems bound up with the priors, we can’t imagine how one could
find a reasonable set of priors.” (329). Strictly speaking, the question of unjustified
priors is a private matter between agents and their subjective attitudes. But when the
subject is all of “mature” science, and the question is the extent to which its predictions
(most of which otherwise would never have come to light) would have been miraculous
if all, or most, or much of mature science were not true, most of us will see restricting
appeal to our subjective attitudes to concrete, retail cases as a much sounder policy.