A Poisoned Dart for Conditionals
Alan Hájek
[NOTE TO THE READER: SINCE THIS IS FAIRLY LONG, I HAVE PUT IN
LIGHTER FONT SOME PASSAGES THAT COULD BE SKIPPED IF YOU
ARE SHORT OF TIME OR PATIENCE.]
1. Setup
Philosophers of probability are fond of throwing darts at representations of the
[0, 1] interval of the real line, and I’m no exception. Here comes my throw. As
usual, my dart has an infinitely thin tip, its landing point is a real number in the
interval, and my throw will be fair, not privileging any part of the interval over any
other: the distribution over the possible landing points is uniform. I will present
two paradoxes for the interaction of probabilities and conditionals that arise out of
this scenario, and I will offer solutions to them that I hope will illuminate various
aspects of indicative conditionals.
Here is a proposition concerning the landing point:
L (for “left”): [0, ½]
Read this as: ‘the dart lands in the closed left-half of the interval’. (I will move
back and forth between set-theoretic and sentential formulations of the
propositions that I discuss, and often I will just use plain English, aiming to make
the presentation as perspicuous as possible. I will be discussing some complicated
formulas, so I want to simplify my notation as much as I can.) The probability that
you assign to L has to be ½, given the setup. There is no room for rational
disagreement here. Or at least, there is only infinitesimal room for disagreement.
You may say that the probability that you assign to L may differ from ½ by an
infinitesimal—for example, its endpoints may contribute infinitesimal amounts to
its probability. But we may safely ignore such finessing; the paradoxes will turn
on far greater quantities than those. So I will assume that the probability is 1/2, but
feel free to hear that as ‘within an infinitesimal of ½’.
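The claim that a fair throw makes P(L) = ½ can be checked with a quick Monte Carlo sketch (my illustration, not the paper's; the simulated dart uses a uniform draw on [0, 1), which agrees with the fair throw on [0, 1] up to a probability-0 difference):

```python
import random

def throw_dart():
    """One fair throw: a uniform real in [0, 1) -- differing from the
    paper's [0, 1] only on a probability-0 endpoint."""
    return random.random()

def estimate_P_L(trials=100_000):
    """Estimate P(L): the chance of landing in the closed left-half [0, 1/2]."""
    hits = sum(1 for _ in range(trials) if throw_dart() <= 0.5)
    return hits / trials

random.seed(0)
p_L = estimate_P_L()   # close to 0.5; the single point 1/2 contributes nothing
```

Whether the endpoint ½ is counted in or out makes no detectable difference, matching the point above that any disagreement about P(L) is at most infinitesimal.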
Here is another proposition concerning the landing point:
C (for “conditional”): [½, 1] → ½
Read this as: ‘if the dart lands in the closed right-half of the interval, then it lands
exactly on the point ½’. There are uncountably many possible landing points of the
dart in the closed right-half of the interval, and the point ½ is just one of them (the
left-hand edge of the interval).
What probability do you assign to C? I am asking for your credence in a
conditional. You may wonder what kind of conditional I have in mind. Answer: it
is the indicative ‘if … then’ of English. Beyond that, you are on your own. As
much as you can, try to consult your intuitions without assuming a theory of the
indicative conditional. The theory can come later, if need be. Just as such a theory
is answerable to our intuitions about the truth-values of conditionals, so it is
answerable to our intuitions about their probabilities, when we have them. Let me
record my intuition: I assign C probability 0.
Notice that the proposition [½, 1] is equivalent to the proposition ¬[0, ½), given
the set-up. The dart’s landing in the closed right-half just is the dart’s not landing
in the half-open left-half.[1] So C is equivalent to:
C': ¬[0, ½) → ½.
[1] More precisely: in the relevant probability space, Ω = [0, 1], so [½, 1] = Ω − [0, ½).
I could just as well have asked for your credence in this conditional.
The two paradoxes that I will present concern how L, C, and their probabilities
relate to each other. They will add different claims about how L and C are
inferentially related. I am fond of paradoxes—they can often stimulate
philosophical progress or understanding. Resolving paradoxes can force one to
clarify concepts, to draw distinctions, and to question patterns of inference that
one previously took for granted. I hope that my discussion of various ways of
solving my paradoxes will shed some light on the semantics of the indicative
conditional. As well as aiming my dart at the [0, 1] interval, I will target the
material conditional analysis, the ‘Or-to-If’ inference, two ‘Export’ principles for
iterated conditionals, and McGee’s ‘counterexample to modus ponens’. I will trace
their downfall to a common source. So one of my goals is to unify a number of
seemingly disparate phenomena.
2. First paradox: L entails C
Do you think that L entails C? That is, do you think that the following holds?
[0, ½] entails [½, 1] → ½
You may find it easier to answer the equivalent question: do you think that L
entails C'?
[0, ½] entails ¬[0, ½) → ½
In words: does the dart’s landing in the left-half including the edge ½ entail that if
it does not land in the left-half excluding the edge, then it lands exactly on the
edge? Think of it this way: the interval [0, ½] is [0, ½)  {½}, the union of the
half-open interval and the singleton of the final point, which partition it. Does the
dart landing in [0, ½] entail that if the dart does not land in the first member of the
partition, then it lands in the second? That is, does L entail C’?
Or think of it another way. The dart throw can have a certain property: landing
in L. There are two ways this property can be realized, which exhaust the
possibilities: the [0, ½) way, and the ½ way. Start with the premise that the
property is realized. Infer that if the property is not realized the first way, it is
realized the second way—that’s the upshot of C’. Does that inference seem valid?
Let me now report my initial intuitions, for what they are worth. Offhand, it
seemed to me that L does entail C. And I intuited, and still do, that the probability
of C is 0. But something is rotten in the state of these intuitions. For the
probability of L is ½, and probability cannot decrease through entailment.
Something with probability ½ cannot entail something with probability 0. So
either my intuition that the probability of C is 0, or my intuition that L entails C, is
mistaken. (Perhaps both are.)
But I’m not the only one with a mistaken intuition. I have surveyed quite a few
people, and almost everyone has shared my intuitions, at least initially. So I submit
that we have a paradox here, stemming from the following inconsistent quartet of
premises:
(Half) P(L) = ½.
(Zero) P(C) = 0.
(Entailment) L entails C.
(Non-decrease) Probability cannot decrease through entailment.
I hope that the four claims of the paradox already seem plausible after this first
pass. Let’s try to sharpen the paradox by adding support to the claims that perhaps
need it.
(Half) P(L) = ½
This is non-negotiable—or as I have said, negotiable only up to an infinitesimal,
which will make no difference to the paradox—so I will not negotiate it. One has
to be a ‘halfer’ about L.
(Zero) P(C) = 0
Perhaps this is more up for grabs, although I find it hard to deny it. The
antecedent of C, [½, 1], corresponds to a set with uncountably many points, while
the consequent ½ corresponds to a set with just one point. And since the
probability measure is uniform, each point in [½, 1] must receive the same
treatment. Then for all x in [½, 1], all conditionals of the form
[½, 1] → x
should receive the same probability. Plausibly, that probability is 0. The particular
landing points are mutually exclusive. Now assume the widely held principle that
conditionals with the same consistent antecedent and incompatible consequents are
incompatible. That is, if P is consistent and Q and R are incompatible, then P → Q
and P → R are incompatible. (The thought is that these conditionals cannot both
be true, for they give conflicting information about what happens when P is the
case: the first tells us that Q is the case, the second tells us that R is the case, but
they cannot both be the case.) So all pairs of conditionals [½, 1] → x and
[½, 1] → y, where x ≠ y, are incompatible. Yet collectively their probabilities cannot exceed
1 (much as collectively the probabilities of the individual landing points cannot
exceed 1). This implies that the probability of each such conditional is 0 (much as
the probability of each landing point is 0).
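A finite analogue of this squeeze can be sketched in Python (my illustration, not the paper's): with n symmetrically treated, pairwise incompatible conditionals sharing at most total probability 1, each can receive at most 1/n, and that ceiling shrinks to 0 as n grows:

```python
from fractions import Fraction

def per_conditional_ceiling(n):
    """n pairwise incompatible, symmetrically treated conditionals must
    share at most total probability 1, so each gets at most 1/n."""
    return Fraction(1, n)

# The ceiling vanishes as the number of candidate landing points grows --
# the finite shadow of the argument that each [1/2, 1] -> x gets probability 0.
ceilings = [per_conditional_ceiling(n) for n in (10, 10**3, 10**6)]
```

In the uncountable case the same symmetry-plus-additivity reasoning forces each conditional to probability 0 (or an infinitesimal).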
[Frequentist justification.]
This is seemingly the case according to Stalnaker’s (1968) semantics for the
conditional. Choose one of these conditionals—say, [½, 1] → 2/3. This is true at a
world w just in case the closest (overall most similar) world to w in which the dart
lands in [½, 1] is a world in which it lands exactly on 2/3. In which worlds is this
the case? Consider first the worlds in which the antecedent is true. Among these,
the conditional is true only at the 2/3-world—the rest yield a true antecedent and
false consequent. Now consider a world in which the antecedent is false—say, the
1/17-world. All the [½, 1]-worlds seem to be equidistant from it. (The fact that ½
is the closest point in the interval according to Euclidean distance is surely beside
the point, since there is no reason to think that this matters to overall similarity of
worlds. Worse, it is absurd to think that it’s true at the 1/17-world that if the dart
had landed in [½, 1], it would have landed exactly on ½, as a similarity ordering
based on Euclidean distance would predict.) We could then supervaluate over all
the arbitrary ways of selecting one of the [½, 1]-worlds to be designated ‘the
closest’ world to the 1/17-world. On exactly one of these ways is the 2/3-world
selected, so in exactly one of these ways is [½, 1] → 2/3 rendered true. So the
conditional is not supertrue (far from it!), hence not true simpliciter.
Now, at a first pass, probability is probability of truth. So the probability of this
conditional is the probability of the worlds at which it is true, which is the
probability of just the 2/3-world. That is 0. So it goes for every other conditional
of the form [½, 1] → x for x ∈ [½, 1]. In particular, P([½, 1] → ½) = 0.
At a second pass, we may think that where there are many equally good
selection functions, the conditional is indeterminate, and its probability should be
regarded as its expected truth value. Think of the conditional as a random variable,
and the probability of the conditional as the expectation of this random variable, à
la de Finetti (19xx), and Jeffrey and Stalnaker (1994). The conditional p → q is a
random variable that takes value 1 at worlds where p and q are both true, 0 at
worlds where p is true and q false, and P(q | p) at worlds where p is false. So
thinking of C as such a random variable, it takes the value 1 at ½, 0 throughout
(1/2, 1], and P(1/2 | [1/2, 1]) = 0 throughout [0, ½). Its only non-zero value
(namely, 1) has probability 0, so its expectation is 0.[2]
Or apply the Ramsey test for evaluating a conditional. Ramsey says in the most
famous passage ever written on conditionals: “If two people are arguing ‘If p will
q?’ and are both in doubt as to p, they are hypothetically adding p to their stock of
knowledge and arguing on that basis about q.” Hypothetically add to your stock of
knowledge the antecedent, the claim that the dart lands in [½, 1]. Now, on that
basis, what do you think of the consequent, the claim that it lands exactly on ½?
That should seem highly unlikely—probability 0, presumably.
Better still, Ramsey offers a version of the test based on the corresponding
conditional probability. He goes on to say: “We can say that they are fixing their
degrees of belief in q, given p” (my emphasis). So fix your degree of belief in ½,
given [½, 1]. That is easy: this conditional probability has to be 0.
Well, perhaps approximately 0. Again, we could argue about whether it has to
be 0, or some infinitesimal instead. But again, as far as solving the paradox goes,
that won’t make an iota—or if you prefer, an epsilon—of difference. The problem
was that probability appears to drop through an entailment. Whether it drops all
the way from ½ to 0, or ‘merely’ from ½ to some infinitesimal, doesn’t matter.
(Nor does it matter if the ½ should be corrected by an infinitesimal amount, as
[2] As before, this expectation may be infinitesimal, but that would do nothing to
resolve the paradox.
considered earlier.) So I will assume that the probability is 0, but feel free to hear
that as ‘infinitesimal’ throughout.
I have considered several ways of thinking about the probability of the
conditional [½, 1] → ½, and they all agree that it is 0. This is the paradox’s
second premise, (Zero).
(Entailment) L entails C
The last step of the paradox was the claim of entailment from L to C (or
equivalently, from L to C'). One way to argue for it is to appeal to the validity of
the ‘Or-to-If’ schema:
Or-to-If
‘p or q’ entails ‘if not p, then q’
L is equivalent to L' = [0, ½) ∨ ½. This has the form ‘p or q’, where p = [0, ½) and
q = ½. And C' = ¬[0, ½) → ½ has the form ‘if not p, then q’. So the inference from
L' to C' is an instance of ‘Or-to-If’. And the ‘Or-to-If’ inference pattern prima
facie seems valid. It seems to schematize the Sherlock Holmesian pattern of
reasoning: one of two cases obtains; so if it’s not the first, it’s the second. Jackson
writes: “Suppose I am told that either the butler did it or the footman did it, then I
may infer that if the butler did not do it, the footman did” (5). He regards
preserving the validity of Or-to-If as such an important desideratum of any theory
of the indicative conditional that it is his main reason for adopting the material
conditional analysis of the indicative conditional.
Here is a second argument for the entailment holding. Consider the conditional
L → C, which unpacks as the iterated conditional
[0, ½] → ([½, 1] → ½).
Your eyes might glaze over when you look at that. But we can apparently quickly
simplify it. Employ the Import-Export principle, championed by various authors
(see Bennett 2003):
Import-Export
p → (q → r) is equivalent to (p & q) → r
or even just
Export:
p → (q → r) is entailed by (p & q) → r
According to Export, L → C is entailed by
([0, ½] & [½, 1]) → ½,
which is equivalent to
½ → ½,
a logical truth. So L → C is a logical truth, and so L entails C.
Here is a third argument for the entailment holding.[3] Suppose that the dart
throw has taken place, and that you learn L. Having done so, then consider C:
what should your attitude to it be? You should believe it. Said another way: when
you learn L, you are thereby committed to C. And this is true as a matter of
necessity. This is strong evidence that L entails C.
(Non-decrease) Probability cannot decrease through entailment.
Like (Half), I will regard this as non-negotiable. It is a theorem of probability
theory.
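That theorem — if A entails B then P(A) ≤ P(B) — can be verified exhaustively on a toy finite space (a Python sketch of mine, using three equiprobable worlds):

```python
from itertools import combinations
from fractions import Fraction

# A toy finite probability space: three equiprobable worlds.
omega = frozenset({1, 2, 3})

def P(event):
    return Fraction(len(event), len(omega))

def entails(A, B):
    """Semantic entailment for events: every A-world is a B-world."""
    return A <= B

# Enumerate all pairs of events and look for a probability drop through entailment.
events = [frozenset(s) for r in range(len(omega) + 1)
          for s in combinations(omega, r)]
violations = [(A, B) for A in events for B in events
              if entails(A, B) and P(A) > P(B)]
```

No pair violates (Non-decrease): whenever A ⊆ B, counting measure guarantees P(A) ≤ P(B), and the same monotonicity holds in any probability space.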
So I believe there are exactly two ways to respond to the paradox: deny (Zero),
or deny (Entailment). Soon we will see how such denials might run. But first, let’s
look at …

[3] Thanks here to Wolfgang Schwarz.

3. A finite version of the paradox
The dart throw involves an infinite scenario: there are infinitely many possible
landing points. But look at all the gifts that infinity has brought us, or better,
wrought upon us: Zeno’s paradox, Hilbert’s Hotel, the Banach-Tarski paradox, the
Burali-Forti paradox, Yablo’s paradox, controversy about the continuum hypothesis,
undecidability, Skolem’s paradox, the St. Petersburg paradox, the two-envelope
problem … (add to this list ad infinitum). I’ve tried to add to this infinite mischief
myself with the Cable Guy paradox and (with Harris Nover) the Pasadena game.
Closer to home, infinitude leads to various paradoxes to do with probability:
Bertrand’s paradoxes, non-measurable sets, non-conglomerability, the shooting
room, and so on.
So maybe the dart paradox is just more trouble with infinity? Well, no: I think it
really does involve the interaction of probability and conditionals, and I can give a
finite version of the problem. (I won’t count it as a new paradox since it is so similar
to the original, but I think not quite as compelling.) Consider a lottery with 3
tickets, numbered 1, 2, 3. Let L = ‘1 or 2 wins’ – mnemonically, a ‘Low number’
wins, counting 2 as a low number. Clearly, P(L) = 2/3. Let C = ‘if 2 or 3 wins, then
2 wins’. Intuitively, P(C) = ½. (Hold that thought; I will soon revisit it.)
Intuitively, L entails C. But 2/3 is greater than ½, and probability cannot drop
through entailment.
I think that the probability of C here is more controversial than the probability
of C was in the original, infinite case. There are various ‘triviality results’ showing
that probabilities of conditionals cannot always be identified with conditional
probabilities. It is clear that the conditional probability of ‘2 wins’ given ‘2 or 3
wins’ is ½; it is less clear that the probability of C is ½. But I thought that it was
clear in the infinite case that P(C) = 0. The triviality results don’t show that the
probability of a conditional can never be identified with the corresponding
conditional probability. In fact, the prospects for such identification seem to be
particularly good at the extreme values of 0 and 1. Moreover, van Fraassen (1976)
has a positive result that probabilities of conditionals can be identified with
conditional probabilities in uncountable spaces. The dart throw space is
uncountable, but the lottery space is not. So I think that transferring the clear
conditional probability judgment to the corresponding judgment about the
probability of the conditional is safer in the dart case than in the ticket case.
Or perhaps C is a Stalnaker conditional, in which case its probability seems to
be 1/3. After all, it seems to be true only at the 2-world that: the closest 2-or-3-world is a 2-world. (At the 1-world it seems arbitrary which of the 2-world and the
3-world is selected as closer, so C fails to be supertrue there.) And 1/3 is less than
2/3, so probability has dropped through the entailment from L to C.
Or think of C as a random variable taking the value 1 at the 2-world, 0 at the 3-world, and P(2 | 2 or 3) = ½ at the 1-world. Its expectation is
(1 × 1/3) + (0 × 1/3) + (½ × 1/3) = ½. But this is still less than 2/3.
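The competing readings of P(C) in this lottery can be tallied exactly (a Python sketch of mine, using fractions to avoid rounding):

```python
from fractions import Fraction

third = Fraction(1, 3)
p_L = 2 * third                                  # '1 or 2 wins'

# Conditional-probability reading: P(2 wins | 2 or 3 wins).
p_C_conditional = third / (2 * third)            # 1/2

# Stalnaker-with-supervaluation reading: C is (super)true only at the 2-world.
p_C_stalnaker = third                            # 1/3

# Expected-truth-value reading: 1 at the 2-world, 0 at the 3-world,
# and P(2 | 2 or 3) = 1/2 at the 1-world.
p_C_expected = 1 * third + 0 * third + Fraction(1, 2) * third   # 1/2

# On every reading, P(C) falls short of P(L) = 2/3.
drops_on_every_reading = all(
    p < p_L for p in (p_C_conditional, p_C_stalnaker, p_C_expected)
)
```

Whichever value one favours, the drop below 2/3 is what powers the finite version of the paradox.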
So we have two different values for P(C): ½ and 1/3. Which is it to be? In a
way, it does not matter, for both of them are below 2/3, which is all that we need
to get the paradox going. In another way, however, perhaps it does matter for my
purposes: the fact that there are two different values may undermine our
confidence in any one of them. At least in the original dart example, we
triangulated to the same value, however we arrived at it, reinforcing our
confidence that that was the right value.
So I will stick with that example, although I do think that a version of the
paradox arises in this finite case, too, so the problem is not essentially to do with
infinity.
Alex: we could even give a comparative probability version of the paradox.
Adam: make the ½ an interval – more like Anderson.
4. Responding to the first paradox
Theory to the rescue?
Denying (Zero)
The paradox is resolved if P(C) is at least ½. I think the best case that can be
made for this is to insist that the indicative conditional is simply the material
conditional. Now, the word “the” sets off alarm bells for me: talk of “the X”
typically presupposes that there is exactly one X. And notice the presupposition
here: “the indicative conditional” presupposes that there is exactly one indicative
conditional. Perhaps there is more than one kind. (So-called ‘biscuit’ conditionals,
such as ‘there are biscuits on the sideboard if you want some’, suggest that there
is.) But let’s honour the presupposition, at least for now.
If the conditional is material, the paradox is easily resolved: the probability of C
is not 0, but rather ½. After all, it is the probability of ¬[½, 1] ∨ ½, which is
equivalent to [0, ½) ∨ ½, and thus to [0, ½]. And of course this has probability ½.
The entailment that I intuited holds, but probability is preserved through it after
all. Indeed it had better do so, for on this view L is equivalent to C. So we have a
simple solution to the paradox. Indeed, friends of the material conditional analysis
may see it as a new argument for that analysis!
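The material reading’s verdict is easy to confirm numerically (my sketch, not the paper’s; C read materially is ¬[½, 1] ∨ {½}, i.e. landing in [0, ½]):

```python
import random

def C_material(x):
    """C read as a material conditional: either the antecedent
    (landing in [1/2, 1]) is false, or the consequent (landing
    exactly on 1/2) is true."""
    return not (0.5 <= x <= 1.0) or x == 0.5

random.seed(1)
trials = 100_000
p_C_material = sum(C_material(random.random()) for _ in range(trials)) / trials
# The estimate sits near 0.5, matching P([0, 1/2]) -- nowhere near 0.
```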
But far from being a happy ending, this strikes me as a reductio of the proposal.
For stare as I might at L and C, they do not look equivalent to me. Be that as it
may, I think that the material conditional is a poor analysis of the indicative
conditional for other reasons. There are many arguments against it in the literature.
Let me emphasize two. Firstly, I will soon argue that ‘Or-to-If’ is not a valid
argument form after all, but according to the material conditional analysis it is.
Secondly, the material conditional contraposes, but the indicative conditional
does not, meaning that there are p and q for which ‘if p then q’ is not equivalent to
‘if not q then not p’. Indeed, as Stalnaker (1984) observes, we can find p and q for
which not only these conditionals differ in truth value, but also their modal status
is different—one contingent, the other necessarily false. Consider this prediction
about this seminar:
If we run late, then we won’t run very late.
That’s surely true (Richard runs a tight ship!). Now contrapose it:
If we run very late, then we won’t run late.
Of course that’s false. But it isn’t just that we regard the first conditional as true
and the second false, as in various stock examples.[4] Worse than that, we regard the
first as contingent, and the second as necessarily false.
This casts serious doubt on the claim that the indicative conditional is the
material conditional. Perhaps, then, we should revisit the thought that there is more
than one indicative conditional. Perhaps some other indicative conditional figures
in the seminar prediction, but the material conditional figures in C. That would
still solve the paradox.

[4] Here is Jackson’s stock example: “If the stock market drops, then it is not the
case that it will drop a lot. Therefore, if the stock market drops a lot, it won’t
drop” (465). But this example appears in a critical discussion of the no-truth-value
view (to be discussed shortly), and he does not make the point about the different
modal status of these conditionals.
If there is more than one indicative conditional, then offhand I should be able to
stipulate that the one that figures in C is another, non-material conditional. I
simply ask you to hear C this other way. But maybe there is something about the
case that prevents you from doing that—e.g. it is a quasi-mathematical example,
and maybe the conditional is automatically material in such cases, whatever I
might try to stipulate. Still, stare as I might at L and C, they don’t look equivalent
to me, even in this quasi-mathematical case, as the material conditional analysis
would have it. I thus reject this response to the paradox.
Nevertheless, one might find ways of explaining away the recalcitrant data that
I regard as fatal to the material conditional analysis of the indicative conditional.
(Grice (19xx) famously began this project by appealing to pragmatics.) Or perhaps
fitting our linguistic intuitions, like fitting a curve to a set of data points in science,
is a messy business, and we should not expect to fit all of them perfectly, settling
instead for the simplest theory that fits most of them well enough. Perhaps the
material conditional analysis wins by that criterion. And it certainly gains bonus
points for solving my paradox so neatly.
I wish I could bring myself to believe the analysis, but I can’t—I think it misses
too much recalcitrant data by too much. So I must look elsewhere for a solution.
Denying (Entailment)
If (Zero) holds, then (Entailment) is surely the culprit. It’s tempting, then, to
‘squeeze it out’. The upshot is that ‘Or-to-If’ is not a valid argument form after all:
the inference from L' to C' provides a counterexample. But it will be more
satisfying if we can argue against (Entailment) independently.
Undermining the arguments for the entailment
As a warm-up for the argument that (Entailment) is false, it will be useful first
to try to defuse the arguments that I gave that it is true. There are independent
reasons for doubting that ‘Or-to-If’ is a valid argument form, pace Jackson. Start
with a contingent truth:
Kenny has black hair.
By ‘or’ introduction, derive:
Kenny has black hair or 0 = 1.
By ‘Or-to-If’, derive:
If Kenny does not have black hair then 0 = 1.
Phew! Just as well Kenny has black hair: the foundations of mathematics depend
on it! (Actually, sometimes he doesn’t …) We have moved from a truth to a
falsehood via Or-to-If. It is not a valid inference rule after all.
Or start with an arbitrary true sentence ‘p’. By ‘or’ introduction, derive ‘p or p’.
By ‘Or-to-If’, derive ‘if not p then p’. This surely implies that p is a necessary
truth: only that way does the conditional get to be vacuously true, which seems to
be its only hope for truth. (Of course, the material conditional analysis implies that
the mere truth of p suffices—but that’s yet another strike against the analysis.) But
since p was arbitrary, ‘Or-to-If’ yields the absurdity that every true sentence is
necessary. Not even Spinoza would welcome this result.
We could have seen this coming. After all, if Or-to-If is valid, then the material
conditional analysis is correct, as various authors have pointed out (e.g. Stalnaker
1984). By Or-to-If, we can validly move from ¬p ∨ q to ¬¬p → q, which is
equivalent to p → q (assuming we can eliminate the double negation). That is, we
can validly move from p ⊃ q to p → q. And we can validly move in the other
direction (as most authors agree)—the indicative conditional is at least as strong
as the material conditional, the falsehood of the latter entailing the falsehood of
the former. So we have that p → q is equivalent to p ⊃ q, just as the material
conditional analysis says.
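The truth-functional half of this collapse can be checked by brute force (a Python sketch of mine): under the material reading, ‘p or q’ and ‘if not-p then q’ are true in exactly the same rows, so Or-to-If comes out trivially valid there:

```python
from itertools import product

def material(p, q):
    """The material conditional p ⊃ q."""
    return (not p) or q

rows = list(product([True, False], repeat=2))

# 'p or q' and the material reading of 'if not-p then q' agree everywhere...
same_truth_conditions = all((p or q) == material(not p, q) for p, q in rows)

# ...so Or-to-If never moves from truth to falsehood on the material reading.
or_to_if_material_valid = all(material(not p, q) for p, q in rows if p or q)
```

The check confirms the diagnosis above: endorsing Or-to-If unrestrictedly is tantamount to reading the indicative conditional materially.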
Note that this is only a warm-up. For it doesn’t follow that Or-to-If is an invalid
inference rule, in the sense that every instance of it is invalid. Massey (xx) taught
us that there can be no such thing (well, modulo anything of the extreme form
‘logical truth, therefore logical falsehood’, which he passed over, but that’s not
relevant here). Merely pointing out invalid instances of Or-to-If, as I have, does
not suffice to show that any instance of Or-to-If is invalid—in particular, the
instance involving L' and C'. For all we know, there is some other valid form under
which the L' to C' inference falls. And never mind form; the inference from L to C
might just be valid in the usual sense that it’s not possible for L to be true and C to
be false, for whatever reason.
And I offered a second reason for thinking that L entails C: the one that
went via the Export principle. But again, we can apparently defuse this reason,
again by showing that it collapses to the material conditional analysis. I adapt an
argument by Gibbard. Consider the following indicative conditional:
(IC)
If A ⊃ C, then A → C.
By Import-Export, this is equivalent to:
[(A ⊃ C) & A] → C.
But this is a logical truth. So by Import-Export, (IC) is a logical truth. But the
converse of (IC) is a logical truth (almost everyone agrees): the indicative
conditional is supposed to be at least as strong as the material conditional. Putting
these facts together, Import-Export entails the material conditional analysis.
Indeed, only the ‘Export’ direction is needed for this proof to go through. By
Export, (IC) is implied by [(A ⊃ C) & A] → C. So it is implied by a logical truth,
and hence must itself be a logical truth. And as before, the converse of (IC) is a
logical truth (almost everyone agrees). Export on its own entails the material
conditional analysis. And by the arguments against that analysis—e.g. the
failure-of-contraposition argument in the previous section—I take that to be a
reductio of Export as a general principle. But notice that this falls short of
refuting my particular application of it. For all that has been said, L → C might
be a logical truth in its own right.
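That [(A ⊃ C) & A] ⊃ C is a logical truth is a four-row truth-table check (my sketch in Python; the indicative arrow itself is not truth-functional, so only the material scaffolding is verified here):

```python
from itertools import product

def material(p, q):
    """The material conditional p ⊃ q."""
    return (not p) or q

# [(A ⊃ C) & A] ⊃ C: the antecedent packages modus ponens,
# so the whole formula is true in every row.
is_tautology = all(
    material(material(a, c) and a, c)
    for a, c in product([True, False], repeat=2)
)
```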
To summarize: the first two arguments that I gave for L entailing C seem to
prove too much: that the material conditional analysis is correct. Again, all this
fails to show that L does not entail C; it only undermines these earlier reasons for
thinking that it does. The third argument for L entailing C can also be undermined,
and indeed that undermining provides a direct argument that L does not entail C.
Let’s go for the jugular, then, turning to three such arguments.
Arguments for the entailment not holding
Recall the third putative reason for thinking that L entails C: upon learning L,
you are committed to C, by necessity. But this reason too can be undermined.
Entailment is monotonic: if L entails C, then any stronger proposition than L also
entails C. But consider this stronger proposition than L: the dart lands on 1/3. Does
it entail C? Subject it to the same test as before: suppose that you learn that the
dart lands on 1/3. Having done so, then consider C: what should your attitude to it
be? Presumably you should not believe it—indeed, arguably you should find it so
improbable that you disbelieve it, that you are committed to its falsehood! Or
perhaps one cannot make sense of an indicative conditional whose antecedent is
incompatible with what one has learned. Very well, then; in that case, upon
learning 1/3, you can no longer make sense of the conditional.[5] Still, your
commitment to C vanishes. You are not committed to something of which you
cannot make sense.
In fact, any stronger proposition than L that does not include ½ goes the same
way. Consider this stronger proposition: the dart lands in [0, 0.499]. Suppose that
you learn that. Again, having done so, you should either regard C as false, or not
be able to make sense of it. In this sense, your commitment to C upon learning L is
highly precarious: if you learn something even just slightly more informative than
L that is incompatible with the dart’s landing on ½, your commitment to C
vanishes. Entailment does not work that way!
To be sure, friends of the material conditional will not grant me this argument.
They will insist that L does entail C, and that (a fortiori) any strengthening of L
also entails C. Of course, they will say, the dart landing on 1/3 entails C—after all,
it entails the falsehood of C’s antecedent! So once again, what started out as an
argument for L entailing C leads naturally to the material conditional analysis. And
again, I do not want to follow it there.
The next anti-entailment argument is related, but a little different.
[5] A reason to think that one can make sense of such a conditional is that if one
knows the antecedent to be false, but one’s conversational partner does not, and
indeed believes it to be true, one can retreat to what is common ground (in
Stalnaker’s 19xx sense), and still agree about what is true if the antecedent is true.
The semantic account of entailment
There are ways of looking at the putative entailment from L to C so that it
appears compelling, and I have tried to convey a couple of such ways to you. But
there is also a way to make it look uncompelling, and I find that a sort of gestalt
shift takes place when I look at things this way. I think this goes by whether I
think of the entailment syntactically or semantically.
Remember the usual semantic account of entailment: L entails C iff at every
world in which L is true, C is true. Let’s pick a world at which L is true: say, the
dart lands on 1/17. Is it true at that world that if the dart lands in [½, 1], then it
lands on ½? It’s hard to see why.[6] Indeed, it is surely false at that world. In fact,
the only L-world at which C seems to be true is one in which the dart lands on ½!
So C is not true at every L-world. Far from it! So L does not entail C after all.
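The same semantic test can be run mechanically in the finite lottery space of section 3 (my illustration; I take it, with the discussion above, that on a Stalnaker-with-supervaluation reading C is true only at the 2-world):

```python
# Worlds are the three lottery tickets; an event is a set of worlds, and
# entailment is truth-preservation, i.e. subsethood of truth sets.
worlds = {1, 2, 3}
L = {1, 2}        # 'a low number wins'
C = {2}           # 'if 2 or 3 wins, then 2 wins': supertrue only at the
                  # 2-world, since at world 1 the selection between the
                  # 2-world and the 3-world is arbitrary

L_entails_C = L <= C     # False: world 1 is an L-world where C is not true
counterexamples = L - C  # the L-worlds at which C fails
```

On the semantic account, a single L-world where C fails is all it takes, and world 1 is exactly that.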
This convinces me of the solution to the first paradox: (Entailment) is false. But
before moving on to the second paradox, I want to look at another argument for
denying (Entailment). I do not subscribe to it myself, but it provides a good point
of entry to the second paradox.
6
Recalling our earlier discussion, a bad reason would be that the dart landing on ½
is the closest [1/2, 1]-world to 1/17, and so by the Stalnaker semantics for
indicative conditionals, C is true at the world that we have picked. This trades on a
pun on the word ‘close’: there is no reason to think that ‘closeness’ as determined
by the usual Euclidean metric on the real line corresponds to ‘closeness’ in the
sense of overall similarity. But if you insist on this reason, then I will merely
change the example to deprive you of it. Replace L by [0, ½) ∨ 1, and replace C by
[1/2, 1] → 1. (Thanks here to Kenny Easwaran.) The paradox will go through very
much as before. But now there is no temptation to say that the closest [1/2, 1]-world
to 1/17 is the one in which the dart lands on 1. On the contrary: 1 is the furthest
such world according to the Euclidean metric!
Or as Michael Titelbaum pointed out to me, we could change the example to
throwing the dart at the unit circle, in which the points 0 and 1 are identified. Then
the nearest [1/2, 1]-world to 1/17 is 1 (= 0).
Indicative conditionals don’t have truth values
Another objection to (Entailment) denies that indicative conditionals have truth
values, and denies more generally that they can enter into entailment relations at
all. For example, Adams, Edgington, Bennett, and others believe that a conditional
may be more or less acceptable, but not true.
So on this view, (Entailment) is false—indeed, it involves a category mistake,
since a conditional is not the sort of thing that could appear in an entailment
relation.7 (Soon we will see a surrogate for entailment that is congenial to no-truthvalue theorists.) Indeed, friends of the no-truth-value account of the indicative
conditional may regard the elegance of this solution as a new argument in favor of
the account!
However, like the response that the indicative conditional is the material
conditional, I find this response to have significant costs. It isn’t just (Entailment)
that involves a category mistake on this view. All claims of validity of arguments
involving conditionals do. For example, it is a category mistake to say that modus
ponens is valid. No-truth-value theorists will be quick to reply that it is
‘probabilistically valid’ in Adams’ sense: it is not possible for the premises to be
jointly probable and the conclusion improbable. They will then appeal to Adams’
Thesis to determine the probability of the relevant conditional. That may be true,
but I think that modus ponens is also just plain valid in the traditional sense. And
this ‘no truth value’ account is surely not true across the board. What about
conditionals of the form p → p, and more generally, those whose antecedents entail
their consequents? There’s also the familiar Frege-Geach problem of how
conditionals are meant to embed in various contexts, such as in Boolean
combinations, or modal contexts, or propositional attitudes, or how they iterate.
To summarize: I believe that (Entailment) is false, and I think that no-truth-value
theorists will agree with me. But the paradox can be reformulated without any appeal
to (Entailment), as follows.

7. A no-truth-value theorist might go further still. If indicative conditionals don’t
have truth values, then perhaps they don’t have probability values either; perhaps
attributing them such values is another kind of category mistake. After all, what is
the probability of a conditional, if not its probability of truth? The probability of
something that cannot have a truth value would seem to be undefined. Then the
paradox is solved: probability does not decrease through entailment. There is still
the threat of the related problem of probability disappearing through entailment: if
X entails Y, and the probability of X exists, then the probability of Y should exist,
because it is at least as great as that of X! So the probability of C should be at
least 1/2, whereas that is not the case according to this reply (an undefined
probability is not at least ½). So this view is best combined with the no-truth-value
view, so that the entailment claim is jettisoned also. In short, on this view, both of
my initial intuitions were mistaken: the probability of C is not 0 (since it doesn’t
exist), and L does not entail C (since C is not fit to enter into entailment relations).
I can also reformulate the paradox: (2) becomes: P(C) is at least ½. The reply that
P(C) is undefined denies this.
5. Second paradox: ‘if L then C’ is maximally acceptable
Previously I asked whether you thought that L entails C. Now let me ask
instead what your attitude is to the conditional L → C, that is
[0, ½] → ([½, 1] → ½)
I won’t ask you to assume Import-Export, nor even merely Export. In fact,
according to a no-truth-value theorist, these principles embody category mistakes:
for they respectively say that one conditional is equivalent to another, and that one
conditional entails another, when conditionals are not fit to appear in logical
relations. But a no-truth-value theorist might adopt surrogates for these logical
relations. For example, McGee recasts Export:
“In general, we assert, accept, or believe a conditional of the form8 [If φ, then
if ψ then χ] whenever we are willing to assert, accept, or believe the
conditional [If φ and ψ, then χ]” (464-5).
I will restate this generalization in terms of acceptability, rather than in terms of
what we in fact accept: sometimes we might accept a conditional that we
shouldn’t, or fail to accept a conditional that we should. Similar principles seem
just as plausible to me for assertability and believability, but there will be no need
to strengthen my principle to include them; it will be problematic enough:
Export for Acceptability: p → (q → r) is at least as acceptable as (p & q) → r.
This seems highly plausible; indeed, by Import-Export, these two conditionals are
apparently equally acceptable. Moreover, it avoids collapsing the indicative
conditional into the material conditional, as Export did.
Export for Acceptability tells us that the acceptability of L → C is at least that
of ½ → ½. That is surely acceptable to the highest degree. And this is not in virtue
of some contingent fact; it’s surely necessarily acceptable to the highest degree.
Moreover, the status of the conditional is obviously knowable a priori. Saying that
a conditional is ‘necessarily and a priori acceptable to the highest degree’ is a
mouthful: for short, let’s say that it is maximally acceptable.
So we have:
(Max acc) L → C is maximally acceptable.
Think of this as a surrogate for the original entailment claim (Entailment) that
even a no-truth-value theorist can accept. But it is still strong enough to generate a
paradox.
8. Here I correct a typo in the original text, replacing one schematic letter with another.
Suppose that C has probability 0, as (Zero) claimed. Intuitively, probability
should not decrease from the antecedent to the consequent of a maximally
acceptable conditional. Since L → C is such a conditional, the probability of its
consequent should be at least as great as that of its antecedent. But this is not the
case.
So now the paradox becomes:
(Half) P(L) = ½
(Zero) P(C) = 0
(Max acc) L → C is maximally acceptable.
(Non-dec max acc) Probability cannot decrease from the antecedent to the
consequent of a maximally acceptable conditional.
(Max acc) is plausibly weaker than (Entailment). (Non-dec max acc) is
correspondingly stronger than (Non-decrease), so perhaps this is now a point of
greater vulnerability. But it still looks watertight to me.
Which is the culprit, then?
6. Responding to the second paradox
Denying (Max acc)
Despite appearances, I will now argue that L → C is unacceptable; the
reasoning is much as it was in the objection to (Entailment) from the semantic
account of entailment. Suppose that in fact the dart lands on 1/3, and that you
believe this, so that the antecedent of L → C is true, and you believe this. If L → C
is acceptable, then it should be acceptable for you to infer C—after all, nothing in
this information should revise the acceptability of L → C. But C seems
unacceptable under this supposition of the dart’s actual landing point. Moreover,
we can make almost any other supposition about the dart’s landing in a way that
verifies L, and the same reasoning goes through. Indeed, the only such supposition
that seems to make C acceptable is the dart’s landing on ½—in that case C has a
true antecedent and true consequent, so it is plausibly true. So the acceptability of
C is hostage to an extremely precise and improbable supposition about the way in
which L is realized.
Another way to see the point is to read ‘if L then C’ as: ‘if L, then the following
conditional is acceptable: if the dart lands on [½, 1], then it lands exactly on ½’.
But of the uncountably many ways in which L can be realized, only one of them
makes that following conditional acceptable; all the rest make it unacceptable. ‘If
L then C’ thus grossly exaggerates the conditions under which C is acceptable: it
claims that L suffices, when in fact only a very particular realization of L will do,
one that has probability 0. If L → C is acceptable at all, then it is so to the tiniest
degree. Yet (Max acc) claims that it is maximally so. Hence, (Max acc) is false.
This resolves the paradox.
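If we score a simple conditional’s acceptability by the corresponding conditional probability, in the spirit of Adams’ Thesis, the clash between the two readings of L → C can be computed directly. The sketch below uses a toy discretization of the dart’s landing points (my own modeling choice, not part of the paper’s setup):

```python
from fractions import Fraction

# Discretized dart: landing points k/n for k = 0..n, uniform (toy model).
n = 1000
worlds = [Fraction(k, n) for k in range(n + 1)]

def acc(antecedent, consequent):
    """Adams-style acceptability of 'if A then B': P(B | A)."""
    A = [w for w in worlds if antecedent(w)]
    return Fraction(sum(1 for w in A if consequent(w)), len(A))

half = Fraction(1, 2)
in_L = lambda w: w <= half       # L: the dart lands in [0, 1/2]
in_R = lambda w: w >= half       # antecedent of C: [1/2, 1]
is_half = lambda w: w == half    # consequent of C: exactly 1/2

# Import-Export reading of L -> (R -> 1/2): P(1/2 | L & R).
print(acc(lambda w: in_L(w) and in_R(w), is_half))   # 1

# The rival reading argued for above: given L, how probable is it that the
# embedded conditional R -> 1/2 is acceptable? Only the world 1/2 makes it so.
print(acc(in_L, is_half))                             # 1/501
```

The first value is maximal; the second vanishes as the discretization is refined, matching the verdict that L → C is acceptable at best to the tiniest degree.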
[DRAFT NOTES: Bob’s contextualism. Brian: if you insist on reading the
consequent unrestrictedly, then it’s true. Bennett on Spanish war.
Contextualism compatible with relativism. Lots going on in conditionals.
Bob: P(C) = 0; L entails C; accepting L commits you to accepting C*, but
the inference from L to C* is not valid.]
We thus have an argument against Export for Acceptability: not all of its
instances are valid. For ‘if [0, ½] and [½, 1], then ½’ is maximally acceptable. So
Export for Acceptability predicts that ‘if [0, ½], then if [½, 1] then ½’ is
maximally acceptable. But this prediction is false: far from being maximally
acceptable, this iterated conditional is at best acceptable to the tiniest degree.
Hence, I reject Export for Acceptability.
In this section I have framed the discussion in terms that should be congenial to
a no-truth-value theorist about the indicative conditional. I happen to be a truth-value
theorist, so I am happy to speak of conditionals entering into good old-fashioned,
red-blooded entailment relations. I draw a further moral from the above
discussion. ‘If [0, ½] and [½, 1], then ½’ is true (indeed, necessarily so). Export,
then, predicts that ‘if [0, ½], then if [½, 1] then ½’ is also true. But this prediction
is false. Hence, I reject Export.
To be sure, Export and Export for Acceptability enjoy considerable support
from intuition. But I think that this is trumped by the even stronger evidence
against them that the dart case provides. And yet there appears to be a compelling
reason to accept Export. It is easily shown that the following inference schema is
valid on the Stalnaker/Lewis semantics for the conditional, and also
probabilistically valid à la Adams, and it is surely watertight:
And-Antecedent to Or-Consequent: (p & q) → r ⊨ p → (¬q ∨ r)
And the inference from p → (¬q ∨ r) to p → (q → r) seems compelling. So starting
with (p & q) → r we derive p → (q → r) in two steps. Export appears to be
vindicated after all.
But wait! That involved or-to-if reasoning in the consequent of a conditional:
Or-to-If-in-the-Consequent: x → (y ∨ z) ⊨ x → (¬y → z)
(Simply replace x by p, y by ¬q, and z by r, and eliminate the double negation.)
We get ‘Or-to-If’ by making x a tautology. So my earlier counterexample to ‘Or-to-If’
can immediately be adapted to provide a counterexample to this inference
schema: let x be a tautology, y be ‘Kenny has black hair’, and z be ‘0 = 1’. x can be
contingent, too—e.g. ‘I am not mistaken’. The shadow of the nefarious Or-to-If is
lurking in the background here too.
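Both halves of this diagnosis can be checked mechanically. The sketch below (mine; the probability figure for the Kenny example is an arbitrary illustrative assumption) first verifies by truth table that And-Antecedent-to-Or-Consequent is valid on the material reading, then shows how Or-to-If-in-the-Consequent fails on a Ramsey-test scoring:

```python
from itertools import product

# 1. And-Antecedent-to-Or-Consequent is classically valid: whenever the
#    material (p & q) -> r is true, so is the material p -> (~q v r).
mat = lambda a, b: (not a) or b
for p, q, r in product([True, False], repeat=3):
    if mat(p and q, r):
        assert mat(p, (not q) or r)

# 2. Or-to-If-in-the-Consequent fails probabilistically. Let x be a
#    tautology, y = 'Kenny has black hair' (illustrative probability 0.9),
#    z = '0 = 1' (probability 0). The premise's consequent y v z is probable...
p_y, p_z = 0.9, 0.0
p_y_or_z = p_y + p_z             # 0.9 (y and z are incompatible here)
# ...but the conclusion's consequent, Ramsey-scored as P(z | ~y), is 0.
p_z_given_not_y = p_z / (1 - p_y)
print(p_y_or_z, p_z_given_not_y)
```

So the first step of the two-step derivation of Export is safe; it is the second, Or-to-If step that smuggles in the fallacy.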
We can see, then, how we could easily be seduced by Export and Export for
Acceptability. We rightly infer from (p & q) → r to p → (¬q ∨ r); we then wrongly
infer p → (q → r). Once we have exposed Or-to-If and now Or-to-If-in-the-Consequent
as fallacious, we should avoid falling into this trap.
The rejection of Export has serious ramifications for no-truth-value accounts of
conditionals. I have mentioned the Frege-Geach problem for such accounts. While
all parties should acknowledge that that problem remains live to some extent, no-truth-value
theorists have insisted that considerable progress has been made in
solving it. In particular, McGee’s (1989) work on the Import/Export principles
promised to solve the problem for an important class of cases: right-nested
iterations of conditionals. If I am right, the dart example shows that those
principles do not hold in general, and that iterated conditionals remain as
problematic as ever for no-truth-value theorists.
7. McGee’s “Counterexample to Modus Ponens”
Another pay-off of our rejection of the Export principles is that we have a
principled way of blocking McGee’s (1985) famous argument against modus
ponens. (It is some 25 years old, and it has generated a huge literature. At a
conference in Leuven on conditionals that I went to, almost every talk of the
twenty or so given referred to McGee’s argument at some point.) Indeed, if you
know that argument you may have already seen some parallels to my paradox. The
best-known McGee case is the ingenious one that follows (p. 462):
Opinion polls taken just before the 1980 election showed the Republican
Ronald Reagan decisively ahead of the Democrat Jimmy Carter, with the
other Republican in the race, John Anderson, a distant third. Those apprised
of the poll results believed, with good reason:
If a Republican wins the election, then if it’s not Reagan who wins it will
be Anderson.
A Republican will win the election.
Yet they did not have reason to believe
If it’s not Reagan who wins, it will be Anderson.
Schematically, the putative counterexample to modus ponens is:
Republican → (¬Reagan → Anderson)
Republican
¬Reagan → Anderson
Republican, in turn, can be decomposed as the disjunction Reagan ∨ Anderson, the
former disjunct overwhelmingly more probable than the latter. Kolodny and
MacFarlane (xx) concede: “The fact that there were only two Republicans in the
race made [the first premise] unassailable”, and I believe this reaction is the norm.
The second premise was reasonably believed ahead of the election and turned out
to be true. The conclusion, however, is clearly false/should be disbelieved, since
Anderson was such a long shot compared to Carter.
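The probabilistic structure of the case can be made explicit with numbers (the passage supplies none, so the figures below are hypothetical, chosen only to fit the poll description):

```python
# Illustrative pre-election probabilities (hypothetical): Reagan decisively
# ahead, Carter second, Anderson a distant third.
P = {"Reagan": 0.55, "Carter": 0.41, "Anderson": 0.04}

p_republican = P["Reagan"] + P["Anderson"]
# The conclusion, Ramsey-scored: P(Anderson | not Reagan) -- low, so it
# should be disbelieved.
p_conclusion = P["Anderson"] / (P["Carter"] + P["Anderson"])
# The unexported conditional (Republican & not Reagan) -> Anderson: certain,
# since Reagan and Anderson were the only Republicans in the race.
p_unexported = P["Anderson"] / P["Anderson"]

print(round(p_republican, 2), round(p_conclusion, 2), p_unexported)
```

On these figures the second premise is probable, the unexported conditional is certain, and yet the conclusion scores very low: exactly the profile McGee exploits.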
The dart example can be regarded as somewhat parallel when we identify
Republican with L, Reagan with [0, ½), and Anderson with ½. The iterated
conditional in the election example parallels L → C in the dart example:
[0, ½] → (¬[0, ½) → ½)
[0, ½], in turn, can be decomposed as the disjunction [0, ½) ∨ ½, the former
disjunct overwhelmingly more probable than the latter. I could even offer another
‘counterexample’ to modus ponens paralleling McGee’s:
First premise: L → C.
Second premise: L is true. (We stipulate that in fact the dart lands in [0,
½]. If you prefer, we could make this premise that L is
believed/accepted.)
But these premises give no reason to believe/accept C.
And it should be clear what my reaction is to this ‘counterexample’: the first
premise is at best acceptable to the tiniest degree. Indeed, since I am a truth-value
theorist, I can state my opinion about this iterated conditional more directly: it’s
false. To be sure, a closely related conditional,
([0, ½] & [½, 1]) → ½,
is true (indeed necessarily so), and maximally acceptable. But since we have
rejected Export and Export for Acceptability, the truth/acceptability of this
conditional gives us no reason to believe/accept the iterated conditional.
We do have a reason to believe/accept the related conditional with a disjunctive
consequent:
[0, ½] → [ [0, ½) ∨ ½ ]
It follows from the previous conditional by And-Antecedent-to-Or-Consequent,
and it is obvious in its own right. But again, the inference from that to the iterated
conditional is fallacious, being an instance of Or-to-If-in-the-Consequent.
So there are parallels between my dart case and McGee’s ‘counterexample’,
and we should have parallel reactions to them. We should assail McGee’s
“unassailable” first premise,
Republican → (¬Reagan → Anderson)
In fact, it is at best acceptable to a tiny degree. It claims that Republican suffices
for ¬Reagan → Anderson, when in fact only very particular realizations of
Republican will do. In particular, the actual realization of Republican will not do!
And whatever contextualist moves we might try, surely that realization cannot be
properly ignored.
Indeed, as before, I prefer to say it more directly: the iterated conditional is
false. And it is false for the simplest of reasons. An indicative conditional with a
true antecedent and a false consequent is false. (As most authors agree, the
indicative conditional is at least as strong as the material conditional, and a
material conditional with a true antecedent and false consequent is false.) McGee’s
first premise has a true antecedent (a Republican did indeed win), and a false
consequent (since Anderson was such a long shot compared to Carter).
We may recast all this in the language of ‘belief’, or ‘acceptance’ if you prefer.
Pace McGee, those apprised of the poll results should not have believed/accepted
the first premise. For by hypothesis, they believed/accepted its antecedent, and
disbelieved/did not accept its consequent.
Lest it appear that I am begging the question against McGee, let me assemble
what I take to be the strongest intuitions in this case. Firstly, I submit, here is the
strongest intuition regarding conditionals that can be had: an indicative
conditional with a true antecedent and a false consequent is false. I think that the
very functional role of the indicative conditional guarantees this. (Jackson goes so
far as to call this “the uncontested principle”.) Secondly, a conditional with a
believed/accepted antecedent and a disbelieved/not accepted consequent should be
disbelieved/not accepted. Thirdly, in parallel to the dart scenario, there are
conditionals closely related to the first premise that are indeed unassailable:
(Republican & ¬Reagan) → Anderson
McGee appeals to Export for Acceptability to infer the first premise from this. But
I reject Export for Acceptability. One can infer from this:
Republican → (Reagan ∨ Anderson)
Now inferring the first premise is a tempting piece of Or-to-If reasoning in the
consequent of a conditional. But this temptation should be resisted, much as
before. It is no licence for Export for Acceptability, or Export.
So I claim to have located the sources of McGee’s intuition (shared by Kolodny
and MacFarlane and many others) that the iterated conditional is true/should be
believed, but these sources in fact lend it no support. On the other hand, the
intuition that a conditional with a true/believed antecedent and a false/disbelieved
consequent is false/should be disbelieved is unimpeachable. Moreover, this
resolution of McGee’s ‘counterexample’ is simpler than various others that have
been offered—for example, that it involves a fallacy of equivocation (Sinnott-Armstrong
et al. 19xx).
Here, then, is a recipe for generating McGee-style ‘counterexamples’ to modus
ponens. Begin with a (known) true sentence, P. Disjoin it with a highly
improbable falsehood, Q, for which Pr(Q | ¬P) is very low, to yield the truth R = P
∨ Q. This already provides a counterexample to Or-to-If, for we are unwilling to
derive ¬P → Q, whose Ramsey test score is low, from R. Now derive from R two
true conditionals: (R & ¬P) → Q, and R → (P ∨ Q). Finally, derive R → (¬P →
Q), either by Export from the first conditional, or by Or-to-If reasoning in the
consequent from the second conditional. Voilà! You have arrived at a nested
consequent from the second conditional. Voilà! You have arrived at a nested
conditional whose antecedent is true; modus ponens licenses you to infer ¬P → Q;
but this scores poorly according to the Ramsey test, and so you reject it. Replace
‘true’ by ‘acceptable’ or ‘reasonably believed’ throughout this recipe if you are a
no-truth-value theorist.
And there is a recipe for diagnosing where this reasoning goes wrong: the
derivation of R → (¬P → Q) is illicit either way.9
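The recipe, and the Ramsey-test scores that drive it, can be rendered as a toy computation (the probability model and event names below are my own illustrative choices, reusing hypothetical election figures):

```python
# A toy rendering of the recipe: events are sets of outcomes, and the
# Ramsey score of 'if A then B' is the conditional probability P(B | A).
def ramsey(prob, A, B):
    pa = sum(p for w, p in prob.items() if w in A)
    return sum(p for w, p in prob.items() if w in A and w in B) / pa

prob = {"reagan": 0.55, "carter": 0.41, "anderson": 0.04}  # hypothetical
P_true = {"reagan"}            # the (known) true sentence P
Q_false = {"anderson"}         # improbable falsehood, with Pr(Q | ~P) low
R = P_true | Q_false           # R = P v Q, true because P is

not_P = set(prob) - P_true
# Why we refuse Or-to-If: ~P -> Q scores low...
print(round(ramsey(prob, not_P, Q_false), 2))     # 0.09
# ...yet (R & ~P) -> Q is perfect, and Export would launder that score
# onto the nested R -> (~P -> Q).
print(ramsey(prob, R - P_true, Q_false))          # 1.0
```

The gap between the two scores is exactly the gap the illicit Export step papers over.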
My dart case resembles McGee’s election case, but it differs in three important
ways. Firstly, it gives us precise probability assignments. Secondly, one of these
assignments is not just small, but 0. Finally these assignments are necessary, and
knowable a priori. All of these features allow me to derive lessons from my case
that could not be derived from McGee’s.
9. Here is a slight revision of the recipe. If we replace “Pr(Q | ¬P) is very low” with
“Pr(Q | ¬P) is low or middling”, then the ‘counterexample’ may not seem quite as
compelling: ¬P → Q may not score so badly on the Ramsey test. Still, it may
score badly enough—i.e. not highly—to seem compelling enough. Another of
McGee’s ‘counterexamples’ has this form. Here the conclusion does not sound as
bad as ¬Reagan → Anderson did: you may think that if Otto’s chances of striking
gold and of striking silver are roughly equal, then it has middling acceptability. So
I don’t think this case is as compelling at first blush as the election case. Anyway,
it fits the slightly revised recipe: let P = Otto finds nothing; Q = Otto finds silver;
R = Otto doesn’t find gold.
8. Conclusion: paradox lost
Both paradoxes began with probability claims that I found indisputable: P(L) =
½, and P(C) = 0. They then added a claim about the inferential relationships: that
L entails C, and that L  C is maximally acceptable.
But L does not entail C. Don’t be fooled by Or-to-If! Moreover, we had good
reason to dispel Or-to-If, since by its lights the indicative conditional is the
material conditional, which we found untenable.
L  C is not maximally acceptable. Don’t be fooled by Export for
Acceptability! And once we dispelled Export for Acceptability, we also dispelled
McGee’s ‘counterexample’ to modus ponens. Or-to-If was again the bad guy, for
it made Export for Acceptability, and Export, seem compelling, when they are
fallacious. I showed how to generate even quicker ‘counterexamples’ to modus
ponens, and once again Or-to-If was to blame. Here are the relationships
diagrammed, with arrows representing support. Notice that Or-to-If supports the
whole edifice—a shaky foundation.
"Counterexamples" to
Modus
Ponens
Second
Paradox
First
Paradox
Material
Conditional
Export
Or-To-If
Export for
Acceptance
33
So Or-to-If’s downfall congenially helped to bring down with it 6 other things
that I hoped not to leave standing:
1 and 2: both paradoxes.
3. The indicative conditional’s collapsing into the material conditional.
4 and 5. Support for Export and Export for Acceptability.
6. McGee’s ‘counterexamples’ to modus ponens.
With the downfall of Or-to-If, may we now enjoy 1000 years of peace.
[We ran late, but we didn’t run very late.]10
Philosophy Program
Research School of Social Sciences
Australian National University
Canberra, ACT 0200, Australia
REFERENCES (to be completed)
Bennett, Jonathan (2003): Conditionals, Oxford University Press.
Eells, Ellery and Brian Skyrms (eds.) (1994), Probability and Conditionals,
Cambridge: Cambridge University Press.
Harper, W.L., Stalnaker, R., and Pearce, G. (eds.) (1981): Ifs, Reidel, Dordrecht.
Jackson, Frank (1987): Conditionals, Blackwell, Oxford.
10. I thank Rachael Briggs, Dave Chalmers, John Collins, Andy Egan, David Etlin, Daniel Greco,
Frank Jackson, Carrie Jenkins, Barry Loewer, Aidan Lyon, Andy McGonigal, Daniel Nolan, David
Over, Joe Salerno, Michael Simpson, Robert Stalnaker, Jason Stanley, Mike Titelbaum, Peter
Vranas, and Timothy Williamson for helpful discussion. Thanks also to audiences at the Australian
National University, St. Louis State University, Katholieke Universiteit Leuven, Stanford
University, University of Michigan, University of Wisconsin at Madison, MIT, and the Summer
School on Conditionals at Central European University, Budapest. Special thanks to Kenny
Easwaran, Vann McGee, and Wolfgang Schwarz for comments that led to significant
improvements.
Jeffrey, Richard, and Robert Stalnaker (1993): "Conditionals as Random
Variables", in Eells and Skyrms.
McGee, Vann (1985): "A Counterexample to Modus Ponens", The Journal of
Philosophy 82, 462-470.
McGee, Vann (1989): “Conditional Probabilities and Compounds of
Conditionals”, Philosophical Review 98 (4): 485-541.
Stalnaker, Robert (1968): "A Theory of Conditionals", Studies in Logical Theory,
American Philosophical Quarterly Monograph Series, No. 2, Blackwell,
Oxford.
Stalnaker, Robert (1970): "Probability and Conditionals", Philosophy of Science
37, 64-80; reprinted in Harper et al.
Stalnaker, Robert C. (1981): “Indicative Conditionals”, reprinted in Harper et al.
Stalnaker, Robert C. (1984): Inquiry, Cambridge, Mass., Bradford Books, MIT
Press, 1984.
van Fraassen, Bas (1976): "Probabilities of Conditionals", in Harper and Hooker
(eds.), Foundations of Probability Theory, Statistical Inference and
Statistical Theories of Science, Vol. I, Reidel, 261-301.
…