
For Many Philosophical Disagreements, You Are Probably Wrong
Mark Walker
Comments welcome: mwalker@nmsu.edu
1. Introductory
Much of the literature on disagreement focuses on the issue of whether I am still entitled to believe that
P, when I find out an epistemic peer believes not-P.1 One disagreement about disagreement is how much (if
any) revision of doxastic attitudes is called for by awareness of peer disagreement. On the “conciliationist”
view, recognition of epistemic peer disagreement requires a substantial revision of doxastic attitudes. On the
“steadfastness” view, it is (epistemically) permissible to maintain one’s doxastic attitudes in the face of
disagreement with epistemic peers.2 In what follows, we will think of conciliationism and steadfastness in their
most extreme forms. The most extreme form of conciliationism is sometimes referred to as the “equal weight
view”, which directs epistemic peers to give equal weight to the original view and the opposing view. A well-known example may serve as illustration:
TWO FOR DINNER: Suppose you and I go for dinner on a regular basis. We always split the
check equally, not worrying about whose dinner costs more, who drank more wine, etc. We also
always add a tip of 23% and round up to the nearest dollar when dividing the check. (We reason
that 20% is not enough and 25% is ostentatious.) We both pride ourselves on being able to do
simple arithmetic in our heads. Over the last five years we have gone out for dinner
approximately 100 times, and twice in that time we disagreed about the amount we owed. One
time I made an error in my calculations, the other time you made an error in your calculations.
1 As we shall see below, the notion of ‘epistemic peer’ is defined in several ways. For the present, we may understand epistemic peers as
subjects who are equally likely to arrive at the truth of some disputed question.
2 The conciliatory camp, broadly construed, comprises, for example, Richard Feldman, “Epistemological Puzzles about Disagreement,” in
Epistemology Futures, Edited by S. Hetherington (New York: Oxford University Press, 2006), 216–36. Richard Feldman, “Reasonable Religious
Disagreements,” in Philosophers without Gods: Meditations on Atheism and the Secular Life (Oxford: Oxford University Press, 2007). Richard
Feldman, “Evidentialism, Higher-Order Evidence, and Disagreement,” Episteme 6, no. 03 (2009): 294–312. David Christensen, “Epistemology of
Disagreement: The Good News,” The Philosophical Review, 2007, 187–217. David Christensen, “Disagreement, Question-Begging, and Epistemic
Self-Criticism,” Philosopher’s Imprint, 2011. Adam Elga, “Reflection and Disagreement,” Noûs 41, no. 3 (2007): 478–502. Jonathan Matheson,
“Conciliatory Views of Disagreement and Higher-Order Evidence,” Episteme 6, no. 03 (2009): 269–79. The steadfastness camp, broadly construed,
comprises, for example, Peter Van Inwagen, “Is It Wrong, Everywhere, Always, and for Anyone to Believe Anything on Insufficient Evidence?,” in
Faith, Freedom, and Rationality (London: Rowman & Littlefield, 1996), 136–53. Peter Van Inwagen, “We’re Right. They’re Wrong,” in
Disagreement (Oxford University Press, 2010). Thomas Kelly, “The Epistemic Significance of Disagreement,” Oxford Studies in Epistemology 1
(2005): 167–96. Thomas Kelly, “Peer Disagreement and Higher Order Evidence,” in Disagreement (New York: Oxford University Press, 2010).
Alvin Plantinga, Warranted Christian Belief (Oxford University Press, 2000). Ernest Sosa, “The Epistemology of Disagreement,” in Social
Epistemology (New York: Oxford University Press, 2010), 278–97.
Both times we settled the dispute by taking out a calculator. On this occasion, I do the arithmetic
in my head and come up with $43 each; you do the arithmetic in your head and come up with
$45 each. Neither of us has had more wine or coffee; neither is more tired or otherwise
distracted. How confident should I be that the tab really is $43 and how confident should you be
that the tab really is $45 in light of this disagreement?3
The extreme form of conciliation requires assigning equal credence to the claim that the tab is $43 each and the
claim that the tab is $45 each. The most extreme version of steadfastness permits my credence to remain
unchanged, even in light of your excellent track record.
Sometimes disagreement is modeled in terms of belief/disbelief and withholding of belief,4 but for our
purposes, it will help to borrow the Bayesian convention where numbers between 0 and 1 are used to indicate
credence in some proposition.5 1.0 indicates maximal confidence (full belief), 0.0 indicates maximal
lack of confidence (full disbelief), and 0.5 is where the proposition is neither believed nor disbelieved.
As noted, there is a continuum of possibilities between extreme conciliationism and extreme
steadfastness. Let us think of ‘midway’ as the position that permits me 50% more credence in my own view than
conciliationism permits. To keep the arithmetic simple, let us suppose in
connection with the previous example that initially I am supremely confident that the tab is $43 (credence =
1.0). I then find out you think the correct number is $45. Since conciliationism indicates that I should reduce my
credence to 0.5, and midway permits 50% more than what conciliationism permits, midway permits me to
hold the claim that the tab is $43 with 0.75 credence, and the claim that the tab is $45 with 0.25 credence.
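To make the arithmetic explicit, here is a minimal sketch (in Python; the function name and interface are mine, purely for illustration) of how midway distributes credence over n contrary views, with conciliationism as the special case where the advantage is zero:

```python
def midway_credence(n_views: int, advantage: float = 0.5) -> list[float]:
    """Give one's own view `advantage` (e.g., 0.5 = 50%) more credence than
    the conciliationist equal-weight share; split the remainder among peers."""
    equal_share = 1.0 / n_views                 # what conciliationism assigns
    mine = equal_share * (1.0 + advantage)      # my inflated share under midway
    rest = (1.0 - mine) / (n_views - 1)         # each peer's remaining share
    return [mine] + [rest] * (n_views - 1)

print(midway_credence(2))  # [0.75, 0.25]: the TWO FOR DINNER case above
```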
An alleged consequence of conciliationism is skepticism: suspension of belief about P or not-P across a
wide range of disagreements. However, given some plausible assumptions, including fallibilism and
disagreement about contrary, rather than contradictory, philosophical positions, it will be argued that
conciliationism actually mandates a stronger conclusion: we should disbelieve many of our philosophical
views, that is, we should believe that many of our philosophical beliefs are probably false. Furthermore, and
perhaps even more surprisingly, the same stronger conclusion follows from midway as well.
Finally, steadfastness faces a dilemma in light of these same considerations.
3 This is basically David Christensen’s example with minor tweaking. See Christensen, “Epistemology of Disagreement.” David
Christensen, “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy Compass 4, no. 5 (2009): 756–67.
4 Richard Feldman, “Respecting the Evidence,” Philosophical Perspectives 19, no. 1 (2005): 95–119.
5 Cf. Kelly, “Peer Disagreement and Higher Order Evidence.”
2. Three Concepts of Epistemic Peers
There are at least three different understandings of the notion of ‘epistemic peer’ in the literature:
Virtue Peers (VP): X and Y are (approximately) the same in terms of epistemic virtues with
respect to P.
Correctness Peers (CP): X and Y have (approximately) the same probability of making an
epistemic mistake about P.
Accuracy Peers (AP): X and Y are (approximately) equally likely to determine the truth about
P.6
A representative statement of VP is as follows:
A and B are epistemic peers relative to the question whether p when A and B are evidential and
cognitive equals with respect to this question—that is, A and B are equally familiar with the
evidence and arguments that bear on the question whether p, and they are equally competent,
intelligent, and fair-minded in their assessment of the evidence and arguments that are relevant to
this question.7
A version of CP is offered by Trent Dougherty: “…neither A nor B have any reason to think that the probability
of A making a mistake about the matter in question differs from the probability of B making a mistake about the
matter.”8 Worsnip offers a statement of AP: “What matters when it comes to disagreement is how likely my peer
is to be right, that is, how reliable she is.”9
It is clear that these three concepts are distinct. Consider, for a start, the relationship between VP and
CP. As Trent Dougherty suggests, VP seems more concerned with “inputs” and CP more about “outputs”.10 To
6 I will ignore the qualification “approximately” in these definitions where it does not affect the discussion.
7 Jennifer Lackey, “Disagreement and Belief Dependence: Why Numbers Matter,” in The Epistemology of Disagreement: New Essays,
2013, 243–68. P. 243. In a similar vein, Christensen writes: “Much of the recent discussion has centered on the special case in which one forms some
opinion on P, then discovers that another person has formed an opposite opinion, where one has good reason to believe that the other person is one’s
(at least approximate) equal in terms of exposure to the evidence, intelligence, freedom from bias, etc.” (Christensen, “Disagreement as Evidence.”)
Notice that Christensen writes “approximate equal”. In a footnote (243, note 2) Lackey qualifies her definition to say “roughly equal”. This sort of
qualification to ‘epistemic peers’ (approximately, or roughly) will be assumed throughout this paper.
8 Trent Dougherty, “Dealing with Disagreement from the First-Person Perspective: A Probabilist Proposal,” Disagreement and Skepticism
46 (2013): 218–38. P. 222. I’m not entirely sure that Dougherty endorses CP rather than AP, or whether his notion includes both CP and AP. While
not pointing the finger at anyone, including Dougherty, I agree with Christensen that there appears to be some ambiguity in the literature as to
whether CP or AP is at issue (David Christensen, “Conciliation, Uniqueness and Rational Toxicity,” Nous, 2014).
9 Alex Worsnip, “Disagreement about Disagreement? What Disagreement about Disagreement?,” Philosopher’s Imprint, 2014, 1–20. P. 2.
For a similar understanding of AP see Christensen from whom I lift the term ‘accuracy peer’ (Christensen, “Conciliation, Uniqueness and Rational
Toxicity”).
10 Dougherty, “Dealing with Disagreement from the First-Person Perspective.”
see why, suppose we define VP as Lackey does above. In that case, it might be that X and Y are VP but not
CP. Why? For one thing, there is the issue of whether we have correctly identified all the relevant virtues.
Consider that Kelly has “thoughtfulness” as part of his understanding of epistemic peers, whereas Lackey does
not include this on her list.11 If we define VP with Lackey, but think that thoughtfulness is relevant to the issue
of making a mistake, then X and Y might be VP but not CP.
It seems that CP is neither necessary nor sufficient for AP. A modified version of TWO FOR DINNER
provides a counterexample to the claim that CP is necessary for AP. Imagine things much as before except that
you claim that God whispered $45 to you. In the past, you have said that when calculating a tab there is nothing
going on in your head until a number just pops up, which you attribute to God whispering the answer. I have
pointed out on numerous occasions that in general you have a terrible track record with respect to the God-whispering
hypothesis, e.g., when I flip a coin it turns out that God whispers the correct answer to you only half
the time. The only time the God-whispering hypothesis appears successful is when you are doing mental
calculations. I suggest that some people have the ability to do mental math subconsciously and this explains
your excellent track record calculating our dinner tabs. It also explains why you fail in your predictions in so
many other cases. You reject this explanation as heretical. Accordingly, I reject the idea that we are CP about
the tab, but accept that we are AP on this issue. The example also shows a case where VP is not necessary for
AP, since it is plausible that there is some sort of epistemic vice associated with your failure to accept defeat for
your God-whispering belief.
To understand why CP is not sufficient for AP, we need to look briefly at the issue of uniqueness versus
permissiveness. Uniqueness is the thesis that there is only one maximally rational doxastic response to a given
set of evidence, while permissiveness allows instances where there is more than one maximally rational
doxastic attitude to a given set of evidence.12 On the face of it, it seems that permissiveness is friendly to
steadfastness. And indeed, this is the correct conclusion if we are thinking about epistemic peers as CP. To see
why permissiveness licenses steadfastness, at least in some instances, consider the following example.13
Suppose we are hiking out of electronic communication with the outside world. The American presidential
election took place the day before, and we are speculating about who won. Let us assume it is a permissive case:
an instance where more than one maximally rational doxastic attitude is permitted. We both reason correctly
11 Kelly, “The Epistemic Significance of Disagreement.”
12 Roger White, “Epistemic Permissiveness,” Philosophical Perspectives 19, no. 1 (2005): 445–59. Christensen, “Conciliation, Uniqueness
and Rational Toxicity.”
13 Adapted from Kelly, “The Epistemic Significance of Disagreement.”
using the same set of evidence. You think it is slightly more likely that the Republican candidate won (p =
0.54), while I think it is slightly more likely that the Democratic candidate won (not-p = 0.54). Since by
assumption we have both reasoned correctly based on shared evidence, we are CP. Since we both have reasoned
correctly, we are fully justified. So, the fact that we disagree provides no evidence for doxastic revision.
Notice, however, this means that in permissive cases, the fact that I am fully justified in believing P
provides no reason to suppose that P is likely to be true. In other words, in permissive cases, I may say to
myself: “I am fully justified in believing P, yet I still wonder whether P is more likely than not true.” The reason
is that both of our credences cannot consistently satisfy the axioms of probability: I can’t say I am correct that
pr(P) = 0.54 and you are correct that pr(not-P) = 0.54. Permissiveness says we are both justified, so CP;
permissiveness does not say that we are both accurate, so permissiveness does not say that we are AP.14 Good thing,
since both of us can’t be right.
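The point is just the additivity axiom at work: a single coherent probability function must have pr(P) + pr(not-P) = 1, so our two credences cannot both be accurate. A one-line check (variable names mine):

```python
my_credence_P, your_credence_not_P = 0.54, 0.54
# One coherent probability function must satisfy pr(P) + pr(not-P) = 1:
print(my_credence_P + your_credence_not_P)  # 1.08 > 1.0: we cannot both be right
```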
So, those who use permissiveness to defend steadfastness are correct that disagreement is not evidence
for belief revision. This is because the issue of disagreement is orthogonal to the question of accuracy, given
permissiveness. This is confirmed when we realize agreement in permissive cases is also not evidence that
some belief is likely true. Suppose the whole world joins hands and sings the praises of P in choric unison. All
the agreement in the world does not answer the question of whether P is likely to be true if P involves a
permissive case, since the same evidence permits the whole world to sing the praises of not-P as well. But this is
just to say that the price of permissiveness is abandoning the idea that being justified about P gives one
reason to think that P is likely true.15
So, there is a certain irony in thinking that one can avoid the widespread skepticism that conciliationism
seems to suggest by using permissiveness to defend steadfastness. For the connection between permissiveness
and skepticism about the truth of P or not-P is even more direct—one need not even worry about agreement or
disagreement to generate skeptical doubt. Suppose I am the only one who has ever considered a certain permissive
case. I come to believe P on the basis of some set of evidence e. The fact that there is no one disagreeing
doesn’t affect the worry that my evidence doesn’t make it likely that P is true, since e could be used to show
that not-P is justified.16 Permissiveness is a good means to cut out the middleman of disagreement and go
directly to skepticism.
14 White, “Epistemic Permissiveness.” Christensen, “Conciliation, Uniqueness and Rational Toxicity.”
15 Cf. White, “Epistemic Permissiveness.” Christensen, “Conciliation, Uniqueness and Rational Toxicity.”
16 White, “Epistemic Permissiveness.”
In what follows, AP is the primary sense of ‘epistemic peer’ we will be interested in. Occasionally, we
will also reference the idea of CP. The reasons for focusing on these two will become obvious as we proceed.
3. Dogmatism, Skepticism, and Skeptical Dogmatism
It will help to say a few clarificatory words about ‘skepticism’ and allied notions. Conciliationism and
steadfastness indicate how much change in doxastic attitude (if any) there should be in light of evidence of
disagreement. We can also classify different levels of credence in some proposition P as follows. Let us
understand ‘dogmatism’ as the view that subject S has a credence of greater than 0.5 concerning some
proposition P. So, suppose you believe there is a 60% chance of rain tomorrow. You might say, “It will
probably rain tomorrow” and this would count as dogmatism in our sense. ‘Skepticism’ is the view that some
subject S has a credence of 0.5 that some proposition P is true. This sense of ‘skepticism’ follows the
recommendation of Sextus Empiricus:
The Skeptic Way is a disposition to oppose phenomena and noumena to one another in any way
whatever, with the result that, owing to the equipollence among the things and the statements thus
opposed, we are brought first to epochè and then to ataraxia. … By “equipollence” we mean
equality as regards credibility and the lack of it, that is, that no one of the inconsistent statements
takes precedence over any other as being more credible. Epochè is a state of the intellect on
account of which we neither deny nor affirm anything.17
Finally, ‘skeptical dogmatism’ is the view that a subject S has a credence of less than 0.5 concerning some
proposition P. Skeptical dogmatism about “It will rain” translates into something like denying “It will rain” or
holding “It probably won’t rain”. As the quote above indicates, Sextus Empiricus would not approve of skeptical
dogmatism since he does not approve of denying any more than affirming some proposition.
It should be clear that the conciliationism/steadfastness distinction is independent of the
dogmatism/skepticism/skeptical dogmatism distinction: Suppose I am skeptical about some proposition P
(credence = 0.5), and I find that you disagree: you are a dogmatist about P (credence = 0.7). If I believe
conciliationism applies in this instance, I should become a dogmatist about P (credence = 0.6). If I believe in
steadfastness, then I should remain a skeptic.
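A small sketch (the labels are mine, tracking the definitions above) classifies credences and reproduces the equal-weight revision in this example:

```python
def classify(credence: float) -> str:
    """Label a credence per the dogmatism/skepticism taxonomy above."""
    if credence > 0.5:
        return "dogmatism"
    if credence < 0.5:
        return "skeptical dogmatism"
    return "skepticism"

mine, yours = 0.5, 0.7
revised = (mine + yours) / 2         # equal weight: split the difference
print(classify(mine))                # skepticism
print(revised, classify(revised))    # 0.6 dogmatism
```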
17 Benson Mates, The Skeptic Way (New York: Oxford University Press, 1996). Pp. 8-10.
As is often observed, almost everyone thinks that steadfastness is appropriate under some circumstances
and conciliationism is appropriate in others.18 The real source of controversy is under which circumstances each
is appropriate. Accordingly, let us think of ‘epistemic humility’ as the degree of the disposition to revise one’s
confidence in light of countervailing evidence, including epistemic disagreement; whereas, ‘epistemic egoism’
is the degree of the disposition to retain one’s confidence in light of countervailing evidence, including evidence
from disagreement.19 So, ‘conciliationism’ and ‘steadfastness’ indicate the degree of revision (from no revision
to equal weight), while ‘epistemic humility’ and ‘epistemic egoism’ indicate the degree of resistance to change.
To illustrate, in TWO FOR DINNER, if I remain steadfast, then I have a higher degree of epistemic egoism as
compared to if I revise my credence downwards in accordance with conciliationism. Alternatively, we might
say that when I revise my confidence down I exhibit greater epistemic humility than in the case where I remain
steadfast. Suppose TWO FOR DINNER is changed such that you have a terrible track record for getting the
total right: you are right only 50% of the time. In this case, I may remain steadfast while demonstrating little
epistemic egoism. Since you are an “epistemic inferior”, the countervailing evidence against my belief that the
tab is $43 is much less in this instance. The example might be changed in the other direction: while I am in the
restaurant bathroom you take a picture and post our restaurant check on Facebook. Fifty of your chartered
accountant Facebook friends, with calculators in hand, agree with you that the correct number is $45. If I
remain steadfast under these conditions, I exhibit a high degree of epistemic egoism, since my belief faces the
defeater of so many epistemic superiors saying otherwise.
4. Conciliationism and Skepticism
As noted, it is often said that conciliationism leads to skepticism. In a review article on disagreement,
Christensen claims:
The most obvious motivation for Steadfast views on disagreement flows from the degree of
skepticism that Conciliationism would seem to entail. There must be something wrong, the
thought goes, with a view that would counsel such widespread withholding of belief. If you have
an opinion on, for example, compatibilism about free will, scientific realism, or contextualism
about knowledge, you must be aware that there are very intelligent and well-informed people on
18 Worsnip, “Disagreement about Disagreement?”
19 It would perhaps be more accurate to say ‘putative counter evidence’ since the defeating evidence might be thought defeated by the
epistemic egoist. If you are strongly attracted to epistemic egoism, then you might read ‘evidence’ as ‘putative evidence’.
the other side. Yet many are quite averse to thinking that they should be agnostic about all such
matters. The aversion may be even stronger when we focus on our opinions about politics,
economics, or religion.20
Richard Feldman provides another example in a seminal article on peer disagreement:
That is, consider those cases in which the reasonable thing to think is that another person, every
bit as sensible, serious, and careful as oneself, has reviewed the same information as oneself and
has come to a contrary conclusion to one’s own. And, further, one finds oneself puzzled about
how that person could come to that conclusion…. These are cases in which one is tempted to say
that ‘reasonable people can disagree’ about the matter under discussion. In those cases, I think,
the skeptical conclusion is the reasonable one: it is not the case that both points of view are
reasonable, and it is not the case that one’s own point of view is somehow privileged. Rather,
suspension of judgment is called for.21
The thought that disagreement supports skepticism has a long history going back to the ancient skeptics. Indeed,
disagreement is one of the modes of skepticism discussed by Sextus Empiricus.22
It is helpful to think of disagreement in terms of the very general pattern of skeptical underdetermination
argument also found with the Ancient skeptics:
U1: Underdetermination Justificatory Principle: If h1 and h2 are incompatible hypotheses and e
is S’s total evidence, S is justified in believing h1 only if pr(h1/e) > pr(h2/e).23
U2: It is not the case that pr(h1/e) > pr(h2/e) for S.
UC: S is not justified in believing h1.24
20 Christensen, “Disagreement as Evidence.” Pp. 757-758.
21 Feldman, “Epistemological Puzzles about Disagreement.” P. 235.
22 Diego E. Machuca, Disagreement and Skepticism, vol. 46 (Routledge, 2013). Harald Thorsrud, Ancient Scepticism (Routledge, 2014).
23 This is nearly the version of the principle provided by Allan Hazlett, “How to Defeat Belief in the External World,” Pacific
Philosophical Quarterly 87 (2006): 198–212. P. 200. Clearly, we are thinking of epistemic probability here. For discussion of the relationship
between this version of the underdetermination principle and other versions see Mark Walker, “Underdetermination Skepticism and Skeptical-Dogmatism,” International Journal for the Study of Skepticism, Forthcoming.
24 Presentations of the underdetermination argument that follow this pattern include Duncan Pritchard, Epistemic Luck (New York: Oxford
University Press, 2005)., and Jonathan Vogel, “The Refutation of Skepticism,” in Contemporary Debates in Epistemology (New York: Blackwell,
2005), 72–84. Hazlett, “How to Defeat Belief in the External World.”
Applying the underdetermination argument to TWO FOR DINNER, my reasoning might be
reconstructed as follows: I calculate the tab as $43 (= h1). Initially, I’m justified in believing that h1 is true.
When I find out that you have calculated the tab as $45 (=h2), then I have reason to believe that it is just as
likely that h2 is true (given conciliationism), so I am not justified in believing h1. Of course, I have no more
reason to believe h2 either, so it seems appropriate to suspend judgment. As we shall see below, defenses of
steadfastness involve denying one of the two premises of the underdetermination argument, hence, the
underdetermination argument provides a useful way to think about the disagreement about disagreement.
5. Probably Wrong in Disagreements About Contraries
In the next few sections, we will assume conciliationism and show that it leads to skeptical dogmatism.
We shall assume also that most of us allow a modicum of fallibilism to many of the things we believe. If this is
so, then the correct conclusion to draw about TWO FOR DINNER is skeptical dogmatism. As we noted, the
skeptical position is that belief that the tab is $43 has a 0.5 credence. However, the propositions that the tab is
$43 and the tab is $45 are not contradictories, but contraries: both cannot be true, but both can be false. Since
we are allowing a modicum of fallibilism, let us allow that there is some chance that both estimates are wrong.
Even if we allow a small probability of mutual error, my credence that the tab is $43 per person must be
less than 0.5. To illustrate, let us suppose that I have a credence of 0.001 that both propositions are false. The
maximum credence I can have that $43 is correct, given conciliationism, is 0.4995. Or to put the point in a
slightly different manner, if I give equal weight to our two views, and allow a modicum of fallibilism to the
effect that we might both be wrong, then it is more likely that I am wrong than that I am right, since I am wrong
both when you are right and when we are both wrong.
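In numbers: if r is the credence I reserve for the possibility that we are both wrong, equal weighting leaves (1 − r)/2 for my own estimate. A sketch (the helper is hypothetical, mine for illustration):

```python
def max_own_credence(n_estimates: int, pr_all_wrong: float) -> float:
    """Equal-weight credence left for one's own estimate after reserving
    some credence for the possibility that every disputant is mistaken."""
    return (1.0 - pr_all_wrong) / n_estimates

mine = max_own_credence(2, 0.001)
print(round(mine, 4))        # 0.4995: just below 0.5
print(round(1.0 - mine, 4))  # 0.5005: pr(I am wrong) exceeds pr(I am right)
```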
This shows an inherent weakness in the usual manner of modeling disagreement, which we will refer to
as the ‘binary contradictories model’ (or simply the ‘binary model’ for short). The binary model says
disagreement is best modeled as one party asserting ‘P’ and the other party asserting ‘not-P’.25 Admittedly, on the
face of it, this looks plausible enough. Suppose we model the TWO FOR DINNER example in this way. I
claim:
P = Our share of dinner is $43.
25 I will add use/mention punctuation only where I think it helps.
It looks appropriate to say that you claim not-P. However, this is implausible. Imagine we then take out our
calculators and, with their help, both agree that the total is actually $44. You say: “See, I was right: the tab
is not $43.” Your joke turns on conflating contradictories and contraries. It would be better to model the
disagreement as ‘contraries disagreement’. I assert P, you assert:
Q = Our share of dinner is $45.
We allow some small credence to:
R = Our share of dinner is neither $43 nor $45.
Modeling the disagreement as a binary disagreement, as opposed to a contraries disagreement, makes it much
harder to appreciate the possibility of mutual error, or what we will refer to as ‘contraries fallibilism’. Yes, it is
true that Q implies not-P, but ‘Q’ and ‘not-P’ are not identical. Leaving out the fact that your reason for
asserting not-P is Q fails to correctly model those situations where not-P is true and Q is false (e.g., when the
tab is $44).
It is worth thinking about how this argument challenges the underdetermination argument. If we think of just P
and Q, and ignore R, then the underdetermination argument appears apt: P and Q are equipollent and so we are
not justified in believing either. But P and not-P are not equipollent. That is, not-P is equivalent to (Q or R). So pr(not-P) >
pr(P), which is to say that U2 is false when applied to P and not-P. In other words, the underdetermination
argument is powerless to show that I am not justified in believing that my share of the dinner is probably not
$43 in light of accuracy peer disagreement about contraries.
6. Probably Wrong in Multi-Proposition Disagreements
Let us think of ‘multi-proposition disagreements’ as disagreements in which three or more
contrary positions P, Q, R... are at issue among three or more disputants who are Accuracy Peers. To
illustrate, consider a three person analog of our previous example:
THREE FOR DINNER: Suppose you and I go for dinner with Steve on a regular basis. We
always split the check equally, not worrying about whose dinner cost more, who drank more
wine, etc. We also always add a tip of 23% and round up to the nearest dollar when dividing the
check. (We reason that 20% is not enough and 25% is ostentatious.) The three of us pride
ourselves on being able to do simple arithmetic in our heads. Over the last five years we have
gone out for dinner approximately 100 times, and three of those times we disagreed about the
amount each of us should pay. One time I made an error in my calculations, another time you made an error in
yours, and the third time it was Steve. In each case, we settled the
dispute by taking out a calculator. On this occasion, I do the arithmetic in my head and come up
with $43 each; you do the arithmetic in your head and you come up with $45 each, while Steve
comes up with $46 each. The three of us had the same amount of wine and coffee, and none of us
are more tired or otherwise distracted than the others. How confident should I be that the tab
really is $43, how confident should you be that the tab really is $45, and how confident should
Steve be that the tab really is $46 in light of this disagreement?
Applying conciliationism, we get the result that the maximum credence I should have that $43 is the correct
amount is 0.3333, since now I have two epistemic peers. Here my reasoning might be that I have good evidence
that the three of us are AP, that is, equally reliable in terms of getting the right total, and since, at most, one of
us is correct, I should put no more than 0.3333 credence in the claim that I am right. The same point would
apply to your total and Steve’s.
As in TWO FOR DINNER, the claims that the tab is $43, $45 or $46 are contraries: only one can be
correct, but all three could be false. Of course, we might allow that the credence that all three of us made an
error is less than it is in the two person case, perhaps as little as 0.0001. In any event, contraries fallibilism
and multi-proposition disagreement each lead to skeptical dogmatism on their own, so it should be no surprise that
together they add up to an even stronger case for skeptical dogmatism.
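A quick check of the combined effect (plain arithmetic, using the figures above):

```python
# Equal weight among three accuracy peers, reserving a sliver for mutual error:
pr_all_wrong = 0.0001
my_max = (1.0 - pr_all_wrong) / 3
print(round(my_max, 4))  # 0.3333: already well below 0.5
print(my_max < 0.5)      # True: I should think my own estimate probably false
```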
We can spell this out in terms of the radical underdetermination argument, which is the skeptical
dogmatist’s analog to the skeptic’s underdetermination argument:
RU1: Disjunctive Radical Underdetermination Principle: If h2 and h3 are competitor hypotheses to h1
and to each other, and S’s evidence for believing h1 is less than S’s evidence for believing (h2 or h3),
then S is justified in believing that h1 is probably false.
RU2: S’s evidence for believing h1 is less than S’s evidence for believing (h2 or h3).
RUC: S is justified in believing h1 is probably false.26
26 Walker, “Underdetermination Skepticism and Skeptical-Dogmatism.” Mark Walker, “Occam’s Razor, Dogmatism, Skepticism, and
Skeptical Dogmatism,” International Journal for the Study of Skepticism, Forthcoming.
An application to multi-proposition disagreements is perhaps fairly obvious. In THREE FOR DINNER, three
hypotheses are at issue: the dinner tab is h1 = $43, h2 = $45, or h3 = $46. Since by assumption, my evidence
about the reliability of each of us is the same, that is, we all have an equally good track record calculating the
dinner tab, it follows that my evidence for h1 is less than my evidence that either h2 or h3 is correct. So, RU2 is
satisfied in this instance. RU1 is, I take it, plausible, but it is worth taking a minute to say why. If the evidence
for h1 is less than the evidence for (h2 or h3), then the maximum epistemic probability of h1 is 0.49. The
epistemic probability of h1 cannot be more, say, 0.5, because then the probability of (h2 or h3) would have to be
greater than 0.5, leading to a combined probability of greater than 1.0. Since the maximum probability of h1 is
0.49, we are justified in believing that h1 is probably false.
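The constraint behind RU1 is again additivity. If pr(h1) were as high as 0.5 while the evidence favored (h2 or h3) strictly more, the probabilities of exclusive hypotheses would sum past 1 (a sketch, with 0.51 merely standing in for “greater than 0.5”):

```python
pr_h1, pr_h2_or_h3 = 0.5, 0.51
# Exclusive hypotheses must have probabilities summing to at most 1:
print(pr_h1 + pr_h2_or_h3)  # 1.01 > 1.0: incoherent, so pr(h1) must fall below 0.5
```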
Part of the attraction of the radical underdetermination argument is that its premises are weaker in
significant ways than the corresponding underdetermination argument. Consider first that RU1 is weaker than
U1 in the sense that it takes a stronger defeater to motivate doxastic revision than U1 does. To see that this is so, we
need to think about defeaters for a moment. In terms of strength, a distinction is often drawn between rebutting
and undercutting defeaters.27 There is a rebutting defeater for some belief p if there is a reason to believe not-p,
or some proposition q that is logically incompatible with p. There is an undercutting defeater for some belief p
if there is no longer reason to believe p, but no reason to believe not-p, or some logically incompatible
proposition q.28 For reasons that will emerge below, we shall refer to the former as ‘dogmatic defeaters’ and the
latter as ‘skeptical defeaters’.
Pollock’s example of a visit to a widget factory is a standard way to illustrate the distinction. Suppose
you see red widgets on a conveyor belt and form the belief that there are red widgets. You then hear that the
widgets are being illuminated by red light in an attempt to detect defects. You now have a skeptical defeater for
your belief that there are red widgets. You reason that the widgets might not be red, but simply look red under
the light. But at this point, you have no reason to suppose that the widgets are not red. Later, you hear from a
reliable source that they are blue widgets that simply look red under the red light: you have a dogmatic defeater.
You have good reason to believe an incompatible proposition: the widgets are blue.
Suppose then that you have a skeptical defeater for your belief that the widgets are red (=h1). You allow
that it is as likely as not that the widgets are some color other than red. U1 in this case dictates that you are not
27 John Leslie Pollock, Contemporary Theories of Knowledge (Rowman & Littlefield, 1986).
28 Ibid. Pp. 38-39.
justified in your belief about h1. Notice that RU1 is silent on whether a belief is justified in this case. A
dogmatic defeater is required to satisfy RU1, so RU1 is weaker in that it requires a stronger defeater to satisfy it.
RU2 is weaker than the corresponding premise, U2, in the underdetermination argument in terms of how
much epistemic humility is required to satisfy each premise. Or to put the point the other way around, much
more epistemic egoism is required to resist U2 than RU2. Consider that surprisingly little epistemic egoism is
necessary to maintain a dogmatic position using the binary disagreement model. For example, suppose we use
the binary model to understand the disagreement in TWO FOR DINNER. As before, suppose initially I am
supremely confident (credence = 1.0) that the tab is $43. Upon realizing we disagree, I reduce my confidence to
0.55. If perfect epistemic humility requires conciliationism, then I am pretty close to the ideal, even though I am
still a dogmatist in this instance. Only perfect humility would require skepticism on my part. So, with only a
smidgeon of epistemic egoism, I can reject U2. A smidgeon of epistemic egoism is not sufficient to deny the
corresponding premise in the radical underdetermination argument. Consider that in THREE FOR DINNER,
if I give myself the same slight epistemic edge (plus 0.05 over what conciliationism demands), assigning 0.38 to the
proposition that the tab is $43, 0.31 to your estimate of $45, and 0.31 to Steve’s estimate of $46, I am well
short of what is required for dogmatism.
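To see the contrast in one place (a sketch, labels mine): the same small edge over equal weight yields dogmatism on the binary model but falls well short of it in the three-way case:

```python
edge = 0.05                      # a "smidgeon" of epistemic egoism
binary = 1 / 2 + edge            # TWO FOR DINNER on the binary model
three_way = 1 / 3 + edge         # THREE FOR DINNER
print(binary, binary > 0.5)                  # 0.55 True: still a dogmatist
print(round(three_way, 2), three_way > 0.5)  # 0.38 False: short of dogmatism
```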
7. Many Philosophical Disputes Are About Contraries
In order to demonstrate that similar reasoning applies to philosophical disagreements, we need to show
that many philosophical disagreements are multi-proposition disagreements. It is true that philosophical debates
are often cast in terms that seem to suggest the binary model. However, the question is whether such debates are
best characterized as philosopher X arguing for P and philosopher Y arguing for not-P. Imagine Daniel Dennett
and Alvin Plantinga are invited to debate theism by some university organization that has agreed to pay tens of
thousands of dollars to each for their performance. The poster for the event reads:
Does God exist? Professors Alvin Plantinga and Daniel Dennett take opposing sides. The event
promises to be a spectacular philosophical bloodbath of epic proportions: Two men enter, one
man leaves.
We may think of the debate proposition as
P = We are justified in believing God exists.
It is true that Dennett will argue against P, and so gives the appearance of the debate having the form of the
binary model; however, it is more accurate to say that Dennett will argue for a proposition that implies the
falsity of P. In particular, Dennett’s brief will be:
Q: We are justified in believing atheism.
Q, we will assume, implies not-P, but ‘Q’ and ‘not-P’ are not equivalent. This is confirmed when we realize we can
imagine at least two other potential debating partners for Plantinga. Plantinga might be invited on another
occasion to India to debate a Hindu philosopher who believes we are justified in believing polytheism:
R: We are justified in believing in multiple Gods.
R, we will assume, implies not-P. (P is committed to the existence of one true God; the polytheism of R is
committed to a multiplicity of divine beings.) Finally, we might imagine an agnostic debating Plantinga:
someone who believes that we do not have justified belief in God, nor do we have justified belief in the nonexistence of God:
S: We are not justified in believing or disbelieving in the existence of any divinities.
S, we shall assume, implies not-P as well. So, although these three positions imply not-P, the positions
themselves are clearly different from not-P, which lends support to the idea that the disagreement about the
existence of God is best modeled as a multi-proposition disagreement.
The fact that the binary model is inadequate to model the disagreement between Dennett and Plantinga
is further confirmed when we imagine the day of the debate unfolding as follows. Dennett makes opening
remarks in favor of atheism and Plantinga responds using the ontological argument. Just as the debate is about
to be opened up to questions from the audience, a flash of lightning temporarily blinds everyone in the
auditorium and Zeus, Poseidon and Hades appear on stage. They put on a display of their powers sufficient to
convince all present that they are the real deal. Plantinga then remarks that he wins the debate because Dennett
is wrong: there is a divine presence. Dennett remarks that Plantinga is wrong because his Christian God does
not exist (and Zeus, Poseidon and Hades agree with Dennett on this point). The moderator of the debate
declares both men wrong.
The point here is rather banal: the disagreement about atheism and monotheism is about contraries, not
contradictories. This is hardly news for anyone who has taught these disputes. Part of the pedagogical work is to
help students see the logical relationships between the various positions, e.g., that atheism is not the same thing
as agnosticism. Appealing to the binary model of philosophical disagreement is an invitation to forget this
important point.
In the political realm, it might be tempting to think of the disagreement between Rawls29 and Nozick30 in
terms of a binary model, yet the multi-proposition disagreement model is more apt. Consider Goodin’s
utilitarianism as public philosophy31 and Cohen’s socialism32 as competitors to Rawls’ justice as fairness and
Nozick’s libertarianism. Other disagreements that have the structure of multiple-proposition disagreements
include:
Ontology: materialism vs. immaterialism vs. dualism
Ethics: virtue ethics vs. consequentialism vs. deontology
Metaphysics: compatibilism vs. determinism vs. libertarianism
Philosophy of Science: realism vs. empirical realism vs. constructivism
It may be that there are some, perhaps many, binary disagreements in philosophy, e.g., abortion is
permissible/not permissible, capital punishment is permissible/not permissible, and
compatibilism/incompatibilism.33 We can rest content here with the more limited claim that many important
philosophical disputes are multi-proposition debates.
8. Conciliationism and Multi-Proposition Philosophical Disagreements
If conciliationism is applied to multiple-proposition philosophical disagreements, then we should
conclude that proponents of each view are probably wrong. For example, let us imagine Rawls endorses
conciliationism. He disagrees with Nozick, Goodin, and Cohen, but he regards them as AP. So, he should
reason that his justice as fairness view has, at most, a 0.25 probability of being correct, given that each of his
peers is just as likely to be correct. Moreover, if he is a fallibilist and allows that all four views might be false,
he should reason that the probability that the justice as fairness view is correct is less than 0.25.
29 J. Rawls, A Theory of Justice (Harvard University Press, 1971).
30 R. Nozick, Anarchy, State and Utopia (Basic Books, 1974).
31 Robert E. Goodin, Utilitarianism as a Public Philosophy (Cambridge University Press, 1995).
32 Gerald Allan Cohen, Why Not Socialism? (Princeton University Press, 2009).
33 Even these examples are not clear cut. The permissible/impermissible abortion question can be parsed further: permissible in the first
trimester, second trimester, and third trimester, for example. Also, cases where the mother’s life is at risk, the pregnancy is the result of
rape, or where the fate of the universe hangs in the balance, may divide proponents on both sides of the permissible/impermissible
opposition.
But, couldn’t Rawls take a page from President George Bush and simply define the disagreement in
binary terms? He might say either you are for justice as fairness (J) or you are against it (not-J). There are at
least three problems with this maneuver.
First, as already indicated, the model doesn’t capture all there is to the actual disagreement. For
example, the proposal lumps Cohen, Nozick, and Goodin as proponents of not-J, but this leaves unexplained all
the disagreement amongst proponents of not-J. It is not hard to imagine that Nozick might find Cohen’s
socialism more objectionable than Rawls’ justice as fairness. So, the binary model doesn’t explain why those on the
“not-J team” squabble at least as much amongst themselves as they do with team J. The multi-proposition model has a straightforward
explanation: Cohen, Nozick, and Goodin claim not-J as an implication of their preferred philosophical theories.
The binary model papers over the differences amongst Cohen, Nozick, and Goodin.
Second, there does not seem to be a principled answer to the question: Which binary model should be
adopted? Nozick, it seems, could just as easily invoke Bush and say either you are for libertarianism (L) or you
are against it (not-L). Cohen and Goodin could say similar things (mutatis mutandis) as well for socialism (S)
and utilitarianism (U). If only one binary model is correct, then how can we say, in a principled way, which
one? If all the binary models are correct, then it is not clear that there is going to be any purchase over the
multiple-proposition disagreement model. After all, if we use the set of binary models, it will turn out that, at
most, one of (J), (L), (S), and (U) is true, but more than one of (not-J), (not-L), (not-S), and (not-U) may be true,
which is just to say they are contraries. Modeling the disagreement as sets of binary disagreements complicates
matters without adding anything of value.
Third, even if it could somehow be maintained that the best way to model disagreement about
distributive justice is (J) versus (not-J), this would help Rawls only to the extent that he believes that the number
of epistemic peers disagreeing is irrelevant.34 In the example we are working with, on the multi-proposition model,
there are four positions with one proponent each. On the binary model there is a 3 to 1 ratio in favor of
34 As Jennifer Lackey points out, many do think that numbers matter. If we change the THREE FOR DINNER example such that both
you and Steve believe the tab is $45, this seems to provide even greater evidence that I am wrong than in the TWO FOR DINNER version. Usually,
the thought that numbers matter is supplemented with the idea that there is epistemic independence. If masses of libertarians are produced by
brainwashing at the Nozick summer camp for kids, graduates from the camp won’t add weight to the libertarian cause. See Lackey for a thoughtful
discussion on how to spell out what ‘independence’ might mean here. (Lackey, “Disagreement and Belief Dependence.”) Here I ignore the question
of numbers as it involves all sorts of hard work. We would need to somehow operationalize ‘epistemic peer’ for philosophical positions and then go
out and do some head counting. We would then have to figure out how to add up the epistemic weight of differing numbers. In theory, however,
Rawls might remain a dogmatist about justice as fairness, even though it is a multi-proposition disagreement, so long as there are more on team
justice as fairness than the other teams. Cf. Philip Pettit, “When to Defer to Majority Testimony–and When Not,” Analysis 66, no. 291 (2006): 179–
87.
(not-J) over (J). The general principle is perhaps obvious: the more one defines one’s position as the center of
any disagreement, the larger the pool of opposition.
9. Midway and Multi-Proposition Philosophical Disagreements
Perhaps somewhat surprisingly, even the epistemic egoism of midway is not sufficient for dogmatism in
multi-proposition disagreements. Recall, midway permits a considerable “home field advantage”: 50% more
credence in one’s initial proposition than that permitted by conciliationism. This egoism itself is quite
remarkable. As we said, conciliationism sets the following credences for Rawls:35
Pr(J) = 0.25
Pr(L) = 0.25
Pr(U) = 0.25
Pr(S) = 0.25
Midway permits 50% more credence than conciliationism, so if Rawls follows midway he would distribute his
credences like so:
Pr(J) = 0.37
Pr(L) = 0.21
Pr(U) = 0.21
Pr(S) = 0.21
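These figures come from scaling the equal-weight share of 0.25 by 1.5 and splitting the remainder among the three rivals; a sketch (the text rounds 0.375 and 0.2083 to 0.37 and 0.21 so the four credences sum to 1):

```python
n = 4
mine = (1 / n) * 1.5           # midway: 50% more than the equal-weight 0.25
rival = (1 - mine) / (n - 1)   # remainder split among the three rivals
print(mine, rival)             # 0.375 0.20833...; rounded in the text to 0.37 and 0.21
```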
Imagine Rawls passing Nozick in the halls of Harvard and saying, “Hi Bob, I’m 70% more likely to determine
the correct political philosophy than you.” Putting aside the social inappropriateness of such a comment, the
philosophical worry is how unfounded the claim looks. True, it is easy to imagine thinking that one might have
some small edge. Perhaps you recently thought of a novel argument for your position, or have a strong
unpublished criticism of a competing position that you think gives you an edge. Still, it is a giant leap from this
to thinking that you are 70% more likely to determine the truth of the issue under disagreement.
Of course we might balk at something as precise as “70% more likely”, but this would be to miss the
more important point that making such an extravagant claim for one’s philosophical abilities seems entirely
unfounded. Still, we may let this pass since midway is not sufficient to thwart skeptical dogmatism. As we can
see from the above assignments of credences, Rawls would still have to admit that it is more likely that one of
35 We will ignore the issue of fallibilism here to keep things simple.
his colleagues is correct and he is wrong, even with his impressive epistemic edge. So, even assuming midway,
Rawls should be a skeptical dogmatist about justice as fairness.
10. Steadfastness and Multi-Proposition Philosophical Disagreements
In a quote above, Christensen noted that “the most obvious motivation for Steadfast views on
disagreement flows from the degree of skepticism that Conciliationism would seem to entail.”36 Presumably,
skeptical dogmatism is abhorrent to those who find skepticism an unwelcome consequence. Unfortunately for
friends of steadfastness, as we shall see, it is much more difficult to defend steadfastness in multi-proposition
disagreements than in binary disagreements. Defenses of steadfastness in the literature tend to divide along two
different strategies. One is to find an epistemic “edge” that makes it plausible to attribute to a disputant a higher
degree of likelihood in determining the truth than an opponent. The other is to defend steadfastness in
conjunction with permissiveness. We will take these in turn.
To explore the idea of an epistemic edge, let us think for a moment about disagreements between those
who are not epistemic peers. Let us think of an ‘epistemic superior’ as: X is an epistemic superior to Y in some
disagreement, if, and only if, X is more likely to determine the truth than Y in the disagreement. Let us think of
an ‘über epistemic superior’ as: X is an über epistemic superior to Y and Z in some disagreement, if and only if,
X is more likely to determine the truth in the disagreement than Y and Z combined. As is perhaps evident by now, the
notion of an epistemic superior is not enough to defend steadfastness in multi-proposition disagreements. For
even if one is more likely to determine the truth of some dispute than each of one’s colleagues taken individually, it does not
follow that one is more likely to determine the truth than all of one’s colleagues combined. So, it is the idea of an über
epistemic superior that is of primary importance for us. It will help to have an example of an über epistemic
superior status to work with. Imagine that THREE FOR DINNER is modified such that you are out to dinner
with two twelve-year-olds of average mathematical ability—call this modified version KIDDIE DINNER. It
seems plausible to think that you are an über epistemic superior in this case, and steadfastness is appropriate
when you find they have different estimates for the restaurant tab.
If one defends steadfastness in multi-proposition disagreements by claiming that one is an über
epistemic superior, then at least one aspect of the puzzle of peer disagreement disappears. For if you remain
steadfast in a multi-proposition disagreement, then you cannot consistently think of yourself as AP with your
disputants. Accuracy Peers, as we defined the term above, are peers that are approximately equally likely to
36 Christensen, “Disagreement as Evidence.” P. 757.
determine the truth of the matter under dispute. Thus, to claim that one is an über epistemic superior is to reject
one’s disputants as AP. Accordingly, when you claim to be an über epistemic superior to your colleagues in
multi-proposition disagreements, then there is little puzzle about doxastic revision: you are not required to
revise your beliefs about the matter.
However, a new puzzle emerges: unlike KIDDIE DINNER, there is a puzzle about how you could
consider yourself an über epistemic superior to your philosophical colleagues. One possibility is to appeal to
first person asymmetries in evidence or “personal knowledge”.37 For example, in TWO FOR DINNER, it
seems plausible that I might have access to my own mental states in a manner that is importantly different from
my access to your mental states. I know through reflective access to my own mental states that I don’t have a
secret drinking problem, or that I am not hiding some distracting personal problem for the sake of being a good
dinner companion, etc. On the other hand, evidence for my belief that you do not have a secret drinking
problem or that you are not hiding a distracting personal problem for the sake of being a good dinner
companion is much less direct.
For the sake of the dialectic, let us concede that these first person asymmetries are sufficient to justify
attributing to oneself the status of epistemic superior in TWO FOR DINNER. Proponents of steadfastness
should welcome this concession, since with it, the underdetermination argument used to underwrite skepticism
in binary disagreements may be rejected. Recall that a conciliationist will defend U2 when applied to TWO
FOR DINNER by claiming that equal epistemic weight should be given to the $43 and $45 hypotheses. But if
one is an epistemic superior, this is all that is needed to reject U2 and hand victory to steadfastness.
The same concession is not sufficient to reject RU2, the radical underdetermination analog of U2. As we
noted above, to reject RU2 requires that one attribute to oneself more than twice each disputant’s likelihood of determining
the truth in a multi-proposition dispute where the remaining credence is divided equally. So, for example, to
remain steadfast and believe that the tab is $43 in THREE FOR DINNER would require that I divide my
credence as follows: 0.51 pr($43), 0.24 pr($45), and 0.24 pr($46). This means that personal knowledge would
have to do some heavy lifting to show that I am twice as likely as each of my colleagues to calculate the tab
correctly. It would not be enough for me to think that either you or Steve might be tired, tipsy, or distracted. I
would have to assign a significant probability to the claim that you are both tired, tipsy, or distracted, and, indeed, incapacitated
enough that your individual performance is likely less than half as reliable as mine in calculating the tab.
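The threshold can be made explicit (a sketch, names mine): staying a dogmatist against two equally credible rivals requires better-than-even credence in one’s own estimate, and hence rating oneself more than twice as likely to be right as each rival:

```python
def steadfast_split(n_rivals: int, mine: float = 0.51) -> list[float]:
    """Keep `mine` above 0.5 (dogmatism) and split the rest evenly."""
    rival = (1.0 - mine) / n_rivals
    return [mine] + [rival] * n_rivals

creds = steadfast_split(2)
print(creds)                # [0.51, 0.245, 0.245]: cf. the 0.51/0.24/0.24 split
print(creds[0] / creds[1])  # ~2.08: more than twice each rival's share
```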
37 Jennifer Lackey, “What Should We Do When We Disagree?,” in Oxford Studies in Epistemology (Oxford University Press, 2008). David
Enoch, “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement,” Mind, 2010, 953–97.
The preceding suggests that we need to “think big” to explain why it is appropriate to attribute to oneself
über epistemic superior status in philosophical disagreements. The epistemic edge has to be a significant and
enduring advantage. By ‘significant’ I mean simply that whatever epistemic edge is used to explain one’s
superior performance, it must be enough to merit über epistemic status. As for enduring, the idea is that it is not
enough to think that one has a significant epistemic advantage on a particular day. Being tipsy, distracted, or
tired may explain über epistemic performance differences on a particular occasion (e.g., when one goes out with
a group of friends and is the designated driver), but philosophical disagreements between colleagues can span
decades, so we should expect that these differences will tend to even out over time. Perhaps you both are a bit
tired or distracted this week; next week it may be me.
KIDDIE DINNER provides an apt illustration of where these desiderata are met. The cognitive
advantage of the average adult over the average child is significant and enduring. No doubt the epistemic edge
has something to do with neurological differences between children and adults: children’s brains are still undergoing
substantial neurological development. So, the gulf between adult and child epistemic capacities does seem the
right sort of explanation to justify über epistemic status vis-à-vis children.
We need a similar type of explanation for how one could be an über epistemic superior to one’s
philosophical colleagues. Consider van Inwagen’s suggestion in the following passage, where he considers a
disagreement on a generic philosophical dispute between ‘Ism’ and ‘Nism’:
I can, consistently, believe it is rational for me to accept Ism and rational for other
philosophers to accept Nism. I can, without logical inconsistency, maintain that Nismists are,
through no fault of theirs, in epistemic circumstances that are (vis-à-vis the Ism/Nism question)
inferior to mine. Owing to some neural accident (I might say) I have a kind of insight into the,
oh, I don’t know, entailment relations among various propositions that figure in the Ism/Nism
debate that is denied to the Nismists. I see, perhaps, that p entails q (although I am unable to
formulate this insight verbally) and they are unable to see that p entails q. And this insight really
is due to a neural quirk (to borrow a phrase that Rorty used for a different purpose). It is not that
my cognitive faculties function better than theirs. Theirs are as reliable as mine. But they are not
identical with mine, and, in this case, some accidental feature of my cognitive architecture has
enabled me to see the entailment that is hidden from the Nismists.38
Notice that in the two-party disagreement that van Inwagen imagines, those without the “neural quirk” are in an
epistemically inferior position. Running with van Inwagen’s suggestion, I might remain steadfast and think of
myself as an über epistemic superior along these lines:
When I wave to colleagues in my department down the hall, I often secretly think that I
am right and they are all wrong about the various multi-proposition disagreements we
have. In order to do so, I reject the assumption that they are epistemic peers. I think of
them as my dim-witted but well-meaning colleagues. When I greet them, I think things
like “Hi, dim X” or “Hi, dim Y”, but, in an effort to keep a certain amount of collegiality
in the department, I say “Hi, X” or “Hi, Y”. After all, as Aristotle points out, there is such
a thing as excessive honesty. Dismissing them as über epistemic inferiors is, of course,
one way to manifest my disposition for epistemic egoism. But I am not ashamed of it. I
have been blessed with a neural quirk that gives me a much better chance at determining
the truth than all my colleagues combined. For too long, epistemic egoists have hidden in the
shadows. It is time we emerge from the closet and let the world know. Stand steadfast
with me and shout, “I am an epistemic egoist. I am so brilliant that I am more likely to be
right than all of my colleagues combined.”
I take it that not many will find this a philosophically attractive position. And not simply because of the social
awkwardness of calling so many of my philosophical colleagues über epistemic inferiors. Rather, it is the
relative dearth of evidence in support of such a claim. In the same article, van Inwagen compares his epistemic
prowess to that of David Lewis. Unfortunately, van Inwagen does not include in his paper the MRI scan of his
brain revealing the neural quirk that allows him to see entailments that Lewis cannot.
I can’t say for sure that van Inwagen is not joking in his appeal to a “neural quirk”, but the difficulty in
carrying through on the suggestion that one is an epistemic superior to one’s philosophical colleagues, never
38 Van Inwagen, "We're Right. They're Wrong." P. 27.
mind an über epistemic superior, is evident from the quoted passage itself. Notice that in the first part of the
passage, van Inwagen claims an epistemic edge: the Nismists are epistemic inferiors (or, as he puts it, in
epistemic circumstances inferior to van Inwagen's). The explanans is the "neural quirk" that enables the Ismists
to see an entailment that the Nismists are oblivious to. However, in the second part of the passage, van Inwagen
claims the Nismists do not have inferior cognitive faculties: their faculties are equally reliable. But surely the
faculties can't be equally reliable, since we were just told that a neural quirk adds reliability in detecting
entailments. Or, to put the point the other way around, someone who fails to see an entailment is surely at least
a little less reliable than someone who does see it, other things being equal.
In any event, the point stands. If you think of yourself as an über epistemic superior, then the problem of
peer disagreement does not even arise for you. It is not a puzzling case like THREE FOR DINNER, but a
no-brainer like KIDDIE DINNER. Again, this is not to say that there is no puzzle here, but it is not the same
puzzle as that of peer disagreement. The primary puzzle in cases of peer disagreement is to what extent, if any,
peer disagreement provides evidence for doxastic revision. When one claims to be an über epistemic superior,
the puzzle shifts to how one could plausibly claim to be an über epistemic superior. In terms of the radical
underdetermination argument, the puzzle is how to plausibly deny RU2 in philosophical disputes with
colleagues.
I suspect very few will be attracted to defending steadfastness dogmatism with the idea of being an über
epistemic superior. More plausible, it may be thought, is to weaken the link between holding a philosophical
belief and thinking it is likely true. The idea would be to say that holding a philosophical belief means merely
that it is rationally defensible:
When I hold a philosophical belief it merely means that it is rationally defensible. Others may
hold equally rational and incompatible views. Since it is possible for three or more people to
hold three or more incompatible views that are equally rational, disagreement provides no
reason in and of itself for doxastic revision. Furthermore, there is no claim to be an über
epistemic superior, or even AP. The issue of accuracy simply does not arise.
This is, of course, just to use permissiveness to defend steadfastness. The only difference here is that
permissiveness is now explicitly understood as permitting more than two incompatible but ideally rational
doxastic attitudes towards some disputed matter. It will help to distinguish these as bi-permissiveness and
tri-permissiveness.39
The dilemma to be put to tri-permissiveness is this: either tri-permissiveness includes the notion of
dogmatic defeat or it doesn't. Taking the former, we may explore it by considering a skeletal version of
tri-permissive justification that includes dogmatic defeat. Suppose the theory is this: a philosophical belief P is
justified only if P is internally coherent, and there is no dogmatic defeater for P. The details here, for example,
what it is exactly for a theory to be “internally coherent”, need not detain us. We will assume that proponents of
tri-permissiveness can provide a convincing analysis. What is important is that this understanding of
tri-permissiveness seems to permit us to construct different, logically incompatible views that are each
"internally coherent." Rawls can justifiably believe J on the basis of e, so long as J is internally coherent and
there is no dogmatic defeater for J, and Nozick can believe L on the basis of e, so long as L is internally
coherent and there is no dogmatic defeater for L, and so on. The problem, of course, is that the radical
underdetermination argument is precisely an argument that there is a dogmatic defeater for one's belief. If e
equally justifies J, L, S, and U, then e cannot make any one of them more probable than the others. In that case,
the maximum epistemic probability of each, given e, is 0.25. So there is a defeater for each of J, L, S, and U,
namely: the set of the other three is more likely true. So Rawls should accept defeat for J, given that e makes
the disjunction of L, S, and U more probable than J.
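To spell out the arithmetic (a minimal sketch, assuming for illustration that J, L, S, and U are mutually
exclusive and jointly exhaust the live options):

P(J | e) = P(L | e) = P(S | e) = P(U | e) = 0.25, and so
P(L or S or U | e) = 0.25 + 0.25 + 0.25 = 0.75 > 0.25 = P(J | e).

Given e, the disjunction of J's rivals is three times as probable as J itself, which is precisely what a dogmatic
defeater requires.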
Notice that simply giving up the notion of skeptical defeaters is not enough. For example, bi-permissiveness
might jettison the notion of skeptical defeaters, but accept dogmatic defeaters. Here the thought
might be that it is permissible to believe h1 on the basis of e, and permissible to believe h2 on the basis of e,
while acknowledging that e does not make h1 or h2 more probable. On this understanding, U1 could be
rejected while still holding on to the notion of dogmatic defeat.40 In other words, this view permits
justified belief so long as there is no incompatible hypothesis, or set of hypotheses, that is more likely true.
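For contrast, a quick illustration of why the two-hypothesis case is different (again assuming exhaustive,
mutually exclusive hypotheses): if e equally supports just h1 and h2, then P(h1 | e) = P(h2 | e) = 0.5, so no rival
hypothesis, and no set of rivals, is more probable than either, and no dogmatic defeater arises. Only when three
or more hypotheses split the probability does the disjunction of rivals overtake each individual hypothesis.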
This moderate view, one that rejects skeptical defeaters but not dogmatic defeaters, is not sufficient to answer
39 Typically, ‘permissiveness’ is defined as the denial of ‘uniqueness’, so the usual understanding of ‘permissiveness’ includes both
bi-permissiveness and tri-permissiveness. However, when working through the consequences of permissiveness, it is usually bi-permissiveness that
authors seem to have in mind, at least given the sorts of examples they use.
40 Arguably, this is how the entitlement strategy for dealing with underdetermination skepticism works. It rejects the idea that to defeat
skepticism one must show that some hypothesis has more evidence than its skeptical competitor, while allowing that if the evidence makes more
probable the negation of a putative entitlement proposition, the proposition should be rejected. See: Crispin Wright, “Warrant for Nothing (and
Foundations for Free)?,” in Aristotelian Society Supplementary Volume, vol. 78, 2004, 167–212. And Hazlett, “How to Defeat Belief in the External
World.”
the problem at hand. As we noted above, RU1 works with the notion of dogmatic defeaters, not skeptical
defeaters.
More promising is to completely unchain justification from any alethic considerations, and so jettison
any requirement about defeaters. The stripped-down version of tri-permissiveness, then, is: a philosophical
theory is justified if, and only if, it is internally coherent. The philosophical payoff is that, as a consequence, the
radical underdetermination argument is powerless to show that Rawls, Nozick, Cohen, and Goodin are not
justified in believing their political philosophies, so long as their views are internally coherent. The fact that the
radical underdetermination argument generates a dogmatic defeater is irrelevant.
The trouble with this response is that it does not disarm the radical underdetermination argument; it
simply makes the question of whether a philosophical theory is justified orthogonal to the question of whether
the theory is likely to be true. In other words, the pure form of tri-permissiveness permits one to say, “My
philosophical theory P is justified (because it is internally coherent), and I am justified in believing that my
philosophical theory is probably false.” If this sounds strange, it is because it goes against the usual thought that
justification is somehow connected with truth. And the permissive strategy can’t have it both ways. If
permissiveness makes justification entirely independent of truth, then there is no conflict between saying that
my philosophical theory is justified and probably false. On the other hand, if one wants to claim that, if a
philosophical theory P is justified, then one is in a position to deny that P is probably false, then there must be a
link between justification and truth. But then we are back to the problem that if one claims that P is justified,
and so probably true, then one must represent oneself as an über epistemic superior.
11. Objection: Skeptical Dogmatism is Self-Refuting
There is an obvious objection in the self-referential case: the debate over the epistemic import of
disagreement itself seems a multi-proposition disagreement. Some have suggested that dogmatism is still
permissible in light of disagreement with peers, others that skepticism is called for, and the present paper has
argued for skeptical dogmatism: our preferred philosophical positions are probably false. The problem, in a
nutshell, is that if
skeptical dogmatism is applied to itself, then it appears that skeptical dogmatism is probably false.
Dealing adequately with this objection is beyond the scope of this paper. For the sake of the argument,
let us accept the objection and see where it leads. The objection says that skeptical dogmatism is off the table,
because it is probably false, which means skepticism and dogmatism are the only viable options. But if we
reject skeptical dogmatism, then we must reject at least one of the premises that lead to skeptical dogmatism.
We may summarize these premises as:
1. Many philosophical disagreements are about multi-proposition disagreements.
2. If it is epistemically appropriate for you to believe that there is at least a 0.5 chance that you have
determined the truth in a multi-proposition philosophical disagreement, then it is epistemically
appropriate for you to believe that you are an über epistemic superior to your colleagues in the
dispute.
3. It is not epistemically appropriate to believe that you are an über epistemic superior in multi-proposition
philosophical disagreements.
4. It is not epistemically appropriate for you to believe that there is at least a 0.5 chance that you have
determined the truth in a multi-proposition philosophical disagreement. That is, you should believe
that your philosophical position is probably false.
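For the record, here is a minimal formalization of the inference from 2 and 3 to 4 (the labels A and B are mine,
introduced only for illustration). Let A be "it is epistemically appropriate for you to believe that there is at least
a 0.5 chance that you have determined the truth in a multi-proposition philosophical disagreement" and B be "it
is epistemically appropriate for you to believe that you are an über epistemic superior to your colleagues in the
dispute". Premise 2 is the conditional A → B, premise 3 is ¬B, and 4, i.e., ¬A, follows by modus tollens.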
The concession we are assuming for the sake of the argument is that 4 is implausible. Since the argument from
2 and 3 to 4 is valid, rejecting skeptical dogmatism requires rejecting at least one of 1-3. But which one? In
other words, it is not enough to say that skeptical dogmatism runs into an embarrassing consequence; an
opponent of skeptical dogmatism must show that there is a more plausible position. My point is simply that it
remains to be seen whether the grass is any greener on the other side of the skeptical dogmatist's fence. As
Dretske wryly observes, "Philosophy is a business where one learns to live with spindly brown grass in one's
own yard because the neighboring yards are in even worse shape."41
41 Fred Dretske, "Is Knowledge Closed under Known Entailment? The Case against Closure," in Contemporary Debates in Epistemology
(Blackwell, 2005), 13–26, 43–46. P. 43.
References
Christensen, David. "Conciliation, Uniqueness and Rational Toxicity." Noûs, 2014.
———. “Disagreement as Evidence: The Epistemology of Controversy.” Philosophy Compass 4, no. 5 (2009):
756–67.
———. “Disagreement, Question-Begging, and Epistemic Self-Criticism.” Philosopher’s Imprint, 2011.
———. “Epistemology of Disagreement: The Good News.” The Philosophical Review, 2007, 187–217.
Cohen, Gerald Allan. Why Not Socialism? Princeton University Press, 2009.
Dougherty, Trent. “Dealing with Disagreement from the First-Person Perspective: A Probabilist Proposal.”
Disagreement and Skepticism 46 (2013): 218–38.
Dretske, Fred. “Is Knowledge Closed under Known Entailment? The Case against Closure.” In Contemporary
Debates in Epistemology, 13–26, 43–46. Blackwell, 2005.
Elga, Adam. “Reflection and Disagreement.” Noûs 41, no. 3 (2007): 478–502.
Enoch, David. “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer
Disagreement.” Mind, 2010, 953–97.
Feldman, Richard. "Epistemological Puzzles about Disagreement." In Epistemology Futures, edited by S.
Hetherington, 216–36. New York: Oxford University Press, 2006.
———. "Evidentialism, Higher-Order Evidence, and Disagreement." Episteme 6, no. 3 (2009): 294–312.
———. “Reasonable Religious Disagreements.” In Philosophers without Gods: Meditations on Atheism and
the Secular Life. Oxford: Oxford University Press, 2007.
———. “Respecting the Evidence.” Philosophical Perspectives 19, no. 1 (2005): 95–119.
Goodin, Robert E. Utilitarianism as a Public Philosophy. Cambridge University Press, 1995.
Hazlett, Allan. “How to Defeat Belief in the External World.” Pacific Philosophical Quarterly 87 (2006): 198–
212.
Kelly, Thomas. “Peer Disagreement and Higher Order Evidence.” In Disagreement. New York: Oxford
University Press, 2010.
———. “The Epistemic Significance of Disagreement.” Oxford Studies in Epistemology 1 (2005): 167–96.
Lackey, Jennifer. "Disagreement and Belief Dependence: Why Numbers Matter." In The Epistemology of
Disagreement: New Essays, 243–68. Oxford University Press, 2013.
———. “What Should We Do When We Disagree?” In Oxford Studies in Epistemology. Oxford University
Press, 2008.
Machuca, Diego E. Disagreement and Skepticism. Vol. 46. Routledge, 2013.
Mates, Benson. The Skeptic Way. New York: Oxford University Press, 1996.
Matheson, Jonathan. "Conciliatory Views of Disagreement and Higher-Order Evidence." Episteme 6, no. 3
(2009): 269–79.
Nozick, Robert. Anarchy, State, and Utopia. Basic Books, 1974.
Pettit, Philip. “When to Defer to Majority Testimony–and When Not.” Analysis 66, no. 291 (2006): 179–87.
Plantinga, Alvin. Warranted Christian Belief. Oxford University Press, 2000.
Pollock, John Leslie. Contemporary Theories of Knowledge. Rowman & Littlefield, 1986.
Pritchard, Duncan. Epistemic Luck. New York: Oxford University Press, 2005.
Rawls, John. A Theory of Justice. Harvard University Press, 1971.
Sosa, Ernest. “The Epistemology of Disagreement.” In Social Epistemology, 278–97. New York: Oxford
University Press, 2010.
Thorsrud, Harald. Ancient Scepticism. Routledge, 2014.
Van Inwagen, Peter. “Is It Wrong, Everywhere, Always, and for Anyone to Believe Anything on Insufficient
Evidence?” In Faith, Freedom, and Rationality, 136–53. London: Rowman & Littlefield, 1996.
———. “We’re Right. They’re Wrong.” In Disagreement. Oxford University Press, 2010.
Vogel, Jonathan. “The Refutation of Skepticism.” In Contemporary Debates in Epistemology, 72–84. New
York: Blackwell, 2005.
Walker, Mark. “Occam’s Razor, Dogmatism, Skepticism, and Skeptical Dogmatism.” International Journal for
the Study of Skepticism, Forthcoming.
———. “Underdetermination Skepticism and Skeptical-Dogmatism.” International Journal for the Study of
Skepticism, Forthcoming.
White, Roger. “Epistemic Permissiveness.” Philosophical Perspectives 19, no. 1 (2005): 445–59.
Worsnip, Alex. “Disagreement about Disagreement? What Disagreement about Disagreement?” Philosopher’s
Imprint, 2014, 1–20.
Wright, Crispin. “Warrant for Nothing (and Foundations for Free)?” In Aristotelian Society Supplementary
Volume, 78:167–212, 2004.