Shifty Epistemologists and Their Shifty Arguments

Arguing for Shifty Epistemology
Jeremy Fantl and Matthew McGrath
Sometimes it is true to say that you know that p while false to say that some other person, S,
knows that p. In many such cases, the reasons for the difference are inconsequential. Perhaps
you have stronger evidence for p than S does, or p is true when you are said to know it, but false
when S is said to know it, or S is Gettiered and you are not. Shifty epistemologists allow that the
truth value of “knowledge”-ascriptions can vary not merely because of such differences, but
because of factors not traditionally deemed to matter to whether someone knows, like salience of
error possibilities and practical stakes.[1] Call these “non-traditional” factors.[2]
Both contextualists and so-called “subject-sensitive invariantists” are shifty in this sense;
they agree that factors such as practical stakes and salience can matter to the truth-value of
“knowledge”-ascriptions. They differ in that contextualists take such factors to matter when they
apply to the knowledge-ascriber, subject-sensitive invariantists to the putatively knowing subject.
In this paper, we remain neutral on the question of which sort of shifty view to accept. Our
concern is with the newcomer or outsider who wants to know whether to be shifty or not, and
who cares not so much about the details of where the shiftiness is located.[3]
Shifty epistemologists, in effect, assert an existential claim, a claim to the effect that there
are cases in which knowledge-ascriptions differ in their truth values due merely to a variation in
some non-traditional factor.

[1] Following DeRose’s practice, we will often drop the quotes in our talk of attributions or
ascriptions of “knowledge.”
[2] One might hope for an account of what distinguishes such “non-traditional” factors from
“traditional” ones. In Fantl and McGrath (2009, 27-28), we offer some suggestions on this
matter. Here all that matters is that some sorts of factors, in particular practical factors, have
not been thought to matter to whether one knows.
[3] A more complete treatment of shiftiness would subsume assessment relativism as well.
Thus, we might speak of shiftiness of the truth-value of a knowledge-ascription across pairs of
contexts of utterance and assessment. See, e.g., MacFarlane (2005).

So, if one could give cases – what we will call, following Schaffer
(2006), “stakes-shifting cases” – in which this pattern is exhibited, that would be a decisive
reason to be shifty. DeRose’s bank cases (1992, 2009) are the best-known examples:
Bank Case A (LOW):
My wife and I are driving home on a Friday afternoon. We plan to stop at the bank
on the way home to deposit our paychecks. But as we drive past the bank, we
notice that the lines inside are very long, as they often are on Friday afternoons.
Although we generally like to deposit our paychecks as soon as possible, it is not
especially important in this case that they be deposited right away, so I suggest that
we drive straight home and deposit our paychecks on Saturday morning. My wife
says, ‘Maybe the bank won’t be open tomorrow. Lots of banks are closed on
Saturdays.’ I reply, ‘No, I know it’ll be open. I was just there two weeks ago on
Saturday. It’s open until noon.’
Bank Case B (HIGH):
My wife and I drive past the bank on a Friday afternoon, as in Case A, and notice
the long lines. I again suggest that we deposit our paychecks on Saturday morning,
explaining that I was at the bank on Saturday morning only two weeks ago and
discovered that it was open until noon. But in this case, we have just written a very
large and important check. If our paychecks are not deposited into our checking
account before Monday morning, the important check we wrote will bounce,
leaving us in a very bad situation. And, of course, the bank is not open on Sunday.
My wife reminds me of these facts. She then says, ‘Banks do change their hours.
Do you know the bank will be open tomorrow?’ Remaining as confident as I was
before that the bank will be open then, still, I reply, ‘Well, no. I’d better go in and
make sure.’
Assume that in both cases the bank will be open on Saturday and that there is nothing
unusual about either case that has not been included in my description of it. (DeRose 1992,
913-14)
Providing instances is one way to argue for an existential claim, but it’s not the only way.
Another is to argue from further general claims. One might argue for shifty epistemology by
arguing for epistemological principles and then showing that if those principles are true then
there must be pairs of cases that make shifty epistemology true. Perhaps DeRose’s bank cases
don’t have the right features; perhaps Cohen’s (1999, 58) airport cases don’t either; and perhaps
our (2002, 67) train cases don’t. But we would have a guarantee that some such cases exist.
Let’s call the first approach the argument-from-instances strategy and the second the
argument-from-principles strategy. The argument-from-instances strategy uses cases in a
standard philosophical way: cases are presented and the theorist trusts us to see, based on
intuition or knowledge of what is proper to say, that the cases possess the relevant features. For
example, in arguing that there are cases of knowledge without justification, epistemologists have
presented cases (e.g., the chicken-sexer) in which a subject seems to lack justification, but also
seems to know; and similarly for cases of knowledge without belief, knowledge without truth,
etc. Such an approach might commit the theorist to general principles of philosophical
methodology: that intuition is a reliable guide to philosophical truth or that the same goes for
proper assertion. But often no other principles are introduced to justify the intuitive verdicts.
We’re just supposed to see that the verdicts are correct.
Our main goal in this paper is to show shifty epistemologists the benefits of using the
argument-from-principles strategy. We do not say they should abandon the
argument-from-instances strategy, but we will show that many of the obstacles to the latter do
not affect the former. The first half of this paper discusses some of the main obstacles to the
argument-from-instances strategy; the second shows how turning to the argument-from-principles
strategy can help.
This paper argues, more generally, that the current debate over shifty epistemology has
taken a myopic view of the relevant data. We can and should look beyond intuitions concerning
the truth-value of knowledge-ascriptions in particular stakes-shifting cases, to see if there are
general principles at work behind the scenes. For instance, simply examining one of Gettier’s
examples, one might worry that actually the person isn’t justified in believing the relevant
proposition, or that maybe the person does know. This is often how it goes when teaching the
coin-in-the-pocket example in undergraduate classes. But when students start to appreciate the
general recipe for generating the examples (cf. Zagzebski 1994) and the motivating principle
behind the cases – that one cannot know when one is only luckily right – these worries over his
particular cases tend to subside, and the case that justified true belief isn’t sufficient for
knowledge is much stronger. We want to do the same for the debate over shiftiness. We see in
the bank cases, for instance, a clue to a general pattern, one which might be imperfectly
illustrated in the bank cases as they are actually presented, but one which assures us that there
will be cases doing what the bank cases have been thought to do.
Of course, the principles used in creating our “recipe” for generating shifty cases aren’t
pulled from the void. They’re defended by argument and, as philosophical arguments generally
do, these arguments themselves appeal to intuitions. But the set of intuitions goes beyond
intuitions about the truth-value of knowledge-ascriptions in the specific stakes-shifting cases;
they include intuitions about the general principles themselves, intuitions about “clashes,” about
conditionals, and about pieces of reasoning. So, in plumping for the principled strategy, we are
not trying to impugn the use of intuitions about cases in general. Furthermore, it’s not all
intuitions all the way down. We will offer support not merely from intuitions but from facts
about how we defend and criticize action, about when we seek out knowledge, and about our
habits of appealing to knowledge in citing reasons. In earlier work, we offer support for similar
principles based on
general claims about knowledge, reasons, and justification. The shifty epistemologist who
considers only intuitions about knowledge-ascriptions in specific stakes-shifting cases, we think,
misses the philosophical forest for the trees.[4]
I. The Argument-from-Instances Strategy
Here’s an argument for shifty epistemology based on DeRose’s LOW and HIGH bank cases.
1) In LOW, ‘I know the bank is open tomorrow’ is true.
2) In HIGH, ‘I don’t know the bank is open tomorrow’ is true, and so ‘I know the
bank is open tomorrow’ is false.
3) All traditional factors are held fixed across the cases.
4) If all traditional factors are held fixed across the cases, then any variation in
truth-value of ‘I know that the bank is open tomorrow’ must be due to non-traditional
factors.
So, 5) shifty epistemology is true.

[4] We do not take ourselves to be presenting a hitherto unknown argumentative strategy for
shifty epistemology. Some shifty epistemologists other than the present authors do appeal to a
principled strategy (cf. Hawthorne 2004). We suspect that this project was behind Jason
Stanley’s (2005) insistence that the role of the intuitive responses to his cases “is not akin to the
role of observational data for a scientific theory. The intuitions are instead intended to reveal
the powerful intuitive sway of the thesis that knowledge is the basis for action” (12). But we
fear that too often it is assumed that the only way to argue for a shifty view is simply to “present
your cases.”
(4) is needed to make the argument valid, and seems uncontroversial. (3) is supposed to follow
from the descriptions of the cases. What about (1) and (2)? On the instances strategy (1) and (2)
are motivated by appeals to their intuitiveness and/or a broad principle of charity. DeRose’s own
appeal to charity takes the following form: in LOW and HIGH you speak appropriately, without
relying on any relevant mistake of fact, and this provides a strong and apparently undefeated
presumption that you speak truthfully. DeRose sees the intuitiveness of (1) and (2) and the
charity-based arguments as “mutually reinforcing strands of evidence” (2009, 49-51).
Much of the large literature on shifty epistemology can be seen as disputing one or more
of (1) – (3) for certain choice cases, very often DeRose’s bank cases (or Stanley’s variations on
them). Many of the objections to (1) – (3), we argue, depend for their plausibility on the shifty
epistemologist using the instances strategy. The principled strategy, we argue in the second part
of the paper, avoids these objections. We begin by considering objections that might be raised
against (3), and then turn to (1) and (2).
1. Objecting to (3): Does a traditional factor vary across the cases?
What traditional factors might vary across the bank cases? One serious contender is belief. If
the descriptions of the cases entail that belief varies across the cases, or even leave this
possibility open, then (3) is unacceptable. And there is a feature of the cases that might seem to
guarantee that in HIGH you will be naturally interpreted as lacking belief. After all, in HIGH
you self-deny knowledge. And, barring unusual additional provisos, “I don’t know that p” might
seem to convey lack of belief that p:
“Do you think the bank is open tomorrow?”
“I don’t know.”
One could try to bypass this worry, as does DeRose in the cited passage, by adding to the
description of HIGH the explicit stipulation that you have made up your mind in HIGH that the
bank will be open tomorrow. However, this is likely to confuse consumers of the examples: “He
has made up his mind in HIGH? Why is he saying he doesn’t know then?” Such confusion
endangers the premise that “I don’t know” is true in HIGH. Once it becomes hard to see what
the speaker in HIGH is thinking, and why he is behaving as he does, it might well become less
clear intuitively that he is speaking truly, and less clear that his utterance merits charity.[5]
There are other ways to try to ensure sameness of belief – in the sense of making up one’s
mind – across LOW and HIGH. One could revise HIGH so that the knowledge-denial is in the
third-person, and concerns someone in a low stakes situation who is not the least worried about
the truth of the proposition in question and seems to have made up her mind. DeRose has done
just this with his Thelma/Louise/Lena case (2009, 3-6). Louise is talking to the police, who have
asked her whether she knows John was at work yesterday. Thelma, Louise, and Lena all saw
John’s hat in the office. Thelma, in a LOW case in the tavern, asserts, “Lena knows John was at
work.” Questioned by the police, Louise admits in the HIGH case, “I don’t know John was at
work.” When asked whether Lena might know, Louise answers, “Lena doesn’t know either.”
Focus on the attributions and denials of knowledge to Lena. The hope would be that, in
this new version of HIGH, there would be no doubt that Lena has made up her mind that John
was at work, so that what makes the difference to the truth-value of the knowledge-attributions
to Lena wouldn’t be a difference in this sort of belief. Doesn’t this solve the problem? Perhaps,
but it does so at the cost of rendering the premise about truthful speaking in HIGH more
doubtful. The intuition that Louise’s knowledge-denial to Lena is true seems weaker than the
intuition that Louise’s own self-denial of knowledge is true. Louise’s stakes are high, so she is
hesitating; she won’t give the police her word, etc. Lena’s stakes are low, so she is satisfied that
the target proposition is true, and she’s willing to assert the target proposition in the pub and at
home. Recall that the target proposition is true and that the evidence possessed by Lena and
Louise is quite strong. It seems markedly less intuitive to us to think that Louise’s
knowledge-denial concerning Lena is true than it is to think Louise’s self-denial of knowledge is
true.[6]

[5] DeRose (2009, 190-93) replies to this worry by arguing that what’s relevant to knowledge is
not that the subject has “make-up-her-mind” type “unstable” confidence, but rather that the
subject has the appropriate degree of “stable” confidence, the constancy of which he stipulates
to hold across the cases. He appeals to the counterintuitiveness of
now-you-know-it-now-you-don’t sentences, such as “I know it now, but when the stakes get
higher, I won’t know it,” as evidence. But even if the best arguments show that it’s only the
stable sort of confidence that matters to knowledge, couldn’t it well be that the source of our
intuition that you don’t know in HIGH is the fact that your mind isn’t made up in HIGH? If so,
DeRose would lack support for his premise that “I don’t know” is true in HIGH.
A second way to ensure parity of belief in HIGH is to modify HIGH by having you claim
to know and state a plan to come back tomorrow to deposit the check. However, this has costs as
well. You will seem to be more confident in a stable dispositional sense in HIGH than you are
in LOW. For, the same degree of stable dispositional confidence that is strong enough to move
one to act in a low stakes case will not in general be strong enough to move one to act in a high
stakes case. If this is how it seems, then it might be harder to secure the intuition that your “I
know” in HIGH is false, because it might well seem that in HIGH you must surely have more or
better grounds than you do in LOW; normally someone in a situation like HIGH wouldn’t be so
confident without such grounds. Suppose we attempt to control for this possibility by stressing
heavily the sameness of your grounds across LOW and HIGH. We might then secure the
intuition of falsity concerning HIGH. However, this would produce another difficulty. As
Jennifer Nagel (2008, 291) has argued, it will seem that in HIGH you are more confident – again
in the stable dispositional sense – than you should be. If so, this results in two difficulties. First,
it gives us reason to distrust knowledge-denials about HIGH believers, because of the possibility
that such denials are motivated by general epistemic dissatisfaction with the HIGH believers.
And, second and more relevantly for the current subject, even if the HIGH believer fails to know,
it seems that a factor traditionally deemed relevant to knowledge would be varying across the
cases: not belief this time, but properly based confidence in the stable dispositional sense.

[6] One might make further changes, for instance, by giving Louise significantly better evidence
and grounds than Lena. But then we have to worry seriously about the possibility of the
intrusion of certain well-documented psychological “egocentric” biases. We have a
“well-documented tendency to misread the mental states of those who are more naïve than we
are, to evaluate them as though they were privy to our concerns, without being aware that we
are doing so” (Nagel 2010, 425).
The general challenge is to construct the cases so that there is clearly no variation either in
whether you have made up your mind or in any factor deemed traditionally relevant to
knowledge, and yet there intuitively remains a variation in truth-value of the knowledge
attribution. This is what Nagel (2010) claims cannot be done. It is a serious problem for the
argument-from-instances strategy, at least any version of that strategy which employs a premise
asserting that traditional factors are held fixed across the LOW/HIGH cases presented.
Could one do without such a premise? One prominent shifty epistemologist, Jason
Stanley (2005, 180-82), would deny premise (3), on the grounds that some traditional factors –
some factors traditionally deemed relevant to knowledge – would vary across LOW and HIGH.
For instance, suppose that part of the evidence one has in LOW is that the bank is open
tomorrow, but that this is not part of the evidence one has in HIGH. Then there is a traditional
factor that varies across the cases – one has a piece of relevant evidence in LOW that one lacks
in HIGH. Nonetheless, Stanley is shifty. He thinks that although some traditional factors vary
across the cases, they do so because of a variation in a non-traditional factor.
Stanley’s position suggests the possibility of replacing (3) and (4) in the argument above
with a simpler premise:
(3*) If “I know” is true in LOW but false in HIGH, then this is due to non-traditional
factors.
However, defending (3*) is no easy matter, once one agrees with Nagel that traditional factors
vary across the cases, or across whatever adjusted cases one devises to avoid the problem
concerning belief. Why think the variation is due to the non-traditional factors if there are also
traditional factors varying? Stanley’s answer must be that the traditional factors vary because the
non-traditional ones do. But making good on this answer requires getting clear on exactly what
the relevant non-traditional factors are and testing their covariation with the relevant traditional
factors across a range of cases. We do not say this cannot be done without giving up the
methodology of presenting cases and relying on intuitive verdicts (or claims about what is proper
to say). But one good way of identifying the relevant non-traditional factors is to employ some
epistemological theory, as we will suggest in Part II.
2. Objections to Premises (1) and (2): Does “I know” vary in truth-value across the cases?
There are two kinds of objections one might make to the claims about truthful speaking
in LOW and/or HIGH. One sort of objection, seen less often in the literature than heard in
colloquium halls, is simply to deny the assumptions about what seems intuitively true and about
what we would appropriately say. This sort of objector insists that he “doesn’t have that
intuition” or that the folk don’t, and may also claim that “competent speakers don’t really talk
that way.” There is little a practitioner of the argument-from-instances strategy can do to answer
this sort of objection, except to try out new cases, or to conduct experimental or corpus studies to
try to show that the objector is in the minority.
The second sort of objection appeals to what DeRose (2009, 83) calls “warranted
assertability maneuvers” or WAMs. In giving a WAM, one concedes that the relevant
knowledge-sentence seems appropriate to assert in the case, and one might even concede that it
seems intuitively true, but one denies that it is true. Moreover, one doesn’t simply deny its truth;
one attempts to explain why it is appropriate to say and even intuitively true despite being false.
One does this by showing how, although literally false, the assertive utterance of the sentence
communicates some important truth. In this section, we explore the dialectic between the
WAMmer and the shifty epistemologist relying on the argument-from-instances strategy.
2.1 WAMs
DeRose (2009, 83-86) notes that one can cook up WAMs easily to shield one’s pet theory from
counterexample, regardless of how plausible the theory is. Suppose you accept the “crazed”
theory that “bachelor” is true of all and only males. Then consider the “lame” WAM that holds
that while “bachelor” applies to all males, one nevertheless implicates that someone is unmarried
when one says that someone is a “bachelor.” To rule out such impostors, DeRose requires that a
WAM should identify general conversational principles, applicable to claims of potentially any
content, and show how these principles, together with the favored semantics of the target
expressions, could deliver the required implicatures or pragmatic information.[7] This is a
version of Grice’s calculability requirement on conversational implicatures. We endorse this
requirement.
[7] A candidate WAM might be directed at appropriate false statements or inappropriate but
true ones. DeRose’s favourite example of a successful WAM is of the latter variety. “It is
possible that p” seems false when the speaker knows that p. To explain why it seems false
despite being true, one turns to the general principle enjoining us to “assert the stronger.” If you
know p, then by asserting the possibility claim you violate this rule, thus giving rise to a false
implicature that one’s epistemic position with respect to p is not particularly strong.
2.1.1 WAM for LOW
A natural way to WAM the self-attribution of knowledge in LOW is to appeal to loose
speech (Conee 2005; Davis 2007). When speakers speak loosely, a stickler can truthfully
object that what the loose talker said is strictly false. If you say, during a time-out 5 seconds
from the end of a 95-67 basketball game, “Well, they lost,” your stickler-interlocutor can
truthfully but annoyingly respond, “Not yet they haven’t. They’re going to lose, but strictly
speaking, they haven’t lost yet.” Your response here won’t be to insist that they’ve lost, nor to
insist that what you said was true. You’ll agree that what you said was false, but add, “Yeah, of
course. But give me a break!” The same can be said for other mundane utterances like, “We’re
out of milk” (when there is milk left in the jug but not enough for cereal the next morning), or
“The conference lasted two weeks” (when it lasted 13 days), and the like.
It is not implausible to think that you speak loosely in LOW when you say that you
“know” the bank will be open tomorrow. You might well admit as much, under challenge from
a stickler. We can imagine your spouse saying, with propriety, “Well, it’s likely to be open then,
but do you know it is?” You might reply, “Well, ok, I don’t know, but it doesn’t matter
anyway.” Contrast your reaction if your spouse started questioning whether you know that it
was open two weeks ago on Saturday: “What? Yes, of course I know that! What are you
suggesting?”
Similar treatment is plausible for knowledge-ascriptions made in other well-known LOW
cases, such as Thelma’s claim in the tavern that both she and Lena “know” John was at work in
DeRose’s Thelma/Louise/Lena case (2009, 4-6), and Smith’s claim to “know” in Cohen’s airport
case. If an ordinary speaker in such a case were questioned – without elaborate spinning of
possibilities – whether she knows or instead whether it’s just likely, it seems plausible to us that
she would simply concede that she doesn’t “know” and that the third-person subject with the
same evidence doesn’t “know” either. We can certainly imagine ourselves doing this. Maybe
empirical findings will prove us wrong here, but still we boldly predict that there will be sharp
contrasts between our reactions to challenges about such “knowledge” and our reactions to
challenges about “hard” knowledge – that you had been to the bank two weeks ago, that Thelma
and Lena saw a hat, that Smith got his itinerary from a travel agent, etc. One consequence of this
is that WAMming the knowledge-attributions in LOW does not commit the WAMmer to
skepticism. The WAMmer is WAMming the knowledge-attributions in specific cases, not
saying that, in general, no knowledge-attributions in low-stakes cases can be literally true.
There is much to be said in favour of a “loose speech” WAM. It would seem to do well
by DeRose’s constraint. Loose speech is certainly a general phenomenon, and there is a general
though perhaps hard-to-state principle allowing for loose speech depending on the purposes and
interests of the parties involved in the conversation. It would also give us a good explanation of
why the knowledge-ascription in LOW, despite being strictly false, would seem intuitively true.
If one is speaking loosely, what one means – the implicated content – is true, even if what is
literally stated is false (see, e.g., Davis 2007). An intuition that “what you say in LOW is true”
doesn’t distinguish this possibility from the possibility that one is speaking the literal truth.
Finally, we could see why ordinary speakers in situations like LOW would speak loosely. They
recognize that the conversation doesn’t call for exactitude and that it is easier to get one’s
message across by cutting a few semantic corners.
Might a shifty epistemologist sympathetic to contextualism insist that when your
interlocutor contrasts the question of whether you “know” with the question of whether it is
instead only “very likely,” the standards (semantically) operant in the speech context become
more stringent, so that you no longer “know” on the new standards but did “know” on the laxer
standards? If so, and if such a move were plausible, then of course we would expect you to
deny that you “know” after the contrast with probability has been presented. But we’re
skeptical about the possibility that this move can succeed without closing off the possibility of
genuinely loose uses of “knows.” That there are genuinely loose uses of “knows” seems to us
undeniable. And if there are, it seems open to the non-shifty epistemologist to insist that the
knowledge-ascribing behavior in LOW is one of them.
2.1.2 WAMs for HIGH
Jessica Brown (2005, 2006) and Patrick Rysiew (2001, 2005, 2007) have proposed WAMs
which attack the claim that the knowledge-denial is true in HIGH. In outline, the proposal is as
follows. In HIGH, your claim of “I don’t know” expresses a lack of something fairly weak –
what Rysiew calls “ho hum knowledge.” What your sentence literally says is false, because you
do have ho hum knowledge. However, by asserting this you communicate the fact that your
epistemic position is not strong enough for some relevant purpose at hand, e.g., not strong
enough to be relied on in action, or for ruling out some specific alternative, such as the bank’s
having changed its hours.
We think that the specific WAMs on offer are at best inconclusive, though this is not the
place to engage in extended discussion of the details. We’re interested in the prospects for
general strategies for resisting the argument-from-instances strategy. And even the manifest
failure of specific proposals wouldn’t provide much significant evidence that the general claim –
that the HIGH speaker speaks falsely, but communicates something true – is false. After all, it
certainly seems that saying “I don’t know” accomplishes something in HIGH other than merely
describing the speaker’s epistemic relation to the relevant proposition. Among other things, it
communicates something about what the speaker thinks is appropriate for her to do. If one is
motivated by other considerations – for example, fallibilism and the general appeal of
non-shiftiness – to think that the knowledge-denial in HIGH must be false, then one will feel
inclined to say that the HIGH speaker’s knowledge-denial communicates something important
and correct even though the speaker does “know.” Specific WAMs are often difficult to make
compelling. But the general claim that HIGH is a case of warranted assertability without truth
can seem compelling even if none of the WAMs that have been offered are. Restricting
ourselves to this methodology – which is the methodology we’re stuck with on the
argument-from-instances strategy – the dialectical situation between a shifty epistemologist like
DeRose and WAMmers like Rysiew and Brown will seem unsettled.
II. The Argument-from-Principles Strategy
Shifty epistemologists need not argue for shiftiness merely by choosing their cases carefully with
an eye to securing the desired intuitive reactions, and then being prepared to rebut WAMming
opponents. They can ask themselves why it would be that the truth-value of
knowledge-ascriptions would vary across the cases – is there anything about knowledge, or
‘knowledge’, that would motivate shifty epistemology? If there is, they can use such deeper
explanations to argue in a principled way. Here we suggest a recipe for the shifty epistemologist
to go about doing this.
First, she lays out her key epistemological principles, beginning with fallibilism about
knowledge. How exactly fallibilism is to be formulated is less important than the basic idea that
it asserts the compatibility of knowledge with some sort of epistemic lack. Different
epistemologists might understand this lack in different ways.[8] One might favour an “epistemic
chance” approach, which understands the lack as a matter of having epistemic chance less than 1.
Others might understand the lack in terms of having evidence that doesn’t entail the truth of the
proposition known. Still others might understand it in terms of having something short of
epistemic certainty (where this might not be understood in terms of epistemic chance). In other
work (Fantl and McGrath 2009), we understand the lack in terms of epistemic chance. Here, to
be more ecumenical, we turn to epistemic certainty.[9] To have a principle to work with, we’ll
recommend the following formulation (we give an additional metalinguistic formulation to
accommodate contextualists):
(Fallibilism) Knowledge that p does not require absolute epistemic certainty for p.
(Fallibilism – Metalinguistic variant) In some contexts of attribution, the truth of a
knowledge-attribution that p to a subject does not require that the subject have
absolute epistemic certainty for p.
Fallibilism seems required if we are to avoid skepticism about a rather broad range of knowledge
claims: after all, a rather broad range of things we claim to know are things for which we lack
absolute epistemic certainty. But it is hard to embrace such skepticism. For one thing, there’s
the fact that such skepticism seems cognitively catastrophic. For another, it seems in its general
statement intuitively implausible. And, finally, it seems to have counterintuitive results when it
comes to specific instances. We think we know that if the upcoming baseball season goes its full 162 games for every team, then at least one strike will be thrown, the Red Sox will win at least one game, and at least once during that season a team will score at least 7 runs in a game. But we lack absolute epistemic certainty for any of these. The objective chance of the last possibility is roughly 1 minus (.6 to the 5000th power), and now that you have that evidence, it seems it is not epistemically certain for you – it is very, very likely for you, but not certain. The same goes for our knowledge that we were alive 30 years ago. Is it absolutely epistemically certain? Here we can’t calculate an objective chance of falsehood in any obvious way, but it seems we ought to be less confident of this than of some other things, for instance that we are alive today. But if we are rightly less confident, then it is not absolutely certain for us.

8
For two general discussions of fallibilism, see Reed (2002) and Hetherington (1999).

9
Epistemic certainty is distinguished from psychological certainty in that epistemic certainty is necessarily related to evidence and grounds in a way that psychological certainty isn’t. One can have epistemic certainty for a proposition but still not believe it with certainty. We will not attempt anything like an analysis of epistemic certainty, although we are attracted to an account which conceives of epistemic certainty as related normatively to psychological certainty, or better to what DeRose calls stable confidence. If p is more epistemically certain for you than q, then you ought to be more confident in the stable sense of p than of q. On this account, p is absolutely epistemically certain for you just if you ought to be maximally confident of p.
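The rough figure cited for the baseball example can be reconstructed as follows. The inputs are our own illustrative assumptions, not given in the text: a per-team, per-game chance of about .6 of scoring fewer than 7 runs, and roughly 5000 team-games in a full season (162 games × 30 teams = 4860 ≈ 5000).

```latex
% Sketch of the chance estimate, under the assumptions stated above.
\Pr(\text{some team scores} \ge 7 \text{ runs at least once})
  = 1 - \Pr(\text{no team ever does})
  \approx 1 - (0.6)^{5000}
```

Since $(0.6)^{5000}$ is astronomically small, the epistemic chance here is extremely close to 1 – yet still short of absolute certainty, which is the point of the example.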
Here we are appealing to intuitions about what we know in supporting this principle. So,
the argument-from-principles strategy we are recommending doesn’t abjure all appeal to
intuitions. But we are not limiting ourselves to intuitions about knowledge-ascriptions made in
the HIGH/LOW cases, nor to intuitions at all. The general principle has a broad evidential base:
a wide variety of specific intuitively compelling instances, the general fact that such skepticism
would be (in Laurence BonJour’s words) “intellectual suicide” (1998, 5), and the general intuition
that a broad skepticism is implausible.
The next principle relates knowledge to action:
(Actionability) You can know that p only if p is actionable for you.
(Metalinguistic variant) In any context in which a self-attribution of knowledge that p is true of you, p is actionable for you.
What is it for p to be actionable for you? The basic idea is that epistemic shortcomings in your
relationship to p do not stand in the way of reliance on p as a basis for action. p might not be
relevant to any available practical decision, but this doesn’t make p non-actionable in the
relevant sense. We have elsewhere explained actionability in terms of justifying practical
reasons (Fantl and McGrath 2009). A justifying practical reason is a practical reason that doesn’t
merely support doing a given action; it supports it strongly enough so that the action is justified
for you. Since we are not appealing to an overall notion of epistemic position, here we will
characterize actionability in terms of epistemic certainty: p is actionable for you iff either p is
epistemically certain for you or your lacking epistemic certainty for p does not stand in the way
of p’s being among your justifying practical reasons.10
Why think Actionability is true? We do not say it is self-evident, although the principle
itself has some intuitive attraction. It’s not our goal here to offer a fully-fledged argument for
Actionability. Arguing for Actionability, we freely grant, is a greater undertaking than arguing
for fallibilism. We provide what we think is the best case for the richest version of the principle
in our (2009). Here the role of Actionability is just to mark out an argumentative strategy. But it
is incumbent on us to at least provide some reasons for thinking that Actionability is prima facie
plausible – some reasons for thinking that it is worth looking for philosophical arguments to
support it. Of course, some philosophers have objected to some of the reasons we provide here –
some have offered contrary data, and some think our data can be given explanations other than
the truth of Actionability – and we consider some of their responses below. Again, though,
we’re here only looking for whether there is a general argumentative strategy that better resists
the main objections leveled at the instances approach.
To see whether a general principle like Actionability is plausible, we should look at what
we would expect to be the case if it were true. If it were true, we would expect that ordinary
people have some implicit grasp of its truth, and for this to show up in various intuitions, patterns of verbal behaviour, and thinking.

10
In our (2009), we employ a notion of one’s epistemic position with respect to a proposition, which is in effect a summary of one’s standing on truth-relevant dimensions with respect to p. Given this notion, actionability can be understood as follows: p is actionable for you iff weaknesses in your epistemic position with respect to p, if there are any, don’t stand in the way of p’s being among your justifying practical reasons. We refer the reader to Fantl and McGrath (2009), chapter 3, for details.

Does it? Well, first, if Actionability is true then we’d expect
the following not just to be odd but to clash – that is, not merely to sound unusual or strange but
to seem inconsistent.
1) I/She know(s) that p, but I/she can’t count on p’s being the case because there’s too much
of a risk it’s false.
And indeed it does.11
Second, we’d expect that in cases in which the propriety of an action A depends on just how epistemically certain a proposition p is for the subject, the conditional “if you know p, you are reasonable to do A” should seem true. Thus, consider the following conditional about the bank cases:
2) If you know the bank is open tomorrow, you can just plan on coming back then.
These conditionals do seem true, not only in the bank cases but in any case in which the
propriety of an action hinges on how epistemically certain for the subject the relevant proposition
is. And they seem true regardless of person or tense.
Third, again in cases in which the propriety of an action hinges on how certain p is, we’d
expect to find ourselves defending, criticizing, and deliberating about p-reliant action by citing
knowledge, as in:
3) You knew the bank is open tomorrow. You shouldn’t have waited in the long lines
today.
4) Look, don’t worry. I did decline Avis’s liability insurance. But that’s because I know
we’re covered by our regular insurance.
5) He knows that smoking is bad for him. So he should quit.
11
In (1) we use an ordinary language expression – “can count on” – which is naturally read epistemically in the context of (1).
Moreover, one would expect that the criticisms and defenses would stick epistemically; that is,
we wouldn’t expect to find people responding to such a criticism or defense by conceding the
knowledge claim but disputing the evaluation of the action on the grounds that the proposition
known has too much of a risk of being false, i.e., isn’t certain enough epistemically. And this is
what we find. We don’t find people reacting to the likes of (5) by saying, “sure, you know that
we’re covered by the regular insurance, but there’s still too much of a risk that we’re not covered
by it, and so you shouldn’t have declined their insurance.” Compare the criticism “sure, you
have good reason to believe we’re covered by the regular insurance, but there’s too much of a
risk that we’re not, and so you shouldn’t have declined Avis’ insurance.” This response is not at
all odd, and we find the likes of it very often in ordinary life.12
Fourth, we’d expect people in high stakes cases to inquire after knowledge even if they
already have very strong support, because in such cases even very strong support might not be
enough for actionability. And, again, this is what we find. In high stakes cases, people ask “do
you know p?”, sometimes emphasizing that they are interested in genuine knowledge and not merely
strong evidence – “We think it’s likely that p, but we want to actually know” or “I realize you
have very strong evidence, but do you know?”
Finally, we’d expect people to appeal to what they know as reasons for action, even in high stakes
cases:
6) I know that if I come back tomorrow I’ll be able to deposit the check in time and without
waiting in long lines like I’d have to today. So that’s a reason I have to come back
tomorrow.
7) You know that the train pulling into the station goes to Foxboro, so since you need to get
to Foxboro, that’s a reason to take this train.
12
For more on how knowledge-citing criticisms and defenses stick epistemically, see McGrath (manuscript).
You’ll notice that many pieces of this data do concern high stakes cases, and that is
because those cases provide the real test for Actionability. We can all agree that when the stakes
are low, knowledge is enough for actionability. The test cases are high stakes cases. But notice
that the data we are mining about these cases go far beyond claims of intuitiveness or propriety
of simple knowledge-attributions or denials. The evidence is still partly derived from intuitive
reactions to instances. But, as we’ve said, our goal is not to undercut the use of intuitions in
philosophy. The point is that arguments based on intuitions about knowledge-attributions in the
stakes-shifting-cases are subject to objections that the more principled strategy avoids.
These are the core epistemological principles we recommend to the shifty epistemologist:
Actionability and fallibilism. We next show why, if these principles are true, epistemology is
shifty. To do this, we need one further principle about epistemic certainty and action:
(Certainty-Actionability Principle) If p isn’t absolutely epistemically certain for a subject
in a particular case C1 and p is actionable for the subject in C1, then there is a
corresponding case C2 which differs in actionability from C1 merely because the stakes are
higher in C2 than in C1.
Suppose p is the proposition that at least one student in a philosophy course at the University of
Calgary will get a B+ next year and that you are offered a small bet on whether p. You can, it
seems, rely on p in your decision about whether to take the bet. But if the stakes go up too much
– if taking the bet risks many lives if you’re wrong and the potential payoff is small – you can’t
rely on p in your decision. Why can you take the former bet but not the latter? Plausibly,
because of differences in the stakes (or some other broadly practical and, thus, non-traditional
factor).
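The contrast between the two bets can be put in rough expected-value terms. The numbers are our own stipulations for illustration (the text gives none): suppose the epistemic chance of p is .99.

```latex
% Low-stakes bet: win \$1 if p, lose \$1 if not-p.
EV_{\mathrm{low}} = (0.99)(+1) + (0.01)(-1) = +0.98 > 0
% High-stakes bet: win \$1 if p, but the loss if not-p is catastrophic
% (modelled here, crudely, as -10^9).
EV_{\mathrm{high}} = (0.99)(+1) + (0.01)(-10^{9}) \approx -10^{7} < 0
```

On this crude model, the very same epistemic chance licenses reliance on p in the first decision but not in the second; only the stakes have changed.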
How do we reason from these principles to the consequence that the truth-value of
knowledge-ascriptions can vary due to non-traditional factors? In the argument, we use the
metalinguistic versions of fallibilism and Actionability, appropriate for contextualists. The
object-level versions, if accepted as invariant conditions on knowledge, entail the metalinguistic
versions in any case.
If fallibilism is true, then there are going to be cases relevantly like the LOW bank case in
which a self-attribution of knowledge is true. Maybe the LOW bank case isn’t one of them. But
if fallibilism is true then there will be a case in which a subject – say, you – is truly said to
know that p even though you lack absolute epistemic certainty for p. Whatever case that is, we
choose it as our LOW. Because there is such a LOW, the Certainty-Actionability principle
guarantees that there is a high stakes case – call it our HIGH – that comes from LOW in which p
is not actionable for you, and the difference in actionability across LOW and HIGH is due to a
difference in the stakes. Next, we use the contextualist version of Actionability, which holds that
a self-attribution of knowledge that p is true only if p is actionable for you. Now, since in HIGH
p isn’t actionable for you, a self-attribution of knowledge that p is false in HIGH.
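The argument just run can be compressed into a schematic form (our own rendering; True, Certain, and Actionable abbreviate the notions defined above):

```latex
\begin{align*}
&\text{1. (Fallibilism)} && \exists\,\mathrm{LOW}:\ \mathrm{True}(\text{``I know }p\text{''},\mathrm{LOW}) \wedge \neg\,\mathrm{Certain}(p)\\
&\text{2. (Certainty-Actionability)} && \exists\,\mathrm{HIGH}\ \text{(differing from LOW only in stakes)}:\ \neg\,\mathrm{Actionable}(p,\mathrm{HIGH})\\
&\text{3. (Actionability)} && \mathrm{True}(\text{``I know }p\text{''},C) \rightarrow \mathrm{Actionable}(p,C)\\
&\text{4. (From 2 and 3)} && \neg\,\mathrm{True}(\text{``I know }p\text{''},\mathrm{HIGH})
\end{align*}
```

So the truth-value of the self-attribution goes from true in LOW to false in HIGH, with the stakes as the only thing varied.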
Why think that the difference in truth-value of the self-attribution of knowledge varies
across LOW and HIGH because of the variation in stakes across these cases? We have no
formal proof, partly because the relevant explanatory relation is being left intuitive, but the
conclusion is quite plausible. For we have, in effect, taken a case of true self-attribution of
knowledge, jiggled the stakes, and thereby generated a difference in actionability, which
guarantees a corresponding variation in the truth of the knowledge-attribution. This seems to us
to be a case of jiggling the truth-value of a knowledge-ascription because one has jiggled the
stakes.13
Even if one worried about how to draw this conclusion, we are clearly in shifty territory.
Say that a condition on the truth of “I know that p” is shifty iff it can vary due essentially to
variations in some non-traditional factor. Then the truth of “I know that p” has a shifty necessary
condition – p’s actionability for the subject. And it is not as if this necessary condition can
exhibit its shiftiness only across pairs of cases in which “I know that p” stays false. It can exhibit
its shiftiness across cases in such a way that “I know that p” must go from true to false.14
The principles we have employed in arguing for shifty epistemology are the products of
argument, and, as we’ve acknowledged, we’ve appealed to intuitions in arguing for them. So,
can’t the very kinds of objections we were hoping to avoid by switching to this strategy be
employed to undermine the arguments we’ve invoked?
Principles have to stand up against objections, and it’s not as though there are certain
unique objection-classes that can be applied only to the stakes-shifting cases. The question is
whether objections that have a certain force when used in response to the argument-from-instances strategy will have the same force against the various principles. Consider the loose-use objection to our intuitions about LOW. Does that objection fare as well against fallibilism? Do we want to say that all of our self-ascriptions of knowledge are loose when there is some non-zero chance that what we claim to know is false? Do you talk loosely when you claim to know that if the upcoming baseball season goes its full 162 games, at least one strike will be thrown? It seems not nearly as enticing to claim you do as it does to claim you talk loosely when saying, after only seeing a hat hanging in the hall, “I know he was in the office today.” And, in any case, fallibilism is not motivated only by considerations of specific cases of self-ascription of knowledge, but also by general worries about the implausibility and catastrophe of a sweeping form of skepticism. The loose-use objection fares much better against the argument-from-instances strategy than against the argument-from-principles strategy.

13
What’s important here is that the “because” is not a causal “because” – the jiggling of the stakes doesn’t cause, say, loss of belief which then partly constitutes loss of knowledge. The jiggling of the stakes thereby generates a difference in actionability.

14
One traditional view has it that knowledge requires belief. But belief, plausibly, can be destroyed by heightened stakes. Heightened stakes can reduce one’s credence and, more contentiously, can increase the level of credence one must have in order to have outright belief. Anyone who agrees with this will be committed to some form of shiftiness, in that variations in non-traditional factors can cause knowledge to come and go by generating variations in traditional factors. But this kind of shiftiness is rather boring. However, note that if our principles are correct, knowledge has a shifty necessary condition which is not psychological. To say p is actionable for you is not to say anything about your psychology but rather to say that your lack of epistemic certainty does not stand in the way of p’s being a justifying practical reason for you. Anyone who prefers to limit shifty views to those that make variations in non-traditional factors directly relevant to whether one knows is welcome to.
What of WAMming the evidence supporting Actionability? For one thing, the
WAMmer’s job is now more difficult than it was when the shifty epistemologist restricted
herself to marshaling intuitions about the truth-value of knowledge-ascriptions in particular
LOW/HIGH cases. Where there is a merely pragmatic implicature, one doesn’t expect so many
signs of a genuine entailment. So, one would expect that a cancellation of such an implicature
wouldn’t clash. One would expect that in high-stakes cases, the relevant conditionals wouldn’t
seem clearly true. One wouldn’t expect that knowledge-citing criticisms and defenses of action
would in general stick epistemically, because, if the WAMmer were correct, sometimes clearly
more than knowledge is needed for actionability. Most importantly, one wouldn’t expect the
range of distinct phenomena that we observe. The methodological principle is the familiar one
that evidence that would be surprising unless the target proposition – Actionability – were true is
evidence for that proposition.
Having said this, we should note that some philosophers have proposed counterexamples
to principles like Actionability, e.g., Jessica Brown (2008) and Baron Reed (2010). For instance,
in Brown’s example, a nurse says about a surgeon who is checking the charts, “Of course, she
knows which kidney it is. But, imagine what it would be like if she removed the wrong kidney.
She shouldn’t operate before checking the patient’s records.” (176) If these sorts of examples
are indeed cases of knowledge without actionability, then Actionability is false. And if we find
such examples at least somewhat intuitive, that is data that speaks against Actionability. So, by
no means is it smooth sailing for the shifty epistemologist once she turns to our version of the
principled strategy.
However, examples such as the surgeon case hardly provide a simple refutation of
Actionability.15 For one thing, at least in the surgeon case, it is unclear what generates the
impropriety of proceeding without checking. If it’s hospital policy to always check – or if
there’s a general norm that requires surgeons to always double-check – then no counterexample
is generated. Second, the force of the examples seems mitigated by the fact that the speeches
made could easily be replaced with Actionability-friendly speeches that seem perfectly fine. So,
for instance, the nurse in Brown’s surgeon case could just as easily and just as properly have
said, “Well, of course she’s checking the chart; it’s not enough to rely on her memory that it’s
the left kidney. She needs to know it is.” Third, it’s interesting that if we alter the surgeon
example so that the discussion concerns not knowledge but having good reasons, having
excellent reasons, or other terms that might capture the anti-skeptical non-shifty epistemologist’s
conception of the justification condition on knowledge, the example is far clearer than it is in the
case of knowledge. “She has good reason to think the diseased kidney is the left kidney, but she
must check the charts before operating in case it’s not the left kidney” sounds perfectly fine,
whereas “She knows it is the left kidney, but she must check the charts before operating in case
15
The considerations we mention here seem to apply to Brown’s Affair case as well.
it’s not the left kidney” sounds worse.16 Fourth, and most importantly, it is not as if the set of
data we have identified (clashes, intuitive conditionals, defenses and criticisms, inquiries after
knowledge in high-stakes cases, appeal to knowledge as reasons for action) is explained away
merely by presenting cases like the surgeon case.17
We think these considerations cast significant doubt on Brown’s counterexample.
However, we do not claim that they decisively undermine it. What they do is motivate a search
for an account of why Actionability would be true if it were true, of how it could be grounded.
We have attempted to provide such a grounding in chapter 3 of our (2009) by appealing to three general principles: one relating knowledge to reasons for belief, the second relating reasons for belief to reasons for action, and the third relating reasons to justifiers. We can only point to this
argument here. It may fail, but it is not enough to show it fails simply to point to examples like
Brown’s.18
Finally, the defiant “I don’t have that intuition” seems more difficult to pull off, again
because of the breadth of the data supporting Actionability, even in advance of a philosophical
account of what would ground its truth if it is true. It’s not just an intuition about whether a
knowledge-ascription made in a particular case is true. It’s a broad range of data, including, yes,
intuitions about clashes and about the truth of certain conditionals, but also about our habits of
citing knowledge to criticize and defend action; our habits of inquiring after knowledge in high
stakes situations and appealing to knowledge in citing reasons. Some of the support is ultimately
going to stem from intuitive responses to instances. But, again, the support is not going to stem from intuitive responses to instances of a single kind – responses about knowledge-ascriptions made in the stakes-shifting cases. Of course, experimental philosophers have yet to turn their attention to the broad range of data we think supports the principle. And it may turn out that some of what we expect about this mass of data is false. We wait to see the results. But we should remember that one gains plenty of solid empirical information just by living a normal life in an English-speaking country.

16
Note that when ‘has good reason’ is stressed in the former speech and ‘knows’ is stressed in the latter speech, the difference remains.

17
We offer a more extended defense of a principle like Actionability in our (2009) in terms of relations between knowledge, reasons, and justification. Defenses of similar principles are offered by Hawthorne (2004), Stanley (2005), and Stanley and Hawthorne (2008).

18
This is not to say that the defender of Actionability shouldn’t seek explanations of why examples like Brown’s have whatever appeal they have.
There is therefore no reason to expect that, just because certain kinds of responses are plausible against the intuitions concerning the truth-values of knowledge-ascriptions in stakes-shifting cases, the same kinds of responses will be plausible against the data given in support of the key principles employed in the principled strategy. There’s good reason to expect the
contrary. But even if there is some plausible case to be made against the principles invoked in
the argument-from-principles strategy, the suggestion here is that this is where the main locus of
debate should be. We should be concentrating debate – both pro and con – on the principles in
the principled strategy, and not directly on the intuitive responses to the cases.
Tamar Gendler (2007) argues that philosophical thought experiments “recruit
representational schemas that were previously inactive. As a result, they can be expected to
evoke responses that run counter to those evoked by alternative presentations of relevantly
similar content.” (86) The use of these previously inactive schemas is what gives thought
experiments their power to move us. But in any sort of theorizing, it is dangerous to rely too
heavily on only a single or narrow range of representational schemas. Such schemas can be
misguided or misleading, as they are in abstract versions of Wason selection tasks or the more
concrete examples studied by Kahneman and Tversky. The best assurance that a thought
experiment is not leading us astray requires seeing if the responses evoked in the thought
experiment stand up to general theorizing, drawing on diverse strands of data. The shifty
epistemologist who stakes her fortunes on the argument-from-instances strategy faces doubts
about whether the particular representational schemas she activates in her thought experiments –
her “stakes-shifting cases” – might be leading us astray. The argument-from-principles strategy
is what is needed to put these doubts to rest. Doubts, of course, remain about whether the
principles invoked in the strategy are true. These principles are where we think epistemologists
– both shifty and non-shifty – should turn their sights.
Works Cited
Bach, Kent (2005). “The Emperor’s New ‘Knows’.” In Contextualism in Philosophy, ed. G. Peter and G. Preyer. Oxford: Clarendon Press: 51-89.
BonJour, Laurence (1998). In Defense of Pure Reason. Cambridge: Cambridge University Press.
Brown, Jessica (2005). “Adapt or Die: The Death of Invariantism?” The Philosophical Quarterly 55 (219): 263-85.
Brown, Jessica (2006). “Contextualism and Warranted Assertability Maneuvers.” Philosophical Studies 130 (3): 407-35.
Brown, Jessica (2008). “Subject-Sensitive Invariantism and the Knowledge Norm for Practical Reasoning.” Noûs 42 (2): 167-89.
Cohen, Stewart (1999). “Contextualism, Skepticism, and the Structure of Reasons.” Philosophical Perspectives 13: 57-89.
Conee, Earl (2005). “Contextualism Contested.” In Contemporary Debates in Epistemology, ed. E. Sosa and M. Steup. Malden, MA: Blackwell Publishers: 47-56.
Davis, Wayne (2007). “Knowledge Claims and Context: Loose Use.” Philosophical Studies 132 (3): 395-438.
DeRose, Keith (1992). “Contextualism and Knowledge Attributions.” Philosophy and Phenomenological Research 52: 913-29.
DeRose, Keith (2009). The Case for Contextualism. Oxford: Oxford University Press.
Fantl, Jeremy and McGrath, Matthew (2002). “Evidence, Pragmatics, and Justification.” The Philosophical Review 111 (1): 67-94.
Fantl, Jeremy and McGrath, Matthew (2007). “On Pragmatic Encroachment in Epistemology.” Philosophy and Phenomenological Research 75 (3): 558-89.
Fantl, Jeremy and McGrath, Matthew (2009). Knowledge in an Uncertain World. Oxford: Oxford University Press.
Gendler, Tamar Szabó (2007). “Philosophical Thought Experiments, Intuitions, and Cognitive Equilibrium.” Midwest Studies in Philosophy 31: 68-89.
Hawthorne, John (2004). Knowledge and Lotteries. Oxford: Oxford University Press.
Hawthorne, John and Stanley, Jason (2008). “Knowledge and Action.” Journal of Philosophy 105 (10): 571-90.
Hetherington, Stephen (1999). “Knowing Fallibly.” Journal of Philosophy 96: 565-87.
Kvanvig, Jonathan (2011). “Against Pragmatic Encroachment.” Logos & Episteme 2 (1): 77-85.
MacFarlane, John (2005). “The Assessment Sensitivity of Knowledge Attributions.” Oxford Studies in Epistemology 1: 197-233.
McGrath, Matthew (manuscript). “Two Purposes of Knowledge Attribution.”
Nagel, Jennifer (2008). “Knowledge Ascriptions and the Psychological Consequences of Changing Stakes.” Australasian Journal of Philosophy 86 (2): 279-94.
Nagel, Jennifer (2010). “Epistemic Anxiety and Adaptive Invariantism.” Philosophical Perspectives 24 (1): 407-35.
Reed, Baron (2002). “How to Think About Fallibilism.” Philosophical Studies 107 (2): 143-57.
Reed, Baron (2008). “Certainty.” Stanford Encyclopedia of Philosophy.
Reed, Baron (2010). “Stable Invariantism.” Noûs 44 (2): 224-44.
Rysiew, Patrick (2001). “The Context-Sensitivity of Knowledge Attributions.” Noûs 35 (4): 477-514.
Rysiew, Patrick (2005). “Contesting Contextualism.” Grazer Philosophische Studien 69 (1): 51-70.
Rysiew, Patrick (2007). “Speaking of Knowing.” Noûs 41 (4): 627-62.
Schaffer, Jonathan (2006). “The Irrelevance of the Subject: Against Subject-Sensitive Invariantism.” Philosophical Studies 127: 87-107.
Stanley, Jason (2005). Knowledge and Practical Interests. Oxford: Oxford University Press.
Zagzebski, Linda (1994). “The Inescapability of Gettier Problems.” Philosophical Quarterly 44 (174): 65-73.