Does Practical Rationality Constrain Epistemic
Rationality?
(Forthcoming in Philosophy and Phenomenological
Research symposium on Fantl and McGrath
Knowledge in an Uncertain World)
I The Direct Argument
An issue that looms large in contemporary
epistemology concerns the relationship between
knowledge and practical rationality. Jeremy Fantl and
Matt McGrath provide the most in-depth and rigorous
discussion of this issue. Their book is an impressive
achievement and I learned much from studying it.
Fantl and McGrath (F&M) defend a general
principle linking knowledge and rational justification—
both practical and theoretical:
KJ: If you know that p, then p is warranted enough to
justify you in ϕ-ing, for any ϕ. (66)
For F&M, the justification component of knowledge is
carrying the theoretical load. Thus they endorse
JJ: If you are justified in believing that p, then p is
warranted enough to justify you in ϕ-ing, for any ϕ. (99)
While I am unclear about whether KJ is true, I am
inclined to think JJ is false. Most of the discussion of
these issues has centered on cases that pull intuitions
in one direction or the other. To their credit, F&M set
out to move the discussion beyond intuitive judgments
about cases. They construct a principled argument in
defense of JJ.
The "direct argument" attempts to derive JJ from
fundamental principles, some of which are analogues
of the principles used in their argument for KJ. The
principle I will focus on is central to the argument for
KJ and JJ:
The Unity Thesis: If p is warranted enough to be a
reason you have to believe that q, for any q, then p is
warranted enough to be a reason you have to ϕ, for
any ϕ.
It is uncontroversial that theoretical reasons can
constrain practical reasons. But according to the
Unity Thesis, practical reasons can constrain
theoretical reasons. F&M begin by commenting on
how we do in fact reason:
"On a hike, you come to a frozen pond. Do you walk
across or walk around the frozen pond? Walking
around will take a while, but you don't want to fall
through the ice. How do you decide? The crucial
issue is whether the ice is thick enough to hold you.
Suppose you do some checking (you call the park
authorities) and come to know that the ice is thick
enough. So the ice is thick enough becomes a
reason you have to believe other things (e.g. that it is
perfectly safe to cross it). It would then be very odd
not to allow this knowledge into your practical
reasoning" (73-4)
F&M argue that as a matter of fact we do not
typically segregate our reasons for believing and our
reasons for acting. But they concede that, as a
psychological matter, it might be possible to do so,
e.g., for you to count the ice is thick enough as a
reason to believe you will cross safely, while at the
same time not treating the ice is thick enough as a
reason to cross the ice. In their view, however,
segregating your reasons in this way would be
irrational:
"When p becomes available as a basis for theoretical conclusions, it is 'barmy' to ignore p in one's decision-making and planning."
While I agree that in general, reasons for
believing will (in the relevant sense) be reasons for
acting, matters are somewhat different when we
consider cases with massively asymmetrical stakes:
Suppose crossing the ice rather than walking around
the pond would save at most a few minutes (and
saving the few minutes counts for almost nothing).
Suppose further that the water is deep enough so that
if you break through the ice, you will certainly drown.
When the stakes are so asymmetrical, the Unity
Thesis seems to run into trouble.
Consider the proposition:
Thick: The ice is thick enough to support your weight.
On the basis of the testimony of the park authorities, Thick is sufficiently warranted to be a reason you have to believe that if you cross the ice, it will not break.[1] But is Thick sufficiently warranted to be a reason you have to cross? It would seem not. You have everything to lose, and virtually nothing to gain. Why risk your life in order to gain a few minutes?[2]
[1] If reasons must be true, we can stipulate that Thick is true.
[2] Note that one cannot hold that Thick is sufficiently warranted to be a reason you have to cross, but that it is not a good enough reason to justify crossing. Given the specifications of the case, Thick is certainly a strong enough reason. Were you certain of Thick, it would be rational for you to cross. So the problem has to be that it is insufficiently warranted to be a reason for you to cross.
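The point of the rhetorical question can be put in rough decision-theoretic terms. The numbers below are mine, chosen only to illustrate how lopsided the stakes are; nothing in the argument depends on them. Let $c$ be your credence that the ice will hold, and suppose crossing safely is worth $+1$ (the few minutes saved), walking around is worth $0$, and drowning is worth $-10{,}000$. Then crossing maximizes expected utility only if
$$c(1) + (1-c)(-10{,}000) > 0, \qquad \text{i.e., } c > \frac{10{,}000}{10{,}001} \approx 0.9999.$$
A credence well above any plausible threshold for justified belief, say $0.99$, still falls far short of this.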
The preceding strikes me as the correct way to
describe the case. The testimony of the park
authorities makes it rational for you to believe that if
you cross, the ice will hold you. But the asymmetrical
stakes make it irrational for you to cross. If so, then
the Unity Thesis fails. Of course F&M would resist
this description of the case. They would agree that
Thick is not sufficiently warranted to be a reason for
you to cross. But they would also hold that neither is
Thick sufficiently warranted to be a reason for you to
believe that if you cross, the ice will not break. On
their view, it would be barmy for you to appeal to
Thick as a reason to believe but not to appeal to it as
a reason to act. F&M point out (in correspondence)
that the situation, as I describe it, would license your
saying (or simply believing) seemingly crazy things.
To see this, we need to make the familiar
distinction between there being a reason for you to do
ϕ, and your having that reason to do ϕ. In the case
we have been discussing, Thick is a reason for you to
cross the ice. This remains true whether or not you
possess Thick as a reason to cross the ice. On my
description of the case, you are justified in believing
Thick. Now suppose, as F&M suggest, we stipulate
that you justifiably believe that Thick is a reason for
you to cross the ice. Finally, if, as I hold, Thick is not a
reason you have to cross the ice, then this is
something you could recognize. But that means you
could be justified in believing:
Strange: The ice is thick enough, and that is a reason
for me to cross, but I do not have that reason.
Admittedly, this result is somewhat jarring. How could
you be justified in believing Strange? I will argue that
independently motivated principles about the structure
of reasons can explain how.
Jonathan Weisberg has defended the following
principle:
No-Feedback: If one reasons from premise P to conclusion C via lemma L, then if P does not by itself support C, the inference from L to C is unjustified.[3]
[3] "Bootstrapping in General," Philosophy and Phenomenological Research 81, 2010. I have simplified the statement of the principle for current purposes.
This is a very plausible principle. The main idea is that you cannot increase the justificatory power of your evidence simply by making inferences from it.[4] Suppose your evidence supports some lemma to just above the threshold for rational justification. Even if that lemma is a good inductive reason to believe some further conclusion, inferring the conclusion from the lemma could result in that conclusion having a level of justification below the threshold. This means that we cannot allow that any inductively good inference from a lemma we justifiably believe yields a justified conclusion. According to No-Feedback, the conclusion will be justified only if the original evidence supports it.
[4] Weisberg argues, correctly I think, that the principle blocks bootstrapping reasoning. Unfortunately, the Dogmatist is committed to rejecting No-Feedback.
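The main idea can also be put in a toy probabilistic model. The threshold and the numbers here are hypothetical, chosen only to display the structure of the point: suppose justification requires credence above $0.9$, your evidence $E$ supports the lemma $L$ to degree $\Pr(L \mid E) = 0.91$, and $L$ is a good inductive reason for $C$, with $\Pr(C \mid L, E) = 0.95$ and $\Pr(C \mid \neg L, E) = 0.2$. By the law of total probability,
$$\Pr(C \mid E) = (0.95)(0.91) + (0.2)(0.09) \approx 0.88,$$
which is below the $0.9$ threshold. So $E$ justifies $L$, and $L$ strongly supports $C$, yet $E$ does not justify $C$, which is just what No-Feedback says.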
Note that as a consequence of No-Feedback, the
following situation could arise. On the basis of some
P, you are justified in believing some lemma L, where
L is a reason to believe C. If you are also justified in believing that L is a reason to believe C, then you will be justified in believing both L and that L is a reason to believe C. But you may still be justified in believing
that you do not have that reason. This is an epistemic
analogue of (Strange). For example, suppose you
justifiably believe, on the basis of the muddy footprints
matching his shoe size, that the butler did it. But the
muddy footprints just barely justify you in believing
this. Suppose further that the butler did it is a good
inductive reason to believe that the maid knows who
did it. You might nonetheless fail to be justified in
believing that the maid knows who did it. That is to
say, you could be justified in believing
Strange-epistemic: The butler did it, and that is a
reason for me to believe that the maid knows about it.
But I do not have that reason.
You do not have The butler did it as a reason
because it is not sufficiently warranted to justify you in
believing that the maid knows who did it. So when
there is epistemic feedback, one can end up with
justification for epistemic analogues of Strange.[5]
[5] This also demonstrates the failure of F&M's principle (JKR for belief), p. 98.
In No-Feedback, the conclusion C is a belief. But
we also reason to practical conclusions--intentions or
actions. I see no reason why we should not
generalize NF to apply to practical conclusions as
well:
Practical No-Feedback: If one reasons from premises P to an intention to act I via lemma L, then if P does not by itself support I, the inference from L to I is unjustified.[6]
[6] We can remain neutral on whether the conclusion of a practical inference is an intention or an action.
Practical No-Feedback should be as plausible as
Theoretical No-Feedback. The point of No-Feedback
is that if one's initial premise does not support one's conclusion, then neither does any lemma
one might infer from one's initial premise. This
remains true whether the conclusion is a belief
inferred directly from the lemma, or whether it is an
intention “inferred” from the lemma in conjunction with
one's preferences. The point remains that making
inferences from one's evidence cannot increase the
strength of one's evidence.
Our case of asymmetrical stakes involves both
theoretical and practical reasoning:[7]
[Diagram: Testimony → Thick; Thick → the belief that if you cross, the ice will hold you (theoretical conclusion); Thick → the intention to cross (practical conclusion).]
[7] The arrows represent the reason-support relations.
In the theoretical reasoning, the testimony of the park
authorities clearly supports the conclusion that if you
cross, the ice will hold you. Thus, No-Feedback does
not block the inference from Thick to the conclusion.
So Thick counts as a reason you have to believe the
ice will hold you. But in the practical reasoning, the
testimony does not, given the stakes, support the
practical conclusion, viz., your intention to cross.
Here No-Feedback does block the practical inference
from Thick to the conclusion. So while Thick is a
theoretical reason you have to believe the ice will hold
you, it is not a practical reason you have to cross.
As F&M note, this does saddle us with the
conclusion that you are justified in believing Strange.
But now we have a theoretical account for why you
are so justified. Even if one were to resist the
extension of No-Feedback to practical reasoning, we
have seen that even theoretical No-Feedback results
in your being justified in believing epistemic
analogues of Strange. I can see no reason to
suppose that you could be justified in believing
Strange-epistemic but not Strange. I conclude that
the Unity Thesis is false.
II Outright Belief
My argument against the Unity Thesis raises
issues about the nature of binary or outright belief.
On a certain view of outright belief, my description of
the case seems to be incoherent. F&M call it “The
Strong Pragmatic View” of belief:
(SPV) You believe that p iff your credence for p is high enough for p to be your motivating reason for ϕ-ing, for all ϕ. (137)[8]
[8] F&M favor SPV, though they do not explicitly endorse it. They also distinguish it from what they call "weak pragmatic belief."
A consequence of SPV is that in the asymmetrical
stakes version of the frozen pond case, you fail to
outright believe Thick. Given those stakes, your
credence for Thick is not sufficient to motivate you to
cross the ice. And if you do not believe Thick, you do
not justifiably believe Thick.
The issues concerning the nature of outright
7 F&M favor SPV, though they do not explicitly endorse it. They also distinguish it from
what they call “weak pragmatic belief”.
belief are too complex for a thorough treatment in this
symposium piece. What I can do is point out a
reason to be worried about the correctness of (SPV).
Then I will sketch an alternative account according to
which you do outright believe Thick in the
asymmetrical stakes case.
The reason to be worried about (SPV) is that it
has the consequence that what you believe depends
on what you prefer.[9] Suppose in the asymmetrical stakes case, I get depressed and stop caring whether I fall through the ice and drown.[10] According to
(SPV), under this scenario I do outright believe Thick,
because my new preference structure has eliminated
the massive asymmetry in the stakes. With my new
preferences, my credence for Thick is high enough for
it to be my motivating reason to cross the ice. I will
save time, and it is no longer a priority for me not to
fall through the ice. So before the change in my
preferences, the strength of my credence was
insufficient for me to count as believing Thick. After
the change, even though my credence for Thick
remains unchanged--the strength of my credence has
not increased--I now count as believing Thick. The
mere fact that I have stopped caring so much about
whether I drown entails that I now believe Thick. This
strikes me as extremely implausible.
[9] Jacob Ross and Mark Schroeder raise this problem in "Belief, Credence, and Pragmatic Encroachment," Philosophy and Phenomenological Research, forthcoming.
[10] It is not clear to me whether SPV requires rational motivation. If so, and we assume depression is irrational, we can assume that the preferences change rationally, or at least not irrationally.
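In the rough expected-utility terms used earlier (the numbers again being purely illustrative), the point is this: before the change in preferences, crossing maximized expected utility only if my credence exceeded roughly $0.9999$, since drowning carried a utility of $-10{,}000$. Once I stop caring whether I drown, that utility is (say) $0$, and crossing maximizes expected utility whenever $c(1) + (1-c)(0) > 0$, that is, for any positive credence. On (SPV), then, one and the same credence, say $0.99$, fails to count as belief in Thick before the change and counts as belief in Thick after it, although nothing about my evidence or my credence has changed.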
Now consider the view that part of the functional
role of outright belief is to (defeasibly) dispose one to
treat P as true in one's reasoning.[11] On this account,
your outright believing disposes you to treat Thick as
true in your reasoning. But the disposition is
defeasible. So while in the normal course of events,
you will treat Thick as true in your reasoning, in cases
where the risk of acting on Thick is too great, this
disposition gets defeated. In these circumstances,
you will treat Thick only as probable in your
reasoning. But the defeasible disposition remains,
even when it is defeated. Thus, despite the fact that
under these circumstances you are not treating Thick
as true in your reasoning, you continue to outright
believe Thick.[12]
[11] Versions of this view have been held by various people, though I borrow this formulation from Jacob Ross and Mark Schroeder, ibid.
[12] I am not sure whether the right way to think of outright belief on this view is as a defeasible disposition. That is how Ross and Schroeder describe it. But couldn't your having a disposition to treat P as true in your reasoning just amount to the following condition being defeasibly true: if P is relevant to your reasoning, you will treat P as true? One such defeater would be massively asymmetrical stakes. So if P is relevant to your reasoning and the stakes are massively asymmetrical, you may not treat P as true. I confess that I am not competent enough about the metaphysics of dispositions to have a view about this.
III The Subtraction Argument
F&M's "Subtraction Argument" purports to derive
JJ from KJ. If the Unity Thesis fails, then so does the
argument for KJ. I do however think that KJ is more
plausible than JJ on its own terms. For this reason, it
would be of interest if JJ could be derived from KJ.
The Subtraction Argument has three steps:
(S1) If you know that p, then p is warranted enough to
justify you in ϕ-ing for any ϕ (KJ).
(S2) Holding fixed knowledge-level justification, while
subtracting from knowledge any combination of truth,
belief, or being ungettiered makes no difference to
whether p is warranted enough to justify you in ϕ-ing,
for any ϕ.
Therefore
(S3) If P is knowledge-level justified, then p is
warranted enough to justify you in ϕ-ing, for any ϕ.
The crucial premise in the subtraction argument is
(S2). F&M argue for it by considering a case where
you are freezing cold and see a barn close ahead on
the right. There is another barn much further ahead
on the left. They note that because you want to get
out of the cold as soon as you can, if you know there
is a close barn on the right (and presumably, there
are no countervailing considerations), then there is a
barn on the right is warranted enough to justify you in
heading toward the barn on the right (rather than the
distant barn on the left). This remains true even if
there is instead a barn replica on the left and so you
are gettiered and fail to know there is a barn on the
right. Similarly if there is no barn on the right but
merely a facade, you are still justified in heading
toward the barn on the right. Even if, despite your
evidence, you fail to believe there is a barn on the
right, you remain justified in heading toward the barn
on the right.
These intuitive considerations are taken by F&M
to support (S2). By definition, if you know p, you have
knowledge-level justification for p. KJ says that if you
know p, then p is warranted enough to justify you in ϕ-ing, for any ϕ. The subtraction argument says that, subtracting truth, belief, and being ungettiered, p remains warranted enough to justify you in ϕ-ing, for any ϕ. Thus, knowledge-level justification for p makes p sufficiently warranted to justify you in ϕ-ing, for any ϕ.
As I see it, the main problem with the subtraction
argument hinges on the notion of being “ungettiered”.
What is it to be ungettiered? The only way to define
being “gettiered", without solving the Gettier problem,
is this: You are gettiered just in case you have
justified true belief that isn't knowledge. (Indeed, the title of Gettier's paper is "Is Justified True Belief Knowledge?")
But this definition raises the question whether the existence of high stakes can be a Gettier condition.
This would be ruled out only if it were impossible for
the stakes to be high enough to rule out knowledge,
without ruling out knowledge-level justification.
Because the subtraction argument attempts to derive
the stakes-sensitivity of knowledge-level justification
from the stakes sensitivity of knowledge, it would be
question-begging to assume this at the outset.
If you can be gettiered by high stakes, then in
order to subtract your being ungettiered, while holding
fixed your knowledge-level justification, we would
have to assume that the stakes are high enough to
preclude knowledge, even given your knowledge-level
justification. But high stakes undermine knowledge
by blocking justification for acting. Thus you will have
knowledge-level justification for p, with insufficient
warrant for p to justify you in acting. So (S3) will not
follow.
Are there other ways of defining being “gettiered"
that exclude high stakes? F&M, at different, places
talk about "Gettier luck" conditions as well as "truthrelevant" conditions. Neither of these specifications is
precise enough to capture all the conditions
discussed in the Gettier literature, but no matter. The
problem remains. The subtraction process begins
with stakes low enough for you to know P and thus
low enough for P to be sufficiently warranted to justify
you in ϕ-ing. Now suppose we subtract, in addition to
truth and belief, all the additional truth-relevant/no-luck conditions for knowledge. Presumably, low
(enough) stakes is not a truth-relevant/luck condition.
(If it were, the subtraction process would result in
stakes high enough to prevent P from being
sufficiently warranted to justify you in ϕ-ing and (S3)
would not follow). So after the subtraction process,
low (enough) stakes will remain. But then all that
follows is:
(S3') If P is knowledge-level justified and the stakes
are low enough for knowing, then p is warranted
enough to justify you in ϕ-ing, for any ϕ.
(S3') is weaker than (S3). Just because knowledge-level justification for p ensures that p is sufficiently warranted to justify acting when the stakes are low enough for knowing, it doesn't follow that knowledge-level justification ensures that p is sufficiently
warranted to justify acting when the stakes are too
high for knowing. Again, the argument assumes that
one cannot have knowledge-level justification for P
when the stakes are too high for knowing. In essence,
the subtraction argument overlooks the possibility that
low enough stakes is an independent condition for
knowledge.
Stewart Cohen
University of Arizona