Reliabilism and the Value Problem

Christoph Jäger (Aberdeen/Innsbruck)
Draft May 8, 2008
Part I
1. Introduction
The value problem in epistemology, at least as it is commonly construed, is the problem of
explaining why knowledge is more valuable than mere true belief.1 In recent years many
authors have claimed that this problem is especially troublesome for reliabilism.2 The core
idea of process reliabilism, for example, is that knowledge is true belief that has been
produced, or is sustained, by a reliable epistemic process. However, the critic argues, the
epistemic value of such a process derives from the value of the true beliefs it produces. But if
a cognitive process is epistemically valuable because it produces mostly true beliefs, how
could the fact that a true belief has been produced by such a process add anything of
axiological significance to it? Value transmission, so it seems, works only in one direction.
While positive value is transmitted from a valuable product to the source that reliably
produces it, the sheer property of being generated by a reliable mechanism doesn’t seem to
make something valuable. More specifically, a reliable epistemic process doesn’t seem to
have any value over and above the fact that it tends to produce true beliefs. But then how
could the fact that a given true belief has been generated by such a process enhance this
belief’s epistemic value?
1 “We do value knowledge over mere true belief. ... I want to know why we value knowledge over ‘mere true beliefs’” (Jones 1997, 423). “Most philosophers [...] agree that knowledge is more valuable than mere true belief. If so, what is the source of the extra value that knowledge has?” (Zagzebski 2004, 190). “The value problem in epistemology is to explain why knowledge is more valuable than true belief” (Brady 2006). For similar statements see also Pritchard (2006), Baehr (forthcoming), or Riggs (forthcoming).
Alvin Goldman and Eric Olsson (forthcoming) have proposed a novel solution to the
value problem as it appears to arise for simple process reliabilism. More precisely, they argue
for two claims: (i) There is a weak sense of ‘know’ in which the term just means ‘believe
truly’ (manuscript, p. 2). If that is right, it follows trivially that in contexts in which this sense
of knowledge operates, knowledge fails to be more valuable than mere true belief. The
common construal of the problem according to which we need to explain why knowledge is
always more valuable than mere true belief would then be misconceived. (ii) Goldman’s and
Olsson’s second claim is that in contexts in which knowledge is to be analysed as true belief +
X even simple process reliabilism can account for the extra value of knowledge. The reason,
they maintain, is that the value of a given epistemic process is partly a function of potential
future employments of that process.
I argue, first, that Goldman’s and Olsson’s argument for weak knowledge is
unconvincing. Their second argument, by contrast, contains a promising explanation of why
we often do value knowledge over mere true belief. Regarding this second claim, however, it
turns out that the Goldman-Olsson account is committed to significant internalist constraints.
While this by itself may not constitute a punishable epistemological crime, it means that their
proposed solution to the value problem cannot be sold as a solution that conforms to pure
externalist forms of reliabilism as Goldman and Olsson would presumably prefer. I argue that
this problem is rooted in the fact that they consider the value of knowledge only from the
point of view of the knower himself. This concession to internalism can be avoided if we
switch to a third-person perspective. The fact that S reliably knows that p will often be valuable not only for S, but also, and especially, for other people in S’s epistemic community.
2. The Hawthorne-Goldman-Olsson argument for weak knowledge
2 Jones (1997); Swinburne (1999); DePaul (2001); Sosa (2003); Kvanvig (2003); Zagzebski (2000, 2003, 2004); Koppelberg (2005); Brady (2006); Riggs (forthcoming); Baehr (forthcoming).
Are there contexts in which knowledge reduces to true belief? If so, the common way of
phrasing the value problem would at best be misleading. The question would not be: “Why do
we (always) value knowledge over mere true belief?”, but rather: “Of the things we call
‘knowledge’, which are more valuable than mere true belief?” While I am sympathetic to this
refinement of the question, I believe that Goldman’s and Olsson’s argument for the claim that
knowledge sometimes reduces to true belief is unconvincing. Their core idea is that in certain
contexts we treat knowledge as the complement of ignorance. In such contexts, the claim
goes, knowing that p just means not being ignorant of the fact that p, while being ignorant of
the fact that p simply means having no belief that p. Goldman and Olsson argue for this latter
claim by way of a reductio. Suppose that ‘knowledge’ were to mean, in such contexts, ‘true
belief plus X’. Then the statement that S fails to know that p could be true because S failed to
meet condition X. Hence, since ex hypothesi knowledge is in such contexts the complement of
ignorance, S could be said to be ignorant of p despite the fact that she truly believed that p.
But such a claim about the meaning of ‘ignorance’, Goldman and Olsson argue, “is plainly
wrong”; it would at least be “highly inaccurate, inappropriate and/or misleading” (manuscript,
p. 3).
But why should we think there are such contexts in which knowing that p is the
complement of being ignorant of p? At this point Goldman and Olsson ask us to consider an
example from John Hawthorne:
“Consider a case discussed by John Hawthorne (2002). If I ask you how many people
in the room know that Vienna is the capital of Austria, you will tally up the number of
people in the room who possess the information that Vienna is the capital of Austria.
Everyone in the room who possesses the information counts as knowing the fact;
everybody else in the room is ignorant of it. It doesn’t really matter, in this context,
where someone apprised of the information got it. Even if they received the
information from somebody they knew wasn’t trustworthy, they would still be counted
as knowing” (Goldman & Olsson, forthcoming, p. 2).
I don’t think that this example can be used to illustrate that knowing that p sometimes reduces
to truly believing that p. An initial worry is that the question “How many people in the room
know that ....?” is an at least slightly leading question. The formulation suggests that,
whatever the correct number, it is in any case false that no one in the room knows that Vienna
is the capital of Austria.3 Suppose the question had instead been phrased in a more neutral
way and had begun with: “How many people in the room, if any, know that ...”, or with: “Are
there any people in the room who know that ..., and if so, how many?”. In that case the answer
might well have been “none”. Why is this?
Let us grant for the moment that there are some people in the room who hold the true
belief that the capital of Austria is Vienna. How firm are their beliefs supposed to be?
Goldman and Olsson don’t address this question. But it is widely agreed that knowledge
involves firmly held belief in the sense of subjective certainty. Is it plausible that someone
who knows his informant is untrustworthy would form a firm belief in the truth of his
“informant’s” testimony? In short, the problem is that at least for minimally rational people
the following propositions seem to form an inconsistent triad:
(1) Knowledge requires firm belief.
(2) S is confronted with a piece of information p from somebody who S knows isn’t
trustworthy (in questions of the kind at hand).
(3) S knows that p (solely) on the basis of the fact described in (2).
If these propositions are indeed jointly inconsistent, then, since in Goldman’s and Olsson’s example (2) and (3) are true, they must reject (1).
At this point the reply might be that this, indeed, is what is required, but that the view
that all kinds of knowledge require firm belief is false. Weak knowledge, Goldman and
Olsson might claim, requires only weak belief. In order not to be ignorant, it is not necessary
to have a firm belief.
Unfortunately, however, this proposal will not help Goldman and Olsson. Given that
the subjects know their informant to be untrustworthy, it is hard to see why they would form
any belief at all about the capital of Austria. For example, if we model belief in terms of
subjective probabilities, then if a subject knows that a potential source of information isn’t
trustworthy, they would not normally assign a probability of more than 0.5 to a claim made
by, or derived from, that source (assuming the subject has no prior evidence for the truth of
the proposition in question). If someone who I know suffers from severe schizophrenia tells
me that the Martians have landed, I would not form even a weak belief that the Martians have
landed. (At least so I hope.) The problem with the Hawthorne-Goldman-Olsson example thus
is that it is not clear what it could mean for a subject to come to “possess the information that
p” even in the sense of acquiring a weak true belief that p, when this information has been
received from someone who is known not to be trustworthy. In other words, at least for
minimally rational people, the following propositions seem to form a second inconsistent
triad:
(1*) Knowledge requires belief.
(2) S is confronted with a piece of information p from somebody who S knows isn’t
trustworthy (in questions of the kind at hand).
(3) S knows that p (solely) on the basis of the fact described in (2).
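The probabilistic picture gestured at above can be made concrete with a toy model. This is a sketch of my own, not the authors’ formalism: it assumes a simple “Lockean” credence threshold of 0.5, and the update rule and the numerical trust values are invented purely for illustration.

```python
# Toy threshold model of belief formation from testimony.
# All names, the update rule, and the numbers are illustrative
# assumptions, not anything proposed by Goldman, Olsson, or Hawthorne.

BELIEF_THRESHOLD = 0.5  # credence above which S counts as (weakly) believing p

def credence_after_testimony(prior: float, trust: float) -> float:
    """Crude update rule: testimony from a source S trusts to degree
    `trust` cannot push S's credence in p above whichever is higher,
    S's prior evidence or the trust placed in the source."""
    return max(prior, trust)

def believes(credence: float) -> bool:
    """S believes p (even weakly) only if credence exceeds the threshold."""
    return credence > BELIEF_THRESHOLD

# S has no prior evidence about the capital of Austria (prior = 0.5)
# and knows the informant is untrustworthy (trust = 0.3):
c = credence_after_testimony(prior=0.5, trust=0.3)
print(believes(c))  # False: not even a weak belief is formed
```

On this toy model, testimony from a known-untrustworthy source leaves a minimally rational subject below the belief threshold, which is just the point made in the text: it is unclear how such a subject could come to “possess the information that p” in even the weak sense.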
3 It suggests a reply other than zero, especially in a context in which it is known that at least one person would present the answer “Vienna” when asked about the capital of Austria.
I have said that (1)–(3) and (1*)–(3) each seem to form an inconsistent triad. But maybe (2)
leaves room for interpretation. If (2) is the whole story about S’s epistemic situation, as
suggested by Goldman’s and Olsson’s formulation of the example, then it is hard to see how
the story could be coherent. But maybe the idea is not that the informant is known to be
always untrustworthy (at least in questions of the kind under consideration), but only that she
is usually untrustworthy (in questions of the kind under consideration), while in this particular
case – i.e., when she informs the people in question about the capital of Austria – she is not.
There is not much textual evidence for this interpretation, but perhaps this reading is
compatible with the way Goldman and Olsson sketch the example. In any event, this is the
way Hawthorne himself sets up the case (in Hawthorne 2002). Let us take a look at this option
and see whether it offers a way out. The first part of Hawthorne’s story is identical with
Goldman’s and Olsson’s. But then Hawthorne goes on as follows:
“Even if someone was given the information by an informant that they knew full well
they shouldn’t trust (who happened to be telling the truth on this occasion), you will in
this context count him as knowing what the capital of Austria was (so long as he had
the firm belief). [Footnote Hawthorne:] Of course, someone who didn’t in fact trust
their informant and merely used the informant as a basis for guessing an answer –
being altogether unsure on the inside – would not count.” (Hawthorne 2002, pp. 253f.)
So Hawthorne makes the explicit proviso that the beliefs in question be firm beliefs. But this
forces him to construct a fairly complicated scenario: the subjects receive a piece of
information from someone who they know is generally not a reliable informant; yet on this
particular occasion they do in fact trust him and form a firm belief on the basis of the
informant’s testimony. Is this example coherent?
Again, other things being equal, it is not clear to me why someone would generate a
firm belief on the basis of testimony by someone whom they know they shouldn’t trust. To
make the story coherent one would have to add that, even though it is known that the
informant is generally untrustworthy, on the current occasion this (potential) defeater is
overridden by a defeater-defeater. For example, the subjects may have, and utilize, (what they
believe is) good evidence that in this particular instance the generally untrustworthy informant
is trustworthy. A belief to this effect could serve as a meta-defeater and thus explain why the
subjects would form the belief that Vienna is the capital of Austria, even though they think
their informant is generally untrustworthy.
There is nothing wrong with this way of spelling out the example – except for the fact
that the relevant epistemic state we are now envisaging is not simply the true belief that the
capital of Austria is Vienna. Instead, our subjects now have a true belief that is
dialectically justified in a fairly complex way. They are aware of a (potentially) undercutting
defeater of their belief, but have acquired some higher-order defeater which on this occasion
can neutralize the defeater “untrustworthy informant”. Moreover, given this meta-defeater
they not only have a good reason to trust their informant in this case, but they must also
believe that this meta-defeater neutralizes the fact that their source of information is generally
untrustworthy, and so forth. That’s all well and good, yet the point is that this state is not
simply the complement of being ignorant, at least not in Goldman’s and Olsson’s minimal
sense of failing to have a true belief. In order to tell a coherent story about people generating,
under the envisaged circumstances, knowledge to the effect that Vienna is the capital of
Austria, we must invoke a true-belief-plus-X account.
So far the present argument has been concerned with firm or strong belief, as explicitly
required in Hawthorne’s version of the example. But the general condition just laid out also
applies to weak belief. In order for S to form a weak belief on the basis of testimony by an
informant who S believes to be generally untrustworthy, S must also possess, and utilize, a
meta-defeater which on the given occasion neutralizes the first-order defeater. (The required strength of that meta-defeater, or of S’s belief in its truth, will depend on the strength of S’s belief that the source is untrustworthy.) The important point is
that, as in the case of firm belief, so too in the case of weak belief, one needs a meta-defeater
that cancels out the suspicion that the informant is untrustworthy on the present occasion. And
if that is so, the epistemic state we need to assume to make the story coherent is not just mere
belief that p. If the meta-defeater is eliminated, however, then (at least in the case of
minimally rational believers) the belief will disappear with it, in which case it would clearly
be wrong to say that the people in question have knowledge in any sense of the word.
I have inserted, at various points, the qualification “at least for minimally rational
believers/subjects”. Couldn’t Goldman and Olsson reject that constraint? Couldn’t they reply
that it suffices for their case to assume that the people “received the information from
somebody they knew wasn’t trustworthy”, while no defeater was in place? Maybe Hawthorne
can be interpreted in this way. The information he explicitly gives us about his example is that
the people in question know full well that they shouldn’t trust the informant, but in fact they
trust him nevertheless, full stop.
My answer to this is that in that case we wouldn’t ascribe knowledge to the subjects.
Suppose Tom, who has no idea what the capital of Zimbabwe is, is given the opportunity to
use a machine that is loaded with forty index cards displaying the names of the forty largest
cities in Zimbabwe, including the name of the capital. When he pushes a button, the machine
spits out one card at random. Tom knows that this is the way the machine works. He pushes
the button, picks his card, and forms the belief that the city named on the card is the capital of
Zimbabwe. As it happens, the belief is right. Would we say that Tom’s true belief is an
instance of knowledge? No. The situation, I submit, is sufficiently analogous to the one where
someone trusts an informant they believe to be untrustworthy (in the absence of any meta-defeater). In that case, too, ascribing knowledge to the subject would be inappropriate.