Reliabilism and the Value Problem
Christoph Jäger (Aberdeen/Innsbruck)
Draft May 14, 2008
1. Introduction
The value problem in epistemology, at least as it is commonly construed, is the problem of
explaining why knowledge is more valuable than mere true belief.[1] In recent years many authors have claimed that this problem is especially troublesome for reliabilism.[2] The core idea of process reliabilism, for example, is that knowledge is true belief that has been produced
(and is sustained) by reliable epistemic processes. However, the critic argues, the epistemic
value of such processes derives solely from the fact that they tend to produce true beliefs.
How, then, could the fact that a given true belief has been produced by such a process add
anything of axiological significance to it? Value transmission, so it seems, works only in one
direction. While positive value is transmitted from a valuable product to the source that
reliably produces it, the sheer property of being generated by a reliable mechanism doesn’t
make something valuable. If so, it is hard to see how the fact that a true belief has been
generated by a reliable process could enhance this belief’s epistemic value.
Alvin Goldman and Eric Olsson (forthcoming) have proposed a novel solution to the
value problem as it appears to arise for simple process reliabilism. They argue for three
claims. (i) There is a weak sense of “know” in which the term just means “believe truly”. If
that is right, it follows trivially that in contexts in which this sense of knowledge operates,
knowledge fails to be more valuable than mere true belief. The common construal of the value
problem according to which we need to explain why knowledge is always more valuable than
mere true belief would then misconceive the problem. (ii) Goldman’s and Olsson’s second
claim is that in contexts in which knowledge is to be analyzed as true belief + X, even simple
process reliabilism can account for the extra value of knowledge. The reason, they maintain, is that the value a given epistemic process has for a subject is partly a function of the subject’s potential future employments of that process. (iii) A third thesis Goldman and Olsson defend is that our tendency always to attribute greater value to knowledge than to mere true belief, irrespective of the context, is due to a psychological mechanism they call “value autonomization”. In what follows I shall discuss only the first and the second project tackled in their paper.

[Footnote 1: “We do value knowledge over mere true belief. ... I want to know why we value knowledge over ‘mere true beliefs’ ” (Jones 1997, p. 423). “Most philosophers [...] agree that knowledge is more valuable than mere true belief. If so, what is the source of the extra value that knowledge has?” (Zagzebski 2004, p. 190). “The value problem in epistemology is to explain why knowledge is more valuable than true belief” (Brady 2006). For similar statements see also Pritchard (2006), Baehr (forthcoming), or Riggs (2002, and forthcoming).]

[Footnote 2: Jones (1997); Swinburne (1999); DePaul (2001); Sosa (2003); Kvanvig (2003); Zagzebski (2000, 2003, 2004); Koppelberg (2005); Brady (2006); Riggs (forthcoming); Baehr (forthcoming).]
I argue that Goldman’s and Olsson’s argument for “weak knowledge” fails (section 2).
Their second argument, by contrast, contains a plausible explanation of why we often do
value knowledge over mere true belief. But Goldman and Olsson have a skeleton in their
closet: Their account is committed to significant internalist constraints. While this by itself
may not constitute a punishable epistemological crime, it means that Goldman’s and Olsson’s
proposed solution to the value problem cannot be sold as a solution that conforms to pure
externalist forms of reliabilism (section 3). I argue that this problem is rooted in the fact that
they consider the value of knowledge only from the point of view of the knower himself. The
concessions to internalism can be avoided if we switch to a third-person perspective. The fact
that S reliably knows that p will often be more valuable not only for S, but also, and especially, for other people in S’s epistemic community (section 4). I conclude by sketching
what I call a contextualist account of the extra value that reliabilist knowledge sometimes,
though not always, has (section 5).
2. Goldman’s and Olsson’s argument for weak knowledge
Are there contexts in which knowledge reduces to true belief? Goldman and Olsson argue that
there are. Sometimes, they claim, knowing that p just means not being ignorant of the fact that
p. Their argument proceeds by way of a reductio. Suppose that “knowledge” were to mean, in
such contexts, “true belief plus X”. Then if S failed to know that p, this could be true because
S failed to meet condition X. Hence, since knowledge is in such contexts the complement of
ignorance, S could be said to be ignorant of p despite the fact that she truly believes that p.
But such a result regarding the notion of ignorance, Goldman and Olsson maintain, would be
“plainly wrong”; it would at least be “highly inaccurate, inappropriate and/or misleading”
(Goldman and Olsson, forthcoming, manuscript, p. 3; page numbers henceforth refer to the
manuscript).
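The reductio can be laid out schematically; the following formalization is mine, not Goldman’s and Olsson’s:

\[
\begin{aligned}
&\text{(a)} \quad K(S,p) \leftrightarrow TB(S,p) \wedge X(S,p) && \text{(assumption for reductio)}\\
&\text{(b)} \quad \mathrm{Ign}(S,p) \leftrightarrow \neg K(S,p) && \text{(ignorance as the complement of knowledge)}\\
&\text{(c)} \quad TB(S,p) \wedge \neg X(S,p) \rightarrow \mathrm{Ign}(S,p) && \text{(from (a) and (b))}
\end{aligned}
\]

By (c), a subject could count as ignorant of p while truly believing that p – the verdict Goldman and Olsson regard as “plainly wrong” – so assumption (a) is to be rejected for the contexts in question.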
I don’t wish to dispute this view about the meaning of “ignorance” (and the reasoning
in this reductio argument is certainly correct). But why should we think there are contexts in
which knowing that p is simply the complement of being ignorant of p, in the sense just
sketched? Goldman and Olsson illustrate their case as follows:
“Consider a case discussed by John Hawthorne (2002). If I ask you how many people
in the room know that Vienna is the capital of Austria, you will tally up the number of
people in the room who possess the information that Vienna is the capital of Austria.
Everyone in the room who possesses the information counts as knowing the fact;
everybody else in the room is ignorant of it. It doesn’t really matter, in this context,
where someone apprised of the information got it. Even if they received the
information from somebody they knew wasn’t trustworthy, they would still be counted
as knowing” (p. 2).[3]
I don’t think that this example can be used to illustrate that knowing that p sometimes reduces
to truly believing that p. An initial worry is that the question “How many people in the room
know that ...?” is a leading question, at least in a context in which it is known that there are
people in the room who – whatever their source – can give the right answer. The formulation
suggests that it is in any case false that no one in the room knows that Vienna is the capital of
Austria.[4] Suppose the question had instead been phrased in a more neutral way and had begun
with: “How many people in the room, if any, know that ...”, or with: “Are there any people in
the room who know that ..., and if so, how many?”. In that case the respondent, who is
supposed to be aware that the people in the room know that their informant is untrustworthy,
might well be inclined to reply: “none”. Why is this?
Let us grant for the moment that some people in the room hold the true belief that the
capital of Austria is Vienna. How firm are their beliefs supposed to be? Goldman and Olsson
don’t tell us. But it is widely agreed that knowledge involves firmly held belief in the sense of
subjective certainty. Is it plausible that someone who knows his informant is untrustworthy
generates a firm belief in what his “informant” tells him? I don’t think so. At least for
minimally rational people the following propositions seem to form an inconsistent triad:
(1) Knowledge requires firm belief.
(2) S is confronted with a piece of information p from somebody who S knows isn’t trustworthy (in questions of the kind at issue).
(3) S knows that p (solely) on the basis of the fact described in (2).

[Footnote 3: Goldman has put forth a similar argument before (in Goldman 2002). There he also takes Hawthorne’s example (Hawthorne 2002) as a point of departure, but presents his case in a slightly different way. Goldman’s earlier argument has been criticized – not successfully, I believe – by Le Morvan (2005). I shall discuss Goldman’s earlier argument, Le Morvan’s critique, and Hawthorne’s own interpretation of the example in Hawthorne (2002) and Hawthorne (2004) below. [### Discussion of Goldman 2002, Hawthorne 2004, and Le Morvan 2005 still needs to be added.]]

[Footnote 4: It suggests a reply other than zero especially in a context in which it is known that at least one person would present the answer “Vienna” when asked about the capital of Austria.]
If these propositions, for minimally rational people, form an inconsistent triad, then, since in
Goldman’s and Olsson’s example (2) and (3) are true, they must reject (1).
A possible reply at this point is that this, indeed, is what is required, but that this
consequence is unproblematic since the view that all kinds of knowledge require firm belief is
false. In order not to be ignorant in the sense in question, Goldman and Olsson may argue, it
is not necessary to hold a firm belief. Weak knowledge requires only weak belief. But this
proposal will not be of much help. Given that the subjects know their informant to be
untrustworthy (in questions of the type at issue), it is hard to see why they would form any
belief at all to the effect that the capital of Austria is Vienna and not, say, Innsbruck or
Amstetten. For example, if we model belief in terms of subjective probabilities, then if a
subject knows that a potential source of information isn’t trustworthy, why would they assign
a probability of more than 0.5 to a claim made by, or derived from, that source? (We are
assuming that the subject has no prior evidence for the truth of the proposition in question.) If
someone who I know suffers from severe schizophrenia tells me that the Martians have
landed, this would not motivate me to form even a weak belief that the Martians have landed.
(At least so I hope.) The problem with the Hawthorne-Goldman-Olsson example thus is that it
is not clear what it could mean for a subject to come to “possess the information that p” even
in the sense of acquiring a weak true belief that p - understood as assigning a probability >
0.5, but less than 1, to p5 – when this information is presented by someone who is known to
be an unreliable informant. At least for minimally rational people, the following propositions
seem to form a second inconsistent triad:
(1*) Knowledge requires belief.
(2) S is confronted with a piece of information p from somebody who S knows isn’t trustworthy (in questions of the kind at issue).
(3) S knows that p (solely) on the basis of the fact described in (2).

[Footnote 5: Goldman accepts approaches that equate degrees of belief with subjective probabilities (see for example Goldman 1999a, p. 88ff.). However, he also counts probability assignments of less than 0.5 to a proposition as “degrees of belief”. This is at least misleading, for in such cases (since Pr(p) + Pr(~p) = 1) the subject assigns a greater probability to the negation of the proposition than to the proposition itself. The most natural description of this case however is that S believes ~p. Since, in this idealized model, S should not be allowed to believe both p and ~p, we should therefore, strictly speaking, not say (as Goldman does) that if S assigns a probability of, for example, 0.4 to p, the degree of S’s belief that p is 0.4.]
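To make the threshold reading of belief explicit, here is a minimal formalization of the probabilistic model invoked above; the notation is mine, not Goldman’s and Olsson’s:

\[
\mathrm{Bel}_S(p) \leftrightarrow \Pr{}_S(p) > 0.5, \qquad \text{weak belief:} \quad 0.5 < \Pr{}_S(p) < 1 .
\]

Since \(\Pr_S(p) + \Pr_S(\neg p) = 1\), a minimally rational subject who knows her informant to be untrustworthy, and who has no prior evidence for p, has no grounds for raising \(\Pr_S(p)\) above 0.5; on this model she therefore fails to acquire even a weak belief that p.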
I have said that (1)–(3) and (1*)–(3) seem to form inconsistent triads. But maybe (2) leaves
room for interpretation. If (2) is the whole story about S’s epistemic situation, as indeed
suggested by Goldman’s and Olsson’s formulation of the example, then it is hard to see how
the story could be coherent. But maybe (2) can be read in a way that is compatible with the
assumption that S, despite knowing that the informant is usually untrustworthy, thinks that on
this particular occasion she is not.
The problem with this idea is that we are now constructing a fairly complicated
scenario in which – contrary to what is required – S’s relevant epistemic state does not reduce
to mere belief that the capital of Austria is Vienna. For again, if S is minimally rational, the
story can only be made coherent when we assume that on the current occasion the
(potentially) undercutting defeater for S’s belief that Vienna is the capital of Austria, i.e. the
belief that the informant is untrustworthy, is overridden by a defeater-defeater. For example, S
may have (what S believes is) good evidence that in this particular instance the generally
untrustworthy informant is trustworthy. There is nothing wrong with such an epistemic
situation. The point however is that now the epistemic state under consideration is not simply
the complement of being ignorant that Vienna is the capital of Austria in Goldman’s and
Olsson’s minimal sense of having a true belief. In order to tell a coherent story about S’s
generating, under the envisaged circumstances, knowledge, we have invoked an epistemic
state constituted by true belief + X. Note, moreover, that this holds regardless of whether the
belief in question is construed as weak or as firm belief. In both cases the strength of the
meta-defeater must be adjusted to the firmness of the first-order defeater belief that the
informant is generally untrustworthy. But in both cases, if the meta-defeater is eliminated,
then (at least in the case of minimally rational believers) the belief will disappear with it, in
which case it would clearly be wrong to say that the people in question have knowledge in
any sense of the word.
I have inserted, at various points, the qualification “at least for minimally rational
subjects”. Couldn’t Goldman and Olsson reject that constraint? Couldn’t they reply that
without rationality constraints in place it suffices for their case to assume that the people in
question received the information from somebody they knew wasn’t trustworthy, even when
no meta-defeater was at work? This interpretation seems to be suggested by Hawthorne’s
original construal of the example (Hawthorne 2002). The first part of Hawthorne’s story runs
exactly parallel to Goldman’s and Olsson’s formulation. But then Hawthorne goes on as
follows:
“Even if someone was given the information by an informant that they knew full well
they shouldn’t trust (who happened to be telling the truth on this occasion), you will in
this context count him as knowing what the capital of Austria was (so long as he had
the firm belief). [Footnote Hawthorne:] Of course, someone who didn’t in fact trust
their informant and merely used the informant as a basis for guessing an answer –
being altogether unsure on the inside – would not count” (Hawthorne 2002, pp. 253f.).
Hawthorne may be read here as envisaging a case in which the subjects, although they know
they shouldn’t trust their informant even on the present occasion, trust him nevertheless. Note
that Hawthorne’s concern is not the question whether knowledge sometimes reduces to mere
true belief, but whether there is a sense in which “knowing that p” means “possessing the
information that p”. I shall not discuss here whether Hawthorne’s argument is convincing. But
I think the question whether we should ascribe knowledge to someone who knows they
shouldn’t trust their “informant”, but trusts him nevertheless, should be answered in the
negative. Suppose Tom, who has no idea what the capital of Zimbabwe is, is given the
opportunity to use a machine that is loaded with twenty index cards displaying the names of
the twenty largest cities in Zimbabwe, including the name of the capital. When he pushes a
button, the machine spits out one card at random. Tom knows that this is the way the machine
works and thus knows that the information he will receive from the machine is unreliable. He
pushes the button, picks a card, and forms the belief that the city named on the card is the
capital of Zimbabwe. As it happens, the name is correct (“Harare”). Is Tom’s true belief an
instance of knowledge? Clearly not. The situation, I submit, is analogous to the one where
someone trusts an informant they believe to be untrustworthy (in the absence of any meta-defeater). They have no reason to think that if the informant states “p”, p is more likely to be true than not-p.[6] Hence we wouldn’t, and shouldn’t, ascribe knowledge to the subject.
[Footnote 6: Would it provide a way out to drop the critical condition that the subjects know that the informant is untrustworthy? No. We would again face the question what motivation the subjects might have for forming the belief that the capital of Austria is Vienna. They could only rationally form this belief if they believe that their informant is trustworthy. But in that case, as in the cases discussed above, their Vienna-belief is not mere true belief, and Goldman’s and Olsson’s point about knowledge as mere true belief would collapse.]
I conclude, therefore, that Goldman’s and Olsson’s argument for weak knowledge fails. It fails because it relies on cases in which it is inappropriate to ascribe knowledge to the subjects under consideration, since it is unclear why they would even hold a
corresponding belief. If the situation is reformulated so that it is plausible to suppose that the
subjects have the belief in question, by contrast, they don’t merely have this belief, but are
epistemically justified (warranted, entitled, etc.) in holding it, and in a fairly complex way.
Either way, it is false that in such cases knowledge attributions in the sense of attributions of
mere true belief would be appropriate. So, contrary to what Goldman and Olsson suggest, the
value problem cannot be mitigated, or relativized, in this way by pointing out that in some
contexts knowledge reduces to mere true belief.
3. First-person extra value and internalism
I turn now to Goldman’s and Olsson’s argument that in contexts in which knowledge cannot
be reduced to mere true belief, process reliabilism can – contrary to what the critic claims –
account for the extra value of knowledge. The core idea of the Goldman-Olsson solution is
that the “extra valuable property” of true belief that has been produced in a reliable way is the
“property of making it likely that one’s future beliefs of a similar kind will also be true” (p.
12). More precisely, the claim is that “under reliabilism, the probability of having more true
belief (of a similar kind) in the future is greater conditional on S’s knowing that p than on S’s
merely truly believing that p” (p. 12). Goldman and Olsson call their proposal “the conditional
probability solution”, noting that probability is to be understood here in an objective sense.
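Stated schematically – the formalization, again, is mine – let T be the proposition that S acquires further true beliefs of a similar kind in the future, and let knowledge be understood in the simple reliabilist sense. With Pr read as objective probability, the conditional probability solution claims:

\[
\Pr\big(T \mid S \text{ knows that } p\big) > \Pr\big(T \mid S \text{ merely truly believes that } p\big).
\]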
Consider their illustration in terms of a modernized Larissa example. Suppose you are driving
to Larissa, and that there are two forks on the way. Compare two situations. Situation 1: You
are using a reliable onboard navigation system which, when you reach the first crossroads,
tells you correctly that the shortest route to Larissa is to the right. Situation 2: Your onboard
navigation system is unreliable, but when you reach the first crossroads, it recommends
correctly that you take a right, just as the reliable one did. In both cases you form the
corresponding belief. Goldman and Olsson argue as follows:
“In both situations you believe truly that the road to Larissa is to the right (p) after
receiving the information. On the simple reliabilist account of knowledge, you have
knowledge that p in Situation 1 but not in Situation 2. This difference also makes
Situation 1 a more valuable situation (state of affairs) than Situation 2. The reason is
that the conditional probability of getting the correct information at the second
crossroads is greater conditional on the navigation system being reliable than
conditional on the navigation system being unreliable” (pp. 12f.).
The first thing I should like to note with regard to this argument is that in Situation 2 we can
attribute weak knowledge to the subject, in the sense laid out by Goldman and Olsson in the
first part of their paper. Situation 2 in the present example doesn’t differ in any relevant
aspects from Hawthorne’s Vienna example. When you rely on an unreliable navigation
system, you receive your information from a source that isn’t trustworthy. Yet when the
information is, on a given occasion, nonetheless correct and you adopt it, then, according to
Goldman’s and Olsson’s definition of weak knowledge, you weakly know the proposition in
question. To see that the two cases are analogous consider the following situation. (This
example might be considered slightly anachronistic, but let that pass.) Larissa, Situation 3:
Socrates, Meno, and Theaetetus are driving together to Larissa. Each has brought his own
computerized onboard navigation system. Socrates is using a reliable system (a Garmin
product, say), whereas both Meno and Theaetetus are using an unreliable instrument. At the
first crossroads both Socrates’ and Meno’s navigation systems relay the correct information
that the best route to Larissa is to the right. Theaetetus’ system however tells him to take a
left. Let us assume that everyone in the car accepts the proposition suggested by his own
navigation system. If Goldman’s and Olsson’s argument for weak knowledge goes through,
then the Hawthorne story would appear to be applicable to this situation as well, as follows:
“If I ask you how many people in the car know that the best route to Larissa is to the right,
you will tally up the number of people in the car who possess the information that the best
route to Larissa is to the right. Everyone in the car who possesses the information (i.e., both
Socrates and Meno) counts as knowing the fact; everybody else in the car (i.e., Theaetetus) is
ignorant of it. It doesn’t really matter, in this context, where someone apprised of the
information got it.”
Now, if the Larissa example invites, or at least doesn’t rule out, a comparison of
Situation 2 (the weak-knowledge situation) with Situation 1 (the reliable-knowledge
situation), then the same goes for the Vienna example. Supposing for the moment that the
Vienna argument for weak knowledge works, our description of the Vienna case should
parallel the description of the Larissa case, as follows: “Suppose I ask you how many people
in the room know that Vienna is the capital of Austria (p). We may consider two situations,
differing only in whether or not the source of information the subjects have used is reliable.
Suppose that in Situation 1 the source is reliable and that in Situation 2 it is unreliable. In both
situations you might tally up the number of people in the room who possess the correct
information that Vienna is the capital of Austria. Yet on the simple reliabilist account of
knowledge, only those subjects who possess the right information in Situation 1, but not those
who possess the right information in Situation 2, know that Vienna is the capital of Austria.
This difference makes Situation 1 a more valuable situation (state of affairs).” Let us
henceforth abbreviate weak knowledge by “knowledgeW” and (simple) reliabilist knowledge
by “knowledgeR”. The point then is that, if the Larissa context allows for a distinction
between knowledgeW and knowledgeR, then so does the Vienna context. Or, to put it
differently: It is hard to see why in the Vienna case we should say that the subjects have
knowledgeW, full stop, without any implied contrast with a kind of knowledge that is more
valuable; but that in the Larissa case the subjects in Situation 2 have knowledgeW, which should be contrasted unfavorably with the knowledgeR that is possessed by the people in
Situation 1. In other words, I don’t think we should say without qualification, as Goldman and
Olsson do, that there are contexts in which “knowledge” just means “knowledgeW”, and that
this fact challenges the claim that knowledge is always more valuable than mere true belief.
Instead, Goldman and Olsson should say that there is knowledgeW and knowledgeR, and that
even in contexts in which ascriptions of knowledgeW are appropriate, the value of knowledgeR
with respect to the proposition in question would exceed the value of knowledgeW.
My second question is how exactly the answer to the value problem is supposed to
work. To begin with, the fact that in Situation 1 of the Larissa example the navigation system
is reliable at the first crossroads doesn’t entail that it is, in this situation, also reliable at the
second crossroads. So this cannot be the whole story. Goldman and Olsson acknowledge this
and add that their conditional probability solution depends on a number of empirical
regularities. One of them is generality. Generality means that “if a given method is reliable in
one situation, it is likely to be reliable in other similar situations as well” (pp. 13f.). But this is
not enough. It must also be assumed that problems of the type in question are likely to occur
again. This feature they dub non-uniqueness. Furthermore, Goldman and Olsson argue, you
must also have cross-temporal access to the method in question. It doesn’t suffice that the
method in question is likely to be reliable in similar situations and that such situations are
likely to occur again. The method must also be available to you on other occasions. Finally,
they argue, an empirical condition that needs to be fulfilled is that “if you have used a given
method before and the result has been unobjectionable, you are likely to use it again on a
similar occasion, if it is available” (p. 13). This condition they call learning. Let us consider
learning first. Exactly which features of your cognitive condition can explain why it is likely that you will use a method you have employed before on other occasions of a similar type?
Goldman and Olsson reply that, “having invoked the navigation system once without
apparent problems, you have reason to believe that it should work again. Hence you decide to
rely on it also at the second crossroads” (p. 13, my emphases). As it stands, however, this is
not entirely accurate. In order for you to decide to use a given method again, you must not
only have reason to believe that it should work again. You must also realize that you have
reason to use the method again; you must think that it is a good reason; and you must utilize
the reason. Your reason will typically be constituted by other beliefs, especially by the belief
that the mechanism or method in question is diachronically reliable. So, in order (rationally)
to decide, on the basis of such a reason, to use the method again, you must believe that the
reason-constituting beliefs justify the belief that the mechanism should also work on the
present occasion. Considering your earlier employment of the method, not only must the
results have been unobjectionable; you must also believe that they were, and consider this as
inductive evidence for the fact that the method is diachronically reliable; and so on. In short,
you must have a whole battery of beliefs and higher-order beliefs about the situation in order
to (rationally) decide to rely on the method on further occasions. This is a classical situation
of internalist justification.[7]

[Footnote 7: This point even seems to be reflected in the terminology Goldman and Olsson chose: “learning” is a process that is based on such kinds of reflections.]
Similar constraints apply to non-uniqueness, cross-temporal access, and generality. If
it is to be more valuable for you to form a belief on a given occasion on the basis of a reliable
mechanism than on the basis of an unreliable mechanism because you will then tend to use
that mechanism also on future occasions, you must recognize future situations as being of the same type as earlier ones. You must believe on future occasions that the method or
mechanism you used on earlier occasions is also available to you now. You must believe that
the mechanism employed earlier successfully solved the problem then, and that it still works
properly and is therefore likely to solve the problem on the present occasion as well.
Otherwise you might just as well be inclined to use another mechanism or method that, as it
happens, is less reliable or unreliable. Consider once more Socrates’ reliance on his
navigation system when traveling to Larissa. If he didn’t believe that his navigation system
had been working properly at the first crossroads, that it hadn’t broken in the meantime, that
the situation at the second crossroads was of a similar type, etc., he would not decide to use
the system again at the second crossroads, instead of using, for example, Meno’s system. In
general, in order to account for the extra value of reliabilist knowledge along the lines
Goldman and Olsson propose, the overall epistemic condition of the believer that needs to be
invoked must involve a number of psychological states of the believer to which she has
reflective access. This is a heavy concession to internalism.
I have argued that in order to account, within the Goldman-Olsson approach, for the
extra value that a reliably produced true belief has for S, S must have other beliefs about the
epistemic status of the target belief. Must these beliefs constitute knowledge? That depends
on how we construe the notion. In order for the subject to successfully reuse a cognitive
process or method, it would seem to suffice that the subject hold true beliefs about the
reliability of the process and related questions. Employing Goldman’s and Olsson’s notion of
weak knowledge as they have introduced it in the first part of their paper, I shall say that this
is the weak-KK condition to which their conditional probability solution is committed. S’s
knowingR that p, i.e., S’s knowledge in the sense of simple reliabilism, is more valuable for S
only if S has at least knowledgeW, i.e. true belief, to the effect that S enjoys knowledgeR that
p.
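Schematically (my notation): writing K_R and K_W for reliabilist and weak knowledge, TB for mere true belief, and V_S for epistemic value for S, the weak-KK condition reads:

\[
V_S\big(K_R(S,p)\big) > V_S\big(TB(S,p)\big) \rightarrow K_W\big(S,\, K_R(S,p)\big),
\]

that is, S’s knowingR that p is extra valuable for S only if S at least truly believes that she knowsR that p.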
What is the significance of these observations for Goldman’s and Olsson’s approach?
A core motivation of reliabilism – and who has taught us this better than Goldman himself? –
is to provide a non-Cartesian, externalist account of knowledge and justification. Yet if I am
right, then in order for (simple) reliabilism to account for the value problem (in the way
Goldman and Olsson propose), the account must balance its general externalist orientation
with some severe internalist restrictions. This is a concession that Goldman usually tries to
avoid in his work. Recall, for instance, the concluding sentences of “Internalism Exposed”: “I see no hope for internalism”, Goldman writes there. “It does not survive the glare of the spotlight” (1999b, p. 293).[8] If what I have argued is on target, there is also no hope for reliabilism to handle the value problem along the lines Goldman and Olsson propose, unless reliabilism takes on board the constraint that the conditions that turn true belief into knowledge be placed under the glare of the spotlight of the subject’s mind.

[Footnote 8: Perhaps “Strong and Weak Justification”, in which Goldman allows for internalist no-defeater constraints and where he argues that strong, reliabilist justification may entail blamelessness, is an exception.]
So far we have been considering what may be called a first-person extra value claim:
the claim that if S reliably knows that p, this may be more valuable for S than merely truly
believing that p. But we may also consider a third-person perspective. If S knowsR rather than
merely knowsW that p, can the former state of affairs be more valuable for other epistemic
agents in S’s epistemic community?
4. Third-person extra value
Surely it can. And in this respect, it seems, reliabilism would not have to pay the tribute to
internalism which I have argued it must pay in order to account for the first-person extra value
of knowledgeR. Epistemic communities take interest in whether their members are reliable
informants in subject matters that are considered to be important. Switching to the level of
knowledge ascriptions, let us borrow a famous phrase from Edward Craig and say that a
crucial function of knowledge ascriptions is to mark “approved sources of information”
(Craig 1990, pp. 18, 11, 17). As in the first-person case, this has a diachronic dimension. Often it is important to us not only whether an informant gets it right on the present occasion, but
also whether we can rely on him on similar occasions that are still to come. This observation,
although it changes the perspective taken by Goldman and Olsson, is, I take it, within the
general spirit of their idea. Will a third-person account of the extra value of knowledgeR also
be contaminated by internalist provisos?
Consider good old Mr. Truetemp, into whose brain a reliable thermometer-cum-belief-generator has been implanted (cf. Lehrer 1990, pp. 163f.). Truetemp doesn’t know that this
has happened. Let us add to Lehrer’s original story that Truetemp is, for some reason,
debarred from generating the belief that he can reliably tell the temperature. (He cannot
collect records of his successful epistemic performances in this area, for example, and thus
doesn’t form inductive beliefs about his abilities, etc.). In short, Truetemp has no belief
whatsoever about having a special cognitive ability to assess the temperature of his
environment. Then, as I have argued, his knowledgeR on a given occasion about the current
temperature has no extra epistemic value for him, in comparison with a situation in which he
would only have a true belief about the temperature. The reason is, in Goldman’s and
Olsson’s words, that he fails to “have reason to believe” that he is a reliable cognizer in such
matters. There is no way in which he could “decide to rely on” the mechanism on other
occasions, and so on.
Yet the fact that in a given situation Truetemp knowsR the temperature may have extra
value for others. Whenever there is no other thermometer at hand, his colleagues in the
department, who have formed inductively well-grounded beliefs that Truetemp is a reliable
informant about the temperature of the environment, will consult him when they want to
know what the temperature is. In this example, we have ruled out any form of internal access
on the part of Truetemp to the conditions that elevate his beliefs to knowledgeR. So does
Truetemp’s pure knowledgeR exceed the value of knowledgeW, i.e. of mere true belief? Yes
and No. For him it does not. But for others in his epistemic community it does. Bringing a
third-person perspective into the picture thus relativizes the reliance of reliabilism on
internalist constraints for purposes of solving the value problem. While I have argued that it
cannot avoid such constraints with respect to the first-person extra value of knowledgeR, it is
not so committed when it comes to the third-person extra value of knowledgeR. I would like to
conclude with a brief sketch of where I think all this is leading.
5. Towards a contextualist account of epistemic extra value
I myself think that we must adopt what I call a “contextualist approach to the value problem”.
I believe (as argued in an unpublished paper that I have presented at various public occasions
over the last few years)[9] that the extra-value claim can indeed be defended from the point of view of reliabilism, but that it needs to be relativized to certain epistemic
contexts. The right way of phrasing the value-question is thus indeed not: “Why is knowledge
always more valuable than mere true belief?”, but instead: “For whom, and in which kinds of
epistemic context, is knowledge more valuable than mere true belief?” As we have seen,
Goldman and Olsson also believe that the first, traditional, way of phrasing the question
should be rejected. But their reason is that knowledge sometimes reduces to mere true belief –
a claim for which, I have argued, they have not offered a convincing argument. Yet even if
knowledge, in every context, is true belief + X, this doesn’t imply that knowledge is, in every
context, more valuable than mere true belief. There could be contexts in which S’s knowledge
that p doesn’t reduce to true belief, but nevertheless fails to be more valuable than the latter.
In other contexts, by contrast, S’s knowledge could be more valuable, not only for S, but also,
and especially, for other members of S’s epistemic community. This, I believe, is the most
plausible way of looking at the problem.

[Footnote 9: “A Contextualist Solution to the Value Problem”: English versions have been presented at the GAP conference, September 2006 in Berlin, and at the Bled Epistemology conference, May 2007 in Bled. German versions have been presented in Oldenburg, in July 2007 in Dresden, in January 2008 in Düsseldorf and in Potsdam, and in April 2008 in Dresden.]
To flesh this out a bit, consider again Craig’s point. Knowledge attributions are rooted
in our desire to “flag approved sources of information”. But such ascriptions usually have
strong evaluative components. They not only express approval of the fact that a belief currently under consideration is true. They also mark general sources of information, i.e. sources for generating, and distributing, true beliefs on similar occasions. This kind of
evaluative component is especially salient in contexts in which, apart from the truth of a
particular belief currently under consideration, a person’s future performance as a reliable
epistemic agent matters to us. The assertion: “Our guide knew that we wouldn’t be able to
reach the top of the Zugspitze before late afternoon, and thus strongly recommended that we
hike back to the mountain shelter” normally expresses approval of the guide and of his belief
as having a reliable basis. If we assert this, we would normally be prepared to employ the guide again on future occasions. Similarly, the remark: “The doctor knew that it was a bite
from a tick” brings out that we regard the doctor as a reliable informant in this matter, and
would be prepared to consult him again in matters that look similar. However, it would seem
wildly implausible to me to maintain that we are prepared to ascribe reliable knowledge to
someone in our epistemic community only when the subject matter under consideration is of a
type that we think will interest us also on future occasions. In other words, epistemic contexts,
in the sense here relevant, are governed partly by the epistemic goals and interests of
knowledge ascribers. These goals and interests determine whether or not knowledgeR is more
valuable than knowledgeW. Note that, if this thesis is right, it also applies to the first person
case.
The epistemic value contextualism I want to advocate is inspired by, but must be sharply distinguished from, current forms of semantic or conversational contextualism regarding
the truth of knowledge ascriptions. Conversational contextualism (as championed in recent
decades by Stewart Cohen, David Lewis, and Keith DeRose) claims, roughly, that the truth
values of knowledge ascriptions vary with the epistemic standards of the attributer’s context.
The position here outlined, by contrast, maintains that different epistemic contexts are
dominated by different epistemic goals and interests, and that this can account for the fact that
knowledge – even in the sense of knowledgeR – is sometimes, but not always, more valuable
than mere true belief. To summarize, the answer to the value problem from the point of view
of simple process reliabilism that emerges from the discussion in this paper has three main
ingredients: (i) a third-person perspective on the extra-value intuition; (ii) an emphasis on the
diachronic dimension of the so-called truth goal of believing; and (iii) a contextualist proviso
that accounts for the fact that knowledge is sometimes, though not always, more valuable than
mere true belief.
6. Literature
Alston, William P., 1989: Epistemic Justification, Ithaca, London: Cornell University Press
Baehr, Jason (forthcoming): Unravelling the Value Problem, Manuscript, draft July 2006.
Brady, Michael, 2006: Appropriate Attitudes and the Value Problem, American Philosophical
Quarterly 43
Craig, Edward, 1990: Knowledge and the State of Nature, Oxford: Clarendon
DePaul, Michael R., 2001: Value Monism in Epistemology, in Knowledge, Truth, and Duty,
ed. Matthias Steup, Oxford: Oxford University Press, 170-183
Goldman, Alvin I., 1979: What is Justified Belief?, repr. in Goldman (1992), 105-126
Goldman, Alvin I., 1980: The Internalist Conception of Justification, Midwest Studies in
Philosophy V, 27-51
Goldman, Alvin I., 1986: Epistemology and Cognition, Cambridge, Mass.: Harvard
University Press
Goldman, Alvin I., 1988: Strong and Weak Justification, repr. in Goldman (1992), 127-141
Goldman, Alvin I., 1992a: Liaisons – Philosophy Meets Cognitive and Social Sciences,
Cambridge, Mass., London: The MIT Press
Goldman, Alvin I., 1992b: Reliabilism, in Jonathan Dancy and Ernest Sosa (eds.), A
Companion to Epistemology, Oxford: Blackwell, 1992, pp. 433-436
Goldman, Alvin I., 1999a: Knowledge in a Social World, Oxford/New York: Clarendon
Goldman, Alvin I., 1999b: Internalism Exposed, The Journal of Philosophy (96), 271-293
Goldman, Alvin I., 2001: The Unity of the Epistemic Virtues, in Virtue Epistemology –
Essays on Epistemic Virtue and Responsibility, ed. by A. Fairweather and L.
Zagzebski, Oxford, New York: Oxford University Press
Goldman, Alvin I., 2002: What is Social Epistemology? A Smorgasbord of Projects, in id.,
Pathways to Knowledge: Private and Public, Oxford: OUP, 2002, 182-204
Goldman, Alvin I., and Erik J. Olsson (forthcoming): Reliabilism and the Value of
Knowledge, to appear in Epistemic Value, ed. by D. Pritchard, A. Millar, and A.
Haddock, Oxford University Press, 2008
Hawthorne, John, 2002: Deeply Contingent A Priori Knowledge, Philosophy and
Phenomenological Research 65 (2), 247-269
Hawthorne, John, 2004: Knowledge and Lotteries, Oxford: Clarendon Press
Jones, Ward, 1997: Why Do We Value Knowledge?, American Philosophical Quarterly 34,
423-439.
Jäger, Christoph, 2006 (draft): A Contextualist Solution to the Value Problem
Kvanvig, Jonathan L., 2003: The Value of Knowledge and the Pursuit of Understanding, Cambridge, New York: Cambridge University Press
Lehrer, Keith, 1990: Theory of Knowledge, Boulder: Westview Press
Le Morvan, Pierre, 2005: Goldman on Knowledge as True Belief, Erkenntnis 62, 145-155
Riggs, Wayne, 2002: Reliability and the Value of Knowledge, Philosophy and Phenomenological Research 64, 79-96
Riggs, Wayne (forthcoming): The Value Turn in Epistemology, in New Waves in Epistemology, ed. V. Hendricks and Duncan H. Pritchard, Ashgate
Sosa, Ernest, 2003: The Place of Truth in Epistemology, in DePaul & Zagzebski (2003), 155-179
Swinburne, Richard, 1999: Providence and the Problem of Evil, Oxford: Oxford University
Press
Zagzebski, Linda, 2004: Epistemic Value Monism, in Ernest Sosa and His Critics, ed. John
Greco, Oxford: Blackwell, 190-198