© Michael Lacewing

Reliabilism and virtue epistemology

Reliabilism (R+T+B)

Reliabilism claims that you know that p if

1. p is true;
2. you believe that p;
3. your belief is caused by a reliable cognitive process.

A reliable cognitive process is just one that produces a high percentage of true beliefs. Examples include perception, memory and testimony. True beliefs caused by such processes count as knowledge. (Of course, if these processes cause a false belief – if you misperceive or misremember or someone lies to you – then your belief isn’t knowledge, but that’s because it is false.)

One advantage of reliabilism is that it allows young children and animals to have knowledge. It is odd to say, of many animals, that they have reasons or evidence for their beliefs – they don’t have that kind of sophisticated psychology. But they get around the world very well indeed, so it is also odd to deny that they have knowledge. Reliabilism explains both points. Justification is irrelevant to knowledge, which children and animals have because their true beliefs are caused by reliable processes.

However, simple reliabilism – reliabilism as we have defined it so far – is unsatisfactory. Here’s a famous example from Alvin Goldman, ‘Discrimination and Perceptual Knowledge’: Henry is driving through the countryside. He doesn’t know it, but in this part of the country – call it ‘Barn County’ – there are lots of fake barns, just barn facades. But they have been built so that they look just like real barns when seen from the road. As he drives along, Henry often thinks ‘There’s a barn’, or ‘Hey, there’s another barn’. These beliefs don’t count as knowledge because they are false. But just once, Henry thinks ‘There’s a barn’ when he is looking at the one and only real barn in the area. This belief is not knowledge, because it is only a matter of luck that Henry’s belief is true in this one instance. But simple reliabilism has to say that Henry does know. His belief is caused by a very reliable process, namely vision, and it is caused by precisely what makes it true. The problem is that in Barn County, this reliable process has produced a true belief in circumstances in which the belief still seems only accidentally true.

One solution is to make reliabilism more complex. In normal situations, Henry can discriminate between barns and things that aren’t barns just fine, so he knows a barn when he sees one. But in Barn County, he can’t reliably discriminate between real barns and facades. That’s why he doesn’t know that what he sees is a barn when it is. This more sophisticated reliabilism says that you know that p if

1. p is true;
2. you believe that p;
3. your belief that p is caused by a reliable cognitive process;
4. you are able to discriminate between ‘relevant possibilities’ in the actual situation.

Reliability as tracking the truth

We defined a reliable process as one that produces a high percentage of true beliefs. Robert Nozick (Philosophical Explanations, pp. 172ff.) provides a different definition in terms of ‘tracking the truth’. You know that p if

1. p is true;
2. you believe that p;
3. in the situation you are in, or a similar situation, if p were not true, then you would not believe that p;
4. in the situation you are in, or a similar situation, if p were true, then you would believe that p.

(3) tends to be more important for solving the counterexamples we’ve been looking at.
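Conditions (3) and (4) are counterfactuals – claims about what you would believe if things were different. (3) is often called the ‘sensitivity’ condition and (4) the ‘adherence’ condition. As a rough sketch only (the notation is added here for convenience and is not Nozick’s own presentation: Bp abbreviates ‘you believe that p’, and the box-arrow stands for the counterfactual ‘if … were true, … would be true’):

    \[
      \text{(3)}\quad \neg p \mathbin{\Box\!\!\to} \neg Bp
      \qquad\qquad
      \text{(4)}\quad p \mathbin{\Box\!\!\to} Bp
    \]

Read aloud: if p were false, you would not believe that p; and if p were true (in your situation or ones like it), you would believe that p. A belief that meets both conditions ‘tracks’ whether p is true across nearby situations.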
In normal cases, Henry knows that he is looking at a barn, because if it weren’t a barn he was looking at, he wouldn’t believe that it was. But in Barn County, Henry does not know he is looking at a barn, because he would believe it was a barn even if it were a facade. So in normal cases, Henry knows that there is a barn by sight; but in Barn County, he doesn’t. (3) does not imply that you could not be mistaken, no matter what. As stated, it means that in the situation you are in and others that are likely to come up, you are able to tell whether or not p is true.

Denying the principle of closure

According to reliabilism, I know I have two hands. It’s true, and I can see and feel them. But now consider a rather extreme thought experiment. In Meditation I, Descartes envisages the possibility that he is being telepathically deceived by an evil demon, so that although it seems to him that there is a world of physical objects, including his own body, in fact there is no such world. The modern variant of this is to imagine oneself as a brain in a vat, being fed perceptual experiences by a supercomputer.

According to reliabilism, do I know that this isn’t true, that I’m not a brain in a vat? I believe that I am not, but can I tell reliably? The answer must be ‘no’, because if I were a brain in a vat, I would continue to believe that I am not. My experiences, after all, would be exactly as they are now, in the real world. So I don’t know that I am not a brain in a vat.

These two results – that I know I have two hands, but I don’t know that I am not a brain in a vat – produce a paradoxical conclusion. Deduction is a reliable cognitive process. If you start with true beliefs (the premises) and validly deduce another belief (the conclusion), then that belief must be true. ‘The principle of closure’ says that if I know the premises, and I validly deduce the conclusion from the premises, then I know the conclusion. So here is a valid deduction:

1. I have two hands.
2. If I have two hands, then I am not a brain in a vat.
3. Therefore, I am not a brain in a vat.

Reliabilism claims that I know (1), and of course I know (2), because it is true by definition – a brain in a vat has no hands. But although I know both premises, and the premises entail the conclusion, reliabilism says that I do not know the conclusion. That must mean that deduction does not always produce knowledge! Many philosophers have found this an absurd conclusion, one that shows that reliabilism cannot be a correct analysis of what knowledge is.

It is tempting to think that if I do not know that I am not a brain in a vat, then I do not know that I have two hands: if it is possible that I am mistaken about not being a brain in a vat, then it is possible that I am mistaken about having two hands. But this line of thought returns us to scepticism. That is why reliabilists want to say that if I am not, in fact, a brain in a vat, then the process that causes my belief that I have two hands is reliable. So I can know I have two hands without knowing whether I am a brain in a vat.

Virtue epistemology (V+T+B)

Virtue epistemology is a recent development out of reliabilism. The idea of a ‘virtue’ in this context relates to intellectual virtue. An intellectual virtue can be understood as a particular intellectual skill, ability or trait that contributes to getting to the truth. There are two types of virtue epistemology, each of which emphasises the importance of different kinds of intellectual virtue.
‘Reliabilist’ versions of virtue epistemology developed directly out of reliabilism. They focus on the virtues of cognitive faculties, such as acuity of perceptual organs, reliability of memory, and rationality of thought processes. ‘Responsibilist’ versions of virtue epistemology focus on the virtues of intellectual traits, such as caring for the truth, open-mindedness and intellectual courage. Responsibilist virtue epistemologists focus more on the conditions of a virtuous knower than on definitions of knowledge. So we will discuss only reliabilist virtue epistemology.

Reliabilist virtue epistemology claims that you know that p if

1. p is true;
2. you believe that p;
3. your true belief is a result of you exercising your intellectual virtues.

The fact that you have a true belief represents a ‘cognitive achievement’ of yours for which you deserve ‘credit’. You have the true belief ‘owing to [your] own abilities, efforts, and actions, rather than owing to dumb luck, or blind chance, or something else’ (John Greco, ‘Knowledge as Credit for True Belief’).

We can draw an analogy with skill at sports (Ernest Sosa, Apt Belief and Reflective Knowledge): suppose an archer shoots an arrow at a target. We can assess the shot in three ways:

Accuracy: did the arrow hit the target?
Adroitness: was the arrow shot well? Was the shot competent?
Aptness: did the arrow hit the target because it was shot well?

It is possible for a shot to be accurate without being adroit – it can be lucky. It is possible for a shot to be adroit without being accurate – even an expert can miss sometimes or be unlucky. And it is possible for a shot to be accurate and adroit without being apt. Suppose it was shot well, but then a gust of wind blows the arrow all over the place before finally, by chance, it hits the target. That’s quite different from when the arrow hits the target because of the skill of the archer. We can now apply this to belief:

Accuracy: is the belief true?
Adroitness: is the way that the person formed the belief an exercise of their intellectual virtues?
Aptness: is the belief true because the person used their intellectual virtues in forming it?

According to Sosa, knowledge is apt belief – belief that is true because it is formed by an exercise of intellectual virtue. This is more than being both true and the result of virtuous intellectual activity. The virtuous intellectual activity must explain why the person holds a true belief.

Does this solve our puzzle cases? Consider Henry again. Normally, of course, when Henry sees and recognises a barn, he believes it is a barn because he sees and recognises it, so his belief is apt. In Barn County, he uses these same abilities to acquire the true belief ‘there’s a barn’, so it seems that his belief is still apt. But, we said, that belief doesn’t count as knowledge. The reliabilist virtue epistemologist can reply in two ways. Either we can say that Henry doesn’t have the ability to recognise barns in Barn County, because of all the fake barns – as demonstrated by all his false beliefs about the barn facades. His true belief can’t be the result of an ability he doesn’t have, so it isn’t apt. Or we can say that, because of all the fake barns around, the fact that his belief is true in this one case is the result of luck. Again, Henry’s belief is not the result of his abilities, so it is not apt, and not knowledge.
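To pull the accounts together, here is a rough symbolic summary of the three analyses discussed above. The shorthand is added purely for convenience and is not taken from Goldman, Nozick, Greco or Sosa: Kp for ‘you know that p’, Bp for ‘you believe that p’, and the box-arrow for the counterfactual conditional used in the sketch of Nozick’s conditions earlier.

    \begin{align*}
      \text{Simple reliabilism:} \quad & Kp \iff p \wedge Bp \wedge \text{$Bp$ is caused by a reliable process}\\
      \text{Nozick's tracking account:} \quad & Kp \iff p \wedge Bp \wedge (\neg p \mathbin{\Box\!\!\to} \neg Bp) \wedge (p \mathbin{\Box\!\!\to} Bp)\\
      \text{Reliabilist virtue epistemology:} \quad & Kp \iff p \wedge Bp \wedge \text{$Bp$ is true \emph{because} it was formed virtuously (i.e. $Bp$ is apt)}
    \end{align*}

Each line is only shorthand for the numbered conditions given in the text; in particular, the sophisticated reliabilist’s further condition about discriminating between relevant possibilities is not shown.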