Timothy Williamson, Knowledge and its Limits

Timothy Williamson, Knowledge and its Limits, (Oxford: Oxford
University Press, 2000), xii + 340pp.
“[This book] takes the simple distinction between knowledge and ignorance as a
starting point from which to explain other things, not as something itself to be
explained. In that sense the book reverses the direction of explanation predominant in
the history of epistemology.” These words, from Timothy Williamson’s preface to
Knowledge and its Limits, give an adequate account of its most notable characteristic.
The book never offers a full-blown reductive analysis in the traditional way; it moves
swiftly to applications of the new ‘theory’ to the philosophy of mind, philosophy of
science, philosophy of language and decision theory: from novel replies to scepticism
to the dispute between realism and anti-realism and the paradoxes of game theory.
The fundamental postulate of Williamson’s book, and most of what, according to
Williamson, can be said about knowledge, is this: ‘knowing is a state of mind’. It is a
conjunction of a mental state and a factive attitude to propositions. To be precise: ‘to
know’ is a factive mental state operator; it deductively entails the proposition that it
operates on. If a subject knows p, then it follows deductively that p. But this still does
not offer a constructive, purely epistemological, analysis of knowledge. Williamson is
explicit about not being concerned with that. His analysis, if it can be compared to a
scientific enterprise, is at best a ‘principle theory’, such as Special Relativity, for
example. But Williamson also says his intention is not to offer a new theory that can
be proved or disproved, in a way that a scientific theory can be, but to use the more
recent ‘discoveries’ in the philosophy of mind, language and science to solve the
problems that plague the previous theories. And these applications Williamson
suggests should be a benchmark against which his new theory should be judged. This,
however, is precisely the motivation behind principle theories.
Despite Williamson’s insistence to the contrary, I find it difficult to escape the
feeling that something of scientific methodology can be recognised in Knowledge and
its Limits, indeed in Williamson’s proposal in general. His goal is to solve the
problems of old theories so as to successfully apply the new theory in other areas of
philosophy and science. On such a view he appears to be postulating a principle theory,
one that stems from a simple generalisation and proceeds to solve the old
problems, never straying too far from the original generalisation. As such it would
have great clarity and would be very hard to disprove. But it would, on the other
hand, remain a generalisation, a tool for solving problems, not a detailed explanation
of what knowledge is.
It would also borrow terminology and concepts from other areas of philosophy.
Williamson effectively does that by relying on the philosophy of mind terminology,
introducing new concepts into epistemology. But his application of those concepts
might soon enough require a change in the fundamentals of the discipline that those
concepts have been borrowed from. Such a change may alter the applicability of the
concepts in question to his own enterprise. Thus, the new theory has to tread very
carefully in order not to undermine its own foundations. It needs to pave the way for a
constructive development in epistemology that will offer an analysis of knowledge
built from atomic elements of epistemology.
And there is something intuitive about our desire to understand the world around
us constructively, including, to some extent, philosophical concepts, through
reductive analysis into atomic elements. If Williamson’s work is to result in acceptance
of knowledge as some such atomic element, he has to be careful to draw a clear line
between his and the old reductive analyses, so as not to allow circularity in his own
reasoning. This will require rigorous scrutiny of all areas where the new theory is
applied.
The material presented in the book is not all new. Most of the book contains
revisions (with additions, most notably replies to criticism) of Williamson’s papers
where the foundations of his ‘radical’ view first appeared, most notably of ‘Is
knowing a state of mind?’, Mind, 104 (1995), 533-65. A short stroll through the
‘traditional’ history of theories of knowledge will show why Williamson’s proposals
are so radical.
Traditional analyses of knowledge are based on the formula that knowledge equals
true belief plus some mysterious ingredient that the analysis is supposed to uncover:
knowledge = truth + belief + x. The tripartite analysis, that knowledge is a justified true
belief, was not able to overcome the sceptical problems presented through the Gettier
counterexamples (Analysis, 23 (1963), 121-3). The first reaction to the simple Gettier
counterexamples was to strengthen the justification requirement, by adding the ‘no
false lemmas’ clause. That is, for S to know p, S must have a justification of belief in
p involving no false beliefs. But it is not always possible to structure the route from
one belief we hold to another as a valid argument. Even if it were possible, how were
we to stop the infinite regress of such argumentative justification?
The models supposed to provide the justification required were coherentism and
foundationalism. Foundationalism postulates the existence of a special type of belief
that is not in need of justification by further beliefs. But what then will justify them?
Experience, their content or something else? Each of the solutions offered has its own
problems. Perhaps it was impossible to lay down the detailed structure of the
foundationalist model of knowledge, leaving several routes open to sceptical
challenges. Coherentism, a model on which a belief is justified in virtue of coherence
with the believers’ background system of beliefs, suffers from a major drawback,
namely that an internally coherent system of beliefs could nevertheless be mad. Furthermore, the
relation between coherentism in knowledge theories and the coherence theory of truth
must be explored and justified.
If we demand that there be a non-deviant causal connection between S’s
belief that p and the fact that p, we will successfully fend off the sceptical challenges
in some possible worlds (thus leaving open the possibility of knowledge), but we’ll
have to admit that facts, as well as events, are possible agents of causation. This
choice of agents of causation will open up the question of the internalist and the
externalist position. The former will argue, in general, that the ingredient x must be
cognitively accessible (the believer must be aware of it, at least on reflection), whilst
the latter will oppose that position. On the causal theory, we will also not be able to
have knowledge of the future, as that would imply backward causation. Mathematical
facts, on the other hand, seem too abstract to cause anything. Finally, the sceptic can
increase the deviancy of the causal chains, imposing further complications on the
theory.
Nozick’s counterfactual conditional analysis solved a range of sceptical challenges,
but introduced a whole set of its own, most notably by being forced to abandon the
closure principle for knowledge. This effectively warranted us to
hold unjustified beliefs. On the other hand Nozick’s theory was vacuously acceptable
in the case of mathematical knowledge.
A number of issues are left unsettled above, but the increasing complexity of the
analysis made it incapable of dealing with all of the problems from a single
foundation. There were too many theories about, each suitable to deal only with its
own specific range of problems, leaving others untouched or choosing to ignore them.
Williamson hopes to make a revolutionary move in the game, by changing the very
foundations of traditional epistemology and then setting out to either show that the
previous problems are irrelevant, or how to solve them without modifying the
foundational postulate of the theory.
Knowledge and its Limits opens with chapters arguing that we have to abandon
the formula: knowledge = true belief + x, and postulates knowledge as a factive mental
state, prior to belief. Chapter 1 sets the stage for the rest of the book. It explains in
some detail just what is meant by a ‘mental state’, ‘factive mental state operator’,
‘factive attitude’ etc. There is a series of good examples that go some way towards
alleviating the burden of introduction to new terminology, for those not familiar with
the subtleties of the philosophy of mind. It also establishes the all-new relation
between belief and knowledge. Williamson argues that knowledge is conceptually
prior to belief, and that epistemology historically did not secure (through rigorous
proof) the ground it built its theories on.
There is no a priori reason why we should accept the traditional formula, no more
than there is an a priori reason why we should accept the analysis of red according to
the formula: red = coloured + x. According to Williamson, the traditional approach
fails to prove that we can analyse knowledge in terms of belief, i.e. that belief is
conceptually prior to knowledge, without falling into circularity. He holds that the
problem is encountered generally in language. ‘Parenthood’, for example, cannot be
defined in terms of ‘ancestry’ without some additional conceptual resource, however
intuitive such a suggestion might seem. Williamson suggests that knowledge is an
adaptation of mind to the world (as opposed to action, which is an adaptation of the
world to the mind), and that the maladaptation of mind to the world results in a
residue that is pure belief. Thus he makes knowledge conceptually prior to belief. His
analysis is certainly structurally simpler than the most recent versions of, say, the
counterfactual conditional analysis. Now it has to be judged by its success in
overcoming the problems faced by traditional analyses.
Chapter 2 goes on to argue for the new theory by exploring the weaknesses of the
internalist picture of the mind, for it poses the most serious challenge to the
‘ontology’ of the new theory. However, this is ‘internalism’ closer to the philosophy
of mind terminology than to that of epistemology. These arguments are based on the
differences between mental states that are knowledge and those that are belief, and the
causal efficacy of those states that are knowledge. The latter will eventually be used
when combating sceptical attacks in Chapter 8. The arguments of Chapter 2 are
aiming to support externalism about mental contents and factive mental attitudes to
those contents. Chapter 3 develops further the externalist conception of knowing as a
state of mind, by developing a fundamental role for knowledge in causal explanations
of action. It features the first appearance of the technical terminology of probability
theory.
The strongest argument for externalism, though, will be given in Chapter 4, where
Williamson argues against the luminosity of the non-trivial features of any general state of
the subject, and thus of the mental state that is knowledge. A condition is luminous if,
whenever it obtains (and one is in a position to wonder whether it does), one is in a
position to know that it obtains. Williamson thinks that our powers of discrimination
are more limited than that. In this chapter, he argues against this further obstacle to
knowledge being a mental state, namely that it must be cognitively accessible. This is
the work that originates from ‘Cognitive homelessness’, Journal of Philosophy, 93
(1996), 554-73, claiming that we do not have a ‘cognitive home’, whatever that may
be (Descartes’ mind or Wittgenstein’s world of discourse), where everything is open
to our view.
In short, he considers a process whereby one very gradually changes from being in
a mental state without a feature F, to being in a mental state with F. Just after one’s
mental state has acquired F, one is in a mental state so similar to the mental state
without F that one cannot detect the difference between them, and is therefore
unaware, even on reflection, that one’s mental state has F. Thus the presence of F
does not guarantee the awareness of F. Williamson concludes that epistemology then
has no choice but to work with cognitively inaccessible features, i.e. to be externalist.
Further analysis of the vagueness of ‘to know’, something Williamson insists we will
have to learn to live with, is designed to defend the argument above, which appears to
be a sorites, against the standard objections to sorites arguments. Finally, at the end of this
chapter, Williamson uses the general anti-luminosity argument to argue against
Dummett’s anti-realist position (that it is not the case that all meaningful sentences
have a truth-value) in the philosophy of language.
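The anti-luminosity argument sketched above can be set out schematically; the notation is mine, not the book’s (a sketch, with K read as ‘one is in a position to know that’):

```latex
% Luminosity of a condition C over cases \alpha:
\[
\forall \alpha \, \bigl( C(\alpha) \rightarrow K\,C(\alpha) \bigr)
\]
% Margin-for-error premise, for a gradual series \alpha_0, \dots, \alpha_n
% in which adjacent cases are indiscriminable to the subject:
\[
K\,C(\alpha_i) \rightarrow C(\alpha_{i+1})
\]
% Given C(\alpha_0) and \neg C(\alpha_n): luminosity yields K C(\alpha_0),
% the margin premise then yields C(\alpha_1), and iterating the two steps
% eventually yields C(\alpha_n), contradicting \neg C(\alpha_n).
% So C is not luminous.
```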
In Chapter 5 we are shown exactly when the KK principle fails (in the cases where
vague external conditions apply), and the reliability requirement for knowledge is
explored. Williamson applies the anti-luminosity conclusion to the conceptual space
containing knowledge and truth. In that abstract space, the region in which the subject
knows p and p is true is separated from the region in which p is not true by a ‘fuzzy’
area where p is true, but the subject does not know p. The ‘fuzzy’ area can never be
made sharper, on any theoretical analysis, precisely because of the anti-luminosity of
‘to know’. The fuzziness affects our orientation, and we cannot, from the inside,
clearly map the regions where p is or is not true. Thus the KK principle applies, but
not always. These special cases occur in the fuzzy region outlined above, and
allow an exit from the sceptical scenarios without abandoning the KK principle
altogether (as Nozick is forced to do with closure).
Chapter 8 then executes a neat twist that makes the sceptical argument from the
‘Brains in vats’ scenario (perhaps the strongest and the most general sceptical
argument) circular, for it wrongly presupposes that the condition for being evidence is
luminous. That scenario also draws on the KK principle, or the assumption that when
the subject knows p, she also knows that she knows that p. In other words the sceptic
relies on the universal validity of the KK principle. As Williamson directly equates
the evidence for a belief with knowledge, he denies the sceptic the right to make the
controversial claim without further argument. In the sceptical scenario the subject
cannot distinguish her total mental state from her present mental state, but it does not
follow that there is no difference. Even for mental states, there is a difference between
appearance and reality. This does not prove the sceptic wrong; it is a way of showing
that the sceptic has not proven himself to be right yet. And that is a step in the right
direction.
But the identification of knowledge with evidence (Williamson’s equation: K=E)
may be seen as controversial. It allows foundationalism without circularity:
knowledge is the end of justification of belief, but it itself does not require
justification. However, even if it successfully solves the problems of infinite regress
of justification (thus the debate between the adherents of coherentism and
foundationalism), features in the rebuttal of scepticism, and has practical applications
in evidential probability theories (Williamson devises a modified version of objective
Bayesianism), it is yet another postulate that yearns for a justifying explanation. Many
will, despite successful applications and catchy examples, wonder: ‘How come?’.
The causal theory, outlined briefly above, did not allow any knowledge of the
future. Williamson includes some remarks in Knowledge and its Limits that make it
sound like he does not either. And maybe he doesn’t, and has reasons why he
shouldn’t (he claims he does not know he will not be knocked down by a bus
tomorrow). But there is no specific consideration given in the book to the so-called
Descartes’ principle (cf. for example C. J. G. Wright, ‘Scepticism and dreaming:
imploding the demon’, Mind, 100 (1991), 87-116), or the principle of closure under
known logical implication. For Williamson supports the idea that deducing
something from known premises is a way of coming to know it. Thus, if he knows he
will be meeting someone next week, then he knows he will not be knocked down by a
bus tomorrow. But, if he fails to know that he will not be knocked down by a bus
tomorrow, then he fails to know anything about next week, or any week thereafter.
Given that analogous concerns can arise for much of the knowledge arrived at by the
principle suggested above, we might need further guarantees from the new theory that
it has that potential leak covered. And the anti-luminosity arguments (and their
consequence for the KK principle) go some way towards that, but we need an explicit
consideration of this problem in order to fight off sceptical attacks. Otherwise, the
sceptic still has a clear line of attack open and is not forced into a defensive position
yet, contrary to what Williamson claims in Chapter 8.
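The closure reasoning in the paragraph above can be put schematically; the symbols are mine, not Williamson’s:

```latex
% Closure under known logical implication:
\[
\bigl( Kp \wedge K(p \rightarrow q) \bigr) \rightarrow Kq
\]
% With p = ``I will be meeting someone next week'' and
%      q = ``I will not be knocked down by a bus tomorrow'',
% p entails q, and the subject knows this entailment, so closure gives
\[
Kp \rightarrow Kq \qquad \text{and, by contraposition,} \qquad \neg Kq \rightarrow \neg Kp .
\]
% Failure to know q thus spreads backwards, undermining knowledge of
% anything about the future that is known to entail q.
```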
Chapter 9 further expands the equation between knowledge and evidence, which
was used in combating the sceptical challenge. It is here that the new theory has most
bearing on the philosophy of science, on what can be admitted as evidence to support
the theories in science. Chapter 6 deals with specific applications of the new theory to
some paradoxes from game theory, such as the paradox of Surprise Examinations, and
the Conditionally Unexpected Examinations. Chapter 10 is technical; it develops the
probabilistic account based on the conclusions of the previous chapters and the
principles of the Bayesian probability calculus. Chapter 11 applies the new analysis
directly to the philosophy of language by considering the rules of assertion. Chapter
12, the final chapter, explores the ‘limits’ side of Knowledge and its Limits. It tries to
go further than the arguments of Chapter 4, which merely show that the cases in
which p is available to be known do not exhaust the cases in which p is true. It
carefully explores Fitch’s argument that the point about the conjunctive
proposition that p is true and unknown is that, in virtue of its structure, it is not
available to be known in any case whatsoever. Even when considered from the
vantage point of the new theory there will be realistic limits to our knowledge:
there must be truths that can never be known by anyone.
Despite the criticism the new theory may face in the future, this is an ambitious
and refreshing project, if for nothing else then for turning the much-entangled
traditional reductive analyses upside down. Williamson proposes to
thoroughly dust all the rooms of the old house of epistemology, and his work will
send tremors through the neighbourhood, as far as the foundations of scientific
theories such as Quantum Mechanics. There are novel proposals that the quantum
state, an element of the formalism of Quantum Mechanics, should be interpreted as
the state of experimenters’ knowledge. How this will mesh with the new
understanding of knowledge, proposed in this book, is an exciting prospect for both
scientists and philosophers. Both can only benefit from detailed technical expositions,
clear examples and thorough arguments offered in this book. And there are further
applications of the new theory outlined throughout the book. It is a demanding read,
and however well trained one might be in Bayesian calculus or the subtleties of
traditional epistemology (to mention just two of the technical details), bringing them
together under one roof along with many other new elements will require serious
concentration. But the end result promises to be truly rewarding: we might be on a
course to see the world in a new light through renewed understanding of our own
knowledge, as well as its limits.