EELDraft2Section1

When discussing intuitions and their relevance for philosophical pursuits, there
are at least two important questions which must be asked. The first is a theoretical
question: What does it mean for intuitions to be properly utilized? The second is a more
specific question: When is it appropriate to employ a given intuition? While these
questions both center around the issue of deploying intuitions in a philosophical context,
they can be viewed as addressing different aspects; in the first case, the issue is how
intuitions are deployed, in the second, when. This section is devoted to laying out a
framework for answering that second question: When is it appropriate to employ a given
intuition? In other words, when is the intuition that p holds a good indicator that p holds?
In beginning to answer this question, we will start with some symbolization.
Substituting I(p) for ‘the intuition that p holds’, the relationship we are interested in is the
following:
I(p) → p
Before moving forward, it is important to say a little bit more about this symbolization.
The relationship we are examining should not be read as a formal implication. Such an
implication would be much too strong; it would be both very difficult to satisfy and unnecessary to support the work we generally take intuitions to be doing. Instead, the
relationship should be viewed as a material conditional. It isn’t necessary to know that
I(p) → p holds, or to have evidence that it does, to use intuitions. But if we are to take
intuitions seriously, we must presuppose an indicator relation. This point can be made
clearer by analogy with instruments for measurement. When using a thermometer, we
assume that the indicator relation T(p) → p holds, where p is a proposition about the ambient temperature and T(p) represents the thermometer's reading that p. Similarly, if our intuitions are going to
do any work for us, we must presume the minimal indicator relation I(p) → p.
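To make the contrast drawn above explicit (the necessity operator below is my gloss on the stronger, rejected reading, not notation used elsewhere in this paper), the rejected reading would be something like a strict conditional, while the intended reading is the bare material conditional:
□(I(p) → p)   (too strong)
I(p) → p   (intended)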
So, when are we justified in employing intuitions? When does I(p) → p hold? What
we need is a process for ratifying intuitions—verifying the legitimacy of their use in
different circumstances. As we will show, the relationship I(p) → p requires
additional specification. In order to ratify intuitions, one must specify a domain of discourse, or set of propositions, for which the intuitions are to be considered relevant, as well as a set of contexts in which to apply the intuitions to that discourse.
The need for these complications can again be seen by analogy with
instruments. Simple room thermometers are designed for a specific purpose, and as such
have a limited temperature range. Such a thermometer would not serve well to gauge the
temperature of cooking meat, and if such a task were to be included in the domain in
which the thermometer was expected to function, the thermometer would become an
unreliable indicator. This is analogous to the need for a limited domain of discourse in
which to employ particular intuitions. Gamblers are all too aware of this need. One's intuitions regarding Texas Hold ‘Em are woefully ineffective when playing Five Card Stud. Similarly, chicken sexers—those with the inexplicable ability to identify the sex of chickens—know all too well that even they are not prepared to determine the sex of Saturday Night Live's "Pat." These are all examples of the need to define a domain of discourse in which intuitions (or measurements) are valid.
In addition to defining a domain of discourse, one must also determine the
contexts in which intuitions can appropriately be appealed to. For instance, even an
expert chicken sexer knows not to trust his intuitions about the sex of a chicken
when he is three sheets to the wind. Gamblers would do well to recognize their own
related limitations. Similarly, one who claims to predict the weather based on the
pain in his knee knows not to trust his meteorological intuitions when the source of
the pain is his grandson kicking his knee. Furthermore, even those with a good
sense of etiquette realize the inapplicability of their expectations in other cultures.
All of these examples go to show that even when an appropriate domain of discourse
is found, there still exists the need to identify the relevant contexts in which
intuitions about that domain can be trusted.
In a more complete form, then, the process of ratification can be summarized as
follows: a domain of discourse is deemed appropriate for the employment of
intuitions just in case our intuitions about propositions within that domain are
accurate in at least some contexts. In definitional form:
An intuition that p, where p is some proposition within domain D, is ratified in contexts c just in case, for all propositions p in D which are intuited in contexts c, the indicator relation I(p) → p holds.
The relation in this definition is, again, designed to be read as a material
conditional; the indicator relation here need not be any stronger than it was in the simple
definition. What we are left with, when intuitions about propositions within a
particular domain are ratified, is a set of contexts in which we are allowed to apply
intuitions which fall into that domain. When intuitions are so applied, we have some
confidence that the indicator relation holds. This allows for some amount of fallibility as well: I(p) need only be a reasonably good indicator of the truth of p.
Given this definition, we can now discuss its implications for the application of
intuitions. If we have evidence that ratification fails for a given domain D and set of contexts R, then we should presume that the faculty of intuition is not reliable for D in R. We call this the epistemic conditional, and it follows directly from the definition of ratification. From this conditional, it seems reasonable to adopt the following methodological condition: If there
is evidence that intuitions that p within D are not ratified in R, then one ought to refrain
from bringing intuitions to bear upon D in R. Before discussing this further, we will
introduce a formalized version of our notion of ratification:
Rat(D,R): (p  D) (c  R) [I(p,c) → p]
From this point forward, ~Rat(D,R) will indicate that the indicator relationship does not hold for D and R (here I(p,c) stands for the intuition, formed in context c, that p holds). To restate the methodological condition in this notation: if there is evidence that ~Rat(D,R), then one ought to refrain from using intuitions for D in R. This condition relies on the fact that ratification requires the
presupposition of an indicator relationship between the intuition that p holds and p
holding. If there is evidence that ~Rat(D,R), then we have a reason to doubt this
supposition. Rat(D,R) may fail to hold in two ways: certain contexts may not
support the use of intuitions for D, or D may be wholly inappropriate for the
employment of intuitions. The use of a compass provides a good analogy to
illustrate both types of failure. [sentence deleted] When working properly, a compass
is a good indicator of magnetic north. In an area containing large amounts of ferrous
material, though, the device ceases to function effectively. There is an external factor
that prohibits the supposed indicator relationship from functioning properly. So,
although compasses are ratified for D (determining magnetic north) under some c,
they are not ratified under other c (in areas with large amounts of ferrous material).
The same is true of intuitions in cases where we have reason to doubt that ratification is
successful. Furthermore, although a compass is useful for determining direction, it
is useless when employed to determine one's velocity, altitude, or zip code.
Analogously, intuitions are relevant only for certain domains.
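Put formally (the line below is simply the classical negation of the formula given above, not a further assumption), a failure of ratification amounts to there being at least one proposition in the domain that is intuited in some admissible context and yet false:
~Rat(D,R): (∃p ∈ D)(∃c ∈ R)[I(p,c) & ~p]
The first kind of failure is one that would disappear if the set of contexts R were suitably restricted; the second persists no matter which contexts are chosen.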
Where does the epistemic conditional leave us? Given the widespread use of
intuitions in various branches of philosophy, it would appear that there are relatively few
or only isolated cases where we have evidence that ~Rat(D,R) holds. Unfortunately,
closer examination reveals a widespread failure of our intuitions in many of the
circumstances in which we are most likely to employ them.
The history of philosophy presents a wealth of intuitions gone bad; one example
is Kant’s claim, early in the Critique of Pure Reason, that space is necessarily Euclidean.
Another is his notion of analytic necessity with regard to the properties of objects. To claim that gold is necessarily a yellow metal is to adopt an intuition now recognized to be
faulty. Leibniz based a good deal of his work on the principle of sufficient reason,
requiring the grounding of contingent facts in necessities. This intuition served as a
breeding ground for critics, Voltaire’s Candide not the least among them. More
generally, the role that intuitions concerning God played for rationalist thinkers,
including such figures as Descartes and Berkeley, has subsequently been dismissed.
In addition, current work in cognitive science continues to demonstrate an
inherent tendency towards false intuitions. Tversky and Kahneman have been reporting
results since the 1970s which demonstrate a trend towards false intuitions on the basis of
availability. In some experiments, subjects were asked whether there were more words in
the English language that began with the letter ‘k’ or more words which had ‘k’ as their
third letter. Although the first answer is objectively false, most subjects chose it. It is
speculated that this is because subjects can more readily produce a list of words that
start with ‘k.’ In another experiment, subjects were given a list of names and asked
whether there were more male or female names on the list. The lists were designed such that either all of the male or all of the female names were famous, and the famous names were fewer in number than those of the opposite sex. Despite the fact that they were in the minority, subjects chose the sex made up of the famous names over the unknowns.[1]
The Wason selection task provides another striking demonstration of the failure of
intuition. First published in 1966,[2] the Wason selection task requires that subjects choose
which cards to turn over in order to test a rule. Subjects are informed that each card has a
single letter on one side and a single number on the other, then provided with a rule of
the sort “If there is an A on one side of the card, then there is a 3 on the other side of the
card”. In the original experiment, the subjects were shown four cards; on their faces were
an ‘A’, a ‘D’, a ‘3’ and a ‘7’. To properly test the rule, the subject should turn over the
‘A’ card (to ensure that there is a ‘3’ on the other side—Modus Ponens) and the ‘7’ card
(to ensure that there is not an ‘A’ on its other side—Modus Tollens). A large majority,
however, select either only the ‘A’ card, or the ‘A’ and the ‘3’ cards. While the specific
nature of the bias that leads to this failure is still under debate (Wason’s task is one of the
most widely studied and duplicated experiments in history), it is clear that intuitions are,
at some point, going awry.[3]
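To spell the logic out (this is just the standard analysis of the task, not an additional experimental claim): with the rule symbolized as A(x) → 3(x) for each card x, the 'A' card tests the rule directly, while the '7' card tests its contrapositive,
A(x) → 3(x)   is equivalent to   ~3(x) → ~A(x),
so turning the '3' card cannot produce a counterexample, since the rule says nothing about what must appear on the other side of a '3'.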
The previous examples indicate a problem with intuitions concerning language
and logic, but there is also work which suggests problems in employing epistemological
intuitions. In a series of experiments, “_________________” tested the intuitions of
various groups on questions of relevance to current epistemology. Two primary
hypotheses were tested: (1) epistemic intuitions vary from culture to culture; and (2)
epistemic intuitions vary from one socioeconomic group to another. The first set of cases
involved Truetemp questions, which concern an agent who has a reliable process for
determining the current temperature where he is but does not know that he possesses this
process. Subjects were asked to evaluate whether the agent truly knows the temperature
or only believes it. The results from such questions show a highly significant difference
between European American subjects and East Asian subjects, with East Asian subjects
being much less willing to attribute knowledge. This pattern is reversed when subjects
are presented with Gettier cases, which involve an agent who has good but ultimately faulty evidence for a true belief. In these cases, East Asian subjects are much more likely to
attribute knowledge than their Western counterparts. This difference is even more
pronounced between Western subjects and those from the Indian sub-continent. Further
experimentation showed similar discrepancies between different socioeconomic groups.
These examples differ from the examples above in that they do not clearly
indicate a failure of intuitions. They do, however, indicate an inconsistency in intuitions
amongst different groups. At the very least, this suggests that the set of contexts
examined when attempting to ratify intuitions for a domain of discourse should be
relativized according to ethnicity and socioeconomic background, which poses a
significant problem for anyone trying to employ intuitions in the development of a
universal set of normative epistemological principles.
In his essay “What Good are Counterexamples?”, Brian Weatherson provides
many more examples of faulty intuitions. Based on the number and variety of cases in
which our intuitions are shown to go wrong, it is apparent that this problem is not
localized, or limited to certain types of intuitions. Indeed, Weatherson provides
examples from many of the areas most significant to philosophers. From logic, we have
Frege’s Axiom V and the striking and persistent failure rates from experiments involving
the Wason selection task, discussed above. He also points to broad issues with
probabilistic reasoning, including further results from Tversky and Kahneman indicating
a tendency in people to judge the probability of a conjoined pair of events to be higher than the probability of one of those events occurring on its own. Weatherson also provides
examples from moral reasoning, highlighting the fact that for thousands of years slavery was widely regarded as a morally acceptable practice. Finally, he discusses mistaken conceptual intuitions,
including the idea that whales are fish.[4]
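The probabilistic error noted above can be stated precisely (the inequality is elementary probability theory rather than anything specific to Weatherson's discussion): for any events A and B,
P(A & B) ≤ P(A),
so the intuition that a conjunction is more probable than one of its conjuncts cannot be correct.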
Given all of this evidence that the epistemic conditional applies to many of the
areas in which philosophers rely on intuitions the most, it seems that we are in a dire spot
indeed. But perhaps we can ‘tweak’ or calibrate our intuitions, by identifying where
Rat(D,R) holds, so as to avoid these problems – we investigate such a possibility in the
next section of this paper.
Notes
1. These experiments are summarized in Jonathan St. B. T. Evans, Bias in Human Reasoning (1989).
2. Wason, P. C., "Reasoning," in B. M. Foss (Ed.), New Horizons in Psychology I (1966).
3. This material is also covered in depth in Bias in Human Reasoning.
4. Weatherson, "What Good are Counterexamples?", pp. 2-4.