Naturalized Truth and
Plantinga’s Argument against Naturalism
Feng Ye
I argue that Plantinga’s evolutionary argument against naturalism contains
serious difficulties. His arguments for constructing defeaters of naturalism
presuppose an anti-materialistic notion of content or truth. They cannot show that
naturalism is self-defeating. Moreover, a more subtle way of naturalizing content
can avoid his criticisms of naturalizing content and demonstrate that
adaptiveness confers a high probability to naturalized truth, at least for simple
beliefs about physical things in human environments. This contradicts a
consequence of his argument. On the other hand, I suggest that there might be a similar argument against naturalism, one targeting the reliability of scientific methods but not affected by these difficulties in Plantinga's evolutionary argument.
1. Introduction
Let N be naturalism, let E be the current naturalistic evolutionary theory, namely,
evolution without divine guidance, and let R be the proposition that the human innate cognitive mechanism selected by naturalistic evolution is reliable. Plantinga argues that1
(1) The conditional probability P(R|N&E) is low.
(2) Given (1), N&E has a defeater, and that defeater cannot be defeated again.
Plantinga considers some potential ways to defeat the defeater of N&E because of (1) by
accepting another hypothesis. First, let RM be reductive materialism, which implies that
the content of a belief supervenes on neural-environmental states. If P(R|RM&N&E) is
not low and naturalists can accept RM, then this defeats the defeater of N&E because of
(1). Therefore, Plantinga also argues for
(3) The conditional probability P(R|RM&N&E) is low.
Second, let NC be any naturalistic theory of content. Similarly, if P(R|NC&RM&N&E) is
not low and it is admissible for naturalists to accept NC&RM, then this too defeats the defeater of N&E. This time, Plantinga argues that
(4) The naturalistic theories of content by Dretske, Millikan, functionalism, and
others are not admissible to naturalism.
Note that materialists can also hold eliminativism regarding belief and content. In a
footnote, Plantinga explains that he is concerned with whether the belief N&E is rational,
and therefore he will not consider the idea that there are no such things as beliefs.2
However, materialists who hold eliminativism do not have to say that they have this
belief N&E. According to eliminativism, there are only neural circuitries in brains and
there are no such things as beliefs or content. When a brain allegedly lures another brain
into believing naturalism, what really happens is only that the neural circuitries in the
former control the mouth to produce sounds and cause the latter to be in some neural state.
This has nothing to do with ‘beliefs’. Therefore, to be fair, we have to say that
Plantinga’s argument actually takes the following as a premise:
(5) The notions of belief, content, and truth are indispensable for any account of
human cognitive activities.
In Section 2, I will analyze Plantinga’s arguments for (1), (3), and (4). I will first
argue that Plantinga’s arguments for (1) and (3) cannot show that naturalism or reductive
materialism is self-defeating, because the arguments presuppose an anti-materialistic
notion of content or truth, which all materialists will naturally reject. I will point out that
if Plantinga can successfully argue for both (4) and (5), which means that everyone has to
accept that notion of content or truth, then he refutes naturalism already, and the idea in
his evolutionary argument becomes redundant and trivial. Nonetheless, I will argue that
his argument for (4) is not successful either. A more subtle way of naturalizing content is
not affected by his criticisms. Then, in Section 3, I will briefly explain this way of
naturalizing content and explain how it implies that adaptiveness confers a high
probability to naturalized truth, at least for simple beliefs about physical things in human
environments. This means that the human cognitive mechanism selected in evolution for its adaptiveness is reliable at least for producing simple true beliefs. This will be sufficient to
show that Plantinga’s arguments for (1) and (3) are invalid, since Plantinga’s arguments
do not discriminate between simple and complex beliefs. Finally, in Section 4, I will
discuss the question whether we can construct a similar argument against naturalism,
targeting not the reliability of human innate cognitive mechanism selected by evolution,
but the reliability of scientific methods. I will argue that the question is still open and the
difficulties in Plantinga’s evolutionary argument do not prevent such an argument. If
such an argument is possible, it will be less counter-intuitive, although its conclusion
might be weaker.
2. Analyzing Plantinga’s Arguments
It seems that Plantinga’s ultimate reason for defending (1) and (3) is Indifference of
Adaptiveness to Truth:
If naturalism is true, the adaptiveness of a belief (or the behaviors caused
by the belief) is indifferent to the truth or falsehood of the belief.
He repeatedly asserts this thesis in his arguments. It implies that beliefs selected in
naturalistic evolution by their adaptiveness have random truth-values. Then, as long as
there are sufficiently many probabilistically independent beliefs, by the law of large
numbers, the probability of the event ‘at least 75% of the beliefs selected by evolution are
true’ will be very low. This then implies that the probability of R is very low.
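For illustration (the specific numbers are mine and serve only to make the reasoning concrete), suppose there are n = 1000 probabilistically independent beliefs and, by Indifference of Adaptiveness to Truth, each has probability 1/2 of being true. Then
\[
P(\text{at least 75\% of the beliefs are true}) \;=\; \sum_{k=750}^{1000} \binom{1000}{k}\Big(\frac{1}{2}\Big)^{1000},
\]
which is astronomically small (well below 10^-50), so on these assumptions P(R|N&E) would indeed be very low.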
2.1 Plantinga’s argument for (1)
In his argument for (1), Plantinga derives Indifference of Adaptiveness to Truth from
semantic epiphenomenalism. Plantinga agrees that naturalism implies materialism, and materialism implies that a belief is a neural structure and that a belief causes behaviors by virtue of its neurophysiological properties (NP properties). Let us assume that a neural
structure can somehow possess content.
(Q1)3 ‘It is easy to see how beliefs thus considered can enter the causal chain leading to
behavior; … It is exceedingly difficult to see, however, how they can enter that
chain by virtue of their content; a given belief, it seems, would have had the
same causal impact on behavior if it had had the same NP properties, but
different content.’4
Therefore, Plantinga holds Indifference of Adaptiveness to Content:
If naturalism is true, the adaptiveness of a belief (or the behaviors caused
by the belief) is indifferent to the content of the belief.
This then implies Indifference of Adaptiveness to Truth, for the content of a belief
determines its truth-value.
Now, what notion of content does Plantinga assume in (Q1)? It cannot be any
materialistic notion of content. For materialists, the state of the material world should
uniquely determine every property of the world, including the content of a neural
structure in the world, if there is such a property (of having a specific content). The
(broad) content of a belief may depend on environmental states. Therefore, the content of
a neural structure should be uniquely determined by the neural structure and a
corresponding environmental state. Then, the same neural structure and environmental
state could not have had a different content, given materialism. The content of a belief
can affect the adaptiveness of the belief, because the neural structure of the belief and the
environmental state, which together uniquely determine the content, can affect the adaptiveness
of the belief. One might think that the content of a belief is an abstract entity, such as a
proposition; it is not a material thing and does not have any causal power. This is true,
but still a belief can be adaptive or maladaptive in an environment by virtue of its content,
which means that the belief can be adaptive or maladaptive by virtue of its neural
structure relative to the environment, which together uniquely determine the content of
the belief. Therefore, (Q1) seems to be wrong if it means content in the materialistic
sense, assuming that there is such materialistic content. Instead, Plantinga seems to be
assuming in (Q1) that the content of a belief can exist independently of all material things
and their states, and only an immaterial mind can somehow grasp content and freely
assign content to a belief as a sentence or neural structure. Then, the same neural
structure (and environmental state) can somehow have a different content. Materialists
will naturally reject this notion of content.
One may contend that, in order to derive Indifference of Adaptiveness to Content, we
only need to assume that the content of a neural structure is independent of the
adaptiveness of the neural structure, and we do not need to assume that it is completely
independent of all material things and their states. However, if the content of a neural
structure is not independent of all material things and their states, what reasons do we
have to assert that adaptiveness is indifferent to content? A belief as a neural structure
comes into existence in evolution because of its adaptiveness. We have good reasons to
think that any natural property of that neural structure relative to its environment is
related to the adaptiveness of that neural structure in the environment. Now, if the content
of the belief is a property uniquely determined by the neural structure in a corresponding
environment, we have a prima facie reason to think that content is related to adaptiveness
in some way. Admittedly, without a real materialistic theory of content, it cannot be clear
how content and adaptiveness are related. However, my point here is merely that
Indifference of Adaptiveness to Content is far from obvious if the notion of content there
is a materialistic notion. At least it requires a separate argument and Plantinga never
offers any. Plantinga does consider the possibility that the content of a belief supervenes
on neural-environmental states when he argues for (3) and he admits there that content
can be adaptive or maladaptive in that case. However, we can come back to ask what notion of content he assumes when he argues for (1). His later consideration actually corroborates the judgment that, when arguing for (1), Plantinga presupposes an anti-materialistic notion of content, which all materialists will naturally reject. This means that
he has not yet found any defeater of naturalism at this stage.
One might think that this is unfair to Plantinga. There has not been any widely
agreed materialistic notion of content and Plantinga has to rely on our intuitive notion of
content in his arguments. Plantinga cannot be required to invent a materialistic theory of
content first and then argue against materialism based on that materialistic notion of
content. This is true. However, even without a clear materialistic notion of content, we
can still ask if the intuitive notion of content assumed by Plantinga contains any elements
that are clearly anti-materialistic and will be naturally rejected by all materialists. The
answer seems to be yes, which makes his argument for (1) irrelevant for showing that
materialism is self-defeating.
Of course, Plantinga can try to argue that the notion of content is indispensable for
any account of human cognitive activities and materialism just cannot accommodate the
notion of content. That is, he can try to argue for both (4) and (5). However, if he can do
this, then he refutes materialism already. After that, the idea in this evolutionary
argument becomes trivial, because the essence of the problem is that content is not
completely determined by material things and their properties. Then, naturalistic
evolution alone, which can affect material things only, cannot produce any content.
Something else is required. For instance, perhaps there has to be an immaterial mind that
can conceive of content and assign content to a neural structure (or sentence). Then, the
idea of the evolutionary argument is that if naturalistic evolution alone had produced any
content, it would have produced content arbitrarily with random truth-values. The
premises of the argument explicitly contradict materialism already. The further reasoning
is correct, but it adds nothing essentially new for refuting materialism.
2.2 Plantinga’s argument for (3)
Here, Plantinga considers the materialist claim RM that the content of a belief
supervenes on neural-environmental states. This implies that content can be adaptive or
maladaptive and contradicts Indifference of Adaptiveness to Content. Plantinga presents
this as an effort by materialists to deflect the defeater of N&E because of (1). If my
analysis above is correct, this is not a faithful characterization of the situation. Naturalism
has to say something about content anyway. Either it will be eliminativism, or it will be
reductive materialism, or …. It seems that when Plantinga argues for (1), his specification
of naturalism is incomplete regarding what content is, and he supplements it with an anti-materialistic notion of content that materialists will naturally deny. RM is actually one
way to complete naturalism regarding the notion of content, accepted by some
materialists. Without any such completion, one arguing for (1) is naturally tempted to
assume a non-naturalistic notion of content as its basis. Of course, this is not an essential
problem for Plantinga. If his argument for (3) is successful, he can replace (1) by (3) and
get a defeater of RM as a more concrete version of naturalism.
In arguing for (3), Plantinga directly asserts Indifference of Adaptiveness to Truth.
(Q2) ‘[C]ontent enters the causal chain leading to behavior, but not in such a way that
its truth or falsehood bears on the adaptive character of the belief.’5
However, there is the same problem as before, although this time it is about the notion of
truth, not the notion of content. What notion of truth is assumed in (Q2)? What reasons
do we have for claiming that adaptiveness is indifferent to truth? Again, it cannot be any
materialistic notion. Truth is a property of content. If the content of a belief supervenes
on neural-environmental states, then truth has to supervene on neural-environmental
states as well. In other words, truth should be a relation between neural structures (as
beliefs) and corresponding environments, according to materialism. Now, adaptiveness is
also a relation between neural structures and corresponding environments, and those
neural structures (as beliefs) come into existence because of their adaptiveness. Therefore,
we similarly have good reasons to suspect that truth is related to adaptiveness in some
way. Again, without a materialistic theory of truth, this cannot be clear, and my point is
merely that (Q2) or Indifference of Adaptiveness to Truth is not obvious if the notion of
truth there is a materialistic notion. Plantinga does not give any further argument to
support (Q2) either. Instead, he again appears to be assuming an essentially anti-materialistic notion of truth in (Q2). Since a materialistic notion of content demands a
materialistic notion of truth, this means that Plantinga implicitly assumes an essentially
anti-materialistic notion of content in (Q2) as well. In other words, ‘content’ at the
beginning of (Q2) does mean a materialistic property of a neural structure relative to its
environment, but when he claims that adaptiveness is indifferent to truth, the bearer of
truth still seems to be content in an intuitive and anti-materialistic sense, that is, content
in (Q1). Similarly, Plantinga does consider teleosemantics when he argues for (4), and
teleosemantics holds exactly that truth is closely related to adaptiveness. Therefore, if we
come back to ask what notion of truth is assumed in (Q2), we may have to conclude that it
is essentially anti-materialistic.
Again, it is not fair to require Plantinga to invent a materialistic theory of truth first
and then argue that materialism implies that human cognitive mechanism is likely
unreliable for producing beliefs true in that materialistic sense. However, if Plantinga’s
argument is supposed to reveal any internal problem of materialism, it should not assume
any straightforwardly anti-materialistic notion. Plantinga can try to argue that the notion of truth
is indispensable and materialism cannot account for the notion of truth. Then, that will be
another argument against materialism. With that argument available, the idea in this
evolutionary argument will become trivial, because the real point against materialism is
then that truth is not completely determined by material things, and therefore naturalistic
evolution alone, which affects material things only, cannot select true beliefs against false
beliefs. This, I believe, is actually the intuition behind Plantinga’s argument. However,
while this is obvious for people who loathe materialism, it does not reveal any internal
problem of materialism.
2.3 The notions of belief, content, and truth in Plantinga’s arguments
That Plantinga implicitly assumes an anti-materialistic notion of content and truth
can also be seen from his response to a commonsense objection to Indifference of
Adaptiveness to Truth. Ramsey, Fales, and Draper, among others, raised this objection.6
Draper uses an example in which Draper wants to take a bath and there is an alligator in
Draper’s tub. Then, true beliefs such as ‘there is an alligator in my tub and alligators are
dangerous’ will save Draper’s life, but false beliefs such as ‘there is nothing in my tub’
will cause maladaptive behaviors. In other words, a belief may state something about the
environment. When a belief is true, it means that the environment is in some special state,
which may affect the adaptiveness of the behavior induced by the belief. Therefore, it
appears that adaptiveness is not indifferent to truth. This is the commonsense view that
knowledge is power.
Plantinga's response to this kind of objection is that the objectors conflate beliefs
with ‘indicator representations’.7 An indicator is a neural structure that represents other
states by causal or other connections as a matter of natural regularity. It is neither true nor
false. For Draper's example of the alligator in his tub,
(Q3) ‘Suppose m holds false beliefs, believing at the time in question that the
alligator is a mermaid, or even that he’s sitting under a tree eating
mangoes. Will that adversely affect his fitness? Not just by itself. Not if
m has indicators and other neural structures that send the right messages
that cause his muscles …. It’s having the right neurophysiology and the
right muscular activity that counts.’8
Therefore, a neural structure as indicator causes Draper to jump away upon seeing an
alligator and it does not matter what beliefs Draper has at that moment. This means that
Plantinga actually accepts the following, Indifference of Adaptiveness to Beliefs:
If naturalism is true, the adaptiveness of a neural structure that causes
behaviors is indifferent to whatever beliefs one has.
This appears to be stronger than Indifference of Adaptiveness to Content or to Truth. If
adaptiveness is indifferent to whatever beliefs one has, it is certainly indifferent to
whatever content or truth-values those beliefs have.
However, this generates a puzzle. Yes, it is having the right neurophysiology that
counts, but under reductive materialism RM, beliefs are just neural structures. Then, (Q3)
appears to imply that adaptiveness is indifferent to whatever neural structures one has,
which is absurd. Otherwise, it might be saying that those beliefs as neural structures in
Draper’s brain do not cause any of Draper’s behaviors, but some other neural structures
as indicators are causing Draper to jump away, which is also absurd under reductive
materialism. A natural interpretation is that Plantinga is never serious about the reductive
materialistic claim that beliefs are neural structures and the content of a belief supervenes
on neural-environmental states. He still understands beliefs in the intuitive and non-materialistic sense, as things that are not material and that only an immaterial mind can
conceive and entertain. Then, (Q3) can mean that a neural structure is causing the
adaptive behavior of jumping away, but evolution is indifferent to which belief (in the
non-materialistic sense) that neural structure ‘means’, or which belief (as meaning) the
mind assigns to that neural structure, or whatever beliefs the mind is entertaining
independently of any neural structures in the brain at the moment. This sounds very
natural to anyone who accepts that non-materialistic notion of belief, but materialists will
naturally reject it.
2.4 Plantinga’s argument for (4)
Now, Plantinga does consider naturalistic theories of content and truth in his latest
writings on the subject.9 Plantinga again takes this to be an effort by naturalists to deflect
the defeater of naturalism because of (1). However, these naturalistic theories of content
and truth are again just several potential ways to complete naturalism as a worldview,
regarding what naturalism should say about content and truth. Therefore, the issue here is
not about whether materialists can deflect a known defeater of materialism. It is about
whether materialism can account for content and truth at all. I have mentioned that if
Plantinga can successfully argue that the notions of content and truth are indispensable
and materialism cannot account for content and truth, then he actually has another
argument against materialism. However, Plantinga's criticisms of the naturalistic theories
of content are not successful either.
Plantinga discusses three naturalistic theories of content. I will discuss his criticisms
of teleosemantics only. Plantinga gives two major objections to teleosemantics.
First, teleosemantics tries to explain the content of a belief by referring to biologically
normal responses to tokening the belief in a normal environment, and referring to the
conditions of the environment which make those responses realize a proper biological
function of the belief and serve the biological purpose (of preserving genes). However,
many complex beliefs, including the belief of naturalism, do not seem to have any
biologically normal responses associated, and it is hard to imagine what could be the
biological function of those beliefs. Second, for a belief to have content, teleosemantics
requires that it carry information about the environment. Then, universally true and
universally false beliefs, including the belief of naturalism again, do not have content,
because they do not carry any specific information about the environment.
The first objection is based on the assumption that the teleological account of content
should apply to all beliefs, including complex beliefs about things and states of affairs
that are never relevant to human evolution. This makes teleosemantics sound absurd and I
suppose no supporters of teleosemantics intend to hold that (although some of their
writings may give that impression). No one would hold that the content of the belief
‘there was the Big Bang’ or ‘I have an even number of hairs’ is to be characterized by
referring to biologically normal responses to tokening the belief in biologically normal
environments and referring to the condition that makes those responses realize a proper
biological function of the belief. Intuitively, these beliefs do indicate specific conditions of the universe or the environment, but obviously, those conditions have nothing to do
with human evolution, and the beliefs have no specific biological functions or normal
biological responses. Teleosemantic approaches to naturalizing content do not have to
assume that the same kind of teleological account of content must apply to all beliefs. If
we artificially construct a Boolean combination of a hundred atomic sentences, then the
content of that composite sentence as a belief should clearly be characterized
compositionally, based on the content of its atomic components and the content of logical
constants. Then, when we consider an ordinary logically composite belief, obviously we
should also characterize its content compositionally, as long as that characterization is
available. Very naturally, the teleological account of content should apply to very simple
and primitive semantic representations only, and the content of a complex composite
representation should be characterized structurally and compositionally. Then,
Plantinga’s objection misses the target. This answer applies to Plantinga’s second
objection as well, because we can expect that the teleological account will apply only to
representations that do indicate specific information about human environments, that is,
representations that can make a difference to human behaviors and their survival.
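To display the compositional point schematically (the clauses below are my own gloss, not a formula taken from Plantinga or from any teleosemantic author): once the contents of the primitive representations are fixed teleologically, the truth-conditions of Boolean combinations follow the familiar recursive pattern,
\[
\mathrm{True}(\varphi \wedge \psi) \leftrightarrow \mathrm{True}(\varphi) \text{ and } \mathrm{True}(\psi), \qquad \mathrm{True}(\neg\varphi) \leftrightarrow \text{not } \mathrm{True}(\varphi),
\]
and similarly for the other logical constants, so a composite belief built from a hundred atomic sentences inherits its content and truth-conditions from its atomic components and the logical constants, without any mention of a biological function of the composite itself.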
I will give more details regarding this teleological-structural approach to naturalizing
content in the next section. Here I just want to add that Plantinga’s criticisms did not
challenge the core idea of teleosemantics. I suppose the core idea is the following. A
neural structure is constantly associated with some environmental feature as a matter of
natural regularity, but it allows exceptions. These exceptions become semantic
misrepresentations, and therefore semantic normativity emerges, when we consider the
proper biological function of the neural structure. This then reduces semantic normativity
to biological normativity and transforms an indicator into a semantic representation with
the possibility of misrepresentation. When applied to very simple representations, this
idea is intuitively appealing. For instance, imagine that a neural structure is constantly
associated with a spatial location relative to one’s body and reachable by hands, in the
sense that it is constantly generated by seeing something at that location and it
participates in controlling the hands to stretch out and grasp something at that location.
There can be exceptions to these constant associations because of a biological
malfunction of the vision, hands, or brain, or because of a biological abnormality of the
environment. For instance, sometimes there is an illusion caused by some abnormal
environmental condition, or sometimes the brain is influenced by alcohol and it cannot
control the hand properly. Therefore, what spatial location that neural structure really
represents (i.e. semantically represents) should be determined by the situations in which
everything is biologically normal. That is the location accessed by the hands, controlled by
the neural structure activated by seeing something at the location, when the brain, body,
and environment are all biologically normal. This makes that neural structure a semantic
representation of a spatial location relative to one’s body with the possibility of
misrepresentation. Similarly, there may be constant associations between a color and a
neural state with exceptions, and what color that neural state semantically represents
should also be determined by biologically normal situations. For these extremely simple
representations, it is intuitively reasonable to assume that the semantic norm coincides
with the biological norm. If you look into a brain and try to find out which color a neural
structure semantically represents (with the possibility of misrepresentation), it seems
natural to look for the color that regularly activates that neural structure in situations that
are biologically normal for the brain and take aberrations that occur in abnormal
situations as misrepresentations. Otherwise, you may have to assume that there is an
immaterial mind behind the brain that can ‘intend’ to represent a color but fail.
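The core idea just illustrated can be summarized schematically (the schema is my own rough summary, not a quotation from any teleosemantic theory): for a very simple representation S,
\[
\mathrm{content}(S) = C \quad\text{iff}\quad \text{in biologically normal conditions, tokens of } S \text{ are caused by } C \text{ and guide behavior suited to } C,
\]
and tokenings of S in situations where C fails to obtain count as misrepresentations.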
It certainly becomes funny when one applies a similar characterization of content to
a complex semantic representation, for instance, the belief ‘I have an even number of
hairs’. No environmental condition regularly stimulates that belief in biologically normal
situations, and the belief does not regularly participate in controlling any specific motor
actions in biologically normal situations. We certainly make semantic mistakes in
biologically completely normal situations for such complex representations. That is, it is
obvious that the semantic norm does not coincide with the biological norm for such
complex representations. A challenge for naturalizing content is just to explain when and
how the semantic norm diverges from the biological norm. Perhaps current proponents of
teleosemantics have not met the challenge yet, but Plantinga’s criticisms do not really
affect the intuition favoring the core idea of teleosemantics explained above.
If the analysis in this section is correct, then Plantinga has found neither a defeater of materialism nor an insurmountable obstacle to a materialistic account of content and truth.
Plantinga’s argument does show that a half-hearted naturalism is incoherent. Here I mean
a position that tries to embrace naturalism but still wants to keep some intuitive notions
of belief, content, and truth that imply Indifference of Adaptiveness to Truth, Content, or
Belief. On the other hand, materialists still owe us a positive materialistic account of content
and truth. Otherwise, it is still possible to refute materialism by arguing for both (4) and
(5). Materialists also owe an account of how truth is related to adaptiveness. Otherwise, it
is still possible to argue for Indifference of Adaptiveness to Truth (based on a
materialistic notion of content and truth). I will discuss these issues in the next section.
3. Naturalized Truth and the Reliability of Human Cognitive Mechanism
In a few previous articles,10 I proposed a theory for naturalizing content and discussed the
connection between naturalized truth and adaptiveness. The basic idea is that a belief is a
complex semantic representation composed of other more primitive semantic
representations. What a belief semantically represents is determined by some structural
semantic rules and by what the primitive constituents of the belief represent. Some
human cognitive mechanism selected by evolution, together with natural regularities in
human environments, determines what the most primitive semantic representations
represent, and determines the structural semantic rules. These together determine what a
belief represents and what truth as a relation between beliefs and environments is. This
will imply that evolution eventually determines truth, but truth is not identical with adaptiveness, and semantic misrepresentations or errors can occur in biologically normal environments for biologically normal subjects. This adopts the core idea of teleosemantics,11 but it avoids Plantinga's criticisms of teleosemantics. In this section, I
will briefly introduce the theory and examine the reliability of human cognitive
mechanism based on the theory. So far, the theory covers only some simple concepts
representing physical things or their properties and some simple beliefs composed of
those concepts. Recall that Plantinga’s arguments do not discriminate between simple and
complex beliefs. If his arguments for (1) or (3) were valid, they would show that human
cognitive mechanism is likely unreliable even for producing simple perceptual beliefs
about physical objects in human environments. My goal is to show that, based on this
theory, human cognitive mechanism selected by evolution is at least reliable for
producing true simple beliefs about things and states of affairs relevant to human survival,
which will show that Plantinga’s arguments must have problems.
The theory assumes that a simple belief is a composition of concepts and a concept is
a composition of an even more primitive kind of representations called 'inner maps'. Since this is a materialistic theory, inner maps, concepts, and beliefs are supposed to be neural structures in brains. The theory treats broad content only. This means that the critical task is to
characterize the semantic representation relation between representations including inner
maps, concepts, and beliefs on the one side, and external things, their properties, or their
states of affairs on the other side. I will simply call the represented objects, properties, or
states of affairs the content of a representation. An inner map is essentially a perceptual
mental image in one’s memory, representing an object instance seen at a concrete
moment in one’s past experience. I will skip the details about inner maps.12
Concepts belong to a higher level in the hierarchy of representations. A basic
concept is a composition of inner maps, and basic concepts are in turn components of
composite concepts. A common type of concept is the essentialist concept. An essentialist
concept contains some inner maps as exemplars, representing object instances that one
encountered and to which one applied the concept before (in learning or using the concept). The
semantic rule for an essentialist concept says that an entity is represented by the concept
if it shares the same internal structure as those object instances represented by the
exemplars of the concept. For instance, if my concept DOG contains some exemplars
representing some dog instances that I encountered before (in learning the concept), then
an entity is represented by my concept DOG if it shares the same internal structure with
those instances. There are many details regarding the structure and content of concepts.
In particular, this theory integrates the summary feature list theory and the exemplar
theory of concepts by cognitive psychologists.13 I will skip them here.
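The semantic rule for essentialist concepts stated above can be put schematically as follows (the notation is mine: exemplars(C) are the inner maps contained in the concept C, and referent(e) is the object instance from which the exemplar e was formed):
\[
x \in \mathrm{content}(C) \;\leftrightarrow\; \exists\, e \in \mathrm{exemplars}(C):\ x \text{ shares the same internal structure as } \mathrm{referent}(e).
\]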
Thoughts are composed of concepts, and a belief is a thought that one puts in one’s
‘belief box’. The theory so far considers thoughts in simple formats only. For instance,
‘dogs bark’ expresses a thought consisting of two concepts DOG and BARK, which I
will denote as <DOGs BARK>. Similarly, in the sentence ‘that is a dog’, the
demonstrative ‘that’ actually expresses a singular concept (denoted by THAT) that
represents an object located at some location relative to one’s body, for instance, the
focus point of one’s eyes. Then, the sentence expresses a thought <THAT is a DOG>
consisting of two concepts in that person’s brain. Thoughts are truth bearers. The
semantic representation relation for thoughts is determined by the semantic representation
relation for concepts as constituents of thoughts and by the composition patterns for
composing the thoughts (out of constituent concepts). For instance, the thought <DOGs
BARK> is true if the content of DOG is contained in the content of BARK. Therefore,
this is a correspondence theory of truth with the naturalized semantic representation
relation as the correspondence relation.
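Schematically (this is my shorthand for the rules just stated; the clause for <THAT is a DOG> is a natural extension of the stated rule for <DOGs BARK>, not a quotation):
\[
\langle\mathrm{DOGs\ BARK}\rangle \text{ is true} \;\leftrightarrow\; \mathrm{content}(\mathrm{DOG}) \subseteq \mathrm{content}(\mathrm{BARK}),
\]
\[
\langle\mathrm{THAT\ is\ a\ DOG}\rangle \text{ is true} \;\leftrightarrow\; \text{the object represented by THAT} \in \mathrm{content}(\mathrm{DOG}).
\]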
Now, consider the reliability of human cognitive mechanism for producing simple
true beliefs in a format like <THAT is a RABBIT>. Evolution selects several things
simultaneously. It determines a class of entities as a natural class relative to human
cognitive mechanism, e.g. rabbits. It simultaneously selects an innate human cognitive
mechanism that can recognize entities belonging to that natural class consistently. It also
simultaneously determines a class of environments in which the human cognitive
mechanism can recognize that natural class stably. I will call these optimal environments
(for recognizing that natural class). In other words, human cognitive mechanism is
adapted to nature so that physical objects in optimal environments with similar
appearances, relative to human pattern recognition mechanism, mostly have the same
internal structure and form a natural class. This is the basis for developing essentialist
concepts in human brains. The evolutionary value of such a cognitive mechanism adapted to natural classes is obvious. It allows our ancestors to reliably recognize an object as the same kind of thing as those edible rabbits that one hunted and ate before. It helps with food gathering, hunting, fleeing from predators, and so on. Evolution also selects a human
cognitive mechanism that keeps its perceptual memories of rabbits encountered in
optimal environments as the constituents of the essentialist concept RABBIT. Here we
assume that visual images that are included in one’s concept RABBIT are clear images of
rabbits that one gets in optimal environments for seeing rabbits. Unclear visual images
obtained in non-optimal environments, such as an image of a rabbit in the bushes, will
not be included in one's concept. This strategy has adaptive value as well. An optimal
environment for seeing something is the environment in which one can identify that thing
more stably. This strategy allows human concepts to represent things in optimal
environments more stably, which then allows adaptive behaviors associated with the
concepts to develop. See below.
Then, when one sees a new rabbit instance in an optimal environment, one’s
cognitive mechanism performs visual pattern recognition. It compares the new visual
image received with the visual images in memory and decides that the new visual image
is similar to some of the visual images that are constituents of one’s concept RABBIT.
This causes one to accept the thought <THAT is a RABBIT> as one’s belief. The
reliability of human cognitive mechanism here is guaranteed by regularity in nature and
by the fact that human cognitive mechanism has adapted to natural regularity. That is, an
object in optimal environments with a similar appearance as those rabbit instances in
one’s memory, relative to the human pattern recognition mechanism, mostly has the same
internal structure as those rabbit instances. That is, it has a high probability to be a rabbit.
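The recognition process just described can be pictured with a toy sketch (this is entirely my own illustration under simplifying assumptions, such as representing images by small feature vectors; it is not part of the theory's apparatus): exemplars accrued in optimal environments are stored under a concept, and a new image similar enough to some stored exemplar triggers acceptance of the corresponding belief.

# Toy sketch (my own illustration with made-up names) of exemplar-based recognition:
# a concept stores perceptual exemplars accrued in optimal environments; a new visual
# image that is similar enough to some exemplar causes the subject to accept the
# belief <THAT is a RABBIT>.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Concept:
    name: str
    exemplars: List[List[int]] = field(default_factory=list)  # stored images as toy feature vectors

def similarity(image_a: List[int], image_b: List[int]) -> float:
    """Stand-in for the innate pattern-recognition mechanism: fraction of matching features."""
    matches = sum(1 for a, b in zip(image_a, image_b) if a == b)
    return matches / max(len(image_a), 1)

def recognize(new_image: List[int], concept: Concept, threshold: float = 0.8) -> bool:
    """True if the new image is close enough to some stored exemplar of the concept."""
    return any(similarity(new_image, e) >= threshold for e in concept.exemplars)

# Exemplars of RABBIT gathered in optimal environments; a new sighting is then classified.
RABBIT = Concept("RABBIT", exemplars=[[1, 1, 0, 1, 0], [1, 1, 0, 1, 1]])
new_sighting = [1, 1, 0, 1, 0]
if recognize(new_sighting, RABBIT):
    accepted_belief = "<THAT is a RABBIT>"  # the thought placed in the 'belief box'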
Besides, these memories of rabbits are also associated with memories of past
adaptive behaviors in dealing with rabbits, for instance, hunting, roasting, and eating
rabbits. This means that the belief <THAT is a RABBIT> tends to cause the behavior of
hunting, roasting, and eating the object represented by the concept THAT. This
connection between the belief and the behaviors is selected for its adaptiveness in dealing
with rabbits in optimal environments in the past. The adaptiveness of this connection is also due to the fact that, in optimal environments, when one identifies something as an
instance of RABBIT and produces the belief <THAT is a RABBIT>, there is a high
probability that the object is a rabbit.
Then, in non-optimal environments, the cognitive processes leading to a belief like
<THAT is a RABBIT> are much more complex. They require a more sophisticated human
cognitive mechanism that can perform cognitive tasks such as deduction, induction,
abduction, and various kinds of heuristic reasoning or memory association. To see that
evolution also favors a more reliable cognitive mechanism here, let us consider two
scenarios for some primitive person X. In scenario 1, X sees something moving in the
bushes, which is actually a rabbit, and that causes X's brain to put a thought P (as a
neural structure) in its ‘belief box’ as a belief, which in turn causes X to chase that
moving object. In scenario 2, X similarly sees something moving in the bushes, which is
actually a tiger, and that causes the same neural state in X’s brain and also causes X to
chase that moving object. The behavior caused by the neural structure P is adaptive in
scenario 1 but maladaptive in scenario 2. Now, we must be careful in asserting that a
more reliable cognitive mechanism will be more likely to cause adaptive behaviors here,
because it depends on what the neural structure P ‘means’. For instance, one might insist
that P represents the state ‘that is a tiger’, and then a false belief in scenario 1 is adaptive
while a true belief in scenario 2 is maladaptive.
To clarify this, first note that, under materialism, what a neural structure P as a
thought represents is not decided by what a mind intends it to represent. It is decided by
the structure of P, the evolutionary or individual developmental history of P, and the
corresponding environmental states. For example, we can examine the structural
components of P, to see if it has the structure <THAT is a RABBIT>, in particular, if it
has the concept RABBIT (as a neural structure) as its component. Similarly, whether a
concept as a neural structure does represent rabbits (and therefore is RABBIT) is
determined by the structure and history of the concept and by the corresponding
environmental states. For example, we can examine if it contains visual memories
representing rabbits as exemplars. Note that it is also a fact whether a piece of visual
memory does represent rabbit instances that one encountered in the past, given human
innate pattern recognition mechanism, given the actual experiences of the brain, and
given the historical origin of that piece of memory. In general, as long as we agree that
the semantic representation relation between a thought and environmental states can be
naturalized, we already agree that it is a fact whether the thought P is <THAT is a
RABBIT>, or <THAT is a TIGER>, or any other thought. Second, recall that we assume
that the connection between the belief <THAT is a RABBIT> and the behaviors of
hunting, roasting, and eating is established because of the adaptiveness of these behaviors
targeting rabbits in optimal environments. We may assume that, in non-optimal
environments, the same belief <THAT is a RABBIT> also tends to cause the same
behaviors of hunting, roasting, and eating, and that these behaviors are adaptive only
when the targeted object is a rabbit. For instance, a pheasant will require a different
hunting skill, and a tiger will require fleeing. Then, given that in scenario 1 the object is a
rabbit, the belief <THAT is a RABBIT> together with the behaviors caused by it are
more adaptive than other beliefs such as <THAT is a PHEASANT> or <THAT is a
TIGER> (and the behaviors associated with them). This means that a true belief is more
likely to cause adaptive behaviors and therefore a more reliable mechanism for guessing
truths is more adaptive. Similarly, in scenario 2, given that the behavior there (i.e.
chasing) is not the behavior normally caused by the belief <THAT is a TIGER> in
optimal environments, it is more likely that X’s belief P in that scenario is not the belief
<THAT is a TIGER>. This means that a false belief is more likely to be maladaptive.
Given what truth is under this theory, it should not be surprising that a more reliable
mechanism for guessing truths is more adaptive. There is no magic here. First, the truth
of one’s belief <THAT is a RABBIT> in optimal environments has a high probability.
This is because the components of the concept RABBIT are accrued just in those optimal
environments. Therefore, those optimal environments determined the content of the
concept RABBIT. Second, the connection between the belief <THAT is a RABBIT> and
those adaptive behaviors dealing with rabbits is also established in those optimal
environments. Then, when moving to a new and non-optimal environment, the truth of
the belief <THAT is a RABBIT> means exactly that the new environment is similar to
those optimal environments in relevant aspects. That is, the relevant object belongs to the
same kind as those objects that will cause the belief in optimal environments. This then
means that the behavior that is associated with the belief and is adaptive in optimal
environments will also be adaptive in the new environment. That is how the truth of the
belief <THAT is a RABBIT> in a new environment also assures that the behavior caused
by the belief is adaptive. This implication from truth to adaptiveness is ‘nearly
tautological’ in some sense. However, one must also note that truth is not simply identical
with adaptiveness. In scenario 1 above, if the rabbit is contaminated with some dangerous
virus, then the behavior induced by the belief may turn out maladaptive in the end,
although the belief is still true. Such cases should not happen too often. Otherwise, the
behaviors of hunting, roasting, and eating rabbits would not be adaptive, and evolution would not have selected them as the biologically normal behaviors associated with the belief <THAT is
a RABBIT> in the first place. However, as long as it does happen occasionally, it is
sufficient to show that truth is not identical with adaptiveness.14
Finally, consider some potential objections implied in Plantinga’s writings on the
subject. First, some may object that inner maps, concepts, and beliefs described here are
merely indicator representations, not semantic representations. Here we must note that,
under materialism, whether something is a semantic representation is determined by the
natural characteristics of that thing, not by any subjective intention. A neural structure is
a semantic representation if it has some natural characteristics or it performs some
cognitive functions. First, it must be a representation. That is, there must be some regular
correlation between it and some other things, properties, or states of the environments.
Second, a major characteristic that differentiates between a semantic representation and
an indicator representation is that the former allows misrepresentations. There may be
other requirements for a semantic representation. I will not try to explore them here. In
naturalizing a semantic representation relation, we define a condition for a relation using
naturalistic terms, that is, without using intentional terms such as ‘mean’, ‘represent’, and
so on. We describe the environments in which that relation exists and the environments in
which that relation does not exist. The latter are the environments in which there is a
misrepresentation. We then compare that relation characterized by the naturalistic
condition with our intuitive understanding of the semantic representation relation, to see
if they fit each other. If yes, then we claim that that naturalistic condition does capture
that semantic representation relation, and that the natural relation characterized by the
condition is the semantic representation relation. Now, our description of the
representation relation for beliefs like <THAT is a RABBIT> does allow
misrepresentations and does seem to fit our intuitive understanding of truth. Therefore,
under materialism, we claim that it is the semantic representation relation between beliefs
and environmental states, that is, it is truth.
On the other hand, consider the neural structure P mentioned above. As an indicator representation, P will represent rabbits-in-bushes, tigers-in-bushes, and rabbits-in-optimal-environments, since all three scenarios can cause P. Therefore, this indicator
representation relation cannot discriminate between the cases where the indicator causes
adaptive behaviors and the cases where it causes maladaptive behaviors. That is exactly
why the semantic representation relation or truth plays an indispensable role in explaining
the adaptiveness of behaviors. Truth classifies that non-optimal environment into the
same kind as those optimal environments in which the connection between the belief and
the adaptive behaviors (of hunting, roasting, and eating) is established. Therefore, truth
explains the adaptiveness of the associated behaviors in that non-optimal environment.
The indicator representation relation alone cannot accomplish this.
Second, consider the well-known problem where a belief/desire pair with a false
belief and a harmful desire induces the same behavior as a pair with a true belief and a
healthy desire. Another way to put the problem is the following. Suppose that H is an
adaptive behavior in some environment. Then, for any desire D, the belief/desire pair <if
H then D, D> will cause the same behavior H, no matter if D is healthy or harmful, and
no matter if the belief ‘if H then D’ is true or false. Suppose that the way evolution
selects adaptive beliefs and desires is to select such pairs from a pool of many available
pairs. Then, there is no guarantee that evolution will tend to select true beliefs or healthy
desires. However, apparently evolution does not work that way. In fact, evolution does
not select individual beliefs (or desires or belief/desire pairs) directly. Evolution selects
genes that determine an innate human cognitive mechanism with various traits. In the
above, I suggest that this innate cognitive mechanism allows humans to identify natural
classes consistently in optimal environments and to develop concepts. Then, this also
determines the content of concepts and thoughts, and determines that a simple belief
produced by the mechanism is normally true in optimal environments. I have also tried to
show how evolution favors a cognitive mechanism that produces true simple beliefs in
non-optimal environments. I cannot claim that this must be an accurate description of
how things go in evolution, but it seems obvious that it never happens that some random
factors (like genetic mutations) produce belief-desire pairs randomly and then evolution
selects adaptive belief-desire pairs. Instead, at least for a simple belief like <THAT is a
RABBIT>, a belief is a result of some neural process controlled by an innate cognitive
mechanism reacting to some environmental condition, and this also determines the
content of the belief in some way. That is how truth is related to adaptiveness, although
they are not identical. Now, since the advent of human language, humans can indeed
produce complex beliefs such as ‘if H then D’ using language and they can consciously
consider many potential candidates of belief and desire. However, again, evolution does
not select these complex beliefs individually and directly based on their adaptiveness.
Complex beliefs are produced either by the same mechanism that produces simple true
beliefs reliably, or by some extension of it. I cannot discuss here how evolution still
favors reliable extensions. However, it seems at least clear that these examples of
belief/desire pairs with false beliefs do not affect the conclusion that, at an early stage,
before humans can form complex beliefs, evolution favors a human innate cognitive
mechanism that is reliable for producing true simple beliefs. Then, this is sufficient to
discredit Indifference of Adaptiveness to Truth (with this naturalized notion of truth).
Third, consider Plantinga’s example of a God-believer tribe whose people always
express their beliefs in the form ‘this apple-creature is good to eat’.15 The problem with
this example is that those people’s system of beliefs actually appears to be quite reliable
according to our intuitive understanding of what it is to be a reliable system of beliefs. To see this, let us dramatize the example. Suppose that people in that tribe have all the scientific
knowledge that we have, from contemporary fundamental physics to biology and
cognitive sciences. The only difference between them and us is merely that they always
use predicates such as ‘lepton-creature’, ‘black hole-creature’, and ‘DNA-creature’, and
they always express their beliefs using these predicates. Then, an atheist will naturally
think that people in that tribe have only one clearly false belief, that is, the belief that
everything is created by God, and except for that single belief, all their other beliefs are
actually the same as our scientific beliefs. Their cognitive mechanism is as reliable as
ours is, since that single false belief is quite insignificant (for an atheist). Counting the
number of beliefs can be tricky. Suppose that I state my beliefs as {A&B, A&C, A&D},
and suppose that A is false and B, C, and D are true. Do I have three false beliefs, or do I have one false belief and three true beliefs? It may be difficult to give a theory of the right way to count the number of true beliefs when evaluating reliability, but our
intuition seems to be that, in this example, those people’s cognitive mechanism is in fact
as reliable as ours is. My naturalistic theory of content and truth actually supports this
intuition. Although those people explicitly use predicates like ‘apple-creature’
exclusively, we still have reasons to believe that inside their brains, they have simple
concepts such as APPLE as well. Such simple concepts were developed in an early stage
of their evolution, and their innate cognitive mechanism developed in the early stage may
be the same as ours, and it is as reliable as ours is for producing simple beliefs. Their
predicate ‘apple-creature’ expresses a composite concept, although they invent a simple
name for it. Their notions of God, Creation, and apple-creature were developed much later, because of cultural evolution or divine revelation. Then, this kind of example does not affect the conclusion that the human innate cognitive mechanism is reliable for producing
true simple beliefs.
4. An Open Question
If the above argument is correct, then a natural question is, ‘What about the reliability of
human cognitive mechanism for producing complex beliefs under naturalism?’ First, we
should note that human innate cognitive mechanism selected by genetic evolution in
general is in fact not reliable for producing beliefs about things remote from human life.
Six hundred years ago, most humans had false beliefs about whether the Earth is flat, as well as
many other false beliefs about similar states of affairs not closely related to human
survival. Even just one hundred years ago, many people, except for those in the western
countries, still had many false beliefs about the Sun, the Moon, stars, the microscopic
composition of matter, and so on. The human cognitive mechanism selected by genetic evolution has not changed in the past six hundred years. Therefore, it is actually not reliable for producing these beliefs. On the other hand, since most people had supernatural or anti-naturalistic beliefs a few hundred years ago, naturalism is obviously not a product of
that general cognitive mechanism alone. Therefore, this fact does not imply any internal
inconsistency in naturalism. It does not imply that the real mechanism that produces the
belief of naturalism is unreliable for producing the kind of beliefs to which the belief of
naturalism belongs. Now, naturalists believe that naturalism follows from modern science
and that scientific methods are reliable for producing true beliefs about things and states
of affairs remote from humans. Therefore, we should consider the reliability of scientific
methods. Let RS be the proposition ‘Scientific methods are reliable (or fairly reliable) for
achieving true beliefs within the scope of science’. Let N be a complete specification of one
version of naturalism, for instance, reductive materialism plus a naturalistic theory of
content and truth. Let ES be a description of the series of historical events and the internal
mechanism that together finally led to the emergence of modern scientific methods in the
western culture in recent centuries. Then, we should consider the following proposition
(6) P(RS|N&ES) is low.
Note that scientific methods are not products of genetic evolution alone. Therefore, I
replace E by ES. I take ES to include a description of the mechanism of genetic evolution
and the mechanism of cultural evolution (if any), as well as a description of the accidental
historical events that affect the course of genetic evolution on the Earth and the cultural
evolution that led to the emergence of scientific methods. Some naturalists may hold that
naturalism as a worldview already includes evolutionary theory, but at least the
accidental historical events that affected the actual course of evolution on the Earth are
not implied by any complete specification of naturalism. These accidental historical
events and the mechanism of genetic evolution together imply the emergence of humans
on the Earth. Therefore, we need ES as a separate condition in evaluating RS. On the other hand, since scientific methods are not products of random mutations plus selection
by adaptiveness, Plantinga’s strategy for his arguments cannot apply to (6) directly, even
if we ignore the problems of his strategy analyzed in this paper. Note that this does not
make my analysis of Plantinga’s arguments redundant. If Plantinga’s arguments were
correct, then the human cognitive mechanism selected by naturalistic evolution would likely be unreliable even for producing simple beliefs. Then, scientific methods, as extensions of the human innate cognitive mechanism, would likely be unreliable as well. Plantinga's problem
is that his conclusion is too strong.
I believe that it is still open whether we can construct an argument against naturalism by arguing for (6). To see the problems more clearly, I suggest we divide the proposition RS further into two sub-cases. First, consider the case where the proposition RS in (6) is
restricted to beliefs about unobservable physical entities. With this restriction, (6) follows
from constructive empiricism in philosophy of science, which claims that we do not have
any good reason to believe that our scientific beliefs about unobservable physical entities
are true. Therefore, if one is at least partially convinced by constructive empiricism, then
one may want to argue for (6). Otherwise, at least we can perhaps agree that the status of
(6) is open, since many people agree that constructive empiricism is hard to refute.
Note that Plantinga’s argument bears some similarity to the argument by
constructive empiricism. Plantinga’s argument claims that adaptiveness is indifferent to
truth, while the argument by constructive empiricism claims that empirical adequacy is
indifferent to truth about unobservable entities. The difference is that Plantinga asserts
indifference for all beliefs, while constructive empiricism asserts it for beliefs about
unobservable entities only. This becomes a critical difference under naturalism. Under
naturalism, human cognitive activities are physical interactions between human brains
and other macro physical objects in human environments, and so both truth about
macro physical objects and adaptiveness are relations between human brains and other
macro physical objects in human environments. Therefore, it is likely that adaptiveness is
related to truth about macro physical objects. To support Plantinga’s argument, we then
need a non-materialistic notion of truth (and content), which makes the argument circular
as a refutation of materialism. In contrast, the argument by constructive empiricism only
tries to exploit the gap between the macro and the micro physical world, which is a gap
completely within materialism, unlike the gap between material things and other things
like content. Therefore, the argument by constructive empiricism can be a completely
naturalistic argument. It also respects our commonsense idea that truths are useful and is
less counterintuitive in this respect. Besides, as an argument refuting naturalism, in
arguing for the claim that the selection of empirically adequate scientific theories is blind
to truth about microscopic entities, one can try to exploit the point that, assuming
naturalism, there is no divine guidance there, although I do not know how much this can add to
the existing argument by constructive empiricism. These are just some ideas. My
conclusion here is merely that the status of (6) seems still open.
If a convincing argument for (6) is available, then one has two options to proceed
further. The first is to follow Plantinga’s strategy to argue that if (6) is true then there is a
defeater of naturalism. However, there is a problem here. It is unclear whether the belief
of naturalism depends on the truth of the beliefs about unobservable physical entities. For
instance, if naturalism as a philosophical thesis can be stated without referring to any
unobservable physical entities and if scientific methods are indeed empirically adequate
(as constructive empiricism claims), then naturalism may not depend on the truth of
scientific assertions about unobservable things. In that case, we cannot get a defeater of
naturalism. I do not have an answer here. This is perhaps open as well. The second option
is to say that, because of (6), scientific realism is in conflict with naturalism, while truths
about unobservable physical entities would be reachable by humans with divine guidance. This
would not be a straight refutation of naturalism, but considering that most working
scientists accept scientific realism, it does have significant value.
Second, consider the case where the proposition RS in (6) is restricted to beliefs
about observable physical entities. This time, if materialists can show that the human innate
cognitive mechanism selected by evolution is reliable for producing true beliefs about
things closely related to human survival, then there may be a chance that they can also
show that scientific methods are reliable for achieving true beliefs about observable
things. I do not have an argument, but here are some considerations that might favor this
idea. First, indeed, for beliefs about things remote from human life, the adaptiveness of a
belief can at most relate to its truth very remotely. This appears to support Indifference of
Adaptiveness to Truth for those beliefs. However, what drives the emergence of scientific
methods is not merely their adaptiveness. Intuitively, it is partly the human desire for truth.
Note that after naturalizing truth and other semantic notions, we can characterize this
‘desire for truth’ in naturalistic terms. In other words, many commonsense explanations
of human behaviors become available to naturalists, including this ‘desire for truth’. They
need not imply any non-physical mind with irreducible mental capabilities (such as
irreducible intentionality). The desire for truth seems to be a byproduct of the clearly
adaptive desires for food, reproduction, and so on. More specifically, if materialists are
correct in characterizing the connection between adaptiveness and truth, truth brings the
satisfaction of desires in most normal cases. This helps humans develop the desire for
truths. The objects of this desire may extend from truths about things directly related to human
life to truths about things remote from humans, which may eventually lead to the
invention of scientific methods that are reliable for achieving truths about things remote
from humans. That is, from the point of view of cultural evolution, scientific methods
may be a byproduct of some healthy and adaptive desire. If scientific methods in the end
lead to a nuclear holocaust, they will in fact be maladaptive, but given how they emerged,
they may be reliable for producing truths at least to some extent. This is not an argument
for the reliability of scientific methods for achieving truths about observable things. It is
merely an idea with some intuitive appeal. I admit that it is also still open whether materialists
can refute (6) in this case, and the issue seems very complex.16
Peking University, P. R. China
NOTES
1. See Plantinga ‘Introduction: The Evolutionary Argument against Naturalism’ and ‘Reply to Beilby’s Cohorts’ in J. Beilby (ed.) Naturalism Defeated? Essays on Plantinga’s Evolutionary Argument against Naturalism, Cornell University Press, 2002; ‘Naturalism vs. Evolution: A Religion/Science Conflict?’, http://www.infidels.org/library/modern/alvin_plantinga/conflict.html; ‘Against “sensible” Naturalism’, http://www.infidels.org/library/modern/alvin_plantinga/against-naturalism.html; ‘Religion and Science’, in Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/religion-science/; ‘Content and Natural Selection’, forthcoming in Philosophy and Phenomenological Research.
2. ‘Content and Natural Selection’, forthcoming in Philosophy and Phenomenological Research.
3. I will number my major quotes from Plantinga by (Q1), (Q2), etc. for later cross-reference.
4. Plantinga, ‘Content and Natural Selection’, forthcoming in Philosophy and Phenomenological Research. Similar claims are in other texts cited above.
5. ‘Naturalism vs. Evolution: A Religion/Science Conflict?’, http://www.infidels.org/library/modern/alvin_plantinga/conflict.html. Similar claims are in other texts cited above.
6. W. Ramsey ‘Naturalism Defended’, and E. Fales ‘Darwin’s Doubt, Calvin’s Cavalry’, both in J. Beilby (ed.) Naturalism Defeated? Essays on Plantinga’s Evolutionary Argument against Naturalism, Cornell University Press, 2002; P. Draper, ‘In Defense of Sensible Naturalism’, http://www.infidels.org/library/modern/paul_draper/naturalism.html.
7. See Plantinga ‘Reply to Beilby’s Cohorts’ in J. Beilby (ed.) Naturalism Defeated? Essays on Plantinga’s Evolutionary Argument against Naturalism, Cornell University Press, 2002; ‘Against “sensible” Naturalism’, http://www.infidels.org/library/modern/alvin_plantinga/against-naturalism.html.
8. Plantinga, ‘Against “sensible” Naturalism’, http://www.infidels.org/library/modern/alvin_plantinga/against-naturalism.html.
9. ‘Content and Natural Selection’, forthcoming in Philosophy and Phenomenological Research.
10. See F. Ye ‘A Structural Theory of Content Naturalization’, ‘On Some Puzzles about Concepts’, and ‘Truth and Serving the Biological Purpose’, all available online at http://sites.google.com/site/fengye63/
11. See D. Papineau, Philosophical Naturalism, Oxford: Basil Blackwell, 1993; R. Millikan, Varieties of Meaning, Cambridge, MA: MIT Press, 2004; and K. Neander, ‘Teleological Theories of Content’, in Stanford Encyclopedia of Philosophy, E. N. Zalta (ed.), http://plato.stanford.edu/entries/content-teleological/
12. Interested readers can consult ‘A Structural Theory of Content Naturalization’, available online at http://sites.google.com/site/fengye63/.
13. See G. Murphy, The Big Book of Concepts, Cambridge, MA: MIT Press, 2002; S. Laurence and E. Margolis, ‘Concepts and Cognitive Science’, in E. Margolis & S. Laurence (eds.), Concepts: Core Readings, Cambridge, MA: MIT Press, 1999.
14. For more discussion of how truth diverges from adaptiveness, see my article ‘Truth and Serving the Biological Purpose’, available online at http://sites.google.com/site/fengye63/
15. Plantinga ‘Introduction: The Evolutionary Argument against Naturalism’, in J. Beilby (ed.) Naturalism Defeated? Essays on Plantinga’s Evolutionary Argument against Naturalism, Cornell University Press, 2002.
16. An earlier and shorter version of this paper was presented at the Beijing Conference on Science, Philosophy, and Belief (2009). I would like to thank Professor Alvin Plantinga, Kelly Clark, and other participants of the conference for their helpful comments. Further discussions with Prof. Plantinga after the conference helped me greatly in revising this article. Without all their help, this article would not have been possible.