AI & Soc (1990) 4:39-50
© 1990 Springer-Verlag London Limited

Thinking Persons and Cognitive Science

Martin Davies
Philosophy Department, Birkbeck College, Malet Street, London WC1E 7HX, UK

Abstract. Cognitive psychology and cognitive science are concerned with a domain of cognition that is much broader than the realm of judgement, belief, and inference. The idea of states with semantic content is extended far beyond the space of reasons and justification. Within this broad class of states we should, however, differentiate between the states distinctive of thinking persons - centrally, beliefs, desires, and intentions - and other states. The idea of consciousness does not furnish a principle of demarcation. But the distinction between states whose content is conceptualized by the person whose states they are and states for which this is not so is more promising. This principle of demarcation contains the seeds of a problem for distributed connectionism. The article ends with some more general reflections about cognitive science.

Keywords: Cognition; Cognitive science; Concepts; Connectionism; Consciousness; Content

There is a familiar distinction between perception and cognition. According to that distinction, the realm of cognition is, roughly, that of thinking: judgement, belief, and inference. In this familiar usage of the term, cognition has as its paradigm cases believing and deciding on the basis of reasons. The space of cognition is the space of reasons and justification; the cognitive realm is the realm of propositional attitudes, governed by the norms of rationality. In short, cognition is what makes us thinking persons.

What prospects or problems does this cognitive realm present for the connectionist programme? More generally, what are the chances that cognitive science will illuminate the realm of thought and inference? Cognitive science actually concerns itself with a much broader domain, which also includes early stages of perceptual processing, for example. We need a principle for marking out, within this broad domain, the narrower realm that is characteristic of thinking beings. The main argument of this article is that the correct principle of demarcation contains the seeds of a problem for distributed connectionism in the style of Smolensky (1988). The article ends with some more general reflections about cognitive science.

Extending Content

The disciplines of cognitive psychology and cognitive science are concerned with a domain that is very much broader than the realm of judgement, belief, and inference. Absolutely typical examples of cognitive psychological research concern the processes implicated in face recognition, or visual word recognition, or reading aloud, or the experience of depth resulting from binocular disparity: processes that fall squarely outside the space of reasons and justification.

This point is probably quite obvious; but let me labour it nevertheless. Take the case of visual word recognition. The final upshot of a piece of word recognition may well be a judgement, and a belief - the belief, say, that this is the word "doctor" on the screen before me. But competing theories about the processes involved in visual word recognition - whether it be logogens, or lexical search, or cohorts, or threshold bias - are not theories about a subject's reasons for his belief.
Indeed, in a clear sense, the subject has no reason for his belief: he simply takes at face value his experience as of the word "doctor" there on the screen in front of him. We could make similar remarks about the other examples.

Chomsky's work on knowledge of language provides yet a further example of research in this broad cognitive domain - once again, extending far beyond the space of reasons. In expressing his claims, Chomsky sometimes uses the term "cognize" (1980, pp. 69-70), with the intention that this should be much more inclusive than any propositional attitude verb such as "believe", or indeed "know" in the everyday sense. Thus, ordinary language users know - in the familiar everyday sense involving belief - some facts about, for example, whether certain strings are grammatical, and what various complete sentences mean. Since "cognize" is supposed to be a more inclusive term than "believe", language users also cognize these facts. But, in addition, they cognize facts that they do not believe or even contemplate: facts stated by the rules or principles of a grammar, from which the workaday facts follow. These pieces of cognizing (or tacit knowledge) lie behind a language user's judgements and beliefs about grammaticality and about meaning; they are causally antecedent to those judgements. But they in no way constitute the language user's reasons for what he believes. The typical unreflective language user simply takes at face value his impressions of grammaticality and meaning.

According to a realist about propositional attitudes, belief states and other attitude states exhibit a crucial combination of features: they have both causal powers and semantic content or "aboutness". Cognitive psychology extends this combination of features far beyond the realm of propositional attitudes. So much for labouring the obvious point that cognition conceived as the domain of cognitive psychology is much broader than cognition conceived as the realm of beliefs and reasons.

The Credentials of Cognition

There are those who question the legitimacy of any kind of extension of semantic content beyond the realm of propositional attitudes, which is the philosophical home territory of "aboutness" or "intentionality". These sceptics seek to reject the idea of a scientific psychology. They aim to impose a dichotomy: on the one hand, science (with physics as the paradigm case) and, on the other hand, the so-called "folk psychology" of the non-scientific common-sense scheme of attribution of attitudes and explanation in terms of reasons. The sceptics bear the onus of proof here; for the very existence of the discipline of cognitive psychology confers prima facie legitimacy upon the extended notion of cognition.

If the friend of cognitive psychology sought to extend the domain of semantic content by extending the space of reasons and justification, then the sceptic would have a promising line of attack. On that reading, the attribution of tacit knowledge of linguistic rules, for example, would involve two steps. First, in the grip of the idea that there must be a step of justificatory reasoning preceding each piece of linguistic understanding, the theorist attributes to each ordinary language user a set of linguistic rules that are to be consulted in order to justify actual linguistic practice.
Then, second, confronted with the evident implausibility of such attribution of conscious knowledge of rules to anyone other than linguists, the theorist says that, in the case of ordinary unreflective language users, the knowledge and the consultation of rules is unconscious or tacit. Conceived in this way, the position of the cognitive theorist really would appear confused and, indeed, incoherent - involving as it does the totally mysterious idea of unconscious justification. The root of the incoherence would be located in the first of the two steps. For that step fails to heed a dominant theme in Wittgenstein's philosophy; namely, that at a certain point justifications give out.

However, cognitive science is not to be saddled with this incoherent position. Cognitive science extends the domain of semantic content beyond the space of reasons, rather than extending the space of reasons itself. The cognitive theorist is not guilty of "giving a justification of our procedure where there is no such thing as a justification" (Wittgenstein, 1976, III-74).

Within the broad class of cognitive states recognized by cognitive science there will be propositional attitude states and there will be other psychological states which also have semantic contents and figure in causal explanations. The first subclass contains, centrally, beliefs, desires, and intentions. The second subclass contains, for example, states that register information about the disparity between two retinal images, or about the orthographic form of a word, and states of tacit knowledge of linguistic rules. Stich labels states in this second subclass as subdoxastic states. They are states which "play a role in the proximate causal history of beliefs, though they are not beliefs themselves" (1978, p. 499). In Chomsky's terminology, they are states of (mere) cognizing.

Now it is certainly consistent to urge - against the sceptic - that the extended cognitive domain is legitimate, while agreeing with the sceptic that there is something special about the bearers of semantic content that fall within the narrower realm of propositional attitudes. So it is open to us to stress the importance of a distinction within the broad class of cognitive states recognized by cognitive science: the distinction between beliefs and other attitude states on the one hand, and subdoxastic states on the other.

Connectionist models are being used in the study of many cognitive processes. The patterns of activation in a connectionist network certainly exhibit the crucial combination of features that is characteristic of all cognitive states. For patterns of activation clearly have causal powers, and they also have semantic or interpreted descriptions: patterns of activation are a network's way of registering information about its environment. But what are the prospects for connectionism in modelling the doxastic realm of thought and inference? The answer to that question evidently depends upon the principle for distinguishing between doxastic and subdoxastic states.

Consciousness and What It Is Like

It is a deeply appealing idea that, if there is to be a principled distinction between the realm of attitudes and the domain of mere information processing, then consciousness should somehow mark the boundary.
But I shall be arguing here for a negative claim; namely, that the intuitive notion of consciousness cannot bear the weight of a principled distinction between the doxastic and the subdoxastic (even if we ignore the possibility of unconscious beliefs).

One of the most basic thoughts about consciousness is that bats are conscious while bricks are not. There is something that it is like to be a bat (Nagel, 1979), but there is nothing that it is like to be a brick. This basic thought ties the notion of consciousness to that of experience; and my negative claim is that this undifferentiated notion of conscious experience cannot furnish the principle that we require. We want to say that beliefs, for example, are conscious states. But there is certainly more to conscious experience than propositional attitudes. Many creatures enjoy conscious experience, but are not bearers of attitudes at all. As a consequence, there is a sense in which a state that has a semantic content may be accessible to consciousness while also being intuitively classified as subdoxastic rather than doxastic.

Suppose, for example, that certain low level states with semantic content - prima facie examples of subdoxastic states - were to surface in conscious awareness, as distinctive itches or tickles, perhaps. Then there would be something that it was like to be in those states. But that empirical difference from our actual situation would obviously not be enough to make those states into beliefs.

It might be said that, in the imagined situation, the semantic content of the state is in no way reflected by the character of the experience of being in the state. The experience does not encode the content. But, while that is true, it is not difficult to refine the example. Suppose that the semantic content of certain states concerns the value of some parameter along a one-dimensional scale. And suppose that the intensity of the itch or tickle varies with the value that the state assigns to that parameter. Now what it is like to be in the state depends systematically upon the semantic content of the state. But this still does not make the state into a belief.

These hypothetical examples show that there can be aspects of experience which make no difference to what a person taking his experience at face value would believe about the world, even though the character of the experience does systematically reflect information about the world. Those aspects of experience amount to the surfacing in conscious awareness of states with semantic content, but the experiences do not present those contents as potential contents of judgement and belief. The problem for the idea that consciousness marks the distinction between doxastic and subdoxastic states is that semantic content plus consciousness (there being something that it is like to be in the state) does not add up to the content of a doxastic state.

We can gain a slightly different perspective on this problem if we consider a distinction that is sometimes drawn within the character of perceptual experiences. The representational content of an experience is a matter of the way that the experience presents the world as being; it is the way that a person taking the experience at face value would thereby judge the world to be. This is to be distinguished from the sensational or phenomenal properties of the experience (Peacocke, 1983).
One of the examples that can be used to illustrate the representational versus sensational distinction is provided by monocular and binocular viewing of the same scene. Monocular vision certainly does not present the world as flat; and provided enough depth cues are present, it is entirely possible that the representational content of the two visual experiences should be the same. Yet there is an intrinsic difference between the experiences: they have different sensational properties. So sensational properties cannot be reduced to representational ones. But, nevertheless, the sensational differences that make no representational difference may be underpinned by differences in the semantic contents of states of visual processing. It does not seem entirely science fictional to suppose that the sensational difference between the two experiences is to be explained in terms of the presence or absence of information about binocular disparity in their causal antecedents. Semantic content may surface in experience as a sensational rather than a representational feature.

Given this terminology, we can say that the semantic content of a belief, or other doxastic state, is representational content. Subdoxastic states have semantic content, too. And the point that semantic content plus consciousness does not add up to the content of a doxastic state emerges here as the claim that semantic content can surface in experience without surfacing as representational content.

Someone might reply in the following way to this problem with the idea of using consciousness to mark the distinction between doxastic and subdoxastic states. We can allow that, in the imagined cases, a subdoxastic state with semantic content is accessible to consciousness. But, even so, the content of the state is not accessible to consciousness; and it is this that matters for a principled distinction between beliefs and subdoxastic states.

This reply does not, strictly speaking, oppose my negative claim. For that claim was concerned with the intuitive and undifferentiated notion of consciousness, tied to the idea of there being something that it is like to be in a state. What the reply does amount to is a suggestion for refining the undifferentiated notion of accessibility to consciousness; introducing the notion of accessibility of content to consciousness. If this suggestion is to furnish a principle of demarcation, then it will have to be the case that the idea of accessibility of content is tolerably clear, and that it can be employed in a non-question-begging way to distinguish doxastic from subdoxastic states.

Let us ask first whether the idea is clear. What the examples that we have just considered show is that accessibility of content would need to be sharply distinguished from the mere systematic reflection of semantic content in aspects of experience. Accessibility of content would also need to be distinguished from the kind of case where a person has beliefs about the semantic contents of his or her own subdoxastic states. The fact that a theorist of vision, for example, may have beliefs about the contents of the states implicated in the information processing that is going on within him does not render those states any the less subdoxastic. Nor does the possibility of a belief to the effect that the binocular disparity is such-and-such render doxastic the visual processing state of registering information about disparity.
Accessibility to consciousness of the content of a state is not just the possibility of beliefs about the content of a state, or beliefs with the same content as the state. To be in a state with a semantic content is one thing; to have a belief about that state is another. For accessibility of the content of a state, we need to require that to be in that state is ipso facto to have the content accessible. Then, borrowing from Fodor (1983, p. 56), we could cash out accessibility as availability for explicit verbal report. The result would be the proposal that a state is doxastic, rather than subdoxastic, if being in that state is ipso facto to have the semantic content of the state available for verbal report. However, just as it stands this gratuitously ties doxastic states too closely to language. We do not want a principle of demarcation that simply legislates that only a language user can think.

The proposal is reasonably clear, and there is something compelling about it. But the idea of actual verbal report seems to be an inessential intrusion. What is more fundamental than availability for report is availability for thought. If we replace the inessential with the fundamental, then the proposal comes to this. A state with semantic content is doxastic if being in that state is ipso facto to have the content of the state available as a content of thought - of propositional attitudes. A state is subdoxastic if being in the state is not ipso facto to have its semantic content available for thought.

What this says is true. But it is hardly a non-question-begging way of distinguishing the realm of thought and inference from the subdoxastic domain. For the proposal simply helps itself to the notion of thought. Until we understand more fully what we are saying when we say that beliefs are conscious states, we cannot make anything of the appealing idea that consciousness marks the boundary of the doxastic realm. It is time to turn elsewhere for a principle of demarcation.

Conceptualized Content and the Structure of Thinking

It is a familiar neo-Fregean point that no one can have a belief with a particular content - or entertain a thought with a particular content - without grasping the constituent concepts of that content. For example, no one can believe, or even entertain the hypothesis, that God is triune unless he knows what it is for something to be triune. Someone who lacks that knowledge can merely believe that the string of words "God is triune" expresses some true proposition or other, not knowing which proposition it expresses.

However, there is no corresponding requirement relating to the semantic contents of subdoxastic states. A person can certainly be in a state that registers information about binocular disparity, for example, without that person having the concept of binocular disparity. And, of course, ordinary language users who are credited with tacit knowledge of linguistic rules, principles, or generalizations are likely to have no grasp at all upon the technical concepts of linguistic theory. They do not, for example, know what it is for one expression to c-command another. Consequently, the requirement that the semantic content of a state should be conceptualized by the person whose state it is suggests itself as a foundation for the distinction between propositional attitude states and other cognitive states.
The idea here is not simply that if a thinker happens to possess the concepts that are involved in a content, then any state of the thinker which has that content is a doxastic state. Of course, a person who enjoys binocular vision may also be a theorist of vision, and so may indeed have the concept of binocular disparity. But that does not make the low level state of registering disparity anything other than subdoxastic. To be in the low level state whose semantic content concerns binocular disparity is not ipso facto to have the content of that state available for thought, and so does not itself require possession of the concept of binocular disparity.

Strawson says (1959, p. 99) that "the idea of a predicate is correlative with that of a range of distinguishable individuals of which the predicate can be significantly, though not necessarily truly, affirmed". This truth about predicates in language has its echo for concepts in the realm of thought. If a thinker is to have the concept of being F, then that thinker must know what it is for an object to be F; and what this means is that the thinker must know what it would be for an arbitrary object to be F.

If we put this truistic sounding doctrine about concepts together with the claim that being in a doxastic state involves deploying concepts, then an important consequence follows. In order to entertain the thought that a particular object a is F, a thinker must have the concept of being F. That is, the thinker must know what it is for an arbitrary object to be F. So, if the thinker is able to think about some other object b, then the thinker knows what it would be for that object b to be F. In short, if a thinker is able to entertain the thought that a is F, and is able to think about b - thinking perhaps that b is G - then the thinker has all the conceptual resources that are required for also entertaining the thought that b is F (and the thought that a is G).

This requirement upon thoughts is what Evans (1982, pp. 100-105) calls the Generality Constraint. Semantic contents in general are not subject to this constraint; but thoughts are subject to it because their semantic contents are conceptualized by the person whose thoughts they are. One immediate consequence of the Generality Constraint (perhaps not properly distinguishable from the Generality Constraint itself) is that the class of thought contents that are available to a given thinker exhibits a kind of closure property. If the contents a is F and b is G are available, then so too are the recombined contents a is G and b is F. Likewise, if the content a is R to b is available, then so too is the content b is R to a.

It may be that, if we look at the semantic contents of some class of subdoxastic cognitive states, then we shall find that those semantic contents also meet a closure condition: the class of actual and potential semantic contents is closed under recombination. But that possibility need not blur the distinction between doxastic and subdoxastic states. Doxastic states have conceptualized contents, are immediately answerable to the Generality Constraint, and so essentially meet the closure condition. Subdoxastic states are not subject to the Generality Constraint, and so, if they do meet the closure condition, then that is a merely contingent fact about them. Their having the contents that they do is one thing; their meeting the closure condition is another.
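The closure property itself is easy to state in a mechanical form. The following sketch is not from the original article; the object and concept names are invented for illustration. It simply shows how, for contents built from a stock of objects and monadic concepts, closure under recombination amounts to taking every pairing of an object the thinker can think about with a concept the thinker possesses.

```python
from itertools import product

def closure_under_recombination(contents):
    """Given atomic contents as (object, concept) pairs, return the closure:
    every pairing of an object the thinker can think about with a concept
    the thinker possesses is also an available thought content."""
    objects = {obj for obj, _ in contents}
    concepts = {con for _, con in contents}
    return set(product(objects, concepts))

# Hypothetical illustration: a thinker who can think that a is F and that b is G.
available = {("a", "F"), ("b", "G")}
closed = closure_under_recombination(available)

# The recombined contents "a is G" and "b is F" are thereby also available.
assert ("a", "G") in closed and ("b", "F") in closed
```

On the view sketched in the text, doxastic states satisfy this closure essentially, because their contents are conceptualized; any subdoxastic states that happen to satisfy it do so only contingently.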
Suppose that we take conceptualization of semantic content as the principle for distinguishing propositional attitudes from subdoxastic states. What, then, are the prospects for connectionism in modelling the doxastic realm of thought and inference?

Inference and Causal Systematicity

It is not easy to argue directly from the closure condition on thought contents to a problem for the connectionist programme. The friend of connectionism can point out, in the first instance, that it is perfectly easy to devise little models that de facto meet the closure condition. A system of four nodes - representing the four states of affairs a is F, b is F, a is G, b is G - would serve. The opponent can then object that this model is a mere toy; and he can claim that to meet the closure condition in a realistic case would require some kind of articulation in the states that bear semantic contents. He can claim, in particular, that there would need to be constituents of the states - an a constituent, and an F constituent, for example - and that this articulation into constituents would amount to a syntax, which is just what connectionist models are supposed to do without.

However, the friend of connectionism has a rejoinder. The opponent's objection is essentially a "How else?" argument. Thus: "It is possible to meet the closure condition on contents by using syntactically structured vehicles of semantic content. How else could it be done?" But this style of argument is apt to seem question-begging in the context of a proposed alternative style of vehicle for semantic content.

The opponent can then regroup around the doctrine that thought contents are essentially subject to the Generality Constraint, whereas connectionism makes the satisfaction of the closure condition look like a mere contingency. But, here too, the friend of connectionism has a reply. For he can offer his opponent a stipulation. Let it be stipulated that a semantic description of a network will not amount to a description in terms of thought contents unless the closure condition is met. That preserves an essential link between thoughts and closure, as the opponent wanted. But it also acknowledges as a contingency the fact that any given network has a degree of complexity that could warrant a doxastic description.

There are further possible steps in this exchange, but we do not need to follow them. For the closure condition on thought contents does not exhaust the significance of the neo-Fregean idea of conceptualized content. Part of what is involved in the idea of conceptualized content is that when a thinker entertains the thought that a is F and also the thought that b is F, a common piece of concept mastery is being deployed: both thoughts require knowledge of what it is for an object to be F. This knowledge is implicated in a person's grasp of the inferential potential of these two thoughts. For, to take a very basic case, suppose that part of what is required for an object to be F is that it should be H. Then a thinker will appreciate that from a is F it follows that a is H, and from b is F it follows that b is H. The thinker will be disposed to accept these two inferences; and, what is more, the two inferential dispositions will be products of a common underlying capacity - mastery of the concept of being F. What we arrive at is the idea of real causal systematicity in inferential transitions. Here is a very simple example of the way that this idea works.
A thinker who has the thought that Bruce is a bachelor appreciates that from this thought it follows that Bruce is unmarried. He also appreciates that from the thought that Nigel is a bachelor it follows that Nigel is unmarried. The thinker appreciates the inferential potential of these two thoughts; and the inferential disposition depends in each case upon the same general capacity - mastery of the concept of being a bachelor. In order to have either the thought that Bruce is a bachelor or the thought that Nigel is a bachelor, the thinker must grasp the concept of being a bachelor. Grasping the concept of being a bachelor is a matter of knowing what it is for an object to be a bachelor; and that is a matter of knowing inter alia that to be a bachelor requires being unmarried. This single piece of knowledge is implicated as a causal common factor in both of the inferential transitions that the thinker is disposed to make.

Of course, this is a very simple example; very few concepts are easily definable in the way that bachelor is. And even in this simple case, there will be many other inferences that our thinker will draw concerning Bruce and Nigel. Often, the conclusion about Bruce will be different from the conclusion about Nigel; and even where the conclusions are similar there is no general guarantee of causal common factors. All this is quite correct. But it does not undermine the idea that, very often, part of what is involved in mastery of a concept, or of a family of concepts, is commitment to a pattern of inference. When inferential transitions that conform to that pattern are made, there is a commonality, not merely in our descriptions of the transitions, but in their causal explanations as well.

It is this idea of causal systematicity of inferential transitions that is problematic for connectionist models. A PDP system may certainly perform transitions which, under a semantic description, conform to a certain pattern. But it is one of the characteristic features of connectionist networks with distributed representation that there is not a causal commonality in the way that the network performs the various individual transitions conforming to the pattern. In a localist connectionist network with a node for bachelor and a node for unmarried, a connection between those nodes could function as a causal common factor in transitions from patterns of activation with semantic contents "... is a bachelor" to patterns with corresponding contents "... is unmarried". This, incidentally, illustrates the important fact that strict causal systematicity does not require explicitly represented rules. But, with distributed representation, what we expect to find is that there is no common subpattern of the various patterns of activation that have the semantic contents: Bruce is a bachelor, Nigel is a bachelor, and so on. Rather there will be constituent subpatterns that exhibit both commonalities and differences. Consequently, there will not be a common pattern of weights on connections that is implicated in all and only the bachelor to unmarried transitions. That is to say that there will not be causal systematicity in these transitions.

In short, part of what is involved in the notion of conceptualized content is a requirement of causal systematicity in inferential transitions. A PDP system may perform the transitions; it may achieve the right input-output relation. But a network, as such, will not model this causal systematicity.
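To make the contrast concrete, here is a minimal sketch, not from the original article: the node names, weights, pattern sizes, and two toy update rules are invented for illustration. The localist case realizes every "... is a bachelor" to "... is unmarried" transition through one and the same connection, which can therefore serve as a causal common factor; the distributed case carries each transition through an entire weight matrix over shared units, with no single weight or dedicated subpattern implicated in all and only those transitions.

```python
import numpy as np

# Localist sketch: one node per individual and per concept. A single connection
# from "bachelor" to "unmarried" mediates every bachelor -> unmarried transition,
# so that one weight is a causal common factor across the Bruce and Nigel cases.
WEIGHTS = {("bachelor", "unmarried"): 1.0}

def localist_step(activation):
    out = dict(activation)
    for (src, dst), w in WEIGHTS.items():
        out[dst] = max(out.get(dst, 0.0), w * activation.get(src, 0.0))
    return out

print(localist_step({"bruce": 1.0, "bachelor": 1.0}))  # "unmarried" comes on via the one connection
print(localist_step({"nigel": 1.0, "bachelor": 1.0}))  # the same single weight does the work

# Distributed sketch: each content is a pattern over shared units, and the
# transition is carried by a whole weight matrix (here random, standing in for
# trained weights). The patterns for "Bruce is a bachelor" and "Nigel is a
# bachelor" need share no dedicated subpattern, so no single weight is
# implicated in all and only the bachelor -> unmarried transitions.
rng = np.random.default_rng(0)
bruce_is_a_bachelor = rng.normal(size=8)   # hypothetical distributed patterns
nigel_is_a_bachelor = rng.normal(size=8)
W = rng.normal(size=(8, 8))

def distributed_step(pattern):
    return np.tanh(W @ pattern)

print(distributed_step(bruce_is_a_bachelor))
print(distributed_step(nigel_is_a_bachelor))
```

The sketch also illustrates the aside in the text: the localist case achieves strict causal systematicity without any explicitly represented rule, simply by routing all the relevant transitions through one connection.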
So, on the face of it, if conceptualization of content provides the principle of distinction, then the doxastic realm of thought and inference is problematic for connectionism.

Reconstructing the Mind

I began by defending the extension of the notions of cognition and semantic content far beyond their philosophical home territory. I then sought a principle for a distinction, within the broad class of cognitive states, between the doxastic realm - the home territory of propositional attitude states - and the subdoxastic domain. I argued that the intuitive notion of consciousness will not bear the weight of such a distinction, and turned instead to the idea of conceptualization. If the argument of the last section is right, then conceptual content brings with it a kind of causal systematicity that is typically not exemplified by PDP systems.

This tells us that, if we attempt a cognitive scientific reconstruction of the philosopher's notion of a thinking person, then a component in that reconstruction will be a system with an architecture more classical than connectionist. In fact, there are plausible arguments (Fodor, 1987) linking systematicity of cognitive processes with syntactic structure in representational states. So perhaps we can say that something like the language of thought will be a component in the construction of a thinker.

There is a second Fodorian element that could plausibly contribute towards a putative reconstruction: this is the idea of the modularity of mind (Fodor, 1983). For it can hardly escape notice that the distinction between the realm of thoughts with their conceptualized content, and the domain or domains of other cognitive states, is reminiscent of the distinction between the central cognitive system and modules. What this suggests is that thoughts are content bearing states of a central system, rather than of a module.

However, it is clear that a putative scientific reconstruction of the realm of thoughts and concepts drawing on just these two elements would be only a partial reconstruction. Being a content bearing state that is syntactically structured and implicated in causally systematic processes, and being a state of a central system, do not add up to being a thought. Less telegraphically, we can say this. There can be a complex information processing system with the architecture of a central system plus modules, and with its central system operating systematically over syntactically structured states, but which is intuitively not a thinker, not a deployer of concepts. (Many AI systems, such as planning and action systems, illustrate this point.)

Many intuitions seem to converge here. For example, a thinker can be held responsible for his beliefs; there are normative aspects to the doxastic realm. Again, thought involves some kind of self-consciousness; there are reflective aspects of the doxastic realm. It may be that cognitive science will make progress with these normative and reflective aspects of thought. But the intuitions suggest that there are features of our common-sense idea of the realm of judgement, belief, and inference which may well not be fully captured in a scientific psychological reconstruction.

How should we respond to that suggestion? Two extreme responses are evidently possible. One (the "British" response?) is to conclude that cognitive science is best avoided.
The idea here would be that, since cognitive science does not reconstruct the philosopher's notion of a thinking person, it is dehumanizing. (This might be regarded as a vindication of those Wittgensteinian reservations about extending the notion of cognition.) The opposite extreme (the "Australian" response?) is to credit philosophy with something like superstitious hankering. The idea here would be that what science does not reconstruct should be banished. Our conception of our own mentality ought, in full seriousness, to be purged of whatever has no echo in a cognitive psychological model.

But a choice between these extremes is not obligatory. It is surely allowable to believe that the prospects for fruitful interdisciplinary liaison are not so dire. Certainly, there is more to the study of the mind than computational psychology. But still, representation and computation may well be the de facto underpinnings of at least some psychological phenomena; and the empirical discipline of cognitive psychology can and should inform philosophical theorizing. We might even hope to advance beyond such a justified eclecticism. For if we came to understand the relationships between the philosopher's and the psychologist's accounts of the mind - between folk psychology and its cognitive psychological underpinnings - then we would have taken a step towards a principled appreciation of the various levels of description of our mental life.

References

Chomsky, N. (1980). Rules and Representations. Oxford: Blackwell.
Evans, G. (1982). The Varieties of Reference. Oxford: Oxford University Press.
Fodor, J. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, J. (1987). Psychosemantics. Cambridge, MA: MIT Press.
Nagel, T. (1979). What is it like to be a bat? In Mortal Questions. Cambridge: Cambridge University Press, pp. 165-180.
Peacocke, C. (1983). Sense and Content. Oxford: Oxford University Press.
Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-74.
Stich, S. (1978). Beliefs and subdoxastic states. Philosophy of Science, 45, 499-518.
Strawson, P. F. (1959). Individuals. London: Methuen.
Wittgenstein, L. (1976). Remarks on the Foundations of Mathematics. Oxford: Blackwell.

Correspondence and offprint requests to: Martin Davies, Philosophy Department, Birkbeck College, Malet St, London WC1E 7HX, UK