Compositionality
Michael Johnson
Lingnan University

1. Introduction

A symbolic system is compositional if the meaning of every complex expression E in that system depends on (and depends only on) (i) E’s syntactic structure and (ii) the meanings of E’s simple parts. If a language is compositional, then the meaning of a sentence S in that language cannot depend on the prior discourse, speaker intentions, salient objects and events in the environment, or the non-semantic character of S’s simple parts, such as their shape or sound. It can only depend on the meanings of the words composing S, and the way those words are syntactically related to one another. Several arguments purport to show that not only is natural language compositional, but that it must be, since we could not have the linguistic abilities we in fact do have, unless the languages we speak are compositional. A commitment to compositionality has driven a large amount of research in the philosophy of language and in linguistics, since it appears to be very difficult to provide adequate compositional treatments of commonplace linguistic constructions, especially propositional attitude ascriptions. On the other hand, some philosophers have argued that natural language is not compositional, or that compositionality imposes no substantive restriction on possible theories of meaning. This article addresses the different ways compositionality has been understood by philosophers and linguists, and surveys the arguments that natural language is, must be, or should be compositional, as well as the arguments that it isn’t or needn’t be.

2. Interpretations of Compositionality for Natural Language

2.1 Syntactic Structure

In natural languages (like English, Cantonese, Kalaallisut, etc.), the smallest meaningful symbols are called “morphemes.” For highly analytic languages like English, there is a large overlap between morphemes and words: words are largely the smallest meaningful units.
(English does have a number of morphemes that are not words, however, like the plural ending –s for nouns, the possessive ending –’s for noun phrases, and the 3rd person singular ending –s for verbs. These are “bound” morphemes, in that they cannot grammatically occur on their own.) In other, more synthetic languages (like Kalaallisut), single words can be made of many meaningful parts. For example, the word atuartariaqalirpuq (“he began to have to study”) contains six morphemes, and can be used by itself as a sentence. [Example from Bittner 1995.]

‘Morphology’ is the set of rules governing how morphemes are combined to form words; ‘syntax’ is the set of rules governing how words are combined to form phrases and, ultimately, sentences. These rules describe (among other things) how smaller parts are put together to form larger units called “constituents.” The syntactic rules that formed an expression can affect its meaning. Consider the expression ‘large horse painting’: it can mean either a painting of a large horse or a large painting of a horse, depending on whether ‘large’ is modifying ‘horse painting’ or just ‘horse.’

The principal claim regarding compositionality that philosophers have been concerned with is the claim that all actual and possible natural languages are compositional. A natural language is a language that humans learn to speak naturally, as part of their development, as opposed to artificial languages like programming languages. In this context, the claim that natural languages are compositional amounts to the claim that the meanings of complex (multimorphemic) expressions are determined by and only by (i) the ways their morphemes are put together by the morphosyntactic rules of the language and (ii) the meanings of those morphemes.
This may seem like a clear statement of a single thesis, but unfortunately there is wide philosophical disagreement concerning (a) what meanings are and (b) how we should understand ‘dependence’ in the statement of compositionality. We turn now to these two issues.

2.2 Meaning

There are two ways in which there are a wide variety of meanings of ‘meaning.’ First, many different philosophers will use the word ‘meaning’ and understand by it various distinct things. Some will think meanings are conceptual roles; others that they are set-theoretic objects and functions. Second, one and the same philosopher may recognize several types or dimensions of meaning. She may think, for example, that connotations are meanings in one sense, and that denotations are meanings in a different sense. In discussing compositionality, a reasonable stance is to consider all proposed types of meanings as bona fide meanings and therefore understand that there are numerous compositionality theses. For example:

Compositionality of stereotype: the stereotype associated with a complex expression E in a natural language is determined by (and only by) (i) E’s morphosyntactic structure and (ii) the stereotypes associated with E’s morphemes.

Compositionality of semantic features: the semantic features (e.g. [+male] or [+animate], as they attach to ‘he’ and ‘who,’ respectively) of a complex expression E in a natural language are determined by (and only by) (i) E’s morphosyntactic structure and (ii) the semantic features of E’s morphemes.

And so on, for each possible type or dimension of meaning. The philosophical question is which, if any, of these theses is true. Any argument for or against compositionality should make it clear what conception of meaning it takes to be or not to be compositional.
It is quite possible that there are several legitimate conceptions of meaning, each deserving the name ‘meaning,’ where on some of those conceptions natural languages are compositional, and on others they are not. The question that has perhaps most concerned philosophers interested in compositionality is whether the truth-conditions of a sentence depend on (and only on) its syntax and the meanings of its simple parts. The truth-conditions of a sentence are simply the conditions under which the sentence is true. The truth-conditions of a sentence do not depend only on its syntax and the meanings of its simple parts if some sentences with the same syntax and the same meanings assigned to their parts are true, while other sentences with the same syntax and the same meanings assigned to their parts are false. For example, we will later consider sentences like ‘It is midnight.’ Sometimes this sentence is true, but other times—apparently without a change in the meanings of the words or in the way they are combined—it is false. This is an apparent violation of the compositionality of truth-conditions.

2.3 Dependence

Dependence and determination are common and vital notions in philosophy, though they are in many ways ambiguous. Sometimes dependence is a functional notion, as in: “the signs of two numbers determine the sign of their product (the sign of their product depends on their signs).” Dependence can also be a causal notion, as in: “the success of our movie depended on our advertising campaign.” It can be a constitutive notion, as in: “whether I win depends on whether I get a card lower than 4.” There are many ways the notion of dependence has been understood with regard to the compositionality thesis.
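The apparent failure of truth-conditional compositionality illustrated by ‘It is midnight’ can be put in concrete form. The following is a minimal sketch, not part of the article’s text; in particular, modeling the context of utterance as a clock time is an illustrative assumption.

```python
from datetime import time

# Toy sketch (the time-based model of context is an illustrative assumption):
# the sentence 'It is midnight' keeps its syntax and its word meanings fixed,
# yet its truth value varies with the context in which it is uttered.

def it_is_midnight(context_time):
    """Truth-conditions of 'It is midnight', relativized to a context.
    The extra context parameter is precisely the kind of input that a purely
    compositional meaning function is not supposed to need."""
    return context_time == time(0, 0)

# Same sentence, same structure, same word meanings -- different contexts:
print(it_is_midnight(time(0, 0)))    # True: uttered at midnight
print(it_is_midnight(time(14, 30)))  # False: uttered mid-afternoon
```

The sketch simply makes vivid what the example already shows: nothing about the sentence itself changes between the two utterances, only the context.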
2.3.1 Functional Dependence

One way of understanding the sense in which the meaning of the whole, according to compositionality, “depends on” the meanings of the parts, and the way those parts are combined, is to read “depends on” as “is a function of.” That is, a symbolic system is compositional if, and only if, the meaning of each complex expression E in that system is a function of (a) E’s syntactic structure and (b) the meanings of E’s simple parts. Let the meaning function be designated M. The functional conception of compositionality says that M’s outputs only differ when M’s arguments differ in either (a) their syntactic structure or (b) the meanings of their simple parts. Thus M(E) and M(E*) will be identical whenever E and E* differ only in the substitution of simple parts that are synonymous. (For example, “Fred sweats” and “Fred perspires” must be assigned the same meanings if “sweats” and “perspires” are assigned the same meanings.) The functional conception of compositionality is equivalent to the substitutional conception of compositionality: a symbolic system is compositional if, and only if, for any two complex expressions E and E* in that system, where E contains simple part P and E* is identical to E except in containing simple part P* where E contains P: if P and P* have the same meanings, then E and E* have the same meanings. While the functional conception of compositionality is easy to characterize and understand, it fails to capture the full force of the constraint many philosophers have thought compositionality imposes upon semantic theories for natural languages. This is because many semantic theories which intuitively are not compositional come out compositional in the functional sense. One way to see this is by noting that any symbolic system that contains no synonyms is compositional in the functional sense.
If a symbolic system contains no synonyms, the meaning function for that system can’t treat two expressions differing only in the substitution of synonyms differently (because there are no such expressions). Thus for any expression E of such a system, there is a function F that takes E’s syntactic structure and the meanings of E’s parts as inputs and returns the meaning of E as output. This entails that a non-compositional language could be made compositional solely by removing a few redundant expressions (synonyms of other expressions in the language). Second, the functional conception of compositionality does not demand any particular relatedness among the meanings of related expressions. The functional conception requires only that the meaning function not assign different meanings to expressions that differ only in the substitution of synonyms. It does not require that the meanings it does assign to complex expressions be in any natural way related to the meanings of their parts, or to the meanings of other complex expressions composed of similar parts. For example, consider these meaning assignments:

(1) Le chien aboie. The dog barks.
(2) Le chat aboie. The cat dances.
(3) Le chat pue. The skunk eats.

Sentences (1) and (2) share a verb, but nothing about their assigned meanings is similar; (2) and (3) share a noun phrase, but again nothing about their assigned meanings is similar. Nevertheless, there exists a function that takes the syntax, and the meanings of the morphemes, of each expression on the left, and maps it to the meaning on the right: it’s displayed in (1)–(3). In fact, any random, unsystematic assignment of meanings to sentences is compatible with the functional conception of compositionality, provided that either there are no synonyms or that sentences that differ only in the substitution of synonyms are assigned the same meaning. This is ‘dependence’ only in the weakest sense of that word.
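The weakness of the substitutional criterion can be shown in runnable form. The sketch below is purely illustrative: the lexicon and the unsystematic sentence meanings mirror examples (1)–(3) above, and the checking function is a hypothetical helper written for this illustration, not a standard routine.

```python
# Illustration of why the functional (substitutional) conception is weak:
# with no synonyms in the lexicon, ANY assignment of sentence meanings,
# however unsystematic, trivially passes the substitution test.

# Hypothetical word meanings: no two words are synonymous.
word_meaning = {"le": "the", "chien": "dog", "chat": "cat",
                "aboie": "barks", "pue": "stinks"}

# An unsystematic sentence-meaning table, mirroring examples (1)-(3):
# shared parts, unrelated wholes.
sentence_meaning = {
    ("le", "chien", "aboie"): "The dog barks.",
    ("le", "chat", "aboie"): "The cat dances.",
    ("le", "chat", "pue"):   "The skunk eats.",
}

def substitution_test(table, words):
    """Check the substitutional criterion: sentences differing only by
    substitution of synonyms must receive the same meaning."""
    for s1 in table:
        for s2 in table:
            if len(s1) != len(s2):
                continue
            # positions where the two sentences differ
            diffs = [(a, b) for a, b in zip(s1, s2) if a != b]
            if diffs and all(words[a] == words[b] for a, b in diffs):
                if table[s1] != table[s2]:
                    return False
    return True

# With no synonyms the test is passed vacuously, despite the unsystematic
# meanings:
print(substitution_test(sentence_meaning, word_meaning))  # True

# By contrast, once a synonym pair exists ('toutou' as a hypothetical
# synonym of 'chien'), an unsystematic assignment is caught:
with_synonyms = dict(word_meaning, toutou="dog")
unsystematic = dict(sentence_meaning)
unsystematic[("le", "toutou", "aboie")] = "Colorless green ideas sleep."
print(substitution_test(unsystematic, with_synonyms))  # False
```

As the second check shows, the criterion only bites when synonyms are present, which is why a language could be made "compositional" in this sense merely by deleting redundant words.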
2.3.2 Dependence as Computability

As we shall see, the principal historical reason for the belief that natural languages are compositional is that only compositionality can explain how we can figure out the meanings of a potential infinitude of expressions given our finite memories and capacities. Compositionality, on this conception, says that if you know the syntactic structure of an expression E, and you know the meanings of E’s simple parts, this suffices for you to “work out” the meaning of E: there exists a procedure that you can use which, after a finite number of steps, tells you the meaning of E itself. In other words, the meaning of any expression E is computable from (a) E’s syntactic structure and (b) the meanings of E’s simple parts. If the meaning of any expression E is computable from E’s syntactic structure and the meanings of E’s simple parts, then it is a function of E’s syntactic structure and the meanings of E’s simple parts. But the converse is not true, for not every function is computable. While computability imposes some standard of systematicity on meaning assignments, it nevertheless allows more freedom than we might wish. Consider how different programs running on your computer produce wildly different outputs, even given the same sequence of keystrokes. The outputs of the programs are computed from the keystrokes, but they process that information in radically different ways, and produce outputs of radically different characters. The keys I used to type the previous sentence in a word processor might result in a complicated series of moves if typed in a fantasy role-playing game.
The computability conception of compositionality says that the transition from the syntax of a complex expression and the meanings of its parts to the meaning of that expression must be a function of the syntax and the meanings of the parts, and that it must be rule-governed; but it doesn’t say anything about what the rules are or can be, except that they can be carried out in a finite number of steps.

2.3.3 Dependence as Supervenience

Suppose language L1 contains a complex expression E1 and language L2 contains a complex expression E2. E1 and E2 are similar in the following way: each expression in E1 has a counterpart in E2 that means the same thing, and those expressions are combined in exactly the same way in E1 as they are in E2. The supervenience conception of compositionality says that in any such situation E1 and E2 must have the same meaning. For example, the supervenience conception would say: if the French words ‘le,’ ‘chien,’ and ‘aboie’ have the same meanings as the English words ‘the,’ ‘dog,’ and ‘barks,’ and the French sentence ‘Le chien aboie’ has the same syntax as the English sentence ‘The dog barks,’ then ‘Le chien aboie’ and ‘The dog barks’ must have the same meaning. The thesis that the meaning of a complex expression E supervenes on its syntax and the meanings of its simple parts entails the thesis that E’s meaning is a function of its syntax and the meanings of its simple parts, but not the thesis that E’s meaning is computable from its syntax and the meanings of its simple parts. However, the supervenience thesis is consistent with the computability thesis, and they may well both be true. Whether supervenience is an adequate characterization of dependence for the purposes of compositionality is unclear. It allows non-computable meaning functions, and it allows unsystematic meaning assignments—provided every language has the same unsystematic assignments.
Computability and systematicity are properties that many philosophers believe natural languages satisfy, but perhaps they are independent of compositionality.

2.3.4 Dependence as Mereology

The functional and computational conceptions of dependence, with regard to the thesis that natural languages are compositional, are seemingly weaker than the pre-theoretical conception of dependence that occurs in the thesis itself. There is another conception of dependence in the literature that can reasonably be characterized as too strong (though it is not necessarily false that languages are compositional in this sense). On this conception, the meanings of the parts of a complex expression are literally part of the meaning of that expression. To see how this could be, consider the view that the meaning of a sentence is a structured proposition. The French sentence [[le chien] aboie]—where bracketing indicates syntactic structure—means a structured proposition like <<the dog> barks>, where the italicized words stand here for the meanings of ‘le,’ ‘chien,’ and ‘aboie,’ respectively. On this view, the meaning of ‘chien,’ for example, is literally a part of the meaning of ‘le chien aboie.’ This notion of dependence is quite strong: the meaning of a complex expression is made out of its syntactic structure and the meanings of its parts. And while many theories of the meanings of complex expressions, like the structured propositions theory, validate the principle of compositionality as interpreted with the mereological conception of dependence, it should be clear that this is more than what philosophers normally mean when they say natural languages are compositional.

2.3.5 The Empirical Conception of Dependence

Finally, it’s possible to define compositionality in terms of the role that it plays in explaining certain of our linguistic abilities.
In particular, many philosophers have thought that unless the meanings of complex expressions in natural languages depend on (and depend only on) (a) the syntax of those expressions and (b) the meanings of those expressions’ parts, we would not be able to learn and understand the languages we in fact learn and understand. Thus we can understand “dependence” here as whatever relation in fact obtains between the meaning of a complex expression and that expression’s syntax and the meanings of its parts that explains our ability to learn and understand a language containing an infinitude of such expressions. We know that language is compositional, but it is an empirical question just what compositionality consists in. The empirical conception of compositionality need not be thought of as a competitor to the alternative conceptions considered above. Instead, it provides a methodological backdrop against which we can evaluate various proposals regarding the sense of “dependence” at the heart of compositionality. As we saw, the functional conception of dependence is ill-favored precisely because it fails to explain our abilities to learn and understand the natural languages we speak. Any proposed account of compositionality not only has to meet certain internal criteria, like clarity and consistency, but it also has to (a) actually be true of the languages we speak and (b) actually explain our abilities to learn and understand those languages. There is of course the possibility that no dependence relation that obtains only between the meanings of complex natural language expressions and their syntax and the meanings of their simple parts plays a discernible role in our linguistic abilities. Perhaps the meanings of complex expressions are partly determined by prior discourse, speaker intentions, salient objects and events in the environment, or the non-semantic character of those expressions’ simple parts, such as their shape or sound.
In such an event, it might turn out not just that natural languages are not compositional, but that “compositionality” is without application, its introduction having rested on a false presupposition.

3. Compositionality and Thought

According to the language of thought hypothesis (LOT), mental representations have a syntactic structure comparable to that of natural language expressions and expressions of artificial computing languages. In particular, according to LOT, there is a distinct category of simple mental representations (containing no meaningful parts), and these simple representations are combined in rule-governed hierarchical structures to form complex mental representations. We can thus ask whether thought is compositional, that is, whether the meanings of complex mental representations depend on (and only on) (a) their syntactic structures and (b) the meanings of the simple mental representations that are their parts. Many of the issues regarding compositionality will be the same for both natural language and the language of thought. However, there are a few important differences that should be kept in mind.

3.1 Assigning a Meaning and Having a Meaning

Natural language expressions have the meanings they do because we assign them those meanings. ‘Dog’ means dog because we represent it as meaning dog. On a LOT-type picture, what it takes for ‘dog’ to mean dog is for our lexicon to store a representation like “DOG” MEANS DOG, where ‘“DOG”’, ‘MEANS’, and ‘DOG’ are LOT expressions whose contents correspond to the contents of the capitalized English words used here to represent them. The standard argument for the compositionality of natural language begins with the observation that we cannot store, in our finite minds, such representations for each of the infinitude of meaningful complex expressions in our language.
Compositionality is supposed to explain how we can calculate the meanings of novel complex expressions from their syntactic structure and the finite set of stored meanings in the lexicon. We cannot tell the same story for the language of thought. Suppose that for a simple LOT expression E to have a meaning M, we must store the mental representation E MEANS M* (where M* is a LOT expression whose meaning is M). Then for M* to have a meaning, we must store some other representation M* MEANS X. Unless the LOT lexicon is circular, X can’t be M* and it can’t be E either, so it must be some third LOT expression M**, which itself needs yet another stored representation to determine its meaning, and so on, ad infinitum. The lesson of the example is that there’s no finite, non-circular way to write a lexicon for LOT in LOT. (This is a basic fact about dictionaries in general: there is not, and cannot be, a finite, non-circular dictionary of English in English.) The mainstream response to this fact is to deny the claim that LOT expressions get their meanings by being assigned those meanings by LOT lexical entries. LOT expressions have meanings without being assigned meanings. (Different theorists then differ on what it is that determines the meanings of LOT expressions—causal relations, inferential relations, teleology, etc.) Arguments for compositionality thus relate differently to natural languages and to the language of thought. The first observation of the standard argument for the compositionality of natural languages is that we cannot store, in our finite minds, separate lexical entries for each of the infinitude of meaningful complex expressions. The analogous observation regarding the language of thought, however, is straightforwardly irrelevant. While it’s true that we can’t store an infinite number of such lexical entries, it’s not necessary that we store any of them.
And while compositionality might explain how we can calculate meaning-assigning representations for each complex natural language expression from a finite store of meaning-assigning representations in the lexicon, it’s not necessary that we calculate any meaning-assigning representations for expressions in the language of thought. Those expressions aren’t assigned meanings; they just have them.

4. Arguments for Compositionality

4.1 Productivity and Understanding

The most common argument for the compositionality of natural languages goes as follows:

1. English (Cantonese, Kalaallisut…) is productive: there are infinitely many grammatical, meaningful sentences of English (Cantonese, Kalaallisut…), possessing an infinite number of distinct meanings.
2. Human beings have finite minds. In particular, they can only store or remember a finite amount of information.
3. It is impossible for beings with finite minds to learn/understand productive languages unless those languages have compositional semantics.
4. Since some humans do in fact learn/understand English (Cantonese, Kalaallisut…), these languages must have compositional semantics.

The Justification for Premise 1. Natural language syntax is recursive. This means that phrases of one syntactic type can be embedded in larger phrases of that same type. For example, ‘dog’ is an English noun phrase, and it is a proper part of ‘old dog,’ which is also an English noun phrase. In turn ‘old dog’ is a proper part of the noun phrase ‘smelly old dog,’ which itself is a proper part of the noun phrase ‘big brown smelly old dog.’ This process can be repeated: any time we have a noun phrase, we can add an adjective to the front of it and get a new noun phrase. It’s not obvious that this will give us infinitely many English expressions with distinct meanings. After all, there are only finitely many English adjectives; at some point we will have to start repeating them.
And it’s not obvious that ‘big brown smelly old big brown smelly old dog’ differs in meaning from the shorter ‘big brown smelly old dog.’ However, there are recursive rules of English syntax that lead to an infinite number of expressions with nonredundant meanings. Consider the following sentences:

a. We’re having pizza for dinner.
b. Michael believes that we’re having pizza for dinner.
c. Jenny believes that Michael believes that we’re having pizza for dinner.
d. Michael believes that Jenny believes that Michael believes that we’re having pizza for dinner.

In general, for any English sentence S and name N, N + ‘believes that’ + S is also an English sentence. Sentences (c) and (d) don’t have the same meaning: the two occurrences of ‘Michael believes’ are not redundant. Jenny might believe that Michael believes pizza is for dinner (sentence (c) is true), and yet Michael might be unaware of Jenny’s belief, and thus fail to believe that she believes that he believes that pizza is for dinner (thus sentence (d) is false). Adding ‘N + believes’ to a sentence is never redundant, and thus there are not only infinitely many English sentences, but an infinite subset of them have distinct meanings.

The Justification for Premise 2. The human brain has a lot of neurons, but the number is finite (it’s about 85 billion). If the mind is the brain, or is realized by the brain, and it stores information in our neurons, then there simply is not space for an infinite amount of information. Furthermore, even if we deny that the mind is or is realized by the brain, and even if we hold that the mind has an infinite capacity for storage, we don’t have the time to store infinitely many things in it. Human beings live for a lot of seconds, but the number is finite (it’s less than 3 billion, typically).

The Justification for Premise 3. Understanding an expression E requires that we produce a mental representation of E’s meaning.
When someone produces sentences of a language we do not understand, the reason we do not understand them is that we are unable to correctly represent what those sentences mean. Understanding a language with a finite number of expressions may be done with a lookup table, like a dictionary. Each expression is paired with a representation of its meaning in memory. When we hear that expression, we retrieve the meaning from memory. A being with an infinite mind could use the same lookup table strategy to understand the meanings of expressions in a language containing infinitely many expressions, but a being with a finite mind cannot. If a being with a finite mind is to master a language L with infinitely many meaningful expressions together possessing an infinity of distinct meanings, it must have some way of working out (computing) the meanings of those expressions from a finite set of stored representations and the information it has available to it regarding the expression and the context it is produced in. One way this could be possible is if L has finitely many simple expressions, and the meanings of the complex expressions can be computed from their syntactic structure and the meanings of their simple parts. Then the finite being could adopt the following strategy: store the meanings of each of the simple expressions in memory; when presented with a simple expression, retrieve its meaning from memory; when presented with a complex expression E, determine E’s syntactic structure, retrieve the meanings of E’s simple parts from memory, and compute the meaning of E. If this is the only way that a being with a finite mind can learn and understand a productive language, then the conclusion (4) follows, that English, Cantonese, Kalaallisut, etc. must have compositional semantics: the meanings of complex expressions in these languages must be computable from their syntactic structure and the meanings of their simple parts.
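The finite agent’s strategy just described can be sketched in a few lines. The sketch makes simplifying assumptions throughout: meanings are modeled extensionally (predicates as sets, sentences as truth values), syntax trees are nested tuples, and the tiny lexicon is invented for illustration, echoing the ‘sweats’/‘perspires’ pair from section 2.3.1.

```python
# A minimal sketch of the finite agent's strategy: a finite stored lexicon
# of simple expressions, plus a recursive procedure that computes the
# meaning of any complex expression from its syntactic structure and the
# stored meanings of its parts. All names are illustrative assumptions.

# Finite stored lexicon: simple expressions only.
LEXICON = {
    "Fred":      "Fred",
    "George":    "George",
    "sweats":    {"Fred"},    # the set of individuals who sweat
    "perspires": {"Fred"},    # stipulated synonym of 'sweats'
}

def meaning(expr):
    """Compute an expression's meaning from its syntactic structure and
    the stored meanings of its simple parts."""
    if isinstance(expr, str):         # simple expression: retrieve from memory
        return LEXICON[expr]
    subject, predicate = expr         # complex expression: [NP VP] structure
    return meaning(subject) in meaning(predicate)   # sentence -> truth value

# Novel complex expressions are understood without separate lexical entries:
print(meaning(("Fred", "sweats")))       # True
print(meaning(("George", "perspires")))  # False
# Substituting synonyms preserves sentence meaning, as the functional
# conception requires:
print(meaning(("Fred", "sweats")) == meaning(("Fred", "perspires")))  # True
```

The point of the sketch is only structural: a finite memory (the lexicon) plus a recursive computation covers arbitrarily many complex expressions, which is exactly the explanatory work premise 3 asks compositionality to do.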
Premise 3 is the weakest premise in the argument, as there is no reason to think the only way to learn a productive language is for that language to be compositional. A hearer must be able to work out the meanings of complex expressions from the information she has available to her. But she will have more information available than just the syntax of the expression she has heard and the stored meanings of the simple expressions of the language. She will, for example, be aware of the prior discourse; have environmental cues and background beliefs about the speaker’s intentions; be poised to observe salient objects and events in the environment; and she will also have access to the non-semantic character of the expression’s simple parts, such as their shape or sound. It thus remains open to a semantic theorist to say that the meanings of at least some natural language expressions depend on more than just their syntactic structure and the meanings of their simple parts.

4.2 Systematicity

It is commonly argued that the systematicity of language (thought) provides good reason to suppose language (thought) is compositional. However, most of the literature fails to provide a clear characterization of systematicity, and sometimes very distinct phenomena are all crowded under the one heading. On the most common way of understanding systematicity, language L is systematic if, and only if, for all expressions E1, E2, and E3, if E1 can syntactically combine with E2 to form a grammatical sentence, and E3 is of the same syntactic category as E2, then E1 can combine with E3 to form a grammatical sentence. For example, the English expression ‘Fred’ can combine with the expression ‘eats bananas’ to form the grammatical sentence ‘Fred eats bananas.’ Since ‘George’ is of the same syntactic category as ‘Fred’ (proper names), if English is systematic we expect that ‘George eats bananas’ is also a grammatical sentence.
Since it is, and since examples like this are easy to come by, it is often assumed by philosophers that English and other natural languages are systematic in this sense. There are, however, reasons to think that English and other natural languages are not systematic in this sense. First, so defined, a language is systematic only if its syntactic rules contain no semantic or phonological constraints: systematicity says that any expression can be substituted for any other expression of the same syntactic category, regardless of differences in meaning or phonology between the two expressions. Second, it arguably entails that all natural languages have context-free grammars [see Johnson and Pullum]. Whether a language is systematic, in the sense just discussed, is not obviously relevant to whether it is compositional. After all, systematicity in that sense is only a constraint on which sentences must be grammatical if certain other sentences are grammatical. A language’s being systematic in that sense is compatible with that language’s having a non-compositional meaning function. There is, however, another sense of systematicity that is more difficult to characterize precisely, but which is in fact relevant to whether languages are compositional. Consider these two claims about English. For English expressions E1, E2, E3, and E4, suppose the following conditions are met:

a. E1 can combine with E2 to form a grammatical sentence [E1 E2]. Example: ‘Dogs’ can combine with ‘chase cars’ to form the sentence ‘Dogs chase cars.’
b. E3 can combine with E4 to form a grammatical sentence [E3 E4]. Example: ‘Cats’ can combine with ‘eat mice’ to form the sentence ‘Cats eat mice.’
c. E1 is of the same grammatical category as E3.
d. E2 is of the same grammatical category as E4.

Then the following two claims hold:

Claim 1: Anyone who can understand [E1 E2] and [E3 E4] can also understand [E1 E4] and [E3 E2], when the latter are well-formed.
Example: Anyone who can understand ‘dogs chase cars’ and ‘cats eat mice’ can also understand ‘dogs eat mice’ and ‘cats chase cars.’

Claim 2: The meanings of [E1 E2] and [E3 E4] are predictably related to the meanings of [E1 E4] and [E3 E2], when the latter are well-formed.

Example: ‘dogs chase cars’ has a meaning that is predictably related to the meanings of both ‘dogs eat mice’ and ‘cats chase cars.’

It can be argued that any language that is like English in this way is most likely a compositional language. The argument runs as follows. If English is compositional, then understanding ‘dogs chase cars’ and ‘cats eat mice’ involves (a) knowing the meanings of all the morphemes in the two sentences and (b) being able to recognize the syntactic structure of both sentences. Furthermore, if English is compositional, such knowledge and abilities suffice to understand ‘dogs eat mice’ and ‘cats chase cars.’ For these sentences are composed of the same morphemes, put together in the same syntactic structures. Thus the best explanation for why Claim 1 is true of English is that English is in fact compositional. A similar argument can be built around Claim 2. If English is compositional, then the meanings of English expressions are completely determined by (a) their syntactic structure and (b) the meanings of their morphemes. Since the expressions ‘dogs chase cars’ and ‘dogs eat mice’ partially overlap in their morphemes, they partially overlap in what determines their meanings, if compositionality is true. Thus the fact that they have related meanings is some evidence that English is in fact compositional. Neither of these arguments is very strong on its own, though each may be combined with other arguments or evidence for compositionality to marshal a stronger case. First, it can be argued that Claim 1 and Claim 2 are not true of all English expressions E1, E2, E3, and E4.
With regard to Claim 1, someone might, for instance, know what ‘solar flare’ and ‘prison cell’ mean without knowing what ‘solar cell’ means. With regard to Claim 2, it’s not obvious that ‘dogs exist’ and ‘no dogs exist’ have related meanings, even though they overlap in morphemes—if the meanings of sentences are the conditions under which those sentences are true, for instance, then the meanings of ‘dogs exist’ and ‘no dogs exist’ are related only by the fact that they are entirely non-overlapping. Finally, both arguments are inferences to the best explanation: they claim, respectively, that the compositionality of English best explains Claim 1, and that it best explains Claim 2. However, there are non-compositional meaning functions that also predict Claims 1 and 2. For example, if the meanings of sentences are interpreted logical forms (ILFs), then both Claims will be true [see section xxx for discussion of ILFs]. Thus whether compositionality is the best explanation for these claims may depend on what other independent reasons we have for accepting that English is compositional. 4.3 The Inductive Argument A second empirical argument for compositionality is predicated on (a) the apparent compositionality of a wide variety of linguistic phenomena and (b) the success of compositional semantics in compositionally analyzing apparently non-compositional linguistic phenomena. Consider a simple English sentence: ‘Jenny loves baseball.’ Even without a well-defined notion of dependence, it is difficult to see how the meaning of this sentence depends on anything other than the meanings of ‘Jenny,’ ‘loves,’ and ‘baseball,’ and the way those words are syntactically combined. External features like the intentions of a speaker using the sentence on a particular occasion, and the context in which the sentence is used, may well affect what gets implicated by the sentence, but don’t apparently affect its literal meaning.
Furthermore, formal features of the sentence, like the fact that each of the words it contains has two syllables, are also apparently irrelevant to its literal meaning. The meaning of ‘Jenny loves baseball’ apparently depends on, and only on, (a) its syntax and (b) the meanings of its simple parts. This sentence, like a large portion of the language we speak, is apparently compositional. Now consider a different example: ‘Every girl loves some sport.’ This sentence has two meanings. First, it can mean that for each girl, there is some sport she loves—even if for different girls it’s different sports. For example, if Jenny and Liz are the only girls, the sentence will be true on this reading if Jenny loves baseball and no other sport and Liz loves hockey and no other sport. Second, it can mean that there is one particular sport that every girl loves. On this reading, if Jenny loves only baseball and Liz only hockey, the sentence is false, because there is no sport loved by all girls. This sentence is therefore apparently non-compositional. On every occasion of use, the sentence appears to have one and the same syntactic structure, and its parts all appear to have the same meanings. If compositionality were true, then, the sentence couldn’t have different meanings on different occasions, because what determines its meaning is the same on all occasions. And yet, it apparently does have different meanings on different occasions. As it turns out, however, this is not an argument against the compositionality of English, but rather the first half of one for it. The second half of the inductive argument for compositionality concedes that there are indeed a great many apparently non-compositional linguistic phenomena in English—this quantifier scope case being just one among them. However, the argument continues, a rather large subset of the great many apparently non-compositional phenomena have been considered by linguists in the past several decades and been given satisfactory compositional analyses.
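The two readings just described can be stated precisely as two orders of quantification. The following toy sketch is purely illustrative (the model of girls, sports, and who loves what simply mirrors the example in the text) and computes both readings over the same facts:

```python
# A toy model matching the example in the text: Jenny loves only
# baseball, Liz loves only hockey.
GIRLS = {"Jenny", "Liz"}
SPORTS = {"baseball", "hockey"}
LOVES = {("Jenny", "baseball"), ("Liz", "hockey")}

# Reading 1 (every > some): for each girl, there is some sport she loves.
every_some = all(any((g, s) in LOVES for s in SPORTS) for g in GIRLS)

# Reading 2 (some > every): one particular sport is loved by every girl.
some_every = any(all((g, s) in LOVES for g in GIRLS) for s in SPORTS)

print(every_some)  # True: each girl loves some sport or other
print(some_every)  # False: no single sport is loved by both
```

One string of words, two truth-values in the same situation: whatever explains the ambiguity, it cannot be a single meaning computed from a single structure with fixed word meanings.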
(With regard to our example, the most common solution has been to regard the sentence as really having two syntactic structures, corresponding to its two meanings. See the annotated works cited.) Since compositional semantics has been such a fruitful and successful research program in the past, and there’s no reason to think it will cease to be in the future, we have strong reason to suppose that English is in fact compositional, even if some of it appears not to be. The inductive argument holds up the past successes of compositional semantics as a good reason to believe that English (and any other language we’ve seriously and successfully investigated) is compositional. However, there remain apparently non-compositional linguistic phenomena that have not been given universally agreed upon—or even widely endorsed—compositional analyses (see section 5, Challenges to Compositionality). Some of these cases, such as propositional attitude ascriptions, may well have particular features that justify us in thinking that they cannot be given compositional analyses. One additional point is worth making. A common construal of compositional semantics in linguistics is that the goal is to assign logical forms (LFs) to sentences of natural language in a compositional way. LFs are themselves representations and are not (standardly considered) the same things as meanings. LFs are “in the head,” unlike propositions, states of affairs, situations, truth-conditions, etc. Thus, the fact that an LF can be compositionally determined from (a) the syntactic structure of a sentence and (b) the lexical entries for that sentence’s morphemes does not entail that the meaning of the sentence is determined by those things—at least not without further argumentation. 5. Challenges to Compositionality 5.1 The Triviality Objection Consider the following argument: the debate over whether natural languages are compositional is pointless.
Any language can be given a compositional semantics, for any proposed theory of what meanings are. If meanings are ideas, then we let the meaning of [dogs [chase cats]] be [the idea of dogs [the idea of chasing, the idea of cats]]. If meanings are stereotypes, then we let the meaning of [dogs [chase cats]] be [the stereotype of dogs [the stereotype of chasing, the stereotype of cats]], and so on. In general, the meaning of any complex expression is just that very expression, with the meanings of its simple parts in place of those parts. (This is a type of structured propositions view.) There are two main reasons the triviality objection fails to convince most philosophers. First, while one can give such meaning theories for complex expressions, these meaning theories conflict with other principles that seem reasonable to hold. For example, we might think that the meaning of ‘cow’ and the meaning of ‘brown cow’ should be the same general type of thing. If the meaning of ‘cow’ is an idea, the meaning of ‘brown cow’ should also be an idea; if the meaning of ‘cow’ is a property—like the property of being a cow—then the meaning of ‘brown cow’ should also be a property—like the property of being a brown cow. But according to the triviality objection, we must say instead that while ‘cow’ means the idea of a cow, ‘brown cow’ means a structured complex containing two ideas: the idea of brown and the idea of a cow. Second, even if structured propositions don’t violate any of our other commitments, most structured propositionalists believe that the structured proposition that is the meaning of a sentence determines the truth-conditions of that sentence. And it is far from obvious that one can work out the truth-conditions of ‘this is my pet fish’ from a structured proposition containing the stereotype of a pet and the stereotype of a fish.
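The triviality objection’s recipe is easy to state as code. The sketch below is purely illustrative (the ‘idea-of’ meanings are just labels standing in for whatever a theory takes simple meanings to be): it maps any parse tree to a structure of simple-part meanings, preserving the tree exactly, and so is ‘compositional’ by construction no matter what the lexicon contains.

```python
def trivial_meaning(expr, lexicon):
    """Replace each leaf of a parse tree with its lexical 'meaning',
    preserving the tree's structure exactly. Any theory of simple
    meanings whatsoever can be plugged in via `lexicon`."""
    if isinstance(expr, str):  # a simple part
        return lexicon[expr]
    return tuple(trivial_meaning(e, lexicon) for e in expr)  # a complex

# 'Meanings' as ideas (here, mere labels standing in for ideas):
ideas = {"dogs": "idea-of-dogs", "chase": "idea-of-chasing", "cats": "idea-of-cats"}

print(trivial_meaning(("dogs", ("chase", "cats")), ideas))
# ('idea-of-dogs', ('idea-of-chasing', 'idea-of-cats'))
```

The function is trivially compositional, which is exactly the objection’s point; the replies in the text target what such ‘meanings’ fail to do (for instance, determine truth-conditions), not the construction itself.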
It is not a trivial question to ask whether the truth-conditions of a sentence depend on (and only on) that sentence’s syntax and the meanings of its simple parts. 5.2 Context-Sensitive Expressions Consider the sentence ‘I am Barack Obama.’ Sometimes when the sentence is uttered, it is true; at other times it is false. Although we might try to defend the claim that true utterances of ‘I am Barack Obama’ have a different syntactic structure from false utterances of it, this seems wholly implausible. Clearly the truth or falsity of the sentence depends on who is saying the sentence. At first, this might seem like proof that the truth-conditions of English sentences are not determined compositionally. Here is the argument: suppose that David Cameron says, ‘I am Barack Obama.’ This sentence is false because its truth-value depends on who says it: it is true only if the person who says it is Barack Obama. However, David Cameron is not the meaning of ‘am’ or ‘Barack Obama,’ as anyone can tell. David Cameron is also not the meaning of ‘I,’ otherwise when Barack Obama says ‘I am Barack Obama’ he would mean ‘David Cameron is Barack Obama.’ So the truth-value of ‘I am Barack Obama’ depends on something that is not its syntactic structure and is not the meanings of any of the words comprising it. And it doesn’t help to say that ‘I’ means ‘the person saying this sentence,’ because then we are faced with the exact same problem: sometimes ‘The person saying this sentence is Barack Obama’ is true and sometimes it is false. But it has the same syntactic structure and its morphemes mean the same thing on both the true occasions of utterance and the false ones. Now we can unravel what’s going on here. There is one sense in which ‘I’ has the same meaning every time it is used. We can call this the character of ‘I.’ There is another sense in which ‘I’ has a different meaning when different people use it.
Call this the content of ‘I.’ Character is a rule for determining content. The rule for ‘I’ is: the content of ‘I’ any time it is used is the person who is using it. So when Cameron and Obama both use the word ‘I’ it has different contents for each use—Cameron and Obama, respectively—but those contents are determined by one and the same character (rule). The truth of ‘I am Barack Obama,’ when used by any particular person, is completely determined by (and only by) the syntax of the sentence and the contents of its morphemes. English has a variety of expressions that differ in content from context to context. We call these context-sensitive expressions:

Now, today, yesterday, tomorrow
Here, there, local, nearby
I, you, he, she, it, they, we
Come, go, left, right
This, that, these, those
Thus, so, yea

Some of these have characters that determine their contents with no interpretation necessary: ‘today’ always names the day on which it is used. The rule for ‘that,’ however, is roughly that its content is whatever the speaker intends. The general point here is that compositionality requires that the meaning of a complex expression not be determined ‘directly’ by context or by speaker intentions. However, a language can still be compositional if its simple expressions have their meanings (contents) determined by context or by speaker intentions. Some philosophers have proposed compositional analyses of various apparently non-compositional phenomena that appeal to unwritten, unspoken context-sensitive expressions (“hidden indexicals”). For example, consider the sentence, ‘There is no beer.’ It might mean on different occasions: there is no beer on this menu; there is no beer at this party; there is no beer in this bottle; and so on. This could be because the sentence ‘There is no beer’ has its meaning determined by factors other than the meanings of its parts and the way they are combined.
Alternatively, it could be because there is a hidden indexical ‘there’ that is really part of the sentence—‘There is no beer there’—but that the indexical (though present) is not written or spoken aloud. Nevertheless, the indexical still contributes its context-sensitive content to the meaning of the sentence, thus accounting for the variability in the sentence’s truth-value from context to context. There is nothing theoretically problematic about such a hidden indexical account, but it should be emphasized that whether hidden indexicals exist in these cases is an empirical hypothesis that might turn out to be false. 5.3 Idioms The term ‘idiom’ covers a wide range of expressions, including stale metaphors (she’s on the fence, he ran out of steam), common hyperboles (he drinks like a fish, there was no room to swing a cat), and even common phrases (she’s last but not least, there’s method to his madness). To the extent that we don’t think metaphor or hyperbole pose any trouble for the thesis that natural languages are compositional, these types of idioms appear equally benign. However, there are some idioms whose meanings cannot be worked out by someone familiar only with their syntax and the meanings of their parts, and whose meanings can’t be understood as implicatures. Consider idioms like she let the cat out of the bag, or I think he’s pulling your leg. Understanding these complex expressions requires learning their meanings in advance, separate from the meanings of their parts. In fact, many idioms contain ‘words’ that do not otherwise occur in the language, or that only occur with different meanings (that’s beyond the pale, this is an old wives’ tale). It is not uncommon for philosophers to assert that compositionality admits of finitely many exceptions, and as there are only finitely many idioms in any language, compositionality is not violated. This is not strictly speaking true.
The most general formulation of compositionality—the meaning of any complex expression depends on and only on its syntax and the meanings of its parts—admits of no exceptions, nor do many of its various precisifications—for example, reading ‘depends on’ as ‘is a function of,’ ‘can be computed from,’ or ‘is metasemantically determined by.’ On the assumption that ‘kick the bucket’ has the same syntax, and simple parts with the same meanings, in both its idiomatic and its non-idiomatic meaning, its meaning is not a function of its syntax and the meanings of its simple parts, for functions have unique outputs. The substitution test fails: ‘kicked the pail’ does not have the same meaning as the idiomatic ‘kicked the bucket,’ despite having the same syntax and parts with the same meanings. In a more intuitive sense, the meaning of ‘kicked the bucket’ doesn’t depend on the meanings of ‘kick’ and ‘bucket’—those meanings, the act of kicking and buckets, are neither here nor there with respect to the idiomatic meaning of ‘kick the bucket.’ Nor can non-compositional idioms, on the whole, be treated simply as primitive expressions with separate entries in the lexicon. Many idioms are not syntactic constituents (for example,) and many contain “compositional” parts—parts that contribute to their meanings (for example, strings were pulled to get him the job, or we should all give our speaker a big hand). Furthermore, it’s not clear that there are finitely many idioms. One suggested productive class involves VERB + the removal of relatively irremovable things, meaning roughly VERB-ed excessively: she cried her eyes out/ laughed her head off/ danced the night away. How we process the meanings of sentences containing idioms is as of now an open question, and the answer may well be that they are not compositional.
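The point that functions have unique outputs can be put sharply in code. The sketch below is a toy illustration (the meaning labels are invented, and it assumes, with the text, that ‘bucket’ and ‘pail’ are synonyms): if meaning is a function of a pair (syntactic structure, part meanings), then two phrases that yield the same pair must receive the same meaning, so the idiomatic/literal split is a counterexample.

```python
# If meaning is a function of (syntactic structure, part meanings),
# identical inputs must yield identical outputs.
def meaning_inputs(phrase, lexicon, structure):
    """The only information a compositional meaning function may use."""
    return (structure, tuple(lexicon[w] for w in phrase))

# Toy lexicon: 'bucket' and 'pail' assigned the same (literal) meaning.
lexicon = {"kick": "KICK", "the": "THE", "bucket": "CONTAINER", "pail": "CONTAINER"}

bucket_in = meaning_inputs(["kick", "the", "bucket"], lexicon, "[V [Det N]]")
pail_in = meaning_inputs(["kick", "the", "pail"], lexicon, "[V [Det N]]")

# Same structure, same part meanings:
print(bucket_in == pail_in)  # True
# Yet 'kick the bucket' can also mean DIE while 'kick the pail' cannot,
# so no function of these inputs alone can assign both meanings.
```

The dictionary plays no theoretical role here; it just makes vivid that the inputs to the putative meaning function are literally identical for the two phrases.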
5.4 Compounds and Possessives English nouns can be combined with other English nouns to form compound nouns—for example, ‘truck driver,’ ‘panda trainer,’ ‘demolition derby,’ etc. This process is productive: ‘You are reading the compositionality philosophy encyclopedia entry compounds section’ (the section on compounds from the entry in the encyclopedia of philosophy about compositionality). One interesting aspect of noun compounds in English is that they do not specify the relation between the two nouns, and this relation differs from occasion to occasion. A house boat, for example, is a boat used as a house; but a boat house is not a house used as a boat, it’s a house for your boat to live in. A dog house is a house for a dog to live in, but a house dog is not a dog for a house to live in, nor is it a dog used as a house; it’s a dog that lives exclusively in the house. (Still more relations abound: brick house, house appraisal, house party…) While we might treat many compounds simply as idioms, there are two additional general problems they pose: their productivity, as just illustrated, and the fact that nonce or novel compounds are regularly understood. Consider these examples:

Example 1: We are at a child’s birthday party, about to eat ice cream. There are several spoons, each of which has a different animal depicted on it. I tell you, “You can have the dog spoon.” You immediately recognize that I mean the spoon with a dog depicted on it.

Example 2: Similar birthday party scenario. This time there are only normal spoons. Unfortunately, there are only as many spoons as guests, and the dogs at the party have gotten ahold of one of them and slobbered all over it. I tell you, “Sorry, there’s no ice cream for you, unless you want the dog spoon.” You immediately recognize that I mean the spoon that the dogs have been playing with.

Example 3: You and I are shopping for a friend who likes to collect spoons.
We find some very nice Chinese commemorative spoons from different years. With the background knowledge that our friend was born in the year of the dog, and that only one spoon is from a year of the dog, I say, “Let’s get the dog spoon.” You immediately recognize that I mean the spoon that commemorates a year of the dog in the Chinese zodiac. The point of these examples is that what ‘dog spoon’ means on any occasion does not seem to be determined solely by what ‘dog’ and ‘spoon’ mean (and the way in which they are syntactically combined), but also by myriad features of the context. Similar remarks can be made for the English possessive: “my horse” in separate contexts can mean the horse that I own, the horse that I’ve wagered money on, the horse that I’m currently riding, the horse that shares my name, and so on. With enough context, the relation expressed between two nouns in a compound noun construction or a possessive construction can be virtually anything. There are various attempts at compositional solutions to the problem posed by compound nouns. Some philosophers and linguists have argued that “dog spoon” means only “spoon somehow related to a dog or dogs.” More generally, they say that any noun compound N1 N2 means “N2 somehow related to an N1 or N1s.” In this way, noun compounds are assigned meanings that depend only on their syntax and the meanings of their parts. Such accounts have unintuitive consequences, to say the least: on this analysis, whenever there is a boat somehow related to a house, there is also a house somehow related to a boat, so whenever there are house boats there are boat houses. But it plainly doesn’t follow from the existence of house boats that there are boat houses. Another common approach is to posit a “hidden indexical.” The idea is that ‘dog spoon’ means ‘spoon that bears relation R to dogs,’ where R is a relation-indexical that picks out different relations in different contexts, in the way ‘he’ picks out different males in different contexts.
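This hidden-indexical proposal can be sketched directly: ‘dog spoon’ denotes the spoons bearing the contextually supplied relation R to some dog, and only R varies from context to context. The sketch below is a toy illustration; the objects and relations are invented stand-ins for the spoon scenarios in the text.

```python
def compound_meaning(n1_set, n2_set, R):
    """On the hidden-indexical analysis, 'N1 N2' denotes the N2s that
    bear the contextually supplied relation R to some N1."""
    return {y for y in n2_set if any(R(x, y) for x in n1_set)}

DOGS = {"rex"}
SPOONS = {"spoon_depicting_rex", "spoon_rex_chewed", "plain_spoon"}

# Context 1: R = 'is depicted on' (Example 1 in the text)
depicted = compound_meaning(DOGS, SPOONS, lambda d, s: s == "spoon_depicting_rex")
# Context 2: R = 'has slobbered on' (Example 2 in the text)
chewed = compound_meaning(DOGS, SPOONS, lambda d, s: s == "spoon_rex_chewed")

print(depicted)  # {'spoon_depicting_rex'}
print(chewed)    # {'spoon_rex_chewed'}
```

Note that the meaning rule for the compound stays fixed across contexts; only the content of R changes, just as the content of ‘he’ changes from context to context.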
As previously discussed, there is nothing theoretically problematic about such solutions, but whether there are such indexicals in these cases is an empirical hypothesis that may well turn out to be false. 5.5 Generics Consider the following two sentences:

a. Firefighters are brave.
b. Firefighters are at Fred’s house.

Sentence (a) means something like: usually or generally, firefighters are brave, whereas sentence (b) does not mean usually or generally, firefighters are at Fred’s house; it means instead that some firefighters are currently at Fred’s house. Sentences like sentence (a) are called ‘generics.’ It’s predictable when generic readings are available. Some predicates are true of things only fleetingly and detachably. If someone is at Fred’s house, they are only at his house for a small portion of time. Even Fred is only detachably at his house: he can leave with little effort and he could well move to a different house, if he had the means and the desire. These predicates, when combined with bare plurals (firefighters rather than the firefighters or all firefighters, for example), do not give rise to generic readings. Sentence (b) does not suggest that being at Fred’s house is a characteristic feature of firefighters generally. Other predicates express non-detachable features of things. People who are brave aren’t just brave for some small portion of their lives: their bravery is a feature of them that doesn’t easily go away. When you combine a bare plural with a predicate like this, you get a generic reading: this is why sentence (a) suggests that firefighters are in general brave. Since the existence of generic readings of sentences is predictable from the meanings of the predicates in those sentences, the existence of generic readings is not itself a problem for the compositionality thesis. However, the precise meanings—the truth-conditions—of generics are not obviously predictable from the meanings of their parts. Consider the following two sentences:

c. Sharks attack people.
d. Californians attack people.

In most ordinary contexts, sentence (c) seems true and sentence (d) seems false, even though the number of shark attacks per year is decidedly lower than the number of attacks by Californians per year, no matter how you calculate the latter statistic and even if you take account of the difference in populations (sharks vs. Californians). Although English speakers often have relatively clear intuitions regarding the truth-conditions of generics, there is currently no consensus regarding what information is involved in generating those intuitions. It seems to include quite a bit of background knowledge and various facts about human interests and concerns. Sentence (c) is true, perhaps, because the danger of sharks should be factored into our decision-making (whether, when, and where to go swimming, for example), whereas the danger posed by Californians is no greater than the background danger humanity presents everywhere we go, and thus is not in general relevant to our decisions. The problem generics pose for compositionality is that compositionality says background knowledge and human interests are not involved in calculating the truth-conditions of sentences. However, cognitive science is just beginning to understand the role generics play in cognition, and nothing precludes a clever analysis of them that comports with compositionality.