Matthew Menchaca

Two solutions to the interface problem and the qualia theory of perception

The interface problem is that there doesn't seem to be any clear explanatory connection between consciousness and the physical order of the world. A solution will need to explain how consciousness is connected with behavior and/or brain states. This problem should be contrasted with the easy problem of consciousness. To solve the easy problem, all one needs to do is give a complete account of which brain and/or behavioral states correlate with each conscious mental state.

Functionalism offers one solution to the interface problem. The solution requires a reanalysis of consciousness: consciousness is a complex product of mental states that are to be understood in terms of the functional role they play in a system. On this view, propositional attitudes are one subclass of contentful states. The language of thought offers another solution to the interface problem. It relies heavily on the common way of speaking in folk belief-desire psychology, as well as on considerations about the nature of language in general. Consciousness is analyzed as fundamentally representational, where its quality depends upon an underlying computational process over representations, much as sentences in a language depend upon an underlying structure (i.e., a grammar) for their meaning. In this paper I will make these solutions clear, and then I will support functionalism with a theory of perception called qualia theory.

Section 1: Clarifying the problem

Before explaining how functionalism and the language of thought theory solve the interface problem, more should be said to define the problem. I have said that the interface problem should be contrasted with the easy problem of consciousness. The reason for this is that conscious mental states bear no resemblance to brain and/or behavioral states. Mental states such as "believing that ____" or "thinking that ____" have a structure, but it is not like the neural structure of the brain, nor like the mechanical structure of behavior. In addition, both brain states and behavioral states are satisfactorily described when the physical substrate of those states meets some pre-established criterion of adequacy. This isn't the case with mental states: we have no criterion of adequacy for their descriptions (contrary to what the language of thought theorist supposes). Granted, there are no perfect physical descriptions of neural or behavioral states either; this goes to show that even the "easy problem" is actually incredibly difficult. Neuroscience and biology are two disciplines dedicated in part to giving full descriptions of physical states, but these disciplines are not complete.

I have given one indication as to how mental states are picked out: they are picked out by sentences like "believing that ____" and "thinking that ____," etc. This, however, won't do, because it cannot be assumed that sentences take on the same form as the actual experiential quality a conscious system has when that system believes or thinks something. The subject of the interface problem in need of explanation may best be understood not symbolically (as in the form of sentences) but as a group of non-interpreted fundamental qualities of experience. I will return to a characterization of that non-interpreted quality in my argument for functionalism in section 4.
For clarification at this point, it is adequate to take the object of explanation not as the subject of sentences of the form "S believes that ____," but rather as the "what it is like for S to believe that ____." Functionalism and the language of thought offer clear ways for this what it's like quality to be connected with the physical order of the world.

Section 2: Functionalist solution

Functionalism doesn't deny that conscious experience involves mental states. Rather, it offers a theoretical model for the causal production of what it's like qualities and a method of comparison over systems capable of what it's like qualities. Consciousness is understood in the theory as an amalgamation of functional states in a complex enough organization. Just what kind of complexity is necessary and sufficient for causing what it's like qualities is made clearer by analyzing systems as lying on a continuum. Close to one end of this possibly infinite continuum are amoebas, and close to the other end are systems far more conscious than humans, while humans lie somewhere on the more conscious side. According to the theory, to be on the continuum a system must process information and exhibit some kind of behavior. The input of the functional states which any of these systems have will be in terms of information, and the output of the functional states will be in terms of behavior.

The kind of information available to each system on the continuum can be given phenotypically. A complete description is the object of biology, but the descriptions related specifically to cognitive functioning are the important ones for analyzing the variety of experiences typical of a type of system. For instance, one description which may be true of the possible human phenotypical expressions is that the perceptual system can only sense a limited spectrum of light frequencies: ultraviolet frequencies give us no information, while they do for other systems, like some species of fish. Not all information will therefore be available as possible input for a system. Given this, what behavior is possible over a system's development is constrained by its possible input (this is meant to incorporate the system's genetics as well as the system's adaptive history within its environment). It is claimed by this theory that if a system's behavior is observable by another system, that behavior will be an indication of the complexity of the functional roles in place within the observed system. However, it is important to realize that the degree of complexity over possible input, and the kinds of complexity internal to the functional states necessary for what it's like qualities, will be constrained by the very range of possible inputs the observing system can have (as well as by its functional character).[1]

[1] There are more fine-grained ways of analyzing the adaptive nature of functional states within a system, and it will be the intuitive appeal of those analyses that leads one system to characterize another as capable of some experiential states and not others. This just means that humans are limited in their observational capacities and biased, but it is important to say it in functional terms.

However, given these problems, the theory offers a first step towards solving the interface problem. It does this by focusing on a system's functions, the degree of specificity required of the functions in identifying information, and the degree of complexity of the inter-functional relations of the system. These core aspects come together to give a causal explanation for the production of what it's like qualities (assuming a fine-grained enough analysis of the content of functional states is possible to give) given a multi-system comparison of those functional states. It is assumed by more complex systems that other systems represent, but this is not necessary for there to be content in these functional states.
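Before turning to an example, it may help to gloss the input-function-output talk with a minimal sketch. The sketch is only illustrative: the names (FunctionalState, System, respond, and so on) are invented here and are not part of the functionalist theory; it simply pictures a functional state as something that takes information in, yields behavior out, and is connected to other states, with a system's possible inputs fixed phenotypically.

```python
# A minimal, illustrative sketch of the input-function-output analysis.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class FunctionalState:
    """A state individuated by what it does: it maps information (input)
    onto behavior (output) and is connected to other functional states."""
    name: str
    respond: Callable[[str], str]            # information in, behavior out
    connected_to: List[str] = field(default_factory=list)


@dataclass
class System:
    """A system is on the continuum only if it processes information and
    exhibits behavior; which inputs are possible is fixed phenotypically."""
    possible_inputs: List[str]
    states: Dict[str, FunctionalState] = field(default_factory=dict)

    def behave(self, information: str) -> str:
        if information not in self.possible_inputs:
            return "no behavior"             # e.g. ultraviolet light, for humans
        outputs = [s.respond(information) for s in self.states.values()]
        return "; ".join(o for o in outputs if o) or "no behavior"


# A toy system whose only relevant inputs are moving or still visible objects.
detect = FunctionalState(
    name="detect-movement",
    respond=lambda info: "orient toward stimulus" if "moving" in info else "",
    connected_to=["approach", "avoid"],
)
organism = System(possible_inputs=["moving object", "still object"],
                  states={"detect-movement": detect})

print(organism.behave("moving object"))   # orient toward stimulus
print(organism.behave("ultraviolet"))     # no behavior: outside possible inputs
```

In the sketch, a functional state is individuated entirely by the information it takes in, the behavior it yields, and the states it is connected to; nothing requires that it represent anything.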
The content of a functional state is spread out over the entire input-function-output analysis. Furthermore, only some contents are what it's like contents.

An example will clarify the strength of this theoretical model before seeing how the language of thought offers a different solution to the interface problem. A frog is a complex informational system, but it is not capable of certain kinds of what it's like experiences. Its visual system is capable of identifying movement, which involves a sensation as of an object existing at one time in one place, causing behavior corresponding to the object's existing at another time and place. Suppose a frog's visual system is active in just this way and the frog shoots out its tongue to capture a fly. We do not say of the frog that it is capable of the experience "as of a fly," even if that is the object of one of its cognitive capacities. We do not because it just so happens that by sending another, non-fly object of comparable width and height before the frog, we witness its tongue shoot out. What exactly the content of this specific functional role in the frog is, is defined by the possible adaptive behaviors it leads to and the other functional states to which it is connected. If you are skeptical at this point about whether functionalism solves the interface problem, that is okay. I will support functionalism's solution in section 4. I believe that the qualia theory of perception will supplement the theory nicely.

Section 3: The language of thought (LOT) solution

The language of thought's solution to the interface problem, I believe, is to assume that the problem is solved. To assume that the problem is solved, the LOT theorist takes as a working hypothesis that mental states are representational. It is just in the nature of the way we think that our thoughts are manipulations of representations of various states of affairs. These representations may or may not be of anything in the world, though it is assumed that they are. These representations have a systematic relationship to each other: there are rules about how they may be formed into a coherent picture of the world and of how an agent may be situated in that world (much like a grammar situates the meaning of words in a sentence). When in common language one produces the sentence "S believes that ___," the working hypothesis of the LOT theorist is that S bears some relation (as of believing) to a string of symbols with representational content equivalent to the proposition/subject of the belief. So if the ____ is filled with "the cat is on the mat," the subject is said to be in a believing relation to the-symbol-for-cat in the "on" relation to the-symbol-for-mat. If the representation is a complex whole, then the subject can simply be said to be in the believing relation to the-cat-on-the-mat. How fine-grained may the symbols of representation be? Infinitely fine-grained, as the representational complexity of an agent may be theoretically infinite. Of course, the conditions for a given representation, i.e., the place in the world an agent is in, will constrain this.
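As a purely illustrative gloss on this talk of structured symbols (the names below, such as Symbol and believes, are invented for the sketch and are not part of any LOT formalism), the believing relation can be pictured as a relation a subject bears to a symbol with constituent structure:

```python
# Illustrative only: a toy rendering of "S believes that the cat is on the mat"
# as a relation to a structured symbol.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Symbol:
    """A mental symbol with constituent structure, like a small parse tree."""
    head: str
    arguments: Tuple["Symbol", ...] = ()

    def __str__(self) -> str:
        if not self.arguments:
            return self.head
        return f"{self.head}({', '.join(str(a) for a in self.arguments)})"


CAT = Symbol("the-symbol-for-cat")
MAT = Symbol("the-symbol-for-mat")
THE_CAT_ON_THE_MAT = Symbol("on", (CAT, MAT))      # a complex whole

beliefs = set()


def believes(subject: str, representation: Symbol) -> None:
    """'S believes that p': S stands in the believing relation to a symbol."""
    beliefs.add((subject, representation))


believes("S", THE_CAT_ON_THE_MAT)
print(THE_CAT_ON_THE_MAT)   # on(the-symbol-for-cat, the-symbol-for-mat)
```

The only point of the sketch is the constituent structure: the same symbols can recombine into new complex wholes, which is what the theory's systematicity and infinite generativity trade on.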
It is said by the theory that it is implicitly assumed in the language of folk belief-desire psychology. How can it be said that this theory assumes that the interface problem is solved? Remember from section 1, when the problem was explained in detail: it was said there that an analysis of the special kind of mental states (the experiential kind) in terms of "S believes that ___" would just amount to the assumption that sentences are the form of thought a conscious system has when it believes or thinks something. It is exactly an assumption of this theory that mental states of the form "what it is like for S to believe that ____" pose no special problem. In other words, the computational model suggested by this theory deals with the practical hypothesis forming and hypothesis testing of agents which are assumed to have the experiential quality. Thus it is assumed that the problem is solved, or that the problem is not a problem.

However, it does not seem like the quality of experience is representational, even though a particular thought may lend itself to the use of symbols for its formal expression. Such thoughts must be systematic and infinitely generative. The kinds of thoughts which call for the use of this symbolic structure require a theory of perception. It is not my current object to provide that theory, as I am going to for functionalism. I just want to remark, before I move forward, that what is interesting about these kinds of thoughts is their application within comparable systems for analyzing understanding. Since the LOT theorist will be in the enterprise of giving accounts of the representations necessary for understanding a particular thought, if the functional mechanisms underlying the production of the representation can be identified, facts regarding the physical state of a system will advance solutions regarding the easy problem. This comports well with the common manner in which we speak about the beliefs and/or desires of others in relation to their understanding. For one, we speak about individuals representing states of affairs. Secondly, we speak of their understanding what we take to be represented if they can produce comprehensible strings of symbols given some input (linguistic symbols, for instance).[2]

[2] I will not discuss the problems with this interpretation as to how the theory would evaluate understanding, but only point out that I have been generous to suppose that the symbols in the LOT should represent the objects that they do. It is also dubious that perception is representational (as I have remarked before; this will need to be supplemented by a theory). Even though it seems like perceptual experience has to be representational for any one individual to evaluate another's understanding, assuming a sense-datum theory of perception may do equally well.

It is questionable whether the theory can apply to systems on a continuum in the way the functionalist theory does. For example, it seems at least possible that a system might represent in a language of thought and express all the formal requirements involved in symbol manipulation, and yet have no experiential qualities. I imagine a calculator as just that kind of system. Of course, a calculator exhibits no behavior and so it may be removed from serious consideration, but the point is that the LOT theory takes for granted that psychological theories range over some predetermined set of systems, and that may be a problem for it. If a problem can be made on this issue, then there will be another reason to demand that the assumption made by the LOT theorist be given more justification.

Section 4: The qualia theory of perception

Formally, the theory states: "S has an experience as of a property F iff S senses F-ly," and "S senses F-ly" is to be understood to mean "S senses [e-___ and e-___, …, and e-___]," where the e-___'s are bound together in the appropriate way. "e-___" means "of experience ____," where the ____ can only be filled by properties of experience (i.e., what it is like to experience).
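The biconditional can be restated schematically. This is only a notational restatement of the definition just given, not an addition to the theory; the predicate names (Exp, Senses) and the indexing of the properties of experience as e_1, …, e_n are invented here for readability.

```latex
% Schematic restatement of the qualia theory's biconditional.
% Exp(S,F): S has an experience as of property F (notation invented here).
% Senses_{F-ly}(S): S senses F-ly.
\[
  \mathrm{Exp}(S,F) \iff \mathrm{Senses}_{F\textrm{-ly}}(S)
\]
\[
  \mathrm{Senses}_{F\textrm{-ly}}(S) \iff
  \mathrm{Senses}\bigl(S,\ [\, e_1 \wedge e_2 \wedge \dots \wedge e_n \,]\bigr)
\]
% where each e_i is a property of experience (a what-it-is-like quality),
% and the e_i are bound together in the appropriate way.
```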
For example, the term "e-squarely" picks out the way the particular squareness of the experience is itself experienced. The theory can be understood in terms of which of the following three principles it denies and which it accepts:

1. It denies the phenomenal principle, which states: if there sensibly appears to a subject to be something which possesses a particular sensible quality, then there is something of which the subject is aware which does possess that quality.

2. It denies the representational principle, which states: all visual experiences are representational.

3. It accepts the common factor principle, which states: phenomenologically indiscriminable perceptions, hallucinations, and illusions have an underlying mental state in common.[3]

[3] All of these principles are taken from William Fish, "Introduction: Three Key Principles," Philosophy of Perception, pp. 1-9.

The most important commitment of the theory is its denial of the phenomenal principle. By denying this principle the theory is able to avoid committing to a dualistic metaphysics. The acceptance of this principle leads to a dualistic metaphysics in cases of hallucinatory visual experiences, because there must be something of which a subject is aware that possesses whatever qualities the subject is hallucinating; but by postulation it cannot both be a worldly object and be a hallucination, so it must be a mind-dependent object. The second denial is also important. Though it may seem that it is in the very nature of visual experience that objects are being represented, it may be that representation as of objects belongs to some information-processing systems rather than others, and I believe it is not necessary to add complications where they are not needed, so this is an unnecessary assumption. Finally, the acceptance of the common factor principle pertains to our discussion in support of functionalism's solution to the interface problem, because the functionalist will be able to identify just what those underlying common mental states may be. In fact, explaining the similarities between the three kinds of visual experience (hallucinatory, illusory, and veridical) in terms of an underlying, phenotypically developed functional mechanism in the organism seems perfectly plausible. Remember that some content-bearing mental states in the functional analysis need not have content of the what it's like variety. I will show how contents of the variety associated with visual experiences "as of a red tomato" exist for a system given three variations on input-function-output relations, and how those variations share a common element. These can be described in turn:

1) For the veridical experience: the input is of all relevant stimulus required for the substrate of the functional mechanism to perform its operations (ostensibly, firing neurons are the substrate of the functional mechanism). These operations conditionally result in the system's interacting with the tomato.

2) For the illusory experience: the input is of all relevant stimulus required for the functional mechanism to perform its operations. However, the functional mechanism is flawed.
Within the system, this flaw is only apparent conditional upon the system's adaptation to the conditions which result in the flaw.[4] Outside the system, the functional mechanism responsible may be tested for under a variety of conditions (which is exactly what is done in illusion experiments).

3) For the hallucinatory experience: the input is of all relevant stimulus required for the functional mechanism to perform its operation. For this to be the case, the relevant stimulus for this particular functional mechanism may be identified as one not directly causally related to the external world (i.e., the stimulus is in the head).[5] Within the system, a more abstract procedure, wherein the system relies upon some other of its adapted functional mechanisms, will be necessary to discover at which times a hallucination is present; it is components of memory that I have in mind here for human systems. Individuating the functional mechanisms may best be done causally. I hope that the interconnectedness of the functional mechanisms is made apparent in the case of hallucinatory experiences.

[4] The fact that illusions exist can only be made sense of, given this analysis, after the fact: when a species adapts, and thus when the content of its functional states changes, and thus when the relevant stimulus in the environment results in a difference in information.

[5] Remember that it is postulated that the visual experience is as of a hallucination, and thus we can say truly of the stimulus to the functional mechanism that its cause is not in the external world. This is at least what hallucinations are currently taken to mean.

I have now shown what functionalism has to add to an endorsement of the common factor principle, which is at the heart of the qualia theory. Given functionalism's insights, the common factor principle can be rewritten: in each instance of phenomenally indistinguishable veridical (perceptual), hallucinatory, and illusory experience, the common mental state shared between them, for a given system, is the one whose content comprises all of the relevant stimulus required for that specific functional mechanism to perform its operation in the system (a schematic sketch of this shared-mechanism claim is given below, after the concluding paragraph).

Finally, it is possible to propose a solution to the interface problem using the functional theory of mind and the perceptual theory of qualia. (This will be an analysis of the what it's like for S to experience that ____.) It is claimed by qualia theory that to have an experience as of a ____ one is sensing in a particular manner, and the subject of one's sensing is the properties of experience. Given what is implied if functionalism is true about the common mental state which holds regardless of the actual state of the world, namely that such states share all the relevant stimulus and a relevant functional mechanism, it is in principle and in practice possible to identify a correspondence between just the relevant functional mechanism and any property of experience. If qualia theory is true, it will then be possible to compose the sensing of experience out of the identifiable properties of experience. There may be problems with such a theory; most certainly there will be objections, but that is in the nature of any solution to the interface problem. I will consider objections and provide replies as I find them voiced by others or as I become introspectively aware of them myself. This may seem ad hoc, but it is not.
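As promised above, here is a minimal sketch of the shared-mechanism reading of the common factor principle. Everything in it is an invented illustration (the names FunctionalMechanism, operate, and the three case functions are not drawn from any source); the point is only that the three cases run the same mechanism on the same relevant stimulus while differing in where that stimulus comes from and in whether the mechanism is flawed.

```python
# Illustrative sketch only: all names are invented for this example.
from dataclasses import dataclass


@dataclass
class FunctionalMechanism:
    """The common factor: it operates on whatever relevant stimulus it receives."""
    name: str
    flawed: bool = False

    def operate(self, relevant_stimulus: str) -> str:
        experience = f"experience as of {relevant_stimulus}"
        return experience + (" (systematically off target)" if self.flawed else "")


def veridical(mechanism: FunctionalMechanism) -> str:
    # Stimulus is caused by the worldly object; behavior can engage that object.
    return mechanism.operate("a red tomato")


def illusory(mechanism: FunctionalMechanism) -> str:
    # Same stimulus and mechanism, but the mechanism is flawed under these conditions.
    mechanism.flawed = True
    return mechanism.operate("a red tomato")


def hallucinatory(mechanism: FunctionalMechanism) -> str:
    # The relevant stimulus is not directly caused by the external world.
    return mechanism.operate("a red tomato (stimulus in the head)")


for case in (veridical, illusory, hallucinatory):
    print(case.__name__, "->", case(FunctionalMechanism("red-tomato-mechanism")))
```

On this reading, the common factor is the functional mechanism together with the relevant stimulus it operates on, not the worldly object itself.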