The illusion of consciousness: De-problematizing the hard problem
Nate Speert
January 16, 2019
O, what a world of unseen visions and heard silences, this insubstantial country of the mind!
What ineffable essences, these touchless rememberings and unshowable reveries! And the
privacy of it all! A secret theater of speechless monologue and prevenient counsel, an invisible
mansion of all moods, musings, and mysteries, an infinite resort of disappointments and
discoveries. A whole kingdom where each of us reigns reclusively alone, questioning what we
will, commanding what we can. A hidden hermitage where we may study out the troubled book
of what we have done and yet may do. An introcosm that is more myself than anything I can
find in a mirror. This consciousness that is myself of selves, that is everything, and yet nothing at
all—what is it? And where did it come from? And why?
-Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
What is the silliest claim ever made? The competition is fierce, but I think the answer is easy.
Some people have denied the existence of consciousness: conscious experience, the subjective
character of experience, the “what-it-is-like” of experience. Next to this denial—I’ll call it “the
Denial”—every known religious belief is only a little less sensible than the belief that grass is
green.
-Galen Strawson, The Consciousness Deniers
In this paper I will attempt to show that the so-called hard problem of phenomenal
consciousness is not a problem at all because the thing that it requires a solution for, namely
phenomenal consciousness, does not exist. As evidenced by the Galen Strawson quote above, we are nevertheless caught up in the overwhelming sense that it does exist. It will be
the purpose of this paper to demonstrate how and why that is so. My claim will be that the
illusion of phenomenal consciousness is borne out of the nervous system misrepresenting its
own states (which I will argue are abstract entities) as if they were concrete things. The illusion
of phenomenal consciousness is the ad hoc fix that the system recruits to cover up the mismatch that this misrepresentation gives rise to, thereby preventing the cognitive ill effects that would otherwise result.
The main conceptual resource that I will draw upon will be Daniel Dennett's intentional systems theory, which in addition to the intentional stance involves two other stances, namely
the design stance and the physical stance. After introducing the hard problem of consciousness
in more detail as well as dealing with some preliminary objections to my position, I will then
utilize the intentional stance to give an account of representation in order to explain why we
have the illusion of phenomenal consciousness before going on to use the design stance and
then the physical stance to explain how we have that illusion.
I: Introduction
The word consciousness can mean a number of different things. Consciousness can refer to an
individual being alive, awake and responsive to stimuli. This is the case when one says, 'The person lost consciousness upon being administered the anesthetic,' or 'The person regained consciousness after awakening from a deep sleep.' This sense of the word consciousness is
distinctive in that it is observable from an external, third-person point of view. Call this sense of
the word consciousness creature consciousness. (Rosenthal, 1997)
Consciousness can also refer to the internal states accompanying that individual being
alive and awake, as they respond to stimuli and go about the world. There isn't really a use of this sense of the word in our ordinary language, which perhaps means that we should be immediately skeptical of it. (Wittgenstein, 1953, p. 125) The very redness of red is an example
of something to which it refers, as are the ‘raw feels’ of various subjective states like the smell
of freshly fallen rain or the taste of garlic. These so-called qualia can be differentiated from the
objectively verifiable brain processes that accompany them. Rather than being observable from
the outside, this sense of the word consciousness is inherently private and so can only be
witnessed internally. Call this sense of the word phenomenal consciousness.
It is this latter sense of the word consciousness that is philosophically problematic, and
so henceforth when I use the term consciousness, I am using it as shorthand to refer to
phenomenal consciousness. The former sense of the word consciousness would presumably be
fully understood in light of a completed biological account of the organism. It is not
immediately obvious however that an exhaustive account of the functioning of the organism
would fully account for the latter meaning of the word consciousness.
This intuition is borne out in the so-called zombie thought experiment. Philosophical zombies are creatures that are exactly like you or me in every physical and functional respect, save for the fact that they are entirely lacking in phenomenal consciousness. It is
argued that insofar as these creatures are conceivable, they are possible. The possibility of
zombies entails that phenomenal consciousness is conceptually separable from the physical
processes that underlie it. The zombie argument therefore suggests that phenomenal
consciousness is in some way non-physical. The zombie argument then seems to threaten
physicalism, the doctrine that the only kind of stuff in the universe is of a physical nature. The
zombie argument highlights the so-called hard problem of consciousness, which poses the
question: how and why do the physical/functional states of the individual accompany phenomenal consciousness? (Chalmers, 2003) It is a matter of great perplexity why we aren't all
zombies and indeed I want to argue that in some important sense, we are.
It is important to notice that the hard problem of consciousness is only a problem
insofar as phenomenal consciousness actually exists. Sometimes we can get so fixated on
finding a solution to a problem that we lose sight of the possibility that the reason we can’t find
a solution is because the problem isn’t really a problem at all. This is the position that I wish to
take up in this paper, that consciousness is an illusion. By that I mean that it is in reality
something other than what it appears. Moreover, what it is in reality is so different from how it
appears that it probably doesn’t deserve to be called consciousness.
The meta-problem of consciousness is the problem of the hard problem of consciousness; that is, it pertains to why we think that there is such a thing. It is premised on a stance of neutrality as to whether there is a hard problem of consciousness or not; however, if we take up an illusionist position on phenomenal consciousness, we can trade in the hard problem of consciousness for the meta-problem of consciousness. That is, we need only explain
why we have the illusion of phenomenal consciousness; call this the illusion problem.
(Chalmers, 2018; Frankish, 2016)
It will be the purpose of this essay to take up this challenge. Even if what I say is not true
of the actual world, it need only be true of some possible world. That is, it need only be logically
possible, which is to say free of logical contradictions. In order to make my case as strongly as
possible, I will seek to elaborate and extend my position as much as possible to show that
despite doing this it still remains free of logical contradictions. If I am successful, the
consequences will be quite philosophically significant.
Phenomenal consciousness is what Kripke calls a rigid designator which means that it
refers to the same thing in every possible world in which it applies. (Kripke, 1980) So, if
phenomenal consciousness turns out to be an illusion in one possible world, it follows that it
will also be illusion in all possible worlds in which it exists, including our own. A solution to the
illusion problem will accomplish what the zombie argument accomplished, but in the opposite direction. It will do for physicalism what the zombie argument has done for anti-physicalism.
There are some who think that, before even going into the details, the very idea of consciousness being an illusion is a contradiction in terms. Before I can begin, I will need to respond to what I see as the two primary arguments to this effect. The first is predicated on the presumption that an illusion can only exist within consciousness. An illusion, it is said, is
something that only makes sense as something that one is conscious of. In other words, the
very notion of an illusion presumes the existence of consciousness. Therefore, the idea of an
illusion of consciousness is itself contradictory.
It is true that our common-sense conception of the word illusion is something that we are necessarily conscious of, but we don't have to allow our theorizing to fall victim to our pre-theoretic intuitions. When it comes to the relationship between consciousness and our mental processes, there have been many times where our common-sense intuitions have turned out to
diverge from what the empirical findings wound up saying. For example, it wasn’t too long ago
that people routinely believed that they were conscious of everything that went on in their
minds. Even now people have a hard time wrapping their heads around the idea of unconscious
perception. If we can grant that there is unconscious representation and if we think of an
illusion as merely being something that lends itself to misrepresentation then we can accept
the existence of an unconscious illusion.
Consider an actual example of this. Imagine that you are juggling several bags of
groceries while navigating your way up a staircase; the last flight of stairs you walked up had 20
stairs but the current one you are walking up now only has 19. This discrepancy has led you to
unconsciously assume that there is a 20th step so you lift your foot in anticipation of this
imaginary step only to uncomfortably discover that it is not there. We can think of the discrepancy between the two staircases as a kind of illusion insofar as it tricked you into misjudging how many stairs the second one had. Moreover, it was unconscious
processes in your brain that came to this conclusion, because you weren’t consciously focused
on the stairs but rather your groceries. In sum, an aspect of the world has caused you to
misrepresent it, making it an illusion and one that occurred unconsciously. Hence illusions
needn’t be conscious.
The second argument concerns itself with the reality-appearance distinction, which lies
at the heart of the meaning of the word illusion. An illusion is something whose appearance
departs from its reality. In the case of consciousness, it is said, its appearance is its reality.
Reality and appearance then can’t come apart from one another. It is said that phenomenal
consciousness therefore is the one thing that can’t be an illusion. (Searle, 1997, p. 112)
Intuitively, I think we can all agree that this seems like a fairly damning criticism. Its intuitive
plausibility is rooted in the third characteristic of phenomenal consciousness that I list below,
namely direct acquaintance. Direct acquaintance pertains to the fact that it appears as though
one has immediate access to one’s conscious states. This feature is epistemic in nature, which is
to say it is concerned with the degree to which we are able to acquire knowledge about things.
We have indirect epistemic access to things out in the world, meaning that we can never hope to have perfect knowledge of them. On the other hand, we seem to have direct
epistemic access to our own conscious states, meaning that as a matter of necessity we can’t be
wrong about them. It is on this presumption that consciousness can’t be an illusion.
Importantly, note that this argument only holds if consciousness has the characteristics that it seems to have, which is to say that it is not an illusion; but that is exactly what is at issue.
My claim is that consciousness is an illusion, meaning that it does not have the characteristics
that it appears to. The objection then precisely begs the question against an illusionist position.
In other words, Searle and others who have made this argument are effectively saying that
consciousness can't be an illusion because consciousness is not an illusion!¹ Needless to say, it amounts to a circular argument. That being said, the burden does fall on me to explain how
consciousness comes to have the appearance of direct epistemic access when it doesn’t have
that property in reality. This will be one of the objectives of this paper.
In order to provide an account of the illusion of consciousness, I must first give an account of what consciousness feels like, so as to fix the target phenomenon to be explained. Some would regard phenomenal consciousness as entirely ineffable, leaving
philosophers with only being able to gesture at it using phrases like ‘what it is like’ to be in this
conscious state or that one. (Nagel, 1974) One may feel moved to say of consciousness what Louis Armstrong supposedly said of jazz, that "…if you hafta' ask you ain't never gonna' know."² (Block, 1978, p. 281) Nevertheless, for my present purposes, I think there are three things that
can definitively be said of consciousness. (1) Consciousness is always about something. For any
given conscious state, it is always the case that in that instance one is conscious of something.
Call this feature of consciousness aboutness, or intentionality. (2) Consciousness shows up for us in a substantial sense. Consciousness can be conceived of as having an existence akin to that of concrete objects, like tables and chairs. Call this feature tangibility. (3) One seems to have immediate access to one's conscious states. In other words, one doesn't need to infer that one is in some conscious state; one simply knows, due to having first-hand knowledge of that fact. Call this feature direct acquaintance.

¹ See Frankish (2016) for a different yet (I think) related response to this objection.
² There is some debate about who this quote is accurately attributed to, some arguing that it was a different jazz musician, Thomas Wright "Fats" Waller, who said it.
As has just been said, the notion of an illusion is predicated on a distinction between appearance and reality; a major focus of this paper will be to cash these concepts out in concrete terms. In embarking on this, I will need to invoke (1) the notion of a representation, (2) the thing that the representation represents (its referent) and (3) what the representation represents the thing as being (its representational content). For something to be an illusion is for it to be represented as something other than what it is in reality. In other words, the content
of the representation does not faithfully reflect its referent. If a child were to mistake their
father for Santa, the referent would be their father but the representational content would be
Santa. In the case of consciousness, the representational content would be phenomenal
consciousness whereas the referent would be something else entirely.
II: A view from the intentional stance
In providing an account of representation I will utilize Daniel Dennett’s intentional stance
approach. Dennett’s position is predicated on the notion that the kind of patterns that one
picks out in the world are dependent upon the stance one adopts towards them. Adopt the
physical stance and one becomes attuned to concrete objects defined entirely by their
characteristics in physical terms. Adopt the design stance and rather than physical properties
one instead becomes sensitive to functional properties. Certain physical properties fall under a given functional property if they act to subserve that function. For example, birds and planes can be thought of as being susceptible to the same description at the design stance insofar as they both accomplish the same function of flying, even though they have different physical descriptions. Patterns visible from the design stance can be said to be more abstract
than those from the physical stance in that they range over more things at that lower level of
description.
The final stance that Dennett outlines and the one most pertinent for my present
purposes is the intentional stance. From the intentional stance one perceives contentful states
in others like beliefs and desires. For the first time, one can say that something is about
something else; that is, one can perceive representations. The intentional stance in turn occurs
at a level of abstraction greater than that of the design stance. For Dennett, representations are
not things that occur within the individual but rather are patterns of interaction that occur
between the individual and its environment. Patterns at the design level of description fall
within a given intentional stance pattern so long as they occur within the individual and have
the function of causing that individual to engage with the world in such a way as to subserve
that given representational state. (Dennett, 1971)³

³ I am interpreting Dennett in a way that some might think he would be uncomfortable with. In his writings he only explicitly applies the intentional stance to so-called propositional attitudes like beliefs and desires, and not to other representations. That being said, in his early writings he talks about the 'sub-personal' states underlying 'personal' representations, which are more broadly reaching interactions between the system and world. This looks very much like a distinction between a design stance analysis and an intentional stance analysis of representations more generally.

The processes that operate on systems designed through evolution by natural selection are notoriously economical. Evolution is helplessly slow with respect to the
changes that are made to the body plans of the creatures that it operates on. What’s more, the
individuals that it is operating on are competing against one another. It is therefore in the
interests of each of them to make the most they can out of the least structural changes to their
body plans. It is for this reason that you will often find that structures that initially evolved for
one task are co-opted for another function. A commonly cited example of this is the use of feathers in flight, which initially appeared for the purpose of insulation, among other things.
(Gould & Vrba, 1982)
Another example of this, and one that is pertinent to my purposes here is the way our
cognitive architecture has been repurposed for thinking about abstract things. We didn’t
initially evolve to think about abstract matters, but we are now faced with this task. It would be
highly uneconomical to start from scratch and construct a wholly new system to think about
abstract things. It would make more sense to simply co-opt the systems we already have in
place for reasoning about concrete things and put them into service for abstract matters with a
few tweaks and changes. There is a considerable amount of evidence for this being the case.
It seems that we think about moral weight in terms of physical weight, such that people are more likely to regard some issue as important if they are holding a heavy object when being told about it. (Ackerman et al., 2010) The abstract notion of moral importance is
conceptualized in terms of the concrete property of physical weight. We conceptualize
interpersonal warmth in terms of physical warmth such that a person will have more positive
feelings towards someone if they meet them whilst they are holding onto something warm.
(Williams & Bargh, 2008) Here the abstract notion of interpersonal warmth is conceptualized in
terms of the physical property of physical warmth. We think about time in terms of space such
that we talk about moving an appointment forwards or backwards in time or speak about a
vacation being long or short. It’s also been demonstrated that time is perceived as moving more
quickly when we ourselves are moving more quickly through space. (Boroditsky & Ramscar,
2002) Once again the abstract notion of time is conceptualized in terms of the physical notion
of space.
It is my suggestion that we do the same thing with the abstract entities visible from the intentional stance (i.e. representations). Despite representations being abstract entities, we treat them as being like concrete objects, that is, things that tangibly show up in the way that tables and chairs do. Moreover, insofar as representations are that which informs the behavior of others, we treat those people as being directly acquainted with them. This convenient shortcut will go off without a hitch initially, but it will inevitably come back to haunt the system later, when the representations that it comes to represent belong to itself. This is because the system at this point is misrepresenting itself as being directly acquainted with concrete entities that it is now in a position to recognize don't exist.
Were it not for a clever trick that the brain plays on itself, the system would fall into a
paralyzing state of cognitive dissonance. This clever trick is the illusion of phenomenal
consciousness. Phenomenal consciousness consists in the sense in which one is directly
acquainted with the content of one's representations, which exist in a tangible, almost concrete sense. Phenomenal consciousness is the sense in which one appears to have representations in the way that one represents oneself as having them, despite that not being the case. In sum,
phenomenal consciousness is necessary for the individual to productively represent their own
representations. (Rosenthal, 2005) Such re-representation is sometimes called higher-order
representation or metarepresentation. I will refer to it occasionally as auto
metarepresentation, to distinguish it from hetero metarepresentation. Auto
metarepresentation is to represent one’s own representations, whereas hetero
metarepresentation would be to represent another person’s representations.
Allow me to summarize the basic structure of my argument. (1) Representations are a
class of abstract entities that don’t tangibly show up in the world in the way that physical things
like tables and chairs do. (2) In the interest of the economy of evolutionary design processes,
the brain represents abstract entities, including representations, in concrete terms. (3) When
the brain goes to represent its own states to itself it ends up with a mismatch between how it is
representing them and how they exist in reality. (4) The brain recruits the illusion of
phenomenal consciousness in order to trick itself into thinking that those representational
states do in fact exist in the way that it is representing them as existing.
As was said in the beginning, it is my goal in this paper to convince the reader that, in addition to my account being logically possible, it remains free of contradiction once it has been elaborated and expanded upon. I have so far only been describing the system from the perspective of the intentional stance, which is perhaps all well and good, but it leaves open questions: why would the system carry out a representation in the first place? What does it mean to represent something, and how does that happen? Additionally, why would the system need to represent its own representations?
III: A view from the design stance
Whether speaking about things at the intentional stance or the design stance, what one is ultimately doing is giving an account of the causes of behavior. From the intentional stance one would say that Jim believed he had beer in the fridge and desired a beer, causing him to get up and go grab one. Things get more complicated when one descends to the level of the
design stance, nevertheless one is still speaking about the causes of behavior. Rather than
representations, the causes for behavior at the design stance on my account are what I call
sensorimotor algorithms. Algorithms can be understood to govern the relationship between
inputs and outputs. In other words, they are sets of contingencies such that IF x₁ THEN y₁, IF x₂ THEN y₂, etc. In the case of sensorimotor algorithms those inputs are incoming sensory
information (i.e. stimuli) and the outputs are outgoing motor information (i.e. responses). A simple example of this is a reflex: if you hit a certain part of my knee with a hammer (the stimulus), then my leg will extend in a certain way (the response).
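To make this concrete, here is a minimal sketch in Python of a sensorimotor algorithm as a bare table of stimulus-response contingencies. The function and contingency names are illustrative inventions of mine, not drawn from any of the sources cited here.

    # A minimal, illustrative sketch: a sensorimotor algorithm as a fixed set of
    # IF-stimulus THEN-response contingencies.

    def make_sensorimotor_algorithm(contingencies):
        """Return a function mapping stimuli to responses via fixed contingencies."""
        def respond(stimulus):
            return contingencies.get(stimulus)  # None when no contingency applies
        return respond

    # The knee-jerk reflex as a one-contingency algorithm.
    knee_reflex = make_sensorimotor_algorithm({
        "tap_on_patellar_tendon": "extend_lower_leg",
    })

    print(knee_reflex("tap_on_patellar_tendon"))  # -> extend_lower_leg
    print(knee_reflex("bright_light"))            # -> None (no contingency)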
We will use the term sensory mechanism for the structure responsible for receiving
input from an external stimulus. The term motor mechanism will be given to the structure
responsible for generating a response. To compute is to implement an algorithm and a
computer is something that computes, so the term computer will be given to the structure
carrying out the given sensorimotor algorithm. That is, it is the computer that governs the
relationship between input and output. The term module will be given to the structure
encompassing all of these things.
Fig. 1: Modular structure. Key: St = stimulus; S = sensory mechanism; C = computer; M = motor mechanism; R = response.
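The modular structure of Fig. 1 can be sketched in the same spirit. The decomposition into three plain functions is again an illustrative assumption of mine rather than a specification from the literature.

    # A sketch of the module: sensory mechanism, computer, and motor mechanism
    # wired in sequence, so that a stimulus is transformed into a response.

    class Module:
        def __init__(self, sense, compute, act):
            self.sense = sense      # sensory mechanism: stimulus -> input signal
            self.compute = compute  # computer: input signal -> motor command
            self.act = act          # motor mechanism: motor command -> response

        def run(self, stimulus):
            signal = self.sense(stimulus)
            command = self.compute(signal)
            return self.act(command)

    # The knee-jerk reflex again, decomposed into the three structures.
    reflex = Module(
        sense=lambda st: st == "tap_on_patellar_tendon",
        compute=lambda signal: "extend" if signal else "idle",
        act=lambda cmd: "lower_leg_extends" if cmd == "extend" else "no_response",
    )
    print(reflex.run("tap_on_patellar_tendon"))  # -> lower_leg_extends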
The fact that the system is structured in this way enables it to skillfully influence the goings on
of things outside of it based on information about those same goings on. We will call those
goings on affordances insofar as they afford some action (Gibson, 1979, pp. 127 - 137). For
example, I would need to know things like the location, size and shape of an apple in order to
properly pick it up. It is information from the affordance that constitutes the stimulus, and those actions aimed at the affordance that constitute the response. Information is collected from the affordance in an ongoing way while the movement is being carried out, so that information about the progress of the movement can guide the movement itself, as well as indicate when the goal of the module has been achieved and the movement can be terminated. Affordances
themselves can be thought of as algorithms insofar as when they are acted on in different ways
they respond in different ways.
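This feedback-guided engagement can be sketched as a closed loop in which the module keeps sampling the affordance and acting until its goal is achieved, at which point the movement terminates. The one-dimensional 'reaching' dynamics are a deliberate simplification of mine.

    # A sketch of ongoing engagement with an affordance: sense, compare with the
    # goal, act, and repeat, terminating once the goal has been achieved.

    def engage(affordance_state, goal, step_toward, max_steps=100):
        for _ in range(max_steps):
            if affordance_state == goal:  # goal achieved: terminate the movement
                return affordance_state
            affordance_state = step_toward(affordance_state, goal)  # guided by input
        return affordance_state

    # Reaching toward an apple along one dimension (hand position -> apple position).
    final = engage(
        affordance_state=0,
        goal=10,
        step_toward=lambda pos, goal: pos + (1 if pos < goal else -1),
    )
    print(final)  # -> 10: the hand has arrived at the apple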
Fig. 2: Relationship between module and affordance. Key: S = sensory mechanism; C = computer; M = motor mechanism; A = affordance.
The algorithm of the module would have to correspond to the algorithm constituting
the affordance in such a way as to yield adaptive behavior on the part of the system as a whole.
We will call this correspondence relation skillful attunement. One could subject such a system
to the intentional stance and regard it as having a representation of the apple. This is because one can effectively assign the system a belief that there is an apple and a desire to have the apple in order to predict its behavior in picking it up. Nevertheless, such a
commitment would require qualification. Sure, representing is happening, but who is doing the representing? I would contend that the answer to this question would depend upon who or
what set up the module responsible for the behavior. If the modules were genetically hard
wired and so entirely crafted by the processes of evolution by natural selection, then we would
have to say that in some sense that it was natural selection that was doing the representing.
(Dennett, 2009) On the other hand, if it were processes within the system itself that set up
those modules, then it would be the system that is doing the representing; allow me now to
explain how this works.
The apple is an example of an affordance that is external to the individual. Affordances can also exist within the system itself, in the form of other modules. This allows parts of the system to regulate other parts from the inside. The modules acting to do this will be called internal modules, and the modules that they influence, the aforementioned genetically hardwired ones, will be the external modules. The internal modules take in information about the functioning of the external modules so that they can bring them in line with their own goals. For this to work, the most effective place for the internal module to focus on would be the computer of the external module, so that it can know what algorithm that module is running. Through taking in information about that algorithm, the internal module can send output signals to it in order to influence it and cause it to change. Moreover, the internal module can do this in a way that reflects the ongoing input that it is receiving, meaning that it can also stop acting on the external module when its algorithm falls in line with the internal module's goals.
Fig. 3: Internal and external modules. Key: E = external module; I = internal module; S = sensory mechanism; M = motor mechanism; A = affordance.
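As a sketch of this arrangement, with the external module's computer modeled, purely illustratively, as a mutable contingency table that the internal module can read and patch until it matches the internal module's goals:

    # An internal module regulating an external module: it senses the algorithm
    # the external module is running and sends corrective output until that
    # algorithm falls in line with its own goals.

    external_computer = {"tap_on_patellar_tendon": "extend_lower_leg"}

    def internal_module(external_computer, goal_contingencies):
        for stimulus, desired_response in goal_contingencies.items():
            if external_computer.get(stimulus) != desired_response:  # mismatch sensed
                external_computer[stimulus] = desired_response       # corrective output

    # Suppressing the reflex, e.g. in order to hold still.
    internal_module(external_computer, {"tap_on_patellar_tendon": "inhibit"})
    print(external_computer)  # -> {'tap_on_patellar_tendon': 'inhibit'}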
It is through doing this that the system can move away from natural selection carrying
out its representations and more in the direction of it doing it itself. This would be the case
because the primary determinants for its representational processes are becoming increasingly
internalized. That is, it is becoming less the case that natural selection is programming in the algorithms of its external modules and more the case that its own internal modules are doing that.
One may be inclined to think that with such a higher-order module one would have a
higher-order representation. The skillful attunement of a module with an aspect of the world is
tantamount to the system representing that aspect of the world and so the skillful attunement
of a module with another module might seem to be the representation of a representation.
This is not the case. Recall that representation is a process that the entire system engages in
and reflects the way that system engages with its environment. Representation does not occur
at the level of the design stance and hence does not take place within the system. For the
system to represent its own representations it has to engage with some aspect of its
environment in some way, and moreover that aspect of its environment must have
representational capacities. I’ll explain later on exactly how this works.
Modules are stacked on top of one another and this process occurs incrementally with
increasing layers. This would mean an increasingly wide array of different ways for the system
to regulate itself and hence a proportionately increased repertoire of different behaviors. Those
new behaviors could be made more complex if a single internal module were to regulate
multiple external modules. For example, the reflexes that one is genetically endowed with can be bound together by higher-order modules to create new behaviors, by co-opting certain components of each reflex and combining them in a novel way. (Brooks, 1987)
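A sketch of such binding, loosely in the spirit of Brooks-style layering; the two reflexes and the combining rule are hypothetical examples of mine:

    # A higher-order module creating a new behavior out of components of two
    # existing reflexes (all names illustrative).

    def withdraw(stimulus):
        """Reflex 1: pull the limb back from a painful stimulus."""
        return "withdraw_limb" if stimulus == "pain" else None

    def orient(stimulus):
        """Reflex 2: turn the head toward a sudden loud noise."""
        return "turn_head" if stimulus == "loud_noise" else None

    def startle(stimulus):
        """Higher-order module: on a threat, recruit components of both reflexes."""
        if stimulus == "threat":
            return [withdraw("pain"), orient("loud_noise")]  # novel combined behavior
        return None

    print(startle("threat"))  # -> ['withdraw_limb', 'turn_head']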
A principle that has already come up is that of the economy of the design processes of evolution by natural selection. Evolution will not build something from scratch when it can co-opt and modify something else that has already been constructed for the job. Rather than continually build modules out of whole cloth, it became apparent to the system that it could make use of something else. In preparation for engaging in some behavior, the system's body
would need to prepare for it in some way, for example by increasing heart rate and blood pressure, sweating, getting into a certain stance, clenching the fists, ceasing digestive processes, baring the teeth, etc. Rather than building separate systems that would keep track
of the inputs necessary to trigger a behavior as well as those triggering the preparatory bodily
changes, these systems can be collapsed into one. It would be the bodily changes themselves
that would give rise to the behavior so long as enough of them were to be triggered and to a
great enough degree.
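A sketch of this collapse, with the particular bodily changes, their activation levels, and the threshold all serving as illustrative stand-ins: the behavior fires just when enough of the preparatory changes have been triggered to a great enough degree.

    # Behavior triggered directly by the preparatory bodily changes themselves,
    # rather than by a separate trigger system.

    def emotion_triggers_behavior(bodily_changes, threshold=3):
        """Fire the behavior once enough bodily changes are strongly active."""
        active = sum(1 for level in bodily_changes.values() if level > 0.5)
        return "flee" if active >= threshold else None

    state = {"heart_rate": 0.9, "sweating": 0.7, "fists_clenched": 0.8,
             "digestion": 0.1}
    print(emotion_triggers_behavior(state))  # -> flee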
I'll call these behavior-inducing bodily changes emotions. (James, 1894) We can think of
emotions as higher order internal modules that regulate the modules beneath them. Emotions
can either act to facilitate or diminish another module. The module that the emotion acts upon
can be within the same system as the emotion. An example of this is fear. Imagine walking alone in the woods and encountering a bear right in front of you: the fear module that springs up within you acts upon the module that was previously causing you to continue walking forwards, such that you cease doing that. That fear module might also act to
facilitate modules that would induce behaviors such as running away.
Emotion modules can also act on modules within the systems of other people, albeit
indirectly. An example of this is anger. If you are angry at some behavior in another person, that anger will impact modules within yourself, which in turn will produce behavior that signals your disapproval of that behavior. This angry body language and so forth is designed to invoke
fear in the other person directed at the module producing the offending behavior, causing that
module to be diminished. Hence the anger module functions to ultimately impact the module
in the other person, despite the fact that it initially influenced modules in oneself. Emotions
that have ultimate effects on the behavior of individuals in one's social group will be called proto-social emotions.
The anger just mentioned would be called proto-guilt. The modules
responsible for the offending behavior are attuned with things in the world, thereby subserving
a representation at the intentional stance level. Furthermore, the proto-guilt module is attuned
with the module producing that behavior in the other person, hence consisting in another
representation. What you have here then for the first time is a representation of a
representation, a metarepresentation. However, it is not an auto metarepresentation but
rather a hetero metarepresentation, given that the first-order representation exists in a system separate from the second-order representation. Before one can have legitimate auto
metarepresentation, one must first have genuine social emotions.
Proto-social emotions turn into full-blown social emotions once they become internalized within the systems that they act upon. It is then that the social emotion can autonomously do its work without the person who is its origin needing to be present. Before I can explain how this happens, allow me to first give a more technical account of how proto-guilt impacts fear. The sensory mechanism of one's fear module is subjected to the angry body language of the other person's proto-guilt module, which is a product of its motor mechanism.
Fig. 4: The mechanics of proto-social emotions. Key: PG = proto-guilt algorithm (in the other person); F = fear algorithm (in oneself).
The proto-guilt module of the other person is in turn monitoring its effects on one's own fear module by taking in information about the output of that fear module. This would
include things like fearful body language or a ceasing of the offending behavior. It is this input
that is informing proto-guilt's output, such that it can be guided, but also so that it can deactivate once it is indicated that the algorithm of the fear module has fallen in line with its
goals. In order for the proto-guilt module to be internalized, the key thing that must take place
is for one to deduce what the algorithm of the proto-guilt module is in the other person so that
one can implement this in a module that one generates within oneself.
Fig. 4 (repeated): The mechanics of proto-social emotions. Key: PG = proto-guilt algorithm; F = fear algorithm.
Notice by again looking at the above diagram that the fear module is providing input to the
proto-guilt module and getting output from it with nothing but the proto-guilt algorithm in
between these two things. Through determining the relationship between its own output and the input that it is receiving, it can deduce the algorithm that must govern that relationship.
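This deduction can be sketched under the simplifying assumption that the other person's module implements a fixed stimulus-to-response mapping, which one can tabulate from the observed relationship between one's own outputs and the inputs one gets back:

    # Deducing another's algorithm by probing it with one's own behavior and
    # recording the responses (illustrative names throughout).

    def deduce_algorithm(probe, stimuli):
        """Recover a contingency table from observed input/output pairs."""
        return {stimulus: probe(stimulus) for stimulus in stimuli}

    # The other person's (hidden) proto-guilt algorithm.
    def others_proto_guilt(observed_behavior):
        return "angry_display" if observed_behavior == "offending_act" else "none"

    # Probe it and internalize the deduced table as one's own guilt module.
    internal_guilt = deduce_algorithm(others_proto_guilt,
                                      ["offending_act", "neutral_act"])
    print(internal_guilt)  # -> {'offending_act': 'angry_display', 'neutral_act': 'none'}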
If one can establish a module within oneself that implements that same algorithm, one
at that point can use one’s own non-verbal behavior to stimulate one’s own fear module. It
would be at this point that one would have a genuine guilt module. For this to occur though,
one would have to sense the non-verbal behavior in oneself through the same mechanisms that
one senses it in others. Indeed, there is a substantial literature suggesting that humans and a
variety of non-human animals possess such a ‘shared manifold’ (Gallese, 2001) in the form of
the so-called mirror neuron system. (Rizzolatti & Arbib, 1998)
Fig. 5: The achievement of genuine social emotion. Key: G = guilt algorithm; F = fear algorithm.
The reader may finally be thinking that at long last this is where phenomenal consciousness is
achieved, but that is not so. This is because despite the system having internalized the modules
of the other individual into its cognitive economy, that module is nevertheless not under the
direction of the system. Rather, the algorithm of that module was determined by how the
module was functioning in the other person. As was stated previously, a system can only be
said to be the one doing the representation if the primary determinants for how that
representation is taking place is located within it. This I suggest can only happen with language
as it is only with the massive combinatorial power that language gives rise to that one can have
the kind of flexibility to gain the necessary freedom from one’s surroundings in order to achieve
genuine auto metarepresentation.
For this to happen, that same process whereby social emotions are achieved repeats
itself. That is, individuals in one’s social group again are co-opted in the process of stacking
another layer of modules on top of one’s existing ones. Only this time, rather than it being the
non-verbal emotional body language of other persons involved in setting up that internalized
module, it is a full-fledged verbal language. One begins by having one’s social emotion modules
regulated by the verbal utterances of others until one comes to internalize those linguistic
modules within oneself. It is at this point that one hears the voice of the other person within
oneself through the activation of one’s own speech centers.
Importantly, the linguistic algorithms in this case would be a mirroring of those of the other individuals' modules, and hence the voices one hears within oneself would be determined by outside forces. This is what is now known as schizophrenia, and individuals suffering from this condition don't regard themselves as being in control of the voices they experience, nor do they interpret them as belonging to themselves. (McCarthy-Jones et al., 2014)
Nevertheless (and with a tip of the hat to the psychologist Julian Jaynes), one can imagine coalitions of those linguistic modules banding together to create the voices of supra-people (i.e. gods) as spirituality and religion first began to emerge. At this point one is still not representing one's own states, as one is still toiling under the guidance of some external force. It
is the process through which linguistic modules cooperate with one another so as to generate
an internal locus of control of one’s own self talk that allows for one to generate a singular
inner voice that definitively belongs to oneself. It is at this point that one achieves phenomenal
consciousness because it is only at this point that one achieves genuine auto
metarepresentation and hence needs to invoke the illusion of consciousness. (Jaynes, 1976)
IV: A view from the physical stance
For the remainder of the paper I would like to describe the system that I have been sketching in
a bit more detail so as to provide an account of how the modules managed to come into being
and self-organize in the first place. In order for me to do this I will be descending this time from
the design stance down to the physical stance. In this endeavor I will need to draw upon the
conceptual apparatus of natural selection in its most general terms. As Dawkins, among many others, points out in his book The Selfish Gene (1976, p. 191), there isn't anything special about genes as they exist in and of themselves that makes them susceptible to the forces of natural selection. So long as something fulfills a number of key criteria, it too can be said to evolve by
natural selection. These criteria are (1) variation, (2) selection and (3) heredity (Darwin, 1859).
The basic idea is that if members of a population vary in a way that is important with respect to
the degree that they are capable of surviving and reproducing (variation), the ones that are
most suited for this will become more plentiful and pass on those beneficial traits to their
offspring to a greater degree (selection). Those offspring will retain those beneficial traits
(heredity) but will also in turn continue to vary as the process repeats itself.
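The three criteria can be captured in a generic sketch; the numeric trait, the fitness function, and the Gaussian mutation are illustrative choices of mine, not part of Darwin's formulation.

    # Variation, selection, and heredity over a population of numeric traits.

    import random

    def evolve(population, fitness, generations=50, mutation=0.1):
        for _ in range(generations):
            # Selection: fitter individuals are more likely to reproduce.
            weights = [fitness(x) for x in population]
            parents = random.choices(population, weights=weights, k=len(population))
            # Heredity with variation: offspring resemble their parents, plus mutation.
            population = [p + random.gauss(0, mutation) for p in parents]
        return population

    # Select for traits near 1.0, starting far away.
    result = evolve([0.0] * 20, fitness=lambda x: 1.0 / (1.0 + abs(1.0 - x)))
    print(round(sum(result) / len(result), 2))  # -> typically near 1.0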
It is my contention that originally it was only genetic natural selection sculpting the body
plans of the systems that I have been describing but that eventually those processes set up the
necessary starting conditions for a new form of natural selection to step onto the scene.
Genetic natural selection occurs inter-generationally, across generations of individuals, and this is what makes it so slow. If genetic natural selection could set up a separate
process of natural selection that occurs within the lifetime of the individual, the speed at which
the individual could adapt to its environment would be greatly enhanced. As has already been
said, external modules were entirely genetically programmed due to intergenerational natural
selection. The introduction of internal modules, on the other hand, allowed for intragenerational natural selection.
This additional unit for natural selection to operate upon, I contend, is the neural firing pattern. I will make the case that the neural firing pattern is a genuine unit for natural selection to operate on by arguing that it fulfills each of the aforementioned criteria. I will begin with the variation requirement. There are approximately 100 billion neurons in the adult human brain, each of which has between 1,000 and 10,000 connections with others. Supposedly, there are more ways for the neurons in the brain to be connected up with one another than there are elementary particles in the known universe. This, needless to say, is more than enough variation for natural selection to operate on.
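As a back-of-the-envelope check of that claim, using the standard estimate of roughly 10^80 elementary particles in the observable universe: 10^11 neurons with 10^3 to 10^4 connections each give on the order of 10^14 to 10^15 synapses, and even counting each synapse as merely present or absent yields 2^(10^14) ≈ 10^(3 × 10^13) possible wiring patterns, a number that utterly dwarfs 10^80.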
The next criterion is selection. Genetic natural selection can be thought of as having wired up a reward system, whereby neural firing patterns deemed to lend themselves to behavior conducive to the organism's survival and reproduction are made more likely to fire again. This is accomplished through certain neurotransmitters strengthening the connections between the constituent neurons in question. In sum, we can easily talk about the reward system selecting certain neural firing patterns over others.
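A sketch of this selection step, with the weight dictionary, learning rate, and scalar reward as illustrative stand-ins for neurotransmitter-mediated strengthening:

    # Reward-modulated strengthening: connections among the neurons that took
    # part in a rewarded firing pattern are strengthened, making that pattern
    # more likely to fire again.

    def reward_pattern(weights, pattern, reward, learning_rate=0.1):
        for i in pattern:
            for j in pattern:
                if i != j:
                    weights[(i, j)] = weights.get((i, j), 0.0) + learning_rate * reward
        return weights

    w = reward_pattern({}, pattern=[1, 4, 7], reward=1.0)
    print(w[(1, 4)])  # -> 0.1: this pattern has been selected for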
The third and final requirement is heritability. The concept of heritability relies upon the
notion of separate individuals belonging to a singular lineage, with one generation of that
lineage passing along traits to the next. The challenge for neural firing patterns in fulfilling this requirement is that they must in some sense remain the same from generation to generation in the messy environment of the nervous system, while at the same time varying in some important sense as well. (Crick, 1989) The activation of a neural firing pattern is its coming into being; 'neurons that fire together, wire together,' and so the pattern leaves behind a memory trace of itself such that a similar pattern of neural activity is more likely to fire again in the future. (Hebb, 1949) We can identify that second pattern of neural firing as belonging to the same lineage as the first so long as it can be concluded that it was that same memory trace that gave rise to it.
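And a sketch of heredity via the memory trace, with the weights and threshold again illustrative: a new firing pattern counts as offspring of an earlier one when the strengthened connections the earlier one left behind suffice to re-evoke it from a partial cue.

    # Pattern completion from a memory trace: neurons strongly linked to the cue
    # are recruited, so the new pattern belongs to the lineage of the old one.

    def reactivate(trace, cue, threshold=0.05):
        offspring = set(cue)
        for (i, j), weight in trace.items():
            if i in offspring and weight > threshold:
                offspring.add(j)
        return sorted(offspring)

    # Trace left behind by the earlier firing of the pattern {1, 4, 7}.
    trace = {(1, 4): 0.1, (1, 7): 0.1, (4, 7): 0.1}
    print(reactivate(trace, cue=[1]))  # -> [1, 4, 7]: the lineage continues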
Despite competition being the name of the game when it comes to the children of
natural selection, this paradoxically doesn’t preclude cooperation, as it is those individuals that
cooperate with one another that often outcompete those that don’t. This is how we could get
structures like sensory mechanisms, computers and motor mechanisms working together to
achieve the overall organization of modules, and indeed how we could even get neurons
working together to yield each of these structures individually in the first place. It is also how we would get modules working together to create the kind of hierarchies at the top of which sit the linguistic modules that, when successfully cooperating, yield consciousness itself (or rather the illusion of such).
References:
Ackerman, J.M., Nocera, C.C., Bargh, J.A. (2010). Incidental Haptic Sensations Influence Social
Judgments and Decisions. Science, 328(5986), 1712-1715.
Block, N. (1978). Troubles with functionalism. Minnesota Studies in the Philosophy of Science, 9, 261-325.
Boroditsky, L., Ramscar, M. (2002). The Roles of Body and Mind in Abstract Thought.
Psychological Science, 13(2), 185-189.
Brooks, R. (1987). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-159.
Chalmers, D. (2003). Consciousness and its place in nature. In S.P. Stich & T.A. Warfield (Eds.), The Blackwell Guide to the Philosophy of Mind (pp. 102-142). Oxford: Blackwell.
Chalmers, D. (2018). The meta-problem of consciousness. Journal of Consciousness
Studies, 25(9–10), 1–41.
Cooley, C. H. (1902). Looking-glass self. The production of reality: Essays and readings on social
interaction, 6.
Crick, F. (1989). Neural Edelmanism. Trends in Neurosciences, 12(7), 240-248.
Darwin, Charles. (1859). On the origin of species by means of natural selection, or, the
preservation of favoured races in the struggle for life. London: J. Murray.
Dawkins, R. (1976). The selfish gene. Oxford: Oxford University Press.
Dennett, D.C. (1971). Intentional Systems. Journal of Philosophy, 68(4), 87-106.
Dennett, D.C. (2009). Darwin's Strange Inversion of Reasoning. Proceedings of the National Academy of Sciences, 106(Suppl. 1), 10061-10065.
Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11-12), 11-39.
Gallese, V. (2001). The ‘shared manifold’ hypothesis. From mirror neurons to empathy. Journal
of Consciousness Studies, 8(5-7), 33-50.
Gibson, J.J. (1979). An Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Gould, S.J., Vrba, E.S. (1982). Exaptation-A Missing Term in the Science of Form. Paleobiology, 8(1), 4-15.
Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley.
James, W. (1894). The physical basis of emotion. Psychological Review, 1(5), 516-529.
Jaynes, J. (1976). The origin of consciousness in the breakdown of the bicameral mind. Boston:
Houghton Mifflin.
Kripke, S.A. (1980). Naming and Necessity. Cambridge: Harvard University Press.
McCarthy-Jones, S., Trauer, T., Mackinnon, A., Sims, E., Thomas, N., Copolov, D.L. (2014). A New Phenomenological Survey of Auditory Hallucinations: Evidence for Subtypes and Implications for Theory and Practice. Schizophrenia Bulletin, 40(1), 240-248.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.
Noë, A. (2005). Action in Perception. Cambridge, MA: MIT Press.
Rizzolatti, G., Arbib, M.A. (1998). Language within our grasp. Trends in Neurosciences, 21(5), 188-194.
Rosenthal, D.M. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The Nature of Consciousness (pp. 729-753). Cambridge, MA: The MIT Press.
Rosenthal, D.M. (2005). Consciousness and Mind. Oxford: Clarendon Press.
Searle, J.R. (1997). The Mystery of Consciousness. New York: The New York Review of Books.
Williams, L.E., Bargh, J.A. (2008). Experiencing Physical Warmth Promotes Interpersonal Warmth. Science, 322(5901), 606-607.
Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell.