The Neuroscience of Consciousness
[A paper on the neuroscience of consciousness delivered at the ABS
conference, San Francisco, CA, August 12, 2006]
[slide 1: title]
[slide 2: overview
section 1. Some Historic Background
section 2. An Example of Reductionistic Explanation of Behavior
section 3. The Functionalist Approach to Consciousness
section 4. Consciousness as Subjective Experience
section 5. The Neurobiological Approach to Consciousness: Is There an NCC?
section 6. Combining the Functionalist and Neurobiological Approaches
section 7. Meaning and the Importance of the Self
section 8. Broader Implications]
1. Some Historic Background
Until the early 20th century, classical mechanics, as first formulated by Newton and
further developed by Laplace and others, was seen as the foundation for science as a
whole. It was expected that the observations made by other sciences would sooner or
later be reduced to the laws of mechanics. Although that never happened, other
disciplines, such as biology, psychology, or economics, did adopt a generally mechanistic
and reductionistic methodology and worldview. This influence was so great that even
today most people with only a basic notion of science still implicitly equate “scientific
thinking” with “Newtonian thinking”.
The logic behind this hegemony of Newtonian science is easy to formulate. Its central
principle is that of analysis or reduction: to understand any complex phenomenon, you
need to take it apart, and reduce it to its individual components. If these are still complex,
you need to take your analysis one step further and look at their components. If you
continue this subdivision long enough, you end up with the smallest possible parts, the
atoms or elementary particles. Atoms are thought of as separate pieces of the same hard,
permanent stuff, called “matter”. Newtonian ontology is therefore materialistic. It
assumes that all phenomena, whether physical, biological, mental or social, are ultimately
just complex arrangements of bits of matter. Any change, development or evolution is
merely a structural rearrangement caused by the movement of the particles. This
movement is governed by deterministic laws of nature. So if you know the initial
positions and velocities of the particles constituting a system, together with the forces
acting on those particles, you can in principle predict the further evolution of the system
with complete certainty and accuracy. The trajectory of the system is predictable
backwards, too: given its present state, you can in principle reconstruct any earlier state
it’s gone through.
The elements of the Newtonian ontology are matter, the absolute space and time in which
bits of matter move, and the forces or natural laws that govern their movement. No other
fundamental categories of being, such as consciousness, mind, life, purpose, moral facts
or esthetic facts, are acknowledged. Or, if they are acknowledged to exist, it is only as
causally-irrelevant epiphenomena -- transient arrangements of particles in space and time.
[slide 3: Epiphenomenalism]
If physicalism is true, consciousness may well be something “epiphenomenal”, like the
shadow cast by a moving car. The shadow is a real phenomenon, but it has no causal
powers of its own.
Newtonian epistemology is based on the correspondence view of truth. Knowledge is
merely an imperfect reflection of the particular arrangements of matter outside of us. The
task of science is to make the mapping or correspondence between the external, material
objects and the internal representations of them (which are also ultimately just
arrangements of bits of matter) as accurate as possible.
A basic assumption of the Newtonian outlook is simplicity. The seeming complexity of
the world is only apparent. To deal with it you need to analyze phenomena into their
simplest components. Once you’ve done that, their evolution will turn out to be perfectly
regular and predictable, and the knowledge you gain will be both a reflection and an
instance of that pre-existing order.
My purpose in giving this brief caricature of the assumptions behind Newtonian science
in the early modern period is to suggest that while the science of physics itself has
undergone deep changes, the methodological assumptions behind early modern science
have continued to be influential to the present day. In biology, for example, physicalism
and reductionism are integral to the genetic determinism that characterizes some
interpretations of evolutionary theory.
In the field of psychology, physicalism and reductionism had to contend with the obvious
fact of our own conscious experience and our special introspective access to it, as well as
with religious beliefs about the soul held by the majority of people in most societies.
Modern psychology only began in the 19th century, and for the first half century or so,
consciousness was a central topic. Psychophysicists like Ernst Weber (1795-1878) and
Gustav Fechner (1801-1887) studied the relationship between physical stimuli and
reportable sensations, relating conscious sensation to the intensity of the stimulus.
Wilhelm Wundt (1832-1920), usually credited with founding the first laboratory of
experimental psychology, based his studies on introspective reports of people’s conscious
experiences. He and his student Titchener (1867-1927) tried to train people to make very
precise, reliable observations of their own inner experience, so that a science of
consciousness could be built on the basis of these atoms of experience. William James’s
classic work, Principles of Psychology (1890), still widely read today, was all about
trying to understand conscious experience. And, though it seems hard to believe in our
current climate, in late 19th c. philosophical circles, idealism (roughly the view that
reality is ultimately conscious or mental) was a dominant force. [Josiah Royce. Hegel.
Bradley]
But all this changed in the early 20th c., when psychology, too, came under the spell of
reductionistic physicalism. In 1913 the American psychologist John B. Watson argued
that psychology did not need introspection-based methods, and in fact could do without
the concept of consciousness altogether. [1913 article Psych. Review.] He set out to
establish psychology as a purely objective branch of the natural sciences, with the
straightforward theoretical goal of predicting and controlling human behavior. The work
of Pavlov and Watson on classical conditioning and Skinner’s work on operant
conditioning became the paradigms in psychology. Skinner was convinced that
consciousness is epiphenomenal, and that its study shouldn’t be part of psychology at all.
This was the general mood for many decades while behaviorism reigned in university
psychology departments. By the 1960s behaviorism began to lose its influence, and
cognitive psychology, with its emphasis on internal representations and information
processing, became the dominant school of thought. But even within cognitive
psychology, consciousness was a taboo subject for many years.
This neglect of consciousness is now fading rapidly. After almost a century, a series of
significant papers began to appear in the 1980s, in leading journals like Science and
Nature, reporting marked progress in understanding conscious vision in the cortex,
conscious memories mediated by the hippocampus, and other cases where conscious
events can be compared to otherwise similar but unconscious ones. Since that time,
thousands of articles on consciousness have appeared in the neuroscience and
psychological literature, under various headings. There are new journals, new academic
conferences, and new graduate programs, all devoted to the science of consciousness.
There is little doubt that we are again looking squarely at the questions that were familiar
and important to William James and his generation, but now with better evidence, better
technologies and perhaps better theory than ever before.
The science of consciousness is in its infancy. Seemingly intractable problems face us.
But this is one of the things that makes the topic of consciousness so exciting. Each
individual discovery in the neuroscience of consciousness is fascinating in itself, and
tremendously important for medical reasons. But in addition to that, the effort to reach a
scientific understanding of consciousness is forcing us to reconsider some of the
fundamental assumptions about the universe that have been with us for centuries, since
the birth of modern science.
2. An Example of Reductionistic Explanation of Behavior.
I want to begin with a very simple introduction to some aspects of our current
understanding of how the mind works. Looking around the universe, we notice that
consciousness appears where there are minds: i.e. where there are living organisms with
brains capable of producing intelligent behavior. Consciousness may also be a
fundamental and therefore widespread property in the universe, as one of our speakers
today, Dr. Hoffman, will suggest. But organisms capable of intelligent behavior seem
like the right place to begin.
How does a living organism have mental capacities and conscious mental states? The
answer to this question is not obvious from simple anatomy. If you open the skull and
look at a brain, here’s what you’ll see.
[slide 4: The Brain]
Looking at a brain doesn’t give you any answers. It’s not that different from looking at
an exposed heart or a kidney. In fact for many centuries the best minds of Europe debated
whether the seat of consciousness was in the brain or the heart. (The heart seemed like a
good candidate because it was centrally located, and connected by blood vessels to all
parts of the body.) It’s not at all obvious how a piece of a body (a piece of meat, if we
want to put it very crudely) can give rise to conscious experience!
Let’s start with a very simple example. What’s the scientific explanation of intelligent
behavior in, for example, a mouse? Does the explanation of intelligent behavior include a
satisfying explanation of consciousness?
[slide 5: The Morris Water Maze (Richard G. Morris. 1984)]
This is a diagram of a Morris water maze. Mice (or rats) are placed in a circular arena
about 6’ in diameter filled with cloudy water. The arena itself is featureless, but the
surrounding environment contains positional cues of various types. A small platform is
located below the surface of the water. As the mice search for this resting place, the
pattern of their swimming is monitored by a video camera mounted above.
After 10 trials, normal mice swim directly to the platform on each trial. Researchers
wanted to know whether the mice had formed a spatial map, a representation of the
spatial organization of the maze relative to the room the maze is in, or merely acquired a
conditioned response.
[slide 6: Which is learned?]
To answer this question the experimenters next tried releasing the mice from a different
location in the maze. The mice were able to swim directly to the platform from the new
location on the first trial, indicating that they had formed a spatial map of the location of
the platform relative to environmental cues.
Mice that are missing a part of the brain called the “hippocampus” behave differently.
They do eventually learn to swim to the platform directly when released, but it takes them
hundreds of trials (typical of conditioning) to learn this. Then, once they’ve learned by
conditioning to swim directly to the platform from the first location, if they’re released
from a new location, it takes them hundreds of trials again to learn to swim to the platform
from the new location.
[slide 7: Navigating Without a Hippocampus]
So, we learn several things from these experiments: that normal mice learn spatial maps
of their environments (i.e. they make representations in their brain of the spatial
surroundings of individual events in their life), that this seems to require having a
hippocampus, and that this is not a simple case of acquiring a skill by gradual
conditioning, but is more like forming an autobiographical memory. What are the
mechanisms underlying this kind of spatial learning in mice?
O’Keefe and Dostrovsky discovered in 1971 that there are “place neurons” in the
hippocampus of rats. These are individual neurons that fire when and only when the
freely moving rat is in a certain place relative to a particular environment. (For example,
a particular neuron in the rat’s hippocampus may code for a particular place in the
kitchen, and a completely unrelated place in the living room. When the rat or mouse
returns to the kitchen, no matter what direction he enters it from, the neuron codes for the
identical location in the kitchen.) The hippocampus forms maplike memories of spatial
locations, so that it can represent the location of objects and itself when it’s in those
locations. The formation of these spatial maps is one of the mechanisms that enables
maze-learning in rats and mice.
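As a very rough illustration of what a “place neuron” amounts to computationally (my own toy sketch, not a model from the experimental literature), we can picture a cell whose firing rate peaks when the animal is near that cell’s preferred location in a given environment; all the numbers below are invented for illustration.

```python
import math

def place_cell_rate(position, preferred_location, width=0.15, peak_rate=20.0):
    """Toy place-cell tuning curve: firing rate (spikes/s) falls off with
    distance from the cell's preferred location in this environment."""
    dx = position[0] - preferred_location[0]
    dy = position[1] - preferred_location[1]
    return peak_rate * math.exp(-(dx * dx + dy * dy) / (2 * width ** 2))

# The same cell may have an unrelated preferred location in a different
# environment (the kitchen vs. the living room), so each environment needs
# its own map of preferred locations.
kitchen_field = (0.3, 0.7)   # invented coordinates, normalized to the room
print(place_cell_rate((0.32, 0.68), kitchen_field))  # near the field: fires strongly
print(place_cell_rate((0.90, 0.10), kitchen_field))  # far from it: nearly silent
```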
But what explains how there can be place neurons that form spatial maps? What
mechanism gives neurons the capacity to represent individual spatial locations? Various
drugs have been administered before, during, or after maze training. Mice treated with an
NMDA-receptor blocker called APV, for example, perform poorly on the Morris water maze,
suggesting that NMDA receptors play a role in the tuning up of individual neurons. And
since long-term potentiation, a biological mechanism for behavioral learning, also
requires NMDA receptors, spatial tuning of individual neurons may require LTP. [There
is some evidence that consolidation of spatial memory requires NMDA receptors, while
retrieval of spatial memories requires AMPA receptors. Liang et al. 1994]
[slide 8. LTP 1]
This kind of information can be put together to give us the beginnings of a neuroscientific
explanation of maze-learning in mice.
[slide 9: reductionistic explanation]
The basic methodology of cognitive neuroscience is reductionistic. We want to discover
the neurobiological mechanisms for psychological capacities. We know that living
organisms are made up of systems of cells of various types, and these are made up of
molecules of certain types, and so on. Recognizing that an organism is a functional
hierarchy of systems made up of other systems made up of other systems, we seek an
explanation of top-level behavior by means of functional decomposition and neural
localization. We first do a functional analysis of the mouse’s maze learning behavior in
terms of the mouse’s homeostatic mechanisms for maintaining body temperature, which
lead to a goal state of avoiding submersion, which the mouse does by swimming to the
platform, using the spatial map it forms in the hippocampus region of its brain. We study
functional relations at the behavioral level, and then try to discover how these are realized
in entities at the next level down. (And each of these is in turn realized by entities at an
even lower level when these stand in certain structural/functional relations.) The goal is
to understand the physical mechanisms that underlie psychological functions. Research
will typically be carried on at several different levels at once. Neuroimaging techniques
may show increased glucose and oxygen uptake in the hippocampus when the mouse is
navigating; microanatomical studies of single neurons in the hippocampus may reveal the
existence of place neurons and spatial maps formed by longterm potentiation (LTP);
genetic changes that disrupt the production of certain proteins that are neurotransmitter
precursors might be found to interfere with LTP; chemical changes in the bloodstream
that lower the availability of Ca might be found to interfere with the activity of the
NMDA receptors, and so on. A complete explanation of the animal’s behavior requires
understanding the functional relations at each level, and how each of these is realized in
relations among the entities of the next level down. When entities at the lower level are in
a certain structural and functional relationship, they form a mechanism that realizes a
function at the higher level.
[slide 10. Terminology]
(Let me say a word about terminology. There were several philosophical theories of the
mind/brain relation in the period following the demise of behaviorism, none of them
adequate. The method used in the cognitive neurosciences today, which I have
characterized as functional decomposition and neural localization, recognizes the
incompleteness of each of these views. The state of the question has moved forward, and
both researchers and philosophers today generally agree on something like the method
I’ve outlined.)
The working assumption in the reductionistic science of intelligent behavior is that all
this can be understood in strictly physical terms. This may seem obvious -- after all, we
can make machines that do what the trained mouse can do. The reason this is possible is
that information is a physical reality. It’s relational, but physical. So is information
processing. Let’s look at a simple example.
[slide 11. Thermostat]
The thermostat guides the behavior of the furnace in response to changes in temperature
in the surrounding room. This feedback mechanism is a simple example of information-processing. If you think of hundreds of mechanisms, like the thermostat-furnace
mechanism but much more sophisticated, packed together in a single place, serving the
needs of a living organism, you get a pretty good picture of what goes on in a brain.
The fact that the thermostat keeps track of the room temperature and turns the furnace on
as soon as the room temperature drops to 68 degrees is a pretty fancy kind of fact. But we
can see that the whole process is one of ordinary physical causality. For example, the
reason the bimetallic strip performs its function is that the two metals it’s made of
have two different coefficients of expansion. The furnace’s seemingly responsive behavior
can be given a reductionistic explanation in terms of the behavior of the fundamental
particles that the thermostat and furnace are made out of, when these fundamental
particles are brought together in a certain arrangement.
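For concreteness, here is a minimal sketch of the thermostat-furnace feedback loop described above (my own illustration; the setpoint and heating rates are invented):

```python
def thermostat_step(room_temp, furnace_on, setpoint=68.0, hysteresis=1.0):
    """One step of the feedback loop: the thermostat 'keeps track of' the room
    temperature only in the sense that a physical state (the bent bimetallic
    strip, modeled here as a comparison) switches the furnace on or off."""
    if room_temp <= setpoint:
        return True                  # strip bends far enough: circuit closes
    if room_temp >= setpoint + hysteresis:
        return False                 # room warm again: circuit opens
    return furnace_on                # otherwise keep doing what we were doing

# Simulate a room that loses heat when the furnace is off and gains heat when on.
temp, furnace_on = 70.0, False
for minute in range(30):
    furnace_on = thermostat_step(temp, furnace_on)
    temp += 0.5 if furnace_on else -0.4
print(round(temp, 1))  # hovers near the 68-degree setpoint by ordinary physical causation
```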
[A reductionism methodology does not imply a purely bottom-up approach. No one in
neuroscience thinks that the way to understand consciousness and other psychological
capacities is to first understand everything about the fundamental particles and the
molecules, then everything about every neuron and synapse, and thus to continue
laboriously up the various levels of organization until one finally reaches the
psychological level. The research strategy in what are commonly interpreted as prime
cases of reductionist success in science, like the explanation of thermodynamics in terms
of statistical mechanics, of optics in terms of electromagnetic radiation, or of hereditary
transmission in terms of DNA, was not bottom-up in this way. Instead, research is
conducted on many levels simultaneously. Scientific hypotheses at the various levels co-evolve in a helpful way, as they both stimulate and constrain one another.]
Anytime we do get a reasonably complete reductionistic explanation of a phenomenon,
it’s a tremendously satisfying achievement! It’s important medically, because it enables
us to devise therapeutic interventions at multiple levels. And our intellectual curiosity is
satisfied, because we understand the underlying mechanisms through which each higher
level capacity emerges. We understand the How and Why of it. (If you want to
understand how a thing works, you need to understand not only its behavioral profile, but
also its basic components and how they are organized in such a way that a certain new
kind of thing happens.)
So now, returning to our topic of consciousness, we have some idea of what a
reductionistic, physicalist explanation of consciousness would look like. We would
expect to find that consciousness has some biological function(s). That is, we would
expect to find that a mental state’s being conscious gives it causal properties that it would
not otherwise have. We would also expect to be able eventually to explain the fact that a
certain mental state is conscious in terms of certain structural and functional relations
among the lower-level entities that it’s made out of. In other words, we would expect to
find the mechanisms that produce or constitute conscious experience.
3. The Functionalist Approach to Consciousness
[slide 12. Definition of Functionalism
Functionalism or computationalism about the mind is the view that what makes a
mental state to be a mental state is its causal relations with inputs, outputs, and other
mental states.
Compare: What is it that makes a bimetallic strip a thermostat?]
Almost everything we’ve learned about the mind, about human cognition, over the last 50
years has been learned using a functionalist or computationalist paradigm. The brain is a
computer, and the mind is the software running on the computer. So it’s only natural that
the first attempts to develop a science of consciousness would approach the topic from a
computational point of view. Scientists often use a functionalist definition of
“consciousness” as access to information. The thinking goes something like this: We
know that brainstates can carry information about the environment, just as the bimetallic
strip does in the thermostat. Most of these informational patterns in the brain never
become conscious. But for those that do become conscious, we can think and talk about
the information our nervous system has picked up. So we can define “consciousness”
functionally in the following way. We say that the informational contents of brainstates
are “conscious” if they’re available for verbal report and voluntary behavior.
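As a toy picture of this functional definition (an illustrative sketch of my own, not a model from the cited literature): of the many informational states a system computes, only those placed in a globally accessible pool count as “conscious” in the access sense, because only those can feed verbal report and voluntary action.

```python
from dataclasses import dataclass, field

@dataclass
class AccessModel:
    states: dict = field(default_factory=dict)     # everything the system computes
    accessible: set = field(default_factory=set)   # the subset available for report/action

    def register(self, name, content, globally_available=False):
        self.states[name] = content
        if globally_available:
            self.accessible.add(name)

    def report(self, name):
        """Verbal report is possible only for access-conscious contents."""
        if name in self.accessible:
            return f"I am aware of {name}: {self.states[name]}"
        return f"(no report possible for {name})"

m = AccessModel()
m.register("attended visual object", "a red traffic light", globally_available=True)
m.register("postural control signal", "shift weight 2 cm left", globally_available=False)
print(m.report("attended visual object"))   # reportable, usable for voluntary action
print(m.report("postural control signal"))  # computed, but inaccessible to report
```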
[slide 13. The Function of Consciousness]
“The biological usefulness of visual consciousness in humans is to produce the best
current interpretation of the visual scene in the light of past experience, either of
ourselves or of our ancestors (embodied in our genes), and to make this interpretation
directly available, for a sufficient time, to the parts of the brain that plan voluntary motor
output, including speech.” Crick & Koch (2003:36)]
[slide 14. The Brain Produces Behavior]
We can tell someone about what we’re attending to visually, for example, but not about
the way our brains are controlling our posture, or the level of estrogen in our blood. The
information-processing in the brain falls into two pools. One pool, which includes some
of the products of high-level sensory processing and the contents of short-term memory,
can be accessed by the neural systems that produce our speech, rational thought and
deliberate decision-making. The other, much larger pool, which includes autonomic
responses, the computations controlling movement, repressed desires or memories if
there are such things, and in general the great bulk of information-processing activities of
the brain, cannot be accessed by our systems for thought, speech and voluntary action.
The rules for computing trajectories that our brain uses when we catch a fly ball, for
example, are not available for use on a physics exam. Some part of our brain has the
information and the rules it needs to compute the ball’s motion and the correct arm
movement we need to catch it, but “we” as conscious persons do not have access to the
rules our own brain uses. “Consciousness” is sometimes used in this information-processing sense, to refer to the availability of the outputs of some subsystems of the
brain (but not most) to central processing. When we’re consciously aware of a piece of
information, we can talk about it and use it to produce voluntary actions. Many different
parts of the brain can use the information. Otherwise, not.
[slide 15. Specialized Input Analyzers]
The hundreds of information processors in our brain are specialized mechanisms that
evolved for carrying out particular tasks. The surfaces of our body present the nervous
system with information in various analog formats, like spatial arrays of light intensity
from retinal receptors and temporal patterns of sound frequencies from the cochlea of the
ear. Our specialized information processors are input analyzers. They analyze the input
from our sensory surfaces. Each is a set of specialized computational routines for
converting one specialized type of analog input into the unifying, modality-independent
propositional format of the brain’s central processes of belief fixation, long-term
memory, planning and problem solving. Input analyzers are the intermediate-level
computations that accomplish this.
Each module is fine-tuned to capture specific features of environmental input, in forms
that facilitate its special computations. (Each has its own special vocabulary or format, so
they don’t talk to each other.) Features of the human auditory signal, for example, feed at
least 3 specialized processors within the language system of the brain: one to compute the
phoneme structure of speech, one to facilitate voice recognition, and one to compute
emotional tone (Etcoff 1989). The visual system of humans is thought to have 20-30
different computations running off of the same stream of retinal input, each controlling a
separate function like eye movement, grasping, ocular pursuit, object recognition, or
balance.
We have no direct conscious access to the information processing computations in our
brain. But we know they’re there. If we stand with our eyes open and think about what
the visual system is doing, it seems that it’s giving us the experience of a visual scene.
It’s somehow producing our conscious visual experience. But in fact the spatial arrays of
light intensity provided by retinal receptors are feeding separate computations of many
different behaviorally-relevant features that guide several sensori-motor routines, (and
they’re not, in fact, creating a complete visual scene). If you try to balance on one foot
with your eyes open, it’s easy. But if you try it again with your eyes closed, it’s much
more difficult! That’s because all the time our eyes are open, a certain module,
unbeknownst to us, is using visual information to compute the motor instructions for
keeping our balance (Bruce, Green, Georgeson 1996).
Keeping our balance might seem like a very simple thing, because we don’t have to pay
any attention to it to do it. But if you think of yourself trying to make a machine shaped
like us that would move around and keep itself in balance as it moved, never tipping over,
using movable parts of itself to make changes in its environment, etc., all without strings
like a puppet, or any kind of remote control, you get some idea of the complex
computations that would have to be built into the machine somewhere. In a real sense our
brains are smarter than we are: the constant unconscious computations going on in our
brain far outstrip any calculations we can make consciously. It’s important to keep this in
mind, as we try to understand consciousness. Our fleeting conscious experience rides on
top of a large and very complex set of information-processing modules in the brain, each
of which evolved to meet the biological needs of moving organisms. We share many of
these input-analyzing neural circuits with other mammal species.
[slide 16: Probability Matching in Ducks]
[slide 17: Probability Matching in Humans]
[slide 18. Target Tracking in Humans]
The topic of consciousness in this information-processing sense lends itself to scientific
study. We have learned many things about consciousness in this sense. Information from
sensation and memory guides behavior only in an awake animal, so part of the neural
basis of access-consciousness can be found in subcortical brain structures that control
wakefulness. We know that information about an object being perceived is scattered
across many parts of the cortex. So consciousness as information access requires a
mechanism that binds together data in different neural networks. Crick and Koch (1995)
proposed that synchronization of neural firing might be one such mechanism, perhaps
entrained by loops from the cortex to the thalamus, the cerebrum’s central relay station.
We know that voluntary, planned behavior requires activity in the frontal lobes. So
access-consciousness may be determined in part by the anatomy of the connections from
various parts of the brain to the frontal lobes. Jackendoff (1987,1994) observed that
access consciousness taps the intermediate levels of information processing. Visual
processing, for example, runs from the rods and cones in the retina, through levels
representing edges, depths, and surfaces, etc. to a recognition of the objects and events in
front of us, to longterm memories of the gist of what happens. We aren’t aware of the
lowest levels of processing, which go on in parallel. Our immediate awareness doesn’t
exclusively tap the highest level of representation, either. The highest levels—the gist of
a scene or a message—tend to stick in long-term memory days and years after an
experience, but as the experience is unfolding, we are aware of particular sights and
sounds, not just the gist, of what’s happening. The intermediate level of processing
produces constancies of shape, color, object identity, etc., across changes in viewing
conditions, and these tend to track the environmental object’s inherent properties. So we
can understand the engineering reasons for making the products of intermediate-level
processing in subsystems of the brain (rather than all processing, or only the highest level
of processing) available to consciousness and thus to the system as a whole. If only the
highest level of processing were available, we’d be conscious of everything as instances
of already familiar categories, and learning of new categories from experience would be
impossible.
According to one functionalist theory of consciousness (i.e. a theory that defines
consciousness in terms of the function it serves), once you have a living organism like a
human being that is behaving adaptively in its environment due to the operation of the
information processors in its brain, there is really no difference between activity that is
conscious but immediately forgotten, and activity that is unconscious. This may seem like
a crazy idea at first, but think about the familiar example of driving.
[slide 19. Driving on a Familiar Route]
You drive on a well-known route, say to work or to a friend’s house. On one occasion
you are acutely aware of all the passing trees, people, shops and traffic signals. On
another day, you’re so engrossed in thinking about something that you’re completely
unaware of the scenery and of your own actions. You get all the way there and then
realize that you drove there, unaware of what you were doing. You have no recollection
at all of having passed through all those places and having made all those decisions. Yet
you must have noticed the traffic signals and other things, because you didn’t drive
through a red light, or hit a pedestrian, or stray into a wrong lane of the road. You found
your usual route to the place you were going, and you weren’t asleep. Is it that you were
conscious of traffic conditions at each moment, but immediately forgot them? Or were
you not conscious of traffic conditions and just driving unconsciously?
[slide 20. Post-Surgery Example. ]
An outside observer would make the judgment that I was conscious, on the basis of my
behavior. I felt sure that I wasn’t conscious during that period, because of my lack of any
memory of it. It’s not obvious how to tell who is right. Could my behavior during that
period have been produced unconsciously? Should the patient herself be the judge of
whether she is conscious?]
The driving example and the surgery example highlight the important connection
between consciousness and the type of memory called autobiographical memory.
Autobiographical memory requires a degree of self-consciousness. Consciousness
without self-consciousness may be so transient and fleeting that it’s virtually
indistinguishable from no consciousness at all. I will say something more about this in
section 7.
In both the driving example and the post-surgery example we want to know whether the
person was behaving intelligently but unconsciously, or was conscious but not
remembering her experience from one moment to the next. Is there any way of being sure
about the answer to this question? Is the question a legitimate question? The question
makes perfectly good sense if being conscious means that our brain produces a movie of
our external environment and then we’re in the brain somewhere looking at the movie.
(Philosophers call this mistaken idea “the Cartesian theater”.)
[slide 21. The Little Man in the Brain]
But we know that isn’t the case. There’s no little audience-person in the brain. The brain
produces an interpretation of environmental input that guides its production of motor
activity. Whether this process is conscious or not is not always easy to tell.
So in the driving situation, you might ask yourself: “Was the red light I stopped at in
consciousness, but then forgotten? Or was it never in consciousness?” One philosopher,
Dan Dennett, rejects this as a mistaken question. On his multiple drafts theory of
consciousness, all kinds of cognitive activities, including perceptions, emotions and
thoughts, are accomplished in the brain by parallel, multi-track processes of interpretation
and elaboration of sensory inputs, and they are all under continuous revision. Like the
many drafts of a book or article, perceptions and thoughts are constantly revised and
altered, and at any point in time there are multiple drafts of narrative fragments at various
stages of editing, in various places in the brain. There is no little man in the brain who is
watching all the information-processing. There are only multiple, parallel, unconscious,
ongoing interpretations of input. The sense that there is a single narrative stream or
“flow” of consciousness comes about when one of the streams of unconscious
information-processing is probed in some way—for example, by someone asking a
question or requiring some kind of response. Some of the multiple interpretations of input
the brain produces at each instant are used in controlling actions, or producing speech,
and some are laid down in memory, but most just fade away.
Take the example of a bird flying past your window. Your conclusion, or judgment, that
you saw the bird is a result of probing the unconscious stream of multiple drafts at one of
many possible points. There is a judgment in response to the probe, and the event may be
laid down in memory. But there is not in addition to that, some bare conscious
experience of seeing the bird fly past. According to Dennett, mental contents arise, get
revised, affect behavior and leave traces in memory, which then get overlaid by other
traces, and so on. But there is no actual fact about what I was actually experiencing the
moment the bird flew past. There is no right answer, because there is no little man
watching a movie in the brain. There are no fixed facts about the stream of consciousness
independent of particular probes, so it all depends on the way the parallel stream gets
probed. If something leads you to say something about the bird, or to do some action in
response to the bird, you will have a conscious experience of the bird.
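One crude way to picture the probe idea (my own toy sketch, not Dennett’s own formulation): several parallel interpretations of the same input coexist in various stages of elaboration, and only a probe at a particular moment fixes one of them as what gets reported; before the probe there is no further fact about which draft was “the” experience.

```python
def parallel_drafts(stimulus):
    """Multiple simultaneous interpretations of one input, each becoming
    available at a different time; none is privileged until probed."""
    return [
        {"content": "motion detected near the window", "ready_at": 0.10},
        {"content": f"object identified: {stimulus}", "ready_at": 0.15},
        {"content": f"gist: a {stimulus} flew past the window", "ready_at": 0.20},
    ]

def probe(drafts, probe_time):
    """A probe (a question, a demand for action) selects whichever draft is
    most elaborated by that time; that selection is what gets reported."""
    available = [d for d in drafts if d["ready_at"] <= probe_time]
    return available[-1]["content"] if available else "(nothing to report)"

drafts = parallel_drafts("bird")
print(probe(drafts, 0.12))  # early probe: only the motion draft has been fixed
print(probe(drafts, 0.25))  # later probe: the gist is what gets reported
```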
[slide 22. Unconscious or Forgotten?]
[slide 23. Amnesia]
This type of theory of consciousness has implications for our understanding of the self, or
the soul. On this view, the self as a single unified subject of consciousness is an illusion.
What happens is that as the contents of unconscious information-processing in the brain
are fixed by probing the streams of processing at various points-- as we make judgments
and as we speak about what we’re doing or what we’ve experienced -- so the illusion is
created of there being a unified self. What we call the “self”, in Dennett’s view, is a sort
of “center of gravity” of the accumulating narratives that spin themselves together in the
brain. Just as the “center of gravity” of a physical object is not a real physical entity but
only an abstraction, so the idea of a “unified self” is not a real physical entity but only an
abstraction—a “narrative center of gravity”, the center of gravity of the loose bundle of
narratives and memories put together by our brain.
We can ask ourselves a related question: Do we have a single unified stream of
consciousness? We’re not conscious in dreamless sleep. But even when we’re awake, are
we conscious all the time? William James asked the question, way back in 1890, whether
consciousness is continuous, or “only seems continuous to itself, by an illusion?”
(1890:200) Whenever we ask ourselves, “Am I conscious now?” the answer always
seems to be “Yes”. We cannot catch ourselves not being conscious, and when we do find
ourselves being conscious, there seems to be one me, and one unified experience. But
what is it like the rest of the time? When I’m not asking myself whether I’m conscious at
the moment, am I conscious?
One possibility is that there is nothing it is like most of the time. Rather, there are just
multiple parallel streams of unconscious processing going on. Then, every so often, we
ask, “Am I conscious now?” or “What am I conscious of?” or in some other way we
reflect introspectively about what is going on. Then, and only then, is a temporary stream
of consciousness concocted, making it seem as though we have been conscious all along.
At these times, recent events from memory are brought together by paying attention to
them, and the appearance of a unified self having unified experiences is created. As soon
as attention lapses, the unity falls apart and unconscious processes carry on as normal.
Just as the refrigerator light is usually off, and the door is usually closed, so we are
usually in an unconscious state of parallel multiple drafts. Only when we briefly open the
door is the illusion created that the light is always on. (Every time I look in the
refrigerator, the light is on. So someone who doesn’t know better might conclude that it’s
always on, continuously. In the same way, every time I reflect on whether I’m conscious
or not, I am conscious. So I conclude that (except when I’m sleeping) I have a continuous
stream of consciousness.)
For a functionalist like Dennett, consciousness arises in any animal species that reaches a
certain level of self-monitoring, due to complex social relations and language. It is not
something extra, over and above the information-processing that goes on in the brain.
4. Consciousness as Subjective Experience
Functionalism has been the dominant paradigm in cognitive science for 50 years. A
tremendous amount has been learned about human cognition, or the human mind, in that
time. But recently there’s been growing dissatisfaction with functionalism, largely
because it doesn’t reflect certain important aspects of how brains actually work. An
additional dissatisfaction with functionalism stems from the fact that it seems ill-suited to
explain certain aspects of consciousness.
There’s a lot of talk these days in the philosophy of mind about “qualia” (a philosophers’
term for the qualitative properties of conscious experience). Think of the exact smell of
popcorn burning in the microwave, for example. We know the smell is caused by certain
molecules entering your nose and reacting with receptors there, but the experience of the
smell doesn’t seem to be a matter of physical molecules. In your experience, the smell is
a vivid, subjective, private quality that’s unique, and can’t be described in words. This
particular smell is an example of what philosophers call “qualia”. Sometimes questions
about consciousness are phrased as questions about qualia. How are subjective qualia
related to the objective physical world? How can a physical thing like the brain produce a
subjective, private experience?
Dennett simply denies that such things as qualia exist. He denies, in other words, that
there is such a thing as conscious experience with particular private ineffable qualitative
properties, separable from our judgments about experience (or other dispositions to
speech or behavior that our sensory discriminations put into effect). When you smell
popcorn burning in the microwave, there is no such thing, he thinks, as a single, unified
“moment of conscious experience” separate from the multiple parallel processes by
which the brain interprets the sensory input and produces a bodily response.
Dennett gives examples to illustrate his point. [slide 24: The Beer Drinker]
The experienced beer drinker says that beer is an “acquired taste”. When he first tried
beer, he hated it. But now, he loves it. What, exactly, has changed? Is it that beer tastes
different to him now than it did then? Or is taste the same, but now he likes what he
formerly hated? If there are two separate things, the quale, or how precisely it tastes to
him, and his opinion about the taste, then a person should be able to decide which has
changed. But can you decide? Assume the beer is chemically exactly the same as before.
And his behavior then and now is different: he used to drink little, now he drinks lots, he
used to say Yuck, now he says Yum, etc. Somewhere between the identical molecular
inputs then & now, and the different outputs then & now, something has changed. But is
there an isolatable quality of experience, a bare quale, that hasn’t changed? Dennett
thinks not.
If Dennett is right, there is no distinction between a stimulus s seeming to be F to a
person, and the person’s judging that s is in fact F. There is no bare conscious experience
prior to a probe of some type.
Other philosophers and neuroscientists disagree with Dennett’s position, and insist that
the qualitative aspects of experience have a reality all their own. There is something that
it’s like to feel a sharp pain in your gut, or sand between your toes. Each of these
conscious states has what philosophers call phenomenal properties, or qualia. When we
look into a green light-flash, and then close our eyes, we experience a red after-image.
Redness is one of the phenomenal or qualitative properties of the afterimage experience.
But if redness is a real property, and we make the assumption that everything real is
ultimately physical, what exactly is the redness a property of? The science of human
color vision gives us a physical account of how an afterimage is produced. But none of
the entities that play a role in the scientific explanation are red. There needn’t be anything
red in the person’s external environment, and nothing in his eye or brain is red either. Yet
the redness is a real feature of the experience. What physical entity is this experienced
redness a property of? It’s not a property of anything in the brain. Nothing, it seems, in
the neurobiological account of afterimages explains why the experience of the afterimage
feels or seems just exactly the way it does.
This is what philosophers have come to call “the hard problem” of consciousness. We
want to focus today on this aspect of consciousness as subjective experience. We say that
a being is conscious if there is something it’ s like to be that being, to use a phrase made
famous by the philosopher Thomas Nagel. A bat, for example, might be conscious. If so,
Nagel argued, then there is something it’s like to be the bat.
[slide 25. What Is It Like to Be a Bat?]
Each conscious mental state, each moment of conscious experience, has a qualitative feel
to it. (There’s something it feels like to be conscious.) Consciousness in this sense
appears to be a private, inside dimension to life processes. We might even wonder, does
everything physical have this inside dimension to it? Some philosophers [Whitehead
1929; Russell 1929; Hegel; Bradley; Royce; Maxwell 1979; Lockwood 1989; Chalmers
1996; Rosenberg 1997; Griffin 1998; Strawson 2000; Stoljar 2001] have concluded that it
does.
Consciousness in the sense of subjective experience is deeply puzzling. For one thing, it
seems that all the information-processing the brain does could just as well be done
without subjective experience. For example: any effect of actually feeling how hot the
soup is, like waiting awhile before you eat it, could be accomplished by pure information-processing triggered by a mechanical sensor for temperature. There wouldn’t need to be
conscious experience at all. The same thing—detection of temperature and avoidance of
further contact—could be programmed into a machine. This makes it appear, at least, that
consciousness is a totally mysterious and causally irrelevant side effect of the information
processing that goes on in the brain, a kind of dangling afterthought, an epiphenomenon.
[slide 26. Can a Robot be Conscious?]
And how does the brain produce subjective experience? There is at the present time no
widely accepted scientific answer to this question, though we’ll look at some attempted
answers in a moment. The state of our ignorance is such that we can imagine all kinds of
strange possibilities for arrangements between the physical and the mental. We have no
scientific theory of consciousness to tell us which of these imagined arrangements are
possible and which are not. Here is a list of some of the imaginative questions we can still
legitimately ask about consciousness, (due to the state of our ignorance), compiled from
the work of other people by Steven Pinker at MIT (1997:145-6).
[ slide 27: Pinker’s List]
1. If we could ever duplicate the information processing in the human mind as an
enormous computer program, would a computer running the program be conscious?
2. What if we took that program and trained a large number of people, say, the population
of China, to hold in mind the data and act out the steps? Would there be one gigantic
consciousness hovering over China, separate from the consciousnesses of the billion
individuals? If they were implementing the brain state for agonizing pain, would there be
some entity that really was in pain, even if every citizen was cheerful and light- hearted?
3. Suppose the visual receiving area at the back of your brain were surgically severed
from the rest and remained alive in your skull, receiving input from the eyes. By every
behavioral measure you are blind. Is there a mute but fully aware visual consciousness
sealed off in the back of your head? What if it were removed and kept alive in a dish?
4. Might your experience of red be the same as my experience of green? Sure, you might
label grass as “green” and tomatoes as “red”, just as I do, but perhaps you actually see the
grass as having the color that I would describe, if I were in your shoes, as red.
5. Could there be zombies? That is, could there be an android rigged up to act as
intelligently and as emotionally as you and me, but in which there is “no one home” who
is actually feeling or seeing anything? How does each of us know that the others in the
room are not zombies?
6. If someone could download the state of my brain and duplicate it in another collection
of molecules, would it have my consciousness? If someone destroyed the original, but the
duplicate continued to live my life and think my thoughts and feel my feelings, would I
have been murdered?
7. What does it feel like to be a bat? Do beetles enjoy sex? Does a worm feel pain when a
fisherman puts it on a hook?
8. Imagine that surgeons replace one of your neurons with a microchip that duplicates its
input-output functions. You feel and behave exactly as before. Then they replace a
second one, and a third one, and so on, until more and more of your brain becomes
silicon. Since each microchip does exactly what the neuron did, your behavior and
memory never change. Do you even notice the difference? Does it feel like dying? Or, is
some other conscious entity moving in with you?
These questions all stem from the fact that we have two basic scientific approaches to
consciousness: the neurobiological and the computational or functional -- and both of
these seem to leave the fact that a mental state is conscious an extra, dangling,
unexplained, causally-irrelevant mystery. It seems we could have a completely specified
brainstate, and not be sure whether or not the person whose brainstate it is is conscious.
(This is the point in questions 6 and 8.) Similarly, it seems we could have a completely
specified computational or information-processing state (something like a software
program), and wonder whether, if it were run on different media, any of them would be
conscious. (This is the point in questions 1, 2, 4, 5 & 7.) Consciousness is still not
successfully integrated into either one of our two major scientific approaches to the mind.
5. The Neurobiological Approach to Consciousness: Is there a neural correlate of
consciousness?
Neuroscientists often assume that a mental state (like having a thought) simply is a
brainstate. The idea is this: if the neural systems of your brain are in a particular state of
activation, then, by definition, you are having a particular conscious experience. To
some people this just seems obvious. Injuries to the brain interfere with mental
functioning. Different parts of the brain are active during different kinds of mental
activity. There’s nothing magical besides the brain inside the head, so it must be that the
mind is simply the brain.
Medically speaking, learning what happens where in the brain is immensely important. It
wasn’t very long ago that doctors trying to relieve seriously disabling and otherwise
intractable epilepsy removed the medial temporal lobe on both sides of a man’s brain, not
realizing that by doing this they destroyed his ability to lay down new personal memories
for the entire remainder of his life! This patient can learn new skills, but he cannot
remember his experiences from one moment to the next. He lives now almost completely
confined to the present moment. His life as a meaningful, ongoing narrative came to an
end on the fateful day of his surgery back in 1953.
[H.M. 1953. Bilateral temporal lobectomy. Hippocampus. Anterograde amnesia. Brenda
Milner. Procedural vs. declarative memory. Cf. Clive Wearing.]
So neuroscientific knowledge is important. Each hard-won increment in our
understanding of the neuroscience of the brain enables therapeutic interventions that save
lives and raise the quality of life for many people. There could hardly be more important
work.
It used to be that localizing the brain lesion that led to a patient’s cognitive and
behavioral deficits had to await an autopsy after death. But now neuro-imaging
techniques permit the neurologist to analyze lesions in a 3-D reconstruction of the living
patient’s brain, displayed on a computer screen, sometimes while the patient performs
certain cognitive tasks.
[slide 28. PET Scan]
[slide 29. MRI Scan]
While the therapeutic value of these neuroimaging techniques is uncontroversial, the
implications of the knowledge they generate for our understanding of consciousness is a
matter of debate. One philosopher put it this way, in an article complaining that too many
tax dollars were going into neuroimaging studies at the time.
[slide 30: Fodor’s Question]
It isn’t, after all, seriously in doubt that talking (or riding a bicycle) depends on things
that go on in the brain somewhere or other. If the mind happens in space at all, it happens
somewhere north of the neck. But what exactly turns on knowing how far north? It belongs to
understanding how the engine in your car works that the functioning of its carburetor is to
aerate the gas; that’s part of the story about how the engine’s parts contribute to its
running right. But why (unless you’re thinking of having it taken out) does it matter where
in the engine the carburetor is? What part of how your engine works have you failed to
understand if you don’t know that?
(“Let your brain alone.” Jerry Fodor, London Review of Books, 9/30/99)
Where in the brain something happens, Fodor suggests, doesn’t tell us a lot about conscious
mentality. [On a personal note: I remember that years ago, when I first heard about the new
neuroimaging techniques that were being brought into use at the time, I thought that using
PET scans and MRIs to learn how we acquire language or do calculus problems was about as
likely to succeed as trying to understand American military policy by studying where in the
Pentagon the lights were on.] We need to understand what functions the brain performs and
how, Fodor argues, but where these things happen doesn’t tell us much.
Fodor’s analogy is off the mark in important ways. Neuroimaging techniques can help us test
functional hypotheses. (They tell us, not just where the lights are on and when, so to speak,
but whose phone calls are going to whom, and under what circumstances.) Still, it’s not clear
that, even if we had complete knowledge of which neurons were firing, at what rate, at each
location in the brain at every moment of our conscious experience, that would amount to
anything more than a bare correlation. It wouldn’t explain how or why neural activity in the
brain leads to conscious experience. Why should bunches of neurons exchanging chemicals back
and forth have such a strange effect? It wouldn’t even tell us that neural activity leads to
conscious experience. Conscious experience and neural activity can be correlated, but maybe
it’s just the reverse: maybe conscious experience leads to neural activity. Or maybe both are
effects of a common cause.
In a series of influential articles in the 1990’s [1990, 1995, 1998] Francis Crick and
Christof Koch argued that neuroscience is the right way to develop a scientific
understanding of the mind, and that it was time for neuroscientists to stop avoiding the
difficult topic of consciousness. They suggested a research program to address the
question of consciousness directly, by looking for differences between neuronal
processes in the brain that are accompanied by conscious experience and those that are
not.
[slide 31: Necker Cube]
When we look at a Necker cube, we see it first one way, then another. This is called
“rivalry”. We don’t see both appearances at once, or combine them both into one. We
experience an alternation back and forth between the two appearances. This means that
our conscious experience changes while the visual input stays exactly the same. What
explains this difference? Call one way it looks to us A and the other way it looks to us B.
Can we look for a difference in the brain between the situation in which the physical
stimulus is there and the interpretation of the cube as looking A is consciously
experienced, and the situation in which the same physical stimulus is there and the
interpretation of the cube as looking A is not consciously experienced? And if we do
locate such a difference, what’s the relationship between the objective physical facts in
the brain and the subjective facts about how it appears to us?
The first research using this approach was done with macaque monkeys (who have visual
systems anatomically similar to ours), by Logothetis and his colleagues. The monkeys
were trained to report which of two pictures they were seeing by pressing a lever.
Trained monkeys were then put in an experimental set-up where different displays were
shown to each eye. They reported binocular rivalry, just as humans do in these situations.
[slide 32. Binocular Rivalry ]
[slide 33. Bistable Perception]
Next the researchers made recordings from various areas of the monkeys’ brains, using
electrodes in single cells. They were looking for the cells whose firing rate correlated
with, not the unchanging visual input, but the changing conscious perception, as reported
by the monkey’s behavior.
[slide 34. Single-Cell Recording]
They found that neural activity in the early stages of the visual pathway—primary visual
cortex (V1), and V2—was better correlated, on the whole, with the unchanging input.
The activity level in these neurons didn’t change when the monkey’s perception changed.
Further along the visual pathway, some of the cells responded to what the monkey
reported seeing. Finally in the inferior temporal cortex (IT) almost all the cells changed
their response according to what the monkey reported seeing. So if the monkey pressed
the lever to indicate a flip, most of the cells that were firing stopped firing and a different
set of cells started. It looked as though activity in this area corresponded to what the
monkey was consciously seeing, not to the physical stimulus. (Logothetis & Schall 1989;
Leopold & Logothetis 1996, 1999)
[slide 35. Macaque Results]
Does this mean that the NCC for the monkeys’ conscious vision lies in IT?
More recently, other researchers have done similar experiments with humans. Using
fMRI, EEG and single unit recording, they’ve identified changes in cortical activity that
are precisely correlated with the changes in conscious perception (Alais & Blake 2004;
Lumer et al. 1998; Brown & Norcia 1997; Kreiman, Fried & Koch 2002).
Let’s look more closely at the idea of a neural correlate of consciousness.
Most neuroscientists assume that for every conscious state we experience, there is some
minimal neural substrate that is necessarily sufficient for its occurrence. The relation
between the minimal neural substrate and conscious experience is assumed to be a lawlike relation of either causation or identity. Most vision scientists, for example, assume
that there exists somewhere in the stream of visual processing a set of neurons whose
activities form the immediate substrate of conscious visual perception. That would mean
that the occurrence of a particular pattern of activity in this set of neurons is sufficient for
the occurrence of a particular conscious perceptual state. If this neural activity happens,
the conscious experience will occur, no matter what’s going on elsewhere in the brain.
So, for example, if you could set up conditions to properly stimulate the NCC in the
absence of both retinas, or in the absence of any visual input whatsoever, the correlated
conscious vision would still occur. A growing number of investigators believe that the
first step toward a science of consciousness is to discover such a neural correlate of
consciousness.
[slide 36. NCC]
We can state the NCC assumption this way:
For every conscious experience E, there is a neural correlate of consciousness (NCC)
such that (i) the NCC is the minimal neural substrate whose activation is sufficient for
the occurrence of E, and (ii) there is a match (isomorphism) between features of the
NCC and features of E.
We need to understand 4 concepts here: a) the minimal neural substrate, b) sufficiency,
c) necessity, and d) isomorphism.
a) the minimal neural substrate
A neural system N is the minimal neural substrate for a conscious experience E if the
states of N suffice for the corresponding states of consciousness, and no proper part of N
is such that its states suffice for the corresponding states of consciousness.
So the NCC for a particular conscious experience is the smallest set of neural structures
whose activity suffices for that conscious experience.
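The logic of "smallest sufficient set" can be put schematically. In the toy sketch below, `suffices_for_E` stands in for whatever empirical test would show that activating a given set of neurons suffices for experience E; it is a placeholder, not a real measure.

```python
from itertools import combinations

def smallest_sufficient_sets(neurons, suffices_for_E):
    """Return the smallest sets of neurons whose joint activation suffices for E.
    Because they are smallest by size, no proper part of them suffices."""
    neurons = list(neurons)
    for size in range(1, len(neurons) + 1):
        hits = [set(c) for c in combinations(neurons, size) if suffices_for_E(set(c))]
        if hits:
            return hits
    return []

# Toy rule: pretend E occurs whenever n2 and n3 are both active.
toy_suffices = lambda s: {"n2", "n3"} <= s
print(smallest_sufficient_sets(["n1", "n2", "n3", "n4"], toy_suffices))  # -> [{'n2', 'n3'}]
```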
b) sufficiency. NCC --> E
Not every neural state that is correlated with conscious experience is a sufficient
condition for conscious experience. We can see this by looking at the example of
blindsight.
People with damage to primary visual cortex sometimes exhibit a phenomenon called
“blindsight”. The patient D.B., for example, had a tumor removed from area V1 on one
side of his brain, leaving him blind on the opposite side of his visual field. If he looks
straight ahead and an object is placed on his blind side, he cannot see it. (“hemianopia”)
[slide 37. Hemianopia Caused By Damage to V1]
In one experiment he was presented with a circle filled with stripes in his normal field
and asked whether the stripes were more horizontal or vertical. He had no trouble
answering correctly. Then he was shown the same thing in his blind field. He said “I
can’t see anything at all.” But when he was asked to guess on repeated trials which way
the stripes were oriented, he was correct 90-95% of the time. (Weiskrantz 1986, 1997)
[slide 38: Blindsight]
[slide 39: Testing for Blindsight]
So there’s a correlation between activity in V1 and conscious seeing: when you’re
missing some of your neural activity in V1 you’re missing some of your conscious
vision. And it’s a robust correlation -- it shows up reliably across patients with this kind
of brain damage. But we can’t conclude from this that neurons in V1 are the NCC that
Crick & Koch and other neuroscientists are looking for. Neural activity in V1 may be a
necessary condition for conscious vision, but it’s not a sufficient condition. Here’s an
analogy. Activity in the power cord of an LCD projector is positively correlated with
there being a ppt presentation up on the screen. If the power cord’s defective, you don’t
get a presentation. But the power cord activity is not a sufficient condition for the ppt
presentation. Lots of other stuff has to be working properly, too.
So the NCC is the smallest set of neural structures whose activity is sufficient all by itself
for a particular conscious experience. (Language about sufficient conditions expresses a
logical relation. Neuroscientists differ as to whether the underlying ontological relation is
one of causation or identity.) So in answer to our question about the monkey
experiments: No, activity in inferior temporal cortex is not, all by itself, the neural
correlate for the monkeys’ visual experiences.
c) necessity.
The third element is this: if discovering the NCC is to be the first step in developing a
scientific explanation of consciousness, then the relationship between the NCC and
consciousness has to hold in a lawlike way (as a natural law), and not merely by accident.
d) isomorphism
And finally, it is not enough that neural activity in the structures that form the NCC be
sufficient, as a matter of natural necessity, for conscious experience. It is also widely
assumed by neuroscientists that there must be a mapping (under some description) from
features of the conscious experience to features of the minimal neural substrate. For
example, if a certain pattern of activity in the NCC is sufficient for the occurrence of E,
and E is a visual experience of 2 surfaces with a brightness difference between them, then
the NCC must exhibit patterns of activity corresponding to the 2 surfaces and a pattern of
activity corresponding to the perceived difference in brightness.
A map is a good example of an isomorphism. The San Bernardino National Forest and a
map of the San Bernardino National Forest differ in almost every respect (size, weight,
color, history, location in space, temperature, molecular structure, etc.). But there is an
isomorphism between them, meaning that certain relations between elements of the map
reflect in a systematic way certain relations between elements of the San Bernardino
National Forest. We saw an example of isomorphism in the spatial maps formed in the
hippocampus of the mouse after training on the water maze.
Notice that isomorphism isn’t enough to make a scientific explanation all by itself.
Isomorphism works together with sufficiency and nomic necessity. You can have an
isomorphism that’s totally accidental (like the resemblance between a pattern that forms
in the clouds and President Bush’s face) and it won’t be explanatory. But for a
reductionistic explanation, the link between the NCC and conscious experience needs to
involve more than logical sufficiency and lawlikeness. If the NCC is going to be the key
to a reductionistic scientific explanation of conscious experience, there has to be
sufficiency, lawlikeness, and some explanatory mechanism or isomorphism between
them. When we say “Water is H2O” it’s not just an arbitrary correlation (even one that
holds universally and by necessity) between being water and being H2O. A physical
chemist’s understanding of the atomic structure of hydrogen & oxygen, and of the
structure of the water molecule explains why water has the properties it does. Its atomic
structure gives us the mechanisms at the atomic level that lead to the emergence of
water’s molecular properties. So the isomorphism requirement is met, generally speaking,
when we discover a mechanism at one level of the functional hierarchy that leads to the
emergence of a novel property at the next higher level.
In the case of consciousness, if we restrict ourselves to the level of neurobiology, no
generally accepted explanatory isomorphisms have yet been found.
[The most convincing example of a possible explanatory isomorphism I’ve seen is the
hypothesis of an isomorphism between our phenomenal color space (how our experiences
of color are related to each other) and its neuronal basis, modelled in terms of vector
coding.
[slide 40: Pat on vector space of opponent cell coding]
This is an interesting example of an attempted reductionistic explanation, and if we had
time, it would be worth investigating further. We might consider it as a possible topic for
some future conference.]
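As a rough indication of what "vector coding" means here, one common textbook simplification maps the three cone responses onto opponent channels; the weights below are illustrative only and are not the specific model shown on the slide.

```python
import numpy as np

def opponent_code(L, M, S):
    """Map cone activations (long-, medium-, short-wavelength) onto a point in a
    3-D opponent space: red-green, blue-yellow, and luminance axes."""
    red_green   = L - M
    blue_yellow = S - (L + M) / 2
    luminance   = L + M
    return np.array([red_green, blue_yellow, luminance])

# The hypothesis is that relations between points in this vector space mirror
# (are isomorphic to) relations between experienced colors: stimuli that feel
# similar in color should land on nearby vectors.
a = opponent_code(0.80, 0.30, 0.10)
b = opponent_code(0.75, 0.35, 0.12)
print(np.linalg.norm(a - b))   # small distance ~ similar color experience
```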
Instead, what we find about the relation between the properties of our conscious
experiences and the properties of the patterns of neural activity in the brain that are
correlated with them is a distinct absence of any explanatory isomorphism. A neural
representation of a large visual object won’t be larger than a neural representation of a
smaller one, and a neural representation of red won’t be red. The pattern of neuronal
activity in the brain that underlies our experience of a continuously filled-in page of text
won’t be a continuously filled-in neural pattern. In general, what is represented offers
little or no information about the way it is represented in the brain.
Let’s look at an example.
[slide 41. Change Blindness. Ronald Rensink (2000)]
When our eyes are open we seem to experience a uniformly detailed and complete scene
in front of us. Since our experience is as of a complete scene, the assumption has been
that the brain must somehow integrate its successive inputs from the retinas into one big
uniformly detailed and complete representation of the world in front of us, that stays
stable and complete across all our body movements, head movements, eye movements
and blinks. For example, we know that when we blink, there’s no input to the eyes for a
short period of time. But we don’t experience little black-out periods, even though we’re
constantly blinking. The assumption has been that the brain “fills in” where there are
gaps in visual input (like blinks, saccades, and the blind spot), constructing a complete
picture of the scene in the brain. But several recent experiments suggest that this is not
the case. Our brain does not create our feeling of seeing a complete scene by actually
creating a completely detailed representation of the scene to be viewed by a little man in
our brain. Our brain creates the conscious experience of a complete and continuous scene
by representing that the scene is complete and continuous, not by forming a complete and
continuous inner representation of it.
Here’s how we know this. Beginning in the 1980’s, eye trackers were used to detect a
person’s eye movements (called saccades) as they looked at a visual display. Changes
were then made to the display during their saccades. The changes were large and obvious
ones, that you couldn’t miss under normal circumstances. Still, when made during the eye
movements, they went unnoticed. If the assumption were true that our brain makes a rich
and detailed inner representation of the scene that can be used to compare details from
one moment to the next, it’s hard to see how such big changes could go unnoticed.
You can get the same effects without eye trackers using what’s called the flicker method.
Rensink, O’Regan and Clark (1997) showed an original image alternating with a
modified image (each shown for 240 msec), with blank gray screens (shown for 80 msec)
in between. This creates gaps in input similar to blinks or saccades. Then they measured
the number of presentations until the subject noticed the change. Typically subjects take
many alternations before they detect the changes, even though the changes are large ones
that would be noticed directly if presented without the blanks in between.
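Schematically, the timing of a flicker trial looks like this (using the durations just described; the display and response functions are hypothetical placeholders, not code from Rensink et al.):

```python
import time

def flicker_trial(show, original, modified, blank, change_detected,
                  image_ms=240, blank_ms=80, max_alternations=60):
    """Alternate original and modified images with blank gaps until the observer
    reports the change; return the number of alternations needed (None if never)."""
    for alternation in range(1, max_alternations + 1):
        for image in (original, modified):
            show(image)
            time.sleep(image_ms / 1000.0)
            show(blank)                      # the gray gap masks the local motion signal
            time.sleep(blank_ms / 1000.0)
        if change_detected():                # e.g. a key press from the observer
            return alternation
    return None
```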
[try some of the demos]
Here’s an explanation of what happens. When we’re not blinking or moving our eyes
(we’re fixated on the scene), motion detectors in the visual system pick up changes in
visual input and direct our attention to that location in the visual field. But when we move
our eyes this causes a massive blur of activity that swamps the change detection
mechanisms, leaving only memory to detect changes. And contrary to the idea of a
complete inner representation of the scene in the brain, trans-saccadic memory is
extremely poor. With every saccade, most of what we see is thrown away.
There are different theories about just what, and how much, is retained across blinks and
eye movements. Simons & Levin (1997) suggest that during each visual fixation we
extract the meaning or gist of the visual scene. Then, when we move our eyes, we get
new visual input, but if the gist remains the same, our perceptual system assumes that the
details are the same. So we don’t notice changes. We get a feeling of continuity and
completeness because the brain retains only the gist of the scene and uses a default
assumption that details remain the same. O’Regan and Noë (2001) have a slightly
different view. They suggest that what remains between saccades is not a picture of the
world, but the information needed for further visual exploration. The visual system has
no need to construct a complete and detailed representation of the world, they suggest,
because the world itself can serve as our external memory. Our sense of the completeness
and detail of the visual scene is based on our brain’s sensorimotor programs for visual
exploration. Whatever the final explanation for change blindness ends up being, the fact
that we don’t notice even large changes if these occur between fixations suggests that our
brains probably do not construct complete and detailed inner representations of the visual
scene in front of us.
This is just one of many illustrations of the general fact that there is a significant absence
of any explanatory isomorphism between the pattern of neural activity that constitutes the
vehicle of a representation and the content of that representation. (Philosophers call this
the vehicle/content distinction.) There is a mismatch between the content of our
experience (the feeling of a completely filled in scene) and the neurological process that
causes or constitutes it. We can describe this in philosophical terms by noting that the
content of a mental representation is often propositional. Some pattern of neural activity
in the brain represents that there’s a full page of script, or that there’s a complete visual
scene, and it does this without having to be continuous, or completely filled in, or
isomorphic with the content of the experience in any way. The brain represents that
something is the case independently of any intrinsic properties of the pattern of neural
activity that is doing the representing. [This very feature of the brain’s ability to represent
things is what permits the range, open-endedness and complexity of the thoughts we can
have, and points to the existence of some language-like code in at least some subsystems
of the brain. The relationship between this language-like code and its neural basis will be
systematic but totally arbitrary from the neuronal point of view.]
This is what has convinced many researchers and philosophers that there will never be a
reduction of mental states to the intrinsic neurobiological properties of patterns of neural
activity in the brain.
I am myself one of the many philosophers who finds the quest for the NCC, as originally
presented, to be a misconceived project. But I’d like to end this section with an argument
sometimes offered in favor of the NCC project, because I think it’s an argument that has
some merit.
The argument goes something like this. The NCC, if there is one, will not be a simple
structure, like a single set of neurons in the monkey’s inferior temporal cortex. If there is
an NCC, it will no doubt be a very complex reality, involving certain levels of activity in
certain kinds of neurons in certain cortical layers in particular anatomical structures of the
brain with re-entrant connections to certain other parts of the brain, whose activity is
synchronized in certain ways, etc., etc. So when a skeptic about neural reductionism says
“How can any pattern of neural activity possibly explain conscious experience?” this
might be a little like asking the similar question: “How can the activity of simple nonliving things like molecules possibly explain life?” We no longer take this question about
life seriously. Now that we understand the chemical nature of genes, the great subtlety,
sophistication and variety of protein molecules, the elaborate nature of the control
mechanisms that turn genes on and off, and the complicated way that proteins interact
with and modify other proteins, the question loses its meaning. We realize that these
complex processes of metabolism, replication, gene expression, and so on are life. In a
similar way, the neuroscientist looking for a reductionistic, physicalist explanation of
consciousness may reasonably believe that once we have the full science of the NCC
we’ll see that that complex pattern of activity and connectivity just is consciousness.
6. Combining the Functionalist and Neurobiological Approaches
One way to accommodate the requirement for an explanatory isomorphism between the
NCC and conscious experience, without committing vehicle/content confusions, would
be to suggest that the isomorphism holds only at the level of informational content. In
other words, one could admit that there is no reductionistic explanation of how intrinsic
electrical or chemical properties of neurons constitute or cause conscious experience, but
then claim that it’s similarity of informational content that gives the known correlations
their explanatory force. Crick and Koch may have had something like this in mind in
their NCC articles. They said, for example, that “whenever some information is
represented in the NCC it is represented in consciousness.” (2003:35)
So the idea would be something like this. You don’t just look for a set of neurons whose
firing pattern covaries in a systematic way with the subject’s report of a certain conscious
experience. You look for one that covaries in that way and has the same informational
content as the conscious experience. If you discover this, you might think, you’ve really
hit pay dirt – you’ve discovered the place in the brain where the conscious experience
happens, and you’ve got an explanation why the experience is of what it’s of—why it’s
an experience of seeing that the door in front of you is open, for example.
But if this is what the program calls for, then the explanation of conscious experience will
be a functional explanation, not a narrowly neurobiological explanation. This is because a
pattern of neural activity in the brain has no informational content whatsoever all by
itself. It gets its informational content from the role it plays in some larger arrangement
that includes it. Information is a relation. To see this, let’s look at the thermostat example
again.
[slide 42. Thermostat Again]
Intrinsic electrochemical properties of neurons and patterns of neural activity in the brain
are like the coefficients of expansion of the two metals in the bimetallic strip. They only
carry information when they’re in a certain arrangement with other things. They contain
no information all by themselves. They also cause and explain the behavior of the larger
system of which they’re a part only when things are arranged in that very special way.
(It’s for this reason that neuroscience is broadly functional in its orientation. Remember
how the reductionistic approach we looked at in the maze learning example combined
functional decomposition and neural localization.)
[One of the interesting details about the Logothetis experiments is that the receptive-field
properties of monitored single neurons depended on what the animal as a whole was
doing. In these experiments the monkey’s head was restrained and it was trained to
maintain fixation on a certain spot. But even in these restrained conditions it was found
(1989, 1996) that the response properties of neurons thought to be the NCC “were
influenced by the perceptual requirements of the task” (1996:551). Some of the cells that
responded preferentially to the direction of motion or the orientation of a grating when
the monkey’s task was to discriminate these features, showed no such preferences when
their receptive fields were mapped conventionally during a fixation task. Other studies on
alert monkeys have shown that attention and the relevance of a stimulus for the
performance of a behavioral task can have a significant effect on the responses of
individual neurons (Treue & Maunsell 1996; Moran & Desimone 1985; Haenny et al.
1988). The lesson we should draw from this is one insisted on by Francisco Varela: there
is no way to establish the receptive field contents of individual neurons or neural
networks independently of the sensorimotor context of the animal as a whole (1984;
1991).]
As a way to summarize and reflect on what we’ve seen so far, I’d like us to consider the
question of consciousness in split brain patients.
[slides 43-52 Split Brain Patients]
[slide 53. Which Method Is Appropriate? (1) Is the RH conscious, on functionalist
grounds?
(2) Is the RH not conscious, on functionalist grounds? (Can we ever be sure that an
unreported event is unconscious? How would we know whether it’s unconscious or
conscious but not remembered?)
[In some cases, apparently unconscious events may be momentarily conscious, but so
quickly or vaguely so that we can’t recall them even a few seconds later. (Sperling, iconic
memory.) William James understood this problem, and suggested that there may be no
unconscious psychological processes at all! (James 1890. Baars p. 6)]
(3) Should we look for the NCC in the LH, and then see if a similar neural pattern of
activity occurs in the RH? If we succeed in this, can we determine whether the RH is
conscious on neurobiological grounds?]
7. Meaning and the Importance of the Self
[slide 54. In one of their recent articles, Crick & Koch say the following:
“An important problem neglected by neuroscientists is the problem of meaning.
Neuroscientists are apt to assume that if they can see that a neuron’s firing [in a monkey’s
brain, for example] is roughly correlated with some aspect of the visual scene, such as an
oriented line, then that firing must be part of the neural correlate of the seen line. They
assume that because they, as outside observers, are conscious of the correlation, the firing
must be part of the NCC. [But] this by no means follows.” (2003:48)]
The “problem of meaning” that Crick & Koch refer to here is an essential aspect of the
puzzle of consciousness. Whenever there is conscious experience, there is a subject of the
experience, who is conscious of something as being a certain way. Conscious experience
is always of something (either in the external environment, our body, or our memory).
But there’s also always a subjective pole, so to speak. We’re always also conscious (at
least to some degree) of ourself having the experience. Conscious experience presents
something as appearing a certain way to us. How does this happen?
Suppose you have a computer running a program that monitors the inventory at a
supermarket. Given a string of 0s & 1s as input (a 6-oz. can of Campbell’s tomato soup
has just been scanned by the optical scanner at a checkout stand), the computer will go
through a series of computations and emit an output (the inventory count of that item in
stock has been adjusted downward by one can). Thus, the input string of 0s & 1s
represents a can of Campbell’s tomato soup being sold, and the output string of 0s & 1s
represents the amount of Campbell’s tomato soup still in stock. When the manager
checks the computer for a report on the available stock of that item, the computer
“reports” that the present stock is such and such, and it does so because “it has been told”
(by the checkout scanners) that 25 cans have been sold so far today. The relation between
input & output is a physical, causal relation.
It makes no difference to the software program the computer is running what the strings
of 0s and 1s mean. If the input string had meant the direction and speed of wind at the
local airport, the computer would have gone through exactly the same physical
computational process and produced the same output string. What the input and output
strings stand for is irrelevant to the computation. [Jaegwon Kim 1996]
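A few lines of toy code make the point vivid: the same program, run on the same bit string, does exactly the same thing whether we gloss the bits as cans of soup or as wind speed. The interpretation lives in our labels, not in the computation.

```python
def update_count(count_bits: str, decrement: int = 1) -> str:
    """Treat the input as an unsigned binary integer and subtract `decrement`."""
    return format(int(count_bits, 2) - decrement, "08b")

stock = "00011001"            # 25, under the gloss "cans of soup in stock"
print(update_count(stock))    # '00011000' -> 24 cans left

# Relabel the very same bits as "wind speed in knots at the local airport":
wind = "00011001"             # 25 knots, say
print(update_count(wind))     # identical computation, identical output
```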
So there’s no meaning for the computer. When the computer is hooked up into the
supermarket situation in the right way, its input carries context-relative information about
its environment. The input string carries information about the can of soup being sold
because it was caused by the can of soup being sold, in much the same way as the
curvature of the bimetallic strip in the thermostat carries information about the air
temperature in the surrounding room, because it is caused by the air temperature in the
surrounding room. But the information has no meaning for the computer itself.
Let’s assume that our brains produce representations of external objects and events, like
the spatial maps in the hippocampus of the mouse’s brain. The question is, Who reads the
maps? How does the brain produce the subject of conscious experience, the experiencer
for whom or to whom experience presents something as having some property? Input and
output strings have no meaning for the computer, but conscious experience always has
meaning for the organism having it. Why is this? What makes the difference? How do
brainstates, unlike computer states, give rise to an experiencing subject for whom the
information the brainstate carries has some particular meaning?
The answer to this question has 3 parts: information, meaning, and self-representation.
We’ve already talked about information. Our brains contain a large collection of
specialized information-processors, controlling everything from heart rate to voluntary
activity. We can see how patterns of neural activity in various parts of the brain can carry
information about the environmental variables that cause them. Information is simply the
nonrandom covariance between the properties of two communicating systems, like the
nonrandom correlation between the presence of some feature in a stimulus and the firing
of certain cortical neurons. Information is everywhere. But we can’t speak of meaning, or
cognitive content, or representation unless the system that contains an informational
state can use it in some way.
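One standard way to quantify this kind of nonrandom covariance is the mutual information between the stimulus feature and the cell's firing. Here is a minimal sketch with made-up probabilities:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) from a joint probability table p(feature, firing):
    rows = feature present/absent, columns = cell fires/stays silent."""
    joint = np.asarray(joint, dtype=float)
    p_feature = joint.sum(axis=1, keepdims=True)
    p_firing  = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (p_feature @ p_firing)[nz])).sum())

# Hypothetical probabilities: the cell usually fires when the feature is present.
p = [[0.40, 0.10],    # feature present: fires, silent
     [0.05, 0.45]]    # feature absent:  fires, silent
print(mutual_information(p))  # > 0 bits: the firing covaries nonrandomly with the feature
```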
[slide 55. A Definition of Meaning
A pattern of neural activity in a subsystem of the brain can be said to have cognitive
content, or to be a representation, when that pattern of activity is causally correlated
with the presence of a particular environmental feature and the presence of the state
modifies the behavior of the organism in ways specifically adaptive to that environmental
feature. ]
The simplest kind of representational state is embedded in a single, fixed adaptive
behavior. A well-known example of this kind of representation exists in the frog brain.
There are specific neurons in the visual system of a frog that are excited only by small
moving objects in its visual field, and that produce “fly-catching” movements of the
frog’s tongue. In this situation we can say something about what the informational state
of the neurons means for the frog. It carries information about or indicates anything that
causes it, under any description (a fly, the set of cells making up the fly, the set of
molecules making up the fly, a fly-looking moving object like a BB, the molecules the
BB is made of, the atoms the BB is made of, etc.). But it only means, or represents,
what it has the function of indicating. To the frog, the stimulus that sets off its fly-catching
tongue movement has a meaning that we would express with a word like “food” or
“prey”. (And because it has this more restricted meaning for the frog, it can be mistaken,
as it is when the frog goes into this representational state in the presence of a BB rather
than a fly.)
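We can caricature the frog's fixed detector in a few lines to keep the distinction straight between what a state indicates (whatever in fact causes it) and what it represents (what it has the function of indicating). Everything here is a toy of my own devising, not a model of frog physiology:

```python
def detector_fires(stimulus: dict) -> bool:
    """The frog's fixed detector is keyed only to small, moving things."""
    return stimulus["small"] and stimulus["moving"]

def frog_response(stimulus: dict) -> str:
    # The detector state *represents* food/prey; that is what it has the function
    # of indicating, regardless of what actually caused it on a given occasion.
    return "snap tongue (meaning: food)" if detector_fires(stimulus) else "ignore"

fly = {"small": True, "moving": True, "is_food": True}
bb  = {"small": True, "moving": True, "is_food": False}   # an airgun pellet

print(frog_response(fly))   # indicates a fly and correctly represents food
print(frog_response(bb))    # indicates a BB but still means "food": a misrepresentation
```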
[slide 56. Meaning for a Frog]
So a brainstate, when it’s in a particular environmental context, can carry information,
and the behavioral repertoire of the organism whose brain it is determines the meaning
that that information can have for it.
But even when we know a certain pattern of neural activity somewhere in a creature’s
brain carries information about something in the environment, and we know that the
information is modifying the behavior of the organism in some way, we can’t know
exactly what meaning that information-carrying brainstate has for the creature.
[slide 57. What Does What the Pigeon Sees Mean to the Pigeon?]
The meaning that a neural state can have for an organism is limited by the uses the
organism can make of it. The pigeon’s behavior shows that it can discriminate the door
with a prime number on it, but not that it discriminates it as a door with a prime number
on it.
The third element is self-representation. Consciousness as we know it is self-referential.
The brain will not produce conscious awareness unless the nervous system also generates
a representation of self—a representation that establishes a “point of view”. The
neurologist Antonio Damasio has emphasized that the neurobiological mechanisms for
visual awareness, for example, are essentially interconnected with the mechanisms for
representing oneself as a thing that has experiences, feels, remembers and plans; as a
thing occupying space and enduring through time. Damasio’s ideas come from decades
of observing the ways in which consciousness is related to self-representation, and how
that in turn is related to body-representation. Body representation, which systematically
integrates environmental stimulation and body-state information, provides the scaffolding
for self-representation, and self-representation is the anchor point for conscious
awareness. We have already seen how important memory is for conscious awareness.
Someone who is not forming autobiographical memories of their current experience may
be judged to be conscious by others who are observing their speech and behavior, but
they will not be judged to be conscious by themselves. There is no way for us to
distinguish in ourselves between being unconscious and being conscious without
memory. Similarly, a person who has deficits related to self-representation, or an infant
whose self-representational capacities and “theory of mind” are in their early stages, will
not be conscious in the fully human sense of the word.
There is a growing body of research on deficits related to self-representation and on the
development of self-representation and “theory of mind” in infancy and childhood. I
believe this research will become increasingly important in the future scientific study of
consciousness, so it’s especially exciting that we will have the opportunity to hear about
developmental cognitive neuroscience from two of our speakers today. The development
of self-consciousness and a full theory of mind is probably unique to humans (though
elements of both are present in some non-human primates), and these are, I believe, a key
to future progress in the neuroscience of consciousness.
8. Broader Implications
Consciousness fits uneasily into our scientific conception of the natural world. On the
most common scientific conception of nature, the natural world is the physical world. But
given our experience of consciousness and our common sense understanding of it, it’s not
easy to see how consciousness can be something physical. It seems that to understand the
place of consciousness in the natural order, we must either revise our conception of
consciousness, or revise our conception of nature to include the non-physical.
As long as we’re thinking of “consciousness” just as information processing -- the ability
to discriminate stimuli, or to monitor internal states, or to control behavior --
consciousness can be explained in computational terms. I have tried to give you some
idea of how this would go, using simple examples like thermostats and the frog’s
fly-catching neural circuits. The task is to explain certain behavioral or cognitive functions
that play a role in the production of behavior. To explain the performance of such a
function, one need only specify a mechanism that plays the relevant role. And there is
good reason to believe that neural or computational mechanisms can play those roles, as
we saw in the water maze example.
The hard problem of consciousness is the problem of experience. Human beings have
subjective experience. Conscious experience has both an objective and a subjective pole.
It is an experience of something in the world as being a certain way, accompanied by an
experience of myself as taking it in that way. This aspect of consciousness, its internal
aspect so to speak, does not seem to be required by anything we know about either the
neurobiology or the computational states of the brain. We can’t really say at this point
why the processes that go on in the brain don’t just go on “in the dark”, without
consciousness. This is the central mystery, the hard problem, of consciousness.
It’s possible that consciousness is a physical phenomenon, and that there is a reductionist
explanation of how physical states produce consciousness, but that human beings will
never find it. Maybe we’ve run up against the cognitive limitations of our species.
Electromagnetism is a perfectly natural physical phenomenon, even though oysters can’t
understand it. Oysters just don’t have the right cognitive equipment to understand that
kind of thing. Likewise, since human beings have a particular kind of perceptual system
and particular kinds of computational capacities, it could be that consciousness is a
straightforwardly physical phenomenon that we simply don’t have the right cognitive
equipment to understand. (Colin McGinn, Steven Pinker)
This is certainly a logical possibility. But it would be premature to adopt this position as
anything more than a bare possibility at this point in time. We are in the very early stages
of the neuroscience of consciousness. We may not even have the phenomenon of
consciousness identified properly. One of the interesting things about studying the history
of science is that we see that over and over again, people’s sense that they have a
perfectly good understanding of what some familiar word refers to, turns out to be
mistaken.
Here’s an example. In the early stages of a scientific investigation, a thing’s category
membership is determined largely by similarities in easy-to-observe properties. For many
centuries in the pre-modern era the category “fire” included a wide range of phenomena,
all of which involved the giving off of heat or light. But as physics and chemistry
progressed in the modern period, more theoretically-informed properties were used to
determine category membership, and the phenomena had to be regrouped.
[slide 58. The Definition of “Fire” ]
We’re in the same situation today with consciousness. We don’t have a scientific theory
of consciousness, so we don’t have the proper theoretically-informed properties for
determining category membership. There are familiar mental phenomena about which we
cannot even say for certain at this point whether they’re conscious or not, like the
post-surgery experience I described, or the RH information-processing of split-brain patients.
This is a clear symptom of the absence of theory, and of the fact that we’re at the very
beginning of the science of consciousness. It’s very possible that consciousness is a
physical phenomenon, and that we will one day have a reductionistic scientific
explanation of it.
Still, there are things about consciousness that do seem to make it a special case,
something especially difficult to reduce to physical properties. Consciousness seems to
resist reductionist explanation in a way that other phenomena do not. Maybe
consciousness is not a physical phenomenon. It could be a natural, but non-physical,
phenomenon—something that is completely natural but cannot be given the type of
reductionistic explanation usually pursued in the sciences. Some very great philosophers
and scientists, both past and present, have held this view. Prof. Hoffman’s theory, for
example, is a non-physicalist theory of consciousness.
Non-physicalists about consciousness sometimes repeat an observation that goes back to
Bertrand Russell and Whitehead in the early years of the 20th century. Russell pointed out in
The Analysis of Matter (1927) that physics characterizes physical entities and properties
by their relations to one another. For example, a quark is characterized by its relations to
other physical entities, and a property such as mass is characterized by an associated
dispositional role, such as the tendency to resist acceleration. Physics says nothing about
the intrinsic nature of these entities and properties. Normally, where we have relations
and dispositions, we expect some underlying intrinsic properties that are the basis for the
dispositions and relations. (An intrinsic property is a property an entity has independently
of its relations to other entities.) Physics is silent about the intrinsic nature of a quark, or
about the intrinsic properties that play the role associated with mass. The properties that
figure in the fundamental theories of physics are all dispositional or relational properties.
So maybe every real individual entity in the universe has an inside to it, as well as its
external relations described by the laws of physics. If this were true, then the intrinsic,
qualitative features of subjective experience would be adumbrated in
the simpler intrinsic properties of all other entities in the universe, and consciousness
would be the key to a new metaphysics of nature.
Two philosophers, Chalmers & Jackson, have reintroduced Russell’s observation into the
contemporary discussion of consciousness. They argue that by the very nature of physical
explanation, physical accounts explain only structure and function, where the relevant
structures are spatiotemporal structures, and the relevant functions are causal relations in
the production of a system’s behavior. Explaining structures and functions cannot, as a
matter of principle, suffice to explain the intrinsic, qualitative features of subjective
experience. So subjective experience is not physical. (Chalmers 2002:248)
[slide 59. The Explanation Argument
1. Physical accounts explain at most structure and function.
2. Explaining structure and function does not suffice to explain consciousness.
3. No physical account can explain consciousness. (1,2)
4. What cannot be physically explained is not physical.
Therefore, consciousness is not physical. (3,4)]
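For those who like to see the argument's shape laid bare, here is one way to formalize it; the predicate names are my own placeholders, and premise 3 is derived from premises 1 and 2 rather than assumed, as on the slide.

```lean
-- A schematic rendering of the Explanation Argument, on my reading; the predicate
-- names are placeholders. Step 3 is derived from premises 1 and 2 rather than assumed.
theorem explanation_argument
    {Account Phenomenon : Type}
    (physicalAccount      : Account → Prop)               -- "a is a physical account"
    (explainsOnlyStructFn : Account → Prop)                -- "a explains at most structure/function"
    (explains             : Account → Phenomenon → Prop)   -- "a explains phenomenon p"
    (physical             : Phenomenon → Prop)             -- "p is a physical phenomenon"
    (consciousness        : Phenomenon)
    -- Premise 1: physical accounts explain at most structure and function.
    (p1 : ∀ a, physicalAccount a → explainsOnlyStructFn a)
    -- Premise 2: explaining only structure and function does not explain consciousness.
    (p2 : ∀ a, explainsOnlyStructFn a → ¬ explains a consciousness)
    -- Premise 4: what cannot be physically explained is not physical.
    (p4 : (¬ (∃ a, physicalAccount a ∧ explains a consciousness)) → ¬ physical consciousness) :
    ¬ physical consciousness :=
  -- Step 3 (from 1 and 2): no physical account explains consciousness; then apply premise 4.
  p4 (fun ⟨a, ha, he⟩ => p2 a (p1 a ha) he)
```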
Consciousness as subjective experience has so far resisted reductionistic scientific
explanation in a way that other phenomena have not. There are several alternative
conclusions we might draw from this fact. Maybe consciousness will be reduced to
physical properties and laws in the future, and the reason we can’t envision exactly how it
will go at this point in history is that we’re just at the beginning of this scientific project.
People who hold this view feel they have history on their side. A second possibility is
that consciousness is physical and a reductionistic explanation is possible in principle, but
humans don’t have the cognitive equipment they need to do it, so we won’t ever know
what the reductionistic explanation of consciousness is, even though one exists. A
scientific explanation of consciousness will elude us by reason of our own cognitive
limitations. A third conclusion we might draw is the conclusion Prof. Hoffman draws in
the paper he’ll be presenting today, that physicalism is simply mistaken, that
consciousness is fundamental and physical properties and laws are derived from
consciousness.
The fact that information processing in humans is accompanied by subjective experience
is not necessitated or entailed by anything in our current neurobiology or cognitive
science. It is thus far unexplained. If this remains the case, it may mean that
consciousness involves something ontologically novel in the universe. The philosopher
Saul Kripke expressed the thought this way: After choosing all the physical laws (plus
physical constants and initial/boundary conditions) for our universe, God had to make
further decisions, having to do with consciousness. In other words there are ontologically
fundamental features of our universe over and above the features characterized by
physics. Just as we take spacetime, charge and mass to be fundamental
features that cannot be further explained, perhaps consciousness is a fundamental feature
in the same way.
Let me set out some possible ways of understanding the relationship of consciousness to
the physical in a schematic way. It could be that consciousness emerges from the physical
when physical entities are arranged in the right kinds of ways. This is emergence in the
weak sense.