Flavio Cangemi
Summer 2013
A Complexity-Based Theory of Mind
The academic divide
The debate over the mind-body problem is taking place on a landscape divided into two
largely antithetical camps. One side argues that mental events can be fully explained in physical
terms—that mind can be reduced to brain. The other expresses the concern that reductionist,
physicalist theories fail to capture some essential quality of mental events; on this view, the
reductionists are not explaining the mind so much as explaining it away. The model I will suggest, based on emergentism, provides an altogether different perspective on the problem, one that avoids the pitfalls that have traditionally plagued both camps in this debate. My
aim is to demonstrate the conceptual advantages of the emergentist approach and begin to draw
out its philosophical implications. Before delving into the specifics of this model, however, we
must be clear on what a satisfactory theory must account for.
A proper theory of mind must, in some fashion, explain the origin, nature, and purpose of
mental events. But this broad definition leaves room for several models that may be found
unsatisfactory by the anti-reductionists. A reductionist approach may meet these standards by
providing a functional explanation of the neural activity that apparently gives rise to mental
events. Thus, for instance, a caricatured version of the reductionist might be content to say that
mental event x is associated with elevated levels of neurotransmitter y, and thus that explaining
the presence of y is all that is required to provide a full account of x. But this explanation, the
anti-reductionist would claim, is entirely missing the point of a theory of mind. David Chalmers
argues that the problem of mind can be divided into the easy and the hard problem of
consciousness, and that functional explanations such as the one I posited would only answer the
easy problem. Chalmers characterizes the hard problem as follows:
What makes the hard problem hard and almost unique is that it goes beyond problems
about the performance of functions. To see this, note that even when we have
explained the performance of all the cognitive and behavioral functions in the vicinity
of experience - perceptual discrimination, categorization, internal access, verbal report
- there may still remain a further unanswered question: Why is the performance of
these functions accompanied by experience?1
Thus, if we take his criticism seriously, we should revise our parameters. It is not enough for a
theory of mind to explain mental events; it must explain our awareness of these events: what
makes this awareness possible and what makes it useful (or, if not useful, why it occurs).
We must now verify whether reductive approaches are indeed ill-equipped to produce
a model that satisfies these standards. Daniel Dennett is optimistic about the possibility of a
reductive solution to the problem of consciousness. In Consciousness Explained, Dennett claims
that functional accounts of our neural machinery are enough to account for the phenomenon of
consciousness. He does not, however, answer Chalmers' hard problem, because he believes it
does not exist. Dennett claims that the academic tradition of attempting to establish the role and
purpose of qualia, the subjective impressions associated with experience (e.g. the sensation of
seeing red), has been leading scholars concerned with the mind-body problem astray.2
The problem with Dennett's view is that the fact that qualia are difficult to nail down and
seemingly impervious to objective analysis does not justify the conclusion that they play no role
in consciousness or, worse yet, that they do not exist.
1 Chalmers, David. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies 2(3): 200-19, 1995.
2 Dennett, Consciousness Explained.
The drive to understand qualia arises from
the shared human phenomenon of experiencing mental events; Dennett believes that rejecting
qualia is the key to resolving the mind-body problem, when in fact it merely discards the most difficult part of the problem, leaving his supposed solution unintuitive and deeply unsatisfying.
The question now is whether reductionism is inherently ill-equipped to answer the hard
problem of consciousness. Could a reductionist tackle the issue of qualia, or are reductionist
approaches incompatible with this phenomenon? A reductionist, again, is committed to the belief
that a system can be explained entirely by reference to its constituent parts. To explore the
potential of this approach, then, we should consider a thought experiment conceived by Frank
Jackson,3 a philosopher who has spent a significant portion of his career wrestling with the mind-body problem. Mary is a blind scientist who has spent all of her life in a laboratory, tirelessly piecing together the workings of the brain, particularly the visual system. Her devotion and intellect allow her to achieve a complete understanding of the physiology of sight. Some time
later, a cure for her blindness is discovered and she regains the ability to see. The question now
is: has Mary learned anything new upon seeing colors and shapes for the first time? Pre-cure
Mary represents the highest expression of knowledge that reductionists can hope to achieve; she
fully understands how sight works on a physical level. But the information she receives upon
actually seeing for the first time is qualitatively different from the physical data she has collected in the laboratory over the course of her life. Qualia, then, do not merely exist; they are also causally efficacious: if Mary has any sort of reaction upon seeing for the first time (even just a
surprised gasp), then it must have been provoked by qualia.
3 Jackson, Frank. "Epiphenomenal Qualia."
If pre-cure Mary is indeed the reductionist ideal, then, it is not possible for a reductionist approach to answer the hard problem
of consciousness.
Though pure reductionism does not seem to offer much promise for the resolution of the
hard problem of consciousness, its appeal is obvious: reductionism offers a practical, scientific
approach to the problem of consciousness. As such, despite the questions that have been raised
with regards to its potential to provide a philosophically interesting answer to the problem of
consciousness, its concreteness provides it with an apparent advantage over alternative
approaches.
Is the implied criticism of non-reductionist approaches to the mind-body problem -- that they lack this concreteness -- well-grounded?
Before answering this question, it is necessary to have an overview of the relevant non-reductionist literature. The classic non-reductionist solution to the mind-body problem is
Cartesian dualism, wherein the mind is an entirely different substance from matter; the mind
exists within and interacts with the mechanical body, but the two entities are distinct. The pure
dualism of Descartes has fallen out of vogue in modern times, and for good reason: it lacks the
empirical evidence that is expected of a rigorous scientific theory, relying instead on scholastic
proofs of dubious validity.
The apparent flimsiness of dualist theories and the ensuing modern rejection of dualism
means that there are few thinkers who still defend a dualist approach to the mind-body
problem. Prominent among these is David Chalmers, the philosopher who provided us with the
notion of the hard problem of consciousness. Chalmers' version of dualism posits that
information has physical as well as phenomenal properties. He claims that "the differences
between phenomenal states have a structure that corresponds directly to the differences
embedded in physical processes"4 and thus that there is always a phenomenal correlate to the
physical properties of information. If information has a dual nature, Chalmers argues, then it
would make sense for its physical aspect to be processed by the brain and for its phenomenal
aspect to give rise to conscious experience.5
His account of consciousness focuses on the object of experience (information) rather
than its subject (the individual). He posits a correlation between the processing power of an
entity and the complexity of its conscious experience: "Where there is simple information
processing, there is simple experience, and where there is complex information processing, there
is complex experience".6 Chalmers' emphasis on information rather than on the individual's
mental processing of that information, however, is a problematic approach, as evidenced by the
following conjecture: "A mouse has a simpler information-processing structure than a human,
and has correspondingly simpler experience; perhaps a thermostat, a maximally simple
information processing structure, might have maximally simple experience?".7 The idea of a
thermostat having anything resembling conscious experience is absurd; the capacity of an entity
to process physical data is not sufficient for it to process phenomenal data. The thermostat
conjecture shows that Chalmers is taking a misguided approach to the problem of consciousness.
In his investigation of consciousness, Chalmers focused almost exclusively on the object of
consciousness (information) and shunned the subject of conscious experience—the entity whose
inner workings the mind-body problem concerns itself with.
4 Chalmers, "Hard Problem," 18.
5 Chalmers, 19.
6 Ibid.
7 Ibid.
The flaws in Chalmers' model of the mind are typical of efforts to provide a non-reductionist account of the mind. Despite the evident limits of the reductionist approach, non-reductionists have struggled to provide an alternative model that is intuitive and backed by a solid methodological framework. The debate over the mind-body problem, then, is at an impasse.
To get the gears moving on the mind-body problem, we need an approach that avoids
becoming mired in the reductionism/non-reductionism debate. The bodies of theories associated
with these two labels have helped illuminate the difficulties inherent in answering the mind-body
problem, and perhaps even the contours of its resolution, but a different perspective is needed if
there is to be any further progress in this field.
A 'complex' take on the problem
In order to develop such a perspective, the lessons learned from the critique of the
reductionist and non-reductionist views must be meticulously applied. This debate elucidates two
rules which, though each seemingly valid in its own right, appear to be in tension: 1) mental
events must be shown to be fully caused by the brain; 2) mental events cannot be fully reduced to
physical processes. A model of the mind that strives for scientific credibility cannot violate the
first rule; doing so would imply a non-physical cause for mental events, causing the theory to degenerate into unverifiable spiritualism. At the same time, however, the analysis of neural processes
does not seem able to yield the discovery of the mind. A successful approach to this problem
should not seek to trivialize either of these rules, but to reconcile them in a single unified theory.
I posit that an approach that views the mind as an emergent property, in the vein of
modern complexity theory, is capable of doing just that. Emergence is a mechanism that has been
fruitfully applied across multiple disciplines to explain the existence and order of complex
systems. The idea behind emergence is that the individually simple interactions between the units
in a system give rise to a structure that cannot be intuited by reference to its individual
constituents. Thus, for instance, the concept of emergence has been used to model the behavior
of colonies of eusocial insects such as bees. The order and efficiency of a bee colony does not
derive from central planning; individual bees are not the masterminds behind the structure of
their colony. Rather, the structure emerges as a result of a multitude of simple instinctual
behaviors exhibited by the bees. The analogy with the mind is very easily made: the mind is the
emergent phenomenon that derives from simple interactions between neurons. An emergent
theory of mind, then, respects the first rule (the mind is caused by the brain) as well as the
second (the mind cannot be fully reduced to the brain).
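The bee-colony analogy can be made concrete with a toy simulation. The sketch below is my own illustration, not drawn from this essay's sources: it uses Conway's Game of Life, in which every cell obeys the same two local rules, yet a coherent structure (the "glider") arises and travels across the grid even though no individual cell's rule makes any mention of gliders.

```python
# Each cell follows the same two local rules (a live cell survives with two
# or three live neighbors; a dead cell becomes live with exactly three),
# yet a mobile structure -- the "glider" -- emerges at the level of the whole.

def step(live, size):
    """Advance one generation of Life on a size x size toroidal grid."""
    counts = {}
    for r, c in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                cell = ((r + dr) % size, (c + dc) % size)
                counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state, 8)

# After four generations the same shape reappears, one cell down and right:
shifted = {((r + 1) % 8, (c + 1) % 8) for r, c in glider}
print(state == shifted)  # True
```

The glider is to the grid what the hive is to the bees: a stable pattern that exists only at the level of the whole.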
The concept of emergence is central to the budding field of complex systems theory, or
simply complexity. The latter is an interdisciplinary endeavour to uncover the basic mechanisms
that allow complex phenomena to emerge from interactions between simple agents. Complex
systems theory can thus provide a rigorous scientific framework within which the mind as an
emergent phenomenon can be fruitfully analyzed. Before we get to the mind, however, it is
necessary to have a brief overview of the history and principles of complex systems theory.
Though systems characterized by complex interactions have naturally been an object of
interest to scholars for centuries, the formal study of complexity is a relatively recent
development in the history of science. The study of complex systems came into its own as an
interdisciplinary field with the establishment of the Santa Fe Institute in 1984.8
8 Waldrop, Complexity.
Several researchers, of course, identified and grappled with the problems of complexity prior to the inception of the Santa Fe Institute,9 but they did so independently, unaware that the ideas they
were exploring concerned scholars across a wide swath of academic fields. Today, in large part
due to the efforts of the pioneering academics who laid the groundwork for the formal study of
complexity at the Santa Fe Institute, departments devoted to the study of complex systems can be
found in many major research institutes.10
More specifically, these institutes are interested in complex adaptive systems, which are characterized primarily by three features. Complexity: these systems are constituted by several independent units, or agents, that interact with one another. Spontaneous self-organization: complex systems are not the product of centralized, directed planning; the structure and order of the system arise solely from the interactions that take place between the agents. Adaptation: these systems actively change in response to their surroundings. The ability of complex systems to adapt derives from their dynamic and decentralized structures; changes in environmental cues affect the interactions between the system's agents, which results in a reorganization of the system into a structure that is better suited to its new environment. The ability of complex adaptive systems to organize themselves into a coherent structure while maintaining a high degree of spontaneity and dynamism is owed to these systems' being in a state of delicate equilibrium that physicist Doyne Farmer calls "the edge of chaos" (Waldrop, Complexity, 230). If the units in a system are locked into a particular structure, the system is in a state
of static and unchanging order. If the units move about with complete freedom, the system is in a
disorganized state of chaos. When the conditions are right, however, the system enters a state in which its structure exhibits a clear but precarious order that is not far removed from complete disorganization; the system is thus said to be on the edge of chaos.
9 Anderson, Burks, von Neumann, Holland.
10 e.g. University of Michigan's Center for the Study of Complex Systems, Northwestern University's Institute on Complex Systems.
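The three regimes just described (frozen order, chaos, and the edge between them) can be glimpsed even in the simplest formal systems. The following sketch is an illustration of my own choosing, using Wolfram's elementary cellular automata rather than any system cited in this essay: rule 250 locks into a rigid, perfectly periodic lattice, rule 30 dissolves into disorder, while rule 110 (not computed here) is the textbook example of a rule poised between the two.

```python
# Elementary cellular automata: each cell on a line updates from its own
# state and its two neighbors' states, according to an 8-entry rule table.

def eca_rows(rule, steps):
    """Evolve a single live cell; return each generation as a set of
    live-cell coordinates on an (effectively) infinite line."""
    row = {0}
    rows = [row]
    for _ in range(steps):
        lo, hi = min(row) - 1, max(row) + 1
        new = set()
        for x in range(lo, hi + 1):
            # Wolfram numbering: neighborhood (left, center, right) as bits.
            idx = 4 * (x - 1 in row) + 2 * (x in row) + (x + 1 in row)
            if (rule >> idx) & 1:
                new.add(x)
        row = new
        rows.append(row)
    return rows

ordered = eca_rows(250, 8)  # rule 250: new cell = left OR right neighbor
chaotic = eca_rows(30, 8)   # rule 30: disordered, effectively random growth

# Rule 250 yields a perfectly regular parity triangle (frozen order):
print(ordered[8] == {x for x in range(-8, 9) if x % 2 == 0})  # True
# Rule 30, from the same symmetric seed, is already asymmetric (disorder):
print({-x for x in chaotic[8]} != chaotic[8])                 # True
```

A system locked into rule-250-like regularity cannot adapt; one dissolved into rule-30-like noise cannot hold a structure; only rules of the intermediate kind support persistent, interacting patterns.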
It now remains to be established whether the mind has all of the attributes that
characterize a complex adaptive system. The mind is certainly complex; its physical substrate
consists of a network of neurons that interact with one another. It can also adapt to changing
circumstances; indeed, the mind's adaptability is perhaps its most distinctive feature. The key
question, then, is whether the mind is the product of spontaneous self-organization or whether it
is the result of a centralized organizational schema. The latter does not seem to be the case
because, as Daniel Dennett points out,11 there is no seat in the brain singularly responsible for
processing information and ordering the mind accordingly. The brain seems rather to operate via
the diffuse synthesis of information processed by the different neural regions with no final say by
a central neural overseer.
If this is the case, then it would seem that consciousness emerges organically from the
neural interactions that occur in the brain. Neuroscientist Giulio Tononi argues that consciousness is essentially integrated information. His integrated information theory (IIT) -- and the emergentist perspective in general -- represents a promising foundation on which to build an answer to the hard problem of consciousness. According to IIT, consciousness is simply a function of the interconnectedness of the brain. As such, a highly integrated brain (like our own) will yield a high degree of consciousness.
posited: a system that is structured yet dynamic, and hence adaptable, is characterized by intense
activity among its constituent units. The lower the activity, the more sluggish the system as a
whole becomes; the system ceases to be in the fertile equilibrium point Farmer calls "the edge of chaos" and slips further and further towards inertness. Tononi's findings thus suggest that consciousness is an emergent property of a highly integrated complex neural system.
11 Dennett, "The Cartesian Theater and 'Filling In' the Stream of Consciousness."
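The intuition behind integration admits a crude quantitative gloss. The measure sketched below is my own toy proxy, not Tononi's actual phi (which is defined over cause-effect repertoires rather than bare connections): it scores a network by the minimum number of connections that any bipartition must sever, capturing the idea that a system is integrated to the extent that it cannot be cleanly cut in two.

```python
# Toy proxy for "integration": the minimum number of connections that any
# bipartition of the network must sever. NOT Tononi's phi; only meant to
# capture the intuition that an integrated system resists being cut in two.
from itertools import combinations

def crude_integration(nodes, edges):
    nodes = list(nodes)
    best = None
    # Check every bipartition (up to complement symmetry).
    for k in range(1, len(nodes) // 2 + 1):
        for part in combinations(nodes, k):
            part = set(part)
            cut = sum((a in part) != (b in part) for a, b in edges)
            best = cut if best is None else min(best, cut)
    return best

# Two disconnected triangles: locally busy, globally disintegrated.
modular = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
# A six-node ring: every part is connected to the rest of the system.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

print(crude_integration(range(6), modular))  # 0: the network falls apart
print(crude_integration(range(6), ring))     # 2: severing it has a cost
```

Two disconnected triangles, however busy internally, score zero; a six-node ring scores two. Only the latter, on IIT's intuition, functions as a single whole.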
Tononi's theory may shed some light on the nature of consciousness, but it does not delve
into the philosophically interesting aspects of the mind-body problem. This is typical of the
disconnect that exists between philosophical and neurobiological investigations of
consciousness. Philosophical approaches traditionally either lack a solid scientific framework (as
seen in the non-reductionist literature) or avoid the hard problem of consciousness (the primary
culprits here being the reductionists). The scientific side of the debate, on the other hand, too
often fails to consider the philosophical questions that its research may help answer, specifically
the "why?" that is at the heart of the hard problem of consciousness.
Philosophical considerations on the complexity model
An examination of the philosophical implications of understanding consciousness as the product of an integrated neural system may be a step toward bridging this academic gap. The primary topic of investigation, then, concerns the purpose of consciousness.
Does consciousness confer any advantages or is it a mere epiphenomenon -- a superfluous
byproduct of neural processes? To answer this question, it would be helpful to recall the idea of a
complex adaptive system being on the "edge of chaos." Computer scientist Christopher Langton
studied the dynamics of complex systems in an effort to understand the conditions that make life
possible. Langton found that living systems are characterized by an order that emerges from a set
of very simple rules (such as the structure of the beehive emerging from simple directives that
bees instinctually follow). He claims that such systems are nonlinear: their behavior does not amount to a summation of the behavior of their parts. As such, the behavior exhibited by the
system as a whole is not reducible to a description of its individual parts.
If we apply Langton's model of the behavior of complex adaptive systems to the mind-body problem, we can see that conscious activity is the "behavior of the whole" -- the structure that is produced by, but cannot be reduced to, the activity of its constituent neurons. Thus, claiming
that conscious activity is a useless byproduct of neural processes is like saying that the beehive is
a useless byproduct of bee activity. Consciousness is the emergent product of a complex neural
system; the two cannot be divorced from one another. There is no such thing, then, as a
philosophical zombie (a being that is qualitatively identical to a human being except for the
absence of conscious experience); such an absence would itself indicate a limit in the complexity of the being's neural system.
Analyzing the mind-body problem under the lens of complex systems theory also
provides an interesting outlook on the problem of free will. Despite the regularity with which a
complex adaptive system eventually orders itself, the system is fundamentally unpredictable;
the structure that emerges from the simple rules that its constituent parts follow is not in principle
determined.12 In other words, the behavior of the whole is not predetermined. If we see conscious
activity as an emergent property of complex neural architecture, it follows that conscious activity
is also not predetermined. Deterministic accounts of human behavior are grounded in the
assumption that everything can be explained by reference to physical laws. But as Langton tells
us, "Life is a property of form, not matter, a result of the organization of matter rather than
anything that inheres in the matter itself,"13 and is thus not reducible to the physical laws that engendered it.
12 Langton, Artificial Life.
13 Langton, 41.
Complex systems theory, then, represents a solid foundation on which to build a
defense of free will.
Even though consciousness does not appear to be an accidental property of complex
neural systems, however, its causal efficacy must still be assessed. Does consciousness have an
effect on physical events? I find this question to be somewhat misguided because it relies on the
underlying assumption that the physical and the mental are two distinct entities. It only makes
sense to talk about the mental affecting the physical and vice versa if the mental and physical are
two different things. But the complex systems perspective paints a much more nuanced picture
than that. I have been characterizing consciousness as the emergent "product" of a complex
neural system for simplicity's sake, but that terminology is somewhat misleading because the
emergent structure of a system is not a phenomenon that exists independently from the physical
system. The emergent structure is not the product of the interactions of agents in a complex
system; it is its form. The emergent structure of a complex system is like the shape of an object;
it is a property that cannot be divorced from its constituent architecture.
The mind-body problem has until now proven impenetrable because of the distinction
that scholars have traditionally adopted between mind and brain. This distinction is misleading
because it has led thinkers to cast the mind-body problem as a problem of causality. This puts the
mind in a tricky spot, because if the brain causes the mind, then there is a strong appeal to see the
mind as a superfluous entity, a byproduct of neural processes, making it easy to dismiss by
reductionists. Non-reductionists, on the other hand, despite their dissatisfaction with the
explanatory limits of the reductionist approach, similarly struggle to provide an account that
convincingly redeems the mind as something more than the unintended consequence of neural
processes. This has resulted in a deadlock wherein each side recognizes in the other a
fundamental flaw that makes conciliation between the two impossible. So the proper measure is
not to undermine or try to work around the flaws in either perspective, but to acknowledge that
these flaws have a shared origin -- that the fundamental flaw lies in drawing a distinction
between the mind and the brain.