MIND, ITS RELATION TO FUNDAMENTAL PHYSICS, AND CONTEXTUAL
EMERGENCE
1. Supervenience and Emergence.
In this paper I shall argue for the following claims.
1. That the contemporary division between physicalists, those who hold that all real properties,
objects, and states are physical, and dualists, those who maintain that besides the realm of the
physical there is a separate domain of specifically mental properties, objects and states, is too
crude.
2. That an influential argument for dualism, due to David Chalmers, is flawed because it relies
on too narrow a conception of what is fundamental.
3. That approaches to emergence which are based on supervenience relations are unhelpful and
that a better approach is through using a kind of contextual emergence.
Within the philosophy of mind, it was once common to use supervenience relations to
argue that emergence, if it existed at all, was rare and was confined at most to mental states such
as consciousness and experiences of qualia, those directly accessed perceptual states such as the
experience of smelling a flower. Informally, we can say that a property M supervenes on a set of
properties P1,...,Pn just in case whenever instances of P1,...,Pn occur, an instance of M necessarily
occurs. More formally, a property M strongly supervenes on a family P of properties if and only
if the following is not possible: some object a has M but there is no property G in P that a
possesses which is such that necessarily any object having G also has M.
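Written out in quantified modal notation, this is Kim's familiar schema for strong supervenience (the box stands for whichever kind of necessity is at issue; this rendering is added here for convenience):

```latex
% M strongly supervenes on the family P of properties iff:
\Box\,\forall x\,\bigl[\,Mx \;\rightarrow\; \exists G\!\in\!\mathbf{P}\,
  \bigl(Gx \;\wedge\; \Box\,\forall y\,(Gy \rightarrow My)\bigr)\,\bigr]
```

This is equivalent to the negative formulation in the text: it rules out the possibility of an object having M without possessing some base property G that necessitates M.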
Note that there are two ways supervenience relations can fail to hold and that standard
examples of supervenience can be misleading. Take the usual example of aesthetic properties of
a painting supervening on the physical arrangement of colours on the canvas. The idea that an
aesthetic property could be present when a suitable physical property was absent seems
impossible to us. Indeed, the necessary attribution of an aesthetic property on the basis of certain
physical properties is a consistency condition that is justified by conceptual considerations.
You will notice the word `necessarily’ in the definition. What does that mean? There is
much debate about this but here we need only distinguish between two kinds of necessity,
nomological necessity, the necessity that laws of nature provide, and conceptual necessity, the
necessity that you have when your understanding of the concept of `square’ forces you to
attribute the property `has four sides’ to the square objects.
Brian McLaughlin has used nomological necessity to define emergence in this way:
`If P is a property of w, then P is emergent if and only if (1) P supervenes with nomological
necessity, but not with logical necessity, on properties the parts of w have taken separately or in
other combinations; and (2) some of the supervenience principles linking properties of the parts
of w with w’s having P are fundamental laws’ (McLaughlin [1997], p. 16). (And `A law L is a
fundamental law if and only if it is not metaphysically necessitated by any other laws, even
together with initial conditions’ (ibid, p.16).)
Note two things about this definition. First, nothing that conceptually supervenes on the
physical can be emergent.1 That seems reasonable because we want emergence to give us some
sort of real, separate, existence for the emergent property. Secondly, the definition requires that
emergent properties are possessed by objects with parts. Let us now consider these two features
in the context of a well-known line of argument presented by David Chalmers in Chapter 2,
section 5 of The Conscious Mind [Chalmers 1996]. There, Chalmers argues that all chemical,
1McLaughlin’s current view is that conscious mental states are not ontologically emergent,
although they are epistemically emergent (personal communication).
biological and other sorts of non-fundamental facts in our world conceptually supervene on
fundamental physical facts, perhaps along with micro-physical laws. By `fact’ Chalmers means
the spatiotemporal distribution of property instances. The use of a conceptual necessity in the
supervenience claim is surprising in that the intuitive reaction would be to think that the relation
between the physical level and higher levels should be weaker, most likely one of nomological
necessitation. Placing this in the context of McLaughlin’s definition: if all non-fundamental
property instances conceptually supervene on fundamental property instances, then no
non-fundamental properties are emergent. Chalmers does allow that weak emergence in Bedau’s
sense exists, but he rejects the view that stronger forms of emergence exist except in the case of
consciousness (see, e.g., Chalmers 1996, pp. 129-130; p. 168, note 41).
Chalmers is interested only in what happens in our world and he is not committed to the
position that the conceptual supervenience claim would be true if our world was significantly
different. We should therefore ask what it is about our world that makes an argument for this
claim seem plausible.
Chalmers’s arguments rest on two premises: a) the higher level facts, such as biological
facts, globally supervene on the entire history of fundamental physical facts, in the sense that the
entire spatio-temporal history of micro-physical facts in the universe constitutes the base facts;
and b) the fundamental laws as well as specific facts are included in the base. Chalmers has three lines
of argument in favor of the conceptual supervenience of higher level facts on physical facts.
They are the inconceivability argument, the epistemological argument, and the analyzability
argument. I shall focus here on the inconceivability argument which says it is inconceivable that
there could be a world that was indistinguishable from our world in its fundamental physical
features but that was different from ours in its biological, chemical, or other higher level features.
I hope you can see the force of this appeal to inconceivability. Try to imagine a world in which
all of the fundamental physical facts are fixed, including the entire history of the universe. Then,
can you consistently imagine a world with exactly the same fundamental physical facts but in
which some biological fact was different, such as most humans having only eight fingers or
tigers being able to reproduce with horses? The answer, I hope, is that you simply cannot
conceive of such a possibility.
2. Fundamentality.
Now, why is Chalmers’s argument effective given what we know about our world and
does it apply to other worlds? Consider a very simple world, one that is very different from ours.
It has a discrete spatial and temporal framework and the elementary particles in this world are of
two kinds, red and green. Their behaviour is described by very simple rules. When a red particle
approaches a green particle from the left and collides with it, both the red and the green particle
are destroyed and a yellow particle is created. When a red particle approaches a green particle
from the right, and collides with it, a blue particle is created and the red and green particles are
destroyed. Note that by introducing laws that describe interactions between yellow particles and
red or green particles, and similarly for other types of particles, we can also allow downward
causation in this world. There are also laws that allow for the creation of even higher level
particles.
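The rules just stated are simple enough to simulate directly. The sketch below is only illustrative: the one-dimensional lattice, the `(colour, position, velocity)` representation, and the synchronous update are assumptions added here; the text specifies only the collision rules.

```python
# A minimal sketch of the toy world: particles live on a discrete 1-D
# lattice and move one cell per time-step. Only the red-green collision
# rules come from the text; everything else is an illustrative assumption.

def step(particles):
    """particles: list of (colour, position, velocity) triples.
    Each particle moves one cell; a red and a green that land on the
    same cell are destroyed and replaced by a yellow (red approached
    from the left, i.e. was moving right) or a blue (red approached
    from the right)."""
    moved = [(c, p + v, v) for c, p, v in particles]
    out, consumed = [], set()
    for i, (c1, p1, v1) in enumerate(moved):
        if i in consumed:
            continue
        partner = next(
            (j for j in range(i + 1, len(moved))
             if j not in consumed
             and moved[j][1] == p1
             and {c1, moved[j][0]} == {"red", "green"}),
            None,
        )
        if partner is None:
            out.append((c1, p1, v1))
        else:
            consumed.add(partner)
            red_v = v1 if c1 == "red" else moved[partner][2]
            colour = "yellow" if red_v > 0 else "blue"
            out.append((colour, p1, red_v))
    return out

# A red moving right meets a green: both vanish, a yellow appears.
world = [("red", 0, 1), ("green", 2, -1)]
world = step(world)   # [("yellow", 1, 1)]
```

Note that, as in the text, the rules above are specific to red-green interactions; how yellow and blue particles behave would have to be stipulated by further laws.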
Now consider the entire temporal sequence of system states. The first N states consist in
various spatial rearrangements of the elementary particles. Then at time-step N+1 a new type of
particle, a yellow particle or a blue particle, appears due to a collision between a red and a green
particle. Yellow and blue particles do not have red or green particles as parts and they behave
differently than do the red and green particles – when they collide they create white and purple
particles. So in the spatiotemporal history of this universe, there appear at some point new types
of particle. These particles and the laws they follow do supervene on the total history and the
laws, but they do not synchronically supervene on the current state of the elementary particles
and their laws, nor do they globally supervene on the total history of the states of the
fundamental red and green particles.
There are some features of this world that are directly relevant to considerations of
emergence. From an inferential perspective, it is not possible to predict, even from a full
knowledge of the fundamental laws governing the red and green particles and their positions up
to N, what the states of the world after N will be. That is, the laws governing the fundamental red
and green particles determine what will happen when red and green particles collide, but those
laws are specific to the domain of red and green particles. They do not specify how yellow and
blue particles behave. For that, experiment and observation are required. The laws governing the
yellow and the blue particles are what I have elsewhere (Humphreys 2008) called inferentially
emergent.
Now ask the question: are facts about the yellow and blue particles fundamental facts?
The standard meaning of a fundamental entity, whether it is a particle or a property, is that X is
fundamental if and only if it is non-composite.2 But if we adopt that meaning of `fundamental’, it
prevents us, by definition, from counting the yellow and blue objects in this simple world as emergent.
And you will remember that under McLaughlin’s definition of emergence, objects without parts
are not even candidates for emergence. More generally, if any such object lacking structure is
automatically included in the fundamental ontology, the case is immovably prejudiced against
2One careful philosophical examination of fundamentality is Schaffer 2003. It is revealing
that all of the criteria he considers are synchronic.
certain dynamic forms of ontological emergence, for if a structureless entity emerges as the
result of a temporal process, it cannot qualify as emergent. Yet this is exactly how emergence
can occur in our world as we shall see below.
This characterization of `fundamental’ is one reason why Chalmers’s conceivability
argument succeeds. Within a standard physicalist ontology, which includes the claim that all and
only non-composite entities are fundamental, any non-fundamental and hence composite entity
such as a classical gas or a population of zebras will be decomposable into fundamental and, it is
supposed, physical entities. It is therefore not surprising that it is impossible for you to imagine
two worlds, in each of which the fundamental physical facts are fixed, but that differ in their
biological facts. Any non-composite entity that emerges over the total history of the
world will automatically be classified as fundamental.
What is the alternative? We should take the dynamics of emergence seriously and reject
the kind of atomism that is hidden in Chalmers’s argument. In many compositional systems, such
as propositional logic, the fundamental entities must all be specified at the outset, even though it
is possible to add non-logical primitives to compositional languages without undermining their
compositionality. Taking a cue from this, a type of entity can be considered fundamental relative
to a system if it is non-composite and an instance of it exists during the initial state of the system.
This allows a specifically temporal concept of fundamentality and similar considerations can be
applied to laws; the fundamental laws are those that are in place at the origin of the universe and
cannot be constructed from other laws that exist at the origin. This allows for the possibility that
new laws have emerged over the history of the universe, and if laws are derivative from features
of properties, then when novel kinds of properties emerge, new laws will emerge with them.
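The temporal notion of fundamentality can be stated as a simple predicate over a system's history. The representation below (kinds as strings, a history as a list of state-sets, an explicit set of composite kinds) is an assumption introduced for illustration, not part of the definition itself.

```python
def temporally_fundamental(kind, history, composites):
    """A kind is fundamental relative to a system just in case it is
    non-composite and an instance of it exists in the initial state."""
    return kind not in composites and kind in history[0]

# In the toy world, reds and greens exist at the origin; yellows only
# appear later, so they are not temporally fundamental even though
# they are non-composite.
history = [{"red", "green"}, {"red", "green"}, {"red", "green", "yellow"}]
```

On this reading, whether an entity counts as fundamental depends on when it first appears, not merely on whether it has parts.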
Let us now assume that in this particular world, at some further time N’+1 all of the
particles are yellow or blue particles. At this point, nothing synchronically supervenes on the
`fundamental physics’ of the world. Now suppose that the laws do not determine moves but that
they determine only what is possible. Then from this point on there can be two worlds, each with
identical red and green particle histories up to time N’ but with divergent blue and yellow
particle futures.3 In fact this situation will be possible when there is only a single blue particle, as
long as that blue particle has the choice of two alternative moves at some point. This is a
counter-example to the claim that everything supervenes on fundamental physics, if we now take
`fundamental’ in the temporal sense. As the world evolves, so will the domains of emergent
phenomena develop, none of which supervene on the temporally fundamental particles.
It could be said that the `physics' of this toy world will simply expand to include blue,
yellow, white, and purple particles. Fair enough, but then the ontology of this world no longer
consists in fundamental physicalism alone but includes the analogs of chemistry and biology, and
the `biological facts' of this world will not be fixed by the fundamental facts. So the claim that
everything logically supervenes on the fundamentally physical depends on the assumption that
the dynamics of the world do not produce new, non-fundamental and non-composite, entities at
some point in the history of the world. But what does this have to do with issues in the
philosophy of mind? Quite a lot. First, given that there are emergent states within the physical
world, broadly construed to include chemistry, then the fact that there are features that are not
supervenient on fundamental physical facts is not unique to the mental realm. Although the
mental realm might be interesting for other reasons, it is not because it is the only arena in which
emergent features occur. So we should replace dualism with a more sophisticated classification
that includes more than two levels.
3Suppose that the moves are made by a physical randomizing device.
3. Ontological Emergence.
I shall now describe a particular kind of ontological emergence that is consistent with the
position about fundamentality argued above. First, I must lay out some methodological
constraints on the project. We have to recognize that the term `emergence’ has a number of
different meanings. Some of these are descendants of well-entrenched and conceptually
independent philosophical traditions. Most of those traditions are defensible and have merit. The
term `emergence’ also has casual uses and it is often applied to phenomena without clear criteria
for what counts as a correct use. So we must look to some source other than linguistic use for
guidance. Moreover, unlike a concept such as causation, for which there is close to universal
agreement on simple examples, no such consensus of intuitions exists for examples
of emergence. One alternative is to examine canonical examples of emergent phenomena that
have been suggested in various sciences over the last couple of decades. But even within
scientific contexts, it is highly unlikely, especially at the current state of investigation, that a
single unifying concept of emergence can be identified. Within the group of candidate
phenomena, different criteria are clearly being brought to bear upon different members of that
group. There is no currently available framework within which the Belousov-Zhabotinsky
reaction and conscious experiences can both be subsumed, even though each is taken by the
chemical and neuroscience communities, respectively, to be, if anything is, an example of an
emergent phenomenon. So we must recognize this variance, identify theoretically driven
traditions in science or philosophy within which clearly articulated conditions have been given
for an entity to be emergent, and find a taxonomy that organizes the positions in a reasonably
natural way.
On the constructive side, we can identify generic features that tend to be associated with
phenomena that are assumed to be emergent. There are four features that recur again and again in
discussions of emergence. Emergent features develop from something else, they possess a certain
kind of novelty, they have some degree of separateness from the emergent base, and they exhibit
a form of holism.4 The existence of diachronic as well as synchronic examples of emergence
means that in some cases these features will apply to the processes by means of which the
emergent feature comes about; in others it will be the emergent feature itself that possesses them.
In both situations, the more of these characteristics a given entity has, the stronger the case that it
is an example of emergence. This suggests that for many accounts of emergence we should take
a cluster analysis attitude towards the offered account.
Many accounts of emergence, and certainly the ontological tradition that is my primary
interest, deny that certain sorts of reductionist approaches can be completely successful. By
identifying which aspects of those reductionist approaches are inapplicable to a given type of
system, we can achieve a better understanding of what emergence might be. Bill Wimsatt [2007]
used this approach in developing his account of how emergence appears when aggregativity
fails. For reasons that I shall leave aside here, aggregativity is too weak a condition to serve as a
foil. Instead, I shall use what I call generative atomism as the dominant tradition against which
emergentists argue. Historically, we can trace opposition to the universal applicability of
generative atomism back to John Stuart Mill’s distinction between homopathic and heteropathic
laws, but it is G.E. Moore’s efforts to eliminate internal relations from the world that are
especially illuminating for us.
Finally, my principal interest lies in making reasonably clear sense of what one kind of
4In Humphreys [1997b] I discussed six criteria for emergence. Although each of those six is
still applicable, the four presented here form a more compact set.
emergence could be. This is a recognizably philosophical task. The issue of whether examples of
a particular kind of emergence exist is an empirical question. Nevertheless, we cannot
completely separate the two lines of investigation. Within the philosophy of science, there is less
tolerance for concepts that have an empty extension than there is in other areas of philosophy.
Providing a definition of, for example, omniscience, can proceed quite happily even if it turns
out that nothing has that property. In contrast to philosophers such as Jaegwon Kim and David
Chalmers, whose accounts of emergence have the consequence that emergent phenomena are rare
at best and are confined to the realm of qualia and other conscious phenomena, if the empirical
evidence were to show that there are no examples of ontological emergence in the sense I shall
describe, that would indicate that the proposed account is of little interest. It is also possible
that more sophisticated treatments of the examples I discuss can avoid the emergentist
consequences. That is a risk one always runs in a naturalistically dominated field and if that
happened, the account would then remain as of purely metaphysical interest.
4. Generative Atomism.
What I shall call generative atomism has a long and distinguished history in science and
in philosophy. Going back at least to Democritus, this approach has a synthetic and an analytic
component. The synthetic component says 1) that there is a collection of elementary entities
from which all other legitimate objects in the domain are constructed, 2) there is a fixed set of
rules that govern the construction process and as a consequence of 1) and 2), all entities are
either atoms or are composed of atoms5. Generative atomism can be used both synchronically
and diachronically depending upon whether the rules are compositional (in the synchronic case)
5For one description of this process see Emmeche, Køppe, and Stjernfelt [2000], p. 16, where
they claim that this approach is involved in both `constitutive reductionism’ and `constitutive
irreductionism’.
or dynamical (in the diachronic case). Put another way, the synchronic case deals with the
arrangement of the atoms and the diachronic case deals with rearrangements. Systems whose
ontology satisfies the constraints of generative atomism are core examples of systems that are
regarded as lacking the capacity for ontological emergence because the composite entities are
nothing but structured groups of atoms. Thus, formal grammars, cellular automata, and mosaics
are all examples of the generative atomism approach even though in the case of mosaics, the rule
set is close to being universal. In the case of cellular automata and mosaics, if we allow
supervenience relations and abstractions, some kinds of emergence might apply to patterns, most
noticeably conceptual emergence. But there is no ontological emergence in these cases,
especially if all supervenience relies on conceptual necessitation.
Regarding the atoms, I take three conditions to be individually necessary for something to
be an atom of a generative system. The atoms must be individually distinguishable, they must be
indivisible, and they must be immutable. Regarding immutability, we have the immutability
principle: Let C be a set of properties that characterize the type T to which a belongs and which
a possesses in isolation. Then an atom a must remain invariant with respect to C when it is
embedded in a larger unit. A common candidate for the members of C will be the essential
properties of the type if such exist, but I do not want to insist on that because of the difficulties in
finding an acceptable definition of an essential property. To take some examples for which the
immutability principle is true, a brick does not change its core properties qua brick when
mortared into a wall, and spark plugs retain their central functional features qua spark plugs
when screwed into an engine block. To take a slightly different example, sentences do not
(ordinarily) change their type qua syntactic form when they are part of larger complexes.
Because of context-dependent semantics, they can change their semantic value when embedded
in larger units – consider the example: `This sentence is the last sentence of its paragraph’, which
is always true in isolation, often false when not – so that qua at least one semantic property, they
are not invariant.
We certainly cannot insist that all properties of the atoms are invariant under such
embeddings – the brick stands in different spatial relations to other bricks once in the wall, the
spark plugs will become hotter when the engine is running, and so on – but it is violations of the
immutability principle that result in features that are characteristic of the kind of contextual
emergence I shall outline. To take a stock example that I shall not rely on as central but does
serve to illustrate the informal idea, consider the case of an ordinary individual who is
voluntarily embedded in a mob. When part of the larger group, the individual will take on a
dispositional property, that of attacking another human without hesitation, which replaces the
properties that previously had inhibited such actions. Although within many individualistic
approaches in the social sciences, humans are taken as atomistic, ontologically this position is
hard to defend and I thus take this example only as suggestive.
A separate property of atoms is their indivisibility. This requirement is close to being
analytically true because one of the principal meanings of the term `atom’ is `indivisible entity’
and, prior to the twentieth century, indivisibility and immutability were required of physical
atoms.6 We thus have the indivisibility principle: the atoms of a domain with respect to some
type T are those units satisfying the membership conditions for T and for which no proper
subpart also satisfies the membership condition for T. The conditions for inclusion in T can be
intrinsic, relational, or functional and in the most straightforward case, can simply be the
condition of existence.7 Parthood can be spatial, functional, or some other kind. In a cellular
automaton, individual cells are the smallest units that have a state; in formal languages it is
logically impossible for a sub-sentential unit of language to carry a truth value; no part of a basic
functional unit in a car, such as a fragment of a nut, can perform the function of the original unit;
and, according to our current state of knowledge, it is nomologically impossible for a proper part
of a quark to carry its own charge. What constitutes an atom is unavoidably imprecise within
many material contexts – a well used nut can be damaged and still perform its function – but
fundamental entities of physics seem to have sharp identity conditions, as do the fundamental
entities of a properly constructed logical or mathematical system. The indivisibility principle,
through its specification of the parthood condition, allows that the atoms are divisible in ways
that do not affect type-specific properties. To take one example, supposing that there is a
fundamental unit of charge, then that unit can be divided in the imagination, but that conceptual
operation cannot alter the fundamental physical properties of that unit.
The immutability of the atoms is crucial for the generative atomism project because it
guarantees a stable foundation for the properties of the system under application of the rules for
generating compound entities. The indivisibility of the atoms entails that the search for
compositional explanations is ended with the identification of the atoms and their properties.
Turning to individual distinguishability, we have the distinguishability principle: at the
type level each object type must be distinguishable from every other object type and at the token
level, each token must be unambiguously distinguishable from other tokens of the same type. So,
each primitive type of sign in a properly constructed formal syntax must be distinguishable from
6The post-Rutherford use of the term `atom’ in physics is, of course, a courtesy use.
7Assuming that existence is a property.
every other primitive type, and each token must be distinguishable from every other. Spatial
separation usually forms the basis for token distinguishability in the case of physical entities, but
it is well known that the token distinguishability criterion is violated by fermions, for example.
The ontological, rather than epistemological, status of the distinguishability principle is shown
by the fact that token distinguishability can be satisfied by mere numerical difference – the exact
number of type A entities can be determinate even when we cannot provide identity criteria for
any of them, except perhaps in terms of haecceities. For large enough N, an N-sided polygon is
indistinguishable by human perception or visual imagination from an (N+1)-sided polygon but
they are ontologically distinct and, as Descartes noted for chiliagons, they are also distinct in the
understanding.8
One might make a case that the distinguishability principle can be subsumed under the
immutability principle. If we let the members of C include at least one sortal S tied to the type,
then under many characterizations of what constitutes a sortal, S must provide criteria for
counting and hence for the ontological distinguishability of entities of that type. Because of
controversies surrounding what counts as a sortal and what features they have, I shall keep the
distinguishability principle separate from the immutability principle while keeping open the
possibility that it is immutability that is fundamentally the key principle for our purposes.
5. John Stuart Mill, the origins of emergence, and the invariance condition.
One of the first explicit discussions of emergence in western philosophy, although not
under that name, is in John Stuart Mill’s book A System of Logic (Mill 1843, Book III, Ch. 6,
section 1.)9 For Mill, it was the violation of the principle of the composition of causes that led to
8 Descartes Meditations, Meditation VI. A chiliagon is a one thousand sided polygon.
9 G.H. Lewes introduced the term `emergence’ in his [1875], p. 412. For Lewes, an emergent
feature is simply an effect of a heteropathic law.
emergent properties being subject to emergent laws. The parallelogram of forces is a core
example of the composition of causes and the claim is that the total effect of two causes acting in
the `mechanical mode’ is simply the sum of the effects of the two causes acting alone. Although
much attention has focused on whether emergent processes violate an `additivity’ principle for
causes and forces, and Mill does repeatedly refer to the sums of various influences, it is of
central importance to Mill’s account that the constituents of a compound system, whether they be
laws, causes, forces, or some other entity, remain invariant in their effects whether they occur in
isolation or as part of a larger whole. The composition of causes condition is nothing more than
an immutability condition applied to causal components. Because Mill considered the issue of
heteropathic laws to be a causal issue, one cannot separate his views on this topic from his
general theory of causation and for Mill, a factor is required to act invariantly in order to count as
a cause.10
Within the domain of homopathic laws, generative atomism operates, first with respect to
causes, and secondly with respect to laws. The experimental method allows us to individuate
causal components, and when the immutability condition is satisfied, we have the external
validity for causal factors that is so prized in social systems. The indivisibility of atomistic
causes is a less straightforward issue because many causal factors are continuous and not
discrete, although Mill himself represents causal factors as discrete units. These discrete,
sufficient causal factors cannot be split without losing their sufficiency and hence, for Mill, their
causal status. Why is this invariance condition imposed? For Mill, it was part of having correctly
identified a cause that it be invariant in its effects because if a purported causal factor C
10 The reasons for this are given in Humphreys [1989], section 25.
produced its effect in one context but not another, there must be at least one factor that needs to
be added to C to accurately identify the actual sufficient cause. We are less willing now to insist
on sufficiency as a condition for causation, but within that earlier tradition, Mill required it.
In the case of laws, Mill argues that in order to make the effects of the whole deducible
from the components, `...it is only necessary that the same law which expresses the effect of each
cause acting by itself shall also correctly express the part due to that cause of the effect which
follows from the two together.’ (Mill 1843, Book III, Chapter VI, section 1). In situations where
the immutability of laws holds, we have homopathic laws; when it fails we have a new and
indecomposable heteropathic law. According to both Mill and, later, C.D. Broad, characteristic
examples of the effects of heteropathic laws could be found in chemistry, although we do not
have to agree with that claim to see the distinction. The failure of the immutability condition for
properties is revealed when Mill discusses the example of water and writes `Not a trace of the
properties of hydrogen or of oxygen is observable in those of their compound, water.’ (ibid) and
for laws: `[In the science of chemistry] most of the uniformities to which the causes conformed
when separate cease altogether when they are conjoined...’ (ibid)
So much for one historical tradition within which failure of invariance leads to emergent
laws and causal factors.
6. Immutability and Contextualism
Returning to our discussion of immutability and focusing now on properties, we have
that: A property φ of an entity a is immutable with respect to contexts C and C' just in case φ is
invariant across a's presence in C and a's presence in C'. Conversely, a property φ of an entity a
is contextual with respect to contexts C and C' just in case φ varies across a's presence in C and
a's presence in C'. The immutability principle is therefore violated when contextualism holds
(with respect to contexts C and C’). These definitions cover the special case where C is the empty
context. Thus, if a has φ when in isolation (that is, when in C, the empty context) and φ is also
possessed by a when a is present within a larger whole (i.e. C’), we have the situation often tacitly
assumed in atomism. Note that φ can have any adicity – monadic, binary, ternary, etc. – so that
relational properties can be contextual or immutable.
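Stated compactly (a minimal formalization of my own, writing φC(a) for the instance of the property φ that a exhibits when present in context C):

```latex
% Immutability of a property \varphi of an entity a with respect to contexts C and C':
\mathrm{Imm}(\varphi, a, C, C') \iff \varphi_{C}(a) = \varphi_{C'}(a)
% Contextualism is the failure of immutability across the two contexts:
\mathrm{Ctx}(\varphi, a, C, C') \iff \varphi_{C}(a) \neq \varphi_{C'}(a)
```

The atomist's tacit assumption is then the special case in which C is the empty context: the property an entity has in isolation is the property it retains within any larger whole C′.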
There is nothing inherently mysterious about contextualism but it must be distinguished
from the related idea of the failure of compositionality. Let N be some property that can be
possessed both by a system S and S's constituents, so that N is not nominally emergent. Then a
property N of S is compositional just in case the N of S, the whole, is determined by the Ns of
S's constituents. I shall leave the exact sense of determination unspecified here since different
determination relations can play this role and the central point here is simply the logical
independence of contextualism and compositionality. An elementary example of
compositionality is when S is a country and N is area. This compositional feature holds even
when S is a scattered object, as when a country is composed of spatially disjoint regions such as
islands. A controversial example of a compositionality claim involves semantics. In that case, S
can be a sentence, the constituents can be terms (words, phrases, or some other sub-sentential
unit), and N is the property of meaning. Then compositional semantics, applied to sentences,
asserts that the meaning of the sentence is determined by the meanings of its constituent terms, a
much disputed issue. The affinity of compositionality with generative atomism is obvious.
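The area example can be made concrete with a few lines of illustrative code (the country and the figures are hypothetical; the point is only that the determination relation here is plain summation):

```python
# Compositionality for area: the area N of a whole S is determined by the
# areas of S's constituents. This holds even for a scattered object such as
# a country composed of spatially disjoint islands.

def area_of_whole(region_areas):
    """Determination relation for area: summation over the parts."""
    return sum(region_areas)

# A hypothetical island country with three disjoint regions (areas in km^2).
islands = [420.0, 77.5, 2.5]
print(area_of_whole(islands))  # 500.0
```

Nothing about a region changes when it is taken together with the others, so area also satisfies the immutability principle; the shrinking-length example below shows that compositionality does not require this.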
It is an often implicit assumption of compositionality that the property N of the
constituents satisfies the immutability principle. That is, N is a feature that is possessed by the
constituent in isolation and N remains invariant within larger systems of which that element is a
constituent. However, compositionality can hold when the immutability condition fails, as this
example illustrates. Suppose that a1,...,an possess properties φ1,...,φn in isolation and that
φ1,...,φn are transformed into φ′1,...,φ′n when a1,...,an occur within the compositional context C’.
Then the overall property Φ can be given by Φ(a1,...,an) = f(φ′1(a1),...,φ′n(an)) and
compositionality is preserved when the determination relation is taken to be the existence of a
function. For a concrete example, suppose that the length of anything, object or composite,
shrinks by ½ whenever it is spatially composed with another object. So a and b, each initially of
length 1, when composed have a combined length of 2 × ½ = 1; composing that composite with c
in turn gives a combined length of ½ + ½ = 1, and so on. The length of the composite is still a
function of the lengths of the components, even though the lengths of the components are not
immutable. Although this example exhibits contextualism,
it is not a candidate for emergence because no novel properties are involved. Contextualism is
therefore necessary but not sufficient for contextual emergence.
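The shrinking-length example can be sketched as follows (a toy model of my own construction; it assumes, as one natural reading of the example, that the halving applies each time a composition occurs, to the existing composite and to the newly added object alike):

```python
def compose(lengths):
    """Return the length of the composite formed by composing objects with the
    given isolated lengths one at a time. Whenever two things are spatially
    composed, each shrinks to half its current length."""
    total = lengths[0]
    for length in lengths[1:]:
        total = total / 2 + length / 2  # both parties to the composition halve
    return total

print(compose([1, 1]))     # 1.0  (2 x 1/2, as in the text)
print(compose([1, 1, 1]))  # 1.0  (1/2 + 1/2, as in the text)
```

The composite's length is still a function of the components' isolated lengths, so compositionality is preserved even though the immutability principle fails.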
It might appear that the failure of compositionality can lead to holistic features.
Generative atomism, we recall, has at its heart the position that everything in a given domain
results from composing immutable and individually distinguishable elements. It thus requires
that compositionality holds for all the systems generated by the rules or laws. One source of
concern about the failure of the immutability principle and the attendant contextualism might
then be that the method of analysis cannot be used to understand the system. By its very nature,
analysis (and decomposition) has to maintain that the components of a complex, when analyzed
into entities separate from the whole, are the same entities as they were within the whole. As we
saw, this assumption underpins the methods of causal analysis and experimental analysis.
Experimental methods are designed to isolate individual causal influences on a system and
unless a version of the immutability principle is satisfied by the causal factors, it seems that
extrapolation from the experimental context to non-experimental contexts will not be possible.
However, if we know the transformation function that governs the change in the component
between context C and context C’, then the relevant inferences can be made. We can do this for
some systems. We can directly observe the degree of compression of individual rubber blocks
when they have similar blocks stacked on top of them and there are many other cases where
theoretical inferences can give us the required transformations. Epistemically, however, the
failure of analysis is the reason why complex systems frequently exhibit emergent features. The
interactions between the components of a complex system are sufficiently strong that knowledge
of the transformation functions cannot be obtained and so the theoretical treatment must take
place at the level of the entire system.
Let us return to the three conditions for generative atomism: immutability,
distinguishability, and rule-governed compositionality. It is well-known that the failure of
compositionality can lead to certain kinds of emergence. The breakdown of the constructive
project that Philip Anderson pointed to in his famous `More is Different’ essay (Anderson 1972)
often occurs because the complexity of the rules needed to construct the compound objects
requires, in practice, a shift to a different conceptual and theoretical framework that is
autonomous from that used for the finer-grained ontology. Examples of this are common in
condensed matter physics, Anderson’s own field. The new conceptual apparatus results in
novelty and, often, a kind of holism with respect to the former framework.
In an earlier article `How Properties Emerge’ (Humphreys 1997), I argued that fusion
emergence, which results when entities lose their individual identities in diachronic interactions,
avoids the problems of downward causation that lead to the much discussed exclusion problem.
The claim that the results of fusion were emergent features was left partly at the intuitive level,
partly as an example of how the failure of supervenience based on nomological necessity can
lead to emergence. I now want to suggest that it is the failure of distinguishability, through
contextualism, that leads to that kind of ontological emergence. I can point to two examples that
fit the bill. First, and most basic, is the case of quantum entangled states that I cited in the
original article. This exemplifies the extreme case of failures of immutability and of
distinguishability. Second is covalent bonding in chemistry. Here the wave functions of valence
electrons in atoms overlap sufficiently that the electrons are no longer distinguishable as separate
entities but are shared between the atoms and provide a region of charge double that of a single
electron. This case does involve genuine novelty, because the lowering of energy resulting from
covalent bonding results in the molecule having different properties than do the original atoms.11
When covalent bonding is present, it is incorrect to think of molecules as being composed of
individual atoms bonded together; one must think in terms of a set of nucleons together with a
charge density distribution.12 Quantum mechanical calculations show that having the electron
density of the outermost electrons spread out over the entire molecule rather than being
concentrated in the regions around individual atoms decreases the overall kinetic energy and
enhances the stability of the molecule. When the outer electrons of two atoms produce a covalent
bond, the charge and mass of the nucleons remain unchanged, as does the total charge of the
molecule. What changes is a slight lowering of the energy of the combined molecular
arrangement compared to the energies of the atoms before the interaction, and it is this change in
energy that is responsible for the characteristic properties of the molecule. Here we also have
failures of immutability and distinguishability.
11 Molecules are the result of covalent bonds, not ionic bonds, which are the result of
electrostatic forces between positively and negatively charged ions; that is, atoms that have
gained or lost an electron from their usual electrically neutral state. The difference between polar
and pure covalent bonds is not relevant here.
12 This is the standard view in contemporary chemistry. For example: `Organic chemists do not
think of molecules as being made up only of atoms, however. We often envision them as
collections of nuclei and electrons, and we consider the electrons to be constrained to certain
regions of space (orbitals) around the nuclei’ (Carroll [1997], p.4).
Theoretical treatments of chemical bonding are tightly tied to models and to the use of
computational approximations, making conclusions about ontological emergence indirect. What
I have just described is the standard interpretation of valence bond theory. A more general
method, the molecular orbital method, considers all of the electrons to belong to the molecule as
a whole, rather than just the pair of outermost electrons, but in that case too, the
distinguishability of the original electrons and atoms has been lost. What is curious about this
example is that it is specifically quantum mechanical features that underpin the emergent status
and so the very advance that once led to the rejection of chemistry as a source of emergent
phenomena now gives us reason to think that emergence is reasonably common at the molecular
level. The claim is not that all chemical phenomena are emergent – ionic bonding, for example is
not – but that one of the most important features of chemistry gives rise to phenomena that can
be considered emergent.
A third example is quark-gluon plasmas. In these, quarks, which in ordinary matter are
bound to other quarks in baryons such as protons and neutrons, become free constituents of the
plasma along with the gluons. However, the quarks are confined to the plasma – free quarks are
not found outside the plasma itself. So the existence of free quarks is contextual and, if one holds
that they are one type of fundamental entity, their existence as separate entities depends upon the
availability of gluons in the development of the world.
7. British Emergentism
Brian McLaughlin’s survey of the British Emergentist movement (McLaughlin 1992)
has justifiably become a central reference point for historical accounts of emergence up through
the mid-point of last century. It does, however, omit an aspect of those earlier debates that is
important for us. This is the role played in the early part of the twentieth century by the
debate over internal and external relations within the nascent logical atomism movement. This
historical background is important for understanding why the inheritors of the logical atomist
tradition continue to be resistant to the introduction of emergent properties and relations.
An important part of the early history of twentieth century analytic philosophy was the
debate over the status of relations, of which one aspect is relevant to our concerns. Idealists had
promoted the concept of internally relational properties, partly with the aim of establishing a
form of widespread holism, especially with regard to cognition.13 If there were an internal
relation between two relata, then those relata would be different were they not in that relation.14 In
particular if there were an internal relational connection between a cognizer and what was
cognized, then the external world would be dependent for the way it is on its relation to the
contents of our thoughts. This would lead to a characteristic feature of idealism, which is that
reality is at least in part dependent upon human cognition.
13 The figure most closely identified with this argument was F.H. Bradley, although there is
disagreement as to the extent to which he was committed to advancing the argument. Scholars of
early twentieth century philosophy tend to the view that Moore’s criticisms of Bradley missed
the point, in that Bradley’s original position was that all relations are unreal, rather than that all
relations are internal. I am interested here in the intrinsic merits of Moore’s account of internal
relations, rather than whether he interpreted Bradley correctly.
14 It is standard to require that both relata are changed in virtue of being in an internal relation. It
is consistent with the concept of an internal relation that only one of the relata is altered, but I
shall here conform to the standard view.
Given the fact that some relations are irreducible to monadic properties, it was important
to the project of logical analysis to demonstrate that all relations are external, in order that the
relata remained unchanged when related. Spatial relations are a core example of external
relations and they have the characteristic feature that a change in spatial relations alone does not
alter the intrinsic properties of the objects standing in those relations. If I move two sheets of
paper farther apart, they remain the same sheets of paper with the same intrinsic properties. (One
must think of objects isolated in space with only spatial relations holding between them. Where
there are spatially dependent relations of other kinds, such as electrostatic forces, this invariance
may not hold because the shapes of the related objects might change, for example.) This debate
over internal and external relations is still of interest because most contemporary metaphysicians,
following in the analytic tradition, reject internal relations in Moore’s sense, the position that I
informally characterized above.15
15 Within the contemporary tradition of Humean supervenience, the definitions of internal and
external relations have been modified so that the only external relations are spatio-temporal
relations and all other relations are determined by the intrinsic properties of the relata.
There is a clear affinity between some (but by no means all) features of Moore’s
conception of an internal relation and ontological contextualism. Two of these features are of
interest to us: first, the idea that internal relations provide an irreducibly relational connection
between their relata, a connection that results in a certain kind of unity between the relata that is
not present in them considered individually and which results in a holistic ontology rather than
an atomistic ontology; secondly, the idea that being in an internal relation `makes a difference to'
each of the relata, an idea that resembles that behind ontological contextualism.
In using the affinities between internal relations and contextual emergence as a guide to
how to construe the latter, we must be careful of one thing. In the diachronic dimension of
emergence, there is a clear sense that we first have the initial entities, then the emergent entities
appear. Or, at least that the emergent entities will be present no sooner than their origins. In
contrast, discussions of internal relations, and many discussions of downward causation,
sometimes make it appear that the influence is in the reverse direction - that first one has the
relations, then the relata enter into them and are altered.16 We shall want to account for
contextualism differently.
8. G.E. Moore
We can use G.E. Moore's famous treatment of internal and external relations in Moore
[1922] to motivate the discussion. Moore’s account of internal relations is not as clear as it could
be: `Let P be a relational property and a a term to which it does in fact belong. I propose to
define what is meant by saying that P is internal to a (in the sense we are now concerned with)
that from the proposition that a thing has not got P, it "follows" that it is other than a’. (Moore
[1922], p. 291) What Moore meant by this claim is far from obvious. There is a weak and a
strong reading of the claim, the weak claim corresponding to modern definitions of an internal
relation and the strong claim being radically different.
The weak reading of Moore is: what is meant by an entity a being in an internal relation
is simply that were the relation not to apply to an entity b, there would have to be some intrinsic
difference between a and b. The strong reading, which is supported by other remarks Moore
made, asserts that it is in virtue of being in the internal relation that an entity a has some of its
intrinsic properties changed. That is, either being related to some other object, or the relation
itself, produces a change in a. The strong interpretation comes very close to committing itself to
downward causation: certainly in the second case, if it is the relation that causes a change in a
(and the relation is at a `higher’ level than its relata), although not so obviously in the first case,
where it is simply standing in the appropriate relation that causes the change. It is this strong
interpretation that pushed contemporary philosophers, as it did Moore, away from the view that
internal relations in that sense exist and to firmly resist the use of emergent properties in our
ontologies.
16 Moore [1922], pp. 278-280, is quite clear that such discussions of relations `modifying’ their
relata have no direct bearing on the issue of whether all relations are internal.
Here is one famous example of that attitude: An important part of David Lewis’s
doctrine of Humean supervenience is the claim that the only external relations in our world are
the spatiotemporal ones, together with the rejection of emergent properties: `...there might be
extra, irreducible external relations besides the spatiotemporal ones; there might be emergent
natural properties of more than point-sized things...But if there is such like rubbish, say I, then
there would have to be extra natural properties or relations that are altogether alien to this
world.’17 (Lewis 1986, p.x). But neither internal nor external relations change the relata
according to this more recent tradition.
17 Lewis allowed that there were non-spatiotemporal external relations, one example being the
relation of non-identity (Lewis [1986], p.77), although that example has the defect of not being a
natural relation. He also recognizes that there might be physical counterexamples to his view, but
he dismisses them with the remark: "But I am not ready to take lessons in ontology from
quantum physics as it now is. First I must see how it looks when it is purified of instrumentalist
frivolity ... and when it is purified of double thinking deviant logic and ... when it is purified of
supernatural tales about the power of the observant mind to make things jump." (Lewis [1986],
p.xi)
9. A Final Argument
Consider an everyday phenomenon. We take two lumps of putty, knead them together,
and we have a lump that has twice the mass of the originals. There is a sense in which we have
lost the original lumps of putty qua lumps, but we believe that we could identify the parts of the
two original lumps, for example by colouring the original lumps in distinctive colours, and that,
despite epistemological problems, the mixture is in fact composed of the original two masses.
The belief is that at the atomic level, the atoms of each lump are preserved and the original `left
hand lump' is present as a scattered object within the whole. If we allow that scattered objects
exist, then the lumps are still there, not qua lumps, but scattered around the jointly occupied
volume. Lumps of putty are of course not atoms of anything – they are neither immutable nor
indivisible. Yet these examples indicate something important – generative atomism must
frequently appeal to some finer-grained ontology in order to retain its mereological orientation.
And that in turn seems to mean that if you want to be a physicalist of the generative atomism
type then you must make your physicalist commitments at a level at or very close to what is
currently considered to be the fundamental level. In turn, if generative atomism fails at this level
of ontological commitment, then the generative atomist’s line of retreat is cut off. That is, the
generative atomism advocate needs to appeal to an iterated sequence of decomposition
operations that result in the analysis of the system into fundamental components. If that
decomposition process halts at any level above the fundamental level, then there is not a uniform
fundamental level. And this is exactly what we have seen. The anti-emergentist position has its
line of retreat cut off at levels above, but not far removed from, what we currently consider to be
the most fundamental domain. Generative atomism and its contemporary relations are blocked
by very basic phenomena in condensed matter physics such as ferromagnetism or in chemistry
such as covalent bonding.18
18 This leaves open the possibility that there is a mereological relation between quarks and these
larger objects, perhaps one of mereological supervenience, but that relation would have to skip at
least one level, and would fail to provide us with the kind of explanation of exchange forces that
contemporary accounts do. Moreover, at the quark level there is evidence of contextual
10. Relations to the Position of Fan, Yan, and Zhang
I now draw the reader’s attention to two aspects of the position taken by Fan Dongping,
Yan Zexian, and Zhang Huaxia in their paper `Downward Causation of Complex Systems’ (this
journal 2011). Fan, Yan, and Zhang make a valuable distinction between three levels of analysis
in a system; the micro-level, the macro-level, and the environment. They then use the example of
Bénard rolls to argue that `the behaviors of these liquid elements are influenced and governed by
the environment...and context sensitive macro structure...’ (p.46) and that this constitutes an
example of downward causation. I believe that it is appropriate to consider their example as one
of contextual emergence as we can see by their discussion of A.N. Whitehead’s views (p.43). In
discussing the example of living organisms, Whitehead wrote: `...the plan of the whole
influences the very characters of the subordinate organisms which enter into it....the principle of
modification is perfectly general throughout nature, and represents no property peculiar to living
bodies’ (Whitehead 1925, p.109). Here we must be careful to distinguish between, on the one
hand, `the whole’, which would include the part being influenced and hence potentially involve
self-causation, and, on the other hand, a part of a system that is influenced by the rest of the
system. It is the latter that applies to the Bénard rolls. Fan, Yan, and Zhang’s example has the
context, which consists of the rest of the system, existing at a higher level than the influenced
part and hence involves downward causation. More generally, whether downward causation is
always involved in contextual emergence is an open question and its answer depends upon
having an agreed upon account of levels in a system, an account which is not currently available.
For example, nearest-neighbor causal interactions would not seem to count as changing the level,
but long-range correlations in lattices in general would.
Fan, Yan, and Zhang also argue that downward causation is the key problem in strong
emergence. Although in my view their approach is largely successful, I want to note that not all
downward causation involves the action of a system on its own parts. If the term `downward’ has
a clear meaning, it must mean causation for which the cause is at a higher level than the effect,
i.e. if C(a,b), then a is at a higher level than b. In that sense, downward causation need not be
from an entity to its components but can be from one entity to a distinct entity at a lower level,
such as from a yellow particle to a red particle in our earlier example. Hence, self-causation is
not always involved in downward causation. Moreover, if we allow that the physical domain
itself contains multiple levels, the physical domain can remain causally closed while allowing
downward causation within that domain.
11. Conclusion
Finally, some conclusions. There is not a realistic prospect of arriving at a unified
account of emergence in the near future. As I have argued, one has to identify deficiencies in a
long-established, very successful philosophical tradition that is resolutely opposed to emergence.
Perhaps the number of actual examples of ontological emergence will be small, perhaps not. But
there is evidence that some candidates result from a failure of the immutability principle, those
failures themselves being a result of breakdowns in the distinguishability principle. If there are
arguments that the distinguishability principle follows from the immutability principle, and I do
not at the moment have such an argument, then we can look to failures of immutability, in
particular failures resulting from contextualism, as a reason why those older traditions of
philosophy need to be modified to allow for emergent features in our world, and it will be
contextual mutability that is the key to a particular kind of emergence.
Paul Humphreys
REFERENCES
Anderson, P.W. [1972]:"More is Different", Science 177: 393-396.
Carroll, Felix [1997]: Perspectives on Structure and Mechanism in Organic Chemistry. Brooks
Cole.
Chalmers, David [1996]: The Conscious Mind: In Search of a Fundamental Theory. New York:
Oxford University Press.
Emmeche, Claus, Simo Køppe and Frederik Stjernfelt [2000]: `Levels, Emergence, and Three
Versions of Downward Causation’ pp. 13-34 in: Peter Bøgh Andersen, Claus Emmeche, Niels
Ole Finnemann and Peder Voetmann Christiansen, eds. Downward Causation. Minds, Bodies
and Matter. Århus: Aarhus University Press, 2000.
Fan Dongping, Yan Zexian, and Zhang Huaxia [2011]: `Downward Causation of Complex
Systems’, Philosophical Researches (to appear)
Humphreys, Paul [1989]: The Chances of Explanation. Princeton: Princeton University Press.
Humphreys, Paul [1997a]: `How Properties Emerge', Philosophy of Science 64, pp. 1-17.
Humphreys, Paul [1997b]: `Emergence, Not Supervenience', Philosophy of Science 64, pp.
S337-S345.
Humphreys, Paul [2008]: `Computational and Conceptual Emergence’, Philosophy of Science 75,
pp. 584-594.
Lewes, G.H. [1875]: Problems of Life and Mind, Volume II. London: Kegan Paul, Trench,
Trübner, and Company.
Lewis, David [1986]: Philosophical Papers, Volume II. Oxford: Oxford University Press.
McLaughlin, Brian [1992]: `The Rise and Fall of British Emergentism’, pp. 49-93 in Ansgar
Beckerman, Hans Flohr, and Jaegwon Kim (eds). Emergence or Reduction?: Essays on the
Prospects of Nonreductive Physicalism. Berlin: Walter de Gruyter, 1992.
McLaughlin, Brian [1997]: `Emergence and Supervenience’, Intellectica 25, pp. 25-43.
Mill, J.S. [1843]: A System of Logic: Ratiocinative and Inductive. London: Longmans, Green and
Company.
Moore, G.E. [1922]: "External and Internal Relations", pp.276-309 in G.E. Moore, Philosophical
Studies. London: Routledge and Kegan Paul.
Schaffer, Jonathan [2003]: `Is There a Fundamental Level?’, Noûs 37, pp. 498-517.
Whitehead, Alfred North [1925]: Science and the Modern World. New York: The Free Press.
Wimsatt, W. [2007]: `Emergence as Non-Aggregativity and the Biases of Reductionisms’ in
William Wimsatt, Piecewise Approximations to Reality, Cambridge (Mass): Harvard University
Press, 2007.