DRAFT
2/12/2016
Emergence Explained: Entities
Nature’s memes
Russ Abbott
Department of Computer Science, California State University, Los Angeles
and
The Aerospace Corporation
Russ.Abbott@GMail.com
Abstract. We apply the notions developed in the preceding paper ([1]) to discuss such issues as the nature of entities, the fundamental importance of interactions between entities and their environment (functionality vs. mechanism), the central and often ignored role (especially in computer science) of energy, entities as nature's way of remembering, the aggregation of complexity, the reductionist blind spot, and the similarities between biology and economics.
1 Introduction
In [1] we characterized emergent phenomena as phenomena that may be described independently of their implementations. We credited [Anderson]
with being one of the first prominent
physicists to argue that new laws of nature, i.e., laws not derivable from physics, exist at various levels of complexity.
While re-reading [Schrodinger] we
found the following relevant passage.
[L]iving matter, while not eluding the 'laws
of physics' … is likely to involve 'other laws
of physics,' hitherto unknown, which … will
form just as integral a part of [the] science
[of living matter] as the former.
As we pointed out in the earlier paper,
there are indeed new laws, which, while
consistent with the laws of physics, are
not reducible to them. A significant part
of this paper is devoted to elaborating
this perspective.
In the earlier paper we distinguished between static emergence (emergence that
is implemented by energy wells) and dynamic emergence (emergence that is implemented by energy flows). We argued
that emergence (of both forms) produces
objectively real phenomena (because
they are distinguishable by their entropy
and mass characteristics) but that interaction among emergent phenomena is
epiphenomenal and can always be reduced to the fundamental forces of physics.
Our focus in that paper was on the phenomenon of emergence itself. In this paper we explore the entities that arise as a
consequence of the two types of emergence, focusing especially on dynamically emergent entities and their interactions with their environment.
Reductionism is an attempt to eliminate
magic from our understanding of nature.
Without reductionism, what else could
there be besides magic? Software (and
engineering in general) is neither reductionist nor magic.
2 Emergent Entities
As human beings we seem naturally to
think in terms of entities—things or objects. Yet the question of how one might
characterize what should and should not
be considered an entity remains philosophically unresolved. (See, for example, [Boyd], [Laylock], [Miller],
[Rosen], [Varzi Fall ‘04].) We propose
to define an emergent entity as any instance of emergence that produces a
physically bounded result.1 What is fundamental to an emergent entity is that one
can identify the force or forces of nature
that binds it together and that causes it to
persist in a form that allows one to distinguish it from its environment—on
grounds of its distinguishing entropy and
mass.
Some emergent entities (such as an atom, a molecule, a pencil, a table, a solar
system, a galaxy) are instances of static
emergence. These entities persist because they exist in energy wells. Biological entities (such as you and I) and social entities (such as a social club, a corporation, or a country) are instances of
dynamic emergence. These entities persist as a result of energy flows.
On the other hand, what might be considered conceptual (or Platonic) entities—such as numbers, mathematical
sets (and other mathematical constructs),
properties, relations, propositions, categories such as those named by common
nouns (such as the category of cats, but
not individual cats), and ideas in general—are not (as far as we know) instances of emergence.2 Nor are intellec-
tual products such as poems and novels,
scientific papers, or computer programs
(when considered as texts). Time instances (e.g., midnight December 31,
1999), durations (e.g., a minute), and
segments (e.g., the 20th century) are also
not instances of emergence. Neither are
the comparable constructs with respect
to space and distance.
1 In the earlier paper we used Brownian motion as an example of an epiphenomenon, and we defined emergent as synonymous with epiphenomenal. But Brownian motion produces neither reduced entropy nor a change in mass. This was pointed out to me by Frank Harold. We must revise our definition to limit emergence to epiphenomena that result in reduced entropy.
2 We simply do not understand how ideas as subjective experience come into being. When we learn how subjective experience is connected to the brain, we may find that ideas (or at least their physical realizations) are in fact instances of emergence and that one can identify the forces that hold ideas together. For now, though, we can't say that an idea as such is an instance of emergence since we don't know how ideas are implemented physically. Even if we did know how ideas are implemented, the physical instantiation of an idea would not be the same thing as the referent of the idea. (My thinking of the number 2 is not the number 2—assuming there is such a thing as the number 2 as an abstraction.) So we maintain the position that concepts as such are not material entities, and we do not discuss them further.
Since by definition every emergent entity is an instance of emergence, all emergent entities consist of matter and energy arranged to implement some independently describable abstraction. Since
conceptual entities don’t involve matter
or energy and since, at least to date, conceptual entities don’t have implementations (at least they don’t have implementations that we understand), none of
them satisfy our definition of an emergent entity.
2.1 Static emergent entities
Statically emergent entities (static entities for short) are created when the fundamental forces of nature bind matter
together. The nucleus of any atom (other
than simple Hydrogen, whose nucleus
consists of a single proton) is a static entity. It results from the application of the
strong nuclear force, which binds the
nucleons together in the nucleus. Similarly any atom (the nucleus along with
the atom’s electrons) is also a static entity. An atom is a consequence of the electromagnetic force, which binds the atom’s electrons to its nucleus. Molecules
are also bound together by the electro-
magnetic force. On a much larger scale,
astronomical bodies, e.g., the earth, are
bound together by gravity, as are solar
systems and galaxies.
Static entities, like all instances of emergence, have properties which may be described independently of how they are
constructed. As Weinberg [W] points
out, “a diamond [may be described in
terms of its hardness even though] it
doesn't make sense to talk about the
hardness … of individual ‘elementary’
particles.” The hardness of a diamond
may be characterized and measured independently of how diamonds achieve
that property—which, as Weinberg also
points out, is a consequence of how diamonds are implemented, namely, their
“carbon atoms … fit together neatly.”
A distinguishing feature of static entities
(as with static emergence in general) is
that the mass of any static entity is strictly smaller than the sum of the masses of
its components. This may be seen most
clearly in nuclear fission and fusion, in
which one starts and ends with the same
number of atomic components—
electrons, protons, and neutrons—but
which nevertheless converts mass into
energy. This raises the obvious question:
which mass was converted to energy?
The answer has to do with the strong nuclear force, which implements what is
called the “binding energy” of nucleons
within a nucleus. For example, a helium
nucleus (also known as an alpha particle,
two protons and two neutrons bound together), which is one of the products of
hydrogen fusion, has less mass than the
sum of the masses of the protons and
neutrons that make it up when consid-
ered separately.3 The missing mass is released as energy.
3 It turns out that iron nuclei "lack" the most mass. Energy from fusion is possible for elements lighter than iron; energy from fission is possible for elements heavier than iron.
The same entity-mass relationship holds
for all static entities. An atom or molecule has less mass (by a negligible but
real amount) than the sum of the masses
of its components taken separately. The
solar system has less mass (by a negligible but real amount) than the mass of the
sun and the planets taken separately.
Thus the entropy of these entities is lower than the entropy of the components as
an unorganized collection. In other
words, a static entity is distinguishable
by the fact that it has lower mass and
lower entropy than its components taken
separately. Every static entity exists in
what is often called an energy well; it
requires energy to pull the static entity’s
components apart. Static entities are also
at an energy equilibrium.
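As a concrete illustration of this mass deficit, the short Python sketch below computes the "missing mass" of a helium-4 nucleus and the binding energy it corresponds to. The particle masses are standard approximate values that we supply for illustration; they are not figures taken from this paper.

# Approximate rest masses in unified atomic mass units (u).
# These particular values are illustrative, not taken from the paper.
m_proton  = 1.007276  # u
m_neutron = 1.008665  # u
m_helium4 = 4.001506  # u (bare He-4 nucleus, i.e., an alpha particle)

u_to_MeV = 931.494    # energy equivalent of 1 u, in MeV (E = mc^2)

# Mass of the separated components vs. mass of the bound entity.
m_parts  = 2 * m_proton + 2 * m_neutron
m_defect = m_parts - m_helium4          # roughly 0.030 u

binding_energy = m_defect * u_to_MeV    # roughly 28.3 MeV

print(f"mass of parts:  {m_parts:.6f} u")
print(f"mass of entity: {m_helium4:.6f} u")
print(f"mass defect:    {m_defect:.6f} u  ->  binding energy ~ {binding_energy:.1f} MeV")

The bound entity weighs measurably less than its parts taken separately; pulling it back apart would require paying that energy back, which is what it means to sit in an energy well.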
Manufactured or constructed artifacts also exhibit static emergence. The binding
force that holds manufactured static entities together is typically the electromagnetic force, which we exploit when we
use nails, glue, screws, etc. to bind static
entities together into new static entities.
Just as a diamond implements the property of being hard, a house (a more heterogeneous static entity, but one bound together by many of the same means) has the statically emergent property number-of-bedrooms.
As an entity, a house implements the
property of having a certain number of
bedrooms by the way in which it is constructed from its components.
A static entity consists of a fixed collection of components over which it super-
venes. By specifying the states and conditions of its components, one fixes the
properties of the entity. But static entities such as houses that undergo repair
and maintenance no longer consist of a
fixed collection of component elements,
thereby raising the question of whether
such entities really do supervene over
their components. We resolve this issue
when we discuss Theseus’ ship.
2.2 Dynamic entities
Dynamic entities are instances of dynamic emergence. As is the case with all
emergence, dynamic emergence results
in the organization of matter in a way
that differs from how it would be organized without the energy flowing
through it. That is, dynamic entities have
properties as entities that may be described independently of how those
properties are implemented. Dynamic
entities include biological and social entities—and, as we discuss below, hurricanes.
For dynamic entities, their very existence—or at least their persistence as a
dynamic entity—depends on a flow of
energy. Many dynamic entities are built
upon a skeleton of one or more static entities. The bodies of most biological organisms, for example, continue to exist
as static entities even after the organism
ceases to exist as a dynamic entity, i.e.,
after the organism dies. When those bodies are part of a dynamic entity, however, the dynamic entity includes processes
to repair them. We discuss this phenomenon also when we examine the puzzle
of Theseus’ ship.
Choreographed events like games are also entities.
3 Capital and savings
Capital and savings are used for two purposes:
1. To create artificial entities. In
capitalism one hopes that they
will be self-sustaining.
2. For growth. In both biology and
economics one needs resources
to pay for growth.
4 Petty reductionism fails
for dynamic entities—for
all practical purposes
Weinberg’s petty reductionism is another way of saying that an entity supervenes over the matter of which it is
composed: fixing the properties of the
matter of which an entity is composed
fixes the properties of the entity.4 Hurricanes illustrate a difficulty with supervenience and petty reductionism for dynamic entities.
The problem for petty reductionism and
supervenience is that from moment to
moment new matter is incorporated into
a hurricane and matter that was in the hurricane leaves it. Let's define what one
might call a hurricane’s supervenience
base as the smallest collection of matter
over which a hurricane supervenes.
Since matter cycles continually through
a hurricane, a hurricane’s supervenience
base consists of the entire collection of
matter that is or has been part of a hurricane over its lifetime. Consequently a
hurricane’s supervenience base must be
significantly larger than the amount of
matter that constitutes a hurricane at any
moment. Because a hurricane’s supervenience base is so much larger than the
matter that makes it up at any moment,
the fact that a hurricane supervenes over
its supervenience base is not very useful.
Other than tracking all the matter in a
hurricane's supervenience base, there is
no easy reducibility equation that maps
the properties of a hurricane's supervenience base to properties of the hurricane
itself.
4 Recall that a set of higher level predicates is said to supervene over a set of lower level predicates if a configuration of truth values for the lower level predicates determines the truth values for the higher level predicates. When we say that an entity supervenes over its components we mean that the properties of an entity are fixed if the properties of its components are fixed.
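To see what such a reducibility mapping would even look like, here is a minimal sketch in Python. The components ("joints") and the "rigidity" property are invented purely for illustration; the point is only that supervenience amounts to the entity-level property being a function of the component states, a function that, for a hurricane, would have to range over everything that ever cycles through it.

# Supervenience as a mapping from component states to an entity-level property.
# The components and the property below are invented for illustration.
def entity_property(component_states):
    # The higher-level predicate is fixed once the lower-level states are fixed.
    return all(state == "fastened" for state in component_states)

print(entity_property(("fastened", "fastened", "fastened")))   # True
print(entity_property(("fastened", "loose", "fastened")))      # False

# For a hurricane, the tuple of component states would have to cover every
# parcel of air and water that ever passes through it -- a supervenience base
# so large, and so shared with other entities, that the mapping, though it
# exists, does no explanatory work.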
Our example was the implementation of
a Turing Machine on a Game of Life
platform. A reductive analysis of a
Game-of-Life Turing Machine can explain how a Turing Machine may be
implemented, but it doesn’t help us understand the laws governing the functionality that the Turing Machine provides.
Furthermore, the longer a hurricane persists, the larger its supervenience base—
even if the hurricane itself maintains an
approximate constant size during its lifetime. Much of the matter in a hurricane’s
supervenience base is likely also to be
included in the supervenience bases of
other hurricanes. Like Weinberg’s example of quarks being composed (at
least momentarily) of protons, hurricanes are at least partially composed of
each other. Thus just as Weinberg gave
up on the usefulness of petty reductionism in particle physics, we must also
give up on the usefulness of petty reductionism and supervenience for dynamic
entities.
Weinberg’s characterization of petty reductionism was the “doctrine that things
behave the way they do because of the
properties of their constituents.” Recall
that Weinberg said that petty reductionism has “run its course” because when it
comes to primitive particles it isn’t always clear what is a constituent of what.
5 Entities and functionality
When one eliminates entities as a result of reductionism, one loses information. (Compare Shalizi's definition of emergence.) Throwing away the entity is the
reductionist blind spot.
The question Shalizi raises is whether his
higher level variables refer to anything
real or are just definitional consequences of lower level variables. Our
definition of entities says they are real.
As we discussed in [1], Weinberg distinguished between what he called petty
and grand reductionism. Grand reductionism is the claim that all scientific
laws can be derived from the laws of
physics. In the first paper, we argued
that grand reductionism doesn’t hold.
In most other realms of science, however, petty reductionism still holds sway.
To understand something, take it apart
and see how it works. Thus the traditional scientific agenda can be described as
follows. (a) Observe nature. (b) Identify
likely categories of entities. (c) Explain
the observed functionality/phenomenology of entities in those
categories by understanding their structure and internal operation.5
5 In some cases we find that task (c) leads us to conclude that what we had postulated as a category of entity was not one, perhaps because we found that similar functionality/phenomenology in different instances was implemented differently.
Once this explanatory task is accomplished, the reductionist tradition has
been to put aside an entity’s functional/phenomenological description and replace it with (i.e., to reduce it to) the explanation of how that functionality/phenomenology is brought about. The
functional/phenomenological description
is considered simply a shorthand for
what we now understand at a deeper level. Of course one then has the task of explaining the lower-level mechanisms in
terms of still lower-level mechanisms,
etc. But that’s what science is about,
peeling nature’s onion until her fundamental mechanisms are revealed. In this
section we argue that this approach
has severe limitations. In particular we
discuss what we refer to as the reductionist blind spot.
5.1 Hurricanes as dynamic entities
Most dynamic entities are biological or
social, but there are some naturally occurring dynamic entities that are neither.
Probably the best known are hurricanes.
A hurricane operates as a heat engine in
which condensation—which replaces
combustion as the source of energy—
occurs in the upper atmosphere. A hurricane involves a greater than normal
pressure differential between the ocean
surface and the upper atmosphere. That
pressure differential causes warm moist
surface air to rise. When the moisture-laden air reaches the upper atmosphere,
which is cooler, it condenses, releasing
heat. The heat warms the air, which expands and reduces the pressure, thereby
maintaining the pressure differential.6
Since the heated air also dissipates, the
upper atmosphere remains cooler. (See
Figure 3.)
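For a rough, back-of-the-envelope sense of the heat-engine picture, the Python sketch below treats the hurricane as an ideal (Carnot) engine running between the warm sea surface and the cold outflow layer. The temperatures are typical illustrative values chosen by us, not figures from the text.

# Hurricane as a heat engine: ideal (Carnot) efficiency between the warm
# ocean surface and the cold upper-atmosphere outflow. Temperatures are illustrative.
T_surface = 300.0   # K, warm moist air near the sea surface (about 27 C)
T_outflow = 200.0   # K, cold upper-atmosphere outflow (about -73 C)

carnot_efficiency = 1.0 - T_outflow / T_surface
print(f"ideal efficiency ~ {carnot_efficiency:.0%}")   # about 33%

# The engine runs only while the temperature difference (and the supply of
# warm moist air) is maintained; cut off the energy flow and the entity decays.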
Hurricanes are objectively recognizable
as entities. They have reduced entropy—
hurricanes are quite well organized—and
because of the energy flowing through
them, they have more mass than their
physical components (the air and water
molecules making them up) would have
on their own. Hurricanes illustrate the
case of a dynamic entity with no static
skeleton. When a hurricane loses its external source of energy—typically by
moving over land—it is no longer capable of binding together matter into an
organized structure. The hurricane's entropy rises and its excess mass dissipates
until it no longer exists as an entity.
6 A characterization of a hurricane as a vertical heat engine may be found in Wikipedia. (URL as of 9/1/2005: http://en.wikipedia.org/wiki/Hurricane.) The preceding hurricane description was paraphrased from NASA, "Hurricanes: The Greatest Storms on Earth." (URL as of 3/2005: http://earthobservatory.nasa.gov/Library/Hurricanes/.)
6 Dissipative structures
Somewhat intermediate between static
and dynamic entities are what Prigogine
[Prigogine] (and elsewhere) calls dissipative structures. A dissipative structure
is an organized pattern of activity that
occurs when an external source of energy is introduced into a constrained environment. A dissipative structure is so
named because in maintaining its pattern
of activity it dissipates the energy supplied to it.
Typically, a static entity becomes dissipative when a stream of energy is
pumped into it in such a way that the energy (a) disturbs the internal structure of
the entity but (b) dissipates without destroying the static entity’s structure. Musical instruments offer a nice range of
examples. Some are very simple (direct
a stream of air over the mouth of a soda
bottle); others are acoustically more
complex (a violin). All are static entities
that emit sounds when energy is pumped
into them. Another commonly cited example is the collection of Rayleigh-Bénard convection patterns that form in
a confined liquid when one surface is
heated and the opposite surface is kept
cool. (See Figure 1.)
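Whether organized convection appears at all depends on how strongly the layer is driven. A standard way to express the threshold is the Rayleigh number; the Python sketch below, with rough fluid properties for water that we supply only for illustration, compares it with the commonly quoted critical value of about 1708 for a layer between two rigid plates.

# Rayleigh-Benard convection: does the imposed temperature difference exceed
# the threshold for organized rolls? Fluid properties are rough values for
# water at room temperature; they are illustrative, not from the paper.
g       = 9.81        # m/s^2, gravity
beta    = 2.1e-4      # 1/K, thermal expansion coefficient
nu      = 1.0e-6      # m^2/s, kinematic viscosity
kappa   = 1.4e-7      # m^2/s, thermal diffusivity
d       = 0.005       # m, depth of the fluid layer
delta_T = 1.0         # K, temperature difference across the layer

Ra = g * beta * delta_T * d**3 / (nu * kappa)
Ra_critical = 1708    # commonly quoted threshold for rigid-rigid boundaries

print(f"Ra = {Ra:.0f}; convection {'expected' if Ra > Ra_critical else 'not expected'}")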
For a much larger example, consider
how water is distributed over the earth.
Water is transported from place to place
via processes that include evaporation,
atmospheric weather system movements,
precipitation, groundwater flows, ocean
current flows, etc. Taken as a whole,
these cycles may be understood as a dissipative structure which is shaped by
gravity and the earth’s fixed geographic
structure and driven primarily by solar
energy, which is pumped into the earth’s
atmosphere.
Our notion of a dissipative entity is
broad enough to include virtually any
energy-consuming device. Consider a
digital clock. It converts an inflow of energy into an ongoing series of structured
activities—resulting in the display of the
time. Does a digital clock qualify as a
dissipative entity? One may argue that
since the design of a digital clock limits
the ways in which it can respond to the
energy inflow it receives it should not be
characterized as a dissipative entity. But
any static entity has only a limited number of ways in which it can respond to an
inflow of energy. We suggest that it
would be virtually impossible to formalize a principled distinction between Rayleigh-Bénard convection cycles and the
structured activities within a digital
clock.7, 8
7 Another common example of a dissipative structure is the Belousov-Zhabotinsky (BZ) reaction, which in some ways is a chemical clock. We design digital clocks to tell time. We didn't design BZ reactions to tell time. Yet in some sense they both do. That one surprises us and the other doesn't shouldn't mislead us into putting them into different categories of phenomena.
8 In all our examples, the form in which energy is delivered also matters. An electric current will produce different effects from a thermal energy source when introduced into a digital clock and a Rayleigh-Bénard device.
Just as emergent phenomena are typically limited to feasibility ranges, dissipative entities also operate in distinct ways
within various energy intensity ranges.
Blow too gently into a recorder (the musical instrument) and nothing happens.
Force too much air through it, and the
recorder breaks. Within the range in
which sounds are produced, different intensities will produce either the intended
sounds or unintended squeaks. Thus dissipative entities exhibit phases and phase
transitions that depend on the intensity
of the energy they encounter. The primary concern about global warming, for
example, is not that the temperature will
rise by a degree or two—although the
melting of the ice caps is potentially destructive—but the possibility that a
phase transition will occur and that the
overall global climate structure, including atmospheric and oceanic currents,
will change in unanticipated and potentially disastrous ways. When energy is
flowing through it, a dissipative structure
is by definition far from equilibrium. So
a dissipative structure is a static entity
that is artificially maintained in a far-from-equilibrium state.
The sorts of dissipative structures we
have been discussing are not fully qualified dynamic entities, however, because
they do not include mechanisms to repair
their static structures. Their static structures are maintained by forces other than
those produced by the energy that flows
through them. In particular, dissipative
structures do not cycle material through
themselves as all fully qualified dynamic
entities do. As we will see below, one
consequence of this fact is that dynamic
entities do not easily supervene over
their material components. Static entities
and dissipative structures do.
7 The reductionist blind
spot
We use the term the reductionist blind
spot to refer to the doctrine that once one
understands how higher level functionality can be implemented by lower level
mechanisms, the higher level is nothing
more than a derivable consequence of
the lower level. In other words, the objective is to replace descriptions of functionality with descriptions of mechanisms.
Significantly, the reductionist tradition
does not dismiss all descriptions given in
terms of functionality. After all, what
does reductionism do when it reaches
“the bottom,” when nature’s onion is
completely peeled? One version of the
current “bottom” is the standard model
of particle physics, which consists of
various classes of particles and the four
fundamental forces. This bottom level is
necessarily described functionally. It
can’t be described in terms of implementing mechanisms—or it wouldn’t be
the bottom level. The reductionist perspective reduces all higher level functionality to primitive forces plus mass
and extension. This is not in dispute. As
we said in [1], all higher level functionality is indeed epiphenomenal with respect to the primitive forces.
The difficulty arises because functionality must be described in terms of the interaction of an entity with its environment. The fundamental forces, for example, are described in terms of fields
that extend beyond the entity. This is
quite a different form of description
from a structural and operational description, which is always given in terms
of component elements. When higher
levels of functionality are described, we
tend to ignore the fact that those descriptions are also given in terms of a relationship to an environment. What the reductionist blind spot fails to see is that
when we replace a description of how an
entity interacts with its environment with
a description of how an entity operates,
we lose track of how the entity interacts
with its environment. The functionality
of a Turing Machine is defined with respect to its tape, which is its environment.
This is particularly easy to see with (traditional) Turing Machines when formulated in terms that distinguish the machine itself from its environment. The
functionality of a Turing machine, the
function which it computes, is defined as
its transformation of an input, which it
finds in its environment, into an output,
which it leaves in its environment. What
other formulation is possible? If there
were no environment how would the input be provided and the output retrieved?
It is not relevant whether or not the
computational tape is considered part of
the Turing Machine or part of the environment. All that matters is that the input
is initially found in the environment and
the output is returned to the environment. A Turing Machine computes a
function after all.
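A minimal sketch of this point in Python (the particular machine and its tape encoding are ours, chosen only for illustration): the machine's functionality is the tape-to-tape transformation, while the transition table is merely the internal mechanism that brings it about.

def run_turing_machine(tape):
    transitions = {                      # (state, symbol) -> (write, move, next state)
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }
    tape = list(tape) + ["_"]            # the environment: the input is found on the tape
    state, head = "scan", 0
    while state != "halt":
        write, move, state = transitions[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")     # the output is left back in the environment

print(run_turing_machine("10110"))       # -> 01001: the function computed, seen from outside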
The same story holds for energy-based
entities. Higher levels of functionality,
the interaction of the entity with its environment, are important on their own. An
entity’s higher level functionality is
more than just the internal mechanism
that brings it about. As higher and more
sophisticated levels of functionality are
created—or found in nature—it is important to answer questions such as: how
are these higher levels of functionality
used and how do they interact with each
other and with their environment? Answering these questions fills in the reductionist blind spot.
7.1 The whole is more than the sum of its parts
The whole plus the environment is more
than the sum of its parts plus the environment. The difference is the functionality the whole can bring to the environment that the parts even as an aggregate cannot.
7.2 Entity functionality
Dynamic entities are defined as structures and processes rather than as material. Furthermore, the processes are defined in terms of the structures, thereby
seeming to come into existence all at
once. A hurricane is defined in terms of
its functioning as a heat engine, with a
condensation area, etc. But a hurricane
still qualifies as an entity.
Once—but not before—one has established the notion of an entity, it becomes
possible to talk about entity interactions
and entity functionality. This is a critical
step. Until we establish that entities can
be differentiated from their environments—and hence from each other—it
makes no sense to talk about an interaction either between an entity and its environment or among entities. But once
one has established the possibility of distinguishable entities, then it makes sense
to talk about how they interact.
Let’s examine the following example of
entity functionality. When placed in a
non-homogeneous nutritive medium, E.
coli bacteria tend to move in the direction of the greater concentration of the
nutrient, i.e., up the nutrient gradient.
This behavior is part of E. coli’s known
functionality.9 That E. coli behaves in
this manner is an important element of
the means whereby it perpetuates itself.
Like all dynamic entities, E. coli must
acquire energy from the environment to
power its internal processes. Its movement toward greater concentrations of
nutrients facilitates that process. If E.
coli didn’t behave in this manner, it
would be less likely to persist. This all
seems very straightforward and common-sensical. What else, after all, would
make sense?
9 Since this description is independent of the mechanism that brings it about, the behavior qualifies as emergent.
Through lots of good scientific work, we
now understand the mechanisms that
produce this behavior. Here is Harold’s
description.
[E. coli] movements consist of short
straight runs, each lasting a second or
less, punctuated by briefer episodes of
random tumbling: each tumble reorients
the cell and sets it off in a new direction. …
Cells of E. coli are propelled by their flagella, four to ten slender filaments that project
from random sites on the cell’s surface. …
Despite their appearance and name (from
the Greek for whip), flagella do not lash;
they rotate quite rigidly, not unlike a ship’s
propeller. … A cell … can rotate [its] flagellum either clockwise or counter-clockwise.
Runs and tumbles correspond to opposite
senses of rotation. When the flagella turn
counter-clockwise [as seen from behind]
the individual filaments coalesce into a helical bundle that rotates as a unit and
thrusts the cell forward in a smooth straight
run. … Frequently and randomly the sense
of the rotation is abruptly reversed, the flagellar bundle flies apart and the cell tumbles until the motor reverses once again.
…
So this is the first step in understanding
E. coli locomotion: it engages in a random walk. But we know that E. coli’s
motion is not random; it moves up nutrient gradients. How does it do that? Here
is Harold again.
Cells [which happen to be] moving up the
gradient of an attractant towards its source
tumble less frequently than cells wandering
in a homogeneous medium: while cells
moving away from the source are more
likely to tumble. … In consequence, the
cell takes longer runs toward the source
and shorter ones away, and rambles quite
successfully up the gradient.
Quite impressive.
How can a cell “know” whether it is traveling up the gradient or down? … It
measures the attractant concentration at
the present instant and “compares” it with
that a few seconds ago. … E. coli can respond within milliseconds to local changes
in concentration, and under optimal conditions readily detects a gradient as shallow
as one part in a thousand over the length
of a cell.
In other words, E. coli has a memory of
sorts. It has an internal state, which is set
by the previously sensed concentration
of attractant and which can be compared
to the currently sensed concentration.
How does it do that? Harold goes on to
describe the complex combination of
chemical reactions that produce this effect. (See Harold for the details.)
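A minimal sketch of the mechanism Harold describes, in Python. The step lengths, tumble probabilities, and attractant gradient are invented for illustration; what matters is that each simulated cell only runs, tumbles, and compares the concentration it senses now with the one it remembers from a moment ago. Nothing in the update rule mentions ascending a gradient, yet the population drifts up it.

import random

def concentration(x):
    return x                          # invented attractant gradient: higher x, more nutrient

def run_cell(steps=2000):
    x, direction = 0.0, 1
    remembered = concentration(x)     # the cell's "memory" of the recent concentration
    for _ in range(steps):
        x += 0.1 * direction          # a short straight run
        sensed = concentration(x)
        # Tumble rarely when things are improving, often when they are not.
        p_tumble = 0.05 if sensed > remembered else 0.5
        if random.random() < p_tumble:
            direction = random.choice([-1, 1])   # a tumble: pick a new direction
        remembered = sensed
    return x

positions = [run_cell() for _ in range(200)]
print(sum(positions) / len(positions))   # clearly positive: the population drifts up the gradient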
Given this work, we now have (or at
least let’s presume that we have) a complete chemical explanation for how E.
coli moves up a gradient toward an attractant. Have we thereby reduced E.
coli’s behavior to chemistry?
Insofar as there are no additional forces
to which one must appeal, we have. But
can we now replace the statement that E.
coli moves toward an attractant with a
collection of chemical equations? No.
Chemical equations do not tell us anything about E. coli as an entity. The
chemical equations are about chemicals,
not about cells. It is not possible to convey the information that E. coli moves
toward an attractant in the language of
chemistry. The language of chemistry allows one to talk about chemicals; it
doesn’t allow one to talk about cells.
Perhaps we can define a cell in terms of
chemistry, build an extended chemical
language, and add to it the equations underlying E. coli’s motion. This won’t
work either. As we saw before, dynamic
entities do not supervene over a static
collection of components. There is no
fixed collection of chemicals about
which one can say that a cell is that collection of chemicals arranged in a particular way. As we said before, the best
one can do is tell a story of how chemical molecules participate at various
times in the history of a cell. What’s
missing from the collection of chemical
equations is information about the structure and organization of the chemicals
whose interactions lead to E. coli’s progress up a gradient.
This is similar to the situation we describe in [1] when examining the implementation of a Turing Machine on a
Game of Life. Certainly one can describe how Game of Life patterns implement a Turing Machine. But there is
nothing in either the Game of Life rules
(the physics level10) or a library of Game
of Life pattern interactions (the chemistry level) that lets us draw conclusions
about Turing Machine functionality. In
the same way, there is nothing in the E.
coli chemical equations (or the physics
underlying them) that lets us draw conclusions about the functionality of E.
coli. Everything about E. coli supervenes
over chemistry; but E. coli functionality
cannot be derived from chemistry.
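The Game of Life "physics" can be stated completely in a few lines (a standard formulation; the code below is ours). Note that the rule mentions only cells and live-neighbor counts; its vocabulary contains nothing about gliders, patterns, or Turing Machines, even though all of those supervene on repeated applications of it.

from collections import Counter

def life_step(live_cells):
    # One tick of Conway's Game of Life; live_cells is a set of (x, y) pairs.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider -- but "glider" is our word; the rule above knows nothing about it.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))      # the same shape, shifted one cell diagonally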
E. coli locomotion toward an attractant
exemplifies the sort of phenomenon one
normally hears described as emergent: a
number of micro-level events occur
which result in a macro-level phenomenon that may be characterized (on its
own terms) as displaying some recognized functionality.
10 Game of Life rules are the physics of the world in which Game of Life pattern interactions are the chemistry.
By looking at the
micro level phenomena—i.e., the individual chemical reactions—in isolation
one wouldn’t guess that the macro-level
functionality would appear (i.e.,
emerge). One must “run” the micro-level
phenomena and observe the result. It all
seems quite mysterious. But of course,
once one understands how it works,
it’s no more mysterious than the functionality of any engineered product. This
is reductionism at its best. But it isn’t reductionism alone. Chemical equations
don’t talk about bacterial cells. The language just isn’t sufficient any more than
the language of the Game of Life is sufficient to talk about Turing Machines.
Why is this important? It is important
because it illustrates that lower level explanations of higher level functionality
do not substitute for the descriptions of
the higher level functionality. One can
describe how chemistry allows E. coli
cells to ascend nutrient gradients, but that
doesn’t tell us that E. coli cells ascend
nutrient gradients or that a consequence
of the fact that E. coli cells are built so
that they ascend nutrient gradients is that
E. coli cells perpetuate themselves.
Statements of the latter form must be
made in a language that can refer to E.
coli cells as entities and not just as aggregations of chemicals. It is the entire
cell after all that moves up the gradient,
not individual molecules. And as it is
moving up the gradient it is also exchanging matter and energy with its environment. Its material constituents do
not remain constant.
The preceding seems both trivial and
profound. It is simply common sense to
observe that the language of chemistry is
not the same language we use when talking about cells. It took only a bit of
thought to notice that (a) cells are entities on their own, (b) they do not supervene over their instantaneous components, and (c) when talking about cells as
entities, one must talk about cells and
not about their constituent chemicals. On
the other hand, this seems to be a sticking point in the argument about reductionism. If everything is reducible to the
laws of physics (which is not in dispute),
it’s difficult to understand why all observed phenomenology and functionality
can’t be expressed in the language of
physics.
This observation applies to static entities
as well as to dynamic entities. As Weinberg points out when speaking of the
hardness of diamonds, it makes no sense
to talk about the hardness of individual
elementary particles. We can apply the
term hardness only to entities like diamonds. As Weinberg also points out, we
can explain why diamonds are hard—
because “the carbon atoms fit together
neatly.” Although this explains how
hardness is implemented, it no more allows us to apply the notion of hardness
to individual carbon atoms than it allows
us to apply the notion of ascending a nutrient gradient to the chemical molecules
of which E. coli is composed or to apply
the notion of computation to patterns or
rules in the Game of Life.
7.3 What dynamic entities do vs. how dynamic entities work
In his talk at the 2006 Understanding
Complex Systems Symposium Eric Jakobsson made the point that biology must
be equally concerned with what organisms do in their worlds and the mechanisms that allow them to do it.
In our definitions, we have insisted on
grounding our notions in terms of material objects. An epiphenomenon is a
phenomenon of something. Emergence
must be an implemented abstraction. But
the abstraction side has until now been
left abstract. What does it mean to specify some behavior? What does it mean to
describe an entity independently of its
implementation? At the most basic level,
a function is specified in terms of (input/output) pairs. More generally, functionality is specified in terms of behavior. All of these specifications are given
in terms of an environment. Even input/output pairs are defined in terms of
the transformation of some input (in the
environment) to some output. That’s
how it works on a Turing Machine. The
environment is the tape; the input is
found on the tape at the start of the computation; the output is found on the tape
at the end of the computation. Thus for
us emergence is defined in terms of the
contrast between the effect of an entity
on its environment and the internal
mechanism that allows the entity to have
the effect.
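One way to make "specified in terms of (input/output) pairs" concrete: a behavioral specification is just a set of such pairs, and any mechanism that reproduces them implements it. In the Python sketch below (the sorting example is ours, chosen only for illustration) two quite different internal mechanisms satisfy the same externally specified functionality.

# Functionality as an input/output specification, independent of mechanism.
spec = [
    ((3, 1, 2), (1, 2, 3)),
    ((), ()),
    ((5, 5, 4), (4, 5, 5)),
]

def implementation_a(xs):                  # one internal mechanism
    return tuple(sorted(xs))

def implementation_b(xs):                  # a very different mechanism: selection sort
    xs, out = list(xs), []
    while xs:
        out.append(xs.pop(xs.index(min(xs))))
    return tuple(out)

for impl in (implementation_a, implementation_b):
    assert all(impl(inp) == expected for inp, expected in spec)
print("both mechanisms implement the same externally specified functionality")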
8 Minimal dynamic entities
In [Kauffman] Kauffman asks what the
basic characteristics are of what he calls
autonomous agents. He suggests that the
ability to perform a thermodynamic
(Carnot engine) work cycle is fundamental. A hurricane is a persistent entity that
performs thermodynamic work cycles.
Although not minimal in its physical
size, a hurricane is fairly minimal in design. There is very little about a hurricane that could be simplified and yet allow it to persist. In that sense the design
of a hurricane is minimal.
The design of hurricanes depends crucially on the environment within which
hurricanes exist. A hurricane must link a
source of warm moist air to a cooling area in such a way that the warm moist air
can be transported to the cooling area.
Furthermore, the source of warm moist
air must continue to provide “fuel,” and
the cooling area must remain cool. When
such conditions exist, hurricanes can develop around them. But dependence by
dynamic entities on their environment is
not unusual. Every dynamic entity depends on its environment for the conditions that allow it to persist. A hurricane
is just one of the simplest examples.
As very simple dynamic entities hurricanes lack a number of properties that
most biological entities have. These differences can give us clues about where
to look for minimal biological entities—
a yet-to-be-discovered transition from
the inanimate to the animate.
• Hurricanes are not built on a static
skeleton. One will never find either
the carcass of a dead hurricane or a
hurricane burial ground.
• Hurricanes do not have an explicit
representation of their design. There
is no hurricane equivalent of DNA.
One reason hurricanes cannot have a
physical representation of their design is that they lack a static skeleton
within which to embed it.
• A hurricane's metabolism precedes
its organization. A common definition of metabolism is: the sum total
of all the chemical and physical processes within a living organism. To
the extent that hurricanes "self-organize" they do so in a somewhat
after-the-fact manner which is driven
by an embryonic hurricane’s existing
thermodynamic process. The structure of a hurricane, i.e., its “walls,”
wind patterns, etc., is a consequence
of the hurricane’s fundamental thermodynamic activity rather than the
other way around. Of course once
that structure exists, it allows the
hurricane’s underlying thermodynamic processes to strengthen themselves. Thus if one is looking for a
minimal biological entity, it makes
sense to look for a thermodynamic
process that (a) can proceed prior to
any significant self-organization and
that (b) leads to the development of a
larger structure around itself once it
has become established.
• Hurricanes are built from scratch.
Although there is much talk of self-organization among complexity scientists, biological cells don't self-organize from scratch. Cells do not
have the ability to create new cells
from raw materials. Cells reproduce
by using themselves as what Franklin Harold (H) calls a templet: they
extend themselves and then divide.
But no cell builds a new descendent
cell from scratch. Harold quotes Rudolf Virchow as having originated a
fundamental maxim of biology: Omnis cellula e cellula (Every cell
comes from a preexistent cell). If one
wanted to make a case for intelligent
design, its core could very well be
the mystery of how the first cell
came to be. As Harold writes, “The
black hole at the very foundation of
biological science is the origin of
cells.” In exploring why cells don’t
build new cells from scratch (Harold,
p 80) Harold notes that cells grow
and divide; they don’t self-assemble
from components, even from components which may be created from
genetic “instructions.” The growth
and division process itself depends
on the cell’s own structure and components.
Is the cell as a whole a self-assembling
structure? ... Would a mixture of cellular molecules, gently warmed in some
buffer, reconstitute cells? Surely not,
and it is worthwhile to spell out why
not. One reason is [that] assembly is
never fully autonomous, but involves
[pre-existing] enzymes or regulatory
molecules that link [developing elements] to the larger whole. But there
are three more fundamental reasons
… First, some cellular components are
not fashioned by self-assembly, particularly the… cell wall which resembles
a woven fabric and must be enlarged
by cutting and splicing. Second, many
membrane proteins are oriented with
respect to the membrane and catalyze
vectorial reactions; this vector is not
specified in the primary amino acid sequence, but is supplied by the cell.
Third, certain processes occur at particular times and places, most notably
the formation of a septum at the time
of division. Localization on the cellular
place is not in the genes but in the
larger system. Cells do assemble
themselves, but in quite another sense
of the word: they grow. …
[O]nly within the context of a particular
cell, which supplies the requisite organizing power, is it valid to say that
the genome directs the construction
and operation of a cell. It is true but
subtly misleading to envisage a cell as
executing the instructions written down
in its genome; better to think of it as a
spatially structured self-organizing system made up of gene-specified elements. Briefly, the genes specify What;
the cell as a whole directs Where and
When.
• Hurricanes are (relatively) easy to
create. They are created every year.
Why isn’t life? Or is it, and are we
just not seeing it? Is there some biological-like entity that is far less
complex than a cell11 but that (like a
hurricane) (a) persists only as a consequence of its energy consumption
and (b) can be brought into existence
easily?
11 and almost certainly even significantly less complex than viruses, which don't consume energy and which are parasites rather than ancestors of existing biological structures
• Hurricanes don't reproduce. Cells
are quite capable of generating more
cells. And even though they don’t
self-organize from scratch, they are
capable of building new cells from
basic raw materials. The following is
from [Harold, p 64].
[Transfer a] few cells of E. coli (in principle, a single one will do) … to [a]
fresh sterile growth medium. The medium is a solution of inorganic salts including phosphate, sulfate, ammonium, and potassium ions; a number of
trace metals; and a pinch of glucose.
The flask is then placed in an incubator, preferably on a shaker. Next morning the glucose has been consumed,
and the medium swarms with cells, billions per milliliter.
• Because hurricanes neither reproduce nor include an explicit representation of their design, hurricanes
as a genus don’t evolve—at least not
by modifying explicit design descriptions.
Notwithstanding all these differences,
hurricanes are dynamic entities. This
suggests that simple dynamic entities
can and do exist.
9 Thermodynamic computing: nihil ex nihilo
In Computer Science we assume that one
can specify a Turing Machine, a Finite
State Automaton, a Cellular Automaton,
or a piece of software, and it will do its
thing—for free. Turing machines run for
free. Cellular Automata run for free. The
Game of Life runs for free. Software in
general runs for free. Even agents in
agent-based models run for free.12
Although this may be a useful abstraction, we should recognize that we are
leaving out something important. In reality, dynamic emergent entities require
energy to persist. Actually, running software requires a real computer, which uses real energy. The problem is that the
energy that drives software is not visible
to the software itself. Computer science
has long ignored the fact that software
does not have to pay its own energy bill.
A theory of thermodynamic computation
is needed to bring together the notions of
energy and computing. Until we find a
way to integrate the real energy cost of
running software into the software itself,
we are unlikely to build a successful
model of artificial life.
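A minimal sketch of the kind of model this section has in mind (Python; the agent, its energy ledger, and the environment are all invented for illustration). Note that the energy accounting is a number we, the modelers, decrement by fiat, an artificially imposed price for persistence, rather than a cost inherent in the agent's own functioning, which is precisely the gap pointed to above.

import random

class Agent:
    """An agent-based-model agent that must 'pay' for persistence.
    The energy ledger is imposed by us, the modelers; nothing about the
    agent's actual execution on the computer consumes it."""
    def __init__(self, energy=10.0):
        self.energy = energy

    def step(self, environment):
        self.energy -= 1.0                       # artificial price for persisting
        if environment and environment.pop():    # 'eat' a resource if one is found
            self.energy += 3.0
        return self.energy > 0                   # persist only while the ledger is positive

environment = [random.random() < 0.3 for _ in range(100)]   # scattered resources
agent, ticks = Agent(), 0
while agent.step(environment):
    ticks += 1
print(f"agent persisted for {ticks} ticks before its imposed energy ran out")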
10 Why can one build anything?
If one accepts our claim that phenomenology and functionality of higher level
entities must be expressed in a language
that refers to entities that exhibit that
phenomenology and functionality and
that don’t exist at the level of physics, a
follow-up question is: how and why is it
possible to build new entities at all.
When we say that a diamond is hard or
that E. coli cells ascend nutrient gradients, what is it about nature that allows
diamonds or E. coli cells to come into
existence in the first place? In asking
this question we are not asking what
forces keep these entities in existence.
That’s an important question, and we
discussed it above. What we are asking
here is what makes it possible to construct these entities at all.
12 Many agent-based and artificial life models acknowledge the importance of energy by imposing an artificial price for persistence, but we are not aware of any in which the cost of persistence is inherent in the functioning of the entity. In all these schemes the cost of persistence is tacked on artificially.
The common-sense answer is that a diamond, for example, is a collection of
carbon atoms arranged in space. The implication of this statement is that the answer to what allows carbon atoms to
come together to make a diamond is
that they all exist in the same 3-dimensional space. This sounds unremarkable. But is it?
Lee Smolin argues in favor of what he
calls background independent theories,
i.e., theories that do not rely on a fixed
background within which everything
else is set and to which everything else
can refer to get its bearings. He points to
general relativity as partially background
independent. It is background dependent
in that it assumes that space is 3-dimensional and has certain other properties. It is background independent in
that (a) it does not assume that space is
rigid and (b) it supposes that certain
properties of space may change with
time.
Smolin characterizes background independence as follows.
Any theory postulates that the world is
made up of a very large collection of elementary entities (whether particles, fields,
or events or processes.) Indeed, the fact
that the world has many things in it is essential for these considerations — it means
that the theory of the world may be expected to differ in important aspects from
models that describe the motion of a single
particle, or a few particles in interaction
with each other.
The basic form of a physical theory is
framed by how these many entities acquire
properties. In an absolute framework the
properties of any entity are defined with re-
spect to a single entity — which is presumed to be unchanging. An example is
the absolute space and time of Newton,
according to which positions and motions
are defined with respect to this unchanging
entity. Thus, in Newtonian physics the
background is three dimensional space,
and the fundamental properties are a list of
the positions of particles in absolute space
as a function of absolute time: x_i^a(t). Another example of an absolute background
is a regular lattice, which is often used in
the formulation of quantum field theories.
Particles and fields have the property of
being at different nodes in the lattice, but
the lattice does not change in time.
The entities that play this role may be
called the background for the description of
physics. The background consists of presumed entities that do not change in time,
but which are necessary for the definition
of the kinematical quantities and dynamical
laws.
The most basic statement of the relational
view is that
R1 There is no background.
How then do we understand the properties
of elementary particles and fields? The relational view presumes that
R2 The fundamental properties of the elementary entities consist entirely in relationships between those elementary entities.
Dynamics is then concerned with how
these relationships change in time.
An example of a purely relational kinematics is a graph. The entities are the nodes.
The properties are the connections between the nodes. The state of the system
is just which nodes are connected and
which are not. The dynamics is given by a
rule which changes the connectivity of the
graph.
We may summarize this as
R3 The relationships are not fixed, but
evolve according to law. Time is nothing
but changes in the relationships, and consists of nothing but their ordering.
Thus, we often take background independent and relational as synonymous. The debate between philosophers that used to be
phrased in terms of absolute vs. relational
theories of space and time is continued in
a debate between physicists who argue
about background dependent vs. background independent theories.
It should also be said that for physicists relationalism is a strategy. As we shall see,
theories may be partly relational, i.e., they
can have varying amounts of background
structure. One can then advise that progress is achieved by adopting the
Relational strategy: Seek to make progress by identifying the background structure in our theories and removing it, replacing it with relations which evolve subject to
dynamical law.
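A minimal sketch of the purely relational kinematics described in the passage above (Python; the particular rewrite rule is ours, invented only for illustration): the entire state is which nodes are connected, and "time" is nothing but the ordered sequence of rewrites of those connections.

import itertools

nodes = range(5)
edges = {(0, 1), (1, 2), (3, 4)}           # the entire state: which nodes are connected

def neighbors(edges, n):
    return {b if a == n else a for a, b in edges if n in (a, b)}

def evolve(edges):
    # One 'tick' of an invented rewrite rule: a pair is connected at the next
    # tick if it is connected now or currently shares a common neighbor.
    return {(a, b) for a, b in itertools.combinations(nodes, 2)
            if (a, b) in edges or neighbors(edges, a) & neighbors(edges, b)}

history = [edges]
for _ in range(3):
    history.append(evolve(history[-1]))    # time is nothing but this ordering of rewrites
for t, e in enumerate(history):
    print(t, sorted(e))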
This characterization of background independence seems to me to have a couple of problems.
1. Is it truly possible to define every
fundamental property (e.g., mass,
charge, etc.) relationally? If so, what
does one start with: a graph consisting of arcs and nodes and nothing
else? The only information in such a
structure are the elements and how
they are connected. Is it truly possible to define all physical properties
starting from nothing but that?
2. More significantly, what about the
graph itself? It, as a structure, is a
background. What allows one to talk
in terms of dynamically changing
arcs and nodes other than the assumptions (i.e., background) that
(a) there are entities, (b) connections
among them are possible, and
(c) connections can be created and
destroyed—presumably by some
pre-defined mechanism(s)?
However these issues are resolved, one
may conclude from Smolin’s position
that even at the lowest levels of physics
and even in theories that presume the
minimal amount of pre-defined structure
one still assumes (a) the existence of entities and (b) that those entities interact with things other
than themselves. In other words, one assumes that nature is set up in such a way
that (a) multiple entities exist within a
common environment and (b) that environment provides means for those entities to interact.
This is a fundamental issue. What is nature that it provides an environment that
supports interaction among separate entities? Or does it even make sense to talk
about entities? We clearly don’t have an
answer to that question. The answer is
likely to involve quite sophisticated
thinking. Here, for example, is how
Smolin explains what are called emergent strings in String theory. (Smolin p.
132). This extract is immediately preceded by an explanation of phonons as
an emergent particle associated with
sound waves.
When the interactions [among strings] are
strong, there are many, many strings
breaking and joining, and it becomes difficult to follow what happens to each individual string. We then look for some simple
emergent properties of large collections of
strings—properties that we can use to understand what is going on. … Just as the
vibrations of a whole bunch of particles can
behave like a simple particle—a phonon—
a new string can emerge out of the collective motion of large numbers of strings. We
call this an emergent string.
The behavior of these emergent strings is
the exact opposite of that of ordinary
strings—let’s call the latter the fundamental
strings. The more the fundamental strings
interact, the less the emergent strings do.
To put this a bit more precisely: If the
probability for two fundamental strings to
interact is proportional to the string coupling constant g, then in some cases the
probability for the emergent string to interact is proportional to 1/g.
How do you tell the fundamental strings
from the emergent strings? It turns out that
you can’t—at least, in some cases. In fact,
you can turn the picture around and see
the emergent strings as fundamental. This
is the fantastic trick of strong-weak duality.
It is as if we could look at a metal and see
the phonons—the quantum sound waves—
as fundamental and all the protons, neutrons, and electrons making up the metal
as emergent particles made up of phonons.
In our conceptualization of emergence,
every emergent phenomenon is associated with a (lower level) implementation.
This is the case with phonons as well. In
the case of strings and emergent strings,
however, it appears that each is (or may
be understood to be) the implementation
of the other. That may be one way
around a bottom level whose opaque elements have arbitrary fixed properties.
So the question becomes: what is nature
that something like that can be?
10.1 How can anything interact with anything else?
What allows one to build a diamond or E. coli? At the bottom level—at least in the standard model of physics—entities are fields which interact with each other in space.
The standard model of physics postulates space, a number of particles, plus the Heisenberg Uncertainty Principle. These elements are taken as primitive.13
Quantum theory also includes the mysterious Pauli exclusion principle, which disallows two fermions from occupying the same quantum state, i.e., having the same properties and being in the same place at the same time. How do they know that? If we take this as a fact of nature, there is something built into nature that enables interaction.
When speaking of the electromagnetic force, one says an electromagnetic field pervades all of space. First of all, that's amazing all by itself. But it also establishes a basis for the notion of interaction. An electron interacts with its environment through its force field. One doesn't often think about the environment when speaking of an electron, but if there were no environment, it wouldn't make sense to talk about the electron's force field. Perhaps another way to put it is that the electron is its force field—which means it has infinite extent—but even in that case, one still has to talk about how it interacts with anything else. An electron and a proton must somehow affect each other—by the exchange of photons. How does that have an effect?
Forces may be derived from the Heisenberg Uncertainty Principle. They are not just implemented on the world that obeys the Heisenberg Uncertainty Principle as a platform.
13 Forces are sometimes said to be epiphenomenal consequences of interactions with virtual particles, where virtual particles are probability amplitudes of momentum waves pervading space. The following is from the Usenet Physics FAQ (http://math.ucr.edu/home/baez/physics/Quantum/virtual_particles.html).
A virtual particle with momentum p corresponds to a plane wave filling all of space, with no definite position at all. [Since the virtual particle's momentum is known precisely, the Heisenberg Uncertainty Principle requires that its location be completely unknown.] … Since the wave is everywhere, the photon can be created by one particle and absorbed by the other, no matter where they are. If the momentum transferred by the wave points in the direction from the receiving particle to the emitting one, the effect is that of an attractive force.
This is sometimes referred to as the exchange model of forces. The exchange model of forces provides a quantum theoretical basis for calculating forces. The Heisenberg Uncertainty Principle is called upon to explain why virtual particles pervade space in the first place. What explains the Heisenberg Uncertainty Principle? And what explains the ontological world to which it applies?
10.2 Interaction among/between
entitles
In the earlier paper, we argued that all
interactions are epiphenomenal. Is that
consistent with the position that entities
can interact at all? If all interactions are
simply combinations of primitive forces,
what does it mean to say that an entity as
such interacts? Interaction among static
entities interactions among dynamic entities depends on the entities. Sex between an elephant and a Chihuahua
won’t work for purely mechanical considerations. This is the case at all levels—ranging from the physical (in which
various physical parts must fit together)
to the cell level at which the gamete and
the egg must combine properly. Yet
there is no good way to describe the size
incompatibility in terms of chemical
equations. Furthermore, sex between
compatible partners requires that the
partners operate as entities. Sex would
not occur if only the components—e.g.,
organs, molecules, etc.—of the sexual
partners were left alone in a darkened
room no matter how sweetly the romantic music was playing. Again, there is no
way to say that in terms of chemical
equations.
tion from the receiving particle to the emitting
one, the effect is that of an attractive force.
This is sometimes referred to as the exchange
model of forces. The exchange model of forces
provides a quantum theoretical basis for calculating forces. The Heisenberg Uncertainty Principle
is called upon to explain why virtual particles pervade space in the first place. What explains the
Heisenberg Uncertainty Principle? And what explains the ontological world to which it applies?
Emergence Explained
2/12/2016
Similarly, symbolic interaction works only between entities that are capable of
operating at the symbolic level. Science
is not in a position to explain how people
operate at a conceptual/symbolic level.
Yet it is clear that we do. It is also clear
that as symbolic entities we interact with
each other symbolically through language. We each are capable (to some
reasonably good approximation) of
communicating to each other what ideas
are occurring in our consciousness.
Again, there is no good way to characterize the ability to read in chemical
terms.
10.3 Functionality and the environment
Are Turing Machines a special case? Is
the functionality of other entities also of
interest? To examine that question we
first clarify what we mean by functionality. For us, functionality will always refer to how an entity interacts with its environment.
A traditional notion of emergence, e.g.,
[Stanford], is that “emergent entities
(properties or substances) ‘arise’ out of
more fundamental entities and yet are
‘novel’ or ‘irreducible’ with respect to
them.” Or [Dict of Philosophy of Mind
Ontario, Mandik] “Properties of a complex physical system are emergent just in
case they are neither (i) properties had
by any parts of the system taken in isolation nor (ii) resultant of a mere summation of properties of parts of the system.”
(But he goes on to dismiss properties
which are explainable as a result of the
interaction of the components as not
emergent. So nothing is emergent in this
view.)
Although this view of emergence is
widely accepted, as far as we know, no one
has asked the obvious next question:
how can there be properties of a macro
entity that are not also properties of its
constituents? The answer is that the
emergent property must be at least in
part a result of how the constituents are
joined together to form the macro entity.
Then the question becomes: how are the
constituents joined together and what
keeps them joined together in that way?
Thus the structure of the macro entity is
a necessary part of the macro entity’s
properties. This seems like it isn’t saying
much. But where did that structure come
from? What is that structure? What is its ontological status? Is it a real thing? It must be real if the new properties are real. What does it mean to say that “the carbon atoms fit together neatly”?
What does it mean for there to be a new
property? A property is an external description of something. How can there
be an external description, which is not
defined in terms of lower level constructs?
Are the only primitive properties (external properties, which are not described by internal constructs) forces (and mass and size and time)? How can there be new properties? Is entropy/order also primitive? A new property makes sense only with respect to interaction with entities in the environment: catching a mouse, reflecting a glider, exposing an API. But in what terms is that API expressed?
Functionality is an extension of the notion of force. A force is the functionality
of primitive elements that exert forces.
Functionality is that same notion, how
something acts in the world, applied to
higher level entities. Pheromones and ant foraging, mouse traps, and termite nest building are all examples. All require interaction with other entities on the same level as the interacting entity.
Functionalism too, as its name implies,
has an environmental focus. As Fodor
points out,
[R]eferences to can openers, mousetraps,
camshafts, calculators and the like bestrew
the pages of functionalist philosophy. To
make a better mousetrap is to devise a
new kind of mechanism whose behavior is
reliable with respect to the high-level regularity “live mouse in, dead mouse out.”
For a better mouse trap to be better, the
environment must be reasonably stable;
mice must remain more or less the same
size.
Emergence, that is, refers to macro-level properties which arise from micro-level elements but are not reducible to them: the construct has a property that its component elements don’t have.
Similarly, the functionality of any entity
is defined with respect to its environment. As we will see later, the interaction of an entity with its environment is
particularly important for dynamic entities because dynamic entities depend on
their environment for the energy that enables them to persist.
More generally, consider the following
from Weinberg.
Grand reductionism is … the view that all
of nature is the way it is (with certain qualifications about initial conditions and historical accidents) because of simple universal
laws, to which all other scientific laws may
in some sense be reduced.
And this.
[A]part from historical accidents that by
definition cannot be explained, the [human]
nervous system [has] evolved to what [it is]
entirely because of the principles of macroscopic physics and chemistry, which in
turn are what they are entirely because of
the principles of the standard model of elementary particles.

Even though Weinberg gives historical accidents, i.e., the environment, as important a role in shaping the world as he does the principles of physics, he does so grudgingly, seemingly attempting to dismiss them in a throw-away subordinate clause. This is misleading, especially given Weinberg’s example—evolution. Contrary to his implication, the human nervous system (and the designs of biological organisms in general) evolved as they did not primarily because of the principles of physics and chemistry but primarily because of the environment in which that evolution took place and in which those organisms must function.

Thus although neither Weinberg nor Fodor focuses on this issue explicitly—in fact, they both tend to downplay it—they both apparently agree that the environment within which something exists is important.

We would extend Jakobsson’s statement beyond biology to include any science that studies the functional relationship between entities and their environment—and most sciences study those relationships. The study of solids, for example, is such a science—even though solids are static entities. What does hard mean other than resistance to (external) pressure? Without an environment with respect to which a solid is understood as relating, the term hard—and other functional properties of solids—have no meaning. Without reference to an environment, a diamond’s carbon atoms would still fit together neatly, but the functional consequences of that fact would be beyond our power to describe.

This really is not foreign even to elementary particle physics. The Pauli exclusion principle, which prevents two fermions from occupying the same quantum state, formalizes a constraint the environment imposes on elementary particles.14

14 This was pointed out to me by Eshel Ben-Jacob [private communication].

In summary, a functional description is really a description of how an entity interacts with its environment. This is attractive because it ties both sorts of descriptions to the material world. Emergence occurs when an interaction with an environment may be understood in terms of an implementation.
10.4 Turing Machines and functionality
A Turing machine is explicitly a description of functionality. The best way to
understand a Turing machine is as a finite state entity interacting with its environment. In that sense a Turing machine
is a minimal model of what it means to
interact with an environment. The entity
must be able to: (a) read the environment
(b) change the environment (c) traverse
the environment. Computability results when the environment is unbounded. The
environment need be no more complex
than a sequence of cells whose contents
can be modified. But the real environment is multi-scalar and hence much
more complex than a sequence of
read/write cells.
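To make requirements (a), (b), and (c) concrete, the following is a minimal sketch in Python (our own illustration; the particular machine, a bit inverter, and all names are invented for this example and are not part of any standard formulation). The machine is a finite state controller whose only contact with the world is reading, writing, and moving along its tape environment.

# A minimal sketch of a Turing machine as a finite-state entity
# interacting with its environment (the tape). Illustrative only.
def run_turing_machine(transitions, start_state, halt_state, tape, blank='_'):
    """transitions: {(state, symbol): (new_state, write_symbol, move)},
    where move is -1 (left) or +1 (right)."""
    env = dict(enumerate(tape))        # the environment: read/write cells
    state, pos = start_state, 0
    while state != halt_state:
        symbol = env.get(pos, blank)   # (a) read the environment
        state, write, move = transitions[(state, symbol)]
        env[pos] = write               # (b) change the environment
        pos += move                    # (c) traverse the environment
    return ''.join(env[i] for i in sorted(env))

# Example machine (invented): invert a string of bits, halting at the first blank.
invert = {('scan', '0'): ('scan', '1', +1),
          ('scan', '1'): ('scan', '0', +1),
          ('scan', '_'): ('halt', '_', +1)}
print(run_turing_machine(invert, 'scan', 'halt', '10110'))   # -> 01001_

The machine itself is nothing but a transition table; everything it accomplishes is defined by the transformation it leaves in its environment.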
Also as formulated, a Turing machine
interacts with a closed environment. It is
only the Turing machine itself that is
able to change the environment. But environments need not be closed. In an
open environment the entity is far from
equilibrium with respect to information
flow. This is in addition to the fact that it
is far from equilibrium with respect to
energy. PD is an example of how this makes a difference, but what more is there to say about it? The entity comes to know more and more about its environment, which enables it to find out more about where energy is available.
Hurricanes can’t read/write its environment; we do. The real mechanism is that
the entity is shaped by its environment.
But the key is that the entity must be
shapeable, i.e., it must be able to change
state and direction depending on the environment. Hurricanes can’t be shaped in
that way. So one minimal requirement is
that the entity be an FSA. Second requirement is that the entity be mobile
and be able to change direction. Third is
that the entity be able to modify the environment.
11 Identifying Entities
Static entities can be identified recursively; one needs a place to start: primitive physical elements. To identify dynamic entities one needs to recognize patterns, i.e., reductions in entropy. Entities, then, are self-persistent entropy wells, some of which are also energy wells. Self-persistent means that they persist without our having to think about them. But days persist, as does the set of blue-eyed people, and neither has reduced entropy or increased mass. Must entropy reduction be based on dynamic behavior? Recognizing patterns is something that we as human beings (and probably most biological organisms) are quite good at. We recognize many patterns that we don’t characterize as dynamic entities: whirlpools, tornados, clouds, astrological coincidences, printed letters and words, temporal entities (days, years, etc.), numbers, abstractions, and arbitrary collections (are there any?) such as the collection of people born in December 1954. How does this relate to emergence and levels of abstraction? A set defines a level of abstraction. What is needed is not only that the entity can be specified independently of an implementation but that there be an implementation. A set doesn’t have an implementation; it’s just a definition.

Some dynamic entities form because of how their components behave. The usual examples are bee and ant colonies, but people waiting in line are another. Each person is waiting for his or her own reason, but that creates the line as an entity. Some groups (clubs, religions, etc.) have patterns of behavior that people adopt when they join the group. No particular individuals make up the group; the behavior pattern defines the group. It may be stored in documents; it may be stored (and passed on) in the minds of the current group members. In that sense it is like a collection of genes in DNA.
Entities are identified by the patterns of
behavior their components follow. E.g.,
an insect colony consists of components
that interact in predictable ways with
other elements of the colony and not
with elements not in the colony—or differently with elements not in the colony.
So it is the network of elements that are tied together by the patterns of behavior that defines the entity. Elements may
come and go, but the network (defined
from moment to moment by the elements in it during the previous moment,
as in the GoL pattern definition) persists
as long as it can be seen as mainly persisting from moment to moment.
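As a minimal sketch of this moment-to-moment notion of persistence (our own toy rendering, not a formal definition; the 0.5 threshold and the example sets are arbitrary choices), one can represent the entity at each moment as the set of elements participating in the pattern and say that it persists as long as successive sets mainly overlap.

# Sketch: an entity as a network of elements that persists as long as
# each moment's membership mainly overlaps the previous moment's.
def persists(moments, overlap_threshold=0.5):
    """moments: a list of sets, each the entity's elements at one moment."""
    for before, after in zip(moments, moments[1:]):
        if len(before & after) < overlap_threshold * len(before):
            return False            # the pattern no longer mainly persists
    return True

# A waiting line whose members turn over gradually persists as an entity.
line_over_time = [{'ann', 'bob', 'cal'},
                  {'bob', 'cal', 'dee'},
                  {'cal', 'dee', 'eve'}]
print(persists(line_over_time))                         # True
# A collection replaced wholesale does not persist in this sense.
print(persists([{'ann', 'bob'}, {'xia', 'yun'}]))       # False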
Static entities are much easier. They are
defined recursively starting with primitive elements. They don’t change once
they exist.
12 Entities and the sciences
One reason that the sciences at levels
higher than physics and chemistry seem
somehow softer than physics and chemistry is that they work with entities that
for the most part do not supervene over
any conveniently compact collection of
matter. Entities in physics and chemistry
are satisfyingly solid—or at least they
seemed to be before quantum theory. In
contrast, the entities of the higher level
sciences are not defined in terms of material boundaries. These entities don’t
exist as stable clumps of matter; it’s hard
to hold them completely in one’s hand—
or in the grip of an instrument.
The entities of the special sciences are
objectively real—there is some objective
measure (their reduced entropy relative
to their environment) by which they
qualify as entities. But as we saw earlier,
the processes through which these entities interact and by means of which they
perpetuate themselves are epiphenomenal. Even though the activities of higher
level entities may be described in terms
that are independent of the forces that
produce them (recall that this is our definition of epiphenomenal), the fundamental forces of physics are the only
forces in nature. There is no strong
emergence. All other force-like effects
are epiphenomenal.
Consequently we find ourselves in the
position of claiming that the higher level
sciences study epiphenomenal interactions among real if often somewhat ethereal entities.
“Of course,” one might argue, “one can
build some functionality that is not a
logical consequence of its components.”
Fodor’s simplest functionalist examples
illustrate this phenomenon. The physics
underlying the components of a mousetrap won’t tell you that when you put the
components together in a particular way
the result will trap a mouse. The reason
why rules of fundamental physics cannot
tell you that is because mice simply are
not a part of the ontology of fundamental
physics in the same way as Turing Machines are not part of the ontology of the
Game of Life.
If an object is designed to have a function, then if its design works, of course it
has that function—even if, as is likely,
that function is logically independent of
the laws that govern the components.
We build objects with particular functions all the time. It’s called ingenuity—
or simply good software or engineering
design. Even chimpanzees build and use
tools. They use stems to extract termites
from mounds, they use rocks to open
nuts, and perhaps even more interestingly, they “manufacture” sponges by
chewing grass roots until they become
an absorbent mass. [Smithsonian] But of
course from the perspective of fundamental physics, stems are not probes;
rocks are not hammers; and roots are not
sponges.
To be clear about this point, when we
say that the functionality of a designed
element is logically independent of some
lower level domain we are not saying
that the higher level functionality is
completely unconstrained by the lower
level framework. Of course a Turing
Machine emulation is constrained by the
rules of the Game of Life, and the functioning of a mouse trap is constrained by
the laws of physics. But in both cases,
other than those constraints, the functionality of the designed artifact is logically independent of the laws governing
the underlying phenomena. Typically,
the functionality of the designed artifact
is expressed in terms that are not even present in the ontological framework of
the lower level elements.
The question we pose in this subsection
(and answer in the next) is whether such
logically independent functionality occurs “in nature” at an intermediate level,
at the level of individual things. Or does
this sort of phenomenon occur only in
human (or chimpanzee) artifacts?
Given the current debate (at least in the
United States) about evolution, one
might take this as asking whether the existence of a design always implies the
existence of a (presumably intelligent)
designer.
13 The Reductionist blind
spot
As we discussed in [1], Weinberg distinguished between what he called petty
and grand reductionism. Grand reductionism is the claim that all scientific
laws can be derived from the laws of
physics. In the first paper, we argued
that grand reductionism doesn’t hold.
Our example was the implementation of
a Turing Machine on a Game of Life
platform. A reductive analysis of a
Game-of-Life Turing Machine can explain how a Turing Machine may be
implemented, but it doesn’t help us understand the laws governing the functionality that the Turing Machine provides.
Weinberg’s characterization of petty reductionism was the “doctrine that things
behave the way they do because of the
properties of their constituents.” Recall
that Weinberg said that petty reductionism has “run its course” because when it
comes to primitive particles it isn’t always clear what is a constituent of what.
14 Entities, functionality, and
the reductionist blind spot
In most other realms of science, however, petty reductionism still holds sway.
To understand something, take it apart
and see how it works. Thus the traditional scientific agenda can be described as
follows. (a) Observe nature. (b) Identify
likely categories of entities. (c) Explain the observed functionality/phenomenology of entities in those categories by understanding their structure and internal operation.15
15 In some cases we find that task (c) leads us to conclude that what we had postulated as a category of entity was not one, perhaps because we found that similar functionality/phenomenology in different instances were implemented differently.

Once this explanatory task is accomplished, the reductionist tradition has been to put aside an entity’s functional/phenomenological description and replace it with (i.e., to reduce it to) the explanation of how that functionality/phenomenology is brought about. The functional/phenomenological description is considered simply a shorthand for what we now understand at a deeper level. Of course one then has the task of explaining the lower-level mechanisms in terms of still lower-level mechanisms, etc. But that’s what science is about, peeling nature’s onion until her fundamental mechanisms are revealed. In this section we argue that this approach has severe limitations. In particular we discuss what we refer to as the reductionist blind spot.

When one eliminates the entities as a result of reductionism, one loses information. (Compare Shalizi’s definition of emergence.) Throwing away the entity is the reductionist blind spot. The question Shalizi raises is whether his higher level variables refer to anything real or whether they are just definitional consequences of lower level variables. Our definition of entities says that they are real.
14.1 The reductionist blind spot
We use the term the reductionist blind
spot to refer to the doctrine that once one
understands how higher level functionality can be implemented by lower level
mechanisms, the higher level is nothing
more than a derivable consequence of
the lower level. In other words, the objective is to replace descriptions of functionality with descriptions of mechanisms.
Significantly, the reductionist tradition
does not dismiss all descriptions given in
terms of functionality. After all, what
does reductionism do when it reaches
“the bottom,” when nature’s onion is
completely peeled? One version of the
current “bottom” is the standard model
of particle physics, which consists of
various classes of particles and the four
fundamental forces. This bottom level is
necessarily described functionally. It
can’t be described in terms of implementing mechanisms—or it wouldn’t be
the bottom level. The reductionist perspective reduces all higher level functionality to primitive forces plus mass
and extension. This is not in dispute. As
we said in [1], all higher level functionality is indeed epiphenomenal with respect to the primitive forces.
The difficulty arises because functionality must be described in terms of the interaction of an entity with its environment. The fundamental forces, for example, are described in terms of fields
that extend beyond the entity. This is
quite a different form of description
from a structural and operational description, which is always given in terms
of component elements. When higher
levels of functionality are described, we
tend to ignore the fact that those descriptions are also given in terms of a relationship to an environment. What the reductionist blind spot fails to see is that
when we replace a description of how an
entity interacts with its environment with
a description of how an entity operates,
we lose track of how the entity interacts
with its environment. The functionality
of a Turing Machine is defined with respect to its tape, which is its environment.
This is particularly easy to see with (traditional) Turing Machines when formulated in terms that distinguish the machine itself from its environment. The
functionality of a Turing machine, the
function which it computes, is defined as
its transformation of an input, which it
finds in its environment, into an output,
which it leaves in its environment. What
other formulation is possible? If there
were no environment how would the input be provided and the output retrieved?
It is not relevant whether or not the
computational tape is considered part of
the Turing Machine or part of the environment. All that matters is that the input
is initially found in the environment and
the output is returned to the environment. A Turing Machine computes a
function after all.
The same story holds for energy-based
entities. Higher levels of functionality,
the interaction of the entity with its environment, are important on their own. An
entity’s higher level functionality is
more than just the internal mechanism
that brings it about. As higher and more
sophisticated levels of functionality are
created—or found in nature—it is important to answer questions such as: how
are these higher levels of functionality
used and how do they interact with each
other and with their environment? Answering these questions fills in the reductionist blind spot.
14.2 The whole is more than the
sum of its parts
The whole plus the environment is more
than the sum of its parts plus the environment. The difference is the functionality the whole can bring to the environment that the parts even as an aggregate cannot.
Software isn’t magic. It is simply the
consequence of a deterministic computer
executing operations in a step-by-step
manner. So software isn’t magic, but it
isn’t reductionist either. But software is
magic. It is at least a tension between
telling a computer what to do and expressing one’s thoughts about a made-up
world. Software is externalized thought,
and a computer is a reification machine.
It converts ideas into reality. What could
be more magical than that? But because of that there are no equations that will map quantum physics onto Microsoft Word. Certainly the operations a computer performs are explainable in terms of quantum theory. But the conceptual model implemented by MS Word is beyond the ontological reach of quantum theory. It is not possible to “deduce” that world by starting with QM and proceeding by pure logical deduction. David Gross approvingly quotes Einstein to the contrary, which is exactly the opposite of what Anderson says.
Here is his definition of the goal of the physicist:

The supreme test of the physicist is to arrive at those universal laws of nature from which the cosmos can be built up by pure deduction.

I love this sentence. In one sentence Einstein asserts the strong reductionist view of nature: There exist universal, mathematical laws which can be deduced and from which all the workings of the cosmos can (in principle) be deduced, starting from the elementary laws and building up.

(From Gross, David, “Einstein and the search for unification,” Current Science, 89/12, 25 December 2005, pp. 2035–2040. http://www.ias.ac.in/currsci/dec252005/2035.pdf)
15 Biological and Social Entities (Theseus’ Ship)
15.1 Biological and social dynamic entities
Biological and social entities also depend on external energy sources. Photosynthesizing plants depend on sunlight.
Other biological entities depend on food
whose energy resources were almost always derived originally from the sun.
Social entities may be organized along a
number of different lines. In modern
economies, money is a proxy for energy.16 Economic entities persist only so
long as the amount of money they take
in exceeds the amount of money they
expend.
16 As a generic store of fungible value, money is
convertible into both energy and materials. Its
convertibility into energy in various forms—
ranging from the raw energy of fuels to the sophisticated energy of human effort—is critical to a
successful economy.
Political entities depend on the energy provided by their subjects—either voluntarily or through taxes or conscription. Smaller scale social entities such as families, clubs, etc., also depend on the energy contributions of their members. The contributions may be voluntary, or they may result from implicit (social norms) or explicit coercion.17

17 We intend our reference to human effort as energy to be informal but not inaccurate. As we discuss below, human effort, like any interaction between a dynamic entity and its environment, is a transformation of the energy that entity acquires into a force that the entity projects. All dynamic entity interactions, including human effort and interactions, are a result of the transformation of energy by the entity.
No matter the immediate source of the
energy or the nature of the components,
biological and social entities follow the
same pattern we saw with hurricanes.
 They have reduced entropy (greater
order) than their components would
have on their own.
 They depend on external sources of
energy to stay in existence. Because
of the energy flowing through them,
they have more mass than their components would on their own.
 The material that makes them up
changes with time. Their supervenience bases are generally much larger
than the material of which they are
composed at any moment. The longer a dynamic entity persists, the
greater the difference. As a consequence petty reductionism becomes
at best a historical narrative. One can
tell the story of a country, for example, as a history that depends in part
on who its citizens are at various
times. But one would have a difficult
time constructing an equation that
maps a country’s supervenience base
(which includes its citizens over
time) to its state at any moment unless that mapping were in effect a
historical record. Thus supervenience
and petty reductionism is of limited
usefulness when analyzing dynamic
entities.
 Most biological and social entities
have other dynamic entities as components. All entities are subject to
the effect of interactions with elements they encounter in their environments. Dynamic entities are doubly vulnerable. They are also subject
to having their components replaced
by other components. To persist they
must have defenses against infiltration by elements which once incorporated into their internal mechanisms may lead to their weakening or
destruction. Social entities are more
vulnerable still. Some of their components (people) are simultaneously
components of other social entities—
often resulting in divided loyalties.
 Talk about bees and other examples from Wilson’s book.
15.2 Theseus’ ship
The notion of a social dynamic entity
can help resolve the paradox of Theseus’s ship, a mythical ship that was
maintained (repaired, repainted, etc.) in
the harbor at Athens for so long that all
of its original material was replaced. The
puzzle arises when one asks whether the
ship at some time t is “the same ship” as
it was when first docked-or at any other
time for that matter.
This becomes a puzzle when one thinks
of Theseus’ ship as identical to the material of which it is composed at any
moment, i.e., that the ship supervenes
over its components. Since any modification to the ship, e.g., new paint, will
change the material of which the ship is
composed, it would seem that the repainted ship is not “the same ship” as it
was before it was repainted. Since the
repainted ship consists of a different set
of components, supervenience precludes
the repainted ship from being “the same
ship” as it was before being repainted.
This is so because by the definition of
supervenience, the ship’s properties are
fixed by the properties of its components, and its components are different
from what they were before. Before being repainted, the ship’s properties did
not depend on the properties of the new
paint. After being repainted they do.
This cycling of material through an entity isn’t a problem when we discussed hurricanes or social or biological entities—we had already given up on the usefulness of petty reductionism and supervenience for dynamic entities. In those cases we thought of the entity as including not only its momentary physical components but as also including the energy that was flowing through it along with means to slough off old material and to incorporate new material into its structure.

Dynamic entities are composed of static and dynamic entities (bodies and societies). That’s what makes them solid. But those static entity components are frequently replaced.

To apply the same perspective to Theseus’ ship, we must think of the physical ship along with the ship maintenance process as a social entity—call it the Theseus ship maintenance entity. That social entity, like all social entities, is powered by an external energy source. (Since the maintenance of Theseus’ ship is a governmental or societal function, the energy source is either voluntary, conscripted, or taxed.) The Theseus ship maintenance entity uses energy from its energy source(s) to do the maintenance work on the ship. Just as the material that makes up a hurricane changes from time to time and the people and property that make up a business change from time to time, the physical ship components also change from time to time. But then so do the people who participate in the ship maintenance entity. Only the ship maintenance process as a social entity persists over time.
Once one has autonomous entities (or
agents) that persist in their environment,
the ways in which complexity can develop grows explosively.
Prior to
agents, to get something new, one had to
build it as a layer on top of some existing substrate. As we have seen, nature
has found a number of amazing abstractions along with some often surprising
ways to implement them. Nonetheless,
this construction mechanism is relatively
ponderous. Layered hierarchies of abstractions are powerful, but they are not
what one might characterize as lightweight or responsive to change. Agents
change all that.
15.3 Ethical implications
Our fundamental existence depends on taking energy and other resources from the environment. We must all do it to persist as entities. This raises fundamental ethical questions: how can taking be condemned? It also supports notions of stewardship, since we are all dependent on the environment. Competition for energy and other resources justifies the picture of evolution as survival of the meanest. It also justifies group selection, since groups can ensure access to resources better than individuals can.
16 Stigmergy
Half a century ago, Pierre-Paul Grasse
coined the term stigmergy [Grasse] to
help describe how social insect societies
function. The basic insight is that when
the behavior of an entity depends to at
least some extent on the state of its environment, it is possible to modify that entity’s behavior by changing the state of
the environment. Grasse used the term
“stigmergy” for this sort of indirect
communication and control. This sort of
interplay between agents and their environment often produces epiphenomenal
effects that are useful to the agents. Often those effects may be understood in
terms of formal abstractions. Sometimes
it is easier to understand them less formally.
Two of the most widely cited examples
of stigmergic interaction are ant foraging
and bird flocking. In ant foraging, ants
that have found a food source leave
pheromone markers that other ants use to
make their way to that food source. In
bird flocking, each bird determines how
it will move at least in part by noting the
positions and velocities of its neighboring birds.
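A toy model can make the stigmergic character of ant foraging explicit (a sketch of our own, not Grasse’s formulation or any standard ant-colony algorithm; the grid size, deposit amount, and movement rule are arbitrary). Each ant decides where to step only by reading pheromone already left in the shared environment and deposits more where it walks; the ants never communicate directly.

import random

# Toy stigmergy: ants interact only through pheromone in the environment.
def forage(ants=200, cells=10, deposit=1.0):
    pheromone = [0.0] * cells                 # the shared environment
    for _ in range(ants):
        pos = 0                               # each ant starts at the nest
        while pos < cells - 1:                # food is at the last cell
            # Read the environment: prefer the neighbor with more pheromone.
            right, left = pos + 1, max(pos - 1, 0)
            step = random.choices([+1, -1],
                                  weights=[1.0 + pheromone[right],
                                           1.0 + pheromone[left]])[0]
            pos = max(pos + step, 0)
            pheromone[pos] += deposit         # write to the environment
    return pheromone

# Pheromone accumulates along the route to the food, steering later ants:
# gathering food is an epiphenomenon of purely local read/write behavior.
print([round(p) for p in forage()])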
The resulting epiphenomena are that
food is gathered and flocks form. Presumably these epiphenomena could be
formalized in terms of abstract effects
that obeyed a formal set of rules—in the
same way that the rules for gliders and
Turing Machines can be abstracted away from their implementation by Game of Life rules. But often the effort required to generate such abstract theories doesn’t seem worthwhile—as long as the results are what one wants.
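The Game of Life case can be written down in a few lines, which makes the point vivid (a common way of coding the rules in Python; the glider coordinates are one standard pattern). The rules below mention only cells and live-neighbor counts, yet iterating them moves the glider one cell diagonally every four generations.

from collections import Counter

# Conway's Game of Life: the rules mention only cells and neighbor counts.
def step(live_cells):
    """One generation; live_cells is a set of (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live_cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# The same shape reappears shifted by (1, 1): the glider is an epiphenomenon
# of rules that say nothing about gliders.
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True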
Here are some additional examples of
stigmergy.
 When buyers and sellers interact in a
market, one gets market epiphenomena. Economics attempts to formalize
how those interactions may be abstracted into theories.
 We often find that laws, rules, and
regulations have both intended and unintended consequences. In this case the
laws, rules, and regulations serve as
the environment within which agents
act. As the environment changes, so
does the behavior of the agents.
 Both sides of the evo-devo (evolution-development) synthesis [Carroll] exhibit stigmergic emergence. On the
“evo” side, species create environmental effects for each other as do sexes
within species.
 The “devo” side is even more
stigmergic. Genes, the switches that
control gene expression, and the proteins that genes produce when expressed all have environmental effects
on each other.
 Interestingly enough, the existence of
gene switches was discovered in the
investigation of another stigmergic
phenomenon. Certain bacteria generate
an enzyme to digest lactose, but they
do it only when lactose is present.
How do the bacteria “know” when to
generate the enzyme?
It turns out to be simple. The gene for
the enzyme exists in the bacteria, but
its expression is normally blocked by a
protein that is attached to the DNA sequence just before the enzyme gene.
This is called a gene expression
switch.
When lactose is in the environment, it
infuses into the body of the bacteria
and binds to the protein that blocks the
expression of the gene. This causes the
protein to detach from the DNA thereby “turning on” the gene and allowing
it to be expressed.
The lactose enzyme switch is a lovely
illustration of stigmergic design. As
we described the mechanism above, it
seems that lactose itself turns on the
switch that causes the lactose-digesting
enzyme to be produced. If one were
thinking about the design of such a
system, one might imagine that the lactose had been designed so that it would
bind to that switch. But of course, lactose wasn’t “designed” to do that. It
existed prior to the switch. The bacteria evolved a switch that lactose would
bind to. So the lactose must be understood as being part of the environment
to which the bacteria adapted by
evolving a switch to which lactose
would bind. How clever; how simple;
how stigmergic!
 Cellular automata operate stigmergically. Each cell serves as an environment for its neighbors. As we have
seen, epiphenomena may include gliders and Turing Machines.
 Even the operation of the Turing Machine as an abstraction may be understood stigmergically. The head of a
Turing Machine (the equivalent of an
autonomous agent) consults the tape,
which serves as its environment, to determine how to act. By writing on the
tape, it leaves markers in its environment to which it may return—not unlike the way foraging ants leave pheromone markers in their environment.
When the head returns to a marker,
that marker helps the head determine
how to act at that later time.
 In fact, one may understand all computations as being stigmergic with respect to a computer’s instruction execution cycle. Consider the following
familiar code fragment.
temp := x;
x := y;
y := temp;
The epiphenomenal result is that x and
y are exchanged. But this result is not a
consequence of any one statement. It is
an epiphenomenon of the three statements being executed in sequence by a
computer’s instruction execution cycle.
Just as there is nothing in the rules of
the Game of Life about gliders, there is
nothing in a computer’s instruction execution cycle about exchanging the
values of x and y—or about any other
algorithm that software implements.
Those effects are all epiphenomenal.
 The instruction execution cycle itself is
epiphenomenal over the flow of electrons through gates—which knows no
more about the instruction execution
cycle than the instruction execution
cycle knows about algorithms.
In all of the preceding examples it is relatively easy to identify the agent(s), the
environment, and the resulting epiphenomena.
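The gene-switch example above can be rendered as the same kind of sketch (our own toy version; the names are invented and the biology is drastically simplified). The rule the bacterium follows mentions only whether the repressor protein is bound to the DNA, yet the epiphenomenal effect is that the enzyme appears exactly when lactose is present in the environment.

# Toy sketch of the lactose switch described above (names invented,
# biology simplified): the repressor blocks the gene unless lactose,
# arriving from the environment, binds to it and pulls it off the DNA.
def enzyme_produced(lactose_present):
    repressor_bound_to_dna = True          # default: gene expression blocked
    if lactose_present:
        repressor_bound_to_dna = False     # lactose binds the repressor,
                                           # which detaches from the DNA
    return not repressor_bound_to_dna      # enzyme made only when expressed

for lactose in (False, True):
    print(lactose, '->', enzyme_produced(lactose))
# False -> False, True -> True: the switch is flipped by the environment.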
17 Design and evolution
It is not surprising that designs appear in
nature. It is almost tautologous to say
that those things whose designs work in
the environments in which they find
themselves will persist in those environments. This is a simpler (and more
accurate) way of saying that it is the
fit—entities with designs that fit their
environment—that survive. But fit
means able to extract sufficient energy to
persist.
The accretion of complexity
An entity that suits its environment persists in that environment. But anything
that persists in an environment by that
very fact changes that environment for
everything else. This phenomenon is
commonly referred to as an ever changing fitness landscape.
What has been less widely noted in the
complexity literature is that when something is added to an environment it may
enable something else to be added later—something that could not have existed in that environment prior to the earlier addition.
This is an extension of notions from
ecology, biology, and the social sciences. A term for this phenomenon from the
ecology literature is succession. (See,
for example, [Trani].) Historically succession has been taken to refer to a fairly
rigid sequence of communities of species, generally leading to what is called a
climax or (less dramatically) a steady
state.
Our notion is closer to that of bricolage,
a notion that originated with the structuralism movement of the early 20th century [Wiener] and which is now used in
both biology and the social sciences.
Bricolage means the act or result of
tinkering, improvising, or building
something out of what is at hand.
In genetics bricolage refers to the evolutionary process as one that tinkers with
an existing genome to produce something new. [Church].
John Seely Brown, former chief scientist
for the Xerox Corporation and former director of the Xerox Palo Alto Research
Center, captured its sense in a recent talk.
[W]ith bricolage you appropriate something. That means you bring it into your
space, you tinker with it, and you repurpose it and reposition it. When you repurpose something, it is yours.18

18 In passing, Brown claims that this is how most new technology develops. [T]hat is the way we build almost all technology today, even though my lawyers don't want to hear about it. We borrow things; we tinker with them; we modify them; we join them; we build stuff.
Ciborra [Ciborra] uses bricolage to
characterize the way that organizations
tailor their information systems to their
changing needs through continual tinkering.
This notion of building one thing upon
another applies to our framework in that
anything that persists in an environment
changes that environment for everything
else. The Internet provides many interesting illustrations.
 Because the Internet exists at all, access to a very large pool of people is
available. This enabled the development of websites such as eBay.
 The establishment of eBay as a persistent feature of the Internet environment
enabled the development of enterprises
whose only sales outlet was eBay.
These are enterprises with neither
brick and mortar nor web storefronts.
The only place they sell is on eBay.
This is a nice example of ecological
succession.
 The existence of a sizable number of
eBay sellers resulted in a viable market
for eBay selling advice. So now there
are businesses that sell advice about
selling on eBay to people who sell on
eBay.
 At the same time—and again because
the Internet provides access to a very
large number of people—other organizations were able to establish what are
known as massively multi-player
online games. Each of these games is a
simulated world in which participants
interact with the game environment
and with each other. In most of these
games, participants seek to acquire virtual game resources, such as magic
swords. Often it takes a fair amount of
time, effort, and skill to acquire such
resources.
 The existence of all of these factors resulted, through a creative leap, in an
eBay market in which players sold virtual game assets for real money. This
market has become so large that there
are now websites dedicated exclusively to trading in virtual game assets.
[Wallace]
 BBC News reported [BBC] that there
are companies that hire low-wage
Mexican and Chinese teenagers to earn
virtual assets, which are then sold in
these markets. How long will it be before a full-fledged economy develops
around these assets? There may be
brokers and retailers who buy and sell
these assets for their own accounts
even though they do not intend to play
the game. (Perhaps they already exist.)
Someone may develop a service that
tracks the prices of these assets. Perhaps futures and options markets will
develop along with the inevitable investment advisors.
The point is that once something fits
well enough into its environment to persist it adds itself to the environment for
everything else. This creates additional
possibilities and a world with ever increasing complexity.
In each of the examples mentioned
above, one can identify what we have
been calling an autonomous entity. In
most cases, these entities are self-perpetuating in that the amount of money they extract from the environment (by
selling either products, services, or advertising) is more than enough to pay for
the resources needed to keep it in existence.
In other cases, some Internet entities run
on time and effort contributed by volunteers. But the effect is the same. As long
as an entity is self-perpetuating, it becomes part of the environment and can
serve as the basis for the development of
additional entities.
18 Increasing complexity, increasing efficiency, and
historical contingency
The phenomenon whereby new entities
are built on top of existing entities is
now so widespread and commonplace
that it may seem gratuitous even to
comment on it. But it is an important
phenomenon, and one that has not received the attention it deserves.
Easy though this phenomenon is to understand once one sees it, it is not trivial.
After all, the second law of thermodynamics tells us that overall entropy increases and complexity diminishes. Yet
we see complexity, both natural and man-made, continually increasing. For the
most part, this increasing complexity
consists of the development of new autonomous entities, entities that implement the abstract designs of dissipative
structures.
This does not contradict the Second
Law. Each autonomous entity maintains
its own internally reduced entropy by using energy imported from the environment to export entropy to the environment. Overall entropy increases. Such a
process works only in an environment
that itself receives energy from outside
itself. Within such an environment,
complexity increases.
Progress in science and technology and
the bountifulness of the marketplace all
exemplify this pattern of increasing
complexity. One might refer to this kind
of pattern as a meta-epiphenomenon
since it is an epiphenomenon of the process that creates epiphenomena.
This creative process also tends to exhibit a second meta-epiphenomenon. Overall energy utilization becomes continually more efficient. As new autonomous
entities find ways to use previously unused or under-used energy flows (or
forms of energy flows that had not existed until some newly created autonomous
entity generated them, perhaps as a
waste product), more of the energy
available to the system as a whole is put
to use.
The process whereby new autonomous
entities come into existence and perpetuate themselves is non-reductive. It is
creative, contingent, and almost entirely
a sequence of historical accidents. As
they say, history is just one damn thing
after another—to which we add, and nature is a bricolage. We repeat the observation Anderson made more than three
decades ago.
The ability to reduce everything to simple
fundamental laws [does not imply] the ability to start from those laws and reconstruct
the universe.
19 Undecidability of emergence
I'd like to ask you about the following (p. 6),
which you also mentioned in your talk Sunday.
In the case of elements we can predict
particular properties perhaps such as ionization energies but not chemical behavior. In the case of compounds what can
be achieved is an accurate estimate, and
in many cases even predictions, regarding specific properties in the compounds
that are known to have formed between
the elements in question. Quantum mechanics cannot yet predict what compounds will actually form.
Can you tell me what it would mean to predict
chemical behaviors and in what sense QM can't
do that. To take an extreme case, QM can't "predict" that an automobile can transport a person
from here to there. It can't predict that because
the concepts of automobile and person are not
part of the ontology of QM. But presumably
QM can explain (if one has the patience and once
all the concepts are translated into QM terms)
how an automobile is able to transport people
from here to there.
Similarly, QM can't predict the existence of automobiles. First of all, it can't do it on the same
grounds as above. Secondly, one of the issues
you mentioned on Sunday was that prediction
was not possible because of environmental factors. But that seems like a weak argument. Are
you making a stronger claim than that? If you
took all the environmental factors into account
(Is there a claim that it's not possible to do that?)
and if one asked QM to predict whether a prespecified compound would result from a given
starting point, are you claiming that QM can't do
that, even in theory? If so, why is that? Is it a
matter of the probabilistic nature of QM?
Finally, it's not clear to me that it would be theoretically possible for QM to predict automobiles
on the following grounds. (This is a computer
science argument.)
To predict an automobile, let's enumerate all
conceivable configurations of a (very large but)
finite set of elementary particles and use QM to
determine which are actually possible, i.e., which
are consistent with QM. One of them is bound
to be an automobile. So QM could predict an automobile.
In Computer Science and Logic that strategy
breaks down at the step in which one asks QM
(or the equivalent logical theory) to determine
which are actually possible, i.e., consistent with
QM. (Godel's incompleteness and the unsolvability of the halting problem. That step cannot be
guaranteed to terminate.)
For QM presumably the same problem doesn't
arise. The computation presumably terminates.
(Is that true?) Although the actual computation
may not be feasible.
But how about the enumeration step? Is it reasonable to assume that one could enumerate all
possible configurations of a finite number of elementary particles? I suspect that it isn't. If
space is continuous, it certainly isn't. Even if
space is discrete, there are probably difficulties,
which I'm not knowledgeable enough to be able
to state now. So on those grounds QM is not able
to predict automobiles.
Is that the sense in which you claim that QM
can't predict which compounds will form?
What I had in mind in the message below was
simply(!) to determine whether a configuration is
consistent with QM laws. But an interesting extension is to ask whether, given a starting configuration, one can determine whether some final configuration will result. Besides all the chaos- and probability-related issues, one can pull in undecidability directly by arguing that if part of the process that one
is attempting to analyze involves a computation,
then by downward entailment (from my paper), one
can't decide if the process one is analyzing will
terminate. So QM (and any physical process that is
capable of supporting general computation, which
is pretty much everything) is undecidable from that
perspective.
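The computability step of that argument is the standard halting-problem construction, which can be sketched as follows (a textbook sketch, nothing specific to QM; the function names are ours). If a total, always-correct decider for “does this process terminate?” existed, one could build a process that does the opposite of whatever the decider predicts about it, which is a contradiction.

# Standard halting-problem sketch: assume a decider halts(program, input)
# exists, then construct a program on which it must be wrong.
def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) terminates.
    Assumed, for the sake of contradiction, to exist."""
    raise NotImplementedError("no such total, always-correct decider exists")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # the program on itself.
    if halts(program, program):
        while True:                     # loop forever if the oracle says "halts"
            pass
    return "halted"                     # halt if the oracle says "loops"

# If halts were real, contrary(contrary) would halt exactly when the oracle
# says it doesn't. So no such decider exists, and any physical process rich
# enough to embed computation inherits the same limit.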
20 Concluding remarks
For most of its history, science has pursued the goal of explaining existing phenomena in terms of simpler phenomena.
That’s the reductionist agenda.
The approach we have taken is to ask
how new phenomena may be constructed from and implemented in terms of existing phenomena. That’s the creative
impulse of artists, computer scientists,
engineers—and of nature. It is these new
phenomena that are often thought of as
emergent.
When thinking in the constructive direction, a question arises that is often underappreciated: what allows one to put existing things together to get something
new—and something new that will persist in the world? What binding forces
and binding strategies do we (and nature) have at our disposal?
Our answer has been that there are two
sorts of binding strategies: energy wells
and energy-consuming processes. Energy wells are reasonably well understood—although it is astonishing how
many different epiphenomena nature and
technology have produced through the
use of energy wells.
We have not even begun to catalog the
ways in which energy-consuming processes may be used to construct stable,
self-perpetuating, autonomous entities.
Earlier we wrote that science does not
consider it within its realm to ask constructivist questions. That is not completely true. Science asks about how we
got here from the big bang, and science
asks about biological evolution. These
are both constructivist questions. Since
science is an attempt to understand nature, and since constructive processes
occur in nature, it is quite consistent with
the overall goals of science to ask how
these constructive processes work. As
far as we can determine, there is no subdiscipline of science that asks, in general, how the new arises from the existing.
Science has produced some specialized
answers to this question. The biological
evolutionary explanation involves random mutation and crossover of design
records. The cosmological explanation
involves falling into energy wells of various sorts. Is there any more to say about
how nature finds and then explores new
possibilities? Even if, as Dennett argues in [Dennett ‘96], this process may be fully explicated as generalized Darwinian evolution, questions still remain. Is there
any useful way to characterize the search
space that nature is exploring? What
search strategies does nature use to explore that space? Clearly one strategy is
human inventiveness.
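As a schematic illustration of that kind of search, the sketch below (ours; the bit-string "design records" and the fitness function are stand-ins chosen only for illustration) explores a space by random mutation, crossover, and selection.

    # A schematic search by random mutation and crossover of "design records"
    # (bit strings).  The encoding and the fitness function are illustrative
    # stand-ins, not a model of any particular natural process.
    import random

    GENOME_LENGTH = 20
    POPULATION_SIZE = 30

    def fitness(genome):
        # Stand-in objective: number of 1-bits ("one-max").
        return sum(genome)

    def mutate(genome, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]        # selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + children

    print("best design record found:", max(population, key=fitness))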
21 Acknowledgement
We are grateful for numerous enjoyable
and insightful discussions with Debora
Shuger during which many of the ideas
in this paper were developed and refined.
References
Abbott, R., “Emergence, Entities, Entropy, and Binding Forces,” The Agent
2004 Conference on: Social Dynamics:
Interaction, Reflexivity, and Emergence,
Argonne National Labs and University
of Chicago, October 2004. URL as of
4/2005:
http://abbott.calstatela.edu/PapersAndTa
lks/abbott_agent_2004.pdf.
American Heritage, The American Heritage® Dictionary of the English Language, 2000. URL as of 9/7/2005:
http://www.bartleby.com/61/67/S014670
0.html.
Anderson, P.W., “More is Different,”
Science, 177 393-396, 1972.
Aoki, M. Toward a Comparative Institutional Analysis, MIT Press, 2001.
BBC News, “Gamer buys $26,500 virtual land,” BBC News, Dec. 17, 2004.
URL as of 2/2005:
http://news.bbc.co.uk/1/hi/technology/41
04731.stm.
Bedau, M.A., “Downward causation and
the autonomy of weak emergence”.
Principia 6 (2002): 5-50. URL as of
11/2004:
http://www.reed.edu/~mab/papers/princi
pia.pdf.
Boyd, Richard, "Scientific Realism",
The Stanford Encyclopedia of Philosophy (Summer 2002 Edition), Edward N.
Zalta (ed.), URL as of 9/01/2005:
http://plato.stanford.edu/archives/sum20
02/entries/scientific-realism/.
Brown, J.S., Talk at San Diego State
University, January 18, 2005. URL as of
6/2005:
http://ctl.sdsu.edu/pict/jsb_lecture18jan0
5.pdf
Carroll, S.B., Endless Forms Most Beautiful: The New Science of Evo Devo and
the Making of the Animal Kingdom, W.
W. Norton, 2005.
Chaitin, G. Algorithmic Information
Theory, reprinted 2003. URL as of Sept.
6, 2005:
http://www.cs.auckland.ac.nz/CDMTCS/
chaitin/cup.pdf.
CFCS, Committee on the Fundamentals
of Computer Science: Challenges and
Opportunities, National Research Council, Computer Science: Reflections on
the Field, Reflections from the Field,
2004. URL as of 9/9/2005:
http://www.nap.edu/books/0309093015/
html/65.html.
Church, G. M., “From systems biology
to synthetic biology,” Molecular Systems Biology, March, 29, 2005. URL as
of 6/2005:
http://www.nature.com/msb/journal/v1/n
1/full/msb4100007.html.
Ciborra, C. "From Thinking to Tinkering: The Grassroots of Strategic Information Systems", The Information Society 8, 297-309, 1992.
Clarke, A. C., "Hazards of Prophecy:
The Failure of Imagination,” Profiles of
The Future, Bantam Books, 1961.
Cockshott, P. and G. Michaelson, “Are
There New Models of Computation: Reply to Wegner and Eberbach.” URL as
of Oct. 10, 2005:
http://www.dcs.gla.ac.uk/~wpc/reports/
wegner25aug.pdf.
Colbaugh, R. and Kristen Glass, “Low
Cognition Agents: A Complex Networks
Perspective,” 3rd Lake Arrowhead Conference on Human Complex Systems,
2005.
Collins, Steven, Martijn Wisse, and
Andy Ruina, “A Three-Dimensional
Passive-Dynamic Walking Robot with
Two Legs and Knees,” The International
Journal of Robotics Research Vol. 20,
No. 7, July 2001, pp. 607-615, URL as
of 2/2005:
http://ruina.tam.cornell.edu/research/topi
cs/locomotion_and_robotics/papers/3d_p
assive_dynamic/3d_passive_dynamic.pdf
Comm Tech Lab and the Center for Microbial Ecology, The Microbe Zoo, URL
as of Oct 10, 2005:
http://commtechlab.msu.edu/sites/dlc-me/zoo/microbes/riftiasym.html
Comte, A. “Positive Philosophy,” translated by Harriet Martineau, NY: Calvin
Blanchard, 1855. URL: as of 7/2005:
http://www.d.umn.edu/cla/faculty/jhamli
n/2111/ComteSimon/Comtefpintro.html.
Cowan, R., “A spacecraft breaks open a
comet's secrets,” Science News Online,
Vol. 168, No. 11 , p. 168, Sept. 10, 2005.
URL as of 9/9/2005:
http://www.sciencenews.org/articles/200
50910/bob9.asp.
Dennett, D. C., The Intentional Stance,
MIT Press/Bradford Books, 1987.
Dennett, D. C. “Real Patterns,” The
Journal of Philosophy, (88, 1), 1991.
Dennett, D. C., Darwin's Dangerous
Idea: Evolution and the Meanings of
Life, V, 1996.
Dick, D., et al., “C2 Policy Evolution at
the U.S. Department of Defense,” 10th
International Command and Control Research and Technology Symposium, Office of the Assistant Secretary of Defense, Networks and Information Integration (OASD-NII), June 2005. URL as
of 6/2005:
http://www.dodccrp.org/events/2005/10t
h/CD/papers/177.pdf.
Einstein, A., Sidelights on Relativity, An
address delivered at the University of
Leyden, May 5th, 1920. URL as of
6/2005:
http://www.gutenberg.org/catalog/world/
readfile?fk_files=27030.
Emmeche, C., S. Køppe and F. Stjernfelt,
“Levels, Emergence, and Three Versions
of Downward Causation,” in Andersen,
P.B., Emmeche, C., N. O. Finnemann
and P. V. Christiansen, eds. (2000):
Downward Causation. Minds, Bodies
and Matter. Århus: Aarhus University
Press. URL as of 11/2004:
http://www.nbi.dk/~emmeche/coPubl/20
00d.le3DC.v4b.html.
Fodor, J. A., “Special Sciences (or the
disunity of science as a working hypothesis),” Synthese 28: 97-115. 1974.
Fodor, J.A., “Special Sciences; Still Autonomous after All These Years,” Philosophical Perspectives, 11, Mind, Causation, and World, pp 149-163, 1998.
Fredkin, E., "Digital Mechanics", Physica D, (1990) 254-270, North-Holland.
URL as of 6/2005: This and related papers are available as of 6/2005 at the
Digital Philosophy website, URL:
http://www.digitalphilosophy.org/.
Gardner, M., Mathematical Games: “The
fantastic combinations of John Conway's
new solitaire game ‘life’," Scientific
American, October, November, December, 1970, February 1971. URL as of
11/2004:
http://www.ibiblio.org/lifepatterns/octob
er1970.html.
Grasse, P.P., “La reconstruction du nid
et les coordinations inter-individuelles
chez Bellicosi-termes natalensis et Cubitermes sp. La theorie de la stigmergie:
Essai d'interpretation des termites constructeurs.” Ins. Soc., 6, 41-83, 1959.
Hardy, L., “Why is nature described by
quantum theory?” in Barrow, J.D.,
P.C.W. Davies, and C.L. Harper, Jr. Science and Ultimate Reality, Cambridge
University Press, 2004.
Holland, J. Emergence: From Chaos to
Order, Addison-Wesley, 1997.
Hume, D. An Enquiry Concerning Human Understanding, Vol. XXXVII, Part
3. The Harvard Classics. New York: P.F.
Collier & Son, 1909–14; Bartleby.com,
2001. URL as of 6/2005:
www.bartleby.com/37/3/.
Kauffman, S. “Autonomous Agents,” in
Barrow, J.D., P.C.W. Davies, and C.L.
Harper, Jr. Science and Ultimate Reality,
Cambridge University Press, 2004.
Kim, J. “Multiple realization and the
metaphysics of reduction,” Philosophy
and Phenomenological Research, v 52,
1992.
Kim, J., Supervenience and Mind. Cambridge University Press, Cambridge,
1993.
Langton, C., "Computation at the Edge
of Chaos: Phase Transitions and Emergent Computation." In Emergent Computation, edited by Stephanie Forest.
The MIT Press, 1991.
Laughlin, R.B., A Different Universe,
Basic Books, 2005.
Laycock, Henry, "Object", The Stanford
Encyclopedia of Philosophy (Winter
2002 Edition), Edward N. Zalta (ed.),
URL as of 9/1/05:
http://plato.stanford.edu/archives/win200
2/entries/object/.
Leibniz, G.W., Monadology, for example, Leibniz's Monadology, ed. James
Fieser (Internet Release, 1996). URL as
of 9/16/2005:
http://stripe.colorado.edu/~morristo/mon
adology.html
Lowe, E. J., “Things,” The Oxford Companion to Philosophy, (ed T. Honderich),
Oxford University Press, 1995.
Maturana, H. & F. Varela, Autopoiesis
and Cognition: the Realization of the
Living., Boston Studies in the Philosophy of Science, #42, (Robert S. Cohen
and Marx W. Wartofsky Eds.), D. Reidel
Publishing Co., 1980.
Miller, Barry, "Existence", The Stanford
Encyclopedia of Philosophy (Summer
2002 Edition), Edward N. Zalta (ed.),
URL as of 9/1/05:
http://plato.stanford.edu/archives/sum20
02/entries/existence/.
NASA (National Aeronautics and Space
Administration), “Hurricanes: The
Greatest Storms on Earth,” Earth Observatory. URL as of 3/2005
http://earthobservatory.nasa.gov/Library/
Hurricanes/.
Nave, C. R., “Nuclear Binding Energy”,
Hyperphysics, Department of Physics
and Astronomy, Georgia State University. URL as of 6/2005:
http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/nucbin.html.
NOAA, Glossary of Terminology, URL
as of 9/7/2005:
http://www8.nos.noaa.gov/coris_glossar
y/index.aspx?letter=s.
O'Connor, Timothy, Wong, Hong Yu
"Emergent Properties", The Stanford
Encyclopedia of Philosophy (Summer
2005 Edition), Edward N. Zalta (ed.),
forthcoming URL:
http://plato.stanford.edu/archives/sum20
05/entries/properties-emergent/.
Prigogine, Ilya and Dilip Kondepudi,
Modern Thermodynamics: from Heat
Engines to Dissipative Structures, John
Wiley & Sons, N.Y., 1997.
Ray, T. S., “An approach to the synthesis of life,” Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, vol. XI, Eds. C. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen, Redwood City, CA: Addison-Wesley, 371-408, 1991. URL page for Tierra as of 4/2005: http://www.his.atr.jp/~ray/tierra/.
Rendell, Paul, “Turing Universality in the Game of Life,” in Adamatzky, Andrew (ed.), Collision-Based Computing, Springer, 2002. URLs as of 4/2005: http://rendell.server.org.uk/gol/tmdetails.htm, http://www.cs.ualberta.ca/~bulitko/F02/papers/rendell.d3.pdf, and http://www.cs.ualberta.ca/~bulitko/F02/papers/tm_words.pdf.
Rosen, Gideon, "Abstract Objects", The Stanford Encyclopedia of Philosophy (Fall 2001 Edition), Edward N. Zalta (ed.), URL as of 9/1/05: http://plato.stanford.edu/archives/fall2001/entries/abstract-objects/.
Sachdev, S., “Quantum Phase Transitions,” in The New Physics, (ed G. Fraser), Cambridge University Press, (to appear 2006). URL as of 9/11/2005: http://silver.physics.harvard.edu/newphysics_sachdev.pdf.
Shalizi, C., Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata, Ph.D. Dissertation, Physics Department, University of Wisconsin-Madison, 2001. URL as of 6/2005: http://cscs.umich.edu/~crshalizi/thesis/single-spaced-thesis.pdf.
Shalizi, C., “Emergent Properties,” Notebooks, URL as of 6/2005: http://cscs.umich.edu/~crshalizi/notebooks/emergent-properties.html.
Shalizi, C., “Review of Emergence from Chaos to Order,” The Bactra Review, URL as of 6/2005: http://cscs.umich.edu/~crshalizi/reviews/holland-on-emergence/.
Smithsonian Museum, “Chimpanzee Tool Use,” URL as of 6/2005: http://nationalzoo.si.edu/Animals/ThinkTank/ToolUse/ChimpToolUse/default.cfm.
Summers, J., “Jason’s Life Page,” URL as of 6/2005: http://entropymine.com/jason/life/.
Trani, M. et al., “Patterns and trends of early successional forest in the eastern United States,” Wildlife Society Bulletin, 29(2), 413-424, 2001. URL as of 6/2005: http://www.srs.fs.usda.gov/pubs/rpc/2002-01/rpc_02january_31.pdf.
University of Delaware, Graduate College of Marine Studies, Chemosynthesis, URL as of Oct 10, 2005: http://www.ocean.udel.edu/deepsea/level-2/chemistry/chemo.html.
Uvarov, E.B., and A. Isaacs, Dictionary of Science, September, 1993. URL as of 9/7/2005: http://oaspub.epa.gov/trs/trs_proc_qry.navigate_term?p_term_id=29376&p_term_cd=TERMDIS.
Varzi, Achille, "Boundary", The Stanford Encyclopedia of Philosophy (Spring
2004 Edition), Edward N. Zalta (ed.),
URL as of 9/1/2005:
http://plato.stanford.edu/archives/spr200
4/entries/boundary/.
Varzi, A., "Mereology", The Stanford
Encyclopedia of Philosophy (Fall 2004
Edition), Edward N. Zalta (ed.), URL as
of 9/1/2005:
http://plato.stanford.edu/archives/fall200
4/entries/mereology/ .
Wallace, M., “The Game is Virtual. The
Profit is Real.” The New York Times,
May 29, 2005. URL of abstract as of
6/2005:
http://query.nytimes.com/gst/abstract.ht
ml?res=F20813FD3A5D0C7A8EDDAC
0894DD404482.
Wegner, P. and E. Eberbach, “New Models of Computation,” Computer Journal, Vol 47, No. 1, 2004.
Wegner, P. and D. Goldin, “Computation beyond Turing Machines”, Communications of the ACM, April 2003. URL as of 2/22/2005: http://www.cse.uconn.edu/~dqg/papers/cacm02.rtf.
Weinberg, S., “Reductionism Redux,” The New York Review of Books, October 5, 1995. Reprinted in Weinberg, S., Facing Up, Harvard University Press, 2001. URL as of 5/2005 as part of a discussion of reductionism: http://pespmc1.vub.ac.be/AFOS/Debate.html.
Wiener, P.P., Dictionary of the History of Ideas, Charles Scribner's Sons, 1973-74. URL as of 6/2005: http://etext.lib.virginia.edu/cgi-local/DHI/dhi.cgi?id=dv4-42.
Wolfram, S., A New Kind of Science, Wolfram Media, 2002. URL as of 2/2005: http://www.wolframscience.com/nksonline/toc.html.
WordNet 2.0, URL as of 6/2005: www.cogsci.princeton.edu/cgi-bin/webwn.
Woodward, James, "Scientific Explanation", The Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.). URL as of 9/13/2005: http://plato.stanford.edu/archives/sum2003/entries/scientific-explanation/.
Zuse, K., “Rechnender Raum” (Vieweg, Braunschweig, 1969); translated as Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, MIT (Project MAC), Cambridge, Mass. 02139, Feb. 1970. URL as of 6/2005: ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf.
Figures and Tables
Table 1. Dissipative structures vs. self-perpetuating entities

Dissipative structures                      | Self-perpetuating entities
Epiphenomena, e.g., 2-chamber example.      | Has functional design, e.g., hurricane.
Artificial boundaries.                      | Self-defining boundaries.
Externally maintained energy gradient.      | Imports, stores, and internally distributes energy.
Figure 1. Four Rayleigh-Benard convection patterns
Figure 2. Anatomy of a hurricane. [Image from [NASA].]