
Principles of Complex Systems:
How to think like nature
Part 1
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Does nature really think?
Russ.Abbott@Aero.org
 1998-2007. The Aerospace Corporation. All Rights Reserved.
1
Complex systems course overview
9:00–9:10.   Introduction and motivation.
9:10–9:25.   Overview – unintended consequences, mechanism, function, and purpose; levels of abstraction, emergence, introduction to NetLogo.
9:25–9:45.   Emergence, levels of abstraction, and the reductionist blind spot.
9:45–9:55.   Modeling; thought externalization; how engineers and computer scientists think.
9:55–10:05.  Break.
10:05–10:15. Evolution and evolutionary computing.
10:15–10:30. Innovation – exploration and exploitation.
10:30–10:45. Platforms – distributed control and systems of systems.
10:45–10:55. Groups – how nature builds systems; the wisdom of crowds.
10:55–11:00. Summary/conclusions – remember this if nothing else.
Lots of echoes and repeated themes from one section to another.
2
Principles of Complex Systems:
How to think like nature
What “complex systems” means,
and why you should care.
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
 1998-2007. The Aerospace Corporation. All Rights Reserved.
3
What we will be talking about.
“Complex systems” refers to an anti-reductionist way of thinking that developed in the 1980s in Biology, Computer Science, Economics, Physics, and other fields. (The term complexity is also used this way.)
– It is not intended to refer to a particular category of systems, which are presumably distinguished from other systems that aren’t “complex.”
Isn’t that true of all systems?
But if I had to define what a “complex system” is …
– A collection of autonomous elements that interact both with each other and with their environment and that exhibits aggregate, ensemble, macro behaviors that none of the elements exhibit individually.
System: a construct or collection of different elements that together produce results not obtainable by the elements alone. — Eberhardt Rechtin, Systems Architecting of Organizations: Why Eagles Can’t Swim, CRC, 1999.
We are in the business of building “complex systems.”
4
A satellite in a geostationary orbit:
one of the simplest possible “complex systems”
Fixed with respect to the earth as a reference frame.
An “emergent” property
But nothing is tying it down.
No cable is holding it in place.
What is the environment?
period of the orbit = period of the earth’s rotation
Typical of complex system mechanisms.
Multiple independent or quasi-independent processes
— which are not directly connected causally (agents) —
interact within an environment to produce a result.
5
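As a quick check of the period-matching condition above, here is a minimal sketch (not part of the original slides) that solves Kepler's third law for the circular orbit whose period equals one sidereal day. The constant values are standard physics; the variable names and layout are mine.

```python
import math

# Standard gravitational parameter of the earth (mu = G * M_earth), m^3/s^2.
MU_EARTH = 3.986004418e14
# One sidereal day (the earth's rotation period relative to the stars), seconds.
SIDEREAL_DAY = 86164.1
EARTH_RADIUS_KM = 6378.0  # equatorial radius, km

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(r^3 / mu).
# Setting T equal to the earth's rotation period and solving for r gives the
# geostationary radius: r = (mu * T^2 / (4*pi^2))^(1/3).
r = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"geostationary radius   ~ {r / 1000:,.0f} km from the earth's center")
print(f"altitude above surface ~ {r / 1000 - EARTH_RADIUS_KM:,.0f} km")
# Expected: roughly 42,164 km radius, i.e. about 35,786 km altitude.
```

Any other radius drifts relative to the ground; the "fixed overhead" property exists only because the two independent processes (the orbit and the earth's rotation) happen to have matching periods.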
Complex systems terms
• Emergence. A level of abstraction that can be
described independently of its implementation.
• Multi-scalar. Applicable to systems that are
understood on multiple levels simultaneously,
especially when a lower level implements the
emergence of some functionality at a higher level.
6
See next few slides.
Why should you care?
• Because our corporate leadership, our customers, and their contractors
think it’s important.
– Rumsfeld’s inspiration for transformation in the military grew out of this way
of thinking. This field is not new. It’s at least 2 decades old.
– The Command and Control Research Program (CCRP) in the Pentagon
(Dave Alberts) is successfully promoting this style of thinking within the DoD.
– Complex systems thinking is a generalization of and the foundation for netcentric thinking—and the way the world has changed as a result of the web.
• You should understand what they are talking about
– So that you can explain it to them.
• Because it offers a powerful new way to think about how systems work.
• Because large systems—and especially systems of systems (another
important buzz-word)—tend to be complex in the ways we will discuss.
• Because the ideas are interesting, important, and good for you.
7
General Hamel and Dr. Austin think it’s important
• General Michael Hamel (moderator)
– “Where Commercial, Civil, and Military Space
Intersect: Cooperation, Conflict, and Execution of
the Mission”
• Plenary Session, AIAA Space 2007.
• Dr. Wanda Austin,
– “Space System of Systems Engineering”
• USC Center for Systems and Software Engineering
Convocation, October 2006.
8
What is System of Systems Engineering?*
The process of planning, analyzing, organizing, and integrating
the capability of a mix of existing and new systems into a
system-of-system capability that is greater than the sum of
the capabilities of the constituent parts.
Emergence
The process emphasizes the process of discovering,
developing, and implementing standards that promote
interoperability among systems developed via different
sponsorship, management, and primary acquisition processes.
Many people know that something is missing
from the way we look at systems traditionally. But
most are groping to express just what it is. This
course is intended to help sharpen the picture.
Platforms
* USAF SAB Report: System of Systems Engineering for Air Force
Capability Development, July 2005
9
What is a System of Systems?
Functional decomposition: small stovepipes to large stovepipes – NO.
Level of abstraction: loosely coupled and tightly integrated – YES.
Platforms
10
Planning Complex Endeavors (April 2007)
by David S. Alberts and Richard E. Hayes
Alberts’ term for what a
complex system does.
The Command and Control Research Program (CCRP)
has the mission of improving DoD’s understanding of
the national security implications of the Information Age.
• John G. Grimes, Assistant Secretary of Defense (NII) & Chief
Information Officer
• Dr. Linton Wells, II, Principal Deputy Assistant Secretary of
Defense (NII)
• Dr. David S. Alberts, Special Assistant to the ASD(NII) & Director
of Research
11
From the foreword by John G. Grimes
As this latest book from the CCRP explains, we can no longer be
content with building an “enterprise-wide” network that stops at the
edges of our forces, nor with a set of information sources and
channels that are purely military in nature. We need to be able to work
with a large and diverse set of entities and information sources. We
also need to develop new approaches to planning that are better
suited for these coalition operations.
The implications are significant for a CIO as it greatly expands the who, the what, and the how of information sharing and collaboration. It also requires a new way of thinking about effectiveness, increasing the emphasis we place on agility, which, as is explained in this book, is the necessary response to uncertainty and complexity.
What is this “new way of thinking?”
From Chapter 1. Introduction
The economics of communications and information technologies has created enormous opportunities to leverage the power of information and collaboration cost effectively by adopting Power to the Edge principles and network-centric concepts.
Platforms
Exploration and exploitation
Fine to use these terms, but what do we really mean by them?
12
From Chapter 2. Key Concepts
Complicated Systems
Systems that have many moving parts or actors and are highly
dynamic, that is, the elements of these systems constantly interact
with and impact upon one another. However, the cause and effect
relationships within a complicated situation are generally well
understood, which allows planners to predict the consequences of
specific actions with some confidence.
I think this misses the point. Most systems and interactions are (eventually) “well
understood.” Complicated systems are often fully entrained with one locus of control.
13
From Chapter 2. Key Concepts
Complex Endeavors (Systems)
Complex endeavors involve changes and behaviors that cannot be predicted in detail, although those behaviors and changes can be expected to form recognizable patterns. Complex endeavors are also characterized by circumstances in which relatively small differences in initial conditions or relatively small perturbations (seemingly tactical actions) are associated with very large changes in the resulting patterns of behavior and/or strategic outcomes.
Both chaos and phase transitions.
I disagree—although sometimes the only way to predict it is to run (a model of) it.
Some complex situations develop into complex adaptive systems (CAS), which tend to be robust—to persist over time and across a variety of circumstances. These are often observed in nature in the form of biological or ecological systems. However, while these systems are thought of as robust, they can be pushed out of balance even to the point of collapse through cascades of negatively reinforcing conditions and behaviors. Such perturbations are what ecologists fear when a habitat is reduced to an isolated geographic area or when invasive, nonnative species are introduced.
Note biological reference.
14
Like many networks, this complex system
involves many independently operating
elements (the trains) that together enable one to
get from any one station to any other without a
massive number of point-to-point connections.
Simon Patterson is fascinated by the information which orders our lives. He humorously dislocates and subverts sources of information
such as maps, diagrams and constellation charts; one of his best known works is The Great Bear, in which he replaced the names of stations
on the London Underground map with names of philosophers, film stars, explorers, saints and other celebrities. By transforming
authoritative data with his own associations he challenges existing rationales.
15
The world in a grain of sand
To see the world in a grain of sand,
and heaven in a wild flower,
to hold infinity in the palm of your hand,
and eternity in an hour. –William Blake
Do you understand what that means? It’s
profound, but it uses poetry to make its point.
A primary objective of this class is to say in as plain
a way as possible what many people have been
groping for when talking about complex systems.
Many of the ideas may seem like common sense.
It’s just that we’ll be looking at them more closely.
16
Introduction to Complex Systems:
How to think like nature
Unintended consequences;
mechanism, function, and purpose
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
This segment introduces
some basic concepts.
 1998-2007. The Aerospace Corporation. All Rights Reserved.
17
A fable
• Once upon a time, a state in India had too many snakes.
• To solve this problem the government instituted an incentive-based program to encourage its citizens to kill snakes.
• It created the No Snake Left Alive program.
– Anyone who brings a dead snake into a field office of the
Dead Snake Control Authority (DSCA) will be paid a
generous Dead Snake Bounty (DSB).
• A year later the DSB budget was exhausted. DSCA had paid
for a significant number of dead snakes.
• But there was no noticeable reduction in the number of
snakes plaguing the good citizens of the state.
• What went wrong?
18
The DSCA mechanism
The mechanism, as a loop: catch, kill, and submit a dead snake → dead snake verifier → receive a dead snake certificate → submit the certificate to the DSCA → receive money → … start a snake farm.
What would you do if this mechanism were available in your world?
19
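A toy model, entirely made up (not from the course), of how the same mechanism ends up serving two different functions. Every number below is arbitrary; the point is only that once snake farming starts, the payout and the wild-snake population are driven by different processes.

```python
def run_program(years=3, wild_snakes=10_000, bounty=5, farming=False):
    """Toy Dead Snake Bounty model: returns (money paid, wild snakes remaining)."""
    paid = 0
    for _ in range(years):
        if farming:
            submitted = 30_000            # farm-raised snakes, cheap to produce
            wild_kills = 0                # no one bothers hunting wild snakes
        else:
            submitted = wild_kills = int(wild_snakes * 0.20)  # wild snakes turned in
        wild_snakes = int((wild_snakes - wild_kills) * 1.10)  # survivors breed
        paid += bounty * submitted        # the DSCA pays per dead snake, no questions asked
    return paid, wild_snakes

print("intended use: ", run_program(farming=False))
print("snake farming:", run_program(farming=True))
# With farming, the DSCA pays far more while the wild population keeps growing:
# the mechanism's function (paying for dead snakes) has come apart from its purpose.
```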
Moral: unintended consequences
• A mechanism is installed in an environment.
• The mechanism is used/exploited in unanticipated
ways.
• Once a mechanism is installed in the environment, it
will be used for whatever purposes “users” can think
to make of it …
– which may not be that for which it was originally
intended.
The first lesson of complex systems thinking is
that one must always be aware of the relationship
between systems and their environments.
20
Parasites that control their hosts
• Dicrocoelium dendriticum causes host ants to climb grass blades, where they are eaten by grazing animals, inside which D. dendriticum lives out its adult life.
• Toxoplasma gondii causes host mice not to fear cats, inside which T. gondii reproduces.
• Spinochordodes tellinii causes host insects to jump into the water and drown, where S. tellinii grows to adulthood.
It’s amazing how far exploitation
of environmental mechanisms
can go. (See platforms, later.)
21
Locomotion in E. coli
• E. coli movements consist of short straight runs, each lasting a second or less, punctuated by briefer episodes of random tumbling.
• Each tumble reorients the cell and sets it off in a new direction. (Exploration)
• Cells that are moving up the gradient of an attractant tumble less frequently than cells wandering in a homogeneous medium or moving away from the source. (Exploitation)
• In consequence, cells take longer runs toward the source and shorter ones away. (Gain benefit)
A minimal run-and-tumble sketch of this strategy follows below.
Harold, Franklyn M. (2001) The Way of the Cell: Molecules,
Organisms, and the Order of Life, Oxford University Press.
22
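The run-and-tumble strategy above can be captured in a few lines. This is a deliberately crude sketch of my own (not Harold's model and not NetLogo's): the only rule is to tumble less often when the attractant concentration increased during the last step.

```python
import math
import random

def chemotaxis(steps=2000, biased=True, seed=1):
    """Unit-step run-and-tumble walk; the attractant source sits at the origin."""
    rng = random.Random(seed)
    x, y = 100.0, 0.0                      # start 100 units from the source
    heading = rng.uniform(0, 2 * math.pi)
    last_conc = -math.hypot(x, y)          # concentration increases toward the source
    for _ in range(steps):
        x += math.cos(heading)
        y += math.sin(heading)
        conc = -math.hypot(x, y)
        improving = conc > last_conc
        last_conc = conc
        # Tumble (pick a new random heading) with a probability that is lower
        # when the last step moved up the gradient -- the only "decision" made.
        p_tumble = 0.1 if (biased and improving) else 0.5
        if rng.random() < p_tumble:
            heading = rng.uniform(0, 2 * math.pi)
    return math.hypot(x, y)                # final distance from the source

print("biased tumbling, final distance:  ", round(chemotaxis(biased=True), 1))
print("unbiased tumbling, final distance:", round(chemotaxis(biased=False), 1))
# The biased walker typically ends up much closer to the source, even though
# no step ever points it at the source deliberately.
```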
Mechanism, function, and purpose
• Mechanism: The physical processes within
an entity.
– The chemical reactions built into E.coli that result in its
flagella movements.
– The DSCA mechanism.
• Function: The effect of a mechanism on the
environment and on the relationship between
an entity and its environment.
– E. coli moves about. In particular, it moves up nutrient
gradients.
– Snakes are killed and delivered; money is exchanged.
• Purpose: The (presumably positive)
consequence for the entity of the change in
its environment or its relationship with its
environment. (But Nature is not teleological.)
– E. coli is better able to feed, which is necessary for its
survival.
– Snake farming is encouraged?
Socrates
Compare to Measures of Performance, Effectiveness, and Utility
23
Teleology: building “purpose”
Nature (e.g., E. coli locomotion to food):
• Evolve a new mechanism.
• Experience the resulting functionality.
• If the functionality enhances survival, keep the mechanism.
• “Purpose” has been created implicitly as part of a new level of abstraction.

Designed (e.g., reduce snake population):
• Envision a purpose.
• Imagine how a function can achieve that purpose.
• Design and develop a mechanism to perform that function.
• Deploy the mechanism and hope the purpose is achieved.
Most of the design steps require significant conceptualization abilities.

In both cases, the world will be changed by the addition of the new functionality. The purpose is more likely to be achieved in nature.
24
NetLogo: let’s try it
File > Models Library > Biology > Ants
Click Open
25
Two levels of emergence
• No individual chemical reaction inside the
ants is responsible for making them follow
the rules that describe their behavior.
• That the internal chemical reactions
together do is an example of emergence.
• No individual rule and no individual ant is
responsible for the ant colony gathering
food.
• That the ants together bring about that
result is a second level of emergence.
Layers: Colony results / Ant behaviors / Ant chemistry. Each layer is a level of abstraction.
Notice the similarity to layered communication protocols.
26
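For comparison with the NetLogo Ants model, here is a stripped-down, one-dimensional foraging sketch of my own. Note that the per-ant rules never mention the colony-level outcome; food nevertheless accumulates at the nest, which is the second level of emergence described above.

```python
import random

def forage(ticks=4000, world=60, food_at=50, food=200, n_ants=25, seed=2):
    """Purely local ant rules; 'gather the food at the nest' appears nowhere."""
    rng = random.Random(seed)
    pheromone = [0.0] * (world + 1)
    pos = [0] * n_ants                    # all ants start at the nest (cell 0)
    carrying = [False] * n_ants
    delivered, remaining = 0, food
    for _ in range(ticks):
        for a in range(n_ants):
            if carrying[a]:               # rule 1: head home, marking the trail
                pheromone[pos[a]] += pos[a] * 0.1   # scent is strongest near the food
                pos[a] -= 1
                if pos[a] == 0:
                    carrying[a], delivered = False, delivered + 1
            else:                         # rule 2: climb the scent gradient, else wander
                here = pos[a]
                left, right = max(here - 1, 0), min(here + 1, world)
                if pheromone[right] > pheromone[left]:
                    pos[a] = right
                else:
                    pos[a] = max(0, min(world, here + rng.choice((-1, 1))))
                if pos[a] == food_at and remaining > 0:   # rule 3: pick up food
                    remaining -= 1
                    carrying[a] = True
        pheromone = [0.99 * p for p in pheromone]         # evaporation
    return delivered, remaining

# With these (arbitrary) settings, most of the food typically ends up at the nest.
print("delivered, remaining:", forage())
```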
Two levels of emergence
• No individual chemical reaction inside the ants is responsible for making them follow the rules that describe their behavior.
• That the internal chemical reactions together do is an example of emergence.
• No individual rule and no individual ant is responsible for the ant colony gathering food.
• That the ants together bring about that result is a second level of emergence.
Ant layers: Colony results / Ant behaviors / Ant chemistry. Each layer is a level of abstraction.
Protocol layers, for comparison: Applications (e.g., email, IM, Wikipedia) / WWW (HTML) — for browsers + servers / Presentation / Session / Transport / Network / Physical.
Notice the similarity to layered communication protocols.
27
Principles of Complex Systems:
How to think like nature
Emergence: what’s right and what’s
wrong with reductionism
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
Philosophical, but
with a practical
corollary at the end.
Presumptuous?
 1998-2007. The Aerospace Corporation. All Rights Reserved.
28
Emergence: the holy grail of complex systems
How macroscopic behavior arises from microscopic behavior.
Emergent entities (properties
or substances) ‘arise’ out of
more fundamental entities and
yet are ‘novel’ or ‘irreducible’
with respect to them.
Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/properties-emergent/
Plato
The ‘scare’ quotes identify
problematic areas.
Emergence: Contemporary Readings in Philosophy and Science
Mark A. Bedau and Paul Humphreys (Eds.), MIT Press, April 2008.
29
Cosma Shalizi
http://cscs.umich.edu/~crshalizi/reviews/holland-on-emergence/
Someplace … where quantum field theory meets general relativity and atoms and void merge into one another, we may take “the rules of the game” to be given.
Call this emergence if you like. It’s a fine-sounding word, and brings to mind southwestern creation myths in an oddly apt way.
But the rest of the observable, exploitable order in the universe: benzene molecules, PV = nRT, snowflakes, cyclonic storms, kittens, cats, young love, middle-aged remorse, financial euphoria accompanied with acute gullibility, prevaricating candidates for public office, tapeworms, jet-lag, and unfolding cherry blossoms.
Where do all these regularities come from?
30
Erwin Schrödinger
“[L]iving matter, while not eluding the ‘laws of
physics’ … is likely to involve ‘other laws,’ [which]
will form just as integral a part of [its] science.”
Erwin Schrödinger, What is Life?, 1944.
Jerry Fodor: Why is there anything except physics?
Steven Weinberg: the ultimate reductionist.
Philip Anderson: “The ability to reduce everything to simple fundamental laws [does not imply] the ability to start from those laws and reconstruct the universe. … [We] must all start with reductionism, which I fully accept.” — “More is Different” (Science, 1972)
John Holland
31
The fundamental dilemma of science
Emergence (the functionalist claim): Are there autonomous higher level laws of nature?
The reductionist position: How can that be if everything can be reduced to the fundamental laws of physics?
My answer: It can all be explained in terms of levels of abstraction.
32
The Game of Life
File > Models Library > Computer Science > Cellular Automata > Life
Click Open
33
Gliders
• Gliders are causally powerless.
– A glider does not change how the rules operate or which cells will be
switched on and off. A glider doesn’t “go to a cell and turn it on.”
– A Game of Life run will proceed in exactly the same way whether one
notices the gliders or not. A very reductionist stance.
• But …
– One can write down equations that characterize glider motion and
predict whether—and if so when—a glider will “turn on” a particular cell.
– What is the status of those equations? Are they higher level laws?
Like shadows, they don’t “do” anything.
The rules are the only “forces!”
34
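A minimal Game of Life implementation (mine, not the NetLogo model) makes the point concrete: the step function applies only the local birth/survival rules, yet the run exhibits the glider-level regularity of one diagonal cell of displacement every four generations.

```python
from itertools import product

def step(live):
    """One Game of Life generation; `live` is a set of (row, col) cells."""
    counts = {}
    for r, c in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The rules never mention "glider", but after 4 generations the same shape
# reappears displaced by one cell diagonally -- a higher-level regularity.
dr = min(r for r, _ in state) - min(r for r, _ in glider)
dc = min(c for _, c in state) - min(c for _, c in glider)
assert state == {(r + dr, c + dc) for r, c in glider}
print(f"glider displaced by ({dr}, {dc}) after 4 generations")
```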
Game of Life as a Programming Platform
• Amazing as they are, gliders are also trivial.
– Once we know how to produce a glider, it’s
simple to make them.
• Can build a library of Game of Life patterns and
their interaction APIs.
By suitably arranging these patterns,
one can simulate a Turing Machine.
Paul Rendell. http://rendell.server.org.uk/gol/tmdetails.htm
A second level of emergence.
Emergence is not particularly mysterious.
35
Downward causation entailment
• The unsolvability of the TM halting problem entails the
unsolvability of the GoL halting problem.
– How strange! We can conclude something about the GoL because
we know something about Turing Machines.
– Yet the theory of computation is not derivable from GoL rules.
• One can use glider “velocity” laws to draw conclusions (make
predictions) about which cells will be turned on and when that will
happen. (Also downward entailment.)
GoL gliders and Turing Machines are
causally reducible but ontologically real.
– You can reduce them away without
changing how a GoL run will proceed.
– Yet they obey higher level laws, not
derivable from the GoL rules.
36
Level of abstraction: the reductionist blind spot
A concept computer science has contributed to the world.
A collection of concepts and relationships that can be described
independently of its implementation.
Every computer application creates one.
A level of abstraction is causally reducible to its implementation.
– You can look at the implementation to see how it works.
Its independent specification—its properties and way of being in
the world—makes it ontologically real.
– How it interacts with the world is based on its specification
and is independent of its implementation.
– It can’t be reduced away without losing something.
37
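A small programming analogy, not from the slides: a stack is specified independently of its implementation. The two implementations below realize the same specification, and the client code is written entirely at the level of abstraction; either implementation can be "reduced away" without changing what the client code is about.

```python
from typing import Protocol

class Stack(Protocol):
    """The level of abstraction: what a stack is, independent of how it is built."""
    def push(self, item: int) -> None: ...
    def pop(self) -> int: ...

class ListStack:
    """One implementation: a Python list."""
    def __init__(self) -> None:
        self._items: list[int] = []
    def push(self, item: int) -> None:
        self._items.append(item)
    def pop(self) -> int:
        return self._items.pop()

class LinkedStack:
    """A different implementation: linked (head, rest) pairs."""
    def __init__(self) -> None:
        self._head = None
    def push(self, item: int) -> None:
        self._head = (item, self._head)
    def pop(self) -> int:
        item, self._head = self._head
        return item

def reverse(xs: list[int], stack: Stack) -> list[int]:
    """Client code written against the specification, not an implementation."""
    for x in xs:
        stack.push(x)
    return [stack.pop() for _ in xs]

print(reverse([1, 2, 3], ListStack()))    # [3, 2, 1]
print(reverse([1, 2, 3], LinkedStack()))  # [3, 2, 1] -- same behavior, different "physics"
```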
Practical corollary: feasibility ranges
• Physical levels of abstraction
are implemented only within
feasibility ranges.
• When the feasibility range is
exceeded a phase transition
generally occurs.
Require contractors to identify the feasibility range
within which the implementation will succeed and
describe the steps taken to ensure that those
feasibility ranges are honored—and what happens
if they are not. (Think O-rings.)
38
Principles of Complex Systems:
How to think like nature
Modeling, the externalization of
thought, and how engineers and
computer scientists think
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
 1998-2007. The Aerospace Corporation. All Rights Reserved.
39
Modeling problems:
the difficulty of looking downward
Models of computer security or
terrorism will always be incomplete.
Can only model
unimaginative
enemies.
• It is not possible to find a non-arbitrary base level
for models.
– What are we leaving out that might matter?
• Use Morse code to transmit messages on encrypted lines.
• No good models of biological arms races.
– Insects vs. plants: bark, bark boring, toxin, anti-toxin, … .
• Geckos use the Van der Waals “force” to climb.
Nature is not segmented into
a strictly layered hierarchy.
Epiphenomenal
40
Modeling problems:
the difficulty of looking upward
• Don’t know how to build models that can notice emergent
phenomena and characterize their interactions. We don’t
know what we aren’t noticing.
– We/they can use our commercial airline system to deliver mail/bombs. (Exploit an existing process.)
• Model gravity as an agent-based system.
– Ask system to find equation of earth’s orbit.
– Once told what to look for, system can find ellipse. (GP)
– But it won’t notice the yearly cycle of the seasons — even though it is similarly emergent.
Models of computer security or
terrorism will always be incomplete.
Can only model
unimaginative
enemies.
41
Turning dreams into reality
• Computer Scientists and Engineers both turn
dreams (ideas) into reality—systems that operate
in the world.
• But we do it in very different ways.
42
Intellectual leverage in Computer Science:
executable externalized thought
• Computer languages enable executable externalized thought—
different from (nearly) all other forms of externalized thought
throughout history!
– Software is both intentional—has meaning—and executable.
– All other forms of externalized thought (except music) require a
human being to interpret them.
• The bit provides a floor that is both symbolic and real.
– Bits are: symbolic, physically real, and atomic.
– Bits don’t have error bars.
– Can build (ontologically real) levels of abstraction above them.
• But the bit limits realistic modeling.
– E.g., no good models of evolutionary arms races and many
other multi-scale (biological) phenomena. No justifiable floor.
– Challenge: build a computer modeling framework that
supports dynamically varying floors.
43
Intellectual leverage in Engineering:
mathematical modeling
• Engineering gains intellectual leverage through
mathematical modeling and functional decomposition.
– Models approximate an underlying reality (physics).
– Models don’t create ontologically independent entities.
• Engineering is both cursed and blessed by its
attachment to physicality.
– There is no reliable floor in the material world.
• Engineering systems often fail because of
unanticipated interactions among well designed
components, e.g. acoustic coupling that could not be
identified in isolation from the operation of the full
systems. National Academy of Engineering, Design in the New Millennium, 2000.
– But, if a problem appears, engineers (like scientists) can
dig down to a lower level to solve it.
44
Engineers and computer scientists are different —
almost as different as Venus and Mars
• Engineers are grounded in physics.
– Ultimately there is nothing besides physics.
– Even though engineers build things that have very different
(emergent) properties from their components, engineers tend
to think at the level of physics.
– When designing systems, engineers start with an idea and
build down to the physics—using functional decomposition
and successive approximation.
• Engineering is (proudly) applied physics.
• Computer scientists live in a world of abstractions.
– Physics has very little to do with computer science worlds.
– For computer scientists, there is more than physics, but we
may have a hard time saying what it is—emergence.
– When designing systems, computer scientists start with the
bit and build up to the idea—using levels of abstraction.
• Computer science is (cautiously) applied philosophy.
45
Complex systems course overview
9:00–9:10.   Introduction and motivation.
9:10–9:25.   Unintended consequences – mechanism, function, and purpose; introduction to NetLogo.
9:25–9:45.   Emergence – the reductionist blind spot and levels of abstraction.
9:45–9:55.   Modeling; thought externalization; how engineers and computer scientists think.
9:55–10:05.  Break.
10:05–10:15. Evolution and evolutionary computing.
10:15–10:30. Innovation – exploration and exploitation.
10:30–10:45. Platforms – distributed control and systems of systems.
10:45–10:55. Groups – how nature builds systems; the wisdom of crowds.
10:55–11:00. Summary/conclusions – remember this if nothing else.
46
Backups
47
The reductionist blind spot
• Darwin and Wallace’s theory of evolution by natural selection is
expressed in terms of
– entities
– their properties
– how suitable the properties of the entities are for the environment
– populations
– reproduction
– etc.
• These concepts are a level of abstraction.
– The theory of evolution is about entities at that level of abstraction.
• Let’s assume that it’s (theoretically) possible to trace how any state
of the world—including the biological organisms in it—came about
by tracking elementary particles.
• Even so, it is not possible to express the theory of evolution in terms
of elementary particles.
• Reducing everything to the level of physics, i.e., naïve reductionism,
results in a blind spot regarding higher level entities and the laws
that govern them.
48
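To underline that the theory lives at its own level of abstraction, here is a toy selection loop of my own (not from the course). It is stated purely in terms of entities, a heritable property, fitness in an environment, a population, and reproduction with variation; nothing in it says what the entities are made of.

```python
import random

def evolve(environment=0.8, generations=30, seed=3):
    """Toy natural selection over entities described only by one heritable property."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(50)]   # arbitrary starting traits

    def fitness(trait):
        # How suitable a property is for the environment (closer is better).
        return 1.0 - abs(trait - environment)

    for _ in range(generations):
        # Selection and reproduction: fitter entities leave more (slightly varied) offspring.
        parents = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
        population = [min(1.0, max(0.0, p + rng.gauss(0, 0.02)))
                      for p in parents for _ in (0, 1)]
    return sum(population) / len(population)

print(f"mean trait after selection: {evolve():.2f}  (environment optimum = 0.8)")
```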
I’m showing this slide to invite anyone who is interested to work on this with me.
How are levels of abstraction built?
• By adding persistent constraints to what exists.
– Constraints “break symmetry” by ruling out possible future states.
– Should be able to relate this to symmetry breaking more generally.
• Easy in software.
– Software constrains a computer to operate in a certain way.
– Software (or a pattern set on a Game of Life grid) “breaks the
symmetry” of possible sequences of future states.
• How does nature build levels of abstraction? Two ways.
– Energy wells produce static entities.
• Atoms, molecules, solar systems, …
– Activity patterns use imported energy to produce dynamic entities.
• The constraint is imposed by the processes that the dynamic entity employs to maintain its structure.
• Biological entities, social entities, hurricanes.
• A constrained system operates differently (has additional laws—the constraints) from one that isn’t constrained.
Isn’t this just common sense? Ice cubes act differently from water and water molecules.
49