A bit presumptuous?
Introduction to Complex Systems:
How to think like nature
Course overview: two hours
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
Besides, does
nature really think?
 1998-2007. The Aerospace Corporation. All Rights Reserved.
1
Introduction to Complex Systems:
How to think like nature
What complex systems means,
and why you should care.
2
What we will be talking about.
• The term complex systems refers to a broad range of disciplines and
ways of thinking. (The term complexity is also used this way.)
– It is not intended to refer to a particular category of systems, which are
presumably distinguished from other systems that aren’t complex.
• But if I had to define what a complex system is …
– A system of autonomous elements that interact both with each other and
with their environment and that exhibits aggregate, ensemble, macro
behaviors that none of the elements exhibit individually.
Isn’t that true of
all systems?
System: a construct or collection of
different elements that together
produce results not obtainable by the
elements alone. — Eberhardt Rechtin,
Systems Architecting of Organizations:
Why Eagles Can't Swim, CRC, 1999.
We are in the business of
producing complex systems.
3
See next
few slides
Why should you care?
• Because our corporate leadership, our customers, and their contractors
think it’s important. You should understand what they are talking about—so
that you can explain it to them.
– Rumsfeld’s inspiration for transformation in the military grew out of this way
of thinking. This field is not new; it is at least two decades old.
– The Command and Control Research Program (CCRP) in the Pentagon
(Dave Alberts) is successfully promoting this style of thinking within the DoD.
– Net-centricity—and the way the world has changed as a result of the web—
illustrates this way of thinking.
• Think of complex systems thinking as a generalization of and the foundation
for net-centric thinking.
• Because it gives you a powerful new way to think about how systems work.
• Because large systems—and especially systems of systems—tend to be
complex in the ways we will discuss.
– These are considered important by our leadership and our customers.
• Because the ideas are interesting, important, and good for you.
4
General Hamel and Dr. Austin think it’s important
• General M. Hamel (moderator), “Where
Commercial, Civil, and Military Space Intersect:
Cooperation, Conflict, and Execution of the
Mission,” Plenary Session, AIAA Space 2007.
• Dr. Wanda Austin, “Space System of Systems
Engineering,” USC Center for Systems and
Software Engineering Convocation, October 2006.
5
What is System of Systems Engineering?*
The process of planning, analyzing, organizing, and
integrating the capability of a mix of existing and new systems
into a system-of-system capability that is greater than the
sum of the capabilities of the constituent parts.
The process emphasizes the process of discovering,
developing, and implementing standards that promote
interoperability among systems developed via different
sponsorship, management, and primary acquisition processes.
* USAF SAB Report: System of Systems Engineering for Air Force
Capability Development, July 2005
6
What is a System of Systems?
Small stovepipes
to large stovepipes – NO
Loosely coupled and tightly
integrated – YES
7
Nature of Space System of Systems
(SOS) Engineering
• Multi-faceted Constraints
– Evolving set of interlocking issues and constraints
– No definitive statement of the problem; requirements
continually change
– The problem is typically understood only after a solution is
developed
– Many stakeholders care about how the problem is
resolved, making the problem-solving process
fundamentally a social problem
– Getting the “optimal” answer is less important than
obtaining the stakeholders’ acceptance of the emerging
solution
– Usually required to maintain connectivity to the legacy
capability
8
Increasing Complexity of
Space System-of-Systems
[Figure: a system-development timeline, 1980–2020, showing increasing capability
and complexity in NSS support. Stovepipe space systems of the 1980s (DMSP weather,
imagery, signals; DSCS; DSP; FLTSAT; GPS navigation; GPS nuclear detection; other
NSS systems), acquired via A-Spec flowdown and 5000-type acquisition, undergo
communications functional integration in the 1990s (Milsatcom UHF/EHF, crosslinking,
Global Broadcast; LEO consolidation, HASA, AIS, etc.), then the start of horizontal
space-system integration around 2000 (Space Control, SBR, EELV, commercial space),
moving from separate missions through partial integration toward full mission
space-system integration of space, air, weapons, and terrestrial elements by
2010–2020: a space enterprise in which the system of systems fully integrates
mission support. Acquisition shifts correspondingly to spiral acquisition —
significant changes in system acquisition.]
9
Planning Complex Endeavors (April 2007)
David S. Alberts and Richard E. Hayes
The Command and Control Research Program (CCRP)
has the mission of improving DoD’s understanding of
the national security implications of the Information Age.
• John G. Grimes, Assistant Secretary of Defense (NII) & Chief
Information Officer
• Dr. Linton Wells, II, Principal Deputy Assistant Secretary of
Defense (NII)
• Dr. David S. Alberts, Special Assistant to the ASD(NII) & Director
of Research
10
From the foreword by John G. Grimes
As this latest book from the CCRP explains, we can no longer
be content with building an “enterprise-wide” network that stops
at the edges of our forces, nor with a set of information sources
and channels that are purely military in nature. We need to be
able to work with a large and diverse set of entities and
information sources. We also need to develop new approaches
to planning that are better suited for these coalition operations.
The implications are significant for a CIO as it greatly expands
the who, the what, and the how of information sharing and
collaboration. It also requires a new way of thinking about
effectiveness, increasing the emphasis we place on agility,
which, as is explained in this book, is the necessary response
to uncertainty and complexity.
Alberts’ cover-all word
11
From Chapter 1. Introduction
• Information Age environments (whether military,
government, or business) are all characterized by
increasing complexity and uncertainty, as well as by
the need for more rapid responses. As a result,
individual entities and groups of entities with common
goals need to be more agile to be successful in the
Information Age.
• Today the economics of communications and
information technologies has created enormous
opportunities to leverage the power of information and
collaboration cost effectively by adopting Power to the
Edge principles and network-centric concepts.
12
From Chapter 1. Introduction
Complex endeavors: undertakings that have one or more of
the following characteristics:
Alberts’ term for what a
complex system does.
1. The number and diversity of the participants is such that
a. there are multiple interdependent “chains of command,”
b. the objective functions of the participants conflict with
one another or their components have significantly
different weights, or
c. the participants’ perceptions of the situation differ in
important ways; and
2. The effects space spans multiple domains and there is
a. a lack of understanding of networked cause and effect
relationships, and
b. an inability to predict effects that are likely to arise from
alternative courses of action.
13
From Chapter 2. Key Concepts
Complicated Systems
Systems that have many moving parts or actors and are highly
dynamic, that is, the elements of these systems constantly interact
with and impact upon one another. However, the cause and effect
relationships within a complicated situation are generally well
understood, which allows planners to predict the consequences of
specific actions with some confidence.
I think this misses the point to
some extent. Most systems and
interactions are (eventually) “well
understood.” Complicated
systems are often fully entrained
through one locus of control.
Think Rube Goldberg device.
Lots of gears and moving parts,
but they are all meshed—when it
works properly.
14
From Chapter 2. Key Concepts
I disagree
Complex Endeavors (Systems)
Complex endeavors involve changes and behaviors that cannot be
predicted in detail, although those behaviors and changes can be
expected to form recognizable patterns. Complex endeavors are also
characterized by circumstances in which relatively small differences
in initial conditions or relatively small perturbations (seemingly tactical
actions) are associated with very large changes in the resulting
patterns of behavior and/or strategic outcomes.
Note biological reference:
Some complex situations develop into complex adaptive systems
(CAS), which tend to be robust—to persist over time and across a
variety of circumstances. These are often observed in nature in the
form of biological or ecological systems. However, while these
systems are thought of as robust, they can be pushed out of balance
even to the point of collapse through cascades of negatively
reinforcing conditions and behaviors. Such perturbations are what
ecologists fear when a habitat is reduced to an isolated geographic
area or when invasive, nonnative species are introduced.
15
From Chapter 2. Key Concepts
Net-centric operations involves a number of interrelated
concepts that form an intellectual basis for … Information
Transformation of the DoD.
• It is about human and organizational behavior
• It is based on adopting a new way of thinking—network-centric thinking—and applying it to military operations
• It focuses on the power that can be generated from the
effective linking or networking of the enterprise.
This course is about
new ways of thinking.
Not just hardware (and software).
16
From Chapter 1. Introduction
Disruptive innovation or transformation is by definition
more than incremental improvement or sustaining
innovation. It requires venturing beyond comfort zones,
taking voyages of discovery.
Think of this course as one
of your voyages of discovery.
17
Complex systems course outline
Morning
8:00–9:00. Unintended consequences – mechanism, function, and
purpose; introduction to NetLogo.
9:00–10:30. Emergence – the reductionist blind spot and levels of
abstraction.
10:30–10:45. Break.
10:45–11:30. Modeling; thought externalization; how engineers and
computer scientists think.
Afternoon
12:30–1:30. Evolution and evolutionary computing.
1:30–2:15. Innovation – exploratory behavior; initiative and
integration; resource allocation.
2:15–2:30. Break.
2:30–3:15. Platforms – distributed control and systems of systems.
3:15–4:15. Groups – the wisdom of crowds.
4:15–4:30. Summary/conclusions – remember this if nothing else.
18
Complex systems course overview
9:00–9:10. Introduction and motivation.
9:10–9:25. Unintended consequences – mechanism, function, and
purpose; introduction to NetLogo.
9:25–9:45. Emergence – the reductionist blind spot and levels of
abstraction.
9:45–9:55. Modeling; thought externalization; how engineers and
computer scientists think.
9:55–10:10. Break.
10:10–10:20. Evolution and evolutionary computing.
10:20–10:35. Innovation – exploratory behavior; initiative and
integration; resource allocation.
10:35–10:45. Platforms – distributed control and systems of systems.
10:45–10:55. Groups – the wisdom of crowds.
10:55–11:00. Summary/conclusions – remember this if nothing else.
19
Introduction to Complex Systems:
How to think like nature
Unintended consequences;
mechanism, function, and purpose
This segment introduces
some basic concepts.
 1998-2007. The Aerospace Corporation. All Rights Reserved.
20
A fable
• Once upon a time, a state in India had too many snakes.
• To solve this problem, the government instituted an incentive-based program to encourage its citizens to kill snakes.
• It created the No Snake Left Alive program.
– Anyone who brings a dead snake into a field office of the
Dead Snake Control Authority (DSCA) will be paid a
generous Dead Snake Bounty (DSB).
• A year later the DSB budget was exhausted. DSCA had paid
for a significant number of dead snakes.
• But there was no noticeable reduction in the number of
snakes plaguing the good citizens of the state.
• What went wrong?
21
The DSCA mechanism
[Diagram of the bounty loop:]
1. Catch, kill, and submit a dead snake to the dead snake verifier.
2. Receive a dead-snake certificate.
3. Submit the certificate to DSCA.
4. Receive money.
What would you do if
this mechanism were
available in your world?
Start a snake farm.
22
Moral: unintended consequences
• The preceding is an example of what is sometimes called an
unintended consequence.
• It represents an entire category of (unintended and unexpected)
phenomena in which
– a mechanism is installed in an environment, but then
– the mechanism is used/exploited in unanticipated ways.
• Once a mechanism is installed in the environment, it will be
used for whatever purposes “users” can think to make of it …
– which may not be that for which it was originally intended.
Upcoming ideas: platforms, stigmergy.
That’s how
nature works.
The first lesson of complex systems thinking is
that one must always be aware of the relationship
between systems and their environments.
23
Parasites that control their hosts
• Dicrocoelium dendriticum causes host ants to
climb grass blades where they are eaten by
grazing animals, which is where D. dendriticum
lives out its adult life.
• Toxoplasma gondii causes mice not to fear cats,
which is where T. gondii reproduces.
• Spinochordodes tellinii causes host insects to
jump into the water and drown, where S. tellinii
grows to adulthood.
24
Locomotion in E. coli
• E. coli movements consist of short straight runs, each
lasting a second or less, punctuated by briefer episodes
of random tumbling.
• Each tumble reorients the cell and sets it off in a new
direction.
• Cells that are moving up the gradient of an attractant
tumble less frequently than cells wandering in a
homogeneous medium or moving away from the source.
• In consequence, cells take longer runs toward the source
and shorter ones away.
Upcoming idea: exploratory behavior.
Harold, Franklyn M. (2001) The Way of the Cell: Molecules,
Organisms, and the Order of Life, Oxford University Press.
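The run-and-tumble strategy can be sketched as a one-dimensional biased random walk. The tumble probabilities and the linear attractant profile below are illustrative assumptions, not E. coli's measured parameters:

```python
import random

def chemotaxis(steps=2000, source=100.0, seed=0):
    """1-D run-and-tumble sketch: straight runs punctuated by random
    tumbles, tumbling less often while attractant concentration rises."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    prev = -abs(x - source)            # attractant peaks at the source
    for _ in range(steps):
        x += direction                 # one segment of a straight run
        conc = -abs(x - source)
        # Moving up the gradient: tumble rarely. Otherwise: tumble often.
        if rng.random() < (0.1 if conc > prev else 0.5):
            direction = rng.choice([-1, 1])   # a tumble picks a random heading
        prev = conc
    return x
```

No cell ever senses the direction of the source; it only compares concentration now with concentration a moment ago, yet the walk drifts reliably toward the source.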
25
Mechanism, function, and purpose*
• Mechanism: The physical processes
within an entity.
– The chemical reactions built into E.coli that result in
its flagella movements.
– The DSCA mechanism.
• Function: The effect of a mechanism on
the environment and on the relationship
between an entity and its environment.
– E. coli moves about. In particular, it moves up nutrient
gradients.
– Snakes are killed and delivered; money is exchanged.
• Purpose: The (presumably positive)
consequence for the entity of the change in
its environment or its relationship with its
environment.
– E. coli is better able to feed, which is necessary for its
survival.
– Snake farming is encouraged?
*Compare to Measures of Performance, Effectiveness, and Utility
26
NetLogo: let’s try it
File > Models Library > Biology > Ants
Click Open
27
Simple ant foraging model
Ant rules
• If you are not carrying food:
– move up the chemical-scent gradient, if any;
– pick up food, if any;
– otherwise move randomly.
• If you are carrying food, move up the nest-scent gradient. When you
reach the nest, deposit the food.
(Scent following is implemented chemically in real ants, by software in NetLogo.)
Sliders
• population: number of ants
• diffusion-rate: rate at which the chemical (pheromone) spreads
• evaporation-rate: rate at which the chemical evaporates
(A switch turns plotting on/off.)
Exercises
• In the “to look-for-food” procedure, change “orange” to “blue”.
• After running once, play around with the population, diffusion-rate, and
evaporation-rate.
• Can you get this picture, with paths to all food sources simultaneously?
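The ant rules can be caricatured in one dimension. This is a hypothetical simplification, not the NetLogo model's code: dropping more pheromone nearer the food stands in for the model's diffusion/evaporation dynamics. The first trip wanders randomly; once a trail exists, later trips run straight to the food.

```python
import random

def forage(food_at=10, trips=3, evaporation=0.1, seed=1):
    """1-D sketch of the ant rules. Returns (outbound steps per trip,
    pheromone levels): steps shrink once a pheromone trail exists."""
    rng = random.Random(seed)
    chem = [0.0] * (food_at + 1)          # pheromone level at each cell
    outbound_steps = []
    for _ in range(trips):
        pos, steps = 0, 0
        while pos < food_at:              # not carrying food:
            left = chem[pos - 1] if pos > 0 else 0.0
            if chem[pos + 1] > left:      # move up the chemical gradient ...
                pos += 1
            else:                         # ... otherwise move randomly
                pos = max(0, min(food_at, pos + rng.choice([-1, 1])))
            steps += 1
        outbound_steps.append(steps)
        while pos > 0:                    # carrying food: follow nest scent home,
            chem[pos] += pos / food_at    # dropping pheromone (stronger near food:
            pos -= 1                      # a stand-in for diffusion dynamics)
        chem = [c * (1 - evaporation) for c in chem]  # evaporation
    return outbound_steps, chem
```

No single rule mentions a trail; the trail and the short later trips emerge from deposit-and-follow behavior, as in the two-dimensional NetLogo model.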
28
Two levels of emergence
• No individual chemical reaction inside the ants is responsible for
making them follow the rules that describe their behavior.
• That the internal chemical reactions together do so is an
example of emergence.
• No individual rule and no individual ant is
responsible for the ant colony gathering food.
• That the ants together bring about that
result is a second level of emergence.
[Figure: two layered stacks side by side. The ant stack: Colony results /
Ant behaviors / Ant chemistry. The Internet stack: Applications (e.g., email,
IM, Wikipedia) / WWW (HTML) for browsers + servers / Presentation / Session /
Transport / Network / Physical.]
Notice the similarity to layered communication protocols.
As we’ll see later, each layer is a level of abstraction.
29
Complex systems terms
• Emergence. A level of abstraction that can be
described independently of its implementation.
– Examples include the movement of E. coli and ants through space
toward a food source, which can be described independently of how
it is brought about.
• Multi-scalar. Applicable to systems that are
understood on multiple levels simultaneously,
especially when a lower level implements the
emergence of some functionality at a higher level.
– E. coli motion and ant foraging are both examples of multi-scalar
systems.
30
Introduction to Complex Systems:
How to think like nature
Emergence: what’s right and what’s
wrong with reductionism
Presumptuous again?
 1998-2007. The Aerospace Corporation. All Rights Reserved.
31
Emergence: the holy grail of complex systems
How macroscopic behavior arises from microscopic behavior.
Emergent entities (properties
or substances) ‘arise’ out of
more fundamental entities and
yet are ‘novel’ or ‘irreducible’
with respect to them.
Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/properties-emergent/
The ‘scare’ quotes identify
problematic areas.
32
Cosma Shalizi
http://cscs.umich.edu/~crshalizi/reviews/holland-on-emergence/
“Someplace … where quantum field theory meets general relativity
and atoms and void merge into one another, we may take ‘the rules
of the game’ to be given. Call this emergence if you like. It’s a
fine-sounding word, and brings to mind southwestern creation myths
in an oddly apt way.”
But the rest of the observable, exploitable order in the universe:
benzene molecules, PV = nRT, snowflakes, cyclonic storms, kittens,
cats, young love, middle-aged remorse, financial euphoria accompanied
with acute gullibility, prevaricating candidates for public office,
tapeworms, jet-lag, and unfolding cherry blossoms.
Where do all these regularities come from?
33
Erwin Schrödinger
“[L]iving matter, while not eluding the ‘laws of
physics’ … is likely to involve ‘other laws,’ [which]
will form just as integral a part of [its] science.”
Erwin Schrödinger, What is Life?, 1944.
Jerry Fodor: “Why is there anything except physics?”
Steven Weinberg: the ultimate reductionist.
Philip Anderson:
“The ability to reduce everything to simple
fundamental laws [does not imply] the ability
to start from those laws and reconstruct the
universe. … [We] must all start with
reductionism, which I fully accept.”
“More is Different” (Science, 1972)
34
The fundamental dilemma of science
The functionalist claim: are there autonomous
higher-level laws of nature?
The reductionist position: how can that be if everything
can be reduced to the fundamental laws of physics?
My answer: it can all be understood
as levels of abstraction.
35
The Game of Life
File > Models Library > Computer Science > Cellular Automata > Life
Click Open
36
Gliders
• Gliders are causally powerless.
– A glider does not change how the rules operate or which cells will be
switched on and off. A glider doesn’t “go to a cell and turn it on.”
– A Game of Life run will proceed in exactly the same way whether one
notices the gliders or not. (A very reductionist stance.)
– Cells don’t “notice” gliders — any more than gliders “notice” cells.
• But …
– One can write down equations that characterize glider motion and
predict whether—and if so when—a glider will “turn on” a particular cell.
– What is the status of those equations? Are they higher level laws?
Like shadows, they don’t “do” anything.
The rules are the only “forces!”
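The point is easy to check in code. This is a minimal, standard set-based Life implementation (not from the deck): the only "forces" are the birth/survival rules, yet a glider-velocity law — one diagonal cell every four generations — holds exactly.

```python
from collections import Counter

def life_step(alive):
    """One Game of Life generation; `alive` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def run(pattern, generations):
    for _ in range(generations):
        pattern = life_step(pattern)
    return pattern
```

After four generations the glider reappears shifted by (1, 1). The code never mentions gliders; the "law" is a regularity we read off at a higher level of abstraction.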
37
Game of Life Programming Platform
• Amazing as they are, gliders are also trivial.
– Once we know how to produce a glider, it’s
simple to make them.
• One can build a library of Game of Life patterns and
their interaction APIs.
By suitably arranging these patterns,
one can simulate a Turing Machine.
Paul Rendell. http://rendell.server.org.uk/gol/tmdetails.htm
A second level of emergence.
Emergence is not particularly mysterious.
38
Downward causation entailment
• The unsolvability of the TM halting problem entails the
unsolvability of the GoL halting problem.
– How strange! We can conclude something about the GoL because
we know something about Turing Machines.
• Earlier, we dismissed the notion that a glider may be said to “go to
a cell and turn it on.” Because of downward entailment, there is
hope for talk like this.
– One can write glider “velocity” laws and then use those laws to
draw conclusions (make predictions) about which cells will be
turned on and when that will happen.
• GoL gliders and Turing Machines are causally reducible yet
ontologically real.
– They obey higher level laws, not derivable from the GoL rules.
39
Level of abstraction
A collection of concepts and relationships that can
be described independently of its implementation.
Every computer application creates one.
• A level of abstraction is causally reducible to its
implementation.
• Its independent specification—its way of being in
the world—makes it ontologically independent.
Examples
• The collection of Game of Life patterns.
– One can catalog the patterns and their interactions
without ever talking about Game of Life rules.
• A Game of Life Turing Machine. Turing described it
independently of any implementation.
40
The reductionist blind spot
• Darwin and Wallace’s theory of evolution by natural selection is
expressed in terms of
– entities
– their properties
– how suitable the properties of the entities are for the environment
– populations
– reproduction
– etc.
• These concepts are a level of abstraction.
– The theory of evolution is about entities at that level of abstraction.
• Let’s assume that it’s (theoretically) possible to trace how any state
of the world—including the biological organisms in it—came about
by tracking elementary particles.
• Even so, it is not possible to express the theory of evolution in terms
of elementary particles.
• Reducing everything to the level of physics, i.e., naïve reductionism,
results in a blind spot regarding higher level entities and the laws
that govern them.
41
I’m showing this slide to invite anyone who is interested to work on this with me.
How are levels of abstraction built?
• By adding persistent constraints to what exists.
– Constraints “break symmetry” by ruling out possible future states.
– Should be able to relate this to symmetry breaking more generally.
• Easy in software.
– Software constrains a computer to operate in a certain way.
– Software (or a pattern set on a Game of Life grid) “breaks the
symmetry” of possible sequences of future states.
• How does nature build levels of abstraction? Two ways.
– Energy wells produce static entities:
atoms, molecules, solar systems, … .
– Activity patterns use imported energy to produce dynamic entities:
biological entities, social entities, hurricanes.
• The constraint is imposed by the processes that the dynamic
entity employs to maintain its structure.
(Isn’t this just common sense? Ice cubes act differently from
water and water molecules.)
• A constrained system operates differently (has additional laws—
the constraints) from one that isn’t constrained.
42
Practical corollary: feasibility ranges
• Levels of abstraction are
implemented only within
feasibility ranges.
• When the feasibility range is
exceeded, a phase transition
generally occurs.
Require contractors to identify the feasibility range
within which the implementation will succeed and
describe the steps taken to ensure that those
feasibility ranges are honored—and what happens
if they are not. (Think O-rings.)
43
Introduction to Complex Systems:
How to think like nature
Modeling, the externalization of
thought, and how engineers and
computer scientists think
Russ Abbott
Sr. Engr. Spec.
310-336-1398
Russ.Abbott@Aero.org
 1998-2007. The Aerospace Corporation. All Rights Reserved.
44
Modeling problems:
the difficulty of looking downward
Models of computer security or
terrorism will always be incomplete.
Can only model
unimaginative
enemies.
• Strict reductionism implies that it is impossible to
find a non-arbitrary base level for models.
– What are we leaving out that might matter?
• Use Morse code to transmit messages on encrypted lines.
• No good models of biological arms races.
– Combatants exploit and/or disrupt or otherwise foil each
other’s epiphenomena.
• Insects vs. plants: bark, bark boring, toxin, anti-toxin, … .
• Geckos use the (epiphenomenal) Van der Waals “force” to climb.
Nature is not segmented into
a strictly layered hierarchy.
45
Modeling problems:
the difficulty of looking upward
• Don’t know how to build models that can notice emergent
phenomena and characterize their interactions. We don’t
know what we aren’t noticing.
– We/they can use our commercial airline system to deliver
mail/bombs. (Exploit an existing process.)
• Model gravity as an agent-based system.
– Ask system to find equation of earth’s orbit.
– Once told what to look for, system can find ellipse. (GP)
– But it won’t notice the yearly cycle of the seasons — even
though it is similarly emergent.
46
Intellectual leverage in Computer Science:
executable externalized thought
• Computer languages enable executable externalized thought—
different from all other forms of externalized thought
throughout history!
– There is nothing comparable in engineering—or any other field.
– All other forms of externalized thought require a human being
to interpret them.
• The bit provides a floor that is both symbolic and real.
– Bits are: symbolic, physically real, and atomic.
– Bits don’t have error bars.
– Can build (ontologically real) levels of abstraction above them.
• But the bit limits realistic modeling.
– E.g., no good models of evolutionary arms races and many
other multi-scale (biological) phenomena. No justifiable floor.
– Challenge: build a computer modeling framework that
supports dynamically varying floors.
47
Intellectual leverage in Engineering:
mathematical modeling
• Engineering gains intellectual leverage through
mathematical modeling and functional decomposition.
– Models approximate an underlying reality (physics).
– Models are judged by the width of their error bars.
– Models don’t create ontologically independent entities.
• Engineering is both cursed and blessed by its
attachment to physicality.
– There is no reliable floor.
• “Engineering systems often fail … because of
[unanticipated interactions among well designed
components, e.g. acoustic coupling] that could not be
identified in isolation from the operation of the full
systems.” National Academy of Engineering, Design in the New Millennium, 2000.
– But if a problem appears, engineers (like scientists) can
dig down to a lower level to solve it.
48
Engineers and computer scientists are different —
almost as different as Venus and Mars
• Engineers are grounded in physics.
– Ultimately there is nothing besides physics.
– Even though engineers build things that have very different
(emergent) properties from their components, engineers tend to
think at the level of physics.
– When designing systems, engineers start with an idea and
build it down to the physics—using functional decomposition.
• Engineering is (proudly) applied physics.
• Computer scientists live in a world of abstractions.
– Physics has very little to do with computer science worlds.
– For computer scientists, there is more than physics, i.e.,
emergence—but they may have had a hard time saying what it is.
– When designing systems, computer scientists start with the bit
and build it up to the idea—using levels of abstraction.
• Computer science is (cautiously) applied philosophy.
49