Complexity Road Map

Sorin’s Draft for Expert’s Complexity Road-Map
CONTENTS
A first look at complexity
From Reductionism to the Multi-Agent Complexity Paradigm
Complex Multi-Agent-Models
Beyond the Soft vs. Hard Science Dichotomy
Multi-Agent Complexity is intrinsically Interdisciplinary
Popular books
Complexity Mechanisms and Methods
LINK to “More Concepts” (by Gerard Weisbuch)
http://shum.huji.ac.il/~sorin/report/Gerard1.doc
Multiscale Structure of the Complex Multi-Agent Method
Irreducible Complexity
New Causal Schemes (parallel, asynchronous dynamics, Markov webs)
New Ontological Schemes (Network vs tree ontology, dynamical ontology, postext)
New Experimental platforms (distributed, electronic experiments, NAtlab, avatars)
Logistic Systems
Power laws and dynamics of their emergence
Multi-Agent Modeling of Reaction-Diffusion Systems
Autocatalytic Growth
The Birth of Macroscopic Objects from Microscopic “Noise”
The A(utocatalysis)-Bomb
The Logistic Bomb: Malthus-Verhulst-Lotka-Volterra-Montroll-May-Eigen-Schuster systems
Autocatalysis and localization in immunology; B-Bomb
Multi-Agent Simulation of the Emergence of Immune Function
Autocatalysis in a social system; The Wheat Bomb
Microscopic Simulation of Marketing; The Tulip Bomb
Stochastic logistic systems yield scaling and intermittent fluctuations
Why Improbable Things are so Frequent?
Complexity in Various Domains
Short First Tour of complexity examples
Physics
LINK to “Complex BIOLOGY” (by Gerard Weisbuch and…)
http://shum.huji.ac.il/~sorin/report/Biology.doc
The Complexes of the Immune Self
LINK to “COMPLEX IMMUNOLOGY” (by Yoram Louzoun)
http://shum.huji.ac.il/~sorin/report/Complex%20immunology.doc
LINK to “SOCIAL SCIENCE” (by Gerard Weisbuch and JP Nadal)
http://shum.huji.ac.il/~sorin/report/Social%20Science.doc
LINK to “COGNITIVE SCIENCE” (by Gerard Weisbuch and JP Nadal)
http://shum.huji.ac.il/~sorin/report/Cognitive%20Science.doc
LINK to “Social Psychology” (by Andrzej Nowak)
http://shum.huji.ac.il/~sorin/report/Nowak-Soc-Psych.doc
LINK to “Minority GAME” (by YC Zhang)
http://shum.huji.ac.il/~sorin/report/Zhang-Minority.rtf
Economics
LINK to “ECONOPHYSICS” (by Dietrich Stauffer)
http://shum.huji.ac.il/~sorin/report/Econophysics.doc
Spatially distributed social and economic simulations
LINK to “MANAGEMENT quotations on complexity”
http://shum.huji.ac.il/~sorin/report/Management.doc
LINK to “COMPLEXITY OF RISK” (by Peter Richmond)
http://shum.huji.ac.il/~sorin/report/Complexity%20of%20Risk.doc
LINK to “INFORMATION Complexity” (Contribution by Sorin Solomon and Eran Shir)
http://shum.huji.ac.il/~sorin/report/Information.doc
The Social life of computers
LINK to “DESIGN to EMERGE” (by Eran Shir and Sorin Solomon)
http://shum.huji.ac.il/~sorin/report/Designed%20to%20Emerge.doc
LINK to “Making the Net Work” (by Eran Shir and Sorin Solomon)
http://shum.huji.ac.il/~sorin/report/Making%20the%20Net%20Work.doc
LINK to “The Introspective Internet” (Contribution by Sorin Solomon and Scott Kirkpatrick)
http://shum.huji.ac.il/~sorin/ccs/Dimes2003-AISB.pdf
Networks
LINK to “NETWORKS Dynamics and Topology” (by Gerard Weisbuch and…)
http://shum.huji.ac.il/~sorin/report/Networks%20Dynamics%20and%20Topology.doc
Network manipulation and Novelty Creation
Some Directions for the Future
Methodological Issues
Conclusions and Recommendations
Organizational Recommendations
Preamble
Complexity is not “a” science. It is the shape which sciences assume after self-organizing along their natural internal connections that cut across the traditional
disciplinary boundaries. Its objects of study are adaptive collective entities emerging
in one discipline (at a coarse space-time scale) as the result of simpler interactions
of elementary components belonging to another discipline (at a finer scale).
Because Complexity does not fall within the pigeonhole of one single discipline, one
is often tempted to define it as yet another pigeonhole. This is both too much and
too little: it is too much because complexity does not have, as usual disciplines do,
a well defined area of responsibility. It is too little because, in its maximalistic form,
complexity claims or at least strives to address a very wide and increasing range of
phenomena in all sciences.
In fact, in each science there are basic laws and objects that form the elementary
concepts of that science. Usually, to obtain a complexity problem it is enough to ask
about the mechanisms and (finer scale) objects generating those basic laws and
objects. The reason for it is that if the emergence of the basic objects of a science
were easy to explain, that science wouldn’t have been constituted as an
independent science to start with. Indeed, it would rather be treated by the scientists
just as a (relatively trivial) branch of the science that explains its fundamental objects
and laws. The sciences owe their very existence equally to two contradictory facts:
- the relative ease of identifying and characterizing their basic elements and
interactions
- the complexity of explaining them.
It is this tension that led to the “decision” of founding independent sciences:
sciences that deal with the well defined basic objects and their postulated
properties while renouncing the explanation of those objects in terms of a finer level.
Of course this introduces the necessity of postulating those basic properties, which
in the end might turn out to be true only to various degrees (like the “indivisibility” of
atoms, the Darwinian characterization of evolution, or the identity of social groups
as such).
The singular times at which those postulates become accessible for study,
challenge, validation / falsification are the periods of scientific upheaval and of great
developments: the discoveries on the structure of atoms, the discovery of the
chemical basis of genetics (from the double helix on), and hopefully – one day - the
discovery of the laws explaining the emergence of social behavior or of intelligence.
It is at those times that considering complex phenomena becomes unavoidable
and the pressure is strong enough to re-evaluate and overthrow the initial decision
(which underlay the very establishment of the subject as an independent
discipline) of taking the basic entities and their properties for granted. Overthrowing
the very basics of a science is of course likely to encounter a lot of fear and hostility
among the rank and file and among the leaders of the “threatened” field. This is
reason enough for some of the scientists in the target discipline to become hostile to
the complexity “attack”.
Much of the present Complexity work may be thought as an application (with
appropriate adjustments) of the table proposed 30 years ago by Anderson [2] in the
paper where he defined Complexity as “More Is Different”. In this table, the
“simpler” science appears in the second column and the “more complex one” (whose
basic rules are to be deduced from the “simpler one”) - in the first:
More complex (composite)        Simpler
Atomic physics                  elementary particles
Chemistry                       Atomic physics
Molecular Biology               Chemistry
Cell Biology                    Molecular Biology
…                               …
Psychology                      Physiology
Social Sciences                 Psychology
But complexity is not offering just a way of answering questions from one science
using concepts from another. By suggesting similar concepts and methods in
connecting the “simpler” to the “more complex” science in each pair (each row) in
the table, it is promoting a new (agent / network - based) “universal”, unifying
scientific language. This new language allows the formulation of novel questions,
not conceivable until now. This implies a new research grammar which
allows novel interrogative forms. As such, it requires the growth of a new generation
of scientists mastering this interdisciplinary universal grammar. Thus complexity is
not a juxtaposition of various expertises. It is rather an intimate fusion of knowledge,
a coordinated shift in the very objectives, scope and ethos of the affected disciplines.
Consequently, it encounters fierce resistance from distinguished scientists that see
themselves as bona-fide defenders of the old identity of their disciplines. To avoid conflict,
complexity should be given its own space and support rather than expecting that
complexity projects will be supported by the departments whose very identity they
are challenging (or sending complexity projects to beg or steal resources from them).
Complexity induces a new relation between theoretical / “academic” science and
“applied” science. Rather than applying new hardware devices (results of
experimental research) to physical reality, complexity applies new theoretical
concepts (self-organization, self-adaptation, emergence) to real but not necessarily
material objects: social and economic change, individual and collective creativity,
the information flow that underlies life etc. Thus, many of the applications of Complexity
are of a new brand: "Theoretical Applied Science" and should be recognized as such
in order to evaluate their full expected practical impact.
A first look at complexity
From Reductionism to the Multi-Agent Complexity Paradigm
The reductionist dream (and its extreme form - physicalism) accompanied science
from its very birth. The hope to reduce all reality to simple microscopic fundamental
laws had its great moments in the physical sciences (Newton, Maxwell, Planck,
Einstein, Gell-Mann) but it suffered bitter discrediting early defeats in explaining life
(Descartes), consciousness (La Mettrie), thought (Russell) and socio-economic
phenomena (Marx).
In the last decades, the reductionist ambitions were rekindled by a series of new
methodological and technological tools. Some of the new developments
highlighted the role of the Multi-Agent Complexity Paradigm in describing and
explaining macroscopic complex phenomena in terms of a recurrent hierarchy of
scales.
In a Multi-Agent Hierarchy, the properties at each scale level are explained in terms
of the underlying finer level immediately below it.
The main operations involved are:
- the identification of the relevant objects (agents) appropriate for describing the
dynamics at each scale;
- expressing iteratively the emergence of the coarser scale objects as the
collective features of a finer scale.
The identification of the hierarchy of scales involves sophisticated theoretical
techniques (from mathematics and physics) as well as computer simulation,
visualization and interactive animation tools bordering artificial reality.
In this interactive research process, the models, analytical techniques, computer
programs and visualization methods often become the very expression and
communication vehicle of the scientific understanding which they helped to uncover.
In exchange for this cognitive and language shift from the classical reductionist
position, the multi-agent multi-level complexity approach has yielded important new
results in systems spreading over a wide range of subjects: phases of physical
systems, fractality in reaction-diffusion chemical systems, Boolean “hard problems”,
proteomics, psychophysics, peer-to-peer organizations, vision, image processing
and understanding, inventive thinking, cognition, economics, social planning and
even esthetics and story structure.
The Multi-Agent Complexity Paradigm has lately developed into a framework for defining,
identifying and studying the salient collective macroscopic features of complex
systems in a most general context. As such it constitutes a strong conceptual and
methodological unifying factor over a very wide range of scientific, technological
(and other) domains.
Complex Multi-Agent-Models
One of the main complexity ideas is the basic difference between methods of
explanation based on a (deterministic) global parameterization of the
macroscopic dynamics and the (stochastic) "multi-agent-modeling" (MAM) method.
In many physical, economic, etc. situations, it is most natural to view the
system under study as a collection of elements (the agents), which interact among
themselves in relatively simple ways, and (co-)evolve in time. We will occasionally call
“microscopic” such a representation of the “MACRO” objects in terms
of their many “MICRO” components (agents), which interact through elementary
interactions (INTER).
For example:
- Animal populations (MACRO) are composed of individuals (MICRO) which are
constantly born (and die), and compete (INTER) with other members of their species
(for food, mates etc).
- The stock market (MACRO), is composed of many investors, each having a certain
personal wealth (MICRO). The investors buy and sell stocks (INTER), using various
strategies designed to maximize their wealth.
We list below the multi-agent MICRO-INTER-MACRO scheme of a few
systems. Each scheme lists the “microscopic” agents (MICRO), their elementary
interactions (INTER) and the macroscopic emerging features (MACRO):
Microscopic Drivers and Macroscopic Jams
MICRO - cars
INTER - go ahead/give way at intersections.
MACRO - traffic flow, jamming.

Dramas - mathematical categories endowed with dynamics
MICRO - categories
INTER - relations, composition laws
MACRO - (stories) dramas

Microscopic Concepts and Macroscopic Ideas
MICRO - elementary concepts
INTER - archetypical structures and thought procedures
MACRO - creative ideas and archetypes

Microscopic Seers and Macroscopic Sight
MICRO - line elements, points in 2 dimensions.
INTER - time and space data integration.
MACRO - 3-dimensional global motion.

Microscopic Draws and Macroscopic Drawings
MICRO - line curvature, speed, discrete mental events.
INTER - continuity, kinematics, breaks, (mind) changes.
MACRO - shapes, representational meaning.

Microscopic Wealth and Macroscopic Power Laws
MICRO - investors, shares
INTER - sell/buy orders
MACRO - market price (cycles, crashes, booms, stabilization by noise)
The following research routine is common for most of these studies (a minimal
illustration in code is sketched after the list):
- modeling the system as a composite of many Micros;
- simulation and visualization of the resulting model;
- identification and numerical study of the slowly evolving modes within the
microscopic system;
- identification of the Macros;
- predictions based on the Macros behavior of the model;
- comparison with the experimental macroscopic behavior;
- validating or correcting the microscopic model in view of the comparison;
- starting new experiments based on these results.
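As a minimal illustration of the first steps of this routine (a toy sketch with arbitrary parameters, not a model taken from this report), the following code enacts the cars/jamming example above with the simplest local rule: cars on a circular road advance one cell per time step if the cell ahead is free. The macroscopic flow emerging from this micro-rule grows with the car density up to about one half and then decreases as jams form.

import random

def simulate_traffic(n_cells=200, density=0.6, steps=400, seed=0):
    """Minimal MICRO-INTER-MACRO sketch: cars (MICRO) move ahead if the
    next cell is free (INTER); the average flow (MACRO) is measured."""
    rng = random.Random(seed)
    road = [False] * n_cells
    for i in rng.sample(range(n_cells), int(density * n_cells)):
        road[i] = True                      # place cars at random positions

    moves = 0
    for _ in range(steps):
        new_road = list(road)
        for i in range(n_cells):
            j = (i + 1) % n_cells           # cell ahead on the ring road
            if road[i] and not road[j]:     # INTER: advance only into a free cell
                new_road[i], new_road[j] = False, True
                moves += 1
        road = new_road
    return moves / (steps * n_cells)        # MACRO: average flow per cell per step

if __name__ == "__main__":
    for rho in (0.2, 0.4, 0.6, 0.8):
        print("density %.1f -> flow %.3f" % (rho, simulate_traffic(density=rho)))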
The use of this dialogue with an artificial system created in the computer in order to
understand its large scale properties extends to systems away from equilibrium and
to complex systems which are not characterized by an energy functional or by an
asymptotic probability distribution.
Complexity proposes among other things to use this understanding in the task of
formulating and studying Multi-Agent models for basic problems in a wide range of
fields.
Beyond the Soft vs. Hard Science Dichotomy
Until recently, science was structured in "hard" vs. "soft" disciplines.
With a slight oversimplification, "hard" sciences were fields where the problems were
formulated and solved within a mathematical language based mainly on (partial
differential) equations.
The archetypal "hard" science was physics and for any discipline to progress
quantitatively it was supposed to emulate its example.
In fact chemistry became "harder" as the use of the Schroedinger equations to
describe chemical processes became possible.
Biology, recognizing the impossibility of reducing its subject matter to equations, developed
a defensive attitude of mistrust against theoretical explanations, while
Economics often chose the opposite road: to sacrifice detailed quantitative
confrontation with empirical data for the sake of closed tractable models.
The study of innovation, creativity and novelty emergence had chosen to ignore the
"Newtonian paradigm" altogether taking sometimes even pride in the ineffable,
irreducible, holistic character of its subject.
This was natural, as the mechanisms allowing the understanding of phenomena in
those fields are most probably not expressible in the language of equations.
Fortunately it turns out that equations are not the only way to express understanding
quantitatively. In fact, with the advent of computers one can express precisely and
quantitatively almost any form of knowledge.
E.g. adaptive agents can "learn" certain tasks just by being given lists of "correct" and
"incorrect" examples, without any explicit expression of the actual criteria of
"correctness" (sometimes unknown even to the human "teacher"), as in the sketch below.
This relaxation of the allowed scientific language and the focus on the emergence of
collective objects with spatio-temporal (geometrical) characteristics renders the
scientific discourse more congenial to the "daily" cognitive experience and to
practical applications.
One can now formulate in a precise "hard" way any "naive" explanation by
simulating (enacting) in the computer the postulated elements and interactions. One
can then verify via "numerical experiments" whether these "postulates" lead indeed
to effects similar to the ones observed in nature. The computer can deal
simultaneously with a macroscopic number of agents and elementary interactions
and therefore can bridge between "microscopic" elementary causes and
"macroscopic" collective effects even when they are separated by many orders of
magnitude.
The following steps are a first sketch of the main ideas (a toy numerical illustration
of the coarse-graining step is given after the list):

- Microscopic Representation
Represent the (continuous) system in terms of (many) "microscopic" agents
("elementary degrees of freedom") which interact by simple rules.

- Collective Macros
Identify sets of (strongly coupled) elementary degrees of freedom which act during
the evolution of the system mostly as a single collective object (macro).

- Effective Macro Dynamics
Deduce the emergence of laws governing effectively the evolution of the system at
the Macros scale as a coarse (average) expression of the simple rules acting at the
elementary level.

- Emergence of Scale Hierarchies
Apply the last two steps iteratively to the Macros systems at various scales. This
leads to a hierarchy of Macros of Macros (etc.) in which the emerging laws
governing one level are the effective (coarse) expression of the laws at the finer
level immediately below.

- Universality
The general macroscopic properties of the coarsest scale depend on the
fundamental microscopic scale only through the intermediary of all the levels in
between (especially the one just finer than the coarsest). This usually means that
relevant macroscopic properties are common to many microscopic systems. Sets of
microscopic systems leading to the same macroscopic dynamics are called
universality classes.

- Irreducible complex kernels
One can show that there exist classes of (sub-)systems for which one cannot
re-express the fundamental microscopic dynamics in terms of effective interactions of
appropriate macros. We call them Irreducibly complex (ref). Such (sub-)systems are
in general non-universal and their properties are in many respects unique. All one can
do is to become familiar with their properties but not to explain (understand) them as
generic / plausible consequences of the functioning of their parts. If in the process of
analyzing a system in terms of a macro hierarchy one meets such irreducible
kernels, the best thing is to treat them as single objects and construct explanations
taking them and their interactions as the starting point (i.e. as a given input).
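The following toy sketch (illustrative only; the block size and the majority rule are arbitrary choices, not a method prescribed in this report) enacts the iterative coarse-graining idea on a one-dimensional chain of binary agents: blocks of three micro-variables are replaced by a single majority-rule Macro and the operation is repeated, producing a hierarchy of ever coarser descriptions.

import random

def coarsen(states, block=3):
    """Replace each block of micro-variables by one majority-rule Macro."""
    macros = []
    for i in range(0, len(states) - len(states) % block, block):
        blk = states[i:i + block]
        macros.append(1 if sum(blk) > block // 2 else 0)
    return macros

random.seed(1)
# MICRO level: a chain of binary agents with a bias towards state 1
level = [1 if random.random() < 0.7 else 0 for _ in range(81)]

# Apply the same coarse-graining step iteratively: Macros of Macros of ...
while len(level) >= 3:
    print("%3d variables: %s" % (len(level), "".join(map(str, level))))
    level = coarsen(level)
print("%3d variable : %s" % (len(level), "".join(map(str, level))))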
The main message of the Multi-Agent Complexity approach is that domains, fields
and subjects which until now seemed to allow a continuous infinity of possible
variations of their behaviors, may be treated in terms of a limited number of discrete
objects (macros) subjected to a limited, discrete number of effective rules and
capable of following a rather limited number of alternative scenarios.
This greatly limits the options for the effective macroscopic dynamics and the effort
needed to analyze, predict and handle it.
In turn, the Macros and their effective rules can be understood (if one wishes a
finer level of knowledge) in terms of a limited set of interacting components.
The criticism of the above scheme can range from "trivial" to "far-fetched", but it
has turned out to give non-trivial valid results in a surprisingly wide range of problems in
theoretical physics (ref), chemistry (ref), computational physics (ref), image
processing (ref), psychophysics (ref), spin glasses (ref), ultrametric systems (ref),
economy (ref), psychology (ref) and creative thinking (ref).
Multi-Agent Complexity is intrinsically Interdisciplinary
Long after the discovery of atoms and molecules it was still customary in science to
think about a collection of many similar objects in terms of some “representative
individual” endowed with the sum, or average of their individual properties. It was as if,
in spite of the discovery of the discrete structure and its capability to induce dramatic
phase transitions [1], many scientists felt that the potential for research and results
within the continuum / linear framework had not been exhausted and insisted on going on
with its study. For another hundred years, the established sciences were able to
progress within this conceptual framework.
In fact, one may argue that this “mean field” / continuum / linear way of thinking is what
conserved the classical sciences as independent sub-cultures. Indeed, it is exactly
when these assumptions do not hold that the great conceptual jumps separating
the various sciences arise. When “More Is Different” [2], life emerges from
chemistry, chemistry from physics, consciousness from life, social consciousness / organization
from individual consciousness etc.
Similarly, the emergence of complexity takes place in human created artifacts:
collections of simple instructions turn into a complex distributed software environment,
collections of hardware elements turn into a world wide network, collections of switches
/ traffic lights turn into communication / traffic systems etc.
This study of the emergence of new collective properties qualitatively different from the
properties of the “elementary” components of the system breaks the traditional
boundaries between sciences [3]: the “elementary” objects belong to one science - say
chemistry - while the collective emergent objects to another one - say biology.
For lack of other terms (and in spite of many objections that can be advanced) we will
call below the science to which the “elementary” objects belong - the “simpler” science
while the science to which the emergent collective objects belong will be called the
“more complex” science. As for the methods, they fall “in between”: in the
“interdisciplinary space”. The ambitious challenge of the Complexity community (its
“manifest destiny”) is prospecting, mapping, colonizing and developing this
“interdisciplinary” territory.
It is not by chance that the initial reaction to this enterprise was not very enthusiastic:
the peers in the “simpler” science recognized the complexity objective (explaining the
emergence and properties of the “more complex” science) as strange to its own
endeavor. The peers in the “target”, “more complex” field felt that the basic concepts
(the elements from the “simpler science”) are strange to the conceptual basis of their
discipline (and too far away from its observable phenomenology). And all together felt
that the very problematics and methods proposed by Complexity are not faithful to the
classical way of making and dividing science.
In the case of the electronic and software artifacts, the “more complex” science is not
defined as such to this very day. The naïve (and probably wrong) assumption is that the
scientists responsible for it are and should be the people in charge of the elementary
artifacts (computer scientists and electronic engineers). In the case in which the
elementary objects are humans, the situation can be further complicated / complexified.
Indeed, in this case, their behavior can be influenced by their recognition of the
collective emergent (social) objects as such. This is a final blow to even neo-reductionist
thinking, as the emergent “more complex” level becomes explicitly and directly an actor
in the “simpler” individuals dynamics.
Fortunately an increasing number of scientific leaders and many young students find the
challenge of Complexity crucial for further progress not only in pure science but also in
understanding and mastering most of our daily experience. In recent years this
claim has been more and more substantiated.
Popular books
Complexity: The Emerging Science at the Edge of Order and Chaos (Waldrop,
1992)
Complexity, Managers, and Organizations (Streufert and Swezey, 1986),
Chaos: Making a New Science (Gleick, 1987)
At Home in the Universe: The Search for the Laws of Self-Organization and
Complexity (Kauffman, 1995)
Leadership and the New Science: Learning about Organization from an Orderly
Universe (Wheatley, 1992) (Quant mech)
Figments of Reality: The Evolution of the Curious Mind (Stewart and Cohen, 1997)
and
The Collapse of Chaos: Discovering Simplicity in a Complex World (Cohen and
Stewart, 1994)
Complexity and Creativity in Organizations (Stacey, 1996)
Complexity Mechanisms and Methods
Even though the Complexity community had to take its distance from the core of
theoretical physics due to the unenthusiastic reception it received there, many of the
crucial ingredients of Complexity appeared in the context of Theoretical Physics. In
fact Anderson listed [4] as his preferred examples phenomena which take place in
physical systems: superconductivity, superfluidity, condensation of nucleons in
nuclei, neutron stars, glasses.
He emphasized that, in spite of the fact that the microscopic interactions in the above
phenomena are very different, they can all be explained as realizations of a single
dynamical concept: Spontaneous Symmetry Breaking. Similarly, the laws of
emergence of computing (and thinking?) are independent of whether they are
implemented on elementary objects consisting of silicon, vacuum tubes, or neurons.
Therefore, the mere fact that various phenomena fall superficially in different
empirical domains should not discourage scientists from studying them within a unified
conceptual framework. This birth gift of an extreme unifying potential has haunted the
Complexity community in the intervening 30 years as its main blessing and curse.
The role of Complexity ideas and techniques originating in theoretical physics is
hopefully going to grow in the future as more and more theorists realize the
inexhaustible source of fascinating challenges that the real world offers to their thought.
After all, the renormalization group was introduced exactly in order to bridge between
elementary microscopic interactions and their macroscopic collective effects.
LINK to “More Concepts” (by Gerard Weisbuch)
http://shum.huji.ac.il/~sorin/report/Gerard1.doc
Multiscale Structure of the Complex Multi-Agent Method
According to the discussions above, the Multi-Agent Paradigm concerns the study of
complex systems for which the origin of complexity can be traced to the very attempt
by our perception to describe a macroscopic number of “MICROscopic” objects and
events in terms of a limited number of “MACROscopic” features.
Multi-Agent Modeling provides the techniques through which one can systematically
follow the birth of the complex macroscopic phenomenology out of the simple
elementary laws.
The Multi-Agent paradigm consists in deducing the macroscopic objects (Macros)
and their phenomenological complex ad-hoc laws in terms of a multitude of
elementary microscopic objects (Micros) interacting by simple fundamental laws.
The Macros and their laws emerge then naturally from the collective dynamics of the
Micros as its effective global large scale features.
However, the mere microscopic representation of a system cannot lead to a
satisfactory and complete understanding of the macroscopic phenomena. Indeed,
the mere copying on the computer of a real-life system with all its problems does not
by itself constitute a solution to those problems.
It is clear that a satisfactory Multi-Agent treatment of such complex systems has to
be Multiscale. The Multi-Agent approach is therefore not trying to substitute the
study of one scale for the study of another scale; one is trying to unify into a
coherent picture the complementary descriptions of one and the same reality.
In fact one can have a multitude of scales such that the Macros of one scale become
the Micros of the next level. As such the "elementary" Micros of one Multi-Agent
Model do not need to be elementary in the fundamental sense: it is enough that the
details of their internal structure and dynamics are irrelevant for the effective
dynamics of the Macros.
More precise expressions of some of these ideas were encapsulated in the
renormalization group and in the multigrid method.
Multigrid

In the last decade the two have interacted profitably and their relative strengths and
weaknesses have complemented each other.
Multigrid (MG) has been offered for the last 20 years as a computational philosophy to accelerate
the computations arising in various scientific fields. The idea was to accelerate
algorithms by operating on various scales such that the computations related to a
certain length scale are performed directly on the objects relevant at that scale.
In our present view, the multi-scale / multi-grid phenomena and the relevant Macros
hierarchies are considered for their own interest even (sometimes) in the absence of
a multi-scale algorithm which accelerates the computations.
The multi-scale concept is proposed as a tool to reformulate and reorganize the way
of approaching the problematics of various fields. Thus its usefulness transcends by
far the mere application of a computational technique and may induce in certain
fields a shift in the concepts, the language, the techniques and even in their
objectives.
Understanding the critical behavior of a microscopic system of Micro's then reduces
to the identification of the relevant Macro's and the description of their long time-scale
evolution. Conversely, finding an appropriate Multi-Agent explanation for a
macroscopic complex phenomenon amounts to finding a system of Micro's whose effective
macroscopic critical dynamics leads to Macro's modeling well the macroscopic
complex phenomenon.
The emergence of this sort of algorithmic operational way of acquiring and
expressing knowledge has a very far reaching methodological potential.
Critical Slowing Down as the Label of Emergent Objects
The computational difficulty is one of the main characteristics of complex systems:
the time necessary for their investigation and/or simulation grows very fast with
their size. The systematic classification of the difficulty and complexity of
computational tasks is a classical problem in computer science.
The emergence of large time scales is often related to random fluctuations of
Multi-Agent spatial structures within the system. Long-range and long-time-scale
hierarchies (= Critical Slowing Down (CSD)) are usually related to collective
degrees of freedom (macros) characterizing the effective dynamics at each scale.
Usually, it is the dynamics of the macros during simulations which produces the
Critical Slowing Down (CSD) and reciprocally, the slow modes of the simulation
dynamics project out the relevant macros.
Therefore, a better theoretical understanding of the Multi-Agent structure of the
system, enables one to construct better algorithms by acting directly on the relevant
macros. Reciprocally, understanding the success of a certain algorithm yields a
deeper knowledge of the relevant emerging collective objects of the system.
Many complex systems in biophysics, biology, psychophysics and cognition display
similar properties generalizing the universality classes and scaling of the statistical
mechanics critical systems. This is a unifying effect over a very wide range of
phenomena spreading over most of the contemporary scientific fields. In the
absence of a rigorous theoretical basis for such a wide area of application, the
investigation of these effects relies for the moment mainly on case-by-case
studies.
The perception of Critical Slowing Down as an unwanted feature of the
simulations has led past studies to overlook the fundamental importance of
CSD as a tool in identifying the relevant Macros and their critical dynamics.
However, in the context of Reduced CSD (RCSD) algorithms the fact that the
acceleration of the dynamics of a certain mode eliminates/reduces CSD is a clear
sign that the critical degrees of freedom were correctly identified and their dynamics
correctly understood.
RCSD algorithms express and validate in an objective way (by reducing the
dynamical critical exponent z) our previous intuitive understanding of the collective
macroscopic dynamics of large ensembles of microscopic elementary objects.
In certain systems, which resist the conceptualization of their understanding in
closed analytic formulae, this kind of "algorithmic operational understanding" might
be the only alternative.
With this in mind, one may attempt to use CSD and its elimination (reduction) as a
quantitative measure of the "understanding" of the model's critical properties.
For instance an algorithm leading to z=2 would contain a very low level of
understanding, while the "ultimate" level of understanding is when one need not
simulate the model at all in order to get the properties of systems independently of
their size (z=0 or analytic understanding). A toy illustration of measuring such
correlation times is sketched below.
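As a toy illustration of this use of CSD (a sketch only; the model, the temperatures and the crude autocorrelation estimator are arbitrary choices, not taken from this report), the code below runs a naive single-spin-flip Metropolis simulation of a one-dimensional Ising ring and estimates the autocorrelation time of the magnetization: as the temperature is lowered and the ordered domains (the emergent collective objects) grow, the naive local dynamics slows down correspondingly.

import math
import random

def magnetization_series(n, temperature, sweeps, seed=0):
    """Naive single-spin-flip Metropolis dynamics of a 1D Ising ring (J = 1);
    returns the magnetization per spin recorded after each sweep."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    series = []
    for _ in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            # energy cost of flipping spin i (nearest neighbours on the ring)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if dE <= 0 or rng.random() < math.exp(-dE / temperature):
                spins[i] = -spins[i]
        series.append(sum(spins) / n)
    return series

def autocorrelation_time(series, cutoff=0.05):
    """Crude integrated autocorrelation time (in sweeps) of a time series."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    tau = 0.5
    for lag in range(1, len(series) // 4):
        cov = sum((series[t] - mean) * (series[t + lag] - mean)
                  for t in range(len(series) - lag)) / (len(series) - lag)
        if cov / var < cutoff:            # stop once correlations have died out
            break
        tau += cov / var
    return tau

if __name__ == "__main__":
    for temperature in (2.0, 1.2, 0.8):
        m = magnetization_series(n=64, temperature=temperature, sweeps=22000)
        m = m[2000:]                      # discard the initial transient
        print("T = %.1f   autocorrelation time ~ %6.1f sweeps"
              % (temperature, autocorrelation_time(m)))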
An extreme case is the simulation of spin glasses, where for naïve simulations the
correlation times increase faster than exponentially with the size of the system, but
where belief propagation techniques have proved quite efficient.
In simplicial quantum gravity the increase is faster than any function [nabu]. This
is the "ultimate" CSD: incomputability; that is, the a priori impossibility to
compute within a systematic approximation procedure the numbers "predicted" by
the theory [nabu].
The rise of incomputability in the context of the Multi-Agent approach allows a new
perspective on the issues of predictability and reductionism: the possibility arises
that the future state of the universe is physically totally determined by its present state,
yet the future state cannot be predicted because its physical parameters as
determined by the theory are mathematically uncomputable numbers.
(Unfortunately this fascinating point falls outside the scope of the present document.)
Until now, for non-complex, simple problems one would have only two possibilities: to
know or not to know. However, as explained above, within the multi-agent complex
paradigm there are intermediate degrees of understanding. One way to categorize
them is by the ability to eliminate or reduce the CSD. In many models, embedding
knowledge that we have about the model can result in a better (faster) dynamics.
Often, the way to a "more efficient algorithm" passes through the understanding of
the "relevant collective objects" and their dynamics.
There are a few "tests" to establish that, for a given critical system, the set of
Macro's was correctly identified:
- One would need to make sure that there is a "large" number of Macro's.
This requirement makes sure that a large fraction of the relevant degrees of freedom
is indeed represented by the Macro's that were discovered.
- Then one has to check that they are relevant in the sense that they are not just
symmetries of the theory. In other words, a change in a Macro should have an
influence on the important measurables of the system.
- One of the more stringent tests is to verify that the resulting Macro dynamics is
"free". That is, in a typical configuration of Micro's the resulting dynamics of the
Macros seems to be free. This is a signal that the correct Macros have been identified.
An analogy for the relation between Macro's and Micro's can be found in language.
The letters would be the Micro's and the words will be the Macro's.
Of course, manipulating words amounts to manipulating letters.
However when one "works" at the word level one need not bother with the letter
level, even though these two levels co-exist.
The macros may overlap rather than being separated by sharp boundaries.
In fact, the same Micro may belong (to various degrees) to more than one Macro.
This "fuzziness" renders the boundaries defining a Macro a "subjective" choice, a
matter of degree/opinion, which, while not affecting the validity of the numerical
algorithm, sets the scene for further conceptual elasticity.
It suggests continuous interpolation and extrapolation procedures closer in their
form and essence to the working of the natural human intelligence.
In fact, by substituting the binary logic of the Micros with the continuous one of
the Macros, one may avoid the no-go theorems, the paradoxes and the puzzles
related to (un)computability, (ir)reversibility and (creative) reasoning.
The precise yet "smeared" formulation of the Macros within the multi-agent
multiscale modeling approach bypasses these classical conceptual puzzles arising
in the naïve reductionist methods. In particular, while the Macros acquire a certain
degree of reality, individuality and causal behavior at the macroscopic level, their
conceptual boundaries are fuzzy enough to avoid the paradoxes which would arise
should one try to apply Macro categories in the microscopic domain (their
boundaries "dissolve" gracefully as one tries to resolve them with a too sharp
"microscope").
In the Multi-Agent Multiscale framework, there is no contradiction between
considering the ocean as a set of molecules or a mass of waves. These are just
complementary pictures relevant at different scales.
Algebraic Multigrid.
A direction with a particular conceptual significance is the Algebraic Multigrid.
The basic step of Algebraic Multigrid is the transformation of a given network into a
slightly coarser one by freezing together a pair of strongly connected nodes into a
single representative node. By repeating this operation iteratively, Algebraic Multigrid
ends up with nodes which stand for large collections of strongly connected
microscopic objects [15]. The algorithmic advantage is that the rigid motions of the
collective objects are represented on the coarse network by the motion of just one
object. One can separate in this way the various time scales. For instance, the time
to separate two stones connected by a weak thread is much shorter than the time
that it takes for each of the stones to decay to dust. If these two processes are
represented by the same network then one would have to represent time spans of
the order of millions of years (typical for stone decay) with a time step of at most 1
second (the typical time for the thread to break). The total number of time steps
would become unbearably large. The Multi-grid procedure allows the representation
of each sub-process at the appropriate scale. At each scale the collective objects
which can be considered as “simple” elementary objects at that scale are
represented by just one node. This is a crucial step whose importance transcends
the mere speeding up of the computations. By labeling the relevant collective objects
at each scale, the algorithm becomes an expression of the understanding of the
emergent dynamics of the system [16] rather than a mere tool towards acquiring that
understanding. Multigrid (and their cousins - Cluster) algorithms have the potential to
organize automatically the vast amounts of correlated information existing in
complex systems such as the internet, fNMR data, etc.
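A minimal sketch of this basic coarsening step is given below (illustrative only; the greedy pairing criterion and the data layout are arbitrary choices, not the algorithm of [15]): the most strongly connected node pairs are frozen into single representative nodes and the weights of the merged links are accumulated on the coarser network. Running it on the two-stones-and-a-thread example of the text leaves two coarse nodes joined only by the weak thread.

def coarsen_once(weights):
    """One coarsening pass: greedily freeze strongly connected node pairs.

    `weights` maps frozenset({a, b}) -> positive coupling strength.
    Returns the coarser edge map and the grouping of fine nodes."""
    merged_into = {}                                  # fine node -> coarse label
    # visit the links from strongest to weakest and pair up free nodes
    for edge, _ in sorted(weights.items(), key=lambda kv: -kv[1]):
        a, b = tuple(edge)
        if a not in merged_into and b not in merged_into:
            merged_into[a] = merged_into[b] = a + "+" + b
    # nodes left unpaired survive unchanged on the coarse level
    nodes = {n for e in weights for n in e}
    for n in nodes:
        merged_into.setdefault(n, n)

    coarse = {}
    for edge, w in weights.items():
        a, b = (merged_into[n] for n in edge)
        if a != b:                                    # internal links disappear
            key = frozenset((a, b))
            coarse[key] = coarse.get(key, 0.0) + w    # accumulate merged links
    return coarse, merged_into

# toy network: two tightly bound pairs joined by a weak "thread"
net = {
    frozenset(("s1", "s2")): 10.0,   # stone 1: strongly bound internally
    frozenset(("s3", "s4")): 10.0,   # stone 2: strongly bound internally
    frozenset(("s2", "s3")): 0.1,    # the weak thread between the stones
}
coarse, groups = coarsen_once(net)
print("groups:", groups)
print("coarse network:", coarse)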
Irreducible Complexity
In the preceding discussion we argued that complex macroscopic systems can
be described, explained and predicted by the Multi-Agent Models of the interactions
between their "elementary atoms".
How does one decide what these atoms should be?
If they are chosen too close to the macroscopic level, the strength of the resulting
scientific explanation is diminished: explaining that a car works because it has
wheels and an engine is not very illuminating about the heart of the matter: where does
it take the energy from and how does it transform it into controlled mechanical motion?
If the atoms of explanation are sent to too fine a scale, one gets bogged down in
irrelevant details: e.g. if one starts explaining the quantum mechanical details of the
oxidation of hydrocarbon molecules (fuel burning) one may never get to the point of
the car example above.
Beyond the practicalities of choosing the optimal level for the Multi-Agent description, one can
discern certain features which are of importance in principle.
They set the natural limits of reductionism, of explanation, of understanding, of
science in general and of the sciences among themselves.
It is instructive to consider the following example in which the reduction is
guaranteed by construction, yet completely ineffective.
Consider a program running on a PC. In principle one can reduce the knowledge of
the program to the knowledge of the currents running through the chips of the
computer. Yet such knowledge is not only difficult to achieve, validate and store,
but it is also quite irrelevant for what we call "understanding".
The right level of detail for understanding in this case is the flow chart of the
algorithm implemented by the program (and in any case a level coarser than the
"assembler" instructions of the machine).
In the same way, the problem of reducing mental activity to neuron firings is not
so much related to the issue of whether one needs in addition to the physical laws
assumptions of a "soul" which is governed by additional, transcendental laws.
Rather, the question is whether the generic (non-fine-tuned) dynamics of a set of
neurons can explain the cognitive functions. In fact, after millions of years of
intensive selection by survival pressures, it is reasonable to assume that the system
of neurons is highly non-generic, depending on all kinds of improbable accidents, and
therefore a totally reductionist approach to its understanding (relying on the generic
properties of similar systems) might be quite ineffective.
However, let us not forget that if a system is numerically unstable during its Multi-Agent
Simulation, this might also reflect a functional instability of the natural
system. Therefore, the failure of the "naive" Multi-Agent Simulation method may
signal problems with the formulation of the problem or a certain irrelevance to the
natural reality.
This is not so when the fine tuning is fixed once and for all by the fundamental laws.
For instance the role of water in the metabolism of all living systems depends very
finely on the value of the excitation energy of some specific electronic quantum level.
In fact the small change in this energy level induced e.g. by using "heavy" rather
than normal water is enough to completely disable its proper metabolic function.
In this case, the numerical instability in computing the phenomenon does not lead to
functional instability of the natural system, since the water molecule properties are
universal and fixed once and for all. Even if a quantum-mechanical computation could
prove that those numbers just come out the right way for sustaining the metabolism
of living entities, this would not constitute more of an explanation than computing the
motion of the balls in a lottery to explain why your mother-in-law's numbers
came up twice.
These examples indicate a very important characteristic of complex
irreducible systems:
the emergence of non-generic systems, disturbing the applicability of the Multi-Agent
Simulation procedure, is associated with the borders between sciences. Once
one concludes that certain biological properties (as above) are not explainable by the
generic chemical and physical properties of their parts, it is natural to consider those
biological properties as a datum and to concentrate the understanding efforts on
their consequences rather than on their "explanation" in terms of their parts.
Indeed, irreducible complexity is related to the fact that every detail of the system
influences all the other details across the system: there is no possibility to divide the
system into autonomous sub-systems responsible for well defined "sub-tasks"
[Simon, Rosen].
The reduction of biology to chemistry and physics is not invalidated here by the
intervention of new, animistic forces, but by the mere irrelevance of our reductionist
generic reflexes to a non-generic fine-tuned situation.
The situation is similar to being given the map of a complicated labyrinth: one can
have knowledge of each wall and alley (neuron, synapse); still it would take a
highly non-trivial effort to find the way out.
Even finding (after enormous effort) a way out of the labyrinth would not mean
"understanding" the system: any small addition or demolition of a small wall would
expose the illusory, unstable nature of this knowledge by generating a totally new
situation which would have to be solved from scratch.
By its very failure in explaining this kind of irreducibly complex systems, the
Multi-Agent Simulation is offering science one of its most precious gifts. It allows the
retracing of the conceptual frontiers between the various scientific disciplines. The
boundary is to be placed at the level where there is a non-generic irreducible object
which becomes the building block for an entire new range of collective phenomena.
More specifically, Multi-Agent Complexity is teaching us when a reductionist
approach is worth launching and where the limits are beyond which it cannot be
pushed: where the generic Multi-Agent Simulation dynamics has to be applied and
where one has to accept as elementary objects specific irreducibly complex
structures with "fine-tuned" fortuitous properties.
While such irreducible objects might have fortuitous characteristics, lack generality
and present non-generic properties, they might be very important if the same set of
core-objects / molecules /organelles appears recurrently in biological, neurological or
cognitive systems in nature.
Recognizing the "irreducibly complex" parts of a complex system (rather than
trying vainly to solve them by Multi-Agent Modeling means) might be a very
important aspect both conceptually and computationally.
In such situations, rather than trying to understand the irreducibly complex objects
and properties on general grounds (as collections of their parts), one may have to
recognize the unity and uniqueness of these macros and resign oneself to just
making as intimate an acquaintance as possible with their features.
One may still try to treat them by the implicit elimination method
[Solomon95,Baeker97] where the complex objects present, isolate and
eliminate themselves by the very fact that they are projected out by the dynamics
as the computationally slow-to-converge modes.
One could look at the necessity to give up extreme reductionism (pushing
the reduction below the first encountered non-generic object) as "a pity". Yet one
has to understand the emergence of these nontrivial thresholds as the very salt
which gives "taste" to the world as a WONDERful place where unexpected things
which weren't "put by hand from the beginning" can emerge.
Moreover one should be reassured that the fundamental "in principle" reduction
of macroscopic reality to the fundamental microscopic laws of the material world is
not endangered (or at least not more endangered than it started with).
New Causal Schemes (parallel, asynchronous dynamics, Markov webs)
To be developed by Lev Muchnick
New Ontological Schemes (Network vs tree ontology, dynamical ontology,
postext)
To be developed
New Experimental platforms (distributed, electronic experiments, NAtlab,
avatars)
To be developed by Gilles Daniel
Logistic Systems
Logistic equations were studied in the context of complex systems initially for the
wrong reasons: their deterministic yet unpredictable (chaotic) solutions for more than three
nonlinear coupled equations and their fractal behavior in the discrete time-step version
seemed for a while to be a preferred (if not royal) road to complexity. Even after those
hopes were realistically re-assessed one cannot ignore the ubiquity of the logistic
systems: Montroll: ‘… all the social phenomena … obey logistic growth’; Robert May:
“…”. Montroll introduced in this context the concept of "sociological force", which induces
deviations from the default "universal" logistic behavior he considers generic to all social
systems [5].
Moreover, as described later in this report, when the spatially distributed and stochastic
character of the equations is appropriately taken into account, the logistic systems turn
out to lead naturally and generically to collective Macro objects with adaptive and highly
resilient properties and to scaling laws of the Pareto type.
In 1798, T.R. Malthus [1] wrote the first equation describing the dynamics of a set of
autocatalytically proliferating individuals:

dx/dt = (birth rate − death rate) x     (Malthus autocatalytic equation)

with its obvious exponential solution

x ~ e^(rt),   where r = (birth rate − death rate).

According to the contemporary estimations the coefficients were such
as to ensure the doubling of the population every 30 years or so.
The impact of the prediction of an exponential increase in the population was so great
that everybody breathed with relief when P.F. Verhulst [2] offered a way out of it:

dx/dt = r x − c x^2     (Verhulst logistic equation)

where the c coefficient represents the effect of competition and other limiting factors
(finite resources, finite space, finite population etc). The solution of this equation starts
exponentially but saturates asymptotically to the carrying capacity

x = r/c.
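For completeness, the standard closed-form solution of the Verhulst equation (a textbook result, stated here for reference rather than taken from this report) makes the two regimes explicit: exponential Malthusian growth at early times and saturation at the carrying capacity K = r/c.

% Closed-form solution of dx/dt = r x - c x^2 with initial value x(0) = x_0:
\[
  x(t) = \frac{K}{1 + \left(\frac{K}{x_0} - 1\right) e^{-rt}},
  \qquad K = \frac{r}{c} .
\]
% For x_0 << K and early times, x(t) ~ x_0 e^{rt} (Malthusian regime);
% as t -> infinity, x(t) -> K (saturation at the carrying capacity).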
The solution was verified on animal populations: the population of sheep in Tasmania
(after a couple was lost by British sailors on this island with no sheep predators [3]),
pheasants, turtledoves, bee colony growth, E. coli cultures, drosophilas in bottles, water
fleas, lemmings, etc.
Applications of the logistic curve to technological change and new product diffusion
were considered in [8]. The fit was found to be excellent for 17 products (e.g. detergents
displacing soap in the US and Japan).
The Logistic Equation has also been used to describe the diffusion of social change:
the rate of adoption is proportional [12] to the number of people that have
adopted the change times the number of the agents that still haven't.
Unfortunately detailed data on the spatio-temporal patterns of propagation were
collected only for a few instances [DATA] of novelty propagation (hybrid corn among
farmers in Iowa, antibiotics among physicians in the US, family planning among the rural
population in Korea).
The modeling of the aggregate penetration of new products in the marketing literature
generally follows the Bass model [7], which in turn is based on Rogers' theory [7]
of the diffusion of innovations.
This theory postulates, in addition to the internal ("word of mouth") influences,
the role of communication methods - external influence (e.g., advertising, mass media).
The equations turn out to be of the same generic logistic form.
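For reference, the standard textbook form of the Bass adoption dynamics (not a formula quoted from this report) combines the external influence p and the internal word-of-mouth influence q:

% F(t): fraction of the potential market that has adopted by time t;
% p: coefficient of external influence (advertising, mass media);
% q: coefficient of internal influence (word of mouth).
\[
  \frac{dF}{dt} = \bigl(p + q\,F(t)\bigr)\,\bigl(1 - F(t)\bigr).
\]
% For p = 0 this reduces to the pure word-of-mouth (logistic) growth discussed above.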
Further developments of the equations addressed the problems of:
1. interaction between different products sharing a market,
2. competition between producers,
3. effects of repeat purchase,
4. the dynamics of substitution of an old product (technology) by a new one.
In epidemiology, Sir Ronald Ross [14] wrote a system of coupled differential equations
to describe the course of malaria in humans and mosquitoes.
This model was taken up by Lotka in a series of papers [15] and in particular in [16],
where the system of equations generalizing (to vectors and matrices) the logistic
equation,

dx_i/dt = Σ_j r_ij x_j − Σ_j c_ij x_i x_j     (diff eq logistic system)

was introduced.
The interpretation given by Lotka to this logistic system in the malaria context was:
- x_i represents the number of malaria-infected individuals in the various populations
indexed by i (e.g. i=1 humans and i=2 mosquitoes);
- the terms r_ij represent the probability that an individual from population i catches
the disease upon interacting with an individual from population j;
- the terms c_ij represent the saturation corrections due to the fact that the newly
infected individual may already be ill.
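The toy sketch below (with purely illustrative coefficients) integrates a two-population version of this system with a simple Euler scheme, in the spirit of the malaria interpretation above; both infected populations start small, grow roughly exponentially and then saturate.

def logistic_system_step(x, r, c, dt):
    """One Euler step of dx_i/dt = sum_j r_ij x_j - sum_j c_ij x_i x_j."""
    n = len(x)
    new_x = []
    for i in range(n):
        growth = sum(r[i][j] * x[j] for j in range(n))
        saturation = sum(c[i][j] * x[i] * x[j] for j in range(n))
        new_x.append(x[i] + dt * (growth - saturation))
    return new_x

# Illustrative coefficients: i = 0 infected humans, i = 1 infected mosquitoes.
r = [[0.0, 0.3],    # humans get infected through mosquitoes
     [0.5, 0.0]]    # mosquitoes get infected through humans
c = [[0.0, 0.01],   # saturation: the newly "infected" may already be ill
     [0.02, 0.0]]

x, dt = [1.0, 1.0], 0.01
for step in range(3001):
    if step % 500 == 0:
        print("t=%5.1f   infected humans=%7.2f   mosquitoes=%7.2f"
              % (step * dt, x[0], x[1]))
    x = logistic_system_step(x, r, c, dt)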
To date, the best field data for these systems occur in the context of epidemics
(the propagation of a superior antigen). Bayley [34] reviews the applications in epidemiology
(applications to influenza, measles and rabies) and in particular "multiple site" models
[35] generalizing the original malaria problem studied by Ross and by Lotka.
Vito Volterra became involved independently with the use of differential equations in
biology and the social sciences [17] (the English translations of the references [17-??],
together with further useful information, can be found in the collection [18]).
In particular, Volterra re-derived the logistic curve by reducing the Verhulst-Pearl
equation to a variational principle (maximizing a function that he named the "quantity of life").
V.A. Kostitzin generalized the logistic equation [18] to the case of time- and variable-
dependent coefficients and applied it to species genetic dynamics [19]. This was further
generalized by A. Kolmogoroff [22]. May showed that logistic systems [24] are almost
certainly stable for a wide range of parameters.
The extension of the relevance of these models to many other subjects continued for
the rest of the century, e.g. [23]. See also [24] for a very mathematical study of the
equations of logistic form.
Another generalization, proposed by Eigen [36] and Eigen and Schuster [37][38],
was in the context of Darwinian selection and evolution in prebiotic environments. One
assumes a species of autocatalytic molecules which can undergo mutations. The
various mutant "quasi-species" have various proliferation rates and can also mutate one
into the other.
In the resulting equations describing the system, rij represents the increase in the
population of species i due to mutations that transform
the other species j into it. This dynamics is the crucial ingredient in the study of
molecular evolution in terms of the autocatalytic dynamics of polynucleotide replicators.
Eigen and Schuster showed that the system reaches a "stationary mutant distribution of
quasi-species". The selection of the fittest is not completely irrelevant, but it now refers to
the selection of the eigenvector with the highest eigenvalue. The importance of the spreading of
the mutants over an entire genomic-space neighbourhood of the current fitness
maximum was emphasized, in the context of a very hostile and changing environment,
by many authors.
The spread of the population and the discreteness of the genetic space lead to a
situation in which populations which would naively disappear (under the hypothesis of a
continuum genetic space) in fact survive, adapt and proliferate.
As remarked in Mikhailov [46] and Mikhailov and Loskutov [47], the Eigen equations are
relevant to market economics: if one denotes by i the agents that produce a certain kind
of commodity and compete on the market, one may denote by xi the amount of
commodity agent i produces per unit time (the production of i). Then (diff eq logistic
system)
is obtained by assuming that a certain amount of the profit is reinvested in production
and by taking into account various competition and redistribution (and eventual
cooperation) effects.
One may apply the same equations to the situation in which xi represents the investment
in a company i or the value of its shares. Marsili, Maslov and Zhang have shown that
the equation (diff eq logistic system) characterizes the optimal investment strategy.
The ecology-market analogy was postulated already in [Schumpeter] and [Alchian]. See
also [Nelson and Winter] [Jimenez and Ebeling] [Silverberg] [Ebeling and Feistel]
[Jerne].
The extension to spatially extended logistic systems in terms of partial differential
equations was first formulated in [25] in the context of the propagation of a superior
mutant gene in a population:
dxi/dt = Σj rij xj + di ∆ xi − Σj cij xi xj
(spatially distributed logistic)
where the coefficients di and the Laplacian operator ∆ ≡ ∂²/∂r² represent the spatial
diffusion due to the jumping of individuals between neighboring locations.
The mathematical study of these "Fisher fronts" was taken up by [26], followed by a
large literature in physics journals [quote anomalous diffusion works].
For the further development of this direction, see [28-33].
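The following sketch integrates the one-species spatially distributed logistic (Fisher) equation on a one-dimensional grid and prints the front position; the parameters are illustrative and the constant front speed is only checked qualitatively:

```python
# Sketch: 1-D Fisher front dx/dt = r*x + D*d2x/dr2 - c*x**2 on a finite-difference grid.
# A localized seed develops into a front invading the empty region at roughly constant
# speed (about 2*sqrt(r*D)).  All parameter values are illustrative.
import numpy as np

r, c, D = 1.0, 1.0, 1.0
h, dt, L = 1.0, 0.1, 400           # grid spacing, time step, number of sites
x = np.zeros(L)
x[:5] = 1.0                        # seed the population at the left edge

def front_position(x, threshold=0.5):
    above = np.where(x > threshold)[0]
    return above[-1] if len(above) else 0

for t in range(1, 1001):
    lap = np.zeros_like(x)
    lap[1:-1] = (x[2:] + x[:-2] - 2.0 * x[1:-1]) / h**2
    x = x + dt * (r * x + D * lap - c * x * x)
    x[0], x[-1] = x[1], x[-2]      # crude no-flux boundaries
    if t % 250 == 0:
        print(f"time {t*dt:5.1f}: front at site {front_position(x)}")
```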
A large body of mathematical literature [36-40] (much of it quite illegible to non-
mathematicians) has addressed in the past the stochastic generalizations of the logistic /
Lotka-Volterra equations, in which the coefficients and the unknowns are stochastic
variables. One of the difficulties is that the continuum (differential-equation-like)
notation of a stochastic process is not unambiguous. In general the interpretation is
along the Ito lines [ref Ito] and becomes unambiguous when the process is specified at
the discrete time level [41]. Of particular importance for survivability, resilience and
sustainability are also the random space-time fluctuations in the coefficients rij due to
the stochastic / discrete character of the substrate / environment. This randomness is
also responsible for random variations in the sub-species fitness, which in turn can be
shown to be responsible for the emergence of power laws in the distribution of
individuals between the various sub-populations. Similar effects are seen in the other
applications of the logistic systems, where microscopic discreteness and ensuing
randomness lead quite universally to stable power laws (even in arbitrary and
dramatically non-stationary global conditions).
This multiscale distribution of the sub-populations constitutes an additional link
between the microscopic stage of natural selection and the macroscopic dynamics
of the populations. Most of the effort was in the direction of extending the stability /
cycles theorems existing in the deterministic case.
Moreover, Horsthemke and Lefever applied these methods to the particular case of the stochastic
extension of the Verhulst logistic equation.
Rather than having a self-averaging effect, this multiplicative noise leads to scaling
variations in the sub-population sizes and consequently ([London][Biham et al][Paris]) to
power-law tailed (Lévy) fluctuations in the global sub-species populations.
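A minimal numerical sketch of this mechanism (illustrative parameters, in the spirit of the generalized Lotka-Volterra models discussed above, not a reproduction of any specific published model):

```python
# Sketch of a stochastic logistic (generalized Lotka-Volterra) system with multiplicative
# noise, x_i(t+1) = lam_i(t)*x_i(t) + a*<x> - c*<x>*x_i(t); the relative sizes x_i/<x>
# develop a power-law (Pareto-like) tail.  All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, a, c = 10000, 0.001, 0.001
x = np.ones(N)

for _ in range(5000):
    lam = rng.lognormal(mean=0.0, sigma=0.1, size=N)   # random autocatalytic factors
    m = x.mean()
    x = lam * x + a * m - c * m * x

w = np.sort(x / x.mean())[::-1]                        # relative sizes, descending
tail = w[:N // 100]                                    # the largest 1%
alpha = 1.0 / np.mean(np.log(tail / tail[-1]))         # crude Hill estimate of the exponent
print("crude tail-exponent estimate:", round(alpha, 2))
```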
Power laws and dynamics of their emergence
One of the early hints of complexity was the observation in 1897 by Pareto that the
wealth of individuals spreads over many orders of magnitude (as opposed to the
height of a person, which ranges roughly between ½ meter and 2 meters). The
dynamics of social wealth is then not dominated by the typical individual but by a
small class of very rich people. Mathematically, one realized that instead of the usual
fixed-scale distributions (Gaussian, exponential), wealth follows a "power law"
distribution. Moreover, in spite of the wide fluctuations in the average wealth during
crises, booms and revolutions, the exponent of the power law has remained within
narrow bounds for the last 100 years.
Similar effects [7] were observed in a very wide range of measurements: meteorite
sizes, earthquakes, word frequencies and, lately, internet links. In all these systems,
the presence of power laws constitutes a conceptual bridge between the
microscopic elementary interactions and the macroscopic emergent properties.
Recently, attention has turned to the internet, which seems to display quite a number of
power-law distributions: the number of visits to a site [4], the number of pages within a
site [5], and the number of links to a page [6], to name a few.
We will see in detail in this report that the autocatalytic character of the microscopic
interactions governing these systems can explain this behavior in a generic unified
way.
A quick plausibility argument is based on the observation that a dynamics in which
the changes in the elementary variables are proportional to their current values is
scale invariant, i.e. the dynamics is invariant under rescaling (a transformation that
multiplies all the variables by an arbitrary common factor). The fact that the autocatalytic
dynamics is invariant under rescaling suggests that it leads to a distribution
of the variables which is invariant under rescaling too [8]. The only functions which
are invariant under rescaling are the power laws: P(K x) ~ (K x)^(−1−α) ∝ x^(−1−α) ~ P(x).
Note that by taking the logarithm of the variables, random changes proportional to
the present value become random additive changes. This brings auto-catalytic
dynamics within the realm of statistical mechanics, and its powerful methods can be
applied efficiently.
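A tiny numerical check of this remark (purely illustrative):

```python
# Quick check: a purely multiplicative random process becomes an ordinary additive
# random walk after taking logarithms, which is what links autocatalytic dynamics to
# the standard tools of statistical mechanics.  Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
factors = rng.lognormal(mean=0.0, sigma=0.2, size=10000)
x = np.cumprod(factors)          # multiplicative dynamics x(t+1) = factor(t) * x(t)
steps = np.diff(np.log(x))       # increments of log x(t)
print("log-increments mean/std:", steps.mean().round(4), steps.std().round(4))
print("equal to the additive steps log(factor):",
      np.allclose(steps, np.log(factors[1:])))
```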
To get the priorities straight: the power law distribution of word frequencies was
discovered by Estoup in 1916 [Estoup 1916], long before Zipf. Also, the power law in the
city size distribution was discovered not by Zipf but by Auerbach in 1913 [Auerbach
1913]. The power law in the number of papers produced by an author was
discovered by Lotka. For other places in information dynamics where power laws
appear, see [Bookstein 90].
In RNA and proteomic sequences, a very early study was published already in 1955:
G Gamow, M Ycas (1955), "Statistical correlation of protein and ribonucleic acid
composition", Proceedings of National Academy of Sciences, 41 (12), 1011-1019
(Dec 15, 1955).
The earliest papers connecting power laws to the WWW and the internet started to appear
in the mid 90's: S Glassman (1994), WE Leland et al (1994), C R Cunha, A
Bestavros, M E Crovella (1995), V Almeida et al (1996), M F Arlitt, C L Williamson
(1997), ME Crovella, A Bestavros (1997), P Barford, ME Crovella , 1997, ME
Crovella, M S Taqqu, A Bestavros (1998), N Nishikawa, T Hosokawa, Y Mori, K
Yoshida, H Tsuji (1998), BA Huberman, PLT Pirollo, JE Pitkow, RM Lukose, (1998),
A-L Barabasi, R Albert (1999).
In the following list, for clarity and brevity, instead of saying:
"The probability density for a person to have wealth x is approximately a power law
P(x) ~ x^(−1−α)", we just make a list entry named "wealth of an individual" or
"individual wealth":
Duration of individual stays at one address
Time a purchaser stays with a supplier
Duration of browsing a website
Time to get rid of inventory items
Time a political party stays in power
Duration of wars
Time for device functioning without failure
Duration of patient stays in hospital
Time to turn a prospect into a sale.
Time spent searching for missing persons.
Time for which teenagers remain unaccounted for.
Average survival time of a new business
Time to complete a painting.
Time that a bad debt will remain unpaid.
Duration of engineering projects.
Assets shares in a Portfolio
firm size
Size of rounding error in a final computer result
Ecological population size
Bankruptcy sizes.
Detection of false data by an auditor
Volume of Website traffic
Frequencies of words in texts
The size of human settlements
File size distribution of Internet traffic
Clusters of Bose-Einstein condensate
Size of oil reserves in oil fields
The length distribution in batched computer jobs
returns on individual stocks
Size of sand particles
Size of meteorites
Size of moon craters
Number of species per genus
Areas burnt in forest fires
Stored Stock size per product type
sales volumes per product type
profit per customer
pollution rate per vehicle
sales results per advertisement
complaints per product type / service type
car rentals per customer
product quantity consumed per consumer
telephone calls per caller
frequency of code portion usage
Decisions per meeting time
Results per action item
Number of Interruptions per interrupter
Occurrences per error type
Sales per sale-person
Revenue per company unit
Number of crimes per criminal
Fruits per plant
Website / Blog Popularity.
Search Engine Queries per question
Distribution of peering sessions per router
Internet site connections
Movies-Demand in Video Servers.
Size Distribution of Firms.
Territory distribution in a Society.
income distribution of companies
Human behavior
Non-coding portions of DNA.
size of RNA Structures ,
Earthquake areas
Size of Phylogenic tree branches
Duration of peering sessions carried by routers,
Size of stored computer files
the sizes of earthquakes
size of solar flares
war duration and intensity,
the frequency of use of words
the frequency of occurrence of personal names (in most cultures)
the number of citations received by papers
the number of hits on web pages
the sales of books, CD’s
the numbers of species in biological taxa
number of calls received by a person
Frequencies of family names
Number of protein sequences associated to a protein structure
Frequencies of psychiatric diseases
heartbeat intervals
Frequency of family names
Nr of Species with individuals of a given size
Nr of Species vs number of specimens
Nr of Species vs their life time
Nr of Languages vs number of speakers
Nr of countries vs population / size
Nr of towns vs. population
Nr of product types vs. number of units sold
Nr of treatments vs number of patients treated
Nr of patients vs cost of treatment
Nr of moon craters vs their size
Nr of earthquakes vs their strength
Nr of meteorites vs their size
Nr of voids vs their size
Nr of galaxies vs their size
Nr of rivers vs the size of their basin
A promising concept which might dominate this direction for the coming years is
stochastic logistic systems of generalized Lotka-Volterra type [9] (spatially
distributed logistic) with random coefficients rij .
Multi-Agent Modeling of Reaction-Diffusion Systems
Reaction diffusion systems are multi-agent systems where the agents may move freely /
randomly (diffuse) in space as long as they do not encounter one another and react
when they eventually meet.
The usual approach to reaction-diffusion processes in their field of origin (chemistry)
was to express them in terms of density fields D(r,t) representing the average density of
the different reactants (agents of a given type) as continuous functions of time t and
spatial location r (in certain cases the dependence on r is neglected and the system is
represented as a single object).
This approach stands in contrast to the Multi-Agent approach, which consists of tracking
in time the individual location of each agent and the individual transformations which the
agents undergo upon meeting.
Whichever approach is taken, the interest in a reaction-diffusion system is usually its
spatio-temporal evolution.
In the density field approach the spatio-temporal distribution is explicitly expressed by
the variables D(r,t) and the partial-differential equations governing them (see below),
while in the Multi-Agent approach the spatio-temporal distribution emerges only upon
averaging the agents' positions over finite space-time ranges (i.e. over many stochastic
system instances = "configurations").
In the complex applications one often needs to represent space by a discrete regular
mesh and record the number of agents of each kind on each mesh site.
The Multi-Agent approach is closer to the real system when there are only trace
densities of the different reactants. Indeed, the partial differential equations approach
describes the system with less accuracy when the discreteness of the reactants is
apparent.
We will see later in the report that, for auto-catalytic reactions, the microscopic
discreteness influences the macroscopic long-range behavior of the system by
generating power laws, localized spatio-temporal structures and collective self-
organized behavior. The continuous approach may miss occasionally, or even
systematically, such effects, as we will see below.
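A minimal Multi-Agent sketch of a reaction-diffusion process on a discrete mesh is given below; the reaction rule and parameters are illustrative and are not taken from the report:

```python
# Minimal multi-agent reaction-diffusion sketch (illustrative, not a model from the report):
# agents of two species perform random walks on a discrete 1-D mesh and react whenever
# they meet on a site (here every agent standing on a meeting site is removed).  The
# spatio-temporal behavior is obtained by tracking each agent individually rather than by
# evolving density fields D(r,t).
import random

random.seed(0)
L, N = 200, 300                                          # mesh size, initial agents per species
A = [random.randrange(L) for _ in range(N)]
B = [random.randrange(L) for _ in range(N)]

for t in range(1, 501):
    A = [(r + random.choice((-1, 1))) % L for r in A]    # diffusion of species A
    B = [(r + random.choice((-1, 1))) % L for r in B]    # diffusion of species B
    shared = set(A) & set(B)                             # sites where the two species meet
    A = [r for r in A if r not in shared]                # reaction: remove the agents
    B = [r for r in B if r not in shared]                # present on meeting sites
    if t % 100 == 0:
        print(f"t={t:3d}  remaining agents: A={len(A):3d}  B={len(B):3d}")
```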
These special Reaction-Diffusion Multi-Agent effects are not restricted to describing
chemical systems. Indeed, reaction-diffusion processes of the type (spatially
distributed logistic) have been used extensively in population biology [Maynard]
(where auto-catalysis is called "reproduction"), marketing (see section ..), immunology,
finance, social science, etc. Discretization is crucial in the behavior of such autocatalytic
models, thus showing that the Multi-Agent approach is indispensable to researchers who
need to model real-life emerging systems. Indeed, due to the spatio-temporal
fluctuations in the autocatalytic coefficients rii, even if the average growth rate ⟨rii⟩ is
negative, the system presents rare singular points where there is
momentary growth. One can show that in a very wide range of conditions the islands of
growth generated by one such singular fluctuation survive beyond the lifetime of the original
fluctuation
and are able to keep growing due to the occurrence of another such fluctuation on their
territory. Thus, an entire infinite chain of singular fluctuations (each not too far from the
preceding one) might ensure the survival forever of the x's as a population. Note that
while the collective islands look as if they were searching for and opportunistically taking advantage
of the environment fluctuations, the actual individuals are completely naïve (zero
intelligence / rationality). This mechanism explains the role of collective emergent
objects such as cells, species, institutions and herds in ensuring the sustainability and
resilience of adaptive activity in situations which otherwise look hopeless and doomed
to decay.
Moreover, the multiplicative stochastic character of the xi rii term can be shown to
imply the emergence of robust scaling even in conditions in which the probability
distribution of rii and the nonlinear saturation terms are highly non-stationary and lead
to chaotic global dynamics.
This explains why, on inspecting the list of scaling systems and the list of logistic
systems, one sees a very strong overlap between them.
Autocatalytic Growth
As described above, the discrete character of the components (i.e. the multi-agent
character) is crucial for the macroscopic behavior of complex systems. In fact, in
conditions in which the (partial differential) continuum approach would predict a
uniform static world, the slightest microscopic granularity ensures the emergence of
macroscopic space-time localized collective objects with adaptive properties which
allow their survival and development [5].
The exact mechanism by which this happens depends crucially on another unifying
concept appearing ubiquitously in complex systems: auto-catalyticity. The dynamics
of a quantity is said to be auto-catalytic if the time variations of that quantity are
proportional (via stochastic factors) to its current value. It turns out that, as a rule, the
"simple" objects (or groups of simple objects) responsible for the emergence of most
of the complex collective objects have auto-catalytic properties. In the simplest
example, the size of each "simple" object jumps at every time instant by a (random)
quantity proportional to its current size.
Autocatalyticity ensures that the behavior of the entire system is dominated by the
elements with the highest auto-catalytic growth rate rather than by the typical or
average element [6].
This explains the conceptual gap between sciences: in conditions in which only a
few exceptional individuals dominate, it is impossible to explain the behavior of the
collective by plausible arguments about the typical or “most probable” individual. In
fact, in the emergence of nuclei from nucleons, molecules from atoms, DNA from
simple molecules, humans from apes, it is always the untypical cases (with
accidentally exceptional advantageous properties) that carry the day. This effect
seems to embrace the emergence of complex collective objects in a very wide range
of disciplines, from bacteria to economic enterprises, from the emergence of life and
Darwinism to globalization and sustainability. Its research using field theory,
microscopic simulation and cluster methods is only at its beginning.
The Birth of Macroscopic Objects from Microscopic “Noise”
Suppose one has the following "economic model":
- people are spread at different (fixed) locations in space;
- at each time step the wealth of each of them is multiplied by a
(different) random growth factor (all extracted from the same
common distribution) - we will call such a dynamics auto-catalytic;
- each of the individuals spreads a fraction of its wealth among its
neighbors.
It is found that, in spite of the fact that macroscopically / statistically the growth
factors are distributed uniformly among the individuals, the resulting overall distribution
of wealth will be characterized by a spatial pattern of very wealthy
neighborhoods separated by a very poor background.
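A minimal sketch of this model (with illustrative parameters and a ring geometry chosen for simplicity):

```python
# Sketch of the "economic model" described above (all parameters illustrative): agents sit
# at fixed positions on a ring; at every step each wealth is multiplied by an independent
# random factor drawn from one common distribution, and a fraction of it is then shared
# equally with the two neighbors.  Despite the statistically uniform growth rule, the
# relative wealth becomes extremely unequal and spatially clustered.
import numpy as np

rng = np.random.default_rng(42)
N, share = 1000, 0.1
w = np.ones(N)

for t in range(3000):
    w = w * rng.lognormal(0.0, 0.5, size=N)        # auto-catalytic random growth
    given = share * w
    w = w - given + 0.5 * (np.roll(given, 1) + np.roll(given, -1))  # share with neighbors
    w = w / w.mean()                               # only relative wealth matters here

top = np.sort(w)[-N // 100:]                       # the richest 1% of the agents
print("share of total wealth held by the top 1%:", round(100 * top.sum() / w.sum(), 1), "%")
```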
We will find repeatedly that the emergence of such an inhomogeneity invalidates the
averaging / representative agent / continuum approaches and turns Multi-Agent
Modeling into a central research tool for such systems (cf. Anderson 1997:
“Real world is controlled …
– by the exceptional, not the mean;
– by the catastrophe, not the steady drip;
– by the very rich, not the ‘middle class’.
we need to free ourselves from ‘average’ thinking.”).
It is for this reason that in our examples we stress the spatio-temporal
inhomogeneity / localization effects.
Consequently, the Multi-Agent Paradigm is in a two-way reciprocal relation with
the stochastic auto-catalytic systems:
- the Multi-Agent structure is the one which allows the localization to emerge, and
reciprocally, in turn,
- the emergence of localization selects the Multi-Agent method as the appropriate
research formalism.
We will see below that this is the core mechanism in many complex systems
spreading over many different fields.
Moreover, the spontaneous emergence of inhomogeneity / spatio-temporal
localization in a priori homogeneous noisy conditions is induced generically by auto-
catalysis.
A couple of questions and answers about the specific choice of examples.
- Why are we concentrating on microscopically auto-catalytic systems?
- Because usually, in the absence of auto-catalytic interactions the
microscopic systems do not reach macroscopic dynamical relevance in
terms of spatio-temporally localized complex / adaptive collective objects.
- Why do we study those localized macroscopic objects as examples of
Multi-Agent Modeling?
- Because the homogeneous systems do not usually need the Multi-Agent
Simulation approach: they can be treated by averaging over the
homogeneous masses of similar microscopic objects (mean field,
representative agent).
Thus, one of the key concepts underlying the emergence of complex macroscopic
features is auto-catalysis. We therefore give at this point a provisional definition of it:
auto-catalysis = self-perpetuation / reproduction / multiplication
As opposed to the usual stochastic systems, in which the microscopic dynamics
typically changes the individual microscopic quantities by additive steps (e.g. a
molecule receiving or releasing a quantum of energy), the auto-catalytic microscopic
dynamics involves multiplicative changes (e.g. the market worth of a company
changes by a factor (index) after each elementary transaction). Such auto-catalytic
microscopic rules are widespread in chemistry (under the name of auto-catalysis),
biology (reproduction / multiplication, species perpetuation) and the social sciences (profit,
returns, rates of growth).
The A(utocatalysis)-Bomb
The first and most dramatic example of the macroscopic explosive power of
multi-agent auto-catalytic systems is the nuclear ("Atomic") bomb. The simple microscopic
interaction underlying it is that the U235 nucleus, when hit by a neutron, splits into a few
energetic fragments including neutrons:
n + U ---> n + n + etc.
(autocatalysis equation 1)
On the basis of (autocatalysis equation 1), even without knowing what a neutron or
a U235 nucleus is, it is clear that a macroscopic "reaction chain" may develop: if there
are other U235 nuclei in the neighborhood, the neutrons resulting from the first reaction
(autocatalysis equation 1) may hit some of them and produce similar new reactions.
Those reactions will produce more neutrons that will hit more U235 nuclei that will produce
more neutrons, and so on.
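A toy branching-process sketch of this chain reaction (with made-up, non-physical numbers) shows the sharp difference between sub-critical and super-critical conditions:

```python
# Toy branching-process sketch of the chain reaction (autocatalysis equation 1): each free
# neutron triggers a new fission with probability p, releasing nu fresh neutrons.  Above
# p*nu = 1 the number of fissions per generation explodes.  Numbers are illustrative, not
# physical data.
import random

random.seed(3)

def generations(p, nu=3, start=10, n_gen=12, cap=10**6):
    counts, n = [], start
    for _ in range(n_gen):
        n = sum(nu for _ in range(n) if random.random() < p)  # fissions triggered this generation
        counts.append(n)
        if n == 0 or n > cap:
            break
    return counts

print("sub-critical   (p=0.25, p*nu=0.75):", generations(0.25))
print("super-critical (p=0.50, p*nu=1.50):", generations(0.50))
```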
http://tqd.advanced.org/3471/nuclear_weapons_fission_diag.html
Figure 1: At the left of the diagram, a neutron hits a U-235 nucleus causing it to fission
(this is symbolized by the green arrow), which results in 2 nuclear fragments, energy and
3 neutrons. In the middle section of the diagram, two of the resulting neutrons each hit
a new U-235 nucleus (one depicted in the upper part of the diagram and one in the
lower part of the diagram). In the right section of the diagram, each of the newly hit U-235
nuclei fissions (green arrows) into 2 nuclear fragments, energy and 3 new neutrons.
The result will be a chain (or rather "branching tree") of reactions in which the neutrons
resulting from one generation of fission events induce a new generation of fission
events by hitting new U235 nuclei (Figure 1).
This "chain reaction" will go on until eventually, the entire available U235 population
(of typically some 10**23 nuclei) is exhausted and their corresponding energy is
emitted: the atomic explosion.
http://www.ccnr.org/fission_ana.html
http://www.chem.uidaho.edu/~honors/fission.html
The crucial feature in the equation above, which we call "auto-catalysis", is that by
inputting one neutron n into the reaction one obtains two (or more) neutrons (n + n).
The theoretical possibility of iterating it and obtaining an exponentially increasing
macroscopic number of reactions was explained in a letter from Einstein to President
Roosevelt. In turn this led to the initiation of the Manhattan project and the eventual
construction of the A-bomb.
It is not by chance that the basic Multi-Agent method (the Monte Carlo simulation
algorithm, used to this very day in physics applications) was invented by people
[Metropolis, Metropolis, Teller, Teller and Ulam] involved in the Manhattan project (and
the subsequent thermo-nuclear reaction projects): the Multi-Agent method is the method
best fitted to compute realistically the macroscopic effects originating in microscopic
interactions!
The crucial fact is that one can understand the chain reaction while knowing virtually
nothing of nuclear physics. After the Multi-Agent method reduces the systems to their
relevant elements, one is left with abstract systems which admit a universal formalism
[Mack] and method of treatment [S95].
Beyond its utility in each field, Multi-Agent Modeling constitutes a major unifying factor
for the various fields of human activity and a strong communication tool across
disciplinary borders. Any researcher involved in complexity has had the personal
experience in which people from various disciplines with virtually no common
background were able to carry on lively and productive dialogues based on the Multi-Agent
formulations of the systems under discussion.
This can be done with no compromise on the specificity of each system: the particular
circumstances of each system are readily included in the Multi-Agent formulation of the
system. This is very different from usual analytical models whose applicability is very
sensitive to the smallest changes in the circumstances (even within the same scientific
subject).
The Logistic Bomb: Malthus-Verhulst-Lotka-Volterra-Montroll-May-Eigen-Schuster systems
In social and biological systems, the realization that (dramatic) macroscopic events can
be generated by chains of auto-catalytic elementary "microscopic" events emerged
in its full power only lately, after the macroscopic effects had been experienced (for good or
for bad) for many centuries.
The reaction (autocatalysis equation 1) leads naively to the differential equation
(malthus autocatalytic equation). However there are crucial corrections to this naïve
translation. As discussed in the logistic equation section, the fact that there is a finite
number of U-235 nuclei in the particular slab of material means that at some stage the
probability to find a not-yet-fissioned U-235 nucleus will decrease.
This is expressed by the saturation term in (Verhulst logistic eq.). Moreover, the fact that the
nuclei are placed in a particular spatial geometry means that it is important how many neutrons
can move from one location (where they were generated) to another location (where they initiate a
new reaction). One can try to express this by differential equations of the type (spatially distributed
logistic) or (diff eq logistic system). However, this is not always correct. The long flights of the
neutrons, their slowing down in various conditions, and the change of the reaction rate as a function of
their speed may defy analytic solutions in many situations. Going back to the direct
representation of the system in terms of its elementary agents is always a valid option and often
the only one.
In fact, some crucial effects are obtainable, and indeed discernible, only in this formalism.
In particular, the connection between the autocatalytic dynamics of the logistic systems
and the emergence of power laws and of localized adaptive resilient collective objects is
obtained only when one takes into account the stochastic character related to the
discrete (rather than continuum) texture of the U-235 material and of the other systems
discussed below.
Autocatalysis and localization in immunology B-Bomb
In no field are auto-catalysis and localization more critical than in the emergence of
living organisms' functions out of the elementary interactions of cells and enzymes.
From the very beginning of embryo development, the problem is how to create a
"controlled chain reaction" such that each cell (starting with the initial egg) divides into
similar cells, yet spatio-temporal structures (systems and organs) emerge.
Let us consider the immune system as an example.
The study of the Immune System (IS) over the past half century has succeeded in
characterizing the key cells, molecules, and genes.
As always in complex systems, the mere knowledge of the MICROS is not sufficient
(and, on the other hand, some details of the micros are not necessary). Understanding
comes from the identification of the relevant microscopic interactions and the
construction of a Multi-Agent Simulation which demonstrates in detail how the complex
behavior of the IS emerges. Indeed, the IS provides an outstanding example of the
emergence of unexpectedly complex behavior from a relatively limited number of simple
components interacting according to known simple rules.
By simulating their interactions in computer experiments that parallel real immunology
experiments, one can check and validate the various mechanisms for the emergence of
collective functions in the immune system (e.g. recognition and destruction of various
threatening antigens, the oscillations characteristic of rheumatoid arthritis, the
localization of type 1 diabetes to the pancreatic islets, etc.). This would allow one to design further
experiments, to predict their outcome and to control the mechanisms responsible for
various auto-immune diseases and their treatment.
Multi-Agent Simulation of the Emergence of Immune Function
The immune system's task is to recognize and kill entities which do not belong to the
organism ("self") [Atlan and Cohen 98].
The standard scheme describing the way the immune system does this is shown in Fig. 2.
Already at a very superficial level, one sees that the scheme in Fig 2 is very similar
to Figure 1, which describes the consequences of autocatalysis equation 1
(or the reproduction-of-capital reaction 3.1, for that matter).
Let us describe in more detail what happens in Figure 2.
At the left of the diagram (Fig 2), the immune system is producing (by iterated divisions)
cells which are called B-cells and which (as opposed to all the other cells in the
organism) undergo mutations in their genetic information.
For instance, after 3 generations of divisions, one ends up with 2^3 = 8 B-cells in the
middle of Figure 2. In reality, the B-cells carrying different genetic information are
identified rather by a specific shape which each of them owns (as shown for the cells
1-8 in Fig 2).
From now on the story of the B-cells is exactly like that of the neutrons and the
U235: whenever a B-cell hits an entity which carries a shape complementary to its
own specific shape (such an entity is called an Antigen and is denoted by Ag), 3 things
happen:
1) The Ag entity is (eventually) destroyed
2) The B-cell's life is prolonged
3) New B-cells with the same genes (and specific shape) are (eventually) produced
This is shown in Fig 2: an entity having shapes which fit the B-cells 2, 4, and 7 is
present (around the center of the upper edge of Fig 2). Consequently, B-cells 2, 4 and 7
reproduce and the Ag is destroyed:
B + Ag ---> B + B + etc.
autocatalysis equation 2
The rest of Fig 2 is not of interest for our purpose. We already have all the
ingredients for our "B-bomb":
the B's resulting from autocatalysis equation 2 will "hit" other Ag's and generate new
B's. As long as there are Ag's around, the B's will keep proliferating exponentially. This
is the phase where the microscopic heterogeneity is amplified by auto-catalytic
reactions into macroscopic coherent spatio-temporal features: in the present case the
mounting of the macroscopic immune response by the organism. One should not
consider auto-catalysis exclusively our ally: after all, this is how the antigens act too, in
order to mount and localize ever-changing adaptive attacks on the integrity of our
immune "self".
In conclusion: whenever a critical mass of a foreign entity Ag shows up, there will
always be at least one of the 2^n B-cells which fits it (is Idiotypic to it). The chain reaction
autocatalysis equation 2 ensures that, very fast, a large number of B-cells Idiotypic to
Ag are produced and all the Ag destroyed.
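A toy sketch of this clonal-selection chain reaction (shapes, counts and the matching rule are illustrative simplifications):

```python
# Toy clonal-selection sketch (all numbers illustrative): B-cell clones are labelled by a
# "shape" index in {0, ..., 2**n - 1}; an antigen of a given shape triggers proliferation of
# the matching clone (B + Ag -> B + B, autocatalysis equation 2) while the antigen
# population is being destroyed.
import random

random.seed(7)
n = 10                                           # 2**n possible shapes in shape space
clones = {shape: 1 for shape in range(2 ** n)}   # start with one B-cell per specificity
antigen_shape, antigens = random.randrange(2 ** n), 1000

while antigens > 0:
    b = clones[antigen_shape]
    killed = min(antigens, b)                    # each matching B-cell destroys one antigen
    antigens -= killed
    clones[antigen_shape] = b + killed           # ...and proliferates in the process
print("antigen cleared; the matching clone expanded to", clones[antigen_shape], "cells,")
print("while a typical non-matching clone still has",
      clones[(antigen_shape + 1) % 2 ** n], "cell.")
```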
Of course there are saturation mechanisms (the B's competition for space / resources,
repeated B - Ag encounters) as well as mutations of the Eigen-Schuster type.
These additional effects, as well as the spatial distribution, lead to equations
which introduce the additional terms corresponding to (spatially distributed logistic)
or (diff eq logistic system) with stochastic coefficients.
In fact those are responsible for the scaling behavior of the type described in JD
Burgos, P Moreno-Tovar (1996), "Zipf-scaling behavior in the immune system",
Biosystems, 39(3):227-232.
Fig 9 The left (yellow discs) part of the scheme ensures (by repeated reproduction and mutations)
that there are so many different types of B-cells that, for any Antigen presented to the body, there
is at least one B-cell fitting (being Idiotypic to) it. This is the phase of creating microscopic
inhomogeneity in the B genetic space.
The gray disks in the middle part of the diagram represent a particular Antigen Ag which fits the
B-cells 2, 4 and 7.
The multi-colored right part of the figure shows the subsequent proliferation of 2,4,7 cells (and
other operations directed towards killing Ag). This is the auto-catalytic phase localizing (in the
genetic space) the defense against Ag.
http://wsrv.clas.virginia.edu/~rjh9u/clsel.html ;
Page maintained by Robert J. Huskey, last updated December 1, 1998.
Autocatalysis in a social system; The Wheat Bomb
To exemplify the formal equivalence which systems experience upon Multi-Agent
formalization let us recount the story of the introduction of agriculture in Europe.
Radiocarbon dating of artifacts associated with farming life shows that farming spread
from Anatolia (now Turkey) to northern Europe in less than 2000 years (from 8 to 6
thousands years ago). This was termed as the "Neolithic Revolution" and it was
associated among other things with the spread of the proto-indo-european language in
Europe.
Figure 2: The points represent the archeological locations where farming artifacts were found. The
colors represent the dating of those sites. One sees that the earliest points are situated in the
Middle East while the later points are moving progressively to larger distances (one can show that
the average distance increases linearly in time).
http://marijuana.newscientist.com/ns/970705/features.html New Scientist 5 Jul 1997, Ancestral
Echoes; see also How was agriculture disseminated? The case of Europe © Paul Gepts 1999
PLS103: Readings - Lecture 13 http://agronomy.ucdavis.edu/gepts/pb143/lec13/pb143l13.htm
Among the theories proposed by the various groups in order to explain the spread of the
Neolithic Revolution throughout Europe were:
- learning agriculture from neighbors (and transmitting it to other neighbors),
- sons / daughters establishing farms next to their parents' farms.
In order to differentiate between the competing theories one has to analyze the
macroscopic archeological (genetic, linguistic) data and compare them with the predictions of
the various postulated individual-scale mechanisms.
The main feature which is established, however, is that the data are incompatible
with a simple diffusion mechanism. Indeed, simple diffusion would imply an expansion
of the farming territory which grows as the square root of time, with a fuzzy boundary
separating the farming territory from the unsowed territory. In reality the speed of
expansion was constant in time and the front advanced along relatively sharp (though irregular)
boundaries.
This kind of behavior will be traced repeatedly in our examples to the discrete, auto-
catalytic character of the microscopic dynamics.
As opposed to molecular diffusion, where the effect is dominated by the behavior of the
bulk of the population (and therefore lends itself to a local averaging treatment), here
the pioneering, fore-running singular individuals (Aeneas, the Asians who crossed the
Bering Strait into America a few thousand years ago to become the ancestors of the
American Indians, Columbus, Johnny Appleseed, etc.) are the ones who impose
the speed and the aspect of the system's evolution.
More precisely, once the individuals arrive in a "virgin" territory, they multiply fast to the
level of saturating the new territory. So the colonization does not need to wait for the
immigration of a large mass of colonizers from outside.
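The difference between the two regimes can be seen in a minimal sketch (illustrative parameters): pure diffusion of a fixed number of individuals versus diffusion plus local reproduction up to a carrying capacity:

```python
# Sketch (illustrative parameters) contrasting plain diffusion with diffusion plus local
# reproduction: with pure diffusion the occupied region spreads like sqrt(t) with a fuzzy
# edge, while on-site reproduction up to a local carrying capacity produces a sharp front
# advancing at roughly constant speed -- the pattern seen in the Neolithic data.
import numpy as np

rng = np.random.default_rng(5)
L, K, steps = 2000, 10, 400

def run(reproduce):
    pop = np.zeros(L, dtype=np.int64)
    pop[0] = K                                   # the colonizers start at site 0
    fronts = []
    for t in range(1, steps + 1):
        movers = rng.binomial(pop, 0.2)          # each individual hops with probability 0.2
        to_right = rng.binomial(movers, 0.5)     # half of the hops go right,
        to_left = movers - to_right              # the other half go left
        pop = pop - movers
        pop[1:] += to_right[:-1]
        pop[:-1] += to_left[1:]
        pop[0] += to_left[0]                     # reflecting boundary at the origin
        if reproduce:
            pop = np.minimum(2 * pop, K)         # local reproduction up to capacity K
        if t % 100 == 0:
            occupied = np.flatnonzero(pop)
            fronts.append(int(occupied[-1]) if occupied.size else 0)
    return fronts

print("front position every 100 steps, pure diffusion   :", run(False))
print("front position every 100 steps, with reproduction:", run(True))
```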
This crucial role of the individual elements and events requires therefore the use of the
multi-agent modeling approach.
When expressed microscopically, the problem "reduces" formally to one very similar to
(autocatalysis equation 1) if one denotes by
- n the carrier of the new language / technology (agriculture) and by
- U the carrier of the old language / technology (hunting-gathering).
The postulated adoption of the new language and/or technology will be symbolically
denoted by:
n + U ---> n + n + etc.
autocatalysis equation 3
Our point is not that the actual macroscopic dynamics of the two systems,
autocatalysis equations 1 and 3, are identical. The point is that the same formal
elementary operations, while leading to specific complex consequences, are expressible
and can be processed within a common, universal and very elastic multi-agent
autocatalytic formalism.
Microscopic Simulation of Marketing; The Tulip Bomb
http://www.usnews.com/usnews/issue/980803/3bean.htm
The tulip mania is one of the most celebrated and dramatic bubbles in
history. It involved the rise of tulip bulb prices in 1637 to the level of
average house prices. In the same year, after an increase by a factor of 20
within a month, the market collapsed back within the next 3 months. After
losing a fortune in a similar event (triggered by the South Sea Co.) in 1720
on the London Stock Exchange, Sir Isaac Newton was quoted as saying, "I can calculate
the motions of the heavenly bodies, but not the madness of people."
http://www.morevalue.com/glossary/restrict/EMH-Bubbles.html
It might seem over-ambitious to try where Newton has failed, but let us not forget that
we are 300 years later, have big computers and have had plenty of additional opportunities to
contemplate the madness of people.
One finds that global ``macroscopic'' (and often "catastrophic") economic
phenomena are generated by reasonably simple ``microscopic'' buy and sell operations.
Much attention has been paid lately to the sales dynamics of marketable products. Large
amounts of data have been collected describing the propagation and extent of sales of
new products, yet only lately has one started to study the implications of the autocatalytic
multi-agent reaction-diffusion formalism in describing the underlying "microscopic"
process.
As examples of the ``macroscopic'' phenomena displayed by the process of product
marketing we could take:
- Sudden death - the total and almost immediate disappearance of a product due
to the introduction of a new-generation, better product.
- Wave-fronts - the geographical spreading of new products through the
markets in the shape of spatio-temporal waves [Bettelheim et al.].
- The larger profit of the second producer of a new product after the failure (or
limited success) of the first producer to introduce it [Goldenberg et al.].
- The fractal aspect of the early sales distribution as a predictor for the success
of the campaign.
- The role of negative word-of-mouth in blocking otherwise successful
campaigns.
All these phenomena can be traced to various conditions acting within the basic
framework of the autocatalytic reaction-diffusion multi-agent systems governed by
microscopic interactions of the autocatalysis equation 1-3 type.
Beyond the generic scientific and historical importance of those studies, they are of
crucial importance for the stability and viability of certain industries. An example that was
studied in detail was the "Tamagotchi" craze, which, months after taking the world by
storm, resulted in the next financial year in severe losses 4 times larger than the
projected profits.
The opposite phenomenon could happen as well, though it would never make the news:
a company closing a certain unpopular production line just months before a great
interest in the product would have re-emerged (this in fact seems to have happened to
the Pontiac Firebird). It is therefore possible that Microscopic Simulation might soon become
a marketing and production planning tool in the hands of macro-economists.
Stochastic logistic systems yield scaling and intermittent fluctuations
We have seen that one often encounters, in many disciplines, stochastic systems
consisting of many autocatalytic elements (i.e. described by autocatalysis equations 1-3).
Some of the most striking examples affecting our daily life are in the financial field.
For instance, the wealth of the individual traders [3], the market capitalization of each
firm in the market [4] and the number of investors adopting a common investment
strategy [5] are all stochastically autocatalytic in the sense that their jumps (elementary
quanta of change) from one time instant to the next are typically proportional (via
stochastic factors) to their value.
Even though such systems are rare in physics, advanced statistical mechanics
techniques do apply to them and have turned out to have crucial relevance: while the
usual partial differential equation treatment of these systems often predicts a ‘dead’,
tradeless market, the renormalization group ‘corrections’ ensure the emergence of a
macroscopic adaptive collective dynamics which allows the survival and development of
a lively robust market [6].
Stochastic autocatalytic growth [7] generates, even in very non-stationary logistic
systems, robust Pareto-Zipf power law wealth distributions [8]. It was shown [9] that if
the market is efficient, one can map it onto a statistical mechanics system in thermal
equilibrium. Consequently, the Pareto law emerges in efficient markets as universally as
the Boltzmann law holds for all systems in thermal equilibrium.
The study of markets as composed of interacting stochastic autocatalytic elements has
led to many theoretical quantitative predictions, some of which have been brilliantly
confirmed by financial measurements. Among the most surprising ones is the
theoretically predicted equality between the exponent of the Pareto wealth distribution
and the exponent governing the time interval dependence of the market returns
distribution.
In fact in all the fields described in the preceding sections, one finds the recurring
connection between the autocatalytic dynamics and the emergence of the power laws
and fractal fluctuations.
In the presence of competition, autocatalytic systems reduce to logistic
systems, which have long been recognized as ubiquitous.
Thus the observation by Montroll:
‘… all the social phenomena … obey logistic growth’
becomes the direct explanation of the older observation by Davis:
‘No one, however, has yet exhibited a stable social order, ancient or
modern, which has not followed the Pareto pattern’.
In fact, beyond Davis's statement, the stability of the Pareto law holds even for non-stable
social orders (booms, crashes, wars, revolutions, etc.), provided the markets are efficient.
Why Improbable Things are so Frequent?
Fine-tuned, irreducibly complex systems generically have a low probability of
appearing, and highly integrated / arranged systems are usually "artificial" (often man-
made) and untypical. Yet many complex systems have lately been found to be "self-
organized".
More precisely, the amount of non-generic, fine-tuned and highly integrated
systems is much larger in nature than what would be reasonably expected from
generic stochastic estimations. It often happens that even though the range of
parameters necessary for some nontrivial collective phenomenon to emerge is very
narrow (or even an isolated single "point" out of a continuum infinite range), the
phenomenon does actually take place in nature. This leads to collective objects
whose properties are not explainable by the generic dynamics of their components.
The explanation of the generic emergence of systems which are non-generic seems,
from the Multi-Agent point of view, to be related to self-catalyzing dynamics.
As suggested by the examples above, the frequency with which we encounter non-
generic situations in self-catalyzing systems is not so surprising. Consider a space of
all possible systems obtainable from certain chemical and physical parts. Even if a
macroscopic number of those systems are not auto-catalytic and only a very small
number happen to be auto-catalytic, after enough time one of the auto-catalytic
systems will eventually arise. Once this happens, the auto-catalytic system will start
multiplying, leading to a final (or far-future) situation in which those auto-catalytic - a
priori very improbable - systems are "over-represented" compared with their
"natural" probability of occurrence.
To use the immunology example, the Multi-Agent models cannot and do not propose
to explain in detail how exactly each of the B-cells which happen to fit the invading
Ag came to be produced. However, they do explain how, once produced, they
multiply to the level of a macroscopic immune response by the organism. Actually, as
in many cases, this effect (clonal selection) was identified and appreciated at the
qualitative level without the need to resort to Multi-Agent models. However, the Multi-Agent
models are useful, beyond the quantitative expression of the already existing ideas, in
formulating and proving further corrections to them: the immunological homunculus,
the emergence of cognitive functions and meaning in the immune system, etc.
Complexity in Various Domains
Short First Tour of complexity examples
The emergence of traffic jams from single cars
Traffic simulation [17,18] is an ideal laboratory for the study of complexity: the
network of streets is highly documented and the cars' motion can be measured and
recorded with perfect precision. Yet the formation of jams is not well understood to this
very day. In fact, in some of the current projects it became necessary to introduce details
not only of the car motion but also of the location of the workplace and home of the
driver and passengers, their family structure and their life-style habits. Providing all this
realistically for a population of 1 M people is an enormous computational and human
time load, and sometimes it seems that even this level of detail is not sufficient. Simpler,
but no less important, projects might be the motion of masses of humans in structured
places, especially under pressure (in stadiums as matches end, or in theaters during
alarms). The social importance of such studies is measured in many human lives.
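A minimal cellular-automaton sketch in the spirit of such traffic simulations (a standard Nagel-Schreckenberg-type rule with illustrative parameters, not necessarily the model of refs. [17,18]) already produces spontaneous jams:

```python
# Minimal cellular-automaton traffic sketch (Nagel-Schreckenberg-type rule, illustrative
# parameters): cars on a ring accelerate, brake to avoid the car ahead and randomly
# dawdle; at this density, jams appear spontaneously without any external cause.
import random

random.seed(11)
L, ncars, vmax, p_dawdle = 200, 60, 5, 0.3
positions = sorted(random.sample(range(L), ncars))
velocities = [0] * ncars

for step in range(200):
    for i in range(ncars):
        gap = (positions[(i + 1) % ncars] - positions[i] - 1) % L   # free cells ahead
        v = min(velocities[i] + 1, vmax, gap)                       # accelerate but do not crash
        if v > 0 and random.random() < p_dawdle:
            v -= 1                                                  # random dawdling
        velocities[i] = v
    positions = [(x + v) % L for x, v in zip(positions, velocities)]

stopped = sum(1 for v in velocities if v == 0)
print(f"after 200 steps: {stopped} of {ncars} cars are momentarily stopped (jammed),"
      f" mean speed {sum(velocities) / ncars:.2f}")
```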
From customers to markets
After losing a fortune in a bubble (triggered by the South Sea Co.) in 1720 on the
London Stock Exchange, Sir Isaac Newton was quoted as saying: "I can calculate the motions of the
heavenly bodies, but not the madness of people." It might seem over-ambitious to try
where Newton has failed, but let us not forget that we are 300 years later, have big
computers and have had plenty of additional opportunities to contemplate the madness of
people.
The traditional approach in the product diffusion literature is based on differential
equations and leads to a continuous sales curve. This contrasts with the results
obtained by a discrete model that represents explicitly each customer and selling
transaction [19]. Such a model leads to a sharp (percolation) phase transition [20] that
explains the polarization of the campaigns into hits and flops for apparently very similar
products, and the fractal fluctuations of the sales even in steady market conditions.
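A minimal sketch of such a percolation-type adoption model (illustrative rules and parameters, not the specific model of refs. [19,20]):

```python
# Sketch of a percolation-type adoption model (illustrative rules): each customer on a
# grid is a potential adopter with probability q and adopts as soon as a neighbor has
# adopted; the campaign is seeded along one edge.  Small changes of q around the
# percolation threshold separate "flops" from "hits".
import random

random.seed(2)

def campaign(q, L=100):
    potential = [[random.random() < q for _ in range(L)] for _ in range(L)]
    adopted = {(0, j) for j in range(L) if potential[0][j]}   # seeded by the initial ads
    frontier = list(adopted)
    while frontier:
        i, j = frontier.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and potential[ni][nj] and (ni, nj) not in adopted:
                adopted.add((ni, nj))
                frontier.append((ni, nj))
    return 100.0 * len(adopted) / (L * L)

for q in (0.50, 0.59, 0.65):
    print(f"potential-adopter fraction q={q:.2f} -> {campaign(q):5.1f}% of the market adopted")
```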
The emergence of financial markets from investors
Financial economics has a long history of using precise mathematical models to
describe market behavior. However, in order to be tractable, the classical market
models (the Capital Asset Pricing Model, the Arbitrage Pricing Theory, the Black-Scholes
option valuation formula) made assumptions which are found to be invalid by
behavioral finance and market behavior experiments. By using the direct computer
representation of the individual investors' behavior, one can study the emergence of the
(non-equilibrium) market dynamics in the presence of completely realistic conditions.
The simulations performed until now [21][22] have already suggested generic universal
relationships which were abstracted and then taken up for theoretical study in the
framework of stylized models.
The emergence of the Immune Self from immune cells
The immune system is a cognitive system [23]: its task is to gather antigenic
information, make sense out of it and act accordingly. The challenge is to understand
how the system integrates the chemical signals and interactions into cognitive modules
and phenomena. Lately, a few groups have adopted the method of representing in the
computer the cells and enzymes believed to be involved in an immune disease,
implementing in the computer their experimentally known interactions and reactions and
watching the emergence of (auto-)immune features similar to the ones observed in
nature [24]. The next step is to suggest experiments to validate / amend the postulated
mechanisms.
The emergence of Perceptual Systems
The micro-to-macro paradigm can be applied to a wide range of perceptual and
functional systems in the body. The main steps are to find the discrete microscopic
degrees of freedom, their elementary interactions and to deduce the emergent
macroscopic degrees of freedom and their effective dynamics. In the case of the visual
system [25], this generic program is quite advanced. By using a combination of
mathematical theorems and psychophysical observations, one identified the
approximate, ad-hoc algorithms that the visual system uses to reconstruct 3D shapes
from 2D image sequences. As a consequence, one predicted specific visual illusions
that were dramatically confirmed by experiment [26]. This kind of work can be extended
to other perceptual systems and taken in a few directions: guidance for medical
procedures, inspiration for novel technology, etc.
Microscopic Draws and Macroscopic Drawings
The processes of drawing and handwriting (and most of the thought processes) look
superficially continuous and very difficult to characterize in precise terms. Yet lately it
was possible to isolate very distinct discrete spatio-temporal drawing elements and to
put them in direct relation to discrete mental events underlying the emergence of
meaningful representation in children [27]. The clinical implications, e.g. for (difficulties
in) the emergence of writing, are presently being studied. The realization that there are
intermediate (higher-than-neuron) scale "atoms" in the cognitive processes is very
encouraging for the possibility of applying complexity methods in this field.
Conceptual Structures with Transitive Dynamics
Dynamical networks were mentioned as a candidate for a “lingua franca” among
complexity workers. The nodes are fit to represent system parts / properties while the
links can be used to represent their relationships. The evolution of objects, production
processes, ideas, can then be represented as operations on these networks.
By a sequence of formal operations on the initial network one is led to a novel network.
The changes enforced on the network structure amount to changes in the nature of the
real object. The sequence of operations leading to novel objects is usually quite simple,
mechanical, well defined and easy to reproduce.
It turns out that a handful of universal sequences (which have been fully documented)
are responsible for most of the novelty emergence in nature. Incidentally, ideas
produced by a computer that applied one of these sequences obtained (from double-
blind human judges) higher inventiveness marks than the ideas produced by (a second group
of) humans [28].
The basic dynamical element in this conceptual dynamics seems to be “the diagonal
link” or the “transitive connection” (the emergence of a link between A and C if there are
already links between A and B and between B and C). This object has been found in
recent measurements to be highly correlated with crucial conceptual events as identified
by competent humans. Moreover the density of “diagonal links” has been found to be
strongly correlated with the salience of the text [29].
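A small sketch of the "diagonal link" operation (the network and node names are purely illustrative):

```python
# Sketch of the "diagonal link" (transitive connection) operation described above: given
# an existing set of links, add a link A-C whenever links A-B and B-C are already present.
# The example network and its node names are illustrative only.
def diagonal_links(links):
    """Return the new transitive links implied by one closure step."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    new = set()
    for b, around in neighbors.items():
        for a in around:
            for c in around:
                if a < c and (a, c) not in links and (c, a) not in links:
                    new.add((a, c))          # the "diagonal" A-C link through B
    return new

links = {("engine", "car"), ("car", "road"), ("road", "city")}
print("diagonal links added:", diagonal_links(links))
```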
Physics
Physics emerged in the seventeenth century under the name "Mathematical Principles
of Natural Philosophy", as the cutting edge of the reductionist effort to understand
Nature in terms of simple fundamental laws.
Natural phenomena which could not (yet) be reduced to the fundamental laws were
defined as belonging to other sciences and research fields: chemistry, biology,
psychology, sociology, philosophy, etc.
This definition has lately been challenged by 2 parallel developments:
- the reductionist belief that understanding the ultimate microscopic laws is
sufficient (or necessary) in order to understand their complex macroscopic
consequences has become untenable even within the boundaries of Physics;
- the phenomena originating in the other fields became tractable by "hard"
techniques which use quantitative modeling to get precise quantitative predictions from
simple fundamental laws. Such research projects require the use of technical
background, mathematical language, theoretical paradigms, conceptual structures,
relevance criteria, research attitudes, scientific ethos and academic education specific
until now only to Physics.
Consequently, Physics has the chance to become in the 2000's the leading edge of
most of the efforts to solve the newly emerging scientific problems.
If the physicists of the 2000's display the same courage, creativity, passion, and
imagination as the generation of Planck, Einstein and Bohr in reshaping their field, then
the beginning of the 21st century is guaranteed to be as successful for Physics as the
beginning of the 20th. The present meeting is a small step to prepare the ground for it.
LINK To “Complex BIOLOGY” (by Gerard Weisbuch and…)
http://shum.huji.ac.il/~sorin/report/Biology.doc
The Complexes of the Immune Self
The immune system (IS) is a complex system whose function is critical to health: The IS
protects the individual from infectious diseases, and possibly from tumors, yet the IS
can cause autoimmune diseases, allergies and the rejection of grafted organs. Thus,
understanding how the IS works and how it can be controlled, turned specifically on or
off, is critical to health.
Indeed, the recent emergence of new infectious agents, like HIV, and the spread of
antibiotic resistance require new approaches to vaccine development. The increase in the
incidence of autoimmune diseases also calls attention to the need for a better
understanding of the IS. Hence, any increased understanding of the IS might have
important applications.
On the basic level, study of the IS for the past half century has succeeded in
characterizing the key cells, molecules, and genes that combine to form the IS. The
successful reduction of the IS to its microscopic component parts, however, has not
provided us with the ability to understand the macroscopic behavior of the IS. The
problem is that the reduction of the IS to its fundamental building blocks has not clarified
how the complex behavior of the IS emerges from the interactions of these building
blocks. The IS, for example, exhibits the cognitive faculties of self-organization,
learning, memory, interpretation, and decision making in the deployment of its forces,
for good or bad. Indeed, the IS provides an outstanding example of the emergence of
unexpectedly complex behavior from a relatively limited number of components
interacting in known patterns. In short, understanding the emergence of immune
cognition is an intellectual challenge with the potential to solve very practical problems.
LINK to “COMPLEX IMMUNOLOGY” (By Yoram Louzoun)
http://shum.huji.ac.il/~sorin/report/Complex%20immunology.doc
LINK to “SOCIAL SCIENCE” (by Gerard Weisbuch and JP Nadal)
http://shum.huji.ac.il/~sorin/report/Social%20Science.doc
LINK to “COGNITIVE SCIENCE” (by Gerard Weisbuch and JP Nadal)
http://shum.huji.ac.il/~sorin/report/Cognitive%20Science.doc
LINK to “Social Psychology” (by Andrzej Nowak)
http://shum.huji.ac.il/~sorin/report/Nowak-Soc-Psych.doc
LINK to “Minority GAME” (by YC Zhang)
http://shum.huji.ac.il/~sorin/report/Zhang-Minority.rtf
Economics
Complexity applications in Economics
The objective of complexity work in financial economics is to investigate the emergence
of complex macroscopic market dynamics out of the individual traders’ microscopic
interactions [1]. The main tool is complex multi-agent modeling.
What can complexity offer economics?
The multi-agent modeling approach permits a departure from the analytically fortified
towers of rational expectations equilibrium models. It allows the investigation of markets
with realistically imperfect investors, rather than markets composed of perfectly rational
‘homo economicus’ agents. In the words of Economics Nobel Laureate Harry Markowitz:
‘… Microscopic Simulation of Financial Markets [14] points us towards the future of
financial economics. If we restrict ourselves to models which can be solved analytically,
we will be modelling for our mutual entertainment, not to maximize explanatory or
predictive power.’
Rational expectations equilibrium theory is to be complemented by stochastic models of
agents with limited information, bounded rationality, subjective and limited memory, and
finite computational capability. In return, these agents will be endowed with learning,
evolutionary and natural selection features [15–17].
Very dramatic effects have been studied in systems where the new information/product/
strategy, rather than being universally broadcast, flows only among individuals that
are in direct binary interaction. In this case, there exists a ‘critical density’ of potential
joiners below which the information/novelty/trend does not propagate throughout the
system [18]. To get an idea of the strength of this effect, note that there are well-known
conditions [19] where more than half of the individuals are potential joiners and yet the
trend does not reach even 1% of its potential adopting community. This effect is
currently being confronted successfully with real market data.
What does Economics offer to complexity?
The stock market is the largest, best-tuned, most efficient and best-maintained
emergence laboratory in the world, with the densest and most precise measurements
performed, recorded, transmitted, stored and documented flawlessly in extremely
reliable databases. Add to this its relevance to the most money-saturated
human activity in the world and we obtain a vast and very promising arena for our
drive to understand.
As opposed to elementary particle field theory, in which the microscopic ‘bare’
interactions are to be inferred from the emerging dynamics, or to cosmology where the
emerging macroscopic features are unknown at the largest scales, in financial markets
both the microscopic operations and the macroscopic trends are fully documented. The
old dream of Boltzmann and Maxwell of following in detail the emergence of
macroscopic irreversibility, generic universal laws and robust collective features from
simple microscopic elementary interactions can now be fully realized with the help of
this wonderful immense thermodynamic machine, where the Maxwell demons are
human-sized and (Adam Smith’s [20]) invisible hand is more visible than ever.
Are we in danger of over-simplifying?
Quite the contrary! The multi-agent modeling techniques allow the long awaited injection
of behaviorally realistic ‘souls’ [21] into the ‘rational’ financial agents. Rather than
‘dehumanizing’ the trader models, we introduce the possibility of integrating into them
the data from a wide range of neighboring behavioral sciences.
The motivation of these efforts is not a belief that everything can be reduced to physics,
to mechanics. The objective is to identify those things that can be reduced, and expose
those that cannot be reduced.
The macroscopic financial lab which is the stock/futures/money market may be
considered a macroscopic human-behavior lab capable of dealing round the clock,
simultaneously, with millions of subjects around the world. Those subjects are naturally
motivated and act in natural yet highly structured conditions. All subjects around the
world are exposed to very uniform stimuli (the information appearing on their standard
trader screens) in identical procedural order. Their decisions (buy–sell orders) are
taken freely, and their content and relative timing are closely monitored and
documented in a way which makes the data immediately available for massive
computer processing and for comparison with the theoretical (or microscopic simulation)
predictions.
Practical benefits
Understanding, monitoring and managing the dynamics of financial markets is of crucial
importance: the lives of most individuals in Western society depend on these dynamics.
Market dynamics affect not only investments and savings in pension plans, but also the
real business cycle, employment, growth and ultimately the daily functioning of our
society.
Understanding and regulating the dynamics of financial markets is in some ways similar
to predicting and monitoring weather or road traffic, and at least as important. Several
groups have attempted to develop prediction capabilities for the financial
markets; however, universally accepted successful methods or results have not been
published so far. Some groups claim to have entrusted their know-how to private,
profit-oriented companies. Fortunately this has not yet led to macroscopic disasters, but
it is certainly a matter of top priority that the public and the authorities in charge of
economic stability have at their disposal standard, reliable tools of monitoring,
analysis and intervention. Settling for less than that would be like leaving traffic control
to the trucking companies.
The next objective should be to create the human, methodological and technical
capabilities to transform the monitoring, prediction and regulation of stock markets into a
reliable activity at a level comparable to the current capabilities of estimating urban
traffic: one cannot predict individual car accidents but one can predict, based on the
current data, the probable behavior of the system as a whole. Such a prediction ability
allows optimization of system design as well as on-line intervention to avert unwanted
jams etc. Moreover, one can estimate the effect of unpredictable events and prepare
the reaction to them.
Importance of Financial Market Studies for Multi-Agent Complexity
The stock market is the ideal space for the strategic opening to a new kind of science,
such as the one articulated and led by Anderson [25] for the last three decades: it
offers a perfectly rigorous experimental and theoretical research framework while
avoiding the artificial traditional boundaries (and the resulting restrictions and
limitations) between the over-populated feuding kingdoms of the ‘exact’ sciences
continent.
The successful study of stock market dynamics requires a synthesis of knowledge and
techniques from different domains: financial economics, psychology, sociology, physics
and computer science. These fields have very different ‘cultures’: different objectives,
criteria of success, techniques and language. Bringing people from these disciplines
together is not enough—a profound shift in their way of thinking is necessary.
LINK to “ECONOPHYSICS” (By Dietrich Stauffer)
http://shum.huji.ac.il/~sorin/report/Econophysics.doc
Spatially distributed social and economic simulations
One of the earliest classical models alluding to such a program was the
Segregation model of Thomas Schelling. This model is implemented
and described on the Internet Education Project page of the Carl Vinson
Institute of Government http://iep.cviog.uga.edu:2000/SIM/intro.htm:
“In the early 1970s, Thomas Schelling created one of the first agent-based
models that was to lead to a new view of the underlying causes of an
important social indicator. Schelling was interested in understanding why
residential segregation between races appears to be so persistent in spite of
a major change having occurred in the stated preferences of most people
regarding the percentage of other-race neighbors that they would be
comfortable living nearby.
Schelling created a simple model in which a given simulated agent (or set
of agents) prefers that some percentage of her neighbors be of her own
“color.” If an agent is residing on a square or cell and the agent does not
have at least that percentage of her own kind nearby, the agent moves to
another square or cell. What Schelling discovered was that even when the
agents in the simulation had only a weak preference for nearby agents being
of the same “color,” the result after several moves (or chances for each agent
in turn to move to another square) was a fairly high level of segregation. […]
The basic rule in this model is for agents to keep moving to an available
space and to move to the space that has the highest percentage of agents of
one’s own color. Once one is surrounded by a certain required percentage of
agents of one’s own type, the agent ceases to move. However, it is often the
case that as other agents move to meet their needs, even agents who formerly
were satisfied with their place/neighborhood will have to move on to another
place once one of their like-kind neighbors has moved. […]
…What is interesting from a social dynamics perspective is the effect that
even a rather slight preference for neighbors of one’s own kind can have on
the residential segregation of the agents. Sometimes such a slight preference
will take a number of moves to take effect. When the agents’ preference levels
for “one’s own color” become stronger, it will often take a large number of
moves for the full segregation effect to play out. …”
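As an illustration, the following is a minimal Python sketch of a Schelling-type dynamics. It is a simplified variant of the rules quoted above (unhappy agents jump to a random empty cell rather than to the best available one), and the grid size, vacancy fraction and the 30% preference threshold are assumed values chosen only for the demonstration.

import random

SIZE, EMPTY_FRAC, PREFERENCE = 30, 0.1, 0.3   # 30x30 torus, 10% empty cells, want >= 30% like neighbors

def neighbors(grid, x, y):
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    me = grid[x][y]
    if me is None:
        return False
    occupied = [n for n in neighbors(grid, x, y) if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < PREFERENCE

# random initial mixture of two "colors" plus empty cells
grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

for sweep in range(50):                       # a few sweeps are enough to see clustering
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None   # move the unhappy agent to an empty cell
        empties.append((x, y))

# segregation measure: average fraction of like-colored agents among occupied neighbors
like = [sum(n == grid[x][y] for n in neighbors(grid, x, y) if n is not None) /
        max(1, sum(n is not None for n in neighbors(grid, x, y)))
        for x in range(SIZE) for y in range(SIZE) if grid[x][y] is not None]
print(sum(like) / len(like))   # typically far above the 30% individual preference

Even with this crude move rule, the final level of like-neighbor clustering typically ends up well above the preference each individual agent actually holds, which is the point of the quoted passage.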
A more modern, developed and sophisticated reincarnation of those ideas is
the Sugarscape environment described by Epstein and Axtell in the book
"Growing Artificial Societies" (Brookings Institution, 1996). The model has
been widely covered in the media in recent years; e.g. in the New
Scientist of 4 October 1997, Giles Wright writes in the article "Rise and Fall"
(http://www.newscientist.com/ns/971004/features.html):
In Sugarscape, dots representing people or families move around a digital
landscape in search of food--sugar. Whether they live or die depends on
whether they find enough food to satisfy their "metabolic" needs. The dots, or
"agents", are given a range of abilities--such as how far they can "see" over
their virtual landscape when searching for food--and are programmed to obey
certain rules. In the most basic scenario, the agents look for the richest source
of sugar, and go there to eat it. But they are in competition with each other and
with groups of agents programmed with different rules and abilities. By
modifying the rules governing how the agents interact, Axtell and Epstein can
make them either fight or cooperate. They can allow the agents to accumulate
wealth, excess sugar, and measure their "health" by how much sugar they eat.
And by introducing mating, the researchers make the agents pass on their
abilities--and the rules they obey--to their offspring.
With just a few rules and conditions, the agents in Sugarscape begin to mimic
aspects of real life. For example, there is a maximum number of agents that can live
in any one model, which depends on the amount of sugar available. This relates to
the idea that the Earth has a "carrying capacity"--the density of population it can
sustain. When the level of sugar fluctuates between areas--effectively creating
"seasons" --the agents migrate from area to area. Axtell and Epstein have also seen
the equivalents of tribal formation, trade, and even hibernation. Similarly, extinction,
or the end of a civilisation, might be an outcome of the agents following a particular
rule. "
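For concreteness, here is a minimal, illustrative Python sketch in the Sugarscape spirit. It implements only a drastically reduced rule set (move to the richest visible cell, harvest, metabolize, die when the sugar store runs out), and the grid size, vision ranges, metabolisms and regrowth rate are assumed values, not Epstein and Axtell's parameters.

import random

SIZE, N_AGENTS, STEPS = 50, 200, 100
MAX_SUGAR, REGROWTH = 4, 1

sugar = [[random.randint(0, MAX_SUGAR) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "vision": random.randint(1, 6), "metabolism": random.randint(1, 4),
           "store": 10} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for a in list(agents):
        # look along the four lattice directions, up to `vision` cells away
        candidates = [(a["x"], a["y"])]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for r in range(1, a["vision"] + 1):
                candidates.append(((a["x"] + r * dx) % SIZE, (a["y"] + r * dy) % SIZE))
        a["x"], a["y"] = max(candidates, key=lambda c: sugar[c[0]][c[1]])
        # harvest the sugar, pay the metabolic cost, die if the store is exhausted
        a["store"] += sugar[a["x"]][a["y"]] - a["metabolism"]
        sugar[a["x"]][a["y"]] = 0
        if a["store"] < 0:
            agents.remove(a)
    # sugar slowly grows back, up to its maximum
    sugar = [[min(MAX_SUGAR, s + REGROWTH) for s in row] for row in sugar]

print("surviving agents (an emergent 'carrying capacity'):", len(agents))

Even this stripped-down version reproduces the "carrying capacity" effect mentioned in the quote: the surviving population settles at a level set by the sugar regrowth, not by the initial number of agents.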
LINK to “MANAGEMENT quotations on complexity”
http://shum.huji.ac.il/~sorin/report/Management.doc
LINK to “COMPLEXITY OF RISK” (by Peter Richmond)
http://shum.huji.ac.il/~sorin/report/Complexity%20of%20Risk.doc
LINK to “INFORMATION Complexity” (Contribution by Sorin Solomon
and Eran Shir) http://shum.huji.ac.il/~sorin/report/Information.doc
The Social life of computers
Beyond the surprises in the behavior of the individual computers, much more far-reaching
surprises lurked from another corner: already 30 years ago, Physics Nobel
Prize laureate Phil Anderson (1972) wrote a short note called “More is Different”. The
main point was to call attention to the limitations of thinking about a collection of many
similar objects in terms of some “representative individual” endowed with the sum, or
average, of their individual properties. This “mean field” / continuum / linear way of
thinking seems to fail in certain crucial instances. In fact, these are exactly the
instances that emerge as the conceptual gaps between the various disciplines. Indeed, the
great conceptual jumps separating the various sciences and the accompanying
mysteries connected to the nature of life, intelligence and culture arise exactly when “More
Is Different”. It is there that life emerges from chemistry, chemistry from physics,
consciousness from life, social consciousness / organization from individual consciousness, etc.
The physical, biological and social domains are full of higher functions that emerge
spontaneously from the interactions between simpler elements. The planetary
computer infrastructure has by now acquired a very large number of elements. It is
therefore not unexpected that this bunch of man-made artifacts would develop a mind
of its own. In fact we do perceive nowadays that the very basic rules under which
computers were supposed to function are being affected. Are we facing a “More is
Different” boundary? It might be too early to decide, but let us make a list of the ways
the rules of the game have changed:
- Instead of being closed systems, the computers came to be exposed to a rapidly
changing environment: not just some environment dynamics, but a continuous change
in the very rules that govern the environment's behavior.
- Some of the environment consists of other computers, so a potentially infinite
endogenous loop of adaptation, self-reaction and competition is sparked.
- Instead of a deterministic dynamics, the system is subjected to noise sources
(Shnerb et al. 2001) that do not even fulfill the right conditions to allow rigorous proofs
(e.g. in many of the cases, far from being “normal”, the noise probability distribution
does not even have a finite standard deviation).
- Instead of worrying about the integrity of one project at a time, one has to face
the problems related to the parallel evolution of many interacting but independently
owned projects, carried out by different teams.
- Instead of being clustered in a well-defined, protected, fixed location (at fixed
temperature :-), the computers started to be placed in various uncoordinated locations
and, lately, some of them are mounted on moving platforms.
- Instead of a single boss who decided which projects go ahead, with what
objectives and with what resources, the present environment is driven by the collective
behavior of masses of users whose individual motivations, interests and resources are
heterogeneous and unknown. Unfortunately, psychology was never one of the strengths
of computer people…
In conclusion, the main change to which we are called to respond is the passage of
computer engineering from deterministic, top-down designed and controlled systems
to stochastic, self-organizing systems resulting from the independent (but interacting)
actions of a very large number of agents.
The main conceptual shifts related to this:
- The fast evolution of the rules / players' behavior makes classical AI and game theory
tools inapplicable => relying on adaptive co-evolution becomes preferable (even
though in general there are no absolute, mathematically rigorous success guarantees).
- Instead of well-defined asymptotic states obtainable by logical trees etc., the pattern
selection depends on spatio-temporal self-organization and frozen accidents, and a Nash
equilibrium might not exist or might be unreachable in finite time.
- Inhomogeneity and lack of self-averaging (Solomon 2001) make the usual statistics in
terms of averages and standard deviations impractical: the resulting (Pareto-Zipf power-law)
distributions (Levy and Solomon 1996) might not have, strictly speaking, an average (and
in any case not one close to the median), nor a finite standard deviation (see the sketch below).
- Systems display phase transitions: dramatic global system changes triggered by
small parameter changes.
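A minimal Python sketch of the point about distributions without a finite mean; the tail exponent 0.8 and the sample sizes are assumed values chosen only for illustration.

import random

ALPHA = 0.8            # tail exponent; for ALPHA <= 1 the theoretical mean is infinite

def pareto_sample(n, alpha=ALPHA):
    # inverse-CDF sampling of P(X > x) = x**(-alpha) for x >= 1
    return [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

for n in (10**2, 10**4, 10**6):
    xs = sorted(pareto_sample(n))
    mean, median = sum(xs) / n, xs[n // 2]
    print(f"n = {n:>7}   sample mean = {mean:10.1f}   median = {median:5.2f}")
# the median stabilizes near 2**(1/ALPHA) while the sample mean keeps growing with n,
# so "average behavior" is not a meaningful summary of such a system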
The situation seems really hopeless. But a ray of light appears:
in their encounters with the real world, CS people occasionally came across quite
accomplished (and sometimes attractive) entities which were not designed (let alone
rationally) by anybody.
So maybe there is another way to create things besides constructing an exhaustive
logical tree that prescribes in detail the system's action in each given situation.
For lack of another name, one could call it “natural evolution”.
There is a long list of prices to pay for giving up the “old way”: the system might not
fulfill well-defined user specifications, might not be fault-free and might not be
predictable, etc.
The Identity Crisis of the Single Computer
When Turing defined the conceptual framework for information-processing machines, he
limited their scope to the context of rational, deterministic, closed systems. For a long
while the computer engineers and computer scientists were proud of the absolute
conceptual and practical control that they had over their algorithms and machines. Even
when randomness appeared, it was under well-defined rules that still allowed rigorous
proofs and rigorous control. Most project managers spent most of their time
eliminating any possibility that the system would do something other than what was
explicitly prescribed by the user specifications.
As long as the main request of society from the computing machines was just to
compute, this perfect order, predictability, reproducibility and their closed, self-contained
nature constituted the very backbone of product integrity. However, at some stage,
computers became so useful, cheap, powerful and versatile that the customers required
their massive integration into real-life situations. Upon throwing the computers into
real life, one opened a Pandora's box full of surprises:
- Computers proved to be inexplicably poor at tasks that the AI pioneers had
(recurrently!) predicted to be quite at hand.
Example: Turing stated more than 50 years ago the famous Turing test: an interrogator
communicates in writing with a computer and, respectively, with a woman. The
objective of the computer is to masquerade as a woman. Turing's hope was (Turing 1950):
“… in about fifty years’ time … interrogator will not have more than 70 percent chance
of making the right identification …”
Obviously the prediction is far from being fulfilled at the present time.
- Humans, too, turned out to be not so good at being … human.
Example: Men fail the above Turing test in a proportion of 80%. Therefore a computer
which would succeed 70% of the time, as predicted by Turing, would in fact be
super-human (or at least super-man) (Adam and Solomon 2003).
- Computers could do better than people at tasks that humans have claimed for
themselves as intrinsically and exclusively human.
Example: By mimicking on the computer the common structural elements of past successful
ideas, one was able to construct idea-generating algorithms. The computer-generated
ideas were ranked (by humans) higher than ideas generated by (other) humans
(Goldenberg et al. 1999).
- Computers turned out to do the same tasks that humans do in very different ways.
Example: By analyzing psychophysical observations mathematically, one was able to
infer the approximate, quick-fix, ad hoc algorithms that the visual system uses to
reconstruct 3D shapes from 2D image sequences. These are totally different from
the ideal, mathematically rigorous 3D reconstruction algorithms. In fact, using this
knowledge one was able to predict specific visual illusions (which were dramatically
confirmed by subsequent experiments) (Rubin et al. 1995).
LINK to “DESIGN to EMERGE” (by Eran Shir and Sorin Solomon)
http://shum.huji.ac.il/~sorin/report/Designed%20to%20Emerge.doc
From Integrated Robot Flocks to Dividuals
The idea that a collection of objects sharing information can be more efficient than a
more intelligent single object has tremendous potential.
As opposed to humans, robots can directly share and integrate visual and other
non-linearly structured information. They do not suffer from the human need to first
transform the information into a sequence of words. Moreover, the amount, speed and
precision of the data they can share are virtually unlimited.
As in the story of the blind men feeling an elephant, the communication channels typical of
humans are not sufficient to ensure fast, precise and efficient integration of their
knowledge / information / intelligence. By contrast, robots, with their capability to
determine their relative positions exactly and to transmit in detail the raw data “they see”,
are perfectly fit for the job.
So, in such tasks, while
1 human > 1 robot,
one may have
100 humans < 100 robots.
This may lead to the concept of Integrated Robot Flocks, which is much more powerful
than the biologically inspired ants' nest metaphor because the bandwidth of information
sharing is much larger. Rather than learning from biology, we may learn here
how to avoid its limitations.
For instance, rather than thinking of the communicating robots as an integrated flock,
one can break with the biology (and semantics) inspiration and think in terms of the
divided individual (should one call them “dividuals”?):
Unlike biological creatures, the artificial ones do not have to be spatially connected: it
may be a great advantage to have a lot of eyes and ears spread over the entire hunting
field. Moreover, one does not need to carry over the reproduction organs when the
teeth (mounted on legs) go to kill the prey (the stomach, too, can be brought in only later
on, in case there is a kill).
Of course, a good idea is to steal from time to time the control of somebody else's
wings (as a form of non-Darwinian evolution). Contrast this with the usual way real animals
exploit one another: they are constrained by their biological reality to first degrade the
wings of the prey to simple molecules, at which stage it is too late to use them for flying.
The anthropomorphic-biological grounding here is a liability that one has to free oneself
from, rather than a source of creative inspiration. All of the above is true both for robots
(acting in the real, physical “hardware” world) and for “bots” acting as software
creatures.
Encounters of the Web kind
The emergence of a “thinking brain” by the extension of a distributed computerized
system to an entire planet is a recurring motif in science-fiction stories and as such a bit
awkward for scientific consideration. Yet, if we believe that a large enough collection of
strongly interacting elements can produce more than their sum, one should consider
seriously the capabilities of the web to develop emergent properties much beyond the
cognitive capabilities of its components. As in the case of the Integrated Robot Flocks,
the relative disadvantage of the individual computer vs. the individual human is largely
compensated by the “parapsychological” properties of the computers: any image
perceived by one of them at one location of the planet can be immediately shared as
such by all. Moreover, they can share their internal state with a precision and candor
that even married human couples can only envy.
A serious obstacle in recognizing the collective features emerging in the web is the
psychological one: people have a long history of insensitivity to even slightly different
forms of “intelligence”. In fact various ethnic / racial groups have repeatedly denied one
another such capabilities in the past. Instead of trying to force upon the computers the
human version of intelligence (as tried unsuccessfully for 30 years by AI), one should be
more receptive to the kind of intelligence that the collections of computer artifacts are
“trying” to make emerge.
A useful attitude is to approach the contact with the web in the same way we would
approach contact with an extraterrestrial, potentially intelligent being. A complementary
attitude is to study the collective activity of the web from a cognitive point of view, even
to the level of drawing inspiration from known psychological processes and structures.
LINK to “Making the Net Work” (by Eran Shir and Sorin Solomon)
http://shum.huji.ac.il/~sorin/report/Making%20the%20Net%20Work.doc
LINK to “The Introspective Internet”
(Contribution by Sorin Solomon and Scott Kirkpatrick)
http://shum.huji.ac.il/~sorin/ccs/Dimes2003-AISB.pdf
THE IDEA
The idea is to have the Net measure itself, simulate itself and foretell its own future.
Instead of using a single machine to study the Internet, one will use (potentially) all the
machines on the Internet.
The study of the Internet gives us a first opportunity to use the object of study itself as
the simulation platform.
THE METHOD
We proposed to use the Internet as the main infrastructure that will simulate itself and
its future generations: recruit millions of nodes spread over the entire Net that
- perform local measurements of their Internet neighborhood;
- participate as message routing/processing elements in collective Internet experiments.
One can hope to harness for the task hundreds of thousands and even millions of
users and nodes at all hierarchy levels, who will contribute some of their CPU power,
Internet bandwidth and, most importantly, their topological / geographical whereabouts
on the net to the effort of monitoring and studying the current Internet on the one
hand, and simulating and studying its future on the other.
This can lead to the largest computer simulator ever, one that will be many scales larger
than the second largest. But this simulator will not merely be a very large one; it
will also be extraordinary, since each of its elements will be unique:
each element will bring not only generic resources such as cycles, memory or
bandwidth, it will also be a true representative of its local net neighborhood. In that way,
it will create a strong connection between the simulator and its object of
simulation, i.e. reality.
The Tool: DIMES
(Distributed Internet MEasurement and Simulation)
DIMES proposed the creation of a distributed platform that enables:
- global-scale measurement of the Internet graph structure, packet traffic statistics and
demography;
- simulation of Internet behavior under different conditions;
- simulation of the Internet's future.
DIMES proposed creating a uniform research infrastructure to which programmable
experiment nodes can be added spontaneously. Using this layer, large-scale statistics
aggregation efforts will be conducted. We will conduct simulations that study both
- the reaction of the current Internet to various irregular situations and
- the usability of concrete new ideas, algorithms and even physical net growth schemes.
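The aggregation idea can be conveyed by a minimal Python sketch (not the actual DIMES software; the node names and reported links below are invented examples): each participating node reports the links it sees from its vantage point, and a coordinator merges the partial views into one global graph.

def merge_reports(reports):
    """Merge per-node edge lists into a single global adjacency map."""
    graph = {}
    for node, local_edges in reports.items():
        for a, b in local_edges:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

reports = {
    "agent_1": [("A", "B"), ("B", "C")],   # what agent 1 sees from its neighborhood
    "agent_2": [("B", "C"), ("C", "D")],   # an overlapping view from another vantage point
    "agent_3": [("D", "E")],
}
global_graph = merge_reports(reports)
print({node: sorted(peers) for node, peers in sorted(global_graph.items())})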
Networks
The Language of Dynamical Networks
The unifying power of the Complexity view is expressed, among other things, in the
emergence of a common language which allows quick, effective and robust /
durable communication and cooperation between people with very different
backgrounds [10]. One of these unifying tools is the concept of the dynamical network.
Indeed, one can think about the “elementary” objects (belonging to the “simpler”
level) as the nodes of the network and about the “elementary” interactions between
them as the links of the network [11]. The dynamics of the system is then
represented by (transitive) operations on the individual links and nodes
((dis)appearance, substitutions, etc.) [12].
The global features of the network correspond to the collective properties of the
system that it represents: (quasi-)disconnected network components correspond to
(almost-)independent emergent objects; scaling properties of the network
correspond to power laws; long-lived (meta-stable) network topological features
correspond to (super-)critical slowing-down dynamics. In this way, the mere
knowledge of the relevant emerging features of the network might be enough to
devise methods to expedite desired processes by orders of magnitude [13] (or to
delay or stop unwanted ones). The mathematical tools implementing this are
presently being developed and include multi-grid and cluster algorithms.
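As a small illustration of this language, the following Python sketch represents a toy system as an evolving graph and reads its connected components off as candidate emergent objects after each elementary link operation. The link history is an invented example, not data from [11-13].

def components(nodes, edges):
    """Return the connected components of an undirected graph."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adjacency[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

nodes = list("ABCDEF")
edges = set()
history = [("add", ("A", "B")), ("add", ("B", "C")), ("add", ("D", "E")),
           ("add", ("E", "F")), ("remove", ("B", "C"))]
for op, link in history:
    if op == "add":
        edges.add(link)
    else:
        edges.discard(link)
    # each elementary operation reshapes the candidate "emergent objects"
    print(op, link, "->", [sorted(c) for c in components(nodes, edges)])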
Percolation models
Let us consider a regular, arbitrarily large square lattice. Each site i represents a
"customer". The basic model considers only one product, of fixed quality quantified by a
real number q between 0 and 1. The customers have different expectations, represented
too by numbers p(i) between 0 and 1. The condition for customer i to buy the
product is that one of his/her neighbors bought the product and that p(i) < q. If p(i) < q
but no neighbor has bought the product, i is a "potential buyer": if he only knew
about the product, its quality q would satisfy his expectations p(i).
However, he is not (yet?) an actual buyer, because in the absence of a direct contact
with another customer who bought the product, he "doesn't know" enough about the
quality of the product and therefore would not buy it at this time.
If one starts a campaign by selling the product to a particular customer j, what will the
resulting amount of sales be?
Obviously, this depends on q and on the distribution of p(i).
If one makes the simple assumption that the p(i)'s are independent random numbers
distributed uniformly between 0 and 1, then one has a surprise: even if q = 0.59, which
means the product satisfies the expectations of 59% of the customers, its actual sales
will be less than 1%!
For people familiar with percolation theory, the reason is obvious: sales take place only
within clusters of contiguous sites. If the initial site belongs to a small cluster of potential
buyers surrounded by customers with p > 0.59, then the only sales will be within that
cluster, even if there are "an infinity" of other disconnected clusters "out there". In fact, in
the case of the two-dimensional square lattice it is known that if the density of "potential
buyers" is less than pc = 0.593... then all the contiguous clusters are finite, even if the
size of the system is taken to infinity. Consequently, for q < 0.593... not even a finite
fraction of the potential market is realized. For q above 0.593... a finite fraction of the lattice
becomes actual buyers: the largest cluster is "infinite" (i.e. of the order of the system size).
Obviously the phenomenon does not depend on the particular network on which the "customers"
live: any lattice which presents a percolation transition would do, i.e. any lattice/
network in which the largest cluster becomes "infinite" when the density of "potential
customer" sites becomes larger than pc. Moreover, the particular way in
which the critical density pc of "potential customers" is realized is not important: it can
be decided in advance by fixing p(i) for every site, but it can also be (partly) decided
randomly at the moment the site is reached by the sales wave. It can also be enforced
by having each site spend, with some probability, some time in the "potential buyer"
mode.
The features which do affect the marketing percolation transition are, however, the space
and time correlations of the "potential buyer" states.
For instance, we will see that inhibiting the "potential buyer" state for a while after each
purchase leads to periodic waves and spatial correlations. Another extension was
considered in the modeling of the race between HIV and the immune system, where
the topologies induced by the possible mutations in the shape spaces of the virus and of
the immune cells are different. This has important effects which go beyond those that
can be deduced from the knowledge of the static percolation transition.
The marketing percolation transition phenomenon corresponds to the hit-flop
phenomenon in the movie industry: some movies make a fortune while others, with no
significant quality difference, never take off. However, with appropriate changes the
framework can characterize many market phenomena, as well as epidemics, novelty
propagation, etc.
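A minimal Python sketch of the basic model described above; the lattice size and the list of q values are assumed, illustration-only choices, and the spread is computed by breadth-first propagation from a single seed customer.

import random
from collections import deque

L = 200                                        # L x L square lattice of customers

def sales_fraction(q, seed=(0, 0)):
    p = [[random.random() for _ in range(L)] for _ in range(L)]
    bought = [[False] * L for _ in range(L)]
    queue = deque([seed])
    bought[seed[0]][seed[1]] = True            # the campaign starts by selling to one customer
    n_sales = 1
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L and not bought[nx][ny] and p[nx][ny] < q:
                bought[nx][ny] = True          # a neighbor bought, and the quality satisfies them
                n_sales += 1
                queue.append((nx, ny))
    return n_sales / (L * L)

for q in (0.55, 0.59, 0.62, 0.70):
    print(f"q = {q:.2f}  ->  fraction of customers who buy: {sales_fraction(q):.3f}")
# below pc ~ 0.593 the sales remain a negligible fraction of the market;
# above pc a finite fraction of the whole lattice is typically reached
# (the seed can occasionally land in a small isolated cluster)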
FURTHER MODIFICATIONS OF THE SOCIAL PERCOLATION MODEL
In order to make the model more realistic, one has to introduce additional features, but
the crucial one is the understanding that the actual sales depend in a dramatic way on
the spatial distribution of the customers and on the communication between them. Until
recently, the mass of customers was considered uniform and each individual connected
to the entire public in an "average" way. Obviously this concept missed the crucial
effects mentioned above and many others described below. The latest applications also
take into account the negative influence that dissatisfied customers may have on other
potential buyers.
LINK to “NETWORKS Dynamics and Topology” (by Gerard Weisbuch and…)
http://shum.huji.ac.il/~sorin/report/Networks%20Dynamics%20and%20Topology.doc
Network manipulation and Novelty Creation
Objects and the links / relations between them seem to be the very basis on which humans
organize their experience:
$$o-------o$$
The most abstract branch of mathematics, category theory, takes the same attitude:
the basic elements are objects and their relations (morphisms).
Out of objects / nodes and links, one can build arbitrary abstract schemes:
$$
o----o-----o
|     \
|      \
o---o---o
$$
If one wishes to endow these schemes with a dynamics, one can introduce some
elementary operations: erasing and creating objects and links.
A special operation in category theory is related to the property of transitivity:
given that there is a link between two nodes A and B and a link between B and C,
one can postulate a link between A and C. Schemes in which the diagrams are
transitive are richer and are usually perceived as more meaningful.
In fact, in his attempt to formalize all creation (and the Creator) on an axiomatic basis,
Spinoza explains in his "Ethics" [Part III] that new relationships arise in the
child's mind by this transitive mechanism: (s)he will grow attached to things which
are connected to his/her basic necessities: food, heat, soft support. It is no wonder that
(s)he gets attached to the mother:
$$
baby ---- milk
   \       |
    \      |
     \     |
      mother
$$
and then, by iteration, to things associated with her:
$$
baby -------- mother
   \            |
    \           |
     \          |
      lullaby
$$
This kind of mechanism was found to be very effective in artificially creating ideas that are
perceived by humans as creative.
In fact, when presented double-blind to a group of human judges, the ideas
generated mechanically received higher creativity grades than the ideas generated
by (another group of) humans.
To obtain this result a simple program was used.
The program was asked to generate creative advertisements given a product and
the quality to be advertised:
$$
product ------- quality
$$
In the first stage, the program associated to the product P one of its parts, P1, or one
of the objects on which it acts. In the second stage, it associated to the quality Q an
object Q1 that is routinely associated with that quality. In the third stage, the program
suggested an advertisement connecting P1 with Q1.
$$
P -----?----- Q
|             |
|             |
P1 ---------- Q1
$$
In fact, it was found that a large percentage of the top-prized advertisements over the
years fall within this scheme. For instance, a famous series advertising Bally shoes as
giving a feeling of freedom showed clouds, tropical islands and forests, all in the shape of
a foot. Some of the computer-generated ads were: a cuckoo clock with the cuckoo
in the shape of a plane (belonging to the advertised company), a computer terminal
(belonging to the advertised producer) offering flowers, etc.
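A minimal Python sketch of this three-stage scheme; the two small association tables are invented toy data, not the knowledge base of the original program.

import random

PART_OF = {                      # product -> its parts / objects it acts on
    "shoe": ["foot", "sole", "lace"],
    "airline": ["plane", "wing", "ticket"],
}
SYMBOL_OF = {                    # quality -> objects routinely associated with it
    "freedom": ["cloud", "tropical island", "open sky"],
    "punctuality": ["cuckoo clock", "metronome", "sunrise"],
}

def generate_ad(product, quality):
    p1 = random.choice(PART_OF[product])       # stage 1: replace the product P by a part P1
    q1 = random.choice(SYMBOL_OF[quality])     # stage 2: replace the quality Q by a symbol Q1
    # stage 3: connect P1 with Q1 as the proposed advertising image
    return f"Ad for {product} ({quality}): show a {q1} in the shape of a {p1}."

print(generate_ad("shoe", "freedom"))
print(generate_ad("airline", "punctuality"))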
By representing the parts and properties of a product and their relationships, and
applying similar simple operations, one can invent new products or service
procedures. For instance, one can save time in pizza delivery and ensure the pizza is
fresh by using the motorcycle engine to cook it. Usually the "creative" results are obtained by
forcing the scheme through intermediate phases which would have been rejected
as inconsistent by humans.
For instance, by taking the scheme of a propeller plane,
Body-Wings-Air-Propeller,
and eliminating the wings, one is led to the scheme of a helicopter:
Body-Air-Propeller.
By doing the same to a jet plane,
Body-Wings-Jet Engine,
one is led to the scheme of a missile:
Body-Jet Engine.
Coming full circle, one can analyze creation stories in terms of the appearance of
diagonal links. It is found that these are indeed related to the moments in the story
which were considered salient by the experts. Moreover, their density was double
in significant stories as opposed to similar popular stories with ostensibly the same
plot.
Some Directions for the Future
Of course, the items below are only guesses as to the possible directions in which
the subject may evolve. Some of the ideas may look strange, but their purpose is to
illustrate the potential of the approach (see also [30-32]).
Identifying and Manipulating the “Atoms” of Life
The situation in molecular biology, genetics and proteomics today resembles the
situation of zoology before Darwin and of chemistry before the periodic table:
"everything" is known (at least all the human genes), some regularity rules are
recognized, but the field lacks a unifying dynamical principle. In particular, the
dynamics of "folding" (the process that gives proteins their shape given a certain
base sequence) and the relation between each protein's shape and its function are
anybody's guess.
In principle it is arguable that these problems can be solved within the borders of the
present techniques and concepts (with some addition of data mining and informatics).
However, I would rather bet on the emergence of new concepts, in terms of which this
"total mess" would become "as simple" as predicting the chemical properties of
elements in terms of the occupancy of their electronic orbitals. So the problem is: what
are the "true" relevant degrees of freedom in protein / gene dynamics? Single bases /
nucleic acids are "too small"; alpha helices or beta sheets, too big.
Of course, answering this question would transform the design of new medicines into a
systematic search rather than the random walk it is today.
Interactive Markets Forecast and Regulation
Understanding and regulating the dynamics of the (financial) markets is in some ways
similar to predicting and monitoring weather or road traffic, and at least as important:
One cannot predict individual car accidents, but one can predict, based on the present
data, the probable behavior of the system as a whole. Such a prediction ability allows the
optimization of system design as well as on-line intervention to avert unwanted
disturbances, etc. Moreover, one can estimate the effect of unpredictable events and
prepare the reaction to them.
It is certainly a matter of top priority that the public and the authorities in charge of
economic stability have at their disposal standard, reliable tools of monitoring,
analysis and intervention.
In the past it was assumed that market dynamics is driven by exogenous factors
and/or by uncontrollable psychological factors and/or by purely random fluctuations.
This discouraged the study of the markets' endogenous dynamics in a systematic,
quantitative, realistic way. Other difficulties were the lack of knowledge about the numerical
stability properties of the problem and about the nature of the relevant data necessary to
describe the dynamics realistically. To make things worse, until a few years ago much of the
trading data was also not available (especially as far as the identity of the traders
performing various successive transactions is concerned).
In recent years progress has been made on all the issues above, and the main difficulty
that remains is at the cultural, human level: the successful study of stock market
dynamics requires the synthesis of knowledge and techniques from different domains:
financial economics, psychology, sociology, physics and computer science. These fields
have very different “cultures”: different objectives, criteria of success, techniques and
language. Bringing people from these disciplines together is not enough - a deep shift in
their way of thinking is necessary.
Usually this requires “growing” a new generation of “bilingual” young scientists who
produce the synthesis in their own minds. Otherwise, even the most efficient software
platform will just be reduced to a very expensive and cumbersome gadget.
Horizontal Interaction Protocols and Self-Organized Societies
The old world was divided into distinct organizations: some small (a bakery, a shoe store)
and some large (a state administration, an army) [33].
The way to keep it working was for the big ones to have a very strict hierarchical chain
of command and for the small ones (which couldn’t support a hierarchy) to keep
everybody in close “horizontal” personal contact. With the emergence of the third sector
(public non-profit organizations), with the emergence of fast developing specialized
activities, with the very lively ad-hoc merging and splitting of organizations, the need for
lateral (non-hierarchical) communication in large organizations has increased. Yet, as
opposed to the hierarchical organization, nobody knows how to make and keep under
control a non-hierarchical organization. The hope is that some protocols acting at
the “local” level may lead to the emergence of a global “self-organized” order. The
study and simulation of such systems might lead to the identification of modern
“Hammurabi codes of law” which would regulate (and defend) the new “distributed” society.
Mechanical Soul (re-)Search
The internal structure of the psyche came under scientific scrutiny with the work of Freud.
Yet to this very day there is no consensus on its nature. Even worse: most of the
professionals in this field have resisted any use of the significant new tools that have
appeared in the intervening 100 years. This is likely to be a great loss, as even the
simplest computer experiments often lead to very unexpected and clue-giving results.
The old Turing test, measuring the computer against the human, may now be left behind
for a more lateral approach: not “who is better?”, but “how do they differ?”, “can human
thought learn from mechanical procedures?” This may help humans transcend the
human condition by identifying and shedding self-imposed limits.
Examples of specific projects:
- Invention machines: programs that generate “creative” ideas.
- Predicting / influencing the emergence of new political / moral / social / artistic ideas
out of the old ones.
- Identifying the structure of meaningful / interesting stories / communication.
- Understanding and influencing sentiment dynamics: protocols for education towards
positive feelings.
- Automatic family counselor: inventing procedures, “rites”, for solving personal / family
problems.
- Automatic art counselor: documenting, analyzing and reproducing idea dynamics
and personal development in drawings (small children, Picasso drawing suites).
- Pedagogical aids: understanding and exploiting the interplay between ideas
expressed in words and their internal (pictorial) representation.
Methodological Issues
Experimental data
There is a myth that there are no experimental data for complexity. This is related to
the fact that the tasks of experiments in complexity are different from those in the usual
sciences. E.g. in particle physics one knows the macroscopic behavior and has to look for
experiments to probe the micro. In complex systems one usually knows both the macro
and the micro, but the intermediate scales connecting them are not understood.
The experimental characterization of the collective objects relevant at various scales,
including their (conditional and unconditional) probability distributions and time
(auto-)correlations, is a very well defined objective. As with all of Complexity, the only
problem is that it does not fall within one of the classical disciplines.
The myth of irreproducibility is no more justified than accusing classical mechanics of
irreproducibility just because in real life one cannot reproduce dice-throwing
experiments.
The Laws of Complexity
The archetype of finding “the basic laws” of a science is to find a small group of basic
dynamical principles from which “all” the phenomenology of the field can be explained.
For instance, the chemical properties of the atoms can be (in principle) deduced from
the quantum electromagnetic interactions between electrons and nuclei. Pauling earned
his Nobel prize for putting forward this program.
In the case of Complexity the rules of the game are completely changed: BOTH the
macroscopic phenomenology of the collective objects AND the “elementary” properties
of the “simple” objects are known. The challenge is to deduce one from the other
WITHOUT introducing new natural laws!
In this sense, finding the “laws of complexity” has to be preceded by a better
understanding of what we are really looking for.
In the meantime, one can concentrate on producing uniform criteria by which to decide
in generic situations which objects are to be considered “elementary” and which
collectives are to be considered (and to which degree) as emergent objects.
These criteria should be standardized together with the search for other regularities
(power laws, scaling, critical slowing down etc).
Theory and Complexity
There are two levels at which theorists can function in the context of Complexity:
- the long-range level, with its hope for a “grand theoretical synthesis” providing the
“laws of complexity emergence” (modulo the doubts above);
- the level at which we act now: applying the tools which we have described above
(thermodynamics, statistical mechanics, scaling, multiscale and cluster methods,
universality, graph theory, game theory, discrete dynamics, microscopic simulation,
informatics, etc.).
The use of these methods (especially by somebody else) often reminds one of the
saying “when carrying a hammer, a lot of things look like nails”. This might caution us to
keep looking for simplicity even when carrying complexity.
Conclusions and Recommendations
The Kuhnian description of the two modes of science
Most scientific researchers, most of the time, are engaged in a very conservative
mode dedicated to developing the logical implications of the existing paradigm
and to implementing, demonstrating and applying them in practice.
In this mode, scientific leadership is measured by
- the depth and breadth of awareness of the currently published work
(I hesitate to call it knowledge or even information because it may contain
misperceptions and mere jargon: names for things that one does not really understand);
- the capability to formulate / articulate the dominant paradigm in
an authoritative way and to cause colleagues to engage in its service;
- the amount of work one can associate one's name with.
Thus, typically, a scientific research field is a conceptually conservative and socially
guild-like, self-defending community.
If the scientific field is still valid and there is still a lot of useful work to be done within it,
then this is a very satisfactory situation.
If, however, for some reason the paradigm is inadequate or has exhausted its interesting
implications (and applications), the situation can become really nasty, as described by
Feynman's “straw airplanes” metaphor: a scientific field may have all the external
features of scientific research (peer community, professional associations, departments,
grants, scholarships, journals, a developed professional jargon that takes three
undergraduate years even to start using properly, and a PhD to master it completely,
etc.) but produce nothing of real value. (One should be careful to distinguish dead fields
from living conservative fields.)
From time to time, very rarely, a scientific field enters a “revolutionary” period.
Note that even during revolutionary periods, most of the community, and especially its
established leaders, are still in the “steady mode”. In fact, they are fiercely (and, as they
see it, loyally) defending their intellectual homeland (or, as the others would put it, fief).
At times this is a life-and-death confrontation, as it literally was in the case of Boltzmann.
Moreover, as history (and science too) is written by the winners (/ survivors), one will
never know of all the scientific revolutions that were scientifically justified and yet failed
on the grounds of the social-political confrontations within the scientific community.
Usually we prefer to take the “optimistic” view of Planck, who stated that a new scientific
paradigm wins not by the established scientists adopting it but rather by their passing
away and the new generation leaving the old ways…
From the facts described above, it turns out that even in situations in which a
scientific paradigm is marred by a host of internal contradictions or is systematically
invalidated by empirical evidence, the scientific community is capable of introducing
caveats and corrections complicated enough to allow a professionally (for that
community) acceptable formulation of the problems only after a very significant
investment in learning the current doctrine. The hope of such systems is that somebody
who did not lose the stomach for so many years of studying the dominant paradigm will
continue it after being invested with the power and authority to make decisions and
changes.
The mere logical contradictions and mismatches with reality can then be hidden in all
kinds of scientific-looking terms that throw the blame on the lack of knowledge /
professionalism of the “profane”.
I would like to emphasize that, fortunately, in my relatively wide interdisciplinary
professional experience, the scary scenario above is a minority, though not a negligible,
exception.
Yet we have to take care of it, because it appears exactly where breakthroughs are likely
/ necessary.
On the opposite side, of course, there are many cases in which it is not clear whether a
direction is a real revolution or a stunt. Moreover, a real revolution might not start with
the final correct ideas (for example Bohr's theory of the hydrogen spectrum:
“it applied on Monday, Wednesday and Friday the new ‘quantum’ ideas and the rest of the
week classical mechanics”). Yet the appropriate frame of mind can be the one, again,
Bohr expressed: “Young man, your theory is crazy”, and, after a few moments’ thought,
“but not crazy enough to be true”.
Against this background quite a number of the present ideas should be viewed: the
possibility and nature of nanotech devices, the possibility of realistic artificial reality
including universal simulation and visualization devices, the possibility of software
creating (and validating) software, the possibility of automatic model generation for
generic input systems, the possibility of artificially intelligent machines, the possibility
of artificial cells / living organisms, the possibility of self-organizing enterprises /
institutions, the possibility that emergence is governed by fundamental laws similar to
the ones governing fundamental science.
Are these harmful myths or useful, if not realistic, starting points?
Before thinking of how to nourish and defend the new ideas, we have to think of ways to
ascertain their seriousness at least at some level: not at 80% (because this is not ensured
even in established sciences), but at least at 20%.
Feynman had, as always, a solution for this too: if a researcher cannot explain to a
person in the street in one hour what he is doing, it means he does not know what he is
doing…
More seriously, the issue of having peer review for “scientific revolutions”, but not by
the “threatened” field, is an open and very serious problem.
Again, even a 20% success rate of high-risk PROJECTS is acceptable.
The problem of quality control is a different one:
if one has 80% low-quality PEOPLE, one will have a completely corrupted peer-review
community and, consequently, the intellectual collapse of that scientific community.
[Insert here the many mechanisms by which high-risk research communities may
become (at least partially) havens for lesser researchers, both in terms of science and of
behavior.]
Thus the point is not to insist on the judgment of the project but on the track record of
the proponent. This is not such a big change: in any case, even in standard disciplinary
research, the judgment by peers is based on previous achievements rather than on
what is actually proposed / promised for the “future”.
In high-risk research, by definition, the future chances are not great (or estimable).
The right way would be to look at the personal track record, status, achievements
and recommendations the person obtained BEFORE putting himself on the spot by choosing
the “present”, “controversial” high-risk project.
In particular, overt success in previous high-risk projects should be a strong factor,
as should the high regard in which the researcher is held by a previous strong disciplinary community.
Supporting Complexity / Interdisciplinary research
“The world has problems;
The university has departments”
(anonymous)
- In reality / nature, problems do not choose their correct conceptualization /
formulation and solution according to our pre-determined disciplinary frontiers.
- Some problems have solutions that fall outside the domain in which the problem is
defined.
Why is interdisciplinary research difficult?
- It requires a change in the frame of mind.
- It requires giving up ways of thinking and activities to which one has already become
accustomed.
- It requires learning many new and difficult things with no clear delimitation of
what is necessary and what is sufficient to learn.
- It places one outside the range of a reference peer community.
- It threatens the position of the disciplinary colleagues.
- It makes one look unprofessional.
- It makes one look as if one is acting and making (controversial)
statements beyond one's area of expertise.
- It requires interaction with people who do not speak the same jargon.
(How would you react to somebody who claims that Latin originates from
Sumerian but speaks very bad English and Italian?)
- It takes one and one's students outside the circle of standard, recognized job
slots.
Another problem: the scientific status of a scientist and of a project in complexity is still
judged by non-complexity specialists from the “relevant fields”.
This has to change: there are enough peers who have created interdisciplinary research
of intrinsic worth to set up a normal, real PEER-review system for complexity.
(A new department?)
Otherwise, the ideal of "giving the researchers themselves the power" is misplaced in this
context: most of the very negative initial reactions to ideas that later became
accepted as very valuable came from disciplinary peers.
Adisciplinary Greenhouses -> Provisional Sub-Institutes
One cannot, and need not, establish a new institute each time a new idea seems
to take off.
One has to find structures that can be established and dismantled at the contemporary
rhythm at which ideas rise and fall / fulfill their potential.
One has to find alternatives to the old institutes (heavy to build, heavy to dismantle)
without transforming the lives of scientists into a continuous exam.
One should separate the issue of personal tenure from that of the
continuity of the subjects of study; otherwise tenure becomes effective retirement.
The “MORE IS DIFFERENT” transition often marks the conceptual boundaries between disciplines.
It helps to bridge them by addressing, within a common conceptual framework,
the fundamental problems of one discipline in terms of the collective phenomena of another.
MORE IS DIFFERENT is a new universal grammar with new interrogative forms,
allowing one to express novel questions of a kind un-uttered until now.
We need to foster a new generation of bi- or multi-lingual scientists with
this grammar as their mother tongue.
We need to recognize MORE IS DIFFERENT interdisciplinary expertise as a crucial tool
for future research, on an equal footing with disciplinary professional expertise.
- develop, reward and support the Complexity approach as such.
“MORE IS DIFFERENT” is a fusion of knowledge
rather than merely a juxtaposition of areas of expertise.
It implies a coordinated shift in the
- objectives,
- scope and
- ethos
of the involved disciplines (including healing the academic vs.
technology / industry dichotomy).
Sometimes this has caused opposition from some leaders of the affected disciplines,
who felt that the identity of their science was threatened
by this fusion and shift in scope.
=> To avoid conflict in the future, complexity should be given space and support
in its own right rather than being sent to beg or steal from the established disciplines.
Complexity-Induced New Relation:
Theoretical Science -> Real-Life Applications
Traditional Applied Science applied hardware devices
(the results of experimental science)
to material / physical reality.
Modern Complexity rather applies theoretical methods
- new (self-)organization concepts and
- (self-)adaptation and emergence theories
to real-life, but not necessarily material / physical, items:
- social and economic change,
- individual and collective creativity,
- the information flow in life.
Applications of Complexity are thus of a new brand,
"Theoretical Applied Science", and should be recognized as such when evaluating their
expected practical impact.
Organizational Recommendations
How to Promote Interdisciplinary / High Risk Research?
- Establish a European Center for Interdisciplinary Research (ECIR); it could be distributed
and / or itinerant (like CNRS).
- Main task: to host “instant”, “disposable” institutes
on emerging interdisciplinary / high-risk / high-stakes issues.
- The members of the “disposable institutes”
will hold Tenure-Track European Interdisciplinary Chairs,
independent of the fate of the disposable institutes.
Thus the ECIR will “insure” / “cover” their risk taking.
- The tenure track can end in tenured (ECIR)
European Interdisciplinary Professorships.
- Researchers will be selected / promoted at the ECIR
on the basis of their proven expertise
in carrying out interdisciplinary research as such.
Instruments of the European Center for Interdisciplinary Research
- gradual, according to how ripe the recipient subject is:
- a) triangle: 2 advisors + a bridge PhD student (100 K€)
(support for summer schools for meetings, visits, a fellowship);
- b) 6-12 month interdisciplinary institute programs (500 K€)
(buy sabbaticals for professors + bring students);
- c) 3-5 year “disposable” institutes (3-5 M€):
the university hosting one should be well compensated
and could keep the institute after the 3 years.
Participants: local people + students + visitors
+ holders of the European Interdisciplinary tenure(-track) chairs,
who provide expertise for the interdisciplinary projects.
Evaluating interdisciplinary proposals
- In emergent research situations beyond the known frontiers,
it is not clear what knowledge will be relevant next.
- Thus strong professional expertise in a strictly limited area
is less important than the generic capability / know-how to conduct research
in situations of uncertainty and in uncharted trans- / extra-disciplinary territory.
- Thus the judges should consider the overall
- interdisciplinary expertise,
- scientific connections,
- past achievements and
- ease in navigating within dynamic research networks
rather than
- individual disciplinary authority and position or
- ease in managing static, large disciplinary research groups.
Algorithm to evaluate the relevance of interdisciplinary researchers:
- map the interdisciplinary cooperation network
(people are nodes; cooperations and common papers are links);
- give priority to people with high interdisciplinarity rather than high rank / disciplinary
authority.
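As an illustration only, here is a minimal sketch (in Python) of how such a ranking could be computed; the researcher names, the input format and the scoring rule (counting how many disciplines other than one's own appear among direct collaborators) are assumptions made for the example, not part of the proposal itself.

from collections import defaultdict

# Hypothetical input: one (author, co-author) pair per common paper,
# plus a home discipline for every researcher; all names are invented.
coauthorships = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("alice", "dan"), ("dan", "erin"),
]
discipline = {
    "alice": "physics", "bob": "economics",
    "carol": "biology", "dan": "physics", "erin": "sociology",
}

# Map the cooperation network: people are nodes, common papers are links.
neighbors = defaultdict(set)
for a, b in coauthorships:
    neighbors[a].add(b)
    neighbors[b].add(a)

def interdisciplinarity(person):
    # Assumed proxy: number of distinct disciplines, other than one's own,
    # represented among direct collaborators.
    partner_fields = {discipline[p] for p in neighbors[person]}
    return len(partner_fields - {discipline[person]})

# Priority goes to high interdisciplinarity, not to high degree / rank.
for person in sorted(neighbors, key=interdisciplinarity, reverse=True):
    print(person, interdisciplinarity(person))

In this toy ranking a researcher linked to several disciplines outranks one with many links inside a single field, which is exactly the inversion of priorities advocated above.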
[Diagram: three disciplines (Discipline 1, Discipline 2, Discipline 3) surrounding the subjects that need synthesis.]