UCLA
Human Complex Systems
www.hcs.ucla.edu
presents
The 3rd
Lake Arrowhead
Conference
On Human Complex Systems
May 18 – 22, 2005
Lake Arrowhead, California
Welcome
“Human complex systems” include ethnic groups, societies, and other political, economic, and cultural
entities of all kinds. Within these systems, a large number of individuals (with varying personal attributes) interact
in a variety of ways. From these interactions emerge large-scale structures and processes that in turn affect the
individuals within them.
Large-scale systems like cultures and societies are holistic, dynamic, and self-referential. They are often
difficult or impossible to predict, or even vaguely picture, if we look only at the individuals involved one by one
or in a static framework. For example, individuals and their beliefs make a political movement, but we miss the
whole if we look only at individuals, and we miss change if we view the movement as fixed. Human complex
systems are dynamic: they change or develop over time as a result of influences from inside and outside the
system. These systems are also self-referential: individuals and organizations refer to, maintain, and change
themselves. And agents do not exist in a vacuum but use artifacts (technology) and move through a physical
environment.
Social scientists have had a hard time explaining consumer trends, stock market crises, and the
development of political and artistic movements, to name a few phenomena. Usually, the approach has been to
determine the smaller, underlying causes of the larger phenomena. In the past, social scientists might approach
these problems from an idealizing point of view, often talking about general characteristics of a group based on a
broad observation of trends or actions supposedly definitive of that group. Or social scientists might shy away
from such problems altogether.
Today, however, social scientists have access to analytical tools and methodologies that make specific
use of computers and draw on developments in complexity theory. One example is multi-agent modeling
and simulation. Social scientists are approaching their fields with new ways to think about their subjects, ways
that bridge the gap between individuals, with their perceptions and thoughts, and the socio-cultural wholes to
which they belong. We can even explore and analyze multiple levels of agency at the same time.
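As a purely illustrative aside (this toy model is not taken from any conference paper; its rules and parameters are invented for illustration), the spirit of multi-agent modeling can be conveyed in a few lines: agents on a ring each hold a binary opinion and repeatedly adopt the majority view among themselves and their neighbors, and blocs of agreement emerge from purely local interactions.

```python
import random

def step(opinions, neighbors_of):
    """One synchronous update: each agent adopts the majority opinion
    among itself and its two neighbors (with 3 votes, no ties occur)."""
    new = []
    for i, op in enumerate(opinions):
        votes = [op] + [opinions[j] for j in neighbors_of(i)]
        ones = sum(votes)
        new.append(1 if ones * 2 > len(votes) else 0)
    return new

def run(n=50, steps=30, seed=1):
    """Simulate n agents on a ring for a number of steps."""
    random.seed(seed)
    opinions = [random.randint(0, 1) for _ in range(n)]
    ring = lambda i: [(i - 1) % n, (i + 1) % n]  # left and right neighbor
    for _ in range(steps):
        opinions = step(opinions, ring)
    return opinions

if __name__ == "__main__":
    final = run()
    # local majorities harden into contiguous blocs of agreement
    print(sum(final), "agents holding opinion 1 out of", len(final))
```

Even this minimal sketch exhibits the individual-to-aggregate link discussed above: no agent intends a bloc, yet blocs form.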
We are excited to announce that a Human Complex Systems interdepartmental degree program at UCLA
has been created to provide undergraduate students majoring in specific departments in the social sciences and
humanities with an overview of these new tools and methods. Students in the biological and physical sciences are
also welcome. The program enhances students’ major work and better prepares them for graduate school or entry into
the workplace as tomorrow’s managers and leaders. Decision making and management increasingly require
new skills to deal with growing information flow and rapid socio-technological change.
Details on Human Complex Systems can be found at: http://www.hcs.ucla.edu/.
We believe that computational, multi-agent approaches to analyzing the connections
between human beings, their environment, and technology will help raise the level of scientific theory and practice
in the social sciences. Only in the last half-century have we discovered the concept of “things that think.” On our
desks we have the means to explore complex “what if” scenarios and experiment with counterfactuals and “life as
it could be.” Our artificial worlds are fashioned by the same creative evolutionary power that created us.
We welcome you to Lake Arrowhead and thank you for taking part in what we hope will be a continuing
journey of many enriching and inspiring meetings.
Signed, the Center for Human Complex Systems:
Phil Bonacich
Nicholas Gessler
Susanne Lohmann
Bill McKelvey
Dario Nardi
Dwight Read
Francis Steen
Program
Wednesday, May 18, 2005
6:30pm–7:50pm
DINNER
8:00pm–9:50pm
OPENING SESSION
IRIS ROOM
Welcome by Bill McKelvey
“The Big Event” live group simulation by Dario Nardi
10:00pm–11:45pm
RECEPTION
TAVERN ROOM
Thursday, May 19, 2005
9:00am–12:00pm
MODELING COGNITIVE DYNAMICS
Chair: Bill McKelvey
IRIS ROOM
1. Accounting for Agent Orientations
presented by David L. Sallach
2. Desirable Difficulties: Learning, Teaching, and Collaboratively Bridging
presented by Jason Finley
3. Knowledge and Cognition in the Social Sciences: Modeling with Low Cognition Agents
presented by Paul Ormerod, Bridget Rosewell, Rich Colbaugh and Kristin Glass
Short Break
4. Meaning-Oriented Social Interactions: Conceptual Prototypes, Inferences, and Blends
presented by Veena S. Mellarkod
5. Modeling Agent Personality, Metabolism and Environment without Boundaries, Using Fuzzy Cognitive
Maps
presented by Dario Nardi
12:00pm–1:20pm
LUNCH
1:30pm–3:30pm
TRACK ONE
MARKET BEHAVIORS
Chair: David Midgley
SKYVIEW ROOM
1. A Behavioral Model of Coalition Formation
presented by Paolo Patelli
2. MOTH: A More Naturalistic Conceptualization of Reciprocal Altruism in Biological Prisoners’ Dilemma
Games
presented by Nicholas S. Thompson
3. Spatial Patterns of Market Evolution
presented by Zhangang Han
4. Specialization of Evolving Agents with Expectations
presented by Zhangang Han
4:00pm–6:00pm
EMERGENT NETWORKS
Chair: Dario Nardi
SKYVIEW ROOM
1. A Model of Large Socially Generated Networks
presented by Brian W. Rogers
2. Powerful Knowledge: Emergent Order and Indigenous Knowledge
presented by Michael Fischer
3. Reasons Vs. Causes. Emergence as Experienced by the Human Agent
presented by Paul Jorian
4. Cultural Models for Action: Systems of Distributed Knowledge in a Non-monolithic Universe.
presented by David Kronenfeld
1:30pm–6:00pm
TRACK TWO
POLITICAL DYNAMICS
Chair: Edward P. MacKerrow
IRIS ROOM
1. A Theory of Lexical Authorship in International Relations
presented by Jonathan Eells
2. Communication among Selfish Agents: From Cooperation to Display
presented by Jean-Louis Dessalles
3. Modeling Succession Crises in Authoritarian Regimes: Beyond “Slime Mold” Complexity
presented by Britt Cartrite
Short Break
4. Opinion Leaders and the Flows of Citizens’ Political Preferences: An Assessment with Agent-based
Models
presented by Cheng-shan Frank Liu
5. Modeling the Behavioral Norms Underlying Competition and Coordination in Government Bureaucracies
presented by Edward P. MacKerrow
6:30pm–7:50pm
DINNER
8:00pm–10:00pm
SOCIAL
TAVERN ROOM
Friday, May 20, 2005
9:00am–12:00pm
FLOW DYNAMICS IN THE PUBLIC ARENA
Chair: Nicholas Gessler
IRIS ROOM
1. Evaluating Airline Market Responses to Major FAA Policy Changes
presented by Ashley Williams
2. Agent-based Models of Risk Sensitivity: Applications to Social Unrest, Collective Violence and Terrorism
presented by Lawrence Kuznar
3. Man on Fire: Crowd Dynamics at Zozobra, Santa Fe’s Burning Man
presented by Stephen Guerin and Owen Densmore
Short Break
4. Public Services Systems: Elements of an Agent-based Model
presented by Mary Lee Rhodes
5. School Districts as Complex Adaptive Systems: A Simulation of Market-based Reform
presented by Spiro Maroulis and Uri Wilensky
12:00pm–1:00pm
LUNCH
Afternoon available for hiking, informal meetings, etc.
6:30pm–7:50pm
DINNER
8:00pm–9:30pm
Power Laws, Statistics, and Agent-based Models
Setting up the Issues: Dwight Read; Bill McKelvey (20 minutes each)
Panel Discussants: (10 minutes each)
1. Phil Bonacich
2. David Midgley
3. TBA
9:30pm–11:45pm
SOCIAL
TAVERN ROOM
Saturday, May 21, 2005
9:00am–12:00pm
MULTISCALE STUDIES
Chair: Dwight Read
IRIS ROOM
1. A Model of Hierarchically Decomposed Agents
presented by Eugenio Dante Suarez
2. Challenges for Biologically Inspired Computing
presented by Russ Abbott
3. Coevolutionary Dynamics of Strategic Networks: Developing a Complexity Model of Emergent Hierarchies
presented by Brian Tivnan
Short Break
4. Compound Agents and Their Compound Multiagent Systems
presented by John Hiles
5. Small n Evolving Structures: Dyadic Interaction between Intimates
presented by William A. Griffin
12:00pm–1:15pm
LUNCH
1:30pm–3:30pm
VALIDATION STUDIES
Chair: David Sallach
IRIS ROOM
1. Destructive Testing and Empirical Validation of a Multi-agent Simulation of Consumers, Retailers, etc.
presented by David Midgley
2. Low Cognition Agents: A Complex Network Perspective
presented by Rich Colbaugh & Kristin Glass
3. The Effect of Friction in Pedestrian Flow Scaling Through a Bottleneck
presented by Sanith Wijesinghe
4. Validation Issues in Computational Social Simulation
presented by Jessica Glicken Turnley
4:00pm–6:30pm
METHODOLOGICAL ISSUES
Chair: Russ Abbott
IRIS ROOM
1. Artificial Culture—A Posthuman Approach to Process
presented by Nicholas Gessler
2. Peeking into the Black Box: Some Art and Science to Visualizing Agent-based Models
presented by Stephen Guerin
3. Iscom Innovation Model
presented by Marco Villani
4. Understanding the Open Source Software Community with Simulation and Social Analysis
presented by Scott Christley
5. The Complementarities of Mathematical Models and Computer Simulations: An Example
presented by Phil Bonacich
6:30pm–8:00pm
DINNER
8:00pm–9:30pm
CLOSING PERSPECTIVES
Chair: Phil Bonacich
David Sallach
Jessica Glicken Turnley
IRIS ROOM
9:30pm–11:45pm
SOCIAL
TAVERN ROOM
Abstracts
Name: Russ Abbott
Affiliation: California State University, Los Angeles
Email: Russ.Abbott@GMail.com
Web: abbott.calstatela.edu
Title: Challenges for Biologically Inspired Computing
We discuss a number of fundamental areas in which biologically inspired computing has so far failed to mirror
biological reality. These failures make it difficult for those who study biology, the social sciences, and many other scientific
fields to benefit from biologically inspired computing. These areas reflect aspects of reality that we do not understand well
enough to allow us to build adequate models. The failures to date are as follows.
1. The apparent impossibility of finding a realistic base level at which to model biological (or most other real-world)
phenomena. Although most computer systems are stratified into disjoint and encapsulated levels of abstraction (sometimes
known as layered hierarchies), the universe is not.
2. Our inability to characterize on an architectural level the processes that define biological entities in both enough
detail and with sufficient abstraction to model them.
3. Our inability to model fitness except in terms of artificially defined functions or artificially defined fitness units.
Fitness to an environment is not (a) a measure of an entity's conformance to an ideal, (b) an entity's accumulation of what
might be called "fitness points," or even (c) a measure of reproductive success. Fitness to an environment is an entity's ability
to acquire and use the resources available in that environment to sustain and perpetuate its life processes.
4. Our inability to build models that allow emergent phenomena to add themselves and their relationships to other
phenomena back into our models as first class citizens.
These failures arise out of our inability as yet to fully understand what we mean by emergence.
As an initial step towards surmounting these hurdles, we attempt to clarify what the problems are and to offer a
framework in terms of which we believe they may be understood.
Name: Phillip Bonacich
Affiliation: Department of Sociology, UCLA
Email: bonacich@soc.ucla.edu
Title: The Complementarities of Mathematical Models and Computer Simulations: An Example
Computer simulations are obviously useful in exploring social processes so complex that exact mathematical
solutions do not exist. However, simulations can be useful as a guide to empirical research even when there is a
mathematical solution. An example is presented in which all the equilibrium states of a stochastic model could be inferred
and an experiment was designed to test whether the model actually described human behavior in groups. Simulations
conducted before the experiment improved the test of the model. First, simulations revealed that some of the equilibrium
outcomes would never occur except under unusual starting conditions. Thus the failure to ever find these patterns did not
imply a deficiency in the model. Second, it was revealed in the simulations that some states, while not equilibria in the very
long run, were nevertheless relatively enduring, and thus their appearance in the experiment would not contradict the model.
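As a generic illustration of this kind of pre-experimental simulation (a made-up toy model, not Bonacich's actual stochastic model), consider a biased random walk with two absorbing equilibria: simulating from a mid-range start shows that one equilibrium is almost never reached, so its absence in an experiment would not count against the model.

```python
import random

def absorb(p_up, n=10, start=5, rng=None):
    """Random walk on 0..n with absorbing boundaries; returns the
    boundary (0 or n) where the walk is eventually absorbed."""
    rng = rng or random
    x = start
    while 0 < x < n:
        x += 1 if rng.random() < p_up else -1
    return x

def equilibrium_frequencies(p_up, runs=2000, seed=0):
    """Count how often each absorbing equilibrium actually occurs."""
    rng = random.Random(seed)
    hits = {0: 0, 10: 0}
    for _ in range(runs):
        hits[absorb(p_up, rng=rng)] += 1
    return hits

if __name__ == "__main__":
    # Both endpoints are equilibria, but with downward drift (p_up=0.3)
    # the upper one is almost never reached from the middle.
    print(equilibrium_frequencies(0.3))
```

The simulation, like those described in the abstract, tells the experimenter in advance which equilibria are realistically observable.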
Name: Britt Cartrite
Affiliation: University of Pennsylvania
Email: cartrite@sas.upenn.edu
Web: www.sas.upenn.edu/~cartrite
Title: Modeling Succession Crises in Authoritarian Regimes: Beyond "Slime Mold" Complexity
The sequelae of political successions are of particular interest today in authoritarian or semi-authoritarian Muslim
countries, governments so reliant on one man's authority and so weakly institutionalized as to be regarded as "one bullet
regimes." The image of such a state suggests succession as a catastrophic event. Although distinguishing between types of
authoritarian regimes represents an underdeveloped area of study, such variation implies that behind the façade of the "single
bullet regime" different logics of control and stabilization may lurk. The premise of this study is that although chaos may
result from a succession crisis in an authoritarian state, and although the exact script for what will unfold in any particular
country by the death or disappearance of a dominant leader cannot be written ahead of his/her demise, variations in
authoritarian structures generate different patterns of chaos and state survival or breakdown. Evaluating regime performance
necessitates modeling individual agents, the social environment, and the state structures that overlay society but are not
synonymous with it. Using PS-I, we create a generic template entitled "Virtualstan." We represent state institutions as
distinct, hierarchical "classes" of agents, including a "Great Leader", linked spatially and/or through "remote listening" to
one another in a landscape primarily populated by non-state "basic" agents. Based on 100 landscapes, we superimpose three
distinct types of authoritarian regimes, comparing their relative performance over time along a number of dimensions. We
then introduce the disappearance of the Great Leader to evaluate the impact of a succession crisis both within and across
regime-types. Among the more interesting findings is that regime survival emerges from distinct regime-specific
"strategies"; further, when breakdown does occur, regime-type informs both the magnitude and the specific pattern of state
collapse. This "triadic structure" approach allows us to evaluate both social and institutional effects of succession crises.
Name: Scott Christley
Affiliation: University of Notre Dame
Email: schristley@nd.edu
Title: Understanding the Open Source Software Community with Simulation and Social Analysis
Over the past few years, our research group at the University of Notre Dame has been applying various techniques
to study the Open Source Software Community. These techniques include agent-based simulation, data mining, social
network analysis, and community structure analysis to name just a few. In this talk, we will discuss what these different
techniques have taught us about Open Source Software, some of the limitations and frustrations we have encountered when
applying these techniques, and our current and future research directions.
Name: Rich Colbaugh and Kristin Glass
Affiliation: New Mexico Institute of Mining and Technology
Email: ashley@mitre.org
Title: Low Cognition Agents: A Complex Networks Perspective
Complex networks are ubiquitous in nature and society, and recent research has made clear both the importance of
understanding these networks and the limitations inherent in attempting to obtain this understanding using reductionist
methods. Indeed, for systems ranging from the Internet to metabolic networks to the global economy, it is essential to study
the interaction of the system components – the network itself – in order to obtain a real understanding of system behavior.
Moreover, it is often the evolution of the network which is its most important feature.
This talk explores, from a complex networks perspective, the validity and utility of modeling and analyzing
socioeconomic phenomena using “low cognition” agents. The talk begins with a summary of some fundamental concepts
associated with complex networks and low cognition agent models. For example, we show that many social systems evolve
so that the topology and protocols governing agent interaction provide robustness and evolvability in the presence of
environmental complexity and variation in the cognitive strategies of the individual agents, and that such systems therefore
admit useful low cognition agent models. We then illustrate the utility of these concepts and methods for real world
applications through a series of case studies. The application domains considered include national security (e.g., terrorism,
weapons proliferation), the social sciences (e.g., financial markets, organization dynamics), and the biological sciences (e.g.,
metabolic and gene regulatory networks, epidemiology).
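To make the point about network evolution concrete, here is a generic sketch (not from the talk itself; the growth rule is the standard preferential-attachment heuristic) showing how a few heavily connected hubs emerge from a simple growth process in which new nodes attach preferentially to well-connected ones:

```python
import random

def preferential_attachment(n=2000, m=2, seed=42):
    """Grow a network one node at a time; each new node links to m
    distinct existing nodes chosen with probability ~ their degree."""
    rng = random.Random(seed)
    # start from a small triangle so every node has nonzero degree
    targets = [0, 1, 1, 2, 2, 0]  # each node repeated once per incident edge
    degree = {0: 2, 1: 2, 2: 2}
    for new in range(3, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))  # degree-proportional pick
        for old in chosen:
            targets += [new, old]
            degree[old] = degree.get(old, 0) + 1
            degree[new] = degree.get(new, 0) + 1
    return degree

if __name__ == "__main__":
    deg = preferential_attachment()
    hubs = sum(1 for d in deg.values() if d >= 20)
    print("nodes:", len(deg), "hubs with degree >= 20:", hubs)
```

The heavy-tailed degree distribution here arises entirely from the evolution rule, not from any property of individual nodes, which is the sense in which "the evolution of the network is its most important feature."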
Name: Peter Danielson
Affiliation: Centre for Applied Ethics, Univ. of BC
Email: pad@ethics.ubc.ca
Web: robo.ethics.ubc.ca/~pad
Title: Models for discovering dynamic moral norms and constructing better ethical agents
My research project, open-source ethics, aims to bring the transparency and power of computational cognitive and
social science to the traditional craft of ethics (Danielson in progress). The project is building three models:
1. Norms Evolving in Response to Dilemmas (NERD) uses an innovative web-based
survey (yourviews.ubc.ca) to collect data on moral norms. A suite of advisors, representing a wide range of popular, policy,
and ethical perspectives, guides hundreds of participants through real and hypothetical histories of contemporary science
policy issues. (Danielson, Ahmad et al. 2004) argues that NERD is deep, cheap, and improvable. NERD’s three parallel
surveys yield data on questions like: do human-health issues elicit absolutist responses that resource issues do not? NERD’s
simplified world model is designed to facilitate modeling its wealth of quantitative and qualitative participant data.
2. Modeling Ethical Mechanisms, the focus of this paper, is one way our team attempts to explain the NERD data. We model
populations (in Java) as consisting of numbers of agents (in Scheme) ranging from those barely aware of the issues, through trend-followers, selfish agents, reciprocators, and altruists (Danielson 2003). Note two features:
• Social norms are central (Danielson in press). (1) provides two proxies for social norms: direct feedback on choices and
indirect feedback via advisors.
• Ethical variety is explained by a plurality of mechanisms. E.g., although norms play a big part, few agents are simple norm
followers. Conversely, making all agents utility maximizers also oversimplifies. Some care about fairness, the public
good, their family, or the “best” advice (Danielson 2002). Focusing on real ethics data avoids sterile methodological
debates.
3. Fog and Mirrors models democratic self-governance in a world where all citizens (and not just social scientists and their
clients in government, business, and NGOs) have access to the tools developed by (1) and (2).
Name: Jean-Louis Dessalles
Affiliation: PARISTECH, Ecole Nationale Superieure des Telecommunications
Email: dessalles@enst.fr
Web: www.enst.fr/~jld
Title: Communication among selfish agents: from cooperation to display
Communication of honest information is known to be fundamentally unstable in populations of selfish agents. As
agents have more interest in benefiting from others' information than in giving away their own knowledge, general muteness
is the only attractor. A considerable amount of effort has been devoted to finding suitable circumstances that allow cooperative
communication to emerge. The basic idea is to allow agents to recognize each other and to memorize non-responsive
behavior. The problem is that the conditions for cooperation, e.g. strong locality of interactions or a small inflow of newcomers,
are not robust. Close to the edge of viability, cooperation oscillates between adaptiveness and counter-adaptiveness, which
makes reciprocal cooperation unlikely to serve as a theoretical basis for communication among selfish agents.
We developed an alternative model along the lines of the general Theory of Honest Signaling (also known as Theory
of Costly Signaling). In our model, agents communicate to display their ability to get original information. Contrary to
utilitarian models of communication, there is no problem of cheating due to the presence of information thieves. The problem
is rather to show how informational abilities (being the first to know) can become an invaluable quality in the population of
agents.
By means of theoretical modeling and simulations of genetically evolving populations of agents, we show that
honest communication can emerge in a "political" context, i.e. a situation in which individuals are in competition to form
coalitions. Any quality of agents that is positively correlated with coalition success will be valued and thus displayed. The
ability to know before the others is supposed to be such a quality. This model offers a new and consistent account of
communication among selfish agents that can serve as a theoretical basis for explaining the initial emergence of human language.
Name: Jonathan Eells
Affiliation: UCLA, Political Science
Email: eells@ucla.edu
Web: www.bdel.com
Title: A Theory of Lexical Authorship in International Relations
Historical literary practice has moved in recent years toward an investigative method known as stylometry, which
analyses collections of text in order to attribute authorship - useful in the modern debates over "who wrote Shakespeare's
plays", as well as for discovering plagiarism. Fundamental to the stylometric thesis is the idea that each individual author
writes with a distinct voice and style, expressed in the use of fragments and particles within a text (certain prepositional
phrases, certain locutions, etc.) as well as by the use of certain meaning-laden, or lexical, words and phrases.
Attributing authorship by comparing one text with a control text is not unlike seeking correspondence
between two putatively distinct narratives. If the narratives lexically diverge or converge, a researcher could claim that the
authors of the individual texts are diverging or converging on a particular theme or story as well. The implication is obvious
at the international level, where individual speakers can be placed within a lexical space and compared to others with whom they are
politically engaged, to see whether they express the same message or are talking past one another. For many speeches over long
periods of time, computational analysis is the only practical way to determine the lexical
relationships among many speakers, and to identify trends and tendencies to diverge from, or converge on, a shared narrative
held in common (or rejected) among states and their leaders. Rather like a literary Venn diagram, computational analysis
should reveal just how much those lexical circles overlap, and show what changes in that shared space may manifest over
time.
This project follows from these principles. As a computational exercise, stylometric analysis works better as a top-down
project, meaning that the “agent” per se is actually a comprehensive language model. Individual texts, such as speeches
by political leaders, are discrete quanta of information that the language agent parses for comparison to other texts. The
agent is able to assess qualities of isomorphism and orthogonality among the sets of texts that a researcher studies.
Texts derived from the speeches of state leaders on issues considered core to the international relations canon
will be formalized for lexical analysis by the language agent and compared over time to show whether
divergence/convergence trends can be established. At the same time, this project will quite self-consciously submit
itself to criticism as a meaningful measure of international relations practice. At day's end, it will be known whether lexical
analysis of international relations is worth time on the mainframe and, if so, we can begin to formalize measures of political
lexicality as a means of moving international relations forward.
Name: Jason Finley
Affiliation: Researcher, UCLA Department of Psychology
Email: jfinley@ucla.edu
Web: iddeas.psych.ucla.edu/
Title: Desirable Difficulties: Learning, Teaching, and Collaboratively Bridging
Despite extensive progress in the last several decades on understanding the cognitive processes that underlie human
learning and memory, there has remained a gap between the findings of cognitive science and actual educational practices.
Recent research, for example, questions the common view that student performance during instruction indexes
learning and validly distinguishes among instructional practices. Work by Robert Bjork and other researchers has established
that conditions of practice that appear optimal during instruction can fail to support long-term retention and transfer of
knowledge, whereas, remarkably, conditions that introduce difficulties for the learner—slowing the apparent rate of
learning—can enhance long-term retention and transfer. The discovery of such "desirable difficulties" (Bjork, 1994, 1999)
not only raises concerns about prevailing educational practices but also suggests unintuitive ways to enhance instruction.
Desirable difficulties include: spacing practice sessions, varying practice conditions, interleaving material, and requiring the
learner to generate material rather than re-reading it.
However, implementing and testing such ideas has traditionally been quite challenging, due to the differences
between paradigms and practices of cognitive psychologists and those of educational researchers and classroom teachers.
A current project makes use of a successful web-based educational software
system to bridge that gap. This technology-enhanced learning environment enables researchers to test the educational impact
of desirable difficulties by consistently varying the conditions of instruction. It also provides a fluid and minimally invasive
way to introduce cognitively informed changes into current practices.
While desirable difficulties may prove to be beneficial to online learning in the laboratory, their effect will likely not
be as straightforward in the dynamic multi-agent system of the real classroom, owing to factors such as student frustration,
variable student reading ability, working in pairs, and teachers' limited resources. Furthermore, students are not passive
learning agents, but are rather aware of and have notions about their own learning, which we refer to as metacognition. We
are just beginning to explore ways to improve and leverage students' metacognition through use of desirable difficulties.
In addition to describing our research and findings to-date, I will give reports and reflections on working across
disciplines and on the role of information technology in enabling and enhancing such collaboration. In doing so, I will also
comment on the dynamic system of research and practice.
Name: Michael Fischer
Affiliation: University of Kent
Email: M.D.Fischer@ukc.ac.uk
Web: fischer.md
Title: Powerful Knowledge: Emergent Order and Indigenous Knowledge
One of the key issues in agent-based modeling is the relationship between active and passive processing that arises
from synergies between agents. Using results from research projects I explore indigenous knowledge systems (IKS) and the
relationship of IKS to agent-based models of culture more generally. IKS are typically found in the wild in the form of
general knowledge and specialized knowledge. Specialized knowledge is local to specific individuals or groups of
individuals, but the specification of this knowledge and its uses are part of general knowledge.
Maintaining and reproducing IKS requires a high level of fidelity in both general and diversified specialized
knowledge to serve as a consistent resource for an agent community to apply in diverse circumstances to unique problems. I
am focusing on two issues that greatly impact the maintenance, evolution, transmission, and instantiation of IKS. First, how
does the fidelity of IKS affect its use: how consistent does IKS need to be, and which aspects need to be consistent? Second, what
are likely ways of preserving fidelity in a distributed framework?
The paper is based on current research on the structure of indigenous knowledge
together with research in collaboration with Dwight Read on the instantiation of kinship relationships into kinship terminologies,
work that confirms Read's long-standing conjecture that the outer logic of kinship terminologies is regulated by very
strong internal structuration. I propose that maintenance of the internal structure of local knowledge domains is more
important than maintaining the external specifics of domains of knowledge. I will present results from agent-based models
that suggest internal structure maintenance is more likely to arise from inter-agent synergies than from individually
encapsulated methods; that agents are most likely to succeed by identifying and adopting domain structures that arise from
processes that would otherwise increase entropy in alternative domain structures.
Name: Nicholas Gessler
Affiliation: UCLA Human Complex Systems Center
Email: gessler@ucla.edu
Title: Artificial Culture - A Posthuman Approach to Process
Culture emerges from multiple agency, operating in different media, in different locations and at different speeds of
influence. Empirically, culture is the complex interaction of cognitions, individuals, groups, artifacts and architectures,
differentially distributed through space and time. Artificial culture is the program of describing, understanding and
explaining the evolution of culture process by translating cognitive, discursive and mathematical anthropological theory into
computer models and simulations. From the space of real-world instances, we seek to discover the boundaries of the space of
artificial-world possibilities by exploring constellations of counterfactual “what-if” scenarios. If we are to provide a
platform for the variety of theories extant in anthropology today, we must find ways to foster temporal and atemporal
emergences, ways to recognize, encapsulate and capture emergences for use as primitives in yet higher constructs, deal with
the perceptions of and exchange of information, misinformation and disinformation, and perceptions of and exchange of
information about information, misinformation and disinformation. We must begin to build simulations in which these things
coevolve on their own, and to this end we look at some suggestive implementations.
Name: Jessica Glicken Turnley
Affiliation: Galisteo Consulting Group, Inc.
Email: jgturnley@aol.com
Web: www.galisteoconsulting.com
Title: Validation Issues in Computational Social Simulation
Traditional validation exercises match computational model output against a state of the ‘real world.’ I suggest that,
at best, validation exercises often compare model output to an implicit conceptual model or, in worst cases, to a formalized
version of the conceptual model. This raises the question of whether (or to what extent) model output is compared to the
target system itself. These questions introduce the role of social theory and its relationship to computational models.
Furthermore, computational social simulations often deal with future states in which the (current) intent of actors and other
unknown factors can significantly affect outcomes. Complex system approaches are making it evident that many different
initial conditions can lead to the same end state and, vice versa, a single initial condition can lead to many possible futures.
Therefore, even validating social simulations against historical data is questionable without some more complete exploration
of the theoretical space that is driving the initial conceptual model. This discussion covers such questions as the selection
logic for including some data and not others in the computational social model, including questions of theoretical frameworks
and observer bias; ways in which computational formats can constrain data type and selection and the impact this might have
on model output and use; and issues around the management of non-observable states such as ‘intent’ in a computational
framework.
Name: William A. Griffin
Co-Authors: Olga Kornienko, Shana Schmidt, and Amy Long
Affiliation: Arizona State University
Email: william.griffin@asu.edu
Web: www.public.asu.edu/~atwag/
Title: Small n Evolving Structures: Dyadic Interaction between Intimates.
Agent-based modeling has typically sought to simulate processes and emergence among agents numbering
from the tens to the thousands. Consequently, one neglected, yet immensely critical, area of social and behavioral evolution
is the continuously emerging process of intimate dyadic interaction. Whether it is the relationship between a husband and
wife or a parent and child, these dyads form the foundation of social processes. The most difficult aspect of developing a
dyadic level ABM is determining the “multiscale structure” that emerges, and then evolves. What rules determine the
reciprocal relationship between relationship quality and the overt moment-to-moment behaviors? How do attributional sets,
derived from a unique history and maintained by each member of the dyad, influence the behavioral trajectories across time
scales? With intimate dyads, couple sustainability is invariably linked to their behavioral and affect exchanges and these, of
course, are dependent on the reciprocal dynamics of perception, attribution, and previous interactions. The structure of these
data, taken from couples during a 15-minute dyadic interaction, systematically varies as a function of marital quality; my lab
and others have consistently found this. To date, however, the underlying mechanisms that generate these complex processes
have not been converted to algorithms that would allow simulation. Using micro-social behavioral data (e.g., negative
statements, nonverbal back channeling, self-report affect ratings) collected at the Marital Interaction Lab at Arizona State
University, we will discuss our attempts to develop a marital interaction ABM that generates data intended to replicate
realized data. Specifically, individual behaviors expressed at each talk turn are used as independent agents; the interaction
among these agents provides the emergence of expressed affect between the relationship partners. The expressed affect
generated by the model is then compared to extant affect expression data to determine model fit.
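The turn-taking mechanism described in this abstract can be illustrated with a toy model. This is not the Arizona State implementation; the baseline, reciprocity, and noise parameters below are invented for illustration:

```python
import random

def simulate_dyad(n_turns=30, baseline=(0.2, 0.1), reciprocity=0.6, noise=0.1, seed=1):
    """Toy turn-taking model: each speaker's expressed affect blends a personal
    baseline (a crude stand-in for an 'attributional set') with the partner's
    affect on the previous talk turn, plus noise."""
    rng = random.Random(seed)
    affect = [0.0, 0.0]                      # latest expressed affect per partner
    trace = []
    for t in range(n_turns):
        speaker, partner = t % 2, 1 - t % 2  # partners alternate talk turns
        a = ((1 - reciprocity) * baseline[speaker]
             + reciprocity * affect[partner]
             + rng.gauss(0, noise))
        affect[speaker] = max(-1.0, min(1.0, a))   # keep affect in [-1, 1]
        trace.append((speaker, affect[speaker]))
    return trace

trace = simulate_dyad()
```

With high reciprocity, one partner's negativity quickly entrains the other's; structure of this kind, emerging from turn-by-turn exchange, is what the model-fitting exercise described above targets.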
Name: Stephen Guerin
Affiliation: RedfishGroup
Email: stephen@redfish.com
Web: www.redfish.com
Title: Peeking Into the Black Box: Some Art and Science to Visualizing Agent-Based Models
This paper explores current metaphors for visualizing agent-based models. Metaphors include grid, network, n-dimensional cubes and landscape visualization techniques. A final section offers some theory underlying visualization of
complex systems models with emphasis on mappings to non-equilibrium systems; conserved quantities and their flows; order
creation; identifying order parameters and control parameters; and the presentation of phase transitions. Demonstration
models created by the author will be used to illustrate the visualization techniques and concepts. These include a social model
of the UK Criminal Justice System done for Parliament and the London School of Economics, MIT MediaLab's
RealityMining project, and DrugSim - explaining the incidence curve of heroin use in Baltimore in collaboration with Dr.
Michael Agar.
Name: Stephen Guerin and Owen Densmore
Affiliation: RedfishGroup, Santa Fe NM
Email: stephen.guerin@redfish.com, owen@redfish.com
Web: redfish.com
Title: Man On Fire: Crowd Dynamics at Zozobra, Santa Fe's Burning Man
This paper presents our work on a pilot project with the Santa Fe Police Department (SFPD) and the Santa Fe
Fire Department (SFFD) to model crowd dynamics at the Zozobra event. Zozobra is a long-standing end-of-summer public
event in Santa Fe, NM. It starts late afternoon, continuing into late evening, culminating in the burning of Zozobra, Old Man
Gloom. The SFPD and SFFD have central responsibility for the public safety and security during Zozobra Celebrations. Of
primary concern is the safe ingress and egress of a crowd of 45,000.
An agent-based model of Zozobra pedestrian and vehicle traffic is developed to understand the dynamics of crowds
in this venue and to evaluate strategies for mitigating dangerous incidents during the event. Interactive 3D visualizations will
be presented that illustrate the capabilities of the model. The model provides a space that realistically represents the Zozobra
event using Geographic Information Systems (GIS) data including street, elevation and building data utilizing detailed
available blueprints, physical site visits, and high-resolution aerial photographs.
Example scenarios explored in the model are:
• changing spatial configuration of exits and obstacles
• varying the crowd density
• introduction of dangerous incidents like fire, fights or gunfire
• converting a collection of roads to one-way, pedestrian-only, or traffic-only lanes
• varying the distribution of SFPD and SFFD personnel
• introducing incidents demanding emergency medical access
• allowing post-Zozobra entertainment on the Santa Fe Plaza
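A minimal grid-based sketch of the kind of egress dynamics such a model explores (the geometry, agent count, and exit layout here are invented stand-ins, not the Zozobra GIS data):

```python
import random

def evacuate(width=20, height=20, n_agents=150, exits=((0, 10),), seed=0, max_ticks=1000):
    """Minimal grid egress model: each tick, agents (nearest exit first) step to a
    free neighboring cell that strictly reduces Manhattan distance to the nearest
    exit; a cell holds one agent, so congestion emerges near exits."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    agents = set(rng.sample(cells, n_agents))

    def dist(p):
        return min(abs(p[0] - ex) + abs(p[1] - ey) for ex, ey in exits)

    for tick in range(1, max_ticks + 1):
        for a in sorted(agents, key=dist):
            if dist(a) == 0:                 # standing on an exit: leave the venue
                agents.discard(a)
                continue
            moves = [(a[0] + dx, a[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            moves = [m for m in moves
                     if 0 <= m[0] < width and 0 <= m[1] < height
                     and m not in agents and dist(m) < dist(a)]
            if moves:
                agents.remove(a)
                agents.add(min(moves, key=dist))
        if not agents:
            return tick                      # everyone is out
    return max_ticks

ticks = evacuate()
```

The scenario levers in the bullet list map directly onto parameters of even this toy version: more `exits`, different `n_agents` for crowd density, blocked cells for obstacles, and so on.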
Name: Paul Jorion, PhD
Affiliation:
Email: paul_jorion@msn.com
Title: Reasons Vs. Causes. Emergence as experienced by the human agent
Whenever laws (similar to those found in physics) are discovered that explain human collective behavior, these
seem “external” to the motives that the agents active in the process assign themselves as the “reasons” why they are acting
the way they do. To physicists, if a “law” can be formulated to describe a type of human behavior, then man’s sentiment of
free will while he is enacting that behavior must be illusory.
This is not the case: the agent’s “reasons” add to the explanation of the process. More specifically, it will be shown
with examples from the US stock and housing markets that “reasons” explain the critical points (likely to create a bifurcation)
that the physicist observes as being part of the behavior emergent from the agents’ interactions. The mechanism is the
following: catastrophes that induce suffering in men typically derive from positive feedbacks where a process snowballs;
man’s response typically materializes into a negative feedback which stops the snowballing and restores a process where
divergence from an equilibrium is damped (homeostasis).
The way men achieve a harmonization of agents’ behavior is through formulating “rules” that force harmonization
when the agents comply. Following a rule may escape the agent’s consciousness and become a habit; this is what Durkheim
understood with the “internalized social”: when following the rule has become a “second nature” and agents follow it
unaware. The “invisible hand” refers to processes where there’s no “following a rule” because following the rule has been
internalized.
Rules manage to conflate human “reasons” and physical “causes”. This is why the laws of physics were called
“laws” in the first place: because they account for a process similar to that of agents following a rule.
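The snowball-then-damp mechanism described above can be made concrete with a toy dynamical system (all constants are illustrative, not drawn from the market examples):

```python
def run(periods=60, growth=1.08, damping=0.5, trigger=2.0, equilibrium=1.0):
    """Toy version of the mechanism: deviation from equilibrium snowballs
    (positive feedback) until it crosses a threshold; agents then adopt a
    'rule' that switches on negative feedback and damps the deviation."""
    x, rule_active, path = 1.1, False, []
    for _ in range(periods):
        if abs(x - equilibrium) > trigger:
            rule_active = True                      # once adopted, the rule persists
        factor = damping if rule_active else growth
        x = equilibrium + factor * (x - equilibrium)
        path.append(x)
    return path

path = run()
```

The series rises roughly exponentially, peaks just above `equilibrium + trigger`, then relaxes back toward equilibrium: the crisis-and-response shape the abstract attributes to snowballing processes met by rule-based negative feedback.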
Name: Lawrence Kuznar, Director
Affiliation: Decision Sciences and Theory Institute, Dept. Anthropology, Indiana University – Purdue University
Co-Authors: William Frederick, Dept. Mathematical Sciences, and Robert Sedlmeyer, Dept. Computer Sciences
Email: kuznar@ipfw.edu
Title: Agent Based Models of Risk Sensitivity: Applications to Social Unrest, Collective Violence and Terrorism
Political scientists debate the efficacy of poverty and relative deprivation theories for explaining collective violence,
including current terrorism threats. According to these approaches, relative differences in wealth and social status create
grievances that cause people to compete violently for resources. Recognizing that rebellions are indeed risky, we have
operationalized the notion of relative deprivation as a special case of risk sensitivity as measured by economists. We have
developed and tested a mathematical model, the expo-sigmoid formulation, which models status distribution in a society and
its relation to risk sensitivity and collective violence. The model allows specific predictions of individuals likely to rebel, and
targets subsets of poor and relatively wealthy, yet aggrieved individuals in a society. We use agent-based models (ABMs) to
explore how decision rules and risk sensitivity recursively influence status differences, causing some individuals to take great
risks to address grievances. Our ABM also serves as a platform for testing the relative efficacy of competing models of
decision making thought to lead to collective violence. These alternative approaches include rational choice theory, prospect
theory, and bounded rationality.
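The risk-sensitivity logic can be sketched with a generic sigmoidal utility of status; this is a stand-in for, not a reproduction of, the authors' expo-sigmoid formulation:

```python
import math

def utility(s, k=8.0, mid=0.5):
    """Illustrative sigmoidal utility of social status s in [0, 1]; a generic
    stand-in for the expo-sigmoid formulation (parameters are invented)."""
    return 1.0 / (1.0 + math.exp(-k * (s - mid)))

def risk_prone(s, h=1e-4):
    """Convexity test: where the utility curve has a positive second derivative,
    a gamble beats its expected value, marking risk-prone (rebellion-prone) agents."""
    d2 = (utility(s + h) - 2.0 * utility(s) + utility(s - h)) / (h * h)
    return d2 > 0

statuses = [i / 100 for i in range(1, 100)]
rebels = [s for s in statuses if risk_prone(s)]
```

With a single sigmoid the convex (risk-prone) region lies below the inflection point; the expo-sigmoid form adds further convex stretches, which is how it can also pick out relatively wealthy yet aggrieved individuals.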
Name: David B. Kronenfeld
Affiliation: University of California at Riverside
Email: david.kronenfeld@ucr.edu
Web: http://pages.sbcglobal.net/david-judy/david.html
Title: Cultural Models for Action: Systems of Distributed Knowledge in a Non-monolithic Universe.
“Cultural Models” (CM) is a term that has come to apply to culturally standardized and shared/distributed cognitive
structures for explaining or structuring action. They contrast with more cultural conceptual systems (such as kinship or
ethnobiological terminological systems) and with more general procedures for analyzing and imposing initial structure on new
problems. They are functionally a little like Schank and Abelson’s “scripts”. CMs combine motives, emotions, goals,
mechanisms, classificatory information, etc.--in each case, perhaps, cross-linking to separate cognitive structures within
which these separate entities are organized, structured, and classified--into possible actions. CMs can be used by individual
actors to generate behavior--often after some consideration of the downstream implications of the choice of one model over
another--but are not themselves the individual internal cognitive schemas that actually generate behavior. Different CMs are
cross-linked with one another in a variety of ways. One area of cross-linkage includes models held by members of a given
community in response to similar situations (as in overlap among models for doing similar things, models for use in similar
situations, models involving similar attitudes or goals, and so forth). Another kind of cross-linkage involves models for more
or less the same thing that are held by members of different communities--especially where membership overlaps in one way
or another. CMs have to be easily learned, productive, and systematic.
I want to discuss the implications of these kinds of overlap for the shape of cultural models and the way in which
they are learned, held, and applied. Illustrative examples will be utilized, but no systematic formal description or model of
CMs will be offered--it’s too soon.
Name: Cheng-shan Frank Liu
Affiliation: The University of Kansas
Email: ashan@ku.edu
Web: lark.cc.ku.edu/~ashan
Title: Opinion Leaders and the Flows of Citizens' Political Preferences: An Assessment with Agent-Based Models
The origin and the consequence of public opinion changes have been an important field of study in political science.
One well-known work is John Zaller's (1992) Receive-Accept-Sample (RAS) model. This RAS model suggests, first, that an
individual’s political awareness influences the way he or she consumes political information. Second, opinion leaders have a
great leverage to influence individuals who have low levels of political awareness and to slow down the flows of deliberation.
Nevertheless, for over a decade researchers have not fully examined this model empirically. One reason is that the RAS model is
based on assumed universal axioms and rules about humans' information processing. Another reason is that it is a static
model and does not deal with the dynamics of opinion changes. Furthermore, the model does not include citizens'
communication networks of political discussion and citizens’ daily consumption of media messages. Those limits constrain
the RAS model from empirical applications and leave these questions unexplored: Do opinion leaders or political experts
influence flows of public preferences? If opinion leaders and general citizens process political information in different ways,
what are the consequences of continuous interaction between opinion leaders and the mass public? Inspired by Huckfeldt,
Johnson, and Sprague's (2004) autoregressive model of political preferences, I constructed an agent-based model to go
beyond the constraints of the RAS model. This Swarm model, based on empirical findings in communication studies and
cognitive psychology, demonstrates that opinion leaders are resistant to social pressure and conformity and are likely to be
the disseminators of disagreement from outside of their communication networks. This paper also addresses the implications
of the simulation for the study of deliberative democracy and the spreading of ideas online and offline.
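The claim that leaders resistant to conformity sustain disagreement can be illustrated with a toy averaging model on a ring network; this is our sketch, not the Swarm implementation described above:

```python
import random

def step(opinions, leaders, conformity=0.5):
    """One round on a ring network: ordinary citizens move toward their two
    neighbors' mean opinion (social conformity); opinion leaders are resistant
    to social pressure and keep their opinions unchanged."""
    n = len(opinions)
    new = opinions[:]
    for i in range(n):
        if i in leaders:
            continue
        nbr_mean = (opinions[(i - 1) % n] + opinions[(i + 1) % n]) / 2.0
        new[i] = (1 - conformity) * opinions[i] + conformity * nbr_mean
    return new

random.seed(3)
ops = [random.uniform(-1, 1) for _ in range(40)]
ops[0], ops[20] = 1.0, -1.0            # two leaders holding opposed positions
for _ in range(200):
    ops = step(ops, {0, 20})
```

Without leaders the ring converges to consensus; with two fixed leaders the population settles into a persistent gradient of disagreement between them, so the leaders act as disseminators of disagreement into their networks.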
Name: Veena S. Mellarkod
Affiliation: Argonne National Laboratory & Texas Tech University
Email: mellarko@cs.ttu.edu
Title: Meaning-Oriented Social Interactions: Conceptual Prototypes, Inferences and Blends
Agent-based modeling is one of the best ways to model social interactions, with a potential to advance both the
social and computational sciences. The modeling of social agents is an interesting area of research for a number of reasons:
the complexity of the domain, the applicability to a wide range of real-world situations, and the inherent challenges of design
and modeling. Modeling agents in terms of social actions has greater potential when the agent's interpretive process is
incorporated. An "interpretive agent" (IA) computational research strategy emphasizes the way that meaningful responses to
circumstances are produced via social interaction. Designing such agents requires great care since modeling even simple
interpretive behaviors in agents can multiply the computational complexity to exponential levels. The strategy used in the
paper attempts to capture domain complexity without confronting computational limits.
The approach emphasizes shared social prototypes and actions through three assumptions and three mechanisms
[Sallach 2003]. The three assumptions are that agent simulation is a productive domain for experimentation with interpretive
models; natural agents regularly alternate between continuous and discrete models; agents dynamically maintain an orientation
field with an emotional valence for relevant agents, objects and resources. The three mechanisms are: prototype inference,
orientation accounting, and situation definition.
The current paper shows our work on the first mechanism: prototype inference. Prototype concepts are used to
comprehend and derive inferences about the world. Prototypes can refer to divergent types of phenomena including objects,
animals, people, emotions, circumstances and abstract entities such as numbers. There are several issues involved with
prototypes: the structure and representation of prototypes and their networks, deriving inferences from prototypes, the
granularity of the level of inference, the comparison and blending of different prototypes.
Prototype structure is represented using Codd's RM/T relational model [Codd 1979]. A prototype is composed of
entities, events and/or relationships, where at least one is present. The collection of prototypes as a whole has a radial
structure in which core instances are exemplars that provide a default definition, while atypical instances reside on the
periphery. Different prototypes are blended to create a new one using the join operators of RM/T. The paper discusses these
issues and how such capabilities integrate to form the agent's knowledge structure, together with distinctive
reasoning capabilities.
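Prototype blending can be gestured at with a toy attribute merge; Codd's RM/T join operators are far richer than this dictionary sketch, and the example prototypes are invented:

```python
def blend(atypical, core):
    """Toy stand-in for prototype blending: the second prototype supplies
    exemplar defaults (the radial core); attributes of the first, atypical
    prototype override them. This only gestures at RM/T join semantics."""
    merged = dict(core)
    merged.update(atypical)
    return merged

# Invented example prototypes.
bird = {"covering": "feathers", "locomotion": "flies", "lays_eggs": True}
penguin = {"locomotion": "swims", "habitat": "antarctic"}
merged = blend(penguin, bird)
```

The result keeps the core's defaults (`feathers`, `lays_eggs`) while the peripheral instance overrides `locomotion`, mirroring the core-versus-periphery radial structure described above.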
Name: Edward P. MacKerrow
Affiliation: Complex Systems Group, Theoretical Division, Los Alamos National Laboratory
Email: mackerrow@lanl.gov
Title: Modeling the Behavioral Norms Underlying Competition and Coordination in Government Bureaucracies
Many government agencies exhibit very different behavioral dynamics when compared to commercial
organizations. The underlying socio-organizational behaviors in these government agencies frequently lead to the emergence
of federations of “stove-piped” departments – competing with each other for funding, knowledge, prestige, and capability. The
“organizational glue” that is found in cooperative enterprises is rare in the government sector. Why? To better understand the
social norms that lead to these effects we have developed a social simulation of a large national research laboratory. This
simulation is constructed of agents representing the employees and their roles in the laboratory. We model internal funding
and staffing dynamics over the allocation of research funds. Agents in the model are heterogeneous in their capability and
research interests. These agents work on simulated projects with heterogeneous requirements and timelines. Basis vectors
that span the set of skill areas are used to model capabilities, interests, and requirements. Effective staffing aligns an agent’s
capability and interest vectors with project requirements. Internal competition over funding, available capacity,
empire building, desires to staff unfunded researchers, and other competitive norms often prevent effective staffing. Our
organizational model is unique. The individual behaviors appear to be rather universal inside the roles of scientists,
technicians, managers, and support staff – simplifying the behavioral rules and individual objectives for each class of agent.
The model is conditioned on a large amount of real-world data and time series, allowing us to build a model with high
specificity and validate it against real world dynamics. We compare how well the dynamics of agents built upon specific
behavioral norms, roles, objectives, and strategies compare with historical metrics of the real-world organization. Our long-term goal is to explore different reward structures and constraints to nurture stronger inter-departmental cooperation.
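One plausible reading of the skill-basis-vector staffing match is cosine similarity between capability and requirement vectors; the four-skill basis and the numbers below are hypothetical:

```python
import math

def alignment(capability, requirement):
    """Cosine similarity between an agent's capability vector and a project's
    requirement vector, both expressed in the same basis of skill areas."""
    dot = sum(a * b for a, b in zip(capability, requirement))
    na = math.sqrt(sum(a * a for a in capability))
    nb = math.sqrt(sum(b * b for b in requirement))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical 4-skill basis: [physics, software, statistics, management].
agent_skills = [0.9, 0.4, 0.1, 0.0]
project_needs = [0.8, 0.5, 0.2, 0.0]
score = alignment(agent_skills, project_needs)
```

An idealized staffing rule would assign each agent to the open project maximizing this score; the competitive norms in the abstract then act as perturbations that prevent that ideal assignment.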
Name: David Midgley
Affiliation: INSEAD
Email: david.midgley@insead.edu
Web: www.insead.edu
Title: Destructive Testing and Empirical Validation of a Multi-Agent Simulation of Consumers, Retailers and Manufacturers
We are working on an important problem in business, namely the complex interaction between consumers, retailers
and manufacturers that leads to market and economic outcomes such as consumer satisfaction, retail and manufacturer
profits. Individual aspects of this problem have been examined in the literature but we do not believe that the complete
system has been modeled with realistic assumptions; hence our knowledge of these important interactions remains
incomplete. We are using the RePast platform to produce a more realistic model of the interactions between these three types
of agent. We do this by specifying the memory and decision rules of the individual agent types, basing these rules on the
consumer psychology and behavioral decision-making literatures.
However, the purpose of our paper is not to discuss the details of our simulation but to address the critical issue of
how we validate this work. By validation, we mean two things. First, that the model produces realistic behavior across a
plausible range of initial conditions (face validity). Second, that it can be estimated on real data to produce meaningful
conclusions and insights (empirical validity).
Drawing on the ideas of Miller’s ANTS and Bleuler et al.’s PISA, we have embedded the RePast model in a testing
and estimation framework. We will report on how we formulated objectives for destructive testing of the model to establish
face validity, how we chose efficient search algorithms for fitting many parameters to empirical data, and how we designed
this framework to be extensible as we refine the underlying RePast model. We will also present data on our validation
efforts and draw some broader conclusions on validating agent-based models—hopefully contributing some modest answers
to the validation questions raised by Co-Directors of the UCLA Center for Computational Social Science in their call for
papers.
Name: Dario Nardi
Affiliation: Human Complex Systems, UCLA
Email: dnardi@math.ucla.edu
Web: www.darionardi.com, www.socialbot.com
Title: Modeling Agent Personality, Metabolism and Environment without Boundaries Using Fuzzy Cognitive Maps
It is understood that intelligent agents have mental models of their world and have various internal processes
(physical, mental, emotional) even as they interact with other agents. Many simulations ignore intra-agent life, or model
intra-agent characteristics as discrete from the larger inter-agent simulation. But real agents and their environments are
linked in numerous non-linear ways, without artificial Cartesian boundaries such as “in the head model” vs. “in the world
behavior”.
The Fuzzy Cognitive Map, or FCM, is a type of iterative dynamical network. It is a powerful tool to model agents’
personalities, metabolisms, behaviors and social and physical environments in a unified way. For the presentation, I will
demonstrate this tool in two ways:
a) By implementing the theory of temperament, which proposes 4 deep patterns of agent core needs and values. This is
similar to Maslow’s hierarchy of needs. However, the 4 temperament patterns exist only in relation to an agent’s
environment, including other agents, so to model temperament as solely in an agent’s mind does not normally work.
b) By implementing a model of the endocrine system, the body’s hormone-producing glands and their dynamic
effects. Hormone action is heavily driven by environment, including such interactions as physically touching other
agents. And the endocrine system is heavily driven by feedback loops. The Fuzzy Cognitive Map makes it easy to
model metabolism, behavior and environment together.
While the FCM is not novel (cf. Bart Kosko), its applications have not been fully realized, particularly in the area of multi-agent systems and intra-agent life.
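The core FCM iteration is compact enough to sketch; the three-concept map and its weights below are invented, loosely echoing the endocrine example:

```python
import math

def fcm_step(state, weights):
    """One synchronous Fuzzy Cognitive Map update: each concept's next activation
    is a squashed weighted sum of all concepts' current activations."""
    n = len(state)
    return [math.tanh(sum(state[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

# Invented 3-concept loop loosely echoing the endocrine example:
# touch -> hormone release -> calm -> (seeking) touch.
W = [[0.0, 0.8, 0.0],
     [0.0, 0.0, 0.9],
     [0.5, 0.0, 0.0]]
state = [1.0, 0.0, 0.0]        # an initial 'touch' stimulus
for _ in range(50):
    state = fcm_step(state, W)
```

Because concepts for personality, metabolism, behavior and environment all live in one weight matrix, there is no Cartesian boundary between "in the head" and "in the world" nodes, which is the unification the abstract emphasizes. This particular loop is contractive, so activations settle to a resting state.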
Name: Paul Ormerod and Bridget Rosewell, Rich Colbaugh and Kristin Glass
Affiliation: Volterra Consulting, London, UK
Email: pormerod@volterra.co.uk
Web: www.paulormerod.com
Title: Knowledge and Cognition in the Social Sciences: Modeling with Low Cognition Agents
The standard socio-economic science model requires the assumption of powerful cognition on behalf of its agents.
Efficient outcomes arise from the cognitive powers of the agents. An alternative view posits that efficiency is the joint
product of institutional structure and agent actions (see, for example, Vernon Smith, American Economic Review, June
2003). Models in which agents have low cognition may be able to offer superior accounts of many socio-economic
phenomena than the standard approach. We illustrate this both with specific examples of successful models which explain
empirical phenomena, and offer a general hypothesis that:
• system sensitivity to variations in topology increases as system maturity increases
• system sensitivity to variations in vertex characteristics decreases as system maturity increases
This result suggests that as complex systems “mature”, the importance of accurately modeling their vertex characteristics
decreases (i.e. the relevant topology of the system is more important than the specific behavioral rules of agents.) We also
address what is perhaps the most difficult aspect of this approach for standard economics to accept, namely the inherent
limits to learning better strategies in most situations.
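A minimal illustration of the low-cognition, topology-dominated point: the same imitation rule on two topologies yields opposite collective outcomes. The rule and graphs are our illustration, not the authors' models:

```python
def spread(neighbors, seed_node, rounds):
    """Low-cognition imitation: an agent adopts once at least half of its
    neighbors have adopted -- no optimization, no foresight."""
    adopted = {seed_node}
    for _ in range(rounds):
        adopted |= {i for i in neighbors
                    if 2 * sum(j in adopted for j in neighbors[i]) >= len(neighbors[i])}
    return adopted

n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}          # sparse topology
complete = {i: [j for j in range(n) if j != i] for i in range(n)}  # dense topology
ring_result = spread(ring, 0, n)
complete_result = spread(complete, 0, n)
```

The identical vertex rule produces a full cascade on the ring and no cascade at all on the complete graph: the topology, not the agents' (minimal) cognition, decides the outcome.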
Name: David R. Purkey, Ph.D.
Affiliation: Natural Heritage Institute
Email: dpurkey@n-h-i.org
Title: Introducing Agency into Water Resource Simulation Models
Typically water resources systems models assume that water demand is related solely to climatic factors and ignore
the role of decisions taken by actors operating under differing objectives, perceptions and constraints. This poses a problem
for water planners trying to develop equitable and sustainable water management plans in a river basin context. WEAP21,
developed by the Stockholm Environment Institute, is a water resource systems model that attempts to balance water supply
and demand under various climatic and hydrologic conditions. The model is limited by the fact that human/environment
interactions and feedback as well as the impact of heterogeneity in the behavior of individual actors on emergent water
demand are not considered.
It is anticipated that emergent water demand estimates based on a dynamic representation of the cumulative
influence of individual water use decisions at the local level will enhance the quality of information used in developing
integrated water management plans. This will be accomplished by replacing “demand nodes” based on climate forcing with
nested agent based models where the actions of the agents are determined by numerous factors, including the simulated state
of the water supply system. This paper will discuss the possible design of such a model, relating it both to highly abstract
models of environmental and hydrological processes and to models such as WEAP21 which eliminate behavioral processes altogether.
Name: Paolo Patelli
Affiliation: Complex System Group, CNLS, Theoretical Division, Los Alamos National Laboratory.
Email: paolo@lanl.gov
Title: A behavioral model of coalition formation
A hierarchical agent based model of coalition formation is investigated. At the individual level agents which share
commonalities self-organize into groups. We compare the effect of two different agent decision functions on the model
dynamics. The first decision function is based on expected utility theory, the second on prospect theory. At the
organizational level groups can form coalitions with other groups. The dynamics of coalition formation are investigated with
an exogenous/endogenous reward function. We are applying our model towards better understanding the dynamics of
coalition formation between political opposition groups and terrorist organizations. We compare the results of our models
with the findings of previous studies that are based on the spin-glass approach.
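The two decision functions compared in the model can be sketched with textbook forms; the prospect-theory parameters below are the standard Tversky-Kahneman estimates, and probability weighting is omitted for brevity:

```python
def expected_value(outcomes):
    """Risk-neutral expected-utility baseline with linear utility."""
    return sum(p * x for p, x in outcomes)

def prospect_value(outcomes, alpha=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and steeper
    (loss aversion, lam > 1) for losses; probabilities are taken at face value."""
    def v(x):
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)
    return sum(p * v(x) for p, x in outcomes)

gamble = [(0.5, 100.0), (0.5, -100.0)]    # a fair coin flip
```

The fair flip is worthless to an expected-value agent but distinctly unattractive to a prospect-theory agent, so populations built from the two agent types can make different grouping decisions under identical reward functions.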
Name: Mary Lee Rhodes
Affiliation: Trinity College Dublin
Email: rhodesml@tcd.ie
Title: Public Service Systems (PSS): elements of an agent-based model
Public administration scholars (Boston 2000, Pollitt & Bouckaert 2004) have suggested that “systems effects” may
be one of the keys to unlocking the door of cause and effect relationships in public service delivery. Others (Blackman 2001,
Chapman 2002), have been more specific and proposed that complex adaptive systems (CAS) theory offers a framework for
understanding public service dynamics that is superior to existing frameworks. This paper advances the arguments put
forward by these authors and applies a CAS framework to the organizational landscape of housing in Ireland in an effort to
address the potential gap in our understanding of cause and effect relationships operating in the Irish housing system.
Specifically, I focus on the process of decision-making in various organizations involved in the provision of
housing, including developers, bankers, local authorities, non-profit housing organizations, advocacy groups and policy
makers. Analysis of data in over 50 interviews of senior decision-makers in these organizations suggests that:
1) The outcome of the Irish housing system does not appear to be directly related to the aggregate purpose(s) of the system’s
constituent agents, though one type of agent and one (clearly very powerful) agent do have purposes that are consistent with
the observed outcomes.
2) Environmental factors appear to affect some system agents more than others, though this may be an artifact of the
particular outcome variable studied (owner-occupation rate).
3) The assumptions agents make about how the system works appear to be both endogenous to the system (i.e., affected by
the agents themselves) and variable across different agents. Policy interventions that assume exogenous, homogenous "rules
of the game" are therefore unlikely to result in predictable outcomes.
Using the NK modeling approach described in Siggelkow & Levinthal (2003), the paper concludes with
recommendations based on the empirical data from the Irish housing system on how to formulate a model for public policy
simulations that takes into account complex adaptive system dynamics, including:
* the range of possible decisions that agents can make (see example in Rhodes & Keogan (2004) for the non-profit sector).
* the range of key factors that agents consider in choosing among options. Factors may be exogenously determined (e.g.,
external to the system) or endogenous (e.g., a function of the decisions and actions of the system agents).
* the interaction(s) among decisions and factors as per Siggelkow & Levinthal (2003), i.e. the K factor in Kauffman’s
(1993) NK model.
* the value proposition (weighted elements of power, economic gain, social gain and innovation) adopted by the agent
(Rhodes & Mackechnie 2003)
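The NK machinery referenced above can be sketched as follows: a generic Kauffman NK landscape with greedy one-flip search, not the Siggelkow & Levinthal specification, and with invented sizes:

```python
import random

def make_landscape(n, k, rng):
    """Generic Kauffman NK landscape: decision i's payoff contribution depends
    on its own state and the states of its k linked decisions."""
    links = [[(i + d) % n for d in range(1, k + 1)] for i in range(n)]
    contribs = []
    for _ in range(n):
        contribs.append({tuple((bits >> b) & 1 for b in range(k + 1)): rng.random()
                         for bits in range(2 ** (k + 1))})
    return contribs, links

def nk_fitness(config, contribs, links):
    total = 0.0
    for i, deps in enumerate(links):
        key = (config[i],) + tuple(config[j] for j in deps)
        total += contribs[i][key]
    return total / len(config)

rng = random.Random(7)
n, k = 8, 2
contribs, links = make_landscape(n, k, rng)
config = tuple(rng.randint(0, 1) for _ in range(n))

# Greedy adaptation: accept any single-decision flip that raises fitness,
# stopping at a local optimum of the landscape.
improved = True
while improved:
    improved = False
    for i in range(n):
        trial = config[:i] + (1 - config[i],) + config[i + 1:]
        if nk_fitness(trial, contribs, links) > nk_fitness(config, contribs, links):
            config, improved = trial, True
```

Raising K makes decision payoffs more interdependent and the landscape more rugged, which is the lever the recommendations above propose for modeling interacting policy decisions and factors.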
Name: Brian W. Rogers
Affiliation: CalTech
Email: rogers@hss.caltech.edu
Web: www.hss.caltech.edu/~rogers
Title: A Model of Large Socially Generated Networks
We present a model of network formation where entering nodes find other nodes to link to both completely at
random and through search of the neighborhoods of these randomly chosen nodes. This is the first model of network
formation that accounts for the full spectrum of features that characterize large socially generated networks: (i) a small
diameter (maximal path length between any pair of nodes), (ii) high clustering (the tendency of two nodes with a common
neighbor to be neighbors), (iii) an approximately scale free degree distribution (distribution of the number of links per node),
(iv) positive correlation in degree between neighboring nodes (assortativity), and (v) a negative relationship between node
degree and local clustering. We fit the model to data from four networks: a portion of the www, a co-author network of
economists, a network of citations of journal articles, and a friendship network from a prison. Besides offering a close fit of
these diverse networks, the model allows us to impute the relative importance of search versus random attachment in link
formation. For instance, the fitted ratio of random meetings to search-based meetings is seven times higher in the formation
of the economics co-authorship network as compared to the www.
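The random-plus-search formation process can be sketched as follows; the parameter values and seed graph are illustrative, and the published model's meeting probabilities differ:

```python
import random

def grow_network(n, m_rand=1, m_search=2, seed=5):
    """Sketch of random-plus-search network formation: each entrant links to a
    uniformly random node, then to random neighbors of that node. Neighborhood
    search implicitly favors high-degree nodes, producing fat-tailed degrees
    and clustering."""
    rng = random.Random(seed)
    nbrs = {i: set(range(5)) - {i} for i in range(5)}    # small complete seed graph
    for new in range(5, n):
        parents = rng.sample(list(nbrs), m_rand)         # purely random meetings
        targets = set(parents)
        for p in parents:                                # search-based meetings
            candidates = list(nbrs[p] - targets)
            targets |= set(rng.sample(candidates, min(m_search, len(candidates))))
        nbrs[new] = set()
        for t in targets:
            nbrs[new].add(t)
            nbrs[t].add(new)
    return nbrs

net = grow_network(200)
```

The balance between `m_rand` and `m_search` is the quantity the fitting exercise imputes for each empirical network; the values here are arbitrary.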
Name: David L. Sallach
Affiliation: Argonne National Laboratory
Email: sallach@anl.gov
Title: Accounting for Agent Orientations
The present paper describes a strategy for providing social agents with richer internal processes, while avoiding
computational gridlock when modeling large-scale agent populations. Equally important, the internal heuristics described
here are designed to support social interaction (cf., Garfinkel 1967; Heritage 1984), rather than only (implicitly isolated)
psychological mechanisms.
The framework is based on the Interpretive Agents (IA) research program (Sallach 2003; Sallach 2004; Sallach &
Mellarkod 2004). The IA program defines three interrelated mechanisms: prototype concepts and reference point reasoning,
situation definition, and orientation accounting. Prototype data structures allow the integration of complex conceptual
information in ways that allow inferences to be drawn by boundedly rational agents. Situational definition provides a focus
of action based upon one type of prototype, the ‘situation’ as defined by the situation theory formalism (Barwise 1989;
Devlin 1991). The situation is an open-ended, evolving conceptual structure that defines the pertinent action set available to
‘situated’ agents.
The primary focus of this paper is on the orientation accounting mechanism(s). Prototype concepts and situational
classification are considered only insofar as they interface with orientation accounting. As used in the IA program, an
orientation is a complex structure that combines cognitive and emotional components. Its cognitive components are prototype
concepts that manifest a radial structure. The emotional components of orientation structures are constituted by continuous
valences, with values ranging from +1.0 to -1.0, that are directed at a particular prototype concept or instance. These three
mechanisms are simple, allowing complexity issues to be framed in terms of several interrelated questions: 1) how do
orientations contribute to action selection? 2) which (aspects of) prototype concepts are the focus of the agent’s emotional
attraction or repulsion? and 3) how do emotional valences change in response to exogenous and endogenous events?
Social accounting is a reflective procedure in which an agent calibrates his or her actions,
or even internal orientations (cf., Mills 1940; Garfinkel 1967; Kuran 1995), taking into account how other agents can be
expected to react. The formal representation of these processes may involve the definition of high-level constraints and
operators that not only allow a clearer formulation but also, as will be illustrated, draw upon emerging innovations in
programming languages (Van Roy & Haridi 2004).
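As a rough illustration of the structure just described, an orientation pairs a prototype concept with a bounded emotional valence; the field and function names below are illustrative assumptions, not the IA program's implementation:

```python
from dataclasses import dataclass

@dataclass
class Orientation:
    prototype: str   # the concept (or instance) the valence is directed at
    valence: float   # emotional attraction (+) or repulsion (-), in [-1, +1]

    def shift(self, delta):
        """Valences change with exogenous/endogenous events but stay bounded."""
        self.valence = max(-1.0, min(1.0, self.valence + delta))

def select_action(orientations, actions):
    """Toy action selection: prefer the action whose prototype concept
    carries the most positive valence."""
    return max(actions,
               key=lambda a: orientations[a].valence if a in orientations else 0.0)
```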
Name: Roberto Serra and Marco Villani
Affiliation: University of Modena and Reggio Emilia and Venice University
Email: rserra@racine.ra.it
Title: Iscom Innovation Model
I2M (Iscom Innovation Model) is an agent-based model of innovation processes, currently under development
within the European “Information Society as a COMplex system” project. The model is based upon a theory of innovation by
Lane and Maxfield. The theory inspires and constrains the features of the model, thus reducing the embarasse de richesse that
is one of the major methodological problems of agent-based modeling. Artefacts (represented by vectors of integers) are
produced by agents using (arithmetic) recipes. A numerical variable (the “strength”) is associated with each agent, and in order
to use another agent’s artefacts as a production input the agent must “pay” some strength. The basic dynamics, absent
innovation, is one of production and sales, where the external world supplies “raw materials” and external demand.
Depending upon the initial conditions, self-sustaining cycles of production and exchange can emerge among the modeled
agents. Innovation – that is, the generation of new recipes, in particular desired directions, called “goals” – results in
substantial modification of the system dynamics; in particular, the set of agents and artefacts supported by the system
becomes much richer. Two innovation regimes are introduced: a “lonely” mode, in which each agent tries to introduce new
products by itself, and a “relational” mode, in which two agents can improve their reciprocal knowledge (recipes and goals)
and can decide to try to jointly develop a new artefact. We present some results comparing system dynamics under no
innovation, lonely innovation and relational innovation.
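A minimal sketch of the production step described above, with artefacts as integer vectors, recipes as arithmetic operations, and a strength payment for inputs; the class and recipe encodings are illustrative assumptions, not I2M's actual implementation:

```python
def apply_recipe(recipe, inputs):
    """A recipe is a list of (op, operand) pairs applied element-wise
    to the concatenated input artefacts (vectors of integers)."""
    artefact = [x for vec in inputs for x in vec]
    for op, k in recipe:
        if op == "add":
            artefact = [x + k for x in artefact]
        elif op == "mul":
            artefact = [x * k for x in artefact]
    return artefact

class Agent:
    def __init__(self, strength, recipe):
        self.strength = strength
        self.recipe = recipe

    def produce(self, supplier, price, inputs):
        """Pay some strength to the supplier for its artefacts,
        then transform them with this agent's recipe."""
        if self.strength < price:
            return None  # cannot afford the input
        self.strength -= price
        supplier.strength += price
        return apply_recipe(self.recipe, inputs)
```

Production-and-exchange cycles emerge when agents' recipes and strength flows sustain one another, which is the "self-sustaining cycles" regime the abstract mentions.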
Name: Eugenio Dante Suarez
Affiliation: Trinity University
Email: esuarez@trinity.edu
Title: A Model of Hierarchically Decomposed Agents
The paper explores a reinterpretation of the neoclassical economics paradigm of utility-maximizing agents by
considering them to be collections of possibly independent aims, which make the agent’s aggregate decision-making an
emergent, resultant force. This possibility has been explored by discourse theorists who posited the concept of subject
positions, in which individuals tend to locate themselves within several different identity discourses. This description
generalizes the established approach by introducing the concept of a bordered maximization, in which the size of the group,
as well as its cohesion, are endogenous variables in the maximization process. Such a conceptualization serves as a starting
point for the description of a broader theory of complex hierarchical social structures, in which no agent can be unitarily
defined, but is instead modeled as a blurry object composed of changeable, strategic parts.
Moreover, under this proposed paradigm, the concept of Pareto optimality is
rendered artificial, since disconnected inter-temporal individuals cannot provide the necessary distinctions to apply the
concept. In view of this dilemma, the paper proposes different measures of welfare for a group so defined, connecting the
idea of a disjointed individual to that of a group or society. I propose that a parsimonious representation of decisions can
synthesize the influences on an agent as a restrictive realm of action for the ‘selfish’ utility function of the individual, creating
what I define as a behavioral function, which best describes behavior in a network of peers and structures to which the agent
belongs.
The proposed model is of a very general nature, and may prove useful in analyzing not only economic and general
strategic phenomena, but also evolutionary, psychological, and political aspects of reality.
Name: Brian Tivnan
Affiliation: George Washington University
Email: bktivnan@earthlink.net
Title: Coevolutionary Dynamics of Strategic Networks: Developing a Complexity Model of Emergent Hierarchies
Extending previous research of organization-environment interaction via boundary spanning activity (Hazy, Tivnan
et al. 2003), this study explores the possible emergence of strategic networks of organizations (Gulati, Nohria et al. 2000).
This research includes the development of an agent-based model in RePast (Chicago 2004) and tests of its analytical
adequacy (McKelvey 2002). Consistent with Salthe’s (1993) triadic structures, the model stems from Granovetter’s concepts
of weak ties (1973) and embeddedness (1985) by situating Macy and Skvoretz’s (1998) simple agents (i.e., components) and
March’s (1991) as well as Hazy, Tivnan et al.’s (2003) organizational structures (i.e., system) in Tivnan’s (2004)
coevolutionary context of competition and collaboration (i.e., environment). This study represents an attempt to develop a
complexity model of emergent hierarchies (Lichtenstein and McKelvey 2004).
This paper describes the development of the Coevolutionary model of Boundary-spanning Agents and Strategic
Networks (C-BASN). C-BASN represents an extension of Hazy and Tivnan’s (2004) investigation of the boundary-spanning
activity of and the structural emergence in a single organization. Benefiting from other research on learning in adaptive
networks (Allen 2001; McKelvey 2001), C-BASN allows for the exploration of the collaborative efforts of organizations in a
competitive, coevolutionary context; namely, the emergence of strategic networks.
The type of coevolution (McKelvey 2002) depicted in C-BASN is the coevolution of mutation rate and the
environment. All the more applicable in high-velocity environments (Eisenhardt 1989) and hypercompetitive contexts
(D'Aveni 1994), what an organization has learned (Schwandt and Marquardt 1999) and the rate at which it learns (McKelvey
2002) offer the organization its best source for sustainable, competitive advantage (McKelvey 2001). That is, an organization
must learn faster and more effectively than its competitors to establish an initial competitive advantage, and then that same
organization must continue to learn faster still if it is to sustain its competitive advantage.
Name: Nicholas S. Thompson
Co-Authors: Owen Densmore, David Joyce and John Kennison
Affiliation: Program in Social, Evolutionary, and Cultural Psychology, Clark University, Worcester and FRIAM group,
Santa Fe
Email: nthompson@clarku.edu
Web: home.earthlink.net/~nickthompson/
Title: MOTH: A More Naturalistic Conceptualization of Reciprocal Altruism in Biological Prisoners’ Dilemma Games?
In discussions of the biological evolution of social behavior, an organism is said to be altruistic when it enhances the
reproduction of another organism at some cost to its own. The existence of altruistic behavior is a problem because
reproductive advantage is the principal engine of evolutionary change. One remarkably influential solution has been based on
a “TIT FOR TAT” strategy in an iterated prisoners’ dilemma game. TIT FOR TAT was made famous by its repeated
triumph in a tournament for computer programs publicized in Robert Axelrod’s book, THE EVOLUTION OF
COOPERATION. So influential was this work that it has served as the basis for hundreds, perhaps thousands, of articles,
using TIT FOR TAT reciprocity to explain various features of human and animal social behavior.
This unanimity may be premature. In the Axelrod tournament, a player remained with its assigned partner until the
number of moves stipulated for the game was completed. This provision seems to violate an almost universal feature of
animal psychology, that animals tend to leave a situation that they find distasteful and stay in one that they find pleasant. Our
intuition is that an altruist that discarded defectors and continued to play with fellow altruists would be able to prevail
because it would sort through the population “looking for” other altruists until it found one. We call this new strategy
Myway Or The Highway, or MOTH for short.
We are exploring these ideas with a series of applets, current versions of which are posted at
http://aleph0.clarku.edu/~djoyce/Moth/. Our preliminary conclusion is that the MOTH strategy does indeed compete well
against a variety of alternatives, including TIT FOR TAT. This conclusion suggests to us that the overwhelming
convergence of the literature on a single model of reciprocity to explain social evolution may have been premature.
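The partner-leaving intuition can be checked with a toy tournament; the payoff matrix, random re-pairing rule, and population make-up below are illustrative assumptions rather than the applets' actual parameters:

```python
import random

# Standard prisoners' dilemma payoffs (row player, column player)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def simulate(strategies, rounds=200, seed=0):
    """MOTH always cooperates but abandons any partner who defects;
    ALLD always defects.  Unpaired players re-pair at random each round.
    Returns the mean score per strategy."""
    rng = random.Random(seed)
    n = len(strategies)
    partner, score = [None] * n, [0] * n
    for _ in range(rounds):
        free = [i for i in range(n) if partner[i] is None]
        rng.shuffle(free)
        for a, b in zip(free[::2], free[1::2]):
            partner[a], partner[b] = b, a
        for i in range(n):
            j = partner[i]
            if j is None or j < i:
                continue  # play each pair exactly once per round
            mi = "C" if strategies[i] == "MOTH" else "D"
            mj = "C" if strategies[j] == "MOTH" else "D"
            si, sj = PAYOFF[(mi, mj)]
            score[i] += si
            score[j] += sj
            if "MOTH" in (strategies[i], strategies[j]) and "D" in (mi, mj):
                partner[i] = partner[j] = None  # MOTH walks away
    totals = {}
    for s, sc in zip(strategies, score):
        totals.setdefault(s, []).append(sc)
    return {s: sum(v) / len(v) for s, v in totals.items()}
```

With, say, ten MOTH and ten ALLD players, MOTH pairs lock into mutual cooperation while defectors end up stuck with each other, which is exactly the sorting intuition described above.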
Name: John Hiles
Affiliation: U.S. Naval Postgraduate School
Email: jhiles@mindspring.com
Web: www.movesinstitute.org/
Title: Compound Agents and Their Compound Multiagent Systems
Compound Agents were designed to permit our projects to model social and psychological dimensions of behavior
at the same time. In a Compound Multiagent System, agents interact with each other to produce the aggregate behavior of
the social or organizational system, and, at the same time, embedded within each of those socially interacting agents, an
internal Multiagent System processes the sensed experience of an individual agent, compares that experience with the agent’s
intent, and produces decisions about how that agent will act. Bio-inspired mechanisms for coordinating these external and
internal systems have simplified the construction and interpretation of these Compound Multiagent Systems, or CMAS.
Patterned after receptor molecules and the cascading signals that they produce on the surfaces of living cells, our mechanisms
are called Connectors, Tickets, and Membranes. We have collected these mechanisms in a programming library, called the
CMAS library. These mechanisms have been used to create agents equipped with an embedded MAS that produces a
computational analog of Conceptual Blending. We compare the external interactions of agents with the integrated knowledge
networks produced by the agent’s embedded blending system as a result of those external experiences. Each of these
Compound Agents is a dynamically adaptive link between external and internal worlds.
Name: Spiro Maroulis and Uri Wilensky
Affiliation: Northwestern University: Center for Connected Learning and Computer-Based Simulation
Email: spiro@ccl.northwestern.edu
Web: ccl.northwestern.edu
Title: School Districts as Complex Adaptive Systems: A Simulation of Market-Based Reform
Much controversy surrounds market-based reforms in education. On one hand, proponents of choice-based reforms
claim that giving parents the ability to choose the school their children attend provides both access to better schooling for
disadvantaged populations and the incentives necessary for school reform (Chubb and Moe 1990). On the other hand,
opponents of school choice claim that choice-based programs will not bring about the hoped for improvements in schools,
but instead only drain resources from troubled schools that can least afford to lose them (Marshall and Tucker 1992). One
particular choice-based reform effort – government subsidized school vouchers – has received considerable attention, as it
provides tuition subsidies to disadvantaged students who wish to attend other, usually private, institutions.
This paper studies the effects of market-based educational reforms using the iterative construction of a
computational, agent-based model in the NetLogo programmable modeling environment (Wilensky 1999). By
conceptualizing a school district as a complex adaptive system of interdependent families and schools, we create a
computational environment that allows for a type of controlled experimentation that has not yet been possible with small pilot
choice programs. The agents in this particular model are families with heterogeneous preferences and imperfect information
deciding where to send their children to school, and schools responding to changes in enrollment patterns and other
environmental pressures. This simulation highlights the impact of information quality, the dynamics of school enrollment
patterns, and the distributional consequences of choice programs. The current challenges and future possibilities of
using agent-based modeling to understand educational system reform are also discussed.
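The core choice dynamic (families choosing under imperfect information) might be sketched like this; the perceived-quality rule, noise model, and parameter names are illustrative assumptions, not taken from the actual NetLogo model:

```python
import random

def choose_schools(n_families, schools, info_noise, rng):
    """schools: dict name -> true quality.  Each family enrolls at the
    school with the highest *perceived* quality: true quality plus
    Gaussian noise standing in for imperfect information."""
    enrollment = {s: 0 for s in schools}
    for _ in range(n_families):
        perceived = {s: q + rng.gauss(0, info_noise)
                     for s, q in schools.items()}
        enrollment[max(perceived, key=perceived.get)] += 1
    return enrollment
```

Raising info_noise spreads enrollment away from the genuinely better school, one way an "impact of information quality" can show up in such a model.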
Name: Ashley Williams
Affiliation: MITRE
Email: ashley@mitre.org
Title: Evaluating Airline Market Responses to Major FAA Policy Changes
At many large airports, demand far exceeds capacity, resulting in costly delays and unpredictable operations. As an
interim solution until additional facilities can be built, the Federal Aviation Administration (FAA) has recently begun
investigating new resource allocation policies. To assure Congress and the flying public that any new policy will be fair and
efficient, the FAA needs quantitative results to back up their proposal and reasonable certainty that it will have the intended
effects without substantial side effects. In current practice, analysts typically make assumptions about how the market will
respond to the new policy. Then, the relevant metrics are computed to inform decision makers about the relative merits of
various policies. However, because any new resource allocation policy would constitute a significant deviation from the
current economic and operational environment, simply extrapolating the market, or “demand,” response from existing
demand patterns would be a gross simplification of the problem and would likely result in misleading conclusions. In fact,
any solution that requires an assumption about the future demand pattern would likely suffer from the analyst’s bias.
The MITRE Corporation’s Center for Advanced Aviation System Development (CAASD) is building the
MarketFX™ model to predict future demand patterns endogenously, and in parallel with the supply side simulation. By
modeling the market-driven behavior of all direct aviation facility users (the airlines), the new environment in which they
must operate and the passenger responses to airline schedule changes, the MarketFX™ model produces a high-fidelity
simulation of the profit maximizing behavior of the airline industry while making very few high level assumptions.
This is a difficult problem because the airline industry has multiple users with unique schedule networks, business
models, and cost structures. They typically value the same resources differently and, consequently, respond to new policies
and capacity changes differently. Add to that the fact that all users are making multiple, interdependent decisions
simultaneously, and it becomes apparent that the profit maximization problem is simply intractable using traditional
optimization techniques. The MarketFX™ model solves this problem by coupling agent-based simulation with the latest
machine learning techniques to model the airline planning process.
In this presentation, we discuss the high-level architecture of the MarketFX™ model, the translation of airline
planning decision makers into computational decision agents, and some preliminary results. We will also discuss the proper
interpretation of these results and the consequent validation methodology it requires.
Name: Dr. Sanith Wijesinghe
Affiliation: New England Complex Systems Institute
Email: sanith@necsi.org
Web: www.necsi.org
Title: The Effect of Friction in Pedestrian Flow Scaling Through a Bottleneck
There is a critical need to establish spatial design principles that allow for the safe and efficient dispersion of
pedestrians at large public gatherings. As evidenced by recent disasters at religious festivals, social gatherings, and football
stadiums, pedestrian stampedes continue to result in numerous injuries and significant fatalities. Recent efforts to address
building/fire safety issues in this regard are increasingly “performance-based” [1], using a variety of simulation tools instead
of conforming to traditional prescriptive/rule-based handbooks. Gwynne et al.
[2] have identified a total of 22 different evacuation models currently in use or in development for this purpose. While these
simulation tools provide extensive configurational, behavioral and procedural descriptions their primary drawback is the lack
of convincing validation [2]. In this paper we focus on the effect of building exit width on pedestrian flow rate and show that
the popular use of lattice-gas-based pedestrian dynamics without explicit consideration of friction leads to both differences in
qualitative flow features and scaling relationships compared to pedestrian dynamic models incorporating friction effects [3].
We also present preliminary experimental results of pedestrian flow through bottlenecks to help
calibrate existing simulation software, develop new and improved algorithms, and provide a better understanding of
simulation limitations.
Name: Zhangang Han
Affiliation: Systems Science Department, School of Management, Beijing Normal University
Email: zhan@bnu.edu.cn
Web: www.bol.ucla.edu/~zhan/
Title: Specialization of Evolving Agents with Expectations
This paper studies how cooperation evolves in a multi-agent system. The evolution of self-interested individuals may
lead to collective behavior through mutual interactions. By introducing agents that can each take on two kinds of tasks, the
system simulates an evolutionary process of specialization. The model contains N self-interested agents and some resources,
both initially scattered at random over a lattice. Agents search for resources and can mine the resources or sell/buy
information about resources to/from other agents. There is no central planning or information sharing in this model.
Agents trade information through a market-like mechanism. A key feature of our model is that agents determine
their behavior in the market with expectations of future benefits. The prices at which agents agree to trade their information
are determined by these expectations, which are calculated from their accumulated experiences. Three learning
mechanisms (logistic, normalized, and biased learning) are proposed so that agents can adapt according to how well they have
performed in the past.
Simulation results show that based on individual capabilities of searching and mining, specialization of agents can
evolve within the system. Agents tend to specialize in one task provided that the transaction cost is low. This paper also
shows that specialization increases the total productivity of the system, and that entropy decreases as the system evolves to
achieve a division of labor. When agents in the system do not specialize, the system does not have the highest entropy, as we
might intuitively expect; rather, entropy is highest in a state of partial labor division. This paper also shows
that with normalized learning and biased learning, the system can sustain labor division under a relatively higher transaction
cost than with logistic learning. This may hint that these two learning mechanisms better depict reality.
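The expectation-driven trading step can be illustrated with a minimal sketch. The abstract does not give the exact forms of the logistic, normalized, and biased learning rules, so a simple reinforcement-style update stands in for them here:

```python
class Trader:
    """An agent that prices information trades by its expectation of
    future benefit, learned from accumulated experience."""

    def __init__(self, init_expectation=1.0, rate=0.2):
        self.expectation = init_expectation
        self.rate = rate  # learning rate of the stand-in update rule

    def update(self, realized_benefit):
        # move the expectation toward the realized outcome of the trade
        self.expectation += self.rate * (realized_benefit - self.expectation)

def trade_occurs(buyer, asking_price):
    """A buyer pays for information only if its expected future benefit
    covers the seller's asking price."""
    return buyer.expectation >= asking_price
```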
Name: Zhangang Han
Affiliation: Systems Science Department, School of Management, Beijing Normal University
Email: zhan@bnu.edu.cn
Web: www.bol.ucla.edu/~zhan/
Title: Spatial Pattern of Market Evolution
It is worthwhile to model individual behaviors that generate global phenomena. In this paper, we study how the spatial
pattern of markets can evolve, using an agent-based modeling approach. Without central planning, agents have to resort to
memory of their trading history to determine where to find trade partners.
Equal numbers of sellers and buyers move on a grid torus, starting from a randomly scattered state. A seller and a
buyer trade when they move close enough to each other.
Three kinds of memory utilization mechanisms are proposed, namely environment marker, individual memory and
friends-suggestion. In the environment marker method, agents leave some hormone (buildings, advertisements, etc.) on the
ground after each trade and the hormone may evaporate. In the individual memory mechanism, agents remember their trading
history, which they also gradually forget. In the friends-suggestion mechanism, an agent determines where to
move by considering both its own memory and its friends’ suggestions. Each agent offers suggestions to its friends to the best
of its knowledge. Agents move with some probability toward the position where trade is most likely to occur. This probability
determines the extent to which agents seek out new trade possibilities rather than follow the established order. The size of a
market is measured by the number of agents that are close to each other.
Simulations show that a central market can emerge in the environment marker model and the friends-suggestion
model, along with relatively smaller markets besides the central one. In the individual memory model, by contrast, only
small markets can evolve, even when each individual can remember the history of trade across the whole grid. Simulation also
shows that the number of friends is a crucial factor in the friends-suggestion model’s ability to form a central market.
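The environment marker mechanism can be sketched in a few lines; the deposit amount, evaporation rate, and tie-breaking rule are illustrative assumptions:

```python
def step_markers(grid, trades, deposit=1.0, evaporation=0.1):
    """grid: dict (x, y) -> marker ("hormone") level; trades: cells where
    a trade occurred this step.  Deposits happen first, then evaporation."""
    for cell in trades:
        grid[cell] = grid.get(cell, 0.0) + deposit
    for cell in list(grid):
        grid[cell] *= 1.0 - evaporation
        if grid[cell] < 1e-6:
            del grid[cell]  # treat as fully evaporated
    return grid

def best_move(grid, neighbor_cells):
    """Agents preferentially move toward the reachable cell carrying the
    strongest marker (the first such cell wins ties)."""
    return max(neighbor_cells, key=lambda c: grid.get(c, 0.0))
```

Because markers accumulate where trades already happen and evaporate elsewhere, this positive feedback is one way a central market can crystallize in such simulations.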
Contacts
Allan Kugel <kugel@rci.rutgers.edu>
Ashley Williams <ashley@mitre.org>
Bill McKelvey <mckelvey@anderson.ucla.edu>
Brian Rogers <rogers@hss.caltech.edu>
Brian Tivnan <bktivnan@earthlink.net>
Bridget Rosewell <brosewell@volterra.co.uk>
Britt Cartrite <cartrite@sas.upenn.edu>
Cheng-Shan Liu <ashan@ku.edu>
Dario Nardi <dnardi@math.ucla.edu>
David Kronenfeld <kfeld@citrus.ucr.edu>
David Midgley <david.midgley@insead.edu>
David Purkey <dpurkey@n-h-i.org>
David Sallach <sallach@anl.gov>
Dwight Read <dread@anthro.ucla.edu>
Edward MacKerrow <mackerrow@lanl.gov>
Eugenio Suarez <esuarez@trinity.edu>
Jason Finley <jfinley@ucla.edu>
Jean-Louis Dessalles <dessalles@enst.fr>
Jessica Glicken Turnley <jgturnley@aol.com>
John Hiles <jhiles@mindspring.com>
Jonathan Eells <eells@ucla.edu>
Kristin Glass <klg@ncgr.org>
Lawrence Kuznar <kuznar@ipfw.edu>
Marco Villani <mvillani@unimore.it>
Mary Lee Rhodes <rhodesml@tcd.ie>
Michael Fischer <M.D.Fischer@ukc.ac.uk>
Murray Leaf <mjleaf@utdallas.edu>
Nicholas Weller <nweller@ucsd.edu>
Nick Gessler <gessler@ucla.edu>
Nick Thompson <nthompson@clarku.edu>
Owen Densmore <owen@redfish.com>
Paolo Patelli <paolo@lanl.gov>
Paul Jorian <paul_jorion@msn.com>
Paul Ormerod <pormerod@volterra.co.uk>
Phil Bonacich <bonacich@soc.ucla.edu>
Rich Colbaugh <colbaugh@comcast.net>
Roberto Serra <rserra@racine.ra.it>
Russ Abbott <Russ.Abbott@GMail.com>
Sanith Wijesinghe <sanith@necsi.org>
Sara Metcalf <sara_s_metcalf@yahoo.com>
Scott Christley <schristl@nd.edu>
Spiro Maroulis <spiro@ccl.northwestern.edu>
Stephen Guerin <stephen@redfish.com>
Steve Doubleday <steve@lesscomputing.com>
Tom McGill <tcm@ssdp.caltech.edu>
Uri Wilensky <uri@northwestern.edu>
Veena Mellarkod <mellarko@cs.ttu.edu>
Will Tracy <william.tracy@anderson.ucla.edu>
William Griffin <william.griffin@asu.edu>
Zhangang Han <zhan@bnu.edu.cn>