Beyond Stipulative Semantics: Agent-Based Models of “Thick” Social Interaction
David Sylvan
Graduate Institute of International Studies, Geneva
sylvan@hei.unige.ch
Paper prepared for conference on “Computational Modeling
in the Social Sciences”
May 8-10, 2003
University of Washington
Abstract. Whatever their external referents, models need at least minimal intensional semantics so that
different symbols can reliably be differentiated. A stipulative strategy will not work; instead, in large-N
systems, this can be done through intentionality; for small-N systems, the conventions that govern “thick”
social interactions (in the Simmelian sense) also need to be specified for differentiating symbols. This
suggests that some kind of communication language be developed as a way for agent interaction to be
modeled: certain languages in the multi-agent system literature are in this respect promising though as yet
inadequate.
Introduction. One of the recurring issues in the agent-based modeling literature is the “realism”
of our models. This issue, which of course pertains in various ways to any modeling enterprise,
takes several forms. At times, the concern is with endowing agents with properties that roughly
correspond to their “real world” counterparts; at other times, the focus is on how the model’s
performance can be evaluated against either a generalized sense of historical trends in particular
systems, or against specific data. Conversely, and more generally, there has been considerable
discussion about how models can be used heuristically, when they are avowedly non- (or anti-?)
realistic.
For the most part, these concerns have been acted on in an ad hoc and scattershot fashion.
Typically, individual papers or monographs in which agent-based models are developed and
presented contain a section on one or more of the above issues, even if the discussion in that
section is often implicit as to the criteria being employed; typically, too, the notion of what
realism means and how important it is often varies considerably across studies. All of this is
reasonable for an emerging field, even if a certain degree of systematization seems both
inevitable and desirable at some point. However, even granting the value of avoiding premature
closure, the lack of a more focused discussion has other consequences. One is that “realism” has
failed to be placed in a broader methodological context; another is that the types of agents
considered as (potentially) “realistic” are restricted to the exclusion of others. I wish to develop
both of these arguments in this paper, and then to propose modeling methods for investigating a
class of more “realistic” agents.
Semantics. At an abstract level, any model can be considered as an arrangement of symbols,
with the symbols claimed to signify or stand for particular objects or relations. The symbols may
be physical, as in the famous balls and rods that model the double helix of DNA; they may be
verbal and/or numerical, as in the textbook presentation of comparative advantage in international
trade; or they may be mathematical, as in the various formal models with which we are familiar.
There are numerous questions that can be raised about the status of the signified objects or
relations in the so-called real world, just as there are about the appropriateness, for this
signification, of particular mathematical formalisms; I here bypass these issues, which have been
dealt with at length in numerous other writings (for an introduction, see Robinson 1963; Chang
and Keisler 1990; Majeski and Sylvan 2000; cf. Post 1943).
What I wish to emphasize, rather, is simply the signification of the arranged symbols in a model.
We can call this, following such disparate figures as Brentano, the pragmatists Peirce, James, and
Dewey, and H. Garfinkel (1967), the “about-ness” of the symbols. Every model, no matter how
abstract and formal, is a model of or about something. In this sense, there is always a mapping
operation presumed in any model, in which its symbols are mapped onto some objects and
relations. In the social sciences, we normally think of this mapping as one between the symbols
and some external world, whether that world is the so-called real world, either actual or possible,
or, as in certain versions of modeling, a theory of the real world. Typical concerns about the
realism of models pertain to this mapping: for example, we observe that in the world, people
often behave in such and such ways and ask whether our arrangement of symbols in a model
adequately captures that type of behavior.
However, in the above formulation, there is nothing that restricts us to looking only at
mappings between symbols and the external world. At least since Frege, it has been well known
that symbols (e.g., words) can map both onto an external world and some other world, say of
thoughts or associations. This second type of mapping, for which we can somewhat loosely
borrow the semantic term “intension,” can be distinguished from the first (“extension”) via the
kind of stylized example often resorted to by linguists and philosophers. Consider the following
sentences:
1) The liberator of Baghdad was chopping wood on his ranch today.
2) The Florida election stealer was chopping wood on his ranch today.
It is, I think, obvious that the external referent of 1) and 2) is the same person. This, though, does
not mean that we can simply substitute (with minor synonymic modifications) that referent from
2) into 1), as in:
3) The thief of Baghdad was chopping wood on his ranch today.
The individual referred to in 1) and 2), in other words, has a different intension in those
sentences; to put it in some of the schoolbook semantics learned by many of us in English classes
long ago, 1) and 2) have different connotations.
Arguably, intension is every bit as important a kind of mapping as extension when determining
the “about-ness” of a given model. In part this is because, for certain types of models, extension
involves nonexistent referents, as when we simulate a counterfactual situation or extend a
simulation far into the future. But intension is also important even when there is an unambiguous
external reality, for precisely the reason implied in the above example: a model that merely refers
to something out there in the world without some kind of intension fails to capture much of what
we want to model in the first place. A case in point is the intension typically attributed to the
moves in prisoners’ dilemma. Whatever the historical origins of the words “cooperate” and
“defect,” the only way that PD models can have any resonance (including with human players) is
because of the intensions of these words. For a move to be labeled as “cooperate” is, qua
extension, simply indicative that it was a particular choice; but the intension of the word
“cooperate” suggests that the move was undertaken with the aim of cooperating.
One might, of course, argue that neither of the above two claims is terribly compelling. As long
as there is some kind of extension, even if stylized and counterfactual, this is enough “about-ness” for the model to be interesting. The ideas that people have about symbols, the argument
would go on, have no bearing on what we can do with the model; the latter is, after all, purely
syntactic and there is no need to go any further. To answer this objection, it is necessary to back
up a bit and look more carefully into the notion of intension.
In order for a symbol to map onto some world, whether external, cognitive, associative, and so
forth (see the essays in Eco, Santambrogio, and Violi 1988 for a discussion of the alternatives), it
is necessary that the symbol be distinguished from other symbols. Indeed, if such distinctions
cannot be made, then there is no way for the syntax of the model itself to operate. These
distinctions cannot be made simply by means of labels, such that one move by an agent is called
“cooperate” and the other “defect.” At the very least, in computational terms, the moves must
involve different procedures: different steps, perhaps, or incrementing or decrementing different
registers, etc. In and of themselves, labels mean nothing, and no program would run without the
labels being bound to nonequivalent procedures. But this binding, to the extent that it involves
something, in this case procedures or register arithmetic, other than the external world, is
precisely a mapping onto some non-external world, i.e., intension. (Technically, this is why
Chomsky insists that what others call syntax could validly be called semantics.)
Thus, even if one eschews any notion of mental or cultural signification, models, in order to
function, must have what I would call a minimal categorial semantics. The question now
becomes how that semantics can be provided.
The simplest way of doing so is by stipulation. A researcher can bind labels to procedures by a
declaration (in both a technical and nontechnical sense) linking the two. One procedure can be
called “cooperation” and another “defection,” a third “move” and a fourth “stay put,” and so on.
(Schematically, we can say that a symbol A, here depicted as S_A, is distinguishable from other
symbols B, C, and so forth if it is bound to a procedure p_A not identical to other procedures p_B,
p_C, and so forth; p_A in turn can either be made up of, preceded by, or followed by, an ordered
sequence of other symbols S_P, S_Q, S_R, and so forth.) This kind of stipulation is the bread and
butter of computer programming and is in any case unavoidable for a model to function. But a
priori, there is no reason that it cannot bear the burden of intension. Readers who remember the
debates over artificial intelligence of the 1980s will recall Searle’s “Chinese Room” argument;
stipulation is Chinese Room-style intension. (Though of course Searle wanted to restrict
semantics to a particular kind of intension, namely ideas in heads; but this presumed a distinction
between semantics and syntax that is untenable.)
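As a minimal sketch of what such stipulation amounts to computationally (the labels, procedures, and payoff registers below are purely illustrative, not drawn from any particular model), one can bind symbols to nonequivalent procedures and nothing more:

    # A minimal sketch of stipulative semantics: symbols are distinguishable only
    # because they are bound, by declaration, to nonequivalent procedures.
    # All names here (cooperate, defect, the score registers) are hypothetical.

    state = {"own_score": 0, "other_score": 0}

    def cooperate(state):
        # one procedure: increment both registers
        state["own_score"] += 3
        state["other_score"] += 3

    def defect(state):
        # a different procedure: increment one register, decrement the other
        state["own_score"] += 5
        state["other_score"] -= 1

    # The stipulation itself: labels bound by declaration to procedures. Nothing
    # beyond this binding distinguishes "cooperate" from "defect".
    ACTIONS = {"cooperate": cooperate, "defect": defect}

    ACTIONS["cooperate"](state)   # the label is meaningful only via its procedure

In such a sketch the evocative labels could be swapped for arbitrary strings without any consequence for the model’s behavior; that is precisely the sense in which stipulation supplies only a minimal, Chinese Room-style intension.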
Stipulative semantics is particularly well suited for the kind of large-scale systems modeling that
first came into vogue in the 1970s. The symbols in such models refer to particular procedures,
expressed as structural equations and linked together precisely through the substitutability of
symbols across equations. A particular symbol will have both an extension -- typically, a
composite measure constructed from individual indicators -- and an intension, in which it is
embedded in one or more structural equations. No other mapping is deemed necessary, even if
from time to time, symbols will take the form of deliberately evocative labels. It is notorious,
though, that some of these labels are highly recondite, being pulled out of an abstract theory the
terms of which are numerous steps removed from the thoughts of the actors whose aggregated
behavior they supposedly describe.
The limitations of stipulative semantics stem from the mapping of its symbols onto procedures.
If, as is not uncommon, we write down a system of equations, we can, by substitution, eliminate
one or more of the symbols. Mathematically, this is unexceptional. Procedures, while distinct,
have no immanent existence; they can be transformed into each other. The problem comes when
the symbols refer, qua extension, not to aggregates of behaviors or to other large scale
phenomena, but to actions of individuals or corporate entities, i.e., to actions of agents. Such
actions, which have different extensions, we will often wish to model as being chosen by the
actors. The fact that, from an abstract point of view, one action is, constitutively, as it were,
transformable into another is of little or no use in helping understand the agents’ decision
mechanisms. For the intension of an action to be stipulated as a procedure linked to other
procedures is tantamount to saying that actions should be distinguished from other actions only
by the actions or preconditions which spur them or which they in turn spur.
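To illustrate with a stylized, purely hypothetical two-equation system: if y = a + b*x and z = c + d*y, then substitution yields z = c + d*(a + b*x), and the symbol y, whatever label it carries, drops out of the model with nothing lost; it is exactly this kind of painless elimination that becomes problematic once symbols stand for actions chosen by agents.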
This observation suggests a second way of providing a model’s semantics, one revolving around
intention (note the letter t here in place of the s in intension). Symbols in a model can be
distinguished from each other, not only by spurring procedures (as pointed out above, this is
unavoidable) but by the intention of the agent in carrying out the action. This mode of reasoning
is common in many agent-based models. In the Sugarscape world, for example, agents may carry
out several actions, such as movement, trade, reproduction, and so forth; these are distinguished
partly by spurring procedures but partly by intention: obtaining sugar, bequeathing sugar, etc.
The intension of a particular symbol whose extension is a given action is then a tuple composed
of the agent’s intentions in performing the action and the procedures that spurred this action.
(Schematically, we can say that a symbol, S_A, is distinguishable from other symbols B, C, and so
forth if it is bound to a unique tuple composed of a particular spurring procedure, p_A, and a
particular intention, i_A.) If the intentions change but not the spurring procedures, then the action
is different; similarly if the spurring procedures change but not the intention. For example, an
agent may be modeled as wishing to gain resources (intention) via trade, and also via warfare.
These latter actions are then distinguished from each other not by the intention but by the
spurring procedures: perhaps the agent’s power capabilities, perhaps the level of its neighbors’
armaments, and so forth. Alternatively (in a non-Sugarscape world), a choice between warfare
and trade may be triggered not by spurring procedures (say, power configuration or a neighbor’s
previous actions) but by differences in intentions (e.g., between wanting to gain resources or take
revenge). It is clear that, if only in probabilistic terms, actions in this framework are less easily
substitutable than are aggregate behavioral phenomena such as those discussed above.
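As an illustrative sketch of this second, intention-based semantics (the intentions and triggering conditions below are hypothetical, loosely echoing the trade-versus-warfare example just given), the intension of an action-symbol can be represented as a pair of spurring procedure and intention, with two actions counting as distinct whenever either element differs:

    # Sketch: each action-symbol is bound to a tuple (spurring procedure, intention).
    # Two actions are distinct if either element of the tuple differs.
    from collections import namedtuple

    Action = namedtuple("Action", ["label", "spurring_procedure", "intention"])

    # Hypothetical spurring procedures, expressed as predicates on the agent's situation.
    def low_power(agent):
        return agent["power"] < 5

    def armed_neighbor(agent):
        return agent["neighbor_arms"] > 10

    # Same intention, different spurring procedures -> different actions
    trade   = Action("trade",   low_power,      "gain_resources")
    warfare = Action("warfare", armed_neighbor, "gain_resources")

    # Same spurring procedure, different intention -> also a different action
    raid    = Action("raid",    armed_neighbor, "take_revenge")

    def distinct(a, b):
        # Actions differ if the spurring procedure or the intention differs.
        return (a.spurring_procedure, a.intention) != (b.spurring_procedure, b.intention)

    assert distinct(trade, warfare) and distinct(warfare, raid)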
In the social sciences, intension via intention is a common modeling strategy. Most of our
models of action are of this sort in economics, and such models are also widely found in
sociology and political science. (It is worth noting that they do not presume the action in
question is necessarily an efficient or even effective way of achieving certain goals, but merely
that the action is intentional.) Although it is difficult to do a systematic assessment of agent-based models, my impression is that most of them are also of this intention-plus-spurring
procedure sort. Certainly some of the more well-known models, those of Axelrod, Epstein and
Axtell, and several of the participants in this workshop, fall into this category. (See, in the
references, Schelling 1971; Arthur 1988; March 1991; Epstein and Axtell 1996; Gintis 1996;
Cederman 1997; Axelrod 1997; Padgett 1997; Ormerod 1998; Macy 1998; Majeski, Linden, and
Spitzer 1999; Bhavnani and Backer 2000; Lustick 2000; Watts 2003.)
Nonetheless, intention is not a panacea for eliminating all semantic difficulties. Consider what
happens when agents not only act but interact. In this case, the agents’ actions cannot be
considered separately: since one is an input to the other, we then have to distinguish acts from
each other when the acts may well be composed of different tuples across agents (and also across
time; analytically, this is potentially very difficult). One might be tempted to make the
distinction by scoping actions to agents, so that we would, say, speak of “defection-agent1,”
“defection-agent2,” and so forth. This proliferation of intensions is clearly not a desirable
solution, not only because it flagrantly violates Ockham’s Razor, but because it contains the
seeds of an even worse regress: if an agent performs the same act differently toward different
interlocutors (e.g., shaking hands; consulting diplomatically), then one would have to doubly
scope actions to both “receiving” and “originating” agents. How to tell these different actions
apart?
One way of dealing with the problem is to bypass it by ignoring differences between agents and
treating them as essentially homogeneous with regard to intentions and spurring procedures.
Interaction would then be glossed as actions, over a certain period of time, across a set (or entire
population) of agents. This approach, which is quite common in economic analyses of markets,
buys intensional tractability at the price of agent uniformity. It also means that interaction is
reduced to pairs (or triples, etc.) of actions. If this is considered unacceptable, then we need a
third way of distinguishing between actions.
Thickness. An alternative way of proceeding is by considering what I will here call “thick”
social interaction. Here I borrow from the work of both Simmel (1971) and Schutz (1967) (and, for
the term, if not the content, from Geertz). When we interact with someone else, we do more --
or, in some cases, less -- than simply perform an action aimed at accomplishing some end. We
behave in a coded, often highly stylized fashion that is mutually intelligible to both us and the
other. Thus, if a friend says hello to us, we may respond in a variety of ways, most of them
conventional, but without any particular intention in mind except to be friendly, an intention
which is usually “generated” at the instant of the interaction. In carrying out these responses, we
assume that our friend understands them and, further, that he/she understands that we are acting
with this assumption in mind. When engaging in this interaction, there is an extensive and subtle
interplay between the conventions (we can call this the “code”) of the actions performed, the
intentions we have vis-à-vis our friend, and the particular spurring procedures that, at a given
time and place, trigger our response.
What I wish to stress here is that interaction involves a kind of attuning toward the other (this
need not be gentle or cooperative) which, by its nature, must be carried out knowingly (even if
not with great thought) by both parties to the interaction. A set of acts by one agent toward
another is not in this sense necessarily indicative of an interaction; the latter goes well beyond a
pair of dyadic actions. Even then, interaction can vary enormously depending on whether the
parties to the interaction have dealt with each other extensively for many years, or whether they are
brief, passing acquaintances. I will return to this point below.
The parties to thick interaction, by dint of the code they employ, group and reduce the number of
actions in which they engage. If we therefore add this code to the spurring procedures and
intentions discussed earlier (cf. Barwise and Perry’s 1983 semantic equations, though their
concern is principally linguistic), we have a means of distinguishing between actions while
rendering more manageable their total number. For example, assume that we are trying to model
certain diplomatic interactions between states, say attempts by powerful states to induce weaker
states to align their behavior with that of the former. If we only had intentions to work with, then
various “requests” (“demands”? “threats”? “ultimata”?) by the powerful could not be
distinguished from each other (their spurring procedures would be the same, as would their
intentions; a sense of the urgency of these requests could only be dealt with in an ad hoc fashion
by affixing little tags for how many requests had been made). But if we also had a code by
which to distinguish between different interaction contexts (e.g., ally vs. state in the same region
vs. near stranger), then we could differentiate certain requests (e.g., a request in which support is
assumed vs. a request that now is a call on friendship; and both of these vs. a business
transaction) while grouping others as equivalent (e.g., a request made to the UN ambassador and
another request several days later to the foreign minister). (Schematically, we can represent the
code for an interaction of a particular type z between agents x and y, I_zxy, as a set of action
sequences m_1...j,x,y : S_A...N, in which the first action sequence, m_1x, is carried out by agent x and
consists of a choice among actions represented by certain symbols, such as S_A, S_C, or S_F; the
second action sequence, m_2y, is carried out by agent y and consists of a choice among certain
symbols, say S_C, S_H, S_J, or S_M, and so on. There are j moves in the interaction and N possible
actions. We can then say that a symbol, S_A, used in an interaction I_zxy, is distinguishable from
other symbols B, C, and so forth if it is bound to a unique 3-tuple composed of a particular
spurring procedure, p_A, a particular intention, i_A, and a particular interaction code for the symbol,
c_A, where c_A ∈ I_zxy.)
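As a sketch of how this three-part intension might look computationally (the interaction types, agents, action symbols, and admissibility sets below are all invented for illustration, not a proposal for an actual diplomatic code), a symbol can be bound to a 3-tuple of spurring procedure, intention, and interaction code, where the code fixes which symbols are admissible at each move of a given interaction type:

    # Sketch: the "code" for an interaction type is a sequence of admissible action
    # symbols per move; a symbol's intension is then the 3-tuple
    # (spurring procedure, intention, interaction code).
    # All names (the "request" types, S_A, p_A, etc.) are illustrative only.

    INTERACTION_CODES = {
        # interaction type -> list of admissible symbols at move 1 (by x), move 2 (by y), ...
        ("request", "ally"):     [{"S_A", "S_C"}, {"S_C", "S_H", "S_J"}],
        ("request", "stranger"): [{"S_A", "S_F"}, {"S_M"}],
    }

    def intension(symbol, spurring_procedure, intention, interaction_type, move):
        code = INTERACTION_CODES[interaction_type][move]
        if symbol not in code:
            raise ValueError(f"{symbol} is not admissible at move {move} of {interaction_type}")
        return (spurring_procedure, intention, frozenset(code))

    # The same symbol S_A, used toward an ally vs. a near stranger, has a different
    # intension because the interaction code differs.
    i_ally     = intension("S_A", "p_A", "induce_alignment", ("request", "ally"), 0)
    i_stranger = intension("S_A", "p_A", "induce_alignment", ("request", "stranger"), 0)
    assert i_ally != i_stranger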
As can already be imagined, modeling of this sort is considerably more intricate than most of the
agent-based models with which we are familiar. Thus, before proposing modeling strategies
which use this third, code-based approach to intensionality, it is worth considering just which
kinds of phenomena seem most appropriately dealt with by code, or, conversely, need only
intentions. I would argue that the basic distinction revolves around what we can call large-N
versus small-N systems. The latter -- for example, groups of states; parliamentary coalitions --
are among the most important political phenomena; they are to be distinguished from large-N
systems, such as electorates, or mobs, and from micro-N systems, such as leaders and dyads. A
principal characteristic of small-N systems is that they involve repeated interactions among
interlocutors who expect to continue interacting in the future. This in turn makes it possible for
the participants in the system to see themselves as forming a group with its own norms and
memories, all of whose members are potentially relevant to each other, and among whom it is
possible to envisage a division of labor. Small-N systems, in the terminology of this paper, are
therefore marked by “thick” social interaction and ought ideally to have their intensions
distinguished by means of a code; for large-N systems, intention alone should be sufficient.
It is worth noting that until now, small-N systems have not been dealt with as adequately by
agent-based modelers as have large-N systems. There are several reasons for this. First,
interactions in agent-based models are usually seen as local: the agents are seen as “looking
around” their immediate neighbors and interacting only with those neighbors. This local-ness in
fact is what gives many of the more famous agent-based models their paradoxical quality: agents
who care only about what is going on immediately around them turn out, when their behavior is
generalized, to create global situations that are contrary to what they prefer. The defense of
local-ness typically is cast in terms of Simon-like observations on bounded rationality: agents
cannot and do not scan all agents; they satisfice rather than maximize; and so forth.
Second, the interactions in most agent-based models are anonymous. By this I mean that the
neighbors with whom agents interact are not distinguished from other, non-interacting neighbors
except for the fact that the former are contiguous to the agent whereas the latter are not. Agents
may, of course, “remember” how they interacted with certain interlocutors the last time around,
but this memory is purely utilitarian: there is no social significance to certain interlocutors as
opposed to others.
Third, the interactions in most agent-based models are dyadic. By this I mean that when agents
interact (of course, in some cases, they simply scan their neighborhood and, on that basis, alter
one of their own characteristics), they do so one interlocutor at a time. (This in turn means that
agent-based models must employ basic “housekeeping” rules for adjudicating among alternative
dyadic moves.)
None of these characteristics of typical interactions is well-suited to small-N systems. For one,
agents in such systems interact across the membership of the system (i.e., globally), rather than
locally. A good example is military alliances, in which states regularly consult all of their allies,
not just those nearby. Similarly, members of parliamentary coalitions can and do interact across
the entire range of the coalition: a necessity, if one wishes to hold the coalition together. This is
not to deny that agents in small-N systems are likely to be characterized by bounded rationality;
but it is to suggest that their reference group is simply not restricted to their neighbors.
Furthermore, agents in small-N systems interact non-anonymously. As pointed out above, states
in a coalition expect to and do interact with each other repeatedly; this means that they get to
know each other and modulate their interactions according to this knowledge. A state may learn,
for instance, that it is pointless to ask a particular other state to engage in a certain policy; if that
is the goal, then a third state may instead be contacted.
Finally, agents in small-N systems interact multilaterally. Not only do they contact each other
across the range of participants in the system, but they do so in clusters of two, three, four, or
more agents at a time. Meetings with such numbers of agents regularly take place and are used for
hammering out a common position; although this of course can be done via a series of iterated
bilateral consultations, it is far more efficient for agents in small-N systems to work jointly.
Communication. The above discussion makes it clear that “thick” social interaction presents
two challenges for agent-based models. One is to solve the intensionality problem, i.e., to
distinguish between different symbols in a model of such interactions. The other is to capture
some of the basic small-N system properties of thick social interaction. Arguably, the solution to
both of these problems would come from a way of modeling the code through which thick
interaction occurs. The intensity of small-N contacts suggests strongly that some sort of code
exists within the group; if this is the case, then interaction moves can be differentiated. How then
can this code be represented?
I would suggest, as a first approximation, constructing agent-based models which borrow from
the computer science literature on multi-agent systems (MAS). A notable feature of such systems
is an emphasis on the coordination of different agents by means of message-passing and a
division of labor. There are various MAS means available for modeling these kinds of
interactions, falling along a spectrum which stretches from synchronized turn-taking, through so-called “reactive” coordination, to decentralized and centralized planning operations (Ferber 1999,
ch. 8). Not surprisingly, such means map nicely onto the classic March and Simon (1958)
discussion of organizational coordination, though they are, of course, considerably more
developed from a procedural standpoint. Note that whichever type of MAS approach to
coordination is chosen, it involves the agents interacting in considerably more intricate ways
than in the more limited types of interactions in the kind of “classical” ABMs discussed above.
This of course introduces complexity into the modeling, but at the same time permits the
construction of more realistic models of small-N systems. (For literature, see Ferber 1999;
Winograd and Flores 1986; for a good discussion of issues involved in various ways of modeling
communications, see Labrou, Finin, and Peng 1999. For work on decentralized planning, see
Durfee, Lesser, and Corkill 1987; more recently, see Cunha and Belo 1997; and Fitoussi and
Tennenholtz 2000. On reactive coordination, see Hickman and Shiels 1991; a glimpse of current
approaches can be obtained in the most recent work by Myers and Morley 2002. Even without
formal institutionalization, the interactions could be global, familiar, and multilateral;
institutionalization is thus a contingent empirical phenomenon which serves as an identifying
characteristic of small-N systems.)
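To make the message-passing idea concrete, here is a minimal sketch (not drawn from any particular MAS framework; the message fields and agent behavior are assumptions for illustration, loosely echoing the “performatives” of agent communication languages such as KQML) of agents coordinating by exchanging typed messages across the whole membership of a small-N system:

    # Sketch of message-passing coordination in a small-N system: every agent can
    # address every other agent (global, non-anonymous, potentially multilateral contact).
    # Field names and behavior are simplified and hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        sender: str
        receivers: list          # multilateral: more than one addressee is allowed
        performative: str        # e.g. "request", "inform"
        content: str

    @dataclass
    class Agent:
        name: str
        inbox: list = field(default_factory=list)
        acquaintances: dict = field(default_factory=dict)  # memory of past exchanges, by sender

        def receive(self, msg: Message):
            self.inbox.append(msg)
            self.acquaintances.setdefault(msg.sender, []).append(msg.performative)

    agents = {n: Agent(n) for n in ["state_1", "state_2", "state_3", "state_4"]}

    def send(msg: Message):
        for r in msg.receivers:
            agents[r].receive(msg)

    # A powerful state consults its entire coalition at once, not a single neighbor.
    send(Message("state_1", ["state_2", "state_3", "state_4"],
                 "request", "align positions before the vote"))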
Unfortunately, the kinds of query languages available in MAS work on communication and
coordination are simply inadequate for what we need. They permit information to be passed
from one agent to another, but are too limited for extended exchanges such as occur in natural
languages. The latter, though, are so enormously complicated to represent that it seems best to
begin by specifying perhaps half a dozen common interaction types, writing down the first five to
ten moves in each case and then attempting, from those moves, to lay out the elements of a
lexicon and the preliminary syntactic “glue” by which lexical items can be put together -- and
extended, or inverted -- grammatically.
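A minimal sketch of what such a specification might look like (the interaction type, lexicon entries, and moves below are invented placeholders, not a proposed lexicon) is the following:

    # Sketch: one of perhaps half a dozen interaction types, with its first few
    # moves written down and a small lexicon from which those moves draw.
    # Every name here is a hypothetical placeholder.

    LEXICON = {
        "GREET": "opening, acknowledges the relationship",
        "REQUEST": "asks the other to act, support assumed",
        "CALL_ON_FRIENDSHIP": "request escalated by invoking the relationship",
        "DEFER": "postpones without refusing",
        "ACCEPT": "commits to the requested action",
    }

    INTERACTION_TYPES = {
        "alignment_request": {
            "parties": ("powerful_state", "weaker_state"),
            # first five moves: (who moves, admissible lexical items)
            "moves": [
                ("powerful_state", ["GREET"]),
                ("weaker_state",   ["GREET"]),
                ("powerful_state", ["REQUEST"]),
                ("weaker_state",   ["DEFER", "ACCEPT"]),
                ("powerful_state", ["REQUEST", "CALL_ON_FRIENDSHIP"]),
            ],
        },
    }

    def admissible(interaction_type, move_index, item):
        # Preliminary syntactic "glue": is a lexical item usable at this move?
        _, items = INTERACTION_TYPES[interaction_type]["moves"][move_index]
        return item in items

    assert admissible("alignment_request", 3, "DEFER")

From half a dozen such specifications, written down move by move, the lexicon and its syntactic glue could then be generalized, extended, or inverted as sketched above.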
References
Arthur, W. B. 1988. Self-reinforcing mechanisms in economics. In P. W. Anderson, K. J.
Arrow, and D. Pines, eds., The Economy as an Evolving Complex System. Redwood City, CA: Addison-Wesley.
Axelrod, R. 1997. The Complexity of Cooperation: Agent-Based Models of Competition and
Collaboration. Princeton: Princeton University Press.
Barwise, J. and J. Perry. 1983. Situations and Attitudes. Cambridge: MIT Press.
Bhavnani, R. and D. Backer. 2000. Localized ethnic conflict and genocide: accounting for
differences in Rwanda and Burundi. J. of Conflict Resolution 44: 283-306.
Cederman, L.-E. 1997. Emergent Actors in World Politics: How States and Nations Develop and
Dissolve. Princeton, NJ: Princeton University Press.
Chang, C. C., and H. J. Keisler. 1990. Model Theory. Amsterdam: North Holland.
Cunha, A. and O. Belo. 1997. A multi-agent based approach for load distribution in
multienterprise environments. In IASTED International Conference on Applied Informatics
(AI97): 306-309, Innsbruck, Austria.
Durfee, E. H., V. R. Lesser, and D. D. Corkill. 1987. Coherent cooperation among
communicating problem solvers. IEEE Transactions on Computers 36: 1275-91.
Eco, U., M. Santambrogio, and P. Violi, eds. 1988. Meaning and Mental Representations.
Bloomington: Indiana University Press.
Epstein, J. M. and R. Axtell. 1996. Growing Artificial Societies: Social Science from the Bottom
Up. Washington, DC: Brookings Institution Press.
Ferber, J. 1999. Multi-agent Systems: An Introduction to Distributed Artificial Intelligence.
Harlow, UK: Addison-Wesley.
Fitoussi, D. and M. Tennenholtz. 2000. Choosing social laws for multi-agent systems:
minimality and simplicity. Artificial Intelligence 119: 61-101.
Garfinkel, H. 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Gintis, H. 1996. A Markov Model of Production, Trade, and Money: Theory and Artificial Life
Simulation. Amherst, MA: University of Massachusetts Press.
Hickman, S. and M. Shiels. 1991. Situated action as a basis for cooperation. In E. Werner and
Y. Demazeau, eds., Decentralized AI 2. Amsterdam: Elsevier North Holland.
Labrou, Y., T. Finin, and Y. Peng. 1999. The current landscape of agent communication
languages. IEEE Intelligent Systems 14,2: 45-52.
Lustick, I. 2000. Agent-based modeling of collective identity: testing constructivist theory. J. of
Artificial Societies and Social Simulation 3,1: http://www.soc.surrey.ac.uk/jasss/3/1/1.html.
Macy, M. W. 1998. Social order in artificial worlds. J. of Artificial Societies and Social
Simulation 1,1: http://www.soc.surrey.ac.uk/jasss/1/1/4.html.
Majeski, S., C. Linden, and A. Spitzer. 1999. Agent mobility and the evolution of cooperative
communities. Complexity 5: 16-24.
Majeski, S. and D. Sylvan. 2000. Modeling theories of constitutive relations in politics.
Typescript.
March, J. G. 1991. Exploration and exploitation in organizational learning. Organization
Science 2: 71-87.
March, J. G. and H. A. Simon. 1958. Organizations. New York: Wiley.
Myers, K. L., and D. N. Morley. 2002. Conflict management for agent guidance. Proceedings
of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems
(AAMAS).
Ormerod, P. 1998. Butterfly Economics: A New General Theory of Social and Economic
Behavior. New York: Pantheon.
Padgett, J. F. 1997. The emergence of simple ecologies of skill: a hypercycle approach to
economic organization. In W. B. Arthur, S. N. Durlauf, and D. A. Lane, eds., The Economy as
an Evolving Complex System II. Reading, MA: Addison-Wesley.
Post, E. 1943. Formal reductions of the general combinatorial decision problem. American
Journal of Mathematics 65: 197-215.
Robinson, A. 1963. Introduction to Model Theory and to the Metamathematics of Algebra.
Amsterdam: North Holland.
Schelling, T. C. 1971. Dynamic models of segregation. J. of Mathematical Sociology 1: 143-86.
Schutz, A. 1967. The Phenomenology of the Social World. Evanston, IL: Northwestern
University Press.
Simmel, G. 1971. On Individuality and Social Forms. Chicago: University of Chicago Press.
Van De Ven, A. H., A. L. Delbecq, and R. Koenig, Jr. 1976. Determinants of coordination
modes within organizations. American Sociological Review 41: 322-38.
Watts, D. J. 2003. Six Degrees: The Science of a Connected Age. New York: W. W. Norton.
Wilhite, A. 2001. Bilateral trade and small-world networks. Computational Economics 18: 49-64.
Winograd, T. and F. Flores. 1986. Understanding Computers and Cognition: A New Foundation
for Design. Norwood, NJ: Ablex.
Wolfram, S. 2002. A New Kind of Science. Champaign, IL: Wolfram Media.