AGENT-BASED COMPUTATIONAL ECONOMICS AND EMOTIONS
FOR DECISION MAKING PROCESSES
PIETRO CIPRESSO
Institute of Human, Language and Environmental Sciences, IULM University – Milan
Via Carlo Bo, 8, 20143, Milan, Italy
pietro.cipresso@iulm.it
ANNA BALGERA
Institute of Human, Language and Environmental Sciences, IULM University – Milan
Via Carlo Bo, 8, 20143, Milan, Italy
MARCO VILLAMIRA
Institute of Human, Language and Environmental Sciences, IULM University – Milan
Via Carlo Bo, 8, 20143, Milan, Italy
Abstract
Preliminary remarks: Agent-based computational economics has been developed extensively in
recent years, using sophisticated algorithms of evolutionary computation and artificial life.
Certainly, the trend has been to shape systems, frameworks, and environments to resemble human
beings as closely as possible. However, to adequately characterize economic systems viewed as
complex adaptive systems, we must integrate emotional aspects with agent-based technology in
order to facilitate behavioral shortcuts for the development of fast and adaptive decisional skills,
which are innate human behaviors.
Theoretical foundation and state of art: There is a growing consensus among researchers in
agent-based computational economics that teamwork models can enable flexible coordination
among heterogeneous entities. These models are based on a belief-desire-intention (BDI)
architecture. We integrate this analysis with a review of recent theories of emotion that could
usefully be incorporated into agent-based computational economics environments.
Analysis and tools: Our role is to integrate agent-based architectures with emotions; accordingly,
we consider emotions to be “behavioural shortcuts”. Our future purpose is to develop artificial agents
that incorporate emotions to run simulations and create frameworks that can be used
cooperatively with business intelligence technologies to understand the different ways that
enterprises decline or improve as a consequence of the actions of their managers.
Results: The theoretical results are expected to be of considerable importance in terms of
providing a defensible, functional approach for the analysis of future applications, and, above all,
they will provide the essential basis for the creation of human-based systems.
Discussion and conclusions: The literature relevant to emotions and agent-based areas will be
reviewed, the elements of the model will be described, suggestions for future work will be
presented, and the many implications for theory, research, and practice will be discussed.
Keywords: Emotions, Complex Systems, Agent-based Computational Economics (ACE),
Network models, Simulations
1 Introduction
The question is not whether intelligent machines can have any emotions,
but whether machines can be intelligent without any emotions.
The Society of Mind - Marvin Minsky
In an agent-based model, a key role is assigned to subjects (i.e. agents), rules and
environment. From the methodological and operational points of view, emotions can
facilitate decision-making processes. Agents are characterized by an explicit or implicit
representation of targets, standards and aptitudes. Emotional mechanisms can facilitate
behavioral shortcuts in agents’ decision-making processes. In emotional contexts, we
consider learning-facilitating behavioral strategies and performances. Structural
characteristics of a system and of the events that are generated by the system itself are
taken into account, also related to interactions between agents and to critical elements.
1.1 An operative definition of ‘emotions’
We think that emotions could be defined as ‘facilitating and/or induction tools of
behavioral shortcuts’ (Villamira, Cipresso, in press).
This happens, above all, for those emotions that are linked with instant survival
(fear → escape and anger → attack) through shortcuts connecting sense organs, cerebral
areas and motor apparatuses.
We think it interesting to consider Fig. 1, the result of one of the many attempts to
classify emotions and their composed effects (Robert Plutchik’s psycho-evolutionary
theory of emotions, 2002). It is obvious that difficulties arise when trying to establish
exact bounds and clear effects concerning the different operative definitions.
In non-temporal terms, we believe, as stated earlier, that it is useful to categorize
emotions as follows:
 emotions linked to the survival of individuals and the species;
 emotions not linked to the survival of individuals or the species.
Emotions linked to the survival of individuals and the species could be inserted in
artificial agents. With respect to human beings, examples of such emotions are fear and
anger, strictly connected to defense and attack (LeDoux, 1998).
In terms of artificial agents, with the tools and knowledge available today, it is possible
to consider defense and attack strategies utilizing the same general effect generated by
emotions.
To consider emotions at the agent level, we need, first of all, to provide an emotional
framework at this same level (Power and Dalgleish, 2008).
At the computational level, this may be realized with models that incorporate the
following categories of agent:

 Risk-averse agent (emotional feature: fear): the agent prefers the certain expected value
of an aleatory quantity to the aleatory quantity itself;
 Neutral agent (no emotion): the agent is indifferent between the expected value of an
aleatory quantity and the quantity itself;
 Risk-inclined agent (emotional feature: anger/aggressiveness): the agent always prefers
the aleatory quantity itself to its expected value.
We also need to consider different degrees of these categories.
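As a purely illustrative sketch (the utility functions and emotion labels below are our assumptions, not part of the model), an agent’s risk attitude can be read as an emotional feature that determines its choice between an aleatory quantity and its certain expected value:

```python
# Minimal sketch (illustrative assumptions only): risk attitude as an emotional feature.
# The agent chooses between a lottery (the aleatory quantity) and its expected value.
# Utility curvature encodes the emotion: fear -> concave (risk-averse),
# no emotion -> linear (neutral), anger/aggressiveness -> convex (risk-inclined).

def utility(x, emotion):
    if emotion == "fear":          # risk-averse: concave utility
        return x ** 0.5
    if emotion == "anger":         # risk-inclined: convex utility
        return x ** 2
    return x                       # neutral: linear utility

def prefers_lottery(outcomes, probs, emotion):
    """True if the agent prefers the aleatory quantity to its certain expected value."""
    expected_value = sum(p * x for p, x in zip(probs, outcomes))
    expected_utility = sum(p * utility(x, emotion) for p, x in zip(probs, outcomes))
    return expected_utility > utility(expected_value, emotion)

# A 50/50 lottery over 0 or 100, versus its certain expected value of 50.
for emotion in ("fear", None, "anger"):
    print(emotion, prefers_lottery([0, 100], [0.5, 0.5], emotion))
# fear -> False (takes the certain value), neutral -> False (indifferent), anger -> True.
```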
Fig.1. Robert Plutchik’s psycho-evolutionary theory of emotions, 2002
1.2 Preliminary remarks
Regarding the relationships between models and emotions, we can consider different
perspectives, for example (Picard, 1997):
1) recognition of human emotion
2) expression of emotional behavior
3) modeling and simulation of human behavior
The categories listed here should be viewed only as a general reference to more
specific research fields.
 In this first work, we do not consider architectures to interact with human
beings: the agents are intended to be autonomous in simulated environments and
the only interaction that users may have with the agents is through the
perspective of the application or with a command-line shell. Nevertheless, we
admit that it could be very interesting to have mechanisms for the recognition of
emotions in case human users intervene in the simulation. If it is possible for the user
to interact with the simulation through a voice-recognition system, so as to give
the agents orders with high-level commands, then recognizing the emotions
could be both useful and profitable.
 Our main focus is oriented toward the development of architectures by which
agents can maximise their autonomy and adaptability in the environment: as
mentioned above, interactions with human users, at present, are not considered.
 We are not interested in creating believable agents or agents which, in some
way, express emotions. The architectures that we propose are designed to use
emotional mechanisms, at an agent level, related to the performance of other
agents in the environment. The visual representation of each agent is a secondary
aspect.
 Finally, despite the use of anthropomorphic scenarios, in this case we do not take
into account any simulation models of human beings.
2 Methodological remarks
The methodological work proceeds through the following phases:
 the study of emotional phenomena from a functional point of view trying, in any
case, to explore possible practical applications within autonomous agent
architectures;
 the development of a model, based on emotional mechanisms, compatible with
theories and models of emotions, in the fields of artificial intelligence,
psychology and neuroscience (Cipresso, Villamira, in press – a);
 the development of an agent architecture, whereby mechanisms based on
emotions provide operational advantage, in terms of performance, for agents
with limited resources, operating in complex and dynamic environments.
2.1 Emotional-agent-based architecture
First, it is important to structure the architecture in a more general context; i.e., within
the agents-rules-environment framework. This is essential for the correct
implementation and management of any architecture based on emotions and artificial
agents; and, moreover, it is the cornerstone of the structure that represents the
interaction among agents and the environment.
Figure 2: The framework agents-rules-environment. These three elements are strictly
connected and none of them exists without the others. This is similar to the ERA model
proposed by Gilbert and Terna; however, here it is used in a different manner.
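A minimal sketch of this framework, with class names and rules that are illustrative assumptions rather than the implementation used here, shows how agents, rules and environment only operate together:

```python
# Minimal sketch of the agents-rules-environment framework (names are illustrative):
# none of the three elements is meaningful without the other two.
class Agent:
    def __init__(self, agent_id, emotion=None):
        self.agent_id = agent_id
        self.emotion = emotion          # e.g. "fear", "anger" or None

    def act(self, environment, rules):
        return rules.select_action(self, environment)

class Rules:
    def select_action(self, agent, environment):
        # The emotional feature biases the action set (defense vs. aggressiveness).
        if agent.emotion == "fear":
            return "defend"
        if agent.emotion == "anger":
            return "attack"
        return "explore"

class Environment:
    def __init__(self, agents, rules):
        self.agents = agents
        self.rules = rules

    def step(self):
        # One simulation tick: every agent acts under the shared rules.
        return {a.agent_id: a.act(self, self.rules) for a in self.agents}

env = Environment([Agent(0, "fear"), Agent(1, "anger"), Agent(2)], Rules())
print(env.step())   # {0: 'defend', 1: 'attack', 2: 'explore'}
```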
2.2 Operational architecture
 Creating a basic structure for the agents (risk aversion/neutrality/propensity): on the
basis of these features, a set of probable actions (aggressiveness/defense) is generated;
 creating an environment like a container of emotional stimuli;
 creating resources (also for survival) and inserting dynamics with predators and prey;
 implementing conditions (also environmental) to generate different
characteristics of the agents (Elman, 1990);
 making a selection (even on the basis of genetic and evolutionary algorithms);
 structuring emotional mechanisms, closely related to selection processes,
thereby facilitating behavioural shortcuts (Cipresso, Villamira, in press – b);
 structuring decision-making processes.
Figure 3: An artificial neural network considering implicit and explicit emotional
states in artificial agents.
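A schematic main loop for the steps listed above could take the following form; every name, threshold and update rule is a placeholder of ours, and evolutionary selection is reduced to simple survival filtering:

```python
# Schematic main loop for the operational architecture (placeholder rules only).
import random

def run_simulation(n_agents=20, n_steps=100, resources=50):
    # 1. Basic structure: each agent gets a risk attitude that shapes its action set.
    agents = [{"id": i,
               "attitude": random.choice(["averse", "neutral", "inclined"]),
               "energy": 10}
              for i in range(n_agents)]
    for _ in range(n_steps):
        # 2.-3. The environment is a container of emotional stimuli and scarce resources.
        predator_present = random.random() < 0.2
        for agent in agents:
            # 6. Emotional mechanism as a behavioural shortcut.
            if predator_present and agent["attitude"] == "averse":
                action = "flee"        # fear -> escape shortcut
            elif agent["attitude"] == "inclined":
                action = "attack"      # anger -> aggressive shortcut
            else:
                action = "forage"
            # 7. The decision-making outcome updates the agent's state.
            if action == "forage" and resources > 0:
                resources -= 1
                agent["energy"] += 1
            else:
                agent["energy"] -= 1
        # 5. Selection: scarcity removes agents that run out of energy.
        agents = [a for a in agents if a["energy"] > 0]
    return agents

print(len(run_simulation()), "agents survived")
```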
2.3 Environment
To develop a system that fosters emotions, it is necessary to have an environment
populated by artificial agents that can include emotions. For modeling various
types of emotional state, we need to satisfy many conditions, at both the system and
agent level.
Agents must have an explicit or implicit representation of targets, standards and
attitudes. Furthermore, the agents should also be able to ‘transfer’ observations in terms
of targets, standards, and attitudes. In practice, this last requirement has the
following consequences for agents (Cipresso and Villamira, 2007b), as illustrated in the code sketch after this list:
 an agent should be able to ‘verify’ if an event meets a particular target, or if it
has a positive or negative impact upon the probability that a particular target
could be satisfied;
 a key role may be played by regret;
 an agent should be able to make inferences and change expectations in the
future, as a result of these events;
 expectations concerning the targets are very important for emotions;
 an agent should have some type of memory of previous expectations, and should
be able to compare the new events with previous ones;
 an agent should have the ability to make inferences at the agent level, and these
should be transferable to other agents through information transfer processes;
 an agent should be able to compare its own actions with those of the other
agents, following and creating the standards for shared behaviors;
 finally, an agent should be able to compare external points of view (of the other
agents and of the environment) with the agent’s own attitudes (interior points of
view).
Fig. 4: Strategies with emotional phenomena depend upon agents’ targets and
capabilities, and the state of the environment.
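The requirements listed above can be condensed into a small appraisal sketch; the fields, the averaging of remembered expectations and the regret term are illustrative choices of ours rather than the model’s specification:

```python
# Minimal appraisal sketch: verifying events against targets, expectations and memory.
from dataclasses import dataclass, field

@dataclass
class AppraisingAgent:
    target: float                                        # e.g. desired resource level
    expectations: list = field(default_factory=list)     # memory of previous expectations

    def verify(self, event_value):
        """Does the event meet the target?"""
        return event_value >= self.target

    def appraise(self, event_value, forgone_value=None):
        # Compare the new event with remembered expectations (simple average memory).
        expected = (sum(self.expectations) / len(self.expectations)
                    if self.expectations else self.target)
        surprise = event_value - expected
        # Regret: difference between what was obtained and what the alternative offered.
        regret = (forgone_value - event_value) if forgone_value is not None else 0.0
        # Update expectations (simple inference: remember observed values).
        self.expectations.append(event_value)
        return {"target_met": self.verify(event_value),
                "surprise": surprise,
                "regret": regret}

agent = AppraisingAgent(target=5.0)
print(agent.appraise(3.0, forgone_value=6.0))   # target not met, negative surprise, regret 3.0
print(agent.appraise(7.0))                      # target met, positive surprise vs. memory
```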
3 Emotional mechanisms (what they are and how to implement them)
 Environment/predators/emotions/selection/resources:
o Stimulus → environmental responses, arising from agent interactions.
o Environments with a lack of resources lead to competition; this
generates selection for survival between agents.
 Elements to create facilitators of behavioural shortcuts:
o Preamble: Agent-based models, available today, are made up of
agents that interact in an environment, according to certain rules;
there is a certain degree of physical assimilation to the rules of
the real world in the simulated model (Power and Dalgleish,
2008);
o Proposal for future work: an innovative element could facilitate
the creation of behavioural shortcuts; these shortcuts could be
represented by giving agents, in particular situations (e.g. in case
of danger), the capability to ‘transfer’ themselves (an escape mechanism)
from one area of the environment to another (advantage: the agent saves
itself; disadvantage: it arrives in an unknown area, perhaps more
dangerous).
Fig. 5: Individual events and actions in the environment (adapted from “An Action
Selection Architecture for an Emotional Agent” by G.J. Burghouts, R. op den Akker, D.
Heylen, M. Poel and A. Nijholt).
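The proposed ‘transfer’ shortcut can be sketched as follows; the named areas, the danger threshold and the random destination are assumptions used only to show the trade-off between immediate escape and an unknown arrival area:

```python
# Sketch of the escape 'transfer' shortcut (areas and threshold are illustrative).
import random

AREAS = ["north", "south", "east", "west"]

def transfer_shortcut(agent_area, danger_level, threshold=0.7):
    """Behavioural shortcut: if perceived danger exceeds a threshold, jump to a
    random other area. Advantage: immediate escape. Disadvantage: the destination
    is unknown and may be more dangerous."""
    if danger_level > threshold:
        return random.choice([a for a in AREAS if a != agent_area])
    return agent_area

print(transfer_shortcut("north", danger_level=0.9))   # escapes to an unknown area
print(transfer_shortcut("north", danger_level=0.2))   # stays: 'north'
```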
3.1 Emotions and agent learning
We have considered that the environment has a great influence on agents and,
consequently, on agent emotions and behavior. However, a fundamental problem has
emerged in the construction of agents: learning (Terna et al., 2006).
For many years, scholars of artificial intelligence (though others as well) have been
dealing with algorithms and models (e.g., neural networks) for agent learning (Simon,
1997).
Our purpose is, of course, to consider emotions (as happens in humans) in agent
learning as well. The available instruments are many and most, though not all, are
linked to neural networks. However, regardless of the informatics/mathematics
tool used, emotions may (and must) enter fully into the modeling of the learning
process.
The steps to be considered in learning processes are (Lewis et al., 2008):
 environmental learning phase: this is the phase in which the agent acquires
information from the environment, following certain rules;
 interaction-between-agents phase: by interacting with each other, the agents learn,
modify, and re-learn the rules;
 advanced interaction phase: through stimuli, agents learn and develop strategies
linked to environmental resources and other agents, creating rules for survival
and, above all, rules among agents.
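These three phases can be compressed into the following sketch, in which the belief-update and strategy rules are placeholders of ours, standing in for the learning algorithms (e.g. neural networks) mentioned above:

```python
# Compressed sketch of the three learning phases (update rules are placeholders).

def environmental_learning(agent, observations, rate=0.1):
    # Phase 1: acquire information from the environment according to a simple rule.
    for key, value in observations.items():
        agent["beliefs"][key] = agent["beliefs"].get(key, 0.0) * (1 - rate) + value * rate

def interaction_learning(agent, other):
    # Phase 2: interacting agents learn, modify, and re-learn each other's beliefs.
    for key in set(agent["beliefs"]) & set(other["beliefs"]):
        agent["beliefs"][key] = (agent["beliefs"][key] + other["beliefs"][key]) / 2

def advanced_interaction(agent, stimuli):
    # Phase 3: stimuli drive strategies linked to resources and to other agents.
    agent["strategy"] = "compete" if stimuli.get("resources", 0) < 1 else "cooperate"

a = {"beliefs": {}, "strategy": None}
b = {"beliefs": {"food_at_north": 1.0}, "strategy": None}
environmental_learning(a, {"food_at_north": 0.8})
interaction_learning(a, b)
advanced_interaction(a, {"resources": 0})
print(a)   # a now holds an averaged belief and a 'compete' strategy
```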
3.2 Events, systems: imprinting and more
A fundamental preamble (Villamira, in press):
1) the imprinting paradigm can be a good metaphor for each type of learning, in
the sense that the imprinting itself can be considered a form of ‘genetic’
learning:
a) structured in the evolutionary course by “Darwinian” selection;
b) stored in the genes, in the form of ‘instinct’, allowing immediate
behavioral performances, in order to preserve the life of newborns.
2) the usual forms of learning differ from imprinting inasmuch as they require
training, normally by trial and error, which allows actors to act
immediately after learning: behavioral strategies and performances take
place without the need to retrace the paths of learning (Ciarrochi and Mayer,
2007);
3) perceiving the context, agents select (thanks to learning) what they have
learned to consider important.
From a functional point of view, the main role of these mechanisms is to convert an
event with special properties into an emotion. In order to carry out such a conversion,
two phases are required:
1) the first phase consists of an emotional state calculation; i.e. using a
numerical representation of an emotional event. The calculation is based, in a
first analysis, upon agent learning. Then, this calculation must be
standardised (within a neural network) in order to be compared with other
events;
2) the second phase consists of an assessment of the emotional meaning of the
event; e.g., if an agent ‘sees’ a resource, it makes an evaluation on the basis
of a ‘discriminator’, which is extremely variable in space and time, and
strongly influenced by the emotional state of the agent.
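The two phases can be sketched as follows; the tanh standardisation and the scalar ‘discriminator’ are our assumptions, standing in for the neural-network standardisation and the spatio-temporally variable discriminator described above:

```python
# Sketch of the two phases: (1) compute and standardise a numerical emotional state,
# (2) assess the emotional meaning of an event through a variable 'discriminator'.
import math

def emotional_state(raw_value, mean=0.0, spread=1.0):
    """Phase 1: standardise the event's numerical representation to (-1, 1)
    so it can be compared with other events (e.g. inside a neural network)."""
    return math.tanh((raw_value - mean) / spread)

def assess_event(state, discriminator):
    """Phase 2: evaluate the event against a discriminator that varies in space
    and time and is influenced by the agent's current emotional state."""
    return "attractive" if state > discriminator else "aversive"

discriminator = 0.2                       # shifts with the agent's emotional state
state = emotional_state(raw_value=1.5, mean=0.5, spread=1.0)
print(round(state, 3), assess_event(state, discriminator))   # 0.762 attractive
```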
4 Critical factors
1) Previous and current models, perspectives and critical factors:
Scholars have long been interested in ‘emotions and AI’. As early as 1988,
Pfeifer wrote the work Artificial Intelligence Models of Emotion, which
reviewed many AI models of emotion. Subsequently, many models based upon
the most innovative techniques were studied. Many of these models, though
elegant and well constructed, have been forgotten, only because they were not
fortunate enough to spark the interest of scholars. One especially
interesting model was proposed by El-Nasr & Yen (1998), which used fuzzy logic,
a good tool – we think – in the emotions domain.
Today, the recognition of human emotions, the modeling and simulation of
human behaviors, and the expression of emotion-like behaviors have received many
contributions. Nevertheless, contributions that consider emotions in the
context of agent-based models and simulations are very few, the major reason
likely being the difficulties related to defining and objectifying emotions. On
this last point, we must say that the large advances that have come about in
affective computing should allow for a much better understanding of complex
emotional processes.
2) The difficulties of transition, from representation of the model to its architecture
and simulation, are closely linked to the nature of the problem. However, it
always must be recalled that emotions have been, and continue to be, essential
for the survival of the individual and the species, and could be an important
paradigm for agent-based simulations.
Bibliography
Burghouts G.J., op den Akker R., Heylen D., Poel M. and Nijholt A. (2003), An Action
Selection Architecture for an Emotional Agent
Ciarrochi J., Mayer J.D. (2007), Applying Emotional Intelligence, Psychology
Press, New York
Cipresso P., Villamira M.A. (in press - b), An emotional perspective for Agent-based
Computational Economics in J. Vallverdú, D. Casacuberta, (eds.), Handbook of
Research on Synthetic Emotions and Sociable Robotics: New Applications in
Affective Computing and Artificial Intelligence, IGI Global
Cipresso P., Villamira M.A. (2007a), Aspettative razionali e microfondazione per
economie in situazione di iperinflazione, in Proceedings of WIVACE 2007
Cipresso P., Villamira M.A. (2007b), Shaping the “post-carbon” society: changes at
systemic level in transport, housing and consumer behaviour, an Agent-based
Computational Economics approach, in Proceedings of the International Association for
Research in Economic Psychology Conference, Ljubljana, Slovenia, ISBN: 978-961-237-206-4
Dawkins R. (1989), The Selfish Gene, Oxford University Press, Oxford
Elman J.L. (1990), Finding structure in time. Cognitive Science, 14
Fisher I. (1930), The Theory of Interest, The Macmillan Company, ISBN-13: 978-0879918644
Howitt, P. (2006), The Microfoundations of the Keynesian Multiplier Process, Journal
of Economic Interaction and Coordination, 33-44
Kahneman D., Tversky, A. (1979), "Prospect Theory: An Analysis of Decision under
Risk", Econometrica, vol. 47, no. 2
LeBaron, B. (2006), Agent Based Computational Finance: Suggested Readings and
Early Research, Journal of Economic Dynamics and Control
LeDoux J. (1998), Fear and the brain: Where have we been, and where are we going?,
Biological-Psychiatry, 44, 12
Leijonhufvud, A. (2006), Agent-Based Macro, in Handbook of Computational
Economics, edited by Leigh Tesfatsion and Kenneth L. Judd, 1625-37, Vol. 2,
Amsterdam: North-Holland
Lewis M., Haviland-Jones J.M., Feldman Barrett L. (Eds.) (2008), Handbook of
Emotions, Third Edition, Psychology Press, New York
Picard R. (1997), Affective Computing, MIT Press
Power M., Dalgleish T. (2008), Cognition and Emotion: From Order to Disorder,
Psychology Press, New York
Simon H. (1997), Models of Bounded Rationality, Vol. 3. MIT Press
Terna P., Boero R., Morini M., Sonnessa M. (2006), Modelli per la complessità. La
simulazione ad agenti in economia, Il Mulino, Bologna
Tesfatsion, L. (2006), Agent Based Computational Economics: A Constructive
Approach to Economic Theory, in Handbook of Computational Economics,
Tesfatsion L. and Judd K.L. (Eds.), 831-80, Vol. 2, North-Holland, Amsterdam
Villamira M.A., Cipresso P. (in press), Bio-inspired ICT for evolutionary emotional
intelligence in Artificial Life and Evolutionary Computation (tentative title), World
Scientific
Villamira M.A. (in press), Comunicare, FrancoAngeli, Milano