Boundedly Rational and Emotional Agents: Simulating DAISY and DAIDO Trust Models

From: AAAI Technical Report FS-00-04. Compilation copyright © 2000, AAAI (www.aaai.org). All rights reserved.
MJ Prietula
University of Florida
prietula@ufl.edu
K.C. Carley
Carnegie Mellon University
kcarley@ece.cmu.edu
Abstract
A project is described in which computational simulation is used to explore empirically defined and explicitly articulated models of trust, gossip, and emotion.
Project Overview
Computational agents of various forms are used in research and business contexts on the Internet. We interact with them, and they interact with each other. Of interest is the potential growth of "A2A" (agent-to-agent) and "A2H" (agent-to-human) interactions, such as negotiations, market exchanges, and informational search (Prietula, Carley & Gasser, 1998).
In particular, it is argued that the nature of these interactions defines rudimentary social contexts, and thus requires rudimentary social knowledge and behaviors derivative of that knowledge and those contexts (Epstein & Axtell, 1997; Newell, 1990). Furthermore, it is argued that rudimentary social behaviors can greatly facilitate certain types of agent tasks: tasks that can exploit agent parallelism and communication based on that parallelism.
A straightforward social situation is explored in which agents attempt to achieve individual goals (e.g., an Internet search task) and each agent can potentially benefit from the information held by other agents, defining a simple social context: informational exchange. However, by introducing advice uncertainties (i.e., agent reliability, agent deception, and environmental uncertainty), this simple social situation quickly becomes rich, and its general form is seen to underlie many socio-organizational human processes (e.g., expertise networks, coalition formation, types of organizational learning).
In order to afford the rudimentary social activity required
in this framework, agents incorporate capabilities to define
and adapt their behaviors in such contexts. Specifically,
these agents have the capability to exchange information
about the task with other agents, to exchange information
about other agents, to establish trust in other agents based
on those types of information exchanges, and to have emotion-like responses to events in their task environment deriving from those information exchanges and trust judgments.
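A minimal sketch of an agent interface with these four capabilities (hypothetical class and method names; a toy stand-in for the actual agent architecture, which is richer than this):

    from dataclasses import dataclass, field

    @dataclass
    class SocialAgent:
        """Skeleton of the four capabilities described above."""
        name: str
        trust: dict = field(default_factory=dict)  # trust score per known agent
        mood: float = 0.0                          # crude emotion-like state

        def share_task_info(self, other: "SocialAgent", info) -> None:
            """Capability 1: exchange information about the task."""
            other.receive_task_info(self, info)

        def share_agent_info(self, other: "SocialAgent",
                             about: str, opinion: float) -> None:
            """Capability 2: exchange information about other agents."""
            other.receive_gossip(self, about, opinion)

        def update_trust(self, about: str, outcome: float,
                         rate: float = 0.2) -> None:
            """Capability 3: establish/adjust trust from experience."""
            old = self.trust.get(about, 0.5)
            self.trust[about] = old + rate * (outcome - old)

        def react(self, expected: float, actual: float) -> None:
            """Capability 4: emotion-like response to confirmed or
            violated expectations."""
            self.mood += actual - expected

        def receive_task_info(self, sender, info) -> None: ...
        def receive_gossip(self, sender, about, opinion) -> None: ...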
The overall objectives of this project are

1. to integrate specific, but simple, models of communication, trust, and emotion into boundedly rational computational agents,

2. to integrate empirical findings about human-agent behavior and human cognition into a grounded computational model,

3. to use this empirically grounded model to systematically explore, through simulation studies and computational theory building, how manipulations of individual parameters impact and interact with individual and collective agent behaviors and phenomena, and

4. to begin to articulate the components of A Socio-Cognitive Theory of Social Agents.
This research will yield insight into how simple, but plausible, models of social interaction and deliberation can influence collective behavior, and into how that collective behavior itself scales up from small research-group sizes (e.g., three to five agents) to Internet dimensions (thousands of agents). It will also provide a fundamental test bed for exploring alternative mechanisms (e.g., models of trust, models of emotion) of social agent behavior.
Agencies of Trust
We report the results of initial studies of agent-human trust, a prototype simulation environment, and a set of simulations based on the models of agent trust derived from those studies. The situation is as follows. Imagine that N individuals have N agents acting on their behalf. A key component of agent deliberation is the maintenance of a series of trust relations among agents. How might a human impart his or her trust model to the agent?
We explore the implications of two types of methods, which result in two families of models: do as I do (DAIDO) and do as I say (DAISY). DAIDO models are based on empirical work (experimental and field) that examines how humans impart trust when interacting with machine agents, and they describe convergent models derived from a series of choice situations involving trust. DAISY models, on the other hand, are based on humans describing their trust algorithms directly, not in a particular context but in an abstract form. The presumption, of course, is that these families of models differ; however, part of the empirical effort of this research is to determine whether, and under what circumstances, these types of models are at variance. In essence, DAIDO models are derived from observations of behavior; DAISY models are derived from people's descriptions of their behavior.
The resulting models are then incorporated into a simple computational framework, and the behavior of sets of agents is simulated as they face a series of tasks involving choice situations of exchanging information with other agents. Using this framework, the behavior of the individuals, the group, and the evolution of the group can be examined (Carley, 1999a, 1999b).
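Schematically, one round of such a framework might look like the self-contained toy below (our own construction; the agent class, skill parameter, and outcome measure are assumptions, and the actual framework tracks far richer individual and group measures):

    import random

    class ToyAgent:
        """Stripped-down stand-in for the agents described above."""
        def __init__(self, name: str, skill: float):
            self.name, self.skill, self.trust = name, skill, {}

        def update_trust(self, about: str, outcome: float, rate: float = 0.2):
            old = self.trust.get(about, 0.5)
            self.trust[about] = old + rate * (outcome - old)

    def run_rounds(agents, n_rounds: int, truth: float = 10.0):
        """Each round, every agent asks its most-trusted peer for advice
        and updates trust by how useful the advice turned out to be."""
        for _ in range(n_rounds):
            for agent in agents:
                peers = [a for a in agents if a is not agent]
                advisor = max(peers, key=lambda a: agent.trust.get(a.name, 0.5))
                advice = truth + random.gauss(0.0, 1.0) / advisor.skill
                outcome = max(0.0, 1.0 - abs(advice - truth) / 5.0)
                agent.update_trust(advisor.name, outcome)
        return {a.name: a.trust for a in agents}

    # Example: three agents of differing skill, twenty rounds of exchange.
    print(run_rounds([ToyAgent("a", 2.0), ToyAgent("b", 1.0),
                      ToyAgent("c", 0.5)], 20))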
The social architecture of the agents necessitated several simple component decision models. A trust model defines how agents differentiate source reliability and form coalitions, based on direct experience as well as agent-to-agent communication. A gossip model governs the exchange of trust information among coalition members; it specifies the conditions under which gossip is generated, to whom gossip is directed, and the conditions under which gossip is attended. An emotion model augments the trust and gossip models and accounts for non-linearities in agent decisions (and behavior) based on confirmed or violated expectations, in interaction with the trust model. All agents have learning parameters that define the rates at which they adapt their models and behaviors (Carley, 1998; Prietula, Carley & Gasser, 1998).
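Read as update rules, the three component models might be sketched as follows (the functional forms, thresholds, and parameter names are our assumptions, not the published models):

    def update_trust(trust: float, outcome: float, rate: float) -> float:
        """Trust model: move trust toward the observed outcome at the
        agent's individual learning rate."""
        return trust + rate * (outcome - trust)

    def should_gossip(trust_in_target: float, trust_in_listener: float,
                      low: float = 0.3, high: float = 0.7) -> bool:
        """Gossip model: generate gossip about strongly (dis)trusted
        agents, and direct it only to trusted coalition members."""
        strong_opinion = trust_in_target <= low or trust_in_target >= high
        return strong_opinion and trust_in_listener >= high

    def attend_gossip(trust: float, reported: float,
                      trust_in_source: float, rate: float) -> float:
        """Gossip is attended in proportion to trust in its source."""
        return trust + rate * trust_in_source * (reported - trust)

    def emotional_gain(expected: float, actual: float,
                       sensitivity: float = 2.0) -> float:
        """Emotion model: a confirmed expectation leaves the update
        unchanged; a violated one amplifies it, making the adjustment
        non-linear in the size of the surprise."""
        return 1.0 + sensitivity * abs(actual - expected)

An emotionally charged update would then scale the learning rate, for example update_trust(t, outcome, rate * emotional_gain(expected, outcome)).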
This work extends earlier work on trust and gossip (Prietula, forthcoming) and on trust and emotions (Prietula & Carley, 1999, 2000) into a more integrated group framework. The initial results from this new study will be presented at the workshop and will be available from the authors.
References

Carley, K. (1998). Organizational Adaptation. Annals of Operations Research, 75, 25-47.

Carley, K. (1999a). On the Evolution of Social and Organizational Networks. In S. B. Andrews & D. Knoke (Eds.), Research in the Sociology of Organizations, Vol. 16 (special issue): Networks In and Around Organizations. Stamford, CT: JAI Press, 3-30.

Carley, K. (1999b). Organizational Change and the Digital Economy: A Computational Organization Science Perspective. In E. Brynjolfsson & B. Kahin (Eds.), Understanding the Digital Economy: Data, Tools, Research. Cambridge, MA: MIT Press.

Epstein, J. & Axtell, R. (1997). Growing Artificial Societies. Cambridge, MA: MIT Press.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Prietula, M. (forthcoming). Advice, Trust, and Gossip Among Artificial Agents. In A. Lomi & E. Larsen (Eds.), Simulating Organizational Societies: Theories, Models and Ideas. Cambridge, MA: MIT Press.

Prietula, M. & Carley, K. (1999). Exploring the Effects of Agent Trust and Benevolence in a Simulated Organizational Task. Applied Artificial Intelligence, 13, 321-338.

Prietula, M. & Carley, K. (2000). Boundedly Rational and Emotional Agents: Cooperation, Trust, and Rumor. To appear in C. Castelfranchi & Y.-H. Tan (Eds.), Deception, Fraud and Trust in Virtual Societies. Kluwer.

Prietula, M., Carley, K. & Gasser, L. (Eds.) (1998). Simulating Organizations: Computational Models of Institutions and Groups. Cambridge, MA: AAAI/MIT Press.