WP1.2 Complexity models of trust networks

Project no. 231200
QLectives
QLectives – Socially Intelligent Systems for Quality
Instrument: Large-scale integrating project (IP)
Programme: FP7-ICT
Deliverable D1.2.1
Novel models of agency and social structure for trust and cooperation
Submission date: 2012-01-31
Start date of project: 2009-03-01
Duration: 48 months
Organisation name of lead contractor for this deliverable: University of Warsaw
Project co-funded by the European Commission within the Seventh Framework
Programme (2007-2013)
Dissemination Level
PU  Public (x)
PP  Restricted to other programme participants (including the Commission Services)
RE  Restricted to a group specified by the consortium (including the Commission Services)
CO  Confidential, only for members of the consortium (including the Commission Services)
DOCUMENT INFORMATION
1.1 Author(s)
Author              Organisation            E-mail
Michal Ziembowicz   University of Warsaw    ziembowicz@gmail.com

1.2 Other contributors
Name                Organisation            E-mail
Andrzej Nowak       University of Warsaw    nowak@fau.edu
Wieslaw Bartkowski  University of Warsaw    wieslaw.bartkowski@psych.uw.pl
Thomas Grund        ETH Zurich              thomas.grund@gess.ethz.ch
Nigel Gilbert       University of Surrey    n.gilbert@surrey.ac.uk
Alastair Gill       University of Surrey    a.gill@surrey.ac.uk
Maria Xenitidou     University of Surrey    m.xenitidou@surrey.ac.uk
Matus Medo          University of Fribourg  matus.medo@unifr.ch
1.3 Document history
Version  Date               Change
V0.1                        Starting version, template
V0.2     01 September 2011  First draft
V0.3     30 September 2011  First final version
V1.0     30 January 2012    Approved version to be submitted to EU
1.4 Document data
Keywords: simulational models, social influence, reputation, trust, cooperation
Editor address data: ziembowicz@gmail.com
Delivery date: 31 January, 2012
1.5 Distribution list
Consortium members
Project officer
EC archive
QLectives Consortium
This document is part of a research project funded by the ICT Programme of the
Commission of the European Communities as grant number ICT-2009-231200.
University of Surrey
(Coordinator)
Department of Sociology/Centre
for Research in Social Simulation
Guildford GU2 7XH
Surrey
United Kingdom
Contact person: Prof. Nigel Gilbert
E-mail: n.gilbert@surrey.ac.uk
University of Fribourg
Department of Physics
Fribourg 1700
Switzerland
Contact person: Prof. Yi-Cheng
Zhang
E-mail: yi-cheng.zhang@unifr.ch
University of Warsaw
Faculty of Psychology
Warsaw 00927, Poland
Contact Person: Prof. Andrzej
Nowak
E-mail: nowak@fau.edu
Technical University of Delft
Department of Software
Technology
Delft, 2628 CN
Netherlands
Contact Person: Dr Johan
Pouwelse
E-mail: j.a.pouwelse@tudelft.nl
Centre National de la Recherche
Scientifique, CNRS
Paris 75006, France
Contact person: Dr Camille Roth
E-mail: camille.roth@polytechnique.edu
ETH Zurich
Chair of Sociology, in particular
Modelling and Simulation,
Zurich, CH-8092
Switzerland
Contact person: Prof. Dirk Helbing
E-mail: dhelbing@ethz.ch
Institut für Rundfunktechnik
GmbH
Munich 80939
Germany
Contact person: Dr. Christoph
Dosch
E-mail: dosch@irt.de
University of Szeged
MTA-SZTE Research Group on
Artificial Intelligence
Szeged 6720, Hungary
Contact person: Dr Mark Jelasity
E-mail: jelasity@inf.u-szeged.hu
QLectives introduction
QLectives is a project bringing together top social modelers, peer-to-peer engineers
and physicists to design and deploy next generation self-organising socially
intelligent information systems. The project aims to combine three recent trends
within information systems:
- Social networks, in which people link to others over the Internet to gain value and facilitate collaboration
- Peer production, in which people collectively produce informational products and experiences without traditional hierarchies or market incentives
- Peer-to-Peer systems, in which software clients running on user machines distribute media and other information without a central server or administrative control
QLectives aims to bring these together to form Quality Collectives, i.e. functional
decentralised communities that self-organise and self-maintain for the benefit of the
people who comprise them. We aim to generate theory at the social level, design
algorithms and deploy prototypes targeted towards two application domains:
- QMedia, an interactive peer-to-peer media distribution system (including live streaming), providing fully distributed social filtering and recommendation for quality
- QScience, a distributed platform for scientists allowing them to locate or form new communities and quality reviewing mechanisms which are transparent and promote quality
The approach of the QLectives project is unique in that it brings together a highly
inter-disciplinary team applied to specific real world problems. The project applies a
scientific approach to research by formulating theories, applying them to real systems
and then performing detailed measurements of system and user behaviour to
validate or modify our theories if necessary. The two applications will be based on
two existing user communities comprising several thousand people, so-called
"living labs": the media-sharing community tribler.org and the scientific
collaboration forum EconoPhysics.
Table of contents
Introduction
Psychological models of social influence based on trust (University of Warsaw)
  Review of literature
  Petty & Cacioppo's Elaboration Likelihood Model (ELM)
  Kruglanski's Unimodel
  Nowak & Latané's dynamical social impact theory
Simulational model (University of Warsaw)
  Background
  Baseline model
  Model using the Dynamic Theory of Social Impact
  Model of relational aspects of trust
  Model based on small-world network
Punishment-driven cooperation and social cohesion among greedy individuals (ETH Zurich)
  Punishment and spatial structure
  Migration and greediness
Brief introduction to analytical models for trust and reputation (University of Surrey, University of Fribourg)
Trust, Reputation and Quality in QScience – theoretical foundation (University of Surrey)
  Definition of the terms and their relationship
  Egocentric network
  Quality
  Reputation
  Integration with other apps
Trust, Reputation and Quality in QScience (University of Fribourg)
  QScience network
  Definition and Interrelation of Trust, Reputation and Quality
  Evaluation
References
Qlectives deliverable D1.2.1 Novel models of agency and social structure for trust and cooperation
Introduction
The aim of WP 1.2.1 was to explore the role of trust and social influence in information diffusion and its
interpretation. Specifically, the work focused on novel models of agency and social structure for trust
and cooperation, as stated in the description of deliverable D1.2.1. Four partners were involved in the
work package: the University of Warsaw (UWAR, the WP leader), the University of Surrey (UniS), the
University of Fribourg (UniF) and ETH Zurich (ETH). Each partner concentrated on a different aspect of
the research, and their activities were coordinated during meetings and reciprocal visits.
The theoretical foundations for psychologically plausible models of agency were researched by UWAR,
providing an extensive set of references. Several models of influence were reviewed (e.g. the Elaboration
Likelihood Model and the Unimodel) and some common features were identified. According to
psychological research, social influence can in general be divided into two processes: informational or
cognitive influence, based on consciously applied strategies, and normative or peripheral social
influence, driven by socially relevant heuristics that are mostly implicit in nature. Further, Nowak and
Latané's dynamical social impact theory is presented. In contrast to classic theories that regard the
process of influence from the individual perspective, it relates to the structural features of social groups.
Two factors are identified as moderators of the influence process: issue importance and trust. The first
changes the dynamics of the information integration process, and the second can be translated into the
strength of the connection between two individuals.
The theoretical research conducted by UWAR led to the development of a simulational model of
information processing and of the optimization of the functioning of social groups. The agents form a
network whose link weights are defined by the level of trust. The network is embedded in an
environment that provides localized information, which is assessed by the agents. The results show that,
depending on the signal-to-noise ratio that determines the quality of information, people either rely on
the opinions of others or use their own knowledge.
Whereas UWAR's contribution concentrates on the dynamics of group information processing, the work
of ETH focuses on social mechanisms that promote cooperation among greedy individuals in the setting
of a public goods game. In a series of simulations, the factors that lead to the development of
cooperation in social groups are identified. The first set of factors relates to the agents' strategies. Along
with the traditional cooperation/defection division, two new strategies are introduced, namely a
"punishing cooperator" and a "punishing defector". The two punishment strategies lead to different
dynamics. Further research builds on the initial findings by introducing strategy mutations and spatial
constraints on the number of interaction partners, and finally by providing agents with mobility (i.e. the
possibility to choose interaction partners). The model shows that moderate greediness favors social
cohesion through a coevolution of cooperation and spatial organization.
The insights provided by UWAR's and ETH's research served as inspiration for the work done by UniS and
UniF. UniS proposed a model of QScience, a platform being implemented within the QLectives project.
The model concentrates on the users of QScience, referred to as "egos", and their interactions with other
users as well as with objects (i.e. scientific papers). The nodes within the ego network
may be created, categorized and ranked by the user based on frequency of interaction, recency of
interaction and reputation. Users can rate how much trust they place in each other based on their
comments, interactions and quality judgments. The overall trust, together with the quality of the objects
owned by an ego and the amount of interaction an ego engages in, defines the reputation of each user.
The platform, as planned, will enable users to rate the quality of documents with regard to their usability
in given contexts, while performing diverse tasks. The ratings will be aggregated within the user's overall
ego network, with the ratings of closer nodes carrying more weight than those of more distant ones. The
list of rated objects will be searchable along many specific dimensions. How to integrate the QScience
platform with other useful applications is still an open question; possible solutions are pointed to in the
report.
Based on the assumptions proposed by UniS, a simulational model was built by UniF. The system is
represented as a network where nodes represent users endowed with reputation scores and objects
characterized by quality scores. Weighted links represent different interactions between the two kinds of
nodes (e.g. authoring, rating, commenting, uploading or downloading papers). The quality of objects
develops through interaction and the formation of consensus in a group, and is interlocked with
reputation, which represents the general opinion of the community towards a user. Trust relations are
either derived from overlaps in users' interests and evaluations or can be specified by the users
manually. The results show that, given a sufficient number of skilled users, the model is able to discern
users with high ability and items with high quality.
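The derivation of trust from overlapping evaluations can be illustrated with a short sketch. Cosine similarity over co-rated objects is one plausible choice; the function name, the rating scale and the similarity measure here are assumptions for illustration, not the model's actual specification:

```python
import math

def overlap_trust(ratings_a, ratings_b):
    """Derive a trust value between two users from the overlap of their
    evaluations: cosine similarity computed over co-rated objects
    (illustrative; the real model may combine this with manual settings)."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0  # no shared evaluations, no derived trust
    dot = sum(ratings_a[o] * ratings_b[o] for o in common)
    norm_a = math.sqrt(sum(ratings_a[o] ** 2 for o in common))
    norm_b = math.sqrt(sum(ratings_b[o] ** 2 for o in common))
    return dot / (norm_a * norm_b)

# Two users who rated the same papers similarly get trust close to 1:
alice = {"paper1": 5, "paper2": 1}
bob = {"paper1": 4, "paper2": 1, "paper3": 3}
print(overlap_trust(alice, bob))
```

Users with disjoint interests get zero derived trust, which is why the model also allows trust to be specified manually.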
In the following pages, the contributions of each partner are presented in detail.
Psychological models of social influence based on trust (University of Warsaw)
Review of literature
Social psychology can be adequately summarized as the science of social impact. G. W. Allport (1968)
delineated the focus of social psychology as “an attempt to understand… how the thought, feeling, and
behavior of the individual are influenced by the actual, imagined or implied presence of others”.
Throughout socialization and in adult life, people are continuously exposed to pressures originating in
their social environment. Every single social encounter is an arena of mutual influence between
interaction partners.
Social influence is deeply embedded in every aspect of interpersonal functioning. This fact can explain
the multitude of contexts in which this term has been evoked. Incarnations of social influence
mechanisms form central issues in social psychology: conformity (Asch, 1956), attitude change (e.g.
McGuire, 1985; Sherif, Sherif & Nebergall, 1965), persuasion (Petty & Cacioppo, 1986), obedience to
authority (Milgram, 1974), social power (French & Raven, 1959) and many more (c.f. Cialdini, 2001;
Nowak, Vallacher & Miller, 2003; Wojciszke, 2000). Social influence forms the basis of coordinated social
action, group formation and social bonding in general (Grzelak & Nowak, 2000).
Traditionally, two basic psychological needs are distinguished that motivate people to be subject to
external guidance. They have been originally proposed by Deutsch and Gerard in their Dual Process
Theory (1955). The first one, informational social influence, relates to the need to be right – to possess
appropriate information or be able to react adequately to the circumstances. Informational influence
occurs especially when the situation is ambiguous or when the person is not provided with enough
information to form an opinion on her own (a famous example of this phenomenon is Sherif's experiment on the
autokinetic effect, 1936). The other type of influence, normative social influence, is driven by the need to
be liked and accepted by the group of reference. Deviation from the established group norm may result
in rejection and ostracism. This threat motivates individuals to comply with the majority even if their private
opinion is different (Asch’s line-judgment conformity experiments, 1956).
Another customary typology of social influence relates to the extent with which new information is
accepted by the target of persuasive attempts (Kelman, 1958). Compliance occurs when a person accepts
influence from the external source in order to gain a reward or avoid a punishment but does not
necessarily internalize the imposed norm. Identification describes the process of attitude modification in
order to become similar to an admired authority. The deepest acceptance of social impact is
internalization, where the change in beliefs is motivated by the true, intrinsic conviction that the
persuasive information is correct.
Central to the study of social influence are the conditions in which people are likely to change their
behavior or internal state in response to the external information or norm. The study of social influence
has been in large part the study of mechanisms of social power and control. The conditions in which the
persuasion attempt is successful have been discussed from the perspective of the influence agent - the
one that exerts power, manipulates other people’s behavior, introduces attitude change (c.f. Cialdini,
2001; Doliński, 2005). The influence agent is equipped with a set of persuasive techniques that help to
maximize the probability of successful persuasion.
The most renowned recapitulation of persuasive techniques was provided by R. Cialdini in his book
“Influence. Theory and practice” (2001; see also: Cialdini & Goldstein, 2004). He enumerated six rules or
‘weapons’ of effective social influence and provided convincing examples of experiments that used these
rules. The first collection of influence techniques is based on the norm of reciprocation – the rule to
compensate others for what was acquired from them. The norm of reciprocation is one of the most
pervasive rules of social coexistence. The second large group of techniques referred to “consistency and
commitment” - motivation to maintain former behaviors and commitments. This rule is illustrated by the
famous foot-in-the-door technique (Freedman & Fraser, 1966). The next rule – the rule of social proof –
affirms that, in ambiguous situations, people are likely to imitate what other people are doing, especially
when the same is being done by many people (e.g. Milgram, Bickman & Berkowitz, 1969). The authority rule
asserts that people tend to obey authority figures as exemplified by Milgram’s experiment on obedience
(1974). The scarcity rule, often employed by salespeople, rests on the limited availability of an offer,
which urges customers to buy. Finally, the liking rule states that we more often comply with a request
from a person whom we like or admire.
The success of the source of influence is measured by the magnitude of social impact and the malleability
of people's perceptions of the object of persuasion. This standpoint is driven by the logic of the market
economy, where consumers' attitudes are the object of a running battle between brands (c.f. Wojciszke,
2000). Both politicians and sellers are interested in convincing their recipients, and social psychology
responds to this demand. (In extreme cases, social influence mechanisms are presented in the form of
straightforward 'tips and tricks' useful in persuading potential targets.)
Petty & Cacioppo’s Elaboration Likelihood Model (ELM)
One of the most influential models of attitude change is the Elaboration Likelihood Model (ELM) proposed by
Petty and Cacioppo (1981, 1986). Central to this model is the "elaboration continuum", which ranges
from low elaboration (low thought) to high elaboration (high thought). The ELM distinguishes between
two routes to persuasion: the "central route," where a subject considers an idea logically, and the
"peripheral route," in which the audience uses preexisting ideas and superficial qualities to be
persuaded.
In the main postulate of the ELM, the authors agree with Festinger (1950), who points out that
“people are motivated to hold correct attitudes”. According to Festinger (1954) “we would expect to
observe behavior on the part of persons which enables them to ascertain whether or not their opinions
are correct”. The main source of evaluation of opinion’s correctness are other people involved in the
social comparison mechanisms (Festinger, 1954).
The influence of other people is cognitively elaborated, and both the amount and the nature of this
elaboration are moderated by individual and situational factors. The extent of the elaboration received by
a message can be viewed as a continuum, ranging from no thought about the issue-relevant information
presented to complete elaboration of every argument. The likelihood of elaboration is determined by a
person's motivation and ability to evaluate the information (Petty & Cacioppo, 1986). At the high end of
the elaboration continuum, the following theoretical approaches to attitude change can be found:
inoculation theory (McGuire, 1964), cognitive response theory (Greenwald, 1968; Petty, Ostrom & Brock,
1981), information integration theory (Anderson, 1981) and the theory of reasoned action (Ajzen &
Fishbein, 1980). At the other end, theories based on affective cues or on various persuasion rules and
heuristics can be found. The fundamental examples of the former kind are classical conditioning (Staats
& Staats, 1958) and the mere exposure effect (Zajonc, 1968). In situations where no affective cues are
available, people are still able to form judgments without scrutiny of issue-relevant arguments. The
evaluation can be based on one's own behavior (self-perception theory; Bem, 1972), rules learned from
past experience (the heuristic model of persuasion; Chaiken, 1980), the message's position relative to
one's own opinion (social judgment theory; Sherif & Sherif, 1967), consistency (balance theory; Heider,
1946) or certain attributional principles (Kelley, 1967). Each of the above theories states that certain
non-central features of a given topic are sufficient to form an attitude without the analysis of issue-related
arguments.
The two ends of the elaboration continuum relate to the two routes of persuasion. Central route
processes are those that require a great deal of thought and are therefore likely to predominate under
conditions that promote high elaboration, e.g. when a person is presented with high-quality arguments
and is highly motivated regarding the issue (Petty & Cacioppo, 1986). Peripheral route processes, on the
other hand, often rely on environmental characteristics of the message, such as the perceived credibility
of the source (Heesacker, Petty & Cacioppo, 1984), the attractiveness of the source (Maddux & Rogers,
1980), or a catchy slogan that contains the message (Petty & Cacioppo, 1986). The choice of route is
based on the motivation and ability to process the message: high motivation or ability leads to a higher
likelihood of central-route processing, whereas low values of these factors result in choosing the
peripheral route.
The most important motivational variable is the personal relevance of the subject. Personal relevance
occurs when people expect the issue “to have significant consequences for their own lives“ (Apsler &
Sears, 1968). Very close to personal relevance are personal responsibility and accountability (Latané,
Williams & Harkins, 1977). Another motivational factor is "need for cognition" (Cohen, Stotland & Wolfe,
1955). People high in need for cognition enjoy relatively effortful cognitive tasks even in the absence of
feedback or reward (Cacioppo & Petty, 1982).
The ability-related variables include distraction and repetition of the message. Distraction results in a
higher cognitive load, which disrupts the thoughts that would normally be elicited by the message. It is
crucial in situations where a person is highly motivated and otherwise able to process the message, but
relatively unimportant in the case of low motivation (Petty & Brock, 1981). Distraction leads to a higher
likelihood of choosing the peripheral route. Repetition is more complicated in nature: on the one hand,
it can lead to higher liking regardless of the message content (the mere exposure effect; Zajonc, 1968);
on the other hand, it enhances the possibility of considering the implications of the message content in a
more objective way. It may also result in boredom and reactance, decreasing message acceptance (Petty
& Cacioppo, 1986).
Kruglanski’s Unimodel
An alternative to the Elaboration Likelihood Model was proposed by Kruglanski and Thompson (1999)
under the name "Unimodel". In contrast to the ELM, it suggests that the two distinct information inputs,
cues and message arguments, "should be subsumed as special cases of the more abstract category of
persuasive evidence". "The two persuasion types share a fundamental similarity in that both are
mediated by if-then, or syllogistic, reasoning leading from evidence to a conclusion" (Kruglanski &
Thompson, 1999).
The persuasion Unimodel is based on the Lay Epistemic Theory (LET) of the processes governing the
formation of subjective knowledge (Kruglanski, 1989). “According to LET, evidence refers to information
relevant to a conclusion. Relevance, in turn, implies a prior linkage between general categories such that
affirmation of one in a specific case (observation of the evidence) affects one’s belief in the other (e.g.,
warrants the conclusion). Such a linkage is assumed to be mentally represented in the knower’s mind,
and it constitutes a premise to which he or she subscribes” (Kruglanski & Thompson, 1999). In other
words the persuasion process takes place in the mind of the person being persuaded. The information
sources may vary yet they all converge on the conclusion stated in form of a simple syllogism (if-then).
In line with the ELM, the Unimodel assumes an impact of motivation and ability on the processing of
persuasive information; however, it treats them in a slightly different manner. Depending on the
motivation level and the cognitive capacity of a person, more or less information can be processed. In a
low-motivation situation, only short and easy-to-process information enters the mind and takes part in
forming the evidence. In contrast, when motivation and capacity are higher, a more elaborative process
based on more complicated material is possible. Not only the sheer amount of motivation but also its
kind is important in seeking information and forming a conclusion. An individual trying to crystallize a
judgment on some issue may desire accuracy and confidence on the topic. However, the relative weight
given to these two epistemic properties may vary, often outside the individual's awareness (Austin &
Vancouver, 1996). The greater the proportional weight assigned to confidence or assurance as such, the
stronger the individual's motivation for nonspecific cognitive closure (Kruglanski & Webster, 1996). In
contrast, the greater the proportional weight assigned to accuracy per se, the stronger will be the
individual's tendency to avoid closure and remain open-minded.
Nowak & Latanés’s dynamical social impact theory
Latané’s (1981) theory of social impact specifies that people are influenced by others according to their
number, strength and immediacy. Number refers the number of people exerting influence, with higher
numbers of people exerting stronger influence, but in such a way that as numbers grow, the influence of
each additional person decreases. Strength refers to the persuasive strength of a person, which can
depend on status, power, importance, intensity, etc. Immediacy is the closeness of another person in
(social) space or time, with influence decreasing exponentially as distance increases. The total amount of
influence or impact on a person is a multiplicative function of those three factors.
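This multiplicative structure can be sketched numerically. The square-root form of the number term below is an illustrative assumption consistent with the statement that each additional person adds less influence; the theory itself does not fix a particular functional form:

```python
def social_impact(strength, immediacy, number, exponent=0.5):
    """Toy multiplicative impact function: strength of the source times
    immediacy (closeness in social space) times a sublinear function of
    the number of sources. The exponent value is an assumption."""
    return strength * immediacy * number ** exponent

# Doubling the number of sources increases impact by less than a factor of 2:
print(social_impact(1.0, 1.0, 4))  # 2.0
print(social_impact(1.0, 1.0, 8))  # ~2.83, not 4.0
```

Any sublinear number term would reproduce the same qualitative prediction of diminishing marginal impact.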
This theory provides deep insight into how individual people are influenced by their social environment,
but people at the same time exert influence on their environment as well. The interactive nature of the
relationship between individuals and their environment cannot be captured by the theory as it was
originally formulated, so Nowak, Szamrej and Latané (1990) expanded it into the dynamic theory of
social impact. By formalizing the original micro-level theory and subjecting it to a series of computer
simulations, they have been able to show how, from the interaction of individuals, macro-level properties
emerge in the form of a structuring of social space. Their simulations of opinion formation have provided
insight into how polarization of opinions evolves over time, why at some point an equilibrium is reached
in the society, and how it is possible for minorities with defiant opinions to survive in the midst of a
majority group with an opposing opinion.
Because influence decreases with distance, localized pockets or clusters of people with similar opinions
appear, rather than a scattered pattern or random mix of people with differing opinions. Although
people are highly sensitive to the majority opinion due to its numerical superiority, minority clusters can
survive if they are clustered around people with high strength. Because of the strength of such an
"opinion leader", "followers" can be prevented from succumbing to the pressure of the larger numbers
of the majority, as long as this strength outweighs the influence of numbers.
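The clustering dynamics described above can be reproduced in a few lines. The sketch below is a simplified toy version of the Nowak, Szamrej and Latané simulation; the inverse-square distance decay, the synchronous updating and the flip rule are illustrative choices rather than the exact original parameterization:

```python
import random

def update(opinions, strengths, n):
    """One synchronous step of a toy dynamic-social-impact model on an
    n x n grid: each agent adopts the opinion backed by the larger total
    impact, where a neighbour's impact is its strength divided by squared
    distance (illustrative decay function)."""
    new = [row[:] for row in opinions]
    for x in range(n):
        for y in range(n):
            support, opposition = 0.0, 0.0
            for i in range(n):
                for j in range(n):
                    if (i, j) == (x, y):
                        continue
                    impact = strengths[i][j] / ((i - x) ** 2 + (j - y) ** 2)
                    if opinions[i][j] == opinions[x][y]:
                        support += impact
                    else:
                        opposition += impact
            if opposition > support:
                new[x][y] = 1 - opinions[x][y]  # flip to the prevailing side
    return new

random.seed(1)
n = 12
opinions = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
strengths = [[random.uniform(0.1, 1.0) for _ in range(n)] for _ in range(n)]
for _ in range(10):
    opinions = update(opinions, strengths, n)
# after a few steps the initial random mix typically organizes into
# local clusters, with minority pockets anchored by high-strength agents
```

Because distance discounts impact, nearby agents dominate each update, which is exactly what produces spatial clusters rather than a well-mixed equilibrium.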
Simulational model (University of Warsaw)
Background
Trust is one of the basic regulatory mechanisms in any social relation. It enables individuals to
differentiate their social interactions, from intimate ones through close relations to formal links. The
level of trust in a relationship determines the types of social processes that can be enacted in it: in
distant relations characterized by low trust, individuals transmit information, while in intimate, highly
trusting relationships they share emotions and are willing to commit resources without reciprocation.
Trust is believed to be a lubricant in social interactions that enables the smooth operation of the gears of
social systems.
We propose that a trust-based mechanism performs an important function in optimizing the functioning
of social groups and societies. In this view, trust regulates how much weight is given to each individual,
so that information coming from more credible individuals is weighted more, and information coming
from less trusted individuals is weighted less. Changes of trust provide an important mechanism for the
self-organization of social systems. The rules used by individuals to increase or decrease trust toward
specific others result in an increase in the average quality of the information they receive and, at the
group level, in more accurate functioning of the group.
The goal of our simulations was to check the effect that different sets of rules for changing trust, applied
at the level of individuals, have on the quality of information in social groups. Our aim is to demonstrate
that trust improves the quality of information at the group level, and to check which of several plausible
sets of rules for updating trust is most effective in increasing the quality of information in the group.
Two main ideas underlie our simulations. First, individuals differ in their ability to gather correct
information. Incorrect information may be represented as noise added to the correct value. We assume
in our simulations that each individual is characterized by a specific value of noise. The lower the level of
noise, the more accurate the information this individual receives. Thus individuals characterized by a low
level of noise have a good sense of what is happening around them. They represent trustworthy sources
of information. Individuals characterized by high values of noise, in contrast, get low-quality information;
one should not rely on information from them. Second, we assume locality of information: the accuracy
of information is often a local phenomenon. Information that is true in one location may be false in
another location. The information “it rains” is true in some locations and false in others.
We are therefore most interested in how trust relations are formed and updated when information is
local and its accuracy depends on location. In general, those individuals should be trusted who are located
nearby and have high-quality information. We can explain the general idea using the example of
assessing the temperature. The prototypical situation in our simulations is that the temperature varies
smoothly between different locations. Each individual is equipped with a sensor that measures the
temperature. Sensors differ vastly in their quality; some produce much larger errors than others.
Individuals have to report their estimates of the temperature. Before reporting, they can ask others for
their estimates. This situation is repeated many times. After each estimate, the individual can update
their trust toward each individual who has sent an estimate. Individuals increase trust toward others
who provide accurate information and decrease trust toward those who provide inaccurate
information.
The exact rules differ between models. We measure the average error in reporting the temperature. We
have used temperature for clarity of exposition. The same model, however, may be interpreted as a
model of assessing which news items are of most interest to newspaper readers, what the prices of
houses are, or what the local economic situation is; in general, any information that has a local character.
In our simulations individuals are placed in a two-dimensional space. For several random places in the
space a temperature is chosen at random. Then the temperature either varies smoothly between these
places, or is assumed to hold at a constant value within some distance from these locations. In each
case, nearby locations have similar (or identical) temperatures. Each individual assesses the temperature
by reading his or her own sensor. The reading of the sensor is the local temperature plus a random value
multiplied by the noise level characteristic of the individual. Then the individual receives information
from others to whom he or she is connected. One's own judgment is corrected by taking into account the
information received from others. The weighting of information depends on the level of trust toward the
source of information. Depending on the model, after each round of the simulation trust toward each
individual may be updated using rules specified by the model.
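The per-round judgment procedure described above can be sketched as follows. This is a minimal illustration, not the exact implementation used in the deliverable: the uniform noise term and the convention of giving one's own reading a weight of 1 are assumptions.

```python
import random

def sensor_reading(true_temp, noise_level):
    """Sensor model from the text: the local temperature plus a random
    value multiplied by the individual's characteristic noise level."""
    return true_temp + random.uniform(-1.0, 1.0) * noise_level

def corrected_judgment(own_reading, neighbor_estimates, trust):
    """Correct one's own judgment by a trust-weighted average:
    each neighbor's estimate is weighted by the trust placed in its
    source; the own reading enters with weight 1 (an assumption)."""
    total = own_reading
    weight = 1.0
    for source, estimate in neighbor_estimates.items():
        total += trust[source] * estimate
        weight += trust[source]
    return total / weight
```

For example, an individual reading 20.0 who fully trusts a neighbor reporting 22.0 ends up with a judgment of 21.0, while estimates from untrusted neighbors are effectively ignored.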
As the first network model we have adopted an unweighted hierarchical network in which the strength
of each link is 1. The network is constructed recursively in the following way. In the first step, k nodes are
generated, where k is drawn from the interval [GrMin, GrMax]. These nodes are connected by links, with
the probability of any two nodes being connected equal to EdgeProb. Then, in each step of the
generation process, each node is divided into a cluster of k connected nodes, where k is drawn from the
interval [GrMin, GrMax]. Each node is connected with probability EdgeProb to other nodes in the cluster.
Random nodes inherit the connections from the higher level. This process is repeated iteratively in each
step of the generation process. After the network structure is constructed, the strength of each node is
randomly generated from a uniform distribution.
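The recursive construction can be sketched roughly as follows. The deliverable does not fully specify every detail, so the value of gr_max and the rule that a single random cluster member inherits each higher-level link are assumptions of this sketch.

```python
import random

def hierarchical_network(gr_min=3, gr_max=5, edge_prob=0.7, steps=4):
    """Recursive hierarchical-network generator following the text:
    start with k nodes (k drawn from [gr_min, gr_max]) wired with
    probability edge_prob; then repeatedly expand every node into a
    cluster wired with probability edge_prob, with each inherited link
    reattached to a random member of the new cluster.
    gr_max=5 is an assumption; the deliverable leaves it unstated."""
    nodes = list(range(random.randint(gr_min, gr_max)))
    edges = {(i, j) for i in nodes for j in nodes
             if i < j and random.random() < edge_prob}
    next_id = len(nodes)
    for _ in range(steps - 1):
        clusters = {}
        new_edges = set()
        for n in nodes:
            size = random.randint(gr_min, gr_max)
            members = list(range(next_id, next_id + size))
            clusters[n] = members
            next_id += size
            new_edges |= {(a, b) for a in members for b in members
                          if a < b and random.random() < edge_prob}
        for (u, v) in edges:
            # a random member of each cluster inherits the higher-level link
            new_edges.add((random.choice(clusters[u]),
                           random.choice(clusters[v])))
        nodes = [m for n in nodes for m in clusters[n]]
        edges = new_edges
    return nodes, edges
```

With four recursion steps and cluster sizes of 3–5, the generator yields a few hundred nodes, in the range described in the text.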
In our simulations we assumed GrMin = 3, GrMax = and p = .7. We have used 4 recursion steps, resulting in
approximately 500 nodes. We have used a graph visualization algorithm to locate the nodes in 2-D space.
The resulting network has properties similar to a small-world network. Individuals are most frequently
connected to others located nearby, and the probability of a connection decreases with the distance in
the 2-D space.
In the first set of simulations the temperature of the environment was set only once. Two temperature
zones were used, where the left side of the screen was characterized by low temperature and the right
side by high temperature. The color of the background visualizes the temperature. Simulations were run
for 10 steps. In each step every individual read the value of his or her own sensor, broadcast this value to
other connected individuals (if there were interactions), received information from other connected
individuals, and corrected his or her own judgment. Then trust was updated. One's own updated judgment
was used as a criterion for updating the credibility of others. The accuracy of judgments of individuals changed
noticeably in the first few steps and then stabilized. In the first set of simulations smooth transitions of
temperature were used.
Baseline model
To establish the baseline a model with no interactions between individuals was run. Figure 1 shows the
judgments of individuals. The color of the background represents the temperature of the environment.
Each circle corresponds to an individual. The color of circles represents the judgment of individuals. The
size of the circles represents initial credibility or trustworthiness of individuals (which does not affect the
dynamics of simulations in this model).
Figure 1. Judgments of individuals in the baseline model with no interactions.
As we can see, the average error with no interactions is approximately 0.15. This value reflects the
average noise setting of the individuals' sensors. Since there are no interactions and trust is not
updated, the accuracy of judgments stays constant during the simulation, and the error after 10
simulation steps is identical to that of the first step.
Model using the Dynamic Theory of Social Impact
As the second model we have adopted the Dynamic Theory of Social Impact. The theory specifies that
the influence of a group of sources on a target is a cumulative function of the influence of individuals:

h_k = Σ_i s_i / d_i^2

where:
• h_k - impact of opinion k
• s_i - strength of source i
• d_i - distance between source i and the target

and the sum runs over the sources i expressing opinion k.
We are using an unweighted network in this set of simulations, so the distance to each of the neighbors
is 1 and the influence on the target is the sum of influences weighted by the trustworthiness of
individuals. We should note, however, that the structure of the hierarchical network we are using
conforms to the assumptions of the Dynamical Social Impact model: individuals are most densely
connected to nearby individuals and thus receive most influence from nearby individuals, and the
amount of information received decreases with distance.
Figure 2a. Initial judgments of individuals, before interactions.
When the judgments are based only on the readings of one's own sensor, the average error is .15. There
is an abrupt change in judgments at the change of temperature. There is no local clustering of
judgments.
Figure 2b. Final judgments where the influence from others follows the Dynamical Social Impact formula
The average error at the end of the simulation, after 10 steps, is approximately .1, which is .05 lower than
the error without social interactions. We can also see in figure 2b strong clustering of judgments:
individuals located nearby issue similar judgments. Judgments of those who are located near the line of
temperature change reflect influences from both temperature zones and thus tend to be close to the
average temperature of the two zones. For this subset of individuals social influence may lead to an
increase in error: using information from others, who live in a different environment, to update one's
own judgment may increase the error of judgments. The overall increase in accuracy in the model of
Dynamic Social Impact is possible because the structure of links results in locality of influence, so most
influence comes from others in nearby locations, who are likely to be exposed to the same conditions.
To test the hypothesis of locality, in the third simulation we have used a fully connected network (all
individuals are interlinked). In such a network there is no locality. Figure 3 shows the final judgments in a
densely connected network.
Figure 3. Final judgments in a fully connected network
After 10 simulation steps, the final judgments became uniform, converging on the mean value. The
average error of individuals dramatically increased. If information depends on the location, and the
structure of contacts does not preserve locality, social influence is likely to lead to a large increase in
error. This is because, even if the judgment of the other is correct, it reflects local conditions in the
location of the other, which are likely to differ from those in the location of the subject.
In conclusion, in the model of Dynamic Social Impact, influence from others is likely to improve the
accuracy of judgment by giving the most weight to information coming from nearby individuals. In the
original model, locality is guaranteed by dividing the influence of a source by the square of the distance
between the source and the target of influence. In the unweighted social network model, social
influence leads to an increase in correctness if distance dictates the structure of connections, i.e.
individuals are most strongly connected to others located in physical proximity. There is strong empirical
evidence showing that the frequency of contacts decreases with the square of the distance (Latane,
Nowak, L’Herrou 1995). In a densely connected network, however, the target of influence is connected
to all the sources, regardless of the distance. If the information has local character, the influence from
others is likely to lead to an increase in error.
In the simulations based on the model of dynamical social impact, described above, the influence of
other individuals and their individual characteristics did not change in time. In the simulations described
below we investigate how the change in trustworthiness affects the error of judgments of individuals.
The simulations ran for 10 rounds. In each round individuals updated the credibility of their sources of
information. If the information coming from an individual differed from one's own judgment by less than
a criterion value, trust toward that individual was increased; if the difference was larger, the individual's
credibility was decreased by a small constant. The total change in the credibility of a source in each
simulation step was equal to the sum of the changes introduced by all targets of its influence. Since the
trust toward a source was the same for all individuals, in this model it amounted to reputation. Figures 4a
and 4b below show a typical change of judgments and trust in the course of the simulations.
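The shared-credibility (reputation) update rule described above can be sketched as follows. The criterion and step-size values are assumptions chosen for illustration; the deliverable does not state them.

```python
def update_reputation(reputation, estimates, judgments, neighbors,
                      criterion=0.1, delta=0.01):
    """Shared-credibility ('reputation') update sketch. Each target
    compares every source's reported estimate with its own updated
    judgment; sources within `criterion` gain `delta` credibility,
    others lose it. Per-source changes from all targets are summed,
    so credibility is effectively shared by the whole group.
    `criterion` and `delta` values are assumptions."""
    change = {s: 0.0 for s in reputation}
    for target, sources in neighbors.items():
        for source in sources:
            if abs(estimates[source] - judgments[target]) < criterion:
                change[source] += delta
            else:
                change[source] -= delta
    for s in reputation:
        reputation[s] = max(0.0, reputation[s] + change[s])
    return reputation
```

Because every target contributes to the same per-source credibility value, accurate sources accumulate reputation quickly, which matches the faster error reduction reported for this model below.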
Figure 4a TRUST model, the configuration of opinions after the first simulation step.
Figure 4b TRUST model, configuration of opinions after the 10th simulation step (step 10, error = 0.0667)
As we can see in figures 4a and 4b, changing trust resulted in a reduction of error from .15 to .067; the
error was reduced by more than half. As we can see in figure 4b, the lowest credibility is assigned to the
individuals located on the border dividing the two zones of information. These individuals also make the
largest errors. This happens because these individuals receive the most conflicting information from
others. Trying to integrate the information they get from others, they end up with judgments that
average information coming from two very different environments. This information is far from correct
in either environment, so others decrease the perceived credibility of those located near the division of
the two information zones. Minimizing the influence from these individuals by assigning them lower
credibility increases the correctness of the average judgments of the individuals in the group. The line of
low-credibility individuals effectively creates a barrier to the flow of information between the two zones.
Blocking misleading information effectively increases the correctness of judgments.
Model of relational aspects of trust
The next model explores relational aspects of trust. In this view, trust is a property of the relationship
between individuals, rather than the credibility of an individual. In the link-weighting model we assumed,
trust can be portrayed as the strength of the link between individuals rather than as the credibility of an
individual that is perceived in the same way by all others. In the simulation, after issuing a judgment each
individual changes his or her estimate of how much to trust each source of information. This is
represented as a change in the strength of the relationship between the two individuals. Figures 5a and
5b show the distribution of opinions at the first and the 10th step of the simulation.
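In contrast to the shared-reputation rule, the relational update keeps a private trust weight per link. A minimal sketch, again with assumed criterion and step-size values:

```python
def update_link_trust(trust, estimates, own_judgment,
                      criterion=0.1, delta=0.01):
    """Relational-trust sketch: `trust` holds one individual's private
    weight for each source (a link strength), updated only from that
    individual's own experience, unlike the shared-reputation model.
    Sources whose estimates fall within `criterion` of the own judgment
    gain `delta`; others lose it. Parameter values are assumptions."""
    for source, estimate in estimates.items():
        if abs(estimate - own_judgment) < criterion:
            trust[source] = min(1.0, trust[source] + delta)
        else:
            trust[source] = max(0.0, trust[source] - delta)
    return trust
```

Here each individual calls this update on his or her own trust dictionary, so no credibility information is pooled across the group.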
Figure 5a Distribution of judgment and trust after the first step of the simulation (link-weight model, step 1, error = 0.1527)
Figure 5b Distribution of judgment and trust after the 10th step of the simulation (link-weight model, step 10, error = 0.1188)
As we can see in figures 5a and 5b, changing links resulted in a reduction of error; the reduction is,
however, much less effective than in the credibility model, where the credibility of an individual was
collectively changed rather than the links between individuals. This is because in the credibility model
the information about the trust one can place in others was cumulative, being effectively shared. In the
relational trust model everyone has to rely on their own experience, so the trust estimates are much less
reliable.
In reality, individuals in social groups issue judgments on subsequent occasions and issues. So rather
than converging on an increasingly correct judgment on a single issue, they issue a series of relatively
independent judgments, where each judgment serves as the basis for updating the trust that will be used
in subsequent judgments. This situation was investigated in the next series of simulations.
In the next series of simulations we investigated the evolution of trust in repeated judgments: rather
than issuing 10 consecutive judgments on the basis of the same information, we assumed that 100
consecutive judgments were made, each on the basis of new information. To preserve similarity with our
initial simulations we assumed that the information in the environment (portrayed by the background
color) did not change, but on each round of the simulation individuals would get a new reading of their
sensors (a new set of random noise was added to the true value present at their location).
As the baseline condition we have used the model of dynamic social impact. We also used the
locality-of-connections assumption. The starting and final configurations of judgments are shown in
figures 6a and 6b. The strength of relations is portrayed by the darkness of the connections. In these
simulations we have used 4, rather than 2, zones of temperature, arranged in a checkerboard pattern.
Figure 6c shows the error over 100 steps of the simulation.
Figure 6a The model of dynamical social impact, initial configuration (step 0)
Figure 6b The model of dynamical social impact, configuration after 100 steps
Figure 6c Error in 100 steps of the dynamical social impact model
(Chart “Influence”: error on the y-axis, from 0 to 0.12; simulation steps 1–99 on the x-axis.)
As we can see, in this model the error with the current settings of individual sensors remains around
.085 and, as expected, does not change in the course of the simulation.
In the next simulation we have used the model of relational trust. The settings of the simulation
correspond to those of the previous simulation. The results are displayed in figures 7a, 7b, and 7c.
Figure 7a Initial configuration of the judgments after the first simulation step
Figure 7b Configuration of judgments and relational trust after 100 rounds of simulation
Figure 7c Error in judgments in 100 simulation steps
(Chart “relations”: error on the y-axis, from 0 to 0.12; simulation steps 1–99 on the x-axis.)
As we can see, over the course of the simulations the average error was significantly reduced. Also, the
links to the individuals located on the borders between information zones became much weaker. The
stronger reduction of error, as compared to the previously shown simulation of this model, is likely due
to the higher number of simulation steps. Even though individuals individually have less information
about the credibility of others than in the model in which they jointly establish the credibility of others,
this information accumulates in the course of repeated experience over consecutive simulation steps.
Model based on small-world network
In the last simulation, we assumed a small-world network with a prevalence of local connections but
with a significant number of far-reaching connections. We have used local grids of 14 by 4 nodes; 80%
of the connections were reassigned to random locations. We have used a relational trust model. In
contrast to previous simulations, for each step of the simulation a new distribution of information in the
environment was used. This was done by randomly choosing 29 locations. For each location a radius was
randomly chosen, and the information within this radius was set to the same randomly assigned value.
As a result, the information was always local, but the radius of the locality varied from step to step.
Figures 8a, 8b, and 8c show the simulation results of this model.
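The grid-plus-rewiring construction described above can be sketched as follows. The choice to reattach only one end of a rewired link is an assumption of this sketch; the deliverable does not specify the rewiring procedure in detail.

```python
import random

def small_world_from_grid(rows=14, cols=4, rewire_p=0.8):
    """Small-world construction following the text: start from a local
    grid (each node linked to its lattice neighbours), then reassign a
    fraction `rewire_p` of the links to random endpoints, producing
    mostly local ties plus far-reaching shortcuts."""
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    edges = []
    for r, c in nodes:
        if r + 1 < rows:
            edges.append(((r, c), (r + 1, c)))
        if c + 1 < cols:
            edges.append(((r, c), (r, c + 1)))
    for i, (u, v) in enumerate(edges):
        if random.random() < rewire_p:
            w = random.choice(nodes)  # reattach one end at random
            if w != u:
                edges[i] = (u, w)
    return nodes, edges
```

With the text's 80% rewiring fraction most links become shortcuts, so locality in this model rests mainly on the surviving grid ties.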
Figure 8a Initial configuration of the judgments after the first simulation step
Figure 8b Configuration of judgments and relational trust after 100 rounds of simulation
Figure 8c Error in judgments in 100 simulation steps
As we can see, over the course of time the error decreased and the strength of the longer connections
decreased as well. The reduction of error was, however, somewhat smaller than in the previous model,
which used fixed information in the environment. This is because the structure of the information in the
environment was more variable than in the previous model, so the structure of connections could reflect
only statistical, rather than invariant, properties of the distribution of information.
In conclusion, in a series of simulations of several models we have investigated how the evolution of trust
can lead to increased correctness of judgments. We were interested in situations where the information
has a local character. Social influence from other individuals leads to increased correctness if the structure
of influence reflects the local structure of information. In the dynamical social influence model this is
guaranteed by the formula. In the network models, correctness is increased in the small-world network
and the hierarchical network, which are characterized by a prevalence of local connections.
Models in which trust evolves as a function of experience evolve toward such a structure. In the
credibility model, individuals whose information comes from the same environment are more highly
trusted. In models of relational trust, more closely located individuals are trusted more, while trust
toward individuals located at a greater distance decreases. Models with constant information in the
environment give results similar to models assuming varying information. The credibility model, where
individuals collectively establish the trustworthiness of others, leads to a faster reduction of error than
the relational trust model.
Punishment-driven cooperation and social cohesion among greedy individuals
(ETH Zurich)
We have studied how (voluntary or imposed) changes in the interaction structure of individuals can
promote their cooperativeness. Starting from the usual well-mixed setting, where individuals’ decisions
are strongly determined by the behaviour of the whole population they belong to, we have explored the
effect of reducing the interaction range and providing the agents with the capability to choose their
interaction partners. Specifically, we have studied how costly punishment can succeed as a cooperation
enhancing mechanism when interactions are restricted to local groups (Helbing, Szolnoki, Perc & Szabó
2010b; Helbing, Szolnoki, Perc & Szabó 2010a) and the emergence and stability of social cohesion among
greedy mobile individuals (Roca & Helbing 2011).
Situations where individuals have to contribute to joint efforts or share scarce resources are ubiquitous.
Yet, without proper mechanisms to ensure cooperation, the evolutionary pressure to maximize
individual success tends to create a tragedy of the commons (such as over-fishing, for instance). Since
Tit-for-Tat was introduced by Axelrod (Axelrod 1984), many cooperation-enhancing mechanisms have
been proposed. Nowak (Nowak 2006) discusses the most important ones, but several others have been
studied. Among them, altruistic costly punishment has probably attracted the most attention
(Fehr & Gächter 2002; Boyd et al. 2003). Spatial interaction is another mechanism for augmenting the
rate of cooperation; it has been studied for nearly 20 years, following Nowak (Nowak & May 1992;
Epstein 1998).
The first two studies we present integrate spatial interaction with punishment. In that respect they
follow earlier work (Brandt et al. 2003; Nakamaru & Iwasa 2006). However, our first study about
punishment analyses in depth the effect of the structure of the punishment (its cost to the punisher and
the fine the punished pays) on the evolution of the system. The second article focuses on the influence
of random mutation on the evolution of the system. While the introduction of migration has already
been studied (Helbing & Yu 2008; Helbing & Yu 2009), the models discussed there rested on knowledge
of the strategies of all the other players. The introduction of a learning process following (Roth & Erev
1995; Macy & Flache 2002) shows how migration dynamics can evolve without this knowledge.
Punishment and spatial structure
In (Helbing, Szolnoki, Perc & Szabó 2010b), the authors studied the evolution of cooperation in spatial
public goods games where, besides cooperation (C) and defection (D), punishing cooperation (PC) and
punishing defection (PD) strategies are considered. By means of a minimalist modelling approach, they
clarify and identify the consequences of the two punishing strategies.
A spatial public goods game is played on a periodic square lattice. Each site on the lattice is occupied by
one player. In accordance with the standard definition of the public goods game, cooperators (C and PC)
contribute to the public good and defectors (D and PD) contribute nothing. The sum of contributions is
multiplied by a synergy factor. Then, the players receive an equal share of this pot as payoff. The
punishing strategies (PC and PD) make an extra contribution to punish defectors. Finally, players adopt
the behavior of a neighbour with a probability depending on the difference in their payoffs.
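The payoff structure of one public goods group can be sketched as follows. The parameter values and the Fermi-type imitation rule are assumptions for illustration, not necessarily the exact settings of the cited papers.

```python
import math

# Strategies: 'C' cooperate, 'D' defect, 'PC' punishing cooperator,
# 'PD' punishing defector.
def group_payoffs(strategies, r=3.5, contribution=1.0, fine=0.5, cost=0.5):
    """Payoffs for one public goods group. Cooperators (C, PC)
    contribute; the pot is multiplied by the synergy factor r and
    shared equally; punishing strategies (PC, PD) pay `cost` per
    defector they fine, and defectors pay `fine` per punisher.
    Self-punishment is excluded. Parameter values are assumptions."""
    n = len(strategies)
    cooperators = [s in ("C", "PC") for s in strategies]
    defectors = [not c for c in cooperators]
    punishers = [s in ("PC", "PD") for s in strategies]
    share = r * contribution * sum(cooperators) / n
    n_def, n_pun = sum(defectors), sum(punishers)
    payoffs = []
    for i, _ in enumerate(strategies):
        p = share - (contribution if cooperators[i] else 0.0)
        if defectors[i]:
            p -= fine * (n_pun - (1 if punishers[i] else 0))  # fines received
        if punishers[i]:
            p -= cost * (n_def - (1 if defectors[i] else 0))  # punishment cost
        payoffs.append(p)
    return payoffs

def adopt_prob(payoff_self, payoff_neighbor, K=0.5):
    """A Fermi-type rule: probability of adopting a neighbour's
    strategy grows with the neighbour's payoff advantage."""
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / K))
```

For a pair (C, D) with these parameters the defector earns 1.75 against the cooperator's 0.75; adding punishment (PC, D) narrows the gap to 1.25 versus 0.25, illustrating why fines can tip the competition.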
Since punishment is costly, punishing strategies lose the evolutionary competition in the case of well-mixed
interactions (i.e. situations where all individuals interact with all the rest). However, when interactions are
limited to the spatial neighbourhood, the outcome can be significantly different and cooperation may
spread. The underlying mechanism depends on the character of the punishment strategy. In the case of
cooperating punishers, increasing the fine results in a rising cooperation level. In contrast, in the
presence of the PD strategy, the level of cooperation shows a non-monotonous dependence on the fine.
Finally, they find that punishing strategies can spread in both cases, but based on largely different
mechanisms, which depend on whether the punishers are cooperative.
The same authors developed this line of work by adding strategy mutations to the usual strategy
adoption dynamics (Helbing, Szolnoki, Perc & Szabó 2010a). As expected, frequent mutations create a
kind of well-mixed conditions, which supports the spreading of defectors. However, when the mutation
rate is small, the final stationary state does not differ significantly from that of the mutation-free model,
independently of the values of the punishment fine and cost. Nevertheless, the mutation rate affects
the relaxation dynamics: rare mutations can greatly accelerate the spreading of costly punishment. This
is due to the fact that the presence of defectors breaks the balance of power between the two
cooperative strategies, which leads to a different kind of dynamics.
Migration and greediness
The two works presented above show how a cooperation-enhancement mechanism can benefit from
restricting interactions to a limited set of other individuals (spatial neighbours, in these cases).
However, these kinds of situations are still quite restrictive, since individuals are ‘trapped’ in a static
neighbourhood where information about each person's strategy and payoff is perfectly known by all the
rest. To some extent, from the individual agent's viewpoint, we have just changed the scale of the
interaction space from global to local. Going further in this line of research relating Structure, Agency
and Cooperation, we have explored the influence of individual mobility on cooperation (i.e. providing
agents with the possibility to choose, to some extent, their interaction partners). Mobility plays a key
role in (Helbing & Yu 2008), where agents were looking for the appropriate neighbourhoods in which to
be successful, and has also been introduced recently in a model to study social cohesion among greedy
individuals with scarce information about each other.
Social cohesion can be characterized by high levels of cooperation and a large number of social ties. Both
features, however, are frequently challenged by individual self-interest. To understand how social
cohesion can emerge and persist in such conditions, Roca and Helbing (Roca & Helbing 2011) simulate
the creation of public goods among mobile agents, assuming that behavioural changes are determined
by individual satisfaction. Specifically, they study a generalized win-stay-lose-shift learning model, which
is only based on individuals’ previous experience.
As in the previous models involving punishment, this model involves agents playing spatial public goods
games within the neighbourhoods they belong to. Nevertheless, in this case, the base grid is sparse and
players can move to empty sites within a certain range. Also, there is no punishing strategy, and players
can only cooperate or defect. Concerning the behavioural update, individuals in the model society are
expected to maintain or change their strategy and their social relationships (position) depending on the
payoffs obtained in the public goods games. Individuals tend to change their strategy or social
neighbourhood when they are dissatisfied with their current payoffs; they follow a satisficing dynamic.
Each player has an individual aspiration level, which determines her satisfaction. The aspiration is
determined by the extreme payoffs that the individual experiences in the public goods games and by a
parameter called greediness.
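The aspiration-based update can be sketched as follows. The interpolation formula for the aspiration level and the rule that a dissatisfied agent both flips strategy and moves are one plausible reading of the text, not the paper's exact specification.

```python
import random

def aspiration(min_payoff, max_payoff, greediness):
    """One plausible reading of the text: the aspiration level lies
    between the extreme payoffs experienced, pushed toward the
    maximum by the greediness parameter (an interpretation, not the
    paper's exact formula)."""
    return min_payoff + greediness * (max_payoff - min_payoff)

def win_stay_lose_shift(payoff, asp, strategy, position, empty_sites):
    """Satisfied agents keep strategy and position; in this sketch a
    dissatisfied agent flips its strategy and moves to a random empty
    site within range (if any)."""
    if payoff >= asp:  # satisfied: win-stay
        return strategy, position
    new_strategy = "D" if strategy == "C" else "C"
    new_position = random.choice(empty_sites) if empty_sites else position
    return new_strategy, new_position
```

A greediness of 0 makes agents content with their worst experienced payoff, while a greediness of 1 makes them demand the best, which is the maladaptive regime the study warns about.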
The most noteworthy aspect of this model is that it promotes cooperation in social dilemma situations
despite very low information requirements about the game and other players’ performance, and without
assuming commonly addressed mechanisms such as imitation, a shadow of the future, reputation
effects, signalling, or punishment. They find that moderate greediness favours social cohesion by a
coevolution between cooperation and spatial organization. However, a maladaptive trend of increasing
greediness, despite enhancing individuals’ returns in the beginning, eventually causes cooperation and
social relationships to fall apart.
Brief introduction to analytical models for trust and reputation (University of
Surrey, University of Fribourg)
Before discussing models, we first need to define what we understand by the words trust, reputation,
and quality. We adopt two definitions offered by Jøsang (2007):
• Decision trust is the extent to which one party is willing to depend on something or somebody in a
given situation with a feeling of relative security, even though negative consequences are possible.
• Reputation is what is generally said or believed about a person's or thing's character or standing.
In addition to decision trust, one can speak about reliability trust which is however more specific and
hence less suitable to address the broad range of problems related to trust. With respect to these two
definitions, one can imagine that “Alice trusts Bob because of his good reputation.” as well as “Alice
trusts Bob despite his bad reputation.” The former sentence describes a situation where Alice has had
little or no contact with Bob and hence cannot do better than to rely on Bob's general reputation. By
contrast, the latter sentence describes a situation where Alice has had a long history of successful
interaction with Bob, which allows her to overcome Bob's poor general reputation. We see that trust and
reputation act together and, depending on the history of interactions, one of them typically has more
relevance. Finally, quality relates to intrinsic properties of items or users (quality of a book or quality of a
reviewer, for example).
From the point of view of the QLectives project, it is important to realize that reputation systems are
essential for online interactions where their task is to reduce the inherent information asymmetry. They
help us to answer questions like “Should I trust this review by Joe?” and “Should I really buy this item
from an eBay seller Maria?”. In addition, reputation systems have beneficial side-effects:
1. they provide incentives for good behavior by providing a so-called “shadow of the future” – our actions within the system today have the potential of influencing our future outcomes (Axelrod, 1984),
2. they repel malicious users and lower their influence,
3. they help to avoid an undesired paralysis of the system which, as shown in Akerlof’s paper about the “market for lemons”, threatens to occur in any commercial system with strong information asymmetry (Akerlof, 1970).
To build a reputation system, there are three conditions to be met (Resnick et al, 2000). Firstly, users
must be long-lived entities that inspire an expectation of future interaction. If users come and go, interacting in the system only once, they have little motivation to behave well because any reputation they build within this system will be of no importance in the future. Secondly, feedback
about current interactions must be captured and distributed within the system. If insufficient data is
collected, trust and reputation will not be formed correctly. Thirdly, feedback must be used to guide
trust decisions. If feedback is collected but it is not delivered back to users to help their decision making
or if it is delivered in a way that they cannot perceive efficiently (the user interface is too complicated,
for example), the system will be of little use to its users. Even when these three conditions are fulfilled, a
reputation system may function poorly because:
1. users may not bother to provide feedback at all (why should they?),
2. it is difficult to elicit negative feedback (because of fear of retaliation),
3. it is difficult to ensure honest reports (blackmailing, coalitions and spamming can appear),
4. online systems often allow the creation of cheap pseudonyms,
5. trust and reputation data is very sensitive, which may discourage users from using the system.
Despite all these risks and problems, reputation systems have been successfully implemented on a wide range of Internet sites, from commercial sites such as eBay and Amazon, through news services such as Digg, to information and help sites such as Stackoverflow, Yahoo Answers, AllExperts, and others. For example, on eBay buyers and sellers rate each other on a three-point scale (−1, 0, +1) and a simple summation of all ratings determines a user’s reputation. Analyses of eBay transactions show that this simple system works surprisingly well (Resnick et al, 2006; Houser and Wooders, 2006). User participation is very high (52% of buyers provide feedback; for sellers the figure is even higher, 61%) and well-reputed sellers get higher prices for their products.
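The eBay scheme just described – summing ratings of −1, 0 or +1 – amounts to the following minimal sketch:

```python
def ebay_reputation(feedback):
    """Net feedback score as used on eBay: each rating is -1, 0, or +1,
    and a user's reputation is the plain sum of all ratings received."""
    assert all(f in (-1, 0, 1) for f in feedback)
    return sum(feedback)

# Example: a seller with four positives, one neutral and one negative rating.
print(ebay_reputation([1, 1, 1, 1, 0, -1]))  # → 3
```

Its simplicity is precisely what the cited studies found surprising: despite ignoring rater trust, rating age and context, the plain sum works well in practice.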
Different descriptions of quality are appropriate under different circumstances (Reeves and Bednar,
1994). As such, there are various different understandings of what ‘quality’ means: For example, it may
mean ‘excellence’ according to the transcendental approach of philosophy, ‘value’ from an economics
perspective (i.e. excellence relative to price), ‘conformance to specifications’ which came from
manufacturing, or how ‘product or service meets or exceeds a customer’s expectation’ which comes
from marketing (Garvin, 1984; Reeves and Bednar, 1994).
In the context of QLectives, we adopt an alternative, ‘product-based’, approach to quality which
originated in economics. This view considers quality to be a ‘precise and measurable variable’, and is an
inherent characteristic, rather than a property that is ascribed to a product or object (Garvin, 1984: 25).
A high quality object contains a greater quantity of a desirable attribute than does a low quality object
(e.g. knots per inch in the case of a rug). Theoretical work has sought to define the number of
dimensions for the evaluation of quality (Garvin, 1984; cf. Brucks and Zeithaml, 2000): (1) Performance
(primary operating characteristics of a product); (2) Features (‘bells and whistles’ of a product); (3)
Reliability (probability of a product failing within a specific period of time); (4) Conformance (degree that
a product’s design matches established standards); (5) Durability (measure of a product’s life); (6)
Serviceability (speed and competency of repair); (7) Aesthetics (subjective measure of how a product
looks, feels, sounds, smells or tastes); (8) Perceived Quality (subjective measure of how the product
measures up against a similar product). Experimental work (Ghylin et al. 2008) has also attempted to
define the characteristics relating to quality, with subsequent rating and clustering of data giving the
following groupings for perceptions of ‘general quality’: Negative affect (defective, failure, poor, bad),
Positive affect (high ranking, precision, terrific, flawless, superior, excellent, best), and Durability
(longevity, long lasting, durable). In terms of perceptions specifically relating to ‘product quality’, they
found: Negative affect (bad, low grade, poor, unsatisfactory), Durability (durable, longevity,
dependability, long lasting, everlasting), Conformance (good, good value), Positive affect (perfect,
excellent).
Trust, Reputation and Quality in QScience – theoretical foundation (University
of Surrey)
A QScience user (henceforth: ‘ego’) needs to interact with both people and objects. The people are
mainly fellow scientists; the objects are electronic documents such as papers, pre-prints, blogs, email
messages, web pages, etc. We start with the assumption that there are far too many people and objects
for ego to interact or even notice them all, so that QScience should help in establishing which ones ego
should interact with.
Definition of the terms and their relationship:
Quality is assessed through the opinions of trusted others and is the basis of one’s reputation among
peers and the wider scientific community. Quality is a characteristic of objects; trust and reputation are
characteristics of people. All three characteristics are dynamic, that is, they change over time. For
instance, trust in alter increases as more successful interactions between ego and alter take place, and
decreases in the absence of interactions. All three are also context dependent. For instance, the
perceived quality of an object may vary according to the purpose for which the judgement is being
made; a scientist may have an excellent reputation as an original thinker but be considered to be a
dreadful organiser. One may trust a colleague’s judgement about one topic, but mistrust their
judgement on another topic.
Egocentric network
The QScience platform will help ego to identify and collect links to others. For instance, all authors of
papers that ego has viewed would automatically become represented as nodes in the network.
Optionally, email contacts and address lists from other applications could also be imported. Ego can also
add nodes manually. Authors cited in papers that ego has saved within QScience will also become
nodes.
Ego will be able to classify and group nodes according to ego’s own typology. Examples of node groups
are: members of my research group; people I met at the XYZ conference; people interested in ABC; my
students. (This is similar to Google+’s ‘circles’, except that Google+ has no notion of a network, and
subsets and intersections of circles are not supported). A person may be added to any number of
groups, and a group may be added to another (e.g. the group ‘My MSc students’ could be added to the
group ‘All my students’).
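The nesting of groups described above (a group containing both people and other groups) can be sketched as a simple recursive data structure; the class and method names here are ours, purely illustrative, not part of the QScience design:

```python
class Group:
    """A node group that may contain people (ids) and other groups,
    as in adding 'My MSc students' to 'All my students'."""
    def __init__(self, name):
        self.name = name
        self.people = set()
        self.subgroups = []

    def add(self, person_id):
        self.people.add(person_id)

    def add_group(self, group):
        self.subgroups.append(group)

    def members(self):
        # Recursively collect members of this group and all nested groups.
        out = set(self.people)
        for g in self.subgroups:
            out |= g.members()
        return out

msc = Group("My MSc students"); msc.add("bob")
all_students = Group("All my students"); all_students.add("carol")
all_students.add_group(msc)
print(sorted(all_students.members()))  # → ['bob', 'carol']
```

A person may freely appear in several groups, since membership is a set at each level.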
Ego (and possibly QScience) will also be able to create links between nodes expressing some kind of
relationship between the nodes. For instance, one relationship might be ‘co-author’, and others might be
‘in the same institution’, ‘is a student of’, ‘is a friend of’ and so on. Some of these links might be
imported from other QScience users (e.g. if user A includes B and C in his network, and labels the links
from A to B and A to C ‘my student’, user X who knows user A might be able to import nodes B and C and
the link labels from A).
As well as being located in a network, the people represented by nodes will be ranked, from those
closest (most trusted) to those furthest away (neutral trust). Negative trust scores will not be allowed.
One graphical implementation would be to have ego create a ‘ladder’ for each group, and place people
on the ladder with those at the top being most trusted and those at the bottom least trusted. However,
QScience will also attempt to rank people itself, according to factors such as frequency of interaction
with ego, recency of interaction and reputation (see below for reputation). Manual intervention will
temporarily override the automatic ranking.
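As an illustration, an automatic ranking that combines interaction frequency, recency and reputation might look as follows; the weights, half-life and functional forms are assumptions, since the text only names the factors:

```python
import math
import time

def auto_rank_score(n_interactions, last_interaction_ts, reputation,
                    now=None, half_life_days=90.0,
                    w_freq=1.0, w_recency=1.0, w_rep=1.0):
    """Illustrative ranking score from three factors named in the text:
    frequency of interaction, recency (exponential decay with an assumed
    90-day half-life) and reputation. Weights are assumptions."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - last_interaction_ts) / 86400.0)
    recency = 0.5 ** (age_days / half_life_days)
    return (w_freq * math.log1p(n_interactions)
            + w_recency * recency
            + w_rep * reputation)
```

Manual placement on the ‘ladder’ would then simply override this computed score for the affected person.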
In summary, QScience will have an egocentric network of those known to ego. It should be possible for
ego to search and browse this network, and also to create and break links between nodes.
Quality
Ego will use QScience to access a variety of objects through the various functional modules (e.g. when bookmarking publications, finding copies of cited papers, reading blog entries from close others, etc.).
QScience will encourage ego (by boosting ego’s reputation score) to rate the quality of all objects they
encounter on a continuous scale from low to high quality. The quality assessment will be recorded,
together with the date/time stamp and the context (i.e. what task ego is performing while accessing the
object). The ‘context’ data provides a means for ego to distinguish between quality dimensions. For
example, ego could examine the quality of documents that were rated while carrying out a literature review, or those that were rated while searching for a statistical formula.
The quality rating will be aggregated with the ratings of everyone else in the network who has provided a
rating, with each rating adjusted by the trust level of the rater (i.e. how close to ego the rater is) and how
old the rating is, in order to provide an ‘automatic’ or default quality rating for the object. The formula
for combining these elements is left for further consideration. This may be overridden by ego manually
adjusting the rating.
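Since the combining formula is explicitly left for further consideration, the following is only one plausible instantiation: a trust- and age-weighted average, where the exponential decay constant is an assumption of ours:

```python
import time

def aggregate_quality(ratings, now=None, decay_days=180.0):
    """ratings: list of (value, rater_trust, timestamp) triples.
    Returns a weighted average where each rating is weighted by the
    rater's trust level and discounted by its age (assumed half-life
    of decay_days). A sketch only, not the QScience formula."""
    now = time.time() if now is None else now
    num = den = 0.0
    for value, trust, ts in ratings:
        age_days = (now - ts) / 86400.0
        w = trust * 0.5 ** (age_days / decay_days)
        num += w * value
        den += w
    return num / den if den else None
```

Ego’s manual adjustment would then replace the returned default value for that object.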
Ego will be able to examine all objects with a quality rating, that is, every object that anyone in ego’s
network has ever rated. Since this may amount to a very long list of objects, the list will be browsable
and searchable by who rated, by quality, by purpose, by date etc. Ego can opt to get, for instance, daily
updates of objects newly rated, or objects that have achieved some threshold rating, or objects that
have been rated within a specified context etc.
It will be possible for ego to view the components of a quality rating: who contributed, when the rating
was done, in what context, and whether the rating was manual or automatic (but, for privacy reasons, it
will not be possible to see what rating a specific person gave to an object).
Reputation
Reputation is acquired through the judgements of others about (1) the quality of objects that ego ‘owns’
(e.g. has written or created), (2) others’ evaluations of the trust they place in ego and (3) the amount of
interaction (e.g. the number of quality judgements) that ego makes. Thus if ego is considered to be
trusted by many, creates high quality contributions and interacts frequently with others in their network,
they will gain a high reputation. The formula for combining these elements is left for further
consideration.
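One hypothetical way to combine the three components just listed is a weighted sum; the weights, the 0–1 ranges and the saturating logarithmic activity term are all assumptions standing in for the formula left open above:

```python
import math

def reputation_score(owned_quality, trust_received, n_interactions,
                     w_q=0.5, w_t=0.3, w_a=0.2):
    """Illustrative combination of the three components named in the text:
    owned_quality   - mean quality of objects ego owns (assumed 0..1),
    trust_received  - mean trust others place in ego (assumed 0..1),
    n_interactions  - number of quality judgements etc. ego has made.
    Weights and the log-based activity term are assumptions."""
    # Saturating activity measure: grows with interactions but stays bounded.
    activity = min(math.log1p(n_interactions) / math.log1p(1000), 1.0)
    return w_q * owned_quality + w_t * trust_received + w_a * activity
```

A group- or context-restricted score, as described below, would simply recompute the same combination over the restricted set of trust scores and object evaluations.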
As well as an overall reputation score, it will be possible for ego to obtain a score based only on the trust
scores of those in certain groups, or those objects evaluated in certain contexts.
Since ego’s reputation depends on others’ trust rankings and on the quality scores of ego’s contributions (which in turn depend partly on others’ quality evaluations and their trust rankings), and others’ trust rankings depend on the quality of their contributions and their reputation, and so on, there are intentional similarities between this algorithm and PageRank (and LeaderRank).
Integration with other apps
From the perspective of ego, QScience needs to be integrated with other applications, such as email,
social networking sites, bibliographic sites, and so on. There are two options for doing this: QScience
could import data from these other sites, using the APIs they usually provide or QScience could integrate
itself with the other sites. The former approach is used, for example, by Tweetdeck, which imports data from Facebook, Twitter, LinkedIn, FourSquare, MySpace and Buzz and presents all messages from all these services in one window. However, this would require ego to adopt another application, QScience,
whose reputation was not yet established. The better alternative is to integrate QScience with the other
applications. This would mean, for example, that emails generated by QScience would be sent using
ego’s email client and QScience use case functionality would be accessed through ego’s web browser.
One way to obtain ego’s quality ratings would be to attach a side tab to all the web pages that ego views.
Trust, Reputation and Quality in QScience (University of Fribourg)
QScience network
QScience is intended to be a distributed platform for scientists allowing them to locate or form new
communities and quality reviewing mechanisms. The system can be represented by a network in which
nodes are:
 users, labelled by Latin letters and endowed with reputation scores R_i(t),
 items, labelled by Greek letters and endowed with quality scores Q_α(t).
Items in QScience can be: reviews, blogs, editorials, papers, news, events.
Many kinds of interactions between nodes are possible in QScience; each of them is represented by a weighted link w_iα between two nodes, where the link’s weight depends on which interaction has occurred. Possible interactions on the bipartite users-items network W = {w_iα} are:
 user i authors a review/blog/paper α: w_iα = A,
 user i uploads a review/blog/paper/news/event α: w_iα = U,
 user i comments on/votes for an item α: w_iα = V,
 user i reads/downloads a review/blog/paper α: w_iα = D,
where A, U, V, D are numerical parameters of the system. In practice, it should hold that A > U > V > D,
reflecting how important/demanding each action is. This bipartite network is then projected onto the monopartite users-users network, where the weighted links now represent the trust relationships among users.
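As a concrete illustration of the bipartite network and its projection, the sketch below uses assumed parameter values (only the ordering A > U > V > D is specified above) and one possible overlap-based projection rule, which is our assumption rather than the system’s definition:

```python
# Interaction weights; only the ordering A > U > V > D is given in the text.
A, U, V, D = 4.0, 3.0, 2.0, 1.0

# Bipartite users-items network W: user -> {item: weight}.
W = {
    "i": {"alpha": A, "beta": V},   # i authored alpha, voted on beta
    "j": {"alpha": D, "beta": V},   # j downloaded alpha, voted on beta
}

def project_trust(W):
    """Project the users-items network onto a users-users trust network.
    Here the trust link weight is the overlap of the two users'
    interactions (sum over shared items of the smaller weight) --
    one possible rule, not the one prescribed by QScience."""
    trust = {}
    users = list(W)
    for a in users:
        for b in users:
            if a == b:
                continue
            shared = set(W[a]) & set(W[b])
            w = sum(min(W[a][x], W[b][x]) for x in shared)
            if w:
                trust[(a, b)] = w
    return trust

print(project_trust(W))  # trust between i and j via shared items alpha, beta
```

Manually specified trust links would simply be added to, or override, the projected ones.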
Definition and Interrelation of Trust, Reputation and Quality
Quality is social: it is not an inherent property of an object but it is constructed through interactions
(meaning that there is a process of achieving consensus about the quality of the object). In our system,
evaluating items is a way of achieving belonging and affiliation with others: quality scores develop
through interaction and the formation of consensus in a group. Item α's quality score hence results from
the aggregation of W, which is done as:
(1)
where t is the current time and τ_iα = t − t_iα is the age of the interaction. This formula gives greater weight to reputable users. The decay function D(τ) is intended to give a high weight to recent interactions and a low (but non-zero) weight to old interactions.
Reputation represents the general opinion of the community towards a user. Hence it is ascribed by
others and assessed on the basis of the quality of the user’s actions. Denoting j’s trust in i as T_ji, we assume
(2)
Due to the w_iα terms, authoring a successful paper contributes more to one’s reputation than commenting on or downloading it. Trust relations T_ji can be derived from overlaps of users’ interests and evaluations (that is, from the interaction matrix W) or they can be specified by the users manually (which introduces elements of social networks into the system). A possible way to determine trust from interactions is represented by
(3)
Since equations (1), (2) and (3) are mutually interconnected, the resulting quality and reputation values can be determined by iteration, similarly to the classical PageRank and HITS reputation algorithms (Franceschet, 2011; Kleinberg, Kumar, Raghavan, Rajagopalan, Tomkins, 1999). To avoid divergence, quality and reputation values are normalized in each iteration step. Note that users’ reputation in the system is not intended to be a copy of their reputation in the real world – it only reflects actions done within QScience.
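The iterate-and-normalise scheme can be sketched as follows; the concrete update rules used here (quality as a reputation-weighted sum of interactions, reputation as a trust- and quality-weighted measure of activity) are illustrative assumptions standing in for equations (1)–(3), and the decay terms are omitted for brevity:

```python
def iterate_scores(W, T, n_iter=50):
    """W[i][a]: interaction weight of user i on item a;
    T[(j, i)]: j's trust in i. Update rules are assumptions; only the
    iterate-and-normalise scheme is taken from the text."""
    users = list(W)
    items = sorted({a for i in W for a in W[i]})
    R = {i: 1.0 / len(users) for i in users}
    Q = {a: 1.0 / len(items) for a in items}
    for _ in range(n_iter):
        # Quality: reputation-weighted aggregation of interactions.
        Q = {a: sum(R[i] * W[i].get(a, 0.0) for i in users) for a in items}
        # Reputation: trust received, scaled by quality-weighted activity.
        R = {i: sum(T.get((j, i), 0.0) for j in users)
                * sum(W[i].get(a, 0.0) * Q[a] for a in items)
             for i in users}
        # Normalise in each iteration step to avoid divergence.
        for S in (Q, R):
            z = sum(S.values()) or 1.0
            for k in S:
                S[k] /= z
    return R, Q

R, Q = iterate_scores(
    {"i": {"a": 2.0}, "j": {"a": 1.0, "b": 1.0}},
    {("i", "j"): 1.0, ("j", "i"): 1.0})
print(R, Q)
```

As with PageRank and HITS, the fixed point of this mutual recursion, rather than any single pass, defines the final scores.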
Figure 2. Users' reputation values (indicated by circle diameters) vs. their ability and activity parameters in the basic model.
Results were obtained by agent-based simulation of a system where 1000 users with a broad range of abilities and activities are
present and produce their own papers and read papers made by others. The basic assumption is that the higher a user’s ability, the better the papers this user produces and reads.
Evaluation
The proposed trust and reputation model can be tested and evaluated by means of agent-based
simulations. When artificial agents are endowed with intrinsic ability values, the quality of a user’s interactions can be assumed to be directly influenced by this ability value (for example, a skilled user authors high quality papers, provides accurate ratings, comments on good papers, and so forth). The first simulations show that, given a sufficient number of skilled users, our model is indeed able to
discern users with high ability values and items with high quality. A detailed presentation of the model
and its extensive evaluation in various test scenarios will follow in one of our future deliverables.
References
Ajzen, I. & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.
Englewood Cliffs, NJ: Prentice-Hall.
Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism.
The Quarterly Journal of Economics, 84(3), 488-500.
Allport, G. W. (1968). The historical background of modern social psychology. In G. A. Lindzey &
E. Aronson (Eds.), The handbook of social psychology (Vol. 1, pp. 1-46) Reading, Mass.: Addison Wesley.
Anderson, N. H. (1981). Integration theory applied to cognitive responses and attitudes. In R. E.
Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion (pp. 361-397). Hillsdale, NJ:
Erlbaum.
Apsler, R. & D. Sears (1968), Warning, Personal Involvement, and Attitude Change, Journal of
Personality and Social Psychology, 9:3, 162-6.
Arrow, KJ. 1974: The limits of organization. W.W. Norton Company: USA.
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a
unanimous majority. Psychological Monographs, 70(9).
Austin, J. T., & Vancouver, J. B. (1996). Goal constructs in psychology: Structure, process and
content. Psychological Bulletin, 120, 338–375.
Axelrod, R., 1984. The Evolution of Cooperation, New York.
Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in Experimental Social
Psychology, (Vol. 6, pp. 1-62). New York: Academic Press.
Boyd, R. et al., 2003. The evolution of altruistic punishment. Proceedings of the National
Academy of Sciences of the United States of America, 100, pp.3531-5. Available at:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=152327&tool=pmcentrez&rendertype=abst
ract.
Brandt, H., Hauert, C. & Sigmund, K., 2003. Punishment and reputation in spatial public goods
games. Proceedings. Biological sciences / The Royal Society, 270, pp.1099-104. Available at:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1691345&tool=pmcentrez&rendertype=abs
tract.
Brucks, Merrie, Valarie Zeithaml, and Gillian Naylor (2000). Price and Brand Name as Indicators
of Quality Dimensions, Journal of the Academy of Marketing Science, 28 (3), 359-374.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social
Psychology, 42, 116–131.
Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source
versus message cues in persuasion. Journal of Personality and Social Psychology, 39, 752-756.
Cialdini, R. B. (2001). Wywieranie wpływu na ludzi. Teoria i praktyka. Gdańsk: Gdańskie
Wydawnictwo Psychologiczne.
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and Conformity. Annual
Review of Psychology, 55(1), 591-621.
Cohen, A.R., Stotland, E., & Wolfe, D.M., (1955). An Experimental Investigation of Need for
Cognition, Journal of Abnormal and Social Psychology, Vol.51, No.2, pp.291-294.
Coleman, P. T., Vallacher, R. R., Nowak, A., & Bui-Wrzosinska, L. (2007). Intractable conflict as an
attractor: Presenting a model of conflict, escalation, and intractability. American Behavioral Scientist.
Houser, D. & Wooders, J. (2006). Reputation in auctions: Theory, and evidence from eBay. Journal of Economics & Management Strategy, 15(2), 353-369.
Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social influences
upon individual judgment. Journal of Abnormal and Social Psychology, 51, 629 – 636.
Doliński, D. (2005) Techniki wpływu społecznego. Warszawa: Scholar.
Epstein, J.M., 1998. Zones of cooperation in demographic prisoner’s dilemma. Complexity.
Available at:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.8598&rep=rep1&type=pdf.
Fehr, E. & Gächter, S., 2002. Altruistic punishment in humans. Nature, 415, pp.137-40. Available
at: http://www.ncbi.nlm.nih.gov/pubmed/11805825.
Festinger, L. (1950). Informal social communication. Psychological Review, 57, 271–282.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7: 117–140.
Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure: the foot-in-the-door
technique. Journal of Personality & Social Psychology, 4(2), 195-202.
French, J. R., & Raven, B. (1959). The bases of social power. In D. Cartwright, D. Cartwright (Eds.),
Studies in social power (pp. 150-167). Oxford England: Univer. Michigan.
Garvin, D. A. (1984). What does “product quality” really mean? Sloan Management Review, 26,
25-43.
Ghylin, K. M., Green, B. D., Drury, C. G., Chen, J., Schultz, J. L., Uggirala, A., Abraham, J. K., et al.
(2008). Clarifying the dimensions of four concepts of quality. Theoretical Issues in Ergonomics Science,
9(1), 73-94.
Greenwald, A. G. (1968). Cognitive learning, cognitive response to persuasion, and attitude
change. In A. G. Greenwald, T. C. Brock, and T. M. Ostrom (Eds.), Psychological foundations of
attitudes (pp. 147-170). New York: Academic Press.
Grzelak, J. Ł., & Nowak, A. (2000). Wpływ społeczny. In J. Strelau (Ed.), Psychologia. Podręcznik
akademicki (Vol 3, pp. 187-204) Gdańsk: Gdańskie Wydawnictwo Psychologiczne.
Heesacker, M., Petty, R. E, & Cacioppo, J. I. (1984), Field-dependence and attitude change:
Source credibility can alter persuasion by affecting message-relevant thinking, Journal of Personality, 51,
653-666.
Heider, F. (1946). "Attitudes and Cognitive Organization," Journal of Psychology, 21, 107-12
Helbing, D. & Yu, W., 2008. Migration as a mechanism to promote cooperation. Adv. Complex
Syst, 11, pp.641-652. Available at: http://iopscience.iop.org/0295-5075/81/2/28001.
Helbing, D. & Yu, W., 2009. The outbreak of cooperation among success-driven individuals under
noisy conditions. Proceedings of the National Academy of Sciences of the United States of America, 106,
pp.3680-5. Available at:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2646628&tool=pmcentrez&rendertype=abs
tract.
Helbing, D. et al., 2010a. Defector-accelerated cooperativeness and punishment in public goods
games with mutations. Physical Review E, 81, pp.1-4. Available at:
http://link.aps.org/doi/10.1103/PhysRevE.81.057104.
Helbing, D. et al., 2010b. Punish, but not too hard: how costly punishment spreads in the spatial
public goods game. New Journal of Physics, 12, p.083005. Available at: http://stacks.iop.org/1367-2630/12/i=8/a=083005?key=crossref.77dc5c47ba40519fa052a6b641ce4661.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (ed.), Nebraska
Symposium on Motivation (Volume 15, pp. 192-238). Lincoln: University of Nebraska Press.
Kelman, H. (1958). Compliance, identification, and internalization: Three processes of attitude
change. Journal of Conflict Resolution 1: 51–60.
Kruglanski, A. W. (1989). Lay epistemics and human knowledge: Cognitive and motivational
bases. New York: Plenum.
Kruglanski, A. W., & Thompson, E. P. (1999). Persuasion by a single route: A view from the
unimodel. Psychological Inquiry, 10, 83-109.
Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: “Seizing” and
“freezing.” Psychological Review, 103, 263–283.
Latané, B. (1981). The psychology of social impact. American Psychologist, 36, 343-356.
Latané, B., & Nowak, A. (1994). Attitudes as catastrophes: From dimensions to categories with
increasing involvement. In R. Vallacher & A. Nowak (Eds.), Dynamical systems in social psychology (pp.
219-249). New York: Academic Press.
Latané, B., Liu, J., Nowak., A., Bonavento, & M., Zheng, L (1995). Distance matters: Physical
distance and social impact. Personality and Social Psychology Bulletin , 21, 795-805
Latané, B., Williams, K. & Harkins, S. (1979). Many hands make light the work: The causes and
consequences of social loafing. Journal of Personality and Social Psychology 37 (6): 822–832
Lewenstein, M., Nowak, A., & Latané, B. (1993). Statistical mechanics of social impact. Physical
Review A, 45, 763-776.
Macy, M.W. & Flache, A., 2002. Learning dynamics in social dilemmas. Proceedings of the
National Academy of Sciences of the United States of America, 99 Suppl 3, pp.7229-36. Available at:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=128590&tool=pmcentrez&rendertype=abst
ract.
Maddux, J. E., & Rogers, R. W. (1980) Effects of Source Expertness, Physical Attractiveness, and
Supporting Arguments on Persuasion: A Case of Brains Over Beauty, Journal of Personality and Social
Psychology, 39, No. 2, 235-244.
McGuire, W. Inducing resistance to persuasion: Some contemporary approaches. In L. Berkowitz
(ed.), Advances in Experimental Social Psychology, Vol. 1, New York: Academic Press, 1964, 191-229.
McGuire, W. J. (1985). Attitudes and attitude change. In G. A. Lindzey & E. Aronson (Eds.), The
handbook of social psychology (Vol. 2, pp. 233-346). Reading, Mass.: Addison Wesley.
Milgram S. (1974). Obedience to Authority. New York: Harper & Row.
Milgram, S., Bickman, L., & Berkowitz, O. (1969). Note on the drawing power of crowds of
different size. Journal of Personality and Social Psychology, 13, 79-82.
Misztal, B. (1998) Trust in Modern Societies: The Search for the Bases of Social Order, Polity
Press
Nakamaru, M. & Iwasa, Y., 2006. The coevolution of altruism and punishment: role of the selfish
punisher. Journal of theoretical biology, 240, pp.475-88. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/16325865.
Nowak, A., Szamrej, J., & Latané, B. (1990). From private attitude to public opinion: a dynamical
theory of social impact. Psychological Review, 97, 362-376.
Nowak, A., Vallacher, R. R., & Miller, M. E. (2003). Social influence and group dynamics. In T.
Millon, M. J. Lerner, T. Millon, M. J. Lerner (Eds.) , Handbook of psychology: Personality and social
psychology, Vol. 5 (pp. 383-417). Hoboken, NJ US: John Wiley & Sons Inc.
Nowak, M.A. & May, R.M., 1992. Evolutionary games and spatial chaos. Nature, 359, pp.826-829.
Available at: http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Nature92.pdf.
Nowak, M.A., 2006. Five rules for the evolution of cooperation. Science (New York, N.Y.), 314,
pp.1560-3. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17158317.
Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and Persuasion: Classic and Contemporary
Approaches. Dubuque, IA: Wm. C. Brown.
Petty, R. E., Ostrom, T. M., & Brock, T. C. (Eds.) (1981). Cognitive responses in persuasion.
Hillsdale: Erlbaum.
Petty, R.E. & Cacioppo, J.T. (1986). The Elaboration Likelihood Model of persuasion. New York:
Academic Press.
Petty, R.E., & Brock, T.C. (1981). Thought disruption and persuasion: Assessing the validity of
attitude change experiments. In R. Petty, T. Ostrom, & T. Brock (Eds.), Cognitive responses in persuasion
(pp. 55-79). Hillsdale, NJ: Erlbaum.
Reeves, C. A., Bednar, D. E., (1994). Defining Quality: Alternatives and Implications. Academy of
Management Review, 19, 419- 445.
Resnick, Paul, Zeckhauser, Richard, Friedman, Eric, and Kuwabara, Ko. Reputation Systems.
Communications of the ACM, 43(12), December 2000, pages 45-48
Resnick, Paul, Zeckhauser, Richard, Swanson, John, and Kate Lockwood. The Value of Reputation
on eBay: A Controlled Experiment. Experimental Economics. Volume 9, Issue 2, Jun 2006, Page 79-101.
Roca, C.P. & Helbing, D., 2011. Emergence of social cohesion in a model society of greedy, mobile
individuals. Proceedings of the National Academy of Sciences, 108, pp.11370-11374. Available at:
http://www.pnas.org/cgi/doi/10.1073/pnas.1101044108.
Roth, A.E. & Erev, I., 1995. Learning in extensive-form games: Experimental data and simple
dynamic models in the intermediate term. Games and Economic Behavior, 8, pp.164-212. Available at:
http://linkinghub.elsevier.com/retrieve/pii/S089982560580020X.
Sherif, C. W., Sherif, M., & Nebergall, R. E. (1965). Attitude and attitude change: The social
judgement-involvement approach. Philadelphia: Saunders.
Sherif, M. & Sherif, C. W. (1967). Attitudes as the individual’s own categories: The social-judgment approach to attitude and attitude change. In C. W. Sherif and M. Sherif (eds.), Attitude, ego-involvement and change (pp. 105-139). New York: Wiley.
Sherif, M. M. (1936). The psychology of social norms. Oxford England: Harper.
Staats, A. W., & Staats, C. K. (1958). Attitudes established by classical conditioning. Journal of
Abnormal and Social Psychology, 57, 37-40.
Sztompka, P. (2007) Zaufanie: fundament spoleczenstwa (Trust: the Foundation of Society, in
Polish), Kraków 2007: Znak Publishers
Wojciszke, B. (2000). Postawy i ich zmiana. In: J. Strelau (Ed.) Psychologia. Podręcznik akademicki
(Vol 3, pp. 79-106). Gdańsk: Gdańskie Wydawnictwo Psychologiczne.
Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, Monograph supplement No. 2, Part 2.