Changing the rules of the game:
experiments with humans and virtual agents
Marco Janssen
School of Human Evolution and Social Change,
School of Computing and Informatics,
Center for the Study of Institutional Diversity
In cooperation with:
ASU: Allen Lee, Deepali Bhagvat, Marty Anderies, Sanket Joshi, Daniel Merritt,
Clint Bushman, Marcel Hurtado, Takao Sasaki, Priyanka Vanjari, Christine
Hendricks
Indiana University: Elinor Ostrom, Robert Goldstone, Fil Menczer, Yajing
Wang, Muzaffer Ozakca, Michael Schoon, Tun Myint, David Schwab, Pamela
Jagger, Frank van Laerhoven, Rachel Vilensky
Thailand: Francois Bousquet, Kobchai Worrapimphong, Chutapa Khunsuk,
Sonthaya Jumparnin, Pongchai Dumrongrojwatthana
Colombia: Juan-Camilo Cardenas, Daniel Castillo, Jorge Maldonado, Rocio
Moreno, Silene Gómez, Maria Quintero, Rocio Polania, Sandra Polania, Adriana
Vasquez, Carmen Candelo, Olga Nieto, Ana Roldan, Diana Maya
The commons dilemma
• Dilemma between individual and group interests
– Group interest: cooperation
– Individual interest: free riding on efforts of others
• Public goods and common pool resources
• Expectation with rational selfish agents
– No public goods
– Overharvesting of common pool resources
• But, many empirical examples of self-governance
What contributes to cooperation in
commons dilemmas?
(based on research with artificial agents and humans)
• Repeated interactions
• Face-to-face communication
• Information on participants' past actions
• Monitoring and sanctioning by the subjects themselves
• Diversity in motivation: not all humans are selfish and rational

But the problem is not binary (cooperate or defect). What matters is defining the rules of the game and enforcing them.
Grammar of Rules
• Rules are defined as shared understanding
about enforced prescriptions, concerning what
actions (or outcomes) are required, prohibited,
or permitted (Ostrom, 2005).
• Rules in use vs rules on paper
• Formal rules vs informal rules (formal rules have
explicit consequences defined for when the rules are broken
(sanctions) and can be enforced by a third party)
Puzzles
• In what way do users of a common resource
change the rules?
• What makes communication effective?
• How does this relate to experience?
• And to ecological dynamics?
Combining experiments and agent-based models
• Traditionally, agent-based models of cooperation have been very abstract
• Experiments in lab and field challenge
simplistic models of behavior
• Micro-level data to test models
• Going back and forth between experiments and
modeling may stimulate theory development
Common research questions
[Diagram: common research questions connect laboratory experiments, field experiments, "role games", and artificial worlds through statistical analysis, surveys, interviews, text analysis, and models]
Field experiments
• 3 types of games in 3 types of villages in Thailand
and Colombia
• Pencil and paper experiments
• First 10 rounds: open access
• Voting round: 3 types of rules: lottery, rotation,
private property
• Survey on rule options
• Second set of 10 rounds with chosen rule
• Survey
• In-depth interviews with a few villagers
Field experiments (2)
• Fishery game:
– where to fish (A,B)
– how much effort
• Irrigation game (different position; upstream):
– How much investment in public good (water)
– What amount to take from (remaining) water
• Forestry game:
– How much harvest
[Map: Colombian study sites — fishery village (Baru), water irrigation village (Lenguazaque), logging village (Salahonda)]
[Map: Thai study sites along the Phetchaburi river — forest village, irrigation village, fishery village]
[Chart: Rule choice — number of groups choosing lottery, rotation, or property rights in Colombia (C) and Thailand (T), by game (irrigation, forestry, fishery)]
Forestry game
Fishery game
Irrigation game
Laboratory experiments
• Various spatially explicit real-time virtual
environments for small groups.
• Various rounds
• Treatments include different options for rule choice and/or participant chat about informal rules
Experiments from Spring 2007
• Renewable resource with density-dependent regrowth (sketched below)
• Resource is 28x28 cells
• 4 participants
• Round duration: 4 minutes
• First round is an individual round (14x14 cells)
• Text chat between the rounds
• Option to reduce tokens of others at the end of each round (at a cost)
• Explicit and implicit mode
• Different resource growth experiments:
  – Low growth (6 groups)
  – High growth (4 groups)
  – High / Low growth (6 groups)
  – Mixed growth (6 groups)
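The density-dependent regrowth can be sketched compactly. Below is a minimal Python sketch, assuming an empty cell refills with probability proportional to the fraction of its occupied neighbors; the growth rate, the Moore neighborhood, and the torus wrap-around are illustrative assumptions, not the exact rule in the experiment software.

```python
import random

SIZE = 28           # grid is 28x28 cells (from the experiment design)
GROWTH_RATE = 0.01  # hypothetical per-step base regrowth rate

def occupied_neighbors(grid, x, y):
    """Count tokens in the Moore (8-cell) neighborhood, wrapping at the edges (assumption)."""
    count = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
            count += grid[nx][ny]
    return count

def regrow(grid, rate=GROWTH_RATE):
    """One regrowth step: empty cells refill with density-dependent probability."""
    new_grid = [row[:] for row in grid]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] == 0:
                density = occupied_neighbors(grid, x, y) / 8.0
                if random.random() < rate * density:
                    new_grid[x][y] = 1
    return new_grid
```

With this form, a fully depleted resource never recovers, which is why early overharvesting is costly in the low-growth condition.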
Tokens in the resource during the rounds
[Charts: tokens remaining in the resource over time (0–240 seconds) in rounds 1–5, shown separately for the Low, High, Mixed, and High-Low growth conditions]
Average number of tokens collected (blue) and left over
(red) for the 5 rounds
[Charts: averages across rounds 1–5 for the H, L, HL, and Mix conditions]
Text analysis
• Coding the text: kinds of rules, checking that people understand the agreement, off-topic chat, the meaning of the experiment, etc.
• Is there a relation between the type of conversation and the performance of the group?
• We would expect groups that are more explicit about the rules and make sure people understand them to do better.
• In some groups one person clearly dominates; how does this affect the outcome?
Initial results

[Charts: relative frequency of coded chat categories (general, off-topic chat, experimentation, strategy, affirm, space, time, rounds, sanctioning, past) broken down by round (3, 4, 5) and by growth condition (H, M, L)]
Models of Rule changes
• Laboratory experiments will give us basic empirical information to develop the agent-based model.
• The ABM will be used to explore rule evolution as agents adjust the rules.
Reasons for making a model of the
experimental data
• Testing alternative assumptions about behavior (compare the model with naïve models)
• Methodological challenge: what do we mean by calibrating an agent-based model?
• Future option: experiments with artificial
agents and humans
• Using the “informed” agent-based model for
exploring theoretical questions in an artificial
world.
Model outline
• Time step: 1 second.
• Actions: move and harvest (explicit mode)
• Each agent has a basic default speed (moves per second), and the number of moves can vary a little between seconds.
• Define direction (target), as sketched below:
  – the nearer a token is to the agent, the more valuable
  – the nearer a token is to the current target, the more valuable
  – the more other agents near a token, the less valuable
  – tokens that lie straight ahead in the agent's current direction of movement are more valuable
• Harvest (explicit mode): probabilistic choice depending on the number of tokens nearby
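A minimal Python sketch of this decision step follows. The weights, the agent fields (pos, target, heading), and the crowding radius are hypothetical, chosen only to illustrate the valuation logic above.

```python
import math

# Illustrative weights for the four valuation criteria (assumptions, not calibrated values).
W_AGENT, W_TARGET, W_CROWD, W_AHEAD = 1.0, 0.5, 0.8, 0.6

def token_value(token, agent, others):
    """Value of a token: closer to agent and current target is better,
    crowded tokens are worse, tokens straight ahead are better."""
    d_agent = math.dist(token, agent["pos"])
    d_target = math.dist(token, agent["target"]) if agent["target"] else 0.0
    crowd = sum(1 for o in others if math.dist(token, o["pos"]) <= 3)  # radius is an assumption
    # Alignment with the agent's heading (unit vector): cosine of the angle to the token.
    dx, dy = token[0] - agent["pos"][0], token[1] - agent["pos"][1]
    norm = math.hypot(dx, dy) or 1.0
    ahead = max(0.0, (dx * agent["heading"][0] + dy * agent["heading"][1]) / norm)
    return -W_AGENT * d_agent - W_TARGET * d_target - W_CROWD * crowd + W_AHEAD * ahead

def choose_target(tokens, agent, others):
    """Pick the most valuable token as the agent's next movement target."""
    return max(tokens, key=lambda t: token_value(t, agent, others))

def harvest_probability(tokens_nearby, k=0.15):
    """Probabilistic harvest choice in explicit mode: more nearby tokens,
    higher chance of harvesting. The scale k is illustrative."""
    return 1.0 - math.exp(-k * tokens_nearby)
```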
Testing the model
• Calibration on multiple metrics using genetic algorithms (a sketch follows below)
• Comparing the calibrated model with naïve models (random movement, greedy agents, no heterogeneity)
• Turing tests
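A minimal sketch of the genetic-algorithm calibration, assuming a simple sum of absolute deviations across several metrics as the fitness; the parameter names, ranges, and GA settings are illustrative placeholders, and run_model/observed stand in for the agent-based model and the statistics from the experiments.

```python
import random

# Hypothetical parameter ranges for the behavioral model.
PARAM_RANGES = {"w_agent": (0, 2), "w_target": (0, 2), "w_crowd": (0, 2), "harvest_k": (0, 1)}

def fitness(params, run_model, observed):
    """Distance between simulated and observed metrics, summed over all metrics
    rather than a single one (lower is better)."""
    simulated = run_model(params)  # assumed to return a dict of the same metrics
    return sum(abs(simulated[m] - observed[m]) for m in observed)

def calibrate(run_model, observed, pop_size=40, generations=50, mutation=0.1):
    pop = [{k: random.uniform(*r) for k, r in PARAM_RANGES.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, run_model, observed))
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}  # uniform crossover
            for k, (lo, hi) in PARAM_RANGES.items():
                if random.random() < mutation:
                    child[k] = random.uniform(lo, hi)       # mutation: redraw within range
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: fitness(p, run_model, observed))  # best parameter set found
```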
Towards a theoretical model of the
evolution of rules
• Artificial world where agents play many
rounds and adjust the rules of the game.
• What kind of rule sets will evolve? Are there
attractors of rule sets?
• How is this dependent on the ecological
dynamics?
• How is this dependent on the rule to change
the rules (constitution)?
Coding rules
• Grammar of Institutions (Crawford and
Ostrom, 1995)
• A rule is built up from 5 components:
  – Attributes (characteristics of the agents)
  – Deontic: may / must / must not
  – Aim: action of the agent
  – Conditions: when, where, and how
  – Or else: sanctions when the rule is not followed
Process of constructing a rule from the
libraries
IF “other agent” in “my area” it MUST NOT “collect tokens” ELSE “penalty”
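The five components map naturally onto a small data structure. Below is a minimal Python sketch of the ADICO grammar with the example rule above filled in; the field names follow Crawford and Ostrom's grammar, and the string values are illustrative library entries, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    attributes: str          # which agents the rule applies to
    deontic: str             # "may", "must", or "must not"
    aim: str                 # the action the rule governs
    conditions: str          # when / where / how the rule applies
    or_else: Optional[str]   # sanction when the rule is broken (None = a norm, not a rule)

# The example rule above, assembled from the component libraries.
example = Rule(
    attributes="other agent",
    deontic="must not",
    aim="collect tokens",
    conditions="in my area",
    or_else="penalty",
)
```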
Rule space based on experiments
(Not yet in building blocks)
• Explicit mode required or not
• Start time of harvesting
• Time left before "going crazy"
• Spatial allocation (none, corners, horizontal, vertical)
• Speed limit

(These dimensions are sketched as a configuration object below.)
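A minimal sketch of this rule space as a configuration object; the field names, types, and defaults are illustrative assumptions rather than the format used in the model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoundRules:
    explicit_mode_required: bool = False   # whether harvesting must be done in explicit mode
    start_time: int = 0                    # seconds into the round before harvesting may begin
    free_for_all_time: int = 0             # seconds left in the round when "going crazy" is allowed
    spatial_allocation: str = "none"       # "none", "corners", "horizontal", or "vertical"
    speed_limit: Optional[float] = None    # maximum moves per second; None means no limit
```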
Including monitoring and sanctioning
• Monitoring:
– None
– One monitor who cannot harvest but receives a
quarter of the income
– Everybody monitors, and sanctioning is costly
– Monitoring rotates every x seconds (when
monitoring one cannot harvest)
Tinkering with the rules
• After every round, agents update their preferences for rules (reinforcement learning) and propose a rule set for the next round, after which one of the proposed rule sets is chosen and implemented (sketched below).
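A minimal sketch of this between-round updating, assuming a simple payoff-tracking reinforcement update and a random draw among the agents' proposals; the learning rate and the selection mechanism are illustrative assumptions.

```python
import random

LEARNING_RATE = 0.2  # illustrative

def update_preference(agent, used_rule_set, payoff):
    """Reinforcement learning: nudge the preference for the rule set just
    played toward the payoff it produced."""
    old = agent["preferences"][used_rule_set]
    agent["preferences"][used_rule_set] = old + LEARNING_RATE * (payoff - old)

def propose(agent):
    """Each agent proposes its currently most preferred rule set."""
    return max(agent["preferences"], key=agent["preferences"].get)

def next_rule_set(agents, used_rule_set, payoffs):
    """Update all preferences, collect proposals, and pick one for the next round."""
    for agent, payoff in zip(agents, payoffs):
        update_preference(agent, used_rule_set, payoff)
    proposals = [propose(a) for a in agents]
    return random.choice(proposals)  # one proposal is selected and implemented
```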
Agents breaking rules
• Agents can break rules. If an action is not allowed, the agent might break the rule with a probability related to the opportunity available (the number of tokens nearby), as sketched below.
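A minimal sketch of this conditional rule breaking, assuming the violation probability grows linearly with the tokens available nearby; the temptation parameter and the linear form are illustrative.

```python
import random

def breaks_rule(tokens_nearby, temptation=0.05):
    """Return True if the agent violates the rule this step: the more tokens
    available nearby, the higher the probability of breaking the rule."""
    p_break = min(1.0, temptation * tokens_nearby)
    return random.random() < p_break
```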
Distribution of total earnings
(100 evolutions of 100 rounds)
[Histogram: frequency distribution of total earnings (bins 0–400 tokens) over 100 evolutions of 100 rounds, comparing the "one monitor" and "everybody monitors" treatments]
Initial experiments
• Multiple (100) runs with 100 rounds with agents who conditionally
cheat. Best solution:
                    Low growth (one)    Low growth (everybody)
Speed limit         7.5                 5
Mode                Explicit            Not explicit
Boundaries          Vertical            Vertical
Start time          90                  110
Time to go crazy    210                 140
Earnings (tokens)   337                 409
From ABM back to experiments
• Further analysis may provide us with expected outcomes for experiments with human participants. Additional experiments can be done to test those expectations.
Areas to explore in model analysis
• Do clusters of rules evolve? And do these clusters change with different tendencies of agents to break the rules?
• Co-evolution of cheating behavior and rules (incl. monitoring/sanctioning)
• What are the path-dependent trajectories?
• What if growth rates change between rounds? How will this affect the evolved rule sets?
• How will differences in constitutional rules affect the ability to achieve high performance?
Concluding remarks
• Combining agent-based models with
experiments in the field and the lab. The aim is
not to make predictive models, but theoretical
models grounded in empirical observations.
• Challenges:
– Calibration of agent-based models (multiple
metrics)
– Modeling communication
– Large scale controlled experiments with humans
Questions?