Workshop on Noise and Imprecision in
Individual and Interactive Decision-Making
University of Warwick
16–18 April 2012
Contents

Aims
Programme
Abstracts
Participants
Contacts
Aims
It is now widely recognised that decision-making is an intrinsically noisy process. When decision-makers face exactly the same decision problem under the same controlled conditions in the lab, a sizeable proportion of them fail to make the same decision, even in the absence of any feedback. This applies equally to individual and interactive decision-making. Since most decision theories are deterministic, this variability must be adequately modelled in order to give those theories empirical content and relate them to laboratory data.

The aim of this workshop is to stimulate an open-minded and constructive discussion of the issues of noise and imprecision in individual and interactive decision-making, review the state of the art, consider the strengths and limitations of the different approaches, and identify future research priorities.

This event is sponsored by the UK Economic and Social Research Council (grant number RES-051-27-0248) and the University of Warwick Global Priorities Programme on Individual Behaviour.
Programme
Monday 16th April

09:00–10:00   Registration (coffee available)
10:00–10:20   Welcome address
10:20–11:30   Session 1 – Nat Wilcox: “Some Thoughts on Probabilistic Choice, Experimental Designs and Experimental Purposes”
11:30–11:50   Coffee break
11:50–13:00   Session 2 – Michel Regenwetter: “Quantitative Testing of Decision Theories”
13:00–14:00   Lunch
14:00–15:10   Session 3 – Jerome Busemeyer: “Achieving More Coherent Representations of Latent Utility by Using More Sophisticated Stochastic Models of Manifest Behavior”
15:10–16:20   Session 4 – Jorg Rieskamp: “Testing Sequential Sampling Models Against Standard Random Utility Models”
16:20–16:40   Coffee break
16:40–17:50   Session 5 – Graham Loomes/Dani Navarro-Martinez: “Sequential Expected Utility Theory: Sequential Sampling in Economic Decision Making under Risk”
19:00         Conference dinner
Tuesday 17th April

08:30–09:10   Coffee
09:10–10:20   Session 6 – Pavlo Blavatskyy: “How to Model Probabilistic Choice: A Microeconomic Perspective”
10:20–11:30   Session 7 – Michael Birnbaum: “True and Error Models of Response Variation in Choice and Judgment Studies”
11:30–11:50   Coffee break
11:50–13:00   Session 8 – Stephane Hess: “In Search of the Real Drivers of Heterogeneity in Choice Models”
13:00–14:00   Lunch
14:00–15:10   Session 9 – Miguel Costa-Gomes: “Level-k Models and Decision Noise”
15:10–16:20   Session 10 – Ted Turocy: “Quantal Response Equilibrium: A Survey”
16:20–16:40   Coffee break
16:40–17:50   Session 11 – Andrea Isoni/Graham Loomes: “Preference and Belief Imprecision in Games”
Wednesday 18th April

08:30–09:10   Coffee
09:10–10:20   Session 12 – Gordon Brown: “Noise, Context, and Individual Differences in Risk Attitude”
10:20–11:30   Session 13 – Stefan Traub: “Attention and Revealed Preference in a Portfolio Choice Experiment”
11:30–11:50   Coffee break
11:50–13:00   Session 14 – Peter Dayan: “A View from the Bottom”
13:00–14:30   Concluding remarks and lunch
Abstracts
Nat Wilcox (Chapman)
Some Thoughts on Probabilistic Choice, Experimental Designs and Experimental Purposes
In general we have two goals for algebraic preference models: (A) they survive skeptical within-sample hypothesis tests; and (B) they predict well to new or hold-out data. Most researchers now agree that models of the probabilistic part of choice are needed to best accomplish both goals. However, another matter may be less well appreciated: "good experimental design" depends both on our experimental purpose (that is, whether we are focusing on goal A or B) and on the model of probabilistic choice we specify. This talk will illustrate these two points using several examples. The tentative conclusion is that specific experimental designs are (and should be) driven both by specific purposes and by specific assumptions about randomness. Because of this, attempts to generalize conclusions across differently generated experimental data sets call for special caution.
Michel Regenwetter (Illinois)
Quantitative Testing of Decision Theories
I will present a general modeling framework for probabilistic specification of algebraic theories for
binary choice. I will provide examples of frequentist, as well as Bayesian, tests, e.g., of Cumulative
Prospect Theory, using individual participant laboratory choice data.
Jerome Busemeyer (Indiana)
Achieving More Coherent Representations of Latent Utility by Using More Sophisticated Stochastic
Models of Manifest Behavior
Decision theorists are becoming increasingly frustrated by the ephemeral nature of utility. The theorist searches for some coherence in the utility function of a decision maker across tasks and contexts, but the behavioral data force the theorist to think otherwise. In particular, choices among gambles can reverse with irrelevant changes in the descriptions of events that result in exactly the same distribution of final outcomes; preferences measured by choices between gambles can reverse when these preferences are instead measured by certainty equivalents; choices among consumer products can reverse by changing the context of the choice set; and choices between actions can reverse when these choices are made under different time constraints. The purpose of this paper is to show that by building more sophisticated stochastic models of behavior, and treating utility as a latent parameter of this stochastic process, it is possible to recover the coherence that decision theorists seek. In particular, I will focus on a stochastic model of choice and certainty equivalents called decision field theory. The paper concludes with the point that a trade-off must be accepted: more complex models of behavior are required to recover simpler representations of utility.
Jorg Rieskamp (Basel)
Testing Sequential Sampling Models against Standard Random Utility Models
Economists have addressed the probabilistic character of choice behavior by developing random utility models. Psychologists, by contrast, have focused on developing cognitive models that describe the underlying cognitive process generating the variability of people's choices. Sequential sampling models represent one prominent cognitive approach to explaining decision making. According to these models, the decision maker accumulates evidence for the available choice options until a decision threshold is crossed. In the present project we test a prominent sequential sampling model (decision field theory) against standard random utility models (logit and probit models) in predicting consumer behavior. The results show that for randomly selected choice situations sequential sampling models predict people's decisions better than random utility models, but the improved fit is not large enough to justify the greater complexity of the sequential sampling model. However, when focusing on choice situations in which the choice options influence each other's evaluations, the sequential sampling model outperforms the random utility models substantially in predicting people's preferences.
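To make the contrast concrete, the sketch below simulates a bare evidence accumulator of the kind described above next to a logit random utility benchmark. It is not decision field theory itself (which adds attention switching and lateral inhibition); the utilities, threshold and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_choice_probs(utilities, temperature=1.0):
    """Random utility (logit) benchmark: choice probabilities are a
    softmax of the options' deterministic utilities."""
    v = np.asarray(utilities) / temperature
    e = np.exp(v - v.max())
    return e / e.sum()

def accumulator_choice(utilities, threshold=30.0, noise_sd=1.0, max_steps=10_000):
    """Minimal sequential sampling sketch: each option accumulates noisy
    evidence proportional to its utility, and the first accumulator to
    cross the threshold determines the choice. Returns (choice, steps)."""
    u = np.asarray(utilities, dtype=float)
    evidence = np.zeros_like(u)
    for step in range(1, max_steps + 1):
        evidence += u + rng.normal(0.0, noise_sd, size=u.shape)
        if evidence.max() >= threshold:
            return int(evidence.argmax()), step
    return int(evidence.argmax()), max_steps

# Simulated choice frequencies from the accumulator vs. logit probabilities.
utils = [1.0, 0.8, 0.5]
choices = [accumulator_choice(utils)[0] for _ in range(5_000)]
freqs = np.bincount(choices, minlength=len(utils)) / len(choices)
print("accumulator frequencies:", np.round(freqs, 3))
print("logit probabilities:   ", np.round(logit_choice_probs(utils, 2.0), 3))
```

Unlike the logit benchmark, the accumulator also yields response times (the step counts), which is one reason sequential sampling models can be tested on data that random utility models are silent about.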
Graham Loomes (Warwick)/Dani Navarro-Martinez (LSE)
Sequential Expected Utility Theory: Sequential Sampling in Economic Decision Making under Risk
We introduce the notion of sequential sampling (or repeated sampling) in a standard Expected
Utility (EU) framework to capture the deliberation process involved in decision making. We show
how a simple sequential-sampling EU model can be constructed, we illustrate its main implications,
and we present experimental evidence testing some of its predictions. Our results show that the
simple idea that individuals sample repeatedly from standard EU preferences can explain some of
the most prominent deviations from EU theory. Moreover, the model provides predictions on
additional measures related to deliberation (mainly response times and confidence), which standard
economic models are silent about. Most of the model’s predictions are supported by our
experimental evidence. Our sequential-sampling approach also has the potential to be extended to other economic decision models.
Pavlo Blavatskyy
How to Model Probabilistic Choice: A Microeconomic Perspective
Empirical research often requires a method for converting a deterministic microeconomic theory into an econometric model. Several such methods have been proposed, including the classical strong utility (Fechner) model, the strict utility (Luce) model and random preference/utility models, as well as the contemporary approaches of Fishburn's (1978) probabilistic EUT, Wilcox's (2008, 2010) contextual utility and Blavatskyy's (2009, 2011) lattice approach. Yet none of these methods satisfies certain desirable microeconomic properties, such as rare violations of first-order stochastic dominance and weak stochastic transitivity, invariance to positive affine transformations of the utility function, and the existence of a well-defined measure of risk aversion. I shall present some personal thoughts on how to resolve this problem, giving birth to a new method that can be regarded as a descendant of the classical Fechner (strong utility) model and a sibling of the Blavatskyy (2009, 2011) lattice approach.
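For reference, here is a minimal sketch of the classical Fechner (strong utility) model that the abstract takes as its starting point: the probability of choosing one option over another is a monotone function of the utility difference. The normal CDF link and the example numbers are illustrative assumptions.

```python
from statistics import NormalDist

def fechner_choice_prob(u_x, u_y, sigma=1.0):
    """Strong utility (Fechner) model: P(choose x over y) is a monotone
    function of the utility difference u(x) - u(y); here the link is the
    standard normal CDF and sigma scales the Fechnerian noise."""
    return NormalDist().cdf((u_x - u_y) / sigma)

# A mild preference for x becomes nearly deterministic as noise shrinks.
for sigma in (2.0, 1.0, 0.25):
    print(sigma, round(fechner_choice_prob(1.0, 0.7, sigma), 3))
```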
Michael Birnbaum (Fullerton)
True and Error Models of Response Variation in Choice and Judgment Studies
The true and error model assumes that each participant can have a different set of true preferences and that each choice problem or judgment can have a different rate of error. A slightly more general model allows each person to have a different level of noise that amplifies the errors. Another variant allows error rates to differ for people with different “true” preference patterns. A still more general variant of the TE model allows that an individual may change true preferences during a long experiment from block to block, employing a mixture of different “true” preferences at different times over the course of the experiment. This model does not imply that responses to the same item by the same person on different trials will be independent (except in special cases), an assumption made by certain random preference models. True and error models are not incompatible with Fechner/Thurstone/Luce type models, which impose a transitive underlying continuum, but TE models do not exclude systematic intransitivity. Nor need they exclude the possibility that evidence accumulates over time, triggering a response when a decision limen is reached. Empirical evidence from tests of transitivity, stochastic dominance, restricted branch independence and Allais paradoxes will be presented to illustrate applications of the models. Because true and error models do not themselves impose transitivity, stochastic dominance, or the related properties that provide crucial tests among current decision-making models, they can serve as relatively neutral frameworks within which these critical properties can be tested.
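The non-independence point can be illustrated with a minimal true-and-error calculation for a single choice problem presented twice; the mixture weight and error rate below are illustrative assumptions.

```python
from itertools import product

def te_pattern_probs(p_true_A, error):
    """True-and-error sketch for one choice problem presented twice:
    the respondent truly prefers A with probability p_true_A; on each
    presentation the expressed choice flips with probability `error`,
    independently, conditional on the true preference."""
    probs = {}
    for r1, r2 in product("AB", repeat=2):
        pr = 0.0
        for true_pref, p_state in (("A", p_true_A), ("B", 1 - p_true_A)):
            e1 = error if r1 != true_pref else 1 - error
            e2 = error if r2 != true_pref else 1 - error
            pr += p_state * e1 * e2
        probs[r1 + r2] = pr
    return probs

# Repetitions are NOT independent once true preferences are mixed:
# P(AA) - P(A)^2 = p(1-p)(1-2e)^2 > 0 whenever 0 < p < 1 and e != 0.5.
probs = te_pattern_probs(p_true_A=0.6, error=0.1)
p_A = probs["AA"] + probs["AB"]
print(probs, "P(A)^2 =", round(p_A**2, 4), "P(AA) =", round(probs["AA"], 4))
```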
Stephane Hess (Leeds)
In Search of the Real Drivers of Heterogeneity in Choice Models
The study of heterogeneity across individual decision makers is one of the key areas of activity in the
field of behavioural research. However, a disproportionately large share of the research effort
focusses on heterogeneity in sensitivities to individual attributes, and in particular how such
heterogeneity can be accommodated in a random coefficients framework. While differences in
marginal sensitivities clearly play a role in driving behaviour, this presentation makes the case that
retrieved differences in such sensitivities may in fact be caused by a number of different factors. In
particular, we look at the possible role of underlying attitudes, differences in decision rules across
respondents, and the role of information processing strategies. We present evidence from a number of studies suggesting that accounting for such richer behavioural patterns leads to important gains in the understanding of behaviour, and may also reduce the level of residual random heterogeneity.
Conversely, this suggests that not adequately accounting for such additional factors may overstate
the degree of unexplained heterogeneity in marginal sensitivities.
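For concreteness, the sketch below shows the random coefficients (mixed logit) framework the abstract refers to, in which taste parameters vary across individuals according to an assumed population distribution; the attributes and taste distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_logit_probs(attributes, beta_mean, beta_sd, n_draws=2_000):
    """Random coefficients (mixed logit) sketch: each individual's taste
    vector is drawn from a normal population distribution, and choice
    probabilities are averaged over those draws. Attributing ALL
    unexplained variation to taste heterogeneity can overstate it if
    decision rules or information processing also differ across people."""
    X = np.asarray(attributes)                      # options x attributes
    betas = rng.normal(beta_mean, beta_sd, size=(n_draws, X.shape[1]))
    v = betas @ X.T                                 # draws x options
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

# Three options described by (price, quality); hypothetical taste distribution.
X = [[1.0, 2.0], [2.0, 3.0], [3.0, 3.5]]
print(np.round(mixed_logit_probs(X, beta_mean=[-1.0, 1.5], beta_sd=[0.5, 0.8]), 3))
```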
Miguel Costa-Gomes (Aberdeen)
Level-k Models and Decision Noise
The level-k model is one of the models currently used to fit data from experimental one-shot games. In this talk I will discuss the modeling of three of its features: (i) the adjustment of players' beliefs via iterated best responses; (ii) the anchor of players' beliefs, known as L0 behavior; and (iii) decision noise. I will pay special attention to the role of decision noise in fitting the data, while highlighting its relationship to some of the modeling choices for the other two features.
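A minimal sketch of how these three features interact follows; anchoring L0 at uniform randomization is one common assumption (not the only one), and the game and precision values are hypothetical.

```python
import numpy as np

def softmax(v, lam):
    """Logit decision noise: precision lam interpolates between uniform
    play (lam -> 0) and exact best response (lam -> infinity)."""
    e = np.exp(lam * (v - v.max()))
    return e / e.sum()

def level_k_with_noise(payoffs, k, lam):
    """Level-k sketch for a symmetric game given by the row player's
    payoff matrix: L0 is anchored at uniform randomization, and each Lk
    noisily best responds (logit with precision lam) to L(k-1)."""
    n = payoffs.shape[0]
    strategy = np.full(n, 1.0 / n)          # L0: uniform anchor
    for _ in range(k):
        expected = payoffs @ strategy       # expected payoff of each action
        strategy = softmax(expected, lam)   # noisy best response
    return strategy

# Hypothetical 3x3 game; higher lam means less decision noise.
G = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 1.0, 1.5]])
for lam in (0.5, 2.0, 10.0):
    print("lam =", lam, "L2 play:", np.round(level_k_with_noise(G, 2, lam), 3))
```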
Ted Turocy (UEA)
Quantal Response Equilibrium: A Survey
McKelvey and Palfrey introduced the quantal response equilibrium (QRE) concept in a pair of articles published in 1995 (Games and Economic Behavior, for strategic games) and 1998 (Experimental Economics, for extensive games). Its initial appeal lay in its elegant formulation, which built upon well-established approaches in decision and game theory, including random utility models, fixed-point reasoning, and purification of mixed-strategy equilibria, while at the same time making predictions that matched anomalous behaviour across a range of experimental games, such as asymmetric matching pennies, centipede games and the traveler's dilemma. At the same time, open questions remain about the empirical content of QRE and its domain of applicability, especially as QRE is silent (or offers multiple interpretations) on matters such as the origin of decision noise and the procedural mechanisms by which QRE-like behaviour might arise. This talk will survey the history to date of QRE with the viewpoint of the behavioural social scientist in mind, including what is (and is not) known about the theoretical and mathematical structure of QRE, possible interpretations of the mathematical model, its role in the computation and selection of Nash equilibria, and its successes and failures in organising experimental data, and will suggest open questions and directions for current and future work.
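As a rough illustration, the sketch below computes a logit QRE for an asymmetric matching pennies game by damped fixed-point iteration; the payoffs and precision values are illustrative, and iteration is only one simple way of locating an equilibrium.

```python
import numpy as np

def softmax(v, lam):
    e = np.exp(lam * (v - v.max()))
    return e / e.sum()

def logit_qre(A, B, lam, iters=10_000, damp=0.05):
    """Logit QRE sketch for a bimatrix game (A: row payoffs, B: column
    payoffs). At a QRE each player's mixed strategy is a logit (quantal)
    response to the other's, so we iterate the logit response map with
    damping until it (hopefully) settles on a fixed point."""
    p = np.full(A.shape[0], 1.0 / A.shape[0])   # row player's mixture
    q = np.full(A.shape[1], 1.0 / A.shape[1])   # column player's mixture
    for _ in range(iters):
        p = (1 - damp) * p + damp * softmax(A @ q, lam)
        q = (1 - damp) * q + damp * softmax(B.T @ p, lam)
    return p, q

# Asymmetric matching pennies: Nash keeps the row player at 50/50, but
# logit QRE shifts the row player's own play with the payoff asymmetry.
A = np.array([[9.0, 0.0], [0.0, 1.0]])   # row wins on a match
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # column wins on a mismatch
for lam in (0.5, 2.0, 8.0):
    p, q = logit_qre(A, B, lam)
    print("lam =", lam, "row:", np.round(p, 3), "col:", np.round(q, 3))
```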
Andrea Isoni (Warwick)/Graham Loomes (Warwick)
Preference and Belief Imprecision in Games
Many experimental studies have found that behaviour in simple one-shot games is inconsistent with
the assumption that strategy choices are best responses to equilibrium beliefs. These findings have
been explained either as best response to non-equilibrium beliefs – as in level-k and Cognitive
Hierarchy models – or as equilibria that reflect noisy preferences – as in Quantal Response Equilibria.
We investigate to what extent failure to best respond to stated beliefs is the result of preference
and/or belief imprecision. We elicit belief ranges and confidence in strategy choices in four classes of
one-shot 2x2 two-person games. Our measures of imprecision show a substantial degree of
sensitivity to parameter changes, both within and between game structures. Best response rates are
higher when players are more confident about their strategy choices, and for games in which belief
ranges are relatively narrow.
Gordon Brown (Warwick)
Noise, Context, and Individual Differences in Risk Attitude
Economists and psychologists typically take different approaches to individual differences in attitudes towards risk. One idea is that stable individual differences in risk attitude exist, but that the expression and/or measurement of these individual differences is subject to noise. Another idea, associated with recent approaches within psychology, is that choices on tasks designed to measure risk attitude are largely driven by experienced or retrieved comparison context. We report an experiment in which people’s risk attitudes are measured on repeated occasions using a Holt-Laury procedure with variation in the context of choice options. Such a procedure allows us to apportion variance to (a) context effects, (b) stable individual differences, and (c) noise.
Stefan Traub (Bremen)
Attention and Revealed Preference in a Portfolio Choice Experiment
In a laboratory experiment, each of 41 student subjects faces a series of 16 successive grouped portfolio selection problems. We classify subjects' choices as consistent or inconsistent according to Varian's (1982) generalized axiom of revealed preference (GARP) and check whether chosen portfolios are dominated in terms of first-order stochastic dominance. While subjects deal with their choice tasks, we record the attention paid to each portfolio in terms of time spent on it. We compute the first four central moments of the distribution function of attention. Preliminary data analysis suggests that subjects who perform worse need more time to complete their tasks, and their distribution functions of attention exhibit less variance, skewness, and kurtosis.
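For concreteness, here is a minimal sketch of a GARP consistency check in the spirit of Varian (1982); the two-observation data set at the end is hypothetical and constructed to violate the axiom.

```python
import numpy as np

def violates_garp(prices, bundles):
    """GARP check sketch. Choice i is directly revealed preferred to j if
    bundle j was affordable when i was chosen (p_i.x_i >= p_i.x_j). GARP
    is violated if, through the transitive closure of that relation, some
    i is revealed preferred to a j whose own chosen bundle x_j was
    strictly more expensive than x_i at prices p_j."""
    P = np.asarray(prices, dtype=float)
    X = np.asarray(bundles, dtype=float)
    cost = P @ X.T                                # cost[i, j] = p_i . x_j
    spent = np.diag(cost)                         # expenditure at choice i
    R = cost <= spent[:, None]                    # direct revealed preference
    for k in range(len(X)):                       # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strictly_cheaper = cost.T < spent[None, :]    # [i, j]: p_j.x_i < p_j.x_j
    return bool((R & strictly_cheaper).any())

# Hypothetical cycle: each chosen bundle was affordable when the other was
# chosen, yet each is strictly cheaper at the other's prices.
prices  = [[2.0, 1.0], [1.0, 2.0]]
bundles = [[2.0, 1.0], [1.0, 2.0]]
print(violates_garp(prices, bundles))  # True
```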
Peter Dayan (UCL)
A View from the Bottom
This talk will summarise the key ideas that emerged during the workshop from the perspective of neuroscience.
Participants

Andrea Isoni (University of Warwick) – a.isoni@warwick.ac.uk
Anna Conte (Max Planck Institute of Economics) – a.conte@westminster.ac.uk
Chris Olivola (Warwick Business School) – cyolivola@gmail.com
Chris Starmer (University of Nottingham) – chris.starmer@nottingham.ac.uk
Daniel Navarro-Martinez (London School of Economics) – d.navarro-martinez@lse.ac.uk
Daniel Read (Warwick Business School) – daniel.read@wbs.ac.uk
Daniel Sgroi (University of Warwick) – daniel.sgroi@warwick.ac.uk
David Butler (Murdoch Business School) – D.Butler@murdoch.edu.au
Eugenio Proto (University of Warwick) – eugenio.proto@warwick.ac.uk
Ganna Pogrebna (University of Sheffield) – g.pogrebna@warwick.ac.uk
Gordon Brown (University of Warwick) – g.d.a.brown@warwick.ac.uk
Graham Loomes (Warwick Business School) – g.loomes@warwick.ac.uk
Han Bleichrodt (Erasmus Research Institute of Management) – bleichrodt@ese.eur.nl
Horst Zank (University of Manchester) – horst.zank@manchester.ac.uk
Ildefonso Mendez Martinez (University of Murcia) – ildefonso.mendez@um.es
Jerome Busemeyer (Indiana University, Bloomington) – jbusemey@indiana.edu
John Hey (University of York) – jdh1@york.ac.uk
Jorg Rieskamp (University of Basel) – jorg.rieskamp@gmail.com
José Luis Pinto Prades (Pablo de Olavide University) – jlpinto@upo.es
Maria Ruiz-Martos (University of Castellón) – maria.ruizmartos@eco.uji.es
Michael Birnbaum (California State University, Fullerton) – mbirnbaum@fullerton.edu
Michel Regenwetter (University of Illinois) – regenwet@illinois.edu
Miguel Costa-Gomes (University of Aberdeen) – m.costagomes@abdn.ac.uk
Nathan Wilcox (Chapman University) – nwilcox@chapman.edu
Neil Stewart (University of Warwick) – neil.stewart@warwick.ac.uk
Pavlo Blavatskyy – pavlo.blavatskyy[at]gmail.com
Peter Dayan (University College London) – dayan@gatsby.ucl.ac.uk
Peter Hammond (University of Warwick) – p.j.hammond@warwick.ac.uk
Peter Moffatt (University of East Anglia) – p.moffatt@uea.ac.uk
Robert Sugden (University of East Anglia) – r.sugden@uea.ac.uk
Robin Cubitt (University of Nottingham) – robin.cubitt@nottingham.ac.uk
Stefan Traub (University of Bremen) – traub@uni-bremen.de
Stephane Hess (Institute for Transport Studies, Leeds) – S.Hess@its.leeds.ac.uk
Theodore Turocy (University of East Anglia) – t.turocy@uea.ac.uk
Ulrich Schmidt (Christian Albrechts University, Kiel) – uschmidt@bwl.uni-kiel.de
Contacts
Organisers:
- Graham Loomes (g.loomes@warwick.ac.uk)
- Andrea Isoni (a.isoni@warwick.ac.uk)
For any information about the workshop, please contact Andrea Isoni.