Combining Expert Judgement:
A Review for Decision Makers
Simon French
simon.french@mbs.ac.uk
Valencia 2:
Group Consensus Probability Distributions
[Diagram: three contexts for combining judgements]
• The Expert Problem: experts advising a single decision maker
• The Group Decision Problem: a group of decision makers
• The Text-Book Problem: a group of experts reporting on issues and as-yet undefined decisions
Different contexts → different assumptions are appropriate
• Expert Problem
  – expert judgements are data to the DM
  – OK to calibrate judgements
  – no assumption of equality
  – many-to-one communication
• Group Decision Problem
  – two-step process: learn then vote
  – learn from each other → mutual communication
  – wrong to calibrate at the decision?
  – equal voting power?
• Text-Book Problem
  – need to think of later, unspecified decisions
  – need to communicate to unspecified audiences
How do you question experts?
If the non-swimmer averages advice on depths … he drowns!
If he were to ask the question, ‘will I drown if I wade across?’ he
would get a unanimous answer: yes!
Approaches to the expert problem (1)
Bayesian
– Expert judgement is data, x
– The difficulty lies in defining the likelihood

  p(θ | x) ∝ p(x | θ) × p(θ)
  posterior probability ∝ likelihood × prior probability

– p(θ) is the DM's prior for the quantities of interest in the real problem
– p(x | θ) is the DM's probability for the experts' judgements given the actual quantity of interest
  • correlations? elicitation errors? calibration?
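To make the Bayesian reading concrete, here is a minimal numerical sketch under the simplest possible assumptions: each expert reports a point estimate of the unknown θ, the DM judges the reports to be independent and unbiased with known standard deviations, and the DM holds a normal prior. All numbers are invented; correlations, elicitation errors and calibration are exactly what this toy model leaves out.

```python
import numpy as np

# DM's prior for the quantity of interest theta: Normal(m0, s0^2).
m0, s0 = 10.0, 5.0

# Experts' point estimates, treated as data, with the DM's judged
# standard deviation for each expert -- a crude stand-in for the
# likelihood p(x | theta).  Independence and zero bias are assumed.
x = np.array([12.0, 15.0, 11.0])
s = np.array([2.0, 4.0, 3.0])

# Conjugate normal update: precisions add; the posterior mean is the
# precision-weighted average of the prior mean and the experts' reports.
post_prec = 1 / s0**2 + np.sum(1 / s**2)
post_var = 1 / post_prec
post_mean = post_var * (m0 / s0**2 + np.sum(x / s**2))

print(f"posterior for theta: mean {post_mean:.2f}, sd {post_var**0.5:.2f}")
```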
Approaches to the expert problem (2)
Opinion Pools
– Expert judgements are taken as probabilities
– Essentially a weighted mean
  • arithmetic, geometric, … (a pooling sketch follows this slide)
– Weights defined from
  • the DM's judgement
  • equal weights (Laplace, equal pay)
  • social networks
– Cooke's Classical method
  • weights defined from calibration data
    – are there better scoring rules?
  • many applications
  • database of 45 studies
  • computationally easy
  • appears to discard poor assessors but actually finds a spanning set
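A minimal sketch of the two standard pools, assuming a discrete set of outcomes and DM-chosen weights; the probabilities and weights below are invented, and Cooke's calibration-based weighting is not implemented here.

```python
import numpy as np

# Each row is one expert's probability distribution over the same outcomes.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.7, 0.2, 0.1],
])
w = np.array([0.5, 0.3, 0.2])   # DM-chosen weights summing to 1

# Arithmetic (linear) pool: weighted mean of the experts' probabilities.
linear_pool = w @ P

# Geometric (logarithmic) pool: weighted product, renormalised to sum to 1.
geometric_pool = np.prod(P ** w[:, None], axis=0)
geometric_pool /= geometric_pool.sum()

print("linear pool   :", np.round(linear_pool, 3))
print("geometric pool:", np.round(geometric_pool, 3))
```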
But all this is the easy bit …
[Process diagram: formulate issues and structure the problem → analysis → decide and implement, with expert advice on what might happen and expert input on models, parameters and probabilities feeding in]
• cf. discussions of EDA followed by confirmatory statistics
• How do you elicit models and probabilities?
• Plausibility bias if it is the expert's model?
Group decision problem
Many approaches:
• combine the individual pi(.) and ui(.) into a group pg(.) and ug(.), then form the group expected-utility ranking:
  (p1(.), u1(.)), (p2(.), u2(.)), … , (pn(.), un(.)) → (pg(.), ug(.)) → ∫ ug(x) pg(x) dx
• individuals rank using their own expected-utility orderings, then vote:
  ∫ u1(x) p1(x) dx, ∫ u2(x) p2(x) dx, … , ∫ un(x) pn(x) dx → vote
• altruistic Supra Decision Maker
• negotiation models:
  (p1(x*), u1(x*)), (p2(x*), u2(x*)), … , (pn(x*), un(x*))
• a social process which translates individual decisions into an implemented action

Arrow's Theorem and similar results …
• Paradox and impossibility theorems abound in group decision-making theory, whichever of these approaches is taken (a toy illustration follows this slide).

Decision conferences
• built around decision or negotiation 'reference' models
• decision analysis is as much about communication as about supporting decision making
• might vote, or might leave the actual decision to unspoken political/social processes
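Why the caveat about paradoxes matters: in this toy sketch (expected utilities invented purely for illustration) three DMs each rank three actions by their own expected utilities and the group then votes pairwise by majority. The result is a Condorcet cycle, so no action beats every other.

```python
import numpy as np
from itertools import combinations

# Rows: decision makers; columns: actions a, b, c.
# Each entry is that DM's own expected utility for the action (invented).
eu = np.array([
    [3.0, 2.0, 1.0],   # DM1: a > b > c
    [1.0, 3.0, 2.0],   # DM2: b > c > a
    [2.0, 1.0, 3.0],   # DM3: c > a > b
])
actions = ["a", "b", "c"]

# Pairwise majority vote between every pair of actions.
for i, j in combinations(range(len(actions)), 2):
    votes_i = int(np.sum(eu[:, i] > eu[:, j]))
    votes_j = int(np.sum(eu[:, j] > eu[:, i]))
    winner = actions[i] if votes_i > votes_j else actions[j]
    print(f"{actions[i]} vs {actions[j]}: {votes_i}-{votes_j}, majority prefers {winner}")

# a beats b, b beats c, yet c beats a: a Condorcet cycle, so pairwise
# majority voting yields no stable group ranking in this example.
```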
Group decision support systems
• The advent of readily available computing means that algorithmic solutions to the Group Decision Problem are attractive.
• Few software developers know any of the theory in this area, and ignorance of Arrow is rife.
The textbook problem
• How to present results to help with future, as yet unspecified, decisions?
• How does one report with that in mind?
• Public participation and the web mean that many stakeholders in these issues are seeking and using expert reports … whether or not they understand them.
Cooke’s Principles for scientific
reporting of expert judgement studies
• Empirical control: Quantitative expert assessments are
subjected to empirical quality controls (a simplified check is sketched below).
• Neutrality: The method for combining/evaluating
expert opinion should encourage experts to state
their true opinions, and must not bias results.
• Fairness: Experts are not pre-judged, prior to
processing the results of their assessments.
• Scrutability/accountability: All data, including
experts' names and assessments, and all processing
tools are open to peer review and results must be
reproducible by competent reviewers.
Few reports satisfy this: Chatham House reporting.
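As a hedged illustration of the empirical-control principle only (this is not Cooke's calibration score), one elementary check is how often the realised values of seed questions fall inside each expert's stated 90% credible interval. The expert names, intervals and realisations below are invented.

```python
import numpy as np

# Realised values of the seed questions (known answers).
realisations = np.array([4.2, 10.0, 0.7, 55.0, 3.1])

# Each expert's stated 90% interval (5th-95th percentile) per seed question.
intervals = {
    "expert_A": np.array([[3, 6], [8, 12], [0.5, 1.0], [40, 60], [2, 4]]),
    "expert_B": np.array([[5, 9], [9, 15], [0.1, 0.3], [50, 52], [1, 2]]),
}

# A well-calibrated expert's 90% intervals should contain roughly 90% of
# the realisations; large departures suggest over- or under-confidence.
for name, bounds in intervals.items():
    hits = (realisations >= bounds[:, 0]) & (realisations <= bounds[:, 1])
    print(f"{name}: hit rate {hits.mean():.0%} on {len(hits)} seed questions")
```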
The Textbook Problem relates to …
• exploring issues, formulating decision problems, developing prior distributions
• so reports should anticipate meta-analyses* and give calibration data, expert biographies, background information, etc.
• since the precise decision problem is not known at the time of the expert studies, the reports will be used to build the prior distributions, not update them (a sketch of this follows the slide)

* Need meta-analytic approaches for expert judgement:
  • little peer review
  • no publication bias
  • 'self'-promotion of reports by pressure groups
  • Cooke's principles not even considered
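A small sketch of one way a later analyst might build a prior from a published study rather than update with it: fit a distribution to the percentiles the report gives. The percentile values are invented and the normal shape is the analyst's assumption, not something the report would supply.

```python
from scipy.stats import norm

# Percentiles reported for the quantity in an (imaginary) expert study.
q05, q50, q95 = 12.0, 20.0, 28.0

# Fit a normal prior: centre on the median, match the 5%-95% spread.
# For a normal distribution, q95 - q05 = 2 * z_0.95 * sigma.
mu = q50
sigma = (q95 - q05) / (2 * norm.ppf(0.95))

print(f"prior for the new decision model: Normal({mu:.1f}, sd={sigma:.2f})")
print("implied 5th/95th percentiles:", norm.ppf([0.05, 0.95], mu, sigma).round(1))
```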
The textbook problem for public participation
• The public and stakeholders will need to develop their priors from the information available.
• But they will not always be sophisticated DMs, nor will they be supported by an analyst:
  – behavioural issues
  – probabilities versus frequencies (Gigerenzer) (a sketch follows this slide)
  – risk communication
  – celebrity
• Observables versus parametric constructs
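The Gigerenzer point in miniature: the same diagnostic question answered first with probabilities and then with natural frequencies. The prevalence, sensitivity and false-positive figures are the familiar textbook-style illustration, not data from this talk.

```python
# The same diagnostic question stated two ways (Gigerenzer-style numbers).
prevalence, sensitivity, false_positive_rate = 0.01, 0.80, 0.096

# Probability format: Bayes' theorem applied directly.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
ppv = prevalence * sensitivity / p_positive
print(f"P(condition | positive test) = {ppv:.2f}")

# Natural-frequency format: imagine 1000 people.
n = 1000
sick = round(n * prevalence)                                 # 10 people
sick_positive = round(sick * sensitivity)                    # 8 of them test positive
healthy_positive = round((n - sick) * false_positive_rate)   # 95 false positives
share = sick_positive / (sick_positive + healthy_positive)
print(f"{sick_positive} of the {sick_positive + healthy_positive} positives are "
      f"actually affected -> about {share:.0%}")
```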
Questions?