An Info-gap Approach to Modelling Risk and Uncertainty in Bio-surveillance having Imperfect Detection rates

Prof. David R. Fox
Acknowledgements:
• Prof. Yakov Ben-Haim (Technion, Israel)
• Prof. Colin Thompson (University of Melbourne)
Risk versus Uncertainty
Risk
1. risk = hazard × exposure, or risk = likelihood × consequence
2. Duckworth (1998): risk
• is a qualitative term
• cannot be measured
• is not synonymous with probability
• "to 'take a risk' is to allow or cause exposure to the danger"
3. Risk is the chance, within a specified time frame, of an adverse event with specific (negative) consequences.
Risk versus Uncertainty
The AS4360:1999 Risk Matrix
LIKELIHOOD \ CONSEQUENCE | Insignificant | Minor | Moderate | Major | Catastrophic
Almost Certain           |       H       |   H   |    E     |   E   |      E
Likely                   |       M       |   H   |    H     |   E   |      E
Possible                 |       L       |   M   |    H     |   E   |      E
Unlikely                 |       L       |   L   |    M     |   H   |      E
Rare                     |       L       |   L   |    M     |   H   |      H
(L = low, M = moderate, H = high, E = extreme risk)
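A qualitative matrix like this is naturally encoded as a lookup table. A minimal sketch in Python (the function name and structure are illustrative, not part of AS4360 itself):

```python
# The AS4360:1999 qualitative risk matrix as a lookup table.
# Ratings: L = low, M = moderate, H = high, E = extreme.
CONSEQUENCES = ["Insignificant", "Minor", "Moderate", "Major", "Catastrophic"]
MATRIX = {
    "Almost Certain": ["H", "H", "E", "E", "E"],
    "Likely":         ["M", "H", "H", "E", "E"],
    "Possible":       ["L", "M", "H", "E", "E"],
    "Unlikely":       ["L", "L", "M", "H", "E"],
    "Rare":           ["L", "L", "M", "H", "H"],
}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Look up the qualitative risk rating for a likelihood/consequence pair."""
    return MATRIX[likelihood][CONSEQUENCES.index(consequence)]

print(risk_rating("Possible", "Major"))   # E
```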
Risk
• Development and adoption of a 'standard' risk metric seems a long way off (never?);
• Bayesian methods are becoming increasingly popular, although acceptance may be hampered by biases and lack of understanding;
• More attention needs to be given to appropriate statistical modelling, in particular:
  - model choice
  - parameter estimation
  - distributional assumptions
  - 'outlier' detection and treatment
  - robust alternatives (GLMs, GAMs, smoothers, etc.)
Uncertainty
• Severe uncertainty → almost no knowledge about
likelihood
• Arises from:
- Ignorance
- Incomplete understanding
- Changing conditions
- Surprises
• Is ignorance probabilistic?
Ignorance is not probabilistic – it is an info-gap
Shackle-Popper Indeterminism
Intelligence
• What people know, influences how they behave
Discovery
• What will be discovered tomorrow cannot be known
today
Indeterminism
• Tomorrow’s behaviour cannot be modelled
completely today
Knightian Uncertainty
Frank Knight
• Nov 7 1885 – Apr 15 1972
• Economist
• Author (Risk, Uncertainty and Profit)
Knightian Uncertainty
• Distinguishes uncertainty from risk: uncertainty involves unknown outcomes whose probability distributions are themselves unknown
• Different from risk, where the pdf of the random outcome is known
Dealing with Uncertainties
Strategies
• Worst-case
• Max-Min (utility)
• Min-Max (loss)
• Maximize expected utility
• Pareto optimization
• “Expert” opinion
• Bayesian approaches
• Info-Gap
Info-Gap Theory (Ben-Haim 2006)
• Is a quantitative, non-probabilistic approach to modelling
true Knightian uncertainty;
• Seeks to optimize robustness / immunity to failure or
opportunity of windfall;
• Contrasts with classical decision theory which typically
seeks to maximize expected utility;
An info-gap is the difference between
what is known and what needs to be
known in order to make a reliable and
responsible decision.
Components of an Info-Gap Model
1. Uncertainty model
• Consists of nominal values of the unknowns and a horizon of uncertainty $\alpha \ge 0$
2. Performance requirement
• Inequalities expressed in terms of the unknowns
3. Robustness criterion
• The largest $\alpha$ for which the performance requirements in (2) are met for all realisations of the unknowns in the uncertainty model (1)
• 'Unknowns' can be probabilities of an adverse outcome
Robustness and Opportuneness
Uncertainty may be pernicious (working against us) or propitious (working in our favour).
Robustness and Opportuneness
Robustness (immunity to failure)
is the greatest horizon of uncertainty at which failure
cannot occur
Opportuneness (immunity to windfall gain)
is the least level of uncertainty which guarantees
sweeping success
Note: robustness/opportuneness requires optimisation but not of the
performance criterion.
Robust satisficing vs direct optimization
Alternatives to optimization:
• Pareto improvement – an alternative ‘solution’ which
leaves one individual better off without making anyone
else worse off.
• Pareto optimal – when no further Pareto improvements
can be made
• Principle of good enough – where a quick and simple solution is preferred to an elaborate one
• Satisficing (Herbert Simon, 1955) – to achieve some
minimum level of performance without necessarily
optimizing it.
Robust satisficing
Decision q is preferred over q if robustness of q is > robustness of q
at the same level of reward; i.e
q > q



if  q, rc   q, rc
where rc is reward required.

Robust satisficing
Thus, if $\mathcal{Q}$ is the set of all feasible decision vectors $q$, a robust-satisficing decision is one which maximizes robustness on $\mathcal{Q}$ and satisfices performance at $r_c$; i.e.

$\hat{q}_c(r_c) = \arg\max_{q \in \mathcal{Q}} \hat{\alpha}(q, r_c)$

Note: $\hat{q}_c(r_c)$ usually (although not necessarily) depends on $r_c$.
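Over a finite set of candidate decisions, robust satisficing reduces to an argmax over the robustness function. A toy sketch (the decision set and the robustness function below are illustrative stand-ins, not from the talk):

```python
def robust_satisfice(Q, alpha_hat, rc):
    """Return the decision in Q with the greatest robustness at required reward rc."""
    return max(Q, key=lambda q: alpha_hat(q, rc))

# Toy example: decisions are inspection fractions, and robustness is assumed
# to grow with the fraction inspected and shrink with the reward demanded.
Q = [0.4, 0.6, 0.8]
alpha_hat = lambda q, rc: q * (1 - rc)

print(robust_satisfice(Q, alpha_hat, rc=0.5))   # 0.8
```

Note that the chosen decision generally changes as the required reward rc changes, mirroring the dependence of $\hat{q}_c(r_c)$ on $r_c$.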
Fractional Error Models
• The best estimate of the uncertain function $u(x)$ is $\tilde{u}(x)$, although the fractional error of this estimate is unknown.
• The unbounded family of nested sets of functions is a fractional-error info-gap model:

$U(\alpha, \tilde{u}) = \left\{ u(x) : \left| u(x) - \tilde{u}(x) \right| \le \alpha \left| \tilde{u}(x) \right| \right\}, \quad \alpha \ge 0$
IG Models : Basic Axioms
All IG models obey 2 basic axioms:
1. Nesting: $U(\alpha, \tilde{u})$ is nested, i.e. $\alpha < \alpha' \Rightarrow U(\alpha, \tilde{u}) \subseteq U(\alpha', \tilde{u})$
2. Contraction: $U(0, \tilde{u})$ is a singleton set containing its centre point, $U(0, \tilde{u}) = \{\tilde{u}\}$
i.e. when the horizon of uncertainty is zero, the estimate $\tilde{u}$ is correct.
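Both axioms are easy to check numerically for a scalar fractional-error model. A minimal sketch (the grid and the nominal value 0.7 are illustrative assumptions):

```python
def U(alpha, u_tilde, grid):
    """Fractional-error info-gap set restricted to a finite grid:
    all u with |u - u_tilde| <= alpha * |u_tilde|."""
    return {u for u in grid if abs(u - u_tilde) <= alpha * abs(u_tilde)}

u_tilde = 0.7
grid = [i / 100 for i in range(101)]

# Nesting: alpha < alpha'  implies  U(alpha) is a subset of U(alpha')
assert U(0.1, u_tilde, grid) <= U(0.2, u_tilde, grid)

# Contraction: at zero horizon of uncertainty the set is the singleton {u_tilde}
assert U(0.0, u_tilde, grid) == {u_tilde}
```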
An IG application to bio-surveillance
• Thompson (unpublished) examined the general sampling problem associated with inspecting a random sample of n items (containers, flights, people, etc.) from a finite population of N using an info-gap approach.
• The info-gap formulation of the problem permitted the identification of a sample size n such that the probability of an adverse outcome did not exceed a nominal threshold, when severe uncertainty about this probability existed.
• Implicit in this formulation was the assumption that the detection probability (i.e. the probability of detecting a weapon, adverse event, anomalous behaviour, etc., once the relevant item/event/behaviour had been inspected) was unity.
Surveillance with Imperfect Detection
I – the event that an object is inspected;
W – the event that an object is a security threat (e.g. the object is a weapon, the person is a terrorist, the behaviour is indicative of malicious intent);
D – the event that the security breach is identified / detected.
Furthermore, we assume that only inspected objects are classified as belonging to either $D$ or $\bar{D}$. We thus have $I = D \cup \bar{D}$ and hence $P(I) = P(D) + P(\bar{D})$.
Surveillance with Imperfect Detection
Arguably, the more important probability is $P(W \mid \bar{D})$ and not $P(W)$.
Define:
detection efficiency $\phi = P(D \mid W)$
$\theta = P(W)$
$\psi = \dfrac{n}{N} = P(I)$
Surveillance with Imperfect Detection
Can show (see paper) that:

$P(W \mid \bar{D}) = \dfrac{\theta(1 - \psi\phi)}{1 - \theta\psi\phi} \equiv p(\psi, \theta, \phi)$

For 100% inspections ($\psi = 1$):

$P(W \mid \bar{D}) = \dfrac{\theta(1 - \phi)}{1 - \theta\phi} \equiv p(1, \theta, \phi)$

Furthermore:

$p(\psi, \theta, \phi) \ge p(1, \theta, \phi) \quad \forall \; 0 < \psi \le 1$
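The residual-threat probability $p(\psi,\theta,\phi) = \theta(1-\psi\phi)/(1-\theta\psi\phi)$ and the inequality above can be checked directly. A minimal sketch (the closed form is as reconstructed from the slide; the nominal values $\theta = 0.05$, $\phi = 0.7$ are the example values used later in the talk):

```python
def p(psi, theta, phi):
    """P(W | no detection) given inspection fraction psi, threat prevalence
    theta, and detection efficiency phi (as reconstructed from the slide)."""
    return theta * (1 - psi * phi) / (1 - theta * psi * phi)

theta, phi = 0.05, 0.7
full = p(1.0, theta, phi)   # residual threat under 100% inspection

# Partial inspection never yields a smaller residual threat than full inspection:
for psi in (0.1, 0.25, 0.5, 0.75, 1.0):
    assert p(psi, theta, phi) >= full
```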
Surveillance with Imperfect Detection
Performance criterion: sampling inspection should achieve at least a fraction $\pi_d$ of the full-inspection performance, where

$\pi(\psi, \theta, \phi) = \dfrac{p(1, \theta, \phi)}{p(\psi, \theta, \phi)}$

i.e.

$\pi(\psi, \theta, \phi) = \dfrac{(1 - \phi)(1 - \theta\psi\phi)}{(1 - \psi\phi)(1 - \theta\phi)}$
Surveillance with Imperfect Detection
Fractional error model:

$U(\alpha, \tilde{\theta}, \tilde{\phi}) = \Big\{ (\theta, \phi) : \max\big[0, (1-\alpha)\tilde{\theta}\big] \le \theta \le \min\big[1, (1+\alpha)\tilde{\theta}\big], \; \max\big[0, (1-\alpha)\tilde{\phi}\big] \le \phi \le \min\big[1, (1+\alpha)\tilde{\phi}\big] \Big\}, \quad \alpha \ge 0$

Robustness function:

$\hat{\alpha}(\psi, \pi_d) = \max\Big\{ \alpha : \min_{(\theta, \phi) \in U(\alpha, \tilde{\theta}, \tilde{\phi})} \pi(\psi, \theta, \phi) \ge \pi_d \Big\}$
Surveillance with Imperfect Detection
Example
• Dept. of Homeland Security intelligence → attack on aircraft imminent
• Nature / mode of attack unknown
• All estimates (detection prob., prob. of attack etc.) subject to extreme
uncertainty.
 
Let $\tilde{u} = (\tilde{\phi}, \tilde{\theta})$ with $\tilde{\phi} = 0.7$ and $\tilde{\theta} = 0.05$.
Surveillance with Imperfect Detection
[Figure: performance $\pi_d$ versus robustness $\hat{\alpha}$ (both 0–1) for inspection fractions $\psi$ = 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.975.]
Surveillance with Imperfect Detection
Comparison with a Bayesian Approach
Assume
 ~ beta(0.98,97.0225) and  ~ beta(14, 6) [0.5,1.0]
[Figure: prior probability density functions for $\theta$ and $\phi$.]
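Under these priors, the Bayesian analogue can be approximated by Monte Carlo: draw $(\theta, \phi)$, compute the performance ratio $p(1,\theta,\phi)/p(\psi,\theta,\phi)$, and estimate the probability that a demand $\pi_d$ is met. A sketch (the sample size and the $\psi$, $\pi_d$ values in the test are illustrative):

```python
import random

random.seed(42)

def p(psi, theta, phi):
    """Residual threat probability, as in the info-gap analysis."""
    return theta * (1 - psi * phi) / (1 - theta * psi * phi)

def draw():
    """One draw from the assumed priors: theta ~ beta(0.98, 97.0225),
    phi ~ beta(14, 6) rescaled to [0.5, 1.0]."""
    theta = random.betavariate(0.98, 97.0225)
    phi = 0.5 + 0.5 * random.betavariate(14, 6)
    return theta, phi

samples = [draw() for _ in range(20_000)]

def prob_performance(psi, pi_d):
    """Posterior probability that inspecting a fraction psi achieves at least
    a fraction pi_d of full-inspection performance."""
    hits = sum(p(1.0, t, f) / p(psi, t, f) >= pi_d for t, f in samples)
    return hits / len(samples)

print(prob_performance(0.9, 0.8))
```

Reusing one set of samples across demands keeps the estimated curves monotone in $\pi_d$, which is what the plotted comparison relies on.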
Surveillance with Imperfect Detection
[Figure: probability versus $\psi$ (0.60–1.00), with curves labelled 0.783, 0.827, 0.86, 0.88 and 0.91.]