Abstracts

Adversarial Inference in Diagnostic Testing
Professor Nozer Singpurwalla
City University of Hong Kong
Something as important as keeping the lights on has all the data and models it needs, right?
Wrong!
Dr Keith Bell
University of Strathclyde
An electric power system is widely recognised as one of the most complex single engineering entities
ever built, yet its safe and reliable performance is crucial to modern, industrialised society. Its
operation depends on adequate management of spatial and temporal dimensions in the face of
significant uncertainties. However, not only are engineers interested in the system’s performance,
but, conscious of the high costs of electrical energy, so too are policy makers, regulators and
consumers.
All the above stakeholders want reassurance about the power system’s performance both
technically and economically, now and in the future. This depends not only on large quantities of
data but also on complex models. As ever with large engineering systems, modelling everything
precisely is computationally challenging, but it is arguably pointless to try when so much data is lacking.
This presentation addresses some of the above challenges, highlights the judgments engineers and
modellers make and issues a plea for greater regulatory engagement with the issues.
Modelling Systemic Risks to Inform Repowering Decisions
Professor Lesley Walls
University of Strathclyde
Our motivation is to inform an industrial decision to replace a conventional power station in a
context where there are risks and uncertainties associated with, for example, new energy
technologies such as renewables and smart grids. A modelling approach, grounded in the theoretical
principles of decision analysis, is developed to support managers who need to define engineering
system solutions, interface with multiple stakeholders, deal with interlocking uncertainties, and
make decisions within a compressed timeframe. Using a real industrial case, we describe the general
modelling process and outcomes of analysis.
Misuse and misunderstanding of quantified risk models
Dr David J.K. Griffin
RSSB
Safety decisions are increasingly being made with the help of quantified risk models, considering
options that balance the competing objectives of reducing safety risk, improving train performance
and reducing cost.
These decision support models are developed for repeated and regular use, typically by a range of
users, so it can be difficult to ensure that the users of the model are competent. Where models are
complex, the complexity is commonly hidden from the users, and the front end presents only the
model inputs and outputs. Even the documentation for the model often stresses the difference
between the “user guide”, explaining how to use the model, and the “technical guide”, explaining
how the model actually works. Consequently, the model users, whilst understanding the mechanics
of using the model and getting results out of it, often have little understanding of how the
model calculates those results.
This paper focuses on the reasons why quantified risk models are misused and misunderstood. It
is primarily interested in situations where the results from the model are misinterpreted because the
model users do not understand the workings of the model. It examines the role that
model transparency plays in revealing the key assumptions and workings of the model to the users so that
they can correctly interpret the results.
Implicit Risk Attitudes: Comparing Experts and Lay People
Dr Calvin Burns
University of Strathclyde
The psychometric paradigm has been very useful for showing that different people perceive hazards
differently. Numerous studies have identified differences in risk perception between experts and lay
people. However, this research is limited to explicit attitude measures (questionnaires), which require
people to consciously consider and state their attitude towards attitude objects (e.g. by asking people to
think about a hazard and state how risky it is). Implicit attitude measures are being used increasingly
in social cognition research. These measures can offer new insights into risk attitude formation and
change.
Implicit measures assess attitudes that individuals may not be consciously aware that they hold
(Greenwald & Banaji, 1995) and are less susceptible to response biases like social desirability.
Implicit attitude measures very rarely correlate with explicit attitude measures (e.g. Fazio et al.,
1995; Greenwald et al., 1998) and are thought to influence spontaneous behaviours or behaviours
that individuals do not try to consciously control (Fazio, 1990). Burns (2012) used explicit and
implicit measures to investigate attitudes about occupational risk amongst construction workers.
Consistent with findings reported in the implicit attitude literature, no correlation was observed
between the explicit and implicit measures of risk. This paper extends that research by comparing
implicit attitudes about risk between experts and lay people, and examines the extent to
which expert and lay people’s risk perceptions are influenced by emotion.
Comparing data and expert judgment – reliability and cost
Dr Linda Newnes
University of Bath
Dr Newnes will present findings from a recently completed PhD by Dr Xiaoxi Huang demonstrating
the differences that can occur between expert knowledge and actual data. The purpose of the
research presented was to ascertain the cost of service for manufacturing machinery. Eight years of
data was analysed and then compared with the expert opinion elicited from maintenance engineers.
The findings demonstrated an 18% difference in the estimated cost of service when comparing
actual data to the expert opinions.
Uncertainty and pairwise comparison in early day estimation
Andrew Langridge
PRICE Systems International
When estimating complex products it is common for the cost and schedule outcome expectations to
be set before the project requirements are fully understood and the delivery options fully
explored. Expert judgement is often applied to generate early day forecasts, as the
expert can be a useful source of information and guidance; however, the organisation generating the
forecast is rarely mature enough to apply that judgement in a robust and repeatable
framework.
The outcome accuracy from this type of activity ranges from very good (+/- 50%) to very poor (not
even close), yet the expectations of non-estimators do not take into account that much of the forecast
is not evidence-based and that very little provenance exists to support the experts’ assertions. I
now need to make a confession: when estimating projects, the more interesting the task, the lower I
will tend to estimate the effort needed to complete it!
Two simple but effective additions to the expert judgement forecast cycle can result in more robust
outcomes, and these will be explored during this presentation.
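As a concrete illustration of how pairwise comparison can make early expert judgement more repeatable, the sketch below applies the geometric-mean (AHP-style) method to turn an expert’s relative effort judgements into normalised weights. The tasks, judgements and numbers are hypothetical, and this is one common pairwise technique, not necessarily the additions described in the presentation.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three tasks A, B, C.
# Entry [i, j] is the expert's judgement of how much more effort
# task i needs than task j (1 = equal, 3 = moderately more, etc.),
# with reciprocals below the diagonal.
M = np.array([
    [1.0, 3.0, 0.50],
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.00],
])

# The geometric mean of each row, normalised, gives relative effort
# weights that can anchor a forecast to a known reference task.
w = np.prod(M, axis=1) ** (1.0 / M.shape[1])
w /= w.sum()
print(dict(zip("ABC", np.round(w, 3))))  # A: 0.32, B: 0.122, C: 0.558
```

Because the matrix forces each judgement to be made several times over, inconsistencies in the expert’s reasoning become visible rather than hidden inside a single point estimate.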
Design of experiments and sample size determination with imprecise utility
Dr Malcolm Farrow
Newcastle University
We consider the choice of a design for an experiment as a decision problem, typically with a
multi-attribute utility function. Multi-attribute utilities may be imprecisely specified, due to an
unwillingness or inability on the part of the client to specify fixed trade-offs or precise marginal
utility functions or because of disagreement within a group with responsibility for the decision. In
particular this may be so when the decision is the choice of a design or sample size for an
experiment. An approach to constructing and analysing imprecise multi-attribute utility hierarchies
has been developed in earlier work with Michael Goldstein. This earlier work, which allowed
imprecision in the trade-offs between attributes, has recently been extended to allow imprecision
also in the shape of marginal utility functions. The method is illustrated with a simple example
involving life-testing and reliability.
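To illustrate why imprecise trade-offs matter for design choice (a sketch only, not the method developed with Michael Goldstein), the fragment below sweeps a cost trade-off weight over an interval rather than fixing it, and records which hypothetical design each weight would prefer. The designs, utilities and interval are illustrative assumptions.

```python
import numpy as np

# Hypothetical designs (sample sizes for a life test), each scored on
# two marginal utilities scaled to [0, 1]: cost and reliability information.
designs = {
    "n=10": (0.9, 0.4),  # cheap, but imprecise reliability estimate
    "n=30": (0.6, 0.7),
    "n=50": (0.3, 0.9),  # expensive, but precise
}

# Instead of a fixed trade-off, sweep the cost weight k over an
# imprecise interval [0.3, 0.6] and see which design each k prefers.
for k in np.linspace(0.3, 0.6, 4):
    scores = {d: k * uc + (1 - k) * ur for d, (uc, ur) in designs.items()}
    best = max(scores, key=scores.get)
    print(f"k={k:.2f}: preferred design {best}")
```

In this toy example no single design is preferred across the whole interval, which is exactly the situation an imprecise-utility analysis is designed to expose rather than paper over with an arbitrary fixed trade-off.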
Reliability Updating in Linear Opinion Pooling for Multiple Decision Makers
Donnacha Bolger
Trinity College, Dublin
Having accurate sources of information is a vital prerequisite for good decision making. Here we
consider a multiple participant setting, where all decision makers have a collection of neighbours
with whom they share their beliefs about some common uncertain event of interest. When
determining which course of action to follow, a decision maker takes into account all the
information received from her neighbours. Over time, in light of the returns observed from choices
made, decision makers update their own beliefs over the uncertain event, and also adjust the degree
of consideration given to the opinions of each neighbour based on the level of reliability their
information is ascertained to have. In this paper we develop three methodologies which enable
participants to combine their own beliefs and the beliefs of neighbours into a single distribution, and
then construct suitable weights for their perception of the accuracy of the viewpoints involved. An
extended example demonstrates these approaches, and highlights their differences and similarities.
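As a minimal sketch of the underlying mechanics, the fragment below forms a linear opinion pool, i.e. a weighted average of neighbours’ distributions, and then reweights each neighbour by how much probability she assigned to the observed outcome. The beliefs and the particular reweighting rule are illustrative assumptions, not the paper’s three methodologies.

```python
import numpy as np

def linear_pool(beliefs, weights):
    """Weighted average of neighbours' probability distributions."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ np.asarray(beliefs, dtype=float)

# Hypothetical: three neighbours' beliefs over a binary event.
beliefs = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]]
weights = np.ones(3) / 3               # start from equal reliability

print(np.round(linear_pool(beliefs, weights), 3))   # [0.567 0.433]

# After outcome 0 is observed, one simple reliability update is to
# reweight each neighbour by the probability she assigned to what
# actually happened (an illustrative rule, not the paper's own).
observed = 0
weights *= np.array([b[observed] for b in beliefs])
weights /= weights.sum()
print(np.round(weights, 3))                         # [0.471 0.353 0.176]
print(np.round(linear_pool(beliefs, weights), 3))   # [0.641 0.359]
```

Note how the pooled probability of the observed outcome rises once the more accurate neighbours gain weight, which is the qualitative behaviour any reliability-updating scheme of this kind should exhibit.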