Paper reference: Vanderhaegen, F. (2015). Erik Hollnagel: Safety-I and Safety-II, the past and future of safety management.
Cognition Technology & Work 17(3), 461-464, August 2015 - DOI: 10.1007/s10111-015-0345-z
Book review by Prof. F. Vanderhaegen, University of Valenciennes, France
Book title and author: “Safety-I and Safety-II – The Past and Future of Safety Management” by Erik
Hollnagel
Dr Hollnagel's book can be summarised by a quote from a famous French author: “L’essentiel est
invisible pour les yeux” (what is essential is invisible to the eyes), Saint-Exupéry, Le Petit Prince.
Indeed, the main idea of Dr Hollnagel's book is to change the classical safety analysis process
that focuses mainly on negative causes and impacts of unwanted events, and to take into account
the success stories that tend to become invisible and insignificant, because they are considered as
normal, i.e. as planned. Even if this idea is not new, and has been the focus of several pieces of
research and PhD theses on numerous topics (e.g., safety, situation awareness, sense-making,
resilience, feedback of experience, good practices, functional analysis, auditing approaches, learning,
dissonance, barrier removal, etc.), Dr Hollnagel uses his great expertise and many practical examples
to demonstrate the value of such a distinction. He contrasts the so-called Safety-I and Safety-II
concepts. Safety-I relates to a condition where the aim is to ensure that the number of unwanted
outputs will be as low as possible, whereas Safety-II concerns the condition of being certain that the
number of successful outputs will be as high as possible. The book discusses the limits of past and
current safety analysis methods (i.e. Safety-I-based approaches) and proposes ideas for future safety
management (i.e. Safety-II-based approaches).
The book is composed of nine chapters that are organised in an original way: each chapter focuses
on specific arguments and finishes with comments from the author together with a short list of references.
Each chapter is presented below.
Chapter 1: The issues.
This chapter is a philosophical introduction. It discusses the concept of safety and in particular the
etymology of the word and the manner in which it is used. The author proposes some examples that
show that the measurement of safety with current methods is insufficient. The main problem comes
from the fact that the probability of occurrence of the same events is not assessed in the same
manner; therefore, the probabilities obtained cannot be compared. Moreover, their interpretation can
differ depending on objective or subjective demonstrations or points of view. The leitmotiv of
defining safety as a dynamic non-event leads to some confusion with other concepts such as
dependability, reliability, availability, maintainability or integrity.
Chapter 2: The pedigree.
The chapter recalls the historical evolution of safety analysis or management. It presents three ages:
the first age concerns the analysis of technical failures. The second age focuses on human factors,
considered as responsible for accident occurrence, and on the aim of reducing the contribution of
human operators. The third age is the age of safety management and relates to organisational
factors to overcome the limits of the first two ages. An interesting comparative table (i.e. Table 2.2)
shows that the characteristics of safety assessment from technical factors differ from the assessment
of the safety of human or organisational factors. This point will be the focus of the following
chapters.
Chapter 3: The current state.
The chapter points out that the definition of safety concerns mainly the assessment of the lack of
safety instead of the presence of safety. When the system usually runs successfully, it is normal that
people do not pay attention to this success. This kind of adaptive behaviour is named habituation by
the author, who explains it by distinguishing between the work that is really done (i.e. the so-called
Work-As-Done, assimilated to the activity in chapter 6) and the work that should be done (i.e. the
so-called Work-As-Imagined, assimilated to the task in chapter 6). This distinction is usually used to
explain a lack of safety. Considering humans as machines, by designing a finite number of behaviours
for them, is a mistake because of individual and collective human variability: such a design would
only hold if all components of the human-machine system behaved exactly as designed and if there
were no conflicts between criteria such as safety and, for instance, productivity, which require a
compromise to be defined. The author criticises the classical loop for designing a safe system by taking into account
what goes wrong with retrospective analysis: when malfunctions occur, the identification process of
their causes leads to eliminating them by improving barriers. He prefers to focus on what goes right.
Rather than applying the ALARP approach (i.e. As-Low-As-Reasonably-Practicable, a concept related
to the safety-I process) he suggests using the AHARP one (i.e., As-High-As-Reasonably Practicable,
concept related to the safety-II process).
Chapter 4: The myths of safety-I.
The chapter discusses the assumptions and the myths of the Safety-I concept. The relations between
the causes and the consequences of an event, the relations between its probability of occurrence
and its severity, human error seen as the main cause of accidents and the root cause principle are
strongly criticised. The theme of the previous chapter is extended as regards prospective analysis:
safety analysis focuses on the causes of what goes wrong instead of on the causes of what goes
right, because the corresponding events are considered as normal. The author argues that forwards
causality (i.e. analysis from causes to effects) differs from backwards causality (i.e. analysis from
effects to causes): the fact that a cause can produce an effect does not mean that, reasoning
backwards from that effect, the same cause will systematically be found. This depends on system
complexity and on the different possible structures of an accident occurrence. Indeed, the causes of
past accidents may not be similar to those of future accidents. The sequential accident model (i.e.
the domino model) and the epidemiological model relate respectively to the dependent or
independent successive occurrence of unwanted events, but remain applicable only to relatively
simple systems for which relations between
causes and effects are linear. The systemic model seems to be robust to such problems, but it is a
pity that it is not so clearly developed by the author (some references are given at the end of the
chapter to obtain details about this type of accident model). As a matter of fact, the risk dichotomy
related to the occurrence and the gravity of an event is another myth that has to be discarded: it
corresponds to a classical safety goal of an acceptable correlation between both parameters, namely
that the more frequent an event is, the less dangerous it has to be. Since 90% of accidents are
attributed to human error, the classical safety analysis process focuses mainly on the human contribution to what goes
wrong instead of on what goes right. The author proposes to study the variability of human behaviour
in order to understand why some behaviours that usually run well sometimes go wrong.
Moreover, the different assumptions or myths around the definitions, interpretations and
manifestations of human error weaken the interest of this concept and increase the dilemma about
the positive or negative contributions of humans to system safety. Finally, he argues that root cause
analysis approaches are not satisfactory because they suppose that any problem has a root cause
and can be explained logically, especially when the root cause relates to a human error.
Chapter 5: The deconstruction of safety-I.
The chapter summarises the previous chapter content by presenting arguments and wrong
assumptions in order to dismantle the safety-I principles and to introduce the safety-II concept.
Three main arguments are then discussed: phenomenology (i.e. the study of events that have
occurred), etiology (i.e. the causal study of these events) and ontology (i.e. the natural and main
links between events). Ontology then concerns the decomposition of a system, bimodal viewpoints
on system behaviour (e.g. right versus wrong, correct versus incorrect, etc.), and predictability. The increasing
complexity of human-machine systems makes the system non-deterministic and unpredictable, and
sometimes forces human operators to adapt tasks and to adjust performance. Therefore, Work-As-Imagined,
defined in chapter 3, is clearly different from Work-As-Done. It is also difficult to
confirm any improvement in safety when the number of wrong events is very low or constant, and a
combination of safe independent components does not necessarily produce a safe system. These
points introduce the tractability concept developed in the next chapter in order to take into account
systems that are not decomposable, bimodal, or predictable, and to address the safety-II concept in
chapter 7.
Chapter 6: The need to change.
Because systems change and become more and more complex, integrating an increasing number of
interacting components, the safety-I process is no longer suitable. The more complex a system is, the
more automated it becomes, leaving humans less and less capable of controlling the system they
have designed. As a matter of fact, systems become underspecified, i.e. intractable. New dimensions
or boundaries then have to be taken into account: the variability of the pace and duration of an
operation, the variability of the organisational level related to the life cycle of a system, and the
variability of the environment. Opposition between Work-As-Done and Work-As-Imagined remains
and success stories require variability or flexibility instead of rigidity: the less described a system is,
the more performance variability is required. Tractable versus intractable systems are introduced.
The description of a tractable system is simple, known and stable, and the system is independent and
easy to control. On the other hand, the description of an intractable system contains a lot of details, is
only partly known and is unstable, and the system can interact with other systems and is difficult to
control. It is then better to study Work-As-Done because it relates to what people really do, and this is
systematically different from Work-As-Imagined, which becomes useless for an intractable system
because such a system is by definition underspecified.
Chapter 7: The construction of safety-II.
This chapter constructs the safety-II concept and is organized similarly to chapter 5 that deconstructs
the safety-I concept. The ontology of safety-II relates to new assumptions: work cannot be specified
in detail; Work-As-Done is completely different from Work-As-Imagined; bimodality is not so well
adapted because it becomes difficult to explain or understand why components run correctly or not;
therefore, performance adjustment and performance variability are the key points. The
etiology of safety-II has to take into account linear and non-linear propagations and relations
between causes and effects. As most situations cannot be explained by causal analysis, the author
suggests focusing on emergent outputs instead of resultant outputs. Emergent outputs are not
decomposable and not predictable. They cannot be explained by the classical principles of causality
seen in chapter 5. They are unexpected and unintended combinations of performance variability,
which depend on resonance rather than on causality for identifying possible and approximate
adjustments. The author assumes that resonance relates to the goal-oriented and regular behaviour
of people facing unexpected situations. Therefore, both proactive and reactive behaviours are
required to anticipate the actions of others and to propose responses to these actions. Related to the phenomenology of
safety-II, it seems difficult to detect what goes right because this occurs regularly and because this
cannot be systematically or clearly explained. As “emergence is not something that can be seen” (see
page 134), these events can then be considered as invisible. The author recalls a concept presented
in chapter 1, which considers safety as a dynamic non-event in the safety-II approach. Such a non-event
cannot be observed or measured. Here, he makes a distinction between what goes right and
what does not go wrong, and makes a parallel between resilience engineering and safety-II
management by developing proactive safety management, which consists of making adjustments
before unwanted outputs occur. Some proactive actions may fail if the expected outputs do not
happen; however, this may be less expensive than doing nothing. At that
point, contradictions seem to occur. Indeed, the author criticises bimodality (page 127: “the
bimodality principle of Safety-I is therefore obsolete”) and predictability (page 105: “… socio-technical
systems are not decomposable, bimodal, or predictable”), but uses this bimodality principle
for identifying what goes right and this predictability principle to prevent or predict what may
happen! Safety-II may face the same problems as safety-I: the difficulty of identifying and
interpreting what goes right, the difficulty of predicting emergent events, the considerable effort
required to investigate success stories rather than failure stories because the number of success
stories is greater (see chapter 4), and the difficulty of sustaining such an effort given the new
boundaries presented in this chapter. Some of these practical questions are treated in chapter 8.
Chapter 8: The way ahead.
This chapter now presents safety-I and safety-II as complementary whereas the previous chapters
presented them as opposed. Note that this combination between safety-I and safety-II is named
safety-III in chapter 9. Safety-I concerns infrequent events, in a stable environment, and considering
an event as unique. Safety-II concerns frequent events, in an unstable environment, and considering
an event as not unique. It is a long-term investment and a variability-based approach instead of a
probability-based one. Here, methods and techniques are required to learn from both success and
failure stories. The author then lists some approaches to identify and study such stories: interview or
field observation techniques, related to the WYLFIWYF principle (What-You-Look-For-Is-What-You-Find), to
investigate how work is done. Even if some events cannot be clearly explained, these approaches
may help to understand how and why people adjust performance to the situation. Three types of
adjustment are listed: the maintenance and the creation of acceptable work situations, recovery
from unacceptable work situations and the prevention of future problems. Safety-I concentrates its
efforts on events that go wrong, so it mainly focuses on severity. Safety-II relates to events that go right and
that are more numerous, so that its main criterion of interest is the frequency of the events. Such
repeatability of the events makes it possible to compare them, in order to identify similarities
between events and to support the validity of a possible learning effect on what goes right and what
does not. The decision to invest in some equipment relates to the reduction of risks for safety-I,
whereas it focuses on the improvement of performance for safety-II. However, the demonstrations
on pages 166 and 167 are not so convincing, because the author considers that a safety investment
is wasted if no accident occurs, without taking into account that this absence may be due to the
investment itself! Similarly, it might be interesting to identify the real role of safety-I studies in
obtaining the large number of success stories, compared with the very low number of failure stories
discussed in chapters 3 and 4!
Chapter 9: Final thoughts
This short chapter serves as the conclusion of the book. The author reiterates the differences
between safety-I and safety-II. Safety-II studies what goes right, whereas safety-I only focuses on
what goes wrong. Safety-II aims at describing daily activity and its performance variability instead of
producing reports on accidents or incidents, as is the case for safety-I. It requires a safety synthesis
and safety management instead of a safety analysis and a risk assessment. Therefore, this needs the
development of adapted methods and techniques to study performance variability and adjustment
variability.