The Psychology of Safety 101
Consider a few proposed definitions to start. Risk may be defined as the probability that
loss or harm will occur. Correspondingly, an action or behaviour may be considered "safe" if the
risk that a loss will occur is acceptably low (Martin and Schinzinger). This latter
definition should include the qualification that we have sufficient knowledge of the risks
to make a reasonable assessment, and that we are competent to do so. Finally, the
severity, or consequence, of a loss can be considered the product of the probability that it will
occur and the magnitude of the loss.
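This product can be sketched numerically; a minimal illustration (the function name and the dollar figures below are hypothetical, not from the source):

```python
def severity(probability, magnitude):
    """Severity of a loss: probability of occurrence times magnitude of the loss."""
    return probability * magnitude

# Two hypothetical risks: a frequent minor loss vs. a rare major one
minor = severity(0.10, 1_000)     # 10% chance of losing $1,000
major = severity(0.001, 100_000)  # 0.1% chance of losing $100,000
# Both work out to roughly the same severity (~100), even though,
# as discussed below, people perceive them very differently.
```

The point of the sketch is that two risks with equal severity on this definition can differ enormously in probability and magnitude, which is exactly where the perceptual factors discussed later come into play.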
Our experiences, either direct or indirect, help establish the paradigms that influence our
actions and our attitudes about risk and safety. This brief introduction is intended to help
us understand how we, as humans, view risk and how we build the paradigms that guide our
behaviour to keep us safe and help us survive. It will look at one view of how we learn
and attempt to relate that to our perceptions of risk, our attitudes about safety, and how we
establish "acceptably safe" practices (Lynch).
Basically, all the knowledge we have gathered as a species during our evolution is based
on experience, either direct or indirect. In our early stages, as individuals and, I suspect,
as a species, most of our learning results from direct experience. This is because
when we are in the early stages of development, and often very vulnerable, our egocentric behaviour ensures that we make it a priority to acquire the resources we need
to survive, and to avoid things that threaten our survival. At this stage our world
is “small” and most of what is important to us is close at hand. As we develop more
capability to meet our basic needs, we start to benefit from indirect experiences.
Probably our first source of indirect experience is being taught by others, our parents for
instance, appropriate behaviours to meet our needs and avoid harm. Our next is likely
observation; observing others’ behaviour and incorporating that into our own bank of
experiences which we will use to develop our own rules to help us thrive and survive.
The next level of indirect experience is just an extension of observation: investigation.
At this level, we may deliberately seek out others’ experiences and learning that has been
collected and recorded, in books or other forms of media, or experiment ourselves. In the
end, we will base our behaviour, our “rules”, on a composite sum of all these experiences.
In this discussion, I will focus on how this process of learning and development shapes
our attitudes and behaviour towards safety.
Why do we take risks at all? The answer is rooted in our instinctive behaviour; the rules
which have helped us survive and evolve as a species. Fundamentally, our survival
depends on two key elements: acquiring sufficient resources (food, shelter, fuel…) and
avoiding loss (either resources or personal injury, including loss of life). For most of our
existence as a species, decisions, especially those that had to be made quickly, were often
made instinctively. These decisions could often mean trading off risk of loss against
expenditure of resources. As experience accumulated, decisions undoubtedly improved.
However, decisions still had to be made, regardless of relevant previous experience. This
leads to another relevant observation that is still evident in our behaviour.
“People will often make decisions on the level of personal risk they are willing to take
given the information they have available, even if it is inaccurate or inadequate." (Lynch)
Let us examine some factors that affect perceptions about safety and risk.
©Denard Lynch, P. Eng.
Page 1 of 5
January, 2014
Martin and Schinzinger describe several factors that influence how accurately people
assess risk, or at least how sensitive they are to the loss:
• Voluntariness: people will accept a higher level of risk if they feel they have done
so by their own choice rather than having the decision imposed on them. This may be
a result of the opportunity to validate the choice against their own bank of
experiences rather than being subjected to someone else's decision.
• Magnitude: people are generally more sensitive to catastrophic losses than to smaller
losses, even though the severity may be the same. For instance, the loss of one
life is generally less disconcerting than the loss of 100 lives, given an equivalent
relationship to the victims.
• Proximity: people are more sensitive to losses that affect them directly than to those
for which there is little personal consequence. This probably follows from our
egocentric beginnings, where losses "close to home" were more likely to affect
our survival.
Fleddermann(64) adds some additional considerations:
• Recoverable: people are more likely to accept risks where they feel they can
recover from the loss should it occur. This attitude is in concert with the
definition of severity offered above.
• Low probability: people will generally ignore risks if the perceived probability of
loss is extremely low. This may be a subconscious acknowledgement that all
activity involves some risk, and that at some level a risk is not worth the effort to
avoid or even consider.
• Timing: people will discount a risk if the consequences are delayed in time.
Immediate consequences seem riskier, perhaps because people perceive less
opportunity for the consequences to be reduced or removed before they actually
occur. There is also a general perception that distant future events are predicted with less
certainty than near-term events.
• Threshold: for situations with progressive risk, people will tolerate risks that only
have significant consequences at higher levels of exposure (e.g., radiation
exposure).
Finally, one that often pertains to our attitude about safe practices (Dekker):
• "Experientially-limited": people will generally discount the risk of an activity
with which they have had some experience, but which has not yet resulted in any loss
for them. In other words, "people will discount the risk of something which
hasn't bitten them yet." This almost directly reflects the result of our experiential
learning. Until we have experienced a loss from some behaviour or activity, we
may be ignorant of the true risk involved, despite knowing that others
may have suffered such a loss. It may also reflect the weighting we put on
experience: personal experience carries the most weight, and second-hand
experience from close friends or relatives (~proximity) probably carries more
weight or influence than that of strangers. This, I believe, is a fundamental
contributor to unsafe practices, or to why people disregard generally accepted safe
practices.
One final contributor to our inaccuracies or misconceptions, related to the
"experientially-limited" factor described above:
• "Statistical illiteracy": people generally do not have an adequate understanding
of statistical behaviour, or sufficient statistical evidence, to properly assess risks.
We will often assess the risk of a certain behaviour or activity as acceptably low
based on the fact that our experience with it has been satisfactory thus far (i.e., it
hasn't bitten us yet), without understanding the statistical implication: the number
of "trials" on which we are basing that assessment gives only a very low level of
confidence that the assessment is suitably accurate.
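This statistical point can be made concrete with the "rule of three" from binomial statistics: if an activity has produced no loss in n independent trials, the 95% upper confidence bound on the per-trial loss probability is 1 − 0.05^(1/n), roughly 3/n. A minimal sketch (the function name is illustrative, not from the source):

```python
def upper_bound_95(n_trials):
    """95% upper confidence bound on the per-trial loss probability,
    given n_trials with no loss observed (exact binomial bound)."""
    return 1.0 - 0.05 ** (1.0 / n_trials)

# "Hasn't bitten me yet" after 20 loss-free tries still leaves
# a per-trial risk of up to ~14% consistent with that experience.
for n in (20, 100, 1000):
    print(n, round(upper_bound_95(n), 4))
```

In other words, a modest run of loss-free personal experience rules out surprisingly little: only after on the order of a thousand trials does "nothing has happened yet" statistically support a risk estimate below a fraction of a percent.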
Statistical literacy is critical to understanding what constitutes an acceptably safe practice.
Given our experience-based learning habits and general statistical illiteracy, it is a
challenge to make adequate assessments of risk without making a deliberate attempt to
overcome these influences. Our responsibility for safety as engineers requires that we
have at least a basic appreciation for the interaction of these factors and consider them in
order to provide a safe workplace or a safe product or service for the public.
There is one more important category of influences to consider to add to our
understanding of how human nature affects safety decisions.
Dekker(25) describes the fatal crash of Alaska Airlines Flight 261 in 2000, in which 88 people
lost their lives due, at least on the surface, to the failure of a jackscrew used to trim the
horizontal stabilizer. When this aircraft type was introduced in the mid 60s, the recommended
maintenance interval for this item was every 300 flight hours. It was estimated that the
failed component recovered from this crash had not been serviced for over 5000 hours!
The tragic tale of how a piece of equipment so critical to safe flight could go from a
maintenance interval of 300 hours to over 5000 hours exemplifies why we need to
understand human psychology at least well enough to avoid processes and conditions that can
potentially, and severely, compromise safety.
Dekker describes the process of moving from a safe practice to an unsafe practice (i.e., an
unacceptable level of risk) as a "drift into failure". He identifies key characteristics of
this type of "drift" environment; the two most relevant to this discussion are
summarized below:
• Pressure: uncertainty and competition in the marketplace (or the work environment in
general) create a chronic pressure to trade off resource and cost savings against
safety. This leads to a progressive erosion of the "safety margin", until the
risk of harm or loss rises above the originally acceptable limit or target.
• Decrementalism: the negative effect of small, incremental reductions in the
safety margin, eventually leading to an unacceptable, or perhaps unknown, level
of risk. These incremental changes are often due to "pressure" as described above,
or to our psychological tendency to discount the probability of failure for
things that "haven't bitten us yet"! In combination, the process often consists of:
i) a reduction or relaxing of routines or practices designed to maintain a safe
margin (pressure); ii) a period of operation with no failures (experience-limited);
iii) an "empirical validation" of the "new" practice in the operator's eyes, so that it
becomes the new standard operating practice (statistical illiteracy); and finally iv)
a repetition of this cycle until a disaster occurs!
Dekker also lists dependence on initial conditions, unruly technology, and contributions
of the protective structure. While these are valid contributors to a "drift" environment, they are not
as obviously related to the psychological aspects of safety examined here.
In summary, the way we, as humans, acquire knowledge and build and use paradigms
affects the way we perceive and assess risks. As engineers responsible for public and
workplace safety, it is helpful to understand this process and how it influences decisions
on both establishing and following safe practices. Our “experientially-limited” and
“statistically-illiterate” nature leads us to irrational decisions about safety unless we
consciously use our intellect to modify our instinctive biases.
Works Cited

Dekker, Sidney. Drift into Failure: From Hunting Broken Components to Understanding
Complex Systems. Surrey: Ashgate, 2011.

Fleddermann, Charles B. Engineering Ethics, 2nd Edition. Upper Saddle River: Prentice
Hall, 2004.

Harris, Charles E. Jr., Michael S. Pritchard and Michael J. Rabins. Engineering Ethics:
Concepts and Cases, 3rd Edition. Thompson Wadsworth, 2005.

Legislature, Saskatchewan. "Engineering and Geoscience Professions Act." Chapter E9.3
of the Statutes of Saskatchewan. Regina: Queen's Printer Saskatchewan, 1996.

Lynch, Denard. Supplementary Class Notes for GE 449.3 "Engineering in Society".
Saskatoon, 2012.

Martin, Mike W. and Roland Schinzinger. Ethics in Engineering, Third Edition. McGraw
Hill, 1996.