Revisiting safety and human factors paradigms to meet the safety challenges of ultra complex and safe systems

René AMALBERTI
IMASSA – Département Sciences Cognitives
BP 73, 91223 BRETIGNY-SUR-ORGE, France
Email: Ramalberti@imassa.fr
To be published as a book chapter:
Amalberti, R. (2001). Revisiting safety and human factors paradigms to meet the safety challenges of ultra complex and safe systems. In B. Wilpert & B. Fahlbruch (Eds.), Challenges and Pitfalls of Safety Interventions. Elsevier: North Holland.
Summary: Very few complex and large-scale systems have reached the remarkable safety level of 5×10⁻⁷ risk of disastrous accident per safety unit in the system (safety units could be running time, passenger-km, number of airport movements, etc.). These systems can be termed ultra-safe systems. The nuclear industry, commercial aviation, and railways in some countries are among the very few large-scale systems belonging to this category of ultra-safe systems. They show specific behaviours and raise the question of safety apogees. Are these systems close to dying as they approach their best performance? How did they become safe? Could the safety solutions that led to safety excellence now be becoming barriers to further improvement? What role have human factors played in the momentum towards safety, and what role can they play in the future? The paper is divided into three parts. Part one describes the characteristics of ultra-safe systems and their paradoxes for safety and human factors. Part two questions how a system becomes ultra-safe, and what contribution human factors have made to that safety momentum; aviation serves as the guiding example. Part three debates how to see the human factors contribution in the future.
THE CONTEXT OF ULTRA-SAFE SYSTEMS
Analysis of safety figures suggests three categories of socio-technical systems:
The first category corresponds to non-professional systems. The safety level remains below 10⁻³ (one catastrophic accident per thousand trials). Most of the situations fitting this category are related to the individual experience of risk management. Mountain climbing is one of the best examples: there is only a 30% chance of surviving three expeditions above 7,000 metres in the Himalayas (Oelz, 1999). The safety of these systems cannot go much beyond 10⁻³ because of the absence of formal regulation of risk-taking, education, and practice. Risk is managed exclusively on an individual basis.
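As a rough, back-of-the-envelope reading of the Oelz figure (my illustration, not a calculation from the source, and assuming an identical, independent risk on each expedition), the implied per-expedition risk is:

\[
p_{\text{survive one expedition}} = 0.3^{1/3} \approx 0.67,
\qquad
p_{\text{fatal per expedition}} \approx 1 - 0.67 = 0.33 \approx 3\times10^{-1},
\]

i.e. several orders of magnitude worse than the 10⁻³ boundary of this category.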
The second category corresponds to systems whose safety level is levelling off near 10⁻⁵. This is, for example, the case for charter flights, helicopter flying, the chemical industry, road traffic, emergency medicine, and anaesthesiology. All these systems are regulated. They have an executive safety management level, monitoring incidents and reacting promptly to them. Preferred solutions for improving safety rely on prescriptive changes to regulations, policies, and instructions, mostly acting at the shop floor (at the level of individuals, front-line teams, and interfaces). Safety solutions tend to be additive: more rules and procedures, more techniques, and more training. The feedback time needed to prove the effectiveness of a given safety action in this category can reach 2 years (the time needed to gather enough data to prove the benefit of the decision). For the most part, these systems cannot go much beyond the 10⁻⁵ safety level, and sometimes only 10⁻⁴, either because of poor knowledge of the process model (e.g. in the chemical industry) and/or the uneven commitment of front-line actors as safety players, often aggravated by a chronic absence of recurrent training (e.g. the case of road traffic).
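To make the notion of feedback time concrete, the sketch below estimates how much exposure must be accumulated before a drop in accident rate becomes statistically visible. This is my own crude Poisson approximation, not a method from the chapter; the baseline rate, the assumed halving of the rate, and the annual exposure figure are all hypothetical.

```python
def exposure_needed(base_rate: float, new_rate: float, z: float = 1.96) -> float:
    """Crude Poisson approximation: exposure units needed before the drop from
    base_rate to new_rate exceeds z standard deviations of the expected
    baseline event count."""
    return (z ** 2) * base_rate / (base_rate - new_rate) ** 2

# Hypothetical figures, for illustration only (not from the chapter):
base_rate = 1e-5            # accidents per safety unit (a category-two system)
new_rate = base_rate / 2    # assume the safety action halves the rate
units_per_year = 1_000_000  # assumed annual exposure (e.g. flights)

units = exposure_needed(base_rate, new_rate)
print(f"~{units:,.0f} units, i.e. ~{units / units_per_year:.1f} years of observation")
# For a fixed relative improvement the required exposure scales as 1/base_rate,
# so feedback times stretch even further once a system becomes ultra-safe.
```

With these made-up numbers the answer comes out at roughly one and a half years, of the same order as the 2-year figure quoted above.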
The third category corresponds to systems whose safety level is near or beyond 10⁻⁶. These ultra-safe systems have several major characteristics, all of which relate to paradoxes:

• Accidents are becoming extremely rare. However, the better the safety, the greater the irrational consequences of the remaining accidents and incidents: over-reaction of the commercial market (possibly leading to rapid bankruptcy), growing intolerance of the media and public opinion, and over-reaction within the company. Top management is most of the time event-driven during the crisis following an accident, cutting 'heads' rather than learning and changing long-term safety strategies and organisations. In normal times, safety improvement continues to obey an additive, prescriptive model aimed at modifying the working position. Conversely, very few safety actions take place at the organisational level, either because of the absence of acceptable markers and theories of a safe organisational culture, or because of the reluctance to change at the level of middle management. Paradoxically, in times of crisis, the prescriptive approach at the factory floor is reinforced, although it is already saturated. The only change is that the crisis opens, for a while, a window of opportunity to change the organisation. However, in the real field, most changes are immediate or short-term answers to the crisis, made even before the conclusions of the enquiry board. Therefore, very few follow-on actions take place in the long term, so that what is not changed during the crisis period remains unchanged in the future (see Hale, in press). In brief, the safer the system, the smaller the window of opportunity for change.

• Because accidents are rare, reporting systems for incidents and near-incidents are considered priorities. However, the more the system scrutinises in-service experience, the greater the ambiguity in the use of the information. The main reason is that the better the safety, the weaker the prediction of future accidents. The remaining rare accidents often result from the co-presence of multiple non-severe incidents, which encourages gathering more information on near-incidents in reporting systems. Not surprisingly, the safety departments in charge of incident analysis are overwhelmed by this new flow of information and, paradoxically, most newly reported incidents remain ignored in the incident analysis. Rob Lee recently reported a critical survey of the output of BASI, the well-known Australian aviation incident database, covering the period June 1988 to June 1998 (Lee, 2000). Over 2,300 incidents were stored and analysed during this period with the BASI method, but the surprising conclusion was that almost 50% of the incidents ultimately had no visibility, nor had their lessons been taken up in any safety bulletin. Another of Lee's conclusions was that suppressing 75% of the coding scheme in the BASI incident analysis procedure would have affected only 0.5% of the safety reports. Should this intellectual honesty in the self-criticism of incident reporting systems become a shared policy among safety managers, false beliefs and poor strategies in safety management could be significantly reduced. Another severe bias of reporting systems relates to what is reported compared with the total number of incidents. An extensive study was conducted in air traffic control using Hutchins's paradigm of information transformation through the successive supports that a report has to pass through, from the occurrence of the incident to its final storage in the database (Hutchins, 1995; Amalberti & Barriquault, 1999). The results show two embedded biases. One is the chronic under-representation of organisational causes in incident reports, contrasting with the almost systematic presence of blame on the front-line position. Interviews with investigators show that they give priority to filing causes that are unambiguously related to facts and can effectively lead to short-term changes. In that view, investigators tend to consider that organisational causes are often difficult to relate to facts, often polemical, and rarely followed by changes, and therefore give these causes low priority in the final reports. A second bias is the use of the reporting system as an intentional means of communicating problems to the management and executive level, or conversely of passing a safety message back to end users. For example, reports can be biased to emphasise the growing lack of proficient workers (these incidents are not really more numerous than they used to be, but workers decide to report them this month, trying by repetition to pass the message that something should be done now). Reports can also be biased, with the complicity of middle management, for example in order to pass the message back to senior air traffic controllers that they should not let themselves be distracted and leave novices working alone.

• Another paradox is the increase of violations in ultra-safe systems as a consequence of over-regulation. In complex systems, violations obey a bargaining process mixing violations that serve production and violations that serve individuals (Polet, Vanderhaegen, & Amalberti, submitted; Rasmussen, 1997). They progressively become part of the 'normal' way of doing the job and, moreover, are at the source of the normal and expected performance of the system. The resulting process is fragile but efficient, and it is almost impossible to attack one category of violation (for example, violations that serve individual interests) without reconsidering the others (violations that serve production). This equilibrium is often known and respected by proximity management, but ignored by top management and the safety department. One of the biggest problems with this unofficial area of 'normal work' based on violations is that the situation creates extreme confusion in safety strategies. Should in-service experience report on these violations, the solution might not be to suppress them. Nevertheless, the more normal the violations, the lower the aptitude of individuals to read the boundaries of acceptable risk-taking. Robert Helmreich (2000) has presented interesting results related to this paradox. Based on jump-seat observations of 3,500 flight segments, he showed that 72% of flights exhibited at least one error, with an average of two errors per flight. 85% of the errors were inconsequential and 15% were consequential for the flight. Violations, or non-compliance with procedures, represented 54% of the total errors. Very few of the violations were consequential. Nevertheless, crews committing at least one violation were 1.5 times more likely to commit an unintentional error, and 1.8 times more likely to commit errors with consequences for the flight (the short sketch after this list illustrates how ratios of this kind are computed). One good solution for coping with these 'normal' violations could be to clean up existing regulations to fit the 'normal' practice (Bourrier, 1999). However, this solution collides with the managerial fear of changing anything in the regulations, because of the low visibility of each individual rule's contribution to the final safety level.

• The feedback time needed to prove the effectiveness of a given safety action can reach 6 years (the time needed to gather enough data to prove the benefit of the decision). The safety officer and the executive level never see the results of their own actions; only the next manager, or the one after that, will benefit from them. If we consider, on the one hand, that the media and public opinion ask for immediate and magic results and, on the other hand, that all humans are somewhat self-interested, it is no surprise that short-term actions in ultra-safe systems more often than not remain politically driven and little grounded in science.

• Once the system is installed on its safety plateau, the situation may become so irrational that, on occasion, a given technology may even collapse totally worldwide following a single isolated accident, the so-called 'big one'. For example, the Hindenburg accident at Lakehurst, New Jersey, on May 6, 1937, sounded the death knell of airship technology. More recently, this has almost been the case for 50- to 70-seat turboprop commuter technology, which has been severely questioned since the last ATR accident. A Franco-Italian ATR twin turboprop operated by American Eagle iced up severely and crashed in the northern United States on October 31, 1994 (NTSB report PB96-910402, 1996). It was the second time an ATR had iced up and crashed (the first was in Italy in the early 1990s). (US) public opinion and the market over-reacted to the crash, considering that the alternative engine technology (turbofans) should be preferred. Within the five years following the accident, most of ATR's competitors (Bombardier, Saab, BAe, Embraer) turned their production over to the new engine technology. However, the turboprop safety figures, even with this accident, never fell below requirements (aircraft type certification does not warrant zero accidents) and, moreover, never fell below the safety figures of the alternative technology. Finally, the reason for the change is to be found primarily in the technical maturity of this alternative technology, and in the objective improvement of customer satisfaction and commercial efficiency with turbofan technology. To sum up, when a system approaches its asymptotic safety performance, changes in technology are performance- and business-driven rather than safety-driven, even though the drive towards the decision still wears the clothes of safety improvement.

• Last in the series of paradoxes associated with ultra-safe systems: most managers believe that the only way to escape the asymptotic safety level is more automation, whereas the human factors community believes that ultimate safety control requires keeping operators in the loop. Most managers in the aviation system think that human factors can impair safety if not properly implemented in the system, but do not have the potential to provide key solutions for future safety improvement.
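Figures such as '1.5 times more likely' are relative risks: the incidence of an outcome in one group of crews divided by its incidence in the other. The sketch below is my illustration of that computation; the counts are hypothetical and are not Helmreich's data.

```python
def relative_risk(exposed_events: int, exposed_total: int,
                  unexposed_events: int, unexposed_total: int) -> float:
    """Relative risk: incidence in the 'exposed' group (here, crews observed
    committing at least one violation) divided by incidence in the other crews."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical counts, for illustration only (not Helmreich's data):
rr = relative_risk(
    exposed_events=90, exposed_total=500,       # violating crews with a consequential error
    unexposed_events=100, unexposed_total=1000  # non-violating crews with a consequential error
)
print(f"Relative risk ≈ {rr:.1f}")  # 1.8 with these made-up counts
```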
HOW HUMAN FACTORS HAVE CONTRIBUTED TO MAKING CIVIL AVIATION AN ULTRA-SAFE SYSTEM
It is one thing to describe the behaviour of ultra-safe systems approaching their asymptote of safety performance. It is another matter to ask how they became ultra-safe.
It is generally advocated in the human factors community that ultimate safety is due to the controllability of the system by its operators, and to the cognitive flexibility and adaptability of those operators. Numerous human factors safety principles have been suggested to industry during the last two decades, such as Reason's Swiss cheese model (1990), Billings' human-centred design paradigm (1997), or Rochlin's high-reliability organisation paradigm (1993), to cite only a few of the large body of theoretical human factors contributions to safety.
It is therefore interesting to look at the recent history of safety improvement in professional aviation, a typical ultra-safe system, to see how influential such human factors principles have been.
The surprise is that a thorough analysis of safety initiatives in the aviation domain does not demonstrate a forceful penetration or role of most of these academic human factors principles.
The role of human factors in the real field has long been restricted to human resource management, making human factors a subsidiary concept of personnel management, little related to technical concepts. In the aviation domain, most recent human factors advances have been limited to training: CRM training became the staple of human factors at most airlines, whereas the need for human factors considerations in the design and certification of aviation equipment was far less acknowledged.
The following section gives a more global view of the safety momentum in aviation and the contribution of human factors:
1. Human factors were introduced into aviation regulation in the late 1970s. However, all of these regulations are under-specified, written in very general terms, escaping de facto any scientific approach and, to some extent, any validation (human factors are still a matter for bargaining during the certification process). Two representative examples of Joint Aviation Authorities rules (JARs) are given below:
Section 25.1309 requires that:
"(b) The airplane systems and associated components... must be designed so that... the occurrence of... failure conditions which would reduce the... ability of the crew to cope with adverse operating conditions is improbable.
(c) Warning information must be provided to alert the crew to unsafe system operating conditions, and to enable them to take appropriate corrective action. Systems, controls, and associated monitoring and warning means must be designed to minimize crew errors which could create additional hazards.
(d) ... The [compliance] analysis must consider... the crew warning cues, corrective action required, and the capability of detecting faults."
Section 25.233 states that "Landplanes must be satisfactorily controllable, without exceptional piloting skill or alertness in... landings...".
There is no Interpretative and Explanatory Material (IEM) associated with such regulations in the JARs giving definitions of the concepts (for example, what 'exceptional piloting skill' means), nor is there any Acceptable Means of Compliance (AMC) proposing a method for complying with these rules (Abbott, Slotte, & Stimson, 1996).

• The last glass cockpit generation of advanced automated aircraft (the Airbus family, and the Boeing 737-300/600, 747-400, 777 and 717) dates back to the end of the 1980s. The glass cockpit design incorporated, or created, most of the recent advances in HMI and, as such, was a fantastic arena for exercising modern human factors.

• However, these new interfaces were not designed by human factors specialists, but by engineers with no formal human factors education. Whereas basic human factors (e.g. anthropometrics and psychophysics) were taken into account, most of the glass cockpit design rests on innovative ideas and intuitive human factors. These intuitive human factors were guided by a series of basic principles, as follows:
– First, system performance enhancement, including safety, was the triggering goal of the new design. The design was engineering-driven rather than human factors-driven: as long as the performance and safety of the overall system were considered to be enhanced by engineering solutions, crew concerns (including authority and social comfort) were not the first priority.
– Negative views of front-line operators' capacities have dominated the intuitive human factors policies used in the design of the glass cockpit. For intuitive human factors, the first goal of a human factors-oriented design is to reduce the potential for crew error while conserving human intelligence for planning and system back-up. In-service experience was the tool used to capture the most unreliable human sequences.

• The world of design and certification has long been totally separated from the world of operations and training. The input of academic human factors into Cockpit Resource Management (CRM) at the end of the eighties (leadership style, communication, conflict resolution), despite being very important for improving crew training, had strictly no impact on the incorporation of human factors into the design and certification of aeroplanes.

• Following a series of human-error-induced accidents that marked the introduction of the glass cockpit generation, growing and significant support for research in advanced (cognitive and social) human factors was sustained worldwide by the authorities. The accident analyses prioritised research on cognitive ergonomics, human error, automation, human-centred design, socio-cognitive ergonomics, and system safety (see Sarter & Amalberti, 2000; Woods, Johannesen, Cook, & Sarter, 1994).

• Despite this tremendous growth of research and papers in aviation cognitive ergonomics, or maybe because of it, the aviation industry has long considered, and sometimes still does, that most human factors theorists criticise without understanding the domain and are unable to propose any viable solutions. Even during the worst period, when many accidents occurred (1992-1994), the manufacturers never considered their design to be wrong. The results accumulated since this period tend to support this view and to cast human factors people as 'whistle-blowers'. Moreover, industry tends to consider that the key to better accident prevention is to go for more automation and safety nets, rather than to return to the previous situation in which crews retained more manual control. The development of TCAS (Traffic alert and Collision Avoidance System), GPWS (Ground Proximity Warning System), enhanced GPWS, as well as the design of the future datalink, are some examples of this trend.

• The divorce between academic human factors research and industry has been so extreme that only two people with a degree in academic human factors were employed in design and certification offices across the whole French aviation industry in 1992 (compared with over ten thousand employees in this area!). The situation was only a little better in the USA and in the rest of Europe, where the ratio of human factors specialists was significantly greater, but their (low) position in the hierarchy did not necessarily allow them to influence the design much.

• The situation is now improving with the establishment of a positive dialogue and a search for the integration of academic human factors methods into design and certification. The dialogue is being developed at the highest level between industry and the authorities. However, possible advances in rule-making within the industry are often twisted by political intentions and the search for side benefits. For example, one idea supported by manufacturers would be to foster the use of new human factors methods during certification, but only provided the authorities give more responsibility to industry, change their policy, and put more emphasis on control of the process, to the detriment of control of the end product. The side advantage in that case would consist in weakening the influence of the 'certification team', made up of flight-test pilots and engineers, legally in charge of testing the system and ultimately delivering the licence to fly (the 'type certificate'). With such negotiations running, it is easy to imagine that flight-test personnel, despite their need for human factors, sometimes see them as a threat to their jobs, as well as a threat to the final quality of the certification process, and do not give the momentum their full support.
To sum up, human factors have long been pictured by the aviation industry as follows:
– Human factors are not considered the main source of the significant safety progress made in the past; technology is. In aviation, for example, significant changes in the safety curve are usually related to the advent of jet technology, simulation, radar, automation, etc., but not to training, organisation, or error management.
– This does not contradict the idea that human factors are needed to reach and maintain the potential level of performance procured by technology. Nobody in the aviation community thinks otherwise. Human factors are welcome whenever they improve the controllability of human variety, fixing behaviours and making front-line actors predictable team players in the large socio-technical system. A very basic belief has guided aviation safety strategies for years: a significant potential for performance and safety is associated with technology, and this potential is degraded by various deviations, failures, errors, violations, and non-adherence to procedures. Any solution providing managers with information on deviance, and with tools to counteract this deviance in the future, is prioritised. Reporting systems, followed by systematic counteraction to suppress the recurrence of identified incidents, are at the core of this strategy.
WHY IS THE SITUATION CHANGING?
There are three reasons why the situation is now calling for change. One is the need for continuous growth in performance in all modern technological systems, simply in order to survive commercially. This need for performance improvement puts safety at risk (smaller margins, more pressure on the system) and therefore collides with the growing intolerance of accidents. A second reason is the awareness that continuing the old safety strategies once on the safety plateau could actually worsen safety, as described in the first section of the paper. A third and last reason is that knowledge of human error and risk control has improved tremendously in the past decade, showing how increasingly inadequate the error- and incident-suppression strategy is. Let us now expand on that last point.
Human factors have long accompanied and supported the industry belief that errors, like incidents, should be suppressed in order to reach ultimate safety. It is quite amusing to see that, despite dozens of discourses on the need to leave the ultimate control of the system in the hands of front-line operators, most human factors safety analyses and works have grasped human activity through errors, then proposed solutions to reduce these errors, finally supporting the industry belief in human limitation.
Only the strategies have differed. Where human factors specialists recommended changing the design, industry gave priority to changing training and procedures, and ultimately to designing technical electronic envelopes, cocoons, and protections limiting the operator's freedom to take risks.
The result is not bad, since the safety figures are excellent. But the error/incident suppression strategy is probably too short-sighted to go beyond the present safety level. Moreover, human factors specialists have paradoxically reinforced this strategy by providing systematic insights into human limitations, even though the same people do not endorse the way industry has used this information to gradually control and limit operators' freedom.
It is advocated in the following that human factors could regain a real influence on safety in ultra-safe systems, provided they propose a significant paradigm shift on human error and risk control.
New developments in naturalistic, ecological psychology could support this paradigm shift (Klein, Orasanu, Calderwood, & Zsambok, 1993; Amalberti, 1996, 2000). Part of this work is directed towards understanding how systems and individuals manage safety. A clear distinction is introduced between three potentially conflicting levels of risk management within industry (Amalberti & Valot, 2000; Amalberti, 2000). One is the overall safety objective at the level of the company, or top management. At that level, safety aims first at surviving economically in adverse conditions (whether caused by a production flaw or by a dramatic accident); crisis management is the first priority. A second safety objective relates to the production line and consists in giving priority to quality. The product is the first focus, and the reduction of error the first priority. System risk analysis and human reliability analysis are the usual tools at this level. The third and last safety objective relates to individual safety management. At this individual level, the comprehension of the dynamic management of risk shows that individuals feel safe as long as they have the feeling that the solution planned to reach the goal is still applicable. Experimental findings (Amalberti, 1996; Plat & Amalberti, 2000; Wioland & Amalberti, 1996) show that most of the error flow is under cognitive control: detected, but often not immediately recovered, and sometimes never recovered. Metaknowledge and confidence are at the kernel of this global control level.
Each of the three levels uses the other levels to achieve its own safety goal. Each level also resists the pressure coming from the other levels, in order to optimise and keep control of its own safety logic. For example, total quality management at the product level may expect individuals to recover all of their errors immediately, but this demand is so disruptive and contradictory to natural behaviour that the final result is often only to push individual behaviour into silent and illegal practices, without real adherence to the instructions.
The resulting overall macro-system safety emerges from all these interactions and conflicts.
A significant paradigm shift could come from the confrontation of these conflicting safety goals at all levels of the hierarchy, working on the control of an acceptable dynamic compromise rather than on the dominance of one view over the others.
CONCLUSION
Ultra-safe systems call for a different kind of safety management and raise difficult human factors issues.
The system has become very good through the accumulation of experience. Since nobody really knows what is good and what is bad among the safety actions taken over the years, the logic is purely additive, and rules are never cleaned up. The system tends to be over-regulated and conservative. It makes a profit, but also tends to become less adaptive (for management). Only crises (accidents, economic changes) have the potential to break down this ageing behaviour. The system waits for its death, with the hope of continuation through a probable descendant (the next big leap in technology).
When a system becomes ultra-safe, many things change for human factors and for safety strategies.
First, day-to-day regulation becomes more and more opaque to the executive level. It is carried out unofficially by front-line actors and first-line management. This is a paradox at the very moment when the executive level, fearing the complexity and for its own position in case of problems, would precisely reduce delegation and take back control. The job gets done, but violations are more numerous. It is important to acknowledge that these violations are symptoms of adaptation, not of loss of control. The safety solution does not consist systematically in suppressing these violations, but rather in controlling them. Individually they say little about the next accident, but their combination and their pace of occurrence (significant increase or decrease) are meaningful for announcing a future loss of control. Note that cognitive and social models of field activity are required to conduct this integrative analysis of violations, but it must be said that these models are generally missing.
In the absence of such models, a careful solution would consist in creating a double feedback loop: one for the workers and one for the executive level. The workers' loop would aim at improving the self-awareness of worker groups in controlling the system, making them aware of the drift of their practices. The executive loop would aim only at moving and adapting the organisational level once the violations and deviance are considered too numerous and no longer controllable by the workers themselves.
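As a purely illustrative sketch of this double loop (my own toy monitor, not the author's method; the thresholds, counts and exposure figures are all hypothetical), a reported violation rate could be compared with its recent baseline and fed back either to the teams or, when the drift is large, to the organisational level:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    period: str          # e.g. one month of in-service experience
    violations: int      # violations reported in that period
    exposure: int        # activity volume (flights, shifts, ...)

def double_feedback(history: List[Observation],
                    drift_factor: float = 1.5,
                    escalation_factor: float = 3.0) -> str:
    """Toy double-loop monitor: compare the latest violation rate with the
    baseline of earlier periods. A moderate drift goes back to the worker
    loop (self-awareness of practices); a large drift escalates to the
    executive loop (adapt the organisation)."""
    baseline = (sum(o.violations for o in history[:-1]) /
                sum(o.exposure for o in history[:-1]))
    latest = history[-1].violations / history[-1].exposure
    if latest > escalation_factor * baseline:
        return "executive loop: deviance no longer controlled locally, adapt the organisation"
    if latest > drift_factor * baseline:
        return "worker loop: feed the trend back to the teams, practices are drifting"
    return "no action: violations within the usual envelope"

# Hypothetical figures, for illustration only:
history = [Observation("Jan", 12, 1000), Observation("Feb", 14, 1100),
           Observation("Mar", 30, 1050)]
print(double_feedback(history))  # -> worker loop with these made-up numbers
```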
A second point about ultra-safe systems is that they challenge human factors at a level never reached before. The level of requirement is totally different: methods that have proved efficient at 10⁻⁵ are no longer efficient at 10⁻⁷. An extensive revisiting of ergonomics methods (and probably of the other contributing human factors disciplines) is required to answer these new requirements of ultra-safe systems. This is one substantial future challenge for human factors, even if they only serve as the tool to reach and maintain the safety level potentially written into the technology. Needless to say, expecting a future significant safety enhancement to result from an innovative human factors approach alone remains close to a dream, despite the dozens of optimistic papers claiming the reverse.
DISCLAIMER
The ideas expressed in this paper reflect only the opinion of the author and must not be considered official views of any national or international authority or official body to which the author belongs.
REFERENCES

Abbott, K., Slotte, S., & Stimson, D. (Eds.) (1996, June). The interfaces between flightcrews and modern flight deck systems. Report of the FAA Human Factors Team. Washington, DC: FAA.
Amalberti, R., & Barriquault, C. (1999). Fondements et limites du retour d'expérience [Bases and limitations of reporting systems]. Annales des Ponts et Chaussées, 91, 9, 67-75.
Amalberti, R. (1996). La conduite de systèmes à risques [The control of systems at risk]. Paris: Presses Universitaires de France.
Amalberti, R. (2000). The paradoxes of almost totally safe transportation systems. Safety Science.
Amalberti, R., & Valot, C. (2000). Modèle de la sécurité des grands systèmes socio-techniques vue depuis l'individu [Modelling safety in large socio-technical systems as seen from the individual]. Rapport IMASSA 2000-06, Contrat CNRS-SHS / Convention 98-34.
Billings, C. E. (1997). Aviation automation: The search for a human-centered approach. Mahwah, NJ: LEA.
Boy, G. (1995). "Human-like" system certification and evaluation. In J. M. Hoc, P. C. Cacciabue, & E. Hollnagel (Eds.), Expertise and technology: Cognition and human-computer cooperation (pp. 243-254). Hillsdale, NJ: Lawrence Erlbaum Associates.
Bourrier, M. (1999). Le nucléaire à l'épreuve de l'organisation. Paris: PUF.
Duffey, R., & Saull, J. (1999). On a minimum error rate in complex technological systems. FSI-IATA Joint Conference "Enhancing Safety in the 21st Century", Rio de Janeiro, Brazil, November 8-11. CD-ROM. New York: FSI.
Helmreich, R. (2000). Culture, threat, and error: Assessing system safety. In Safety in Airlines: The Management Commitment. London, UK: Royal Aeronautical Society.
Hutchins, E. (1995). How a cockpit remembers its speed. Cognitive Science, 19, 265-288.
Klein, G., Orasanu, J., Calderwood, R., & Zsambok, C. (Eds.) (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex.
Lee, R. (2000). The BASI system. Paper presented at the EAAP Conference, Crieff, Scotland, September 4-6; to be published in M. Cook (Ed.), Proceedings of the EAAP Conference, Avebury Aviation.
Oelz, O. (1999). Risk assessment and risk management in high altitude climbing. 19th Myron Laver International Postgraduate Course on "Risk Management", Department of Anaesthesia, University of Basel, March 26-27, 1999.
Plat, M., & Amalberti, R. (2000). Experimental crew training to surprises. In N. Sarter & R. Amalberti (Eds.), Cognitive Engineering in the Aviation Domain (pp. 287-307). Hillsdale, NJ: LEA.
Polet, P., Vanderhaegen, F., & Amalberti, R. (2000, submitted to a special issue of Safety Science). Modelling border-line tolerated conditions of use (BTCUs) and associated risks.
Rasmussen, J. (1997). Risk management in a dynamic society: A modelling problem. Safety Science, 27(2-3), 183-214.
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Rochlin, G. (1993). Essential friction: Error control in organisational behaviour. In N. Akerman (Ed.), The Necessity of Friction (pp. 196-234). Berlin: Springer/Physica Verlag.
Sarter, N., & Amalberti, R. (Eds.) (2000). Cognitive Engineering in the Aviation Domain. Hillsdale, NJ: LEA.
Wioland, L., & Amalberti, R. (1996). When errors serve safety: Towards a model of ecological safety. In Proceedings of the First Asian Conference on Cognitive Systems Engineering in Process Control (CSEP '96), November 12-15, 1996, Kyoto, Japan (pp. 184-191).
Woods, D., Johannesen, L., Cook, R., & Sarter, N. (1994). Behind Human Error. Wright-Patterson AFB, Dayton, OH: CSERIAC SOAR 94-01.