Structural Safety 21 (1999) 311–333
www.elsevier.nl/locate/strusafe
Achieving structural safety: theoretical considerations
David G. Elms*
Department of Civil Engineering, University of Canterbury, Christchurch, New Zealand
Abstract
The nature of safety and the related concepts of risk and hazard are defined and explored. Measures of safety are discussed, as is safety's relation to sustainability. Threats to safety are categorised, after which the discussion moves to consideration of safety management. Emphasis is given to indirect means of ensuring safety, particularly codes of practice, quality assurance, the three "enemies of knowledge", and indicator methods. © 1999 Elsevier Science Ltd. All rights reserved.
Keywords: Structure; Safety; Risk; Hazard; Failure; Human error; Assessment; Sustainability; System; Indicators
1. Introduction
Safety is central to structural engineering. For the most part, structures have a good record.
Failures are few, except in major natural disasters. Nevertheless, they occur, and at a rate that
requires a continual review of the means of achieving safety. The obvious approach is to design to
the relevant codes and ensure good management procedures. However, unexpected threats can
appear, reducing safety margins to the point of failure. The aim of the paper is to give an overview of ways to detect and combat the unexpected, so ensuring safety.
The paper's title promises "theoretical considerations". This is not to say the emphasis is on quantitative analysis. Rather, good safety assessment and management need a thorough understanding of underlying concepts. They are not always easily defined. Yet definitions are important. They are the foundations of understanding. In what follows, there is therefore an emphasis on definition and on categorisation. Both are necessary for a sufficiently rich language for effective discussion, understanding and practice.
The argument begins by defining safety. It considers how safety is achieved and how it can be
known that a structure is safe. The discussion ranges from design philosophy to the nature of
* Corresponding author. Tel.: +64-3-364-2379; fax: +64-3-364-2758.
E-mail address: elmsd@cad.canterbury.ac.nz (D.G. Elms)
0167-4730/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved.
PII: S0167-4730(99)00027-2
codes, dealing on the way with risk management, with the problems of complexity and uncertainty, with both direct and indirect methods of assessment, and with underpinning ethical
principles.
2. Safety
2.1. The nature of safety
Safety and risk are closely related concepts. Yet they are very different in nature. Risk is quantifiable, but safety generally is not. It is something to be achieved or assured. Matousek says "Safety is a quality characteristic and should therefore not be restricted to being a mathematical factor" [1]. We need to understand clearly what we mean when we say that a structure is "safe".
In normal usage the word "safe" can have different meanings. It can be applied both to artefacts and to people. For example, "the electric heater is safe", "the bridge is safe to cross", "John is a safe driver", or "Jane was in great danger, but now she is safe". The common thread is the freedom of people from physical harm. Pugsley pointed out that "the safety of a structure is often viewed from a human standpoint: 'is that plank over a stream safe for me to walk over? Is that bridge safe for public use?'" [2].
In structural engineering, the word "safety" is used more narrowly. A safe structure is one that will not be expected to fail. This could sometimes be quantified by saying that the probability of
failure is less than a particular limiting value, but such a value is not normally stated or known
explicitly. For the most part, safety is determined by other means: by reliance on codes, for
example. Historically, the emphasis on personal safety, of preventing death and injury, has tended to underlie the structural view of safety. This view is now changing. Particularly since the
Northridge earthquake, the emphasis has been shifting towards minimisation of economic loss. In
a sense, the focus has changed from ensuring safety to managing risk.
Nevertheless, safety is still an important concept, and we need to be sure how it is understood.
There are two usages: perfect safety and comparative safety. The first is a limit. One is either
perfectly safe, or one is not. Complete safety is when one does not even think about the possibility
of harm. It is taken for granted. It is the point of view of the user, as it were, and it is a perception, a subjective view. Such a view of safety should not be isolated from the objective world of
engineering. We therefore need to see how a belief in safety is achieved.
In its second usage, safety is understood in a comparative sense. One considers the degree of safety as in some way the lack of complete or perfect safety. Crossing the broken-down bridge is not very safe; indeed, it could be quite unsafe. Fording the river beneath would be safer, though it would not be completely safe – there might always be the possibility of an accident.
Thus the concept of safety relates to two rather different questions: "Is this safe?" and "How safe is it?"
Thus safety can be thought of as a state, a perceived state or a quality [1]. Because it is a state,
it cannot be quantified directly. Rather, the state or quality is assured by assessing and controlling
risk, reliability and hazard.
We can now make some speci®c de®nitions.
2.2. Definitions: safety, risk and hazard
2.2.1. Safety
In the light of the previous discussion, safety can be defined as follows:
1. A structure is safe if it will not fail under foreseeable demands, leading to loss of life,
injury and unacceptable economic loss, and if it is unlikely to fail under extraordinary
demands or circumstances.
The definition has two parts. The structure will be designed and maintained so that in "ordinary" circumstances there will be no failure, and it will be sufficiently robust that there will not be catastrophic collapse in extraordinary circumstances, although it might sustain damage. The "extraordinary circumstances" might arise for a number of reasons. A natural hazard might give higher than expected loads – for example, a much larger than predicted earthquake. Alternatively there may be some unforeseen demand such as ship collision with a bridge, or terrorist action. The World Trade Centre attack caused damage, but the structure was sufficiently robust and there was no catastrophic collapse. On the other hand, the Ronan Point apartment building was not robust, and a gas explosion led to progressive collapse [3]. Then again, there could be failure due to unexpected material problems (the King Street Bridge failure [4]) or collapse mechanisms (West Gate Bridge [5]). It is clear that these structures were not safe. However, that was only known in hindsight. Prior to failure, the structures were thought to be safe.
This points up an inadequacy in the definition of safety given above. It assumes that some
agent knows the circumstances in which failure could occur at some future time in the life of the
structure. Yet the engineer (or anyone else for that matter) can only know the evidence available
at the time at which a statement on safety is made. From this point of view, safety can only be
seen as a belief or expectation, as Pugsley hinted. This is not to say the belief is arbitrary. It will
have grounds, and the possible grounds are many. Structural codes, quality assurance, experienced engineers and an absence of previous failures may all be reasons for the belief that a
structure is safe. Nevertheless, the point is that all aspects of the future cannot be known, and the
"real" safety of a structure is necessarily hidden. This should always be borne in mind. Unjustified confidence is dangerous, and wariness is a virtue.
The real point of the definition of safety given above is that it should be seen as a goal to be aimed
for, or a pair of principles to guide the processes of design, construction, operation and maintenance.
Safety is closely related to the likelihood or probability of failure. However, there is not a one-to-one correspondence. Rather, safety can be thought of as corresponding to an upper limit of failure probability. A second definition of safety is therefore that
2. A structure is safe if the probability of failure during its design life is less than a specified low value.
Safety is related to risk. The risk to the occupants of a safe structure is low. Risk is a word
often used loosely, and it is rich in implications. Next, therefore, must come a definition of risk.
2.2.2. Risk
Risk can be quanti®ed, but because it necessarily relates to a future event, any expression of
risk must rely on a degree of belief. It is an estimate, and is therefore uncertain. Though some
have used the concept of "real" risk, the only sensible use of the term is as an asymptote that the
engineer might approach more closely by using better information and more sophisticated analysis. But as will be shown, neither information nor analysis can improve an estimate of risk
unless the improvements are balanced and comprehensive.
There are several meanings of risk in current usage. One definition is that risk is
1. A combination of the likelihood and the consequences of a future event, viewed in a context.
In some cases it can be formalised as
2. The product of the probability of occurrence and the quanti®ed consequence of a future
event.
Considering the first definition, the inclusion of context is important and often forgotten [6,7].
The context of risk is often more closely related to the consequences of an event rather than its
likelihood. Utility theory gives a good illustration of this. A loss of $1000 would be trivial to a
millionaire but catastrophic to an impoverished student. In general, the "context" aspect of risk
refers to the environment in which a risk-based decision is made, for risk, in practice, is closely
linked to decision.
When considering risk, structural engineers for the most part focus on probability of failure
rather than on the consequence and context aspects of risk. However, building codes, where they
act for society, must necessarily take into account consequence and context, even though their
main emphasis is on controlling the likelihood of failure.
More loosely, but still usefully, risk can also be defined as
3. Threat × vulnerability
The threat might be a natural hazard, and vulnerability the proneness to failure of a structure
due to such a hazard.
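Definitions 2 and 3 can be made concrete with a small numerical sketch. The scenarios, probabilities and consequence values below are invented purely for illustration and are not taken from the paper; the sketch simply shows the arithmetic of probability multiplied by consequence, summed over scenarios, and of threat multiplied by vulnerability.

```python
# Illustrative sketch of definitions 2 and 3 of risk. All numbers are
# invented for the example; they are not data from the paper.

# Definition 2: risk as probability x quantified consequence, summed over
# the scenarios considered (a calculated, and therefore conditional, risk).
scenarios = [
    # (name, annual probability of occurrence, consequence in monetary units)
    ("moderate earthquake", 1e-2, 2e5),
    ("severe earthquake",   1e-3, 5e6),
    ("vehicle impact",      5e-4, 1e6),
]
annual_risk = sum(p * c for _, p, c in scenarios)
print(f"Annual risk (expected loss): {annual_risk:,.0f} units/year")

# Definition 3: risk as threat x vulnerability. Here the threat is the
# annual probability of the hazard occurring, and vulnerability is the
# conditional probability that the structure fails given the hazard.
threat = 1e-3          # annual probability of the hazard (assumed)
vulnerability = 0.05   # P(failure | hazard occurs) (assumed)
annual_failure_probability = threat * vulnerability
print(f"Annual failure probability: {annual_failure_probability:.1e}")
```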
Depending on how risk is assessed, it can be categorised as observed risk, calculated risk or perceived risk [7]. All three estimates have an element of uncertainty associated with them, but in different ways. It is useful to bear in mind the differences between the three, as they mean different things and should therefore be compared only with extreme caution.
Observed risk is an extrapolation from a number of observed occurrences. Road accidents are
an example. Generally, observed risk can only be a broad estimate of the likelihood of failure.
There are several reasons for this. Firstly, particularly where failure is infrequent (as with large bridges), many different structural types must be swept up into the same overall category. Secondly, there is no differentiation as to the cause of failure. As observed risk is statistically based, no causal relation can be determined directly. All causes of failure are included. This leads to a third point, which is that any estimate of observed risk is sensitive to the categories used for its determination. "Large bridges", for example, is a fuzzy category with no precise delineation as to which bridges are large and which are not. Therefore any estimate of observed risk will be less dependable than at first sight it might seem to be.
Calculated risk can almost be thought of as the converse of observed risk. Any calculation of,
say, the probability of failure of a beam or a building must be based on an analysis of that particular beam or building. It can of course be averaged out over a number of structures, but only
by calculating the failure probabilities for a number of different scenarios. And unlike observed
risk, which includes all sources of failure, calculated risk depends on the choice of specific failure modes and causes. Some, such as human error, might be omitted even though they can be very significant contributors to risk. Therefore in a sense, calculated failure probabilities are conditional probabilities. The estimate might be high or low: in some cases contributing effects are omitted from calculations, while in others an analyst might make conservative assumptions. The latter is typically the case in the nuclear industry.
It is scarcely surprising that observed and calculated risks can differ widely in their estimates. Brown's well-known comparison gives typical observed failure frequencies for large bridges as between 10⁻² and 10⁻³, compared with typical calculated failure frequencies of 10⁻⁶ [8]. The difference is not trivial.
Perceived risk is different again. It is simply the perception of risk held by different sections of the public, or by those affected by or involved in a particular proposal. Perceived risk has particularly
become an issue in environmental contexts where there is, rightly, concern over the siting of some
facility such as a chemical plant or a sewage outfall. There are many well-documented reasons for
people's perception of the gravity of risk ([9], chapter 5). All are real, though not always rational, at
least in terms of the rationality of the engineer. Where perceived risk is an issue, an engineer should be
well skilled in risk communication. However, risk perception is not usually a matter of concern for
structural engineers. The public tends to trust us, except perhaps in dam construction.
An important distinction between risk and safety is that risk is a quantity that can become very
small indeed, but can never be zero. Any situation and any structure has a risk, even though it
can be infinitesimal. One can always be struck by a meteorite. Safety, on the other hand, can indeed be complete, using the limit-related definition of safety. There is of course a difficulty in
specifying precisely the limiting risk below which a situation is safe. Nevertheless, when a risk is
absurdly low there can be no question: it is indeed safe.
It follows that even though a structure is safe, there is still the possibility of failure.
The third definition, that risk is the product of threat and vulnerability, tends to have been used more in the chemical and process industries, in environmental contexts and in organisational risk management rather than in structural engineering. It is included here as it is a helpful way of looking at risk that relates to some of the approaches to risk management discussed later. At its simplest, the definition can be thought of as the relation between the demands on a structure due to loads and load-effects and its capacity to withstand them. Note, though, that "capacity" relates to survival and "vulnerability" to failure. Many engineers tend to focus more on capacity than on vulnerability, yet the two views can lead to different results. They are not interchangeable. Stephens pointed out that the results of an analysis focusing on failure would be more reliable than those of one focusing on survival [10].
The threats to a structure lie both in its capacity and in the demands made upon it. Capacity
may be compromised due to errors in design, construction, maintenance and ongoing management. Demands generally result from hazards. Hazard is both an important concept and a word
often loosely used.
2.2.3. Hazard
A hazard is best thought of as a
potential for harm
Hazard is often confused with risk, but in reality the two are quite different. A hazard is a
threat, the possibility of something happening which could be harmful, while risk relates to the
likelihood of the harm itself. For instance, an avalanche may threaten a road. The avalanche is a
hazard, lying in wait, as it were. But there is no risk if authorities are aware of the hazard and
close the road. A badly-secured handrail is a hazard to passers-by. They face a risk of falling, but
the risk results from the existence of the hazard together with the fact that the passers-by are
there, and vulnerable. We talk of natural hazards – earthquakes, floods, windstorms and so on – not of natural risks.
A hazard may therefore be a component of risk. One approach to risk management is to
manage hazards, and another is to manage vulnerability.
Hazards can of course vary in size, and like risk there is a combination of likelihood and
magnitude. However, a serious hazard will always have the potential for significant harm, generally to people. No matter how likely the hazard, it will not be serious unless the consequence could be serious. The quantification of hazard is not as formal as for risk. Nevertheless, quantification is necessary if it is to be used in a risk estimate, and also if risk is to be controlled through hazard management.
Some hazards are continuously in place, like a trailing electric cord, while for others such as a
windstorm we can assess frequency and magnitude. Again, some hazards are due to human
activity, such as electric shock, explosion, transportation accidents or mining subsidence, while
others are due to natural causes.
There remains the question as to whether to use "hazard" to include human errors such as mistakes in structural analysis. The definition of hazard as potential for harm would clearly include human error. However, for convenience and because of the third definition of risk above, the broader term "threat" will be used in what follows to cover errors in design, construction, operation and maintenance as well as hazards, while "hazard" itself will be reserved for external threats such as natural hazards or the presence of hazardous materials.
2.3. Measures of safety
It was pointed out above that safety is a state or quality, and that because it can be achieved in
different ways it is often not appropriate to quantify it. Nevertheless there is a variety of ways in
which safety can be assessed quantitatively when necessary.
2.3.1. Probability of failure
Structural engineers typically see safety in terms of probability of failure. This is the most usual
quantitative measure. However, probability of failure by itself is very limited as a measure of
safety. There are many things it does not take into account. A single structural failure might have
different consequences depending on, for instance, the number of people affected or the social and economic effects on the community. It is a question of from whose point of view safety is considered. If safety is a question of threat to an individual, it would not matter how many people were in a building. All would face the same probability of death from its collapse. However, the codes governing safety are written from the community's point of view, not that of the individual, so that a simple measure based on failure frequency would be insufficient. Codes typically take into account the lower social acceptability of failures where there are multiple fatalities by
applying different load or resistance factors for certain buildings. This is not always a rational
approach, as it is still fundamentally based on the assumption that probability of failure is the
right measure of safety.
Other branches of engineering use different measures. Two that are more often used in the process industries are the Fatal Accident Rate (FAR) and the so-called F–N curve. They have considerable potential both for prescribing appropriate safety levels for major structures and also as performance standards when and if more codes allow performance-based design. Nevertheless, standards relating to failure cannot consider immediate life safety issues alone. In addition to considering economic consequences they must also be concerned with functional matters. An example would be the requirement for certain types of building to be minimally damaged in an earthquake because they have a major role to play in society and because their ability to function as soon as possible after a disaster is significant. It is a question of the importance of a structure to a community, and hence of societal risk. Rather than go further into risk issues, discussion will be restricted to the FAR and the F–N curve.
2.3.2. Fatal accident rate
The fatal accident rate or FAR is a measure of the risk of death faced by an individual while
performing a particular activity or being in a particular situation. It is defined as [11]

FAR = (probability of death of an individual per hour of exposure) × 10⁸

An equivalent definition, used when calculating FAR values from statistical information in an industry or occupational category, is

FAR = (observed deaths in a time period × 10⁸) / (hours of exposure per person × people at risk)
The FAR is useful in making safety comparisons between different industries and activities. It can also be useful when evaluating the safety implications of different design or safety-management options. Though it does not seem to have been used in a structural context, nevertheless a minimum FAR for any person occupying a structure could be specified as one of the performance standards to be met in a performance-based code. The total FAR for an activity could be the sum of a number of contributing effects, so that for code purposes one need only be concerned with the FAR due to structural failure. Given typical values that have been computed for different activities, a figure of FAR = 1.0 would be appropriate for assessed structural failure. This figure is lower than the observed FAR figure for safe activities such as being at home (FAR = 2–3), but the typical difference between observed and calculated fatality frequencies points to the need for a low target value.
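Both forms of the FAR definition reduce to a short calculation. In the sketch below, only the 10⁸ scaling comes from the definitions above; the hourly death probability, death count, exposure hours and population are invented for illustration.

```python
# Sketch of the two FAR definitions given above. The exposure figures and
# death count are invented; only the 1e8 scaling comes from the definition.

# Definition 1: FAR from an (assumed) hourly probability of death.
p_death_per_hour = 1e-8                  # assumed individual hourly risk
far_from_probability = p_death_per_hour * 1e8
print(f"FAR from hourly probability: {far_from_probability:.1f}")

# Definition 2: FAR from observed statistics for an occupational category.
observed_deaths = 4                      # deaths in the period (assumed)
hours_exposure_per_person = 2_000        # hours of exposure per person (assumed)
people_at_risk = 100_000                 # size of the exposed group (assumed)
far_observed = observed_deaths * 1e8 / (hours_exposure_per_person * people_at_risk)
print(f"FAR from observed statistics: {far_observed:.1f}")   # 2.0 for these numbers
```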
2.3.3. Societal risk
As well as individual safety, code writers and engineers must also take into account society's
aversion to large disasters. This can be dealt with by setting standards or guidelines in the form of
the so-called F–N curve ([9], chapter 2), which plots frequency against the magnitude of an accident, normally expressed as number of deaths. Logarithmic scales are used. The curves are
exceedance curves: that is, a point represents the frequency of an accident of a particular magnitude or greater. Fig. 1 gives an example of proposed guidelines expressed as F–N curves. The lines dip down towards the right, expressing the requirement that larger disasters must be less frequent. The figure is a suggested proposal related to the transportation of hazardous substances. There are essentially three regions. Below and to the left of the "negligible line" the risk is deemed negligible. Immediately above and to the right of the line is a sloping band: the ALARP region. Within this band, the risk must be reduced to a value that is as low as reasonably practicable (ALARP). It is bounded above and to the right by various lines reflecting different circumstances, above which the risk is deemed to be intolerably large.
Fig. 1. Possible societal risk standard for transportation of hazardous substances [12].
F–N curves can also be used to plot historical accidents by frequency and magnitude [13]. The two types of F–N curve (standard/calculated and observed) should not normally be used together because of the difference mentioned above between observed and calculated risk.
Fig. 1 is unnecessarily complicated for structural engineering situations, as the consequences of failure are unlikely to be broadly distributed. If a guideline is required as an F–N curve, then a single line dipping down at 45° would be appropriate, as in Fig. 1. The FAR figure of 1.0 suggested above means a frequency of about 10⁻⁴ per year: a FAR of 1.0 corresponds to a death probability of 10⁻⁸ per hour of exposure, and continuous occupancy of roughly 10⁴ hours per year gives an annual frequency of the order of 10⁻⁴. This figure can be roughly taken to represent an annual frequency of one or more deaths rather than a single death, a necessary interpretation for relating to an F–N curve. It sets a point on an F–N curve and therefore the position of a line. Coincidentally the line falls precisely on the "negligible line" of Fig. 1.
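To show the mechanics of an observed F–N (exceedance) curve, the sketch below counts, for each fatality level N, the annual frequency of accidents with N or more deaths, and compares it with a guideline line of slope -1 on log-log axes anchored at (N = 1, 10⁻⁴ per year), as suggested above. The accident records and record length are invented, and the comparison is only to illustrate the mechanics; as noted above, observed and standard curves should not normally be mixed.

```python
import numpy as np

# Hypothetical accident records: number of deaths per accident, observed
# over an assumed 50-year period. All values are invented for illustration.
deaths_per_accident = np.array([1, 1, 2, 3, 5, 8, 12, 30])
years_of_record = 50.0

# Observed F-N (exceedance) curve: frequency per year of accidents with
# N or more deaths, for each fatality level N.
n_values = np.unique(deaths_per_accident)
frequency = np.array([(deaths_per_accident >= n).sum() for n in n_values]) / years_of_record

# Guideline of slope -1 on log-log axes, anchored at (N = 1, 1e-4 per year):
# F_limit(N) = 1e-4 / N.
guideline = 1e-4 / n_values

for n, f, g in zip(n_values, frequency, guideline):
    flag = "above guideline" if f > g else "below guideline"
    print(f"N >= {int(n):2d}: observed {f:.2e}/yr, guideline {g:.2e}/yr ({flag})")
```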
3. Sustainability
There is now an increasing call for all engineering to be based on the principles of sustainability. Sustainability is finding its way into engineering codes of professional ethics. The World Federation of Engineering Organisations (WFEO) International Code of Environmental Ethics, for example, requires that engineers should always "observe the principles and practices of sustainable development..." [14]. There may be significant implications in the future for structural codes.
The principle of sustainability is that it is no longer appropriate only to consider the immediate
use of an artefact, but also to be careful in the use of resources and not to compromise the needs
of future generations. One statement of a sustainability ethic is that
All people have their basic needs satisfied, so they can live in dignity, in healthy communities,
while ensuring the minimum adverse impact on the natural system, now and in the future
[15].
Clearly a basic human need is for safety, and on this basis a discussion of safety must address
the issue of sustainability.
There are however many difficulties with the idea of sustainability, ranging from the theoretical and philosophical to the practical and pragmatic. To begin with, though sustainability is a splendid idea, it is very difficult to pin down what it might actually mean in practice. As with any
principle (such as utilitarianism) which deals with decision-making at a particular moment to
achieve an outcome in the future, there are problems of deciding what outcomes to take into
account. There is a problem of how far into the future they should be considered, and of deciding on whose behalf the decisions should be made. Most engineers are used to thinking in terms of immediate
causes and consequences and not in a broad systems way. Sustainability, though, requires systems
thinking, and as Forrester points out, complex dynamic systems behave in a counterintuitive
manner [16]. The unexpected is the norm.
Like sustainability, utilitarianism is a consequentialist philosophy or principle of how to proceed (act so as to produce the greatest good for the greatest number). It is attractive to engineers
for good reason, but there are many theoretical objections to it. The arguments raised against
utilitarianism can also be invoked against the idea of sustainability. For example:
• It is difficult to decide the total "good" resulting from an action, and how far into the future the consequences should be considered
• It is unclear who makes the decision as to what is good, and by what right
• Minority needs are not considered.
However, in a practical and pragmatic context the objections to utilitarianism seem less
important because engineers typically make underlying assumptions that ground the principle in
their reality. Essentially this means relying on the idea that the consequences of actions taken here
and now diminish with distance and time. There is an implicit use, in effect, of St. Venant's principle, or of discounting theory. Similar assumptions could be used to underpin the application of sustainability principles. Nevertheless, the assumption that effects diminish with distance may not always be valid.
There are also many practical difficulties. Firstly, it is difficult to apply sustainability principles in a competitive commercial environment, both because an engineering firm needs to survive and also because a client will usually have tight financial constraints. If sustainability can give a market edge or can be shown to produce a cheaper alternative, then it is acceptable; otherwise its application is difficult to achieve. Secondly, it is not always clear what constitutes good practice in sustainability. Few case studies are yet available, though work is proceeding. Finally, designing for sustainability can be complex and can add significant costs for the designer.
Life Cycle Analysis (LCA) represents one specific approach for achieving sustainability. It is a cradle-to-grave approach. Initial applications have been principally in manufacturing industry,
driven at times by legislation requiring that products, even those as complex as cars, should be
recyclable. In some areas, shipbuilding for instance, there is a long tradition of recycling. As to
structures, there has always been recycling in the sense of alternative uses for buildings, and
refurbishment. Nevertheless the full reuse of structural materials is not yet the norm. Economic
analyses of disposal versus recycling have been carried out on oil platforms in an attempt to
establish a methodology. Such analyses are an important part of LCA but are not yet well
established. Nevertheless the field and the ideas are developing fast.
The definition of a sustainability ethic given above requires thought as to how structural safety might lead to an "adverse impact on the natural system". Possibilities lie in the use of resources, and
in the ability of structures, components and materials to be recycled. Nevertheless, it is unclear at
present what could be done. Any simplistic solution or principle is bound to be counterproductive.
Progress must lie in the future.
4. Threats
4.1. Threats and their sources
There are many roads to failure, and many threats to a structure's successful performance. At
the simplest level the threats can be thought of as a matter of loads or demands being greater than
the capacity to withstand them. Such an approach is limited, though, and is only appropriate
where loads (or load-effects), capacity and the relation between them can be clearly specified,
deterministically or probabilistically. In many cases this might be so, but there are few if any
structures for which all the threats can be specified in such a way. More fundamentally, the simple capacity/demand paradigm is often an inappropriate way of thinking about performance over a structure's lifetime. It is misleading, and a different way of thinking is required.
Hale [17] talks of three "ages" of safety, referring to approaches to the analysis and management of safety. The first two ages concentrated on technical and human failures respectively. For the third age, "the concern is with complex socio-technical and safety management systems." The history of approaches to structural safety has followed this pattern. Initially the emphasis was on technical issues, concentrating on a better understanding and control of material properties, on loads and on the mechanics of structural behaviour. Next came the recognition that human error had to be taken into account in design, construction, operation and maintenance. This has taken a number of forms: code requirements, good practice in checking and the adoption of quality assurance regimes, among others. Quality assurance spans into the third age, but for the most part structural engineering has yet to find useful paradigms for dealing with threats to safety arising from complex organisational sources.
A different categorisation of threats to safety is to say that there are three generic sources:
ignorance, uncertainty and complexity. They are not mutually exclusive but are three aspects of
the general threat of poor information. They can be thought of as the three enemies of knowledge.
Ignorance refers to a lack of knowledge of technical matters of which a designer should be
aware, with no understanding that the information is needed. Ignorance is not knowing, and not
knowing that one does not know. According to Thoreau (in Walden), Confucius said, "The wise man is he that knows that he does not know." Ignorance has two categories: ignorance in the individual, and ignorance in the profession. Historical examples of ignorance include nitrogen embrittlement of steel, the buckling behaviour of box-girder bridges, summed effects such as the
combination of fatigue, stress concentration and unrestrained crack paths leading to the Comet
aircraft failures, lack of understanding of structural behaviour in earthquakes, and ignorance of
the nature and magnitude of demands such as wind loads. Ignorance sometimes occurs because
professional practice has too short a memory and forgets historical failures. Historical failures of
suspended structures had been forgotten in the design of the first Tacoma Narrows Bridge.
Uncertainty represents the situation where essential knowledge is lacking, but its absence is
known. This contrasts both with ignorance, where the lack of knowledge is not known, and with randomness, such as where the strength of a piece of steel may not be known precisely but where the parameters defining it as a random variable are known. Where a lack of knowledge is known, or where there is too great an uncertainty, the uncertainty may be reduced by various strategies – by obtaining more data, for example. For many situations it is impossible either in practice or in principle to reduce uncertainty directly. An appropriate alternative strategy might then be to avoid it by, for instance, adopting a different design strategy or solution.
Complexity leads to an inability to predict. The behaviour of complex systems cannot be predicted precisely. Complexity can occur in many areas. It could be in design or construction, or it
could be in the socio-technical or managerial areas dealt with by Hale [17]. Complexity has to do
with predictability and understanding. It is not the same as intricacy. An intricate system has
many components, but it may not be complex if its behaviour can be well predicted. A spring-driven
watch is intricate and has many components, but it is not complex as the behaviour of both parts
and whole are predictable. Casti [18] ascribes to Turing the idea that a theoretically possible measure of a system's complexity is the minimum length of binary code for its complete description. In
more general terms we can talk of the complexity of a system as being related to the ease or difficulty of describing it fully. There are of course situations where a full deterministic description cannot possibly be made – in chaotic regimes for instance – but prediction can sometimes be bounded in terms of the parameters of random variables. The impossibility of predicting complex system behaviour can be a major source of failure. Perrow [19] makes the point that failures are the norm where systems are complex and tightly coupled.
Ignorance, uncertainty and complexity can thus be seen as three enemies of knowledge. They
are ways in which information is incomplete or in error and predictability is compromised. As
they are major sources of risk, strategies must be in place for dealing with all three.
4.2. Anatomy of failure
Much has been written on the causes of failure. Examination of many failures has shown
common elements and processes. Blockley [20] considered case studies of structural failure and
presented a classification of failure which can be summarised as:
a. Failure in well-understood structures due to a random extremely high value of load or
extremely low value of strength
b. Failure as in (a) but where structural behaviour is poorly understood and system errors are large
c. Failure due to an independent random hazard whose statistical incidence can be known
d. Failure due to a mode of behaviour poorly understood by existing technology
e. Failure due to a designer not allowing for a normally well-understood mode of behaviour
f. Failures due to construction error
g. Failures due to a deteriorating "climate" (such as increasing financial or political pressure)
h. Failures due to a misuse or abuse of a structure
Blockley then expanded these categories into a more detailed list.
The anatomy of disaster was discussed in detail by Turner [21], based on a study of a number
of accidents. He showed that a major disaster would have a characteristic history. The event itself
would result from an accumulation of problems during an incubation period, though the actual
occurrence would not happen until a precipitating event triggered it. Discussing the incubation period, Turner said that "there is an accumulation over a period of time of a number of events which are at odds with the picture of the world and its hazards represented by existing norms and beliefs." The discrepant events fell into two categories: either they "are not known to anyone; or they are known about but not fully understood..." The reasons why they were not known or were misunderstood were:
1. erroneous assumptions were made
2. there were problems in handling information in complex situations
3. the danger was belittled even when its onset was noticed
4. where formal procedures were not fully up-to-date, violations of formal rules and regulations came to be accepted as normal.
Thus in a sense, a disaster needed two components: energy and misinformation.
Turner viewed the problem from a sociological perspective, and so concentrated on problems
arising from a less than perfect flow and use of information during the incubation period. He
discussed reasons why people did not see or act on the warning signs that were there to be seen.
He categorised information-flow problems in terms of information that is:
1. completely unknown
2. known but not fully appreciated
3. known by someone, but not brought together with other information at an appropriate time
when its significance can be realised and its message acted upon
4. available to be known, but which cannot be appreciated because there is no place for it
within prevailing modes of understanding.
Perrow's approach has been referred to above [19]. Like Turner, he studied disasters, but his
analysis took him in a di€erent direction. Whereas Turner focused on problems of information
¯ow within groups, Perrow focused on the technical systems themselves. After reviewing many
cases, he concluded that "interactive complexity and tight coupling – system characteristics – inevitably will produce an accident."
A difficulty is that the key concepts of interactive complexity and tight coupling are only loosely defined in Perrow's book. We can see what he is getting at and respect his analysis and conclusions, but to use the two ideas they must be defined more carefully. Complexity and intricacy
have been discussed above. Fig. 2 shows how intricacy and interaction can combine to produce
complexity.
Reason, too, studied disasters, but he was more interested in the human factors involved [22].
Though he discusses many types and sources of human error, the most interesting comments
from our point of view concern the distinction between active errors and latent errors. A pilot
retracting the flaps at the wrong time and causing a crash will have made an active error. A design
mistake leading to a structure being vulnerable to an earthquake is a latent error. Its presence will
not be detected till an earthquake actually occurs, but then it becomes a major contributor to
disaster. In Reason's view, "Latent errors pose the greatest threat to the safety of a complex system." He goes on to take ideas of complexity and tight coupling from Perrow. He then develops a metaphor of "resident pathogens". These are the latent errors in a system and are
Fig. 2. How intricacy and interaction can lead to complexity.
considered in more detail below. Reason shows the importance of looking for a different set of
system problems than those proposed by Turner [21] and Perrow [19], and contributes a very
appropriate metaphor.
Blockley also suggested a useful metaphor. His balloon model builds on Turner's analysis, and
adds a way of managing the hazards facing a complex system.
"Imagine the development of an accident (failure, disaster) as analogous to the inflation of a balloon. The start of the process is when air is first blown into the balloon, when the first preconditions for the accident are established. Consider the pressure of the air as analogous to the 'proneness to failure' of the project. As the balloon grows in size, so does the 'proneness to failure'. Events accumulate to increase the predisposition to failure. The size of the balloon can be reduced by lowering the pressure and letting the air out. This parallels the effects of management decisions that remove some predisposing events and thus reduce the proneness to failure. If the pressure of such events builds up until the balloon is very stretched then only a small trigger event, such as a pin or a lighted match, is needed to release the energy pent up in the system... The symptoms that characterise the incubation of an accident need to be identified and checked" [23].
Jenkins et al. considered ten case studies of disaster [24] and grouped contributing influences into the four key areas of external influences; changes over time; organisational matters; and human response.
The focus of the report is on management issues contributing to failure. Thus the causes of failure are usually complex. Some are hidden from sight – they could be called "submarine threats" as they lie in wait, ready to attack when conditions are right. Indicators are there to be seen beforehand, but for one reason or another they are neither perceived nor acted upon.
5. Management of safety
We now come to ways of achieving safety. There are many possible strategies, both direct and
indirect, and most are related closely to risk management ideas. First, general principles must be
considered.
5.1. Principles
The management principles for achieving safety revolve around the definition of structural safety given above, that a structure is safe if it will not fail under foreseeable demands, and is unlikely to fail under extraordinary demands or circumstances. There are thus two different scenarios to be dealt with: the ordinary or foreseeable situations, and the extraordinary. They are not alternatives: both must be covered. Codes deal with the first very well, but are less reliable in managing the second.
The foreseeable is what is known. It can be argued, therefore, that foreseeable situations are
those that do not involve ignorance, uncertainty and complexity.
A general principle is that wherever possible, risk should be assessed. The assessment can be direct, using various analytic procedures such as the First Order Second Moment approach (FOSM) or the First Order Reliability Method (FORM), or it can be indirect, using, for example, an indicator method [28]. Assessment is followed by response, and this, too, can be either direct (by, for instance, making the structure stronger) or indirect (for example by using quality assurance). In
any case, a two-tier approach is preferable: ensure the structure does not fail, but if it does, then
minimise the consequences.
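A minimal sketch of the direct route is the First Order Second Moment calculation below for the linear limit state g = R − S, with independent, normally distributed resistance R and load effect S. The means and standard deviations are invented for illustration; a full FORM analysis would be needed for non-normal variables or non-linear limit states.

```python
from math import sqrt
from statistics import NormalDist

# FOSM sketch for the limit state g = R - S with R, S independent and
# normally distributed. All parameter values are invented for illustration.
mean_R, std_R = 350.0, 35.0     # resistance (e.g. kN)
mean_S, std_S = 200.0, 40.0     # load effect (e.g. kN)

# Reliability index and corresponding notional failure probability.
beta = (mean_R - mean_S) / sqrt(std_R**2 + std_S**2)
p_failure = NormalDist().cdf(-beta)

print(f"Reliability index beta: {beta:.2f}")
print(f"Notional probability of failure: {p_failure:.2e}")
```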
The overall strategy for dealing with the extraordinary circumstances becomes clear. It is simply to use various means for reducing the effects of ignorance, uncertainty and complexity, either by minimising them, or by ensuring a structure and its management are sufficiently robust that their effects are minimised.
Engineering, however, is more than a matter of attaining certain standards; in this case achieving a minimum level of safety. It is also a question of doing so efficiently, with a minimum use of resources or at minimum cost. Thus in addition to satisficing – ensuring the limit is met – there is also a need to optimise. There are many strategies for optimisation, but one that is particularly central to structural engineering is the strategy of optimising through balancing. The idea is discussed further below.
Most of the principles of risk management relate to safety management. It is helpful to consider the use of risk management in disciplines far removed from structural engineering, or even from engineering as a whole. It is too broad a field to be covered here, but useful summaries can be found elsewhere [7].
5.2. Strategies
There are many strategies for achieving safety and managing risk. Quite apart from designing
carefully according to best practice and managing the whole process of design and construction
eciently, there are a number of speci®c issues that can be discussed.
5.2.1. Codes
Codes are central to structural engineering. They are the principal means by which society
assures itself of safety. For the most part, structural codes are prescriptive and state what must be
done, what loads should be designed for and so on. There is a move towards introducing performance-based alternatives, but few are operational at the time of writing.
A code for building structures is principally a satisficing instrument; that is, a means by which a minimum level of safety is achieved no matter to what structure it is applied. There is a tension between a simpler code, which results in a larger variation in safety levels between structures, and a more complex and detailed code, which achieves a smaller variation in safety by a more precise targeting of specific structural types. By "simpler" and "complex" is meant the degree to which the code disaggregates structures into different types, shapes, materials and usage. Fig. 3 shows that for two codes with different ranges of variation to achieve the same minimum level of safety, the code with the greater variation must produce a higher average safety level over all structures. The simpler code, with its higher variation, therefore leads to a higher overall cost to society. No satisfactory principle has yet been proposed for deciding on the appropriate level of code complexity.
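The argument of Fig. 3 can be reproduced with a small calculation. Treating the safety level achieved across the population of structures designed to a code as normally distributed, and requiring both codes to guarantee the same lower percentile, the code with the larger spread must deliver the higher mean. The spread values and the 5th-percentile criterion below are assumptions made only for the illustration.

```python
from statistics import NormalDist

# Sketch of the Fig. 3 argument. "Safety level" is treated as normally
# distributed across the population of structures designed to a code;
# the spread values and the 5th-percentile criterion are assumed.
minimum_safety = 1.0                      # required minimum (5th-percentile) safety level
z_05 = NormalDist().inv_cdf(0.05)         # about -1.645

for label, spread in [("detailed code", 0.10), ("simpler code", 0.25)]:
    # Mean needed so that only 5% of structures fall below the minimum.
    required_mean = minimum_safety - z_05 * spread
    print(f"{label}: spread {spread:.2f} -> required average safety {required_mean:.2f}")
```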
A second general point about codes is that they need to be balanced. There are two requirements for balance. The first is a balance of quality of information. It may be summed up in the "Principle of consistent crudeness" [25,26], which states that
The quality of the output of a model cannot be greater than the quality of the crudest input or of the model itself, modified according to the sensitivity of the output to that input.
Fig. 3. A minimum safety level needs a higher average safety for large variance.
Thus if, for instance, the degree of safety of a structure (or its probability of failure) is determined by (a) analytically calculable matters and (b) more crudely estimated human factors, then there is no need to carry out the analysis to any greater degree of refinement than that which applies to the human factor component. This observation runs contrary to the tacit assumption that the more refined an analysis, or the more data one acquires, the better is the result. It is only better if there is a balance of quality between the different contributors to safety.
The question must therefore be asked: if a code does not cover all contributions to safety or all
threats uniformly, then what is its status and meaning as a means of ensuring safety? The answer
is that no code covers all aspects of safety. It has a domain of applicability. It ensures safety
within, but not outside, that domain.
Sometimes there is also an assumption that by making load and resistance factors – in essence, safety factors – sufficiently large, a structure will survive all threats and will withstand extraordinary demands, those not specifically covered by the code. The assumption is not well founded and may not hold, especially in non-resilient structures.
Codes are therefore necessary but not sufficient for ensuring safety. There is an important implication for the structural engineer. Often, design is seen as a matter of fulfilling the requirements of the relevant codes at minimum cost. However, a better approach is to view design as beginning after the code requirements have been dealt with. That is, if codes deal with failure in ordinary and foreseen circumstances, then the real task of the designer is to deal with the extraordinary. The domain of applicability of design is broader than that of structural codes. What is
needed is a problem-focussed rather than code-focussed approach.
The second code balance issue is whether the code requirements for different demands are balanced with respect to one another. For example, the question could be asked: is the code equally stringent with regard to live loads, wind loads and earthquakes, or are the wind load requirements "tougher" than the others? For a balanced code they will be equally stringent.
However, it is not easy to specify what is meant by "stringent". Does it mean that there is an
equal failure frequency (annual probability of failure) for each demand? Probably not, because
the magnitude and context of failures would be different. Should stringency then be framed in terms of risk, by including the consequences of failure as well as probability? Again, probably not, because of the difficulty of defining consequences in terms of a single value as well as of specifying whose consequences they would be. At present there is no satisfactory answer.
A further point of interest in structural codes is the extent to which they take into account the
consequences and context of failure from the point of view of society as a whole. Some buildings
are more central than others to the functioning of society – hospitals, communication centres and banks for example – and these may be assigned higher resistance factors according to their importance. This applies particularly to earthquake code requirements because earthquake damage will affect many buildings simultaneously. There is an assumption that the resistance
factors will directly relate to the likelihood of failure, but the assumption is questionable. Code
factors are generally limited by an underlying focus on individual structures, and do not fully
consider the consequences of failure to society as a whole. It could be argued, for instance, that
buildings lining designated major access routes should have more stringent earthquake design
standards. A case could also be made for earthquake requirements to be lower in rural situations
both because reconstruction could take place more rapidly than in a widespread urban disaster
and because the overall socioeconomic consequences for a country would be lower for rural failures. Codes are not yet fully consistent in such matters. It could be argued that they need not be,
for the reasons of balance discussed above.
The discussion to this point has dealt with consistency and completeness issues and the argument has been that codes are neither balanced nor completely rational. This is hardly surprising
insofar as the issues are hard to pin down in practical terms. In any case codes evolve on a
pragmatic basis, dealing first with the more obvious needs of society and the engineering profession. A more general comment is that codes do not usually make explicit all the assumptions on
which they are based. For example there may be a relationship between codes and methods of
design and construction. The requirement for a particular design approach may be explicit, or it
may not. A code might tacitly assume, for instance, that most structures would be moment-resisting reinforced concrete frames designed using a capacity design approach. It would have a
hidden alignment with such structures even though nominally it would cover all types. Codes and
design methods go hand in hand. The relationship needs to be firmly in the minds of designers
and code writers alike.
5.2.2. Quality assurance
A second strategy for achieving safety is to use quality assurance for eliminating human error.
It deals with uncertainty and to some extent ignorance – the second and first of the "enemies of knowledge". Quality assurance is very effective. Nevertheless, it does not completely remove the possibility of error. Its use is a necessary but not sufficient condition for ensuring safety. It has
two main limitations.
The first is that quality assurance assures the framework of a process, but not its content. It
ensures that the information and documentation is complete, interconnecting and trackable, but
it has little to do with the quality of the information. The process might be correct, but the product poor.
The second limitation is that it is easy to rely too much on fulfilling the quality process rather
than on considering what is actually done. A requirement-checking or ticking mentality takes
over as familiarity grows. This is a higher-order human error. A problem-focused rather than
technique-focused attitude is needed. This is not easily achieved in an environment demanding
commercial efficiency.
5.2.3. Dealing with the enemies of knowledge
There are a number of strategies for dealing with ignorance, uncertainty and complexity. Some
overlap. For ignorance the general strategy is to achieve completeness, for uncertainty it is to
narrow the bounds and for complexity it is to simplify. In all cases an alternative strategy is to
bypass the problem by an indirect approach.
One can never be completely free from ignorance. Omniscience is unattainable. Nevertheless the
possibility of ignorance can be reduced. Partly it is a matter of being broadly knowledgeable. Partly it
is a matter of attitude – of being wary, of being able to take a broad overview and think in broad
system terms. Partly it is a matter of being willing to search widely, through contacts, literature and
the Internet. A guide as to the possibility of ignorance, of the need to search, is given by secondary
indicators, reviewed below. A further possibility is to group the information needed for a project into
broad categories, and to try to ensure that the categories are complete and cover the whole area.
There are a number of strategies for dealing with uncertainty. One can:
a. find more information, and so try to narrow the bounds of uncertainty. One can attempt to ensure that at a certain level, all the categories of information are complete; but it is more than a matter of filling out the content. There are limits to collecting more information. It might not be available, either in fact because of limitations of time or resources or because the appropriate research has not been done, or it may not be possible even in principle to obtain the needed information. Chaos theory shows that there are situations where prediction is impossible. Another point is the question of balance, discussed earlier, where the Principle of Consistent Crudeness shows that there is little point in gathering information in one sector alone as the quality of a result is dominated by the poorest of its contributing information. The implication is that information gathering must be seen in an integrated way.
b. determine reasonable bounds on the uncertainty. Sometimes there are physical bounds that set absolute limits, such as the overall fuel load limiting the energy of combustion in a fire, or the maximum quantity of a stored pollutant which gives a limit to the possible environmental consequences of a failure. Beyond the physical limits, there are also limits on the consequences of failure. They can be called "experience limits" and can be assessed by a careful following-through of the results of failure. Assessment of bounds was one of the strategies used elsewhere in assessing the safety of nuclear-powered ships [27]. (A minimal numerical sketch of the effect of a physical bound is given after this list.)
c. use indicator methods as an indirect method for safety assessment. Indicator methods are
discussed in a separate section below.
d. simplify the problem, for instance by using a different structural type, to reduce uncertainty. This is of course one of the strategies for dealing with complexity. Simplification might not always be the most sensible approach, though. It might lead to less cost-effective solutions or to a reduction in functionality or serviceability of the structure, or it may lead to lower overall resilience. Nevertheless it is an important option.
e. use a fail-safe approach to ensure that if a failure does take place it would not be catastrophic. Fail-safe strategies are in general use in many safety-critical design areas such as
chemical plants, nuclear facilities and transportation (rail operations, road transport and
aircraft design). Ductility, resilience and redundancy in structural engineering create multiple load paths and essentially act as fail-safe approaches.
f. use dominant-mode design to ensure that unwanted failure modes cannot occur because
failure will first take place in desired ``dominant'' modes. An example is the capacity-design
approach for earthquake design.
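As a minimal illustration of the Principle of Consistent Crudeness referred to in item (a), the following sketch (not from the paper; the quantities and numbers are hypothetical) combines the uncertainties of several independent contributions to a result and shows that refining an already-precise input gains little, while the crudest input dominates the quality of the outcome.

import math

def combined_sd(sds):
    """Standard deviation of a sum of independent contributions."""
    return math.sqrt(sum(s * s for s in sds))

# Hypothetical standard deviations of contributions to a load-effect
# estimate (same units); one input is far cruder than the others.
sds = {"dead load": 2.0, "live load": 4.0, "model error": 15.0}

# Refining an already-precise input barely changes the combined result...
refined_live = {**sds, "live load": 2.0}
# ...whereas refining the crudest input dominates the improvement.
refined_model = {**sds, "model error": 7.5}

print(f"baseline:            {combined_sd(sds.values()):.1f}")           # ~15.7
print(f"live load refined:   {combined_sd(refined_live.values()):.1f}")  # ~15.3
print(f"model error refined: {combined_sd(refined_model.values()):.1f}") # ~8.7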
Complexity can arise both in structural concepts themselves and in the processes of design and construction. The general approach to dealing with complexity is to simplify. This might take the form of a better ordering of a system into subsystems, of using more formal procedures, of using dominant-mode design approaches or simply of using a simpler design concept. Alternatively, if complexity cannot be reduced, then more attention must be paid to secondary approaches such as ensuring resilience, providing robust secondary barriers to failure, or limiting the consequences of failure.
5.2.4. Indicator methods
Secondary approaches to assessing and reducing the likelihood of failure can be called indicator methods [28]. Section 4.2 above referred to a number of approaches to the understanding of failure, and all result in indicators of proneness to failure.
Blockley's approach [20] is similar to one proposed by Pugsley [29], who, from a consideration of
accident histories, distilled a list of parameters which could increase the likelihood of accident, namely:
1. new or unusual materials
2. new or unusual methods of construction
3. new or unusual types of structure
4. experience and organisation of design and construction team
5. research and development background
6. industrial climate
7. financial climate
8. political climate
Pugsley did not intend the list to be definitive. He was discussing the approach, not the detail,
and said ``Clearly this list could be extended, or each item expanded, or restated under a smaller
number of broader heads.'' The aim was simply to give a framework by which an engineer or a
group of engineers could look at a structure or project ``and have a good chance of assessing in
broad terms its accident proneness.''
With a little modification the list of indicators can be made more general. In item 3, for
instance, ``structure'' could be replaced by ``product'', and in this form the list was used in
assessing the safety of nuclear-powered ships [27]. Item 4 could be expanded to include ongoing
management. The phrasing could also be changed to make the items consistent with one another:
as the list stands, the presence of some items (``new or unusual materials'', for instance) is intended as a bad sign, while others (such as ``industrial climate'') are neutral. There is also a question of
the completeness of the list. An additional item could be ``system complexity'', which is Perrow's
indicator of proneness to failure [19].
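As a minimal sketch of how such a safety-climate checklist might be recorded and reviewed in practice, the following assumes a hypothetical three-point rating for each indicator; the rating scale and the example entries are illustrative only and are not part of Pugsley's proposal.

from dataclasses import dataclass

# Hypothetical three-point rating for each safety-climate indicator;
# the scale and the example project entries are illustrative only.
RATINGS = {"favourable": 0, "neutral": 1, "unfavourable": 2}

@dataclass
class Indicator:
    name: str
    rating: str      # one of RATINGS
    note: str = ""

def review(indicators):
    """Flag unfavourable indicators and form a crude overall tally."""
    flagged = [i for i in indicators if RATINGS[i.rating] == 2]
    tally = sum(RATINGS[i.rating] for i in indicators)
    return flagged, tally

project = [
    Indicator("materials", "unfavourable", "new alloy, little service history"),
    Indicator("construction methods", "neutral"),
    Indicator("type of structure", "favourable"),
    Indicator("team experience and organisation", "neutral"),
    Indicator("R&D background", "favourable"),
    Indicator("industrial climate", "neutral"),
    Indicator("financial climate", "unfavourable", "schedule and budget pressure"),
    Indicator("political climate", "neutral"),
]

flagged, tally = review(project)
print("Needs attention:", [i.name for i in flagged])
print("Crude overall tally:", tally)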
In practical terms, the safety climate indicators represent a helpful aid to designers and managers in knowing when to look for possible trouble. They also indicate possible strategies for
minimising the possibility of failure. The nuclear-powered ship exercise referred to above
showed that the good safety record of US nuclear-powered ships stemmed in part from a
conservative approach to materials and manufacturing techniques, an emphasis on simplicity
in design, very tight management control and a budget that was carefully protected from political
pressures.
Turner's analysis [21] stopped short of suggesting specific measures for reducing the likelihood of
disaster. Nevertheless, his categories of information-flow problem given earlier can be taken as
indicators. It is difficult to see such information problems without the benefit of hindsight. What is
required is that people should be trained to look for them. There are two possible approaches: to
consider the psychology of error, or to look for general indicators of information-system problems.
Following Reason's approach [22], the problem is how to discover and neutralise latent errors.
As mentioned above, Reason developed a metaphor of ``resident pathogens''. He says: ``For the
pathogen metaphor to have any value, it is necessary to establish an a priori set of indicators
relating to system morbidity and then to demonstrate clear causal connections between these
indicators and accident liability across a wide range of complex systems and in a variety of accident conditions.'' He is right. Unfortunately, the indicators he proposes are far from clear.
The indicator associated with Blockley's balloon model [23] is the multiplicity of precursor
events and items. The greater the number of precursor events, latent errors and so on, the more
likely is the occurrence of a serious failure.
A limitation of the balloon metaphor is that it could be taken to imply that all that is required
to guarantee safety is to deal with a sufficient number of predisposing events or latent errors and
reduce the pressure in the balloon. It is good practice in safety management to manage incidents
and near misses, for this will indeed improve safety by reducing the likelihood of failure. In the
case of potential disasters, where the consequences of failure are grave, the same principles apply,
but only in part. Insofar as major failures occur because of the interaction between a number of
contributing events or errors, removing some of them and letting air out of the balloon will
reduce the likelihood of disaster. On the other hand, it is precisely Perrow's point that in a complex system, disaster is inevitable no matter how carefully and assiduously design and management are carried out. In the limit, there will be an infinite number of infinitely unlikely
combinations of events that could lead to disaster: the zero-infinity problem. The balloon model
could hide the need to address the problem of system complexity, though this was never its
intention.
A final indicator approach addresses system complexity using a metaphor of health [28,30]. It
provides a general framework for assessing the degree of proneness to failure of a system by
assessing its ``health''.
To be able to use the metaphor of health, it must first be clear what system is being addressed,
and diagnostic criteria must be found to detect symptoms of ill health.
The healthy-system (H-S) approach applies five criteria to a system. If all are fulfilled, the system is healthy. The criteria are: Balance, Completeness, Cohesion, Consistency, Discrimination.
A system will become unhealthy, or in the limit fail, if there is a failure of one or more of the
criteria, namely a failure:
- in balance, where some of the elements of the system are too large and others are too small
with regard to the system's purpose;
- in completeness, where the system has elements missing which are essential to its fulfilment;
- in cohesion, where something is wrong with the system's structure and with the way its elements relate to one another;
- in consistency, where elements or connections in the system are inconsistent either with each
other or with the system's purpose;
- in discrimination, where the various parts of a system and the way they interrelate are
unclear, ambiguous or confused.
Despite the apparent simplicity of the healthy-system criteria, their application in practice
requires care and experience. This is to be expected, partly because the approach is subjective,
and partly because the systems dealt with are complex.
The H-S criteria can be used in two quite different ways: for assessing the ``health'' of an
existing system, or as a guide for developing or designing a new system. It is important to
understand that in practice the criteria must be interpreted very broadly. They are guidelines,
rather than a checklist, and their role is to provide a general framework through which to view
complex systems.
A significant limitation common to all the indicator methods is that they are qualitative indicators. They are not expressed as numbers, and so there is considerable uncertainty in their
application. They are powerful guides in experienced hands. However, there are significant problems in training inexperienced people to use them and in ensuring uniform standards of application throughout an organisation or between projects. One approach to allowing indicator
methods to produce quantitative measures appears to be the use of questionnaires. Psychologists
have developed questionnaire techniques into robust methods with repeatable numerical outcomes in the form of correlation coefficients and so on. Research in this direction is still in its
infancy and has not yet reached publication. Its initial applications are aimed at ongoing safety
management of large safety systems. However, it is a promising approach that may well find its way
into structural engineering as a means of bridging the gap between qualitative and quantitative
assessment.
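As a minimal sketch of how questionnaire responses might yield repeatable numerical measures, the following assumes a hypothetical five-point agreement scale and simple per-criterion averaging; the items, scale and aggregation rule are illustrative only and are not taken from the research referred to above.

from statistics import mean

# Hypothetical questionnaire: each item is scored 1 (strongly disagree)
# to 5 (strongly agree) and is linked to one assessment criterion.
responses = [
    ("completeness", "All load cases relevant to the site have been identified.", 4),
    ("completeness", "Checking responsibilities are assigned for every stage.", 2),
    ("consistency",  "Design assumptions are used consistently across disciplines.", 5),
    ("consistency",  "Drawings and calculations refer to the same revision of the brief.", 3),
    ("cohesion",     "Information flows between designer and contractor are defined.", 2),
]

def criterion_scores(items):
    """Average the item scores for each criterion, normalised to 0-1."""
    by_criterion = {}
    for criterion, _question, score in items:
        by_criterion.setdefault(criterion, []).append(score)
    return {c: (mean(s) - 1) / 4 for c, s in by_criterion.items()}

# Low scores are a prompt to investigate, not a verdict in themselves.
for criterion, score in sorted(criterion_scores(responses).items()):
    flag = "  <- investigate" if score < 0.5 else ""
    print(f"{criterion:12s} {score:.2f}{flag}")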
6. Conclusion
Throughout this paper it has been assumed that conventional means of ensuring safety through
good practice and design are well-understood, so that the emphasis has been on overarching
concepts and on less conventional and indirect approaches. Definitions have had a major part to
play as they help provide clarity in discussion and understanding. The nature of safety and related
concepts has been considered at length, together with appropriate value-measures. When dealing with threats to safety, the weight of the discussion has been placed on systemic
threats rather than on those that are better understood, such as loads. Similarly, consideration of
the means of achieving safety emphasised indirect rather than direct approaches.
Indirect approaches to safety assessment and management are becoming increasingly important as design and construction become more sophisticated. However, there is much development
still to be carried out before they are widely used in a formal sense by structural engineers. Ideas
need to be refined and developed through application in practice, and the field of computer-based
aids for indirect safety appraisal and management is still in its early days. A central theme of this
paper is that an integrated and balanced, and therefore a systems-oriented, approach is needed.
There is much to be done, and it needs to be approached with vigour.
References
[1] Matousek M. Quality assurance. In: Blockley DI, editor. Engineering safety. Maidenhead (UK): McGraw-Hill, 1992. p. 72-88.
[2] Pugsley AG. Concepts of safety in structural engineering. J Inst Civil Engrs 1951;36(5):5-51.
[3] Griffiths H, Pugsley A, Saunders OA. Report of the inquiry into the collapse of flats at Ronan Point, Canning Town: presented to the Minister of Housing and Local Government. London: HMSO, 1968.
[4] Barber EHE. Report of Royal Commission into the failure of Kings Bridge: presented to both Houses of Parliament by His Excellency's command. Melbourne (Australia): Government Printer, 1963.
[5] Barber EHE. Report of Royal Commission into the failure of West Gate Bridge. Melbourne (Australia): Government Printer, 1971.
[6] Elms DG. Risk assessment. In: Blockley DI, editor. Engineering safety. Maidenhead (UK): McGraw-Hill, 1992. p. 28-46.
[7] Elms DG. Risk management: general issues. In: Elms DG, editor. Owning the future: integrated risk management principles and practice. New Zealand: Centre for Advanced Engineering, 1998. p. 43-62.
[8] Brown CB. A fuzzy safety measure. Proc ASCE, Jour Engrg Mech Div 1979;105(EM5):855-72.
[9] Royal Society. Risk: analysis, perception and management. London: Royal Society, 1992.
[10] Stephens KG. Using risk methodology to avoid failure. In: Elms DG, editor. Owning the future: integrated risk management principles and practice. New Zealand: Centre for Advanced Engineering, 1998. p. 303-8.
[11] Aven T. Reliability and risk analysis. London: Elsevier Applied Science, 1992.
[12] Health and Safety Commission, UK. Major hazards aspects of the transport of dangerous substances. London: HMSO, 1991.
[13] United States Nuclear Regulatory Commission. Reactor safety study (USNRC, WASH 1400). Washington: USNRC, 1975.
[14] Duffell R. Toward the environment and sustainability ethic in engineering education and practice. Jour Prof Issues in Engineering Education and Practice (ASCE), v. 124, p. 78-90.
[15] Peet NJ, Bossel H. Ethics and sustainable development: being fully human and creating a better future. Proc. Int. Soc. for Ecological Economics Conf., Beyond Growth: Policies and Institutions for Sustainability, Santiago (Chile), November 1998.
[16] Forrester JW. Urban dynamics. Cambridge (MA): MIT Press, 1969.
[17] Hale A. Introduction: the goals of event analysis. In: Hale A, Wilpert B, Freitag M, editors. After the event: from accident to organisational learning. Oxford (UK): Elsevier Science Ltd, 1997.
[18] Casti JL. Searching for certainty: what scientists can know about the future. London: Abacus, 1991.
[19] Perrow C. Normal accidents: living with high-risk technologies. Basic Books, 1984.
[20] Blockley DI. The nature of structural safety. London: Ellis Horwood, 1980.
[21] Turner BA. Manmade disasters. London: Wykeham Publications, 1978.
[22] Reason J. Human error. New York: Cambridge University Press, 1990.
[23] Blockley DI. Concluding reflections. In: Blockley DI, editor. Engineering safety. Maidenhead (UK): McGraw-Hill, 1992. p. 446-65.
[24] Jenkins AM, Brearley SA, Stephens P. Management at risk. Culcheth (Cheshire): The SRD Association, 1991.
[25] Elms DG. The principle of consistent crudeness. Proc. Workshop on Civil Engineering Application of Fuzzy Sets, Purdue University, IN, 1985. p. 35-44.
[26] Elms DG. Consistent crudeness in system construction. In: Topping BHV, editor. Optimisation and artificial intelligence in civil engineering, vol. 1. Dordrecht (The Netherlands): Kluwer Academic Publishers, 1992. p. 61-70.
[27] Somers E, Bergquist P, Elms DG, Poletti A. The safety of nuclear powered ships. Wellington (New Zealand): Department of the Prime Minister and Cabinet, 1992.
[28] Elms DG. Indicator approaches for risk management and appraisal. In: Stewart M, Melchers R, editors. Integrated risk assessment. Rotterdam: Balkema, 1998. p. 53-9.
[29] Pugsley AG. The prediction of proneness to structural accidents. The Structural Engineer 1973;51(6):195-6.
[30] Elms DG. System health approaches for risk management and design. In: Shiraishi, Shinozuka, Wen, editors. Structural safety and reliability: Proc. ICOSSAR '97, Kyoto (Japan), November 1997. Rotterdam: Balkema, 1998. p. 271-7.