GUIDELINES FOR
Chemical
Process
Quantitative
Risk Analysis
SECOND EDITION
CENTER FOR CHEMICAL PROCESS SAFETY
of the
AMERICAN INSTITUTE OF CHEMICAL ENGINEERS
3 Park Avenue
New York, New York 10016-5991
Copyright © 2000
American Institute of Chemical Engineers
3 Park Avenue
New York, New York 10016-5991
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, or otherwise without the prior permission of the copyright owner.
Library of Congress Cataloging-in-Publication Data
CIP data has been applied for.
ISBN: 0-8169-0720-X
PRINTED IN THE UNITED STATES OF AMERICA
10 9 8 7 6 5 4 3 2 1
It is sincerely hoped that the information presented in this volume will lead to an even more impressive safety record for
the entire industry; however, the American Institute of Chemical Engineers, its consultants, CCPS Subcommittee
members, their employers, and their employers' officers and directors disclaim making or giving any warranties or
representations, express or implied, including with respect to fitness, intended purpose, use or merchantability and/or
correctness or accuracy of the content of the information presented in this document and accompanying software. As
between (1) American Institute of Chemical Engineers, its consultants, CCPS Subcommittee members, their employers,
their employers' officers and directors and (2) the user of this document and accompanying software, the user accepts any
legal liability or responsibility whatsoever for the consequences of its use or misuse.
This book is available at a special discount when ordered in
bulk quantities. For information, contact the Center for
Chemical Process Safety at the address shown above.
Preface
The American Institute of Chemical Engineers (AIChE) has a long history of involvement with process safety and loss control for the chemical and petrochemical industries. Through its strong ties with process designers, constructors, operators, safety
professionals, and academia, the AIChE has enhanced communications and fostered
improvement in the high safety standards of the industry. AIChE publications and
symposia are an important resource for the chemical engineering profession on the
causes of accidents and means of prevention.
The Center for Chemical Process Safety (CCPS) was established in 1985 by the
AIChE to develop and disseminate technical information for use in the prevention of
chemical process accidents. The Center is supported by nearly 100 organizations,
including oil and chemical companies, engineering design and construction companies, engineering consultants, universities, and government agencies, which are associated with the chemical processing industries. Since its founding, the CCPS has
sponsored numerous symposia, organized and sponsored research in process safety
related areas, and published an extensive series of "Guidelines" books which are
regarded as a primary source of process safety information.
One of the early "Guidelines" books was the Guidelines for Chemical Process Quantitative Risk Analysis (CPQRA Guidelines), published in 1989. This book was intended to
provide a complete overview of the tools and techniques required to do a quantitative
analysis of the risk associated with the immediate impact of potential episodic accident
events such as fires, explosions, and the release of acutely toxic material. The book was
directed toward the analysis of acute hazards, not chronic health effects. The CPQRA
Guidelines is part of a series of "Guidelines" books which address process hazard identification and analysis, risk assessment, and risk decision making. Related CCPS books
include:
• Guidelines for Process Equipment Reliability Data (1989)
• Guidelines for Hazard Evaluation Procedures, 2nd Edition with Worked Examples
(1992)
• Guidelines for Preventing Human Error in Process Safety (1994)
• Tools for Making Acute Risk Decisions with Chemical Process Safety Applications
(1995)
• Guidelines for Transportation Risk Analysis (1995)
Since its original publication in 1989, the CPQRA Guidelines has been a primary
resource for those in the chemical industry who use quantitative risk analysis as a risk
management tool. In 1995, the CCPS Risk Analysis Subcommittee decided that there
had been sufficient advances in the technology of risk analysis that an updated edition
was appropriate. This update is intended to:
• Provide more detail on selected techniques than available in the original edition
• Update the models based on improvements in modeling technology
• Provide more worked examples
• Provide spreadsheet implementation of the consequence analysis examples, available on a disk.
Since the publication of the original CPQRA Guidelines in 1989, much has
occurred in the area of consequence models, the topic of Chapter 2. For this reason, the
most significant changes in the second edition will be found in Chapter 2. The revision
provides more detail on consequence models, including more models and a more complete presentation on the fundamental basis, updates the models based on improvements and experience in modeling technology, and provides more worked examples.
All of the worked examples in Chapter 2 have also been provided with spreadsheet
solutions in a disk included with this book.
The outline of the original book, including Chapter 2, was maintained, with the
exception that a separate section on jet fire models was included in Chapter 2. The sections in the revised book retain the structure of the original. Each modeling section in
Chapter 2 contains a presentation of the purpose, philosophy, applications, description
of the technique, a logic diagram, theoretical foundation, input requirements and availability, output, simplified approaches, and sample problems. A discussion section for
each modeling section contains a presentation on strengths and weaknesses, identification and treatment of possible errors, utility, resources needed, and available computer
codes.
The other chapters of the book have also been updated significantly, but less extensively. Chapter 1, describing the overall framework of CPQRA, has been updated, and
some discussion of risk guidelines and criteria has been incorporated. Chapter 2
(Consequence Analysis) has been extensively rewritten and expanded as described
above. In Chapter 3 (Frequency Analysis), the section on common cause failure has
been updated to incorporate new techniques and methods. New worked examples have
been added to Chapter 4 (Risk Calculation) to illustrate risk calculation techniques,
and the chapter discusses the calculation of "Aggregate Risk", as used in API 752
(1995), "Management of Hazards Associated with Location of Process Plant Buildings." Chapter 5 (CPQRA Data) has been updated to include current information on
sources of data required for a CPQRA. A discussion of "Sneak Analysis" has been
added to Chapter 6 (Special Topics), and the discussion of Markov Analysis has been
expanded. The example problems in Chapter 8 have been reworked, to correct some
minor mathematical errors and use more accurate estimates of the impact area of the
incidents considered. Chapter 9, on future research needs, has been updated, and a
brief discussion of software safety has been added. The Appendices are essentially
unchanged.
Preface to the First Edition
The American Institute of Chemical Engineers (AIChE) has a 30-year history of
involvement with process safety and loss control for chemical and petrochemical
plants. Through its strong ties with process designers, constructors, operators, safety
professionals, and academia, the AIChE has enhanced communication and fostered
improvement in the high safety standards of the industry. AIChE publications and
symposia have become an information resource for the chemical engineering profession on the causes of accidents and means of prevention.
The Center for Chemical Process Safety (CCPS) was established in 1985 by the
AIChE to develop and disseminate technical information for use in the prevention of
major chemical accidents. The Center is supported by over 60 industrial sponsors in the
chemical process industry (CPI), who provide the necessary funding and professional
guidance to its technical committees. Since its founding, CCPS has published four volumes in its Guidelines series.
• Guidelines for Hazard Evaluation Procedures (hereafter referred to as HEP Guidelines) addresses methods of identifying, assessing, and reducing hazards.
• Guidelines for Use of Vapor Cloud Dispersion Models (hereafter referred to as
VCDM Guidelines) surveys the current (at the time of publication) vapor cloud
dispersion models, shows how to use them, and discusses their strengths and
weaknesses.
• Guidelines for Safe Storage and Handling of High Toxic Hazard Materials (hereafter referred to as SHTM Guidelines) discusses techniques that are used to minimize releases of high toxic hazard vapors. The presentation ranges from
improving the inherent safety of the process to improving the reliability of
piping and vessels.
• Guidelines for Vapor Release Mitigation (hereafter referred to as VRM Guidelines)
discusses the techniques that are used to mitigate vapor releases from venting,
equipment failure, etc.
The Guidelines for Chemical Process Quantitative Risk Analysis (hereafter referred to as
CPQRA Guidelines) builds on the Guidelines for Hazard Evaluation Procedures to show
the engineer how to make quantitative risk estimates for the hazards identified by the
techniques given in that volume. A companion book, Guidelines for Process Equipment
Reliability Data (hereafter referred to as PERD Guidelines), is expected to be issued
concurrently.
The CPI has developed a format and scope for quantitative risk analysis distinct
from that used elsewhere (e.g., in the nuclear industry's Probabilistic Risk Assessments). To emphasize the distinction, this volume uses the term "Chemical Process
Quantitative Risk Analysis" (CPQRA) for the methodology covered herein.
Before discussing this volume, it should be noted that the primary goal of CPQRA
is to provide tools for reducing high risks in chemical plants handling hazardous materials. In applying these tools to a specific operation, appropriate management actions,
based on results from a CPQRA study, help to make facilities handling hazardous
chemicals safer. That is, quantitative estimates of risk allow major risk contributors to
be identified and the effectiveness of various risk reduction measures to be determined.
They also give guidance to the facility and to its neighbors in evaluating emergency
response plans. CPQRA may also highlight areas that require attention in risk management programs.
Process releases are sometimes classified into four groups: continuous process
vents, fugitive losses, emergency relief vents, and emergency unplanned episodic
releases. This book is directed only toward the analysis of acute hazards represented by
the last two groups of releases. It does not consider chronic health effects.
Similar in some respects to a discounted cash flow analysis, CPQRA provides an
estimate of future performance. The estimate's uncertainty is directly proportional to
the depth and detail of the calculation and the quality of data available and used. Whereas
a discounted cash flow deals with estimates with accuracy of ± 15%, CPQRA estimates
have much greater uncertainty, typically one or more orders of magnitude. Given the
infinite number of potential incidents, insufficient data, limited resources, and inherent
uncertainties, an in-depth CPQRA cannot be accomplished for most of the industry's
plants, processes, operating systems, and equipment. This book explains procedures to
select from a wide range of methods—from relatively simple to progressively more
complex—the CPQRA techniques that are appropriate to prepare the risk estimate
required.
The CPQRA Guidelines provide the following:
• For process engineers, guidance on CPQRA techniques, so they can understand the terminology, communicate with risk analysts, perform a simple
CPQRA study, and understand and present the results.
• An overview of CPQRA so that senior management, unit and project management, and practicing chemical engineers can understand how risk estimates are
developed, their uncertainty and limitations, and how to interpret and use the
results.
• For unit and project management, a guide to the utility of CPQRA, the likely
complexity of the study, the resources needed, and appropriate use at different
stages of facility life.
Careful study of the material in this book can produce only a basic level of competence. Furthermore, the reader must recognize that CPQRA does not provide exact
answers; inadequacies in the data and the models lead to uncertainty in the results. The
engineer who needs a deeper understanding of the discipline can consult the literature
listed in the References and Bibliography, and can consider formal training such as that
listed in Appendix B.
In this volume, CCPS has endeavored—through text, worked examples, and case
studies—to make the reader aware of the potential of CPQRA and its component techniques. Techniques have been selected that permit an adequate estimate of risk to be
obtained with a reasonable amount of effort. Although future improvements in models
and data are probable, the general methodology (as presented herein) is not likely to
change significantly.
Because of the comprehensiveness and complexity of the CPQRA Guidelines, there
may be some inconsistencies, errors, etc. in this book. The reader's comments, suggestions for improvements, and supporting rationale on deficiencies or errors are welcomed. These will be collected, reviewed, and made public immediately (where
warranted) or during the next revision of the book. Please direct any comments on
these Guidelines to
AIChE/CCPS
Attention: CPQRA Guidelines
3 Park Avenue
New York, NY 10016-5991
Acknowledgments
The Guidelines for Chemical Process Quantitative Risk Analysis, Second Edition (CPQRA
Guidelines) has been updated from the initial 1989 edition under the guidance of the
Center for Chemical Process Safety (CCPS) Risk Assessment Subcommittee (RASC).
Most of the material from the initial (1989) edition of the book, which was written by
the 1989 RASC members, Technica, Inc. (now DNV Technica), and several other
contributors, remains in this edition. The contributions of the original edition authors
are listed in the "Acknowledgments to the First Edition."
Writing of new material for the Second Edition and updating and revision of the
original text of the CPQRA Guidelines was done by the RASC with the aid of several other authors. The RASC was chaired by Dennis C. Hendershot (Rohm and Haas
Company), and the RASC members include Brian R. Dunbobbin and Walter Silowka
(Air Products and Chemicals, Inc.), Arthur G. Mundt (Dow Chemical), William
Tilton (DuPont), Scott Ostrowski (ExxonMobil Chemical), Donald L. Winter
(Mobil), Raymond A. Freeman (Solutia), Arthur Woltman (Shell), Thomas Janicik
(Solvay Polymers), Richard M. Gustafson (Texaco), William K. Lutz (Union Carbide), Chuck Fryman (FMC), Delia Wong (Nova Chemicals, Ltd.), Felix Freiheiter
and Thomas Gibson (Center for Chemical Process Safety).
The RASC particularly recognizes the major contribution of Dr. Daniel A. Crowl
of Michigan Technological University, for extensively revising Chapter 2 (Consequence Analysis) of this book, including the addition of a large amount of new and
original material. Dr. Crowl also developed the set of Chapter 2 example problems and
spreadsheet solutions which are included with the book, and provided oversight for the
revisions to the example problems in Chapter 8.
Other volunteer authors also made major contributions to this edition:
• Dr. Henrique Paula of JBF Associates provided the revised discussion of
common cause failure in Section 3.3.1.
• Mr. Paris Stavrianidis of Factory Mutual Research revised the discussion of
Markov Analysis in Chapter 6.
• Mr. James Vogas of Boeing Aerospace Operations provided the discussion of
Sneak Analysis in Chapter 6.
• Mr. Robert Charette of Itabhi Corporation contributed the discussion of software safety in Chapter 9.
• Mr. Chad Mashuga of Michigan Technological University revised and updated
calculations in the example problems in Chapter 8, and also updated and
improved the text, figures, and tables.
The RASC also thanks the CCPS management and staff for their support of this
project, including Mr. Bob Perry, Dr. Jack Weaver, and Mr. Les Wittenberg. The
RASC also thanks the following for their peer review of CPQRA Guidelines, Second
Edition:
Sanjeev Mohindra, Arthur D. Little, Inc.
Henry Ozog, Arthur D. Little, Inc.
Kenneth H. Harrington, Battelle Memorial Institute
James L. Paul, Celanese
Jack Philley, DNV
David W. Jones, EQE International
Walter L. Frank, EQE International
Adrian Garcia, FMC Corp.
John A. Hoffmeister, Lockheed Martin Energy Systems, Inc.
Jan C. Windhorst, Nova Chemicals, Ltd.
Willard C. Gekler, PLG, Inc.
Paul Baybutt, Primatech, Inc.
Peter Fletcher, Raytheon Engineers & Constructors, Inc.
Gerard Opschoor, TNO Prins Maurits Laboratorium
Ken Murphy, U.S. Department of Energy
Jim Lightner, Westinghouse Savannah River Co.
The RASC dedicates this book to two of our friends and colleagues, Mr. Donald L.
Winter of Mobil Oil Corporation, and Mr. Felix Freiheiter of the Center for Chemical
Process Safety. Both were significant contributors to the Second Edition, and to the
many other activities of the CCPS Risk Assessment Subcommittee for many years. Mr.
Winter unfortunately passed away due to a sudden illness during the later stages of the
writing of the book. Mr. Freiheiter also passed away as the book was being prepared for
publication. Their influence can be found throughout the book.
Acknowledgments to
the First Edition
This volume was written jointly by the CCPS Risk Assessment Subcommittee and
Technica, Inc. The CCPS Subcommittee was chaired by R. W. Ormsby (Air Products
and Chemicals), and included (in alphabetical order): R. E. DeHart, II (Union Carbide), H. H. Feng (ICI Americas, formerly of Stauffer Chemical), R. A. Freeman
(Monsanto), S. B. Gibson (du Pont), D. C. Hendershot (Rohm and Haas), C. A.
Master (Fluor Daniel), R. F. Schwab (Allied-Signal), and J. C. Sweeney (ARCO
Chemical). T. W. Carmody, F. Freiheiter, R. G. Hill, and L. H. Wittenberg of CCPS
provided staff support. The Technica Team was directed by D. H. Slater and managed
by R. M. Pitblado. The Technica team included B. Morgan, A. Shafaghi, L. G. Bacon,
M. A. Seaman, L. J. Bellamy, S. R. Harris, P. Baybutt, D. M. Boult, and N. C. Harris.
F. P. Lees (University of Loughborough) reviewed an early draft of the document and
his comments are gratefully acknowledged. The substantial contributions of the
employer organizations (both in time and resources) of the Subcommittee and of
Technica are gratefully acknowledged.
An acknowledgment is also made to JBF Associates, Inc. (J. S. Arendt, D. F.
Montague, H. M. Paula, L. E. Palko) for their preparation of the subsection on
common cause failure analysis (Section 3.3.1) and inclusion of additional material in
the section on fault tree analysis (Section 3.2.1), and to Meridian Corporation (C. O.
Schultz and W. S. Perry) for the preparation of the section on toxic gas effects (Section
2.3.1).
Two specific individuals should also be acknowledged for significant contributions: C. W. Thurston of Union Carbide for assistance in the preparation of the subsection on programmable electronic systems (Section 6.3) and G. K. Lee of Air
Products and Chemicals who assisted in the preparation of the subsections addressing discharge rates, flash and evaporation, and dispersion (Sections 2.1.1, 2.1.2, and
2.1.3).
Finally, the CCPS Risk Assessment Subcommittee wishes to express its sincere
gratitude to Dr. Elisabeth M. Drake for reviewing the final manuscript and her many
helpful comments and suggestions.
Management Overview
Risk analysis methodology has been applied to various modern technologies such as
aerospace, electronics, and nuclear power. Over the last 15 years this methodology has
been adapted to the particular needs of the CPI. The Center for Chemical Process
Safety (CCPS) of AIChE has developed this book to provide a guidance manual for the
application of this methodology. The term "Chemical Process Quantitative Risk Analysis" (CPQRA) is used to emphasize the unique character of this methodology as
applied to the CPI.
CPQRA identifies those areas where operations, engineering, or management
systems may be modified to reduce risk, and may identify the most economical way to
do it. It can be applied in the initial siting and design of the facility and during its entire
life. The primary goal of CPQRA is that appropriate management actions, based on
results from a CPQRA study, help to make facilities handling hazardous chemicals
safer.
CPQRA is one component of an organization's total risk management. It allows
the quantitative analysis of risk alternatives that can be balanced against other considerations. Management can then make more informed, cost-effective decisions on the allocation of resources for risk reduction. The reader is reminded that the AIChE/CCPS
publication A Challenge to Commitment strongly encourages CPI management to have
an effective, comprehensive process safety and risk management program.
CPQRA can be applied at any stage in the life of a facility. The depth of study may
vary depending on the objectives and information available. Maximum benefits result
when CPQRA is applied at the beginning (conceptual and design stages) of a project
and maintained throughout its life.
Although elements of a CPQRA process are being practiced today in the CPI, only
a few organizations have integrated this process into their risk management program.
However, application of CPQRA is becoming more widespread and may become an
integral part of more companies' risk management programs. The reason that these
methods are not in more widespread use is that detailed CPQRA techniques are complex and cost intensive, and require special resources and trained personnel. Also,
CPQRA techniques have not been well understood and described in the literature.
However, an investment in CPQRA often pays tangible returns in identifying cost
effective process or operational improvements.
The philosophy behind this volume is to provide an introduction to CPQRA
methodology in sufficient depth so that a process engineer with some practice can
undertake simple CPQRA studies with minimal outside assistance. The engineer
should be able to
1. respond to a request for a risk assessment;
2. convert the request into a definable study objective;
3. develop a scope of work;
4. understand the types of data and sources of information required;
5. estimate the time and costs for the study;
6. calculate the results;
7. analyze the results for reasonableness;
8. present the results in a useful format.
The careful definition of scope and depth of study in the application of CPQRA is
crucial to success because it is cost and resource intensive. Detailed CPQRA should be
used sparingly and only to that depth of study necessary to achieve a study's goals and
objectives. If not properly controlled, even a simple CPQRA can generate an unmanageable calculation burden.
Careful study of the material in these guidelines can produce only a basic level of
competence. Supplementary review of important references and training is essential.
This book is directed toward the assessment of episodic, short-term hazards rather than
chronic health hazards. Discussion of regulatory issues, public risk perception, risk criteria, and acceptable risk are excluded.
Finally, it should be thoroughly understood that CPQRA methodology is a
sophisticated analysis tool that requires fundamental assumptions about the management and maintenance systems and process safety programs in place at a given facility.
Unless management is committed to process safety and has those necessary support
programs in place, use of CPQRA is futile. However, with this support, CPQRA can
be a valuable complementary tool to improving safety in a chemical plant.
Organization of the Guidelines
This volume provides an introduction to the techniques of CPQRA in sufficient detail
that process engineers, not specifically trained in this technology, should be able to
undertake elementary risk analysis studies. Numerous worked examples and case studies have been provided to illustrate each component technique. A comprehensive bibliography provides references to more advanced topics.
Chapter 1 describes the broad framework of CPQRA, its component techniques,
and current practices.
Chapter 2 summarizes quantitative techniques used for consequence analysis,
including physical models for fire, explosions, and dispersion of flammable or toxic
materials (Sections 2.1 and 2.2). Human health effects and structural damage are
reviewed (Section 2.3), along with evasive actions (Section 2.4) such as shelter, escape,
and evacuation.
Chapter 3 reviews quantitative techniques for estimating incident frequency,
including the historical record (Section 3.1), fault tree analysis (Section 3.2.1) and
event tree analysis (Section 3.2.2). Complementary techniques are also reviewed,
including common-cause failures (Section 3.3.1), human reliability (Section 3.3.2),
and external event (Section 3.3.3) analyses.
Chapter 4 provides a description of commonly used risk measures (Section 4.1),
their forms of presentation (Section 4.2), guidelines for their selection (Section 4.3),
and methods for their calculation (Section 4.4). Importance, uncertainty, and sensitivity factors are also addressed (Section 4.5).
Chapter 5 discusses data sources used in CPQRA, including historical incident
data (Section 5.1), process and plant data (Section 5.2), chemical data (Section 5.3),
environmental data (Section 5.4), equipment reliability data (Section 5.5), human reliability data (Section 5.6), and the use of expert opinion (Section 5.7).
Chapter 6 reviews, briefly, several topics that are relevant to CPQRA, including
domino effects (Section 6.1), unavailability of protective systems (Section 6.2), reliability of programmable electronic systems (Section 6.3), and other techniques (Section 6.4).
Chapter 7 addresses the application and utilization of CPQRA results, with examples ranging from the simple to the complex.
Chapter 8 provides two case studies to demonstrate the application of CPQRA
techniques. The first case study of a railcar loading terminal (Section 8.1) is designed to
use manual calculation techniques. The second case study, a hydrocarbon distillation
column (Section 8.2), employs slightly more sophisticated modeling techniques.
Chapter 9 discusses research and development needs to improve CPQRA. Following Chapter 9, various appendices supporting the material in the text are provided.
Finally, a summary of available computer software has been provided in Chapters
2, 3, and 4, presenting models for consequence, frequency, and risk estimation,
respectively.
Nomenclature and Units
The equations in the volume are from a number of disciplines and reference sources,
which do not have consistent nomenclature (symbols) and units. In order to facilitate
comparisons with the sources, we have used the conventions of each source, rather than
impose a standard "across the board" for the volume.
Nomenclature and units are given after each equation (or set of equations) in the
text. Readers are cautioned to ensure that they are using the proper values when applying these equations to their problems.
Acronyms
AAR	Association of American Railroads
ACGIH	American Conference of Governmental Industrial Hygienists
ACMH	Advisory Commission on Major Hazards
AEC	Atomic Energy Commission
AGA	American Gas Association
AIChE/CCPS	American Institute of Chemical Engineers—Center for Chemical Process Safety
AIChE-DIERS	American Institute of Chemical Engineers—Design Institute for Emergency Relief Systems
AIChE-DIPPR	American Institute of Chemical Engineers—Design Institute for Physical Property Data
AIHA	American Industrial Hygiene Association
AIT	Auto-Ignition Temperature
API	American Petroleum Institute
ARC	Accelerating Rate Calorimetry
ASEP	Accident Sequence Evaluation Program
ASME	American Society of Mechanical Engineers
ATC	Acute Toxic Concentration
BLEVE	Boiling Liquid Expanding Vapor Explosion
CAER	Community Awareness and Emergency Response
CCF	Common Cause Failure
CCPS	Center for Chemical Process Safety
CEP	Chemical Engineering Progress
CFD	Computational Fluid Dynamics
CMA	Chemical Manufacturers Association
CONSEQ	Consequence Analysis Computer Software (DNV)
CPI	Chemical Process Industry
CPU	Computer Processing Unit
CPQRA	Chemical Process Quantitative Risk Analysis
CRC	Chemical Rubber Company
CSTR	Continuous Stirred Tank Reactor
DDCS	Distributed Digital Control System
DOE	Department of Energy
DOT	Department of Transportation
DSC	Differential Scanning Calorimeter
EEC	European Economic Community
EEGL	Emergency Exposure Guidance Level
EFCE	European Federation of Chemical Engineers
EF	Error Factor
ENF	Expected Number of Failures
EPA	Environmental Protection Agency
EPRI	Electric Power Research Institute
ERPG	Emergency Response Planning Guidelines
ERV	Emergency Response Value
ESD	Emergency Shutdown Device
ESV	Emergency Shutdown Valve
ETA	Event Tree Analysis
EuReDatA	European Reliability Data Association
FAR	Fatal Accident Rate
FDT	Fractional Dead Time
FEMA	Federal Emergency Management Agency
FMEA	Failure Modes and Effects Analysis
FN	Frequency Number
FR	Failure Rate
FTA	Fault Tree Analysis
HAZOP	Hazard and Operability
HEART	Human Error Assessment and Reduction Technique
HEP	Hazard Evaluation Procedures
HFA	Human Failure Analysis
HMSO	Her Majesty's Stationery Office
HRA	Human Reliability Analysis
HSE	Health and Safety Executive
IChemE	Institution of Chemical Engineers (Great Britain)
ICI	Imperial Chemical Industries
IDLH	Immediately Dangerous to Life and Health
IEEE	Institute of Electrical and Electronics Engineers
IFAL	Instantaneous Fractional Annual Loss
IHI	Individual Hazard Index
INPO	Institute of Nuclear Power Operations
IPRDS	In-Plant Reliability Data System
ISBN	International Standard Book Number
KTT	Kinetic Tree Theory
LC	Lethal Concentration
LCL	Lower Confidence Limit
LD	Lethal Dose
LFL	Lower Flammable Limit
LNG	Liquified Natural Gas
LOC	Level of Concern
LPG	Liquified Petroleum Gas
MAPPS	Maintenance Personnel Performance Simulation
MFEA	Multiple Failure/Error Analysis
MIL-HDBK	Department of Defense Military Handbook
MOCUS	Computer Program for Minimal Cut Set Determination
MR	Median Rank
MSDS	Material Safety Data Sheets
MORT	Management Oversight and Risk Tree Analysis
MTBF	Mean Time Between Failure
NAS	National Academy of Sciences
NASA	National Aeronautics and Space Administration
NFPA	National Fire Protection Association
NIOSH	National Institute for Occupational Safety and Health
NJ-DEP	New Jersey Department of Environmental Protection
NOAA	National Oceanic and Atmospheric Administration
NPRDS	Nuclear Plant Reliability Data System
NRC	National Research Council
NSC	National Safety Council
NTIS	National Technical Information Service
NTSB	National Transportation Safety Board
NUREG	Nuclear Regulatory Commission
OAT	Operator Action Tree
OREDA	Offshore Reliability Data Handbook
ORC	Organization Resources Counselors, Inc. (Washington, DC)
OSHA	Occupational Safety and Health Administration
PC	Paired Comparisons
PE	Process Engineer
PEL	Permissible Exposure Limits
PERD	Process Equipment Reliability Data
PES	Programmable Electronic System
PFD	Process Flow Diagram
PFOD	Probability of Failure on Demand
PHA	Preliminary Hazard Analysis
P&ID	Piping and Instrumentation Diagram
PV	Pressure Volume
PLC	Programmable Logic Controller
PLG	Pressurized Liquified Gas
PRA	Probabilistic Risk Assessment
RBD	Reliability Block Diagram
R&D	Research and Development
RLG	Refrigerated Liquified Gas
RMP	Risk Management Plan
ROD	Average Rate of Death
ROF	Average Rate of Failure
RSST	Reactive Systems Screening Tool
RTECS	Registry of Toxic Effects of Chemical Substances
SCRAM	Support Center for Regulatory Air Models
SHTM         Storage and Handling of High Toxic Hazard Materials
SLIM-MAUD    Success Likelihood Index Methodology-Multi-Attribute Utility Decomposition
SPEGL        Short Term Public Emergency Guidance Levels
SRD          Safety and Reliability Directorate (U.K. Atomic Energy Authority, Warrington, England)
SRS          System Reliability Service
STEL         Short Term Exposure Limits
SYREL        Systems Reliability Service Data Base
TCPA         Toxic Catastrophe Prevention Act
THERP        Technique for Human Error Rate Prediction
TNT          Trinitrotoluene
TLV          Threshold Limit Values
TNO          Netherlands Organization for Applied Scientific Research
TXDS         Toxicity Dispersion
UCL          Upper Confidence Limit
UFL          Upper Flammable Limit
UCSIP        Union des Chambres Syndicales de l'Industrie du Pétrole
UNIDO        United Nations Industrial Development Organization
VCDM         Vapor Cloud Dispersion Modeling
VCE          Vapor Cloud Explosion
VDI          Verein Deutscher Ingenieure
VRM          Vapor Release Mitigation
VSP          Vent Sizing Package
WASH-1400    Reactor Safety Study (Rasmussen, 1975)
WDT          Watchdog Timer
Contents
Preface .......................................................................................... xi
Preface to the First Edition ............................................................ xiii
Acknowledgments ......................................................................... xvii
Acknowledgments to the First Edition ........................................... xix
Management Overview .................................................................. xxi
Organization of the Guidelines ...................................................... xxiii
Acronyms ...................................................................................... xxv
1. Chemical Process Quantitative Risk Analysis .................... 1
1.1 CPQRA Definitions .................................................................. 5
1.2 Component Techniques of CPQRA ........................................ 7
1.2.1 Complete CPQRA Procedure ...................................... 7
1.2.2 Prioritized CPQRA Procedure ..................................... 13
1.3 Scope of CPQRA Studies ....................................................... 15
1.3.1 The Study Cube .......................................................... 15
1.3.2 Typical Goals of CPQRAs ........................................... 18
1.4 Management of Incident Lists ................................................. 19
1.4.1 Enumeration ................................................................ 20
1.4.2 Selection ..................................................................... 24
1.4.3 Tracking ...................................................................... 29
1.5 Applications of CPQRA ........................................................... 29
1.5.1 Screening Techniques ................................................ 30
1.5.2 Applications within Existing Facilities .......................... 32
1.5.3 Applications within New Projects ................................ 32
1.6 Limitations of CPQRA ............................................................. 33
This page has been reformatted by Knovel to provide easier navigation.
1.7 Current Practices .................................................................... 36
1.8 Utilization of CPQRA Results ................................................. 38
1.9 Project Management ............................................................... 38
1.9.1 Study Goals ................................................................. 39
1.9.2 Study Objectives ......................................................... 39
1.9.3 Depth of Study ............................................................ 41
1.9.4 Special User Requirements ........................................ 44
1.9.5 Construction of a Project Plan ..................................... 44
1.9.6 Project Execution ........................................................ 50
1.10 Maintenance of Study Results .............................................. 50
1.11 References ............................................................................ 52
2. Consequence Analysis ......................................................... 57
2.1 Source Models ........................................................................ 59
2.1.1 Discharge Rate Models ............................................... 60
2.1.2 Flash and Evaporation ................................................ 95
2.1.3 Dispersion Models ...................................................................... 111
2.2 Explosions and Fires ..................................................................................... 153
2.2.1 Vapor Cloud Explosions (VCE) .................................................... 157
2.2.2 Flash Fires .................................................................................. 180
2.2.3 Physical Explosion ...................................................................... 181
2.2.4 BLEVE and Fireball ..................................................................... 204
2.2.5 Confined Explosions ................................................................... 217
2.2.6 Pool Fires ................................................................................... 224
2.2.7 Jet Fires ...................................................................................... 237
2.3 Effect Models ................................................................................................. 244
2.3.1 Toxic Gas Effects ........................................................................ 250
2.3.2 Thermal Effects ........................................................................... 267
2.3.3 Explosion Effects ........................................................................ 274
2.4 Evasive Actions ............................................................................................. 277
2.4.1 Background ................................................................................ 277
2.4.2 Description .................................................................................. 279
2.4.3 Example Problem ........................................................................ 282
2.4.4 Discussion .................................................................................. 282
2.5 Modeling Systems ......................................................................................... 283
2.6 References .................................................................................................... 284
3. Event Probability and Failure Frequency Analysis ............................ 297
3.1 Incident Frequencies from the Historical Record ........................................... 297
3.1.1 Background ................................................................................ 297
3.1.2 Description .................................................................................. 298
3.1.3 Sample Problem ......................................................................... 301
3.1.4 Discussion .................................................................................. 303
3.2 Frequency Modeling Techniques .................................................................. 304
3.2.1 Fault Tree Analysis ..................................................................... 304
3.2.2 Event Tree Analysis .................................................................... 322
3.3 Complementary Plant-Modeling Techniques ................................................. 330
3.3.1 Common Cause Failure Analysis ................................................. 331
3.3.2 Human Reliability Analysis .......................................................... 368
3.3.3 External Events Analysis ............................................................. 379
3.4 References .................................................................................................... 387
4. Measurement, Calculation, and Presentation of Risk Estimates ...... 395
4.1 Risk Measures ............................................................................................... 395
4.1.1 Risk Indices ................................................................................ 396
4.1.2 Individual Risk ............................................................................. 397
4.1.3 Societal Risk ............................................................................... 399
4.1.4 Injury Risk Measures ................................................................... 399
4.2 Risk Presentation ........................................................................................... 400
4.2.1 Risk Indices ................................................................................ 401
4.2.2 Individual Risk ............................................................................. 402
4.2.3 Societal Risk ............................................................................... 403
4.3 Selection of Risk Measures and Presentation Format ................................... 406
4.3.1 Selection of Risk Measures ......................................................... 406
4.3.2 Selection of Presentation Format ................................................ 407
4.4 Risk Calculations ........................................................................................... 408
4.4.1 Individual Risk ............................................................................. 408
4.4.2 Societal Risk ............................................................................... 418
4.4.3 Risk Indices ................................................................................ 423
4.4.4 General Comments ..................................................................... 425
4.4.5 Example Risk Calculation Problem .............................................. 425
4.4.6 Sample Problem Illustrating That F-N Curves Cannot Be Calculated from Individual Risk Contours ............................ 438
4.5 Risk Uncertainty, Sensitivity, and Importance ............................................... 442
4.5.1 Uncertainty ................................................................................. 442
4.5.2 Sensitivity ................................................................................... 450
4.5.3 Importance .................................................................................. 451
4.6 References .................................................................................................... 452
5. Creation of CPQRA Data Base ............................................................. 457
5.1 Historical Incident Data .................................................................................. 459
5.1.1 Types of Data ............................................................................. 459
5.1.2 Sources ...................................................................................... 463
5.2 Process and Plant Data ................................................................................. 464
5.2.1 Plant Layout and System Description .......................................... 464
5.2.2 Ignition Sources and Data ........................................................... 464
5.3 Chemical Data ............................................................................................... 468
5.3.1 Types of Data ............................................................................. 468
5.3.2 Sources ...................................................................................... 469
5.4 Environmental Data ....................................................................................... 469
5.4.1 Population Data .......................................................................... 469
5.4.2 Meteorological Data .................................................................... 471
5.4.3 Geographic Data ......................................................................... 472
5.4.4 Topographic Data ....................................................................... 473
5.4.5 External Event Data .................................................................... 473
5.5 Equipment Reliability Data ............................................................................. 475
5.5.1 Terminology ................................................................................ 475
5.5.2 Types and Sources of Failure Rate Data ..................................... 485
5.5.3 Key Factors Influencing Equipment Failure Rates ........................ 490
5.5.4 Failure Rate Adjustment Factors ................................................. 497
5.5.5 Data Requirements and Estimated Accuracy ............................... 499
5.5.6 Collection and Processing of Raw Plant Data .............................. 499
5.5.7 Preparation of the CPQRA Equipment Failure Rate Data Set ...................................................................................... 508
5.5.8 Sample Problem ......................................................................... 513
5.6 Human Reliability Data .................................................................................. 515
5.7 Use of Expert Opinions .................................................................................. 518
5.8 References .................................................................................................... 518
6. Special Topics and Other Techniques ................................................. 525
6.1 Domino Effects .............................................................................................. 525
6.1.1 Background ................................................................................ 525
6.1.2 Description .................................................................................. 526
6.1.3 Sample Problem ......................................................................... 528
6.1.4 Discussion .................................................................................. 528
6.2 Unavailability Analysis of Protective Systems ............................................... 529
6.2.1 Background ................................................................................ 529
6.2.2 Description .................................................................................. 530
6.2.3 Sample Problem ......................................................................... 535
6.2.4 Discussion .................................................................................. 536
6.3 Reliability Analysis of Programmable Electronic Systems ............................. 537
6.3.1 Background ................................................................................ 537
6.3.2 Description .................................................................................. 538
6.3.3 Sample Problem ......................................................................... 546
6.3.4 Discussion .................................................................................. 548
6.4 Other Techniques .......................................................................................... 549
6.4.1 MORT Analysis ........................................................................... 550
6.4.2 IFAL Analysis .............................................................................. 550
6.4.3 Hazard Warning Structure ........................................................... 550
6.4.4 Markov Processes ...................................................................... 551
6.4.5 Monte Carlo Techniques ............................................................. 559
6.4.6 GO Methods ............................................................................... 559
6.4.7 Reliability Block Diagrams ........................................................... 560
6.4.8 Cause-Consequence Analysis ..................................................... 560
6.4.9 Multiple Failure/Error Analysis (MFEA) ........................................ 561
6.4.10 Sneak Analysis ........................................................................... 563
6.5 References .................................................................................................... 570
7. CPQRA Application Examples ............................................................. 573
7.1 Simple/Consequence CPQRA Examples ...................................................... 573
7.1.1 Simple/Consequence CPQRA Characterization ........................... 573
7.1.2 Application to a New Process Unit ............................................... 574
7.1.3 Application to an Existing Process Unit ........................................ 575
7.2 Intermediate/Frequency CPQRA Examples .................................................. 575
7.2.1 Intermediate/Frequency CPQRA Characterization ....................... 575
7.2.2 Application to a New Process Unit ............................................... 576
7.2.3 Application to an Existing Process Unit ........................................ 577
7.3 Complex/Risk CPQRA Examples .................................................................. 577
7.3.1 Complex/Risk CPQRA Characterization ...................................... 577
7.3.2 Application to a New or Existing Process Unit .............................. 578
7.4 References .................................................................................................... 578
8. Case Studies .......................................................................................... 579
8.1 Chlorine Rail Tank Car Loading Facility ........................................................ 580
8.1.1 Introduction ................................................................................. 580
8.1.2 Description .................................................................................. 580
8.1.3 Identification, Enumeration, and Selection of Incidents ................ 583
8.1.4 Incident Consequence Estimation ............................................... 587
8.1.5 Incident Frequency Estimation .................................................... 593
8.1.6 Risk Estimation ........................................................................... 596
8.1.7 Conclusions ................................................................................ 605
8.2 Distillation Column ......................................................................................... 605
8.2.1 Introduction ................................................................................. 605
8.2.2 Description .................................................................................. 606
8.2.3 Identification, Enumeration, and Selection of Incidents ................ 609
8.2.4 Incident Consequence Estimation ............................................... 612
8.2.5 Incident Frequency Estimation .................................................... 619
8.2.6 Risk Estimation ........................................................................... 625
8.2.7 Conclusions ................................................................................ 632
8.3 References .................................................................................................... 634
9. Future Developments ............................................................................ 635
9.1 Hazard Identification ...................................................................................... 636
9.2 Source and Dispersion Models ...................................................................... 636
9.2.1 Source Emission Models ............................................................. 636
9.2.2 Transport and Dispersion Models ................................................ 637
9.2.3 Transient Plume Behavior ........................................................... 637
9.2.4 Concentration Fluctuations and the Time Averaging of Dispersion Plumes ...................................................................... 637
9.2.5 Input Data Uncertainties and Model Validation ............................ 638
9.2.6 Field Experiments ....................................................................... 638
9.2.7 Model Evaluation ........................................................................ 638
9.3 Consequence Models .................................................................................... 639
9.3.1 Unconfined Vapor Cloud Explosions (UVCE) ............................... 639
9.3.2 Boiling Liquid Expanding Vapor Explosions (BLEVEs) and Fireballs .............................................................................. 640
9.3.3 Pool and Jet Fires ....................................................................... 640
9.3.4 Toxic Hazards ............................................................................. 640
9.3.5 Human Exposure Models ............................................................ 641
9.4 Frequency Models ......................................................................................... 642
9.4.1 Human Factors ........................................................................... 642
9.4.2 Electronic Systems ..................................................................... 642
9.4.3 Failure Rate Data ........................................................................ 644
9.5 Hazard Mitigation ........................................................................................... 645
9.6 Uncertainty Management ............................................................................... 645
9.7 Integration of Reliability Analysis, CPQRA, and Cost-Benefit Studies ........... 646
9.8 Summary ....................................................................................................... 646
9.9 References .................................................................................................... 647
Appendix A: Loss-of-Containment Causes in the Chemical Industry ............................................................ 649
Appendix B: Training Programs .............................................................. 653
Appendix C: Sample Outline for CPQRA Reports ................................. 659
Appendix D: Minimal Cut Set Analysis ................................................... 661
Appendix E: Approximation Methods for Quantifying Fault Trees ...... 671
Appendix F: Probability Distributions, Parameters, and Terminology ......................................................... 689
Appendix G: Statistical Distributions Available for Use as Failure Rate Models ........................................... 695
Appendix H: Errors from Assuming That Time-Related Equipment Failure Rates Are Constant ................. 705
Appendix I: Data Reduction Techniques: Distribution Identification and Testing Methods ......................... 709
Appendix J: Procedure for Combining Available Generic and Plant-Specific Data ...................................... 717
Conversion Factors ................................................................................... 721
Glossary ..................................................................................................... 725
Index ........................................................................................................... 739
Index
Index terms
Links
A
Absolute probability judgment, human reliability analysis  374
Accident sequence evaluation program, human reliability analysis  370
Age. See Time-in-service interval
Aggregate risk
  calculations  423
  defined  404
Aggregate risk index  434  435
Aircraft impact
  external events analysis  383
  external events data  473
Algorithm fault tree construction  311  386
Applications examples
  complex/risk CPQRA  577
  intermediate/frequency CPQRA  575
  simple/consequence CPQRA  573
Approximation methods for fault trees  671
  (See also Fault tree analysis)
  applications  672
  description  672
    reliability parameters  672
    reliability parameters selection  677
    repairable or nonrepairable models  677
  discussed  686
  sample problems  679
  technology  671
Atmospheric stability, dispersion models  112
Automatic fault tree synthesis  312
Average individual risk  417  432  604
Average rate of death  423  434  603
B
Baker's method (overpressure from ruptured sphere), physical explosion  198
Baker-Strehlow method, TNO multi-energy model  169
Blast effects, BLEVE and fireball  205
Blast fragments, BLEVE and fireball  213
Blast wave parameters, vapor cloud explosion (VCE)  174
BLEVE and fireball  204
  application  204
  computer codes  217
  description  204
    blast effects  205
    equations  207
    fragments  205
    input requirements  211
    logic diagram  210
    output  211
    radiation  207
    simplifications  211
    technique  204
    theory  211
  error identification  215
  event tree analysis  327
  example problems  211
    BLEVE blast fragments  213
    BLEVE thermal flux  211
  future developments  640
  philosophy  204
  purpose  204
  resources required  217
  strengths and weaknesses  215
  thermal flux from, thermal effects  271
  utility  216
Boiling liquid expanding vapor explosions. See BLEVE and fireball
Boiling pool vaporization, flash and evaporation models  106
Boolean algebra, minimal cut set analysis  663
Brasie and Simpson method, TNT equivalency model  165
Britter and McQuaid model, dense gas dispersion  150
C
Case studies. See Chlorine rail tank car loading facility case study; Distillation column case study
Cause-consequence analysis  560
Chemical data  468
  sources of  469
  types of  468
Chemical process quantitative risk analysis. See CPQRA
Chemical reactivity hazards data, chemical data sources  469
Chemical scoring, screening techniques  31
Chlorine rail tank car loading facility case study  580
  description  580
  incident consequence estimation  587
    discharge rate calculations  587
    dispersion calculations  591
    toxicity calculations  590
  incident frequency estimation  593
  incident identification, enumeration, and selection  583
  overview  580
  risk estimation  596
    individual  596
    single number measures and indices  602
    societal  599
Combustion in vessel, overpressure from, confined explosions  222
Common cause failure analysis  331
  applications  340
  CCF coupling  336
  computer codes  368
  defined  335
  description  341
    defenses, against coupling identification  347
    fault tree incorporation  350
    framework overview  341
    group components identification  344
    model parameter estimation  355
    model selection  353
    Paula and Daggett method  359
Common cause failure analysis (Continued)
  quantification approaches  349
  error identification  368
  overview  331
  philosophy  334
  purpose  334
  resources required  368
  sample problem  359
  strengths and weaknesses  367
  utility  368
Complementary plant-modeling techniques  330
  common cause failure analysis  331
    (See also Common cause failure analysis)
  external events analysis  379
    (See also External events analysis)
  human reliability analysis  368
    (See also Human reliability analysis)
Complexity, CPQRA study cube  16
Complex/risk CPQRA example  577
Component techniques, CPQRA  7
Compressed gas, physical explosion  194
Concentration fluctuation, dispersion plumes, future developments  637
Confidence limits, determination of, equipment reliability data  506
Confined explosions  217
  applications  217
  computer codes  224
  description  218
    input requirements  220
    logic diagram  220
    output  221
    simplifications  222
    technique  218
    theory  220
  error identification  223
  example problem  222
  philosophy  217
  purpose  217
Confined explosions (Continued)
  resources required  224
  strengths and weaknesses  223
  utility  224
Consequence analysis  57
  chlorine rail tank car loading facility case study  587
    (See also Chlorine rail tank car loading facility case study)
  distillation column case study  612
    (See also Distillation column case study)
  effect models  244
    (See also Effect models)
  estimation of consequence, CPQRA component techniques  11
  evasive actions  277
    (See also Evasive actions)
  explosions and fires  153
    (See also Explosions and fires)
  future developments  639
  impact analysis and, risk calculations  426
  modeling systems  283
  overviews  57
  reduction of consequence, CPQRA component techniques  13
  source models  59
    dense gas dispersion  141
      (See also Dense gas dispersion)
    discharge rate models  60
      (See also Discharge rate models)
    dispersion models. See Dispersion models
    flash and evaporation models  95
      (See also Flash and evaporation models)
  uncertainties  57
Constant failure rate models, equipment reliability data  480
Containment loss, causes of  649
Conversion factors  721
Cost-benefit studies, future developments  646
CPQRA. (See also Consequence analysis)
  applications  29
    existing facilities  32
CPQRA (Continued)
  new projects  32
  screening techniques  30
  applications examples
    complex/risk  577
    intermediate/frequency  575
    simple/consequence  573
  component techniques  7  9
  consequence analysis  57
  current practices  36
  definitions  5
  incident list management  19
    enumeration  20
    incident selection  24
    tracking  29
  limitations  33
  overview  xxi  1
  project management  38
    (See also Project management)
  risk analysis  2
  risk assessment  2
  risk management  3
  scope of studies  15
    goals  18
    study cube  15
  study results maintenance  50
  uses of results  38
CPQRA case studies. See Chlorine rail tank car loading facility case study; Distillation column case study
CPQRA data base. See Data base creation
CPQRA Report outline  659
D
Data base creation  457
  (See also Historical incident data)
  chemical data  468
    sources of  469
Data base creation (Continued)
  types of  468
  combination procedures, generic and plant-specific data  717
  data reduction techniques  709
  environmental data  469
    external events  473
    geographic  472
    meteorological  471
    population  469
    topographical  473
  equipment reliability data  475
    (See also Equipment reliability data)
  historical incident data  459
    sources of  463
    types of  459
  human reliability data  515
  overview  457
  process and plant data  464
    ignition sources  464
    plant layout and system description  464
Decision making, risk uncertainty and  448
Deflagration
confined explosions
218
defined
153
Demand-related failures, equipment reliability data
503
Dense gas dispersion
141
applications
142
computer codes
153
description
143
input requirements
148
logic diagram
146
output
149
simplifications
150
techniques
143
theory
146
error identification
152
example problem
150
philosophy
141
purpose
141
resources required
153
strengths and weaknesses
152
utility
153
Detonation
confined explosions
219
defined
153
Discharge rate models
60
applications
62
computer codes
94
description
62
equations
65
fire exposure
80
gas discharges
71
liquid discharges
67
technique
62
two-phase discharge
76
error identification
94
example problems
81
gas discharge due to external fire
93
gas discharge through hole
87
gas discharge through piping
88
liquid discharge through hole
81
liquid discharge through piping
85
liquid trajectory from hole
83
two-phase flashing flow through piping
91
philosophy
61
purpose
60
resources required
94
strengths and weaknesses
93
Dispersion models
111
atmospheric stability
112
future developments
636
neutral and positively buoyant plume and puff models
119
(See also Neutral and positively buoyant plume and puff models)
overview
111
release elevation
117
release geometry
117
release momentum and buoyancy
118
terrain effects
117
wind speed
115
Dispersion plumes, future developments
637
Distillation column case study
605
conclusions
632
description
606
incident consequence estimation
612
incident frequency estimation
619
incident identification, enumeration, and selection
609
overview
605
risk estimation
625
individual
625
societal
631
Domino effects
525
Dose-response functions, effect models
244
Dust explosion, confined explosions
219
E
Earthquake. See Seismic events
Economic loss index, risk indices calculation
424
Effect models
244
dose-response functions
244
example problem
248
explosion effects
274
(See also Explosion effects)
probit functions
246
thermal effects
267
(See also Thermal effects)
toxic gas effects
250
(See also Toxic gas effects)
Electronic systems, future developments
642
(See also Programmable electronic systems; Reliability analysis of programmable
electronic systems)
Emergency Exposure Guidance Levels (EEGL), toxic gas effects
252
Enumeration, incident list management
20
Environmental data
469
external events
473
geographic
472
meteorological
471
population
469
topographical
473
Equipment reliability data
475
collection and processing of
499
CPQRA data set preparation
508
data requirements and accuracy
499
factors influencing
490
discussed
491
equipment age
495
failure modes, causes and severity
492
listed
490
failure rate adjustment factors
497
future developments
644
generally
475
sample problem
513
terminology
475
constant failure rate models
480
equipment failure rates
476
equipment reliability
476
failure rate models
480
nonconstant failure rate models
482
probability of failure rate
478
time-in-service interval
479
time-related, errors in assuming constant rates of
705
types and sources of
485
generic data
486
judgmental data
490
plant-specific data
485
predicted data
487
490
Equivalent social cost
423
435
Evacuation failure estimation, evasive actions
282
Evaporating pool, flash and evaporation models
106
Kawamura and McKay Direct Evaporation Model
107
Evaporation, flash and evaporation models
99
Evasive actions
277
benefits of
281
computer codes
283
description
279
input requirements
280
output
280
simplifications
280
technique
279
theory
280
error identification
283
example problem
282
purpose
277
resources required
283
strengths and weaknesses
282
technology
278
utility
283
Event probability and failure frequency analysis
297
complementary plant-modeling techniques
330
(See also Complementary plant-modeling techniques)
frequency modeling techniques
304
(See also Frequency modeling techniques)
incident frequency from historical record
297
(See also Historical incident data)
Event tree analysis
322
applications
322
computer codes
330
description
322
input requirements
327
output
327
simplifications
327
technique
322
theory
327
error identification
328
purpose
322
resources required
330
sample problem
327
strengths and weaknesses
328
technology
322
utility
330
Existing facilities, CPQRA applications
374
Explosion, defined
154
Explosion effects
274
applications
274
computer codes
277
description
274
input requirements
276
output
276
simplifications
276
technique
274
theory
275
error identification
277
example problem
276
philosophy
274
purpose
274
resources required
277
strengths and weaknesses
277
utility
277
Expert judgment/opinion, human reliability
518
Explosions and fires
BLEVE and fireball
153
204
(See also BLEVE and fireball)
confined explosions
217
(See also Confined explosions)
definitions
153
flash fires
180
jet fires
237
(See also Jet fires)
logic diagrams (explosions)
154
physical explosion
181
(See also Physical explosion)
pool fires
224
(See also Pool fires)
vapor cloud explosion (VCE)
157
(See also Vapor cloud explosion (VCE))
Exposure models, future developments
641
Exposure periods, estimation of, equipment reliability data
503
External events analysis
379
applications
380
description
input requirements
384
output
384
simplifications
384
technique
380
theory
384
environmental data
473
error identification
386
listing of events
381
purpose
379
resources required
387
sample problem
385
strengths and weaknesses
386
technology
380
utility
387
F
Facility screening, screening techniques
32
Factory Mutual Research Corporation method, TNT equivalency model
165
Failure modes and effects analysis (FMEA)
561
Failure mode trends, equipment reliability data
505
Failure rates
(See also Equipment reliability data)
calculation of, equipment reliability data
505
CPQRA data set preparation
508
equipment reliability data
476
frequency rates contrasted, equipment reliability data
478
future developments
644
models of, equipment reliability data
480
probability, equipment reliability data
478
Fanning friction factor, discharge rate models
67
Fatal accident rate
424
Fault tree analysis
304
(See also Approximation methods for fault trees)
applications
305
common cause failure analysis and
350
computer codes
321
definitions
306
description
305
algorithm fault tree construction
311
automatic fault tree synthesis
312
input requirements
314
manual fault tree construction
309
output
315
qualitative and quantitative analysis
313
simplifications
315
technique
305
theory
314
error identification
320
purpose
304
resources required
321
sample problem
315
strengths and weaknesses
320
technology
305
utility
320
Field experiments, future developments
638
Fire, gas discharge due to external, discharge rate models
93
(See also Explosions and fires; Jet fires; Pool fires)
Fireball. See BLEVE and fireball
Fire exposure, discharge rate models
80
Fixed concentration-time relationship, toxic gas effects
262
Flame height, pool fires
228
Flame tilt and drag, pool fires
229
Flammable limits, defined
154
Flash and evaporation models
95
applications
96
computer codes
111
description
97
evaporation
99
flashing
97
input requirements
104
logic diagram
104
output
105
pool spread
102
theory
104
example problems
105
boiling pool vaporization
106
evaporating pool
106
evaporating pool (Kawamura and McKay Direct Evaporation Model)
107
isenthalpic flash fraction
105
pool spread
110
philosophy
96
purpose
95
resources required
110
111
Flash fires, explosions and fires
180
Flashing, flash and evaporation models
97
Flashpoint temperature, defined
154
Flooding
external events analysis
380
external events data
473
F-N curve
individual risk contours and
438
risk importance
451
risk uncertainty
443
societal risk calculations
433
445
Fragment range, physical explosion
201
203
Fragments, BLEVE and fireball
205
213
Fragment velocity, from vessel rupture, physical explosion
199
Frequency analysis
estimation, CPQRA component techniques
13
future developments
642
risk calculations
428
Frequency-frequency AND gate pairing, fault tree analysis
319
Frequency modeling techniques
304
event tree analysis
322
(See also Event tree analysis)
fault tree analysis
304
(See also Fault tree analysis)
Frequency reduction, CPQRA component techniques
13
Future developments
635
consequence models
639
CPQRA, reliability analysis, and cost-benefits study integration
646
frequency models
642
hazard identification
636
hazard mitigation
645
human exposure models
641
overview
635
pool and jet fires
640
source and dispersion models
636
toxic hazards
640
uncertainty management
645
G
Gas discharge
discharge rate models
71
due to external fire, discharge rate models
93
through hole, discharge rate models
87
through piping, discharge rate models
88
Generic data
equipment reliability data
486
plant-specific data and, combination procedures
717
Geographic data, environmental data
472
Geometric view factor, pool fires
231
GO methodology
559
H
Hazard identification
CPQRA component techniques
9
fault tree analysis
308
future developments
636
Hazard mitigation, future developments
645
Hazard warning structure
550
Health & Safety Executive method, TNT equivalency model
165
Historical incident data
297
(See also Data base creation)
applications
298
computer codes
303
description
298
input requirements
301
output
301
simplifications
301
technique
298
theory
301
error identification
303
purpose
297
resources
303
sample problem
301
sources of
459
strengths and weaknesses
303
technology
298
types of
459
utility
303
Hole
gas discharge through, discharge rate models
87
liquid discharge through, discharge rate models
81
liquid trajectory from, discharge rate models
83
Hole size, discharge rate models
64
Human error
assessment and reduction technique
376
rate prediction
370
Human exposure models, future developments
641
Human reliability analysis
368
applications
369
computer codes
379
description
369
input requirements
376
logic diagram
376
output
377
simplifications
377
technique
369
error identification
379
future developments
642
purpose
368
resources required
379
sample problem
377
technology
368
utility
379
Human reliability data
data base creation
515
expert opinion
518
I
IFAL (Instantaneous Fractional Annual Loss) analysis
550
Ignition sources, process and plant data
464
Immediately Dangerous to Life and Health (IDLH) concentrations, toxic gas effects
252
Impact analysis. See Consequence analysis
Incident enumeration, CPQRA component techniques
9
Incident frequency from historical record. See Historical incident data
Incident list management
19
enumeration
20
incident selection
24
tracking
29
Incident number, CPQRA study cube
17
Incident selection
CPQRA component techniques
9
incident list management
24
Individual hazard index, risk indices calculation
424
Individual risk calculations
408
average individual risk
417
case study
chlorine rail tank car loading facility
596
distillation column
625
contours and profiles (transects)
409
general approach
410
importance of
452
minimum individual risk
417
profiles (transects)
418
sample problems
429
simplified approaches
412
Individual risk measures
defined
396
presentation formats
402
use of
397
Industrial hygiene and toxicity data, chemical data sources
469
Industrial Risk Insurers method, TNT equivalency model
165
Influence diagram approach, human reliability analysis
374
Injury risk measures, use of
399
Instantaneous Fractional Annual Loss (IFAL) analysis
550
Intermediate/frequency CPQRA example
575
Inventory studies, screening techniques
31
Isenthalpic flash fraction, flash and evaporation models
105
Isopleths
plume release with
135
puff release with
137
J
Jet fires
237
applications
237
computer codes
243
description
237
example problem
240
input requirements
240
logic diagram
238
simplifications
240
technique
237
theory
240
error identification
243
future developments
640
purpose
237
resources required
243
strengths and weaknesses
242
Judgmental data, equipment reliability data
490
L
Leak duration, discharge rate models
65
Likelihood, defined
298
Likelihood estimation, CPQRA component techniques
11
Liquid discharge
discharge rate models
67
through hole, discharge rate models
81
through piping, discharge rate models
85
Liquid trajectory, from hole, discharge rate models
69
83
Logic diagram
BLEVE and fireball
210
confined explosions
220
221
dense gas dispersion
146
149
domino effects
527
explosions
154
TNT equivalency model
171
flash and evaporation models
104
human reliability analysis
376
jet fires
238
neutral and positively buoyant plume and puff models
126
physical explosion
193
194
pool fires
226
227
reliability analysis of programmable electronic systems
539
unavailability analysis
533
Loss-of-containment, causes of
649
M
Maintenance personnel performance simulation, human reliability analysis
374
Management Oversight and Risk Tree (MORT) analysis
550
Manual fault tree construction, fault tree analysis
309
Markov processes
551
example
556
generally
551
model development
554
SIS system application
553
Maximum individual risk
431
Meteorological data, environmental data
471
Minimal cut set analysis
661
Boolean algebra
663
description
662
overview
661
sample problem
665
Minimum individual risk
417
Model construction, CPQRA component techniques
11
Model evaluation, future developments
638
Modeling systems, consequence analysis
283
Monte Carlo techniques
559
Moody friction factor, discharge rate models
67
Mortality index, risk indices calculation
424
MORT analysis
550
Moving puff, toxic gas effects
262
Multiple failure/error analysis (MFEA)
561
N
Neutral and positively buoyant plume and puff models
119
applications
119
computer codes
140
description
120
input requirements
126
logic diagram
126
output
126
plume model
124
puff model
123
simplifications
128
theory
126
error identification
138
example problems
128
plume releases
128
plume release with isopleths
135
puff release
131
puff release with isopleths
137
philosophy
119
purpose
119
resources required
140
strengths and weaknesses
138
utility
140
New projects, CPQRA applications
32
Nonconstant failure rate models, equipment reliability data
482
O
Operator action tree, human reliability analysis
375
Overpressure, from combustion in vessel, confined explosions
222
Overpressure from ruptured sphere
Baker's method, physical explosion
198
Prugh's method, physical explosion
196
P
Paired comparisons, human reliability analysis
374
Pasquill-Gifford model. See Neutral and positively buoyant plume and puff models
Paula and Daggett method, common cause failure analysis
359
PERD guidelines, equipment reliability data
487
503
Permissible Exposure Limit (PEL), toxic gas effects
254
Physical explosion
181
computer codes
203
description
182
applications
193
input requirements
193
logic diagram
193
output
194
projectiles
186
simplifications
194
technique
182
theory
193
error identification
202
example problems
194
compressed gas
194
fragment range in air
201
fragment velocity from vessel rupture
199
overpressure from ruptured sphere (Baker's method)
198
overpressure from ruptured sphere (Prugh's method)
196
philosophy
181
purpose
181
resources required
203
strengths and weaknesses
202
utility
203
Piping
gas discharge through, discharge rate models
88
liquid discharge through, discharge rate models
85
two-phase flashing flow through, discharge rate models
91
Plant layout and system description, process and plant data
464
Plant-specific data
equipment reliability data
485
generic data and, combination procedures
717
Plume model, neutral and positively buoyant plume and puff models
124
Plume release
neutral and positively buoyant plume and puff models
128
with isopleths
135
Point source model, pool fire radiation
234
Pool fires
224
applications
225
computer codes
237
description
225
burning rate
225
flame height
228
flame tilt and drag
229
geometric view factor
231
input requirements
233
logic diagrams
226
output
233
pool size
228
received thermal flux
232
simplifications
233
surface emitted power
229
technique
225
theory
233
error identification
237
example problem
233
future developments
640
philosophy
224
purpose
224
resources required
237
strengths and weaknesses
237
utility
237
Pool size, pool fires
228
Pool spread, flash and evaporation models
102
Population data, environmental data
469
Predicted data, equipment reliability data
487
Probability, of failure rate, equipment reliability data
478
Probability distributions
689
Probit equations and functions
effect models
246
toxic gas effects
258
Process hazard indices, screening techniques
31
Process and plant data
ignition sources
464
plant layout and system description
464
Programmable electronic systems
future developments
642
reliability analysis of
537
(See also Reliability analysis of programmable electronic systems)
Projectiles, physical explosion
186
Project management
38
project execution
50
project plan construction
44
cost control
49
quality assurance
46
resource requirement estimation
44
scheduling
45
training requirements
48
special user requirements
44
study depth
41
study goals
39
study objectives
39
Protective system, unavailability analysis of
529
Prugh's method (overpressure from ruptured sphere), physical explosion
196
Puff model, neutral and positively buoyant plume and puff models
123
Puff release
with isopleths, neutral and positively buoyant plume and puff models
137
neutral and positively buoyant plume and puff models
131
toxic gas effects
262
Q
Quality assurance, project plan construction
46
R
Radiant flux, from jet fire
240
Radiation
207
(See also Thermal effects; BLEVE and fireball)
pool fires
233
Received thermal flux, pool fires
232
Release elevation, dispersion models
117
Release geometry, dispersion models
117
Release momentum and buoyancy, dispersion models
118
Release phase, discharge rate models
62
Reliability analysis, cost-benefit studies integration, future developments
646
Reliability analysis of programmable electronic systems
537
background
537
description
538
future directions
642
sample problem
546
strengths and weaknesses
548
utility
549
Reliability block diagrams
Resource requirement estimation, project plan construction
560
44
Risk
defined
5
importance of
451
sensitivity of
450
Risk analysis, CPQRA
2
Risk assessment, CPQRA
2
Risk calculations
408
chlorine rail tank car loading facility case study
596
(See also Chlorine rail tank car loading facility case study)
computer codes
441
CPQRA component techniques
11
CPQRA study cube
15
distillation column case study
625
(See also Distillation column case study)
estimate utilization, CPQRA component techniques
11
general comments
425
importance of
451
individual risk
408
average individual risk
417
contours and profiles (transects)
409
general approach
410
minimum individual risk
417
profiles (transects)
418
simplified approaches
412
risk indices
423
sample problems
425
consequence and impact analysis
426
F-N curve calculation
438
frequency analysis
428
general information
425
incident identification
426
incident outcomes
426
individual risk estimation
429
societal risk calculation
433
summary
437
societal risk
418
aggregate risk
423
general procedure
419
simplified procedure
420
Risk indices
calculation of
423
defined
396
importance of
452
presentation formats
401
use of
396
Risk management, CPQRA
3
Risk measures
395
individual
397
injury
399
overview
395
presentation formats
400
individual risk
402
risk indices
401
selection of
407
societal risk
403
risk indices
396
selection of
406
societal
399
Risk presentation formats, risk measures
400
(See also Risk measures)
Risk reduction, CPQRA component techniques
15
Risk sensitivity
450
Risk uncertainty
442
(See also Uncertainties)
case studies
449
combination of
447
decision making and
448
display and interpretation of
447
evaluation and representation of
445
management of, future developments
645
propagation of
447
significance of
449
sources of
442
448
S
Safety Instrumented System (SIS), Markov processes
553
Scheduling, project plan construction
45
Screening techniques, CPQRA applications
30
Seismic events
external events analysis
380
external events data
473
Selection. See Incident selection
Sensitivity. See Risk sensitivity
Short-Term Public Emergency Guidance Levels (SPEGL), toxic gas effects
252
Simple/consequence CPQRA example
573
Single number risk measures and indices, case study
602
SIS system, Markov processes
553
Sneak analysis
563
application
563
discussed
567
example problem
568
input requirements
567
output
567
purpose
563
resource required
568
strengths and weaknesses
568
technique
564
technology
563
theory
566
Societal risk calculations
418
aggregate risk
423
case study
chlorine rail tank car loading facility
599
distillation column
631
general procedure
419
importance of
452
sample problems
433
simplified procedure
420
Societal risk measures
defined
396
presentation formats
403
use of
399
Solid plume radiation model, pool fire radiation
235
Source emission models, future developments
636
Source models
consequence analysis
59
dense gas dispersion
141
(See also Dense gas dispersion)
discharge rate models
60
(See also Discharge rate models)
dispersion models
111
(See also Dispersion models)
flash and evaporation models
95
(See also Flash and evaporation models)
future developments
636
Statistical distributions
695
Study cube, CPQRA studies
15
Study results maintenance, CPQRA
50
Success likelihood index method, human reliability analysis
374
Surface emitted power, pool fires
229
System description, CPQRA component techniques
9
T
Taxonomy data cells, equipment reliability data
503
Terrain effects, dispersion models
117
Thermal effects
267
applications
267
computer codes
273
description
267
input requirements
270
output
270
simplifications
270
technique
267
theory
270
error identification
272
example problems
271
thermal flux estimate
271
thermal flux from BLEVE fireball
271
philosophy
267
purpose
267
resources required
273
strengths and weaknesses
272
utility
272
Thermal flux, thermal effects
271
273
Thermodynamic path and endpoint, discharge rate models
63
Thermodynamic theory, flash and evaporation models
104
Threshold Limit Values-Short-Term Exposure Limits (TLV-STEL), toxic gas effects
252
254
Time averaging, dispersion plumes, future developments
637
Time-in-service interval, equipment reliability data
479
495
Time-related failures
equipment reliability data
503
errors in assuming constant rates of
705
TNO multi-energy model, vapor cloud explosion (VCE)
165
TNT blast, explosion effects
276
176
TNT equivalency model
logic diagram
171
173
vapor cloud explosion (VCE)
159
175
Topographical data, environmental data
473
Tornado
external events analysis
380
external events data
473
Toxic Dispersion (TXDS) method, toxic gas effects
256
Toxic endpoints, toxic gas effects
256
Toxic gas effects
250
computer codes
267
description
260
input requirements
261
output
262
simplifications
262
technique
260
theory
261
error identification
266
example problems
262
fixed concentration-time relationship
262
moving puff
262
philosophy
250
probit equations
258
purpose
250
resources required
267
strengths and weaknesses
265
utility
266
Toxic hazards, future developments
640
Tracking, incident list management
29
Training programs
653
Training requirements, project plan construction
48
Transient plume behavior, future developments
637
Transport models, future developments
637
2-K method, discharge rate models
66
Two-phase discharge, discharge rate models
76
Two-phase flashing flow, through piping, discharge rate models
91
U
Unavailability analysis, of protective system
529
Uncertainties
57
(See also Risk uncertainty; Consequence analysis)
future developments
638
management of, future developments
645
Unconfined vapor cloud explosion (UVCE), future developments
639
V
Vapor cloud explosion (VCE)
157
applications
159
computer codes
180
description
159
Baker-Strehlow method
169
input requirements
172
logic diagram
171
output
174
simplifications
174
theory
171
TNO multi-energy model
165
TNT equivalency model
159
error identification
179
example problems
174
blast wave parameters
174
TNO and Baker-Strehlow methods
176
TNT equivalency
175
future developments
639
philosophy
157
purpose
157
resources required
179
strengths and weaknesses
179
Velocity, of fragments, from vessel rupture
199
Vessel, combustion in, overpressure from
222
Vessel rupture, fragment velocity from
199
W
Winds
dispersion models
115
external events analysis
380
puff release, toxic gas effects
262
Z
Zero failures, screening for, equipment reliability data
505
Chemical Process
Quantitative Risk Analysis
Chemical process quantitative risk analysis (CPQRA) is a methodology designed to
provide management with a tool to help evaluate overall process safety in the chemical
process industry (CPI). Management systems such as engineering codes, checklists and
process safety management (PSM) provide layers of protection against accidents.
However, the potential for serious incidents cannot be totally eliminated. CPQRA
provides a quantitative method to evaluate risk and to identify areas for cost-effective
risk reduction.
The CPQRA methodology has evolved since the early 1980s from its roots in the
nuclear, aerospace and electronics industries. The most extensive use of probabilistic
risk analysis (PRA) has been in the nuclear industry. Procedures for PRA have been
defined in the PRA Procedures Guide (NUREG, 1983) and the Probabilistic Safety Analysis Procedures Guide (NUREG, 1985).
CPQRA is a probabilistic methodology that is based on the NUREG procedures.
The term "chemical process quantitative risk analysis" is used throughout this book to
emphasize the features of this methodology as practiced in the chemical, petrochemical, and oil processing industries. Some examples of these features are
• Chemical reactions may be involved
• Processes are generally not standardized
• Many different chemicals are used
• Material properties may be subject to greater uncertainty
• Parameters, such as plant type, plant age, location of surrounding population, degree of automation and equipment type, vary widely
• Multiple impacts, such as fire, explosion, toxicity, and environmental contamination, are common.
Acute, rather than chronic, hazards are the principal concern of CPQRA. This
places the emphasis on rare but potentially catastrophic events. Chronic effects such as
cancer or other latent health problems are not normally considered in CPQRA.
One objective of this second edition is to incorporate recent advances in the field.
Such advances are necessary and desirable as highlighted by the late Admiral Hyman Rickover:
We must accept the inexorably rising standards of technology, and we must relinquish
comfortable routines and practices rendered obsolete because they no longer meet the
new standards.
Many hazards may be identified and controlled or eliminated through use of qualitative hazard analysis as defined in Guidelines for Hazard Evaluation Procedures, Second Edition (CCPS, 1992). Qualitative studies typically identify potentially hazardous
events and their causes. In some cases, where the risks are clearly excessive and the existing safeguards are inadequate, corrective actions can be adequately identified with
qualitative methods. CPQRA is used to help evaluate potential risks when qualitative
methods cannot provide adequate understanding of the risks and more information is
needed for risk management. It can also be used to evaluate alternative risk reduction
strategies.
The basis of CPQRA is to identify incident scenarios and evaluate the risk by defining the probability of failure, the probability of various consequences and the potential
impact of those consequences. The risk is defined in CPQRA as a function of probability or frequency and consequence of a particular accident scenario:
Risk = F(s, c, f)

where

s = hypothetical scenario
c = estimated consequence(s)
f = estimated frequency

This "function" can be extremely complex and there can be many numerically different risk measures (using different risk functions) calculated from a given set of s, c, and f.
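To illustrate this point — that many numerically different risk measures can be computed from a single set of (s, c, f) triples — here is a minimal sketch. The scenario names, consequence values, and frequencies are invented for the example; they are not from the Guidelines.

```python
# Hypothetical incident set: each tuple is (scenario s, consequence c in
# fatalities, frequency f in events per year). All values are invented.
scenarios = [
    ("small leak",      1, 1.0e-3),
    ("large leak",     10, 1.0e-4),
    ("vessel rupture", 50, 1.0e-6),
]

# One risk function: expected fatalities per year, the sum of c * f over all s.
average_rate = sum(c * f for _, c, f in scenarios)

# A different risk function over the same set: total frequency of
# incidents causing 10 or more fatalities.
f_10_plus = sum(f for _, c, f in scenarios if c >= 10)

print(f"expected fatalities/yr: {average_rate:.3e}")
print(f"frequency of N >= 10:   {f_10_plus:.3e}")
```

The two measures weight the same incident set differently — the first is driven by frequency-weighted consequence, the second only by the rarer severe events — which is why the choice of risk function matters when comparing results.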
The major steps in CPQRA, as illustrated in Figure 1.1 (page 4), are as follows:
Risk Analysis:
1. Define the potential event sequences and potential incidents. This may be based
on qualitative hazard analysis for simple or screening level analysis. Complete
or complex analysis is normally based on a full range of possible incidents for all
sources.
2. Evaluate the incident outcomes (consequences). Some typical tools include
vapor dispersion modeling and fire and explosion effect modeling.
3. Estimate the potential incident frequencies. Fault trees or generic databases
may be used for the initial event sequences. Event trees may be used to account
for mitigation and postrelease events.
4. Estimate the incident impacts on people, environment and property.
5. Estimate the risk. This is done by combining the potential consequence for each
event with the event frequency, and summing over all events.
Risk Assessment:
6. Evaluate the risk. Identify the major sources of risk and determine if there are
cost-effective process or plant modifications which can be implemented to
reduce risk. Often this can be done without extensive analysis. Small and inexpensive system changes sometimes have a major impact on risk. The evaluation
may be done against legally required risk criteria, internal corporate guidelines,
comparison with other processes or more subjective criteria.
7. Identify and prioritize potential risk reduction measures if the risk is considered
to be excessive.
Risk Management:
Chemical process quantitative risk analysis is part of a larger management system.
Risk management methods are described in the CCPS Guidelines for Implementing Process Safety Management Systems (AIChE/CCPS, 1994), Guidelines for Technical Management of Chemical Process Safety (AIChE/CCPS, 1989), and Plant Guidelines for Technical
Management of Chemical Process Safety (AIChE/CCPS, 1995).
The seven steps in Figure 1.1 are typical of CPQRA. However, it is important to
remember that other risks, such as financial loss, chronic health risks and bad publicity,
may also be significant. These potential risks can also be estimated qualitatively or
quantitatively and are an important part of the management process.
This chapter provides general outlines for the major areas in CPQRA as listed
below. The subsequent chapters provide more detailed descriptions and examples.
1. Definitions of CPQRA terminology (Section 1.1)
2. Elements that form the overall framework (Section 1.2)
3. Scope of CPQRA (Section 1.3)
4. Management of incident lists (Section 1.4)
5. Application of CPQRA (Section 1.5)
6. Limitations of CPQRA (Section 1.6)
7. Current practices (Section 1.7)
8. Utilization of CPQRA results (Section 1.8)
9. Project management (Section 1.9)
10. Maintenance of study results (Section 1.10)
CPQRA provides a tool for the engineer or manager to quantify risk and analyze
potential risk reduction strategies. The value of quantification was well described by
Lord Kelvin. Joschek (1983) provided a similar definition:
a quantitative approach to safety . . . is not foreign to the chemical industry. For every
process, the kinetics of the chemical reaction, the heat and mass transfers, the corrosion
rates, the fluid dynamics, the structural strength of vessels, pipes and other equipment
as well as other similar items are determined quantitatively by experiment or calculation, drawing on a vast body of experience.
CPQRA enables the engineer to evaluate risk. Individual contributions to the
overall risk from a process can be identified and prioritized. A range of risk reduction
measures can be applied to the major hazard contributors and assessed using
cost-benefit methods.
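One simple form of such a cost-benefit assessment can be sketched as ranking candidate measures by cost per unit of risk averted; the measures, annualized costs, and risk reductions below are hypothetical:

```python
# Hypothetical sketch: ranking risk reduction measures by cost-effectiveness.
# Each tuple is (measure, annualized cost in $, reduction in expected loss
# rate in fatalities/year averted); all values are invented.
measures = [
    ("add remote isolation valve",  50_000, 4e-4),
    ("reduce ammonia inventory",    20_000, 3e-4),
    ("upgrade relief system",      120_000, 1e-4),
]

# Sort by cost per unit of risk averted (lower is more cost-effective).
ranked = sorted(measures, key=lambda m: m[1] / m[2])

for name, cost, delta_risk in ranked:
    print(f"{name}: ${cost / delta_risk:,.0f} per fatality/year averted")
```

A real study would also check that each measure is technically feasible and does not introduce new hazards before applying a ranking like this.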
Comparison of risk reduction strategies is a relative application of CPQRA. Pikaar
(1995) has related relative or comparative CPQRA to climbing a mountain. At each
stage of increasing safety (decreasing risk), the associated changes may be evaluated to
see if they are worthwhile and cost-effective. Some organizations also use CPQRA in
an absolute sense to confirm that specific risk targets are achieved. Further risk reduction, beyond such targets, may still be appropriate where it can be accomplished in a
cost-effective manner. Hendershot (1996) has discussed the role of absolute risk guidelines as a risk management tool.
CPQRA Steps
Define the potential
accident scenarios
Evaluate the event
consequences
Estimate the potential
accident frequencies
Estimate the
event impacts
Estimate the
risk
Evaluate the
risks
Identity and prioritize
potential risk reduction
measures
FIGURE 1.1 CPQRA Flowchart
Application of the full array of CPQRA techniques (referred to as component
techniques in Section 1.2) allows a quantitative review of a facility's risks, ranging from
frequent, low-consequence incidents to rare, major events, using a uniform and consistent methodology. Having identified process risks, CPQRA techniques can help focus
risk control studies. The largest risk contributors can be identified, and recommendations and decisions can be made for remedial measures on a consistent and objective
basis.
Utilization of the CPQRA results is much more controversial than the methodology (see Section 1.8). Watson (1994) has suggested that CPQRA should be considered as an argument, rather than a declaration of truth. In his view, it is not practical or
necessary to provide absolute scientific rigor in the models or the analysis. Rather, the
focus should be on the overall balance of the QRA and whether it reflects a useful measure of the risk. However, Yellman and Murray (1995) contend that the analysis
"should be, insofar as possible, true—or at least a search for truth." It is important for
the analyst to understand clearly how the results will be used in order to choose appropriately rigorous models and techniques for the study.
1.1. CPQRA Definitions
Table 1.1 and the Glossary define terms as they are used in this volume. Other tabulations of terms have been compiled (e.g., IChemE, 1985) and may need to be consulted
because, as discussed below, there currently is no single, authoritative source of accepted
nomenclature and definitions. CPQRA is an emerging technology in the CPI and there
are terminology variations in the published literature that can lead to confusion. For
example, while risk is defined in Table 1.1 as "a measure of human injury, environmental
damage or economic loss in terms of both the incident likelihood and the magnitude of
the loss or injury," readers should be aware that other definitions are often used. For
instance, Kaplan and Garrick (1981) have discussed a number of alternative definitions
of risk. These include:
• Risk is a combination of uncertainty and damage.
• Risk is a ratio of hazards to safeguards.
• Risk is a triplet combination of event, probability, and consequences.
Readers should also recognize the interrelationship that exists between an incident, an incident outcome, and an incident outcome case as these terms are used
throughout this book. An incident is defined in Table 1.1 as "the loss of containment of
material or energy," whereas an incident outcome is "the physical manifestation of an
incident." A single incident may have several outcomes. For example, a leak of flammable and toxic gas could result in
• a jet fire (immediate ignition)
• a vapor cloud explosion (delayed ignition)
• a vapor cloud fire (delayed ignition)
• a toxic cloud (no ignition)
A list of possible incident outcomes has been included in Table 1.2.
The third and often confusing term used in describing incidents is the incident outcome case. As indicated by its definition in Table 1.1, the incident outcome case specifies values for all of the parameters needed to uniquely distinguish one incident
outcome from all others. For example, since certain incident outcomes are dependent
on weather conditions (wind direction, speed, and atmospheric stability class), more
than one incident outcome case could be developed to describe the dispersion of a
dense gas.
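The multiplication of a single incident outcome into incident outcome cases can be sketched by enumerating the weather combinations; the wind speeds and stability classes chosen below are illustrative assumptions:

```python
import itertools

# Hypothetical sketch: an incident outcome whose behavior depends on weather
# is expanded into incident outcome cases, one per combination of wind speed
# and atmospheric stability class (the particular values are assumptions).
wind_speeds_mph = [1.4, 5, 10]
stability_classes = ["B", "D", "F"]

outcome = "dense gas dispersion (toxic cloud)"
cases = [
    {"outcome": outcome, "wind_mph": w, "stability": s}
    for w, s in itertools.product(wind_speeds_mph, stability_classes)
]

# 3 wind speeds x 3 stability classes -> 9 incident outcome cases,
# each of which would receive its own consequence estimate.
print(len(cases))
```

In a full study, wind direction and other parameters would add further dimensions to the product, which is one reason case selection (Section 1.4.2) matters for keeping the calculation tractable.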
Frequency: Number of occurrences of an event per unit of time.
Hazard: A chemical or physical condition that has the potential for causing damage to people, property, or the
environment (e.g., a pressurized tank containing 500 tons of ammonia)
Incident: The loss of containment of material or energy (e.g., a leak of 10 lb/s of ammonia from a connecting pipeline to the ammonia tank, producing a toxic vapor cloud); not all events propagate into incidents.
Event sequence: A specific unplanned sequence of events composed of initiating events and intermediate events
that may lead to an incident.
Initiating event: The first event in an event sequence (e.g., stress corrosion resulting in leak/rupture of the
connecting pipeline to the ammonia tank)
Intermediate event: An event that propagates or mitigates the initiating event during an event sequence (e.g.,
improper operator action fails to stop the initial ammonia leak and causes propagation of the intermediate event
to an incident; in this case the intermediate event could be a continuous release of the ammonia)
Incident outcome: The physical manifestation of the incident; for toxic materials, the incident outcome is a
toxic release, while for flammable materials, the incident outcome could be a Boiling Liquid Expanding Vapor
Explosion (BLEVE), flash fire, unconfined vapor cloud explosion, toxic release, etc. (e.g., for a 10 lb/s leak of
ammonia, the incident outcome is a toxic release)
Incident outcome case: The quantitative definition of a single result of an incident outcome through
specification of sufficient parameters to allow distinction of this case from all others for the same incident
outcomes. For example, a release of 10 lb/s of ammonia with D atmospheric stability class and 1.4 mph wind
speed gives a particular downwind concentration profile, resulting, for example, in a 3000 ppm concentration at
a distance of 2000 feet.
Consequence: A measure of the expected effects of an incident outcome case (e.g., an ammonia cloud from a 10 lb/s leak under Stability Class D weather conditions, and a 1.4-mph wind traveling in a northerly direction will
injure 50 people)
Effect zone: For an incident that produces an incident outcome of toxic release, the area over which the airborne
concentration equals or exceeds some level of concern. The area of the effect zone will be different for each
incident outcome case [e.g., given an IDLH for ammonia of 500 ppm (v), an effect zone of 4.6 square miles is estimated for a 10 lb/s ammonia leak]. For a flammable vapor release, the area over which a particular incident
outcome case produces an effect based on a specified overpressure criterion (e.g., an effect zone from an
unconfined vapor cloud explosion of 28,000 kg of hexane assuming 1% yield is 0.18 km2 if an overpressure
criterion of 3 psig is established). For a loss of containment incident producing thermal radiation effects, the area
over which a particular incident outcome case produces an effect based on a specified thermal damage criterion
[e.g., a circular effect zone surrounding a pool fire resulting from a flammable liquid spill, whose boundary is
defined by the radial distance at which the radiative heat flux from the pool fire has decreased to 5 kW/m2
(approximately 1600 Btu/hr-ft2)]
Likelihood: A measure of the expected probability or frequency of occurrence of an event. This may be
expressed as a frequency (e.g., events/year), a probability of occurrence during some time interval, or a
conditional probability (i.e., probability of occurrence given that a precursor event has occurred, e.g., the
frequency of a stress corrosion hole in a pipeline of size sufficient to cause a 10 lb/s ammonia leak might be 1 × 10⁻³ per year; the probability that ammonia will be flowing in the pipeline over a period of 1 year might be
estimated to be 0.1; and the conditional probability that the wind blows toward a populated area following the
ammonia release might be 0.1)
Probability: The expression for the likelihood of occurrence of an event or an event sequence during an interval
of time or the likelihood of occurrence of the success or failure of an event on test or demand. By definition,
probability must be expressed as a number ranging from 0 to 1.
Risk: A measure of human injury, environmental damage or economic loss in terms of both the incident
likelihood and the magnitude of the loss or injury
Risk analysis: The development of a quantitative estimate of risk based on engineering evaluation and
mathematical techniques for combining estimates of incident consequences and frequencies (e.g., an
ammonia cloud from a 10 lb/s leak might extend 2000 ft downwind and injure 50 people. For this example, using the data presented above for likelihood, the frequency of injuring 50 people is given as 1 × 10⁻³ × 0.1 × 0.1 = 1 × 10⁻⁵ events per year)
Risk assessment: The process by which the results of a risk analysis are used to make decisions, either through a
relative ranking of risk reduction strategies or through comparison with risk targets (e.g., the risk of injuring 50
people at a frequency of 1 × 10⁻⁵ events per year from the ammonia incident is judged higher than acceptable,
and remedial design measures are required)
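The likelihood arithmetic in the ammonia example above can be reproduced directly, using the values given in Table 1.1:

```python
# Frequency chain from the ammonia example in Table 1.1:
# initiating event frequency, probability ammonia is flowing, and
# conditional probability the wind blows toward the populated area.
f_hole = 1e-3          # stress corrosion holes per year
p_flowing = 0.1        # probability ammonia is in the pipeline
p_wind_toward = 0.1    # conditional probability of unfavorable wind

# Multiplying a frequency by conditional probabilities yields the
# frequency of the combined event (about 1e-5 events per year).
f_injury_event = f_hole * p_flowing * p_wind_toward
print(f_injury_event)
```

Note the units: one factor in the chain is a frequency (per year) and the rest are dimensionless probabilities, so the product remains a frequency.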
[Figure 1.2 shows an event tree. The incident, a 100 lb/min release of HCN from a tank vent, branches into incident outcomes: toxic vapor atmospheric dispersion, jet fire, BLEVE of the HCN tank, and unconfined vapor cloud explosion. Each outcome branches into incident outcome cases, e.g., 5 mph wind with Stability Class A, 10 mph wind with Stability Class D, or 15 mph wind with Stability Class E for the dispersion; tank full or tank 50% full for the BLEVE; and explosion after 15, 30, or 60 minutes of release for the vapor cloud explosion.]
FIGURE 1.2. The relationship between incident, incident outcome, and incident outcome
cases for a hydrogen cyanide (HCN) release.
The event tree in Figure 1.2 has been provided to illustrate the relationship
between an incident, incident outcomes, and incident outcome cases. Each of these
terms will be developed further in this chapter.
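The event-tree logic behind Figure 1.2 can be sketched by splitting a single incident frequency across outcomes with conditional branch probabilities; the incident frequency and ignition probabilities below are assumptions for illustration, not values from this book:

```python
# Hypothetical sketch of event-tree branching for a flammable/toxic release.
f_incident = 1e-4                # leaks per year (assumed)
p_immediate_ignition = 0.1       # assumed
p_delayed_ignition = 0.2         # assumed, given no immediate ignition
p_explosion_given_delayed = 0.4  # assumed split between VCE and flash fire

outcomes = {
    "jet fire": f_incident * p_immediate_ignition,
    "vapor cloud explosion": f_incident * (1 - p_immediate_ignition)
                             * p_delayed_ignition * p_explosion_given_delayed,
    "vapor cloud fire": f_incident * (1 - p_immediate_ignition)
                        * p_delayed_ignition * (1 - p_explosion_given_delayed),
    "toxic cloud": f_incident * (1 - p_immediate_ignition)
                   * (1 - p_delayed_ignition),
}

# Since the branch probabilities at each node sum to 1, the outcome
# frequencies must sum back to the incident frequency.
assert abs(sum(outcomes.values()) - f_incident) < 1e-15
```

Each outcome frequency would then be split further across incident outcome cases (weather conditions, release durations, etc.) before consequences are estimated.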
1.2. Component Techniques of CPQRA
It is convenient (for ease of understanding and administration) to divide the complete
CPQRA procedure into component techniques (Section 1.2.1). Many CPQRAs do
not require the use of all the techniques. Through the use of prioritized procedures
(Section 1.2.2), the CPQRA can be shortened by simplifying or even skipping certain
techniques that appear in the complete CPQRA procedure.
1.2.1. Complete CPQRA Procedure
A framework for the complete CPQRA methodology for a process system is given in
Figure 1.3. This diagram shows
• the full logic of a CPQRA in more detail
• the relationship between a CPQRA and a risk assessment
• the interaction of a CPQRA with
-the analysis data base
-user requirements
-user reaction to risk estimates from a CPQRA
TABLE 1.2. CPQRA Hazards, Event Sequences, Incident Outcomes, and Consequences

Process hazards
Significant inventories of:
Flammable materials
Combustible materials
Unstable materials
Corrosive materials
Asphyxiants
Shock sensitive materials
Highly reactive materials
Toxic materials
Inerting gases
Combustible dusts
Pyrophoric materials
Extreme physical conditions
High temperatures
Cryogenic temperatures
High pressures
Vacuum
Pressure cycling
Temperature cycling
Vibration/liquid hammering
Event sequences

Initiating events
Process upsets
Process deviations
Pressure
Temperature
Flow rate
Concentration
Phase/state change
Impurities
Reaction rate/heat of reaction
Spontaneous reaction
Polymerization
Runaway reaction
Internal explosion
Decomposition
Containment failures
Pipes, tanks, vessels,
gaskets/seals
Equipment malfunctions
Pumps, valves, instruments,
sensors, interlock failures
Loss of utilities
Electrical, nitrogen, water,
refrigeration, air, heat
transfer fluids, steam,
ventilation
Management systems failure
Human error
Design
Construction
Operations
Maintenance
Testing and inspection
External events
Extreme weather conditions
Earthquakes
Nearby accidents' impacts
Vandalism/sabotage
Intermediate events
Propagating factors
Equipment failure
safety system failure
Ignition sources
Furnaces, flares, incinerators
Vehicles
Electrical switches
Static electricity
Hot surfaces/cigarettes
Management systems failure
Human errors
Omission
Commission
Fault diagnosis
Decision-making
Domino effects
Other containment failures
Other material release
External conditions
Meteorology
Visibility
Risk reduction factors
Control/operator responses
Alarms
Control system response
Manual and automatic
emergency shutdown
Fire/gas detection system
Safety system responses
Relief valves
Depressurization systems
Isolation systems
High reliability trips
Back-up systems
Mitigation system responses
Dikes and drainage
Flares
Fire protection systems
(active and passive)
Explosion vents
Toxic gas absorption
Emergency plan responses
Sirens/warnings
Emergency procedures
Personnel safety equipment
Sheltering
Escape and evacuation
External events
Early detection
Early warning
Specially designed structures
Training
Other management systems
Incident outcomes
Analysis
Discharge
Flash and evaporation
Dispersion
Neutral or positively buoyant
gas
Dense gas
Fires
Pool fires
Jet fires
BLEVEs
Flash fires
Explosions
Confined explosions
Vapor cloud explosions
(VCE)
Physical explosions
Dust explosions
Detonations
Condensed phase detonations
Missiles
Consequences
Effect analysis
Toxic effects
Thermal effects
Overpressure effects
Damage assessments
Community
Workforce
Environment
Company assets
Figure 1.3 also provides cross-references to other sections of this volume, where
details of the techniques are given. The full logic of a CPQRA involves the following
component techniques:
1. CPQRA Definition
2. System Description
3. Hazard Identification
4. Incident Enumeration
5. Selection
6. CPQRA Model Construction
7. Consequence Estimation
8. Likelihood Estimation
9. Risk Estimation
10. Utilization of Risk Estimates
A brief account of the role of each of the techniques is given below, and more
detailed accounts are given in the sections indicated.
• CPQRA Definition converts user requirements into study goals (Section
1.9.1) and objectives (Section 1.9.2). Risk measures (Section 4.1) and risk presentation formats (Section 4.2) are chosen in finalizing a scope of work for the
CPQRA. A depth of study (Section 1.9.3) is then selected based on the specific
objectives defined and the resources available. The need for special studies (e.g.,
the evaluation of domino effects, computer system failures, or protective system
unavailability) is also considered (Chapter 6). CPQRA definition concludes with
the definition of study specific information requirements to be satisfied through
the construction of the analysis data base.
• System Description is the compilation of the process/plant information needed
for the risk analysis. For example, site location, environs, weather data, process
flow diagrams (PFDs), piping and instrumentation diagrams (P&IDs), layout
drawings, operating and maintenance procedures, technology documentation,
process chemistry, and thermophysical property data may be required. This
information is fed to the analysis data base for use throughout the CPQRA.
• Hazard Identification is another step in CPQRA. It is critical because a hazard
omitted is a hazard not analyzed. Many aids are available, including experience,
engineering codes, checklists, detailed process knowledge, equipment failure
experience, hazard index techniques, what-if analysis, hazard and operability
(HAZOP) studies, failure modes and effects analysis (FMEA), and preliminary
hazard analysis (PHA). These aids are extensively reviewed in the HEP Guidelines, Second Edition (AIChE/CCPS, 1992). Typical process hazards identified
using these aids are listed in Table 1.2. Additional information on common chemical hazards is given in Bretherick (1983), Lees (1980), and Marshall (1987).
• Incident Enumeration is the identification and tabulation of all incidents without regard to importance or initiating event. This, also, is a critical step, as an
incident omitted is an incident not analyzed (Section 1.4.1).
• Selection is the process by which one or more significant incidents are chosen to represent all identified incidents (Section 1.4.2.1), incident outcomes are identified (Section 1.4.2.2), and incident outcome cases are developed (Section 1.4.2.3).

[Figure 1.3 is a flowchart of the complete CPQRA framework. User requirements (standards, economic criteria, risk targets) feed CPQRA Definition, which sets goals (§1.9.1), objectives (§1.9.2), depth of study (§1.9.3), risk measures (§4.1), risk presentation (§4.2), special topics (Chapter 6), and database requirements (Chapter 5). The methodology execution sequence then runs through System Description, Hazard Identification (HEP Guidelines), Incident Enumeration (§1.4.1), Incident Selection (§1.4.2), and CPQRA Model Construction (§1.2.2) to Consequence Estimation and Likelihood Estimation. Consequence estimation draws on physical and effects models: discharge (§2.1.1), flash and evaporation (§2.1.2), neutral and dense gas dispersion (§2.1.3), unconfined explosion (§2.2.1), BLEVE (§2.2.3), pool and jet fires (§2.2.5), toxic gas (§2.3.1), thermal (§2.3.2), explosion (§2.3.3), and evasive action (§2.4). Likelihood estimation (frequencies or probabilities) uses the historical incident approach (§3.1); frequency modeling by fault tree analysis (§3.2.1), event tree analysis (§3.2.2), and other techniques (§6.4); and complementary modeling of common cause failure (§3.3.1), human reliability (§3.3.2), and external events (§3.3.3). Both feed Risk Estimation (risk calculation, §4.4; risk uncertainty, sensitivity, and importance, §4.5), supported throughout by the analysis database of process plant data (§5.2), environmental data (§5.4), and likelihood data (§5.1, §5.5) drawn partly from external data sources (Chapter 5). The risk estimate passes to Utilization of Risk Estimate, i.e., risk assessment, absolute or relative (§1.8). If the result is acceptable, the new/modified system design is complete; if not, user reaction selects from a modifications menu: (1) system modification (frequency or consequence reduction through design, inventory, layout, isolation, control, operating and management procedures, or mitigation), subject to economic assessment and system cost evaluation; (2) amendment of the CPQRA; (3) relaxation of requirements; (4) alternative siting; or (5) revised business strategy (abandon project or shut down operations).]

FIGURE 1.3. Framework for CPQRA methodology and chapter/section headings.
• CPQRA Model Construction covers the selection of appropriate consequence
models (Chapter 2), likelihood estimation methods (Chapter 3) and their integration into an overall algorithm to produce and present risk estimates (Chapter
4) for the system under study. While various algorithms can be synthesized, a
prioritized form (Section 1.2.2) can be constructed to create opportunities to
shorten the time and effort required by less structured procedures.
• Consequence Estimation is the methodology used to determine the potential
for damage or injury from specific incidents. A single incident (e.g., rupture of a
pressurized flammable liquid tank) can have many distinct incident outcomes
[e.g., unconfined vapor cloud explosion (UVCE), boiling liquid expanding
vapor explosion (BLEVE), flash fire]. These outcomes are analyzed using source
and dispersion models (Section 2.1) and explosion and fire models (Section
2.2). Effects models are then used to determine the consequences to people or
structures (Section 2.3). Evasive actions such as sheltering or evacuation can reduce the magnitude of the consequences, and these may be included in the analysis (Section 2.4).
• Likelihood Estimation is the methodology used to estimate the frequency or
probability of occurrence of an incident. Estimates may be obtained from historical incident data on failure frequencies (Section 3.1), or from failure sequence
models, such as fault trees and event trees (Section 3.2). Most systems require
consideration of factors such as common-cause failures [a single factor leading to simultaneous failures of more than one system, e.g., power failure (Section 3.3.1)], human reliability (Section 3.3.2), and external events (Section 3.3.3).
• Risk Estimation combines the consequences and likelihood of all incident outcomes from all selected incidents to provide one or more measures of risk (Chapter 4). It is possible to estimate a number of different risk measures from a given
set of incident frequency and consequence data, and an understanding of these
measures is provided. The risks of all selected incidents are individually estimated
and summed to give an overall measure of risk. The sensitivity and uncertainty of
risk estimates and the importance of the various contributing incidents to estimates are discussed in Section 4.5.
• Utilization of Risk Estimates is the process by which the results from a risk
analysis are used to make decisions, either through relative ranking of risk reduction strategies or through comparison with specific risk targets.
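The risk estimation step described above, which combines the consequences and likelihood of all incident outcome cases and sums them into one or more risk measures, can be sketched with a societal risk (F-N) exceedance curve; all frequencies and consequence values below are hypothetical:

```python
# Hypothetical incident outcome cases as (frequency per year, fatalities).
outcome_cases = [
    (3e-4, 1), (1e-4, 3), (2e-5, 10), (5e-6, 30), (1e-6, 100),
]

def fn_curve(cases, n_values):
    """Frequency of incidents causing N or more fatalities, for each N.

    Each point on an F-N curve is the sum of the frequencies of all cases
    whose consequence equals or exceeds N.
    """
    return {n: sum(f for f, c in cases if c >= n) for n in n_values}

curve = fn_curve(outcome_cases, [1, 10, 100])
# curve[1] sums every case; curve[10] only the three largest; curve[100]
# only the largest. The curve is therefore monotonically nonincreasing in N.
```

This is one of several risk measures that can be computed from the same frequency and consequence data; Chapter 4 discusses the alternatives.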
The last CPQRA step (utilization of risk estimates) is the key step in a risk assessment. It requires the user to develop risk guidelines and to compare the risk estimate
from the CPQRA with them to decide whether further risk reduction measures are
necessary. This step has been included as a CPQRA component technique to emphasize its overall influence in designing the CPQRA methodology, but it is not discussed
in this book. Guidelines for decision analysis are contained in Tools for Making Acute
Risk Decisions (AIChE/CCPS, 1995).
Before discussing the remaining functions and activities shown in Figure 1.3, it is
important to recognize that all of the component techniques introduced above have
not been developed to the same depth or extent, nor used as widely for the same length
of time. Consequently, it is helpful to classify them according to "maturity," a term
used here to combine the concepts of degree of development of the technique and years
in use in the CPI. Greater confidence and less uncertainty are associated with the more
mature component techniques, such as hazard identification and consequence estimation. Discomfort and uncertainty increase as maturity decreases. Frequency estimation
is much less developed and practiced and accordingly classified, along with incident
enumeration and selection techniques, as less mature than hazard identification and
consequence estimation. The most underdeveloped and newest technique to the CPI of
those listed, risk estimation, is the least mature of any of the CPQRA component techniques. Accordingly, the most uncertainty associated with any component technique
accompanies risk estimates.
By reviewing the maturity scale, it is easy to rank the component techniques
according to their development potential. While consequence estimation techniques
are fairly sophisticated and some may argue "well-developed," frequency estimation
techniques offer developmental challenges and enhancement necessities. Risk estimation techniques, especially companion methodologies such as uncertainty analysis,
require substantial development and refinement, and much greater exposure before
becoming widely accepted and "user friendly." The subject of the maturity of the techniques will be revisited in Section 1.2.2 as one driving force in the precedence ordering
of CPQRA calculations.
While not considered a component technique, the development of the analysis
data base is a critical early step in a CPQRA. In addition to the data from the system
description, this data base contains various kinds of environmental data (e.g., land use
and topography, population and demography, meteorological data) and likelihood
data (e.g., historical incident data, reliability data) needed for the specific CPQRA.
Much of this information must be collected from external (outside company) sources
and converted into formats useful for the CPQRA. Chapter 5 discusses the construction of the analysis data base, and details the various sources of data available.
As shown in Figure 1.3, user reaction to the results of a risk assessment using the
CPQRA estimate can be summarized as a menu of modification options:
• systems modification through engineering/operational/procedural changes
• amendment of the goals or scope of the CPQRA
• relaxation of user requirements
• alternative sites
• adjustments to basic business strategy
Systems modification involves the proposal and evaluation of risk reduction strategies by persons knowledgeable in process technology. Risk estimation provides insight
into the degree of risk reduction possible and the areas where risk reduction may be
most effective. Proposed risk reduction strategies can incorporate changes to either
system design or operation, in order to eliminate or reduce incident consequences or
frequencies. As shown in Figure 1.3, such proposals need to be shown to meet all business needs (e.g., quality, capacity, legality, and cost) before being reviewed by CPQRA
techniques. The other user options are self-explanatory and are more properly treated
in a discussion of the risk assessment process and related risk management program.
1.2.2. Prioritized CPQRA Procedure
Most applications of the CPQRA methodology will not need to use all of the available component techniques introduced in Section 1.2.1. CPQRA component techniques
are flexible and can be applied selectively, in various orders. Consequence estimation
can be used as a screening tool to identify hazards of negligible consequence (and therefore a negligible risk) to avoid detailed frequency estimation. Similarly, frequency estimation can identify hazards of sufficiently small likelihood of occurrence that
consequence estimates are unnecessary. The procedure outlined in Figure 1.4 has been
constructed to illustrate one way to prioritize the calculations. It has been designed to
provide opportunities to shorten the time and effort needed to achieve acceptable
results. These opportunities arise naturally due to the ordering of the calculations. The
criteria for establishing the priority of calculations are based on the maturity of the
component techniques and their ease of use. The more mature consequence estimation
techniques are given highest priority. These techniques are also the most easily executed. The degree of effort increases through the procedure, along with uncertainties as
the maturity of the component techniques decreases.
The prioritized CPQRA procedure given in Figure 1.4 involves the following steps:
Step 1—Define CPQRA.
Step 2—Describe the system.
Step 3—Identify hazards.
Step 4—Enumerate incidents.
Step 5—Select incidents, incident outcomes, and incident outcome cases.
These five steps are the same as the corresponding steps in Figure 1.3, and are discussed in Section 1.2.1.
• Step 6 Estimate Consequences. If the consequences of an incident are acceptable at any frequency, the analysis of the incident is complete. This is a simplification of the risk analysis, in which the probability of occurrence of the incident
within the time period of interest is assumed to be 1.0 (the incident is certain to
occur). For example, the overflow of an ethylene glycol storage tank to a containment system poses little risk even if the event were to occur. If the consequences
are not acceptable, proceed to Step 7.
• Step 7 Modify System to Reduce Consequences. Consequence reduction
measures should be proposed and evaluated. The analysis then returns to Step 2
to determine whether the modifications have introduced new hazards and to
reestimate the consequences. If there are no technically feasible and economically
viable modifications, or if the modifications do not eliminate unacceptable consequences, proceed to Step 8.
• Step 8 Estimate Frequencies. If the frequency of an incident is acceptably low,
given estimated consequences, the analysis of the incident is complete. If not,
proceed to Step 9.
• Step 9 Modify System to Reduce Frequencies. This step is similar in concept
to Step 7. If there are no technically feasible and economically viable modifications to reduce the frequency to an acceptable level, proceed to Step 10. Otherwise, return to Step 2.
[Figure 1.4 is a flowchart of the prioritized procedure. Steps 1 through 5 (define CPQRA goals, objectives, depth of study, etc.; describe the system, including equipment design, chemistry, thermodynamics, and operating procedures; identify hazards using experience, codes, checklists, HAZOPs, etc.; enumerate incidents; and select incidents using consequence and effect models and decision criteria) produce a list of selected incidents, incident outcomes, and incident outcome cases. Step 6 estimates consequences: if they are acceptably low at any frequency of occurrence, the design is acceptable; if consequences are too high, Step 7 modifies the system to reduce consequences and the analysis returns to Step 2. If no acceptable modification is found, Step 8 estimates frequencies using historical analysis, fault tree analysis, or event tree analysis: if frequencies are acceptably low for any consequences, the design is acceptable; if frequencies are too high, Step 9 modifies the system to reduce frequencies. Step 10 combines frequencies and consequences to estimate risk: if the combination is acceptably low, the design is acceptable; if the risks are too high, Step 11 modifies the system to reduce risk; if no modification succeeds, the design is unacceptable because the combination of consequences and frequencies is unacceptably high.]

FIGURE 1.4. One version of a prioritized CPQRA procedure.
• Step 10 Combine Frequency and Consequences to Estimate Risk. If the risk
estimate is at or below target or if the proposed strategy offers acceptable risk
reduction, the CPQRA is complete and the design is acceptable.
• Step 11 Modify System to Reduce Risk. This is identical in concept to Steps 7
and 9. If no modifications are found to reduce risk to an acceptable level, then
fundamental changes to process design, user requirements, site selection, or
business strategy are necessary.
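The decision flow of Steps 6 through 11 can be sketched as a simple loop. This is an illustrative sketch only, assuming each test and modification is available as a callable; the predicate and modification functions are hypothetical stand-ins for the consequence, frequency, and risk techniques described in the text.

```python
# Illustrative sketch of the prioritized CPQRA decision flow (Steps 6-11).
# The *_ok predicates and reduce_* functions are hypothetical stand-ins;
# reduce_* returns a modified system, or None if no technically feasible
# and economically viable modification exists.

def prioritized_cpqra(system, consequence_ok, frequency_ok, risk_ok,
                      reduce_consequences, reduce_frequencies, reduce_risk):
    """Return 'acceptable' or 'unacceptable' for a single incident."""
    while True:
        # Step 6: estimate consequences (frequency assumed to be 1.0).
        if consequence_ok(system):
            return "acceptable"   # consequences tolerable at any frequency
        # Step 7: modify the system to reduce consequences, then
        # return to Step 2 (here, the top of the loop).
        modified = reduce_consequences(system)
        if modified is not None:
            system = modified
            continue
        # Step 8: estimate frequencies.
        if frequency_ok(system):
            return "acceptable"
        # Step 9: modify the system to reduce frequencies.
        modified = reduce_frequencies(system)
        if modified is not None:
            system = modified
            continue
        # Step 10: combine frequency and consequence into a risk estimate.
        if risk_ok(system):
            return "acceptable"
        # Step 11: modify the system to reduce risk.
        modified = reduce_risk(system)
        if modified is not None:
            system = modified
            continue
        # No acceptable modification found: fundamental changes to process
        # design, user requirements, siting, or business strategy are needed.
        return "unacceptable"
```

The loop makes explicit that every modification step routes back to Step 2, so that new hazards introduced by the change are re-examined before the analysis proceeds.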
In summary, Figure 1.3 presents the overall structure of CPQRA, and Figure 1.4
illustrates one method of implementation. A complete CPQRA as illustrated in Figure
1.3 may not be necessary or feasible on every item or system in a given process unit.
Guidance on the selection and use of CPQRA component techniques is presented later
in this chapter.
1.3. Scope of CPQRA Studies
It is good engineering practice to pay careful attention to the scope of a CPQRA, in
order to satisfy practical budgets and schedules; it is not unusual for the work load to
"explode" if the scope is not carefully specified in advance of the work and enforced
during project execution. This section introduces the concept of a study cube ( Figure
1.5) to relate scope, work load, and goals (Section 1.3.1) and then gives typical goals
for CPQRAs of various scopes (Section 1.3.2).
1.3.1. The Study Cube
CPQRAs can range from simple, "broad brush" screening studies to detailed risk analyses studying large numbers of incidents, using highly sophisticated frequency and
consequence models. Between these extremes a continuum of CPQRAs exists with no
rigidly defined boundaries or established categories. To better understand how the
scope ranges for CPQRAs it is useful to show them in the form of a cube, in which the
axes represent the three major factors that define the scope of a CPQRA: risk estimation technique, complexity of analysis, and number of incidents selected for study. This
arrangement also allows us to consider "planes" through the cube, in which the value of
one of the factors is held constant.
1.3.1.1. THE STUDY CUBE AXES
For this discussion, each axis of the Study Cube has been arbitrarily divided into three
levels of complexity. This results in a total of 27 different categories of CPQRA,
depending on which combinations of complexity of treatment are selected for the three
factors. Each cell in the cube represents a potential CPQRA characterization. However, some cells represent combinations of characteristics that are more likely than others to be
useful in the course of a project or in the analysis of an existing facility.
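The 3 × 3 × 3 structure of the cube can be enumerated directly. The axis labels below are taken from the cube description in the text; the code itself is an illustrative sketch, not part of the methodology.

```python
from itertools import product

# The three study-cube axes, each with three levels (per Figures 1.5 and 1.6).
RISK_ESTIMATION = ["consequence", "frequency", "risk"]
COMPLEXITY = ["simple", "intermediate", "complex"]
INCIDENTS = ["bounding group", "representative set", "expansive list"]

# One cell per combination: 3 * 3 * 3 = 27 CPQRA characterizations.
cells = list(product(RISK_ESTIMATION, COMPLEXITY, INCIDENTS))
print(len(cells))  # 27

# The three shaded cells on the cube's main diagonal (Table 1.5).
diagonal = [(RISK_ESTIMATION[i], COMPLEXITY[i], INCIDENTS[i]) for i in range(3)]
```

The `diagonal` list reproduces the three cells singled out later in Table 1.5: simple/consequence, intermediate/frequency, and complex/risk.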
Risk Estimation Technique. Each of the components of this axis corresponds to a
study exit point in Figure 1.4. The complexity and level of effort necessary increase
FIGURE 1.5. The study cube. Each cell in the cube represents a particular CPQRA study with a
defined depth of treatment and risk emphasis. For orientation purposes, the shaded cells
along the main diagonal of the cube are described in Table 1.5.
along the axis—from consequence through frequency to risk estimation—but not necessarily linearly.
In another sense, the representation of estimation by consequence, frequency, and
risk is indicative of the level of maturity of these techniques. Quantification of the consequences from an incident involving loss of containment of a process fluid has been
extensively studied. Once a release rate is established, the development of the resulting
vapor cloud can be fairly well described by various source and dispersion models,
although gaps in our understanding—particularly for flashing or two-phase discharges,
near-field dispersion, and local flow effects—do exist. Quantification of the frequency
of an incident is less well understood. Where historical data are not available, fault tree
analysis (FTA) and event tree analysis (ETA) methods are used. These methods rely
heavily on the judgment and experience of the analyst and are not as widely applied in
the CPI as consequence models. Much remains to be learned about how to produce a
truly representative risk estimate with minimum uncertainty and bias.
Complexity of Study. This axis presents a complexity scale for CPQRAs. Position
along the axis is derived from two factors:
• the complexity of the models to be used in a study
• the number of incident outcome cases to be studied
Model complexity can vary from simple algebraic equations to extremely complex
functions such as those used to estimate the atmospheric dispersion of dense gases. The
number of incident outcome cases to be studied is the product of the number of incident outcomes selected and the number of cases to be studied per outcome. The
number of cases to be studied may range from one—assuming uniform wind direction
and a single wind speed—to many, using various combinations of wind speed, direction, and atmospheric stability for each incident outcome.
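The case-count arithmetic described above is a simple product. In the sketch below, the weather parameter values are invented for the example and are not taken from the text.

```python
# Illustrative count of incident outcome cases. The parameter values are
# hypothetical choices an analyst might make, not values from the text.
wind_speeds = [1.5, 5.0]          # m/s
wind_directions = 8               # compass sectors considered
stability_classes = ["D", "F"]    # Pasquill atmospheric stability classes

# Cases per outcome: one case per combination of weather parameters.
cases_per_outcome = len(wind_speeds) * wind_directions * len(stability_classes)

# Total cases to study: outcomes selected times cases per outcome.
incident_outcomes = 4             # e.g., outcomes selected for one incident
total_cases = incident_outcomes * cases_per_outcome
print(cases_per_outcome, total_cases)  # 32 128
```

Even this modest parameter set yields 128 cases for a single incident, which is why the complexity axis of the study cube grows so quickly.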
Figure 1.6 illustrates how model complexity and the number of incident outcome
cases are combined to produce the simple, intermediate, and complex zones in the
study cube.
Number of Incidents. The three groups of incidents used in Figure 1.5—bounding
group, representative set, and expansive list—can be explained using the three classes of
incidents in Table 1.3.
The bounding group contains a small number of incidents. Members of this group
include those catastrophic incidents sometimes referred to as the worst case. The intent
of selecting incidents for this group is to allow determination of an upper bound on the
estimate of consequences. This approach focuses attention on extremely rare incidents,
rather than the broad spectrum of incidents that often comprises the major portion of
the risk. The representative set can contain one or more incidents from each of the
three incident classes in Table 1.3 when evaluating risks to employees. When evaluating risk to the public, the representative set of incidents would probably only include
selections from the catastrophic class of events because small incidents do not normally
have significant impact at larger distances. The purpose of selecting representative incidents is to reduce study effort without losing resolution or adding substantial bias to
the risk estimate. The expansive list contains all incidents in all three classes selected
through the incident enumeration techniques discussed in Section 1.4.1.
FIGURE 1.6. Development of complexity of study axis values for the Study Cube. The main
diagonal values (shaded cells) correspond with the "complexity of study" values used in
Figure 1.5.
1.3.1.2. PLANES THROUGH THE STUDY CUBE
The study cube provides a conceptual framework for discussing factors that influence
the depth of a CPQRA. It is arbitrarily divided into 27 cells, each defined by three factors, and qualitative scales are given for each factor or cube axis.
In addition to considering cells in the study cube, it is convenient to refer to planes
through the cube, especially through the risk estimation technique axis. A separate
plane exists for consequence, frequency, and risk estimation. Anywhere within one of
these planes, the risk estimation technique is fixed. Referring to consequence plane
studies, there are nine combinations of the complexity of study and number of selected
incidents. The use of the plane concept when describing CPQRAs is intended to reinforce the notion that several degrees of freedom exist when defining the scope of a
CPQRA study, and it is not enough to cite only the risk estimation technique to be
used when discussing a specific level of CPQRA.
1.3.2. Typical Goals of CPQRAs
Examples of typical goals of CPQRAs are summarized in Table 1.4, which highlights
incident groupings that are appropriate to achieve each goal. Ideally, all incidents
would be considered in every analysis, but time and cost constraints require optimizing
the number of incidents studied. Consequently, incident groups other than the expansive list are preferred.
Goals that are appropriate early in an emerging capital project will be constrained
by available information. However, for a mature operating plant, sufficient information will usually be available to satisfy any of the goals in Table 1.4. The amount and
quality of information available for a CPQRA depend on the stage in the process' life
when the study is executed. This effect is illustrated conceptually in Figure 1.7. A specific depth of study can be executed only if the process information available equals or
exceeds the information required.
Each of the 27 depths of study shown in the Study Cube has specific information
requirements. The information required for a CPQRA is a function of not only the
position of the corresponding cell in the study cube (depth of study) selected, but also
the specific study objectives. In general, information needs increase as
• the number of incidents increases,
• the complexity of study (number of incident outcome cases and complexity of
models) increases,
• the estimation technique progresses from consequence through frequency to risk
estimation calculations.
TABLE 1.3. Classes of Incidents

Localized incident: Localized effect zone, limited to a single plant area (e.g., pump fire, small toxic release)

Major incident: Medium effect zone, limited to site boundaries (e.g., major fire, small explosion)

Catastrophic incident: Large effect zone, off-site effects on the surrounding community (e.g., major explosion, large toxic release)
FIGURE 1.7. Information availability to CPQRA along the life of a chemical process.
Conceptually, information requirements increase moving from the origin along
the main diagonal of the Study Cube. Specific study objectives are developed from the
CPQRA goals by project management (Section 1.9.2). These specific objectives may
add information requirements (often unique) to those established by the position in
the cube.
In order to discuss important issues of study specification, it is convenient to limit
attention to three of the 27 cells in the cube. These three cells are a simple/consequence
CPQRA, intermediate/frequency CPQRA, and complex/risk CPQRA (Table 1.5).
They occupy the main diagonal of the cube as illustrated in Figure 1.5. The cells are
defined in terms of increasing CPQRA resolution. The choice of these cells in no way
implies that they represent the most common types of risk studies. They are only presented to explain the general parameters of this form of presentation of CPQRA study
depth. Further information on CPQRA studies for different cells in the study cube is
given in Chapter 7, where a number of qualitative examples are presented. Chapter 8
presents more specific, quantitative case studies.
1.4. Management of Incident Lists
Effective management of a CPQRA requires enumeration (Section 1.4.1) and selection (Section 1.4.2) of incidents, and a formal means for tracking (Section 1.4.3) the
incidents, incident outcomes, and incident outcome cases. Enumeration attempts to
ensure that no significant incidents are overlooked; selection tries to reduce the incident outcome cases studied to a manageable number; and tracking ensures that no
selected incident, incident outcome, or incident outcome case is lost in the calculation
procedure.
TABLE 1.4. Typical Goals of CPQRAs
To Screen or Bracket the Range of Risks Present for Further Study. Screening or bracketing
studies often emphasize consequence results (perhaps in terms of upper and lower bounds of effect
zones) without a frequency analysis. This type of study uses a bounding group of incidents.
To Evaluate a Range of Risk Reduction Measures. This goal is not limited to any particular
incident grouping, but representative sets or expansive lists of incidents are typically used. Major
contributors to risk are identified and prioritized. A range of risk reduction measures is applied to the
major contributors, in turn, and the relative benefits assessed. If a risk target is employed, risk
reduction measures would be considered that could not only meet the target, but could exceed it if
available at acceptable cost.
To Prioritize Safety Investments. All organizations have limited resources. CPQRA can be used to
prioritize risks and ensure that safety investment is directed to the greatest risks. A bounding group or
representative set of incidents is commonly used.
To Estimate Financial Risk. Even if there are no hazards that have the potential for injury to people,
the potential for financial losses or business interruption may warrant a CPQRA. Depending on the
goals, different classes of incidents might be emphasized in the CPQRA. An annual insurance review
might highlight localized and major incidents using a bounding group with consequences specified in
terms of loss of capital equipment and production.
To Estimate Employee Risk. Several companies have criteria for employee risk, and CPQRA is used
to verify compliance with these criteria. In principle, the expansive list of incidents could be
considered, but the major risk contributors to plant employees are localized incidents and major
incidents (Table 1.3). Rare, catastrophic incidents often contribute less than a few percent to total
employee risk. A representative set or bounding group of incidents may be appropriate.
To Estimate Public Risk. As with employee risk, some internal-corporate and regulatory agency
public risk criteria may have been suggested or adopted as "acceptable risk" levels. CPQRA can be
used to check compliance. Where such criteria are not met, risk reduction measures may be
investigated as discussed above. The important contributors to off-site, public risk are major and
catastrophic incidents. A representative set or expansive list of incidents is normally utilized.
To Meet Legal or Regulatory Requirements. Legislation in effect in Europe, Australia, and in some
States (e.g., NJ and CA) may require CPQRAs. The specific objectives of these vary, according to the
specific regulations, but the emphasis is on public risk and emergency planning. A bounding group or
representative set of incidents is used.
To Assist with Emergency Planning. CPQRA may be used to predict effect zones for use in
emergency response planning. Where the emergency plan deals with on-site personnel, all classes of
incidents may need to be considered. For the community, major and catastrophic classes of incidents
are emphasized. A bounding group of incidents is normally sufficient for emergency planning
purposes.
1.4.1. Enumeration
The objective of enumeration is to identify and tabulate all members of the incident
classes in Table 1.3, regardless of importance or of initiating event. In practice, this can
never be achieved. However, it must be remembered that omitting important incidents
from the analysis will bias the results toward underestimating overall risk.
The starting point of any analysis is to identify all the incidents that need to be
addressed. These incidents can be classified under either of two categories, loss of containment of material or loss of containment of energy. Unfortunately, there is an infinite number of ways (incidents) by which loss of containment can occur in either
category. For example, leaks of process materials can be of any size, from a pinhole up
to a severed pipeline or ruptured vessel. An explosion can occur in either a small container or a large container and, in each case, can range from a small puff to a catastrophic detonation.
TABLE 1.5. Definitions of Cells Along the Main Diagonal of the Study Cube (Figure 1.5)
Simple/Consequence CPQRA
Estimation Technique—Consequence
Complexity of Study
Number of Incident Outcome Cases—Small
Complexity of Model—Elementary
Number of Incidents—Bounding Group
This is a CPQRA that is useful for screening or risk bounding purposes. It requires the least amount
of process definition and makes extensive use of simplified techniques. In terms of Figure 1.4, it
consists of consequence calculations only (Steps 1 through 7). A Simple/Consequence CPQRA is
suitable for screening at any stage of the project: in the case of an existing plant, screening might
highlight the need to consider further study; at the design stage, it might aid in optimizing siting and
layout.
Intermediate/Frequency CPQRA
Estimation Technique—Frequency
Complexity of Study
Number of Incident Outcome Cases—Medium
Complexity of Model—Advanced
Number of Incidents—Representative Set
This is a more detailed CPQRA that corresponds to Steps 1 through 9 in Figure 1.4. It cannot be
applied until the design is substantially developed, unless historical frequency techniques are applied. It
may be applied at any time after process flow sheet definition. Complete descriptions of the process
and equipment are not usually necessary. A Representative Set of incidents is chosen. In principle, the
results of an Intermediate/Frequency CPQRA should approximate a detailed study, but have less
resolution.
Complex/Risk CPQRA
Estimation Technique—Risk
Complexity of Study
Number of Incident Outcome Cases—Large
Complexity of Model—Sophisticated
Number of Incidents—Expansive List
This is the most detailed CPQRA. It employs the full methodology described in Figure 1.4. It may be
applied to operating plants or to capital projects, but only after detailed design has been completed,
when sufficient information is available. Where appropriate, it would employ the most sophisticated
analytical techniques reviewed in Chapters 2 and 3. However, it would be unlikely to apply the most
sophisticated techniques to all aspects of the study—only to those items that contribute most to the
result. Due to the number of incidents, incident outcomes and incident outcomes cases considered,
this study level provides the highest resolution.
The HEP Guidelines, Second Edition (AIChE/CCPS, 1992) outlines the roles of
HAZOP, FMEA, and What-If in hazard assessment. The supplemental "Questions for
Hazard Evaluation" shown in Appendix B of the HEP Guidelines can be helpful for identifying hazards, initiating events, and incidents. While none of these hazard identification
techniques directly produces a list of incidents, each provides a methodology from which
initiating events can be developed. Proper scenario selection is extremely important in
CPQRA and the results of the analysis are no better than the scenarios selected.
In addition to the above techniques, Table 1.2 can be used as a checklist to assist in
further incident enumeration through listing candidate initiating events, intermediate
events, and incident outcomes and consequences. It should be understood that there is
FIGURE 1.8. Incident lists versus number of incidents (comparison of lists developed through
incident selection to the reality list).
no single technique whose application guarantees the comprehensive listing of all incidents (i.e., the reality list of Figure 1.8 is unattainable). Nonetheless, use of hazard
identification techniques and Table 1.2 can lead to the identification of a broad spectrum of incidents, sufficient for defining even the expansive list of incidents (Section
1.4.2.1).
Other approaches for enumeration of major incidents and their initiating events
have been developed. One of these uses fault tree analysis (FTA). The fault tree is a
logic diagram showing how initiating events, at the bottom of the tree, through a
sequence of intermediate events, can lead to a top event. This analysis requires two
knowledge bases: (1) a listing of major subevents which contribute to a top event of
loss of containment, and (2) the development of each subevent to a level sufficient to
describe the majority of initiating events. For enumeration, this process is executed
without any attempt to quantify the frequency of the top event. However, this fault
tree can serve as a means for obtaining frequencies later in the CPQRA. The success of
this technique is principally dependent on the expertise of the analyst. An example is
given by Prugh (1980).
The "Loss of Containment Checklist" included in this book as Appendix A can be
applied to enumerate credible incidents. This checklist considers causes arising from
nonroutine process venting, deterioration and modification, external events, and process deviations. Sample incidents include the following:
• overpressuring a process or storage vessel due to loss of control of reactive materials or external heat input
• overfilling of a vessel or knock-out drum
• opening of a maintenance connection during operation
• major leak at pump seals, valve stem packings, flange gaskets, etc.
• excess vapor flow into a vent or vapor disposal system
• tube rupture in a heat exchanger
• fracture of a process vessel causing sudden release of the vessel contents
• line rupture in a process piping system
• failure of a vessel nozzle
• breaking off of a small-bore pipe such as an instrument connection or branch line
• inadvertently leaving a drain or vent valve open.
The reader should note, however, that the loss of containment checklist should not
be considered exhaustive, and other enumeration techniques should be considered in
developing an expansive list of incidents.
Another way to generate an incident list is to consider potential leaks and major
releases from fractures of all process pipelines and vessels. The enumeration of incidents from these sources is made easier by compiling pertinent information (listed
below), relevant to all process and storage vessels. This compilation should include all
pipework and vessels in direct communication, as these may share a significant inventory that cannot be isolated in an emergency.
• vessel number, description, and dimensions
• materials present
• vessel conditions (phase, temperature, pressure)
• connecting piping
• piping dimensions (diameter and length)
• pipe conditions (phase, pressure drop, temperature)
• valving arrangements (automatic and manual isolation valves, control valves,
excess flow valves, check valves)
• inventory (of vessel and all piping interconnections, etc.)
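The compilation listed above can be held in a simple record structure. This is a minimal sketch assuming invented field names; a real study would carry far more detail per vessel and pipe segment.

```python
from dataclasses import dataclass, field

# Hypothetical records for the vessel/piping compilation described in the
# text; field names are illustrative, not from the source document.

@dataclass
class PipeSegment:
    description: str
    diameter_mm: float
    length_m: float
    phase: str                 # e.g., "liquid" or "vapor"

@dataclass
class VesselRecord:
    vessel_number: str
    description: str
    materials: list            # materials present
    phase: str
    temperature_c: float
    pressure_barg: float
    piping: list = field(default_factory=list)            # connected PipeSegments
    isolation_valves: list = field(default_factory=list)  # valving arrangements
    inventory_kg: float = 0.0  # vessel plus all unisolable interconnections
```

Grouping a vessel with all pipework in direct communication, as the text recommends, corresponds here to populating `piping` and summing the shared, unisolable inventory into `inventory_kg`.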
This approach is discussed in more detail in the Rijnmond Area Risk Study
(Rijnmond Public Authority, 1982) and the Manual of Industrial Hazard Assessment
Techniques (World Bank, 1985). Of necessity, this approach excludes specific incidents
and initiating events that would be generated by hazard identification methods (e.g.,
releases from emergency vents or relief devices). Freeman et al. (1986) describe a
system that addresses both fractures and other initiating events. The list of incidents
can also be expanded by considering each of the incident outcomes presented in Table
1.2 and proposing credible incidents that can produce them. Pool fires might result
from releases to tank dikes or process drainage areas; vapor cloud explosions, flash
fires, and dispersion incidents from other release scenarios; confined explosions (e.g.,
those due to polymerization, detonation, overheating) from reaction chemistry and
abnormal process conditions; or BLEVE, from fire exposure to vessels containing
liquids.
1.4.2. Selection
The goal of selection is to limit the total number of incident outcome cases to be studied to a manageable size, without introducing bias or losing resolution through overlooking significant incidents or incident outcomes. Different techniques are used to
select incidents (Section 1.4.2.1), incident outcomes (Section 1.4.2.2), and incident
outcome cases (Section 1.4.2.3). The risk analyst must be proficient in each of these
techniques if a defensible basis for a representative CPQRA is to be developed.
1.4.2.1. INCIDENTS
The purpose of incident selection is to construct an appropriate set of incidents for the
study from the initial list that has been generated by the enumeration process. An
appropriate set of incidents is the minimum number of incidents needed to satisfy the
requirements of the study and adequately represent the spectrum of incidents enumerated, considering budget constraints and schedule.
The effects of selection are shown graphically in Figure 1.8. The reality list contains all possible incidents; it is effectively infinite in length. The initial list contains all the
incidents identified by the enumeration methods chosen. The remaining lists are
described in this section. Figure 1.8 shows the relative reductions in list size that are
achieved by successive operations on the initial list.
One of the risk analyst's jobs is to select a subset of the Initial List for further analysis. This involves several tasks, each resulting in a unique list ( Figure 1.8). Throughout
the selection process, the risk analyst must exercise caution so that critical incidents,
which might substantially affect the risk estimate, are not overlooked or excluded from
the study. The initial list of incidents is reviewed to identify those incidents that are too
small to be of concern (Step 4, Figure 1.4). Removing these incidents from the initial
list produces a revised list (Figure 1.8).
To be cost effective and reduce the CPQRA calculational burden, it is essential to
compress this revised list by combining redundant or very similar incidents. This new
list is termed the condensed list (Figure 1.8). This list can and should be reduced further by grouping similar incidents into subsets, and, where possible, replacing each
subset with a single equivalent incident. This grouping and replacement can be accomplished by consideration of similar inventories, compositions, discharge rates, and discharge locations.
The list formed in this manner is the expansive list and represents the list from
which the study group is selected. A detailed or complex study would utilize the entire
expansive list of incidents, while a screening study would utilize only one or two incidents from this list.
The expansive list can be reduced to one or both of two smaller "lists": the bounding group or the representative set (Section 1.3.1 and Figure 1.5). Selection of a
bounding group of incidents typically considers only the subsets of catastrophic incidents on the expansive list. This may be further reduced by selecting only the worst
possible incident or worst credible incident.
Selection of a representative set of incidents from the expansive list should include
contributions from each class of incident, as defined in Table 1.3. This process can be
facilitated through the use of ranking techniques. By allocating incidents into the three
classes presented in Table 1.3, an inherent ranking is achieved. Further ranking of indi-
vidual incidents within each incident class is possible. Various schemes can be devised
to rank incidents within each incident class (e.g., preliminary ranking criteria based on
the severity of hazard posed by released chemicals, release rate, and total quantity
released). A ranking procedure is important in the selection of a representative set of
incidents if the study is to minimize bias or loss of resolution.
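One such ranking scheme within a single incident class can be sketched as a weighted score over the preliminary criteria named above. The weights and incident records are invented for the example; any real scheme would be calibrated to the study objectives.

```python
# Toy ranking of incidents within one incident class, using the preliminary
# criteria mentioned in the text: hazard severity of the released chemical,
# release rate, and total quantity released. Weights are arbitrary.

def rank_incidents(incidents):
    def score(inc):
        return (3.0 * inc["hazard_severity"]     # e.g., 1 (low) to 5 (high)
                + 2.0 * inc["release_rate_kg_s"]
                + 1.0 * inc["quantity_te"])
    return sorted(incidents, key=score, reverse=True)

ranked = rank_incidents([
    {"name": "A", "hazard_severity": 2, "release_rate_kg_s": 1.0, "quantity_te": 5.0},
    {"name": "B", "hazard_severity": 5, "release_rate_kg_s": 3.0, "quantity_te": 2.0},
])
```

The highest-scoring incidents in each class are then the natural candidates for the representative set.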
Ranking can also be a useful tool if the study objectives (Section 1.9.2) exclude
incidents below a specified cutoff value. One example is the establishment of a cutoff
for loss of containment of material events by specifying a limited range of hole sizes for
a wide range of process equipment (e.g., two for process pipework, one representing a
full-bore rupture and the other 10% of a full-bore rupture). This approach is presented
in the Manual of Industrial Hazard Assessment Techniques (World Bank, 1985). Such a
cutoff is arbitrary and a more fundamental approach is to identify, from consequence
techniques (Chapter 2), the minimum incident size of importance for each of the materials used on-site. This ensures consistent treatment of materials of different hazards.
Figure 1.9 (Hawksley, 1984) contains data on pipeline failures including the frequency
distributions for holes of various sizes.
1.4.2.2 INCIDENT OUTCOMES
The purpose of incident outcome selection is to develop a set of incident outcomes that
must be studied for each incident included in the finalized incident study list (i.e., the
bounding group, representative set, or expansive list of incidents). Each incident needs
to be considered separately. Using the list of incident outcomes presented in Table 1.2,
the risk analyst needs to determine which may result from each incident. This process is
not necessarily straightforward. While the analyst can decide whether an incident
FIGURE 1.9. Summary of some pipe failure rate data. From Hawksley (1984). Reprinted with
permission.
involving the loss of a process chemical to the atmosphere needs to be examined using
dispersion analysis because of potential toxic gas effects, what happens if the same
material is immediately ignited on release?
Figure 1.2 was presented to illustrate how one incident may create one or more
incident outcomes, using the logical structure of an event tree. More detailed event
trees have been developed in attempts to illustrate the complicated and often interrelated time series of incident outcomes that can occur. Figure 1.10 presents such an
event tree developed by Mudan (1987) to show all potential incident outcomes from
the release (loss of containment) of a hazardous chemical. Naturally, the properties of
the chemical, conditions of the release, etc., all influence which of the logical paths
shown in Figure 1.10 will apply for any specific incident. All such paths need to be considered in creating the set of outcomes to be studied for each incident included in the
finalized study list. After examination, it soon becomes apparent that even Figure 1.10
is not detailed enough to cover all possible permutations of phenomena that can immediately result from a hazardous material release.
Detailed logical structures (see Figures 1.11 and 1.12) have been developed [e.g.,
see UCSIP (1985)] to try to account for the mix of incident outcomes that can result
following an incident. No single comprehensive logic diagram exists. Various computer programs have been developed, however, to assist the analyst. Ultimately, the
analyst must be satisfied that the set of outcomes selected for each incident in the finalized study list adequately represents the range of phenomena that may follow an
incident.
1.4.2.3. INCIDENT OUTCOME CASES
As shown in Figure 1.2, for every outcome selected for study, one or more incident
outcome cases can be constructed. Each case is defined through numerically specifying
sufficient parameters to allow the case to be uniquely distinguished from all other cases
developed for the same outcome.
One straightforward distinction between incident outcome cases is the prevailing weather.
When considering the dispersion of a cloud formed from the release of a process chemical to the atmosphere, the analyst must decide how the travel of the cloud "downwind"
is to be studied. Various parameters—wind speed, atmospheric stability, atmospheric
temperature, humidity, etc.—all need to be considered.
Once the risk analyst has identified all of the parameters that influence specification
of an incident outcome, ranges of values for each parameter need to be developed, and
discrete values created within each range. An incident outcome case is specified by the
data set containing the analyst's selection of a unique value within the range developed
for each parameter. The number of outcome cases that can be created equals the
number of possible permutations of this data set using all of the discrete values for each
of the parameters.
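The combinatorial construction described above can be sketched in a few lines. The parameter names and discrete values below are hypothetical, chosen only to show that the case count is the product of the number of discrete values selected for each parameter:

```python
from itertools import product

# Hypothetical discrete values chosen within each parameter's range;
# a real study would select these from site-specific data.
parameters = {
    "wind_speed_m_s": [1.5, 5.0],    # low and moderate wind
    "stability_class": ["D", "F"],   # neutral and stable atmospheres
    "temperature_C": [10, 25],
    "humidity_pct": [50, 90],
}

# Each incident outcome case is one unique combination of parameter values.
names = list(parameters)
cases = [dict(zip(names, combo)) for combo in product(*parameters.values())]

# The case count equals the product of the value counts: 2 * 2 * 2 * 2.
print(len(cases))   # 16
print(cases[0])
```

With four parameters at two values each, 16 cases result; adding one more two-valued parameter doubles the count, which is why each parameter an analyst can eliminate removes a multiplier from the study.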
As discussed in Section 1.9.3, the combinatorial expansion of incident outcome
cases can adversely affect resource requirements for a CPQRA without substantially
adding to the quality of the resulting risk estimate or insights from the study. An experienced analyst will be able to limit the number of incident outcome cases to be studied.
For example, problem symmetry may be exploited, worst case conditions assumed,
plume centerline concentrations selected rather than developing complete cloud profiles,
and a directional incident outcome assumed rather than studying an omnidirectional
incident. Each decision removes a multiplier from the number of cases to be studied.

[FIGURE 1.10. Typical spill event tree showing potential incident outcomes for a hazardous chemical release.]
It is the analyst's responsibility to ensure that sufficient definition results from the
number of incident outcome cases specified to achieve study objectives. Decisions
made concerning parameter selection and the range of values to be studied within each
parameter need to be challenged through peer review and documented. Likewise the
perceived importance of such parameters and their values can and should be checked
through sensitivity studies following the development of an initial risk estimate. It is
also the analyst's responsibility to recognize the sensitivity of the cost of the CPQRA to
each parameter and avoid wasting resources.

[FIGURE 1.11. Spill event tree for a flammable gas release.]
One effective strategy is to screen the parameter value ranges and select a minimal
number of outcome cases to complete a first pass risk estimate. Using sensitivity methods, the importance of each selected parameter value can be determined, and adjustments made in subsequent passes, maintaining control of the growth of the number of
incident outcome cases while observing impacts on resulting estimates.
It is also useful to determine upper and lower bounds for the risk estimate using
the parameter-value range available. This offers the analyst a reference scale against
which to view any single point estimate, along with its sensitivity to changes in any
given parameter. Various mathematical models are available for determining the upper
and lower bounds for the parameter-value ranges available. These include techniques
commonly used in the statistical design of experiments (e.g., see Box and Hunter,
1961; Kilgo, 1988). These methods can be used to identify critical parameters from all
of the parameters identified. Linear programming techniques and min/max search
strategies (e.g., see Carpenter and Sweeny, 1965; Long, 1969; Nelder and Mead,
1964; Spendley et al., 1962) can be used thereafter to find values for these critical
parameters that will produce both the upper and lower bounds (maximum and minimum values) for the risk estimate.
[FIGURE 1.12. Spill event tree for a flammable liquid release.]
Since these bounds can be established without exhaustively examining all of the
incident outcome cases possible, the experienced analyst can manage the number of
cases to be examined without compromising the desire to develop a quantitative understanding of the range—a feel for spread—of the risk estimate.
1.4.3. Tracking
The development of some risk estimates, such as individual risk contours or societal
risk curves requires a significant number of calculations even for a simple analysis. This
can be time consuming if a manual approach is employed for more than a few incident
outcome cases. Chapter 4, Section 4.4, describes risk calculation methods and provides
examples of various simplified approaches. The techniques are straightforward; however,
many repetitive steps are involved, and there is a large potential for error. A computer
spreadsheet or commercial model is generally useful in manipulating, accounting,
labeling, and tracking this information. The case studies of Chapter 8 illustrate these
grouping, accounting, labeling, and tracking processes.
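The bookkeeping burden is easy to underestimate. The sketch below shows one hypothetical way a spreadsheet-like record might be kept in code: each incident outcome case carries a label, a frequency, and an estimated number of fatalities, and the records are rolled up into the cumulative frequency of N or more fatalities used for a societal risk (F-N) curve. All labels and numbers are invented for illustration only:

```python
# Hypothetical incident outcome case records:
# (label, frequency per year, estimated fatalities).
# A real study would generate these from the consequence and
# frequency analyses for every incident outcome case.
cases = [
    ("I1-rupture/F-stability", 1e-5, 12),
    ("I1-rupture/D-stability", 3e-5, 4),
    ("I2-leak/ignited",        2e-4, 1),
    ("I2-leak/unignited",      5e-4, 0),
]

def fn_point(cases, n):
    """Cumulative frequency of n or more fatalities (one F-N curve point)."""
    return sum(freq for _, freq, fatalities in cases if fatalities >= n)

for n in (1, 5, 10):
    print(n, fn_point(cases, n))
```

Even this toy tally shows why manual bookkeeping breaks down: a realistic study multiplies the case list by weather classes, wind directions, and ignition outcomes before any curve can be drawn.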
1.5. Applications of CPQRA
No organization or society has the resources to perform CPQRAs (of any depth) on all
conceivable risks. In order to decide where and how to use the resources that are available,
it is necessary to select specific subjects for study and to optimize the depth of
study for each subject selected. This selection process or screening technique is discussed (Section 1.5.1) along with its use for existing facilities (Section 1.5.2) and new
projects (Section 1.5.3).
1.5.1. Screening Techniques
In creating a screening program, it is helpful to determine the organizational levels that
are most amenable to screening, and those where CPQRAs can be applied most effectively. Figure 1.13 illustrates the structure of a typical CPI organization. It shows a
hierarchical scheme, with the organization divided into facilities (plants), the facilities
divided into process units, the process units divided into process systems and the process systems divided into pieces of equipment. A general observation is that the
number of possible CPQRAs increases exponentially—but that the scope of each one
narrows—moving from the top to the bottom of the hierarchy. Use of CPQRA is typically restricted to the lower levels of the hierarchy, and in those levels it is selectively
applied.
Methods are needed to screen—prioritize and select—process units, systems, and
equipment for selective application of CPQRA. These methods must ensure that all
facilities are considered uniformly in the screening process.
Establishment of a prioritized listing of candidate studies allows efforts to focus on
the most onerous hazards first and, depending on available resources, progress to less
serious hazards. Certain listings are "zoned" according to high, medium, and low levels
of concern, and studies placed into the lowest class receive attention only after all studies
in higher classes have been executed. If a decision is made to zone a priority list, it is
important to establish zone cutoff criteria prior to screening in order to avoid bias.
Risk estimates can be developed at any level of the typical CPI organization, but
usually focus on specific elements of the lower levels of the hierarchy—for instance, the
COMPANY
HEADQUARTERS
MANUFACTURING
FACILITIES
PROCESS
UNITS
PROCESS
SYSTEMS
PROCESS
EQUIPMENT
FIGURE 1.13. Structure of a typical CPI company.
risk from the rupture of a storage tank. The following discussions of screening methods
show that methods are available to study various levels of the typical CPI organization.
1.5.1.1. PROCESS HAZARD INDICES
Dow Chemical has developed techniques for determining relative hazard indices for
unit operations, storage tanks, warehouses, etc. One generates an index for fire and
explosion hazards (Dow's Fire & Explosion Index Hazard Classification Guide, 7th ed.,
AIChE 1994), and another an index for toxic hazards (Dow's Chemical Exposure
Index Guide, 1st ed., AIChE 1994). ICI's Mond Division has developed similar techniques (The Mond Index) and has proposed a system for using these indices as a
guide to plant layout (ICI, 1985). A modified Mond-like index has also been proposed for evaluation of toxic hazards (Tyler, 1996). These techniques consider the
hazards of the material involved, the inventory, operating conditions, and type of
operation. While the values of the indices cannot be used in an absolute sense as a
measure of risk, they can be used for prioritization, selection, and ranking. The value
of the index may be helpful in deciding whether a CPQRA should be applied, and
the appropriate depth of study.
1.5.1.2. INVENTORY STUDIES
The inventories of hazardous materials should be itemized (including material in process, in storage, and in transport containers). The information should include significant properties of the material (e.g., toxicity, flammability, explosivity, volatility),
normal inventory and maximum potential quantity, and operating or storage conditions. In some cases, screening can, or must, be done by means of government specifications (New Jersey, 1988, and the EEC's "Seveso Directive," 1982).
Major hazards can be identified from an inventory study. Where these are toxic
hazards, simple dispersion modeling—assuming the worst case and pessimistic atmospheric conditions—can be performed. Where fires or explosions are the hazards, similar simple consequence studies may be made. Estimated effect zones can be plotted on a
map to determine potential vulnerabilities (population at risk, financial exposure, business interruption, etc.); for screening purposes, estimates of local populations may be
sufficient. Of course, when significant vulnerabilities are found, more thorough studies
may be required.
1.5.1.3. CHEMICAL SCORING
Various systems have been developed to assign a numeric value to hazardous chemicals
using thermophysical, environmental, toxicological, and reactivity characteristics. The
purpose of each system is to provide an objective means of rating and ranking chemicals according to a degree of hazard reference scale. Three of these methodologies are
systems proposed by the NFPA 325M (1984), the U.S. EPA (1980, 1981), and
Rosenblum et al. (1983).
NFPA has a rating scheme that assigns numeric ratings, from 0 to 4, to process
chemicals. These ratings represent increasing health, flammability, and reactivity hazards; the fourth rating uses special symbols to denote special hazards (e.g., reactivity
with water). This system is intended to show firefighters the precautions that they
should take in fighting fires involving specific materials; however, it can be used as a
preliminary guide to process hazards. The U.S. EPA has developed methods for rank-
ing chemicals based on numerical values that reflect the physical and health hazards of
the substances. Rosenblum et al. (1983) give an index system that assigns numerical
values to the various hazards that chemicals possess and that can be used to prioritize a
list of chemicals. This technique is more complex and less-practiced than the NFPA
diamond system.
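As an illustration of how such a scoring scheme can drive prioritization, the sketch below ranks a short list of chemicals by their worst single rating, breaking ties by the sum of the three ratings. The chemical names and 0-4 values are placeholders, not authoritative NFPA ratings; actual values should be taken from NFPA 325M or supplier data:

```python
# Placeholder (health, flammability, reactivity) ratings on the 0-4 scale;
# NOT authoritative NFPA values -- for illustration only.
ratings = {
    "chemical A": (3, 4, 0),
    "chemical B": (4, 1, 3),
    "chemical C": (1, 2, 0),
}

def priority(item):
    """Sort key: worst single rating first, then the sum of ratings."""
    name, (h, f, r) = item
    return (max(h, f, r), h + f + r)

# Highest-hazard chemicals first.
ranked = [name for name, _ in sorted(ratings.items(), key=priority, reverse=True)]
print(ranked)
```

Any monotone combination of the ratings would serve for screening; the point is that the rule is fixed in advance, so the resulting priority list is reproducible and free of case-by-case bias.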
1.5.1.4. FACILITY SCREENING
In addition to the screening techniques presented in previous subsections, other prioritization and selection approaches have been proposed which focus on facilities as
opposed to chemicals alone. One such approach has been offered by Mudan (1987).
This approach uses mathematical models for blast, fire, and toxicity for screening
chemical facilities. A similar approach has been proposed by Renshaw (1990). Less
sophisticated approaches have also been used to screen facilities. For example, if the
number of facilities to be screened is not too large, and if the organization's safety personnel are sufficiently experienced, it is possible to subjectively rank facilities by consensus. Whatever method is used, it is important to apply it consistently and document
the results of its application for future reference and update.
1.5.2. Applications within Existing Facilities
In order to examine process risks from all existing facilities within an organization, it is
essential to develop a study plan. This plan documents the screening methods to be
used to qualitatively or quantitatively rank all facilities within the organization and then
rank all process units within those facilities. These prioritized lists can then be compared and a master list developed which can be used to establish the study plan for
CPQRA.
When developing any study plan for existing facilities using a screening method, it
is most cost effective to ensure that the plan is directed at the lowest level of the organization's hierarchy (Figure 1.13). Once the prioritized study plan is developed, the
depth of CPQRA needs to be determined for each candidate study from the top down.
Table 1.6 offers qualitative guidance for determining the depth of CPQRA appropriate
for each of the layers of the organizational hierarchy (Figure 1.13). Recognize that this
is an idealization where a risk estimate plane CPQRA is reserved for process equipment
and system studies only and, even then, only after consequence and frequency plane
studies have been completed and show the need for further study.
1.5.3. Applications within New Projects
The depth of study presented in Table 1.6 directly applies to new projects as well. The
main distinction between new projects and existing facilities (Figure 1.7) is the information available for use in the CPQRA. Early in a new project, information is constrained, limiting the depth of the study. This constraint is virtually nonexistent for
existing facilities. As a new project progresses, the information constraint is gradually
removed.
TABLE 1.6. Applicability and Sequence Order of Depth of Study for Existing Facilities

                                                Risk estimation technique*
Organizational
hierarchy level   Depth of study                Consequence   Frequency   Risk
Company           Simple/consequence                 1           N.A.     N.A.
                  Intermediate/frequency            N.A.         N.A.     N.A.
                  Complex/risk                      N.A.         N.A.     N.A.
Facility          Simple/consequence                 1           N.A.     N.A.
                  Intermediate/frequency             1            2       N.A.
                  Complex/risk                      N.A.         N.A.     N.A.
Process unit      Simple/consequence                 1           N.A.     N.A.
                  Intermediate/frequency             1            2       N.A.
                  Complex/risk                       1            2        3
Process system    Simple/consequence                 1            2        3
                  Intermediate/frequency             1            2        3
                  Complex/risk                       1            2        3
Equipment         Simple/consequence                 1            2        3
                  Intermediate/frequency             1            2        3
                  Complex/risk                       1            2        3

*N.A., not applicable; 1, first task in series; 2, second task in series; 3, third task in series.
1.6. Limitations of CPQRA
CPQRA limitations must be understood by management if sensible goals are to be
established for studies. These limitations must also be understood by the technical personnel responsible for the study. Some references address the potential limitations of
CPQRA (Freeman, 1983; Joschek, 1983; Pilz, 1980).
A summary of technical and management limitations, their implications, and possible means for reducing their impact is provided in Table 1.7. More detailed treatment
of the technical limitations of CPQRA component techniques is provided in Chapters
2 and 3. Technical limitations of the data required for CPQRA and of special topics are
addressed in Chapters 5 and 6, respectively.
From Table 1.7 it is apparent that many of the limitations of CPQRA arise from
uncertainty. The estimation of uncertainty is discussed in Section 4.5. Uncertainty
should decrease in the future, as models become standardized, equipment failure rate
data relevant to the CPI are more fully developed and collected systematically, risk analysis expertise becomes more widely disseminated, and human consequence effect data are
more widely developed. Some specific data (e.g., toxicity) are currently incomplete and
inexact and are a major source of uncertainty in CPQRA. Where uncertainty is a major
issue, relative or comparative uses of CPQRA may be preferable to absolute uses.
Where CPQRA risk estimates are to be compared in an absolute sense to risk targets, or risk "acceptability criteria," concern should increase over the issue of absolute
accuracy of these estimates. Unlike process economic studies such as discounted cash
flow analysis, which use cost estimates with an accuracy of about ±15%, CPQRA estimates have much greater absolute uncertainty, typically covering one or more orders of
TABLE 1.7. Limitations of CPQRA and Means to Address Them

TECHNICAL

Cause of limitation: Incomplete or inadequate enumeration of incidents
Implication to CPQRA: Underestimate risk for a representative set or expansive list of incidents
Remedies: Require proper documentation; involve experienced CPQRA practitioners; apply alternative enumeration techniques; peer review/quality control; review by facility design and operations personnel

Cause of limitation: Improper selection of incidents
Implication to CPQRA: Underestimate risk for all incident groupings
Remedies: Involve experienced CPQRA practitioners; apply alternative enumeration techniques; peer review/quality control; review by facility design and operations personnel

Cause of limitation: Unavailability of required data
Implications to CPQRA: Possibility of systematic bias; uncertainty in consequences, frequencies, or risk estimates; incorrect prioritization of major risk contributors
Remedies: Secure additional resources for data acquisition; expert review/judgment; ensure that knowledgeable people are involved in assessing available data; check results against other models or historical incident records; evaluate sensitivities

Cause of limitation: Consequence or frequency model assumptions/validity
Implication to CPQRA: Similar in effect to data limitations
Remedies: Ensure appropriate peer review; check results against other models or historical incident records; ensure that models are applied within the range intended by model developers; ensure that mathematical or numerical approximations that may be used for convenience do not compromise results; use, if feasible, different models (e.g., a more conservative and a more optimistic model) to establish the impact of this type of uncertainty

MANAGEMENT

Cause of limitation: Resource limitations (personnel, time, models)
Implications to CPQRA: Insufficient time to complete depth of study; insufficient depth of study; inadequate quality of study
Remedies: Extend schedule; defer study until resources available; identify major risk contributors and emphasize these; amend scope of work

Cause of limitation: Skills unavailable
Implications to CPQRA: Incorrect preparation and analysis; improper interpretation of results
Remedies: Acquire expertise through training programs, new personnel, or consultants
magnitude. The estimate's uncertainty is directly proportional to the depth and detail
of the calculation and quality of models and data available and used.
Both the Canvey Island (Health & Safety Executive, 1978, 1981) and Rijnmond
Public Authority (1982) studies present discussions of uncertainty and accuracy, and
the reader is referred there for further detail. Two other sources of insight into the issue
of absolute accuracy and uncertainty are also available. Figure 1.14, taken from Ballard
(1987), summarizes data collected by the National Centre of Systems Reliability on a
number of reliability assessments for the period 1972-1987. The diagram was developed through collecting data on the actual performances of plants and process systems
and prior estimates of the reliability of these same plants and process systems. While
there are a number of uncertainties related to the studies, data collection methods, etc.,
as stated by Ballard, "it is clear [from Figure 1.14] that in an overall sense one can
expect the results of a reliability study to give a very good indication of the likely accident frequency from a plant." The 2:1 and 4:1 ranges shown on Figure 1.14 indicate
that about 60% of the predictions of failure rates were within a factor of two and about
95% were within a factor of four of actual performance data.
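The 2:1 and 4:1 bands can be read as statements about the ratio of predicted to observed failure rate. Given paired predictions and observations, the fraction falling within a factor k is simple to compute; the pairs below are invented solely to show the calculation, not taken from the Ballard data:

```python
# Invented (predicted, observed) failure-rate pairs, per year;
# illustrative only, not data from the National Centre of Systems Reliability.
pairs = [(1.2e-3, 1.0e-3), (4.0e-4, 1.1e-3), (2.0e-2, 9.0e-3),
         (5.0e-5, 6.0e-5), (3.0e-3, 1.0e-2)]

def fraction_within_factor(pairs, k):
    """Fraction of predictions within a factor k of the observation,
    i.e., with predicted/observed ratio between 1/k and k."""
    hits = sum(1 for pred, obs in pairs if 1 / k <= pred / obs <= k)
    return hits / len(pairs)

print(fraction_within_factor(pairs, 2))   # within a factor of two
print(fraction_within_factor(pairs, 4))   # within a factor of four
```

Applied to the actual ratio distribution of Figure 1.14, this calculation yields roughly 60% within a factor of two and 95% within a factor of four.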
While Figure 1.14 presents cause for accepting risk estimates as reasonable, the
second source of insight offers cause for concern. Figure 1.15, taken from Arendt et al.
(1989), summarizes the results of a European benchmark study (see Amendola, 1986)
that showed the difficulty in reproducing CPQRA estimates, and the substantial
dependency of these estimates on the very basic, defendable, but different assumptions
made by various teams of analysts. Each of the teams in the study was given identical
systems to analyze, the component techniques to use, and a common Analysis Data Base.

[FIGURE 1.14. Frequency distribution of the failure rate ratio collected by the National Centre of Systems Reliability over the period 1972-1987. From Ballard (1987), reprinted with permission.]

[FIGURE 1.15. Results of European Benchmark Study. From Arendt et al. (1989), reprinted with permission.]

The teams were also allowed complete freedom in making assumptions, selecting
incidents to study, choosing failure rate data, etc. Figure 1.15 shows that the resulting
estimates ranged over several orders of magnitude, well beyond the range of uncertainty calculated by some of the teams. When the teams were subsequently directed to
follow similar assumptions, the resulting estimates converged to a much more acceptable range (i.e., within a factor of 5). This study and its implications are discussed in
more detail in Chapter 4. Consequently, it is important to recognize that along with
the technical uncertainties associated with models and data discussed elsewhere in this
book, the accuracy and corresponding uncertainty of a risk estimate also depend
heavily upon the expertise and judgment of the analyst. The need to document
and review such assumptions is discussed in depth in Section 1.9.5.3 on Quality
Assurance.
1.7. Current Practices
Safety in design and operation has been important to the CPI since its inception. A
wide range of safety techniques, many of which are currently used by companies and
regulatory agencies, have evolved. In the preparation of the original edition of this
book, a survey was conducted of 29 major chemical and petroleum companies—
believed to represent the majority of companies practicing CPQRA techniques in
1986. The results of the survey are summarized in Table 1.8.
All companies use basic engineering codes and standards as part of their safety
review. Virtually all companies utilize some qualitative methods for hazard identification. The most common techniques include checklist and index methods. About 60%
of the surveyed companies use structured techniques such as HAZOPs or FMEAs.
Some companies have their own customized or combined versions, which they refer to
as process hazard review techniques. Almost half of these companies are using one of
the risk estimation techniques. Quantitative risk targets are being used by about 10%
TABLE 1.8. Survey of Process Safety Techniques in Use*

                                                                Percentage of surveyed
Safety technique                                                companies using technique

Existing techniques
  Codes and standards                                                    100
  Unstructured hazard identification (e.g., indices, judgment)            95
  Structured hazard identification (e.g., HAZOP, FMEA)                    60

CPQRA techniques
  Consequence estimation                                                  40
  Frequency estimation                                                    30
  Risk estimation                                                         20
  Use of risk targets                                                     10

*Basis: Survey of 29 major U.S. companies (chemical and petroleum) done by Technica in 1986.
of the surveyed companies. Concerns have been expressed over the liability implications of conducting CPQRAs because the existence of these studies implies acceptance
of certain levels of risk. Some companies continue to rely on established practice (as
specified by engineering codes or standard practices). On legal advice, some are reluctant to produce CPQRAs, fearing that misinterpretation of risk estimates could be
damaging. The counter argument, expressed by those companies that perform
CPQRAs, is that the expected reduction in frequency of occurrence or consequence of
various incidents more than offsets potential legal difficulties.
A few of the companies surveyed have clear corporate risk policies and targets,
which have strong and active corporate board level support. In these companies, the
application of various CPQRA techniques plays an important part in the decision-making process. This commitment is reflected in the quality of staff and resources
available to CPQRA.
In the public arena, the U.S. government and national organizations have
expressed substantial interest in CPQRA techniques. In some states, legislation requiring quantitative risk assessment has been considered or enacted. The establishment of
formal risk management programs, which include elements of CPQRA techniques, is a
fundamental requirement for most of the legislation (e.g., New Jersey, California,
etc.). The U.S. Environmental Protection Agency has also included some risk considerations in the Risk Management Program (RMP) rule under the Clean Air Act
Amendments (40 CFR 68, Risk Management Programs for Chemical Accidental
Release Prevention).
As with any human endeavor, the risk associated with chemical processing facilities
cannot be reduced to zero. Corporate and government approaches to risk management
clearly accept this fact. A number of papers have been published on the application of
CPQRA in the U.S. and overseas, including DeHart and Gaines (1987), Freeman etal.
(1986), Gibson (1980), Goyal (1985), Helmers and Schaller (1982), Hendershot
(1996), Ormsby (1982), Renshaw (1990), Seaman and Pikaar (1995), Van Kuijen
(1987), and Warren Centre (1986).
At the time of the survey in Table 1.8, few companies possessed the technical
resources and expertise required to implement the complete range of CPQRA techniques, although most employed some of the techniques. Dow Chemical, Rohm and
Haas, British Petroleum and Union Carbide have published papers describing how
they have implemented elements of CPQRA into formal risk management programs
(Mundt, 1995; Poulson et al., 1991; Renshaw, 1990; and Seaman and Pikaar, 1995).
Many felt that their process safety programs would be substantially enhanced by the use
of appropriate CPQRA techniques in process design and operation, while others did not
see any incremental benefit from implementing CPQRA techniques. The latter believed
that their knowledge and experience already provide for safe plant design and operation.
1.8. Utilization of CPQRA Results
As identified in the management overview section, there are many potential uses of
CPQRA results. All of these are variations of approaches to risk reduction. This section
highlights the relative and absolute application of CPQRA results.
Relative uses of CPQRA results include a comparison and ranking of various risk
reduction schemes based on their competitive effectiveness in reducing risk. A table of
cost-risk benefits is constructed (e.g., cost of risk reduction measure vs. reduction in
risk achieved—see Section 4.1). This type of assessment is easier to apply and much less
affected by potential errors in CPQRA than absolute comparisons of risk estimates
with specified targets.
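A cost-risk benefit table of the kind mentioned above reduces naturally to a ranking by cost per unit of risk averted. The measures, costs, and risk reductions below are hypothetical, invented only to show the comparison:

```python
# Hypothetical risk reduction measures:
# (name, installed cost in dollars, reduction in average societal
# risk, expressed as fatalities per year averted).
measures = [
    ("water curtain",         250_000, 2.0e-4),
    ("relocate control room", 900_000, 3.0e-4),
    ("added gas detection",    80_000, 1.5e-4),
]

# Rank by cost-effectiveness: dollars per unit of risk averted
# (lower is better, so the cheapest risk reduction comes first).
ranked = sorted(measures, key=lambda m: m[1] / m[2])
for name, cost, reduction in ranked:
    print(name, cost / reduction)
```

Because only the ratios are compared, a uniform bias in the underlying risk estimates largely cancels, which is precisely why this relative use is more robust to CPQRA error than comparison against an absolute target.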
Absolute uses of CPQRA results are usually based on predetermined risk targets.
Several government agencies (e.g., Netherlands—Van Kuijen, 1987) have established
quantitative risk criteria that must be met for planning approvals or for the maintenance of existing operations. Figure 1.16 shows some of the risk criteria that have been
used by various organizations. The uncertainty bands for these criteria are generally
plus or minus one order of magnitude. Also, it should be emphasized that the criteria
are dependent upon the method and data specified. CPQRA study results should only
be evaluated against criteria based on the exact methodology used in the study.
A few companies also employ risk targets; however, these are usually for in-plant
risks, some of which have been published (Helmers and Schaller, 1982). Targets for
risks to the public are much more difficult to define (e.g., consideration of both individual and societal risks). Rohm and Haas and British Petroleum are companies that
have established and published risk criteria (Renshaw, 1990). Where targets are being
used, initial risk estimates are compared with these targets. Where the target has not
been achieved, further risk reduction measures are evaluated to reduce the risk estimate
to or below the targeted level. Means to reduce the risk further, below the target, are
usually pursued if the cost of implementing additional risk reduction measures is reasonable or the uncertainty of the risk estimate is of substantial concern. For this use,
potential errors in the CPQRA results can be important.
1.9. Project Management
This section offers an overview of the role of CPQRA project management. A CPQRA
must be carefully managed in order to obtain the required results in a timely and cost
effective manner. Project management tasks include study goals (Section 1.9.1), study
[FIGURE 1.16. Acceptable risk criteria (frequency of N or more fatalities per year plotted against number of fatalities per event). ALARA, as low as reasonably achievable.]
objectives (Section 1.9.2), depth of study (Section 1.9.3), special user requirements
(Section 1.9.4), project plan (Section 1.9.5), and execution (Section 1.9.6).
Figure 1.17 provides a logic diagram for CPQRA project management. This
figure shows the unique characteristics of a CPQRA, which depart from normal engineering project management tasks. These tasks must all be addressed within the bounds
of applicable constraints (risk targets, budget, tools, people, time, and data).
1.9.1. Study Goals
Section 1.3.2 and Table 1.4 describe typical study goals. These can originate from
external sources, such as regulatory agencies, or from internal initiatives (e.g., senior
management).
1.9.2. Study Objectives
It is critical for project management to understand the study goals and to firmly establish study objectives. The study objectives define the project goals in precise terms that
[FIGURE 1.17. Logic diagram for CPQRA project management. The diagram proceeds from defining the goals of the CPQRA (Table 1.4; §1.9.1), through converting goals into study objectives and securing user acceptance (§1.9.2), determining the required depth of study to satisfy the objectives (§1.9.3; see Figure 1.18), defining documentation requirements to satisfy the user (§1.9.4), and constructing the project plan (§1.9.5: estimate resource requirements, §1.9.5.1; prepare schedule, §1.9.5.2; establish quality assurance procedures, §1.9.5.3; establish training requirements, §1.9.5.4; establish cost control procedures, §1.9.5.5), to executing and completing the project (§1.9.6). User requirements feed an approved scope of work (initial); the user reviews the draft report and either accepts the study or modifies requirements, which are converted into a revised scope of work, until the study is accepted.]
lead to a project that can be satisfactorily managed to completion. This can best be
accomplished by creating a scope of work document that is reviewed and accepted by
the user. Where user requirements have been defined, in writing, in advance of the
study (e.g., determined by government regulation), this step reduces to interpretation
of the requirements for senior management approval. In converting study goals into
objectives through scope of work documents, project management defines the extent
of study within the organizational hierarchy (Figure 1.13).
Possible study objectives include
• determination of societal risk from company operations that include any of a
specified list of chemicals
• determination of risk to employees from modification to an existing process unit
• identification of cost effective risk reduction measures for achieving target risk
levels for an existing process unit
• evaluation and ranking of competitive process strategies considering impact to
the surrounding community
• determination of relative effectiveness of each of several alternatives to reduce
risk from a single piece of equipment.
1.9.3. Depth of Study
A careful determination of the depth of study is essential if CPQRA goals and objectives are to be achieved, adequate resources are to be assigned, and budget and schedules are to be controlled. The calculation workload for a given depth of study can
expand combinatorially as one moves from the origin along any one of the axes of the study
cube (Section 1.3.1). It is essential to estimate this calculation burden prior to finalizing a depth of study so that project costs and schedule requirements can be evaluated. A
risk analyst and a risk methods development specialist can provide project management
with valuable assistance in estimating this workload and with guidance in selecting an
appropriate depth of study.
Figure 1.18 presents a schematic for determining the appropriate depth of study.
Basically, given an approved scope of work, which specifies the risk measures to be calculated and presentation formats to be used, the analyst needs to select the following
(Section 1.3.1):
• the appropriate risk estimation technique
• the appropriate complexity of study
• the appropriate number of incidents.
Once values have been assigned to each of these study parameters, the depth of
study—cell within the study cube given in Figure 1.5—has been determined.
[FIGURE 1.18. Selection of appropriate depth of study: selection of a cell within the study cube (Section 1.3 and Figure 1.5). From the approved scope of work, initial or revised (see Figure 1.17), which establishes the study objectives and extent of study, the analyst selects the appropriate risk estimation technique, the appropriate complexity of study, and the appropriate number of incidents. The depth of study is then defined, and documentation requirements follow.]
Various aids to understanding the depth of study and the sensitivity of each of
these three parameters are provided in this volume. Table 1.5 describes the depths of
study for each of the cells along the main diagonal of the study cube, and Table 1.6
reviews the applicability and sequential order of depth of study for the various levels of
the organizational hierarchy given in Figure 1.13. Table 1.6 shows that if a risk analysis
(as opposed to consequence or frequency analysis) is required for a facility, it is necessary to synthesize it from analyses done at the process system or equipment levels.
After a depth of study has been selected, the cost of the study and schedule should
be estimated and presented to the user for approval. At this point, it is often necessary
to revisit study goals and objectives and approved scope of work to see if opportunities
exist for reducing costs or accelerating schedules.
Costs have a direct relationship to each of the three cell parameters. The prioritized
CPQRA Procedure (Section 1.2.2), an illustration of one sequential approach to using
risk estimation techniques, is designed to offer opportunities for cost savings by deferring more detailed studies until simple consequence and frequency estimates have been
executed.
Hazard evaluation and consequence calculations are undertaken first to bracket or
bound the risks in a facility or establish the extent of hazard posed by a single piece of
equipment. The depth of consequence studies increases if required at successively
lower levels of the facility's hierarchy (Figure 1.13). Frequency calculations can next be
undertaken for process units, systems, and pieces of equipment; the depth of these
studies follows the same pattern as for consequence studies.
Finally, risk calculations are primarily reserved for process systems and equipment.
The complexity of these calculations and the number of incident outcome cases necessary
for each piece of equipment and associated piping limit use of this technique to screening
or intermediate studies. A decision to select a cell in the risk estimate plane represents a
"quantum jump" in complexity and calculation workload from either the consequence or
frequency planes. To illustrate, consider a system that processes flammable materials that
has 10 incidents selected for study. Suppose these 10 incidents result in 20 separate incident outcomes. If there are 8 wind directions, 3 wind speeds, 3 weather stabilities, and 2
ignition cases for each cloud, there are 144 (8 × 3 × 3 × 2) incident outcome cases for
each of the 20 incident outcomes. If the calculation grid for a risk contour plot were 10 ×
10 (i.e., 100 grid receptor points, which is relatively coarse for drawing risk contours), a
total of 288,000 (20 × 144 × 100) calculations is necessary.
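The arithmetic of this worked example can be reproduced in a few lines. The sketch below is purely illustrative, using only the parameter values stated in the text:

```python
# Combinatorial expansion of incident outcome cases (values from the worked
# example in the text; an illustration, not a general-purpose tool).
outcomes = 20              # separate incident outcomes from the 10 incidents
wind_directions = 8
wind_speeds = 3
weather_stabilities = 3
ignition_cases = 2
grid_points = 10 * 10      # 10 x 10 receptor grid (relatively coarse)

cases_per_outcome = (wind_directions * wind_speeds
                     * weather_stabilities * ignition_cases)
total_calculations = outcomes * cases_per_outcome * grid_points

print(cases_per_outcome)    # 144 incident outcome cases per incident outcome
print(total_calculations)   # 288000 calculations for the base case
```

Each additional parameter (seasonal temperatures, population cases, risk reduction options) enters as one more multiplier, which is why the burden grows so quickly with depth of study.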
This provides only a base-case estimate of risk. Any evaluation of the range of the
estimate or of risk reduction measures requires multiplication of this burden by another
factor. Such an effort is often impractical for manual implementation. The number of
incident outcome cases to study can expand dramatically based on the depth of study
selected. A single, omnidirectional incident outcome (e.g., BLEVE) produces a single
incident outcome case. A directional toxic incident becomes, in effect, W incident outcome cases, where W is the number of weather cases. A flammable directional incident
becomes W × I incident outcome cases, where I is the number of separate cloud ignition
cases. Each incident may lead to several incident outcomes, each of which may lead to many incident outcome cases. In effect, each aspect of the study produces a parameter. The
number of discrete values for this parameter serves as a multiplier in amplifying the
number of cases that need to be constructed and executed by the risk analyst.
TABLE 1.9. Parameters Affecting Calculation Burden

  Study parameter (X_i)*                                         Typical values
  I   = Number of incident outcomes                              5-30
  W   = Weather stability classes                                2-6
  N   = Wind directions                                          8-16
  S   = Wind speeds                                              1-3
  V   = Day/night variations                                     1-2
  E   = Number of end points (lethality, serious injury, etc.)   1-5
  T   = Ambient temperature cases (seasonal variations)          1-4
  I   = Ignition cases                                           1-3
  P   = Population cases                                         1-3
  G_I = Grid points for individual risk contours                 100-1000
  G_S = Grid points for societal risk curves                     1-100
  M   = Number of iterations on base case                        2-5
  L   = Number of risk reduction options to be studied           3-5

*Parameters listed may or may not apply in the following formula to estimate the study's calculation burden:

    Number of calculations = ∏_{i=1}^{n} X_i

where n = number of applicable parameters and X_i = study parameters from the above listing.
Table 1.9 lists typical values for various study parameters and offers a formula for
estimating the number of cases. This listing is not complete, nor are the values offered
applicable to all studies. In fact, a study for a single process unit that considers isolation
and mitigation may have more than 1000 incident outcomes rather than only 5 to 30.
Evaluation of a large facility would require consideration of many such units. Although
the CPQRA methodology presented here applies to these more complex studies,
extensive use of computer models by knowledgeable practitioners is generally recommended to provide cost-effective results. As with the example presented above, the analyst would develop an estimate by selecting values for those parameters that apply and
multiplying them together. The analyst can also develop estimate sensitivity by varying
parameter values within the ranges given in Table 1.9 and using the resulting variations
to determine confidence limits for the study's cost estimate.
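The burden formula and the sensitivity exercise just described can be sketched as follows. The parameter subset and range values below are illustrative choices drawn from Table 1.9, not a prescription for any particular study:

```python
# Hedged sketch of the Table 1.9 burden estimate: the number of calculations
# is the product of the applicable study parameters X_i. Taking the low and
# high ends of each range gives a crude sensitivity band for the burden.
from math import prod

# (low, high) typical values; only parameters assumed applicable are listed.
params = {
    "incident outcomes (I)":    (5, 30),
    "weather stabilities (W)":  (2, 6),
    "wind directions (N)":      (8, 16),
    "wind speeds (S)":          (1, 3),
    "ignition cases (I)":       (1, 3),
    "grid points (G_I)":        (100, 1000),
}

low = prod(lo for lo, hi in params.values())
high = prod(hi for lo, hi in params.values())
print(f"estimated calculation burden: {low:,} to {high:,}")
```

The wide spread between the low and high products (several orders of magnitude here) illustrates why parameter choices, and not computing speed alone, dominate the cost of a CPQRA.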
In selecting an appropriate depth of study, balance must be maintained between
trying to construct a representative system model and a manageable CPQRA. Excessively realistic scenarios (in terms of the number of incidents considered, the number of
weather and ignition cases, etc.) may result in a study of unacceptable duration or cost,
without providing any significant increase in accuracy or insight into process risk. The
uncertainties in a risk estimate are often such that substantially increasing the number
of incidents considered offers little improvement in estimate quality. A well-selected
CPQRA at a lesser depth of study (for example, one that can exploit symmetry and
restrict weather and ignition cases) may produce very meaningful results at substantially reduced computational effort and costs.
1.9.4. Special User Requirements
Before constructing a project plan it is imperative to understand user requirements,
including any special requirements for reporting and documenting study results. Such
special requirements, particularly documentation, may add substantially to project
resource requirements. This is discussed in more detail in Section 4.3.
1.9.5. Construction of a Project Plan
A written project plan should be prepared for every CPQRA, regardless of the scope of
work or depth of study. The circulation and availability of such a plan to members of
the project team provides for communication, team building, and direction. It is only
through the preparation of such a written plan that aspects of the study critical to its
success receive adequate attention.
Various texts on project management offer useful guidance on preparing a project
plan, including suggested plan contents. This material need not be presented here.
However, there are aspects of a project plan for a CPQRA that are unique, and these are
discussed in the following sections.
1.9.5.1. ESTIMATION OF RESOURCE REQUIREMENTS
CPQRAs can require considerable resources. However, if the scope of work, depth of
study, and special user requirements are well defined, and if study progress is carefully
monitored, resources can be efficiently managed. Principal resources include people,
time, information, tools, and funding. A typical allocation of these resources for a
CPQRA of an ordinary process system is shown in Figure 1.19, which is an abbreviated representation of Figure 1.3, through risk estimation. The process system is considered to be of moderate complexity, with reactors, distillation train, preheat and heat
recovery systems, and associated day-storage. It is located close to populated areas, but
with no special topographical or other features that might warrant greater depth of
treatment. Resource estimates are provided in Figure 1.19 for the three depths of study
discussed in Table 1.5. Special topics addressed in Chapter 6 are not included in Figure
1.19, as they are not common to all studies. The estimates presented assume a
once-through estimate of risk. Further iterations to satisfy acceptability of the risk estimate (Figure 1.3) or to satisfy modified user requirements (Figure 1.17) are not
included. The number of iterations can be considered incremental cost multiples of the
once-through estimate.
Table 1.10 summarizes the total manpower requirements for the depth of study
alternatives obtained from Figure 1.19. The upper and lower limits are approximations
only. Nonetheless, they are in general agreement with studies conducted by experienced companies. The very broad range of time required for frequency estimation
reflects the variation in use of complex tools, such as fault trees, which are commonly used in the nuclear industry. As noted in the table, project management activities have not been included in the totals presented. Administration of the project may
require an additional 5-10% of the total manpower estimates presented.

[FIGURE 1.19. Resource allocation guidelines for a process system CPQRA. For each of the three depths of study (simple/consequence, intermediate/frequency, complex/risk), the figure tabulates the people, effort, tools, and data required for each phase: data base development (process plant, environmental, and likelihood data); hazard identification and incident selection (tools ranging from What-If/PHA through coarse HAZOP to HAZOP/FMEA, with data from concept through PFD to PFD and P&ID, and from a few to on the order of a hundred incidents identified); consequence estimation (simple to detailed models); frequency estimation (historical data through simple to detailed FTA/ETA); and risk estimation (minimal tools through simple models to simple or detailed computer packages). Findings of the initial CPQRA may require progression to an increased depth of study (refer to Figure 1.4). Abbreviations: PE = process engineer; RA = risk analyst; MW = person-week; PFD = process flow diagram; P&ID = piping and instrumentation diagram; HAZOP = hazard and operability study; FMEA = failure mode and effect analysis; ETA = event tree analysis; FTA = fault tree analysis.]
1.9.5.2. PROJECT SCHEDULING
Table 1.10 provides guidance on the total manpower required for a risk analysis. The
elapsed time is a function, to some degree, of the number of personnel provided, but
there is an inherent task structure in each depth of study that prevents project management from running all individual tasks in parallel. Consequence and frequency analyses can
be done in parallel, but must logically follow hazard identification and incident selection. Final risk estimation must await completion of the consequence and frequency
analyses.
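The task structure just described can be sketched as a small critical-path calculation. The durations below are hypothetical, chosen only to illustrate how parallel consequence and frequency analyses shorten elapsed time relative to total effort; they are not values from this chapter:

```python
# Illustrative critical-path sketch of the CPQRA task structure: consequence
# and frequency estimation may run in parallel, but both must follow hazard
# identification/incident selection, and risk estimation must await both.
durations = {  # hypothetical durations, in weeks
    "data_compilation": 3.0,
    "hazard_id_and_incident_selection": 3.0,
    "consequence_estimation": 2.5,
    "frequency_estimation": 1.5,
    "risk_estimation": 1.5,
    "final_report": 3.0,
}
predecessors = {
    "data_compilation": [],
    "hazard_id_and_incident_selection": ["data_compilation"],
    "consequence_estimation": ["hazard_id_and_incident_selection"],
    "frequency_estimation": ["hazard_id_and_incident_selection"],
    "risk_estimation": ["consequence_estimation", "frequency_estimation"],
    "final_report": ["risk_estimation"],
}

finish = {}
for task, dur in durations.items():  # insertion order is already topological
    start = max((finish[p] for p in predecessors[task]), default=0.0)
    finish[task] = start + dur

total_effort = sum(durations.values())   # weeks if every task ran serially
elapsed = finish["final_report"]         # weeks along the critical path
print(total_effort, elapsed)             # 14.5 13.0
```

Here parallelism saves only the shorter of the two parallel branches, which is why adding staff reduces elapsed time less than the person-week totals in Table 1.10 might suggest.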
TABLE 1.10. Manpower Requirements for Depths of Study of a Single Process System (Unit)

                                              Simple/consequence  Moderate/frequency  Complex/risk
  Activity                                    CPQRA^a             CPQRA^a             CPQRA^a
  Data compilation                            0.5-1.5             2-4                 4-8
  Hazard identification/incident selection    1-2                 2-4                 4-8
  Consequence estimation                      0.5-1               2-3                 3-10
  Frequency estimation                        0.5-1               0.5-2               3-20
  Risk estimation                             0.5-1               0.5-2               2-5
  Preparation of final report                 0.5-1.5             2-4                 2-8
  Totals^b                                    3.5-8               9-19                18-59

^a Note that the data presented have units of person-weeks. These data need to be converted to calendar weeks by the project manager through development of a project schedule. The resulting number of calendar weeks may be substantially greater than the values shown above, depending on availability of critical personnel, tools, training opportunities, etc.
^b The values presented do not include project management activities, which can be estimated as an additional burden of 5-10% of the totals shown. Sensitivity studies are also not included and are often required to evaluate potential risk mitigation measures.
Opportunities to execute tasks in parallel must be balanced against opportunities
to avoid tasks altogether by following prioritized procedures such as those discussed earlier in this
chapter (Section 1.2.2).
In constructing the project schedule, it is important to obtain input and agreement
from the risk analyst and other specialist members or groups. Milestones need to be
established that correlate with the logical end points presented in Figure 1.3. Having
well-defined milestones permits meaningful status reports to be issued throughout the
life of the project.
1.9.5.3. QUALITY ASSURANCE
The first step in quality assurance is to ensure the adequacy and availability of staff and
resources for the study. Since CPQRA is a relatively new CPI technology, it is likely
that the expertise of staff support will be deficient in certain technology areas. Consequently, quality assurance is a critical check-and-balance procedure in any CPQRA project plan. Adequate resources need to be assigned to quality assurance as a line item in
the project plan.
Early risk analysis studies (e.g., Rijnmond Public Authority, 1982) were routinely
passed on to independent reviewers. These reviews were budgeted at up to 10% of the
primary budget. Such outside reviews are now less common, but are appropriate for
organizations relatively inexperienced in CPQRA. Alternatively, outside experts may
be commissioned to undertake the study. Their activities can be monitored by company staff. This monitoring may be done by periodic meetings or by a staff member
assigned to the review team.
Such peer reviews or reviews by corporate staff of outside-expert work products
are only one of several layers of reviews that can be built into the project plan to ensure
TABLE 1.11. CPQRA Reviews and Purposes
Project team internal review
Identify miscommunication; challenge method selections, models used, assumptions, etc. Perform first
complete review of the initial draft report of the study prior to release to the user
Plant staff
Reveal any misrepresentation of plant practices, existing hardware and process configurations, facility
operational data, and site characteristics
Corporate staff
Ensure consistency with previous CPQRA formats, adherence to company CPQRA practices,
adequacy of documentation, etc. If staff includes risk analysts, provide peer review functions to the
project team
Peer or expert review
Review should be carried out by competent risk analysts not involved in the CPQRA. Review should
focus on appropriateness of methods, quality and integrity of the data base used, validity and
reasonableness of assumptions and judgments, as well as recommendations for further study
Management
Assuming the role of user, management should be satisfied that the report meets its requirements
completely, in line with the agreed-on scope of work, and that all conclusions and recommendations, if
any, are thoroughly understood
quality. The need for reviews by members of the project team, by plant and corporate
staff groups, by peers or experts, and by management should be considered in planning
and scheduling activity. The purposes of these various reviews are given in Table 1.11.
Each of these reviews should produce a written report of findings to the project
team manager. All findings should be formally resolved prior to issuing a final report.
Any report from plant or corporate staff may be useful to add to the study as documentation. Reports from peer reviewers or experts should be added to the CPQRA without
alteration to enhance the credibility of the report and to document the performance of
such a critical review.
Even though the component techniques of a CPQRA are rigorous and disciplined,
numerous opportunities exist to introduce uncertainty and error into the study. For
this reason, a formal quality assurance program may be desirable. Such programs have
routinely been developed to assure the quality of probabilistic risk assessments (PRAs)
in the nuclear industry. Such efforts have focused on the following areas of concern
[PRA Procedures Guide (NUREG/CR-2300, 1983)]:
• Completeness. Treatment of the full range of tasks, analyses, and model construction and evaluation should be assured. The completeness issue is most significant in any risk analysis. It includes such diverse concerns as identification of
initiating events, determination of plant and operator responses, specification of
system or component failure modes, physical processes analysis, and application
of numerical input data.
• Comprehensiveness. A probabilistic risk assessment is unlikely to identify every
possible initiating event and event sequence. The aim is to ensure that the significant contributors to risk are identified and addressed. Assurance must be provided
that comprehensive treatment is given to all phases of the study in a manner that
provides confidence that all significant incidents have been considered.
• Consistency. Consistency in planning, scope, goals, methods, and data within
the study is essential to a credible assessment. Equally important is an attempt to
achieve consistency from one study to another, especially in methodologies and
the application of data, in order to allow comparison between systems or plant
designs. In many cases, the acceptability of an activity is based on its comparability (risk) with other similar activities. The use of standardized methods and procedures enhance comparability.
• Traceability. The ability to retrace the steps taken, that is, to reconstruct the
thought process and reproduce an answer, is important not only to the reviewer
and regulator but also to the study team.
• Documentation. The documentation associated with a PRA is substantial.
Large amounts of information are generated during the analysis, and many
assumptions are made. The information must be well documented to permit an
adequate technical review of the work, to ensure reproducible results, to ensure
that the final report is understandable, and to permit peer review and informed
interpretation of the study results.
Identical quality concerns exist in performing CPQRAs. Table 1.12 shows potential
areas within CPQRAs that require attention in the development of specific quality assurance procedures. Recognize that this table is not necessarily exhaustive and that any particular CPQRA will have its own quality assurance needs. At the least, planning for every
CPQRA needs to consider how each of the five areas listed above will be addressed.
1.9.5.4. TRAINING REQUIREMENTS
CPQRAs require the use of skilled and experienced personnel. For simpler studies
(consequence or frequency), the skills of the process engineer with some risk analysis
training may be adequate. A CPQRA utilizing the risk plane requires inputs from both
process engineers and risk analysts. A risk analyst without the support of a process engineer experienced in the design and operation of the particular process unit, system, or
piece of equipment, is unlikely to understand the process in adequate detail to carry out
the study. Process engineers must be thoroughly trained and have participated in preparing risk estimates for real process systems before they undertake CPQRAs without
the assistance of risk analysts.
There are several reference texts and training courses that provide an introduction
to CPQRA (Appendix B). Important skills include knowledge of hazard identification
techniques [reviewed in the HEP Guidelines, Second Edition (AIChE/CCPS, 1992)]
and the consequence and frequency estimation techniques reviewed in this book.
Useful introductory publications on CPQRA topics include the other texts in the CCPS
Guidelines series, Lees (1980), TNO (1979), and Rijnmond Public Authority (1982).
The technique descriptions in Chapters 2 and 3 identify many useful references specific
to the individual techniques. A topical bibliography that offers numerous references
under many of the topics related to CPQRA is being made available by CCPS on diskettes. (Contact CCPS in New York for details.)
TABLE 1.12. Focus of Project Quality Assurance Procedures
Data compilation
• Data should be checked as being correct, relevant, and up to date
• Data on chemical toxicity should be reviewed for reasonableness
• Documentation of the sources of data used should be maintained
Incident enumeration and selection
• The historical record should be reviewed
• Incidents should reflect all major inventories of hazardous materials
• Incidents rejected (especially rare, large ones) should be reviewed and documented
• Documentation used for hazard identification and for incident enumeration and selection (HAZOP,
What-If, etc.) should be maintained
Consequence estimation
• Models should be well documented
• Trial runs should be compared against known results for validation (to protect against misunderstanding of
model requirements)
• Consequence results should consider all important effects (e.g., explosion analysis should include blast and
thermal radiation effects)
• Effect models should correspond to the study objectives
• Documentation of input data and results should be maintained
Frequency estimation
• Historical data should be confirmed as being applicable
• Fault and event tree model results should be confirmed against the historical record where feasible
• Documentation of the frequency estimation should be maintained
Risk estimation
• Results should be checked against experience for reasonableness
• Audit trail of documentation should be maintained
It is important to note that well-constructed and well-executed CPQRAs rely
heavily on judgment. Short training programs provide users with the necessary tools;
however, judgment can come only from the experience of applying them. Project management must be aware that estimates from inexperienced practitioners need greater
scrutiny than those from accomplished risk analysts.
1.9.5.5. PROJECT COST CONTROL
As CPQRAs can consume substantial resources, attention to cost control in developing
a project plan is essential. Once funding has been approved, it is important to document the allocation of that funding to accomplish the study. This allocation covers
• manpower costs (internal to the organization)*
• tool acquisition and installation (hardware and software) *
• data acquisition*
• computer costs
• training costs
• travel
• publication and presentation
• outside consultant services (all types)*
• project overheads
The four starred (*) items above present unique problems for CPQRA project managers. They represent greater uncertainty in preparing project cost estimates than do
the other contributors. Consequently, greater effort to define them for estimate purposes is required, and greater attention to them through cost control procedures
during the project is necessary. The project manager must rely on the risk analyst for
estimating model development costs, software acquisition costs, outside consulting
services, and data acquisition expense.
Because of the potential for uncertainty, it is good practice to require that the risk
analyst provide documentation for cost estimates, including statements from any anticipated source of outside service (e.g., consultants, data acquisition). For example, if the
scope of work required earthquake analysis and this was beyond the capabilities of the
organization's staff, it would be necessary to provide at least preliminary estimates for
this analysis from outside firms. While this may require additional effort in preparing
resource requirements, this effort should result in better definition of costs prior to
project approvals and the avoidance of cost overruns thereafter. Such documentation
can also be used as input to cost control procedures over the life of the project. Otherwise, routine project cost controls in use for managing capital projects can be applied.
1.9.6. Project Execution
A project manager has successfully completed the project when he has completed his
scope of work. In preparing that scope of work, the project manager should specify the
means of measuring his project's progress in terms of percent completion. To calibrate
the project milestones with completed performance, the project manager needs to
confer with the risk analyst and agree on the assignment of degrees of project completion with logical end points in the CPQRA sequence (Figure 1.3).
The project manager is responsible for providing status reports comparing actual
versus estimated progress presented on the approved project schedule. Causes for
delays or cost overruns need to be investigated and explained, and remedial action identified and implemented where necessary.
1.10. Maintenance of Study Results
CPQRA results should be maintained after the completion of the study as an integral
part of a company's risk management program. Any actions taken as a result of the
study should be documented as well. As discussed in the management overview,
CPQRA results can be important to the company's risk management program (New
Jersey, 1988). Such a program should be kept up to date, and so should the associated
CPQRAs. The CPQRA report should be a living document. As the plant is modified
or as procedures change, the CPQRA should be updated, where relevant, to provide
management with information on the effect of such changes on risk. The CCPS Guidelines for Process Safety Documentation (1995) describes this documentation in more detail.
It is important to control and monitor the distribution of all copies of a CPQRA
report so that each recipient receives all updates and does not use outdated information
for decision-making. Periodically, the register of report holders should be used to confirm the location of all report copies throughout the organization and to keep the register up to date.
Documenting the systematic approach followed in performing CPQRAs permits
subsequent readers, perhaps uninvolved with the original work, to follow the analysis.
Each individual stage—hazard and incident identification, consequence and frequency
estimation, and risk estimation—can be important later. The maintenance of CPQRA
results also provides continuity to a risk management program. The importance of
management systems in the reduction of risk is receiving greater attention (Batstone,
1987). Risk management program components discussed by Boyen et al. (1988) are
itemized below, along with their dependencies on maintained CPQRAs:
• Technology
-Process Safety Information. The CPQRA provides a current summary of
hazards on the site and a listing and summary of all important relevant documents.
-Process Risk Analysis. This is the primary function of the CPQRA, one that
must be kept up to date and made available to new staff.
-Management of Change (Technology). All changes/modifications should be
subjected to the same rigor of analysis as the original study.
-Rules and Procedures. These should be developed in the context of the
CPQRA results.
• Personnel
-Staff Training. The CPQRA presents insights to specific facility risks with all
relevant documents appended or referenced.
-Incident Investigation. The CPQRA can be useful in incident investigation,
to check whether the event was properly identified and if protective systems
performed as expected. If the event was not previously identified, it should be added to
the CPQRA and the results recalculated. Additional risk reduction measures may
be suggested.
-Auditing. The CPQRA can serve as a guide to the auditor to familiarize the
auditor with major risk contributors and past studies of them.
• Facilities
-Equipment Tests and Inspections. The CPQRA highlights the importance
of testing intervals in maintaining protective system reliability. Regular checks
are necessary to ensure these are maintained.
-Prestartup Safety Reviews. This function is similar to the auditing role.
Important features are identified for inspection and checking.
-Management of Change (Facilities). See Management of Change (Technology).
• Emergencies
-CPQRAs can assist in developing a site emergency response plan.
• Some Additional Uses (not specific to the site risk management program)
-Community Relations. Discussions with the local community are often aided
by the availability of up-to-date CPQRAs.
-Plant Comparisons. Many companies operate several plants of similar design.
CPQRA data from one can be used as a guide for new plants or for modifying
other existing plants.
-Operating Standards. All the CPQRA component techniques make assumptions about how the plants should be operated (HAZOP, fault tree failure frequencies, consequence calculations, etc.). When documented and kept current, these
can be checked at a later stage for accuracy.
It is important to recognize that a CPQRA shows whether a plant can operate at a
given risk level, but cannot ensure that the plant will operate consistent with the
assumptions used to estimate risk. Naturally, if actual operations differ from study
assumptions, the risk estimates produced cannot be considered representative. Study
assumptions need to reflect reality, and as reality changes, so must study assumptions.
Corresponding risk estimates will need to be updated. Updates can be triggered by
• process changes (e.g., hardware, software, material, procedures)
• availability of improved input data (e.g., toxicology data)
• introduction of company risk targets
• advances in CPQRA component techniques
• changes to company property (e.g., neighboring process units, administration
building relocation)
• changes in neighboring property (e.g., expansion of a housing development to
company property limits)
Maintenance of a CPQRA means much more than assuring the availability of a
copy of the original study in an organization's files, though it is important to preserve
and store the results in a secured system. The need to maintain the study should be recognized and accepted at the time the commitment is made to execute the CPQRA. As
with any process documentation, without such commitment, the CPQRA report will
gradually but assuredly become dated and lose its value to the company's risk management program.
1.11. References
AIChE/CCPS (1992), Guidelines for Hazard Evaluation Procedures second edition with worked
examples. Center for Chemical Process Safety, American Institute of Chemical Engineers,
New York.
AIChE/CCPS (1995), Tools for Making Acute Risk Decisions, Center for Chemical Process Safety,
American Institute of Chemical Engineers, New York.
AIChE/CCPS (1995), Guidelines for Process Safety Documentation, Center for Chemical Process
Safety, American Institute of Chemical Engineers, New York.
AIChE/CCPS (1994), Guidelines for Implementing Process Safety Management Systems, Center for
Chemical Process Safety, American Institute of Chemical Engineers, New York.
AIChE/CCPS (1989), Guidelines for Technical Management of Chemical Process Safety, Center for
Chemical Process Safety, American Institute of Chemical Engineers, New York.
AIChE/CCPS (1995), Plant Guidelines for Technical Management of Chemical Process Safety,
Center for Chemical Process Safety, American Institute of Chemical Engineers, New York.
AIChE/CCPS (1988a). Guidelines for Safe Storage and Handling of High Toxic Hazard Materials.
Center for Chemical Process Safety. American Institute of Chemical Engineers, New York.
Amendola, A. (1986). "Uncertainties in Systems Reliability Modeling: Insight Gained Through
European Benchmark Exercises," Nuclear Engineering Design 93, 215-225, Amsterdam,
The Netherlands: Elsevier Science Publishers.
Arendt, J. S. et al. (1989). A Manager's Guide to Quantitative Risk Assessment of Chemical Process
Facilities. JBF Associates, Inc., Knoxville, Tenn., Report No. JBFA-119-88, prepared for the
Chemical Manufacturers Association, Washington, D.C.; January.
Ballard, G. M. (1987). "Reliability Analysis—Present Capability and Future Developments."
SRS Quarterly Digest System Reliability Service, UK Atomic Energy Authority,
Warrington, England, pp. 3-11, October.
Batsone, R. J. (1987). Proceedings of the International Symposium on Preventing Major Chemical
Accidents. Washington, D.C. (J. L. Woodward, ed.). American Institute of Chemical Engineers, New York. Feb. 3-5.
Box, G. E. P. and Hunter, J. S. (1961). "The 2^(k-p) Fractional Factorial Designs. Part 1,"
Technometrics 3(3), 311-346.
Boyen, V. E. et al. (1988). "Process Hazards Management." Document developed by Organization Resource Counselors, Inc. (ORC) [submitted to OSHA for future rulemaking on process hazards management], Washington, D.C.
Bretherick, L. (1983). Handbook of Reactive Chemical Hazards, 2nd edition. London:
Butterworths.
Carpenter, B. H., and Sweeny, H. C. (1965). "Process Improvement with 'Simplex'
Self-Directing Evolutionary Operation." Chemical Engineering 72(14), 117-126.
DeHart, R., and Gaines, N. (1987). "Episodic Risk Management at Union Carbide." AIChE
Spring National Meeting, Symposium on Chemical Risk Analysis, Houston. American
Institute of Chemical Engineers. New York.
Dow's Fire and Explosion Index—Hazard Classification Guide, 7th edition, 1994. CEP Technical
Manual, American Institute of Chemical Engineers, New York.
Dow's Chemical Exposure Index Guide, 1994. CEP Technical Manual, American Institute of
Chemical Engineers, New York.
EEC (1982). "Major-Accident Hazards of Industrial Activities" ("Seveso Directive"). European
Economic Community, Council Directive 82-501-EEC, Official Journal (OJ) Reference
No. L230, 5.8.1982; amended October 1982. [Available from European Economic Community, Press and Information Services, Delegation of the Commission of European Communities, Suite 707, 2100 M Street, N.W., Washington, D.C.]
Freeman, R. A. (1983). "Problems with Risk Analysis in the Chemical Industry." Plant/Operations Progress 2(3), 185-190.
Freeman, R. A. et al. (1986). "Assessment of Risks from Acute Hazards at Monsanto." 1986
Annual Meeting, Society for Risk Analysis, Nov. 9-12, Boston, MA. Society for Risk Analysis, 8000 West Park Drive, Suite 400, McLean, VA 22102.
Gibson, S. B. (1980). "Hazard Analysis and Risk Criteria." Chemical Engineering Progress
(November), 46-50.
Goyal, R. K. (1985). "PRA-Two Case Studies from the Oil Industry." Paper presented at Session 5A of Reliability '85, Symposium Proceedings, July 10-12, 1985; Vol. 2, p. 5A/3.
Jointly sponsored by National Centre of Systems Reliability. Warrington, England and
Institute of Quality Assurance, London, England.
Hawksley, J. L. (1984). Some Social, Technical and Economical Aspects of the Risks of Large Chemical
Plants. Chemrawn III, World Conference on Resource Material Conversion, The Hague,
June 25-29.
Health & Safety Executive (1978). Canvey—An Investigation of Potential Hazards from the Operations in the Canvey Island/Thurrock Area. 195 pp. HMSO, London, UK.
Health & Safety Executive (1981). Canvey—A Second Report, 130 pp. HMSO, London, UK.
Helmers, E. N., and L. C. Schaller (1982). "Calculated Process Risk and Hazards Management."
AIChE Meeting, Orlando, FL, Feb. 20-Mar. 3. American Institute of Chemical Engineers,
New York.
Hendershot, D. C. (1996). "Risk Guidelines as a Risk Management Tool." 1996 Process Plant
Safety Symposium, Houston, TX, April 1-2, 1996, Session 3.
IChemE (1985). Nomenclature of Hazard and Risk Assessment in the Process Industries. Institution
of Chemical Engineers, UK.
ICI (Imperial Chemical Industries) (1985). The Mond Index, 2nd edition. ICI PLC, Explosion
Hazards Section, Technical Department, Winnington, Northwich, Cheshire CW8 4DJ,
England.
Joschek, K. T. (1983). "Risk Assessment in the Chemical Industry." Plant/Operations Progress
2(1 January), 1-5.
Kaplan, S., and B. J. Garrick (1981). "On the Quantitative Definition of Risk." Risk Analysis
1(1), 11-27.
Kilgo, M. B. (1988). "An Application of Fractional Factorial Experimental Designs." Quality
Engineering 1, 19-23. American Society for Quality Control and Marcel Dekker, New
York.
Lees, F. P. (1980). Loss Prevention in the Process Industries, 2 Volumes. Butterworths. London
and Boston.
Long, D. E. (1969). "Simplex Optimization of the Response from Chemical Systems." Anal.
Chim. Acta 46, 193-206.
Marshall, V. C. (1987). Major Chemical Hazards. Wiley, New York.
Mudan, K. S. (1987). "Hazard Ranking for Chemical Processing Facilities." ASME Winter
Annual Meeting, Boston, MA. Dec. 13-18. American Society of Mechanical Engineers,
New York.
Nelder, J. A., and Mead, R. (1964). "A Simplex Method for Function Minimization." The Computer Journal 7, 308-313.
Mundt, A. G. (1995). "Process Risk Management for Facilities and Distribution." AIChE
Summer National Meeting, Boston, July 30-Aug 2, American Institute of Chemical Engineers, New York.
New Jersey (1988). "Toxic Catastrophe Prevention Act Program." State of New Jersey,
N.J.A.C. 7: 1, 2, 3, 4 and 6. New Jersey Register, Monday, June 20, 1988, 20 N.J.R. 1402.
NFPA 325M (1984). Fire Hazard Properties of Flammable Liquids, Gases, and Volatile Solids.
National Fire Protection Association, Quincy, MA 02269.
NUREG (1983). PRA Procedures Guide: A Guide to the Performance of Probabilistic Risk Assessment
for Nuclear Power Plants, 2 volumes, NUREG/CR-2300, U.S. Nuclear Regulatory Commission, Washington, D.C. (available from NTIS).
NUREG (1984). PRA Status Review in the Nuclear Industry, NUREG-1050, Nuclear Regulatory Commission, Washington D.C. September, 1984 (available from NTIS).
NUREG (1985). Probabilistic Safety Analysis Procedures Guide, NUREG/CR-2815, Nuclear
Regulatory Commission, Washington D.C. August, 1985 (available from NTIS).
Ormsby, R. W. (1982). "Process Hazards Control at Air Products." Plant/Operations Progress 1,
141-144.
Pikaar, J. (1995). "Risk Assessment and Consequence Models." 8th International Symposium on
Loss Prevention and Safety Promotion in the Process Industries, June 6-9, 1995, Antwerp, Belgium, Keynote lecture on Theme 4.
Pikaar, M. J., and M. A. Seaman (1995). A Review of Risk Control. Zoetermeer, Netherlands:
Ministerie VROM.
Pilz, V. (1980). "What Is Wrong with Risk Analysis?" 3rd International Symposium on Loss
Prevention and Safety Promotion in the Process Industries, Basle, Switzerland, 6/448-454.
Swiss Society of Chemical Industries, September 15-19.
Poulson, J.M. et al. (1991). "Managing Episodic Incident Risk in a Large Corporation." AIChE
Summer National Meeting, Pittsburgh, Aug 18-21, American Institute of Chemical Engineers.
Prugh, R. W. (1980). "Application of Fault Tree Analysis." Chemical Engineering Progress, July,
59-67.
Rijnmond Public Authority (1982). A Risk Analysis of 6 Potentially Hazardous Industrial Objects
in the Rijnmond Area—A Pilot Study. D. Reidel, Dordrecht, the Netherlands and Boston,
MA.
Renshaw, P.M. (1990). "A Major Accident Prevention Program." Plant/Operations Progress
9(3), 194-197.
Rosenblum, G. R. et al. (1983). "Integrated Risk Index Systems." Proceedings of the Society for
Risk Analysis. Plenum Press, New York, 1985.
Seaman, M. A., and Pikaar, M. J., "A Review of Risk Control," VROM, 11030/150, June, 1995.
Spendley, W. et al. (1962). "Sequential Application of Simplex Designs in Optimisation and
Evolutionary Operation." Technometrics 4(4), November.
TNO (1979). Methods for the Calculation of the Physical Effects of the Escape of Dangerous Materials:
Liquids and Gases, 2 Volumes. P.O. Box 312, 7300 AH Apeldoorn, The Netherlands.
Tyler, B. J. et al. (1996). "A toxicity hazard index." Chemical Health & Safety, January/February
1996, 19-25.
UCSIP Working Party (1985). "Standard Plan for the Implementation of Hazard Studies 1:
Refineries." Union des Chambres Syndicales de l'Industrie du Pétrole (UCSIP), Paris,
France.
US EPA (1980). "Chemical Selection Method: An Annotated Bibliography": Toxic Integration
Information Series. EPA 560/TTIS-80-001, November (available from NTIS).
US EPA (1981). "Chemical Scoring System Development," by R. H. Ross and P. Lu, Oak
Ridge National Laboratory. Interagency Agreement No: 79-D-x9856, June (available from
NTIS).
Van Kuijen, C. J. (1987). "Risk Management in the Netherlands: A Quantitative Approach."
UNIDO Workshop on Hazardous Waste Management and Industrial Safety, Vienna. June
22-26.
Warren Centre (1986). Hazard Identification and Risk Control for the Chemical and Related Industries—Major Industrial Hazards Project Report (D. H. Slater, E. R. Corran, and R. M.
Pitblado, eds.). University of Sydney, NSW 2006, Australia.
Watson, S. R. (1994). "The Meaning of Probability in Probabilistic Risk Assessment." Reliability Engineering and System Safety 45, 261-269.
Watson, S. R. (1995). "Response to Yellman and Murray's comment on 'The meaning of probability in probabilistic risk analysis'." Reliability Engineering and System Safety 49, 207-209.
World Bank (1985). Manual of Industrial Hazard Assessment Techniques. Office of Environmental and Scientific Affairs, World Bank, Washington, D.C.
Yellman, T. W., and Murray, T. M. (1995). "Comment on 'The meaning of probability in
probabilistic risk analysis'." Reliability Engineering and System Safety 49, 201-205.
Consequence Analysis
All processes have a risk potential. In order to manage risks effectively, they must be
estimated. Since risk is a combination of frequency and consequence, consequence (or
impact) analysis is a necessary step in the risk management process.
This chapter provides an overview of consequence and effect models commonly
used in CPQRA (as shown in Figure 2.1). Accidents begin with an incident, which usually results in the loss of containment of material from the process. The material has hazardous properties, which might include toxic properties and energy content. Typical
incidents might include the rupture or break of a pipeline, a hole in a tank or pipe, runaway reaction, fire external to the vessel, etc. Once the incident is defined, source models
are selected to describe how materials are discharged from the process. The source model
provides a description of the rate of discharge, the total quantity discharged (or total time
of discharge), and the state of the discharge, that is, solid, liquid, vapor or a combination.
A dispersion model is subsequently used to describe how the material is transported
downwind and dispersed to some concentration levels. For flammable releases, fire and
explosion models convert the source model information on the release into energy
hazard potentials such as thermal radiation and explosion overpressures. Effect models
convert these incident-specific results into effects on people (injury or death) and structures. Environmental impacts could also be considered (Paustenbach, 1989), but are not
addressed here. Additional refinement is provided by mitigation factors, such as water
sprays, foam systems, and sheltering or evacuation, which tend to reduce the magnitude
of potential effects in real incidents.
Good overviews of consequence models are given in Crowl and Louvar (1990),
Fthenakis (1993), Lees (1986, 1996), Marshall (1987), Mecklenburgh (1985),
Rijnmond Public Authority (1982), TNO (1979), and Warren Centre (1986).
The objective of this chapter is to review the range of models currently available for
consequence analysis. Some material on these models is readily available, either in the
general literature or as part of the AIChE/CCPS publication series. Where detailed
model descriptions are available elsewhere, they are not repeated; instead, the reader is directed
to the specific references. Otherwise, a description adequate for initial calculations is
provided.
Consequence Analysis to Achieve a Conservative Result. All models, including
consequence models, have uncertainties. These uncertainties arise from (1) an incomplete understanding of the geometry of the release, that is, hole size, (2) unknown or poorly characterized physical properties, (3) a poor understanding of the chemical or release process, and (4) unknown or poorly understood mixture behavior, to name a few.

FIGURE 2.1. Overall logic diagram for the consequence models for releases of volatile, hazardous substances. The diagram proceeds from selection of a release incident (rupture or break in pipeline, hole in a tank or pipeline, runaway reaction, fire external to vessel, others) to selection of a source model to describe the release incident (results may include total quantity released or release duration, release rate, and material phase), then to selection of a dispersion model if applicable (neutrally buoyant, heavier than air, others; results may include downwind concentration, area affected, and duration). Flammable releases lead to selection of a fire and explosion model (TNT equivalency, multi-energy explosion, fireball, Baker-Strehlow, others; results may include blast overpressure and radiant heat flux); toxic releases lead to selection of an effect model (response vs. dose, probit model, others; results may include toxic response, number of individuals affected, and property damage). Mitigation factors (escape, emergency response, shelter in place, containment dikes, other) are applied before the risk calculation.
Uncertainties that arise during the consequence modeling procedure are treated by
assigning conservative values to some of these unknowns. By doing so, a conservative
estimate of the consequence is obtained, defining the limits of the design envelope. This
ensures that the resulting engineering design to mitigate or remove the hazard is
overdesigned. Every effort, however, should be made to achieve a result consistent with
the demands of the problem.
For any particular modeling study, several receptors might be present which
require different decisions for conservative design. For example, dispersion modeling
based on a ground level release will maximize the consequence for the surrounding
community, but will not maximize the consequence for plant workers at the top of a
process structure.
To illustrate conservative modeling, consider a problem requiring an estimate of
the gas discharge rate from a hole in a storage tank. This discharge rate will be used to
estimate the downwind concentrations of the gas, with the intent of estimating the
toxicological impact. The discharge rate is dependent on a number of parameters,
including (1) the hole area, (2) the pressure within and outside the tank, (3) the physical
properties of the gas, and (4) the temperature of the gas, to name a few.
The reality of the situation is that the maximum discharge rate of gas will occur
when the leak first occurs, with the discharge rate decreasing as a function of time as the
pressure within the tank decreases. The complete dynamic solution to this problem is
difficult, requiring a mass discharge model cross-coupled to a material balance on the
contents of the tank. An equation of state (perhaps nonideal) is required to determine
the tank pressure given the total mass. Complicated temperature effects are also possible. A modeling effort of this detail is not necessarily required to estimate the
consequence.
A much simpler procedure is to calculate the mass discharge rate at the instant the
leak occurs, assuming a fixed temperature and pressure within the tank equal to the
initial temperature and pressure. The actual discharge rate at later times will always be
less, and the downwind concentrations will always be less. In this fashion a conservative
result is ensured.
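The initial-rate calculation described above can be sketched with the standard choked-flow equation for an ideal gas escaping through a hole (the form given in Crowl and Louvar, 1990); the function name and the nitrogen example below are illustrative, not taken from the text:

```python
import math

def choked_gas_discharge(P0, T0, M, gamma, d_hole, Cd=1.0):
    """Initial mass discharge rate (kg/s) of an ideal gas through a hole,
    assuming choked (sonic) flow at the initial stagnation conditions.
    Valid when P0/P_ambient exceeds roughly ((gamma+1)/2)**(gamma/(gamma-1)).
    P0: tank pressure (Pa abs); T0: tank temperature (K); M: molar mass (kg/mol)."""
    R = 8.314  # gas constant, J/(mol K)
    A = math.pi * d_hole ** 2 / 4.0  # hole area, m^2
    term = (gamma * M / (R * T0)) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
    return Cd * A * P0 * math.sqrt(term)

# Nitrogen tank at 10 bar absolute, 298 K, 10-mm hole; Cd = 1.0 is the
# conservative choice, since the actual discharge coefficient is smaller.
m_dot = choked_gas_discharge(P0=1.0e6, T0=298.0, M=0.028, gamma=1.4, d_hole=0.010)
```

Holding the tank at its initial pressure and temperature, as in the text, makes this single evaluation an upper bound on the discharge rate for all later times.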
For the hole area, a possible decision is to consider the area of the largest pipe connected to the tank, since pipe disconnections are a frequent source of tank leaks. Again,
this will maximize the consequence and ensure a conservative result. This procedure is
continued until all of the model parameters are specified.
Unfortunately, this procedure can result in a consequence that is many times larger
than the actual one, leading to a potential overdesign of the mitigation procedures or safety
systems. This occurs, in particular, if several decisions are made during the analysis,
with each decision producing a maximum result. For this reason, consequence analysis
should be approached with intelligence tempered with a good dose of reality and
common sense.
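To see how stacked conservative decisions compound, consider a hypothetical set of multipliers, each the ratio of a conservative parameter choice to a best estimate (the factor names and values below are invented purely for illustration):

```python
from math import prod

# Hypothetical conservatism ratios (conservative value / best-estimate value)
factors = {
    "hole area (largest connected pipe vs. expected leak)": 4.0,
    "discharge rate held at initial maximum": 1.5,
    "worst-case atmospheric stability": 2.5,
}

# Each factor is individually modest, but the product is not.
overall = prod(factors.values())
```

Here three individually reasonable choices inflate the final consequence estimate fifteen-fold, which is exactly the overdesign risk the text warns about.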
2.1. Source Models
Source models are used to quantitatively define the release scenario by estimating discharge rates (Section 2.1.1), total quantity released (or total release duration), extent of
flash and evaporation from a liquid pool (Section 2.1.2), and aerosol formation (Section 2.1.2). Dispersion models convert the source term outputs to concentration fields
downwind from the source (Section 2.1.3). The relationship between source and dispersion models, and the various model types, is shown schematically in Figure 2.2. As
shown in Figure 2.2, source and dispersion models are highly coupled, with the results
of the source model being used to select the appropriate dispersion model.
2.1.1. Discharge Rate Models
2.1.1.1. BACKGROUND
Purpose. Most acute hazardous incidents start with a discharge of flammable or toxic
material from its normal containment. This may be from a crack or fracture of process vessels or pipework, from an open valve, or from an emergency vent. Such leaks may be gas, liquid, or two-phase flashing liquid-gas releases. Different models are appropriate for each of these; unfortunately, there is no single model for all applications. Estimates of discharge rate and total quantity released (or duration of the release) are essential as input to other models (as shown in Figure 2.2). The total quantity released may be greater or less than the vessel volume (depending on connecting pipework, isolation valves, etc.).

FIGURE 2.2. Logic diagram for discharge and dispersion models. The diagram branches on the phase of the released volatile hazardous substance (gas, two-phase, or liquid) and, for liquids, on whether the temperature exceeds the boiling point and whether flashing occurs: flashing releases use a two-phase flashing flow model, with aerosol formation leading to a gas-and-aerosol model or, where liquid rainout occurs, to an aerosol transport/evaporation model; nonflashing liquids use pool formation and pool evaporation models. The resulting vapor is then dispersed using a neutral buoyancy dispersion model or a dense gas dispersion model, depending on gas density.

TABLE 2.1. Conservative Design Approaches Based on the Objective of the Risk Study

1. Protect vessel from overpressure: Estimate minimum flow through the emergency relief system.
2. Design downstream treatment system: Estimate maximum flow through the emergency relief system to give the maximum load on downstream equipment.
3. Estimate external consequences of emergency relief system release: (a) Estimate maximum discharge from the emergency relief system to give the maximum source term and downwind concentrations; (b) consider the most likely release.
Philosophy. The underlying technology for gas and liquid discharges is well developed
in chemical engineering theory and full descriptions are available in standard references
such as Perry and Green (1984) and Crane Co. (1986). Reviews of discharge rate
models can be found in the Guidelines for Vapor Cloud Dispersion Models (AIChE/CCPS
1987a, 1996a), its companion workbook (AIChE/CCPS, 1989a), Crowl and Louvar
(1990), Fthenakis (1993), API (1995), and AIChE/CCPS (1996a). A qualitative
description of the method is also presented by AIChE/CCPS (1995a).
The treatment of a two-phase flashing discharge is more empirical. Initial investigations for the nuclear industry have been supplemented by the AIChE Design Institute for Emergency Relief Systems (DIERS) as described by Fisher (1985), Fisher et
al. (1992), and Boicourt (1995). The design philosophy with the DIERS models is to
select the minimal discharge rate at the design pressure of the process unit, and to maximize the relief area via selection of a minimal mass flux model. Many of these models
also assume no-slip which tends to result in the lowest mass discharge predictions. Use
of these mass flux models to represent source models will result in an under-prediction
of the discharge rate and hence, for dispersion problems, a nonconservative result.
Table 2.1 shows how the objective of the study determines the conservative design
approach and hence the source model selected. If the objective of the study is to protect
the vessel via emergency relief system design, then a source model is chosen to minimize the relief system mass flow and thus maximize the relief area. A two-phase flow
model would typically be selected as the source model. If, on the other hand, the objective of the study is to design a downstream containment/treatment system, then a
source model is selected to maximize the mass flow discharge. A single-phase liquid
discharge model might be appropriate here. Finally, if the study objective is to deter-
mine the community consequences of the release, then a source model is selected to
maximize the mass discharge and maximize the downwind concentrations.
Applications. Discharge models are the first stage in developing the majority of consequence estimates used in CPQRA, as shown in Figure 2.1. The applications of interest
are those relating to two categories of process release: emergency engineered releases
(e.g., relief valves) and emergency unplanned releases (e.g., containment failures).
Continuous releases (e.g., process vents) and fugitive emissions (e.g., routine storage
tank breathing losses) are not typically the focus of CPQRA.
2.1.1.2. DESCRIPTION
Description of Technique. The first step in the procedure is to determine an appropriate scenario. Table 2.2 contains a partial list of typical scenarios grouped according to
the material discharge phase, i.e., liquid, gas, or two-phase. Figure 2.3 shows some conceivable discharge scenarios with the resulting effect on the material's release phase.
Additional information is available elsewhere (AIChE/CCPS, 1987a, 1995a, 1996a;
Lees, 1986, 1996; World Bank, 1985).
Several important issues must be considered at this point in the analysis. These
include: release phase, thermodynamic path and endpoint, hole size, leak duration, and
other issues.
Release Phase. Discharge rate models require a careful consideration of the phase of
the released material. The phase of the discharge is dependent on the release process
and can be determined by using thermodynamic diagrams or data, or a vapor-liquid
equilibrium model, and the thermodynamic path during the release. Standard texts on
vapor-liquid equilibrium (Henley and Seader, 1981; Holland, 1975; King, 1980;
Smith and Missen, 1982; Smith and Van Ness, 1987; Walas, 1985) or any of the commercial process simulators provide useful guidance on phase behavior. The starting
point of this examination is defined by the initial condition of the process material
before release. This may be normal process conditions or an abnormal state reached by
TABLE 2.2. Typical Release Outcomes (Emergency Engineered or Emergency
Unplanned Releases), and the Relationship to Material Phase

Liquid Discharges
• Hole in atmospheric storage tank or other atmospheric pressure vessel or pipe under liquid head
• Hole in vessel or pipe containing pressurized liquid below its normal boiling point

Gas Discharges
• Hole in equipment (pipe, vessel) containing gas under pressure
• Relief valve discharge (of vapor only)
• Boil-off evaporation from liquid pool
• Relief valve discharge from top of pressurized storage tank
• Generation of toxic combustion products as a result of fire

Two-Phase Discharges
• Hole in pressurized storage tank or pipe containing a liquid above its normal boiling point
• Relief valve discharge (e.g., due to a runaway reaction or foaming liquid)
FIGURE 2.3. Illustrations of some conceivable release mechanisms: catastrophic failure of a pressurized tank (immediately resulting vapor cloud); small hole in the vapor space of a pressurized tank (pure vapor jet); intermediate hole in the vapor space of a pressurized tank; escape of liquefied gas from a pressurized tank (fragmenting jet with droplets); spillage of refrigerated liquid into a bund (liquid jet and boiling pool); spillage of refrigerated liquid onto water (evaporating cloud); and a high-velocity fragmenting jet from a refrigerated containment vessel. In most cases the jet could be two-phase (vapor plus entrained liquid aerosol). From Fryer and Kaiser (1979).
the process material prior to the release. The end point of the pathway will normally be
at a final pressure of one atmosphere.
Thermodynamic Path and Endpoint. The specification of the endpoint and the thermodynamic pathway used to reach the endpoint is important to the development of the
source model. If, for instance, initially quiescent fluid is accelerated during a release,
and the endpoint is defined as moving fluid, then the assumption of an isentropic pathway is normally valid. If, however, the endpoint is defined as quiescent fluid (for example, a pool of liquid after a release), independent of any transient accelerations, then the
initial and final enthalpies would be assumed equal (this does not imply that the
enthalpy is constant during the release process).
TABLE 2.3. Implications of Various Thermodynamic Assumptions on the Total
Energy Balance

Total Energy Balance: ΔH + ΔKE + ΔPE = Q − Ws

where ΔH is the change in enthalpy
      ΔKE is the change in kinetic energy
      ΔPE is the change in potential energy
      Q is the heat (+ = input; − = output)
      Ws is the shaft work (+ = output; − = input)

Assumptions: external energy balance; open system with steady flow, that is, no accumulation of mass or energy; fixed system boundaries.

Isenthalpic: ΔH = 0; ΔT = 0 (ideal gas only); Ws from Note 1
Isentropic: ΔS = 0; Q = 0 (reversible processes only); Ws from Note 2
Isothermal: ΔT = 0; ΔH = 0 (ideal gas only); Ws from Note 1
Adiabatic: Q = 0; ΔS = 0 (reversible processes only); Ws from Note 2

NOTE 1: From the remaining terms of the total energy balance: Q − Ws − ΔKE − ΔPE = 0
NOTE 2: From the remaining terms of the total energy balance: ΔH + ΔKE + ΔPE + Ws = 0
If the process is reversible, the work calculated is the maximum work.

Table 2.3 demonstrates the impact of the various thermodynamic paths on a total
energy balance for an open system. For the isenthalpic case, ΔT = 0 for an ideal gas
since the enthalpy is a function of temperature only. For the isentropic case, Q = 0
since dS = dQ/T. For the isothermal case, ΔH = 0 since the enthalpy for an ideal gas is a
function of temperature only. For the adiabatic case, ΔS = 0 for a reversible process
only. For both the isentropic and adiabatic cases, the shaft work determined is a maximum for reversible processes.
For isentropic releases, an equilibrium flash model can be used to determine the
final temperature, composition and phase splits at ambient pressure. Clearly, if the
pathway stays in the gas or liquid phase, it is modeled accordingly. However, if a phase
change is encountered, then two-phase flow may need consideration in modeling the
release. A pure liquid will flash at its normal boiling point, while a mixture will flash
continuously and with varying compositions over the range between its bubble point
and dew point temperatures.
For releases of gases through pipes, either adiabatic or isothermal flow models are
available (Levenspiel, 1984; Crowl and Louvar, 1990). For releases of gases at the
same source temperature and pressure, the adiabatic flow model predicts a larger, i.e.
conservative, flowrate, while the isothermal model predicts a smaller flowrate. The
actual flowrate is somewhere in between these values. For many problems, the
flowrates calculated by each approach are close in value.
Hole Size. A primary input to any discharge calculation is the hole size. For releases
through a relief system, the actual valve or pipe dimension can be used. For releases
through holes, the hole size must be estimated. This must be guided by hazard identification and incident enumeration and selection processes (whether this would be a
flange leak, medium size leak from impact, full-bore rupture, etc.). No general consensus is currently available for hole size selection. However, a number of methodologies
are suggested:
• World Bank (1985) suggests characteristic hole sizes for a range of process
equipment (e.g., for pipes 20% and 100% of pipe diameter are proposed).
• Some analysts use 2- and 4-inch holes, regardless of pipe size.
• Some analysts use a range of hole sizes from small to large, such as 0.2, 1, 4, and 6
inches, and full-bore ruptures for pipes less than 6 inches in diameter.
• Some analysts use more detailed procedures. They suggest that 90% of all pipe
failures result in a hole size less than 50% of the pipe area. The following
approach is suggested:
-For small-bore piping up to 1½ inches, use 5-mm and full-bore ruptures.
-For 2-6 inch piping, use 5-mm, 25-mm, and full-bore holes.
-For 8-12 inch piping, use 5-, 25-, 100-mm, and full-bore holes.
-For a large hole in a pressure vessel, assume a 10-min discharge of the contents rather than an instantaneous complete failure. Also, assume complete failure of
incoming and outgoing lines and check whether discharge of the contents through
these lines would take less than 10 min. If less than 10 min, assume 10 min.
-For pumps, look at the suction and discharge lines. Consider a seal leak, 5-, 25-,
and 100-mm holes, depending on line sizes.
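For screening studies, selection rules of this kind are often encoded directly in QRA tools so that scenarios can be enumerated consistently. The sketch below is an illustrative encoding of the detailed procedure listed above; the function name and the representation of a full-bore rupture as a numeric diameter are our own assumptions, not a standard from the text.

```python
def suggested_hole_sizes_mm(line_diameter_in):
    """Candidate hole sizes (mm) for a line of the given nominal diameter
    (inches), following the detailed procedure suggested above. The last
    entry is the full-bore rupture, converted to mm."""
    full_bore = round(line_diameter_in * 25.4, 1)
    if line_diameter_in <= 1.5:        # small-bore piping, up to 1-1/2 in.
        sizes = [5.0]
    elif line_diameter_in <= 6.0:      # 2-6 in. piping
        sizes = [5.0, 25.0]
    else:                              # 8-12 in. piping
        sizes = [5.0, 25.0, 100.0]
    return sizes + [full_bore]

print(suggested_hole_sizes_mm(4.0))   # [5.0, 25.0, 101.6]
```

Pump cases (seal leaks plus 5-, 25-, and 100-mm holes, depending on line size) would be handled separately, since they attach to both the suction and discharge lines.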
Leak Duration. The Department of Transportation (1980) LNG Federal Safety
Standards specified a 10-min leak duration; other studies (Rijnmond Public Authority,
1982) have used 3 min if there is a leak detection system combined with remotely actuated isolation valves. Other analysts use a shorter duration. Actual release duration may
depend on the detection and reaction time for automatic isolation devices and response
time of the operators for manual isolation. The rate of valve closure in longer pipes can
influence the response time. Due to the water hammer effect, designers may limit the
rate of closure in liquid pipelines.
Other Issues. Other special issues to consider when analyzing discharges include the
following.
• Time dependence of transient releases: Decreasing release rates due to decreasing upstream pressure.
• Reduction in flow: Valves, pumps, or other restrictions in the piping that might
reduce the flow rate below that estimated from the pressure drop and discharge
area.
• Inventory in the pipe or process between the leak and any isolation device.
Fundamental Equations. Discharge rate models are based on a mechanical energy
balance. A typical form of this balance is
∫dP/ρ + (g/gc)(z2 − z1) + (v2² − v1²)/2gc + Σf + Ws/ṁ = 0      (2.1.1)
where
P is the pressure (force/area)
p is the density (mass/volume)
g is the acceleration due to gravity (length/time2)
gc is the gravitational constant (force/mass-acceleration)
z is the vertical height from some datum (length)
v is the fluid velocity (length/time)
f is a frictional loss term (length²/time²)
Ws is the shaft work (mechanical energy/time)
ṁ is the mass flow rate (mass/time)
The frictional loss term, Σf, in Eq. (2.1.1) represents the loss of mechanical
energy due to friction and includes losses due to flow through lengths of pipe; fittings
such as valves, elbows, orifices; and pipe entrances and exits. For each frictional device
a loss term of the following form is used
f = Kf(v²/2gc)      (2.1.2)

where Kf is the excess head loss due to the pipe or pipe fitting (dimensionless) and v is
the fluid velocity (length/time).
For fluids flowing through pipes, the excess head loss term Kf is given by
Kf = 4f(L/D)      (2.1.3)
where f is the Fanning friction factor (unitless)
L is the flow path length (length)
D is the flow path diameter (length)
2-K Method. For pipe fittings, valves, and other flow obstructions, the traditional
method has been to use an equivalent pipe length, Lequiv, in Eq. (2.1.3). The problem
with this method is that the specified length is coupled to the friction factor. An
improved approach is to use the 2-K method (Hooper, 1981, 1988), which uses the
actual flow path length in Eq. (2.1.3) (equivalent lengths are not used) and provides
a more detailed approach for pipe fittings, inlets, and outlets. The 2-K method defines
the excess head loss in terms of two constants, the Reynolds number, and the pipe
internal diameter:

Kf = K1/NRe + K∞(1 + 1/IDinches)      (2.1.4)

where
K1 and K∞ are constants (dimensionless)
NRe is the Reynolds number (dimensionless)
IDinches is the internal diameter of the flow path (inches)

The metric equivalent to Eq. (2.1.4) is given by

Kf = K1/NRe + K∞(1 + 25.4/IDmm)      (2.1.5)

where IDmm is the internal diameter in mm.
Table 2.4 contains a list of K values for use in Eqs. (2.1.4) and (2.1.5) for various
types of fittings and valves.
For pipe entrances and exits, a modified form of Eq. (2.1.4) is required to model
the behavior:

Kf = K1/NRe + K∞      (2.1.6)

For pipe entrances, K1 = 160, with K∞ = 0.50 for a "normal" entrance and K∞ = 1.0 for a
Borda-type entrance. For pipe exits, K1 = 0 and K∞ = 1.0. Equations are also provided
for orifices (Hooper, 1981) and for changes in pipe sizes (Hooper, 1988).
For high Reynolds numbers, that is, NRe > 10,000, the first term in Eq. (2.1.6) is
negligible and Kf = K∞. For low Reynolds numbers, that is, NRe < 50, the first term
dominates and Kf = K1/NRe.
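The 2-K relations, Eqs. (2.1.4) and (2.1.6), are easily evaluated in a few lines of code. The sketch below uses illustrative constants from Table 2.4; the function names are our own.

```python
def kf_fitting(k1, k_inf, n_re, id_inches):
    # Eq. (2.1.4): excess head loss for a pipe fitting
    return k1 / n_re + k_inf * (1.0 + 1.0 / id_inches)

def kf_entrance_exit(k1, k_inf, n_re):
    # Eq. (2.1.6): pipe entrances and exits (no diameter correction)
    return k1 / n_re + k_inf

# Standard threaded 90-degree elbow (K1 = 800, Kinf = 0.40, Table 2.4)
# in a 2-inch line at high Reynolds number:
kf = kf_fitting(800.0, 0.40, 1.0e5, 2.0)
print(round(kf, 3))  # 0.608: the K1/NRe term is nearly negligible here
```

Note how, at NRe = 10⁵, the Reynolds-number term contributes only 0.008 of the total 0.608, illustrating the high-Reynolds-number limit Kf = K∞(1 + 1/IDinches).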
The Fanning friction factor for flow through pipes is found from commonly available charts (Perry and Green, 1984). A generalized equation is also available to calculate the friction factor directly, or for spreadsheet use (Chen, 1979):

1/√f = −4 log10[ (ε/D)/3.7065 − (5.0452/NRe) log10 A ]      (2.1.7)

and

A = (ε/D)^1.1098/2.8257 + (7.149/NRe)^0.8981      (2.1.8)

where ε is the pipe roughness, given in Table 2.5.
At high Reynolds numbers (fully developed turbulent flow), the friction factor is
independent of the Reynolds number. From Eq. (2.1.7), for large Reynolds numbers,
1/√f = 4 log10(3.7 D/ε)      (2.1.9)
The Fanning friction factor differs from the Moody friction factor by a constant factor of 4; the Moody factor is 4 times the Fanning factor.
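Chen's correlation, Eqs. (2.1.7) and (2.1.8), is explicit in the friction factor and is therefore convenient for spreadsheet or script use. A minimal sketch (the function name is our own):

```python
import math

def fanning_friction_factor(n_re, rel_rough):
    """Fanning friction factor from Chen (1979), Eqs. (2.1.7)-(2.1.8).
    n_re: Reynolds number; rel_rough: pipe roughness ratio epsilon/D."""
    a = (rel_rough ** 1.1098) / 2.8257 + (7.149 / n_re) ** 0.8981   # Eq. (2.1.8)
    inv_sqrt_f = -4.0 * math.log10(rel_rough / 3.7065
                                   - (5.0452 / n_re) * math.log10(a))
    return 1.0 / inv_sqrt_f ** 2

# Commercial steel (eps = 0.046 mm, Table 2.5), 50-mm pipe, Re = 1e5:
f = fanning_friction_factor(1.0e5, 0.046 / 50.0)
print(round(f, 5))
```

At very large Reynolds numbers the result approaches the fully rough limit of Eq. (2.1.9), which is a useful consistency check when implementing the correlation.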
The above equations provide a useful framework for modeling both incompressible and compressible fluid flow through pipes and holes. For discharge modeling, the
usual objective is to determine the flow rate of material. However, to determine the
friction factor for a pipe, or the K factors for a fitting, the Reynolds number is required.
Thus, a trial-and-error solution is required since the Reynolds number is not known
until the flow rate is known. A spreadsheet can be easily applied to achieve the solution.
A special case occurs at high Reynolds number, where the friction factor is constant
and Kf = K00. For this case the solution is direct.
Liquid Discharges. For liquid discharges, the driving force for the discharge is normally pressure, with the pressure energy being converted to kinetic energy during the
discharge. Since the density remains constant during the discharge, the pressure integral in the mechanical energy balance, Eq. (2.1.1), can be integrated directly to result in
the following simplified equation:
ΔP/ρ + (g/gc)(z2 − z1) + (v2² − v1²)/2gc + Σf + Ws/ṁ = 0      (2.1.10)
TABLE 2.4. "Two-K" Constants for Loss Coefficients in Fittings and Valves*

Kf = K1/NRe + K∞(1 + 1/IDinches)

Fittings                                                   K1       K∞
Elbows
  90°
    Standard (r/D = 1), threaded                           800      0.40
    Standard (r/D = 1), flanged/welded                     800      0.25
    Long radius (r/D = 1.5), all types                     800      0.20
    Mitered (r/D = 1.5): 1 weld (90°)                     1000      1.15
                         2 welds (45°)                     800      0.35
                         3 welds (30°)                     800      0.30
                         4 welds (22.5°)                   800      0.27
                         5 welds (18°)                     800      0.25
  45°
    Standard (r/D = 1), all types                          500      0.20
    Long radius (r/D = 1.5)                                500      0.15
    Mitered, 1 weld (45°)                                  500      0.25
    Mitered, 2 welds (22.5°)                               500      0.15
  180°
    Standard (r/D = 1), threaded                          1000      0.70
    Standard (r/D = 1), flanged/welded                    1000      0.35
    Long radius (r/D = 1.5), all types                    1000      0.30
Tees
  Used as elbows
    Standard, threaded                                     500      0.70
    Long radius, threaded                                  800      0.40
    Standard, flanged/welded                               800      0.80
    Stub-in branch                                        1000      1.00
  Run-through
    Threaded                                               200      0.10
    Flanged/welded                                         150      0.50
    Stub-in branch                                         100      0.00
Valves
  Gate, plug, or ball
    Full line size, β = 1.0                                300      0.10
    Reduced trim, β = 0.9                                  500      0.15
    Reduced trim, β = 0.8                                 1000      0.25
  Globe
    Standard                                              1500      4.00
    Angle or Y-type                                       1000      2.00
  Diaphragm, dam type                                     1000      2.00
  Butterfly                                                800      0.25
  Check
    Lift                                                  2000     10.00
    Swing                                                 1500      1.50
    Tilting disk                                          1000      0.50

*From William B. Hooper, Chemical Engineering, August 24, 1981, p. 97.
TABLE 2.5. Roughness Factor ε for Clean Pipes*

Pipe material         ε, mm
Riveted steel         1-10
Concrete              0.3-3
Cast iron             0.26
Galvanized iron       0.15
Commercial steel      0.046
Wrought iron          0.046
Drawn tubing          0.0015
Glass                 0.0
Plastic               0.0

*Selected from Levenspiel (1984, p. 22).
For pipe flow, the mass flux through the pipe is constant and, for pipes of constant
cross-sectional area, the liquid velocity is constant along the pipe as well. In all cases,
frictional losses occur due to the fluid flow. If the flow is considered frictionless and
there is no shaft work, the resulting equation is called the Bernoulli equation,
ΔP/ρ + (g/gc)(z2 − z1) + (v2² − v1²)/2gc = 0      (2.1.11)
If the balance is performed across two points on the pipe of constant cross section,
then v2 = v1 and Eq. (2.1.11) can be simplified further.
Discharges of pure (i.e. nonflashing) liquids through a sharp-edged orifice are well
described by the classical work of Bernoulli and Torricelli (Perry and Green, 1984).
The model is developed from the mechanical energy balance, Eq. (2.1.1), by assuming
that the frictional loss term is represented by a discharge coefficient, CD (Crowl and
Louvar, 1990). The result is
ṁ = A CD √( 2ρ gc (P1 − P2) )      (2.1.12)
where
ṁ is the liquid discharge rate (mass/time)
A is the area of the hole (length²)
CD is the discharge coefficient (dimensionless)
p is the density of the fluid (mass/volume)
gc is the gravitational constant (force/mass-acceleration)
P1 is the pressure upstream of the hole (force/area)
P2 is the pressure downstream of the hole (force/area)
The following guidelines are suggested for the discharge coefficient, CD (Lees,
1986):
1. For sharp-edged orifices and for Reynolds numbers greater than 30,000, the
discharge coefficient approaches the value 0.61. For these conditions the exit
velocity is independent of the hole size.
2. For a well-rounded nozzle the discharge coefficient approaches unity.
3. For short sections of pipe attached to a vessel with a length-diameter ratio not
less than 3, the discharge coefficient is approximately 0.81.
4. For cases where the discharge coefficient is unknown or uncertain, use a value
of 1.0 to maximize the computed flows to achieve a conservative result.
Equation (2.1.12) can be used to model any discharge of liquid through a hole,
provided that the pressures, hole area, and discharge coefficient are known or estimated. For holes in tanks, the pressure upstream of the hole depends on the liquid head
and any pressure in the tank head space.
The 2-K method presented earlier is a much more general approach and can be
used to represent liquid discharge through holes, in place of Eq. (2.1.12). By applying
the orifice equations for the 2-K method (Hooper, 1981), the discharge coefficient can
be calculated directly. The result is

CD = 1/√(1 + ΣKf)      (2.1.13)

where ΣKf is the sum of all excess head loss terms, including entrances, exits, pipe
lengths, and fittings, provided by Eqs. (2.1.2), (2.1.4), and (2.1.6). For a simple hole in
a tank, with no pipe connections or fittings, the friction is caused only by the entrance
and exit effects of the hole. For Reynolds numbers greater than 10,000, Kf = 0.5 for
the entrance and Kf = 1.0 for the exit. Thus ΣKf = 1.5 and, from Eq. (2.1.13), CD =
0.63, which nearly matches the suggested value of 0.61.
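The relationship between ΣKf and CD in Eq. (2.1.13), combined with the orifice discharge rate of Eq. (2.1.12), can be sketched as follows. SI units are assumed throughout (so gc = 1), and the hole size and pressures in the example are illustrative only.

```python
import math

def discharge_coefficient(sum_kf):
    # Eq. (2.1.13): discharge coefficient from the total excess head loss
    return (1.0 + sum_kf) ** -0.5

def liquid_discharge_rate(area_m2, cd, rho, p1, p2):
    # Eq. (2.1.12) in SI units (gc = 1): mass flow in kg/s
    return area_m2 * cd * (2.0 * rho * (p1 - p2)) ** 0.5

# Simple hole in a tank: entrance (Kf = 0.5) plus exit (Kf = 1.0) only
cd = discharge_coefficient(0.5 + 1.0)       # ~0.63, per the text
# Illustrative case: 10-mm hole, water, 5 bar of driving pressure
area = math.pi * 0.010 ** 2 / 4.0
m_dot = liquid_discharge_rate(area, cd, 1000.0, 6.0e5, 1.0e5)
print(round(cd, 3), round(m_dot, 2))
```

With piping attached, ΣKf would instead be accumulated from Eqs. (2.1.3)-(2.1.6) before computing CD.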
The solution procedure to determine the mass flow rate of discharged material
from a piping system is as follows:
1. Given: Length, diameter, and type of pipe; pressures and elevation changes
across the piping system; work input or output to the fluid due to pumps, turbines, etc.; number and type of fittings in the pipe; properties of the fluid,
including density and viscosity.
2. Specify the initial point (point 1) and the final point (point 2). This must be
done carefully since the individual terms in Eq. (2.1.10) are highly dependent
on this specification.
3. Determine the pressures and elevations at points 1 and 2. Determine the initial
fluid velocity at point 1.
4. Guess a value for the velocity at point 2. If fully developed turbulent flow is
expected, then this is not required.
5. Determine the friction factor for the pipe using either Eq. (2.1.7) or Eq.
(2.1.9).
6. Determine the excess head loss terms for the pipe, using Eq. (2.1.3), and the fittings, using Eq. (2.1.4). Sum the head loss terms and compute the net frictional
loss term using Eq. (2.1.2). Use the velocity at point 2.
7. Compute values for all of the terms in Eq. (2.1.10) and substitute into the
equation. If the sum of all the terms in Eq. (2.1.10) is zero, then the computation is completed. If not, go back to step 4 and repeat the calculation.
8. Determine the mass flow rate using the equation m = pvA.
If fully developed turbulent flow is expected, the solution is direct. Substitute the
known terms into Eq. (2.1.10), leaving the velocity at point 2 as a variable. Solve for
the velocity directly.
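For the fully turbulent case, steps 4 through 8 collapse to this direct solution of Eq. (2.1.10) for the velocity. The sketch below assumes SI units (gc = 1), a discharge from a large vessel (v1 ≈ 0), no shaft work, and a precomputed, velocity-independent ΣKf; the scenario numbers are illustrative only.

```python
def pipe_discharge_velocity(dp, rho, dz, sum_kf, g=9.81):
    """Solve Eq. (2.1.10) directly for v2 (m/s), fully turbulent flow.
    dp = P2 - P1 (Pa), dz = z2 - z1 (m), v1 ~ 0, no shaft work.
    The frictional loss term (Eq. 2.1.2) is sum_kf * v2**2 / 2."""
    # dp/rho + g*dz + v2**2/2 + sum_kf*v2**2/2 = 0
    driving = -(dp / rho + g * dz)      # available specific energy, J/kg
    if driving <= 0.0:
        return 0.0                      # no flow in this direction
    return (2.0 * driving / (1.0 + sum_kf)) ** 0.5

# Water, 2 bar overpressure, 3 m elevation drop, total sum_kf = 4.2:
v2 = pipe_discharge_velocity(-2.0e5, 1000.0, -3.0, 4.2)
print(round(v2, 2))  # m/s
```

The mass flow rate then follows from ṁ = ρ v2 A, as in step 8; when laminar or transitional flow is possible, the full iterative procedure above must be used instead.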
For holes in tanks, the discharge of material through the hole results in a loss of
liquid and a lowering in the liquid level. For this case, Eq. (2.1.12) is coupled with a
total mass balance on the liquid in the tank to obtain a general expression for the tank
drainage time (Crowl, 1992):

t = (1/(A CD)) ∫ from V1 to V2 of dV/√( 2g h(V) )      (2.1.14)
where
t is the time to drain the tank from volume V2 to volume V1 (time)
V is the liquid volume in the tank above the leak (length3)
h is the height of the liquid above the leak (length)
Eq. (2.1.14) assumes a constant leak area, A, and a constant discharge coefficient,
CD. This equation can be integrated once the volume versus height function, V = V(h), is
specified. Results are available for a number of geometries (Crowl, 1992). Eq.
(2.1.14) can also be integrated numerically if volume versus height data are available.
The mass discharge rate of liquid from a hole in a tank is determined using the following equation (Crowl and Louvar, 1990). This assumes that friction is represented
by a discharge coefficient, CD, and accounts for the pressure due to the liquid head
above the hole.
ṁ = ρvA = ρ A CD √( 2(gc Pg/ρ + g hL) )      (2.1.15)
where
ṁ is the mass discharge rate (mass/time)
v is the fluid velocity (length/time)
A is the area of the hole (length²)
CD is the discharge coefficient (dimensionless)
gc is the gravitational constant (force/mass acceleration)
Pg is the gauge pressure at the top of the tank (force/area)
p is the liquid density (mass/volume)
g is the acceleration due to gravity (length/time2)
hL is the height of liquid above the hole (length)
Equation (2.1.15) applies to a tank of any geometry. The mass discharge rate
decreases with time as the liquid level drops; the maximum discharge rate occurs
when the leak first develops.
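Equations (2.1.14) and (2.1.15) can be combined numerically for a specific geometry. The sketch below assumes SI units (gc = 1) and a vertical cylindrical tank open to the atmosphere, for which the drainage-time integral also has a closed form, (At/(CD A))√(2h0/g), that can be used as a check; all dimensions are illustrative.

```python
import math

def tank_discharge_rate(cd, area_hole, rho, p_gauge, h_liq, g=9.81):
    # Eq. (2.1.15), SI units: mass flow (kg/s) from a hole below the level
    return rho * area_hole * cd * (2.0 * (p_gauge / rho + g * h_liq)) ** 0.5

def drain_time_cylinder(cd, area_hole, area_tank, h0, g=9.81, steps=20000):
    # Eq. (2.1.14) integrated numerically for a vertical cylinder at
    # atmospheric pressure (head alone drives the flow): dV = area_tank * dh
    dh = h0 / steps
    t = 0.0
    for i in range(steps):
        h = (i + 0.5) * dh              # midpoint rule
        t += area_tank * dh / (area_hole * cd * (2.0 * g * h) ** 0.5)
    return t

a_hole = math.pi * 0.025 ** 2 / 4.0     # 25-mm hole
a_tank = math.pi * 2.0 ** 2 / 4.0       # 2-m diameter tank
t = drain_time_cylinder(0.61, a_hole, a_tank, h0=3.0)
print(round(t))  # seconds, close to the analytic value
```

For non-cylindrical tanks, only the V(h) relationship inside the loop changes, which is the practical appeal of the numerical form of Eq. (2.1.14).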
Gas Discharges. Gas discharges may arise from several sources: from a hole at or near
a vessel, from a long pipeline, or from relief valves or process vents. Different calculation procedures apply for each of these sources. The majority of gas discharges from
process plant leaks will initially be sonic, or choked. Rate equations for sonic and subsonic discharges through an orifice are given in AIChE/CCPS (1987a, 1995a), API
(1996), Crane Co. (1986), Crowl and Louvar (1990), Fthenakis (1993), and Perry
and Green (1984).
For gas discharges, as the pressure drops through the discharge, the gas expands.
Thus, the pressure integral in the mechanical energy balance, Eq. (2.1.1), requires an
equation of state and a thermodynamic path specification to complete the integration.
For gas discharges through holes, Eq. (2.1.1) is integrated along an isentropic path
to determine the mass discharge rate. This equation assumes an ideal gas, no heat transfer and no external shaft work. Refer to Table 2.3 for a summary of these assumptions.
ṁ = CD A P1 √{ (2gc M/(Rg T1)) [k/(k − 1)] [ (P2/P1)^(2/k) − (P2/P1)^((k+1)/k) ] }      (2.1.16)
where
ṁ is the mass flow rate of gas through the hole (mass/time)
CD is the discharge coefficient (dimensionless)
A is the area of the hole (length2)
P1 is the pressure upstream of the hole (force/area)
gc is the gravitational constant (force/mass-acceleration)
M is the molecular weight of the gas (mass/mole)
k is the heat capacity ratio, Cp/Cv (unitless)
Rg is the ideal gas constant (pressure-volume/mole-deg)
T1 is the initial upstream temperature of the gas (deg)
P2 is the downstream pressure (force/area)
As the downstream pressure P2 decreases (or the upstream pressure P1 increases), a
maximum is found in Eq. (2.1.16). This maximum occurs when the velocity of the discharging gas reaches the sonic velocity. At this point, the flow becomes independent of
the downstream pressure and is dependent only on the upstream pressure. The equation representing the sonic, or choked, case is

ṁchoked = CD A P1 √( (k gc M/(Rg T1)) (2/(k + 1))^((k+1)/(k−1)) )      (2.1.17)

The pressure ratio required to achieve choking is given by

Pchoked/P1 = (2/(k + 1))^(k/(k−1))      (2.1.18)
Equation (2.1.18) demonstrates that choking conditions are readily produced: an
upstream pressure greater than 13.1 psig for an ideal gas is adequate to produce
choked flow for a gas escaping to atmospheric pressure. For real gases, a pressure of 20 psig is
typically used.
Equations (2.1.16) and (2.1.17) require the specification of a discharge coefficient, CD. Values are provided in Perry and Green (1984) for square-edged, circular
orifices. For these types of discharges with NRe > 30,000, a value of 0.61 is suggested.
API (1996) recommends a discharge coefficient of 0.6 for default screening purposes,
along with a circular hole. For a conservative estimate with maximum flow, use a value
of 1.0.
Equations (2.1.16) and (2.1.17) also require a value of k, the heat capacity
ratio. Table 2.6 provides selected values. For monatomic ideal gases, k = 1.67; for
diatomic gases, k = 1.4; and for triatomic gases, k = 1.32. API (1996) recommends a
value of 1.4 for screening purposes.
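Equations (2.1.16) through (2.1.18) are straightforward to implement. The hedged sketch below uses SI units (gc = 1, Rg = 8.314 J/mol-K); the hole size and conditions in the example are illustrative only.

```python
import math

R_GAS = 8.314  # ideal gas constant, J/(mol K)

def choked_pressure_ratio(k):
    # Eq. (2.1.18): P_choked / P1
    return (2.0 / (k + 1.0)) ** (k / (k - 1.0))

def gas_discharge_rate(cd, area, p1, t1, mw, k, p2):
    """Mass flow (kg/s) through a hole, Eqs. (2.1.16)-(2.1.17), SI units.
    mw in kg/mol; switches to the choked form when P2/P1 is low enough."""
    if p2 / p1 <= choked_pressure_ratio(k):
        # Eq. (2.1.17): sonic (choked) flow
        term = k * mw / (R_GAS * t1) * (2.0 / (k + 1.0)) ** ((k + 1.0) / (k - 1.0))
    else:
        # Eq. (2.1.16): subsonic flow
        pr = p2 / p1
        term = (2.0 * mw / (R_GAS * t1) * k / (k - 1.0)
                * (pr ** (2.0 / k) - pr ** ((k + 1.0) / k)))
    return cd * area * p1 * math.sqrt(term)

# 10-mm hole, nitrogen (k = 1.4, M = 0.028 kg/mol), 10 bara to atmosphere:
area = math.pi * 0.010 ** 2 / 4.0
m_dot = gas_discharge_rate(0.61, area, 1.0e6, 298.0, 0.028, 1.4, 1.0e5)
print(round(m_dot, 3))  # kg/s; choked here, since P2/P1 = 0.1 < 0.528
```

A useful check on any implementation is continuity: the subsonic branch should return essentially the choked value when P2/P1 sits just above the choking ratio.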
For gas releases through pipes, the issue of whether the release occurs adiabatically
or isothermally is important. For both cases the velocity of the gas will increase due to
the expansion of the gas as the pressure decreases. For adiabatic flows the temperature
of the gas may increase or decrease, depending on the relative magnitude of the frictional and kinetic energy terms. For choked flows, the adiabatic choking pressure is less
than the isothermal choking pressure. For real pipe flows from a source at a fixed pressure and temperature, the actual flow rate will be less than the adiabatic prediction and
greater than the isothermal prediction. Crowl and Louvar (1990) show that for pipe
TABLE 2.6. Heat Capacity Ratios k for Selected Gases*

Name of gas            Chemical formula     Approximate            Heat capacity ratio
                       or symbol            molecular weight (M)   k = Cp/Cv
Acetylene              C2H2                 26.0                   1.30
Air                    —                    29.0                   1.40
Ammonia                NH3                  17.0                   1.32
Argon                  Ar                   39.9                   1.67
Butane                 C4H10                58.1                   1.11
Carbon dioxide         CO2                  44.0                   1.30
Carbon monoxide        CO                   28.0                   1.40
Chlorine               Cl2                  70.9                   1.33
Ethane                 C2H6                 30.0                   1.22
Ethylene               C2H4                 28.0                   1.22
Helium                 He                   4.0                    1.66
Hydrogen chloride      HCl                  36.5                   1.41
Hydrogen               H2                   2.0                    1.41
Hydrogen sulfide       H2S                  34.1                   1.30
Methane                CH4                  16.0                   1.32
Methyl chloride        CH3Cl                50.5                   1.20
Natural gas            —                    19.5                   1.27
Nitric oxide           NO                   30.0                   1.40
Nitrogen               N2                   28.0                   1.41
Nitrous oxide          N2O                  44.0                   1.31
Oxygen                 O2                   32.0                   1.40
Propane                C3H8                 44.1                   1.15
Propene (propylene)    C3H6                 42.1                   1.14
Sulfur dioxide         SO2                  64.1                   1.26

*From Crane (1986).
flow problems the difference between the adiabatic and isothermal results is generally
small. Levenspiel (1984) shows that the adiabatic model will always predict a flow
larger than the actual, provided that the source pressure and temperature are the same.
Crane (1986) reports that "when compressible fluids discharge from the end of a reasonably short pipe of uniform cross-sectional area into an area of larger cross section,
the flow is usually considered to be adiabatic." Crane (1986) supports this statement
with experimental data on pipes having lengths of 130 and 220 pipe diameters discharging air to the atmosphere. As a result, the adiabatic flow model is the model of
choice for compressible gas discharges through pipes.
For ideal gas flow, the mass flow for both sonic and nonsonic conditions is represented by the Darcy formula (Crane, 1986):

ṁ = Y A √( 2gc ρ1 (P1 − P2)/ΣKf )      (2.1.19)
where
ṁ is the mass flow rate of gas (mass/time)
Y is a gas expansion factor (unitless)
A is the area of the discharge (length²)
gc is the gravitational constant (force/mass-acceleration)
ρ1 is the upstream gas density (mass/volume)
P1 is the upstream gas pressure (force/area)
P2 is the downstream gas pressure (force/area)
ΣKf are the excess head loss terms, including pipe entrances and exits, pipe lengths,
and fittings (unitless)
The excess head loss terms, ΣKf, are found using the 2-K method presented earlier in
the section on liquid discharges. For most accidental discharges of gases, the flow is
fully developed turbulent flow. This means that, for pipes, the friction factor is independent of the Reynolds number and, for fittings, Kf = K∞, and the solution is direct.
The gas expansion factor, Y, in Eq. (2.1.19) is dependent only on the heat capacity
ratio of the gas, k, and the frictional elements in the flow path, ΣKf. Y is determined
using a complete adiabatic flow model (Crowl and Louvar, 1990) using the following
procedure. First, the upstream Mach number, Ma1, of the flow is determined from the
following equations:
[(k + 1)/2] ln[ 2Y1/((k + 1)Ma1²) ] − (1/Ma1² − 1) + kΣKf = 0      (2.1.20)

where Y1 = 1 + [(k − 1)/2] Ma1²
The solution is obtained by trial and error by guessing values of the upstream Mach
number, Ma1, and determining if the guessed value satisfies the equation. This
can be easily done using a spreadsheet.
The next step in the procedure is to determine the sonic pressure ratio. This is
found from
(P1 − P2)/P1 = 1 − Ma1 √( 2Y1/(k + 1) )      (2.1.21)
If the actual ratio is greater than this, then the flow is sonic or choked, and the pressure drop predicted by Eq. (2.1.21) is used to continue the calculation. If less, then the
flow is not sonic, and the actual pressure drop ratio is used.
Finally, the expansion factor, Y, is calculated from

Y = Ma1 √( kΣKf P1/[2(P1 − P2)] )      (2.1.22)
The above calculation to determine the expansion factor can be completed once k
and the frictional loss term, ΣKf, are specified. This computation can be done once and
for all, with the results shown in Figures 2.4 and 2.5. As shown in Figure 2.4, the pressure ratio (P1 − P2)/P1 is a weak function of the heat capacity ratio, k. The expansion
factor, Y, has little dependence on k, with the value of Y varying by less than 1% from
the value at k = 1.4 over the range from k = 1.2 to 1.67. Figure 2.5 shows the expansion factor for k = 1.4.
FIGURE 2.4. Sonic pressure drop for adiabatic pipe flow for various heat capacity ratios, k. All
regions above the curve represent sonic flow. [See Eqs. (2.1.20)-(2.1.22).]
FIGURE 2.5. The expansion factor Y for adiabatic pipe flow for k = 1.4, as defined by Eq.
(2.1.22).
The functional results of Figures 2.4 and 2.5 can be fit using an equation of the
form ln Y = A(ln ΣKf)³ + B(ln ΣKf)² + C(ln ΣKf) + D, where A, B, C, and D are constants.
The results are shown in Table 2.7 and are valid, within 1%, for the ΣKf range indicated.
The procedure to determine the adiabatic mass flow rate through a pipe or hole is
as follows:
1. Given: k based on the type of gas; pipe length, diameter and type; pipe
entrances and exits; total number and type of fittings; total pressure drop;
upstream gas density.
2. Assume fully developed turbulent flow to determine the friction factor for the
pipe and the excess head loss terms for the fittings, pipe entrances, and exits. The
Reynolds number can be calculated at the completion of the calculation to
check this assumption. Sum the individual excess head loss terms to get ΣKf.
3. Calculate (P1 − P2)/P1 from the specified pressure drop. Check this value against
Figure 2.4 to determine if the flow is sonic. All areas above the curves in Figure
2.4 represent sonic flow. Determine the sonic choking pressure, P2, by either
using Figure 2.4 directly, interpolating a value from it, or using the
equations provided in Table 2.7.
4. Determine the expansion factor from Figure 2.5. Either read the value off of the
figure, interpolate from the table, or use the equation provided in Table 2.7.
5. Calculate the mass flow rate using Eq. (2.1.19). Use the sonic choking pressure
determined in step 3 in this expression.
The above method is applicable to gas discharges through piping systems as well as
holes.
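The five-step procedure can be scripted directly, replacing the figure lookup with a numerical solution of Eqs. (2.1.20)-(2.1.22). The bisection solver below is a sketch (SI units, gc = 1); it returns the choked-flow quantities, so the actual pressure-drop ratio should be compared against the sonic value as in step 3.

```python
import math

def adiabatic_pipe_Y(k, sum_kf):
    """Solve Eq. (2.1.20) for the upstream Mach number Ma1, then return
    (Ma1, sonic (P1 - P2)/P1 from Eq. (2.1.21), Y from Eq. (2.1.22))."""
    def resid(ma):
        y1 = 1.0 + 0.5 * (k - 1.0) * ma * ma
        return (0.5 * (k + 1.0) * math.log(2.0 * y1 / ((k + 1.0) * ma * ma))
                - (1.0 / (ma * ma) - 1.0) + k * sum_kf)
    lo, hi = 1e-6, 1.0                 # resid < 0 at lo, > 0 at hi
    for _ in range(80):                # bisection on the subsonic branch
        mid = 0.5 * (lo + hi)
        if resid(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    ma1 = 0.5 * (lo + hi)
    y1 = 1.0 + 0.5 * (k - 1.0) * ma1 * ma1
    dp_ratio = 1.0 - ma1 * math.sqrt(2.0 * y1 / (k + 1.0))   # Eq. (2.1.21)
    y = ma1 * math.sqrt(k * sum_kf / (2.0 * dp_ratio))       # Eq. (2.1.22)
    return ma1, dp_ratio, y

ma1, dp_ratio, y = adiabatic_pipe_Y(k=1.4, sum_kf=2.0)
print(round(ma1, 3), round(dp_ratio, 3), round(y, 3))
```

For k = 1.4 and ΣKf = 2, the result agrees with the Table 2.7 correlations (Y of roughly 0.63, sonic pressure-drop ratio of roughly 0.61), which is a convenient check on an implementation.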
Two-Phase Discharge. The significance of two-phase flow through restrictions
and piping has been recognized for some time (Benjamin and Miller, 1941). Beginning
in the mid-1970s AIChE/DIERS has studied two-phase flow during runaway reaction
venting. DIERS researchers have emphasized that this two-phase flow usually requires
a larger relief area compared to all-vapor venting (Fauske et al., 1986). Leung (1986)
provides comparisons of these areas over a range of overpressure. Research supported
by the nuclear industries has contributed much to our understanding of two-phase
TABLE 2.7. Correlations for the Expansion Factor Y and the Sonic Pressure Drop
Ratio (P1 − P2)/P1 as a Function of the Excess Head Loss ΣKf. The correlations are
within 1% of the actual value within the range specified.

ln y = A(ln ΣKf)³ + B(ln ΣKf)² + C(ln ΣKf) + D

Function value, y                       A        B         C        D        Range of ΣKf
Expansion factor, Y                     0.0006   −0.0185   0.1141   −0.5304  0.1-100
Sonic pressure drop ratio, k = 1.2      0.0009   −0.0308   0.261    −0.7248  0.1-100
Sonic pressure drop ratio, k = 1.4      0.0011   −0.0302   0.238    −0.6455  0.1-300
Sonic pressure drop ratio, k = 1.67     0.0013   −0.0287   0.213    −0.5633  0.1-300
flow, as have a large number of studies undertaken by universities and other independent organizations.
When released to atmospheric pressure, any pressurized liquid above its normal
boiling point will start to flash and two-phase flow will result. Two-phase flow is also
likely to occur from depressurization of the vapor space above a volatile liquid, especially if the liquid is viscous (e.g., greater than 500 cP) or has a tendency to foam.
It should be noted that the two-phase models presented below predict minimum
mass fluxes and maximum pressure drops, consistent with conservative relief system
design (Fauske, 1985). Thus, they may not be suitable for source modeling. The orifice
discharge equation, Eq. (2.1.12), will always predict a maximum discharge flux. This
result is shown in Figure 2.6, which shows the mass flux as a function of upstream pressure for identical diameter pipes of varying lengths. Note that the two-phase model
predicts a minimal result, while the orifice discharge equation predicts a maximum
result.
Two-phase flows are classified as either reactive or nonreactive. The reactive case is
typical of emergency reliefs of exothermic chemical reactions. This case is considered
later.
The nonreactive case involves the flashing of liquids as they are discharged from
containment. Two special considerations are required. If the liquid is subcooled, the
discharge flow will choke at its saturation vapor pressure at ambient temperature. If the
liquid is stored under its own vapor pressure, a more detailed analysis is required. Both
of these situations are accounted for by the following expression (Fauske and Epstein,
1987):
ṁ = A √( GSUB² + GERM²/N )      (2.1.23)
FIGURE 2.6. The mass flux from a flashing two-phase flow as a function of the upstream
pressure. The data are plotted for various pipe lengths. The orifice equation predicts a
maximum flow, while the two-phase model predicts a minimum flow. (Data from Fauske,
1985.)
where
ṁ is the two-phase mass discharge rate (mass/time)
A is the area of the discharge (length²)
GSUB is the subcooled mass flux (mass/area-time)
GERM is the equilibrium mass flux (mass/area-time)
N is a nonequilibrium parameter (dimensionless)
The properties are evaluated at the storage temperature and pressure. The
subcooled mass flux is given by
GSUB = CD √( 2ρf gc (P − Psat) )      (2.1.24)
(2.1.24)
where
CD is the discharge coefficient (unitless)
ρf is the density of the liquid (mass/volume)
gc is the gravitational constant (force/mass-acceleration)
P is the storage pressure (force/area)
Psat is the saturation vapor pressure of the liquid at ambient temperature
(force/area)
For saturated liquids, equilibrium is reached if the discharge pipe size is greater
than 0.1 m (length greater than 10 diameters) and the equilibrium mass flux is given by
(Crowl and Louvar, 1990)

GERM = (hfg/vfg) √( gc/(Cp T) )      (2.1.25)
where
hfg is the enthalpy change on vaporization (energy/mass)
vfg is the change in specific volume between liquid and vapor (volume/mass)
T is the storage temperature (absolute degrees)
Cp is the liquid heat capacity (energy/mass deg)
with the properties evaluated at the storage temperature and pressure. Note that the
temperature must be in absolute degrees and is not associated with the heat capacity.
The nonequilibrium parameter, N, accounts for the effect of the discharge distance. For
short discharge distances, a nonequilibrium situation occurs and the liquid does not
have time to flash during the discharge process—the flashing occurs after discharge and
the liquid discharge is represented by Eq. (2.1.12). For discharge distances greater
than 0.1 m, the liquid reaches an equilibrium state and chokes at its saturation vapor
pressure. A relationship for N, the nonequilibrium parameter, is given by (Fauske and
Epstein, 1987)
N = hfg²/(2ΔP ρf gc vfg² T Cp) + L/Lc,   for 0 ≤ L ≤ Lc      (2.1.26)

where ΔP is the total available pressure drop (force/area), L is the pipe length to the
opening (length), and Lc is the distance to equilibrium conditions, usually 0.1 m.
For L = O, Eqs. (2.1.23) and (2.1.26) reduce to Eq. (2.1.12), describing liquid
discharge through a hole.
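Equations (2.1.23) through (2.1.26) combine into a short calculation. The sketch below assumes SI units (gc = 1); the function name is our own, and the water property values in the example only approximate saturated water at 180°C (about 1 MPa) for illustration.

```python
def two_phase_mass_flux(p, p_sat, p_amb, rho_f, h_fg, v_fg, t_abs, cp,
                        pipe_len, cd=0.61, l_c=0.1):
    """Two-phase mass flux (kg/m2-s) per Eqs. (2.1.23)-(2.1.26), SI units
    (gc = 1). p must exceed p_amb; properties at storage conditions."""
    g_sub = cd * (2.0 * rho_f * max(p - p_sat, 0.0)) ** 0.5    # Eq. (2.1.24)
    g_erm = (h_fg / v_fg) * (1.0 / (cp * t_abs)) ** 0.5        # Eq. (2.1.25)
    dp = p - p_amb                                             # available drop
    n = (h_fg ** 2 / (2.0 * dp * rho_f * v_fg ** 2 * t_abs * cp)
         + min(pipe_len, l_c) / l_c)                           # Eq. (2.1.26)
    return (g_sub ** 2 + g_erm ** 2 / n) ** 0.5                # Eq. (2.1.23)

# Illustrative: saturated water stored at ~1 MPa discharging through a
# pipe longer than Lc = 0.1 m, so equilibrium flashing flow develops:
g = two_phase_mass_flux(p=1.0e6, p_sat=1.0e6, p_amb=1.013e5, rho_f=887.0,
                        h_fg=2.015e6, v_fg=0.193, t_abs=453.0, cp=4400.0,
                        pipe_len=0.5)
print(round(g))  # kg/m2-s; ERM-dominated since the liquid is saturated
```

Multiplying the flux by the discharge area gives ṁ; note again that this is a minimum-flux model, appropriate for relief sizing but not conservative for consequence analysis.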
There are many equally valid techniques for estimating two-phase flow rates. The
nuclear industry has undertaken substantial analysis of critical two-phase flow of
steam-water mixtures. The Nuclear Energy Agency (1982) has published a review of
four models and summarized available experimental data. Klein (1986) reviews the
one-dimensional DIERS model for the design of relief systems for two-phase flashing
flow. Three-dimensional models are also available, although little published information on their use exists. Additional complexity does not guarantee improved accuracy and can unnecessarily complicate the task of risk analysis.
For reactive two-phase discharges, the discharge is driven by the energy created
within the fluid due to the exothermic reaction, and the relief analysis is highly coupled
to the energy balance of the reactor. This case is discussed in detail by Fisher et al.
(1992), Fthenakis (1993), and Boicourt (1995).
Two-phase relief design is based on the equation A = ṁ/G, where ṁ is the mass
flow rate and G is the mass flux. To ensure a conservatively designed relief device, i.e.,
a relief area larger than required, the mass flux or relief discharge model is selected to
relief area larger than required, the mass flux or relief discharge model is selected to
minimize the mass flux through the relief. A discharge model predicting a smaller mass
flux through the relief will ensure a larger relief area and hence a conservative design.
For consequence modeling, the discharge models must be selected to maximize the
mass flux. Therefore, the relief mass flux models should not be used as the basis for a
consequence model since the conservatism is in the wrong direction.
The mass flow rate through the relief is estimated using an energy balance on the
reactor vessel. The assumption of a tempered system is made for this analysis. A tempered reactor assumes (1) no external heat losses from the vessel, and (2) that the vessel
contains a volatile solvent with the resulting pressure build-up due to the vapor pressure of the solvent as a result of the increasing system temperature from the exothermic
reaction. The result is conservative due to the assumption of no heat losses, but not
overly conservative for fast runaway reactions.
For a tempered reaction system, the heat generated by the reaction is equated to
the sensible heat change of the reacting liquid mass as its temperature increases and
the heat loss due to the evolution of volatile solvent. The result is (Boicourt, 1995)
Q = m Cv (dT/dt) + (ṁ V hfg)/(m vfg)    (2.1.27)

where
Q is the heat generation rate by reaction (energy/time)
m is the mass within the reactor vessel (mass)
Cv is the heat capacity at constant volume (energy/mass deg)
T is the absolute temperature of the reacting material (degrees)
t is the time (time)
ṁ is the mass flow rate through the relief (mass/time)
V is the reactor vessel volume (length3)
hfg is the enthalpy difference between liquid and vapor (energy/mass)
vfg is the specific volume difference between the liquid and vapor (volume/mass)
The closed form solution to Eq. (2.1.27) is (Leung, 1986b)

ṁ = q m0 / [√(V hfg/(m0 vfg)) + √(Cv ΔT)]²    (2.1.28)

where q is the average heat release rate per unit mass (energy/mass time), m0 is the initial reaction mass (mass), and ΔT = Tm − Ts is the temperature rise from the set conditions to the turnaround conditions (degrees).
The assumptions inherent in Eq. (2.1.28) are (Boicourt, 1995):

1. Homogeneous conditions in the reactor.
2. Constant physical properties.
3. Cp = Cv, and Cp is the heat capacity of the liquid.
4. Vapor phase incompressibility; that is, vfg is constant during the overpressure.
5. The average heat release rate, q, during the overpressure is approximated by

   q = (Cp/2)[(dT/dt)s + (dT/dt)m]    (2.1.29)

   where the subscripts s and m refer to the set conditions and turnaround conditions, respectively. The set conditions refer to conditions at the set pressure and the turnaround conditions refer to the conditions at the maximum pressure during the relieving process.
6. ṁ is constant during the overpressure.
7. Single component system.
Expressions are also available for reactive systems with a variety of vent lines and
relief configurations (Boicourt, 1995).
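The Leung closed form, Eqs. (2.1.28) and (2.1.29), can be sketched in a few lines of Python. All numbers below are hypothetical (a made-up tempered reactor), chosen only to exercise the formula; Cp is taken equal to Cv per assumption 3.

```python
import math

def leung_vent_rate(m0, V, h_fg, v_fg, cv, dTdt_set, dTdt_turn, dT):
    """Relief mass flow for a tempered runaway, Eq. (2.1.28), with the
    average heat release rate per unit mass from Eq. (2.1.29). SI units."""
    q = 0.5 * cv * (dTdt_set + dTdt_turn)        # Eq. (2.1.29), Cp = Cv
    denom = (math.sqrt(V * h_fg / (m0 * v_fg)) + math.sqrt(cv * dT)) ** 2
    return q * m0 / denom                        # Eq. (2.1.28)

# Hypothetical 10,000-kg charge in a 10 m**3 vessel, 40 K overtemperature,
# self-heat rates of 0.2 and 0.4 K/s at set and turnaround conditions:
mdot = leung_vent_rate(m0=1.0e4, V=10.0, h_fg=3.0e5, v_fg=0.1,
                       cv=2500.0, dTdt_set=0.2, dTdt_turn=0.4, dT=40.0)
```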
Discharge from vessels exposed to fire. Where discharge is from a relief due to fire
exposure in a nonreacting system, a long established empirical method for estimating
relief rates is that given in the National Fire Protection Association Codes (NFPA,
1987a, b) or the American Petroleum Institute Recommended Practice (API, 1976,
1982). A key assumption of these methods is gas only flow. Crozier (1985) provides a
summary of the relevant formulas. Certain relief situations (e.g., reacting systems) can
give rise to two-phase discharges that require a greater relief area for the same vessel
protection than would be calculated assuming gas-only discharge (Fauske et al., 1986; Leung, 1986a).
The recent work by AIChE/DIERS provides guidance for vessels subject to runaway reaction or external fire (Fauske et al., 1986). Birk and Dibble (1986) provide a
mechanistic, transient discharge model for simulating release rates from pressure vessels exposed to external fire.
NFPA 30 (NFPA, 1987a) recommends four heat flux values through the tank wall
based on the wetted surface area for nonpressurized tanks. For LPG (pressurized
tanks) considered in NFPA 58 (NFPA, 1987b) the heat flux is based on the total tank
surface area rather than the wetted surface area although little heat transfer occurs
through the nonwetted portion. Experience has indicated that this approach is satisfactory for LPG. However, metal only in contact with vapor may heat rapidly under external fire conditions and lose its strength leading to a BLEVE as pressure builds. Further,
in the United States, most LPG installations follow the rules stated in NFPA 58, which
are adopted by many regulatory jurisdictions. NFPA 58 basically covers LPG of molecular weight between 30 and 58. NFPA 58 requirements are based on the following
equations (implicit in its Appendix D) for predicting heat flux:
Qf = 34,500FA^0.82    (2.1.30)

where Qf is the heat input through the vessel wall due to fire exposure (BTU/hr), A is
the total surface area of the vessel (ft²), and F is the environment factor (dimensionless).
The area, A, in this equation is the entire surface area of the vessel, not the wetted surface area that is used in related equations. However, the error introduced by this difference in the calculation for a full tank is small.
For water spray protection over the entire surface of the tank (designed according
to NFPA 15 (1985) with a density of 0.25 gpm/ft2 or more), F = 0.3.
For an approved fire-resistant installation, F = 0.3. For an underground or buried
tank, F = 0.3 (from NFPA 58, 1987b, Appendix D-2.3.1).
For water spray with good drainage F= 0.15.
The values for F above are not multiplicative if combined protection systems are in
place.
The gas discharge rate from the relief valve, m, is then calculated by equating the
energy input rate to the rate of energy removal due to vaporization. This results in the
following equation:
ṁ = Qf/hfg    (2.1.31)

where ṁ is the gas discharge rate from the relief valve (mass/time) and hfg is the latent heat
of vaporization at relief pressure (energy/mass).
A detailed discussion of the formulas used in NFPA Codes can be found in Appendix B of the Flammable and Combustible Liquids Code Handbook (NFPA, 1987a).
API RP520 (API, 1976) recommends a similar formula applicable to pressurized storage of liquids at or near their boiling point where the liquids have a higher molecular
weight than that of butane.
All of the recommended heat flux equations in API 520 and NFPA Codes that are
used to design relief valves assume that the liquids are not self-reactive or subject to
runaway reaction. If this situation arises, it will be necessary to take the heat of reaction
and the rate of the reaction into account in sizing the relief device.
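The NFPA-style sizing of Eqs. (2.1.30) and (2.1.31) reduces to a few lines. This sketch keeps the mixed units of the code formula (area in ft², heat input in Btu/hr) and assumes gas-only flow; the sample numbers reproduce Example 2.7 later in this section.

```python
def fire_relief_rate(area_m2, F, h_fg_kj_per_kg):
    """Gas venting rate for a vessel exposed to external fire:
    Eq. (2.1.30) for the heat input, Eq. (2.1.31) for vaporization."""
    area_ft2 = area_m2 * 10.7639                  # m**2 -> ft**2
    Q_btu_hr = 34500.0 * F * area_ft2 ** 0.82     # Eq. (2.1.30), Btu/hr
    Q_kj_s = Q_btu_hr * 2.93e-4                   # 1 Btu/hr = 2.93e-4 kJ/s
    return Q_kj_s / h_fg_kj_per_kg                # Eq. (2.1.31), kg/s

# Uninsulated 5 m**2 propane tank, F = 1.0, h_fg = 333 kJ/kg:
mdot = fire_relief_rate(5.0, 1.0, 333.0)
```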
2.1.1.3. EXAMPLE PROBLEMS
Example 2.1: Liquid Discharge through a Hole. Calculate the discharge rate of a
liquid through a 10-mm hole, if the tank head space is pressurized to 0.1 barg. Assume
a 2-m liquid head above the hole.
Data: Liquid density = 490 kg/m3
Solution: For liquid discharges, Eq. (2.1.10) applies. The 2-K method will be used to
determine the frictional components.
A diagram of the process is shown in Figure 2.7. Points 1 and 2 denote the initial
and final reference points. For this case there are no pumps or compressors, so Ws = 0.
Also, at point 1, v1 = 0.
FIGURE 2.7. Example 2.1: Liquid discharge through a hole.
Applying these assumptions, Eq. (2.1.10) reduces to

gc(P2 − P1)/ρ + g(z2 − z1) + v2²/2 + (v2²/2)ΣKf = 0
Assume NRe > 10,000. Then the excess head loss factor for the fluid entering the hole is
Kf = 0.5. For the exit, Kf = 1.0. Thus ΣKf = 1.5, and the frictional loss term follows
from Eq. (2.1.2). Also, P1 = 0.10 bar gauge and P2 = 0 bar gauge.
The hole area is

A = πD²/4 = (3.14)(10 × 10⁻³ m)²/4 = 7.85 × 10⁻⁵ m²
The terms in the above equation are as follows:
gc(P2 − P1)/ρ = (0 − 0.10 bar)(100,000 Pa/bar)/(490 kg/m³) = −20.4 m²/s²

g(z2 − z1) = (9.8 m/s²)(0 m − 2 m) = −19.6 m²/s²
Substituting the terms into Eq. (2.1.10)

−20.4 − 19.6 + v2²/2 + (1.5/2)v2² = 0
Solving gives v2 = 5.7 m/s. Then

ṁ = ρv2A = (490 kg/m³)(5.7 m/s)(7.85 × 10⁻⁵ m²) = 0.22 kg/s
This is the maximum discharge rate for this hole. The discharge rate will decrease with
time as the liquid head above the hole is decreased. Also, the maximum discharge rate
would occur if the hole were located at the bottom of the tank.
The solution is readily implemented using a spreadsheet, as shown in Figure 2.8.
Example 2.1: Liquid Discharge through a Hole in a Tank

Input Data:
  Tank pressure above liquid:   0.1 barg
  Pressure outside hole:        0 barg
  Liquid density:               490 kg/m**3
  Liquid level above hole:      2 m
  Hole diameter:                10 mm

Excess Head Loss Factors:
  Entrance: 0.5    Exit: 1    Others: 0    TOTAL: 1.5

Calculated Results:
  Hole area:             7.9E-05 m**2
  Pressure term:         -20.4082 m**2/s**2
  Height term:           -19.6 m**2/s**2
  Velocity coefficient:  1.25
  Exit velocity:         5.7 m/s
  Mass flow:             0.22 kg/s

FIGURE 2.8. Spreadsheet output for Example 2.1: Liquid discharge through a hole in the tank.
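The same calculation can be scripted rather than spreadsheeted. This is a minimal sketch of Example 2.1, with the entrance and exit loss factors hard-wired as defaults.

```python
import math

def hole_discharge(P_gauge, rho, head, d_hole, sum_Kf=1.5, g=9.8):
    """Liquid discharge through a hole: the reduced mechanical energy
    balance of Example 2.1 solved for the exit velocity. SI units."""
    area = math.pi * d_hole ** 2 / 4.0
    pressure_term = P_gauge / rho        # gc(P1 - P2)/rho, m**2/s**2
    height_term = g * head               # g * liquid head above the hole
    # (1 + sum_Kf) * v2**2 / 2 = pressure_term + height_term
    v2 = math.sqrt(2.0 * (pressure_term + height_term) / (1.0 + sum_Kf))
    return v2, rho * v2 * area           # exit velocity, mass discharge rate

v2, mdot = hole_discharge(P_gauge=0.1e5, rho=490.0, head=2.0, d_hole=0.010)
```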
Example 2.2: Liquid Trajectory from a Hole. Consider again Example 2.1. A
stream of liquid discharging from a hole in a tank will stream out of the tank and impact
the ground at some distance away from the tank. In some cases the liquid stream could
shoot over any diking designed to contain the liquid.
(a) If the hole is 3 m above the ground, how far will the stream of liquid shoot
away from the tank?
(b) At what point on the tank will the maximum discharge distance occur? What is
this distance?
Solution: (a) The geometry of the tank and the stream is shown in Figure 2.9. The
distance away from the tank the liquid stream will impact the ground is given by
s = v2t    (2.1.32)

FIGURE 2.9. Tank geometry for Example 2.2.
Example 2.2a: Liquid Trajectory from a Hole

Input Data:
  Liquid velocity at hole:       5.7 m/s
  Height of hole above ground:   3 m

Calculated Results:
  Time to reach ground:            0.78 s
  Horizontal distance from hole:   4.46 m

FIGURE 2.10. Spreadsheet output for Example 2.2a: Liquid trajectory from a hole.
where s is the distance (length), v2 is the discharge velocity (distance/time), and t is the
time (time).
The time, t, for the liquid to fall the distance h is given by simple acceleration due
to gravity,

t = √(2h/g)    (2.1.33)
These two equations are implemented in the spreadsheet shown in Figure 2.10.
The velocity is obtained from Example 2.1. The horizontal distance the stream will
impact the ground is 4.46 m away from the base of the tank.
Solution (b) The solution to this problem is found by solving Eq. (2.1.10) for V2.
The algebraic result is substituted into Eq. (2.1.32), along with Eq. (2.1.33). The
resulting equation for s is differentiated with respect to h. The expression is set to zero
to determine the maximum, and solved for h. The result is
h = ½(H + gcPg/(ρg))    (2.1.34)

where H is the total liquid height above ground level (length) and Pg is the gauge pressure in the tank head space (force/area).
Equations (2.1.33) and (2.1.34) are then substituted into Eq. (2.1.32) for s to
determine the maximum distance. The result is
s = (H + gcPg/(ρg))/√(1 + ΣKf)    (2.1.35)
If Pg = 0, i.e., the tank is vented to the atmosphere, then the maximum discharge
distance, from Eq. (2.1.34), occurs when the hole is located at h = H/2. As the tank
pressure increases, the location of the hole moves up and eventually reaches the top of
the liquid.
These equations are conveniently implemented using a spreadsheet, as shown in
Figure 2.11. For this case, the hole location for the maximum discharge conditions is at
3.54 m above the ground. The maximum discharge distance is 4.48 m.
This example demonstrates the important point that the incident is selected based
on the objective of the study. If the objective of the study is to determine the maximum
discharge rate from the tank, then a hole is specified at the bottom of the tank. If the
study objective is to determine the maximum discharge distance, then Eq. (2.1.34) is
used to place the location of the hole.
Example 2.2b: Maximum Discharge Distance from a Hole in a Tank

Input Data:
  Tank pressure above liquid:   0.1 barg
  Max. liquid height in tank:   5 m
  Density of liquid:            490 kg/m**3

Excess Head Loss Factors:
  Entrance: 0.5    Exit: 1    Others: 0    TOTAL: 1.5

Calculated Results:
  Hole height for max. distance:  3.54 m  <- Above ground
  Actual height:                  3.54 m  <- Cannot exceed liquid height
  Discharge distance:             4.48 m

FIGURE 2.11. Spreadsheet output for Example 2.2b: Liquid trajectory from a hole.
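Equations (2.1.34) and (2.1.35) are one-liners in code; this sketch reproduces the Example 2.2b numbers.

```python
import math

def max_trajectory(H, P_gauge, rho, sum_Kf=1.5, g=9.8):
    """Hole height giving the maximum discharge distance, Eq. (2.1.34),
    and that maximum distance, Eq. (2.1.35). SI units; g_c = 1."""
    head_equiv = P_gauge / (rho * g)     # gauge pressure as equivalent head
    h = 0.5 * (H + head_equiv)           # Eq. (2.1.34)
    h = min(h, H)                        # hole cannot sit above the liquid
    s = (H + head_equiv) / math.sqrt(1.0 + sum_Kf)   # Eq. (2.1.35)
    return h, s

h, s = max_trajectory(H=5.0, P_gauge=0.1e5, rho=490.0)
```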
Example 2.3: Liquid Discharge through a Piping System. Figure 2.12 shows a
transfer system between two tanks. The system is used to transfer a hazardous liquid.
The pipe is commercial steel pipe with an internal diameter of 100-mm with a total
length of 10 m. The piping system contains two standard, flanged 90° elbows and a
standard, full-line gate valve. A 3-kW pump with an efficiency of 70% assists with the
liquid transfer. The maximum fluid height in the supply tank is 3 m, and the elevation
change between the two tanks is as shown in Figure 2.12.
Data: Fluid density (ρ) = 1600 kg/m³
Fluid viscosity (μ) = 1.8 × 10⁻³ kg/m s
Solution: The postulated scenario is the detachment of the pipe at its connection
to the second tank. The objective of the calculation is to determine the maximum discharge rate of liquid from the pipe. Liquid would also discharge from the hole in the
tank previously connected to the pipe, but this is not considered in this calculation.

FIGURE 2.12. Example 2.3: Liquid discharge through a piping system.
The 2-K method, in conjunction with Eq. (2.1.10), will be used. A trial-and-error
solution method is required, as discussed in the section on liquid discharges. A spreadsheet solution is best, with the output shown in Figure 2.13.
Example 2.3: Liquid Discharge through a Piping System

Input Data:
  Guessed discharge velocity:   7.74 m/s
  Fluid density:                1600 kg/m**3
  Fluid viscosity:              0.0018 kg/m s
  Pipe diameter:                0.1 m
  Pipe roughness:               0.046 mm
  Point 1 pressure:             0 Pa
  Point 2 pressure:             0 Pa
  Point 1 velocity:             0 m/s
  Point 1 height:               13 m
  Point 2 height:               0 m
  Pipe length:                  10 m
  Net pump energy:              -2.1 kW

Fittings:     Number    K1     K-infinity
  Elbows:     2         800    0.25
  Valves:     1         300    0.1
  Inlet:      1         160    0.5
  Exit:       1         0      1

Calculated Results:
  Reynolds No:      687702
  Friction factor:  0.0043
  Pipe area:        0.0079 m**2

  Fittings and pipe K factors:
    Elbows:  0.629
    Valves:  0.126
    Inlet:   0.500
    Exit:    1.000
    Pipe:    1.718
    TOTAL:   3.974

  Mechanical energy balance terms (m**2/s**2):
    Pressure:          0.00
    Height:            -127.49
    Point 1 velocity:  0.00
    Fittings/pipe:     118.92
    Pump:              -21.60
    TOTAL:             -30.17

  Calculated discharge velocity:   7.77 m/s
  Velocity difference:             -0.03081 m/s
  Resulting mass discharge rate:   97.61 kg/s

FIGURE 2.13. Spreadsheet output for Example 2.3: Liquid discharge through a piping system.
The initial and final reference points are shown in Figure 2.12 by the numbers in
the small squares. The pressures at these points are equal. The total elevation change
between the two points is 10 + 3 = 13 m.
The pipe roughness factor is found in Table 2.5. The constants for the fittings are
found in Table 2.4.
The 3 kW pump is 70% efficient so the net mechanical energy transferred to the
fluid is (0.70)(3 kW) = 2.1 kW. The pump energy is entered as a negative value since
work is going into the system.
The calculated results are determined as follows. The Reynolds number is determined from the guessed velocity, the pipe diameter, and the fluid density and viscosity. The
friction factor is determined using Eq. (2.1.7). The Kf factors for the elbows and valves
are determined using Eq. (2.1.4). The Kf factors for the inlet and exit effects are determined using Eq. (2.1.6). The pipe Kf factor is found using Eq. (2.1.3). The excess head
loss factors for the complete piping system are summed as shown.
The mechanical energy balance terms all have units of m2/s2. The balance term for
the fittings and pipe length is computed using Eq. (2.1.2). The guessed velocity is used
here. The pump term in the balance is found from

Ws/ṁ = Ws/(ρv2A)

where v2 is the guessed velocity. The mechanical energy balance terms are summed, as
shown, with the difference representing the remaining term, v2²/2. This represents the
calculated velocity in the spreadsheet.
The trial-and-error solution is achieved by manually entering velocity values until
the guessed and calculated values are nearly identical, or by using a spreadsheet solving
function.
The resulting mass discharge rate is determined from ρv2A, and has a value of 97.6
kg/s.
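The trial-and-error loop of Example 2.3 is easy to automate. In the sketch below a standard Colebrook-type friction correlation stands in for Eq. (2.1.7), and the 2-K constants for the fittings are taken from the spreadsheet above, so small differences from the printed values (about 97.3 versus 97.6 kg/s) are expected.

```python
import math

def fanning_friction(Re, rel_rough):
    """Colebrook correlation (Darcy form) by fixed-point iteration,
    returned as a Fanning factor. Stands in for Eq. (2.1.7)."""
    fD = 0.02
    for _ in range(60):
        fD = (-2.0 * math.log10(rel_rough / 3.7
                                + 2.51 / (Re * math.sqrt(fD)))) ** -2
    return fD / 4.0

def pipe_discharge(rho, mu, D, L, rough, dz, W_pump, fittings, v=5.0):
    """Iterate the mechanical energy balance, Eq. (2.1.10), for the
    discharge velocity. fittings holds (count, K1, K_inf, corrected)
    2-K entries; 'corrected' applies the (1 + 1/D_inch) term used for
    elbows and valves but not for entrances and exits. SI units."""
    area = math.pi * D ** 2 / 4.0
    D_inch = D / 0.0254
    for _ in range(100):
        Re = rho * v * D / mu
        K = 4.0 * fanning_friction(Re, rough / D) * L / D  # pipe, Eq. (2.1.3)
        for n, K1, K_inf, corrected in fittings:
            scale = 1.0 + 1.0 / D_inch if corrected else 1.0
            K += n * (K1 / Re + K_inf * scale)
        # (1 + K) v**2/2 = -g dz - W_pump/(rho v area); dz = z2 - z1 < 0,
        # W_pump < 0 because shaft work enters the fluid.
        v = math.sqrt(2.0 * (-9.8 * dz - W_pump / (rho * v * area))
                      / (1.0 + K))
    return v, rho * v * area

fittings = [(2, 800.0, 0.25, True),    # standard 90-degree elbows
            (1, 300.0, 0.10, True),    # gate valve
            (1, 160.0, 0.50, False),   # pipe entrance
            (1, 0.0, 1.00, False)]     # pipe exit
v, mdot = pipe_discharge(rho=1600.0, mu=1.8e-3, D=0.1, L=10.0,
                         rough=0.046e-3, dz=-13.0, W_pump=-2100.0,
                         fittings=fittings)
```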
Example 2.4: Gas Discharge through a Hole. Calculate the discharge rate of propane through a 10-mm hole at the conditions of 25°C and 4 barg (5.01 bar abs).
Data: Propane heat capacity ratio = 1.15 (Crane, 1986)
Propane vapor pressure at 25°C = 8.3 barg
Solution: The steps to determine the discharge rate are:
a. Determine phase of discharge.
Since the total pressure is less than the vapor pressure of liquid propane, the discharge must be as a vapor. The gas discharge equations must be used.
b. Determine flow regime, i.e. sonic or subsonic.
The choking pressure is determined using Eq. (2.1.18).

Pchoked/P1 = (2/(k + 1))^(k/(k−1)) = (2/2.15)^(1.15/0.15) = 0.574

Pchoked = (5.01 bar)(0.574) = 2.88 bar
Since P2 = 1.01 bar is less than Pchoked, the flow is sonic through the hole.
c. Determine the flow rate.
The area of the discharge is

A = πD²/4 = 7.85 × 10⁻⁵ m²

Use Eq. (2.1.17) to determine the mass flow rate. The pressure-ratio term is

(2/(k + 1))^((k+1)/(k−1)) = (2/2.15)^(2.15/0.15) = 0.355

Assume a discharge coefficient, CD, of 0.85. Substituting into Eq. (2.1.17)

ṁchoked = CDAP1 √( (gckM/(RgT1)) (2/(k + 1))^((k+1)/(k−1)) )
= (0.85)(7.85 × 10⁻⁵ m²)(5.01 × 10⁵ Pa) √( (1.15)(44 kg/kg-mole)(0.355)/((8314 Pa m³/kg-mole K)(298 K)) )

ṁchoked = 0.0900 kg/s
This problem can also be solved using the 2-K method in conjunction with Eq.
(2.1.19). For a hole, the frictional losses are due only to the entrance and exit
effects. Thus, ΣKf = 0.5 + 1.0 = 1.5. For k = 1.2, from Figure 2.4 (or the equations in Table 2.7), (P1 − P2)/P1 = 0.536 and it follows that P2 = 2.32 bar. Since
the ambient pressure is well below this value, the flow will be choked. From
Figure 2.5 (or the equation in Table 2.7), the expansion factor, Y, is 0.614. The
upstream gas density is
ρ1 = P1M/(RgT1) = (501,000 Pa)(44 kg/kg-mole)/((8314 Pa m³/kg-mole K)(298 K)) = 8.90 kg/m³
Substituting into Eq. (2.1.19), and using the choked pressure for P2,

ṁ = YA √(2gcρ1(P1 − P2)/ΣKf) = (0.614)(7.85 × 10⁻⁵ m²) √((2)(8.90 kg/m³)(501,000 Pa − 232,000 Pa)/1.5) = 0.086 kg/s
This result is almost identical to the previous result.
The method is readily implemented using a spreadsheet, as shown in Figure
2.14. The spreadsheet prints out the mass flows for a range of k values—the user
must interpolate to obtain the final result.
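The choked-flow arithmetic of Example 2.4 is compact in code; this sketch evaluates Eq. (2.1.18) for the choking pressure and Eq. (2.1.17) for the mass flow (SI units, g_c = 1).

```python
import math

R_G = 8314.0  # Pa m**3 / (kg-mole K)

def choked_pressure(P1, k):
    """Choking pressure, Eq. (2.1.18). P1 absolute, Pa."""
    return P1 * (2.0 / (k + 1.0)) ** (k / (k - 1.0))

def choked_gas_discharge(Cd, d_hole, P1, T1, M, k):
    """Choked mass flow through a hole, Eq. (2.1.17). P1 absolute, Pa."""
    area = math.pi * d_hole ** 2 / 4.0
    term = (2.0 / (k + 1.0)) ** ((k + 1.0) / (k - 1.0))
    return Cd * area * P1 * math.sqrt(k * M * term / (R_G * T1))

# Propane through a 10-mm hole at 25 C and 5.01 bar abs:
P_choked = choked_pressure(5.01e5, 1.15)
mdot = choked_gas_discharge(Cd=0.85, d_hole=0.010, P1=5.01e5,
                            T1=298.0, M=44.0, k=1.15)
```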
Example 2.5: Gas Discharge through a Piping System. Calculate the mass flow rate
of nitrogen through a 10-m length of 5-mm diameter commercial steel pipe. Assume a
scenario of pipe shear at the end of the pipe. The nitrogen is supplied from a source at a
pressure of 20 bar gauge and a temperature of 298 K. The piping system includes four
90° elbows (standard, threaded) and two full line gate valves. Calculate the discharge
Example 2.4: Gas Discharge through a Hole

Input Data:
  Heat capacity ratio of gas:   1.15
  Hole size:                    10 mm
  Upstream pressure:            5.01 bar abs
  Downstream pressure:          1.01 bar abs
  Temperature:                  298 K
  Gas molecular weight:         44

Excess Head Loss Factors:
  Entrance: 0.5    Exit: 1    Others: 0    TOTAL: 1.5

Calculated Results:
  Hole area:              7.9E-05 m**2
  Upstream gas density:   8.90 kg/m**3
  Expansion factor, Y:    0.614
  Actual pressure ratio:  0.80  <- Must be greater than the sonic pressure
                                   ratio below to ensure sonic flow.

  Heat capacity ratio, k:   1.2      1.4      1.67
  Sonic pressure ratio:     0.536    0.575    0.618
  Choked pressure (bar):    2.33     2.13     1.91
  Mass flow (kg/s):         0.0861   0.0892   0.0925

  Interpolated mass flow:   0.0853 kg/s

FIGURE 2.14. Spreadsheet output for Example 2.4: Gas discharge through a hole.
rate by two methods (1) using the orifice discharge equation, Eq. (2.1.17) and assuming a hole size equal to the pipe diameter, and (2) using a complete adiabatic flow
model. For nitrogen, k = 1.4.
Solution: The problem will be solved using two methods (1) a hole discharge and
(2) an adiabatic pipe flow solution.
Method 1: Hole discharge. Assume a discharge coefficient, CD = 0.85. The
cross-sectional area of the pipe is

A = πD²/4 = 1.96 × 10⁻⁵ m²
Also,

(2/(k + 1))^((k+1)/(k−1)) = (2/2.4)^(2.4/0.4) = 0.334
Equation (2.1.17) is used to estimate the mass discharge rate,

ṁchoked = CDAP1 √( (gckM/(RgT1)) (2/(k + 1))^((k+1)/(k−1)) )

Substituting into Eq. (2.1.17),

ṁ = (0.85)(1.96 × 10⁻⁵ m²)(2.1 × 10⁶ Pa) √( (1.4)(28 kg/kg-mole)(0.334)/((8314 Pa m³/kg-mole K)(298 K)) ) = 0.0804 kg/s
Method 2: Adiabatic flow model. For commercial steel pipe, from Table 2.5,
ε = 0.046 mm and it follows that

ε/D = 0.046 mm / 5 mm = 0.0092
Assume fully developed turbulent flow. Then the friction factor is calculated using Eq.
(2.1.9),

1/√f = 4 log(3.7D/ε), giving f = 0.00921

The excess head loss due to the pipe length is given by Eq. (2.1.3),

Kf = 4fL/D = (4)(0.00921)(10 m)/(0.005 m) = 73.7
For the elbows, at the expected high discharge rates, Kf = K∞. Thus, from Table 2.4, Kf
= 0.4 for each elbow and Kf = 0.1 for each gate valve. The exit effect of the gas leaving
the pipe must also be included, that is, Kf = 1.0. Thus, adding up all the contributions,

ΣKf = 73.7 + (4)(0.4) + (2)(0.1) + 1 = 76.5
From Figure 2.4 (or the equations in Table 2.7), for k = 1.4 and ΣKf = 76.5,

(P1 − P2)/P1 = 0.914  ⇒  P2 = 1.80 bar
It follows that the flow is sonic since the downstream pressure is less than this. From
Figure 2.5 (or Table 2.7), the gas expansion factor, Y = 0.716. The gas density at the
upstream conditions is

ρ1 = P1M/(RgT) = (2.1 × 10⁶ Pa)(28 kg/kg-mole)/((8314 Pa m³/kg-mole K)(298 K)) = 23.7 kg/m³
Substituting into Eq. (2.1.19),

ṁ = YA √(2gcρ1(P1 − P2)/ΣKf) = (0.716)(1.96 × 10⁻⁵ m²) √((2)(23.7 kg/m³)(2.1 × 10⁶ Pa − 0.180 × 10⁶ Pa)/76.5) = 0.0153 kg/s
Example 2.5: Gas Discharge through a Piping System

Input Data:
  Heat capacity ratio, k:    1.4
  Temperature:               298 K
  Molecular weight of gas:   28
  Point 1 pressure:          2101000 Pa
  Point 2 pressure:          101325 Pa
  Pipe diameter:             0.005 m
  Pipe length:               10 m
  Pipe roughness:            0.046 mm

Fittings:     Number    K-infinity
  Elbows:     4         0.4
  Valves:     2         0.1
  Inlet:      0         0.5
  Exit:       1         1

Calculated Results:
  Pipe area:             2E-05 m**2
  Initial gas density:   23.74 kg/m**3
  Pipe friction factor:  0.009214

  Fittings and pipe K factors:
    Elbows:  1.60
    Valves:  0.20
    Inlet:   0.00
    Exit:    1.00
    Pipe:    73.71
    TOTAL:   76.51
  Ln(K):             4.34
  Expansion factor:  0.72

  Heat capacity ratio, k:   1.2        1.4        1.67
  (P1-P2)/P1:               0.906      0.914      0.929
  P-choked (Pa):            197534.4   180552.4   148466.5
  Mass flow (kg/s):         0.015273   0.015341   0.015468

  Interpolated mass flow:   0.015341 kg/s

FIGURE 2.15. Spreadsheet output for Example 2.5: Gas discharge through a piping system.
The mass discharge rate calculated assuming a hole is more than 5 times larger than
the result from the adiabatic pipe flow method. Both methods require about the same
effort, but the adiabatic flow method produces a much more realistic result.
The entire adiabatic pipe flow method is readily implemented using a spreadsheet.
The spreadsheet solution is shown in Figure 2.15.
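As a final numerical check, Eq. (2.1.19) for the adiabatic (Method 2) result takes one line once ΣKf, Y, and P2 are known; the Y = 0.716 and P2 = 1.80 bar values below are the ones read from Figure 2.5 and Table 2.7 in the worked example, not computed here.

```python
import math

def pipe_gas_flow(Y, area, rho1, P1, P2, sum_Kf):
    """Gas discharge through a pipe, Eq. (2.1.19). SI units; g_c = 1."""
    return Y * area * math.sqrt(2.0 * rho1 * (P1 - P2) / sum_Kf)

area = math.pi * 0.005 ** 2 / 4.0     # 5-mm pipe
# Y and P2 taken from the Example 2.5 worked solution:
mdot = pipe_gas_flow(Y=0.716, area=area, rho1=23.7,
                     P1=2.1e6, P2=0.180e6, sum_Kf=76.5)
```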
Example 2.6: Two-Phase Flashing Flow through a Pipe. Propane is stored in a
vessel at its vapor pressure of 95 bar gauge and a temperature of 298 K. Determine the
discharge mass flux if the propane is discharged through a pipe to atmospheric pressure. Assume a discharge coefficient of 0.85 and a critical pipe length of 10 cm. Determine the mass flux for the following pipe lengths:
(a) 0 cm
(b) 5 cm
(c) 10 cm
(d) 15 cm
Data: Heat of vaporization: 3.33 × 10⁵ J/kg
Volume change on vaporization: 0.048 m³/kg
Heat capacity: 2230 J/kg K
Liquid density: 490 kg/m³
Solution: The solution to this problem is accomplished directly using Eqs.
(2.1.23) through (2.1.26). This is readily implemented using a spreadsheet, as shown
in Figure 2.16. The output shown is for a pipe length of 5 cm. The results are
Pipe Length (cm)    Mass Flux (kg/m² s)
0                   82,000
5                   11,900
10                  8,510
15                  8,510
The mass flux at a pipe length of zero is equal to the discharge of liquid through a
hole, represented by Eq. (2.1.12). At a pipe length of 10 cm, the discharge reaches
equilibrium conditions and the mass flux remains constant with increasing pipe length.
Example 2.6: Two-Phase Flashing Flow through a Pipe

Input Data:
  Ambient temperature:             298 K
  Saturation pressure:             95 bar gauge
  Storage pressure:                95 bar gauge
  Downstream pressure:             0 bar gauge
  Critical pipe length:            10 cm
  Pipe length:                     5 cm
  Discharge coefficient:           0.85
  Heat of vaporization:            333000 J/kg
  Volume change on vaporization:   0.048 m**3/kg
  Heat capacity:                   2230 J/kg K
  Liquid density:                  490 kg/m**3

Calculated Results:
  Total available pressure drop:    95 bar
  Non-equilibrium parameter:        0.5108
  Subcooled mass flux:              0 kg/m**2 s
  Equilibrium mass flux:            8510.3 kg/m**2 s
  All-liquid discharge thru hole:   82015 kg/m**2 s
  Combined mass flux:               11907.8 kg/m**2 s

FIGURE 2.16. Spreadsheet output for Example 2.6: Two-phase flashing flow through a pipe.
Example 2.7: Gas Discharge due to External Fire. Calculate the gas relief through a
relief valve for an uninsulated propane tank with 5 m2 surface area that is exposed to an
external pool fire.
Data: Surface area = 5 m² = 53.8 ft²
Environment factor F = 1.0
Latent heat of vaporization hfg = 333 kJ/kg (Perry and Green, 1984)
1 Btu/hr = 2.93 × 10⁻⁴ kJ/s
Solution: First use Eq. (2.1.30) to estimate the heat flux into the vessel due to the
external fire:
Qf = 34,500FA^0.82 = (34,500)(1)(53.8)^0.82 Btu/hr = 9.06 × 10⁵ Btu/hr = 265.4 kJ/s

Then from Eq. (2.1.31) the venting rate is

ṁ = Qf/hfg = (265.4 kJ/s)/(333 kJ/kg) = 0.797 kg/s
This rate is higher than would be predicted by the API 520/521 method and, after
an initial period, it may not be sustained.
The spreadsheet solution to this problem is shown in Figure 2.17.
Example 2.7: Gas Discharge Due to External Fire

Input Data:
  Surface area:                  5 m**2
  Environment factor:            1
  Latent heat of vaporization:   333 kJ/kg

Calculated Results:
  Surface area:   53.82 ft**2
  Heat flux:      906110.8 BTU/hr
                  265.49 kJ/s
  Venting rate:   0.797 kg/s

FIGURE 2.17. Spreadsheet output for Example 2.7: Gas discharge due to external fire.
2.1.1.4. DISCUSSION
Strengths and Weaknesses. Gas and liquid phase discharge calculation methods are
well founded and are readily available from many standard references. However, many
real releases of pressurized liquids will give rise to two-phase discharges. To handle
two-phase discharges, the DIERS project developed methods for designing relief systems for runaway reactors or other foaming systems. Other simplified approximate
methods have also been developed (e.g., Fauske and Epstein, 1987).
For mixtures, the discharge models become considerably more complex and are
beyond the scope of the material here. For discharge of liquid and gas mixtures through
holes, pipes, and pumps, average properties of the mixture can be used. For flashing
discharges through holes, if the thermodynamic path during the discharge is known,
then a thermodynamic simulator might be used to determine the final phase splits and
compositions.
Identification and Treatment of Possible Errors. Gas and liquid discharge equations contain a discharge coefficient. This can vary from 0.6 to 1.0 depending on the
phase and turbulence of the discharge. The use of a single value of 0.61 for liquids may
underestimate the lower velocity discharges through larger diameter holes. Similarly,
the value of 1.0 may overestimate gas discharges. All discharge rates will be time
dependent due to changing composition, temperature, pressure, and level upstream of
the hole. Average discharge rates are case dependent and a number of intermediate calculations may be necessary to model a particular release. The mass flow rate of
two-phase flashing discharges will always be bounded by pure vapor and liquid discharges.
The 2-K method for both liquid and gas discharges through holes and pipes provides the capability to include entrance and exit effects, pumps and compressors,
changes in elevation, changes in pipe size, pipe fittings, and pipe lengths. The discharge
coefficient is inherent in the calculation and does not require an arbitrary selection.
A method has also been presented to perform a complete adiabatic pipe flow calculation using the 2-K approach. This method produces a much more realistic answer
than by representing the pipe as a hole, and requires about the same calculational effort.
Utility. Gas and liquid phase discharge calculations are relatively easy to use. The
DIERS methodology requires the use of commercial computer codes or experimental
apparatus and is not easy to apply, needing expert knowledge.
Resources Needed. No special skills are required for gas or liquid discharge calculations. Less than 1 hour with an electronic calculator or spreadsheet is usually adequate
for a single calculation, with further calculations taking minutes. Two-phase flow analysis requires specialist knowledge and in most cases access to a suitable computer package, unless the simplified methods of Fauske and Epstein (1987) are employed.
Available Computer Codes
Pipe flow:
AFT Fathom (Applied Flow Technology, Louisville, OH)
Crane Companion (Crane ABZ, Chantilly, VA)
FLO-SERIES (Engineered Software, Inc., Lacey, WA)
INPLANT (Simulation Sciences Inc., Fullerton, CA)
Two-phase flow:
DIERS, Klein (1986), two-phase flashing discharges (JAYCOR Inc.)
SAFIRE (AIChE, New York)
Spreadsheets from Fauske and Associates for two-phase flow
Several integrated analysis packages also contain discharge rate simulators. These include:
ARCHIE (Environmental Protection Agency, Washington, DC)
EFFECTS-2 (TNO, Apeldoorn, The Netherlands)
FOCUS+ (Quest Consultants, Norman, OK)
PHAST (DNV, Houston, TX)
QRAWorks (PrimaTech, Columbus, OH)
SAFETI (DNV, Houston, TX)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
TRACE (Safer Systems, Westlake Village, CA)
2.1.2. Flash and Evaporation
2.1.2.1. BACKGROUND
Purpose. The purpose of flash and evaporation models is to estimate the total vapor or
vapor rate that forms a cloud, for use as input to dispersion models as shown in Figure
2.1 and Figure 2.2.
When a liquid is released from process equipment, several things may happen, as
shown in Figure 2.18. If the liquid is stored under pressure at a temperature above its
normal boiling point (superheated), it will flash partially to vapor when released to
atmospheric pressure. The vapor produced may entrain a significant quantity of liquid
as droplets. Some of this liquid may rainout onto the ground, and some may remain
suspended as an aerosol with subsequent possible evaporation. The liquid remaining
behind is likely to form a boiling pool which will continue to evaporate, resulting in
additional vapor loading into the air. An example of a superheated release is a release of
liquid chlorine or ammonia from a pressurized container stored at ambient
temperature.
[Figure 2.18 diagram: Case A, boiling point below ambient temperature — a leak from a tank with liquid produces a flash with aerosol formation, a boiling pool, and pool spread. Case B, boiling point above ambient temperature — a leak from a tank with liquid produces an evaporating pool and pool spread.]
FIGURE 2.18. Two common liquid-release situations dependent on the normal boiling point
of the liquid. Aerosol formation is also possible for Case B if the release velocities are high.
If the liquid is not superheated, but has a high vapor pressure (volatile), then vapor
emissions will arise from surface evaporation from the resulting pools. The total emission rate may be high depending on the volatility of the liquid and the total surface area
of the pool. An example is a release of liquid toluene, benzene or alcohol.
For liquids which exit a process as a jet, flow instabilities may cause the stream to
break up into droplets before it impacts the ground. The size of the resulting droplets
and the rate of air entrainment in the jet, as well as the initial temperature of the liquid,
influence the evaporation rate of the droplets while in flight. The time of flight (drop
trajectories) influences the fraction of the release which rains out, evaporates, or
remains in the aerosol/vapor cloud (DeVaull et al., 1995).
Additional references on this subject are the AIChE/CCPS Guidelines for Use of
Vapor Cloud Dispersion Models (AIChE/CCPS, 1987a, 1996a), Crowl and Louvar
(1990), Fthenakis (1993), the Guidance Manual for Modeling Hypothetical Accidental
Releases to the Atmosphere (API, 1996), Understanding Atmospheric Dispersion of Accidental Releases (AIChE/CCPS, 1995a), and several published conference proceedings
(AIChE, 1987b, 1991, 1995b).
Philosophy. If the liquid released is superheated, then the amount of vapor and liquid produced during flashing can be calculated from thermodynamics, assuming a suitable path. An isentropic path is assumed if the fluid is accelerated during its release. Equal initial and final enthalpies are assumed if the initial and final states of the fluid are quiescent. During the flash a significant fraction of liquid may remain suspended as a
quiescent. During the flash a significant fraction of liquid may remain suspended as a
fine aerosol. Some of this aerosol may eventually rain out, but the remainder will
vaporize due to air entrained into the cloud. In some circumstances ground boiloff of
the rainout may be so rapid that all the discharge may enter the cloud almost immediately. In other cases the quantity of liquid may be so great that it cools the ground
enough to sufficiently reduce surface vaporization from the pool. The temperature of
the liquid pool that remains may be significantly below the boiling point of the liquid
due to evaporative cooling. For cold liquids deposited on warm substrates, a large initial boiloff is followed by lesser vaporization as the substrate cools; eventually heat
input may be restricted to atmospheric convection or sunlight. Liquid pool models are
primarily dominated by heat transfer effects.
If the liquid released is not superheated, but relatively volatile, then the vapor loading is due to evaporation. The evaporation rate is proportional to the surface area of the
pool and the vapor pressure of the liquid, and can be significant for large pools. These
models are primarily dominated by mass transfer effects. Wind and solar radiation can
also affect the evaporation rate.
Both empirical and pseudomechanistic models based on heat and mass transfer
concepts are available and are based on the thermodynamic properties of the liquid
and, for the boiling pool, on the thermal properties of the substrate (e.g., ground).
Vaporization rates may vary greatly with time. The dimensions of the vapor cloud
formed over the pool are often required as input to some dense gas dispersion models
(Section 2.1.3.2); this is empirical and is not provided by most models.
Applications. Spilling of liquids is common during loss of containment incidents in
the chemical process industries. Thus, flash and evaporation models are essential in
CPQEA. The Rijnmond study (Rijnmond Public Authority, 1982) provides good
examples of the use of flash and evaporation models. Wu and Schroy (1979) show how
evaporation models may be applied to spills.
2.1.2.2. DESCRIPTION
Description of Technique
Flashing. The flash from a superheated liquid released to atmospheric pressure can be
estimated in a number of ways. If the initial and final state of the release is quiescent,
then the initial and final enthalpies are the same (this does not imply a constant
enthalpy process). For pure materials, such as steam, a Mollier entropy-enthalpy diagram or a thermodynamic data table can be used.
For liquids that are accelerated during the release, such as in a jet, a common
approach is to assume an isentropic path. These calculations can also be performed for
pure materials using a Mollier chart or tabulated thermodynamic data. The difference
in numerical result between the isentropic and isenthalpic pathways is frequently small
for many release situations, but this is not always guaranteed and depends on the thermodynamic behavior of the material.
A standard equation for prediction of the fraction of the liquid that flashes can be
derived by assuming that the sensible heat contained within the superheated liquid due
to its temperature above its normal boiling point is used to vaporize a fraction of the
liquid. This isenthalpic analysis leads to the following equation for the flash fraction
(Crowl and Louvar, 1990),

Fv = Cp(T − Tb)/hfg    (2.1.36)

where
Cp is the heat capacity of the liquid, averaged over T to Tb (energy/mass deg)
T is the initial temperature of the liquid (deg)
Tb is the atmospheric boiling point of the liquid (deg)
hfg is the latent heat of vaporization of the liquid at Tb (energy/mass)
Fv is the mass fraction of released liquid vaporized (unitless)
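As a quick numerical check, Eq. (2.1.36) is easy to script. The sketch below is illustrative rather than a library routine (the function and argument names are ours); it bounds the result between 0 and 1, since the flash fraction cannot exceed the whole inventory:

```python
def flash_fraction(cp, t_initial, t_boil, h_fg):
    """Isenthalpic flash fraction, Eq. (2.1.36).

    cp        liquid heat capacity averaged over T to Tb (kJ/kg K)
    t_initial initial liquid temperature (K)
    t_boil    atmospheric boiling point (K)
    h_fg      latent heat of vaporization at Tb (kJ/kg)
    """
    fv = cp * (t_initial - t_boil) / h_fg
    return min(max(fv, 0.0), 1.0)  # physically bounded between 0 and 1

# Propane at 298 K (the data of Example 2.8 below): about 0.38
print(flash_fraction(2.45, 298.0, 231.0, 429.0))
```

Liquids stored below their normal boiling point return zero, consistent with no flash.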
TNO (1979) provide a flash equation based on the integrated heat balance of a
parcel of flashing liquid. Manual treatment of multicomponent mixtures is time consuming. It is easier to use flash capabilities in commercial process simulators (e.g.,
PRO-II, HY-SYS, ASPEN PLUS, PD-PLUS) or their equivalents available in-house.
The fraction of released liquid vaporized (Fv) is a poor predictor of the total mass
of material in the vapor cloud, because of the possible presence of entrained liquid as
droplets (aerosol). There are two mechanisms for the formation of aerosols: mechanical and thermal. The mechanical mechanism assumes that the liquid release occurs at
high enough speeds to produce surface stresses. These stresses cause the liquid to break up into small droplets. The thermal mechanism assumes that breakup is
caused by the flashing of the liquid to vapor.
At low degrees of superheat, mechanical formation of aerosols dominates and
droplet break-up frequently depends on the relative strength of inertial/shear forces
and capillary forces on the drop. This ratio is frequently expressed as the Weber number, and the largest droplets in the jet have a diameter, d, estimated by a Weber number stability criterion, We = ρa u² d/σ, where the surface tension, σ, jet velocity, u, and air density, ρa, all contribute (DeVaull et al., 1995). Although widely
used, the Weber number does not provide a complete answer to the problem and several alternative forms have been presented (Muralidhar et al., 1995).
At higher degrees of superheat, a flashing mechanism dominates, usually producing smaller droplets.
A study of jet break-up using hydrogen fluoride is provided by Tilton and Farley
(1990).
To date, no completely acceptable method is available for predicting aerosol formation, although many studies and several experimental tests have been completed
(AIChE, 1987, 1991, 1995b; Fthenakis, 1993)—this area is under continuing study
and development at this time. Blewitt et al. (1987) describe experiments with anhydrous hydrofluoric acid spills at the Department of Energy test site in Nevada. In the
majority of their tests no liquid accumulated on the test pad although the theoretical
adiabatic flash fraction was only 0.2. Wheatley (1987) summarizes seven sets of experiments in Europe and the United States on ammonia. He found that for pressurized
releases of ammonia there was no rainout, but that some did occur for semirefrigerated
ammonia. It is clear that when materials flash on release, at certain storage pressures
and temperatures, all the released mass contributes to the cloud mass, rather than only
the vapor fraction.
Aerosol entrainment has very significant effects on cloud dispersion that include
• The cloud will have a larger total mass.
• There will be an aerosol component (contributing to a higher cloud density).
• Evaporating aerosol can reduce the temperature below the ambient atmospheric
temperature (contributing to a higher cloud density).
• The colder cloud temperature may cause additional condensation of atmospheric
moisture (contributing to a higher cloud density).
Taken together, these effects tend to significantly increase the actual density of
vapor clouds formed from flashing releases. The prediction of these effects is necessary
to properly initialize the dispersion models. Otherwise, the cloud's hazard potential
may be grossly misrepresented.
A common practice for estimating aerosol formation is to assume that the aerosol
fraction is equal to some multiple of the fraction flashed, typically 1 or 2. This method
has been attributed to Kletz (Lees, 1986). This approach has no fundamental basis and is probably inaccurate, but it remains in common use. Wheatley (1987) suggests this
may significantly underestimate the total mass in the cloud because little rainout occurs
for superheated releases with flash fractions as low as 10%.
The most common means to estimate the aerosol content is to predict droplet size
and from this the settling dynamics in the atmosphere. Flashing from releases of superheated liquids has been discussed by several authors including DeVaull et al. (1995),
Fletcher (1982), Melhem et al. (1995), Wheatley (1986, 1987), and Woodward
(1995). Also, AIChE/CCPS (1989b) contains a model that discusses the atomization
due to acceleration (depressurization) as well as superheat of such releases expanding to
ambient pressure. One approach is to determine the maximum droplet size from
observed, critical droplet Weber numbers, typically in the range 10-20 (Emerson,
1987; Wheatley, 1987). The atmospheric settling velocity of such a droplet may be
estimated from Stokes Law or turbulent settling approximation (Clift et al., 1978;
Coulson et al., 1978). For example, ammonia droplets must be at least 0.3 mm for the
settling velocity to reach 1 m/s. Given the elevation and orientation of the release and
the jet velocity, the amount of rainout of aerosol and the resultant mass of material in
the cloud can be estimated using the settling velocity. The amount of moisture in the
ambient air should be included in these considerations.
API (1996) states that, in general, particles of size less than 100 µm will tend to act
like a mist or fog, and stay suspended for wind speeds greater than about 2 m/s if
released from heights greater than 1 or 2 m.
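The Weber-number droplet sizing and settling-velocity screening described above can be sketched as follows. This is a hedged illustration, not a rainout model: the critical Weber number (an input here, typically 10-20 as noted below), the default air properties, and the restriction of Stokes law to low droplet Reynolds numbers are all assumptions to be checked case by case.

```python
def max_droplet_diameter(sigma, rho_air, u_jet, we_crit=12.0):
    """Largest stable droplet diameter from We = rho_a * u^2 * d / sigma.

    sigma   liquid surface tension (N/m)
    rho_air air density (kg/m^3)
    u_jet   jet velocity relative to the air (m/s)
    we_crit critical Weber number, typically 10-20 (assumed input)
    """
    return we_crit * sigma / (rho_air * u_jet ** 2)

def stokes_settling_velocity(d, rho_liq, rho_air=1.2, mu_air=1.8e-5, g=9.81):
    """Stokes-law settling velocity (m/s); valid only for droplet
    Reynolds numbers below about 1, i.e., small droplets."""
    return g * d ** 2 * (rho_liq - rho_air) / (18.0 * mu_air)
```

For larger droplets, the turbulent settling approximations cited above (Clift et al., 1978) replace Stokes law.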
Melhem (Fthenakis, 1993) provides a model for aerosol formation based on the
mechanical, or available, energy content of the liquid. The change in available energy is
the difference between the internal and expansion energy of the fluid. The rationale is
that the mechanical energy contained within the liquid is the energy used to cause the
liquid breakup. A modified Weber number, including the available energy, is
proposed.
Muralidhar et al. (1995) provides a good fit for experimental HF release data using
a modified Weber number. An analysis of this modified form, coupled with experimental data, leads them to conclude that for HF releases a good representation of the data is
obtained if the initial droplet diameter (in meters) is approximated by D = 0.96σ/u, where σ is the liquid surface tension (N/m) and u is the release velocity (m/s).
It is unclear at present which aerosol formation model is appropriate for risk analysis, and many of the models are far too complex for routine use. At this time, most risk analysts use a model based on a fraction of the total amount flashed. For a conservative result, assume all of the aerosol evaporates.
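The rule of thumb above reduces to a one-liner. This is only the screening heuristic attributed to Kletz, with the multiple and the cap at unity made explicit; the function name is ours:

```python
def airborne_fraction(flash_frac, aerosol_multiple=1.0):
    """Vapor plus entrained aerosol as a fraction of the release.

    Assumes aerosol mass = aerosol_multiple * flashed mass (multiple
    typically 1 or 2), so the airborne fraction is Fv * (1 + multiple),
    capped at 1 since no more than the whole release can go airborne.
    """
    return min(1.0, flash_frac * (1.0 + aerosol_multiple))

print(airborne_fraction(0.38, 1.0))  # 0.76
print(airborne_fraction(0.38, 2.0))  # capped at 1.0
```

As Wheatley's ammonia results suggest, even this can underestimate the cloud mass for strongly superheated releases.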
Evaporation. Evaporation from liquid spills onto land and water has received substantial attention. Land spills are better defined, as many spills occur into a dike or other
retention system that allows the pool size to be well estimated. Spills onto water are
unbounded and calculations are more empirical.
A number of useful references are available in AIChE/CCPS (1987a) and
AIChE/CCPS (1995b). More detailed calculation procedures are given in Cavanaugh et al. (1994), Drivas (1982), Fleischer (1980), Kawamura and MacKay (1987),
MacKay and Matsuga (1973), Shaw and Briscoe (1978), Stiver et al. (1989), TNO
(1979), and Webber (1991). Wu and Schroy (1979) handle a second component and
Studer et al. (1987) include the dynamics of a deep pool.
Vaporization from a pool is determined using a total energy balance on the pool,

m Cp dT/dt = H − L ṁ    (2.1.37)

where
m is the mass of the pool (mass)
Cp is the heat capacity of the liquid (energy/mass deg)
T is the temperature of the liquid in the pool (deg)
t is the time (time)
H is the total heat flux into the pool (energy/time)
L is the heat of vaporization of the liquid (energy/mass)
ṁ is the evaporation rate (mass/time)
The heat flux, H, is the net total energy into the pool from radiation via the sun,
from convection and conduction to the air, from conduction via the ground, and other
possible energy sources, such as a fire.
The modeling approaches using Eq. (2.1.37) are divided into two classes: low and
high volatility liquids. High volatility liquids are those with boiling points near or less
than ambient or ground temperatures.
For high volatility liquids, the vaporization rate of the pool is controlled by heat
transfer from the ground (by conduction), the air (both conduction and convection),
the sun (radiation), and other surrounding heat sources such as a fire or flare. The cooling of the liquid due to rapid vaporization is also important.
For the high volatility case, Eq. (2.1.37) can be simplified by assuming steady
state, resulting in
ṁ = H/L    (2.1.38)

where ṁ is the vaporization rate (mass/time), H is the total heat flux to the pool (energy/time), and L is the heat of vaporization of the pool (energy/mass).
The initial stage of vaporization is usually controlled by the heat transfer from the
ground. This is especially true for a spill of liquid with a normal boiling point below
ambient temperature or ground temperature (i.e., boiling liquid). The heat transfer
from the ground is modeled with a simple one-dimensional heat conduction equation
given by
qg = ks(Tg − T)/(π αs t)^(1/2)    (2.1.39)

where
qg is the heat flux from the ground (energy/area-time)
ks is the thermal conductivity of the soil (energy/length-time-deg)
Tg is the temperature of the soil (deg)
T is the temperature of the liquid pool (deg)
αs is the thermal diffusivity of the soil (area/time)
t is the time after spill (time)
Equation (2.1.39) is not considered conservative.
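The ground-conduction flux of Eq. (2.1.39) and the steady-state boiloff of Eq. (2.1.38) can be sketched per unit area as below. The names are ours, and the one-dimensional conduction assumptions noted above (no ground freezing, uniform substrate) still apply:

```python
import math

def ground_heat_flux(k_soil, alpha_soil, t_soil, t_pool, t):
    """Transient 1-D conduction flux from the ground, Eq. (2.1.39) (W/m^2)."""
    return k_soil * (t_soil - t_pool) / math.sqrt(math.pi * alpha_soil * t)

def boiling_flux(q, heat_of_vap):
    """Steady-state evaporative mass flux, Eq. (2.1.38), per unit area (kg/m^2 s)."""
    return q / heat_of_vap

# LNG on concrete, 10 s after the spill (the data of Example 2.9 below)
q = ground_heat_flux(0.92, 4.16e-7, 293.0, 109.0, 10.0)
print(q)                       # about 4.68e4 W/m^2
print(boiling_flux(q, 4.98e5)) # about 0.094 kg/m^2 s
```

Multiplying the flux by the pool area gives the total boiloff rate, as in Example 2.9.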
At later times, solar heat fluxes and convective heat transfer from the atmosphere
become important. In case of a spill onto an insulated dike floor these fluxes may be the
only energy contributions. This approach seems to work adequately for LNG, and perhaps ethane and ethylene. The higher hydrocarbons (C3 and above) require a more
detailed heat transfer mechanism. This model also neglects possible water freezing
effects in the ground, which can significantly alter the heat transfer behavior.
For liquids having normal boiling points near or above ambient temperature,
diffusional or mass transfer evaporation is the limiting mechanism. The vaporization
rates for this situation are not as high as for flashing liquids or boiling pools, but can be
significant if the pool area is large. A typical approach is to assume a vaporization rate
of the form (Matthiessen, 1986)
ṁmass = M kg A Psat/(Rg TL)    (2.1.40)

where
ṁmass is the mass transfer evaporation rate (mass/time)
M is the molecular weight of the evaporating material (mass/mole)
kg is the mass transfer coefficient (length/time)
A is the area of the pool (area)
Psat is the saturation vapor pressure of the liquid (force/area)
Rg is the ideal gas constant (pressure volume/mole deg)
TL is the temperature of the liquid (abs. deg)
This model assumes that the concentration of vapor in the bulk surrounding gas is
much less than the saturation vapor pressure.
The difficulty with Eq. (2.1.40) is the need to specify the mass transfer coefficient,
kg. There are several procedures to estimate this quantity. The first procedure is to use a
reference material and estimate the change in mass transfer coefficient due to the
change in molecular weight. This results in an expression of the form (Matthiessen,
1986)
kg = kg°(M0/M)^(1/3)    (2.1.41)

where kg° is a reference mass transfer coefficient (length/time) and M0 is a reference molecular weight (mass/mole). A typical reference substance used is water, with a mass transfer coefficient of 0.83 cm/s (Matthiessen, 1986).
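Equations (2.1.40) and (2.1.41) combine into a short sketch (SI units; names ours). The water reference coefficient of 0.83 cm/s and the cube-root molecular-weight scaling come from the text; the default gas constant and unit choices are assumptions of this illustration:

```python
def mass_transfer_coefficient(mol_wt, k_ref=0.83e-2, mol_wt_ref=18.0):
    """kg = kg0 * (M0/M)^(1/3), Eq. (2.1.41); defaults use water as the
    reference substance. Returns kg in m/s."""
    return k_ref * (mol_wt_ref / mol_wt) ** (1.0 / 3.0)

def evaporation_rate(mol_wt, kg, area, p_sat, t_liq, r_gas=8314.0):
    """mdot_mass = M kg A Psat / (Rg TL), Eq. (2.1.40).

    mol_wt in kg/kg-mol, kg in m/s, area in m^2, p_sat in Pa, t_liq in K,
    r_gas in Pa m^3/(kg-mol K); returns kg/s.
    """
    return mol_wt * kg * area * p_sat / (r_gas * t_liq)

# Hexane pool of Example 2.10: 151 mm Hg = 2.01e4 Pa
kg = mass_transfer_coefficient(86.0)
print(evaporation_rate(86.0, kg, 100.0, 151.0 * 133.322, 298.0))  # about 0.344 kg/s
```

This assumes the vapor concentration in the bulk air is far below saturation, as stated for Eq. (2.1.40).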
A correlation based on experimental data is provided by MacKay and Matsuga
(1973). This correlation assumes neutral atmospheric stability and applies only for a
pure component.
kg = 0.00482 NSc^(−0.67) u^(0.78) dp^(−0.11)    (2.1.42)
where
kg is the mass transfer coefficient (m/s)
Nsc is the Schmidt number (unitless)
u is the wind velocity 10m off the ground (m/s)
dp is the diameter of the pool (m)
The Schmidt number is given by
NSc = ν/(Dm M) = ν/D    (2.1.43)

where ν is the kinematic viscosity (area/time), Dm is the molal diffusivity (moles/length-time), M is the molecular weight of the material (mass/mole), and D is the diffusivity (area/time).
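The MacKay and Matsuga correlation, Eq. (2.1.42), is a one-line function (illustrative; per the text it applies only for neutral atmospheric stability and a pure component):

```python
def mackay_matsuga_kg(n_sc, u_wind, d_pool):
    """Mass transfer coefficient (m/s), Eq. (2.1.42):
    kg = 0.00482 * NSc^-0.67 * u^0.78 * dp^-0.11,
    with u the wind speed at 10 m (m/s) and dp the pool diameter (m)."""
    return 0.00482 * n_sc ** -0.67 * u_wind ** 0.78 * d_pool ** -0.11

# Pentane pool conditions of Example 2.11: about 7.8e-3 m/s
print(mackay_matsuga_kg(2.11, 4.9, 10.0))
```

The worked example later quotes 7.74 × 10⁻³ m/s for these inputs; the small difference is rounding of intermediates.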
Kawamura and MacKay (1987) developed two models to estimate evaporation
rates from ground pools of volatile and nonvolatile liquids—the direct evaporation and
surface temperature models. Both models are based on steady-state heat balances
around the pool and include solar radiation, evaporative cooling, and heat transfer
from the ground. Both models agree well with experimental data, typically within
20%, with some differences being as high as 40%. The direct evaporation model is the
simpler model, whereas the surface temperature model requires an iterative solution to
determine the surface temperature of the evaporating pool.
The direct evaporation model includes an evaporation rate due to solar radiation,
given by
ṁsol = Qsol M A/Hv    (2.1.44)

where
ṁsol is the evaporation rate due to solar radiation (mass/time)
Qsol is the solar radiation (energy/area-time)
M is the molecular weight (mass/mole)
A is the pool area (area)
Hv is the heat of vaporization of the liquid (energy/mole)
Equation (2.1.44) is combined with Eq. (2.1.40), representing evaporation due to mass transfer:

ṁtot = ṁsol [1/(1 + β)] + ṁmass [β/(1 + β)]    (2.1.45)

where ṁtot is the net evaporation rate (mass/time), β is a parameter which is a function of vapor pressure (dimensionless), and ṁmass is the mass transfer evaporation rate given by Eq. (2.1.40) (mass/time).
The parameter β is given by

β = Hv k Psat/(NSc^(2/3) Ugrd Rg T)    (2.1.46)
where
NSc is the dimensionless Schmidt number, given by Eq. (2.1.43)
Ugrd is the overall heat transfer coefficient of the ground (energy/area-time-deg)
Rg is the ideal gas constant (pressure volume/mole deg)
T is the absolute temperature (deg)
k is the mass transfer coefficient (length/time)
Psat is the saturation vapor pressure (pressure)
Hv is the heat of vaporization of the liquid (energy/mole)
The value of β controls the relative contributions of solar and mass transfer evaporation. If β is small compared to unity, then solar evaporation dominates, whereas if β is large, then mass transfer evaporation is dominant.
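Given β, the blending of Eq. (2.1.45) is straightforward. The sketch below takes the two component rates and β as inputs rather than computing β itself, since β depends on case-specific heat and mass transfer data:

```python
def total_evaporation_rate(m_solar, m_mass, beta):
    """Net evaporation rate, Eq. (2.1.45):
    mdot_tot = mdot_sol/(1+beta) + mdot_mass*beta/(1+beta).
    Small beta -> solar-dominated; large beta -> mass-transfer-dominated."""
    return m_solar / (1.0 + beta) + m_mass * beta / (1.0 + beta)

print(total_evaporation_rate(0.132, 1.16, 0.0))  # all solar: 0.132
print(total_evaporation_rate(0.132, 1.16, 1e6))  # essentially all mass transfer
```

The two limits confirm the statement above about the role of β.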
Pool Spread. An important parameter in all of the evaporation models is the area of the
pool. If the liquid is contained within a diked or other physically bounded area, then
the area of the pool is determined from these physical bounds if the spill has a large
enough volume to fill the area. If the pool is unbounded, then the pool can be expected
to spread out and grow in area as a function of time. The size of the pool and its spread
is highly dependent on the level and roughness of the terrain surface—most models
assume a level and smooth surface.
One approach is to assume a constant liquid thickness throughout the pool. The
pool area is then determined directly from the total volume of material. The Dow
Chemical Exposure Index (AIChE, 1994) uses a constant pool depth of 1 cm.
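The constant-thickness assumption is trivial to apply; the sketch below uses the Dow Chemical Exposure Index default depth of 1 cm (the function name is ours):

```python
import math

def pool_area_and_radius(volume, depth=0.01):
    """Pool area (m^2) and equivalent circular radius (m) for a spill
    of the given volume (m^3) spread to a fixed depth (m, default 1 cm)."""
    area = volume / depth
    return area, math.sqrt(area / math.pi)

# A 1 m^3 spill covers 100 m^2, a circle of radius about 5.6 m
print(pool_area_and_radius(1.0))
```

For diked spills, the computed area is of course capped at the dike area, as noted above.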
Wu and Schroy (1979) solved the equations of motion and continuity to derive an
equation for the radius of the pool. This equation produces a conservative result,
assuming the spill is on a flat surface, the pool growth is not constrained, and the pool
growth will be radial and uniform from the point of the spill. The result is
r = [C (g ρ/μ) QAF³ t⁴ cos β]^(1/8)    (2.1.47)
where
r is the pool radius (length)
t is the time after the spill (time)
C is a constant developed from experimental data, see below (dimensionless)
g is the acceleration due to gravity (length/time2)
ρ is the density of the liquid (mass/volume)
QAF is the volumetric spill rate after flashing (volume/time)
μ is the viscosity of the liquid (mass/length-time)
β is the angle between the pool surface and the vertical axis perpendicular to the ground, see below (degrees)
The Reynolds number for the pool spread is given by
NRe = 2 QAF ρ/(π r μ)    (2.1.48)
and the constant, C, has a value of 2 for a Reynolds number greater than 25 and a value
of 5 for Reynolds number less than or equal to 25.
The pool surface angle is given by
β = tan⁻¹{[(0.25 + B)^(0.5) − 0.5]^(0.5)}    (2.1.49)

B = 2.489 r⁴ ρ g/(μ QAF)    (2.1.50)
Clearly, the solution to this model is iterative since several of the parameters in Eq.
(2.1.47) depend on a value of the pool radius, which is the desired result.
A more complex model for pool spread has been developed by Webber (1991).
This model is presented as a set of two coupled differential equations which models
liquid spread on a flat horizontal and solid surface. The model includes gravity spread
terms and flow resistance terms for both laminar and turbulent flow. Solution of this
model shows that the pool radius is proportional to t in the limit where gravity balances inertia, and to t^(1/8) in the limit where gravity and laminar resistance balance.
This model assumes isothermal behavior and does not include evaporation or boiling
effects.
Some work has been completed on pools on rough surfaces (Webber, 1991).
For liquids spilled on water, the treatment is significantly different. For this case
the gravity term must be modified in terms of the relative density difference between
the released liquid and the water (Webber, 1991). Solutions to these equations result in
an early time solution with the pool radius proportional to t^(1/2) when the resistance is dominated by the displaced water. The asymptotic laminar viscous regime results in a solution with the radius proportional to t^(1/4). The flow of water beneath the pool is most
important in this regime.
Logic Diagram. A fundamentally based model must solve the simultaneous, time-dependent heat, mass, and momentum balances. A logic diagram is given in Figure
2.19.
Theoretical Foundation. Equilibrium flash models for superheated liquids are based on
thermodynamic theory. However, estimates of the aerosol fraction entrained in the
resultant cloud are mostly empirical or semiempirical. Most evaporation models are
based on the solution of time dependent heat and mass balances. Momentum transfer is
typically ignored. Pool spreading models are based primarily on the opposing forces of
gravity and flow resistance and typically assume a smooth, horizontal surface.
Input Requirements and Availability. Flash models require heat capacity, latent heat of
vaporization data for the pure materials, normal boiling point temperatures, as well as
[Figure 2.19 combines four blocks: factors that determine the spill rate (tank pressure, liquid height, diameter of hole, discharge coefficient, density); physical properties of the materials (VLE data, heat capacity, heat of vaporization, liquid density, emissivity, viscosity); and physical conditions (ground density and thermal conductivity, ambient temperature, wind speed, solar radiation). These inputs are combined to calculate the spill rate, pool growth, heat transfer, and mass transfer, yielding the evaporation rate versus spill time.]
FIGURE 2.19. Logic diagram for pool evaporation.
the initial conditions of temperature and pressure. The AIChE/DIPPR physical properties compilation (Danner and Daubert, 1985) is a useful source of temperature
dependent properties. For flashing mixtures a commercial process simulator would
normally be used. If droplet size is to be determined to allow estimation of settling
velocity, the velocity of discharge must be calculated, along with density and surface
tension of the liquid and the density of gas.
Evaporation models for boiling pools require definition of the leak rate and pool
area (for spills onto land), wind velocity, ambient temperature, pool temperature,
ground density, specific heat, and thermal conductivity. Radiation parameters (e.g.,
incoming solar heat flux, pool reflectivity, and emissivity) are also needed if solar radiation is a significant factor. Most of these data are readily available, but soil characteristics are quite variable.
Evaporation models for nonboiling liquids require the leak rate and pool area (for
spills onto land), wind velocity, ambient temperature, pool temperature, saturation
vapor pressure of the evaporating material, and a mass transfer coefficient.
Pool spreading models require the liquid viscosity and density, and possibly a turbulent friction coefficient. Values for the turbulent friction coefficient have been measured by Webber (1991).
Output. The output of flash models is the vapor-liquid split from a discharge of a
superheated liquid. Aerosol and rainout models provide estimates of the fractions of
the liquid that remain suspended within the cloud.
The output of evaporation models is the time-dependent mass rate of boiling or
vaporization from the pool surface. These models rarely give atmospheric vapor concentrations or cloud dimensions over the pool, which may be required as input to dense
gas or other vapor cloud dispersion models.
The pool spreading models provide the radius or radial spread velocity of the pool
from which the total pool area and depth is determined.
Simplified Approaches. For evaporation cases, a simplified approach for smaller releases
of liquids with normal boiling points well below ambient temperature is to assume all
the liquid enters the vapor cloud, either by immediate flash plus entrainment of aerosol,
or by rapid evaporation of any rainout.
2.1.2.3. EXAMPLE PROBLEMS
Example 2.8: Isenthalpic Flash Fraction. Calculate the flash fraction of liquid propane flashed from 10 barg and 25°C to atmospheric pressure.
Data: Heat capacity, Cp: 2.45 kJ/kg K (average 231-298 K)
Ambient temperature, T: 298 K (25°C)
Normal boiling point, Tb: 231 K (−42°C)
Heat of vaporization, hfg: 429 kJ/kg at −42°C (Perry and Green, 1984)
Solution: Using Eq. (2.1.36)

Fv = Cp(T − Tb)/hfg = (2.45 kJ/kg K) × (298 K − 231 K)/(429 kJ/kg) = 0.38
Example 2.8: Isenthalpic Flash Fraction
Input Data:
Ambient temperature: 298 K
Boiling point temp. at pressure: 231 K
Heat capacity: 2.45 kJ/kg-K
Heat of vaporization: 429 kJ/kg
Calculated Results:
Flash fraction: 0.3831
FIGURE 2.20. Spreadsheet output for Example 2.8: Isenthalpic flash fraction.
Experimental results suggest this may seriously underestimate the actual cloud
mass, as aerosol droplets will be carried with the dispersing cloud.
The spreadsheet output for this problem is shown in Figure 2.20.
Example 2.9: Boiling Pool Vaporization. Calculate the vaporization rate due to heating from the ground at 10 s after an instantaneous spill of 1000 m³ of LNG on a concrete dike of 40 m radius.
Data: Thermal diffusivity of soil, αs: 4.16 × 10⁻⁷ m²/s
Thermal conductivity of soil, ks: 0.92 W/m K
Temperature of liquid pool, T: 109 K (−164°C)
Temperature of soil, Tg: 293 K (20°C)
Heat of vaporization of pool, L: 498 kJ/kg at −164°C (Shaw and Briscoe, 1978)
Solution: The total pool area = πr² = (3.14)(40 m)² = 5024 m². The liquid depth in the pool is thus (1000 m³)/(5024 m²) = 0.2 m. Thus, there is more than adequate liquid in the spill to cover the containment area. The heat flux from the ground is given by Eq. (2.1.39):

qg = ks(Tg − T)/(π αs t)^(1/2) = (0.92 W/m K)(293 K − 109 K)/[(3.14)(4.16 × 10⁻⁷ m²/s)(10 s)]^(1/2) = 4.68 × 10⁴ J/m² s

Then, the evaporative flux, ṁ, is given by Eq. (2.1.38):

ṁ = H/L = (4.68 × 10⁴ J/m² s)/(4.98 × 10⁵ J/kg) = 0.094 kg/m² s

The total evaporation rate for the entire pool area is
(0.094 kg/m² s)(5024 m²) = 472 kg/s
The spreadsheet output for this problem is shown in Figure 2.21.
Example 2.10: Evaporating Pool. Estimate the evaporation rate for a 100 m² pool of liquid hexane at a temperature of 298 K.
Data: M = 86
Psat = 151 mm Hg
Example 2.9: Boiling Pool Vaporization
Input Data:
Thermal diffusivity of soil: 4.2E-07 m**2/s
Thermal conductivity of soil: 0.92 W/m-K
Temperature of the liquid pool: 109 K
Temperature of the soil: 293 K
Heat of vaporization: 498000 J/kg
Time: 10 s
Pool area: 5024 m**2
Calculated Results:
Heat flux from ground: 46826 J/m**2 s
Evaporative flux: 0.094 kg/m**2 s
Total evaporation rate: 472.4 kg/s
FIGURE 2.21. Spreadsheet output for Example 2.9: Boiling pool vaporization.
Solution: This is considered a low volatility pool problem. Equations (2.1.40) to (2.1.42) apply. The mass transfer coefficient for the evaporation is estimated using Eq. (2.1.41):

kg = kg°(M0/M)^(1/3) = (0.83 cm/s)(18/86)^(1/3) = 0.493 cm/s

Equation (2.1.40) is used to estimate the evaporation rate:

ṁmass = M kg A Psat/(Rg TL)
= (0.086 kg/gm-mole)(0.493 × 10⁻² m/s)(100 m²)(151 mm Hg)(1 atm/760 mm Hg)/[(82.057 × 10⁻⁶ m³ atm/gm-mole K)(298 K)]
= 0.344 kg/s
Clearly the evaporation rate from the boiling pool is significantly greater than the evaporation rate from the volatile liquid. The spreadsheet output for this problem is shown
in Figure 2.22.
Example 2.11: Pool Evaporation Using Kawamura and MacKay (1987) Direct Evaporation Model. Determine the evaporation rate from a 10-m diameter pool of pentane at an ambient temperature of 296 K. The pool is on wet sand and the solar energy input rate is 642 J/m² s.
Example 2.10: Evaporating Pool
Input Data:
Area of pool: 100 m**2
Ambient temperature: 298 K
Molecular weight of liquid: 86
Saturation vapor pressure: 151 mm Hg
Calculated Results:
Mass transfer coefficient: 0.004928 m/s
Evaporation rate: 0.344349 kg/s
FIGURE 2.22. Spreadsheet output for Example 2.10: Evaporating pool.
FIGURE 2.22. Spreadsheet output for Example 2.10: Evaporating pool.
Data:
  Ambient temperature:                       296 K
  Wind speed at 10 meters:                   4.9 m/s
  Physical properties of pentane:
    Molecular weight:                        72
    Heat of vaporization:                    27.4 kJ/mol
    Vapor pressure at ambient temp.:         0.652 bar abs
    Kinematic viscosity in air:              1.5 × 10⁻⁵ m²/s
  Physical properties of air:
    Diffusivity:                             7.1 × 10⁻⁶ m²/s
  Heat transfer properties:
    Solar radiation:                         642 J/m² s
    Heat transfer coefficient for pentane:   43.1 J/m² s °C
    Heat transfer coefficient for ground:    45.3 J/m² s °C
Solution: The total area of the pool is

   A = πd²/4 = (3.14)(10 m)²/4 = 78.5 m²

The Schmidt number is determined from Eq. (2.1.43):

   N_Sc = ν/D = (1.5 × 10⁻⁵ m²/s)/(7.1 × 10⁻⁶ m²/s) = 2.11

The mass transfer coefficient is determined from Eq. (2.1.42):

   K = 0.00482 N_Sc^(-0.67) u^(0.78) d^(-0.11)
     = 0.00482 (2.11)^(-0.67) (4.9 m/s)^(0.78) (10 m)^(-0.11)
     = 7.83 × 10⁻³ m/s

The overall ground heat transfer coefficient, U_grd, is a combination of the liquid
and ground heat transfer coefficients:

   1/U_grd = 1/h_liq + 1/h_grd = 1/(43.1 J/m² s °C) + 1/(45.3 J/m² s °C)

   U_grd = 22.1 J/m² s °C

The evaporation rate due to mass transfer effects is given by Eq. (2.1.40):

   Q_mass = M K A P^sat / (R_g T_L)

          = (72 kg/kg-mole)(7.83 × 10⁻³ m/s)(78.5 m²)(65.2 kPa)
            / [(8.314 kPa m³/kg-mole K)(296 K)]

          = 1.17 kg/s

The evaporation rate due to solar energy input is determined from Eq. (2.1.44):

   Q_solar = F_s M A / ΔH_v

           = (642 J/m² s)(72 kg/kg-mole)(78.5 m²)
             / [(27.4 kJ/mol)(1000 mol/kg-mole)(1000 J/kJ)]

           = 0.132 kg/s
Example 2.11: Pool Evaporation Using Kawamura and MacKay Direct Evaporation Model

Input Data:
  Geometry:
    Diameter of pool:                          10 m
  Physical Properties of Liquid:
    Molecular weight of liquid:                72
    Heat of vaporization of liquid:            27.4 kJ/mol
    Vapor pressure of liquid at ambient:       0.652 bar abs
    Kinematic viscosity of liquid in air:      1.5E-05 m²/s
  Physical Properties of Air:
    Diffusivity:                               7.1E-06 m²/s
  Heat Transfer Properties:
    Solar input:                               0.642 kJ/m² s
    Heat transfer coefficient of liquid:       0.0431 kJ/m² s K
    Heat transfer coefficient of ground:       0.0453 kJ/m² s K
  Ambient temperature:                         296 K
  Wind speed at 10 meters:                     4.9 m/s

Calculated Results:
  Pool area:                                   78.54 m²
  Schmidt number:                              2.11
  Mass transfer coefficient:                   0.00783 m/s
  Overall ground heat transfer coefficient:    0.0221 kJ/m² s K
  Evaporation Rates:
    Mass transfer:                             1.17 kg/s
    Solar radiation:                           0.13 kg/s
  Beta:                                        0.193
  Net evaporation rate:                        0.301 kg/s

FIGURE 2.23. Spreadsheet output for Example 2.11: Pool evaporation using Kawamura and
MacKay (1987) direct evaporation model.
The value of β is determined from Eq. (2.1.46). Substituting N_Sc = 2.11, U_grd =
2.21 × 10⁻² kJ/m² s K, K = 7.83 × 10⁻³ m/s, R = 8.314 Pa m³/mol K, T = 296 K,
P^sat = 65,225 Pa, and ΔH_v = 27.4 kJ/mol gives

   β = 0.193
The net evaporation rate is determined from Eq. (2.1.45):

   Q_net = Q_solar [1/(1 + β)] + Q_mass [β/(1 + β)]

         = (0.132 kg/s)[1/(1 + 0.193)] + (1.17 kg/s)[0.193/(1 + 0.193)] = 0.30 kg/s

It is clear that both mass transfer and solar evaporation contribute to the net result.
The spreadsheet implementation of this problem is given in Figure 2.23.
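The whole of Example 2.11 can be assembled into a short script. This is a sketch with names of our choosing; because Eq. (2.1.46) is not reproduced in full here, β is taken directly from the example's value of 0.193 rather than recomputed.

```python
import math

# Example 2.11 (sketch): Kawamura and MacKay direct evaporation estimate for pentane.
d = 10.0     # pool diameter, m
M = 72.0     # molecular weight, kg/kg-mole
P_sat = 65.2 # vapor pressure at ambient, kPa (0.652 bar)
nu = 1.5e-5  # kinematic viscosity in air, m^2/s
D = 7.1e-6   # diffusivity in air, m^2/s
u = 4.9      # wind speed at 10 m, m/s
T = 296.0    # ambient temperature, K
Fs = 642.0   # solar flux, J/m^2 s
dHv = 27.4e3 # heat of vaporization, J/mol

A = math.pi * d**2 / 4.0                        # pool area, m^2
N_Sc = nu / D                                   # Schmidt number
K = 0.00482 * N_Sc**-0.67 * u**0.78 * d**-0.11  # mass transfer coefficient, m/s

Q_mass = M * K * A * P_sat / (8.314 * T)        # mass-transfer-limited rate, kg/s
Q_solar = Fs * A * M / (dHv * 1000.0)           # solar-energy-limited rate, kg/s

beta = 0.193  # from Eq. (2.1.46), as given in the example (not recomputed here)
Q_net = Q_solar / (1.0 + beta) + Q_mass * beta / (1.0 + beta)

print(A, N_Sc, K, Q_mass, Q_solar, Q_net)
```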
Example 2.12: Pool Spread. Estimate the pool radius at 100 s for a continuous spill
of liquid water on an unconstrained flat surface. Assume a discharge rate of 1 liter/s
(0.001 m³/s) and that the water is at ambient temperature.
Data:
  Liquid density, ρ:    1000 kg/m³
  Liquid viscosity, μ:  0.001 kg/m s
Solution: The Wu and Schroy (1979) model presented in Eqs. (2.1.47) through
(2.1.50) will be used. From Eq. (2.1.47),

   r = [ 6 g t³ Q² ρ cos β sin β / (C π² μ) ]^(1/5)

Substituting the known values,

   r = [ (6)(9.8 m/s²)(100 s)³(0.001 m³/s)²(1000 kg/m³) / (C (3.14)²(0.001 kg/m s)) ]^(1/5) [cos β sin β]^(1/5)

     = [ 5.96 × 10⁶ (cos β sin β / C) ]^(1/5)

with r having units of meters.

The Reynolds number of the spreading pool is given by Eq. (2.1.48):

   Re = 2Qρ/(πrμ) = (2)(0.001 m³/s)(1000 kg/m³) / [(3.14) r (0.001 kg/m s)] = 637/r

The value of B is given by Eq. (2.1.50):

   B = 22.489 r⁴ ρ / (Q μ) = (22.489) r⁴ (1000 kg/m³) / [(0.001 m³/s)(0.001 kg/m s)]
     = 2.25 × 10¹⁰ r⁴
and the pool spread equation is determined using Eq. (2.1.49). The entire procedure
can easily be solved using a spreadsheet. The output is shown in Figure 2.24. The solution is done iteratively using a manual trial and error procedure. The resulting pool
radius is 64.1 m.
If a constant pool depth of 1 cm is assumed, the resulting pool diameter is 0.56 m,
significantly smaller than the Wu and Schroy (1979) result. A smaller pool diameter
would result in a smaller evaporation rate.
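Since the full pool spread equation, Eq. (2.1.49), is not reproduced in this excerpt, a sketch can still check the supporting quantities at the converged radius reported in Figure 2.24. The function names are ours.

```python
import math

# Example 2.12 (sketch): supporting quantities for the Wu and Schroy pool-spread iteration.
Q = 0.001    # volumetric spill rate, m^3/s
rho = 1000.0 # liquid density, kg/m^3
mu = 0.001   # liquid viscosity, kg/m s

def reynolds(r):
    # Eq. (2.1.48): Re = 2 Q rho / (pi r mu)
    return 2.0 * Q * rho / (math.pi * r * mu)

def B_param(r):
    # Eq. (2.1.50): B = 22.489 r^4 rho / (Q mu)
    return 22.489 * r**4 * rho / (Q * mu)

r = 64.08  # converged radius from the trial-and-error solution, m
print(reynolds(r), B_param(r))
```

At r = 64.08 m these evaluate to the Reynolds number and B value shown in the spreadsheet output.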
2.1.2.4. DISCUSSION
Resources Needed. A process engineer can perform all of the calculations in this section within a short period of time, particularly with the aid of a spreadsheet or a
PC-based mathematics package.
Example 2.12: Pool Spread via Wu and Schroy (1979) Model

Input Data:
  Time:                                100 s
  Volumetric spill rate:               0.001 m³/s
  Liquid density:                      1000 kg/m³
  Liquid viscosity:                    0.001 kg/m s

Calculated Results:
  Initial estimate of pool diameter:   64.08 m   <-- Trial and error solution
  B:                                   3.79E+17
  Beta:                                1.57
  Reynolds number:                     9.94
  Selected value of C:                 5
  Recalculated value of pool radius:   64.09 m
FIGURE 2.24. Spreadsheet output for Example 2.12: Pool spread.
Available Computer Codes
• PAVE (Program to Assess Volatile Emissions), Wu and Schroy, Monsanto Chemical Co.
  (St. Louis, MO); available from the Chemical Manufacturers Association
• Shaw and Briscoe, Safety and Reliability Directorate (Warrington, UK)
• SPILLS, M. T. Fleischer, Shell Development Company (Houston, TX)

Several integrated analysis packages also contain evaporation and pool models. These include:
• ARCHIE (Environmental Protection Agency, Washington, DC)
• EFFECTS-2 (TNO, Apeldoorn, The Netherlands)
• HGSYSTEM (LPOOL) (available from the EPA Bulletin Board)
• PHAST (DNV, Houston, TX)
• QRAWorks (PrimaTech, Columbus, OH)
• TRACE (Safer Systems, Westlake Village, CA)
• SAFETI (DNV, Houston, TX)
• SUPERCHEMS (Arthur D. Little, Cambridge, MA)
2.1.3. Dispersion Models
Accurate prediction of the atmospheric dispersion of vapors is central to CPQRA consequence
estimation. Typically, the dispersion calculations provide an estimate of the
area affected and the average vapor concentrations expected. The simplest calculations
require an estimate of the release rate of the gas (or the total quantity released), the
atmospheric conditions (wind speed, time of day, cloud cover), surface roughness,
temperature, pressure, and perhaps release diameter. More complicated models may
require additional detail on the geometry, discharge mechanism, and other information
on the release.
Three kinds of vapor cloud behavior and three different release-time modes can be
defined:
Vapor Cloud Behavior:
• Neutrally buoyant gas
• Positively buoyant gas
• Dense (or negatively) buoyant gas
Duration of Release:
• Instantaneous (puff)
• Continuous release (plumes)
• Time varying continuous
The well-known Gaussian models describe the behavior of neutrally buoyant gas
released in the wind direction at the wind speed. Dense gas releases will mix and be
diluted with fresh air as the gas travels downwind and eventually behave as a neutrally
buoyant cloud. Thus, neutrally buoyant models approximate the behavior of any vapor
cloud at some distance downwind from its release. Neutrally or positively buoyant
plumes and puffs have been studied for many years using Gaussian models, with particular
attention to power station emissions and other contaminants of interest in air pollution
studies. Gaussian plumes are discussed in
more detail in Section 2.1.3.1.
Dense gas plumes and puffs have received more recent attention with a number of
large-scale experiments and sophisticated models being developed in the past 20 years
(Hanna et al., 1990; API, 1992; AIChE/CCPS, 1995b, 1995c). Dense gas plumes are
discussed in more detail in Section 2.1.3.2.
Any organization planning to undertake CPQRA must be able to perform dispersion calculations for both neutral/positive buoyancy and dense gases and for plume and puff
releases. Which model to use is usually obvious, but there are no simple published
guidelines for model selection (AIChE/CCPS, 1987a, 1995b, 1996a). Borderline cases
include moderate sized, dense toxic gas releases or smaller scale dense flammable
releases. These may be handled adequately by the simpler neutral buoyancy models.
Most dense gas models have an automatic internal transition to neutral Gaussian dispersion when the density effects become negligible, gravity spreading is slowed down
to some fraction of wind speed, or Gaussian dispersion predicts more growth
(AIChE/CCPS, 1987a). Such models may be used for any release that is initially dense,
even if this phase is of short duration. However, Gaussian models applied to any dense
gas release will always produce a conservative result, that is, the computed downwind
distances, concentrations and area affected will be much larger than the actual result. In
some cases the Gaussian result may be orders of magnitude larger.
A large number of parameters affect the dispersion of gases. These include (1)
atmospheric stability, (2) wind speed, (3) local terrain effects, (4) height of the release
above the ground, (5) release geometry, that is, from a point, line, or area source, (6)
momentum of the material released, and (7) buoyancy of the material released.
Atmospheric Stability. Weather conditions at the time of the release have a major
influence on the extent of dispersion. Some of these effects are shown in Figure 2.25,
where the behavior of the plume changes markedly depending on the stability of the
atmosphere. Good reviews are available in Hanna et al. (1982), Pasquill and Smith
(1983), and Slade (1968). The primary factors are the wind speed and the atmospheric
FIGURE 2.25. Effect of atmospheric stability on plume dispersion: stable (fanning), stability
classes E, F; neutral below, stable above (fumigation); unstable (looping), stability classes A, B;
neutral (coning), stability class D; and stable below, neutral aloft (lofting). From Slade (1968).
stability. Atmospheric stability is an estimate of the turbulent mixing; stable atmospheric conditions lead to the least amount of mixing and unstable conditions to the
most.
The atmospheric conditions are normally classified according to six Pasquill stability classes (denoted by the letters A through F) as shown in Table 2.8. The stability
classes are correlated to wind speed and the quantity of sunlight. During the day,
increased wind speed results in greater atmospheric stability, while at night the reverse
is true. This is due to a change in vertical temperature profiles from day to night.
Within the stability classes, A represents the least stable conditions while F represents
the most stable.
Stability is commonly defined in terms of the atmospheric vertical temperature
gradient, but Hanna et al. (1982) suggest that a better approach be based on some
direct measure of turbulence (e.g., using the Richardson number). In the former, the
magnitude of the atmospheric temperature gradient is compared against the adiabatic
lapse rate (ALR 0.98°C/100 m), which is the rate of temperature change with height
for a parcel of dry air rising adiabatically. In neutral stability the gradient is equivalent
to the ALR. Stable conditions refer to a gradient less than the ALR (ultimately to a
temperature inversion) and unstable conditions to greater than the ALR. Most people
use the Pasquill letter classes because they have produced satisfactory results and are
easy to use. In CPQRA, wind speed and stability should be obtained from local meteorological records (Section 5.1) whenever possible. Where these stability data are not
available, Pasquill's simple table (Table 2.8) permits atmospheric stability to be estimated from local sunlight and wind speed conditions.
In the absence of detailed meteorological data for a particular site, two common
weather combinations (stability and wind speed) used in many CPQRA studies are D
at 5 m/s (11 mph) and F at 2 m/s (4.5 mph). The first is typical for windy daytime
situations and the latter for still nighttime conditions. Stability class D is typically the
most frequent, while class F is the second most frequent stability condition. A wind
speed of 1.0 to 1.5 m/s is frequently used with F stability since F stability may occur at
these low wind speeds. Table 2.8 can be used to select other representative weather
conditions.

TABLE 2.8. Meteorological Conditions Defining the Pasquill-Gifford Stability Classes
(Gifford, 1976)

                      Daytime insolation           Nighttime conditions          Anytime
Surface wind                                     Thin overcast or    ≤3/8       Heavy
speed, m/s     Strong    Moderate    Slight      >4/8 low cloud   cloudiness    overcast
<2             A         A-B         B           F                F             D
2-3            A-B       B           C           E                F             D
3-4            B         B-C         C           D                E             D
4-6            C         C-D         D           D                D             D
>6             C         D           D           D                D             D

A: Extremely unstable conditions       D: Neutral conditions
B: Moderately unstable conditions      E: Slightly stable conditions
C: Slightly unstable conditions        F: Moderately stable conditions

Insolation category is determined from the table below, where degree of cloudiness is
defined as that fraction of the sky above the local apparent horizon that is covered by
clouds:

                                     Solar elevation   Solar elevation     Solar elevation
Degree of cloudiness                 angle >60°        angle <60° but >35°  angle <35° but >15°
4/8 or less or any amount
of high, thin clouds                 Strong            Moderate            Slight
5/8 to 7/8 middle clouds
(2000 m to 5000 m base)              Moderate          Slight              Slight
5/8 to 7/8 low clouds
(less than 2000 m base)              Slight            Slight              Slight
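The daytime portion of Table 2.8 lends itself to a simple lookup. The sketch below encodes only those columns; the function and dictionary names are ours, and the handling of wind speeds falling exactly on a band boundary is an assumption, since the table leaves the boundaries ambiguous.

```python
# Sketch: daytime portion of Table 2.8 as a lookup.
DAYTIME = {  # (wind-speed band, m/s) -> {insolation: stability class}
    (0.0, 2.0): {"strong": "A", "moderate": "A-B", "slight": "B"},
    (2.0, 3.0): {"strong": "A-B", "moderate": "B", "slight": "C"},
    (3.0, 4.0): {"strong": "B", "moderate": "B-C", "slight": "C"},
    (4.0, 6.0): {"strong": "C", "moderate": "C-D", "slight": "D"},
    (6.0, float("inf")): {"strong": "C", "moderate": "D", "slight": "D"},
}

def daytime_stability(wind_speed, insolation):
    """Return the Pasquill-Gifford class for a daytime surface wind speed (m/s)."""
    for (lo, hi), classes in DAYTIME.items():
        if lo <= wind_speed < hi:
            return classes[insolation]
    raise ValueError("wind speed out of range")

print(daytime_stability(1.5, "strong"), daytime_stability(5.0, "slight"))
```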
Wind Speed. Wind speed is significant as any emitted gas will be diluted initially by
passing volumes of air. As the wind speed is increased, the material is carried downwind faster, but the material is also diluted faster by a larger quantity of air. Significant
local variations in wind speed and direction are possible due to terrain effects even over
distances of only a few miles. Data should be collected on-site with a dedicated meteorological tower.
Wind speed and direction are often presented in the form of a wind rose. These
show the wind patterns at a particular location. The wind rose is usually presented in
compass point form with each arm representing the frequency of wind from that direction (i.e., a north wind blows southward). Data sources are discussed in Section 5.4.2.
Wind data are normally quoted on the basis of 10 m height. Wind speeds are
reduced substantially within a few meters of ground due to frictional effects. As many
smaller discharges of dense materials remain near ground level, wind data should be
corrected from 10 m to that relevant for the actual release. An equation for the wind
speed profile is given for near-neutral and stable wind profiles in API (1996) and
AIChE/CCPS (1996a):

   u/u* = (1/k)[ln(z/z₀) + 4.5(z/L)]                                  (2.1.51)

where
   u is the wind speed (m/s)
   u* is the friction velocity, which is empirically derived (m/s)
   k is von Karman's constant, with a value of 0.41
   z is the height (m)
   z₀ is the surface roughness length parameter (m)
   L is the Monin-Obukhov length (m)
More complicated expressions are available for other atmospheric stability conditions (Hanna, 1982).
The friction velocity, u* is a measure of the frictional stress exerted by the ground
surface on the atmospheric flow. It is equal to about 10% of the wind speed at a height
of 10 m. The fraction increases as the surface roughness increases or as the boundary
layer becomes more unstable.
The Monin-Obukhov length, L, is positive during stable conditions (nighttime)
and is negative during unstable conditions (daytime). It is defined by

   L = −u*³ ρ C_p T / (k g H)                                        (2.1.52)

where g is the acceleration due to gravity (m/s²), T is the absolute temperature (K),
ρ is the density of air (kg/m³), C_p is the heat capacity of air (J/kg K), and H is the
surface heat flux (J/m² s). Values for the length, L, are given in Table 2.9.
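Equation (2.1.51) is straightforward to evaluate. The sketch below uses illustrative values: u* = 0.3 m/s and z₀ = 0.1 m are assumptions, with L = 50 m for stability class E taken from Table 2.9.

```python
import math

# Sketch of Eq. (2.1.51): near-neutral/stable wind profile
#   u(z) = (u*/k) [ln(z/z0) + 4.5 z/L]
K_VON_KARMAN = 0.41

def wind_speed(z, u_star, z0, L):
    return (u_star / K_VON_KARMAN) * (math.log(z / z0) + 4.5 * z / L)

# Stable night (class E): L = 50 m per Table 2.9; u* and z0 are assumed values.
u10 = wind_speed(10.0, u_star=0.3, z0=0.1, L=50.0)
u2 = wind_speed(2.0, u_star=0.3, z0=0.1, L=50.0)
print(u10, u2)
```

As expected, the speed at 2 m is substantially lower than the 10 m value.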
TABLE 2.9. Relation between the Monin-Obukhov Length, L, and Other
Meteorological Stability Conditions (AIChE/CCPS, 1996)

Description     Time and weather    Wind speed, u    Monin-Obukhov    Pasquill-Gifford
                                                     length, L        stability class
Very stable     Clear night         <3 m/s           10 m             F
Stable          Clear night         2-4 m/s          50 m             E
Neutral         Cloudy or windy     Any              |L| > 100 m      D
Unstable        Sunny               2-6 m/s          -50 m            B or C
Very unstable   Sunny               <3 m/s           -10 m            A
TABLE 2.10. Surface Roughness Parameter, z₀, for Use with Equation (2.1.51)

Terrain classification   Terrain description                                Surface roughness, z₀, meters
Highly urban             Centers of cities with tall buildings, very        3-10
                         hilly or mountainous area
Urban area               Centers of towns, villages, fairly level           1-3
                         wooded country
Residential area         Area with dense but low buildings, wooded          1
                         area, industrial site without large obstacles
Large refineries         Distillation columns and other tall equipment      1
                         pieces
Small refineries         Smaller equipment, over a smaller area             0.5
Cultivated land          Open area with great overgrowth, scattered         0.3
                         houses
Flat land                Few trees, long grass, fairly level grass plains   0.1
Open water               Large expanses of water, desert flats              0.001
Sea                      Calm open sea, snow-covered flat, rolling land     0.0001

Observed values for the surface roughness, z₀, are provided in Table 2.10. It is
recommended that the surface roughness length for large refineries be set to 1 m and for
small refineries to 0.5 m.
According to Eq. (2.1.51), for near-neutral conditions a plot of (ln z) versus u should
yield a straight line with intercept (ln z₀) and slope k/u*. This provides an effective
method to determine these parameters locally by measuring wind speeds at different heights.
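The fitting procedure suggested by this observation can be sketched with synthetic data: under neutral conditions (the z/L term dropped), u = (u*/k) ln(z/z₀), so a least-squares fit of u against ln z recovers u* from the slope and z₀ from the zero-speed intercept. All values below are synthetic, not from the text.

```python
import math

# Sketch: recovering u* and z0 from wind speeds measured at several heights,
# assuming the neutral logarithmic profile u = (u*/k) ln(z/z0).
k = 0.41
u_star_true, z0_true = 0.35, 0.1  # synthetic "truth"
heights = [2.0, 5.0, 10.0, 20.0]
speeds = [u_star_true / k * math.log(z / z0_true) for z in heights]

# Least-squares fit of u against ln z: slope = u*/k, and u = 0 where z = z0
x = [math.log(z) for z in heights]
n = len(x)
xbar, ybar = sum(x) / n, sum(speeds) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, speeds)) \
        / sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar

u_star = slope * k                  # friction velocity, m/s
z0 = math.exp(-intercept / slope)   # surface roughness length, m
print(u_star, z0)
```

With noise-free synthetic data the fit recovers the generating parameters exactly.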
If the second term in Eq. (2.1.51) containing the Monin-Obukhov length is set to
zero, then a simple and well-known logarithmic relation is obtained (API, 1996):

   u_z/u* = (1/k) ln(z/z₀)                                           (2.1.53)

Equation (2.1.53) can be simplified further to a power law relation if the velocity is
compared to a velocity at a fixed height (Hanna et al., 1982):

   u = u₁₀ (z/10)^p                                                  (2.1.54)

where p is a power coefficient (unitless) and u₁₀ is the wind speed at a height of 10 m.
TABLE 2.11. Wind Speed Correction Factor for Equation (2.1.54)

                       Power law atmospheric coefficient, p
Pasquill-Gifford
stability class        Urban        Rural
A                      0.15         0.07
B                      0.15         0.07
C                      0.20         0.10
D                      0.25         0.15
E                      0.40         0.35
F                      0.60         0.55
The power coefficient is a function of atmospheric stability and surface roughness.
Typical values are given in Table 2.11.
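Equation (2.1.54) with Table 2.11 gives a quick height correction. A sketch, with function and dictionary names of our choosing:

```python
# Sketch of Eq. (2.1.54): correcting a 10-m wind speed to another height,
# with the power coefficient p taken from Table 2.11.
P_COEFF = {  # stability class -> (urban, rural) power-law coefficient
    "A": (0.15, 0.07), "B": (0.15, 0.07), "C": (0.20, 0.10),
    "D": (0.25, 0.15), "E": (0.40, 0.35), "F": (0.60, 0.55),
}

def wind_at_height(u10, z, stability, terrain="rural"):
    p = P_COEFF[stability][0 if terrain == "urban" else 1]
    return u10 * (z / 10.0) ** p

# A 5 m/s reading at 10 m corrected down to 2 m for class D, rural terrain
u2 = wind_at_height(5.0, 2.0, "D")
print(u2)
```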
Local Terrain Effects. Terrain characteristics affect the mechanical mixing of the air as
it flows over the ground. Thus, the dispersion over a lake is considerably different from
the dispersion over a forest or a city of tall buildings. Most dispersion field data and
tests are in flat, rural terrains.
Height of the Release above the Ground. Figure 2.26 shows the effect of height on
the downwind concentrations due to a release. As the release height increases, the
ground concentration decreases since the resulting plume has more distance to mix
with fresh air prior to contacting the ground. Note that the release height only affects
the ground concentration—the concentration immediately downwind at the release
height is unchanged.
Release Geometry. An ideal release for Gaussian dispersion models would be from a
fixed point source. Real releases are more likely to occur as a line source (from an escaping jet of material), or as an area source (from a boiling pool of liquid).
FIGURE 2.26. Effect of release height on ground concentration. As the release height
increases, the distance the plume from a continuous release source must travel before
reaching the ground increases; the increased distance leads to greater dispersion and a
lower concentration at ground level.
FIGURE 2.27. The initial acceleration and buoyancy of the released material affect the plume
behavior. The release shown is a dense gas release exhibiting initial acceleration and dilution,
dominance of internal buoyancy (slumping), a transition region, and finally dominance of
ambient turbulence as the cloud disperses to a neutrally buoyant state.
Momentum of the Material Released and Buoyancy. A typical dense gas plume is
shown in Figure 2.27. Dense gases may also be released from a vent stack; such releases
exhibit a combination of dense and Gaussian behavior (Ooms et al., 1974), with initial
plume rise due to momentum, followed by plume bendover and sinking due to dense
gas effects. Far downwind from the release, due to mixing with fresh air, the plume will
behave as a neutrally buoyant cloud.
Since most releases are in the form of a jet rather than a plume, it is important to
assess the effects of initial momentum and air entrainment on the behavior of a jet.
Near its release point where the jet velocity differs greatly from the wind velocity, a jet
entrains ambient air due to shear (velocity difference), grows in size, and becomes
diluted. For a simple jet (neutral buoyancy), its upward momentum remains constant
while its mass increases. Therefore, if vertically released, the drag forces increase as the
surface area increases and eventually horizontal momentum dominates. The result is
that the jet becomes bent over at a certain distance and is dominated by the wind
momentum. If the jet has positive buoyancy (buoyant jet), the upward momentum will
increase and the initial momentum will become negligible compared to the momentum gained due to the buoyancy. Then, the jet will behave like a plume. The rises of
simple or buoyant jets, collectively called plume rises, have been studied by many
researchers and their formulas can be found in Briggs (1975,1984) or most reviews on
atmospheric diffusion (including Hanna et al., 1982).
For a dense or negatively buoyant jet, upward momentum will decrease as it travels. Finally it will reach a maximum height where the upward momentum disappears
and then will start to descend. This descending phase is like an inverted plume. Simple
formulas for the maximum rise, downwind distance to plume touchdown, and dilution
at the touchdown were derived by Hoot et al. (1973) and used in the VCDM Workbook (AIChE/CCPS, 1989a).
2.1.3.1. NEUTRAL AND POSITIVELY BUOYANT PLUME AND PUFF MODELS
Background
Purpose. Neutral and positively buoyant plume or puff models are used to predict average concentration and time profiles of flammable or toxic materials downwind of a
source based on the concept of Gaussian dispersion. Plumes refer to continuous emissions, and puffs to emissions that are short in duration compared with the travel time
(time for cloud to reach location of interest) or sampling (or averaging) time (normally
10 min).
Philosophy. Atmospheric diffusion is a random mixing process driven by turbulence in
the atmosphere. The concentration at any point downwind of a source is well approximated by a Gaussian concentration profile in both the horizontal and vertical dimensions. Gaussian models are well established with the original work undertaken by
Sutton (1953) and updated by Gifford (1976), Pasquill (1974), and Slade (1968).
Applications. The U.S. EPA uses Gaussian models extensively in its prediction of atmospheric dispersion of pollutants. Gaussian models are directly applicable in risk analyses
for neutral and positively buoyant emissions as the models have been validated over a
wide range of emission characteristics (Hanna et al., 1982) and downwind distances
(0.1 to 10 km). They may also be applied to smaller releases of dense gas emissions
where the dense phase of the dispersion process is relatively short compared with the
neutrally buoyant phase (e.g., smaller releases of toxic materials). Density has to be
checked at the touchdown of a dense jet for applicability of Gaussian models. Gaussian
models are not generally applicable to larger scale releases of dense materials since the
dense gas slumps toward the ground and is not dispersed and transported as rapidly
downwind as a neutrally buoyant cloud. For these types of releases a dense cloud model
is required.
The concentrations predicted by Gaussian models are time averages. Thus, local
concentrations might be greater than this average. This result is important when estimating dispersion of highly toxic or flammable materials where local concentration
fluctuations might have a significant impact on the consequences. The dispersion
models implicitly include an averaging time through the dispersion coefficients, since
the experiments to determine the coefficients were characterized by certain averaging
times (AIChE/CCPS, 1996a). AIChE/CCPS (1995c) defines the averaging time as the
"user specified time interval over which the instantaneous concentration, mass release
rate, or any other variable, is averaged." AIChE/CCPS (1995c) further states that
"With increased averaging time (i.e. increased event duration for an accidental release)
the plume from a point source meanders back and forth over a fixed receptor. As the
high concentration in an instanteous 'snapshot3 plume flaps back and forth, the time
averaged concentration will decrease on the plume centerline, and increase on the outer
fringes of the plume. At the same time, meandering will increase the intensity of concentration fluctuations everywhere across the plume, and produce longer periods of
zero concentration intermittancy near the plume centerline. To estimate the probability of exceeding toxic or flammable concentration thresholds these averaging time
effects must be accurately predicted." Most Pasquili-Gifford Gaussian models include
an implicit 10-minute averaging time.
Description
Hanna et al. (1982), Pasquill and Smith (1983), and Crowl and Louvar (1990) provide
good descriptions of plume and puff discharges. Another description, with a hazard
analysis orientation, is given by TNO (1979). Plume models are better defined than
puff models. This section highlights only the key features of such models; the reader
should refer to the references for further modeling details.
Gaussian dispersion is the most common method for estimating dispersion due to
a release of vapor. The method applies only for neutrally buoyant clouds and provides
an estimate of average downwind vapor concentrations. Since the concentrations predicted are time averages, it must be considered that local concentrations might be
greater than this average; this result is important when estimating dispersion of highly
toxic or flammable materials where local concentration fluctuations might have a significant impact on the consequences. Averaging time corrections can be applied.
A complete development of the fundamental equations is presented elsewhere
(Crowl and Louvar, 1990). The model begins by writing an equation for the conservation
of mass of the dispersing material:

   ∂C/∂t + ∂(u_j C)/∂x_j = 0                                         (2.1.55)

where C is the concentration of dispersing material (mass/volume); the subscript j
represents the summation over all three coordinate directions x, y, and z (unitless); and
u_j is the velocity of the air (length/time).
The difficulty with Eq. (2.1.55) is that it is impossible to determine the velocity u_j
at every point since an adequate turbulence model does not currently exist. The solution
is to rewrite the concentration and velocity in terms of an average and a stochastic
quantity: C = ⟨C⟩ + C′ and u_j = ⟨u_j⟩ + u_j′, where the brackets denote the average
value and the prime denotes the stochastic, or deviation, variable. It is also helpful to
define an eddy diffusivity, K_j (with units of area/time), by

   ⟨u_j′ C′⟩ = −K_j ∂⟨C⟩/∂x_j                                        (2.1.56)

By substituting the stochastic equations into Eq. (2.1.55), taking an average, and
then using Eq. (2.1.56), the following result is obtained:

   ∂⟨C⟩/∂t + ⟨u_j⟩ ∂⟨C⟩/∂x_j = ∂/∂x_j (K_j ∂⟨C⟩/∂x_j)                (2.1.57)
The problem with Eq. (2.1.57) is that the eddy diffusivity changes with position,
time, wind velocity, and prevailing atmospheric conditions, among other factors, and
must be specified before the equation can be solved. This approach, while important
theoretically, does not provide a practical framework for the solution of vapor dispersion
problems.
Sutton (1953) developed a solution to the above difficulty by defining dispersion
coefficients, σx, σy, and σz, defined as the standard deviations of the concentrations in
the downwind, crosswind, and vertical (x, y, z) directions, respectively. The dispersion
coefficients are a function of atmospheric conditions and the distance downwind from
the release. The stability classes are shown in Table 2.8.
Pasquill (1962) recast Eq. (2.1.57) in terms of the dispersion coefficients and
developed a number of useful solutions based on either continuous (plume) or
instantaneous (puff) releases. Gifford (1961) developed a set of correlations for the
dispersion coefficients based on available data. The resulting model has become known
as the Pasquill-Gifford model.
Dispersion coefficients σy and σz for diffusion of Gaussian plumes are available as
graphs (Figure 2.28). Predictive formulas for these are available in Hanna et al. (1982),
Lees (1980), and TNO (1979) and are given in Table 2.12. Use of such formulas allows
for easy application of spreadsheets.
Puff emissions have different spreading characteristics from continuous plumes,
and different dispersion coefficients (σy and σz) are required, as presented in Figure
FIGURE 2.28. Dispersion coefficients for a continuous release or plume. The top two graphs
apply only for rural release conditions and the bottom two graphs apply only for urban
release conditions.
TABLE 2.12. Recommended Equations for Pasquill-Gifford Dispersion Coefficients
for Plume Dispersion*

Pasquill-Gifford
stability class      σy (m)                       σz (m)

Rural Conditions
A                    0.22x(1 + 0.0001x)^(-1/2)    0.20x
B                    0.16x(1 + 0.0001x)^(-1/2)    0.12x
C                    0.11x(1 + 0.0001x)^(-1/2)    0.08x(1 + 0.0002x)^(-1/2)
D                    0.08x(1 + 0.0001x)^(-1/2)    0.06x(1 + 0.0015x)^(-1/2)
E                    0.06x(1 + 0.0001x)^(-1/2)    0.03x(1 + 0.0003x)^(-1)
F                    0.04x(1 + 0.0001x)^(-1/2)    0.016x(1 + 0.0003x)^(-1)

Urban Conditions
A-B                  0.32x(1 + 0.0004x)^(-1/2)    0.24x(1 + 0.001x)^(+1/2)
C                    0.22x(1 + 0.0004x)^(-1/2)    0.20x
D                    0.16x(1 + 0.0004x)^(-1/2)    0.14x(1 + 0.0003x)^(-1/2)
E-F                  0.11x(1 + 0.0004x)^(-1/2)    0.08x(1 + 0.0015x)^(-1/2)

* From AIChE/CCPS (1996). The downwind distance, x, has units of meters.
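The rural formulas of Table 2.12 drop directly into a spreadsheet or a short script. The sketch below (Python; the function and dictionary names are my own) evaluates σy and σz for a rural plume:

```python
# Rural plume dispersion coefficients of Table 2.12 (AIChE/CCPS, 1996).
# Keys are Pasquill-Gifford stability classes; x is downwind distance in m.
RURAL_PLUME = {
    "A": (lambda x: 0.22 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.20 * x),
    "B": (lambda x: 0.16 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.12 * x),
    "C": (lambda x: 0.11 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.08 * x * (1 + 0.0002 * x) ** -0.5),
    "D": (lambda x: 0.08 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.06 * x * (1 + 0.0015 * x) ** -0.5),
    "E": (lambda x: 0.06 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.03 * x * (1 + 0.0003 * x) ** -1.0),
    "F": (lambda x: 0.04 * x * (1 + 0.0001 * x) ** -0.5,
          lambda x: 0.016 * x * (1 + 0.0003 * x) ** -1.0),
}

def plume_sigmas(stability, x):
    """Return (sigma_y, sigma_z) in meters for a rural plume."""
    f_y, f_z = RURAL_PLUME[stability]
    return f_y(x), f_z(x)
```

For F stability at 500 m downwind this returns approximately 19.5 m and 7.0 m, the values used in Example 2.13 later in this section.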
FIGURE 2.29. Dispersion coefficients for an instantaneous release or puff. These apply only
for rural release conditions and are developed based on limited data.
2.29, with equations provided in Table 2.13. Experimental data for puff emissions are
much more limited than for plumes, and thus puff models have greater uncertainty.
Also, because of a lack of data, it is often assumed that σx = σy. Hanna et al. (1982)
provide some guidance on appropriate values of σy and σz based on the formula of
Batchelor (1952). TNO (1979) provides more detailed guidance, with formulas to
predict σy and σz for both continuous and puff emissions. The TNO puff σy values are
taken to be one-half those for continuous plumes, while the σz values are unaltered.
TABLE 2.13. Recommended Equations for Pasquill-Gifford Dispersion Coefficients
for Puff Dispersion*

Stability class      σy or σx (m)      σz (m)
A                    0.18x^0.92        0.60x^0.75
B                    0.14x^0.92        0.53x^0.73
C                    0.10x^0.92        0.34x^0.71
D                    0.06x^0.92        0.15x^0.70
E                    0.04x^0.92        0.10x^0.65
F                    0.02x^0.89        0.05x^0.61

* From AIChE/CCPS (1996). The distance downwind, x, and the dispersion coefficients have units of meters.
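The puff power laws of Table 2.13 can be coded the same way. A sketch (Python; names are my own, and σx is taken equal to σy as the text suggests):

```python
# Puff dispersion coefficients of Table 2.13 (AIChE/CCPS, 1996).
# Each entry is (a_y, b_y, a_z, b_z) for sigma = a * x**b, with x in meters.
PUFF_COEFF = {
    "A": (0.18, 0.92, 0.60, 0.75),
    "B": (0.14, 0.92, 0.53, 0.73),
    "C": (0.10, 0.92, 0.34, 0.71),
    "D": (0.06, 0.92, 0.15, 0.70),
    "E": (0.04, 0.92, 0.10, 0.65),
    "F": (0.02, 0.89, 0.05, 0.61),
}

def puff_sigmas(stability, x):
    """Return (sigma_y, sigma_z) in meters; sigma_x is assumed equal to sigma_y."""
    a_y, b_y, a_z, b_z = PUFF_COEFF[stability]
    return a_y * x ** b_y, a_z * x ** b_z
```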
Puff Model. The puff model describes near instantaneous releases of material. The
solution depends on the total quantity of material released, the atmospheric conditions,
the height of the release above ground, and the distance from the release. The equation
for the average concentration for this case is (Turner, 1970)
⟨C⟩(x, y, z, t) = [G* / (√2 π^(3/2) σx σy σz)] exp[-(1/2)(y/σy)^2]
                  × (1/2){exp[-(1/2)((z - H)/σz)^2] + exp[-(1/2)((z + H)/σz)^2]}    (2.1.58)

where
⟨C⟩ is the time average concentration (mass/volume)
G* is the total mass of material released (mass)
σx, σy, and σz are the dispersion coefficients in the x, y, and z directions (length)
y is the cross-wind direction (length)
z is the distance above the ground (length)
H is the release height above the ground (length)
Equation (2.1.58) assumes dispersion from an elevated point source with no
ground absorption or reaction.
Here x is the downwind direction, y is the crosswind direction, and z is the height
above ground level. The initial release occurs at a height H above the point
(x, y, z) = (0, 0, 0) on the ground, and the center of the coordinate system remains at
the center of the puff as it moves downwind. The center of the puff is located at x = ut.
Notice that the wind speed does not appear explicitly in Eq. (2.1.58). It is implicit
through the dispersion coefficients since these are a function of distance downwind
from the initial release and the atmospheric stability conditions.
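Eq. (2.1.58) is a one-liner in code. The sketch below (Python; the function name is my own) evaluates the puff concentration in the moving, puff-centered frame:

```python
import math

def puff_concentration(G, sigma_x, sigma_y, sigma_z, y, z, H):
    """Eq. (2.1.58): average concentration in an instantaneous puff of total
    mass G released at height H, in puff-centered coordinates."""
    base = G / (math.sqrt(2.0) * math.pi ** 1.5 * sigma_x * sigma_y * sigma_z)
    crosswind = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = 0.5 * (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                      + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return base * crosswind * vertical
```

For a ground-level release and receptor (H = z = y = 0) the vertical term reduces to one, recovering the simplified form used in Example 2.15 later in this section.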
If the coordinate system is fixed at the release point, then Eq. (2.1.58) is multiplied
by the factor

exp[-(1/2)((x - ut)/σx)^2]    (2.1.59)

where u is the wind speed (length/time), t is the time since the release (time), and x is
the downwind direction (length). The quantity (x - ut) is the distance from the center
of the puff.
A typical problem is to determine the downwind distance from a release to a fixed
concentration. Since the downwind distance is not known, the dispersion coefficients
cannot be determined. The solution for this case requires a trial and error solution
(refer to the example problem at the end of this section on the puff).
Another typical requirement is to determine the cloud boundary at a fixed concentration.
These boundaries, or lines, are called isopleths. The locations of these are found
by dividing the equation for the centerline concentration, that is, ⟨C⟩(x,0,0,t), by the
general ground level concentration provided by Eq. (2.1.58). The resulting equation is
solved for y to give

y = σy {2 ln[⟨C⟩(x,0,0,t) / ⟨C⟩(x,y,0,t)]}^(1/2)    (2.1.60)

where y is the off-center distance to the isopleth (length), ⟨C⟩(x,0,0,t) is the downwind
centerline concentration (mass/volume), and ⟨C⟩(x,y,0,t) is the concentration at the
isopleth.
Equation (2.1.60) applies to ground level and elevated releases.
The procedure to determine an isopleth at any specified time is
1. Specify a concentration, ⟨C⟩*, for the isopleth.
2. Determine the concentrations, ⟨C⟩(x,0,0,t), along the x-axis directly downwind
   from the release. Define the boundary of the cloud along this axis.
3. Set ⟨C⟩(x,y,0,t) = ⟨C⟩* in Eq. (2.1.60) and determine the value of y at each
   centerline point determined in step 2. Plot the y values to define the isopleth,
   using symmetry around the centerline.
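Step 3 of this procedure is a direct application of Eq. (2.1.60). A minimal sketch (Python; the function name and the zero-width convention for points outside the cloud are my own choices):

```python
import math

def isopleth_half_width(sigma_y, c_centerline, c_isopleth):
    """Eq. (2.1.60): off-center distance y to the isopleth, given the
    centerline concentration at the same downwind distance and time."""
    if c_centerline <= c_isopleth:
        return 0.0  # isopleth is closed off at this downwind distance
    return sigma_y * math.sqrt(2.0 * math.log(c_centerline / c_isopleth))
```

The isopleth is then plotted at ±y about the centerline, using the symmetry noted in step 3.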
Plume Model. The plume model describes a continuous release of material. The solution
depends on the rate of release, the atmospheric conditions, the height of the release
above ground, and the distance from the release. This geometry is shown in Figure
2.30. In this case the wind is moving at a constant speed, u, in the x direction. The
equation for the average concentration for this case is (Turner, 1970)
⟨C⟩(x, y, z) = [G / (2π σy σz u)] exp[-(1/2)(y/σy)^2]
               × {exp[-(1/2)((z - H)/σz)^2] + exp[-(1/2)((z + H)/σz)^2]}    (2.1.61)

where
⟨C⟩(x, y, z) is the average concentration (mass/volume)
G is the continuous release rate (mass/time)
σx, σy, and σz are the dispersion coefficients in the x, y, and z directions (length)
u is the wind speed (length/time)
y is the cross-wind direction (length)
z is the distance above the ground (length)
H is the height of the source above ground level plus plume rise (length)
FIGURE 2.30. Three-dimensional view of Gaussian dispersion from an elevated continuous
emission source. From Turner (1970).
Equation (2.1.61) assumes dispersion from an elevated point source with no
ground absorption or reaction.
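A sketch of Eq. (2.1.61) in code (Python; the function name is my own):

```python
import math

def plume_concentration(G, sigma_y, sigma_z, u, y, z, H):
    """Eq. (2.1.61): average concentration in a continuous Gaussian plume
    released at effective height H into a constant wind of speed u."""
    base = G / (2.0 * math.pi * sigma_y * sigma_z * u)
    crosswind = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return base * crosswind * vertical
```

For a ground-level release and receptor (H = z = y = 0) this reduces to G/(π σy σz u), the form used in the example problems of this section.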
For releases at ground level, the maximum concentration occurs at the release
point. For releases above ground level, the maximum ground concentration occurs
downwind along the centerline. The location of the maximum is found using,
σz = H / √2    (2.1.62)

and the maximum concentration is found from

⟨C⟩max = [2G / (e π u H^2)](σz / σy)    (2.1.63)
The procedure for finding the maximum concentration and the downwind distance for
the maximum is
1. Use Eq. (2.1.62) to determine the dispersion coefficient, σz, at the maximum.
2. Use Figure 2.28 or Table 2.12 to determine the downwind location of the maximum.
3. Use Eq. (2.1.63) to determine the maximum concentration.
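The two equations of this procedure can be sketched as follows (Python; function names are my own; step 2, inverting the σz expression for the downwind distance, still relies on Table 2.12 or Figure 2.28):

```python
import math

def sigma_z_at_max(H):
    """Eq. (2.1.62): sigma_z at the point of maximum ground concentration."""
    return H / math.sqrt(2.0)

def max_ground_concentration(G, u, H, sigma_y, sigma_z):
    """Eq. (2.1.63): maximum ground-level concentration of an elevated plume;
    sigma_y and sigma_z are evaluated at the location of the maximum."""
    return (2.0 * G / (math.e * math.pi * u * H ** 2)) * (sigma_z / sigma_y)
```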
Equations (2.1.58) and (2.1.61) are applicable to ideal point sources from which
the vapors are released. More complex formulas for other types of sources can be found
in Slade (1968). At the source, the simple point-source models have concentration
values of infinity and therefore will greatly overpredict concentrations in the near field.
To apply them to a real source with given dimensions, the concept of a virtual point
source is introduced. The virtual source is located upwind from the real source such
that if a plume were originated at the virtual source it would disperse and match the
dimensions or concentration at the real source. However, to achieve this, a concentration at a centerline point directly downwind must be known.
There are several ways to determine the location of the virtual source for a plume:
1. Assume that all of the dispersion coefficients become equal at the virtual source.
   Then, from Eq. (2.1.61)

   σy(yv) = σz(zv) = [G / (π u ⟨C⟩)]^(1/2)    (2.1.64)

   The virtual distances, yv and zv, determined using Eq. (2.1.64) are added to the
   actual downwind distance, x, to determine the dispersion coefficients, σy and σz,
   for subsequent computations.
2. Assume that xv = yv = zv. Then, from Eq. (2.1.61)

   σy(xv) σz(xv) = G / (π u ⟨C⟩)    (2.1.65)

   xv is determined from Eq. (2.1.65) using a trial and error approach. The effective
   distance downwind for subsequent calculations using Eq. (2.1.61) is determined
   from (x + xv).
3. For large downwind distances, the virtual distances will be negligible and the
point source models are used directly.
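The trial and error solution of Eq. (2.1.65) is easily automated. The sketch below (Python; names are my own) assumes, purely for illustration, the rural F-stability formulas of Table 2.12 and solves for xv by bisection:

```python
import math

def sigma_product_F(x):
    """sigma_y * sigma_z for rural F stability (Table 2.12), x in meters."""
    sigma_y = 0.04 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.016 * x * (1 + 0.0003 * x) ** -1.0
    return sigma_y * sigma_z

def virtual_distance(G, u, C, lo=1.0, hi=1.0e5):
    """Solve Eq. (2.1.65), sigma_y(xv)*sigma_z(xv) = G/(pi*u*C), by bisection.
    G is the release rate, u the wind speed, C the known source concentration."""
    target = G / (math.pi * u * C)
    for _ in range(200):  # sigma_product_F is monotonically increasing
        mid = 0.5 * (lo + hi)
        if sigma_product_F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With G = 0.1 kg/s, u = 2 m/s, and ⟨C⟩ = 1.172 × 10⁻⁴ kg/m³ (the Example 2.13 values), the solver returns approximately 500 m, consistent with that example.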
The puff and plume model equations can be equated to determine the downwind
distance for a transition criterion from the puff to a plume.
Logic Diagram. A logic diagram for the calculation of a plume or puff dispersion case
using a Gaussian dispersion model is given in Figure 2.31.
Theoretical Foundation. Gaussian models represent well the random nature of turbulence.
The dispersion coefficients σy and σz are empirically based, but results agree as
well with experimental data as those of other, more theoretically based models. They are
normally limited to predictions between 0.1 and 10 km. The lower limit allows for flow
establishment and avoids the numerical problems of the point-source models, which,
without the introduction of virtual sources, can predict concentrations greater than
100% near the source.
Input Requirements and Availability. Input requirements for Gaussian plume or
puff modeling are straightforward. The source emission in terms of mass rate (plume)
or mass (puff) must be defined. Wind speed and atmospheric stability must be specified. Wind speed should be appropriate for the height of the centerline. The standard
equation assumes a point source with no deposition, reaction, or absorption of vapors.
Alternative equations exist for line, area and volume sources, with deposition, reaction
or absorption, if relevant (Pasquill and Smith, 1983; Turner, 1970).
Output. The output of plume models is the time averaged concentration at specific
locations (in the three spatial coordinates: x, y, z) downwind of the source. For toxic or
[Figure 2.31 is a flowchart. Its inputs are the definition of the source (release rate or
total mass, release elevation, source type: point, line, area), local information (wind
speed, atmospheric stability, urban or rural terrain), and a specified isopleth
concentration. Puff branch: specify a time, determine the puff location, calculate
centerline concentrations, determine the isopleth locations, and determine the isopleth
area. Plume branch: specify the location of interest (x, y, z), calculate centerline
concentrations, determine the isopleth location, repeat over additional spatial steps to
define the cloud shape, and determine the isopleth area.]
FIGURE 2.31. Logic diagram for Gaussian dispersion.
flammable clouds it may be desired to plot a particular isopleth corresponding to a concentration of interest (e.g., fixed by toxic load or flammable concentration). This
isopleth usually takes the form of a skewed ellipse. It is usually easiest to computerize
the model and determine the contour numerically.
Puff models generate time varying output, and individual puffs can be followed to
consider the effects of wind changes. At every point (x, y, z) downwind from the point
of release, there will be a unique concentration versus time profile.
Simplified Approaches. The Pasquill-Gifford Gaussian models are a simplified
approach to dispersion modeling. They are sometimes used to get a first estimate for
dense gas dispersion, but the mechanisms differ substantially (Section 2.1.3.2). Results
from one such model are shown in Figures 2.32 to 2.33 for the downwind distance to a
specified concentration and the total isopleth area (all dimensionless) as a function of a
scaled variable, L*. As evidenced in those figures, the use of dimensionless variables
allows plotting the generic physical behavior on a single graph. By defining a scaled
length,

L* = [G / (u ⟨C⟩*)]^(1/2)    (2.1.66)

a dimensionless downwind distance,

x* = x / L*    (2.1.67)

and a dimensionless area,

A* = A / (L*)^2    (2.1.68)

nomographs can be developed for determining the downwind distance and the
total area affected at the concentration of interest, ⟨C⟩*. Figures 2.32 and 2.33 can be
readily curve fit, with the resulting equations provided in Table 2.14.
Example Problems
Example 2.13: Plume Release 1. Determine the concentration in ppm 500 m downwind from a 0.1 kg/s ground release of a gas. The gas has a molecular weight of 30.
Assume a temperature of 298 K, a pressure of 1 atm, F stability, with a 2 m/s wind
speed. The release occurs in a rural area.
FIGURE 2.32. Dimensionless Gaussian dispersion model output for the distance to a particular
concentration. This applies for rural release only.
FIGURE 2.33. Dimensionless Gaussian dispersion model output for the impact isopleth area.
This applies for rural release only.
TABLE 2.14. Curve Fit Equations for Downwind Reach and Isopleth Area. These
Values Are Used in the Equation Form:

ln y = C0 + C1 ln(L*) + C2 [ln(L*)]^2 + C3 [ln(L*)]^3

y = x* = x/L*
  Stability class B:  C0 = 1.28868,  C1 = 0.037616,   C2 = -0.0170972,     C3 = 0.00367183
  Stability class D:  C0 = 2.00661,  C1 = 0.016541,   C2 = 1.42451 x 10^-4, C3 = 0.0029
  Stability class F:  C0 = 2.76837,  C1 = 0.0340247,  C2 = 0.0219798,      C3 = 0.00226116

y = A* = A/(L*)^2
  Stability class B:  C0 = 1.35167,  C1 = 0.0288667,  C2 = -0.0287847,     C3 = 0.0056558
  Stability class D:  C0 = 1.86243,  C1 = 0.0239251,  C2 = -0.00704844,    C3 = 0.00503442
  Stability class F:  C0 = 2.75493,  C1 = 0.0185086,  C2 = 0.0326708,      C3 = 0.00392425
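The curve fits of Table 2.14 are straightforward to evaluate. A sketch (Python; the dictionary layout is mine) for the dimensionless reach x*; the area fit A* works identically with the second set of coefficients:

```python
import math

# Table 2.14 coefficients for y = x* = x/L* (rural conditions only).
REACH_FIT = {
    "B": (1.28868, 0.037616, -0.0170972, 0.00367183),
    "D": (2.00661, 0.016541, 1.42451e-4, 0.0029),
    "F": (2.76837, 0.0340247, 0.0219798, 0.00226116),
}

def dimensionless_reach(stability, L_star):
    """Evaluate ln y = C0 + C1 ln L* + C2 (ln L*)**2 + C3 (ln L*)**3."""
    c0, c1, c2, c3 = REACH_FIT[stability]
    ln_L = math.log(L_star)
    return math.exp(c0 + c1 * ln_L + c2 * ln_L ** 2 + c3 * ln_L ** 3)
```

The physical downwind distance is then x = x* L*, with L* from Eq. (2.1.66).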
Solution. This is a continuous release of material and is modeled using Eq.
(2.1.61) for a plume. Assuming a ground level release (H = 0), a location on the
ground (z = 0), and a position directly downwind from the release (y = 0), Eq.
(2.1.61) reduces to

⟨C⟩(x, 0, 0) = G / (π σy σz u)

For a location 500 m downwind, from either Table 2.12 or Figure 2.28, for F-stability
conditions, σy = 19.52 m and σz = 6.96 m. Substituting into the above equation,

⟨C⟩(500 m, 0, 0) = (0.1 kg/s) / [(3.14)(19.52 m)(6.96 m)(2 m/s)] = 1.17 x 10^-4 kg/m^3
This concentration is 117 mg/m^3. To convert to ppm, the following equation is used:

C(ppm) = (0.08206 L atm / gm-mole K)(T / PM) x C(mg/m^3)

The result is 95 ppm.
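The arithmetic of this example can be checked in a few lines (Python; the σ values are the F-stability figures quoted above):

```python
import math

G = 0.1                            # release rate, kg/s
u = 2.0                            # wind speed, m/s
sigma_y, sigma_z = 19.52, 6.96     # F stability at 500 m (Table 2.12)

c = G / (math.pi * sigma_y * sigma_z * u)    # ground-level centerline, kg/m**3
c_mg = c * 1.0e6                             # mg/m**3, about 117

# mg/m**3 -> ppm via the ideal gas law (T = 298 K, P = 1 atm, M = 30)
c_ppm = c_mg * (0.08206 * 298.0) / (1.0 * 30.0)   # about 95 ppm
```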
This calculation is readily implemented via spreadsheet. The output is shown in
Figure 2.34. The spreadsheet solution enables the user to specify a release height and
any location in (x, y, z) space. Furthermore, the spreadsheet prints results for all stability classes. Note that the concentration is reduced to 8 ppm for urban conditions with
F-stability.
Example 2.14: Plume Release 2. What continuous release of gas (molecular weight
of 30) is required to result in a concentration of 0.5 ppm at 300 m directly downwind
on the ground? Also estimate the total area affected. Assume that the release occurs at
ground level and that the atmospheric conditions are worst case.
Example 2.13: Plume Release #1

Input Data:
  Release rate:              0.1 kg/s
  Molecular weight:          30
  Temperature:               298 K
  Pressure:                  1 atm
  Release height:            0 m
  Distance downwind (X):     500 m
  Distance off wind (Y):     0 m
  Distance above ground (Z): 0 m

Calculated Results:

RURAL CONDITIONS:
  Stability class:             A         B         C         D         E         F
  Assumed wind speed (m/s):    0.1       0.1       2         3         2         2
  Sigma y (m):                 107.35    78.07     53.67     39.04     29.28     19.52
  Sigma z (m):                 100.00    60.00     38.14     22.68     13.04     6.96
  Concentration (kg/m**3):     2.97E-05  6.80E-05  7.77E-06  1.20E-05  4.17E-05  1.17E-04
  Concentration (mg/m**3):     29.65     67.95     7.77      11.99     41.68     117.22
  Concentration (PPM):         24.17     55.39     6.34      9.77      33.97     95.55

URBAN CONDITIONS:
  Stability class:             A-B       C         D         E-F
  Assumed wind speed (m/s):    0.1       2         3         2
  Sigma y (m):                 146.06    100.42    73.03     50.21
  Sigma z (m):                 146.97    100.00    44.27     30.24
  Concentration (kg/m**3):     1.48E-05  1.58E-06  3.28E-06  1.05E-05
  Concentration (mg/m**3):     14.83     1.58      3.28      10.48
  Concentration (PPM):         12.09     1.29      2.68      8.55

FIGURE 2.34. Spreadsheet output for Example 2.13: Plume Release 1.
Solution. From Eq. (2.1.61), with H = 0, z = 0, and y = 0,

⟨C⟩(x, 0, 0) = G / (π σy σz u)

Worst case atmospheric conditions are selected to maximize ⟨C⟩. This occurs with
minimum dispersion coefficients and minimum wind speed, u, within a stability class.
By inspection of Figure 2.28 and Table 2.8, this occurs with F-stability and u = 2 m/s.
At 300 m = 0.3 km, from Figure 2.28, σy = 11.8 m and σz = 4.4 m. The concentration in
ppm is converted to mg/m^3 by application of the ideal gas law. A pressure of 1 atm and
temperature of 298 K are assumed.

C(mg/m^3) = (gm-mole K / 0.08206 L atm)(PM / T) x C(ppm)

Using a molecular weight of 30 gm/gm-mole, the above equation gives a concentration
of 0.61 mg/m^3. The release rate required is computed directly:

G = ⟨C⟩ π σy σz u = (0.61 mg/m^3)(3.14)(11.8 m)(4.4 m)(2 m/s) = 201 mg/s
This is a very small release rate and demonstrates that it is much more effective to
prevent the release than to mitigate it after the fact.
The spreadsheet output for this part of the example problem is shown in Figure
2.35. The spreadsheet solution enables the user to specify a release height and any location in (x,y,z) space. Furthermore, the spreadsheet prints results for all stability classes.
The area affected is determined from Figure 2.33. For this case,

L* = [(1.99 x 10^-4 kg/s) / ((2 m/s)(0.61 x 10^-6 kg/m^3))]^(1/2) = 12.8 m

From Figure 2.33, A* = 20, and it follows that

A = A* (L*)^2 = (20)(12.8 m)^2 = 3277 m^2
Example 2.15: Puff Release. A gas with a molecular weight of 30 is used in a particular process. A source model study indicates that for a particular accident outcome 1.0
kg of gas will be released instantaneously. The release will occur at ground level. The
plant fence line is 500 m away from the release.
a. Determine the time required after the release for the center of the puff to reach
the plant fence line. Assume a wind speed of 2 m/s.
b. Determine the maximum concentration of the gas reached outside the fence
line.
c. Determine the distance the cloud must travel downwind to disperse the cloud
to a maximum concentration of 0.5 ppm. Use the stability conditions of part b.
d. Determine the width of the cloud, assuming a 0.5 ppm boundary, at a point
5 km directly downwind on the ground. Use the stability conditions of part b.
Example 2.14: Plume Release #2

Input Data:
  Target concentration:      0.5 ppm
  Molecular weight:          30
  Temperature:               298 K
  Pressure:                  1 atm
  Release height:            0 m
  Distance downwind (X):     300 m
  Distance off wind (Y):     0 m
  Distance above ground (Z): 0 m

Calculated Results:
  Target concentration: 0.6134 mg/m**3 = 6.1E-07 kg/m**3

RURAL CONDITIONS:
  Stability class:             A         B         C         D         E         F
  Assumed wind speed (m/s):    0.1       0.1       2         3         2         2
  Sigma y (m):                 65.03     47.30     32.52     23.65     17.74     11.82
  Sigma z (m):                 60.00     36.00     23.31     14.95     8.26      4.40
  Release rate (kg/s):         7.52E-04  3.28E-04  2.92E-03  2.04E-03  5.64E-04  2.01E-04
  Release rate (mg/s):         751.92    328.11    2921.31   2043.60   564.41    200.68

URBAN CONDITIONS:
  Stability class:             A-B       C         D         E-F
  Assumed wind speed (m/s):    0.1       2         3         2
  Sigma y (m):                 90.71     62.36     45.36     31.18
  Sigma z (m):                 82.09     60.00     30.47     19.93
  Release rate (kg/s):         1.44E-03  1.44E-02  7.99E-03  2.40E-03
  Release rate (mg/s):         1435.03   14421.47  7989.49   2395.28

FIGURE 2.35. Spreadsheet output for Example 2.14: Plume Release 2.
Solution
a. The time required after the release for the puff to reach the fence line is given by

   t = x/u = (500 m)/(2 m/s) = 250 s = 4.2 min

   This leaves very little time for emergency warning or response.

b. The maximum concentration will occur at the center of the puff directly downwind
   from the release. This concentration is given by Eq. (2.1.58), assuming a
   release on the ground,

   ⟨C⟩(x, 0, 0, t) = G* / (√2 π^(3/2) σx σy σz)

   The stability conditions are selected to maximize ⟨C⟩ in the equation above.
   This requires dispersion coefficients of minimum value. From Figure 2.29, this
   occurs under F stability with a minimum wind speed of 2 m/s. At a distance
   downwind of 500 m, from Figure 2.29 or Table 2.13, σy = 6.1 m and σz = 2.2 m.
   Also assume σx = σy. Substituting into the equation above,

   ⟨C⟩(x, 0, 0, t) = (1.0 kg) / [√2 (3.14)^(3/2) (6.1 m)^2 (2.2 m)]
                   = 1.55 x 10^-3 kg/m^3 = 1550 mg/m^3

   Assuming a pressure of 1 atm and a temperature of 298 K, this converts to
   1263 ppm.
The spreadsheet output for parts a and b of this problem is provided in
Figure 2.36. The spreadsheet provides additional capability for specifying the
release height and any downwind location.
c. The concentration at the center of the puff is given by the equation above. In
   this case the dispersion coefficients are not known, since the downwind distance
   is not specified. For this gas, 0.5 ppm = 0.613 mg/m^3. Substituting the known
   quantities,

   0.61 x 10^-6 kg/m^3 = (1.0 kg) / [√2 (3.14)^(3/2) σy^2 σz]

   σy^2 σz = 2.07 x 10^5 m^3

   This equation is satisfied at the correct distance from the release point. The
   equations for the dispersion coefficients from Table 2.13 are substituted and
   solved for x. The result is

   (0.02x^0.92)^2 (0.05x^0.61) = 2.07 x 10^5

   x = 12.2 km
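This trial and error can be scripted rather than iterated by hand. A sketch (Python) using simple bisection on the residual, with the σ expressions used in the worked solution (σy = 0.02x^0.92, σz = 0.05x^0.61):

```python
# Bisection on sigma_y**2 * sigma_z - 2.07e5 = 0, using the F-stability
# power laws from the worked solution above (x in meters).
def residual(x):
    return (0.02 * x ** 0.92) ** 2 * (0.05 * x ** 0.61) - 2.07e5

lo, hi = 100.0, 1.0e6          # residual is negative at lo, positive at hi
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)            # about 12,200 m, i.e., 12.2 km
```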
Example 2.15a,b: Puff Release
This part determines the concentration downwind at a specified point (X,Y,Z).

Input Data:
  Total release:             1 kg
  Molecular weight:          30
  Temperature:               298 K
  Pressure:                  1 atm
  Release height:            0 m
  Distance downwind (X):     500 m
  Distance off wind (Y):     0 m
  Distance above ground (Z): 0 m

Calculated Results:
  Stability class:             A        B        C        D        E         F
  Assumed wind speed (m/s):    0.1      0.1      2        3        2         2
  Sigma y (m):                 54.74    42.58    30.41    18.25    12.17     6.08
  Sigma z (m):                 63.44    49.49    28.04    11.62    5.68      2.21
  Concentration (kg/m**3):     6.7E-07  1.4E-06  4.9E-06  3.3E-05  0.000151  0.00155
  Concentration (mg/m**3):     0.67     1.42     4.90     32.81    151.08    1549.72
  Concentration (PPM):         0.54     1.15     3.99     26.74    123.15    1263.22
  Arrival time (s):            5000     5000     250      167      250       250

FIGURE 2.36. Spreadsheet output for Example 2.15a,b: Puff release.
Example 2.15c: Puff Release
This part determines how far the cloud must travel to reach a specified concentration at
the center.

Input Data:
  Total release:             1 kg
  Molecular weight:          30
  Temperature:               298 K
  Pressure:                  1 atm
  Release height:            0 m
  Distance off wind (Y):     0 m
  Distance above ground (Z): 0 m
  Target concentration:      0.5 ppm
  Guessed downwind distance (X): 12239 m

Calculated Results:
  Target concentration: 0.6134 mg/m**3

  Stability class:                A        B        C        D        E        F
  Assumed wind speed (m/s):       0.1      0.1      2        3        2        2
  Sigma y (m):                    1037.53  806.96   576.40   345.84   230.56   115.28
  Sigma z (m):                    698.17   510.89   271.51   109.02   45.40    15.58
  Calculated conc. (kg/m**3):     1.7E-10  3.8E-10  1.4E-09  9.7E-09  5.3E-08  6.1E-07
  Calculated conc. (mg/m**3):     0.00     0.00     0.00     0.01     0.05     0.61
  Calculated conc. (ppm):         0.00     0.00     0.00     0.01     0.04     0.50
  Residual:                       0.500    0.500    0.499    0.492    0.457    -0.000

NOTE: Adjust the GUESSED DOWNWIND DISTANCE above to zero the residual in the stability class of interest.
FIGURE 2.37. Spreadsheet output for Example 2.15c: Puff release.
This part of the solution is readily implemented via spreadsheet, as shown
in Figure 2.37. The solution is achieved by trial and error—the user must adjust
the guessed downwind distance until the residual shown below the applicable
stability class is zero. The spreadsheet provides additional capability to specify a
release height and any downwind location.
d. The width of the puff at a specified point downwind can be determined by multiplying
   the equation above for the centerline concentration by Eq. (2.1.59), to
   convert the coordinate system to one that remains fixed at the release point.
   The resulting equation is

   ⟨C⟩(x, 0, 0, t) = [G* / (√2 π^(3/2) σx σy σz)] exp[-(1/2)((x - ut)/σx)^2]

   where the quantity x - ut represents the distance from the center of the puff. At
   a downwind distance of 5 km, from Figure 2.29 or Table 2.13, assuming F stability,

   σy = σx = 50.6 m and σz = 9.0 m

   Substituting into the above equation,

   0.61 x 10^-6 kg/m^3 = [(1.0 kg) / (√2 (3.14)^(3/2) (50.6 m)^2 (9.0 m))]
                         x exp[-(1/2)((x - ut)/(50.6 m))^2]

   x - ut = 106 m
Example 2.15d: Puff Release
This part determines the cloud width to a target concentration at a specified point
downwind.

Input Data:
  Total release:             1 kg
  Molecular weight:          30
  Temperature:               298 K
  Pressure:                  1 atm
  Release height:            0 m
  Distance downwind (X):     5000 m
  Distance off wind (Y):     0 m
  Distance above ground (Z): 0 m
  Target concentration:      0.5 ppm

Calculated Results:
  Target concentration: 0.6134 mg/m**3 = 6.1E-07 kg/m**3

  Stability class:               A        B        C        D        E        F
  Assumed wind speed (m/s):      0.1      0.1      2        3        2        2
  Sigma y (m):                   455.33   354.14   252.96   151.78   101.18   50.59
  Sigma z (m):                   356.76   265.78   143.80   58.26    25.37    9.02
  Concentration (kg/m**3):       1.7E-09  3.8E-09  1.4E-08  9.5E-08  4.9E-07  5.5E-06
  Puff width (m):                0.0      0.0      0.0      0.0      0.0      106.0
  Time for puff to pass (s):     0        0        0        0        0        106

FIGURE 2.38. Spreadsheet output for Example 2.15d: Puff release.
The puff width at the isopleth is thus 2 x 106 m = 212 m. At a wind speed of 2 m/s, the
puff will take (212 m)/(2 m/s) = 106 s to pass. The spreadsheet output for this
part of the example problem is shown in Figure 2.38.
Example 2.16: Plume with Isopleths. Develop a spreadsheet program to determine
the location of an isopleth for a plume. The spreadsheet should have specific cells for
inputs for:
• release rate (gm/s)
• release height (m)
• spatial increment (m)
• wind speed (m/s)
• molecular weight
• temperature (K)
• pressure (atm)
• isopleth concentration (ppm)
The spreadsheet output should include, at each point downwind:
• both y and z dispersion coefficients, σy and σz (m)
• downwind centerline concentrations (ppm)
• isopleth locations (m)
The spreadsheet should also have cells providing the downwind distance, the total
area of the plume, and the maximum width of the plume, all based on the isopleth value.
Use the following case for computations, and assume worst case stability conditions:
Release rate:     50 gm/sec
Release height:   0 m
Molecular weight: 30
Temperature:      298 K
Pressure:         1 atm
Isopleth conc:    10 ppm
Solution: The spreadsheet output is shown in Figure 2.39. Only the first page of
the spreadsheet output is shown. The following notes describe the procedure:
1. The downwind distance from the release is broken up into a number of spatial
increments, in this case 10-m increments. The plume result is not dependent on
this selection, but the precision of the area calculation is.
2. The equations for the dispersion coefficients (σy and σz) are fixed based on stability class, in this case F-stability. These columns in the spreadsheet would
need to be re-defined if a different stability class is required.
3. The dispersion coefficients are not valid at less than 100 m downwind from the
release. However, they are assumed valid to produce a complete picture back to
the release source.
4. The isopleth calculation is completed using Eq. (2.1.60) and the procedure
indicated.
Example 2.16: Plume with Isopleths

Input Data:
  Release rate:            50 gm/sec
  Release height:          0 m
  Increment:               10 m
  Wind speed:              2 m/sec
  Molecular weight:        30
  Temperature:             298 K
  Pressure:                1 atm
  Isopleth conc:           10 ppm
  Assumed stability class: F

Calculated Results:
  Max. plume width: 37.34 m
  Total area:       66461 m**2

  Distance   Sigma y  Sigma z  Centerline  Centerline  Centerline  Isopleth   Area
  downwind   (m)      (m)      conc.       conc.       conc.       location   (m**2)
  (m)                          (gm/m**3)   (mg/m**3)   (ppm)       (m)
  0          0        0        --          --          --          --         --
  10         0.40     0.16     124.775     124775.2    101695.5    +/-1.7     17.2
  20         0.80     0.32     31.303      31302.7     25512.7     +/-3.2     31.7
  30         1.20     0.48     13.961      13960.8     11378.4     +/-4.5     45.0
  40         1.60     0.63     7.880       7880.2      6422.6      +/-5.7     57.4
  50         2.00     0.79     5.061       5060.8      4124.7      +/-6.9     69.2
  60         2.39     0.94     3.527       3526.6      2874.3      +/-8.1     80.5
  70         2.79     1.10     2.600       2599.9      2119.0      +/-9.1     91.3
  80         3.19     1.25     1.997       1997.4      1627.9      +/-10.2    101.7

FIGURE 2.39. Spreadsheet output for Example 2.16: Plume with isopleths.
5. The plume is symmetric. Thus, the plume is located at ±y.
6. The plume area is determined by summing the product of the plume width
times the size of each increment.
7. The maximum plume width is determined using the @MAX function in
Quattro Pro (or its equivalent function in other spreadsheets).
8. For the maximum plume width and the total area, specific cell numbers must be
summed for each run.
Example 2.17: Puff with Isopleths. Develop a spreadsheet program to draw isopleths for a puff. The isopleths must be drawn at a user specified time after the release.
The spreadsheet should have specific inputs for
• total quantity released (kg)
• time after release (s)
• distance downwind for center of puff (m)
• release height (m)
• spatial increment (m)
• wind speed (m/s)
• molecular weight
• temperature (K)
• pressure (atm)
• isopleth concentration (ppm)
The spreadsheet output should include, at each point downwind:
• downwind location, or location with respect to puff center
• both y and z dispersion coefficients, σy and σz (m)
• downwind centerline concentrations (ppm)
• isopleth locations (m)
Use the following case for your computations:
Release mass:      50 kg
Release height:    0 m
Molecular weight:  30
Temperature:       298 K
Pressure:          1 atm
Isopleth conc:     1.0 ppm
Weather stability: F
Wind speed:        2 m/s
1. At what time does the puff reach its maximum width?
2. At what time and at what distance downwind does the puff dissipate?
Solution: The most direct approach is to use a coordinate system that is fixed on
the ground at the release point. Thus, Eq. (2.1.59) is used in conjunction with Eq.
(2.1.58). The equations for the dispersion coefficients for a puff are obtained from
Table 2.13.
In order to reduce the number of spreadsheet cells, a spreadsheet grid that moves
with the center of the puff is used. In this case 50 cells were specified on either side of
the center of the puff.
The procedure for the spreadsheet solution is
1. Specify a time (entered by user).
2. Compute x, the downwind distance, at each cell in the grid.
3. Compute the y and z dispersion coefficients (σy and σz).
4. Compute the centerline concentration at each grid point using Eq. (2.1.58).
5. Compute the isopleth location at each grid point using Eq. (2.1.60).
6. Compute both the + and - isopleths to define both sides of the puff.
7. Plot the results.
The resulting spreadsheet output is shown in Figure 2.40. Only the first page of
the spreadsheet output is shown.
To determine the maximum plume width, a trial and error approach is used. Specified times are entered into the spreadsheet and the maximum width is determined
manually from the output. The results are shown in Figure 2.41, which shows the puff
width as a function of time. Note that the puff increases in width to a maximum of
about 760 m, then decreases in size. The maximum width occurs at about t = 13,000
sec, when the puff is 6.5 km downwind from the release. The time for the puff to dissipate is determined by increasing the time until the isopleth disappears. This occurs at
about 22,800 s when the puff is 45.5 km downwind.
Discussion
Strengths and Weaknesses. The Gaussian dispersion model has several strengths. The
methodology is well defined and well validated. It is suitable for manual calculation, is
readily computerized on a personal computer, or is available as standard software packages. Its main weaknesses are that it does not accurately simulate dense gas discharges,
validation is limited from 0.1 to 10 km, and puff models are less well established than
plume models. The predictions relate to 10 min averages (equivalent to 10 min sampling times). While this may be adequate for most emissions of chronic toxicity, it can
underestimate distances to the lower flammable limit where instantaneous concentrations are of interest. More discussion will follow.
Identification and Treatment of Possible Errors. Benarie (1987) discusses errors in Gaussian and other atmospheric dispersion models for neutral or positive buoyancy releases.
He highlights the randomness of atmospheric transport processes and the importance
of averaging time. The American Meteorological Society (1978) has stated that the
precision of models based on observation is closely tied to the scatter of that data. At
present the scatter of meteorological data is irreducible and dispersion estimates can
approximate this degree of scatter only in the most ideal circumstances.
As vapors disperse, mixing occurs as turbulent eddies of a spectrum of sizes interact with the plume. Thus, portions of the plume may have local concentrations that
deviate above and below the average concentrations estimated by models.
Example 2.17: Puff Model

Input Data:
  Time:              1000 sec
  Wind speed:        2 m/s
  Total release:     50 kg
  Step increment:    1.6 m
  Release height:    0 m
  No. of increments: 50
  Molecular weight:  30
  Temperature:       298 K
  Pressure:          1 atm
  Isopleth conc.:    1 ppm
  (Assumes F-stability)

Calculated Results:
  Distance downwind: 2000 m
  Isopleth conc.:    1.23 mg/m**3
  Max. puff width:   139.71 m

  Distance from  Distance      Sigma y  Sigma z  Centerline conc.  +Isopleth  -Isopleth
  center (m)     downwind (m)  (m)      (m)      (mg/m**3)         (m)        (m)
  -80.0          1920.0        16.7     5.0      0.048067          --         --
  -78.4          1921.6        16.7     5.0      0.076732          --         --
  -76.8          1923.2        16.7     5.0      0.121212          --         --
  -75.2          1924.8        16.8     5.0      0.189482          --         --
  -73.6          1926.4        16.8     5.0      0.293133          --         --
  -72.0          1928.0        16.8     5.0      0.448802          --         --
  -70.4          1929.6        16.8     5.0      0.680078          --         --
  -68.8          1931.2        16.8     5.1      1.019989          --         --
  -67.2          1932.8        16.8     5.1      1.514203          10.9       -10.9
  -65.6          1934.4        16.8     5.1      2.225068          18.4       -18.4
  -64.0          1936.0        16.8     5.1      3.236626          23.5       -23.5
  -62.4          1937.6        16.9     5.1      4.660695          27.5       -27.5

FIGURE 2.40. Spreadsheet output for Example 2.17: Puff with isopleths.
FIGURE 2.41. Puff width as a function of time for Example 2.17.
In addition, major wind direction shifts may cause a dispersing plume to change
direction or meander. While such changes do not have a major effect on the hazard area
of the plume relative to its centerline, they do matter with respect to the specific area
impacted.
Gifford (1975) attempts to account for the effects of averaging time through the
following relation:
F_y = (t_a / t_PG)^q    (2.1.69)

σ_y = F_y σ_y,PG    (2.1.70)

where

F_y is the factor to account for the effect of averaging time (unitless)
t_a is the averaging time (time)
t_PG is the averaging time for the standard Pasquill-Gifford curves, i.e., 600 s
σ_y is the dispersion coefficient averaged over t_a (length)
σ_y,PG is the standard Pasquill-Gifford dispersion coefficient (length)

Based on limited experimental data, Gifford suggests q = 0.25 to 0.3 for t_a of 1 to 100 hr and 0.2 for short averaging times (e.g., 3 min). Because of lack of data, most models use 0.2 for even shorter averaging times. The lower limit of F_y is the value for an instantaneous release, which TNO (1979) assumes to be 0.5 (this implies their instantaneous release corresponds to an averaging time of about 19 s). Many dispersion cases using Gaussian models will give rise to effect zones of less than 100 m. As this is outside the validation limits (0.1-10 km), such predictions should be treated with caution.
Equations (2.1.69) and (2.1.70) are essentially identical to the averaging time
expression provided by AIChE/CCPS (1996a)
⟨C⟩(t_1) / ⟨C⟩(t_2) = (t_2 / t_1)^(1/5)    (2.1.71)

where ⟨C⟩ is the average gas concentration (mass/volume) and t_1 and t_2 are the respective averaging times (time). A more detailed discussion of averaging time is provided in
AIChE/CCPS (1995c, 1996a).
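The averaging-time relations above can be sketched numerically. The following is an illustrative sketch only; the function names are not from the text, q = 0.2 follows the short-averaging-time recommendation above, and t_PG = 600 s is the Pasquill-Gifford basis.

```python
# Averaging-time corrections for Gaussian dispersion estimates.
# Illustrative sketch; q = 0.2 is the short-averaging-time exponent
# suggested in the text, and t_PG = 600 s is the Pasquill-Gifford basis.

def sigma_y_adjusted(sigma_y_pg, t_avg, q=0.2, t_pg=600.0):
    """Eqs. (2.1.69)-(2.1.70): scale the standard Pasquill-Gifford
    dispersion coefficient to an averaging time t_avg (s)."""
    f_y = (t_avg / t_pg) ** q
    return f_y * sigma_y_pg

def conc_ratio(t1, t2):
    """Eq. (2.1.71): ratio of average concentrations
    <C>(t1) / <C>(t2) for two averaging times (s)."""
    return (t2 / t1) ** (1.0 / 5.0)

# A 3-min average uses a smaller sigma_y, and therefore gives a higher
# centerline concentration, than the standard 10-min value:
print(sigma_y_adjusted(100.0, 180.0))
print(conc_ratio(180.0, 600.0))
```

For a 3-min averaging time the correction reduces a 100-m Pasquill-Gifford σ_y to about 79 m, consistent with the factor-of-(t_a/t_PG)^0.2 behavior described above.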
Utility. Gaussian models are relatively easy to use, but plume dispersion is not a simple
topic. A wide range of calculation options is available (plume and puff discharges;
absorption or reflection at ground level; deposition of material; point, line, and area
sources), thus care is required in selecting the right equations for the dispersion parameters and for predicting concentration. Wind velocity should be the average over the
plume depth.
Resources Needed. Dispersion modeling requires some experience if meaningful results
are to be obtained. Calculations are quick to perform on a calculator or personal computer. A single dispersion calculation might take 1-2 hr to analyze by an experienced
person on a calculator or spreadsheet assuming all meteorological data are available.
Collection and analysis of such data may be time consuming (several days depending
on availability).
Available Computer Codes
There are many air pollution models available. Guidelines for Vapor Cloud Dispersion
Models (AIChE/CCPS, 1987a; 1996a) reviews these and other computer codes and
compares their predictions.
2.1.3.2. DENSE GAS DISPERSION
Background
Purpose. A dense gas is defined as any gas whose density is greater than the density of the ambient air through which it is being dispersed. This can result from a molecular weight greater than that of air, from a low temperature caused by auto-refrigeration during release, or from other processes.
The importance of dense gas dispersion has been recognized for some time. Early
field experiments (Koopman et al., 1984; Puttock et al., 1982; Van Ulden, 1974) have
confirmed that the mechanisms of dense gas dispersion differ markedly from neutrally
buoyant clouds. When dense gases are initially released, these gases slump toward the
ground and move both upwind and downwind. Furthermore, the mechanisms for
mixing with air are completely different from neutrally buoyant releases.
Reviews of dense gas dispersion and modeling are given by AIChE/CCPS (1987a,
1995b, 1996a), Goyal and Al-Jurashi (1990), Blackmore et al. (1982), Britter and
McQuaid (1988), Havens (1987), Lees (1986, 1996), Raman (1986), and Wheatley
and Webber (1984).
Philosophy. Three distinct modeling approaches have been attempted for dense gas dispersion: mathematical, dimensional and physical.
The most common mathematical approach has been the box model (also known as
top-hat or slab model), which estimates overall features of the cloud such as mean
radius, mean height, and mean cloud temperature without calculating detailed features
of the cloud in any spatial dimension. Some models of this class impose a Gaussian distribution equating to the average condition.
The other form of mathematical model is the more rigorous computational fluid
dynamics (CFD) approach that solves the complete three-dimensional conservation
equations. These methods have been applied with encouraging results (Britter, 1995;
Lee et al. 1995). CFD solves approximations to the fundamental equations, with the
approximations being principally contained within the turbulence models—the usual
approach is to use the K-ε theory. The CFD model is typically used to predict the wind
velocity fields, with the results coupled to a more traditional dense gas model to obtain
the concentration profiles (Lee et al., 1995). The problem with this approach is that
substantial definition of the problem is required in order to start the CFD computation. This includes detailed initial wind speeds, terrain heights, structures, temperatures, etc. in 3-D space. The method requires moderate computer resources.
The dimensional analysis method has been used successfully by Britter and
McQuaid (1988) to provide a simple but effective correlation for modeling dense gas
releases. The procedure examines the fundamental equations and, using dimensional
analysis, reduces the problem to a set of dimensionless groups. Data from actual field
tests are then correlated using these dimensionless groups to develop a nomograph
describing dense gas release. A detailed comparison of model predictions with field test
data (Hanna et al., 1993) shows that the Britter-McQuaid method produces remarkably good results, with the predictions closely matching test results and outperforming
many more complex models. However, this result is expected since the
Britter-McQuaid method is based on the test data in the first place.
Physical (scale) models employing wind tunnels or water channels have been used
for dense gas dispersion simulation, especially for situations with obstructions or irregular terrain. Exact similarity in all scales and the re-creation of atmospheric stability and
velocity distributions are not possible—very low air velocities are required to match
large scale results. Havens et al. (1995) attempted to use a 100:1 scale approach in conjunction with a finite element model. They found that measurements from such flows
cannot be scaled to field conditions accurately because of the relative importance of the
molecular diffusion contribution at model scale. The use of scale models is not a
common risk assessment tool in CPQRA and readers are directed to additional reviews
by Meroney (1982), and Duijm et al. (1985).
Applications. Dense gas mathematical models are widely employed to simulate the dispersion of flammable and toxic dense gas clouds. Early published examples of applications include models used in the demonstration risk assessments for Canvey Island
(Health & Safety Executive, 1978, 1981) and the Rijnmond Port Area (Rijnmond
Public Authority, 1982), and required in the Department of Transport LNG Federal
Safety Standards (Department of Transportation, 1980). While most dense gas models
currently in use are based on specialist computer codes, equally good and versatile
models are publicly available (e.g., DEGADIS, SLAB). The underlying dispersion
mechanisms and necessary validation are more complex than any other area of consequence modeling.
For prediction of toxic consequences, two common approaches are the use of either a specific toxic concentration or a toxic dose criterion. Toxic dose combines the toxic gas concentration with the duration of exposure, and the resulting effect is determined using specified probit models (Section 2.3.1).
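The dose criterion is commonly evaluated as a time integral of concentration raised to a power, the form that feeds the probit models of Section 2.3.1. The sketch below is illustrative only; the exponent and the concentration histories are made-up values, not data from the text.

```python
# Toxic load as a time integral of concentration raised to a power,
# L = sum(C_i**n * dt), the general form used with probit models.
# Illustrative sketch only: the exponent n and the concentration
# histories below are made-up values, not data from the text.

def toxic_load(concs, dt, n):
    """Toxic load for a piecewise-constant concentration history:
    each entry of concs (ppm) is held for dt seconds."""
    return sum(c ** n * dt for c in concs)

# With n > 1, a short high-concentration spike outweighs a long
# low-concentration exposure carrying the same C*t product:
steady = toxic_load([10.0] * 6, dt=60.0, n=2.0)  # 10 ppm for 6 min
spike = toxic_load([60.0], dt=60.0, n=2.0)       # 60 ppm for 1 min
print(steady, spike)
```

This is why a fixed-concentration criterion and a dose criterion can rank the same release scenario differently.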
With flammable releases, the mass of flammable material, as well as the extent of
the flammable zone, is important in determining the unconfined vapor cloud explosion
and flash fire potential. The use of the LFL (lower flammable limit) or ½ LFL in determining these parameters is a subject of debate. Some indications of the issues involved
are provided below.
Most flammable releases do not follow neutral, or Gaussian, behavior since they
are almost always heavier than air. As the release continues to mix with air the Gaussian
model will eventually apply, but the cloud will no longer be flammable.
The basis for specification of ½ LFL (e.g., Department of Transportation, 1980) is to allow for variations in instantaneous cloud concentrations. Pasquill-Gifford Gaussian models have an implicit 10-min averaging time. Benarie (1987) notes that transient concentrations may differ from the predicted average by a factor of up to 4 at the 5% confidence level. A problem with using ½ LFL is that hazard zones will be consistently
overpredicted; based on the Canvey Study (Health & Safety Executive, 1981), this
overprediction is typically about 15-20% in distance. While individual flammable pockets may ignite at the ½ LFL distance, there is a probability that the whole cloud will not.
The mass of flammable material in the cloud (i.e., above the LFL concentration) based on the ½ LFL isopleth will be overestimated by as much as a factor of two. Consider, for example, a puff release. The mass of flammable material in the cloud is constant (as no transport out of the cloud is permitted), although the total cloud size and mass increase due to dilution. At the ½ LFL concentration not all of the mass can be flammable, so the total dimension of the flammable portion of the cloud must be overestimated. Thus flash fire and damage zones from vapor cloud explosions will be
consistently overpredicted. However, the energy available in a flammable cloud is
based on the average concentration, so the average concentration is the appropriate criterion for the estimation of vapor cloud explosion impacts.
Van Buijtenen (1980) developed a number of equations for the amount of gas in
the explosive region of a vapor cloud or plume. It was found that, for an instantaneous
release, a large fraction of the total amount released (50% for methane) can be in the
explosive region, irrespective of source strength and meteorological conditions. For a
continuous source, the amount in the explosive region is strongly dependent on source
strength and meteorological conditions.
Spicer (1995,1996) used the DEGADIS heavy gas computer code to model propane releases. It was determined that cloud concentrations as high as 90% of the LFL
could provide "sustained flames."
Most releases of flammables occur as high pressure or liquefied gas releases. For
these types of releases, the primary dilution mechanism is due to entrainment of air by
shear as the release jets into the surrounding air. An equation for the dilution of a turbulent, free jet from a rounded hole is given by Perry and Green (1984)
q / q_0 = 0.32 (x / D_0)    (2.1.72)

where

q is the total jet volumetric flow rate at distance x (volume/time)
q_0 is the initial jet volumetric flow rate (volume/time)
x is the distance from the release point (length)
D_0 is the opening diameter (length)

Equation (2.1.72) applies only for 7 < (x/D_0) < 100.
Equation (2.1.72) shows that entrainment can be substantial. For a 1-cm diameter
jet, the total volumetric flow at 1 meter above the discharge will be 32 times the initial
volumetric flow. Thus, the initial dilution with air by the jet may reduce the concentrations below the LFL. However, flammable material will accumulate adjacent to the jet
eventually resulting in concentrations high enough for ignition.
Equation (2.1.72) is also useful for determining the initial release concentration as
an initial starting point for a detailed dense gas dispersion model.
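The jet dilution relation is simple enough to check directly. The sketch below is illustrative; the function name is not from the text, and the validity range is enforced per Eq. (2.1.72).

```python
# Dilution of a turbulent free jet from a rounded opening,
# Eq. (2.1.72), after Perry and Green (1984). Sketch only; the
# correlation is valid for x/D0 between 7 and 100.

def jet_flow_ratio(x, d0):
    """Return q/q0, the ratio of total to initial jet volumetric
    flow at a distance x (m) from an opening of diameter d0 (m)."""
    ratio = x / d0
    if not 7.0 < ratio <= 100.0:
        raise ValueError("Eq. (2.1.72) applies only for 7 < x/D0 <= 100")
    return 0.32 * ratio

# For a 1-cm opening, the total flow 1 m downstream is 32 times
# the initial flow, as noted in the text:
print(jet_flow_ratio(1.0, 0.01))
```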
Different risk analysts recommend a number of procedures for determining the
flammable mass via dispersion:
1. For flammable materials consider four concentrations: UFL, LFL, ½ LFL, ¼ LFL. For explosive materials, consider the LFL and 100% concentrations.
2. If the averaging time for the dispersion model is unadjusted, that is, 10 min for Gaussian dispersion, then use ½ LFL as the flash limit. If the averaging time is 20 sec, use the LFL for the flash limit.
Description
Description of Techniques. Detailed descriptions of the mechanisms of dense gas dispersion and the specific implementations for a wide variety of mathematical models are
given in AIChE/CCPS (1987a, 1995a,b, 1996a). This is not reproduced here in any detail.
The transitional phases in a heavy gas dispersion situation are given in Figure 2.27.
Following a typical puff release, a cloud having similar vertical and horizontal
dimensions (near the source) may form. The dense cloud slumps under the influence of
gravity increasing its diameter and reducing its height. Considerable initial dilution
occurs due to the gravity-driven intrusion of the cloud into the ambient air. Subsequently the cloud height increases due to further entrainment of air across both the vertical and horizontal interface. After sufficient dilution occurs, normal atmospheric
turbulence predominates over gravitational forces and typical Gaussian dispersion
characteristics are exhibited.
Raman (1986) lists typical box model characteristics. The vapor cloud is treated as
a single cylinder or box containing vapor at a uniform concentration. Air mixes with
the box as it disperses downwind. Box width increases as it spreads due to gravity
slumping. The usual assumptions are
• The vapor cloud disperses over flat terrain.
• The ground has constant roughness.
• There are no obstructions.
• Local concentration fluctuations are ignored.
• The treatment of chemical reactions or deposition is limited.
The use of K-ε theory models can relax several of these assumptions. However,
validation data are not sufficiently available to verify the models and some numerical
problems (pseudodispersion and concentration discontinuities) are unsolved.
The Britter and McQuaid (1988) model was developed by performing a
dimensional analysis and correlating existing data on dense cloud dispersion. The
model is best suited for instantaneous or continuous ground level area or volume
source releases of dense gases. Atmospheric stability was found to have little effect on
the results and is not a part of the model. Most of the data came from dispersion tests in
remote, rural areas, on mostly flat terrain. Thus, the results would not be applicable to
urban areas, or highly mountainous areas.
The model requires a specification of the initial cloud volume, the initial plume
volume flux, the duration of release, and the initial gas density. Also required is the
wind speed at a height of 10 m, the distance downwind, and the ambient gas density.
The first step is to determine whether the dense gas model is applicable. An initial buoyancy factor is defined as

g_0 = g (ρ_0 − ρ_a) / ρ_a    (2.1.73)

where

g_0 is the initial buoyancy factor (length/time²)
g is the acceleration due to gravity (length/time²)
ρ_0 is the initial density of the released material (mass/volume)
ρ_a is the density of ambient air (mass/volume)
A characteristic source dimension can also be defined dependent on the type of
release. For continuous releases,
D_c = (q_0 / u)^(1/2)    (2.1.74)

where D_c is the characteristic source dimension for continuous releases of dense gases (length), q_0 is the initial plume volume flux for dense gas dispersion (volume/time), and u is the wind speed (length/time).
For instantaneous releases, the characteristic source dimension is defined as:
D_i = V_0^(1/3)    (2.1.75)

where D_i is the characteristic source dimension for instantaneous releases of dense gases (length), and V_0 is the initial volume of released dense gas material (length³).
The criteria for a sufficiently dense cloud to require a dense cloud representation
are, for continuous releases,
(g_0 q_0 / (u³ D_c))^(1/3) ≥ 0.15    (2.1.76)

and for instantaneous releases,

(g_0 V_0)^(1/2) / (u D_i) ≥ 0.20    (2.1.77)
If these criteria are satisfied, then Figures 2.42 and 2.43 are used to estimate the
downwind concentrations. Table 2.15 provides equations for the correlations in the
figures.
The criterion for determining whether the release is continuous or instantaneous is calculated using the following group:

u R_d / x    (2.1.78)
FIGURE 2.42. Britter-McQuaid dimensional correlation for dispersion of dense cloud plumes.
FIGURE 2.43. Britter-McQuaid dimensional correlation for dispersion of dense cloud puffs.
where R_d is the release duration (time), and x is the downwind distance in dimensional space (length).
If the group has a value greater than or equal to 2.5, then the dense gas release is
considered continuous. If the group value is less than or equal to 0.6, then the release is
considered instantaneous. If the value lies in-between, then the concentrations are calculated using both continuous and instantaneous models and the minimum concentration result is selected.
The Britter and McQuaid model is not appropriate for jets or two-phase plume
releases due to the entrainment effect noted earlier.
Logic Diagram. A brief logic diagram showing the inputs, calculation sequence and
outputs from a dense gas model is shown in Figure 2.44.
Theoretical Foundation. Neutral buoyancy Gaussian models do not employ correct
mechanisms, but, fortuitously, results for many small to medium sized spills are not
grossly inaccurate (except for F stability where transition to passive phase takes place
further downwind). As the mechanism is incorrect this generalization may not always
be true.
Box models employ a simpler theoretical basis than K-ε theory models; however, the major mechanisms of gravity slumping, air entrainment, and thermodynamic processes are included. In terms of validation, box models have received substantial attention and good results are claimed by the authors. K-ε theory models allow the restrictive
assumptions of flat terrain and no obstructions to be relaxed, but there are numerical
problems and there is a lack of relevant validation data for these cases.
Computational fluid dynamics (CFD) is able to account for changes in terrain,
buildings, and other irregularities. However, the solution includes simplifications to
TABLE 2.15. Equations Used to Approximate the Curves in the Britter-McQuaid Correlations Provided in Figure 2.42 for Plumes. (The table gives, for each range of the concentration ratio C_m/C_0, the valid range and the corresponding correlation equation; the entries are not reproduced here.)
the Navier-Stokes equations and requires detailed specification of the initial
conditions.
The Britter-McQuaid model is a dimensional analysis technique, with a correlation developed from experimental data. However, the model is based only on data taken in flat, rural terrain and can only be applied to such releases. Because it reflects only the conditions of the test data, it is unable to account for the effects of parameters such as release height, ground roughness, wind speed profiles, etc.
TABLE 2.16. Equations Used to Approximate the Curves in the Britter-McQuaid Correlations Provided in Figure 2.43 for Puffs. (The table gives, for each valid range of the abscissa, the corresponding correlation equation; the entries are not reproduced here.)
Input Requirements and Availability. Given the large number and variety of dense gas
models, it is not possible to generalize on model input requirements. The model itself
or one of the reviews noted above should be consulted for specific details.
More detailed dense gas models require additional inputs. These could include
ground roughness, physical properties of the spilled material (molecular weight, atmospheric boiling temperature, latent heat of vaporization), wind speed profiles, and the
physical properties of the ground (heat capacity, porosity, thermal conductivity).
Less straightforward is the definition of the source term: the initial conditions of
cloud mass, temperature, concentration, and dimensions (height, width). This is a
function of the discharge type (spill or pressurized jet), the rate and duration (or mass if
a puff) of release, temperature before and after any flash, the flash fraction, aerosol/fog
formation, and initial dilution. Some models include source term models which may
not be apparent to the user.
LOCAL INFORMATION
Input Data
Physical properties:
Molecular weight, boiling
point, heat capacity,
latent heat, LFL, toxic
conc., or toxic dose.
Wind speed
Atmospheric stability
Surface roughness
Source term
calculation
(sometimes in the
dense model
package)
CHEMICAL INFORMATION
ESTIMATE CLOUD SIZE OR
PLUME GENERATION RATE
Hole size
Phase of release (gas, liquid, 2-phase)
Flash fraction
Aerosol and rainout fractions
Release duration
Pool boiloff (from rainout fraction)
Cloud initial dilution
Cloud geometry
RUN DENSE GAS MODEL PACKAGE
Dispersion
Calculation
Calculation for initial gravity slumping
Entrainment of air
Heat transfer to/from cloud
Transition to Gaussian dispersion
OUTPUT FROM DENSE GAS MODEL
Concentration - distance - time results
FIGURE 2.44. Logic diagram for dense clouds.
Output. As with input requirements, specific model output varies greatly. Broadly, the
following output would be considered essential for a full analysis:
• source term summary (if calculated by model): jet discharge or pool boiloff rate,
temperature, aerosol fraction, rainout, initial density, initial cloud dimensions,
time variance
• cloud dispersion information: cloud radius and height (or other dimensions as
appropriate), density, temperature, concentration, time history at a particular
location, distance to specified concentrations.
• special information: terrain effects, chemical reaction or deposition, toxic load at
particular locations, mass of flammable material in cloud.
Simplified Approaches. Some users employ Gaussian neutral buoyancy models for dense
gas releases; however, the mechanisms are incorrect and certain weather conditions are
poorly modeled. In the second Canvey Report (Health & Safety Executive, 1981) a power law correlation of the form R = kM^0.4 (where R = downwind distance to the lower flammable limit, M = mass of puff emission, and k = constant dependent on material and weather conditions) was suggested for large propane and butane puff emissions as an equation of best fit based on many runs of the DENZ dense gas package. Considine and Grint (1984) have extended this approach substantially with the
constant and the power in the above power law expression derived for pressurized and
refrigerated releases of propane and butane, over land and onto sea, for instantaneous
or continuous releases.
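The power-law form is easy to exercise. The sketch below is illustrative only: the constant k is a placeholder, since the fitted values depend on material, weather, and release mode and are not given in the text; only the 0.4 exponent is taken from the correlation above.

```python
# Power-law correlation R = k * M**0.4 for the downwind distance to
# the lower flammable limit of a large puff release (second Canvey
# Report form). The constant k used below is a placeholder; fitted
# values depend on material, weather, and release mode and are not
# given in the text.

def lfl_distance(mass_kg, k):
    """R = k * M**0.4, with M the puff mass (kg) and k a fitted,
    material- and weather-dependent constant."""
    return k * mass_kg ** 0.4

# The 0.4 exponent means a tenfold increase in released mass
# extends the hazard range by only about a factor of 2.5:
r_small = lfl_distance(1.0e4, k=1.0)
r_large = lfl_distance(1.0e5, k=1.0)
print(r_large / r_small)
```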
The Britter-McQuaid (1988) model is reasonably simple to apply, and produces
results which appear to be as good as more sophisticated models. However, detailed
specifications on the geometry of the release are required. Furthermore, the model provides only an estimate of the concentration at a fixed point immediately downwind
from the release. It does not provide concentrations at any other location, or the area
affected. Finally, the model applies only to ground releases.
Example Problem
Example 2.18: Britter and McQuaid Model. Britter and McQuaid (1988) report on
the Burro LNG dispersion tests. Compute the distance downwind from the following
LNG release to obtain a concentration equal to the lower flammability limit (LFL) of
5% vapor concentration by volume. Assume ambient conditions of 298 K and 1 atm.
The following data are available:
Spill rate of liquid: 0.23 m³/s
Spill duration (R_d): 174 s
Wind speed at 10 m above ground (u): 10.9 m/s
LNG density: 425.6 kg/m³
LNG vapor density at boiling point of −162°C: 1.76 kg/m³
Solution: The volumetric discharge rate of vapor is given by

q_0 = (0.23 m³/s)(425.6 kg/m³) / (1.76 kg/m³) = 55.6 m³/s
The ambient air density is computed from the ideal gas law and gives a result of
1.22 kg/m3. Thus, from Eq. (2.1.73)
g_0 = g (ρ_0 − ρ_a) / ρ_a = (9.8 m/s²)(1.76 − 1.224) / 1.224 = 4.29 m/s²
STEP 1: Determine if the release is considered continuous or instantaneous. For this
case Eq. (2.1.78) applies and the quantity must be greater than 2.5 for a continuous
release. Substituting the required numbers,
u R_d / x = (10.9 m/s)(174 s) / x ≥ 2.5
and it follows that for a continuous release
x < 758 m
The final distance must be less than this.
STEP 2: Determine if a dense cloud model applies. For this case Eqs. (2.1.74) and
(2.1.76) apply. Substituting the appropriate numbers,
D_c = (q_0 / u)^(1/2) = (55.6 m³/s / 10.9 m/s)^(1/2) = 2.26 m

(g_0 q_0 / (u³ D_c))^(1/3) = [(4.29 m/s²)(55.6 m³/s) / ((10.9 m/s)³(2.26 m))]^(1/3) = 0.43 ≥ 0.15
it is clear that the dense cloud model applies.
STEP 3: Adjust the concentration for a nonisothermal release. Britter and McQuaid (1988) provide an adjustment to the concentration to account for nonisothermal release of the vapor. If the original concentration is C*, then the effective concentration is given by

C = C* / [C* + (1 − C*)(T_a / T_0)]
where Ta is the ambient temperature and T0 is the source temperature, both in absolute
temperature. For our required concentration of 0.05, the above equation gives an
effective concentration of 0.019.
STEP 4: Compute the dimensionless groups for Figure 2.42.
(g_0² q_0 / u⁵)^(1/5) = [(4.29 m/s²)²(55.6 m³/s) / (10.9 m/s)⁵]^(1/5) = 0.37

and

(q_0 / u)^(1/2) = (55.6 m³/s / 10.9 m/s)^(1/2) = 2.26 m
STEP 5: Apply Figure 2.42 to determine the downwind distance. The initial concentration of gas, C0, is essentially pure LNG. Thus, C0 = 1.0 and it follows that Cm/C0 =
0.019. From Figure 2.42,
x / (q_0 / u)^(1/2) = 163

and it follows that x = (2.26 m)(163) ≈ 367 m. This compares to an experimentally
determined distance of 200 m. This demonstrates that dense gas dispersion estimates
can easily be off by a factor of 2. A Gaussian plume model assuming worst case weather
conditions (F stability, 2 m/s wind speed) predicts a downwind distance of 14 km.
Clearly the dense cloud model provides a much better result.
A spreadsheet implementing the Britter-McQuaid method is shown in Figure
2.45. Only the first page of the spreadsheet output is provided. The extensive tables
used to interpolate the Britter-McQuaid values are not shown.
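The arithmetic in Steps 1-5 can be checked with a short script. This is a sketch of the hand calculation only; the correlation value of 163 from Figure 2.42 is taken from the text, since the figure itself cannot be evaluated here.

```python
# Check of the Example 2.18 arithmetic (Britter-McQuaid, Burro LNG
# test). The Figure 2.42 correlation value (x / (q0/u)**0.5 = 163 at
# Cm/C0 = 0.019) is read from the text, not computed here.

g = 9.8            # m/s^2
spill_rate = 0.23  # m^3/s of liquid
rho_liq = 425.6    # kg/m^3
rho_vap = 1.76     # kg/m^3, LNG vapor at its boiling point
rho_air = 1.224    # kg/m^3, ambient air (ideal gas, 298 K, 1 atm)
u = 10.9           # m/s, wind speed at 10 m
rd = 174.0         # s, spill duration
t_amb, t_src = 298.0, 111.0  # K

q0 = spill_rate * rho_liq / rho_vap           # vapor volumetric rate
g0 = g * (rho_vap - rho_air) / rho_air        # Eq. (2.1.73)
x_max = u * rd / 2.5                          # continuous-release limit
dc = (q0 / u) ** 0.5                          # Eq. (2.1.74)
dense = (g0 * q0 / (u ** 3 * dc)) ** (1 / 3)  # Eq. (2.1.76) criterion
c_star = 0.05                                 # LFL, volume fraction
c_eff = c_star / (c_star + (1 - c_star) * (t_amb / t_src))
x_group = (g0 ** 2 * q0 / u ** 5) ** 0.2      # Figure 2.42 abscissa
x = dc * 163.0                                # 163 read from Figure 2.42

print(f"q0 = {q0:.1f} m^3/s, g0 = {g0:.2f} m/s^2")
print(f"dense-cloud criterion = {dense:.2f} (must be >= 0.15)")
print(f"effective concentration = {c_eff:.4f}")
print(f"downwind distance x = {x:.0f} m (limit x_max = {x_max:.0f} m)")
```

The script reproduces the hand-calculated values: q_0 ≈ 55.6 m³/s, g_0 ≈ 4.29 m/s², an effective concentration of 0.019, an abscissa of 0.37, and a downwind distance of roughly 367 m, comfortably below the 758-m continuous-release limit.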
Discussion
Strengths and Weaknesses. The major strength of most of the dense gas models is their
rigorous inclusion of the important mechanisms of gravity slumping, air entrainment,
and heat transfer processes. Their primary weakness is related to source term estimation and the high level of skill required of the user.

FIGURE 2.45. Spreadsheet output for Example 2.18: Britter-McQuaid model.
Input data: spill rate = 0.23 m³/s; spill duration = 174 s; wind speed at 10 m = 10.9 m/s; density of liquid = 425.6 kg/m³; vapor density at boiling point = 1.76 kg/m³; ambient temperature = 298 K; ambient pressure = 1 atm; source temperature = 111 K; required concentration = 0.05.
Calculated results: ambient air density = 1.224 kg/m³; initial buoyancy = 4.30 m/s²; volumetric discharge rate = 55.62 m³/s; characteristic source dimension = 2.26 m; target concentration = 0.0192; Britter-McQuaid x-axis group = 0.367; interpolated y-axis group = 162.7; distance downwind = 367.5 m; continuous release criterion = 5.16 (must be greater than 2.5); dense gas criterion = 0.43 (must be greater than 0.15).

Automatic source term generation
models can improve this situation substantially. Some validation of the models has
been provided (Hanna et al., 1990; API, 1992).
Identification and Treatment of Possible Errors. Errors can arise from four broad sources.
Important mechanisms of dense gas dispersion may be omitted for a particular release
scenario; model coefficients fitted to limited validation data may be incorrect; the
source term may be incorrectly defined; or model assumptions of flat terrain and uniform roughness may be invalid.
Errors due to omitted mechanisms or incorrect coefficients can be checked only by
reviewing model validation. It is also important to note that few validation data exist
for certain release types (e.g., large-scale sudden releases of liquids onto land especially
for long distance toxic impacts). Where some doubt exists, users should undertake a
range of sensitivity runs to determine the significance of the uncertainty.
Utility. Some of the computer codes are relatively easy to run, but this can be deceptive.
Those models having automatic source term generation are the most straightforward
to run, but there may be limits to the cases that may be modeled. Models without
source term generation impose a greater load on the user, and some information
requested such as initial dilution or initial cloud dimensions may be very difficult to
specify.
Resources Needed. Dense gas dispersion models require a skilled user. In order to obtain
such skills the minimum requirements would be extensive reading of dense gas model
literature reviews, examination of dense gas trial results, and several practice exercises.
for long distance toxic impacts). Where some doubt exists, users should undertake a
range of sensitivity runs to determine the significance of the uncertainty.
Utility. Some of the computer codes are relatively easy to run, but this can be deceptive.
Those models having automatic source term generation are the most straightforward
to run, but there may be limits to the cases that may be modeled. Models without
source term generation impose a greater load on the user, and some information
requested such as initial dilution or initial cloud dimensions may be very difficult to
specify.
Resources Needed. Dense gas dispersion models require a skilled user. In order to obtain
such skills the minimum requirements would be extensive reading of dense gas model
literature reviews, examination of dense gas trial results, and several practice exercises.
Unskilled use of dense gas models can lead to misleading results. One purpose of the
CCPS Guidelines for Use of Vapor Cloud Dispersion Models (AIChE/CCPS, 1987a;
1989a; 1996a) is to offer an introduction to dense gas model use.
A dense gas computer model is a prerequisite for dispersion analysis. It is possible to develop a model in-house; however, this is a major task because of the number of mechanisms involved and the amount of validation required. One to five man-years are required to develop a full-capability, adequately validated dense gas model. Most users will therefore obtain a publicly or commercially available model. These can run on personal computers or mainframes.
Available Computer Codes
AIChE/CCPS (1987a; 1996a) review dense and neutral gas codes and provide contact addresses for all of these. The latest edition of the Chemical Engineering Progress
software review should be consulted.
2.2. Explosions and Fires
The objective of this section is to review the types of models available for estimation of
the consequences of accidental explosion and fire incident outcomes. More detailed
and complete information on this subject is provided in Baker et al. (1983),
AIChE/CCPS (1994), Lees (1986, 1996), and Bjerketvedt et al. (1997).
A number of important definitions related to fires and explosions follow.
Deflagration: A propagating chemical reaction of a substance in which the reaction or
propagating front is limited by both molecular and turbulent transport and
advances into the unreacted substance at less than the sonic velocity in the
unreacted material. Resulting overpressures from a deflagration are typically no
more than one or two atmospheres—these are significant enough to cause substantial damage to surrounding structures.
Detonation: A propagating chemical reaction of a substance in which the reaction or
propagating front is limited only by the rate of reaction and advances into the
unreacted substance at or greater than the sonic velocity in the unreacted material
at its initial temperature and pressure. Detonations are capable of producing much
more damage than deflagrations; overpressures from a detonation can be several
hundred psig in value. This, however, is a complex issue and depends on many factors, including geometry, impulse duration, confinement, etc.
Flammable Limits: The minimum (lower flammable limit, LFL) and maximum
(upper flammable limit, UFL) concentrations of vapor in air that will propagate a
flame.
Flashpoint Temperature: The temperature of a liquid at which the liquid is capable of
producing enough flammable vapor to flash momentarily. There are many ASTM
methods, including D56-87, D92-90, D93-90, and D3828-87 (ASTM, 1992) to
determine flashpoint temperatures. The methods are grouped according to two
types: open and closed cup. The closed cup methods typically produce values
which are somewhat lower.
Explosion: Several definitions are available for the word "explosion." AIChE/CCPS
(1994) defines an explosion as "a release of energy that causes a blast." A "blast" is
subsequently defined as "a transient change in the gas density, pressure, and velocity of the air surrounding an explosion point." Crowl and Louvar (1990) define an
explosion as "a rapid expansion of gases resulting in a rapidly moving pressure or
shock wave." NFPA 69 (NFPA, 1986) defines an explosion as "the bursting or
rupture of an enclosure or a container due to the development of internal pressure." An explosion can be thought of as a rapid release of a high-pressure gas into
the environment. This release must be rapid enough that the energy is dissipated as
a pressure or shock wave. Explosions can arise from strictly physical phenomena
such as the catastrophic rupture of a pressurized gas container or from a chemical
reaction such as the combustion of a flammable gas in air. These latter reactions
can occur within buildings or vessels or in the open in potentially congested areas.
Many types of outcomes are possible for a release. These include vapor cloud explosions (VCE) (Section 2.2.1), flash fires (Section
2.2.3), boiling liquid expanding vapor explosions (BLEVE) and fireballs (Section
2.2.4), confined explosions (Section 2.2.5), and pool fires and jet fires (Section 2.2.6).
Figure 2.46 provides a basis for logically describing accidental explosion and fire scenarios. The outputs at the bottom of this diagram are the various incident outcomes with particular effects (e.g., a vapor cloud explosion resulting in a shock wave).
FIGURE 2.46. Logic diagram for explosion events. The top event is an accidental release of materials that could burn; branches lead to physical explosions (Figure 2.46a), confined explosions (Figure 2.46b), and other losses of containment resulting in explosions (Figure 2.46c).

FIGURE 2.46a. Logic diagram for physical explosions. Depending on the phase released, the liquid temperature relative to the boiling point (BLEVE potential), and the timing of ignition, the outcomes include PV explosion, fireball with pool fire, VCE with pool fire potential, and flash fire with pool fire potential, with blast waves, projectiles, and thermal radiation as effects.

FIGURE 2.46b. Logic diagram for confined explosions. Combustion, thermal decomposition, or runaway reaction within process vessels/equipment or within low-strength structures leads to venting through a relief system, containment within the relief system, release to atmosphere (Figure 2.46c), or catastrophic rupture of vessels/equipment, with blast waves, projectiles, and thermal radiation as effects.

FIGURE 2.46c. Logic diagram for other losses of containment. Gas-phase, two-phase, and liquid-phase releases lead, depending on ignition timing, to VCE, flash fire followed by pool fire, jet fire, or gas cloud dispersion without ignition (Figure 2.46d), with blast waves, projectiles, and thermal radiation as effects.

FIGURE 2.46d. Logic diagram for explosions from gas cloud dispersion. Delayed ignition gives a VCE or flash fire; immediate ignition gives a flash fire; no ignition gives dispersion without consequence. Effects are blast waves, projectiles, and thermal radiation.
The major difficulty presented to anyone involved in CPQRA is in selecting the proper outcomes based on the available information and determining the consequences. The consequences of concern in CPQRA studies for explosions in general are blast overpressure effects and projectile effects; for fires and fireballs the consequences of concern are thermal radiation effects. Each of these types of explosions and fires can be modeled to produce blast, projectile, and/or thermal radiation effects appropriate for use in CPQRA studies, and these techniques are described in the designated sections.
2.2.1. Vapor Cloud Explosions (VCE)
2.2.1.1. BACKGROUND
Purpose
When a large amount of flammable vaporizing liquid or gas is rapidly released, a vapor
cloud forms and disperses with the surrounding air. The release can occur from a storage tank, process, transport vessel, or pipeline. Figure 2.46 describes the various failure
pathways under which this scenario can occur. If this cloud is ignited before the cloud is
diluted below its lower flammability limit (LFL), a VCE or flash fire will occur. For CPQRA modeling the main consequence of a VCE is the resulting overpressure, while the main consequence of a flash fire is direct flame contact and thermal radiation. The resulting outcome, either a flash fire or a VCE, depends on a number of parameters discussed in the next section.
Davenport (1977, 1983) and Lenoir and Davenport (1992) have summarized
numerous VCE incidents. All (with one possible exception) were deflagrations rather
than detonations. They found that VCEs accounted for 37% of the number of property
losses in excess of $50 million (corrected to 1991 dollars) and accounted for 50% of the
overall dollars paid. Pietersen and Huerta (1985) have summarized some key features of 80 flash fires.
Philosophy
AIChE/CCPS (1994) provides an excellent summary of vapor cloud behavior. They
describe four features which must be present in order for a VCE to occur. First, the
release material must be flammable. Second, a cloud of sufficient size must form prior
to ignition, with ignition delays of 1 to 5 min considered the most probable for
generating vapor cloud explosions. Lenoir and Davenport (1992) analyzed historical
data on ignition delays, and found delay times from 6 s to as long as 60 min. Third, a
sufficient amount of the cloud must be within the flammable range. Fourth, sufficient
confinement or turbulent mixing of a portion of the vapor cloud must be present.
The blast effects produced depend on whether a deflagration or detonation results, with a deflagration being, by far, the most likely. A transition from deflagration to detonation is unlikely in the open air. Whether a deflagration or a detonation results also depends on the energy of the ignition source, with larger ignition sources increasing the likelihood of a direct detonation.
AIChE/CCPS (1994) also provides the following summary:
In the experiments described, no explosive blast-generating combustion was observed
if initially quiescent and fully unconfined fuel-air mixtures were ignited by low-energy
ignition sources. Experimental data also indicate that turbulence is the governing
factor in blast generation and that it may intensify combustion to the level that will
result in an explosion.
Turbulence may arise by two mechanisms. First, it may result either from a violent
release of fuel from under high pressure in a jet or from explosive dispersion from a
ruptured vessel. The maximum overpressures observed experimentally in jet combustion and explosively dispersed clouds have been relatively low (lower than 100 mbar).
Second, turbulence can be generated by the gas flow caused by the combustion process
itself and interacting with the boundary conditions.
Experimental data show that appropriate boundary conditions trigger a feedback in the
process of flame propagation by which combustion may intensify to a detonative level.
These blast-generative boundary conditions were specified as
• spatial configurations of obstacles of sufficient extent.
• partial confinement of sufficient extent, whether or not internal obstructions were
present.
Examples of boundary conditions that have contributed to blast generation in vapor
cloud explosions are often a part of industrial settings. Dense concentrations of process
equipment in chemical plants or refineries and large groups of coupled rail cars in railroad shunting yards, for instance, have been contributing causes of heavy blasts in
vapor cloud explosions in the past. Furthermore, certain structures in nonindustrial settings, for example, tunnels, bridges, culverts, and crowded parking lots, can act as blast
generators if, for instance, a fuel truck happens to crash in the vicinity. The destructive
consequences of extremely high local combustion rates up to a detonative level were
observed in the wreckage of the Flixborough plant (Gugan, 1979).
Local partial confinement or obstruction in a vapor cloud may easily act as an initiator for detonation, which may propagate into the cloud as well. So far, however, only
one possible unconfined vapor cloud detonation has been reported in the literature; it
occurred at Port Hudson, Missouri (National Transportation Safety Board, 1972; Burgess and Zabatakis, 1973). In most cases the nonhomogeneous structure of a cloud
freely dispersing in the atmosphere probably prevents a detonation from propagating.
Other experimental studies have also demonstrated that there is a minimum mass
of flammable material that is required to allow transition from a flash fire to VCE.
These estimates range from 1 ton (Wiekema, 1979) to 15 tons (Health & Safety Executive, 1979).
Some caution should be exercised in the determination of a minimum value.
Gugan (1979) provides a few examples of VCEs with quantities as low as 100 kg for
more reactive species such as hydrogen and acetylene. North and MacDiarmid (1988)
report on explosions from the release and ignition of approximately 30 kg of hydrogen,
although it was partially confined under the roof of a compressor shed.
It is also believed that materials with higher fundamental burning velocities, such
as hydrogen, acetylene, ethylene oxide, propylene oxide and ethylene are more readily
inclined to transition to a VCE for a given release quantity.
Flammable vapor clouds may be ignited from a number of sources that may be
continuous (e.g., fired heaters, pilot flames) or occasional (e.g., smoking, vehicles,
electrical systems, static discharge). Clouds are normally ignited at the edge as they
drift. The effect of ignition is to terminate further spread of the cloud in that direction.
Flash fires initially combust and expand rapidly in all directions. After the initial combustion, expansion is upward because of buoyancy. As the number of ignition sources
increases the likelihood of ignition will generally increase correspondingly. Thus, a site
with many ignition sources on or around it would tend to prevent clouds from reaching their full hazard extent, as most such clouds would find an ignition source before
this occurs. Conversely, few clouds on such a site would disperse safely before ignition.
A more complex CPQRA could take account of the location and probability of
surrounding ignition sources (see Chapter 5, Section 5.2.2). This might be done by
considering a number of separate ignition cases applied to a given release. Early igni-
tion, before the cloud becomes fully formed, might result in a flash fire or an explosion
of smaller size. Late ignition could result in an explosion of the maximum possible
effect. The following approaches have been used to locate the blast epicenter, although
no theoretical basis exists at present for any method:
1. at the leading edge of the cloud at the LFL concentration;
2. at the point on the centerline where the fuel concentration is stoichiometric;
3. at the release point of the equipment item;
4. halfway between the equipment item and the LFL at the leading edge of the cloud;
5. at the center of an identifiable congested volume whose vapor concentration is within the flammable range.
Typically, other uncertainties are more important in the analysis. A more
detailed analysis would determine the flammable mass in the dispersing cloud (see
page 142).
Applications
VCE models have been applied for incident analysis [e.g., Sadee et al. (1977) for the
Flixborough explosion] and in risk analysis predictions (Rijnmond Public Authority,
1982). A flash fire model has been developed for risk analysis purposes by Eisenberg et
al. (1975).
2.2.1.2. DESCRIPTION
Description of Technique
Important parameters in analyzing combustion incidents are the properties of the
material: lower and upper flammable limits (LFL and UFL), flash point, auto ignition
temperature, heat of combustion, molecular weight, and combustion stoichiometry.
Such data are readily available (Department of Transportation, 1978; Perry and Green,
1984; Stull, 1977).
The models of VCEs presented here are:
• TNT equivalency model
• TNO multi-energy model
• Modified Baker model
All of these models are quasi-theoretical/empirical and are based on limited field
data and accident investigations.
TNT Equivalency Models. The TNT equivalency model is easy to use and has been
applied for many CPQRAs. It is described in Baker et al. (1983), Decker (1974), Lees
(1986, 1996), and Stull (1977).
The TNT equivalency model is based on the assumption of equivalence between
the flammable material and TNT, factored by an explosion efficiency term:
W = η M Ec / ETNT    (2.2.1)

where

W is the equivalent mass of TNT (kg or lb)
η is an empirical explosion efficiency (unitless)
M is the mass of hydrocarbon (kg or lb)
Ec is the heat of combustion of the flammable gas (kJ/kg or Btu/lb)
ETNT is the heat of combustion of TNT (4437-4765 kJ/kg or 1943-2049 Btu/lb)
A typical pressure history at a fixed point at some distance from a TNT blast is shown in Figure 2.47. The important parameters are the peak side-on overpressure (or simply peak overpressure), p°; the arrival time, ta; the positive phase duration time, td; and the overpressure impulse, ip, which is defined as the area under the positive phase of the pressure pulse:

ip = ∫0^td p dt    (2.2.2)
The impulse is an important aspect of damage-causing ability of the blast on structures since it is indicative of the total energy contained within the blast wave.
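Where the overpressure history is available only as sampled data, the integral in Eq. (2.2.2) can be evaluated numerically. The sketch below is illustrative only: it assumes a triangular pulse with a 20-kPa peak and a 50-ms positive phase, for which the exact impulse is one-half the peak overpressure times the duration, or 500 Pa s, and integrates the samples with the trapezoidal rule.

```python
# Numerical evaluation of the overpressure impulse, Eq. (2.2.2):
#   i_p = integral of p(t) dt over the positive phase, 0 to t_d.
# The triangular pulse below is an assumed test signal, not field data.

def impulse(times_s, overpressures_pa):
    """Trapezoidal-rule estimate of the impulse (Pa*s) from sampled p(t)."""
    total = 0.0
    for i in range(len(times_s) - 1):
        dt = times_s[i + 1] - times_s[i]
        total += 0.5 * (overpressures_pa[i] + overpressures_pa[i + 1]) * dt
    return total

if __name__ == "__main__":
    p_peak, t_d, n = 20_000.0, 0.050, 200          # Pa, s, number of intervals
    t = [t_d * i / n for i in range(n + 1)]
    p = [p_peak * (1.0 - ti / t_d) for ti in t]    # linear decay to ambient
    print(f"i_p = {impulse(t, p):.1f} Pa*s")       # exact value is 500 Pa*s
```

The trapezoidal rule is exact for this piecewise-linear pulse; for measured histories the error depends on the sampling interval.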
The above parameters can be scaled by the ambient pressure and by the cube root of the equivalent TNT mass, W, from Eq. (2.2.1):

ps = p° / pa    (2.2.3)

is = ip / W^(1/3)    (2.2.4)

t̄d = td / W^(1/3)    (2.2.5)

t̄a = ta / W^(1/3)    (2.2.6)

where ps is the scaled overpressure (dimensionless), pa is the ambient pressure, and is, t̄d, and t̄a are the scaled impulse, duration, and arrival time, respectively.
The explosion effects of a TNT charge are well documented, as shown in Figure 2.48 for a hemispherical TNT surface charge at sea level. Equations for the functions in Figure 2.48 are provided in Table 2.17. The various explosion parameters in Figure 2.48 are correlated as a function of the scaled range, Z. The scaled range is defined as the distance, R, divided by the cube root of the TNT mass, W, with W determined from Eq. (2.2.1):

Z = R / W^(1/3)    (2.2.7)

FIGURE 2.47. Typical pressure history for a TNT-type explosion. The pressure curve drops below ambient pressure due to a rarefaction at time td.

FIGURE 2.48. Shock wave parameters for a spherical TNT explosion on a surface at sea level (Lees, 1996). The scaled overpressure, ps; the scaled impulse, ip (Pa s)/(kg TNT)^(1/3); the scaled duration, td (ms)/(kg TNT)^(1/3); and the scaled arrival time, ta (ms)/(kg TNT)^(1/3), are plotted against the scaled distance, Z (m/kg^(1/3)).
The peak side-on overpressure is used to estimate the resulting damage using Table
2.18a for general structures and Table 2.18b for process equipment. Tables 2.18a and
b do not account for the blast impulse or the particular structure involved. Thus, they
should only be used for estimation.
Correlations are also available for TNT blasts in free air, without a ground surface
(U.S. Army, 1969). This would apply to an elevated blast with the blast receptor very
near the source of the blast. Since this is rarely the case in chemical plant facilities, the
reflection of a blast wave off of the ground dictates the use of Figure 2.48.
Other pressure quantities in blast modeling are the reflected pressure and the
dynamic pressure. The reflected pressure is the pressure on a structure perpendicular to
the shock wave and is at least a factor of 2 greater than the side-on overpressure.
Another quantity is the dynamic overpressure; it equals the density of the air times the square of the gas velocity, divided by 2. The overpressure used most frequently for blast modeling in risk analysis is the peak side-on overpressure.
The flammable cloud explosion yield is empirical, with most estimates varying between 1 and 10% (Brasie and Simpson, 1968; Gugan, 1979; Lees, 1986). Bodurtha (1980) gives the upper limit on the range of efficiency as 0.2. Eichler and Napadensky (1978), from reviews of historical data, conclude that the maximum expected efficiency is 0.2 for a symmetric cloud but could be significantly higher, up to 0.4, for an asymmetric cloud. This factor is based on analysis of many VCE incidents. As doubt exists as to the actual mass involved in many VCE incidents, the true efficiency is uncertain. Prugh (1987) gives a helpful correlation of flammable mass versus VCE probability from historical data. Decker (1974) shows how to link a Gaussian dispersion model with the TNT model.
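As a worked illustration of Eqs. (2.2.1) and (2.2.7), the sketch below computes the equivalent TNT mass and the scaled distance at a receptor. The inputs (a 10,000-kg hydrocarbon release, a 3% efficiency, a heat of combustion of 46,000 kJ/kg, and a TNT heat of combustion of 4650 kJ/kg, which lies within the range quoted above) are illustrative assumptions, not recommendations. The overpressure at the receptor would then be read from Figure 2.48, or from the Table 2.17 correlations, at the computed Z.

```python
# TNT equivalency sketch: Eq. (2.2.1) for W and Eq. (2.2.7) for Z.
# All numerical inputs below are illustrative assumptions.

def tnt_equivalent_mass(eta, m_fuel_kg, e_c_kj_per_kg, e_tnt_kj_per_kg=4650.0):
    """Equivalent TNT mass W (kg): W = eta * M * Ec / Etnt, Eq. (2.2.1)."""
    return eta * m_fuel_kg * e_c_kj_per_kg / e_tnt_kj_per_kg

def scaled_distance(r_m, w_tnt_kg):
    """Scaled range Z (m/kg^(1/3)): Z = R / W^(1/3), Eq. (2.2.7)."""
    return r_m / w_tnt_kg ** (1.0 / 3.0)

if __name__ == "__main__":
    W = tnt_equivalent_mass(eta=0.03, m_fuel_kg=10_000.0, e_c_kj_per_kg=46_000.0)
    Z = scaled_distance(r_m=100.0, w_tnt_kg=W)
    print(f"Equivalent TNT mass W = {W:.0f} kg")
    print(f"Scaled distance at 100 m: Z = {Z:.2f} m/kg^(1/3)")
```

Note how strongly the answer depends on the empirical efficiency η, which, as discussed above, is the main weakness of the method.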
TABLE 2.17. Equations for the Blast Parameter Functions Provided in Figure 2.48

The functions are tabulated using the following functional form:

log10 φ = Σ (i = 0 to n) ci (a + b log10 Z)^i

where φ is the function of interest; ci, a, and b are constants provided in the original table; and Z is the scaled distance (m/kg^(1/3)). The tabulated functions φ are the overpressure, p° (kPa); the impulse, ip (Pa s); the duration time, td (ms); and the arrival time, ta (ms), each valid over a stated range of Z.

NOTE: The number of significant figures is a function of the curve fit method only and not indicative of the accuracy of the method. See Example Problem 2.19 for application of these equations. (From Lees, 1996.)
TABLE 2.18a. Damage Estimates for Common Structures Based on Overpressure (Clancey, 1972). These values should only be used for approximate estimates.

psig      kPa         Damage
0.02      0.14        Annoying noise (137 dB if of low frequency, 10-15 Hz)
0.03      0.21        Occasional breaking of large glass windows already under strain
0.04      0.28        Loud noise (143 dB), sonic boom, glass failure
0.1       0.69        Breakage of small windows under strain
0.15      1.03        Typical pressure for glass breakage
0.3       2.07        "Safe distance" (probability 0.95 of no serious damage below this value); projectile limit; some damage to house ceilings; 10% window glass broken
0.4       2.76        Limited minor structural damage
0.5-1.0   3.4-6.9     Large and small windows usually shattered; occasional damage to window frames
0.7       4.8         Minor damage to house structures
1.0       6.9         Partial demolition of houses, made uninhabitable
1-2       6.9-13.8    Corrugated asbestos shattered; corrugated steel or aluminum panels, fastenings fail, followed by buckling; wood panels (standard housing) fastenings fail, panels blown in
1.3       9.0         Steel frame of clad building slightly distorted
2         13.8        Partial collapse of walls and roofs of houses
2-3       13.8-20.7   Concrete or cinder block walls, not reinforced, shattered
2.3       15.8        Lower limit of serious structural damage
2.5       17.2        50% destruction of brickwork of houses
3         20.7        Heavy machines (3000 lb) in industrial building suffered little damage; steel frame building distorted and pulled away from foundations
3-4       20.7-27.6   Frameless, self-framing steel panel building demolished; rupture of oil storage tanks
4         27.6        Cladding of light industrial buildings ruptured
5         34.5        Wooden utility poles snapped; tall hydraulic press (40,000 lb) in building slightly damaged
5-7       34.5-48.2   Nearly complete destruction of houses
7         48.2        Loaded train wagons overturned
7-8       48.2-55.1   Brick panels, 8-12 inches thick, not reinforced, fail by shearing or flexure
9         62.0        Loaded train boxcars completely demolished
10        68.9        Probable total destruction of buildings; heavy machine tools (7000 lb) moved and badly damaged; very heavy machine tools (12,000 lb) survive
300       2068        Limit of crater lip
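For screening calculations, entries such as those in Table 2.18a are often encoded as a step lookup: given a peak side-on overpressure, return the description for the largest tabulated threshold at or below it. The sketch below uses only a handful of representative thresholds from the table; the selection is illustrative, and the full table should be consulted in practice.

```python
import bisect

# Coarse damage lookup built from a few Table 2.18a thresholds (psig).
# The subset of rows chosen here is illustrative only.
_THRESHOLDS = [
    (0.15, "typical pressure for glass breakage"),
    (1.0, "partial demolition of houses"),
    (2.3, "lower limit of serious structural damage"),
    (5.0, "wooden utility poles snapped"),
    (10.0, "probable total destruction of buildings"),
]

def damage_estimate(overpressure_psig):
    """Return the description for the largest threshold <= overpressure."""
    keys = [p for p, _ in _THRESHOLDS]
    i = bisect.bisect_right(keys, overpressure_psig) - 1
    if i < 0:
        return "below tabulated damage levels"
    return _THRESHOLDS[i][1]

if __name__ == "__main__":
    for p in (0.1, 0.5, 3.0, 12.0):
        print(f"{p:5.1f} psig -> {damage_estimate(p)}")
```

Such a step function inherits the limitations noted above: it ignores impulse and the particular structure, so it is suitable only for first estimates.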
TABLE 2.18b. Damage Estimates Based on Overpressure for Process Equipment*

[The original table is a matrix assigning the damage codes keyed below to overpressures from 0.5 to 20 psi for each of the following equipment items: control house steel roof; control house concrete roof; cooling tower; tank: cone roof; tank: floating roof; tank sphere; instrument cubicle; fire heater; reactor: chemical; reactor: cracking; filter; regenerator; pipe supports; utilities: gas meter; utilities: gas regulator; utilities: electronic transformer; electric motor; blower; fractionation column; extraction column; pressure vessel: horizontal; pressure vessel: vertical; steam turbine; heat exchanger; pump. The individual matrix entries did not survive reproduction here; consult the original table.]

* See page 165 for the key to this table.

Key to Table 2.18b
A. Windows and gauges broken
B. Louvers fall at 0.2-0.5 psi
C. Switchgear is damaged from roof collapse
D. Roof collapses
E. Instruments are damaged
F. Inner parts are damaged
G. Brick cracks
H. Debris-missile damage occurs
I. Unit moves and pipes break
J. Bracing falls
K. Unit uplifts (half tilted)
L. Power lines are severed
M. Controls are damaged
N. Block walls fail
O. Frame collapses
P. Frame deforms
Q. Case is damaged
R. Frame cracks
S. Piping breaks
T. Unit overturns or is destroyed
U. Unit uplifts (0.9 tilted)
V. Unit moves on foundation
The explosion efficiency depends on the method for determining the contributing
mass of fuel. Models based on the total quantity released have lower efficiencies.
Models based on the dispersed cloud mass have a higher efficiency. The original reference must be consulted for the details.
The following methods for estimating the explosion efficiency are summarized by
AIChE (1994):
1. Brasie and Simpson (1968): Use 2% to 5% of the heat of combustion of the
total quantity of fuel spilled.
2. Health & Safety Executive (1979 and 1986): 3% of the heat of combustion of
the quantity of fuel present in the cloud.
3. Industrial Risk Insurers (1990): 2% of the heat of combustion of the quantity
of fuel spilled.
4. Factory Mutual Research Corporation (AIChE/CCPS, 1994): 5%, 10%, and 15% of the heat of combustion of the quantity of fuel present in the cloud, depending on the reactivity of the material; higher reactivity gives a higher efficiency. Representative values are propane, 5%; diethyl ether, 10%; acetylene, 15%.
The application of an explosion efficiency represents one of the major problems
with the TNT equivalency method.
The problem with the TNT equivalency model is that little, if any, correlation
exists between the quantity of combustion energy involved in a VCE and the equivalent weight of TNT required to model its blast effects. This result is clearly proven by
the fact that, for quiescent clouds, both the scale and strength of a blast are unrelated to
fuel quantity present. These factors are determined primarily by the size and nature of
the partially confined and obstructed regions within the cloud.
TNO Multi-Energy Method: This method is described in detail in AIChE (1994), Van
den Berg (1985), and Van den Berg et al. (1987). The multi-energy model assumes
that blast modeling on the basis of deflagrative combustion is a conservative approach.
The basis for this assumption is that an unconfined vapor cloud detonation is extremely
unlikely; only a single event has been observed.
The basis for this model is that the energy of explosion is highly dependent on the
level of congestion and less dependent on the fuel in the cloud.
The procedure for employing the multi-energy model to a vapor cloud explosion is
given by the following steps (AIChE/CCPS, 1994):
1. Perform a dispersion analysis to determine the extent of the cloud. Generally,
this is performed assuming that equipment and buildings are not present. This
is due to the limitations of dispersion modeling in congested areas.
2. Conduct a field inspection to identify the congested areas. Normally, heavy
vapors will tend to move downhill.
3. Identify potential sources of strong blast present within the area covered by the
flammable cloud. Potential sources of strong blast include:
• congested areas and buildings such as process equipment in chemical plants
or refineries, stacks of crates or pallets, and pipe racks;
• spaces between extended parallel planes, for example, those beneath closely
parked cars in parking lots, and open buildings, for example, multistory parking garages;
• spaces within tubelike structures, for example, tunnels, bridges, corridors,
sewage systems, culverts;
• an intensely turbulent fuel-air mixture in a jet resulting from release at high
pressure.
The remaining fuel-air mixture in the cloud is assumed to produce a blast of
minor strength.
4. Estimate the energy of equivalent fuel-air charges.
• Consider each blast source separately.
• Assume that the full quantities of fuel-air mixture present within the partially
confined/obstructed areas and jets, identified as blast sources in the cloud,
contribute to the blasts.
• Estimate the volumes of fuel-air mixture present in the individual areas identified as blast sources. This estimate can be based on the overall dimensions of
the areas and jets. Note that the flammable mixture may not fill an entire
blast-source volume and that the volume of equipment should be considered
where it represents an appreciable proportion of the whole volume.
• Calculate the combustion energy E (J) for each blast by multiplying the individual volumes of the mixture by 3.5 × 10^6 J/m^3. This value is typical for the heat of combustion of an average stoichiometric hydrocarbon-air mixture (Harris, 1983).
5. Estimate the strengths of the individual blasts. Some companies have defined procedures for this; however, many risk analysts use their own judgment.
• A safe and most conservative estimate of the strength of the sources of a strong
blast can be made if a maximum strength of 10 is assumed—representative of a
detonation. However, a source strength of 7 seems to more accurately represent actual experience. Furthermore, for side-on overpressures below about 0.5
bar, no differences appear for source strengths ranging from 7 to 10.
• The blast resulting from the remaining unconfined and unobstructed parts of
a cloud can be modeled by assuming a low initial strength. For extended and
quiescent parts, assume minimum strength of 1. For more nonquiescent
parts, which are in low-intensity turbulent motion, for instance, because of
the momentum of a fuel release, assume a strength of 3.
6. Once the energy quantities E and the initial blast strengths of the individual equivalent fuel-air charges are estimated, the Sachs-scaled blast side-on overpressure and positive-phase duration at some distance R from a blast source are read from the blast charts in Figure 2.49 after calculation of the Sachs-scaled distance:

R̄ = R / (E/P0)^(1/3)    (2.2.8)

where

R̄ is the Sachs-scaled distance from the charge (dimensionless)
R is the distance from the charge (m)
E is the charge combustion energy (J)
P0 is the ambient pressure (Pa)

The blast peak side-on overpressure and positive-phase duration are calculated from the Sachs-scaled quantities:

Ps = ΔP̄s P0    (2.2.9)

and

td = t̄d (E/P0)^(1/3) / c0    (2.2.10)

where

Ps is the side-on blast overpressure (Pa)
ΔP̄s is the Sachs-scaled side-on blast overpressure (dimensionless)
P0 is the ambient pressure (Pa)
td is the positive-phase duration (s)
t̄d is the Sachs-scaled positive-phase duration (dimensionless)
E is the charge combustion energy (J)
c0 is the ambient speed of sound (m/s)
If separate blast sources are located close to one another, they may be initiated almost simultaneously. Coincidence of their blasts in the far field cannot be ruled out, and their respective blasts should then be superimposed. The most conservative approach to this issue is to assume a maximum initial blast strength of 10 and to sum the combustion energies of the sources in question. Further definition of this important issue, for instance the determination of a minimum distance between potential blast sources beyond which their individual blasts may be considered separately, is a subject of present research.
The possibility of an unconfined vapor cloud detonation should be considered if (a) environmental and atmospheric conditions are such that vapor cloud dispersion is slow, and (b) a long ignition delay is likely. In that case, the full quantity of fuel mixed within detonable limits should be assumed for a fuel-air charge at the maximum initial strength of 10.
The major problem with the application of the TNO multi-energy method is that the user must decide on the selection of a severity factor, based on the degree of confinement. Little guidance is provided for partial confinement geometries. Furthermore, it is not clear how the results from each blast strength should be combined.

FIGURE 2.49. TNO multi-energy model for vapor cloud explosions. The Sachs-scaled side-on overpressure and positive-phase duration are provided as a function of the Sachs-scaled distance (AIChE/CCPS, 1994). The chart axes are the combustion energy-scaled distance (R̄), the dimensionless maximum side-on overpressure, and the dimensionless positive-phase duration; P0 is the atmospheric pressure, c0 the atmospheric sound speed, E the amount of combustion energy, and R0 the charge radius.
Baker-Strehlow Method: This method is a modification of the original work by Strehlow et al. (1979), with added elements of the TNO multi-energy method. A complete description of the procedure is provided by Baker et al. (1994).
Strehlow's spherical model was chosen because a curve is selected based on flame
speed, which affords the opportunity to use empirical data in the selection. The procedures from the TNO multi-energy method were adopted for determination of the
energy term. Specifically, confinement is the basis of the determination of the size of
the flammable vapor cloud that contributes to the generation of the blast overpressure,
and multiple blast sources can emanate from a single release.
Baker et al. (1994) state that experimental data suggests that the combined effects
of fuel reactivity, obstacle density and confinement can be correlated to flame speed.
They describe a set of 27 possible combinations of these parameters based on 1, 2, or
3D flame expansions. Six of the possible combinations lacked experimental data, but
they were able to interpolate between the existing data to specify these values. The
results are shown in Table 2.19. The flame speeds are expressed in Mach number units.
Note that the values in Table 2.19 represent the maximum flame speed for each case
and will produce a conservative result.
Reactivity is classified as low, average, and high according to the following recommendations of TNO (Zeeuwen and Wiekema, 1978). Methane and carbon monoxide
are the only materials regarded as low reactivity, whereas hydrogen, acetylene, ethylene, ethylene oxide, and propylene oxide are considered to be highly reactive. All other
TABLE 2.19. Flame Speed in Mach Number for Soft Ignition Sources (Baker et al., 1994)

1D Flame Expansion Case
                  Obstacle Density
Reactivity     High      Medium    Low
High           5.2       5.2       5.2
Medium         2.265     1.765     1.029
Low            2.265     1.029     0.294

2D Flame Expansion Case
                  Obstacle Density
Reactivity     High      Medium    Low
High           1.765     1.029     0.588
Medium         1.235     0.662     0.118
Low            0.662     0.471     0.079

3D Flame Expansion Case
                  Obstacle Density
Reactivity     High      Medium    Low
High           0.588     0.153     0.071
Medium         0.206     0.100     0.037
Low            0.147     0.100     0.037
TABLE 2.20. Geometric Considerations for the Baker-Strehlow Vapor Cloud Explosion Model (Baker, 1996)

Type               Dimension   Description
Point symmetry     3-D         "Unconfined volume," almost completely free expansion.
Line symmetry      2-D         Platforms carrying process equipment; space beneath cars; open-sided multistory buildings.
Planar symmetry    1-D         Tunnels, corridors, or sewage systems.
fuels are classified as average reactivity. Fuel mixtures are classified according to the
concentration of the most reactive component.
Confinement is based on three symmetries, as shown in Table 2.20:
point-symmetry (3D), line-symmetry (2D), and planar-symmetry (ID).
Point-symmetry, also referred to as spherical or unconfined geometry, has the
lowest degree of flame confinement. The flame is free to expand spherically from a
point ignition source. The overall flame surface increases with the square of the distance from the point ignition source. The flame-induced flow field can decay freely in
three directions. Therefore, flow velocities are low, and the flow field disturbances by
obstacles are small.
In line-symmetry, that is, a cylindrical flame between two plates, the overall flame
surface area is proportional to the distance from the ignition point. Consequently,
deformation of the flame surface will have a stronger effect than in the point-symmetry
case.
In planar symmetry, that is, a planar flame in a tube, the projected flame surface
area is constant. There is hardly any flow field decay, and flame deformation has a very
strong effect on flame acceleration.
Obstacle density is classified as low, medium and high, as shown in Table 2.21, as a
function of the blockage ratio and pitch. The blockage ratio is defined as the ratio of the
area blocked by obstacles to the total cross-section area. The pitch is defined as the distance between successive obstacles or obstacle rows. There is normally an optimum
value for the pitch; when the pitch is too large, the wrinkles in the flame front will burn
out and the flame front will slow down before the next obstacle is reached. When the
pitch is too small, the gas pockets between successive obstacles are relatively unaffected
by the flow (Baker et al., 1994).
Low density assumes few obstacles in the flame's path, or the obstacles are widely
spaced (blockage ratio less than 10%), and there are only one or two layers of obstacles.
At the other extreme, high obstacle density occurs when there are three or more fairly
closely spaced layers of obstacles with a blockage ratio of 40% or greater per layer.
Medium density falls between the two categories.
TABLE 2.21. Confinement Considerations for the Baker-Strehlow Vapor Cloud Explosion Model (Baker, 1996)

Type      Obstacle Blockage Ratio Per Plane   Pitch for Obstacle Layers
Low       Less than 10%                       One or two layers of obstacles
Medium    Between 10% and 40%                 Two to three layers of obstacles
High      Greater than 40%                    Three or more fairly closely spaced obstacle layers
A high obstacle density may occur in a process unit in which there are many closely
spaced structural members, pipes, valves, and other turbulence generators. Also, pipe
racks in which there are multiple layers of closely spaced pipes must be considered high
density.
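The Table 2.19 lookup is straightforward to encode. The sketch below is illustrative (the dictionary and function names are not from the reference); it simply returns the tabulated maximum flame speed for a given expansion case, reactivity, and obstacle density:

```python
# Flame speeds (Mach number) from Table 2.19, keyed by
# (flame expansion case, reactivity); inner keys are obstacle density.
FLAME_SPEED = {
    ("1D", "high"):   {"high": 5.2,   "medium": 5.2,   "low": 5.2},
    ("1D", "medium"): {"high": 2.265, "medium": 1.765, "low": 1.029},
    ("1D", "low"):    {"high": 2.265, "medium": 1.029, "low": 0.294},
    ("2D", "high"):   {"high": 1.765, "medium": 1.029, "low": 0.588},
    ("2D", "medium"): {"high": 1.235, "medium": 0.662, "low": 0.118},
    ("2D", "low"):    {"high": 0.662, "medium": 0.471, "low": 0.079},
    ("3D", "high"):   {"high": 0.588, "medium": 0.153, "low": 0.071},
    ("3D", "medium"): {"high": 0.206, "medium": 0.100, "low": 0.037},
    ("3D", "low"):    {"high": 0.147, "medium": 0.100, "low": 0.037},
}

def flame_speed(expansion, reactivity, obstacle_density):
    """Maximum flame speed (Mach number) for the given case."""
    return FLAME_SPEED[(expansion, reactivity)][obstacle_density]

# 2-D expansion, average (medium) reactivity, medium obstacles -> Mach 0.662
mf = flame_speed("2D", "medium", "medium")
```

Because Table 2.19 lists the maximum flame speed for each case, the lookup inherits the table's conservatism.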
Once the flame speed is determined, then Figure 2.50 is used to determine the
side-on overpressure and Figure 2.51 is used to determine the specific impulse of the
explosion. The curves on these figures are labeled with two flame velocities: Mw and Msu. Mw denotes the flame velocity with respect to a fixed coordinate system, and is called the "apparent flame speed." Msu is the flame velocity with respect to the unburned gas ahead of the flame front. Both of these quantities are expressed as Mach numbers, calculated relative to the ambient speed of sound.
Figures 2.50 and 2.51 are based on free air bursts —for a ground or near ground
level explosion, the energy is multiplied by a factor of two to account for the reflected
blast wave.
The procedure for implementing the Baker-Strehlow method is similar to the
TNO Multi-Energy method, with the exception that steps 4 and 5 are replaced by
Table 2.19 and Figures 2.50 and 2.51.
Logic Diagram. A logic diagram for the application of the TNT equivalency method is
given in Figure 2.52. The main inputs are the mass and dimensions of the flammable
cloud and an estimate of explosion efficiency. The main outputs are the peak side-on
overpressure or damage levels with distance.
Theoretical Foundation. The TNT model is well established for high explosives, but when applied to flammable vapor clouds it requires the explosion yield, η, determined from past incidents. There are several physical differences between TNT detonations and VCE deflagrations that limit the theoretical validity. The TNO multi-energy method is directly correlated to incidents and has a defined efficiency term, but the user is required to specify a relative blast strength from 1 to 10. The Baker-Strehlow method uses flame speed data correlated with relative reactivity, obstacle density, and geometry to replace the relative blast strength in the TNO method. Both methods produce relatively close results in examples worked.

FIGURE 2.50. Baker-Strehlow model for vapor cloud explosions. The curve provides the scaled overpressure, P̄s = Ps/P0, as a function of the Sachs-scaled distance, R̄ = R/(E/P0)^(1/3) (Baker, 1996).

FIGURE 2.51. Baker-Strehlow model for vapor cloud explosions. The curve provides the scaled impulse, ī = is a0/(P0^(2/3) E^(1/3)), as a function of the Sachs-scaled distance (Baker, 1996).
Input Requirements and Availability. The following inputs are required for the individual explosion models:
FIGURE 2.52. Logic diagram for the application of the TNT equivalency model. A release/dispersion model provides concentration profiles, from which the mass and extent of the flammable cloud are estimated. The TNT equivalent weight is then estimated from Eq. (2.2.1), using the heat of combustion and the explosive efficiency. The scaled distance parameter for a specified overpressure is obtained from the TNT scaled overpressure curve (Figure 2.48) or the equations in Table 2.17, and the effect distance follows from Eq. (2.2.7), defining the vapor cloud explosion (VCE) effect zone.
• The TNT equivalence, TNO multi-energy and Baker-Strehlow methods require
the mass of flammable material in the vapor cloud, and the lower heat of combustion for the vapor.
• The TNT equivalent model requires the specification of the explosion efficiency.
The TNO multi-energy method requires the specification of the degree of confinement and the specification of a relative blast strength.
• The Baker-Strehlow method requires a specification of the chemical reactivity,
the obstacle density and the geometry.
Baker (1996) provides guidelines to determine the mass of flammable material.
For small releases of flammable materials, a typical approach would be to obtain the
fuel mass between the flammability limits using a dispersion model. This approach,
however, does not work once the flammable portion of the cloud achieves a size that is
greater than the confined volume. For this case, the confined volume must be used to
estimate the energy term. This can be done by inspecting the process plant and identifying reasonable bounds for confinement and congestion. In most cases, the answer is
fairly obvious since equipment is frequently lined up along either side of a pipe rack or
alley. Process plants have a large variety of confinement based on the geometry of the
plant. Towers which extend above confined areas are in the open and are normally not
considered in the energy estimates. As a result, the upper bound for the volume is usually the upper bound of the congestion above the confined areas.
The confined volume for a multi-level unit in a chemical plant is very frequently the
volume within the structural steel framework supporting the equipment, with possible
exceptions where there are ground level items, such as towers, adjoining a multi-level
unit. Frequently, the upper-most level of a multi-level unit has very little equipment,
and it is overly conservative to extend the confined volume all the way up to the top of
the equipment on the upper deck. A reasonable judgment must be made during a site
inspection based on the freedom with which a flame can expand away from a confined
zone.
Output. All three methods predict side-on overpressure and specific impulse with distance. The overpressure is useful to determine the consequence directly, via Table 2.18.
The specific impulse is necessary to determine the dynamic loading effects on a structure.
Simplified Approaches. The TNT, TNO multi-energy and Baker-Strehlow methods are
simplified approaches. A further simplification would be to use the initial vapor cloud
mass as input without applying a dispersion model, but this might overestimate cloud
size after it drifts to an ignition source.
2.2.1.3. EXAMPLE PROBLEMS
Example 2.19: Blast Wave Parameters. A 10-kg mass of TNT explodes on the
ground. Determine the overpressure, arrival time, duration time, and impulse 10 m
away from the blast.
Solution: This problem is solved by using Eq. (2.2.7) to determine the scaled
distance.
Z = R/W^(1/3) = (10 m)/(10 kg)^(1/3) = 4.64 m/kg^(1/3)
The required quantities are determined by using Figure 2.48 or Table 2.17. Using
Table 2.17, for the overpressure,
a = -0.2143
b = 1.3503

Then

a + b log10 Z = -0.2143 + 1.3503 log10(4.64) = 0.6859

Substituting into the equation provided in Table 2.17, and using the values for the constants,

log10 P° = Σi ci (a + b log10 Z)^i
         = 2.781 - 1.696(0.6859) - 0.15416(0.6859)^2 + 0.5141(0.6859)^3
           + 0.0988(0.6859)^4 - 0.2939(0.6859)^5 + ...

P° = 49.27 kPa
The procedure is similar for the other quantities required.
The entire procedure is easily implemented using a spreadsheet, using the equations found in Table 2.17. The output of this spreadsheet is shown in Figure 2.53. The results are

Scaled distance:   4.64 m/kg^(1/3)
Overpressure:      49.3 kPa = 7.15 psi
Specific impulse:  136.4 Pa·s (scaled: 63.3 Pa·s)
Pulse duration:    7.9 ms (scaled: 3.7 ms)
Arrival time:      15.8 ms (scaled: 7.3 ms)
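The spreadsheet steps can be sketched in a few lines of code. Only the overpressure curve-fit coefficients quoted in this example are used; since the Table 2.17 series is truncated here, the computed overpressure is approximate:

```python
import math

A, B = -0.2143, 1.3503   # overpressure fit constants from Table 2.17
# Leading coefficients of the Table 2.17 curve fit (series truncated here)
COEFFS = [2.781, -1.696, -0.15416, 0.5141, 0.0988, -0.2939]

def scaled_distance(R_m, W_kg):
    """Hopkinson-scaled distance z = R / W**(1/3), in m/kg^(1/3)."""
    return R_m / W_kg ** (1.0 / 3.0)

def overpressure_kPa(R_m, W_kg):
    """Peak side-on overpressure (kPa); fit valid for 0.0674 < z < 40."""
    u = A + B * math.log10(scaled_distance(R_m, W_kg))
    return 10.0 ** sum(c * u ** i for i, c in enumerate(COEFFS))

z = scaled_distance(10.0, 10.0)    # 4.64 m/kg^(1/3)
p = overpressure_kPa(10.0, 10.0)   # ~49 kPa with the truncated series
```

With the full coefficient list from Table 2.17 the same code reproduces the 49.27 kPa spreadsheet value.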
Example 2.20: TNT Equivalency. Using the TNT equivalency model, calculate the distance to 5 psi overpressure (equivalent to heavy building damage) of a VCE of 10 short tons of propane.
Example 2.19: Blast Parameters

Input Data:
  TNT mass:              10 kg
  Distance from blast:   10 m

Calculated Results:
  Scaled distance, z:    4.6416 m/kg^(1/3)

Overpressure (valid for 0.0674 < z < 40):
  a + b*log(z):          0.685866
  Overpressure:          49.27 kPa = 7.15 psig

Impulse (valid for 0.0674 < z < 40):
  a + b*log(z):          -0.34244
  Scaled impulse:        63.30 (Pa s)/(kg TNT)^(1/3)
  Impulse:               136.38 Pa s

Duration (valid for 0.178 < z < 40):
  a + b*log(z):          -1.22726
  Scaled duration:       3.67 (ms)/(kg TNT)^(1/3)
  Duration:              7.91 ms

Arrival Time (valid for 0.0674 < z < 40):
  a + b*log(z):          0.716136
  Scaled arrival time:   7.344 (ms)/(kg TNT)^(1/3)
  Arrival time:          15.82 ms

FIGURE 2.53. Spreadsheet output for Example 2.19: Blast parameters.
Data:
Mass: 10 tons = 20,000 lb
Lower heat of combustion of propane (Ec): 19,929 Btu/lb (46,350 kJ/kg)
Assumed explosion efficiency (η): 0.05
Assumed Ec,TNT: 2000 Btu/lb

Solution: From Eq. (2.2.1),

W = ηMEc/Ec,TNT = (0.05)(20,000 lb)(19,929 Btu/lb)/(2000 Btu/lb) = 9965 lb TNT = 4520 kg TNT

The scaled overpressure is 5 psi/14.7 psi = 0.340. From Figure 2.48 the scaled distance is 5.7 m/(kg TNT)^(1/3). Converting the scaled distance into an actual distance:

R = Z W^(1/3) = (5.7 m/kg^(1/3))(4520 kg)^(1/3) = 94.2 m = 309 ft
The procedure is easily implemented using a spreadsheet, as shown in Figure 2.54.
In this case the solution is by trial and error—the distance is modified to achieve the
desired overpressure. The results are the same as the numerical calculation above.
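The calculation scripts directly from Eq. (2.2.1). The 5.7 m/kg^(1/3) scaled distance read from Figure 2.48 is supplied as an input, since the chart itself is not reproduced in code:

```python
def tnt_equivalent_lb(eta, mass_lb, dhc_btu_lb, e_tnt_btu_lb=2000.0):
    """Eq. (2.2.1): W = eta * M * Ec / Ec_TNT, in lb TNT."""
    return eta * mass_lb * dhc_btu_lb / e_tnt_btu_lb

W_lb = tnt_equivalent_lb(0.05, 20000.0, 19929.0)   # ~9965 lb TNT
W_kg = W_lb * 0.4536                                # ~4520 kg TNT
R_m = 5.7 * W_kg ** (1.0 / 3.0)                     # ~94 m to 5 psi
```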
Example 2.21: TNO and Baker-Strehlow Methods. (Baker et al., 1994) Consider
the explosion of a propane-air vapor cloud confined beneath a storage tank. The tank is
supported 1 m off the ground by concrete piles. The concentration of vapor in the
cloud is assumed to be at stoichiometric concentrations. Assume a cloud volume of
Example 2.20: TNT Equivalency of a Vapor Cloud

Input Data:
  TNT mass:              4520 kg
  Distance from blast:   94.2 m   <-- trial-and-error distance to get overpressure

Calculated Results:
  Scaled distance, z:    5.6973 m/kg^(1/3)

Overpressure (valid for 0.0674 < z < 40):
  a + b*log(z):          0.806052
  Overpressure:          34.57 kPa = 5.02 psig

Impulse (valid for 0.0674 < z < 40):
  a + b*log(z):          -0.1282
  Scaled impulse:        52.68 (Pa s)/(kg TNT)^(1/3)
  Impulse:               871.08 Pa s

Duration (valid for 0.178 < z < 40):
  a + b*log(z):          -0.919
  Scaled duration:       3.98 (ms)/(kg TNT)^(1/3)
  Duration:              65.72 ms

Arrival Time (valid for 0.0674 < z < 40):
  a + b*log(z):          0.83877
  Scaled arrival time:   10.039 (ms)/(kg TNT)^(1/3)
  Arrival time:          165.98 ms

FIGURE 2.54. Spreadsheet output for Example 2.20: TNT equivalency of a vapor cloud.
2094 m3, confined below the tank, representing the volume underneath the tank.
Determine the overpressure as a function of distance from the blast using:
a. the TNO multi-energy method
b. the Baker-Strehlow method
Solution: (a) The heat of combustion of a stoichiometric hydrocarbon-air mixture
is approximately 3.5 MJ/m3 and, by multiplying by the confined volume, the resulting
total energy is 7329 MJ. To apply the TNO multi-energy method a blast strength of 7
was chosen. A standoff distance is then specified and the Sachs scaled energy is determined using Eq. (2.2.8). The curves labeled "7" on Figure 2.49 are then used to determine the overpressure. The procedure is repeated at different standoff distances.
The procedure is readily implemented via spreadsheet, as shown in Figure 2.55 for a standoff distance of 30 m. The spreadsheet includes the data digitized from Figure 2.49 (not shown in Figure 2.55); the results are interpolated in the spreadsheet from the digitized data.

Example 2.21a: TNO Multi-Energy Model

Input Data:
  Heat of combustion:          3.5 MJ/m^3
  Standoff distance:           30 m
  Ambient pressure:            101,325 Pa
  Speed of sound at ambient:   344 m/s   (used for duration only)

The cloud volume of 2094 m^3 is assigned to blast strength 7; all other blast strengths are assigned zero volume (1 represents a nominal blast and 10 the maximum blast).

Calculated Results (blast strength 7):
  Combustion energy:       7329 MJ
  Sachs-scaled distance:   0.7200
  Scaled overpressure:     0.75109
  Side-on overpressure:    76.10 kPa = 11.04 psi
  Scaled duration:         0.268
  Duration:                32.4 ms

FIGURE 2.55. Spreadsheet output for Example 2.21a: TNO multi-energy method.

FIGURE 2.56. Comparison of results for Example 2.21: overpressure (kPa) as a function of standoff distance (m).
The results of the complete calculation, as a function of standoff distance, are
shown in Figure 2.56.
(b) The Baker-Strehlow pressure curves apply to free air blasts. Since the vapor
cloud for this example is at ground level, the energy of the cloud is doubled to account
for the strong reflection of the blast wave. The resulting total explosion energy is thus
14,600 MJ.
Example 2.21b: Baker-Strehlow Vapor Cloud Explosion Model

Input Data:
  Standoff distance:        30 m
  Flame speed:              Mach 0.662
  Explosion energy:         14,600 MJ
  Ambient pressure:         101,325 Pa
  Ambient speed of sound:   344 m/s

Calculated Results:
  Sachs-scaled distance:    0.57   (must be less than 10)

Interpolating the data digitized from Figures 2.50 and 2.51 at this flame speed and scaled distance gives:
  Scaled overpressure:      0.6124
  Actual overpressure:      62.05 kPa
  Scaled impulse:           0.06187
  Specific impulse:         955.4 kPa·ms

FIGURE 2.57. Spreadsheet output for Example 2.21b: Baker-Strehlow vapor cloud explosion model.
The next step is to determine the flame speed using Table 2.19. Because the vapor
cloud is enclosed beneath the storage tank the flame can only expand in two directions.
Therefore, confinement is 2D. Based on the description of the piles the obstacle density is
chosen as medium. The fuel reactivity for propane is average. The resulting flame speed
from Table 2.19 is 0.662. Once a standoff distance is specified, the Sachs scaled energy is
determined from Eq. (2.2.8). The final pressure is interpolated from Figure 2.50.
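As a quick sketch of the two steps just described, doubling the cloud energy for the ground-level burst and applying Eq. (2.2.8) reproduces the scaled distance used with Figure 2.50:

```python
E_J = 2 * 2094 * 3.5e6    # cloud energy (J), doubled for the ground-level burst
p0 = 101325.0             # ambient pressure (Pa)
Rbar = 30.0 / (E_J / p0) ** (1.0 / 3.0)   # ~0.57, matching the spreadsheet
```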
The entire procedure is readily implemented using a spreadsheet, shown in Figure
2.57 for a standoff distance of 30 m. The spreadsheet contains data digitized from Figures 2.50 and 2.51 (not shown in Figure 2.57). The results are interpolated by the
spreadsheet from the digitized data.
The complete results of the procedure, as a function of distance, are shown in
Figure 2.56. For this example problem the TNO multi-energy and the Baker-Strehlow
methods produce similar results. Based on the uncertainty inherent in these models, the
results are essentially identical.
2.2.1.4. DISCUSSION
Strengths and Weaknesses
All of the methods (except the TNT equivalency) require an estimate of the vapor concentration—this can be difficult to determine in a congested process area. The TNT
equivalency model is easy to use. In the TNT approach a mass of fuel and a corresponding explosion efficiency must be selected. A weakness is the substantial physical difference between TNT detonations and VCE deflagrations. The TNO and Baker-Strehlow
methods are based on interpretations of actual VCE incidents—these models require
additional data on the plant geometry to determine the confinement volume. The TNO
method requires an estimate of the blast strength while the Baker-Strehlow method
requires an estimate of the flame speed.
Identification and Treatment of Possible Errors
The largest potential error with the TNT equivalency model is the choice of an explosion efficiency. One needs to ensure that the yield corresponds with the correct mass of
fuel. An efficiency range of 1-10% affects predicted distances to selected overpressures by more than a factor of two [from Eq. (2.2.1), the distance to a particular overpressure is proportional to the cube root of the calculated TNT equivalent].
Another error is in the estimation of the flammable cloud mass, which is based on flash
and evaporation calculations (Section 2.1.2) and dispersion calculations (Section
2.1.3), both of which are subject to error. No dispersion model is capable of predicting
vapor concentrations in a congested process area. A smaller source of error is the
quoted heat of combustion for TNT which varies about 5%. The TNT model assumes
unobstructed blast wave propagation, which is rarely true for chemical plants. The
TNT equivalency model has the virtue of being easiest to use.
Resources Needed
TNT equivalency calculations to predict overpressure can be completed in under an
hour, given complete dispersion model output for cloud mass and extent.
Available Computer Codes
Vapor Cloud Explosion Modeling:
AutoReaGas (TNO Prins Maurits Laboratory, The Netherlands)
REACFLOW-2D (JRC Safety Technology, Ispra)
VCLOUD (W. E. Baker Engineering, San Antonio, TX)
Several integrated analysis packages also contain explosion simulators. These include:
ARCHIE (Environmental Protection Agency, Washington, DC)
EFFECTS-2 (TNO, Apeldoorn, The Netherlands)
uFLACS (DNV, Houston, TX)
PHAST (DNV, Houston, TX)
QRAWorks (PrimaTech, Columbus, OH)
SAFER (Safer Systems, Westlake Village, CA)
SAFETI (DNV, Houston, TX)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
2.2.2. Flash Fires
A flash fire is the nonexplosive combustion of a vapor cloud resulting from a release of
flammable material into the open air. Experiments have shown (AIChE, 1994) that
vapor clouds only explode in areas where intensely turbulent combustion develops and
only if certain conditions are met. Major hazards from flash fires are from thermal radiation and direct flame contact.
The literature provides little information on the effects of thermal radiation from
flash fires, probably because thermal radiation hazards from burning vapor clouds are
considered less significant than possible blast effects. Furthermore, flash combustion of a vapor cloud normally lasts no more than a few tenths of a second. Therefore, the total
intercepted radiation by an object near a flash fire is substantially lower than in the case
of a pool fire.
Flash fire models—if based on flame radiation—are subject to large error if radiation is estimated incorrectly, because predicted radiation varies with the fourth power
of temperature.
Typically, the burning zone is estimated by first performing a dispersion calculation and defining the burning zone from the ½ LFL limit back to the release point, even though the vapor concentration might be above the UFL. Turbulence-induced combustion mixes this material with air and burns it.
In order to compute the thermal radiation effects produced by a burning vapor
cloud, it is necessary to know the flame's temperature, size, and dynamics during its
propagation through the cloud. Thermal radiation intercepted by an object in the
vicinity is determined by the emissive power of the flame (determined by the flame
temperature), the flame's emissivity, the view factor, and an atmospheric-attenuation
factor. See Section 2.2.4 for methods for modeling thermal radiation.
Flash fire models are also subject to similar dispersion model errors present in VCE
calculations.
2.2.3. Physical Explosion
2.2.3.1. BACKGROUND
Purpose
When a vessel containing a pressurized gas ruptures, the resulting stored energy is
released. This energy can produce a shock wave and accelerate vessel fragments. If the
contents are flammable it is possible that ignition of the released gas could result in
additional consequence effects. Figure 2.46 illustrates possible scenarios that could
result. This subsection illustrates calculation tools for both shock wave and projectile
effects from this type of explosion.
Philosophy
A physical explosion relates to the catastrophic rupture of a pressurized gas filled vessel.
Rupture could occur for the following reasons:
1. Failure of pressure regulating and pressure relief equipment (physical
overpressurization)
2. Reduction in vessel thickness due to
a. corrosion
b. erosion
c. chemical attack
3. Reduction in vessel strength due to
a. overheating
b. material defects with subsequent development of fracture
c. chemical attack, e.g., stress corrosion cracking, pitting, embrittlement
d. fatigue induced weakening of the vessel
4. Internal runaway reaction.
5. Any other incident which results in loss of process containment.
Failure can occur at or near the operating pressure of the vessel (items 2 and 3
above), or at elevated pressure (items 1 and 4 above).
When the contents of the vessel are released both a shock wave and projectiles may
result. The effects are more similar to a detonation than a vapor cloud explosion
(VCE). The extent of a shock wave depends on the phase of the vessel contents originally present. Table 2.22 describes the various scenarios.
There is a maximum amount of energy in a bursting vessel that can be released.
This energy is allocated to the following:
• vessel stretch and tearing
• kinetic energy of fragments
• energy in shock wave
• "waste" energy (heating of surrounding air)
The relative distribution of these energy terms will change over the course of the
explosion. Exactly what proportion of available energy will actually go into the production of shock waves is difficult to determine. Saville (1977) in the UK High Pressure
Safety Code suggests that 80% of the available system energy becomes shock wave
energy for brittle type failure. For the ejection of a major vessel section, 40% of the available system energy becomes shock wave energy. For both cases, the remainder of the energy goes to fragment kinetic energy.

TABLE 2.22. Characteristics of Various Types of Physical Explosions

Type                                            Shock Wave Energy
Gas-filled vessel                               Expansion of gas
Liquid-filled vessel,                           Expansion of gas from vapor space volume;
  liquid temperature < liquid boiling point       liquid contents unchanged and run out.
Liquid-filled vessel,                           Expansion of gas from vapor space volume
  liquid temperature > liquid boiling point       coupled with flash evaporation of liquid.
In general, physical explosions from catastrophic vessel rupture will produce directional explosions. This occurs because failure usually occurs from crack propagation
starting at one location. If the failure were brittle, resulting in a large number of fragments, the explosion would be less directional. However, the treatment of shock waves
from this type of failure usually does not consider directionality.
2.2.3.2. DESCRIPTION
Description of Technique
Several methods relate directly to calculation of a TNT equivalent energy and use of
shock wave correlations as in Figure 2.48 and Table 2.17. There are various expressions
that can be developed for calculating the energy released when a gas initially having a
volume, V, expands in response to a decrease in pressure from a pressure, P1, to atmospheric pressure, P0. The simplest expression is due to Brode (1959). This expression determines the energy required to raise the pressure of the gas at constant volume from atmospheric pressure, P0, to the initial, or burst, pressure, P1:

E = (P1 - P0) V / (γ - 1)   (2.2.11)

where E is the explosion energy (energy), V is the volume of the vessel (volume), and γ is the heat capacity ratio for the expanding gas (unitless).
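As a sketch, Eq. (2.2.11) in code; the 10-bar, 1-m^3 air-filled vessel is an illustrative assumption:

```python
def brode_energy_J(p1_Pa, p0_Pa, V_m3, gamma):
    """Brode equation, Eq. (2.2.11): E = (P1 - P0) * V / (gamma - 1)."""
    return (p1_Pa - p0_Pa) * V_m3 / (gamma - 1.0)

# A 1 m^3 air-filled vessel (gamma = 1.4) bursting at 10 bar abs
E = brode_energy_J(10.0e5, 1.01325e5, 1.0, 1.4)   # ~2.25 MJ
```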
If it is assumed that expansion occurs isothermally and that the ideal gas law
applies, the following equation can be derived (Brown, 1985):
W = [1.39 × 10^-6 (lb-mole · lb TNT)/(ft^3 · Btu)] V (P1/P0) Rg T0 ln(P1/P2)   (2.2.12)

where
W is the energy (lb TNT)
V is the volume of the compressed gas (ft^3)
P1 is the initial pressure of the compressed gas (psia)
P2 is the final pressure of the expanded gas (psia)
P0 is the standard pressure (14.7 psia)
T0 is the standard temperature (492°R)
Rg is the gas constant (1.987 Btu/lb-mole-°R)
1.39 × 10^-6 is a conversion factor (this factor assumes that 2000 Btu = 1 lb TNT)
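A sketch of Eq. (2.2.12) in code; the 1000-ft^3, 150-psia vessel is an illustrative assumption:

```python
import math

def isothermal_tnt_lb(V_ft3, p1_psia, p2_psia=14.7, p0_psia=14.7,
                      Rg=1.987, T0_R=492.0):
    """Eq. (2.2.12): TNT equivalent (lb) of an isothermal gas expansion."""
    return (1.39e-6 * V_ft3 * (p1_psia / p0_psia)
            * Rg * T0_R * math.log(p1_psia / p2_psia))

W = isothermal_tnt_lb(1000.0, 150.0)   # 1000 ft^3 at 150 psia -> ~32 lb TNT
```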
Another approach (Crowl, 1992) is to apply the concept of available energy. Available energy represents the maximum mechanical energy that can be extracted from a material as it moves into equilibrium with the environment. Crowl (1992) showed that for a nonreactive material initially at a pressure P and temperature T, expanding into an ambient pressure of PE, the maximum mechanical energy, E, derivable from this material is given by

E = n Rg T [ln(P/PE) - (1 - PE/P)]   (2.2.13)

Note that the first term within the brackets is equivalent to the isothermal energy of expansion. The second term represents the loss of energy as a result of the second law of thermodynamics. The result predicted by Eq. (2.2.13) is smaller than the result predicted assuming an isothermal expansion, but greater than the result assuming an adiabatic expansion.
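A sketch comparing Eq. (2.2.13) against its isothermal first term, in SI units; the gas quantity and pressures are illustrative assumptions:

```python
import math

R_GAS = 8.314  # J/(mol K)

def available_energy_J(n_mol, T_K, p_Pa, pe_Pa):
    """Eq. (2.2.13): maximum mechanical energy extractable on expansion."""
    return n_mol * R_GAS * T_K * (math.log(p_Pa / pe_Pa) - (1.0 - pe_Pa / p_Pa))

def isothermal_energy_J(n_mol, T_K, p_Pa, pe_Pa):
    """Isothermal expansion energy: the first term of Eq. (2.2.13)."""
    return n_mol * R_GAS * T_K * math.log(p_Pa / pe_Pa)

E_avail = available_energy_J(1000.0, 298.0, 10.0e5, 1.01325e5)
E_iso = isothermal_energy_J(1000.0, 298.0, 10.0e5, 1.01325e5)
# E_avail is smaller than E_iso, as the text notes
```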
The calculated equivalent amount of TNT energy can now be used to estimate
shock wave effects. The analogy of the explosion of a container of pressurized gas to a
condensed phase point source explosion of TNT is not appropriate in the near field
since the vessel is not a point source. Prugh (1988) suggests a correction method using
a virtual distance from an explosion center based on work by Baker et al. (1983) and
Petes (1971). This method is described below.
When an idealized sphere bursts, the air shock has its maximum overpressure right
at the contact surface between the gas sphere and the air. Since, initially, the flow is
strictly one-dimensional, a shock tube relationship between the bursting pressure ratio
and shock pressure can be used to calculate the pressure in the air shock. The blast pressure, Ps, at the surface of an exploding pressure vessel is thus estimated from the following expression (Baker et al., 1983; Prugh, 1988):

Pb = Ps [1 - 3.5(γ - 1)(Ps - 1) / √((γT/M)(1 + 5.9 Ps))]^(-2γ/(γ-1))   (2.2.14)
where
Ps is the pressure at the surface of the vessel (bar abs)
Pb is the burst pressure of the vessel (bar abs)
γ is the heat capacity ratio of the expanding gas (Cp/Cv)
T is the absolute temperature of the expanding gas (K)
M is the molecular weight of the expanding gas (mass/mole)
The above equation assumes that expansion will occur into air at atmospheric pressure at a temperature of 25°C. A trial-and-error solution is required since the equation is not explicit for Ps.
Equation (2.2.14) also assumes that the explosion energy is distributed uniformly
across the vessel. In reality this is rarely the case.
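Because Eq. (2.2.14) is not explicit in Ps, the trial-and-error step can be automated. The sketch below solves for Ps by bisection; the bracketing logic and the example vessel conditions are assumptions for illustration, not part of the reference:

```python
import math

def burst_pressure(ps_bar, gamma, T_K, M):
    """Eq. (2.2.14): burst pressure Pb implied by a surface pressure Ps
    (both in bar abs, with ambient pressure near 1 bar)."""
    term = 3.5 * (gamma - 1.0) * (ps_bar - 1.0) / math.sqrt(
        (gamma * T_K / M) * (1.0 + 5.9 * ps_bar))
    if term >= 1.0:
        return float("inf")   # outside the valid range of the relation
    return ps_bar * (1.0 - term) ** (-2.0 * gamma / (gamma - 1.0))

def surface_pressure(pb_bar, gamma, T_K, M):
    """Solve Eq. (2.2.14) for Ps (bar abs) by bisection on [1, Pb]."""
    lo, hi = 1.0 + 1e-9, pb_bar
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if burst_pressure(mid, gamma, T_K, M) < pb_bar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Air-filled vessel (gamma = 1.4, M = 29) bursting at 30 bar abs, 298 K
ps = surface_pressure(30.0, 1.4, 298.0, 29.0)   # a few bar abs
```

The bisection exploits the fact that the implied burst pressure increases with Ps over the physical range, so the bracket [1, Pb] contains the root.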
The procedure of Prugh (1988) for determining the overpressure at a distance
from a bursting vessel is as follows:
1. Determine the energy of explosion using Eq. (2.2.12).
2. Determine the blast pressure at the surface of the vessel, Ps, using Eq. (2.2.14).
This is a trial and error solution.
3. The scaled distance, Z, for the explosion is obtained from Figure 2.48, or the
equations in Table 2.17. Most pressure vessels are at or near ground level.
4. A value for the distance, R, from the explosion center is calculated using Eq.
(2.2.7) where the equivalent energy of TNT, W, has been calculated from Eq.
(2.2.1).
5. The distance from the center of the pressurized gas container to its surface is
subtracted from the distance, R, to produce a virtual "distance" to be added to
distances for shock wave evaluations.
6. The overpressure at any distance is determined by adding the virtual distance to the
actual distance, and then using this distance to determine Z, the scaled distance.
Figure 2.48 or Table 2.17 is used to determine the resulting overpressure.
AIChE/CCPS (1994) describe a number of techniques for estimating overpressure for a rupture of a gas filled container. These methods are derived mostly from
the work of Baker et al. (1983) based on small scale experimental studies.
The first method is called the "basic method" (AIChE/CCPS, 1994). The procedure for this method is
1. Collect data. This includes:
• the vessel's internal absolute pressure, P1
• the ambient pressure, P0
• the vessel's volume of gas filled space, V
• the heat capacity ratio of the expanding gas, y
• the distance from the center of the vessel to the "target," r
• the shape of the vessel: spherical or cylindrical
2. Calculate the energy of explosion, E, using the Brode equation, Eq. (2.2.11).
The result must be multiplied by 2 to account for a surface explosion.
3. Determine the scaled distance, R̄, from the target using

       R̄ = r (P0/E)^(1/3)    (2.2.15)

4. Check the scaled distance. If R̄ < 2 then this procedure is not applicable and the
   refined method described later must be applied.
5. Determine the scaled overpressure, P̄s, and scaled impulse, īs, using Figures
   2.58 and 2.59, respectively.
6. Adjust P̄s and īs for geometry effects using the multipliers shown in Tables 2.23
   and 2.24.
7. Determine the final overpressure and impulse from the definitions of the scaled
variables.
8. Check the final overpressure. In the near field, this approach might produce a
pressure higher than the vessel pressure, which is physically impossible. If this
occurs, take the vessel pressure as the calculated overpressure.
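The first four steps of the basic method reduce to two short formulas. The sketch below applies the doubled Brode energy and the scaled-distance check; steps 5 through 8 still require the curves of Figures 2.58 and 2.59. The vessel conditions used here are hypothetical, chosen only to show the arithmetic.

```python
# Basic method, steps 1-4 (AIChE/CCPS, 1994). All input values are
# hypothetical, for illustration only.
P1 = 5.0e6      # vessel internal absolute pressure (Pa)
P0 = 1.01e5     # ambient pressure (Pa)
V = 10.0        # gas-filled volume of the vessel (m**3)
gamma = 1.4     # heat capacity ratio of the expanding gas
r = 30.0        # distance from vessel center to target (m)

# Step 2: Brode equation, Eq. (2.2.11), multiplied by 2 for a surface burst.
E = 2.0 * (P1 - P0) * V / (gamma - 1.0)

# Step 3: scaled distance, Eq. (2.2.15).
R_bar = r * (P0 / E) ** (1.0 / 3.0)

# Step 4: the basic method applies only when the scaled distance is >= 2.
print(round(E / 1e6), round(R_bar, 2), R_bar >= 2.0)
```

For this hypothetical case the scaled distance comes out just above 2, so the basic method applies; P̄s and īs would then be read from Figures 2.58 and 2.59 and adjusted per Tables 2.23 and 2.24.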
Scaled overpressure: P̄s = ps/P0 - 1
Scaled impulse: īs = (is a0)/(P0^(2/3) E^(1/3))
Scaled distance: R̄ = r (P0/E)^(1/3)
FIGURE 2.58. Scaled overpressure curve for rupture of a gas-filled vessel for the basic method.
Scaled distance: R̄ = r (P0/E)^(1/3)
FIGURE 2.59. Scaled impulse curve for rupture of a gas-filled vessel for the basic method. The
upper and lower lines are error limits for the procedure.
If R̄ < 2, then the above procedure must be replaced by a more detailed approach
(AIChE/CCPS, 1994). This approach replaces steps 4 and 5 above in the basic procedure with the following steps:
4a. Calculate the initial vessel radius. A hemispherical vessel on the ground is
    assumed for this calculation. From simple geometry for a sphere, the following
    equation for the initial vessel radius is obtained:

        r0 = (3V/2π)^(1/3) = 0.782 V^(1/3)    (2.2.16)

    where r0 is the initial vessel radius (length) and V is the vessel volume (length³).
TABLE 2.23. Adjustment Factors for P̄s and īs for Cylindrical
Vessels as a Function of R̄ (Baker et al., 1975)

                      Multiplier for
R̄                    P̄s        īs
<0.3                  4          2
0.3 to 1.6            1.6        1.1
1.6 to 3.5            1.6        1
>3.5                  1.4        1
TABLE 2.24. Adjustment Factors for P̄s and īs for Spherical
Vessels as a Function of R̄ (Baker et al., 1975)

                      Multiplier for
R̄                    P̄s        īs
<1                    2          1.6
>1                    1.1        1

Scaled variable definitions:

    R̄ = r (P0/E)^(1/3)
    P̄s = ps/P0 - 1
    īs = (is a0)/(P0^(2/3) E^(1/3))
4b. Determine the initial starting distance, R̄0, for the overpressure curve,

        R̄0 = r0 (P0/E)^(1/3)    (2.2.17)

4c. Calculate the initial peak pressure, Ps, using Eq. (2.2.14). A trial and error
    solution is required.
4d. Locate the starting point on the overpressure curves of Figure 2.60 using R̄0
    and P̄s. The closest curve shown on the figure, or an interpolated curve, is
    appropriate here.
5.  Determine P̄s at another R̄ from Figure 2.60 using the curve (or interpolated
    curve) which goes through the starting point of step 4d.
Tang et al. (1996) present the results of a detailed numerical simulation procedure
to model the effects of a bursting spherical vessel. They numerically solved the
nonsteady, nonlinear, one-dimensional flow equations. This resulted in a more detailed
figure to replace Figure 2.60.
AIChE/CCPS (1994) also provides a more detailed method to include the effects
of explosively flashing liquids during a vessel rupture.
Projectiles
When a high explosive detonates, a large number of small fragments with high velocity
and chunky shape result (AIChE/CCPS, 1994). In contrast, a BLEVE produces only a
few fragments, varying in size (small to large), shape (chunky or disk shaped), and
initial velocities. Fragments can travel long distances because large, half-vessel
fragments can "rocket" and disk-shaped fragments can "frisbee." Schulz-Forberg et al.
(1984) describe an investigation of BLEVE-induced vessel fragmentation. Baum (1984)
also discusses velocities of missiles from bursting vessels and pipes.

FIGURE 2.60. Scaled overpressure curve for rupture of a gas-filled vessel for the
more detailed method. Axes: scaled overpressure, P̄s = ps/P0 - 1, versus scaled
distance, R̄ = r (P0/E)^(1/3).
Baker et al. (1983), Brown (1985, 1986) and AIChE/CCPS (1994) provide formulas for prediction of projectile effects. They consider fracture of cylindrical and
spherical vessels into 2, 10, and 100 fragments. Typically, for these types of events,
only 2 or 3 fragments occur.
The first part of the calculation involves the estimation of an initial velocity. Once
fragments are accelerated they will fly through the air until they impact another object
or target on the ground. The second part of the calculation involves estimation of the
distance a projectile could travel.
In general, according to Baker et al. (1983), the technique for predicting initial
fragment velocities for spherical or cylindrical vessels bursting into equal fragments
requires knowledge of the internal pressure (P), internal volume (V), mass of the container/fragment (Mc), ratio of the gas heat capacities (γ), and the absolute temperature
of the gas at burst (T0).
The results of a parameter study (Baker et al., 1983) were used to develop Figure
2.61, which is used to determine the initial fragment velocity, u. The scaled pressure in
Figure 2.61 is given by
    P̄ = (P - P0)V / (Mc a0²)    (2.2.18)

where

P̄ is the scaled pressure (unitless)
P is the burst pressure of the vessel (force/area)
FIGURE 2.61. Scaled fragment velocity, v̄ = vi/(K a0), versus scaled pressure,
P̄ = (P - P0)V/(Mc a0²) (Baker et al., 1983).
P0 is the ambient pressure of the surrounding gas (force/area)
V is the volume of the vessel (length3)
Mc is the mass of the container (mass)
a0 is the speed of sound of the initial gas in the vessel (length/time)
The speed of sound for an ideal gas is computed from
    a0 = (TγRg/M)^(1/2)    (2.2.19)
where
a0 is the speed of sound (length/time)
T is the absolute temperature (temperature)
γ is the heat capacity ratio of the gas in the vessel (unitless)
Rg is the ideal gas constant (pressure·volume/mole·deg)
M is the molecular weight of the gas in the vessel (mass/mole)
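As a quick numerical check of Eq. (2.2.19), the formula is a one-liner in SI units (Rg = 8.314 J/mol·K, M in kg/mol):

```python
import math

def sound_speed(T, gamma, M):
    """Speed of sound of an ideal gas, Eq. (2.2.19): a0 = (T*gamma*Rg/M)**0.5.
    T in K, M in kg/mol; returns m/s."""
    Rg = 8.314  # ideal gas constant, J/(mol K)
    return math.sqrt(T * gamma * Rg / M)

print(round(sound_speed(300.0, 1.67, 0.004)))  # helium at 300 K, about 1020 m/s
print(round(sound_speed(298.0, 1.4, 0.029)))   # air at 298 K, about 346 m/s
```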
The y-axis in Figure 2.61 is the dimensionless velocity given by

    v̄ = vi/(K a0)    (2.2.20)

where vi is the velocity of the fragment (length/time), K is a correction factor for
unequal mass fragments given by Figure 2.62, and a0 is the speed of sound of the gas in
the vessel (length/time).
Table 2.25 contains curve fit equations for the fragment velocity correlations
presented in Figure 2.61. The data in Figure 2.62 are curve fit by the equation

    K = 1.306 × (Fragment Mass Fraction) + 0.308446    (2.2.21)

The procedure for applying this approach is as follows:

1. Given:
   Number of fragments, n
   Total mass of vessel, Mc
   Mass fraction for each fragment
TABLE 2.25. Curve Fit Equations for the Fragment Velocity Data of Figure 2.61

    ln[vi/(K a0)] = a ln P̄ + b

Number of          Spheres                Cylinders
fragments, n       a          b           a          b
2                  0.622206   0.213936    0.814896   0.355218
10                 0.598495   0.221165    0.598255   0.564998
100                0.603469   0.287515    0.591785   0.602712

Variables:
vi is the velocity of the fragment (length/time)
K is the correction factor for unequal fragments
a0 is the speed of sound of the gas in the vessel (length/time)
P̄ is defined by Eq. (2.2.18)
   Internal burst pressure of vessel, P
   Volume of vessel, V
   Ambient pressure, P0
   Absolute temperature of gas in vessel, T
   Heat capacity ratio of gas in vessel, γ
   Molecular weight of gas in vessel, M
2. Determine speed of sound of gas in vessel using Eq. (2.2.19).
3. Determine scaled pressure using Eq. (2.2.18).
4. Determine dimensionless velocity from Figure 2.61 or Table 2.25.
5. Determine unequal fragment correction from Figure 2.62 or Eq. (2.2.21).
6. Determine actual velocity for each fragment using Eq. (2.2.20).
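The six steps above can be sketched end to end, with the Table 2.25 curve fits and Eq. (2.2.21) standing in for Figures 2.61 and 2.62. The vessel data below (an air-filled sphere bursting into two equal fragments) are hypothetical, chosen only for illustration.

```python
import math

# Curve fits from Table 2.25: ln(v_i/(K*a0)) = a*ln(P_bar) + b
TABLE_2_25 = {
    ("sphere", 2): (0.622206, 0.213936),
    ("sphere", 10): (0.598495, 0.221165),
    ("sphere", 100): (0.603469, 0.287515),
    ("cylinder", 2): (0.814896, 0.355218),
    ("cylinder", 10): (0.598255, 0.564998),
    ("cylinder", 100): (0.591785, 0.602712),
}

def fragment_velocity(shape, n, mass_frac, Mc, V, P, P0, T, gamma, M):
    """Initial fragment velocity (m/s) per Eqs. (2.2.18)-(2.2.21) and the
    Table 2.25 fits. Mc in kg, V in m**3, P and P0 in Pa, T in K, M in kg/mol."""
    a0 = math.sqrt(T * gamma * 8.314 / M)        # Eq. (2.2.19), speed of sound
    P_bar = (P - P0) * V / (Mc * a0 ** 2)        # Eq. (2.2.18), scaled pressure
    a, b = TABLE_2_25[(shape, n)]
    v_bar = math.exp(a * math.log(P_bar) + b)    # Table 2.25 curve fit
    K = 1.306 * mass_frac + 0.308446             # Eq. (2.2.21), unequal-mass factor
    return v_bar * K * a0                        # Eq. (2.2.20)

# Hypothetical case: 500 kg air-filled sphere, 2 m**3, bursting at 10 MPa
# into two equal fragments (mass fraction 0.5 each).
v = fragment_velocity("sphere", 2, 0.5, 500.0, 2.0, 10.0e6, 0.1e6, 300.0, 1.4, 0.029)
print(round(v))  # on the order of 200 m/s
```

Because the fits are only valid over the range of Figure 2.61, scaled pressures far outside that range should still be checked against the figure itself.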
An empirically derived formula developed by Moore (1967) provides a simplified
method to determine the initial velocity, u, of a fragment,

    u = 1.092 (EG/Mc)^(1/2)    (2.2.22)

where for spherical vessels

    G = [1 + 3C/(5Mc)]^(-1)    (2.2.23)

and for cylindrical vessels

    G = [1 + C/(2Mc)]^(-1)    (2.2.24)

where
u  is the initial fragment velocity (m/s)
C  is the total gas mass (kg)
E  is the energy (J)
Mc is the mass of casing or vessel (kg)
FIGURE 2.62. Adjustment factor, K, for unequal mass fragments, as a function of
fragment fraction of total mass (Baker et al., 1983).
Moore's equation was derived for fragments accelerated from high explosives
packed in a casing. The equation predicts velocities higher than actual, especially for
low pressures and few fragments.
For pressurized vessels, a simplified method to determine the initial velocity of a
fragment is by the Moore (1967) equation,

    u = 2.05 (P D³/W)^(1/2)    (2.2.25)

where
u is the initial velocity of the fragment (ft/s)
P is the rupture pressure of the vessel (psig)
D is the fragment diameter (inches)
W is the weight of the fragment (lb)
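Both Moore forms are simple to evaluate. The sketch below implements Eqs. (2.2.22) through (2.2.24) in SI units and Eq. (2.2.25) in the English units stated above; all input values are hypothetical, for illustration only.

```python
import math

def moore_velocity(E, Mc, C, spherical=True):
    """Moore (1967), Eqs. (2.2.22)-(2.2.24): initial fragment velocity (m/s).
    E is the explosion energy (J), Mc the casing/vessel mass (kg),
    C the total gas mass (kg)."""
    if spherical:
        G = 1.0 / (1.0 + 3.0 * C / (5.0 * Mc))   # Eq. (2.2.23)
    else:
        G = 1.0 / (1.0 + C / (2.0 * Mc))         # Eq. (2.2.24)
    return 1.092 * math.sqrt(E * G / Mc)         # Eq. (2.2.22)

def moore_velocity_simple(P, D, W):
    """Moore (1967), Eq. (2.2.25): u (ft/s) from rupture pressure P (psig),
    fragment diameter D (in), and fragment weight W (lb)."""
    return 2.05 * math.sqrt(P * D ** 3 / W)

# Hypothetical spherical vessel: 100 MJ explosion energy, 1000 kg shell, 50 kg gas.
print(round(moore_velocity(1.0e8, 1000.0, 50.0)))       # m/s
# Hypothetical fragment: 500 psig, 12 in diameter, 50 lb.
print(round(moore_velocity_simple(500.0, 12.0, 50.0)))  # ft/s
```

As the text notes, the casing-derived form tends to overpredict velocities for low pressures and few fragments, so these values should be treated as upper estimates.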
The next step is to determine the distance the fragments will fly. From simple
physics, an object in free flight (neglecting air resistance) travels farthest at a
trajectory angle of 45°. The maximum distance is given by

    rmax = u²/g    (2.2.26)

where rmax is the maximum horizontal distance (length), u is the initial object velocity
(length/time), and g is the acceleration due to gravity (length/time²).
Kinney and Graham (1985) suggest a very simple formula for estimating a safety
distance from a bomb explosion

    r = 120 w^(1/3)    (2.2.27)

where r is the distance (m) and w is the mass of TNT (kg).
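As a quick worked case of Eq. (2.2.27), using a hypothetical 100 kg TNT-equivalent charge:

```python
# Kinney and Graham (1985) safety distance, Eq. (2.2.27): r = 120 * w**(1/3).
# The 100 kg TNT-equivalent mass is a hypothetical example value.
w = 100.0                  # TNT-equivalent mass (kg)
r = 120.0 * w ** (1.0 / 3.0)
print(round(r))            # about 557 m
```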
Baker et al. (1983) plotted the solutions to a set of differential equations, incorporating the effects of fluid-dynamic forces. The solutions are shown on Figure 2.63. The
results assume that the position of the fragment remains the same with respect to its
trajectory, that is, that the fragment does not tumble. Figure 2.63 plots scaled
maximum range, R̄, versus the scaled initial velocity, ū. These quantities are given by

    R̄ = (ρ0 CD AD r)/Mf    (2.2.28)

    ū = (ρ0 CD AD u²)/(Mf g)    (2.2.29)

FIGURE 2.63. Scaled fragment range versus scaled initial velocity (Baker et al., 1983).
where
R̄ is the scaled maximum range (dimensionless)
ū is the scaled initial velocity (dimensionless)
r is the maximum range (length)
ρ0 is the density of the ambient atmosphere (mass/volume)
CD is the drag coefficient, provided in Table 2.26 (unitless)
AD is the exposed area in the plane perpendicular to the trajectory (area)
g is the acceleration due to gravity (length/time²)
Mf is the mass of the fragment (mass)
Figure 2.63 requires a specification of the lift-to-drag ratio,

    (CL AL)/(CD AD)    (2.2.30)

where CL is the lift coefficient (unitless) and AL is the exposed area in the plane
parallel to the trajectory (area).

For "chunky" fragments, which are normally expected, the lift coefficient is zero
and the lift-to-drag ratio is thus zero. For thin plates, which have a large
lift-to-drag ratio, the "frisbee" effect can occur, and the scaled range can be more
than double the range calculated when lift forces are neglected. Refer to Baker et al.
(1983, Appendix E, page 688) for a discussion and additional values for the lift
coefficient, CL.
Table 2.26 contains drag coefficients for various shapes.

TABLE 2.26. Drag Coefficients for Fragments (Baker et al., 1983)

Shape                                        CD
Right circular cylinder (long rod), side on  1.20
Sphere                                       0.47
Rod, end on                                  0.82
Disk, face on                                1.17
Cube, face on                                1.05
Cube, edge on                                0.80
Long rectangular member, face on             2.05
Long rectangular member, edge on             1.55
Narrow strip, face on                        1.98
The procedure for implementing this method is as follows:

1. Given:
   Fragment mass, Mf
   Initial fragment velocity, u
   Exposed area perpendicular to direction of movement, AD
   Density of the ambient air, ρ0
   Lift-to-drag ratio
2. Determine drag coefficient from Table 2.26.
3. Determine scaled velocity from Eq. (2.2.29).
4. Determine scaled range from Figure 2.63.
5. Determine actual range from Eq. (2.2.28).
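Steps 3 and 5 of this procedure reduce to Eqs. (2.2.29) and (2.2.28); step 4 still requires reading the scaled range from Figure 2.63. The sketch below computes the scaled velocity and the no-lift theoretical maximum of Eq. (2.2.26), and converts a user-supplied scaled range back to an actual range. The fragment data are hypothetical, using the sphere drag coefficient from Table 2.26.

```python
g = 9.81  # acceleration due to gravity, m/s**2

def scaled_velocity(rho0, CD, AD, u, Mf):
    """Eq. (2.2.29): u_bar = rho0*CD*AD*u**2 / (Mf*g)."""
    return rho0 * CD * AD * u ** 2 / (Mf * g)

def actual_range(R_bar, Mf, rho0, CD, AD):
    """Eq. (2.2.28) rearranged for r, given a scaled range R_bar
    read from Figure 2.63."""
    return R_bar * Mf / (rho0 * CD * AD)

# Hypothetical chunky fragment: sphere-like, CD = 0.47, 1 m**2 exposed area,
# 50 kg, launched at 30 m/s into air at 1.2 kg/m**3.
rho0, CD, AD, u, Mf = 1.2, 0.47, 1.0, 30.0, 50.0
u_bar = scaled_velocity(rho0, CD, AD, u, Mf)
r_max = u ** 2 / g            # Eq. (2.2.26), 45-degree trajectory, no drag
print(round(u_bar, 2), round(r_max, 1))
```

With ū in hand, R̄ is read from the zero-lift curve of Figure 2.63 and converted with `actual_range`; the result should not exceed the Eq. (2.2.26) value for a chunky fragment.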
The dashed line on Figure 2.63 represents the maximum range computed using
Eq. (2.2.26).
Brown (1985, 1986) provides other methods for fragment prediction. Additional
references on projectiles include Sun et al. (1976), TNO (1979), and Tunkel (1983).
TNO considers that the most likely failure point will be at an attachment to the vessel,
so they consider nozzles, manholes, and valves as typical projectiles in their analysis.
Fragment distances and sizes are discussed further in Section 2.2.4 (BLEVE) and
Section 2.3 (injuries and damage from projectiles).
Applications
In general, these types of failures result in risk to in-plant personnel. However,
vessel fragments can travel significant distances. The Canvey Study (Health &
Safety Executive, 1978) considered projectile damage effects on other process vessels.
Logic Diagram
A logic diagram for the modeling of projectile effects due to the explosion of pressure
vessels is provided in Figure 2.64.
Theoretical Foundation
The technology of energy release from pressurized gas containers has been receiving
attention for over a century beginning with catastrophic failures of boilers and other
pressure vessels. Ultrahigh-pressure systems have also generated interest.
Much experimental work has been done, primarily small scale with containers
which burst into a large number of fragments, to relate the shock wave phenomena to
the well developed TNT relationships.
Input Requirements and Availability
The technology requires data on container strength. Maximum bursting pressure of the
container can be derived from specific information on the metallurgy and design. In
accidental releases, pressure within a vessel at the time of failure is not always known.
However, an estimate can usually be made (AIChE/CCPS, 1994). If failure is initiated
by a rise in initial pressure in combination with a malfunctioning or inadequately
designed pressure-relief device, the pressure at rupture will equal the vessel's failure
pressure, which is usually the maximum allowable working pressure (MAWP) times a
safety factor. For initial calculations, a usual safety factor of four is applied for vessels
Rupture of a
Pressurized
Vessel
Estimate Number
of Fragments
Estimate Initial
Fragment Velocity
Estimate
Maximum Range
of Fragment
Assess Impact of
Projectiles on
Surrounding
Areas
FIGURE 2.64. Logic diagram for the calculation of projectile effects for rupture of pressurized
gas-filled vessels.
made of carbon steel, although higher values are possible. In general, the higher the
failure pressure, the more severe the effect.
Output
The output from this analysis is overpressure and impulse versus distance for shock
wave effects and the velocity and expected maximum range of projectiles which are
generated by the burst vessel.
Simplified Approaches
The techniques presented are basically simplified approaches. It can be conservatively
assumed that 100% of the stored energy is converted to a shock wave.
2.2.3.3. EXAMPLE PROBLEMS

Example 2.22: Energy of Explosion for a Compressed Gas. A 1-m³ vessel at 25°C
ruptures at a vessel burst pressure of 500 bar abs. The vessel ruptures into ambient
air at a pressure of 1.01 bar and 25°C. Determine the energy of explosion and
equivalent mass of TNT using the following methods:

a. Brode's equation for a constant volume expansion, Eq. (2.2.11).
b. Brown's equation for an isothermal expansion, Eq. (2.2.12).
c. Crowl's equation for thermodynamic availability, Eq. (2.2.13).
Solution: (a) Substituting the known values into Eq. (2.2.11)

    E = (P - P0)V/(γ - 1)

    E = (500 bar - 1.01 bar)(10^5 Pa/bar)(1 m³)(N m⁻²/Pa) / (1.4 - 1)

    E = 1.25 × 10^8 N·m = 125 MJ

Since TNT has an explosion energy of 1120 cal/gm = 4.69 × 10^6 J/kg, the equivalent
mass of TNT is

    mTNT = (1.25 × 10^8 J)/(4.69 × 10^6 J/kg) = 26.6 kg TNT
(b) For this case, 1 m³ = 35.3 ft³ and T0 = 536°R. Substituting into Eq. (2.2.12)

    W = [1.39 × 10⁻⁶ (lb-mole·lb-TNT/ft³·BTU)] V (P1/P0) Rg T0 ln(P1/P2)

    W = [1.39 × 10⁻⁶ (lb-mole·lb-TNT/ft³·BTU)] (35.3 ft³)(500 bar/1.01 bar)
        × (1.987 BTU/lb-mole·°R)(536°R) ln(500 bar/1.01 bar)

    W = 160.7 lb of TNT = 72.9 kg of TNT

Since TNT has an energy of 4.69 × 10^6 J/kg, this represents 342 MJ of energy.
(c) Substituting into Eq. (2.2.13),

    E = (8.314 J/mole·K)(298 K) [ln(500 bar/1.01 bar) - (1 - 1.01 bar/500 bar)]

    E = 1.29 × 10^4 J/mole

The number of moles of gas in the vessel is determined from the ideal gas law. It is
20,246 gm-moles. The total energy of explosion is thus

    E = (1.29 × 10^4 J/mole)(20,246 moles) = 261 MJ

This is equivalent to 55.7 kg of TNT.
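The three estimates of this example can be reproduced in a few lines; Brode and Crowl are evaluated in SI units, and Brown in the English units of Eq. (2.2.12). This is a sketch of the example above, not a general-purpose tool.

```python
import math

# Data from Example 2.22
P1 = 500.0e5      # burst pressure (Pa)
P0 = 1.01e5       # ambient pressure = final expanded pressure (Pa)
V = 1.0           # vessel volume (m**3)
gamma = 1.4       # heat capacity ratio
T = 298.0         # gas temperature (K)
E_TNT = 4.69e6    # explosion energy of TNT (J/kg)

# (a) Brode, Eq. (2.2.11): constant volume expansion
E_brode = (P1 - P0) * V / (gamma - 1.0)

# (b) Brown, Eq. (2.2.12): isothermal expansion, English units
V_ft3 = V * 35.3          # m**3 -> ft**3
T_R = T * 1.8             # K -> degrees Rankine
W_lb = 1.39e-6 * V_ft3 * (P1 / P0) * 1.987 * T_R * math.log(P1 / P0)
W_brown_kg = W_lb / 2.2046

# (c) Crowl, Eq. (2.2.13): thermodynamic availability
e_mol = 8.314 * T * (math.log(P1 / P0) - (1.0 - P0 / P1))   # J/mole
n_mol = P1 * V / (8.314 * T)                                # ideal gas law
E_crowl = e_mol * n_mol

print(round(E_brode / 1e6), round(W_brown_kg, 1), round(E_crowl / 1e6))
```

The spread between the three answers (roughly 125, 342, and 260 MJ) illustrates the point made below: the choice of thermodynamic path matters considerably.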
Example 2.22: Energy of Explosion for a Compressed Gas

Input Data:
  Vessel volume:                            1 m**3
  Vessel pressure:                          500 bar abs
  Final pressure of expanded gas:           1.01 bar abs
  Ambient pressure:                         1.01 bar abs
  Heat capacity ratio of expanding gas:     1.4
  Temperature of gas:                       298 K

Calculated Results:
  Brode's equation assuming constant volume expansion:
    Energy of explosion:                    1.25E+08 Joules
    TNT equivalent:                         26.60 kg TNT
  Brown's equation assuming isothermal expansion:
    TNT equivalent:                         160.68 lb TNT
                                            72.89 kg TNT
    Energy of explosion:                    3.42E+08 Joules
  Crowl's equation from thermodynamic availability:
    Moles of gas in vessel:                 20246.36 gm-moles
    Energy of explosion:                    2.61E+08 Joules
    TNT equivalent:                         55.69 kg TNT

FIGURE 2.65. Spreadsheet output for Example 2.22: Energy of explosion for a compressed
gas.
The calculation for all three parts of this example is readily implemented via
spreadsheet. The output is shown in Figure 2.65.
The three methods do provide considerably different results.
Example 2.23: Prugh's Method for Overpressure from a Ruptured Sphere. A
6-ft³ sphere containing high pressure air at 77°F ruptures at 8000 psia. Calculate the
side-on overpressure at a distance of 60 ft from the rupture. Assume an ambient
pressure of 1 atm and a temperature of 77°F.

Additional data for air:
  Heat capacity ratio, γ: 1.4
  Molecular weight of air: 29
Solution: From Eq. (2.2.12)

    W = [1.39 × 10⁻⁶ (lb-mole·lb-TNT/ft³·BTU)] V (P1/P0) Rg T0 ln(P1/P2)

For this particular case,

    P1 = 8000 psia = 551 bar abs
    P2 = 14.7 psia = 1.01 bar
    P0 = 14.7 psia = 1.01 bar
    V  = 6 ft³ = 0.170 m³
    Rg = 1.987 BTU/lb-mole·°R
    T0 = 77°F = 537°R = 298 K
Substituting into the equation

    W = [1.39 × 10⁻⁶ (lb-mole·lb-TNT/ft³·BTU)] (6 ft³)(8000 psia/14.7 psia)
        × (1.987 BTU/lb-mole·°R)(537°R) ln(8000 psia/14.7 psia)

    W = 30.5 lb TNT = 13.8 kg TNT
The pressure at the surface of the vessel is calculated from Eq. (2.2.14)

    Pb = Ps [1 - 3.5(γ - 1)(Ps - 1) / ((γT/M)(1 + 5.9Ps))^(1/2)]^(-2γ/(γ-1))

where

    Ps is the pressure at the surface of the vessel (bar abs)
    Pb is the burst pressure of the vessel, 551 bar abs
    γ = 1.4
    T = 298 K
    M = 29 gm/gm-mole

By a trial-and-error solution

    Ps = 10.21 bar abs = 148.1 psia
Since the vessel is at grade, the blast wave will be hemispherical. The scaled
pressure is

    P̄s = Ps/Pa = (148.1 psia)/(14.7 psia) = 10.07
From Figure 2.48 and Eq. (2.2.7)

    Z = 1.14 = R/W^(1/3)

Since W = 13.8 kg TNT it follows that R = 2.74 m = 8.99 ft.

The radius of the spherical container is

    r = 0.782 V^(1/3) = 0.782 (6 ft³)^(1/3) = 1.4 ft
The "virtual distance" to be added to distances for blast effects evaluations would
be 8.99 - 1.4 = 7.59 ft (2.31 m). Therefore, the blast pressure at a distance of 60 ft
(18.28 m) from the center of the sphere would be evaluated using a scaled distance of

    Z = (18.28 m + 2.31 m)/(13.8 kg TNT)^(1/3)

or

    Z = 8.58

From Figure 2.48 this results in a final overpressure of 18.38 kPa or 2.67 psi. Without
the virtual distance, the final overpressure is 3.18 psi.
The entire procedure is readily implemented via a spreadsheet, as shown in Figure
2.66. This implementation requires two trial-and-error procedures. The first is used to
determine the pressure at the surface of the vessel and the second procedure is used to
determine the final overpressure. The user must manually adjust the guessed value until
the recomputed value is identical.
Example 2.23: Prugh's Method for Overpressure from a Ruptured Sphere

Input Data:
  Vessel burst pressure:            551.43 bar abs
  Distance from vessel center:      18.28 m
  Vessel volume:                    0.17 m**3
  Final pressure:                   1.01325 bar abs
  Heat capacity ratio:              1.4
  Molecular weight of gas:          29
  Gas temperature:                  298 K

Calculated Results:
  English units equivalents of above data:
    Vessel burst pressure:          8000.02 psia
    Vessel volume:                  6.00 ft**3
    Final pressure:                 14.7 psia
    Temperature:                    536.4 R

  Energy of Explosion from Brown's Equation:
                                    30.49 lb TNT
                                    13.83 kg TNT

  Trial and error solution to determine surface pressure:
    Guessed Value:                  10.21 bar abs    <-- Adjust until equal to value below
    Calculated Value:               10.20937 bar abs
    English Equivalent:             148.12 psia

  Trial and error solution to determine virtual distance:
    TNT Mass:                       13.83 kg
    Distance from blast:            2.738 m          <-- Adjust to match surface pressure
    Scaled distance, z:             1.1407 m/kg**(1/3)   (only valid for z > 0.0674 and z < 40)
    a + b*log(z):                   -0.13717
    Overpressure:                   1021.44 kPa = 148.1886 psia   <-- Must match surface
                                                                      pressure above
    Radius of vessel:               0.43 m
    Virtual distance to add:        2.30 m
    Effective distance from blast:  20.58 m

  Final overpressure calculation using effective distance:
    TNT Mass:                       13.83 kg
    Distance from blast:            20.58 m
    Scaled distance, z:             8.5759 m/kg**(1/3)   (only valid for z > 0.0674 and z < 40)
    a + b*log(z):                   1.045885
    Overpressure:                   18.38 kPa = 2.67 psia

FIGURE 2.66. Spreadsheet output for Example 2.23: Prugh's method for overpressure from a
ruptured sphere.
Example 2.24: Baker's Method for Overpressure from a Ruptured Vessel. Rework
Example 2.23 using Baker's method.
Solution: The steps listed in the text are followed.
STEP 1: Collect data. The data are already listed in Example 2.23.
STEP 2: Calculate the energy of explosion. The Brode equation, Eq. (2.2.11), is used.

    E = (551 bar - 1.01 bar)(10^5 Pa/bar)(0.170 m³)/(1.4 - 1) = 23.4 MJ

This result must be multiplied by 2 to use the overpressure curves for an open blast.
The effective energy is thus 46.8 MJ.
STEP 3: Determine the scaled distance. From Eq. (2.2.15)

    R̄ = r (P0/E)^(1/3)
       = (18.28 m) [(1.01 bar)(10^5 Pa/bar)(N m⁻²/Pa) / (46.8 × 10^6 J)]^(1/3) = 2.37

STEP 4: Check if R̄ > 2. This is satisfied in this case.
STEP 5: Determine the scaled overpressure from Figure 2.58. The result is 0.0986.

STEP 6: Adjust the overpressure for geometry effects. Table 2.24 contains the
multipliers for spherical vessels. The multiplier is 1.1. Thus, the effective scaled
overpressure is (1.1)(0.0986) = 0.1085.

STEP 7: Determine the final overpressure. From the definition of the scaled pressure,

    ps = (0.1085)(1.01 bar) = 0.110 bar = 1.6 psi

STEP 8: Check the final pressure. In this case the final pressure is less than the burst
pressure of the vessel.

This result is somewhat less than the value of 2.67 psi obtained using Prugh's method.
The solution is readily implemented via spreadsheet, as shown in Figure 2.67.
Example 2.25: Velocity of Fragments from a Vessel Rupture. A 100-kg cylindrical
vessel is 0.2 m in diameter and 2 m long. Determine the initial fragment velocities if the
vessel ruptures into two fragments. The fragments represent 3/4 and 1/4 of the total
vessel mass, respectively. The vessel is filled with helium at a temperature of 300 K, and
the burst pressure of the vessel is 20.1 MPa.
For helium,
  Heat capacity ratio, γ: 1.67
  Molecular weight: 4

Solution: The procedure detailed in the text is applied.

1. Given: Number of fragments, n = 2
   Total mass of vessel, Mc = 100 kg
   Mass fraction for each fragment:
   first fragment = 0.75, second fragment = 0.25
   Internal burst pressure of vessel, P = 20.1 MPa
   Volume of vessel, V

       V = (π/4)D²L = (π/4)(0.2 m)²(2.0 m) = 0.0628 m³
Example 2.24: Baker's Method for Overpressure from a Ruptured Vessel

Input Data:
  Vessel burst pressure:            551.43 bar abs
  Distance from vessel center:      18.28 m
  Vessel volume:                    0.17 m**3
  Final pressure:                   1.01325 bar abs
  Heat capacity ratio:              1.4
  Molecular weight of gas:          29
  Gas temperature:                  298 K
  Speed of sound in ambient gas:    340 m/s

Calculated Results:
  Energy of explosion using Brode's equation for constant volume expansion:
    Energy of explosion:            23.39 MJ
    TNT equivalent:                 4.99 kg TNT
    Effective energy of explosion (x 2):  46.79 MJ

  Scaled distance:                  2.37
  Interpolated scaled overpressure: 0.098591
  Interpolated scaled impulse:      0.021681

  Vessel shape:                         Spherical    Cylindrical
  Overpressure multiplier for shape:    1.1          1.6
  Corrected scaled overpressure:        0.1085       0.1577
  Actual overpressure:                  0.1099 bar   0.1598 bar
                                        1.59 psi     2.32 psi
  Impulse multiplier for shape:         1            1
  Corrected scaled impulse:             0.0217       0.0217
  Actual impulse:                       39.64 kPa-ms 39.64 kPa-ms

FIGURE 2.67. Spreadsheet output for Example 2.24: Baker's method for overpressure from a
ruptured vessel.
Ambient pressure, P0 = 0.101 MPa
Absolute temperature of gas in vessel, T = 300 K
Heat capacity ratio of gas in vessel, y = 1.67
Molecular weight of gas in vessel, M = 4
2. Determine speed of sound of gas in vessel using Eq. (2.2.19).

       a0 = (TγRg/M)^(1/2)
          = [(300 K)(1.67)(8.314 J/gm-mole·K)[(kg m²/s²)/J] / (4 gm/gm-mole)(1 kg/1000 gm)]^(1/2)
          = 1020 m/s

3. Determine scaled pressure using Eq. (2.2.18).

       P̄ = (P - P0)V/(Mc a0²)
          = (20.1 - 0.1)(10^6 Pa)(0.0628 m³)[(1 N/m²)/Pa][(kg m/s²)/N] / [(100 kg)(1020 m/s)²]

       P̄ = 0.012
4. Determine the dimensionless velocity from Figure 2.61, or Table 2.25. For n = 2,
   the dimensionless velocity for spheres is 0.0793.
5. Determine the unequal fragment correction from Figure 2.62. For mass fraction =
   0.75, K = 1.29 and for mass fraction = 0.25, K = 0.63.
6. Determine actual velocity for each fragment using Eq. (2.2.20).

For the large fragment,

    vi = 0.0793 K a0 = (0.0793)(1.29)(1020 m/s) = 104 m/s

For the small fragment,

    vi = (0.0793)(0.635)(1020 m/s) = 51.4 m/s

The large fragment has the greater velocity, which is due to the unequal fragment
correction.
This procedure is readily implemented via a spreadsheet, as shown in Figure 2.68.
The spreadsheet must be run for each fragment; the output shown is for the small
fragment.
Example 2.26: Range of a Fragment in Air. A 100-kg end of a bullet tank blows off
and is rocketed away at an initial velocity of 25 m/s. If the end is 2 m in diameter,
estimate the range for this fragment. Assume ambient air at 1 atm and 25°C.
Solution: The ambient air density is first determined using the ideal gas law.

    ρ0 = PM/(Rg T) = (1 atm)(29 kg/kg-mole) / [0.082057 (m³ atm)/(kg-mole·K)](298 K)
       = 1.19 kg/m³
Example 2.25: Velocity of Fragments from a Vessel Rupture

Input Data:
  Total mass of vessel:                     100 kg
  Total volume of vessel:                   0.0628 m**3
  Number of fragments:                      2
  Mass fraction of total for fragment:      0.25
  Pressure of gas within vessel:            20.101 MPa
  Ambient gas pressure:                     0.101 MPa
  Temperature of gas within vessel:         300 K
  Heat capacity ratio of gas within vessel: 1.67
  Molecular weight of gas within vessel:    4

Calculated Results:
  Speed of sound of gas within vessel:      1020 m/s
  Adjustment factor for unequal mass:       0.634945
  Scaled pressure:                          0.012062

  Dimensionless velocity for various shapes and numbers:
    n        Spheres     Cylinders
    2        0.079277    0.038977
    10       0.088671    0.125189
    100      0.092694    0.133769

                                            Sphere      Cylinder
  Interpolated dimensionless velocity for
  actual number of fragments:               0.079277    0.038977
  Actual velocity of fragment:              51.37       25.25 m/s

FIGURE 2.68. Spreadsheet output for Example 2.25: Velocity of fragments from a vessel
rupture.
The surface area of the fragment is

    AD = πD²/4 = (3.14)(2 m)²/4 = 3.14 m²
We will assume that the fragment flies with its full face area perpendicular to the
direction of travel. Other orientations will result in different ranges. For the case where
the fragment face is parallel to the direction of travel it is possible that the fragment
might "frisbee" as a result of lift generated during its movement.
The drag coefficient, CD, is determined from Table 2.26. For a round fragment with
its face perpendicular to the direction of travel, CD = 0.47.

The scaled velocity is determined from Eq. (2.2.29),

    ū = (ρ0 CD AD u²)/(Mf g)
      = (1.19 kg/m³)(0.47)(3.14 m²)(25 m/s)² / [(100 kg)(9.8 m/s²)] = 1.12
From Figure 2.63, the scaled fragment range is R̄ = 0.81.

The actual range is determined from Eq. (2.2.28)

    r = Mf R̄/(ρ0 CD AD) = (100 kg)(0.81) / [(1.19 kg/m³)(0.47)(3.14 m²)] = 46.1 m

The maximum range is determined from Eq. (2.2.26).

    rmax = u²/g = (25 m/s)²/(9.8 m/s²) = 63.8 m
The calculation is readily implemented via a spreadsheet, as shown in Figure 2.69.
The data of Figure 2.63 are contained within the spreadsheet, but not shown. Also
shown on the output is the maximum distance achieved assuming the presence of lift.
This is the maximum range for any of the specified values of the lift-to-drag ratio.
Note that with lift it is possible to exceed the theoretical no-lift maximum and, in
some cases, the increase can be to more than twice that range.
2.2.3.4. DISCUSSION
Strengths and Weaknesses
The main strength of these methods is that they are based mostly on experimental data.
The weakness is that many of the approaches are empirical in nature, using correlations
based on dimensional or dimensionless groups. Extrapolation outside of the range of
the correlations provided may lead to erroneous results. For the purposes of this text,
the range of validity may be assumed to be the range provided by the figures and tables.
The energy of explosion methods assume that the explosion occurs from a point
source, which is rarely the case in actual process equipment explosions.
Identification and Treatment of Possible Errors
It is very difficult to predict the number of projectiles and where they will be propelled.
These methods are more suited for accident investigations, where the number, size and
location of the fragments is known.
Example 2.26: Range of a Fragment in Air

Input Data:
  Mass of fragment:                 100 kg
  Initial fragment velocity:        25 m/s
  Drag coefficient of fragment:     0.47
  Lift to drag ratio:               0
  Exposed area of fragment:         3.14 m**2
  Temperature of ambient air:       298 K
  Pressure of ambient air:          1 atm

Calculated Results:
  Density of ambient air:           1.19 kg/m**3
  Scaled velocity of fragment:      1.12

  Interpolated values from figure for various lift to drag ratios:
    Lift to drag ratio    Scaled Range    Range (m)
    0                     0.806220        46.06
    0.5                   0.816541        46.65
    1                     0.946952        54.10
    3                     1.117790        63.87
    5                     1.309836        74.84
    10                    0.387583        22.14
    30                    0.082977        4.74
    50                    0.050037        2.86
    100                   0.023483        1.34

  Interpolated range:               46.06 m
  Theoretical max. range (no lift): 63.78 m
  Max. possible range (with lift):  74.84 m

FIGURE 2.69. Spreadsheet output for Example 2.26: Range of a fragment in air.
Utility
In general, vessels of pressurized gas do not have sufficient stored energy to represent a
shock wave threat beyond the plant boundaries. These techniques find greater
application to in-plant risks.
These types of incidents can result in domino effects particularly from the effects of
the projectiles produced. Very few CPQRA studies have ever incorporated projectile
effects on a quantitative basis.
Resources
A process engineer should be able to perform each type of calculation in a few hours.
Spreadsheet applications are useful.
Available Computer Codes
DAMAGE (TNO, Apeldoorn, The Netherlands)
SAFESITE (W. E. Baker Engineering, Inc., San Antonio, TX)
Several integrated analysis packages contain explosion fragment capability. These include:
QRAWorks (PrimaTech, Columbus, OH)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
2.2.4. BLEVE and Fireball
2.2.4.1. BACKGROUND
Purpose
This section addresses a special case of a catastrophic rupture of a pressure vessel. A
boiling liquid expanding vapor explosion (BLEVE) occurs when there is a sudden loss
of containment of a pressure vessel containing a superheated liquid or liquefied gas.
This section describes the methods used to calculate the effects of the vessel rupture and
the fireball that results if the released liquid is flammable and is ignited.
Philosophy
A BLEVE is a sudden release of a large mass of pressurized superheated liquid to the
atmosphere. The primary cause is usually an external flame impinging on the shell of a
vessel above the liquid level, weakening the container and leading to sudden shell rupture. A pressure relief valve does not protect against this mode of failure, since the shell
failure is likely to occur at a pressure below the set pressure of the relief system. It
should be noted, however, that a BLEVE can occur due to any mechanism that results
in the sudden failure of containment, including impact by an object, corrosion, manufacturing defects, internal overheating, etc. The sudden containment failure allows the
superheated liquid to flash, typically increasing its volume over 200 times. This is sufficient to generate a pressure wave and fragments. If the released liquid is flammable, a
fireball may result.
A special type of BLEVE involves flammable materials, such as LPG. A number of
such incidents have occurred including San Carlos, Spain (July 11, 1978), Crescent
City, Illinois (June 21, 1970), and Mexico City, Mexico (November 19, 1984).
Films of actual BLEVE incidents involving flammable materials (NFPA, 1994)
clearly show several stages of BLEVE fireball development. At the beginning of the
incident, a fireball is formed quickly by the rapid ejection of flammable material as the
vessel depressurizes. This is followed by a much slower rise of the fireball
due to buoyancy of the heated gases.
BLEVE and projectile models are primarily empirical. A number of papers review
BLEVE modeling, including AIChE (1994), Moorehouse and Pritchard (1982),
Mudan (1984), Pitblado (1986), and Prugh (1988).
Application
BLEVE models are often required for risk analysis at chemical plants (e.g., Rijnmond
Public Authority, 1982) and for major accident investigation (e.g., Mexico City,
Pietersen and Huerta, 1985).
2.2.4.2. DESCRIPTION
Description of Technique
The calculation of BLEVE incidents is a stepwise procedure. The first step should be
pressure and fragment determination, as this applies to all BLEVE incidents (whether
for flammable materials or not). For flammable materials the prediction of thermal
intensity from fireballs should also be considered. This requires a determination of the
fireball diameter and duration.
AIChE (1994) provides the most up-to-date reference on modeling approaches
for BLEVEs.
Blast Effects
Blast or pressure effects from BLEVEs are usually small, although they might be
important in the near field (such as the BLEVE of a hot water heater in a room). These
effects are of interest primarily for the prediction of domino effects on adjacent vessels.
However, there are exceptions. Some BLEVEs of large quantities of nonflammable liquids (such as CO2) can result in energy releases of tons of TNT equivalent.
The blast wave produced by a sudden release of a fluid depends on many factors
(AIChE, 1994). This includes the type of fluid released, energy it can produce on
expansion, rate of energy release, shape of the vessel, type of rupture, and the presence
of reflecting surfaces in the surroundings. Materials below their normal boiling point
cannot BLEVE.
Baker et al. (1983) discuss pressure wave prediction in detail and provide a sample
problem in Chapter 2 of their book. TNO (1979) also provides a physical explosion
model, which is used by Pietersen and Huerta (1985) in the analysis of the Mexico City
incident. Prugh (1988) presents a method for calculating a TNT equivalent that also
incorporates the flash vaporization process of the liquid phase in addition to the vapor
phase originally present.
AIChE (1994) states that the blast effect of a BLEVE results not only from the
rapid expansion (flashing) of the liquid, but also from the expansion of the compressed
vapor in the vessel's head space. They claim that, in many incidents, head-space vapor
expansion produces most of the blast effects.
AIChE (1994) describes a procedure developed by Baker et al. (1975) and Tang
et al. (1996) for determining both the peak overpressure and impulse due to vessels
bursting from pressurized gas. The procedure is too detailed to be reproduced
here. The method results in an estimate of the overpressure and impulse due to blast
waves from the rupture of spherical or cylindrical vessels located at ground level. The
method depends on the phase of the vessel's contents, its boiling point at ambient pressure, its critical temperature, and its actual temperature. An approach is also presented
to determine blast pressures in the near-field, based on the results of numerical simulations. These methods are only for the prediction of pressure effects.
Fragments
The prediction of fragment effects is important, as many deaths and domino damage
effects are attributable to fragments. The method of Baker et al. (1983) can be used,
but specific work on BLEVE fragmentation hazards has been done by the Association
of American Railroads (AAR) (1972, 1973) and by Holden and Reeves (1985). The
AAR reports that of 113 major failures of horizontal cylindrical tanks in fire situations,
about 80% resulted in projected fragments.
Fragments are usually not evenly distributed. The vessel's axial direction receives
more fragments than the side directions. Baker et al. (1983) discuss fragment prediction in detail. Figure 2.70 provides data for the number of fragments and the fragment
range, based on work by Holden and Reeves (1985). Figure 2.70 shows that roughly
80% of fragments fall within a 300-m (1000-ft) range. Interestingly, BLEVEs from
smaller LPG vessels have a history of greater fragment range; one end section at the
FIGURE 2.70. Correlations for the fragment range and number of fragments: percent of
fragments with range less than R versus range R (m), for LPG events < 90 m3 and > 90 m3,
and number of fragments [= -3.77 + 0.0096 x (vessel capacity, m3)] versus vessel capacity
(m3). (From Holden and Reeves, 1985.)
Mexico City LPG BLEVE incident traveled 1000 m (3300 ft). The total number of
fragments is approximately a function of vessel size. Holden and Reeves (1985) suggest a correlation based on seven incidents, as shown in Figure 2.70.
Number of fragments = -3.77 + 0.0096[Vessel capacity (m3)]
(2.2.31)
Range of validity: 700-2500 m3
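Equation (2.2.31) is simple to automate. The following is a minimal sketch; the function name and the rounding to a whole fragment count are my own choices, not from the text:

```python
def bleve_fragment_count(vessel_capacity_m3):
    """Estimate the number of BLEVE fragments via Eq. (2.2.31).

    Correlation of Holden and Reeves (1985); valid only for vessel
    capacities of 700-2500 m**3.
    """
    if not 700.0 <= vessel_capacity_m3 <= 2500.0:
        raise ValueError("Eq. (2.2.31) is valid only for 700-2500 m**3")
    # Linear correlation, rounded to a whole number of fragments
    return round(-3.77 + 0.0096 * vessel_capacity_m3)
```

For the 1854 m3 sphere of Example 2.28 this returns 14, matching the worked example.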
Figure 2.70 and the AAR data (Association of American Railroads, 1972, 1973)
indicate that a small number of fragments is likely in any BLEVE incident regardless of
size. BLEVEs typically produce fewer fragments than high-pressure detonations; between
2 and 10 are typical. BLEVEs usually do not develop the high pressures that lead to
greater fragmentation. Instead, metal softening from the heat exposure and thinning of
the vessel wall yields fewer fragments.
Normally, propane (LPG) storage tanks are designed for a 250-psig working pressure. A
burst pressure of four times the working pressure, or 1000 psig, is expected for ASME
coded vessels. BLEVEs usually occur because of flame impingement on the unwetted
portion (vapor space) of the tank. This area rapidly reaches 1200°F and becomes
sufficiently weakened that the tank fails at approximately 300-400 psig
(Townsend et al., 1974).
Empirical Equations for BLEVE Fireball Diameter, Duration, and Fireball Height
Pitblado (1986) lists thirteen published correlations and compares BLEVE fireball
diameters as a function of mass released. The TNO formula (Pietersen and Huerta,
1985) gives a good overall fit to observed data, but there is substantial scatter in the
source data. All models use a power law correlation to relate BLEVE diameter and
duration to mass. Useful formulas for BLEVE physical parameters are (AIChE, 1994):
Maximum fireball diameter (m): Dmax = 5.8M^(1/3)   (2.2.32)

Fireball combustion duration (s):

tBLEVE = 0.45M^(1/3)  for M < 30,000 kg   (2.2.33)
tBLEVE = 2.6M^(1/6)   for M ≥ 30,000 kg   (2.2.34)

Center height of fireball (m): HBLEVE = 0.75Dmax   (2.2.35)

Initial ground level hemisphere diameter (m): Dinitial = 1.3Dmax   (2.2.36)
where M is the initial mass of flammable liquid (kg). The particular formulas for fireball diameter and duration do not include the volume of oxygen for combustion. This,
of course, varies and should affect the size of the fireball.
The initial diameter is used to describe the initial ground level fireball before buoyancy forces lift it.
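Equations (2.2.32)-(2.2.36) can be collected into one routine; a minimal sketch (function and variable names are my own):

```python
def bleve_fireball_geometry(mass_kg):
    """Fireball geometry from Eqs. (2.2.32)-(2.2.36).

    mass_kg is the initial mass of flammable liquid M (kg).
    Returns (Dmax [m], tBLEVE [s], HBLEVE [m], Dinitial [m]).
    """
    d_max = 5.8 * mass_kg ** (1.0 / 3.0)           # Eq. (2.2.32)
    if mass_kg < 30000.0:
        t_bleve = 0.45 * mass_kg ** (1.0 / 3.0)    # Eq. (2.2.33)
    else:
        t_bleve = 2.6 * mass_kg ** (1.0 / 6.0)     # Eq. (2.2.34)
    h_bleve = 0.75 * d_max                         # Eq. (2.2.35)
    d_initial = 1.3 * d_max                        # Eq. (2.2.36)
    return d_max, t_bleve, h_bleve, d_initial
```

For M = 100,000 kg this reproduces the Example 2.27 values (269 m, 17.7 s, 202 m, 350 m).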
Radiation
Four parameters used to determine a fireball's thermal radiation hazard are the mass of
fuel involved and the fireball's diameter, duration, and thermal emissive power
(AIChE, 1994). The radiation hazards are then calculated using empirical relations.
The problem with a fireball typical of a BLEVE is that the radiation will depend on
the actual distribution of flame temperatures, the composition of the gases in the vicinity
of the fireball (including reactants and products), the geometry of the fireball, absorption
of the radiation by the fireball itself, and the geometric relationship of the receiver with
respect to the fireball. All of these parameters are difficult to quantify for a BLEVE.
Johnson et al. (1990) completed experiments with butane and propane fireballs of
1000 to 2000 kg released from pressurized tanks. They found average surface emissive
radiation of 320 to 375 kW/m², fireball durations of 4.5 to 9.2 s, and fireball
diameters of 56 to 88 m. AIChE (1994) suggests using an emissive
power of 350 kW/m2 for large-scale releases of hydrocarbon fuels, with the power
increasing as the scale of the release decreases.
The emissive radiative flux from any source is represented by the
Stefan-Boltzmann law:

Emax = σTf⁴   (2.2.37)

where Emax is the maximum radiative flux (energy/area·time); σ is the
Stefan-Boltzmann constant (5.67 × 10⁻¹¹ kW/m² K⁴ = 1.71 × 10⁻⁹ BTU/hr ft² °R⁴);
and Tf is the absolute temperature of the radiative source (K or °R).
Equation (2.2.37) applies only to a black body and provides the maximum radiative
energy flux. For real sources, the emissive power is given by

E = εEmax   (2.2.38)

where E is the emissive energy flux (energy/area·time) and ε is the emissivity
(unitless). The emissivity for a black-body radiator is unity, whereas the emissivity
for a real radiation source is typically less than unity.
For fireballs, Beer's law is used to determine the emissivity (AIChE, 1994).
This is represented by the following equation:

ε = 1 − e^(−kD)   (2.2.39)

where k is an extinction coefficient (1/length) and D is the fireball diameter
(length). Hardee et al. (1978) measured an extinction coefficient of 0.18 m⁻¹ from
LNG fires, but AIChE (1994) reports that this somewhat overpredicts the radiation
from fireballs.
Thermal radiation is usually calculated using surface emitted flux, E, rather than
the Stefan-Boltzmann equation, as the latter requires the flame temperature. Typical
energy fluxes for BLEVEs (200-350 kW/m2) are much higher than in pool fires as the
flame is not smoky. Roberts (1981) and Hymes (1983) provide a means to estimate
surface heat flux based on the radiative fraction of the total heat of combustion.
E = RMHc / (πDmax² tBLEVE)   (2.2.40)
where
E is the radiative emissive flux (energy/area time)
R is the radiative fraction of the heat of combustion (unitless)
M is the initial mass of fuel in the fireball (mass)
Hc is the net heat of combustion per unit mass (energy/kg)
Dmax is the maximum diameter of the fireball (length)
tBLEVE is the duration of the fireball (time)
Hymes (1983) suggests the following values for R:
0.3 for fireballs from vessels bursting below the relief set pressure
0.4 for fireballs from vessels bursting at or above the relief set pressure.
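Combining Eq. (2.2.40) with the geometry correlations gives a one-function estimate of the surface emitted flux; a sketch (names are mine, and the duration branch follows Eqs. (2.2.33)-(2.2.34)):

```python
import math

def bleve_surface_flux(mass_kg, heat_combustion_j_kg, radiative_fraction):
    """Surface emitted flux E (W/m**2) from Eq. (2.2.40), with the fireball
    diameter and duration taken from Eqs. (2.2.32)-(2.2.34)."""
    d_max = 5.8 * mass_kg ** (1.0 / 3.0)                    # Eq. (2.2.32)
    if mass_kg < 30000.0:
        t_bleve = 0.45 * mass_kg ** (1.0 / 3.0)             # Eq. (2.2.33)
    else:
        t_bleve = 2.6 * mass_kg ** (1.0 / 6.0)              # Eq. (2.2.34)
    # Eq. (2.2.40): E = R M Hc / (pi Dmax**2 tBLEVE)
    return (radiative_fraction * mass_kg * heat_combustion_j_kg
            / (math.pi * d_max ** 2 * t_bleve))
```

With the Example 2.27 inputs (100,000 kg of propane, Hc = 46.35 MJ/kg, R = 0.3) this gives about 345 kW/m², matching the worked example.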
AIChE (1994) combines Eq. (2.2.40) with the empirical equation by Roberts
(1981) for the duration of the combustion phase of a fireball. This results in an
equation for the radiation flux received by a receptor, Er, at a distance Xc from the
fireball center:

Er = 2.2τaRHcM^(2/3) / (4πXc²)   (2.2.41)
where
Er is the radiative flux received by the receptor (W/m²)
τa is the atmospheric transmissivity (unitless)
R is the radiative fraction of the heat of combustion (unitless)
Hc is the net heat of combustion per unit mass (J/kg)
M is the initial mass of fuel in the fireball (kg)
Xc is the distance from the fireball center to the receptor (m)
The atmospheric transmissivity, τa, is an important factor. Thermal radiation is
absorbed and scattered by the atmosphere, reducing the radiation received at target
locations. Some thermal radiation models ignore this effect, effectively assuming a
value of τa = 1 for the transmissivity. For longer path lengths (over 20 m), where
absorption could be 20-40%, this will result in a substantial overestimate of the
received radiation. Useful discussions are given in Simpson (1984) and Pitblado (1986).
Pietersen and Huerta (1985) recommend a correlation formula that accounts for
humidity:

τa = 2.02(PwXs)^(−0.09)   (2.2.42)

where τa is the atmospheric transmissivity (fraction of the energy transmitted: 0 to 1);
Pw is the water partial pressure (Pascals, N/m²); Xs is the path length distance from
the flame surface to the target (m).
An expression for the water partial pressure as a function of the relative humidity
and temperature of the air is given by Mudan and Croce (1988):

Pw = 101,325(RH/100) exp(14.4114 − 5328/Ta)   (2.2.43)

where Pw is the water partial pressure (Pascals, N/m²); RH is the relative humidity
(percent); Ta is the ambient temperature (K).
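Equations (2.2.42) and (2.2.43) can be sketched as follows; the 101,325 Pa prefactor in the water vapor routine is my reading of the saturation-pressure form of Eq. (2.2.43), and the function names are mine:

```python
import math

def water_partial_pressure(relative_humidity_pct, temp_k):
    """Water partial pressure Pw (Pa) from Eq. (2.2.43),
    after Mudan and Croce (1988)."""
    return (101325.0 * (relative_humidity_pct / 100.0)
            * math.exp(14.4114 - 5328.0 / temp_k))

def transmissivity(p_w_pa, path_length_m):
    """Atmospheric transmissivity from Eq. (2.2.42),
    after Pietersen and Huerta (1985)."""
    return 2.02 * (p_w_pa * path_length_m) ** -0.09
```

With the Example 2.27 values (Pw = 2810 Pa, Xs = 150 m) the transmissivity comes out at about 0.630.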
A more empirically based equation for the radiation flux is presented by Roberts
(1981), who used the data of Hasegawa and Sato (1977) to correlate the measured
radiation flux received by a receptor at a distance Xc from the center of the fireball:

Er = 8.28 × 10⁵ M^0.771 / Xc²   (2.2.44)

with variables and units identical to Eq. (2.2.41).
The radiation received by a receptor (for the duration of the BLEVE incident) is
given by
Er = τaEF21   (2.2.45)

where
Er is the emissive radiative flux received by a black body receptor (energy/area·time)
τa is the transmissivity (dimensionless)
E is the surface emitted radiative flux (energy/area·time)
F21 is a view factor (dimensionless)
As the effects of a BLEVE mainly relate to human injury, a geometric view factor
for a sphere to a receptor is required. In the general situation, a fireball center has a
height, H, above the ground. The distance L is measured from a point at the ground
directly beneath the center of the fireball to the receptor at ground level. For a horizontal surface, the view factor is given by
F21 = H(D/2)² / (L² + H²)^(3/2)   (2.2.46)

where D is the diameter of the fireball. When the distance, L, is greater than the
radius of the fireball, the view factor for a vertical surface is calculated from

F21 = L(D/2)² / (L² + H²)^(3/2)   (2.2.47)
More complex view factors are presented in Appendix A of AIChE (1994). For a
conservative approach, a view factor of 1 is assumed.
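The two view factors, Eqs. (2.2.46) and (2.2.47), in code (a sketch; the names are mine):

```python
def view_factor_horizontal(d_fireball_m, h_center_m, l_ground_m):
    """View factor from a spherical fireball (diameter D, center height H)
    to a horizontal ground-level surface at distance L, Eq. (2.2.46)."""
    return (h_center_m * (d_fireball_m / 2.0) ** 2
            / (l_ground_m ** 2 + h_center_m ** 2) ** 1.5)

def view_factor_vertical(d_fireball_m, h_center_m, l_ground_m):
    """View factor to a vertical surface, Eq. (2.2.47); valid when L
    exceeds the fireball radius."""
    return (l_ground_m * (d_fireball_m / 2.0) ** 2
            / (l_ground_m ** 2 + h_center_m ** 2) ** 1.5)
```

For the Example 2.27 geometry (D = 269 m, H = 202 m, L = 200 m) the vertical view factor is about 0.158.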
Once the radiation received is calculated, the effects can be determined from Section 2.3.2.
Logic Diagram
A logic diagram showing the calculation procedure is given in Figure 2.71. This shows
the calculation sequence for determination of shock wave, thermal, and fragmentation
effects of a BLEVE of a flammable material.
FIGURE 2.71. Logic diagram for calculation of BLEVE thermal intensity at a specified
receptor. From the mass of flammable material, estimate the BLEVE size and duration
[Eqs. (2.2.32)-(2.2.36)]; from the radiant fraction emitted, estimate the surface
emitted flux [Eq. (2.2.40)]; from the distance to the target, estimate the geometric
view factor [Eqs. (2.2.46)-(2.2.47)] and the atmospheric transmissivity [Eq. (2.2.42)];
estimate the received thermal flux [Eq. (2.2.45)]; and determine the thermal impact
(Section 2.3.2).
Theoretical Foundation
BLEVE models are a blend of empirical correlations (for size, duration, and radiant
fraction) and more fundamental relationships (for view factor and transmissivity).
Baker et al. (1983) have undertaken a dimensional analysis for diameter and duration
which approximates a cube root correlation. Fragmentation correlations are empirical.
Input Requirements and Availability
BLEVE models require the material properties (heat of combustion and vapor pressure), the mass of material, and atmospheric humidity. Fragment models are fairly simplistic and require vessel volume and vapor pressure. This information is readily
available.
Output
The output of a BLEVE model is usually the radiant flux level and duration.
Overpressure effects, if important, can also be obtained using a detailed procedure
described elsewhere (AIChE, 1994). Fragment numbers and ranges can be estimated,
but a probabilistic approach is necessary to determine consequences.
Simplified Approaches
Several authors use simple correlations based on more fundamental models. Similarly,
the Health & Safety Executive (1981) uses a power law correlation to summarize its
more fundamental model. Considine and Grint (1984) have updated this to

r50 = 22t^0.379 M^0.307   (2.2.48)

where r50 is the hazard range to 50% lethality (m), t is the duration of the BLEVE (s),
and M is the mass of LPG in the BLEVE (long tons = 2240 lb).
The fragment correlations described for LPG containers are simplified approaches.
2.2.4.3. EXAMPLE PROBLEMS
Example 2.27: BLEVE Thermal Flux. Calculate the size and duration of, and the thermal
flux at 200 m distance from, a BLEVE of an isolated 100,000 kg (200 m³) tank of
propane at 20°C, 8.2 bar abs (68°F, 120 psia). Atmospheric humidity corresponds to a
water partial pressure of 2810 N/m² (0.4 psi). Assume a heat of combustion of
46,350 kJ/kg.

Solution. The geometry of the BLEVE is calculated from Eqs. (2.2.32)-(2.2.36).
For an initial mass, M = 100,000 kg, the BLEVE fireball geometry is given by

Dmax = 5.8M^(1/3) = (5.8)(100,000 kg)^(1/3) = 269 m
tBLEVE = 2.6M^(1/6) = (2.6)(100,000 kg)^(1/6) = 17.7 s
HBLEVE = 0.75Dmax = (0.75)(269 m) = 202 m
Dinitial = 1.3Dmax = (1.3)(269 m) = 350 m
For the radiation fraction, R, assume a value of 0.3 (Hymes, 1983; Roberts, 1981).
The emitted flux at the surface of the fireball is determined from Eq. (2.2.40):

E = RMHc / (πDmax² tBLEVE) = (0.3)(100,000 kg)(46,350 kJ/kg) / [(3.14)(269 m)²(17.7 s)]
  = 345 kJ/m² s = 345 kW/m²
The view factor, assuming a vertically oriented target, is determined from Eq.
(2.2.47):

F21 = L(D/2)² / (L² + HBLEVE²)^(3/2) = (200 m)(269 m/2)² / [(200 m)² + (202 m)²]^(3/2)
    = 0.158
The transmissivity of the atmosphere is determined from Eq. (2.2.42). This
requires a value, Xs, for the path length from the surface of the fireball to the
target, as shown in Figure 2.72. This path length runs from the surface of the fireball
to the receptor and is equal to the hypotenuse minus the radius of the BLEVE fireball:

Xs = (HBLEVE² + L²)^(1/2) − Dmax/2 = [(202 m)² + (200 m)²]^(1/2) − (0.5)(269 m) = 150 m

The transmissivity of the air is given by Eq. (2.2.42):

τa = 2.02(PwXs)^(−0.09) = (2.02)[(2810 Pa)(150 m)]^(−0.09) = 0.630
The received flux at the receptor is calculated using Eq. (2.2.45):

Er = τaEF21 = (0.630)(345 kW/m²)(0.158) = 34.3 kW/m²

This received radiation is enough to cause blistering of bare skin after a few seconds
of exposure.
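The full chain of Example 2.27, from geometry through received flux, can be scripted end to end; a minimal sketch using only the constants of the example (variable names are mine):

```python
import math

# Input data from Example 2.27
M = 100000.0        # initial flammable mass, kg
Hc = 46.35e6        # heat of combustion, J/kg
R = 0.3             # radiative fraction (Hymes, 1983)
L = 200.0           # ground distance from fireball center, m
Pw = 2810.0         # water partial pressure, Pa

# Fireball geometry, Eqs. (2.2.32), (2.2.34), (2.2.35)
Dmax = 5.8 * M ** (1.0 / 3.0)
t_bleve = 2.6 * M ** (1.0 / 6.0)       # M >= 30,000 kg branch
H = 0.75 * Dmax

# Surface emitted flux, Eq. (2.2.40)
E = R * M * Hc / (math.pi * Dmax ** 2 * t_bleve)

# View factor for a vertical target, Eq. (2.2.47)
F21 = L * (Dmax / 2.0) ** 2 / (L ** 2 + H ** 2) ** 1.5

# Path length from fireball surface and transmissivity, Eq. (2.2.42)
Xs = math.hypot(H, L) - Dmax / 2.0
tau = 2.02 * (Pw * Xs) ** -0.09

# Received flux, Eq. (2.2.45)
Er = tau * E * F21
print(round(Er / 1000.0, 1), "kW/m**2")
```

The printed value reproduces the worked result of roughly 34 kW/m² at the 200 m receptor.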
An alternate approach is to use Eq. (2.2.41) or (2.2.44) to estimate the radiative
energy received at the receptor. In this case Xc is the distance from the center of the
fireball to the receptor. From geometry this is given by

Xc = [(202 m)² + (200 m)²]^(1/2) = 284.2 m

Substituting into Eq. (2.2.41),

Er = 2.2τaRHcM^(2/3) / (4πXc²)
   = (2.2)(0.630)(0.3)(46.35 × 10⁶ J/kg)(100,000 kg)^(2/3) / [(4)(3.14)(284.2 m)²]
   = 40.9 kW/m²
FIGURE 2.72. Geometry for Example 2.27: BLEVE thermal flux, showing the fireball,
the path length from its surface, and the receptor.
which is close to the previously calculated value of 34.3 kW/m². Using Eq. (2.2.44),

Er = 8.28 × 10⁵ M^0.771 / Xc² = (8.28 × 10⁵)(100,000 kg)^0.771 / (284.2 m)²
   = 73.4 kW/m²

which is a different result, more conservative in this case.
This problem is readily implemented using a spreadsheet. The spreadsheet output
is shown in Figure 2.73.
Example 2.28: Blast Fragments from a BLEVE. A sphere containing 293,000 gallons of
propane (approximately 60% of its capacity) is subjected to a fire surrounding
the sphere. There is a torchlike flame impinging on the wall above the liquid level in
the tank. A BLEVE occurs and the tank ruptures. It is estimated that the tank fails at
approximately 350 psig. Estimate the energy release of the failure, the number of
fragments to be expected, and the approximate maximum range of the fragments. The
inside diameter of the sphere is 50 ft, its wall thickness is ¾ inch, and the shell is
made of steel with a density of 487 lbm/ft³. Assume an ambient temperature of 77°F and
a pressure of 1 atm.
Solution. The total volume of the sphere is

V = πD³/6 = (3.14)(50 ft)³/6 = 65,450 ft³ = 1854 m³

The volume of liquid is 0.6 × 65,450 ft³ = 39,270 ft³. The vapor volume is 65,450 ft³
− 39,270 ft³ = 26,180 ft³. If we assume that pressure effects are due to vapor alone,
ignoring any effect from the flashing liquid, and if we assume isothermal behavior and
Example 2.27: BLEVE Thermal Flux

Input Data:
  Initial flammable mass:                    100000 kg
  Water partial pressure in air:             2810 Pascals
  Radiation fraction, R:                     0.3
  Distance from fireball center on ground:   200 m
  Heat of combustion of fuel:                46350 kJ/kg

Calculated Results:
  Maximum fireball diameter:                 269.2 m
  Fireball combustion duration:              17.7 s   (M > 30,000 kg)
  Center height of fireball:                 201.9 m
  Initial ground level hemisphere diameter:  350.0 m
  Surface emitted flux:                      344.9 kW/m**2
  Path length:                               149.6 m
  Transmissivity:                            0.630
                                             Horizontal   Vertical
  View factor:                               0.16         0.16
  Received flux (kW/m**2):                   34.63        34.30

FIGURE 2.73. Spreadsheet output for Example 2.27: BLEVE thermal flux.
an ideal gas, then the energy of explosion due to loss of physical containment alone
(i.e., no combustion of the vapor) is given by Eq. (2.2.12):

W = 1.39 × 10⁻⁶ V (P1/P0) Rg T0 ln(P1/P2)
  = 1.39 × 10⁻⁶ (26,180 ft³)(364.7 psia / 14.7 psia)(1.987 BTU/lb-mole °R)(537°R)
    × ln(364.7 psia / 14.7 psia)

W = 3090 lb TNT
The TNT equivalent could be used with Eq. (2.2.1) and Figure 2.48 to determine the
overpressure at a specified distance from the explosion.
The number of fragments is estimated using Eq. (2.2.31).
Number of fragments = -3.77 + 0.0096 (vessel capacity, m3)
= -3.77 + 0.0096 (1854m3)
= 14 fragments
The total volume of the ¾-inch (0.0625 ft) vessel shell is

V = (π/6)[(50 ft + 0.0625 ft)³ − (50 ft)³] = 246 ft³

The mass of the vessel is 246 ft³ × 487 lb/ft³ = 119,700 lb. If this weight is
distributed evenly among 14 fragments, the average weight of each fragment is
119,700 lb/14 = 8547 lb.
A quick estimate of the initial velocity of the fragments is determined from Eq.
(2.2.25):

u = 2.05 (PD³/W)^(1/2)

where
u is the initial velocity of the fragment (ft/s)
P is the rupture pressure (psig)
D is the diameter of the fragment (inch)
W is the weight of the fragment (lb)
The average diameter of the fragment is estimated by assuming that each shell fragment
is crumpled into a sphere. Thus, we can determine a fragment diameter by assuming a
sphere equal in surface area to the original outer surface area of the fragment. The
total surface area of the original vessel is

A = πD² = (3.14)(50 ft)² = 7854 ft²

The fragment surface area is then 7854 ft²/14 = 561 ft². The equivalent diameter of a
sphere with this surface area is

D = (A/π)^(1/2) = (561 ft²/3.14)^(1/2) = 13.36 ft = 160 in.
Substituting the numbers provided into Eq. (2.2.25),

u = 2.05 [(350 psig)(160 in.)³ / (8547 lb)]^(1/2) = 842 ft/s = 257 m/s
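Eq. (2.2.25) in code (a sketch; the function name is mine):

```python
import math

def fragment_initial_velocity(p_psig, d_in, w_lb):
    """Initial fragment velocity u (ft/s) from Eq. (2.2.25).

    p_psig: rupture pressure (psig); d_in: fragment diameter (in.);
    w_lb: fragment weight (lb).
    """
    return 2.05 * math.sqrt(p_psig * d_in ** 3 / w_lb)
```

For the 350 psig, 160 in., 8547 lb fragment above this gives roughly 840 ft/s, in line with the worked value.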
The procedure by Baker is used to calculate the approximate range of a missile
under these circumstances:

ρ0 = 1.19 kg/m³ = 0.0740 lbm/ft³ (density of air)
M = 8547 lb (3877 kg)
AD = 561 ft² (52.12 m²)

From Table 2.26, select a drag coefficient for a sphere: CD = 0.47.

The scaled initial velocity in Figure 2.63 can now be calculated:

ρ0CDADu² / (Mgc) = (0.0740 lbm/ft³)(0.47)(561 ft²)(842 ft/s)² / [(8547 lbm)(32.17 ft/s²)]
                 = 50.4

If it is assumed that the fragment is "chunky," that is,

CLAL / (CDAD) = 0

then from Figure 2.63, for a scaled initial velocity of 50.4,

ρ0CDADR / M = 4.81

Solving for R,

R = (4.81)(8547 lbm) / [(0.0740 lbm/ft³)(0.47)(561 ft²)] = 2106 ft = 642 m
This is the expected range of the fragments. If the fragments were flatter instead of
spherical, then the drag coefficient would be larger and the resulting distance would be
less.
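The scaled-velocity and range arithmetic above is easy to script; the scaled range of 4.81 must still be read from Figure 2.63 and is hard-coded here (a sketch with the example's numbers, variable names mine):

```python
# Scaled initial velocity and fragment range for Example 2.28,
# following the Baker procedure.
RHO_AIR = 0.0740      # air density, lbm/ft**3
CD = 0.47             # drag coefficient for a sphere (Table 2.26)
AD = 561.0            # exposed fragment area, ft**2
U = 842.0             # initial fragment velocity, ft/s
MASS = 8547.0         # fragment mass, lbm
GC = 32.17            # gravitational conversion, ft/s**2

# Dimensionless scaled initial velocity (about 50 here)
scaled_velocity = RHO_AIR * CD * AD * U ** 2 / (MASS * GC)

# Scaled range read from Figure 2.63 at zero lift-to-drag ratio;
# the figure lookup itself cannot be computed here.
SCALED_RANGE = 4.81

# Invert the scaled range to get the fragment range in feet
range_ft = SCALED_RANGE * MASS / (RHO_AIR * CD * AD)
print(round(scaled_velocity, 1), round(range_ft), "ft")
```

This reproduces the worked range of about 2100 ft (642 m).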
The spreadsheet implementation of this example is provided in Figure 2.74.
2.2.4.4. DISCUSSION
Strengths and Weaknesses
BLEVE dimensions and durations have been studied by many authors and the empirical basis consists of several well-described incidents, as well as many smaller laboratory
trials. The use of a surface emitted flux estimate is the greatest weakness, as this is not a
fundamental property. Fragment correlations are subject to the same weaknesses discussed in Section 2.2.3.4.
Identification and Treatment of Possible Errors
The two largest potential errors are the estimation of the mass involved and the surface
emitted flux. The surface emitted flux is an empirical term derived from the estimated
radiant fraction. While this is not fundamentally based, the usual value is similar in
magnitude to (but larger than) that used in API 521 for jet flare radiation estimates.
A simplified graphical or correlation approach provides a check, but such approaches
do not allow for differing materials or atmospheric conditions.
Example 2.28: Blast Fragments from a BLEVE

Input Data:
  Diameter of sphere:               15.24 m        = 50.00 ft
  Vessel failure pressure:          2514 kPa abs   = 364.73 psia
  Vessel liquid fill fraction:      0.6
  Vessel wall thickness:            1.905 cm       = 0.75 in
  Vessel wall density:              7800 kg/m**3   = 486.95 lb/ft**3
  Temperature:                      298 K          = 536.40 R
  Ambient pressure:                 101.325 kPa abs
  Drag coefficient of fragment:     0.47
  Lift to drag ratio:               0

Calculated Results:
  Total volume of sphere:           1853.33 m**3   = 65447.46 ft**3
  Liquid volume:                    1112.00 m**3   = 39268.48 ft**3
  Vapor volume:                     741.33 m**3    = 26178.98 ft**3
  Energy of explosion:              1401.70 kg TNT = 3090.18 lb TNT
  Number of fragments:              14
  Volume of vessel shell:           6.96 m**3      = 245.74 ft**3
  Total mass of vessel:             54278 kg       = 119661 lb
  Average mass of each fragment:    3877.03 kg     = 8547.25 lb
  Total surface area of sphere:     729.66 m**2    = 7853.79 ft**2
  Surface area for each fragment:   52.12 m**2     = 560.99 ft**2
  Average diameter of spherical fragment: 4.07 m   = 13.36 ft
  Initial velocity of fragment:     256.76 m/s     = 842.39 ft/s
  Density of ambient air:           1.19 kg/m**3   = 0.0740 lb/ft**3
  Scaled velocity of fragment:      50.41

Interpolated values from figure for various lift to drag ratios:

  Lift to drag ratio    Scaled Range    Range (m)
          0               4.810431       641.99
          0.5             5.299823       707.30
          1               3.964659       529.11
          3               0.775030       103.43
          5               0.490619        65.48
         10               0.238585        31.84
         30               0.079547        10.62
         50               0.051752         6.91
        100               0.023798         3.18

  Interpolated range:                642 m   = 2106 ft
  Theoretical max. range (no lift):  6727 m  = 22071 ft
  Max. possible range (with lift):   707 m   = 2321 ft

FIGURE 2.74. Spreadsheet output for Example 2.28: Blast fragments from a BLEVE.
Utility
BLEVE models require some care in application, as errors in surface flux, view factor,
or transmissivity can lead to significant error. Thermal hazard zone calculations will be
iterative due to the shape factor and transmissivity which are functions of distance.
Fragment models showing the possible extent of fragment flight and damage effects are
difficult to use.
Resources Needed
A process engineer with some understanding of thermal radiation effects could use
BLEVE models quite easily. A half-day calculation period should be allowed unless the
procedure is computerized, in which case much more rapid calculation and exploration
of sensitivities is possible. Spreadsheets can be readily applied.
Available Computer Codes
Several integrated analysis packages contain BLEVE and fireball modeling. These
include:
ARCHIE (Environmental Protection Agency, Washington, DC)
EFFECTS-2 (TNO, Apeldoorn, The Netherlands)
PHAST (DNV, Houston, TX)
QRAWorks (PrimaTech, Columbus, OH)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
TRACE (Safer Systems, Westlake Village, CA)
2.2.5. Confined Explosions
2.2.5.1. BACKGROUND
Purpose
Confined explosions in the context of this section (see Figure 2.46) include deflagrations or other sources of rapid chemical reaction which are constrained within vessels
and buildings. Dust explosions and vapor explosions within low strength vessels and
buildings are one major category of confined explosion that is discussed in this chapter.
Combustion reactions, thermal decompositions, or runaway reactions within process
vessels and equipment are the other major category of confined explosions. In general,
a deflagration occurring within a building or low strength structure such as a silo is less
likely to impact the surrounding community and is more of an in-plant threat because
of the relatively small quantities of fuel and energy involved. Shock waves and projectiles are the major threats from confined explosions.
Philosophy
The design of process vessels subject to internal pressure is treated by codes such as the
Unfired Pressure Vessel Code (ASME, 1986). Vessels can be designed to contain internal
deflagrations. Recommendations to accomplish this are contained in NFPA 69 (1986)
and Noronha et al. (1982). The design of relief systems for both low strength enclosures and process vessels, commonly referred to as "Explosion Venting," is covered by
Guide for Venting Deflagrations (NFPA 68, 1994). As of this writing both NFPA 68
and NFPA 69 are under revision, with major changes to include updated information
from the German standard VDI 3673 (VDI, 1995). Details on the new VDI update
are contained in Siwek (1994).
Applications
There are few published CPQRAs that consider the risk implications of these effects;
however the Canvey Study (Health & Safety Executive, 1978) considered missile
damage effects on process vessels.
Next Page
Previous Page
Resources Needed
A process engineer with some understanding of thermal radiation effects could use
BLEVE models quite easily. A half-day calculation period should be allowed unless the
procedure is computerized, in which case much more rapid calculation and exploration
of sensitivities is possible. Spreadsheets can be readily applied.
Available Computer Codes
Several integrated analysis packages contain BLEVE and fireball modeling. These
include:
ARCHIE (Environmental Protection Agency, Washington, DC)
EFFECTS-2 (TNO, Apeldoorn, The Netherlands)
PHAST (DNV3 Houston, TX)
QRAWorks (PrimaTech, Columbus, OH)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
TRACE (Safer Systems, Westlake Village, CA)
2.2.5. Confined Explosions
2.2.5.1. BACKGROUND
Purpose
Confined explosions in the context of this section (see Figure 2.46) include deflagrations or other sources of rapid chemical reaction which are constrained within vessels
and buildings. Dust explosions and vapor explosions within low strength vessels and
buildings are one major category of confined explosion that is discussed in this chapter.
Combustion reactions, thermal decompositions, or runaway reactions within process
vessels and equipment are the other major category of confined explosions. In general,
a deflagration occurring within a building or low strength structure such as a silo is less
likely to impact the surrounding community and is more of an in-plant threat because
of the relatively small quantities of fuel and energy involved. Shock waves and projectiles are the major threats from confined explosions.
Philosophy
The design of process vessels subject to internal pressure is treated by codes such as the
Unfired Pressure Vessel Code (ASME, 1986). Vessels can be designed to contain internal
deflagrations. Recommendations to accomplish this are contained in NFPA 69 (1986)
and Noronha et al. (1982). The design of relief systems for both low strength enclosures and process vessels, commonly referred to as "Explosion Venting," is covered by
Guide for Venting Deflagrations (NFPA 68, 1994). As of this writing both NFPA 68
and NFPA 69 are under revision, with major changes to include updated information
from the German standard VDI 3673 (VDI, 1995). Details on the new VDI update
are contained in Siwek (1994).
Applications
There are few published CPQRAs that consider the risk implications of these effects;
however the Canvey Study (Health & Safety Executive, 1978) considered missile
damage effects on process vessels.
2.2.5.2. DESCRIPTION
Description of the Technique
The technique is based on the determination of the peak pressure. Where this is sufficient to cause vessel failure, the consequences can be determined.
For most pressure vessels designed to the ASME Code, the minimum bursting
pressure is at least four times the "stamped" maximum allowable working pressure
(MAWP). For a number of reasons (e.g., initial corrosion allowance, use of next available plate thicknesses), vessel ultimate strengths can greatly exceed this value. TNO
(1979) uses a lower value of 2.5 times MAWP, as European vessels can have a lower
factor of safety. It is possible to be more precise if plate thickness, vessel diameter, and
material of construction are known. A burst pressure can be estimated using the ultimate strength of the material and 100% weld efficiency in a hoop stress calculation.
Specialist help is desirable for those calculations. A treatment of the bursting and fragmentation of vessels is given in Section 2.2.3.
The explosion of a flammable mixture in a process vessel or pipework may be a deflagration or a detonation. Detonation is the more violent form of combustion, in
which the flame front is linked to a shock wave and moves at a speed greater than the
speed of sound in the unreacted gases. Well known examples of gas-air mixtures which
can detonate are hydrogen, acetylene, ethylene and ethylene oxide. A deflagration is a
lower speed combustion process, with speeds less than the speed of sound in the
unreacted medium, but it may undergo a transition to detonation. This transition
occurs in pipelines but is unlikely in vessels or in the open.
Deflagrations can be vented because the rate of pressure increase is low enough
that the opening of a vent will result in a lower maximum pressure. Detonations, however, cannot be vented since the pressure increases so rapidly that the vent opening will
have limited impact on the maximum pressure.
A dust explosion is usually a deflagration. Some of the more destructive explosions in coal mines and grain elevators give strong indications that detonation was approached, but attempts to reproduce those results experimentally have not succeeded.
Certain factors in the combustion of combustible dust are unique and as a result they
are modeled separately from gases.
Deflagrations. For flammable gas mixtures, Lees (1986) summarizes the work of
Zabetakis (1965) of the U.S. Bureau of Mines for the maximum pressure rise as a result
of a change in the number of moles and temperature.
Pmax/P1 = n2T2/(n1T1) = M1T2/(M2T1)    (2.2.49)

where
Pmax is the maximum absolute pressure (force/area)
P1 is the initial absolute pressure (force/area)
n is the number of moles in the gas phase
T is the absolute temperature of the gas phase
M is the molecular weight of the gas
and subscripts 1 and 2 denote the initial and final states, respectively.
Equation (2.2.49) will provide an exact answer if the final temperature and molecular weight are known and the gas obeys the ideal gas law. If the final temperature is
not known, then the adiabatic flame temperature can be used to provide a theoretical
upper limit to the maximum pressure. Equation (2.2.49) predicts a maximum pressure
usually much higher than the actual pressure—experimental determination is always
recommended.
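Equation (2.2.49) can be sketched in a few lines. The flame temperature and molecular weights below are illustrative assumptions, not data from the text; for a hydrocarbon-air mixture with little change in average molecular weight, the pressure ratio reduces to roughly T2/T1:

```python
def max_deflagration_pressure(p1, t1, t2, m1, m2):
    """Eq. (2.2.49): Pmax = P1 * (M1 * T2) / (M2 * T1), ideal gas, closed vessel."""
    return p1 * (m1 * t2) / (m2 * t1)

# Illustrative values (assumed): hydrocarbon-air mixture ignited at 298 K and
# 101.325 kPa, adiabatic flame temperature near 2400 K, negligible change in
# average molecular weight.
p_max = max_deflagration_pressure(p1=101.325, t1=298.0, t2=2400.0, m1=29.0, m2=29.0)
print(round(p_max / 101.325, 1))  # → 8.1, close to the "8x" rule of thumb below
```

As the text notes, this theoretical upper limit usually exceeds the actual pressure, so experimentally determined values are preferred.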
NFPA 68 (NFPA, 1994) also gives a cubic law relating rate of pressure rise to vessel volume in the form

KG or KSt = V^(1/3) (dP/dt)max    (2.2.50)
where KG is the characteristic deflagration constant for gases and KSt is the characteristic
venting constant for dusts. The "St" subscript derives from the German word for dust,
or Staub. The deflagration constant is not an inherent physical property of the material,
but simply an observed artifact of the experimental procedure. Thus, different experimental approaches, particularly for dusts, will result in different values, depending on
the composition, mixing, ignition energy, and volume, to name a few. Furthermore,
the result is dependent on the characteristics of the dust particles (i.e., size, size distribution, shape, surface character, moisture content, etc.).
The (dP/dt)max value is the maximum slope in the pressure versus time data
obtained from the experimental procedure. ASTM procedures are available (ASTM,
1992).
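The cubic law of Eq. (2.2.50) is a one-line calculation; the test volume and pressure-rise rate below are hypothetical values for illustration:

```python
def deflagration_index(volume_m3, dpdt_max_bar_s):
    """Eq. (2.2.50): KG or KSt = V**(1/3) * (dP/dt)max, in bar·m/s."""
    return volume_m3 ** (1.0 / 3.0) * dpdt_max_bar_s

# Hypothetical test: (dP/dt)max = 500 bar/s measured in an 8-liter (0.008 m3) vessel.
k = deflagration_index(0.008, 500.0)
print(round(k, 1))  # → 100.0 bar·m/s
```

Because the constant is an artifact of the experimental procedure, the same material can yield different K values under different test conditions, as the text cautions.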
Senecal and Beaulieu (1997) provide extensive experimental values for KG and Pmax. Correlations of KG with flame speed, stoichiometry, and fuel autoignition temperature are provided.
The experimental approach is used to produce nomographs and equations for calculating the vent area needed to relieve a given overpressure. The NFPA 68 guide (NFPA, 1994) also lists tables of experimental data for gases, liquids, and dusts that show Pmax and (dP/dt)max. Whenever possible, the experimental data used must be representative of the specific material and process conditions.
From these experimental data and from the relations given by Zabetakis, the maximum pressure rise for most deflagrations is typically
P2/P1 = 8 for hydrocarbon-air mixtures
P2/P1 = 16 for hydrocarbon-oxygen mixtures
where P2 is the final absolute pressure and P1 is the initial absolute pressure. Some risk
analysts use conservative values of 10 and 20, respectively, for these pressures.
Detonation. Lewis and von Elbe (1987) describe the theory of detonation, which
can be used to predict the peak pressure and the shock wave properties (e.g., velocity
and impulse pressure). Lees (1986) says the peak pressure for a detonation in a containment initially at atmospheric pressure may be about 20 bar (a 20-fold increase).
This pressure can be many times larger if there is reflection against solid surfaces.
Dust Explosions. Bartknecht (1989), Lees (1986), and NFPA 68 (1994) contain a
considerable amount of dust explosion test data. The nomographs in NFPA 68 can be
used to estimate the pressure within a vessel, provided the related functions of vent size,
class of dust (St-1, 2, or 3), or KSt, vessel size, and vent release pressure are known.
Nomographs for three dust classes

St-1 for KSt ≤ 200 bar m/s
St-2 for 200 < KSt ≤ 300 bar m/s
St-3 for KSt > 300 bar m/s

are available. In addition, nomographs are provided for specific KSt values in the range of 50-600 bar m/s. Empirical equations are also provided that allow the problem to be solved algebraically.
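A minimal classifier following the class boundaries above; assigning the boundary values at exactly 200 and 300 bar·m/s to the lower class follows the usual NFPA 68 convention:

```python
def dust_class(k_st):
    """Classify a dust by its KSt value (bar·m/s) into class St-1, St-2, or St-3."""
    if k_st <= 200.0:
        return "St-1"
    if k_st <= 300.0:
        return "St-2"
    return "St-3"

print(dust_class(150.0), dust_class(250.0), dust_class(400.0))  # → St-1 St-2 St-3
```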
In the case of low strength containers, similar estimates can be made using the
equations outlined by Swift and Epstein (1987).
If the values of peak pressure calculated exceed the burst pressure of the vessel, then
the consequences of the resulting explosion should be determined. As in Sections 2.2.3
and 2.2.4, the resulting effects are a shock wave, fragments, and a burning cloud.
Although the pressure at which the vessel may burst may be well below the maximum
pressure that could have developed, it is frequently conservatively assumed that the
stored energy released as a shock wave is based on the maximum pressure that could
have developed.
In chemical decompositions and detonations it is also frequently assumed that the
available chemical stored energy is converted to a TNT equivalent.
The phenomenon of pressure piling is an important potential hazard in systems with
interconnected spaces. The pressure developed by an explosion in Space A can cause
pressure/temperature rise in connected Space B. This enhanced pressure is now the starting point for further increase in explosion pressure. This phenomenon has also been seen
frequently in electrical equipment installed in areas using flammable materials.
A small primary dust explosion may have major consequences if additional combustible dust is present. The shock of the initial dust explosion can disperse additional
dust and cause an explosion of considerably greater violence. It is not unusual to see a
chain reaction with devastating results.
Logic Diagram
The logic of confined explosion modeling showing the stepwise procedure is provided
in Figure 2.75.
Theoretical Foundations
Although the fundamentals of combustion and explosion theory have evolved over the last 100 years, the detailed application to most gases has been more recent. For simple molecules, the theoretical foundation is sound. For more complex species, particularly dusts and mists, the treatment is more empirical. Nevertheless, good experimental data have been pooled by the U.S. Bureau of Mines (Zabetakis, 1965; Kuchta, 1973), NFPA 68 (NFPA, 1994), VDI 3673 (VDI, 1995), and Bartknecht (1989). An alternate approach is used in the UK and other parts of Europe as described by Schofield (1984).
Input Requirements and Availability
The technology requires data on container strengths and combustion parameters. The
latter are usually readily available; data on containment behavior are more difficult.
[Figure 2.75 shows the stepwise logic: flammable mixture/chemical in a process vessel or enclosure → estimate maximum pressure, Eq. (2.2.49) → estimate burst pressure of vessel or enclosure → if the maximum pressure exceeds the burst pressure, estimate overpressure using the methods in Section 2.2.4.2 and projectile effects using the methods in Section 2.2.3.2, and check whether secondary effects (pressure piling, secondary dust explosion) are possible; otherwise, no consequence.]
FIGURE 2.75. Logic diagram for confined explosion analysis.
Vessel bursting pressure can be derived accurately only with a full appreciation of the
vessel metallurgy and operating history; however, it should be sufficient for CPQRA
purposes to refer to the relevant design codes and estimate the bursting pressure based
on the safety factor employed.
Output
This analysis provides overpressure versus distance effects and also projectile effects.
Using NFPA 68 (NFPA, 1994), overpressures can be estimated for vented vessels and
buildings, which allows estimates to be made of the expected damage levels.
Simplified Approaches
The peak pressures achieved in confined explosions can be estimated as follows: for hydrocarbon-air mixtures, a deflagration reaches about eight times the initial absolute pressure and a detonation about 20 times. It can be assumed that pressure vessels fail at about four times the design working pressure. In the case of dust explosions, the NFPA nomographs can be used for relatively strong vessels and the modified Swift-Epstein equations in NFPA 68 (NFPA, 1994; see also Swift and Epstein, 1987) for low strength structures (such as buildings).
2.2.5.3. EXAMPLE PROBLEM
Example 2.29: Overpressure from a Combustion in a Vessel. A 1-m3 vessel rated at 1 barg contains a stoichiometric quantity of acetylene (C2H2) and air at atmospheric pressure and 25°C. Estimate the energy released upon combustion and calculate the
distance at which a shock wave overpressure of 21 kPa can be obtained. Assume an
energy of combustion for acetylene of 301 kcal/gm-mole.
Solution: The stoichiometric combustion of acetylene at atmospheric pressure
inside a vessel designed for 1 barg will produce pressures that will exceed the expected
burst pressure of the vessel.
The stoichiometric combustion of acetylene requires 2.5 moles of O2 per mole of acetylene:
C2H2 + 2.5 O2 → 2 CO2 + H2O

In air, each mole of O2 is accompanied by 3.76 moles of N2. The starting composition is thus C2H2 + 2.5 O2 + (2.5)(3.76) N2, resulting in the following initial gas mixture:
Compound    Moles    Mole fraction
C2H2          1.0        0.078
O2            2.5        0.194
N2            9.4        0.728
Total        12.9        1.000
A 1-m3 vessel at 25°C contains

(1 m3)(273 K/298 K)(1 gm-mole/0.0224 m3) = 40.90 gm-mole
The amount of acetylene in this volume that could combust is
(40.90 gm-mole)(0.078) = 3.19 gm-mole
Therefore the energy of combustion, Ec, is
Ec = (3.19 gm-mole)(301 kcal/gm-mole) = 960 kcal
Since 1 kg of TNT is equivalent to 1120 kcal, then the TNT mass equivalent =
960/1120 = 0.86 kg TNT. This represents the upper bound of the energy. The vessel
will probably begin to fail at about 5 barg. However, the rate of pressure rise during the
combustion may exceed the rate at which the vessel actually comes apart. The effective
failure pressure, therefore, is somewhere between the pressure at which the vessel
begins to fail and the maximum pressure obtainable from combustion inside a closed
vessel. As in physical explosions (Section 2.2.3) some fraction of the energy goes into
shock wave formation.
The most conservative assumption is to assume all of the combustion energy goes
into the shock wave. Thus, from Figure 2.48 for Ps = 21 kPa, Z = 7.83 m/kg^1/3. Then from Eq. (2.2.7)

R = Z W^1/3 = (7.83 m/kg^1/3)(0.86 kg TNT)^1/3 = 7.44 m
The spreadsheet output for this example is shown in Figure 2.76.
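The arithmetic of Example 2.29 can be reproduced in a few lines; note that the scaled distance Z = 7.83 is read from Figure 2.48 for Ps = 21 kPa rather than computed here:

```python
# Example 2.29: TNT-equivalence estimate for acetylene combustion in a 1-m3 vessel.
molar_volume = 0.0224                               # m3/gm-mole at 0 deg C, 1 atm
total_moles = 1.0 * (273.0 / 298.0) / molar_volume  # gas in a 1-m3 vessel at 25 deg C
fuel_moles = 0.078 * total_moles                    # stoichiometric acetylene fraction
energy_kcal = fuel_moles * 301.0                    # energy of combustion
w_tnt = energy_kcal / 1120.0                        # kg TNT (1 kg TNT = 1120 kcal)
z = 7.83                                            # m/kg**(1/3), Figure 2.48, Ps = 21 kPa
r = z * w_tnt ** (1.0 / 3.0)                        # Eq. (2.2.7)
print(round(total_moles, 1), round(w_tnt, 2), round(r, 2))  # → 40.9 0.86 7.44
```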
2.2.5.4. DISCUSSION
Strengths and Weaknesses
The main strength of these methods is that they are based largely on experimental data.
Their main weakness is frequently lack of data, particularly for dusts. Suitable methods
for handling gas mixtures and hybrid systems composed of flammable dusts and vapors
are lacking.
Identification and Treatment of Possible Errors
Schofield (1984) reports that experiments on the behavior of flammable mixtures in
large volumes (30 m3 or 1000 ft3) indicate that venting calculations developed from
small scale experiments may oversize the vents. Evaluation of container strengths can
Example 2.29: Overpressure from a Combustion in a Vessel

Input Data:
  Mole fraction of fuel:           0.078
  Molecular weight of fuel:        26
  Volume of vessel:                1 m**3
  Energy of combustion of fuel:    301 kcal/gm-mole
  Initial temperature:             25 deg. C
  Initial pressure:                0 barg

Calculated Results:
  Total moles in vessel:           40.90 gm-mole
  Total moles of fuel:             3.19 gm-mole
  Total mass of fuel:              82.94 gm
  Total energy of combustion:      960 kcal
  Equivalent mass of TNT:          0.86 kg of TNT

  Distance from blast:             7.44 m   <-- Trial and error to get desired overpressure
  Scaled distance, z:              7.832 m/kg**(1/3)

Overpressure Calculation (only valid for 0.0674 < z < 40):
  a + b*log(z):                    0.992653
  Overpressure:                    20.99 kPa (3.045 psi)

FIGURE 2.76. Spreadsheet output for Example 2.29: Overpressure from a combustion in a
vessel.
be a main source of error. Vessels are often stronger than safety factors assume and this
factor may be conservative in terms of the frequency or probability of vessel rupture,
but conversely, not conservative in terms of calculating the consequences of rupture.
Utility
The techniques discussed here are straightforward to apply and the data are readily
available (provided a simplistic estimate of bursting pressure is acceptable).
Resources
A process engineer should be able to perform each type of calculation in an hour.
Available Computer Codes
WinVent (Pred Engineering, Inc., Palm City, FL)
2.2.6. Pool Fires
2.2.6.1. BACKGROUND
Purpose
Pool fires tend to be localized in effect and are mainly of concern in establishing the
potential for domino effects and employee safety zones, rather than for community
risk. The primary effects of such fires are due to thermal radiation from the flame
source. Issues of intertank and interplant spacing, thermal insulation, fire wall specification, etc., can be addressed on the basis of specific consequence analyses for a range of
possible pool fire scenarios.
Drainage is an important consideration in the prevention of pool fires—if the
material is drained to a safe location, a pool fire is not possible. See NFPA 30 (NFPA,
1987a) for additional information. The important considerations are that (1) the liquid
must be drained to a safe area, (2) the liquid must be covered to minimize vaporization,
(3) the drainage area must be far enough away from thermal radiation fire sources, (4)
adequate fire protection must be provided, (5) consideration must be provided for
containment and drainage of fire water and (6) leak detection must be provided.
Philosophy
Pool fire modeling is well developed. Detailed reviews and suggested formulas are provided in Bagster (1986), Considine (1984), Crocker and Napier (1986), Institute of
Petroleum (1987), Mudan (1984), Mudan and Croce (1988), and TNO (1979).
A pool fire may result via a number of scenarios. It begins typically with the release
of flammable material from process equipment. If the material is liquid, stored at a
temperature below its normal boiling point, the liquid will collect in a pool. The geometry of the pool is dictated by the surroundings (i.e., diking), but an unconstrained pool
in an open, flat area is possible (see Section 2.1.2), particularly if the liquid quantity
spilled is inadequate to completely fill the diked area. If the liquid is stored under pressure above its normal boiling point, then a fraction of the liquid will flash into vapor,
with unflashed liquid remaining to form a pool in the vicinity of the release.
The analysis must also consider spill travel. Where can the liquid go and how far
can it travel?
Once a liquid pool has formed, an ignition source is required. Each release has a
finite probability of ignition and must be evaluated. The ignition can occur via the
vapor cloud (for flashing liquids), with the flame traveling upwind via the vapor to
ignite the liquid pool. For liquids stored below the normal boiling point without flashing, the ignition can still occur via the flammable vapor from the evaporating liquid.
Both of these cases may result in an initial flash fire due to burning vapors—this may
cause initial thermal hazards.
Once an ignition has occurred, a pool fire results and the dominant mechanism for
damage is via thermal effects, primarily via radiative heat transfer from the resulting
flame. If the release of flammable material from the process equipment continues, then
a jet fire is also likely (see Section 2.2.7). If the ignition occurs at the very beginning of
the release, then inadequate time is available for the liquid to form a pool and only a jet
fire will result.
The determination of the thermal effects depends on the type of fuel, the geometry
of the pool, the duration of the fire, the location of the radiation receiver with respect to
the fire, and the thermal behavior of the receiver, to name a few. All of these effects are
treated using separate, but interlinked models.
Application
Pool fire models have been applied to a large variety of combustible and flammable
materials.
2.2.6.2. DESCRIPTION
Description of Technique—Pool Fire Models
Pool fire models are composed of several component submodels as shown in Figure
2.77. A selection of these is briefly reviewed here:
• burning rate
• pool size
• flame geometry, including height, tilt and drag
• flame surface emitted power
• geometric view factor with respect to the receiving source
• atmospheric transmissivity
• received thermal flux
Burning Rate
For burning liquid pools, the radiative heat transfer and the resulting burning rate increase with pool diameter. For pool diameters greater than 1 m, radiative heat transfer dominates and the flame's geometric view factor is constant; thus, a constant burning rate is expected. For such pools, Burgess et al. (1961) showed
that the rate at which the liquid pool level decreases is given by
ymax = 1.27 × 10^-6 (ΔHc/ΔH*)    (2.2.51)

where ymax is the vertical rate of liquid level decrease (m/s), ΔHc is the net heat of combustion (energy/mass), and ΔH* is the modified heat of vaporization at the boiling point of the liquid given by Eq. (2.2.52) (energy/mass). Typical vertical rates are 0.7 × 10^-4 m/s (gasoline) to 2 × 10^-4 m/s (LPG).

[Figure 2.77 shows the stepwise logic: pool fire → estimate vertical or mass burning rate, Eqs. (2.2.51) and (2.2.53) → estimate maximum pool diameter, Eq. (2.2.54) → estimate flame height, Eq. (2.2.55) → select radiation model: solid plume radiation model (Figure 2.77a) or point source radiation model (Figure 2.77b) → estimate thermal effect, Section 2.3.2.]
FIGURE 2.77. Logic diagram for calculation of pool fire radiation effects.
The modified heat of vaporization includes the heat of vaporization plus an adjustment for heating the liquid from the ambient temperature, Ta, to the boiling point temperature of the liquid, TBP:

ΔH* = ΔHv + ∫(Ta to TBP) Cp dT    (2.2.52)

where ΔHv is the heat of vaporization of the liquid at the ambient temperature (energy/mass) and Cp is the heat capacity of the liquid (energy/mass-deg).
Equation (2.2.52) can be modified for mixtures, or for liquids such as gasoline
which are composed of a number of materials (Mudan and Croce, 1988).
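Equations (2.2.51) and (2.2.53) can be sketched together. The heat of combustion and modified heat of vaporization below are illustrative gasoline-like values assumed for the example, not data from the text:

```python
def vertical_burning_rate(dh_c, dh_star):
    """Eq. (2.2.51): ymax = 1.27e-6 * dHc / dH* (m/s); dHc and dH* in consistent units."""
    return 1.27e-6 * dh_c / dh_star

def mass_burning_rate(dh_c, dh_star):
    """Eq. (2.2.53): mB = 1.0e-3 * dHc / dH* (kg/m2 s)."""
    return 1.0e-3 * dh_c / dh_star

# Illustrative gasoline-like values (assumed): dHc ~ 43,700 kJ/kg, dH* ~ 780 kJ/kg.
y = vertical_burning_rate(43700.0, 780.0)  # ~7e-5 m/s, near the quoted gasoline rate
m = mass_burning_rate(43700.0, 780.0)      # ~0.056 kg/m2 s
```

As the text notes, Eq. (2.2.51) combined with the liquid density fits experimental data better than Eq. (2.2.53), so that route is preferred when density data are available.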
[Figure 2.77a flow (solid plume radiation model): estimate surface emitted power, Eq. (2.2.59) → estimate geometric view factor, Eqs. (2.2.46) and (2.2.47) → estimate transmissivity, Eq. (2.2.42) → estimate incident radiant flux, Eq. (2.2.62). Figure 2.77b flow (point source radiation model): estimate radiant fraction, Table 2.27 → estimate point source location from flame height → estimate point source view factor, Eq. (2.2.60) → estimate transmissivity, Eq. (2.2.42) → estimate incident radiation flux, Eq. (2.2.61).]
FIGURE 2.77a. Logic diagram for the solid plume radiation model.
FIGURE 2.77b. Logic diagram for the point source radiation model.
The mass burning rate is determined by multiplying the vertical burning rate by the
liquid density. If density data are not available, the mass burning rate of the pool is estimated by
mB = 1 × 10^-3 (ΔHc/ΔH*)    (2.2.53)

where mB is the mass burning rate (kg/m2 s).
Equation (2.2.51) fits the experimental data better than Eq. (2.2.53), so the procedure using the vertical burning rate and the liquid density is preferred. Typical values
for the mass burning rate for hydrocarbons are in the range of 0.05 kg/m2s (gasoline)
to 0.12 kg/m2 s (LPG). Additional tabulations for the vertical and mass burning rates
are provided by Burgess and Zabetakis (1962), Lees (1986), Mudan and Croce (1988)
and TNO (1979).
Equations (2.2.51) to (2.2.53) apply to liquid pool fires on land. For pool fires on
water, the equations are applicable if the burning liquid has a normal boiling point well
above ambient temperature. For liquids with boiling points below ambient, heat transfer between the liquid and the water will result in a burning rate nearly three times the
burning rate on land (Mudan and Croce, 1988).
Pool Size
In most cases, pool size is fixed by the size of the release and by local physical barriers
(e.g., dikes, sloped drainage areas). For a continuous leak, on an infinite flat plane, the
maximum diameter is reached when the product of burning rate and surface area equals
the leakage rate.
Dmax = 2 [VL/(π y)]^1/2    (2.2.54)
where Dmax is the equilibrium diameter of the pool (length), VL is the volumetric liquid
spill rate (volume/time), and y is the liquid burning rate (length/time).
Equation (2.2.54) assumes that the burning rate is constant and that the dominant
heat transfer is from the flame. More detailed pool burning geometry models are available (Mudan and Croce, 1988).
Circular pools are normally assumed; where dikes lead to square or rectangular
shapes, an equivalent diameter may be used. Special cases include spills of cryogenic
liquids onto water (greater heat transfer) and instantaneous unbounded spills (Raj and
Kalelkar, 1974).
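Eq. (2.2.54) in code, with a hypothetical continuous spill; at Dmax, burning consumes liquid exactly as fast as it is spilled:

```python
import math

def max_pool_diameter(spill_rate_m3_s, burn_rate_m_s):
    """Eq. (2.2.54): Dmax = 2*sqrt(VL / (pi*y)) for a continuous leak on a flat plane."""
    return 2.0 * math.sqrt(spill_rate_m3_s / (math.pi * burn_rate_m_s))

# Hypothetical leak of 0.01 m3/s with a vertical burning rate y = 1e-4 m/s.
d = max_pool_diameter(0.01, 1.0e-4)
burn_consumption = 1.0e-4 * math.pi * d**2 / 4.0  # equals the spill rate at equilibrium
print(round(d, 1))  # → 11.3 m
```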
Flame Height
Many observations of pool fires show that there is an approximate ratio of flame height
to diameter. The best known correlation for this ratio is given by Thomas (1963) for
circular pool fires.
H/D = 42 [mB/(ρa √(gD))]^0.61    (2.2.55)
where
H is the visible flame height (m)
D is the equivalent pool diameter (m)
mB is the mass burning rate (kg/m2 s)
ρa is the air density (1.2 kg/m3 at 20°C and 1 atm)
g is the acceleration of gravity (9.81 m/s2)
Bagster (1986) summarizes rules of thumb for H/D ratios: Parker (1973) suggests a value of 3 and Lees (1994) lists a value of 2.
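The Thomas correlation, Eq. (2.2.55), is straightforward to code; the pool size and burning rate below are illustrative values:

```python
import math

def thomas_flame_height(m_b, d, rho_air=1.2, g=9.81):
    """Eq. (2.2.55): H = D * 42 * (mB / (rho_a * sqrt(g*D)))**0.61 for circular pools."""
    return d * 42.0 * (m_b / (rho_air * math.sqrt(g * d))) ** 0.61

# Illustrative: a 10-m diameter gasoline pool burning at about 0.05 kg/m2 s.
h = thomas_flame_height(0.05, 10.0)
print(round(h, 1))  # → 14.9, i.e., H/D about 1.5
```

The resulting H/D of roughly 1.5 sits somewhat below the rules of thumb of 2 to 3 quoted above, which is plausible for a large pool.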
Moorhouse (1982) provides a correlation for the flame height based on large-scale
LNG tests. This correlation includes the effect of wind on the flame length:
H/D = 6.2 [mB/(ρa √(gD))]^0.254 (u10*)^-0.044    (2.2.56)
where u10* is a nondimensional wind speed determined using

u10* = uw/(g mB D/ρv)^1/3    (2.2.57)

where uw is the measured wind speed at a 10-m height (m/s) and ρv is the vapor density at the boiling point of the liquid (kg/m3).
Flame Tilt and Drag
Pool fires are often tilted by the wind, and under stronger winds, the base of a pool fire
can be dragged downwind. These effects alter the radiation received at surrounding
locations. A number of correlations have been published to describe these two factors.
The correlation of Welker and Sliepcevich (1966) for flame tilt is frequently quoted,
but the American Gas Association (AGA) (1974) andMudan (1984) note poor results
for LNG fires. The AGA paper proposes the following correlation for flame tilt:
cos θ = 1 for u* ≤ 1
cos θ = 1/√u* for u* > 1    (2.2.58)

where u* is the nondimensional wind speed given by Eq. (2.2.57), evaluated at a height of 1.6 m, and θ is the flame tilt angle (degrees or radians).
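The AGA tilt correlation of Eq. (2.2.58) reduces to a one-liner; the wind speed values are illustrative:

```python
import math

def flame_tilt_deg(u_star):
    """Eq. (2.2.58), AGA: cos(theta) = 1 for u* <= 1, else 1/sqrt(u*).

    Returns the flame tilt angle theta in degrees.
    """
    cos_theta = 1.0 if u_star <= 1.0 else 1.0 / math.sqrt(u_star)
    return math.degrees(math.acos(cos_theta))

print(flame_tilt_deg(0.5), round(flame_tilt_deg(4.0), 1))  # → 0.0 60.0
```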
Flame drag occurs when wind pushes the base of the flame downwind from the
pool, with the upwind edge of the flame and flame width remaining unchanged. For
square and rectangular fires the base dimension is increased in the direction of the
wind. The thermal radiation downwind increases because the distance to a receiver
downwind is reduced. For circular flames, the flame shape changes from circular to
elliptical, resulting in a change in view factor and a change in the radiative effects.
Detailed flame drag correlations are provided by Mudan and Croce (1988).
Risk analyses can include or ignore tilt and drag effects. Flame tilt is more important; flame drag is an advanced topic, and many pool fire models do not include this
effect. A vertical (untilted) pool fire is often assumed, as this radiates heat equally in all
directions. If a particularly vulnerable structure is located nearby and flame tilt could
affect it, the CPQRA should consider tilt effects (both toward and away from the vulnerable object) and combine these with appropriate frequencies allowing for the direction of tilt.
Surface Emitted Power
The surface emitted power or radiated heat flux may be computed from the
Stefan-Boltzmann equation. This is very sensitive to the assumed flame temperature,
as radiation varies with temperature to the fourth power (Perry and Green, 1984). Further, the obscuring effect of smoke substantially reduces the total emitted radiation
integrated over the whole flame surface.
Two approaches are available for estimating the surface emitted power: the point
source and solid plume radiation models. The point source is based on the total combustion energy release rate while the solid plume radiation model uses measured thermal fluxes from pool fires of various materials (compiled in TNO, 1979). Both these
methods include smoke absorption of radiated energy (that process converts radiation
into convection). Typical measured surface emitted fluxes from pool fires are given by
Raj (1977), Mudan (1984), and Considine (1984). LPG and LNG fires radiate up to
250 kW/m2 (79,000 Btu/hr-ft2 ). Upper values for other hydrocarbon pool fires lie in
the range 110-170 kW/m2 (35,000-54,000 Btu/hr-ft2), but smoke obscuration often
reduces this to 20-60 kW/m2 ( 6300-19,000 Btu/hr-ft2 ).
For the point source model, the surface emitted power per unit area is estimated
using the radiation fraction method as follows:
1. Calculate total combustion power (based on burning rate and total pool area).
2. Multiply by the radiation fraction to determine total power radiated.
3. Determine flame surface area (commonly only the cylinder side area is used).
4. Divide radiated power by flame surface area.
The radiation fraction of total combustion power is often quoted in the range
0.15-0.35 (Mudan, 1984; TNO, 1979). See Table 2.27.
While the point source model provides simplicity, the wide variability in the radiation fraction and the inability to predict it fundamentally detract considerably from this approach.
The solid plume radiation model assumes that the entire visible volume of the
flame emits thermal radiation and the nonvisible gases do not (Mudan and Croce,
1988). The problem with this approach is that for large hydrocarbon fires, large
amounts of soot are generated, obscuring the radiating flame from the surroundings,
and absorbing much of the radiation. Thus, as the diameter of the pool fire increases,
the emitted flux decreases. Typical values for gasoline are 120 kW/m2 for a 1-m pool to
20 kW/m2 for a 50-m diameter pool. To further complicate matters, the high turbulence of the flame causes the smoke layer to open up occasionally, exposing the hot
flame and increasing the radiative flux emitted to the surroundings. Mudan and Croce
(1988) suggest the following model for sooty pool fires of high molecular weight
hydrocarbons to account for this effect,
Eav = Em e^(-SD) + Es (1 - e^(-SD))    (2.2.59)

where
Eav is the average emissive power (kW/m2)
Em is the maximum emissive power of the luminous spots (approximately 140 kW/m2)
Es is the emissive power of smoke (approximately 20 kW/m2)
S is an experimental parameter (0.12 m^-1)
D is the diameter of the pool (m)
TABLE 2.27. The Fraction of Total Energy Converted to Radiation for Hydrocarbons (Mudan and Croce, 1988)

Fuel            Fraction
Hydrogen          0.20
Methane           0.20
Ethylene          0.25
Propane           0.30
Butane            0.30
C5 and higher     0.40
Equation (2.2.59) produces an emissive power of 56 kW/m2 for a 10-m pool and
20 kW/m2 for a 100-m pool. This matches experimental data for gasoline, kerosene and
JP-4 fires reasonably well (Mudan and Croce, 1988).
Propane, ethane, LNG, and other low molecular weight materials do not produce
sooty flames.
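Eq. (2.2.59) with the stated parameter values reproduces the quoted 10-m and 100-m figures:

```python
import math

def average_emissive_power(d_m, e_max=140.0, e_smoke=20.0, s=0.12):
    """Eq. (2.2.59): Eav = Em*exp(-S*D) + Es*(1 - exp(-S*D)), in kW/m2.

    Defaults are the values quoted in the text for sooty high molecular
    weight hydrocarbon pool fires.
    """
    f = math.exp(-s * d_m)
    return e_max * f + e_smoke * (1.0 - f)

print(round(average_emissive_power(10.0)), round(average_emissive_power(100.0)))  # → 56 20
```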
Geometric View Factor
The view factor depends on whether the point source or solid plume radiation models
are used.
For the point source model, the view factor is given by
Fp = 1/(4πx^2)    (2.2.60)

where Fp is the point source view factor (length^-2) and x is the distance from the point source to the target (length).
Equation (2.2.60) assumes that all radiation arises from a single point and is
received by an object perpendicular to this. This view factor must only be applied to the
total heat output, not to the flux. Other view factors based on specific shapes (i.e., cylinders) require the use of thermal flux and are dimensionless. The point source view
factor provides a reasonable estimate of received flux at distances far from the flame. At
closer distances, more rigorous formulas or tables are given by Hamilton and Morgan
(1952), Crocker and Napier (1986), and TNO (1979).
For the solid plume radiation model, the view factors are provided in Figure 2.78
for untilted flames and Figure 2.79 for tilted flames. Figure 2.78 requires an estimate of
the flame height to diameter ratio, while Figure 2.79 requires an estimate of the flame tilt.
The complete equations for these figures are provided by Mudan and Croce (1988).
Both figures provide view factors for a ground level receiver from a radiation source
Dimensionless Distance from Flame Axis
= Distance from Flame Axis / Pool Radius
FIGURE 2.78. Maximum view factors for a ground-level receptor from a right circular cylinder
(Mudan and Croce, 1988).
Maximum View Factor at Ground Level, F21
Dlmensionless Distance from Flame Axis
= Distance from Flame Axis / Pool Radius
FIGURE 2.79. Maximum view factors for a ground-level receptor from a tilted circular cylinder
(Mudan and Croce, 1988).
represented by a right circular cylinder. Note that near the source the view factor is
almost independent of the flame height since the observer is exposed to the maximum
radiation.
Received Thermal Flux
The computation of the received thermal flux is dependent on the radiation model
selected.
If the point source model is selected, then the received thermal flux is determined
from the total energy rate from the combustion process:
E_r = τ_a Q_r F_p = τ_a η m_B ΔH_c A F_p    (2.2.61)

If the solid plume radiation model is selected, the received flux is based on correlations of the surface emitted flux:

E_r = τ_a E_av F_21    (2.2.62)

where

E_r is the thermal flux received at the target (energy/area-time)
τ_a is the atmospheric transmissivity, provided by Eq. (2.2.42) (unitless)
Q_r is the total energy rate from the combustion (energy/time)
F_p is the point source view factor (length⁻²)
η is the fraction of the combustion energy radiated, typically 0.15 to 0.35
m_B is the mass burning rate, provided by Eq. (2.2.53) (mass/area-time)
ΔH_c is the heat of combustion for the burning liquid (energy/mass)
A is the total area of the pool (length²)
E_av is the average emissive power, provided by Eq. (2.2.59) (energy/area-time)
F_21 is the solid plume view factor, provided by Figures 2.78 and 2.79

Values for the fraction of the combustion energy radiated, η, are given in Table 2.27.
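The two received-flux expressions can be sketched in a few lines; this is an illustrative Python implementation (the function names are assumptions, not from the text), demonstrated with the values worked out in Example 2.30 below:

```python
import math

def point_source_flux(tau_a, eta, m_b, dHc, area, x):
    """Received flux (kW/m**2) via Eq. (2.2.61), with the point source
    view factor Fp = 1/(4*pi*x**2) from Eq. (2.2.60) built in."""
    Fp = 1.0 / (4.0 * math.pi * x**2)           # 1/m**2
    return tau_a * eta * m_b * dHc * area * Fp  # kJ/m**2-s = kW/m**2

def solid_plume_flux(tau_a, E_av, F21):
    """Received flux (kW/m**2) via Eq. (2.2.62)."""
    return tau_a * E_av * F21

# Values from Example 2.30 (hydrocarbon pool in a 25-m dike):
print(round(point_source_flux(0.704, 0.35, 0.0876, 43700.0, 491.0, 77.6), 1))
print(round(solid_plume_flux(0.732, 26.0, 0.068), 2))
```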
Theoretical Foundation
Burning rate, flame height, flame tilt, surface emissive power, and atmospheric
transmissivity are all empirical, but well established, factors. The geometric view factor
is soundly based in theory, but simpler equations or summary tables are often
employed. The Stefan-Boltzmann equation is frequently used to estimate the flame
surface flux and is soundly based in theory. However, it is not easily used, as the flame
temperature is rarely known.
Input Requirements and Availability
The pool size must be defined, either based on local containment systems or on some
model for a flat surface. Burning rates can be obtained from tabulations or may be estimated from fuel physical properties. Surface emitted flux measurements are available
for many common fuels or are calculated using empirical radiation fractions or solid
flame radiation models. An estimate for atmospheric humidity is necessary for
transmissivity. All other parameters can be calculated.
Output
The primary output of thermal radiation models is the received thermal radiation at
various target locations. Fire durations should also be estimated as these affect thermal
effects (Section 2.3.2).
Simplified Approaches
Crocker and Napier (1986) provide tables of thermal impact zones from common situations of tank roof and ground pool fires. From these tables, safe separation distances
for people from pool fires can be estimated to be 3 to 5 pool diameters (based on a
"safe" thermal impact of 4.7 kW/m2).
2.2.6.3. EXAMPLE PROBLEM
Example 2.30: Radiation from a Burning Pool. A high molecular weight hydrocarbon liquid escapes from a pipe leak at a volumetric rate of 0.1 m3/s. A circular dike with
a 25 m diameter contains the leak. If the liquid catches on fire, estimate the thermal flux
at a receiver 50 m away from the edge of the diked area. Assume a windless day with
50% relative humidity. Estimate the thermal flux using the point source and the solid
plume radiation models.
Additional Data:
Heat of combustion of the liquid:        43,700 kJ/kg
Heat of vaporization of the liquid:      300 kJ/kg
Boiling point of the liquid:             363 K
Ambient temperature:                     298 K
Liquid density:                          730 kg/m³
Heat capacity of liquid (constant):      2.5 kJ/kg-K
Solution: Since the fuel is a high molecular weight material, a sooty flame is
expected. Equations (2.2.51) and (2.2.53) are used to determine the vertical burning
rates and the mass burning rates, respectively. These equations require the modified
heat of vaporization, which can be calculated using Eq. (2.2.52):
ΔH*_v = ΔH_v + ∫ (from T_a to T_BP) C_p dT
      = 300 kJ/kg + (2.5 kJ/kg·K)(363 K − 298 K) = 462.5 kJ/kg
The vertical burning rate is determined from Eq. (2.2.51):

y_max = 1.27 × 10⁻⁶ (ΔH_c/ΔH*_v) = (1.27 × 10⁻⁶ m/s)(43,700 kJ/kg)/(462.5 kJ/kg) = 1.20 × 10⁻⁴ m/s

The mass burning rate is determined by multiplying the vertical burning rate by the density of the liquid:

m_B = ρ y_max = (730 kg/m³)(1.20 × 10⁻⁴ m/s) = 0.0876 kg/m²·s
The maximum, steady state pool diameter is given by Eq. (2.2.54),

D_max = 2 √[V_L/(π y_max)] = 2 √[(0.10 m³/s)/((3.14)(1.20 × 10⁻⁴ m/s))] = 32.6 m

Since this is larger than the diameter of the diked area, the pool will be constrained by the dike with a diameter of 25 m. The area of the pool is

A = πD²/4 = (3.14)(25 m)²/4 = 491 m²
The flame height is given by Eq. (2.2.55),

H/D = 42 [m_B/(ρ_a √(gD))]^0.61
    = 42 [(0.0876 kg/m²·s)/((1.2 kg/m³)√((9.81 m/s²)(25 m)))]^0.61
    = 1.59

Thus, H = (1.59)(25 m) = 39.7 m
Point Source Model. This approach is based on representing the total heat release as
a point source. The received thermal flux for the point source model is given by Eq.
(2.2.61). The calculation requires values for the atmospheric transmissivity and the
view factor. The view factor is given by Eq. (2.2.60), based on the geometry shown in
Figure 2.80. The point source is located at the center of the pool, at a height equal to
half the height of the flame. This height is (39.7 m)/2 = 19.9 m. From the right triangle formed,
x² = (19.9 m)² + (25 m + 50 m)² = 6020 m²
x = 77.6 m
This represents the beam length from the point source to the receiver. The view
factor is determined using Eq. (2.2.60)
F_p = 1/(4πx²) = 1/[(4)(3.14)(77.6 m)²] = 1.32 × 10⁻⁵ m⁻²
FIGURE 2.80. Geometry of Example 2.30: Radiation from a burning pool.
The transmissivity is given by Eq. (2.2.42) with the partial pressure of water given
by Eq. (2.2.43). The results are
P_w = RH exp(14.4114 − 5328/T) = (0.50)exp(14.4114 − 5328/298) = 0.0156 atm = 1580 Pa at 298 K

τ_a = 2.02(P_w x_s)^(−0.09) = (2.02)[(1580 Pa)(77.6 m)]^(−0.09) = 0.704
The thermal flux is given by Eq. (2.2.61), assuming a conservative value of 0.35
for the fraction of the energy converted to radiation.
E_r = τ_a η m_B ΔH_c A F_p
    = (0.704)(0.35)(0.0876 kg/m²·s)(43,700 kJ/kg)(491 m²)(1.32 × 10⁻⁵ m⁻²)
    = 6.11 kJ/m²·s = 6.11 kW/m²
Solid Plume Radiation Model. The solid plume radiation model begins with an estimate of the radiant flux at the source. This is given by Eq. (2.2.59)
E_av = E_m e^(−SD) + E_s (1 − e^(−SD))
     = (140 kW/m²)e^(−(0.12 m⁻¹)(25 m)) + (20 kW/m²)[1 − e^(−(0.12 m⁻¹)(25 m))]
     = 26.0 kW/m²
Figure 2.78 is used to determine the geometric view factor. This requires the height to pool radius ratio and the dimensionless distance. Since H/D = 1.59, H/R = 2(1.59) = 3.18. The dimensionless distance to the receiver is X/R, where R is the radius of the pool and X is the distance from the flame axis to the receiver, that is, 50 m + 25/2 m = 62.5 m. Thus, X/R = 62.5 m/12.5 m = 5 and from Figure 2.78, F_21 = 0.068.
The atmospheric transmissivity is given by Eq. (2.2.42)

τ_a = 2.02(P_w x_s)^(−0.09) = (2.02)[(1580 Pa)(50 m)]^(−0.09) = 0.732

The radiant flux at the receiver is determined from Eq. (2.2.62)

E_r = τ_a E_av F_21 = (0.732)(26.0 kW/m²)(0.068) = 1.3 kJ/m²·s = 1.3 kW/m²
The result from the solid plume radiation model is smaller than the point source
model. This is most likely due to consideration of the radiation obscuration by the
flame soot, a feature not treated directly by the point source model. The differences
between the two models might be greater at closer distance to the pool fire.
The spreadsheet output for this example is shown in Figure 2.81.
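The full calculation chain of this example can be replicated in a short script. The sketch below is an illustrative Python reimplementation that assumes the example's input data, an air density of 1.2 kg/m³, and the beam-length geometry of Figure 2.80 as used in the text (horizontal leg D + X):

```python
import math

# Input data assumed from Example 2.30
V_L = 0.10        # leak rate, m**3/s
dHc = 43700.0     # heat of combustion, kJ/kg
dHv = 300.0       # heat of vaporization, kJ/kg
Tb, Ta = 363.0, 298.0   # boiling point and ambient temperature, K
rho = 730.0       # liquid density, kg/m**3
cp = 2.5          # liquid heat capacity, kJ/kg-K
D_dike = 25.0     # dike diameter, m
X = 50.0          # receptor distance from pool edge, m
RH = 0.50         # relative humidity, fraction

# Modified heat of vaporization, Eq. (2.2.52)
dHv_star = dHv + cp * (Tb - Ta)                  # 462.5 kJ/kg
# Vertical and mass burning rates, Eqs. (2.2.51) and (2.2.53)
y_max = 1.27e-6 * dHc / dHv_star                 # m/s
m_b = rho * y_max                                # kg/m**2-s
# Maximum pool diameter, Eq. (2.2.54); the dike governs if smaller
D_max = 2.0 * math.sqrt(V_L / (math.pi * y_max))
D = min(D_max, D_dike)
A = math.pi * D**2 / 4.0
# Flame height, Eq. (2.2.55), air density 1.2 kg/m**3
H = D * 42.0 * (m_b / (1.2 * math.sqrt(9.81 * D)))**0.61
# Water partial pressure, Eq. (2.2.43), and beam length per Figure 2.80
Pw = RH * math.exp(14.4114 - 5328.0 / Ta) * 101325.0   # Pa
x = math.sqrt((H / 2.0)**2 + (D + X)**2)
tau = 2.02 * (Pw * x)**-0.09                           # Eq. (2.2.42)
# Point source flux, Eqs. (2.2.60)-(2.2.61), with eta = 0.35
Er = tau * 0.35 * m_b * dHc * A / (4.0 * math.pi * x**2)
print(round(Er, 2))   # kW/m**2, about 6.1 as in the text
```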
Example 2.30: Radiation from a Burning Pool

Input Data:
Liquid leakage rate:                          0.1 m³/s
Heat of combustion of liquid:                 43,700 kJ/kg
Heat of vaporization of liquid:               300 kJ/kg
Boiling point of liquid:                      363 K
Ambient temperature:                          298 K
Liquid density:                               730 kg/m³
Constant heat capacity of liquid:             2.5 kJ/kg-K
Dike diameter:                                25 m
Receptor distance from pool:                  50 m
Relative humidity:                            50 %
Radiation efficiency for point source model:  0.35

Calculated Results:
Modified heat of vaporization:                462.5 kJ/kg
Vertical burning rate:                        1.20E-04 m/s
Mass burning rate:                            0.087598 kg/m²-s
Maximum pool diameter:                        32.57 m
Diameter used in calculation:                 25 m
Area of pool:                                 490.87 m²
Flame H/D:                                    1.59
Flame height:                                 39.72 m
Partial pressure of water vapor:              1579.95 Pa

Point Source Model:
Point source height:                          19.86 m
Distance to receptor:                         77.58 m
View factor:                                  1.3E-05 m⁻²
Transmissivity:                               0.70
Thermal flux at receptor:                     6.12 kW/m²

Solid Plume Radiation Model:
Source emissive power:                        25.97 kW/m²
Distance from flame axis to receptor:         62.5 m
Flame radius:                                 12.5 m
Flame H/R ratio:                              3.18
Dimensionless distance from flame axis:       5.00

Interpolated values from figure:
Flame H/R    View Factor
0.5          0.014709
1            0.028085
3            0.0666
6            0.094514

Interpolated view factor:                     0.06825
Transmissivity:                               0.732
Thermal flux at receptor:                     1.30 kW/m²

FIGURE 2.81. Spreadsheet output for Example 2.30: Radiation from a burning pool.
2.2.6.4. DISCUSSION
Strengths and Weaknesses
Pool fires have been studied for many years and the empirical equations used in the
submodels are well validated. The treatment of smoky flames is still difficult. A weakness of the pool models is that flame impingement effects are not considered; impingement produces substantially higher heat fluxes than predicted by thermal radiation models.
Identification and Treatment of Possible Errors
The largest potential error in pool fire modeling is introduced by the estimate for surface emitted flux. Where predictive formulas are used (especially Stefan-Boltzmann
types) simple checks on ratios of radiated energy to overall combustion energy should
be carried out. Pool size estimates are important, and the potential for dikes or other
containment to be overtopped by fluid momentum effects or by foaming should be
considered.
Utility
Pool fire models are relatively straightforward to use.
Resources Necessary
A trained process engineer will require several hours to complete a pool fire scenario by
hand if all necessary thermodynamic data, view factor formulas, and humidity data are
available.
Available Computer Codes
DAMAGE (TNO, Apeldoorn, The Netherlands)
PHAST (DNV, Houston, TX)
QRAWorks (PrimaTech, Columbus, OH)
TRACE (Safer Systems, Westlake Village, CA)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
2.2.7. Jet Fires
2.2.7.1. BACKGROUND
Purpose
Jet fires typically result from the combustion of a material as it is being released from a
pressurized process unit. The main concern, similar to pool fires, is in local radiation
effects.
Application
The most common application of jet fire models is the specification of exclusion zones
around flares.
2.2.7.2. DESCRIPTION
Description of Technique
Jet fire modeling is not as well developed as for pool fires, but several reviews have been
published. Jet fire modeling incorporates many mechanisms, similar to those considered for pool fires, as is shown on the logic diagram in Figure 2.82: estimate the discharge rate (Section 2.1.1), the flame height [Eq. (2.2.63)], the point source location, the radiant fraction (Table 2.27), the point source view factor [Eq. (2.2.60)], the transmissivity [Eq. (2.2.42)], the incident radiant flux [Eq. (2.2.61)], and the thermal effects (Section 2.3.2).

FIGURE 2.82. Logic diagram for the calculation of jet fire radiation effects.

Three approaches are reviewed by Bagster (1986): those of API 521 (1996a), Craven (1972), and Hustad and Sonju (1985). The API method is relatively simple, while the other methods are more mechanistic. A more recent review is provided by Mudan and Croce (1988).
The API (1996) method was originally developed for flare analysis, but is now
applied to jet fires arising from accidental releases. Flare models apply to gas releases
from nozzles with vertical flames. For accidental releases, the release hole is typically
not a nozzle, and the resulting flame is not always vertical. For the modeling
approaches presented here, the assumption will be made that the release hole can be
approximated as a nozzle. The assumption of a vertical flame will provide a conservative result, since the vertical flame will provide the largest radiant heat flux at any receptor point.
The API (1996) method is based on the radiant fraction of total combustion
energy, which is assumed to arise from a point source along the jet flame path. A graph
is provided in API 521 (API, 1996a) that correlates flame length versus flame heat. The
radiant fraction is given as 0.15 for hydrogen, 0.2 for methane, and 0.3 for other
hydrocarbons (from laboratory experiments). A further modifying factor of 0.67
should be applied to allow for incomplete combustion.
Mudan and Croce (1988) provide a more detailed and recent review of jet flame
modeling. The method begins with the calculation of the height of the flame. If we
define the break point for the jet as the point at the bottom of the flame, above the
nozzle, where the turbulent flame begins, then the flame height is given for turbulent
gas jets burning in still air by
L/d_j = (5.3/C_T) { (T_F/(α_T T_j)) [C_T + (1 − C_T)(M_a/M_f)] }^(1/2)    (2.2.63)
where

L is the length of the visible turbulent flame measured from the break point (m)
d_j is the diameter of the jet, that is, the physical diameter of the nozzle (m)
C_T is the fuel mole fraction concentration in a stoichiometric fuel-air mixture (unitless)
T_F, T_j are the adiabatic flame temperature and jet fluid temperature, respectively (K)
α_T is the moles of reactant per mole of product for a stoichiometric fuel-air mixture (unitless)
M_a is the molecular weight of the air (mass/mole)
M_f is the molecular weight of the fuel (mass/mole)
For most fuels, C_T is typically much less than 1, α_T is approximately 1, and the ratio T_F/T_j varies between 7 and 9. These assumptions are applied to Eq. (2.2.63) resulting in the following simplified equation,

L/d_j = (15/C_T)(M_a/M_f)^(1/2)    (2.2.64)
Mudan and Croce (1988) also provide expressions for the flame height considering the effects of crosswind.
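Equations (2.2.63) and (2.2.64) are easy to script. The sketch below is an illustrative Python implementation (function names are assumptions) that reproduces the methane values used later in Example 2.31:

```python
import math

def jet_flame_ld(C_T, Tf_over_Tj, alpha_T, Ma, Mf):
    """Flame length to nozzle diameter ratio L/d_j, Eq. (2.2.63)."""
    return (5.3 / C_T) * math.sqrt(
        (Tf_over_Tj / alpha_T) * (C_T + (1.0 - C_T) * Ma / Mf))

def jet_flame_ld_simple(C_T, Ma, Mf):
    """Simplified form, Eq. (2.2.64)."""
    return (15.0 / C_T) * math.sqrt(Ma / Mf)

# Methane in air: C_T = 0.095, Tf/Tj = 2200/298, alpha_T = 1
print(round(jet_flame_ld(0.095, 2200.0 / 298.0, 1.0, 29.0, 16.0), 1))  # about 200
print(round(jet_flame_ld_simple(0.095, 29.0, 16.0), 1))                # about 212
```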
The radiative flux received by a source is determined using a procedure similar to
the point source method described for pool fires in Section (2.2.6.2). For this case, the
radiant flux at the receiver is determined from
E_r = τ_a Q_r F_p = τ_a η m ΔH_c F_p    (2.2.65)

where

E_r is the radiant flux at the receiver (energy/area-time)
τ_a is the atmospheric transmissivity (unitless)
Q_r is the total energy radiated by the source (energy/time)
F_p is the point source view factor, provided by Eq. (2.2.60) (length⁻²)
η is the fraction of total energy converted to radiation (unitless)
m is the mass flow rate of the fuel (mass/time)
ΔH_c is the energy of combustion of the fuel (energy/mass)
For this model, the point source is located at the center of the flame, that is, halfway
along the flame centerline from the break point to the tip of the flame, as determined by
Eqs. (2.2.63) or (2.2.64). It is assumed that the distance from the nozzle to the break
point is negligible with respect to the total flame height. The fraction of the energy converted to radiative energy is estimated using the values provided in Table 2.27.
None of the above methods consider flame impingement. In assessing the potential for domino effects on adjacent hazardous vessels, the dimensions of the jet flame
can be used to determine whether flame impingement is likely. If so, heat transfer
effects will exceed the radiative fraction noted above, and a higher heat fraction could
be transferred to the impinged vessel.
Theoretical Foundations
The models to predict the jet flame height are empirical, but well accepted and documented in the literature. The point source radiation model only applies to a receiver at a
distance from the source. The models only describe jet flames produced by flammable
gases in quiescent air—jet flames produced by flammable liquids or two-phase flows
cannot be treated. The empirically based radiant energy fraction is also a source of error.
Input Requirements
The jet flame models require an estimate of the flame height, which is determined from
an empirical equation based on reaction stoichiometry and molecular weights. The
point source radiant flux model requires an estimate of the total energy generation rate
which is determined from the mass flow rate of combustible material. The fraction of
energy converted to radiant energy is determined empirically based on limited experimental data. The view factors and atmospheric transmissivity are determined using
published correlations.
Simplified Approaches
Considine and Grint (1984) give a simplified power law correlation for LPG jet fire
hazard zones. The dimensions of the torch flame, which is assumed to be conical, are
given by
L = 9.1 m^0.5    (2.2.66)

W = 0.25 L    (2.2.67)

r_s,50 = 1.9 t^0.4 m^0.47    (2.2.68)

where

L is the length of torch flame (m)
W is the jet flame conical half-width at flame tip (m)
m is the LPG release rate, subject to 1 < m < 3000 kg/s (kg/s)
r_s,50 is the side-on hazard range to 50% lethality, subject to r > W (m)
t is the exposure time, subject to 10 < t < 300 s (s)
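A minimal sketch of Eqs. (2.2.66)-(2.2.68), with an illustrative 10 kg/s release and 30 s exposure (values chosen only for demonstration and within the stated validity ranges):

```python
# Considine and Grint (1984) LPG torch flame correlations,
# Eqs. (2.2.66)-(2.2.68).
def lpg_torch(m, t):
    """Return (L, W, r_s50) in metres for an LPG release rate m kg/s
    (1 <= m <= 3000) and exposure time t s (10 <= t <= 300)."""
    L = 9.1 * m**0.5                 # flame length
    W = 0.25 * L                     # conical half-width at flame tip
    r_s50 = 1.9 * t**0.4 * m**0.47   # side-on range to 50% lethality
    return L, W, r_s50

L, W, r = lpg_torch(10.0, 30.0)
print(round(L, 1), round(W, 1), round(r, 1))
```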
2.2.7.3. EXAMPLE PROBLEM
Example 2.31: Radiant Flux from a Jet Fire. A 25-mm hole occurs in a large pipeline resulting in a leak of pure methane gas and a flame. The methane is at a pressure of
100 bar gauge. The leak occurs 2-m off the ground. Determine the radiant heat flux at a
point on the ground 15 m from the resulting flame. The ambient temperature is 298 K
and the humidity is 50% RH.
Additional Data:
Heat capacity ratio, k, for methane:    1.32
Heat of combustion for methane:         50,000 kJ/kg
Flame temperature for methane:          2200 K
Solution: Assume a vertical flame for a conservative result and that the release hole
is represented by a nozzle. The height of the flame is calculated first to determine the
location of the point source radiator. This is computed using Eq. (2.2.63)
L/d_j = (5.3/C_T) { (T_F/(α_T T_j)) [C_T + (1 − C_T)(M_a/M_f)] }^(1/2)

The combustion reaction in air is

CH4 + 2O2 + 7.52N2 → CO2 + 2H2O + 7.52N2

Thus, C_T = 1/(1 + 2 + 7.52) = 0.095, T_F/T_j = 2200/298 = 7.4, and α_T = 1.0. The molecular weight of air is 29 and for methane 16. Substituting into Eq. (2.2.63),

L/d_j = (5.3/0.095) { (7.4) [0.095 + (1 − 0.095)(29/16)] }^(1/2) = 200

Note that Eq. (2.2.64) yields a value of 212, which is close to the value of 200 produced using the more detailed approach. Since the diameter of the issuing jet is 25 mm, the flame length is (200)(25 mm) = 5.00 m.
Figure 2.83 shows the geometry of the jet flame. Since the flame base is 2 m off the
ground, the point source of radiation is located at 2 m + (5.00 m)/2 = 4.50 m above
the ground.
The discharge rate of the methane is determined using Eq. (2.1.17) for choked flow of gas through a hole:

Q_m = C_d A P { (k g_c M)/(R_g T) [2/(k + 1)]^((k+1)/(k−1)) }^(1/2)

Substituting into Eq. (2.1.17),

Q_m = (1.0)(4.91 × 10⁻⁴ m²)(100 × 10⁵ N/m²)
      × { (1.32)(1 kg·m/N·s²)(16 kg/kg-mole)(0.341) / [(0.082057 m³·atm/kg-mole·K)(298 K)(101,325 N/m²·atm)] }^(1/2)
    = 8.37 kg/s

FIGURE 2.83. Geometry for Example 2.31: Radiant flux from a jet fire.
From Figure 2.83, the radiation path length is the length of the hypotenuse. Thus,
x² = (4.50 m)² + (15 m)² = 245 m²
x = 15.7 m
The point source view factor is given by Eq. (2.2.60)
F_p = 1/(4πx²) = 1/[(4)(3.14)(15.7 m)²] = 3.25 × 10⁻⁴ m⁻²
The transmissivity of the air at 50% RH is determined using Eqs. (2.2.42) and
(2.2.43). The result is ra = 0.812. The fraction of the total energy that is converted to
radiation is found in Table 2.27. For methane this is r\ = 0.2. The radiation at the
receiver is determined using Eq. (2.2.65)
E_r = τ_a η m ΔH_c F_p
    = (0.812)(0.2)(8.37 kg/s)(50,000 kJ/kg)(3.25 × 10⁻⁴ m⁻²)
    = 22.1 kJ/m²·s = 22.1 kW/m²
A spreadsheet implementation of this problem is shown in Figure 2.84.
This example is a bit unrealistic in that the flame will most likely blow out due to
the high exit velocity of the jet. As the flow velocity of the jet is increased, the flame
moves downstream to a new location where the turbulent burning velocity equals the
flame velocity. As the velocity is increased, a point is eventually reached where the
burning location is so far downstream that the fuel concentration is below the lower
flammability limit due to air entrainment. Mudan and Croce (1988) provide flame
blowout criteria.
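The example can also be replicated end-to-end in a few lines. The sketch below is illustrative only; it assumes the input data above, a gas constant of 8314 J/kg-mole·K, the 100 × 10⁵ N/m² pressure used in the text, and the L/d = 200 result already obtained:

```python
import math

# Assumed inputs from Example 2.31 (methane jet fire)
d = 0.025        # hole diameter, m
P = 100e5        # pressure used in the text, N/m**2
T = 298.0        # gas temperature, K
k = 1.32         # heat capacity ratio
M = 16.0         # molecular weight, kg/kg-mole
dHc = 50000.0    # heat of combustion, kJ/kg
Rg = 8314.0      # gas constant, J/kg-mole-K

# Choked discharge rate through the hole, Eq. (2.1.17), with Cd = 1
A = math.pi * d**2 / 4.0
m_dot = A * P * math.sqrt(
    k * M / (Rg * T) * (2.0 / (k + 1.0))**((k + 1.0) / (k - 1.0)))

# Flame geometry: L/d = 200 for methane (from Eq. (2.2.63) above)
L = 200.0 * d
h = 2.0 + L / 2.0                  # point source height above ground, m
x = math.sqrt(h**2 + 15.0**2)      # beam length to receptor, m

Pw = 0.5 * math.exp(14.4114 - 5328.0 / T) * 101325.0   # Eq. (2.2.43), Pa
tau = 2.02 * (Pw * x)**-0.09                           # Eq. (2.2.42)
Fp = 1.0 / (4.0 * math.pi * x**2)                      # Eq. (2.2.60)
Er = tau * 0.2 * m_dot * dHc * Fp                      # Eq. (2.2.65)
print(round(m_dot, 2), round(Er, 1))   # about 8.37 kg/s and 22.1 kW/m**2
```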
2.2.7.4. DISCUSSION
Strengths and Weaknesses
Jet flames are less well treated theoretically than pool fires, but simple correlations such
as the API or Mudan and Croce (1988) methods allow for adequate hazard estimation.
Flame impingement effects are not treated—they give substantially higher heat fluxes
than predicted by thermal radiation models. Liquid and two-phase jets cannot be modeled using this approach. The jet flame models presented here assume vertical flames
for a conservative result.
Example 2.31: Radiant Flux from a Jet Fire

Input Data:
Distance from flame:                      15 m
Hole diameter:                            25 mm
Leak height above ground:                 2 m
Gas pressure:                             100 bar gauge
Ambient temperature:                      298 K
Relative humidity:                        50 %
Heat capacity ratio for gas:              1.32
Heat of combustion for gas:               50,000 kJ/kg
Molecular weight of gas:                  16
Flame temperature:                        2200 K
Discharge coefficient for hole:           1
Ambient pressure:                         101,325 Pa
Fuel mole fraction at stoichiometric:     0.095
Moles of reactant per mole of product:    1
Molecular weight of air:                  29
Fraction of total energy converted:       0.2

Calculated Results:
Area of hole:                             0.000491 m²
Gas discharge rate:                       8.368 kg/s
L/d ratio for flame:                      199.7
Flame height:                             4.99 m
Location of flame center above ground:    4.50 m
Radiation path length:                    15.66 m
Point source view factor:                 0.000325 m⁻²
Water vapor partial pressure:             1580 Pa
Atmospheric transmissivity:               0.813
Flux at receptor location:                22.07 kW/m²

FIGURE 2.84. Spreadsheet for Example 2.31: Radiant flux from a jet fire.
Identification and Treatment of Possible Errors
Jet fire models based on point source radiation approximations will give poor thermal
flux estimates close to the jet, and more mechanistic models should be used. The radiant energy fraction is also a source of error. The models presented here do not apply if
wind is present, see Mudan and Croce (1988).
Resources Necessary
A trained process engineer would require several hours to complete a jet fire scenario
by hand if all necessary thermodynamic data, view factor formulas, and humidity data
are available.
Available Computer Codes
EFFECTS (TNO, Apeldoorn, The Netherlands)
PHAST (DNV, Houston, TX)
QRAWorks (Primatech, Columbus, OH)
SUPERCHEMS (Arthur D. Little, Cambridge, MA)
TRACE (Safer Systems, Westlake Village, CA)
2.3.
Effect Models
The physical models described in Section 2.1 generate a variety of incident outcomes
that are caused by release of hazardous material or energy. Dispersion models (Section
2.1.3) estimate concentrations and/or doses of dispersed vapor; vapor cloud explosions
(VCE) (Section 2.2.1), physical explosion models (Section 2.2.3), fireball models
(Section 2.2.4), and confined explosion models (Section 2.2.5) estimate shock wave
overpressures and fragment velocities. Pool fire models (Section 2.2.6), jet fire models
(Section 2.2.7), BLEVE models (Section 2.2.4) and flash fire models (Section 2.2.2)
predict radiant flux. These models rely on the general principle that severity of outcome
is a function of distance from the source of release.
The next step in CPQRA is to assess the consequences of these incident outcomes.
The consequence is dependent on the object of the study. For the purpose of assessing
effects on human beings, consequences may be expressed as deaths or injuries. If physical property, such as structures and buildings, is the object, the consequences may be
monetary losses. Environmental effects may be much more complex, and could include
impacts on plant or animal life, soil contamination, damage to natural resources, and
other impacts. Modeling of environmental impacts is beyond the scope of this book.
Many CPQRA studies consider several types of incident outcomes simultaneously
(e.g., property damage and exposures to flammable and/or toxic substances). To estimate risk, a common unit of consequence measure must be used for each type of effect
(e.g., death, injury, or monetary loss). As discussed in Chapter 4, the difficulty in comparing different injury types has led to the use of fatalities as the dominant criterion for
thermal radiation, blast overpressure, and toxicity exposures.
One method of assessing the consequence of an incident outcome is the direct
effect model, which predicts effects on people or structures based on predetermined
criteria (e.g., death is assumed to result if an individual is exposed to a certain concentration of toxic gas). In reality, the consequences may not take the form of discrete
functions (i.e., a fixed input yields a singular output) but may instead conform to probability distribution functions. A statistical method of assessing a consequence is the
dose-response method. This is coupled with a probit equation to linearize the
response. The probit (probability unit) method described by Finney (1971) reflects a
generalized time-dependent relationship for any variable that has a probabilistic outcome that can be defined by a normal distribution. For example, Eisenberg et al.
(1975) use this method to assess toxic effects by establishing a statistical correlation
between a "damage load" (i.e., a toxic dose that represents a concentration per unit
time) and the percentage of people affected to a specific degree. The probit method can
also be applied to thermal and explosion effects.
Numerous reference texts are available on toxicology, including Casarett and Doull (1980) and Williams and Burson (1985). These provide more detail on toxicology for risk analysts.
Dose-Response Functions. Toxicologists define toxicity as "the ability of a substance
to produce an unwanted effect when the chemical has reached a sufficient concentration at a certain site in the body" (NSC, 1971).
Most toxicological considerations are based on the dose-response function. A
fixed dose is administered to a group of test organisms and, depending on the outcome,
the dose is either increased until a noticeable effect is obtained, or decreased until no
effect is obtained.
There are several ways to represent dose. One way is in terms of the quantity
administered to the test organism per unit of body weight. Another method expresses
dose in terms of quantity per skin surface area. With respect to inhaled vapors, the dose
can be represented as a specified vapor concentration administered over a period of
time.
It is difficult to evaluate precisely the human response caused by an acute, hazardous exposure for a variety of reasons. First, humans experience a wide range of acute
adverse health effects, including irritation, narcosis, asphyxiation, sensitization, blindness, organ system damage, and death. In addition, the severity of many of these effects
varies with intensity and duration of exposure. For example, exposure to a substance at
an intensity that is sufficient to cause only mild throat irritation is of less concern than
one that causes severe eye irritation, lacrimation, or dizziness, since the latter effects are
likely to impede escape from the area of contamination.
Second, there is a high degree of variation in response among individuals in a typical population. Withers and Lees (1985) discuss how factors such as age, health, and
degree of exertion affect toxic responses (in this case, to chlorine). Generally, sensitive
populations include the elderly, children, and persons with diseases that compromise
the respiratory or cardiovascular system.
As a result of the variability in response of living organisms, a range of responses is
expected for a fixed exposure. Suppose an organism is exposed to a toxic material at a
fixed dose and the responses are recorded and classified into a number of response categories. Some of the organisms will show a high level of response while some will show
a low level. A typical plot of the results is shown in Figure 2.85. The results are frequently modeled as a Gaussian or "bell-shaped" curve. The shape of the curve is defined
entirely by the mean response, μ, and a standard deviation, σ. The area under the curve
represents the percentage of organisms affected for a specified response interval. In particular, the response interval within one standard deviation of the mean represents 68%
of the individual organisms. Two standard deviations represents 95.5% of the total
individuals. The entire area under the curve has an area of 1, representing 100% of the
individuals.
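The 68% and 95.5% figures quoted above follow directly from the error function; a one-line check (an illustrative helper, not from the text):

```python
import math

# Fraction of a normal population lying within n standard deviations
# of the mean: 2*Phi(n) - 1 = erf(n / sqrt(2)).
def within(n_sigma):
    return math.erf(n_sigma / math.sqrt(2.0))

print(round(within(1.0), 3))   # 0.683, the 68% quoted in the text
print(round(within(2.0), 3))   # 0.954, the 95.5% quoted in the text
```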
FIGURE 2.85. Typical Gaussian or bell-shaped curve: percent or fraction of individuals affected at low, average, and high response levels.
The experiment is repeated for a number of different doses and Gaussian curves are
drawn for each dose. The mean response and standard deviation are determined at each dose.
A complete dose-response curve is produced by plotting the cumulative mean
response at each dose. This result is shown in Figure 2.86. For convenience, the
response is plotted versus the logarithm of the dose, as shown in Figure 2.87. This
form typically provides a much straighter line in the middle of the dose range. The logarithm form arises from the fact that in most organisms there are some subjects who
can tolerate rather high levels of the causative variable, and conversely, a number of
subjects who are sensitive to the causative variable.
Probit Functions. For most engineering computations, particularly those involving
spreadsheets, the sigmoidal-shaped dose-response curve of Figure 2.87 does not provide much utility; an analytical equation is preferred. In particular, a straight line would
be ideal, since it is amenable to standard curve fit procedures.
For single exposures, the probit (probability unit) method provides a transformation method to convert the dose-response curve into a straight line. The probit variable
Y is related to the probability P by (Finney, 1971):
P = (1/√(2π)) ∫ (from −∞ to Y−5) exp(−u²/2) du    (2.3.1)

where P is the probability or percentage, Y is the probit variable, and u is an integration variable. The probit variable is normally distributed and has a mean value of 5 and a standard deviation of 1.

FIGURE 2.86. Typical dose-response curve (response percentage versus dose).

FIGURE 2.87. Typical response versus log(dose) curve.
For spreadsheet computations, a more useful expression for performing the conversion from probits to percentage is given by,
P = 50 [ 1 + ((Y − 5)/|Y − 5|) erf(|Y − 5|/√2) ]    (2.3.2)

where erf is the error function.
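Equation (2.3.2) is straightforward to implement in any language that provides an error function; a sketch (the function name is illustrative):

```python
import math

def probit_to_percent(Y):
    """Percentage affected for probit variable Y, Eq. (2.3.2)."""
    if Y == 5.0:
        return 50.0
    sign = (Y - 5.0) / abs(Y - 5.0)
    return 50.0 * (1.0 + sign * math.erf(abs(Y - 5.0) / math.sqrt(2.0)))

print(round(probit_to_percent(5.00), 1))   # 50.0
print(round(probit_to_percent(6.28), 1))   # about 90
print(round(probit_to_percent(2.67), 1))   # about 1
```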
Table 2.28 and Figure 2.88 also show the conversion from probits to percentages.
TABLE 2.28. Conversion from Probits to Percentages

FIGURE 2.88. The relationship between percentage and probit.
Probit equations for the probit variable, Y, are based on a causative variable, V (representing the dose), and at least two constants. These equations are of the form,

Y = k_1 + k_2 ln V    (2.3.3)

where k_1 and k_2 are constants. Probit equations of this type are derived as lines of best fit to experimental data (percentage fatalities versus concentration and duration) using log-probability plots or standard statistical packages.
Probit equations are available for a variety of exposures, including exposures to
toxic materials, heat, pressure, radiation, impact, and sound, to name a few. For toxic
exposures, the causative variable is based on the concentration; for explosions, the
causative variable is based on the explosive overpressure or impulse, depending on the
type of injury or damage. For fire exposure, the causative variable is based on the duration and intensity of the radiative exposure. Probit equations can also be applied to estimate structural damage, glass breakage, and other types of damage.
EXAMPLE PROBLEM
Example 2.32: Dose-Response Correlation via Probits. Eisenberg et al. (1975)
report the following data on the effect of explosion peak overpressures on eardrum rupture in humans:
Percentage affected    Peak overpressure (N/m2)    Equivalent overpressure (psi)
 1                     16,500                       2.4
10                     19,300                       2.8
50                     43,500                       6.3
90                     84,300                      12.2
Determine the probit correlation for this exposure.
Solution: The percentages are converted to a probit variable using Table 2.28.
The results are
Percentage    Probit
 1            2.67
10            3.72
50            5.00
90            6.28
Figure 2.89 is a plot of the percentage affected versus the natural log of the peak
overpressure. This demonstrates the classical sigmoid shape of the response versus log
dose curve. Figure 2.90 includes a plot of the probit variable (with a linear probit scale)
versus the log of the peak overpressure. The straight line confirms the form of Eq.
(2.3.3) and the resulting fit is Y = -16.7 + 2.03 ln(P0), where P0 is the peak
overpressure in Pa (N/m2).
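The regression behind this fit can be reproduced with a few lines of code; the following sketch fits the probit line by ordinary least squares using only the Python standard library:

```python
from math import log

# Eardrum-rupture data from Example 2.32 (Eisenberg et al., 1975):
# (peak overpressure in N/m**2, probit from Table 2.28)
data = [(16500, 2.67), (19300, 3.72), (43500, 5.00), (84300, 6.28)]

x = [log(p) for p, _ in data]   # causative variable: ln(peak overpressure)
y = [pr for _, pr in data]      # probit variable

n = len(data)
xbar, ybar = sum(x) / n, sum(y) / n
k2 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))   # slope
k1 = ybar - k2 * xbar                        # intercept
print(f"Y = {k1:.2f} + {k2:.2f} ln(P0)")     # approximately Y = -16.7 + 2.03 ln(P0)
```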
FIGURE 2.89. Plot of percentage affected versus the log of the peak overpressure for Example
2.32: Dose-response correlation via probits.
Example 2.32: Dose-Response Correlation via Probits

Input data:

Percentage   Overpressure   Overpressure            ln              Calculated   Calculated
affected     (N/m**2)       (psi)          Probit   (overpressure)  probit       percentage
 1           16,500          2.39          2.67      9.71           3.02          2.39
10           19,300          2.80          3.72      9.87           3.34          4.84
50           43,500          6.31          5.00     10.68           4.99         49.44
90           84,300         12.23          6.28     11.34           6.33         90.77

Calculated results (regression output from spreadsheet):

Constant               -16.66
Std Err of Y Est         0.37
R Squared                0.96
No. of Observations      4
Degrees of Freedom       2
X Coefficient(s)         2.03
Std Err of Coef.         0.28

FIGURE 2.90. Spreadsheet output for Example 2.32: Dose-response correlation via probits.
The output from the spreadsheet solution to this problem is shown in Figure 2.90.
The probit equation is fit using a least-squares line fitting technique supported by the
spreadsheet.
2.3.1. Toxic Gas Effects
2.3.1.1. BACKGROUND
Purpose
Toxic effect models are employed to assess the consequences to human health as a result
of exposure to a known concentration of toxic gas for a known period of time. Mitigation
of these consequences by sheltering or evasive action is discussed in Section 2.4.
This section does not address the release and formation of nontoxic, flammable
vapor clouds that do not ignite but pose a potential for asphyxiation. Nontoxic substances can cause asphyxiation due to displacement of available oxygen. Asphyxiant
concentrations are typically assumed to be in the range of 50,000-100,000 ppm (5 to
10 volume percent).
For CPQRA, the toxic effects are due to short-term exposures, primarily due to
vapors. Chronic exposures are not considered here.
Philosophy
For toxic gas clouds, concentration-time information is estimated using dispersion
models (Section 2.1.3). As shown by Figure 2.89, probit models are used to develop
exposure estimates for situations involving continuous emissions (approximately constant concentration over time at a fixed downwind location) or puff emissions (concentration varying with time at a downwind location). It is much more difficult to apply
other criteria that are based on a standard exposure duration (e.g., 30 or 60 min) particularly for puff releases that involve short exposure times and varying concentrations
over those exposure times. The object of the toxic effects model is to determine
whether an adverse health outcome can be expected following a release and, if data
permit, to estimate the extent of injury or fatalities that are likely to result.
For the overwhelming majority of substances encountered in industry, there are
not enough data on toxic responses of humans to directly determine a substance's
hazard potential. Frequently, the only data available are from controlled experiments
conducted with laboratory animals. In such cases, it is necessary to extrapolate from
effects observed in animals to effects likely to occur in humans. This extrapolation
introduces uncertainty and normally requires the professional judgment of a toxicologist or an industrial hygienist with experience in health risk assessment.
Also, many releases involve several chemical components or multiple effects. At
this time the cumulative effects of simultaneous exposure to more than one material is
not well understood. Are the effects additive, synergistic, or antagonistic in their effect
on population? As more information is developed on the characterization of multiple
chemical component releases from source and dispersion experimentation and modeling, corresponding information is needed in the toxicology arena. Unfortunately, even
toxic response data of humans to single component exposures are inadequate for a large
number of chemical species.
Finally, no standardized toxicology testing protocols exist for studying episodic releases on animals. This has in general been a neglected aspect of toxicology research. There are experimental problems associated with the testing of toxic
chemicals at high concentrations for very short durations in establishing the concentration/time profile. In testing involving fatal concentration/time exposures, the question
exists of how to incorporate early and delayed fatalities into the study results.
Many useful measures are available to use as benchmarks for predicting the likelihood that a release event will result in injury or death. AIChE (AIChE/CCPS, 1988a)
reviews various toxic effects and discusses the use of various established toxicologic criteria. These criteria and methods include
• Emergency Response Planning Guidelines for Air Contaminants (ERPGs)
issued by the American Industrial Hygiene Association (AIHA).
• Immediately Dangerous to Life or Health (IDLH) levels established by the
National Institute for Occupational Safety and Health (NIOSH).
• Emergency Exposure Guidance Levels (EEGLs) and Short-Term Public Emergency Guidance Levels (SPEGLs) issued by the National Academy of Sciences/National Research Council.
• Threshold Limit Values (TLVs) established by the American Conference of
Governmental Industrial Hygienists (ACGIH) including Short-Term Exposure
Limits (STELs) and ceiling concentrations (TLV-Cs).
• Permissible Exposure Limits (PELs) promulgated by the Occupational Safety
and Health Administration (OSHA).
• Various state guidelines, for example the Toxicity Dispersion (TXDs) method
used by the New Jersey Department of Environmental Protection (NJ-DEP).
• Toxic endpoints promulgated by the U.S. Environmental Protection Agency.
• Probit Functions.
• Department of Energy (DOE) Temporary Emergency Exposure Limits (TEELs)
The criteria (ERPGs, IDLHs, etc.) and methods listed above are based on a combination of results from animal experiments, observations of long- and short-term
human exposures, and expert judgment. The following paragraphs define these criteria
and describe some of their features.
ERPGs. Emergency Response Planning Guidelines (ERPGs) are prepared by an
industry task force and are published by the American Industrial Hygiene Association
(AIHA). Three concentration ranges are provided as a consequence of exposure to a
specific substance:
• The ERPG-I is the maximum airborne concentration below which it is believed
that nearly all individuals could be exposed for up to 1 hr without experiencing
any symptoms other than mild transient adverse health effects or without perceiving a clearly defined objectionable odor.
• The ERPG-2 is the maximum airborne concentration below which it is believed
that nearly all individuals could be exposed for up to 1 hr without experiencing
or developing irreversible or other serious health effects or symptoms that could
impair their abilities to take protective action.
• The ERPG-3 is the maximum airborne concentration below which it is believed
nearly all individuals could be exposed for up to 1 hr without experiencing or
developing life-threatening health effects (similar to EEGLs).
ERPG data (AIHA, 1996) are shown in Table 2.29. As of 1996, 47 ERPGs had been
developed, and they are being reviewed, updated, and expanded by an AIHA peer
review task force. Because of the comprehensive effort to develop acute toxicity values,
ERPGs are becoming an acceptable industry/government norm.
IDLHs. The National Institute for Occupational Safety and Health (NIOSH)
publishes Immediately Dangerous to Life and Health (IDLH) concentrations to be
used as acute toxicity measures for common industrial gases. An IDLH exposure condition is defined as a condition "that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent
adverse health effects or prevent escape from such an environment" (NIOSH, 1994).
IDLH values also take into consideration acute toxic reactions, such as severe eye irritation, that could prevent escape. The IDLH is considered a maximum concentration
above which only a highly reliable breathing apparatus providing maximum worker
protection is permitted. If IDLH values are exceeded, all unprotected workers must
leave the area immediately.
IDLH data are currently available for 380 materials (NIOSH, 1994). Because
IDLH values were developed to protect healthy worker populations, they must be
adjusted for sensitive populations, such as older, disabled, or ill populations. For flammable vapors, the IDLH is defined as 1/10 of the lower flammability limit (LFL)
concentration.
EEGLs and SPEGLs. Since the 1940s, the National Research Council's Committee on Toxicology has submitted Emergency Exposure Guidance Levels (EEGLs) for
44 chemicals of special concern to the Department of Defense. An EEGL is defined as a
concentration of a gas, vapor, or aerosol that is judged to be acceptable and that will
allow healthy military personnel to perform specific tasks during emergency conditions
lasting from 1 to 24 hr. Exposure to concentrations at the EEGL may produce transient irritation or central nervous system effects but should not produce effects that are
lasting or that would impair performance of a task. In addition to EEGLs, the National
Research Council has developed Short-Term Public Emergency Guidance Levels
(SPEGLs), defined as acceptable concentrations for exposures of members of the general public. SPEGLs are generally set at 10-50% of the EEGL and are calculated to take
account of the effects of exposure on sensitive, heterogeneous populations. The advantages of using EEGLs and SPEGLs rather than IDLH values are (1) a SPEGL considers effects on sensitive populations, (2) EEGLs and SPEGLs are developed for several
different exposure durations, and (3) the methods by which EEGLs and SPEGLs were
developed are well documented in National Research Council publications. EEGL and
SPEGL values are shown in Table 2.30.
TLV-STEL. Certain American Conference of Governmental Industrial Hygienists (ACGIH) criteria may be appropriate for use as benchmarks (ACGIH, 1996). In
particular, the ACGIH's threshold limit values-short-term exposure limits
(TLV-STELs) and threshold limit value-ceiling limits (TLV-C) are designed to pro-
TABLE 2.29. Emergency Response Planning Guidelines, ERPGs (AIHA, 1996). All
values are in ppm unless otherwise noted. Values are updated regularly.

Chemical                          ERPG-1       ERPG-2       ERPG-3
Acetaldehyde                      10           200          1000
Acrolein                          0.1          0.5          3
Acrylic Acid                      2            50           750
Acrylonitrile                     NA           35           75
Allyl Chloride                    3            40           300
Ammonia                           25           200          1000
Benzene                           50           150          1000
Benzyl Chloride                   1            10           25
Bromine                           0.2          1            5
1,3-Butadiene                     10           50           5000
n-Butyl Acrylate                  0.05         25           250
n-Butyl Isocyanate                0.01         0.05         1
Carbon Disulfide                  1            50           500
Carbon Tetrachloride              20           100          750
Chlorine                          1            3            20
Chlorine Trifluoride              0.1          1            10
Chloroacetyl Chloride             0.1          1            10
Chloropicrin                      NA           0.2          3
Chlorosulfonic Acid               2 mg/m3      10 mg/m3     30 mg/m3
Chlorotrifluoroethylene           20           100          300
Crotonaldehyde                    2            10           50
Diborane                          NA           1            3
Diketene                          1            5            50
Dimethylamine                     1            100          500
Dimethylchlorosilane              0.8          5            25
Dimethyl Disulfide                0.01         50           250
Epichlorohydrin                   2            20           100
Ethylene Oxide                    NA           50           500
Formaldehyde                      1            10           25
Hexachlorobutadiene               3            10           30
Hexafluoroacetone                 NA           1            50
Hexafluoropropylene               10           50           500
Hydrogen Chloride                 3            20           100
Hydrogen Cyanide                  NA           10           25
Hydrogen Fluoride                 5            20           50
Hydrogen Sulfide                  0.1          30           100
Isobutyronitrile                  10           50           200
2-Isocyanatoethyl Methacrylate    NA           0.1          1
Lithium Hydride                   25 µg/m3     100 µg/m3    500 µg/m3
Methanol                          200          1000         5000
Methyl Chloride                   NA           400          1000
Methylene Chloride                200          750          4000
Methyl Iodide                     25           50           125
Methyl Isocyanate                 0.025        0.5          5
Methyl Mercaptan                  0.005        25           100
Methyltrichlorosilane             0.5          3            15
Monomethylamine                   10           100          500
Perfluoroisobutylene              NA           0.1          0.3
Phenol                            10           50           200
Phosgene                          NA           0.2          1
Phosphorus Pentoxide              5 mg/m3      25 mg/m3     100 mg/m3
Propylene Oxide                   50           250          750
Styrene                           50           250          1000
Sulfonic Acid (Oleum, Sulfur
  Trioxide, and Sulfuric Acid)    2 mg/m3      10 mg/m3     30 mg/m3
Sulfur Dioxide                    0.3          3            15
Tetrafluoroethylene               200          1000         10,000
Titanium Tetrachloride            5 mg/m3      20 mg/m3     100 mg/m3
Toluene                           50           300          1000
Trimethylamine                    0.1          100          500
Uranium Hexafluoride              5 mg/m3      15 mg/m3     30 mg/m3
Vinyl Acetate                     5            75           500
tect workers from acute effects resulting from exposure to chemicals; such effects
include, among others, irritation and narcosis. TLV-STELs are the maximum concentration to which workers can be exposed for a period of up to 15 minutes without suffering (1) intolerable irritation, (2) chronic or irreversible tissue change, or (3) narcosis of
sufficient degree to increase accident proneness, impair self-rescue, or materially reduce
worker efficiency, provided that no more than four excursions per day are permitted,
with at least 60 minutes between exposure periods, and provided that the daily
TLV-TWA is not exceeded. The ceiling limits (TLV-Cs) represent a concentration
which should not be exceeded, even instantaneously.
Use of STEL or ceiling measures may be overly conservative if the CPQRA is
based on the potential for fatalities; however, they can be considered if the study is
based on injuries.
PEL. The Permissible Exposure Limits (PELs) are promulgated by the Occupational Safety and Health Administration (OSHA) and have force of law. These levels
are similar to the ACGIH criteria for TLV-TWAs since they are also based on an 8-hr
time-weighted average exposure. OSHA-cited "acceptable ceiling concentrations,"
"excursion limits," or "action levels" may be appropriate for use as benchmarks.
TABLE 2.30. Emergency Exposure Guidance Levels (EEGLs) from the National
Research Council (NRC). All values are in ppm unless otherwise noted.

Compound                     1-Hr EEGL     24-Hr EEGL    Source
Acetone                      8,500         1,000         NRC I
Acrolein                     0.05          0.01          NRC I
Aluminum oxide               15 mg/m3                    NRC IV
Ammonia                      100           100           NRC VII
Arsine                       1             0.1           NRC I
Benzene                      50            2             NRC VI
Bromotrifluoromethane        25,000                      NRC III
Carbon disulfide             50                          NRC I
Carbon monoxide              400           50            NRC IV
Chlorine                     3             0.5           NRC II
Chlorine trifluoride         1                           NRC II
Chloroform                   100           30            NRC I
Dichlorodifluoromethane      10,000        1,000         NRC II
Dichlorofluoromethane        100           3             NRC II
Dichlorotetrafluoroethane    10,000        1,000         NRC II
1,1-Dimethylhydrazine        0.24a         0.01a         NRC V
Ethanolamine                 50            3             NRC II
Ethylene glycol              40            20            NRC IV
Ethylene oxide               20            1             NRC VI
Fluorine                     7.5                         NRC I
Hydrazine                    0.12a         0.005a        NRC V
Hydrogen chloride            20/1a         20/1a         NRC VII
Hydrogen sulfide             10                          NRC IV
Isopropyl alcohol            400           200           NRC II
Lithium bromide              15 mg/m3      7 mg/m3       NRC VII
Lithium chromate             100 µg/m3     50 µg/m3      NRC VIII
Mercury (vapor)                            0.2 mg/m3     NRC I
Methane                      5000                        NRC I
Methanol                     200           10            NRC IV
Methylhydrazine              0.24a         0.01a         NRC V
Nitrogen dioxide             1a            0.04a         NRC IV
Nitrous oxide                10,000                      NRC IV
Ozone                        1             0.1           NRC I
Phosgene                     0.2           0.02          NRC II
Sodium hydroxide             2 mg/m3                     NRC II
Sulfur dioxide               10            5             NRC II
Sulfuric acid                1 mg/m3                     NRC II
Toluene                      200           100           NRC VII
Trichloroethylene            200           10            NRC VIII
Trichlorofluoromethane       1500          500           NRC II
Trichlorotrifluoroethane     1500          500           NRC II
Vinylidene chloride          10                          NRC II
Xylene                       200           100           NRC I

a SPEGL value.
TXDS Acute Toxic Concentration. Some states have their own exposure guidelines. For example, the New Jersey Department of Environmental Protection
(NJ-DEP) uses the Toxic Dispersion (TXDS) method of consequence analysis for the
estimation of potentially catastrophic quantities of toxic substances as required by the
New Jersey Toxic Catastrophe Prevention Act (TCPA) (Baldini and Komosinsky,
1988). An acute toxic concentration (ATC) is defined as the concentration of a gas or
vapor of a toxic substance that will result in acute health effects in the affected population and one fatality out of 20 or less (5% or more) during a 1 hr exposure. ATC values
as proposed by the NJ-DEP are estimated for 103 "extraordinarily hazardous substances," and are based on the lowest value of one of the following:
• the lowest reported lethal concentration (LCLO) value for animal test data
• the median lethal concentration (LC50) value from animal test data multiplied
by 0.1
• the IDLH value.
Refer to Baldini and Komosinsky (1988) for a listing of the ATC values for the
103 "extraordinarily hazardous substances," or contact the NJ-DEP.
Toxic Endpoints. The EPA (1996) has promulgated a set of toxic endpoints to be
used for air dispersion modeling of toxic gas releases as part of the EPA Risk Management Plan (RMP). The toxic endpoint is, in order of preference: (1) the ERPG-2, or
(2) the Level of Concern (LOC) promulgated by the Emergency Planning and Community Right-to-Know Act. The Level of Concern (LOC) is considered "to be the
maximum concentration of an extremely hazardous substance in air that will not cause
serious irreversible health effects in the general population when exposed to the substance for relatively short duration" (EPA, 1986). Toxic endpoints are provided for 77
chemicals under the RMP rule (EPA, 1996).
In general, the most directly relevant toxicologic criteria currently available, particularly for developing emergency response plans, are ERPGs, SPEGLs, and EEGLs.
These were developed specifically to apply to general populations, and to account for
sensitive populations and scientific uncertainty in toxicologic data. For incidents
involving substances for which no SPEGLs or EEGLs are available, IDLHs provide an
alternative criterion. However, because IDLHs were not developed to account for sensitive populations and because they were based on a maximum 30-min exposure period,
U.S. EPA suggests that the identification of an effect zone should be based on exposure
levels of one-tenth the IDLH (EPA, 1987). For example, the IDLH for chlorine dioxide is 5 ppm. Effect zones resulting from the release of this gas would be defined as any
zone in which the concentration of chlorine dioxide is estimated to exceed 0.5 ppm. Of
course, the approach is very conservative and gives unrealistic results; a more realistic
approach is to use a constant-dose assumption for releases less than 30 min using the
IDLH.
The use of TLV-STELs and ceiling limits may be most appropriate if the objective
of a CPQRA is to identify effect zones in which the primary concerns include more
transient effects, such as sensory irritation or odor perception. Generally, persons
located outside the zone that is based on these limits can be assumed to be unaffected
by the release.
For substances that do not have IDLHs, Levels of Concern (LOCs) are estimated
from median lethal concentration (LC50) or median lethal dose (LD50) levels reported
for mammalian species (EPA, 1987). LC50s and LD50s are concentration or dose levels,
respectively, that kill 50% of exposed laboratory animals in controlled experiments.
These can also be estimated from lowest reported lethal concentration or lethal dose
levels (LCLO and LDLO, respectively). Inhalation data (LC50 or LCLO) are preferred over
other data (LD50 or LDLO). Using these data, the level of concern is estimated as follows (EPA, 1986):
• LC50 × 0.1
• LCLO
• LD50 × 0.01
• LDLO × 0.1
Because the "level of concern" derived from an LD50 or LDLO represents a "specific" dose in units of mg/kg body weight, it is necessary to convert this "specific" dose
to an equivalent 30-min exposure to an airborne concentration of material as follows:
estimated IDLH (mg/m3) = (level of concern)(70 kg) / (0.4 m3)    (2.3.4)
where 70 kg is the assumed weight of an adult male and 0.4 m3 is the approximate
volume of air inhaled in 30 min. The estimated IDLH, whether derived from LC50 or
LD50 data, is divided by a factor of 10 to identify consequence zones.
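As a numerical sketch of Eq. (2.3.4), the function below converts a "level of concern" dose in mg/kg to an equivalent 30-min airborne concentration; the 70-kg body weight and 0.4-m3 inhaled volume are the assumptions stated in the text, and the 2-mg/kg input is purely hypothetical:

```python
def estimated_idlh(level_of_concern):
    """Eq. (2.3.4): convert a dose (mg/kg) to an airborne
    concentration (mg/m**3) for an assumed 30-min exposure."""
    return level_of_concern * 70.0 / 0.4   # 70-kg adult, 0.4 m**3 inhaled

loc_dose = 2.0                      # hypothetical level of concern, mg/kg
idlh = estimated_idlh(loc_dose)     # 350 mg/m**3
zone_limit = idlh / 10.0            # divided by 10 to set the effect zone
```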
The Department of Energy's Subcommittee on Consequence Assessment and Protective Action (SCAPA) (Craig et al., 1995) provides a hierarchy of alternative concentration guidelines in the event that ERPG data are not available. These interim values
are known as Temporary Emergency Exposure Limits (TEELs). This hierarchy is
shown in Table 2.31.
These methods may result in some inconsistencies since the different methods are
based on different concepts—good judgment should prevail.
TABLE 2.31. Recommended Hierarchy of Alternative Concentration Guidelines
(Craig et al., 1995)

Primary guideline    Hierarchy of alternative guidelines    Source of alternative
ERPG-1 (AIHA)        EEGL (30-minute)                       NAS
                     IDLH                                   NIOSH
ERPG-2 (AIHA)        EEGL (60-minute)                       NAS
                     LOC                                    EPA/FEMA/DOT
                     PEL-C                                  OSHA
                     TLV-C                                  ACGIH
                     5 × TLV-TWA                            ACGIH
ERPG-3 (AIHA)        PEL-STEL                               OSHA
                     TLV-STEL                               ACGIH
                     3 × TLV-TWA                            ACGIH

AIHA:   American Industrial Hygiene Association
NAS:    National Academy of Sciences Committee on Toxicology
NIOSH:  National Institute for Occupational Safety and Health
EPA:    Environmental Protection Agency
FEMA:   Federal Emergency Management Agency
DOT:    U.S. Department of Transportation
OSHA:   U.S. Occupational Safety and Health Administration
ACGIH:  American Conference of Governmental Industrial Hygienists
Application of Probit Equations
For about 20 commonly used substances, there is some information on dose-response
relationships that can be applied to a probit function to quantify the number of fatalities that are likely to occur with a given exposure. Where sufficient information exists,
use of the probit function can refine the hazard assessment; however, despite the
appearance of greater precision, it is important to remember that probit relationships
for specific substances are typically extrapolated from experimental animal data and, therefore, uncertainty surrounds these risk estimates when they are applied to human populations. Many probit models are the result of the combination of a wide range of animal
tests involving different animal species producing widely varying responses. There has
been little effort to try to utilize those studies and data that represent the greatest similarity to human exposure. There is also a standard error associated with the use of the
probit function and if only a few data points are available the confidence limits of the
resulting correlation can be very broad.
The probit method is a statistical curve fitting method. Furthermore, the results
are often extrapolated beyond the experimental data range. This presents a difficult
problem for higher doses since the toxicity mechanisms might change.
TABLE 2.32. Probit Equation Constants for Lethal Toxicity

The probit equation is of the form

Y = a + b ln(C^n te)

where
Y is the probit
a, b, n are constants
C is the concentration in ppm by volume
te is the exposure time in minutes

                         U.S. Coast Guard (1980)        World Bank (1988)
Substance                  a         b        n           a        b       n
Acrolein                  -9.931    2.049    1           -9.93    2.05    1.0
Acrylonitrile            -29.42     3.008    1.43
Ammonia                  -35.9      1.85     2           -9.82    0.71    2.00
Benzene                 -109.78     5.3      2
Bromine                   -9.04     0.92     2
Carbon Monoxide          -37.98     3.7      1
Carbon Tetrachloride      -6.29     0.408    2.50
Chlorine                  -8.29     0.92     2           -5.3     0.5     2.75
Formaldehyde             -12.24     1.3      2
Hydrogen Chloride        -16.85     2.00     1.00       -21.76    2.65    1.00
Hydrogen Cyanide         -29.42     3.008    1.43
Hydrogen Fluoride        -25.87     3.354    1.00       -26.4     3.35    1.0
Hydrogen Sulfide         -31.42     3.008    1.43
Methyl Bromide           -56.81     5.27     1.00       -19.92    5.16    1.0
Methyl Isocyanate         -5.642    1.637    0.653
Nitrogen Dioxide         -13.79     1.4      2
Phosgene                 -19.27     3.686    1          -19.27    3.69    1.0
Propylene Oxide           -7.415    0.509    2.00
Sulfur Dioxide           -15.67     2.10     1.00
Toluene                   -6.794    0.408    2.50
Probit equations for a number of different vapor exposures are provided in Table
2.32.
Fatality probit coefficients are also available for approximately 10 materials in Tsao
and Perr (1979). Withers and Lees (1985) provide a review of acute chlorine toxicity,
and Withers (1986) presents a similar review of acute ammonia toxicity. Rijnmond
Public Authority (1982) provides probits for four chemicals (chlorine, ammonia,
hydrogen sulfide, and acrylonitrile); however, the Withers and Lees reviews are more
recent. Prugh (1995) presents a compilation of probit equations for 28 materials,
showing widely differing results from different investigators. Schubach (1995) performs a comparison between CCPS (AIChE, 1988a) and TNO (1992) probit equations. He demonstrates the sensitivity of risk assessment results to differences in probit
equations. Franks et al. (1996) provides a summary of probit data and how these data
are related to LC50 values.
2.3.1.2. DESCRIPTION
Description of Technique
To determine the possible health consequences of a toxic release incident outcome, dispersion models are used to develop a contour map describing the concentration of gas
as a function of time, location, and distance from the point of release. This is a reasonably simple process for a continuous release since the concentration is constant at a
fixed point. However, this approach is more difficult for an intermittent or instantaneous release since concentration-time information is required. Once the concentration-time information is developed from the dispersion models it is relatively
straightforward to use established toxicologic criteria (e.g., ERPG, EEGL, SPEGL, or
IDLH) to assess the likelihood of an adverse outcome. Effects zones can be identified
that represent areas in which the concentration of gas and duration of exposure exceed
these criteria. All humans exposed within the consequence zone are assumed to be at
risk of experiencing the adverse effects associated with exposure to the material. In
some cases adjustments might be necessary due to sensitive populations.
Once the concentration-time information is determined, the next step is to determine the toxic dose. Toxic dose is usually defined in terms of the concentration
raised to a power multiplied by the duration of exposure (C^n t), with n
typically ranging from 0.5 to 3 (Lees, 1986). This relationship is an expansion of the
original Haber law developed in 1924 which states that, for a given physiological
effect, the product of the concentration times the time is equal to a constant.
For continuous releases, toxic dose may be calculated directly, since the concentration is constant. For instantaneous, time-varying (puff) releases, the toxic dose is estimated by integration or summation over several time increments.
toxic dose = ∫ (from t0 to t) C^n dt ≈ Σ (i = 1 to n) Ci^n Δti    (2.3.5)
where
C is the concentration (usually ppm or mg/m3)
n is the concentration exponent (dimensionless)
t is the exposure time (min)
i is the index of the time increment (dimensionless)
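The summation form of Eq. (2.3.5) is straightforward to script. In the sketch below, the concentration history is purely illustrative of a puff passing a fixed receptor:

```python
def toxic_dose(concs, dts, n):
    """Approximate the integral of C**n dt by summation, per Eq. (2.3.5)."""
    return sum(c ** n * dt for c, dt in zip(concs, dts))

# hypothetical concentration history (ppm) at successive 1-min increments
concs = [0.0, 50.0, 400.0, 900.0, 400.0, 50.0, 0.0]
dts = [1.0] * len(concs)
dose = toxic_dose(concs, dts, n=2.0)   # units of ppm**2 * min
```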
Although there are various criteria that can be applied for the determination of
effect zones beyond the plant boundary, there is no consensus within industry on which
criteria to apply. This same problem exists for local, state, and federal regulatory bodies
as well. Because such wide variation exists, judgment of trained toxicologists should be
utilized.
Theoretical Foundation
Probit equation parameters for individual gases are usually derived from animal experiments. Accurate concentrations and duration values are rarely available for historical
toxic accidents, but approximate estimates may be derived in some cases to complement the animal data. Probit equation parameters for gas mixtures are not currently
available.
The probit method is simply a statistical curve-fitting approach to handle the nonlinear experimental data from exposures. Extrapolation outside the range of the applicable data is unreliable.
Animal experiments are usually done on groups of rats or mice, but other species
are also used. The variability in toxic effect (concentration and time) between animal
species can be substantial. No definitive correlation is available to relate human and
animal responses, for example, the relationship between species often depends on the
substance to which the relevant species are exposed; substance-specific conversion
models are sometimes required. Therefore, species-specific methods need to be defined
for converting animal data to human effects or for using animal data directly. Anderson
(1983) suggests that an equivalent dose for humans can be estimated based on mouse
data taking into account LC50 data, air intake, weight, target organs, and other factors.
A further consideration is that probit data are developed using mean exposure concentrations. It is not known whether the approach is applicable to time varying concentrations as would be expected from a moving puff.
Probit data are available from a number of sources (US Coast Guard, 1980; World
Bank, 1988; Prugh, 1995; Lees, 1996). These data are shown in Table 2.32.
Prugh (1995) provides a concise summary of probit models for 28 chemicals. His
summary shows a wide variability in coefficient and exponent values between different
investigators. Schubach (1995) demonstrates that this results in a great variability in
the predicted consequences. Ten Berge et al. (1986) discuss the applicability of Haber's
law and conclude that a concentration exponent of 1 does not fit the available data.
Prugh (1995) also performs a detailed analysis for chlorine, demonstrating that
Eq. (2.3.5), with a fixed exponent n, fits the available data at high concentrations, but
not at low. This implies that the probit equation and Eq. (2.3.5) does not fit the data
over wide concentration ranges. He concludes that this might be true for other chemical species.
Input Requirements and Availability
The analysis of toxic effects requires input at two levels
1. Predictions of toxic gas concentrations and durations of exposure at all relevant
locations.
2. Toxic criteria for specific health effects for the particular toxic gas.
Predictions of gas cloud concentrations and durations are available from neutral
and dense gas dispersion models (Section 2.1.3). IDLH and other acute toxic criteria are available for many chemicals and are described by AIChE/CCPS (1988b).
Probit equations are readily applied using spreadsheet analysis, but are not as readily
available.
Output
The usual output of toxicity effect analysis is the identification of populations at risk of
death or serious harm and the percentage of the population that may be affected by a
given toxic gas exposure.
Simplified Approaches
The use of established toxicity measures (e.g., ERPGs, EEGLs, SPEGLs, ACGIH
TLV-STELs, TLV-Cs) is usually a simpler approach than the probit model. However,
when the release is of longer or shorter duration than the published criteria time durations the results are more difficult to interpret.
2.3.1.3. EXAMPLE PROBLEMS
Example 2.33: Percent Fatalities from a Fixed Concentration-Time Relationship. Determine the likely percentage of fatalities from a 20-min exposure to 400 ppm
of chlorine.
Solution: Use the probit expression for chlorine fatalities found in Table 2.32:
Y = -8.29 + 0.92 ln(C^2 te)
Substituting for this exposure,
Y = -8.29 + 0.92 ln(400^2 × 20) = 5.49
Table 2.28, Figure 2.88, or Eq. (2.3.2) is used to convert from the probit to percentages. The result is 69%.
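The calculation can be scripted directly; the sketch below combines the chlorine fatality probit from Table 2.32 with the conversion of Eq. (2.3.2):

```python
from math import erf, log, sqrt

# Chlorine fatality probit (Table 2.32): Y = -8.29 + 0.92 ln(C**2 * te)
C, te = 400.0, 20.0                 # concentration (ppm), exposure (min)
Y = -8.29 + 0.92 * log(C ** 2 * te)
percent = 50.0 * (1.0 + (Y - 5.0) / abs(Y - 5.0)
                  * erf(abs(Y - 5.0) / sqrt(2.0)))
# Y is about 5.49 and percent about 68.8, matching the hand calculation
```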
The spreadsheet output for this example is shown in Figure 2.91. The spreadsheet
has been generalized so that the user can specify as input any general probit equation
form.
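The same calculation can be sketched in a few lines of Python. The probit-to-percentage conversion below uses the standard relation between the probit variable and the cumulative normal distribution, which is assumed here to be what Eq. (2.3.2) represents:

```python
import math

def probit_to_percent(Y):
    """Convert a probit value to the percentage affected.

    Assumes the standard relation: percent = 50*[1 + erf((Y - 5)/sqrt(2))].
    """
    return 50.0 * (1.0 + math.erf((Y - 5.0) / math.sqrt(2.0)))

# Example 2.33: chlorine fatalities, C = 400 ppm for T = 20 min
k1, k2, n = -8.29, 0.92, 2.0
C, T = 400.0, 20.0

Y = k1 + k2 * math.log(C**n * T)
print(round(Y, 2), round(probit_to_percent(Y), 1))  # 5.49 and 68.8
```

The result reproduces the probit value of 5.49 and the 68.81% shown in the spreadsheet of Figure 2.91.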
Example 2.34: Fatalities Due to a Moving Puff. A fixed mass of toxic gas has been
released almost instantaneously from a process unit. The release occurs at night with
calm and clear conditions. The gas obeys the probit equation for fatalities

Y = -17.1 + 1.69 ln(ΣC^2.75 T)

where C has units of ppm and T has units of minutes.
Example 2.33: Percent Fatalities from a Fixed Concentration-Time Relationship

Input Data:
  Concentration:      400 ppm
  Exposure Time:      20 minutes
  Probit Equation:
    k1:               -8.29
    k2:               0.92
    Exponent:         2

Calculated Results:
  Probit Value:       5.49
  Percent:            68.81%

FIGURE 2.91. Spreadsheet output for Example 2.33: Percent fatalities from a fixed
concentration-time relationship.
a. Prepare a spreadsheet to determine the percent fatalities at a fixed location
2000 m downwind as a result of the passing puff. Vary the total release quantity and plot the percent fatalities vs. the total release quantity.
b. Change the concentration exponent from n = 2.75 to n = 2.50 in the probit
equation and determine the percent fatalities for a 5-kg release. How does this
compare to the previous result?
Additional data:
  Molecular weight of gas:  30
  Temperature:              298 K
  Pressure:                 1 atm
  Release height:           Ground level
  Wind speed:               2 m/s
Solution: (a) A diagram of the release geometry is shown in Figure 2.92. The
material is released instantaneously at the release point to form a puff, and the puff
moves downwind toward the receptor target. As the puff moves downwind, it mixes
with fresh air.
The most direct approach is to use a coordinate system for the puff that is fixed on
the ground at the release point. Thus, Eq. (2.1.59) is used in conjunction with Eq.
(2.1.58). Since the release occurs at ground level, Hr = 0, and the resulting working
equation is

C(x, 0, 0, t) = [Qm / (√2 π^(3/2) σx σy σz)] exp[-(1/2)((x - ut)/σx)^2]    (2.3.6)
For a night release, with clear conditions and a wind speed of 2 m/s, the stability
class is F. Thus, from Table 2.13 and x = 2000 m downwind,

σx = σy = 0.02 x^0.89 = 0.02 (2000 m)^0.89 = 17.34 m
σz = 0.05 x^0.61 = 0.05 (2000 m)^0.61 = 5.2 m
The spreadsheet output for this example is shown in Figure 2.93. The most versatile approach is to design the spreadsheet cell grid to move with the center of the puff,
[Diagram: an instantaneous release point, with the moving puff carried in the wind direction toward a fixed receptor location.]
FIGURE 2.92. Geometry for Example 2.34: Fatalities due to a moving puff.
Example 2.34: Fatalities Due to a Moving Puff

Input Data:
  Time:               1000 sec
  Wind Speed:         2 m/s
  Total Release:      5 kg
  Step Increment:     1.5 m
  Release Height:     0 m
  No. of Increments:  50
  Molecular Weight:   30
  Temperature:        298 K
  Pressure:           1 atm
  Probit Equation:
    Exponent:         2.75
    k1:               -17.1
    k2:               1.69

Calculated Results:
  Distance Downwind:  2000 m
  Time Increment:     0.0125 min
  Max. conc. in puff: 334 ppm
  Probit:             7.33
  Percent fatalities: 99.02

[A plot of the puff concentration profile (concentration in ppm versus distance from the puff center in m) and the table of the dispersion calculation (distance from center, distance downwind, dispersion coefficients σy and σz, centerline concentration in mg/m3 and ppm, and the causative variable) are omitted here.]

FIGURE 2.93. Spreadsheet output for Example 2.34: Fatalities due to a moving puff.
rather than assigning each cell to a fixed location in space with respect to the release.
This reduces the total number of spreadsheet cells required. The spreadsheet includes
50 cells on either side of the puff center. The cell width is assumed to be small enough
that the concentration is approximately constant within each cell. The width of each
cell can be varied at the top of the spreadsheet to adjust the total distance encompassed.
This value can be adjusted so that the full concentration profile of the passing puff is
included by the cells.
The rigorous solution to the problem would vary the time at the top of the spreadsheet
and track the concentration at x = 2000 m as the puff passed. However, the puff has a
small enough diameter and the puff passes relatively quickly so that the concentration
profile will not change much in shape as it passes. Thus, the concentration profile centered around x = 2000 m can be used to approximate the actual concentration profile
as a function of time.
The procedure for this approach is:
1. Compute x at each cell in the grid (first column).
2. Compute the centerline concentration at each point using Eq. (2.3.6).
3. Compute C^2.75 T at each point on the grid. The concentration must have units
of ppm and the time must have units of minutes.
4. Form the sum ΣC^2.75 T.
5. Calculate the probit using the results of step 4 and the probit equation.
6. Convert the probit to a percentage.
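The steps above can be sketched in Python, replacing the spreadsheet grid with a loop. The ground-level puff equation and F-stability coefficients quoted above are used; the mg/m3-to-ppm conversion factor of 24.45/MW at 298 K and 1 atm is an assumption of this sketch:

```python
import math

# Inputs from Example 2.34
Q = 5.0          # total release, kg
u = 2.0          # wind speed, m/s
MW = 30.0        # molecular weight
x0 = 2000.0      # receptor distance downwind, m
k1, k2, n = -17.1, 1.69, 2.75

# F-stability puff dispersion coefficients at x = 2000 m
sx = sy = 0.02 * x0**0.89    # ~17.3 m
sz = 0.05 * x0**0.61         # ~5.2 m

dx = 1.5                     # cell width, m
dt = dx / u / 60.0           # time for the puff to traverse one cell, min

# Steps 1-4: concentration profile around the puff center and the dose sum
dose = 0.0
for i in range(-50, 51):
    d = i * dx               # distance from the puff center, m
    c = Q / (math.sqrt(2.0) * math.pi**1.5 * sx * sy * sz) \
        * math.exp(-0.5 * (d / sx)**2)      # kg/m^3, Eq. (2.3.6)
    c_ppm = c * 1.0e6 * 24.45 / MW          # mg/m^3 -> ppm at 298 K, 1 atm
    dose += c_ppm**n * dt

# Steps 5-6: probit and percent fatalities
Y = k1 + k2 * math.log(dose)
percent = 50.0 * (1.0 + math.erf((Y - 5.0) / math.sqrt(2.0)))
print(round(Y, 2), round(percent, 1))  # close to the 7.33 and 99.0 of Figure 2.93
```

Small differences from the spreadsheet values arise from rounding of the dispersion coefficients and the unit-conversion factor.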
The spreadsheet output for a 5 kg release is shown in Figure 2.93. Figure 2.93
includes a plot of the puff concentration at x = 2000 m when the puff is centered at that
point.
The spreadsheet is executed repeatedly using differing values of the total release
quantity. The percent fatalities are recorded for each run. The total results are plotted in
Figure 2.94. These results show that a relatively small change in total release (from
about 3 to 7 kg) changes the percent fatalities from 3 to 98 percent.
(b) For this case, the spreadsheet is executed with n = 2.50 with a total release of
5 kg. The percent fatalities in this case is 48.3%. Thus, a small change in the exponent
in the probit results in a large change in the effects. The fixed downwind distance with
10% fatalities changes from 2912 m to 2310 m with a change in n from 2.75 to 2.50.
2.3.1.4. DISCUSSION
Percent Fatalities
Strengths and Weaknesses
A strength of the probit method is that it provides a probability distribution of consequences and it may be applicable to all types of incidents in CPQRA (fires, explosions,
[Plot: percent fatalities versus total release, kg.]
FIGURE 2.94. Total fatalities versus total quantity released for Example 2.34: Fatalities due to
a moving puff.
toxic releases). It is generally the method of choice for CPQRA studies. A
weakness of this approach is the restricted set of chemicals for which probit coefficients
are published. Probit models can be developed from existing literature information and
toxicity testing.
ERPG, EEGL, IDLH, or other fixed concentration methods are available for
many more chemicals, but these do not allow comparisons for exposures that differ in duration from
the exposure time used to establish the guidelines.
Identification and Treatment of Possible Errors
The potential for error arises both from the dispersion model and the toxicity measures. Errors in dispersion modeling are addressed in Section 2.1.3.
Interpretation of animal experiments is subject to substantial error due to the limited number of animals per experiment and the imprecise applicability of animal data to
people.
The probit method is only a statistical data fitting technique. The data are also
developed based on a constant mean exposure to animals—the approach assumes that
the probit equations can be applied to varying concentrations.
For far field effects, that is, effects at large distances downwind from the release,
the predicted consequences are highly sensitive to the dispersion and toxic effects
models. As the distance from the release increases, the area impacted increases as the
square of the distance. This increases the sensitivity of the consequences. For instance,
Example 2.34 demonstrates that a change in probit concentration exponent from
n = 2.75 to n = 2.50 changes the predicted consequence dramatically; this is well
within the variability range for published probit equations.
Wide variability exists in published probit equations. Currently, no heuristics are
available to assist in the selection of an appropriate equation. If the ERPG-3 concentration is used with a probit equation assuming a one-hour exposure, the results should
predict a low percentage of fatalities. For many chemicals this is not the case.
Another factor to consider is the degree of exertion likely to be present in the affected
population. The inhalation rate in humans varies from 6 liters/min at rest to about 43
liters/min during slow running. One means for quantifying the error is to validate the
combined dispersion results and probit effects against known historical accidents,
although these data are rare.
Finally, since the area increases as the square of the distance from the release, the
population impact increases in the far field. This increases the sensitivity of the probit
method to lower concentrations.
Utility
Toxicology is a specialized area and few engineers and scientists have a good understanding of the underlying basis for the various toxicity criteria. Once a criterion has
been selected, whether probit or a fixed-value system (ERPG, EEGL, etc.), the application is straightforward. Fixed values for toxicity measures are easier to apply than
probits, especially for plume emissions. It is always preferable to use data for which
there is sufficient documentation about how the data were obtained, rather than to use
reference values where little or no supporting information is available.
Resources Needed
Some understanding of toxic effects is important because such effects are highly variable; generalizing or uncritically applying formulas can yield very misleading results.
Probit equations should be developed from experimental animal data only in collaboration with a skilled industrial hygienist or toxicologist trained in health risk assessment
techniques. Regardless of the method used to estimate the potential health consequences of an incident outcome (i.e., use of toxicity measures or probit functions), a
toxicologist should be called to provide input to this aspect of a CPQRA.
Available Computer Codes
DAMAGE (TNO, Apeldoorn, The Netherlands)
PHAST (DNV, Houston, TX)
TRACE (Safer Systems, Westlake Village, CA)
2.3.2. Thermal Effects
2.3.2.1. BACKGROUND
Purpose
To estimate the likely injury or damage to people and objects from thermal radiation
from incident outcomes.
Philosophy
Thermal effect modeling is more straightforward than toxic effect modeling. A substantial body of experimental data exists and forms the basis for effect estimation. Two
approaches are used:
• simple tabulations or charts based on experimental results
• theoretical models based on the physiology of skin burn response.
Continuous bare skin exposure is generally assumed for simplification. Shelter can
be considered if relevant (Section 2.4).
Applications
Thermal effect modeling is widely used in chemical plant design and CPQRA. Examples include the Canvey Study (Health & Safety Executive, 1978, 1981), Rijnmond
Public Authority (1982) risk assessments, and LNG Federal Safety Standards (Department of Transportation, 1980). The API 521 (1996a) method for flare safety exclusion
zones is widely used in the layout of process plants.
2.3.2.2. DESCRIPTION
Description of Techniques
API (1996a) RP 521 provides a short review of the effects of thermal radiation on
people. This is based on the experiments of Buettner (1957) and Stoll and Green
(1958). The data on time for pain threshold is summarized in Table 2.33 (API,
1996a). It is stated that burns follow the pain threshold "fairly quickly." The values in
TABLE 2.33. Exposure Time Necessary to Reach
the Pain Threshold (API, 1996a)

Radiation intensity           Time to pain
(Btu/hr/ft2)    (kW/m2)       threshold (s)
   500           1.74             60
   740           2.33             40
   920           2.90             30
  1500           4.73             16
  2200           6.94              9
  3000           9.46              6
  3700          11.67              4
  6300          19.87              2
TABLE 2.34. Recommended Design Flare Radiation Levels Excluding Solar Radiation
(API, 1996a)

Permissible design level (K)
Btu/hr/ft2    kW/m2    Conditions(a)
5000          15.77    Heat intensity on structures and in areas where operators are not
                       likely to be performing duties and where shelter from radiant heat is
                       available, for example, behind equipment
3000           9.46    Value of K at design flare release at any location to which people have
                       access, for example, at grade below the flare or on a service platform
                       of a nearby tower. Exposure must be limited to a few seconds,
                       sufficient for escape only
2000           6.31    Heat intensity in areas where emergency actions lasting up to 1 min
                       may be required by personnel without shielding but with appropriate
                       clothing
1500           4.73    Heat intensity in areas where emergency actions lasting several
                       minutes may be required by personnel without shielding but with
                       appropriate clothing
 500           1.58    Value of K at design flare release at any location where personnel are
                       continuously exposed

(a) On towers or other elevated structures where rapid escape is not possible, ladders must be provided on the side
away from the flare, so the structure can provide some shielding when K is greater than 2000 Btu/hr/ft2 (6.31
kW/m2).
Table 2.33 may be compared to solar radiation intensity on a clear, hot summer day of
about 320 Btu/hr ft2 (1 kW/m2).
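For flux levels between the tabulated entries, the Table 2.33 data can be interpolated. The sketch below interpolates time to pain threshold on a log-log basis; the log-log form is an assumption of this sketch, not part of the API data:

```python
import math

# (flux kW/m^2, time-to-pain s) pairs from Table 2.33
PAIN_DATA = [(1.74, 60), (2.33, 40), (2.90, 30), (4.73, 16),
             (6.94, 9), (9.46, 6), (11.67, 4), (19.87, 2)]

def time_to_pain(flux_kw_m2):
    """Log-log interpolation of Table 2.33 (valid 1.74-19.87 kW/m^2)."""
    pts = PAIN_DATA
    if not pts[0][0] <= flux_kw_m2 <= pts[-1][0]:
        raise ValueError("flux outside tabulated range")
    for (f1, t1), (f2, t2) in zip(pts, pts[1:]):
        if f1 <= flux_kw_m2 <= f2:
            slope = math.log(t2 / t1) / math.log(f2 / f1)
            return t1 * (flux_kw_m2 / f1)**slope

print(round(time_to_pain(5.0), 1))  # roughly 15 s at 5 kW/m^2
```

At the tabulated points the interpolation returns the table values exactly; between points it is only as good as the assumed log-log behavior.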
Based on these data, API suggests thermal criteria (Table 2.34), excluding solar
radiation, to establish exclusion zones or determine flare height for personnel exposure.
Other criteria for thermal radiation damage are shown in Table 2.35.
TABLE 2.35. Effects of Thermal Radiation (World Bank, 1985)

Radiation intensity
(kW/m2)     Observed effect
37.5        Sufficient to cause damage to process equipment
25          Minimum energy required to ignite wood at indefinitely long exposures
            (nonpiloted)
12.5        Minimum energy required for piloted ignition of wood, melting of plastic
            tubing
9.5         Pain threshold reached after 8 sec; second degree burns after 20 sec
4           Sufficient to cause pain to personnel if unable to reach cover within 20 s;
            however, blistering of the skin (second degree burns) is likely; 0% lethality
1.6         Will cause no discomfort for long exposure
[Plot: exposure time (s) versus incident thermal flux (kW/m2), with curves for near 100% fatalities, mean 50% fatalities, 1% fatalities, and the significant injury threshold; includes the data of Mixter (1954).]
FIGURE 2.95. Serious injury/fatality levels for thermal radiation (Mudan, 1984).
Mudan (1984) summarizes the data of Eisenberg et al. (1975) for a range of burn
injuries, including fatality, and of Mixter (1954) for second-degree burns (Figure
2.95). Eisenberg et al. (1975) develop a probit model to estimate fatality levels for a
given thermal dose from pool and flash fires, based on nuclear explosion data.
Y = -14.9 + 2.56 ln(tI^(4/3)/10^4)    (2.3.7)

where Y is the probit (Section 2.3.1), t is the duration of exposure (s), and I is the
thermal radiation intensity (W/m2).
Lees (1986) summarizes the data from which this relationship was derived. The
probit method has found less use for thermal injury than it has for toxic effects. Mathematical models of thermal injuries can be based on a detailed model of the
skin and its heat transfer properties. Experiments have shown that the threshold of pain
occurs when the skin temperature at a depth of 0.1 mm is raised to 45°C. When the
skin surface temperature reaches about 55°C, blistering occurs. Mehta et al. (1973)
describe a thermal energy model to predict damage levels above 55°C.
Lees (1994) provides a detailed analysis of fatal injuries from burns, including a
review of probit equations. He also considers the effects of clothing and buildings on
the resulting injuries.
Schubach (1995) provides a review of thermal radiation targets for risk analysis.
He concludes that (1) the method of assuming a fixed intensity of 12.6 kW/m2 to represent fatality is inappropriate due to an inconsistency with probit functions and (2) a
thermal radiation intensity of 4.7 kW/m2 is a more generally accepted value to represent injury. This value is considered high enough to trigger the possibility of injury for
people who are unable to be evacuated or seek shelter. That level of heat radiation
would cause injury after 30 s of exposure.
Schubach (1995) also suggests that the fatality probit data of Eisenberg et al.
(1975) applies to lightly clothed individuals, and that the type of clothing would have a
significant effect on the results.
The effect of thermal radiation on structures depends on whether they are combustible or not and on the nature and duration of the exposure. Thus, wooden materials will
fail due to combustion, whereas steel will fail due to thermal lowering of the yield
stress. Many steel structures under normal load will fail rapidly when raised to a temperature of 500-600°C, whereas concrete will survive for much longer. Flame
impingement on a structure is more severe than thermal radiation.
Theoretical Foundation
Thermal effects models are solidly based on experimental work on humans, animals,
and structures. A detailed body of theory has been developed in the area of fire engineering of structures.
Input Requirements and Availability
The inputs to most thermal effect models are the thermal flux level and duration of
exposure. Thermal flux levels are provided by one of the fire consequence models (Section 2.2.4 or 2.2.6), and durations by either the consequence model (e.g., for
BLEVEs) or by an estimate of the time to extinguish the fire. More detailed models use
thermal energy input after a particular skin temperature is reached. Data for these
models are more difficult to provide.
Output
The primary output is the estimated level of injury from a specified exposure.
Simplified Approaches
The use of a fixed thermal exposure criterion, resulting in a fixed injury or fatality level,
without accounting for duration of exposure is a simplified approach. This allows the
consequence models to be used to predict a standard thermal exposure level, without
reference to the specific details of each incident in terms of duration. The fixed criterion
may be based on an implicit exposure time. The LNG Federal Safety Standards
(Department of Transportation, 1980) use a fixed criterion of 5 kW/m2 for defining limiting thermal flux levels for people.
2.3.2.3. EXAMPLE PROBLEMS
Example 2.35: Thermal Flux Estimate Based on 50% Fatalities. Determine the
thermal flux necessary to cause 50% fatalities for 10 and 100 s of exposure.
Solution: From Figure 2.95, the flux levels corresponding to 50% fatalities for 10
and 100 s are 90 and 14 kW/m2, respectively. Using the Eisenberg probit method, Eq.
(2.3.7) is rearranged to solve for the thermal radiation intensity I:

I = [10^4 exp((Y + 14.9)/2.56) / t]^(3/4)

For 50% fatality, the probit variable Y = 5.0 (Table 2.28).

For t = 10 s,  I = 61 kW/m2
For t = 100 s, I = 11 kW/m2
These results differ from those of Figure 2.95 by about 30%. It is unlikely that
much greater accuracy can be achieved. This example demonstrates the importance of
the duration of exposure, especially for short duration incidents such as BLEVEs (on
the order of 10-20 s). A fixed criterion, suitable for prolonged exposures, may be inappropriate for such incidents.
The spreadsheet output for this problem is provided in Figure 2.96. The data of
Figure 2.95 have been digitized and are included in the spreadsheet. The spreadsheet
will determine the thermal flux based on any specified exposure time and percent
fatalities.
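The probit branch of this calculation is easily reproduced in a short sketch, assuming only the rearranged form of Eq. (2.3.7):

```python
import math

def flux_for_fatality_probit(Y, t_s):
    """Thermal flux (W/m^2) giving probit Y for an exposure of t_s seconds.

    Rearranged Eisenberg fatality probit, Eq. (2.3.7):
    I = [1e4 * exp((Y + 14.9)/2.56) / t]^(3/4)
    """
    return (1.0e4 * math.exp((Y + 14.9) / 2.56) / t_s)**0.75

Y50 = 5.0  # probit corresponding to 50% fatality
print(round(flux_for_fatality_probit(Y50, 10.0) / 1000.0, 1))   # ~60.5 kW/m^2
print(round(flux_for_fatality_probit(Y50, 100.0) / 1000.0, 1))  # ~10.8 kW/m^2
```

These values match the 60.53 kW/m2 shown in Figure 2.96 and the 11 kW/m2 quoted in the solution.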
Example 2.36: Fatalities Due to Thermal Flux from a BLEVE Fireball. Estimate
the fatalities due to the thermal flux from a BLEVE based on the release of 39,000 kg of
flammable material. Assume that 400 workers are distributed uniformly from a distance of 75 m to 1000 m from the ground location of the fireball center.
Example 2.35: Thermal Flux Estimate

Input Data:
  Exposure time:       10 seconds
  Percent Fatalities:  50%

Calculated Results:
  Thermal Flux Estimate for:
    Significant injury threshold:   21.55 kW/m2
    1% Fatalities:                  38.00 kW/m2
    50% Fatalities:                 85.16 kW/m2
    100% Fatalities:               131.47 kW/m2
  Interpolated Flux for Specified Percent: 85.16 kW/m2

  Thermal Flux Estimate Based on Eisenberg Fatality Probit:
    Probit:        5.00
    Thermal Flux:  60.53 kW/m2

FIGURE 2.96. Spreadsheet output for Example 2.35: Thermal flux estimate based on 50%
fatalities.
Solution: The incident radiant flux from a BLEVE fireball is estimated using Eq.
(2.2.44) and the fireball duration is estimated from Eq. (2.2.34). The fireball center
height, H_BLEVE, is given by Eq. (2.2.35). The solution will assume that the fireball stays
fixed at the fireball center height during the entire exposure duration. The distance from the fireball center to the receptor is given from geometry as

Receptor Distance = √(H_BLEVE^2 + L^2)

where L is the distance on the ground from the ground location of the fireball center.
The probit equation for fireball fatalities is given by Eq. (2.3.7).
The procedure is to divide the distance from 75 to 1000 m into a number of small
shells of equal thickness. Assume that the shell thickness is small enough that the incident thermal flux at the center of each shell is approximately constant throughout the shell
thickness. The procedure at each shell is as follows:
1. Compute the distance from ground zero to the center of the current shell.
2. Compute the receptor distance from the fireball center to the current shell.
3. Compute the incident heat flux at the shell center using Eq. (2.2.44).
4. Compute the probit for fatality using Eq. (2.3.7).
5. Convert the probit to a percentage using Eq. (2.3.2).
6. Calculate the total shell area.
7. Determine the total number of workers in the shell.
8. Multiply the total number of workers by the percent fatalities to determine the
total fatalities.
9. Sum up the fatalities in all shells to determine the total.
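The shell-by-shell bookkeeping can be sketched as follows. The fireball correlations used here (D_max = 5.8 M^(1/3), t = 2.6 M^(1/6), H = 0.75 D_max) are commonly published forms assumed to stand in for Eqs. (2.2.34) and (2.2.35); they do reproduce the duration, diameter, and height shown in Figure 2.97. The point-source flux function, with an assumed radiative fraction of 0.3 and heat of combustion of 4.6e7 J/kg, is only a hypothetical stand-in for Eq. (2.2.44), so the fatality total it produces will not match the 16.2 of the text:

```python
import math

# Inputs from Example 2.36
M = 39000.0                            # flammable mass, kg
N_PEOPLE = 400
R_IN, R_OUT, DR = 75.0, 1000.0, 5.0    # shell geometry, m

# Assumed fireball correlations (stand-ins for Eqs. 2.2.34/2.2.35)
D_max = 5.8 * M**(1.0 / 3.0)           # maximum fireball diameter, m
t_b = 2.6 * M**(1.0 / 6.0)             # fireball duration, s (M > 30,000 kg)
H = 0.75 * D_max                       # fireball center height, m

area_total = math.pi * (R_OUT**2 - R_IN**2)
density = N_PEOPLE / area_total        # people per m^2

def fatalities(flux_fn):
    """Shell-by-shell summation (steps 1-9); flux_fn(L) returns W/m^2."""
    total = 0.0
    L = R_IN + DR / 2.0
    while L < R_OUT:
        receptor = math.sqrt(H**2 + L**2)           # slant distance, m
        I = flux_fn(receptor)
        Y = -14.9 + 2.56 * math.log(t_b * I**(4.0 / 3.0) / 1.0e4)
        pct = 50.0 * (1.0 + math.erf((Y - 5.0) / math.sqrt(2.0)))
        shell_area = math.pi * ((L + DR / 2)**2 - (L - DR / 2)**2)
        total += density * shell_area * pct / 100.0
        L += DR
    return total

# Hypothetical point-source flux model standing in for Eq. (2.2.44)
def point_source_flux(L):
    return 0.3 * M * 4.6e7 / t_b / (4.0 * math.pi * L**2)

print(round(t_b, 2), round(D_max, 1), round(H, 1))  # 15.14 196.7 147.5
print(round(fatalities(point_source_flux), 1))
```

The geometry (duration 15.14 s, diameter 196.7 m, height 147.5 m, total area about 3.12e6 m2, density 0.000128 people/m2) agrees with Figure 2.97; substituting the book's Eq. (2.2.44) for the flux function would recover its fatality total.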
The output of the spreadsheet solution is shown in Figure 2.97. A total of 16.2
fatalities is predicted. A shell thickness of 5 m was selected, with a total of 185
shells. The results are essentially independent of this value. The fatalities drop to zero at
about 270 m from the BLEVE.
2.3.2.4. DISCUSSION
Strengths and Weaknesses
Thermal effect models are simple and are based on extensive experimental data. The
main weakness arises when the duration of exposure is not considered.
Identification and Treatment of Errors
Thermal effect data relates to bare skin. People wearing heavy clothes or protected by
buildings would be much less likely to be injured (the effect of sheltering is discussed in
Section 2.4). Also, in hot sunny climates, it may be necessary to add solar radiation
intensity to that estimated by consequence models to determine total radiation exposure from an incident. In general, error in thermal effects is likely to be less than errors
in estimating explosion and toxic effects.
Utility
Thermal effect models are easy to apply for human injury. The issue of duration of
exposure may be difficult to resolve where shelter is available, but limited. Thermal
Example 2.36: Fatalities due to Thermal Flux from BLEVE Fireball

Input Data:
  Total people:        400
  Inner radius:        75 m
  Outer radius:        1000 m
  Total flammable:     39000 kg
  Distance increment:  5 m

Calculated Results:
  Total Fatalities:    16.21
  Total Area:          3122338 m**2
  People/m**2:         0.000128
  Total increments:    185
  Time duration:       15.14 s
  Max. fireball diam.: 196.69 m
  Height of fireball:  147.52 m

[The accompanying table of ground distance (m), receptor distance (m), heat flux (kW/m2), probit, percent, area (m**2), people in area, people fatalities, and cumulative people is omitted here.]

FIGURE 2.97. Spreadsheet output for Example 2.36: Fatalities due to thermal flux from a
BLEVE fireball.
effects on steel structures are more difficult to calculate, as an estimate of temperature
profiles due to the net radiation balance (in and out of the structure) and conduction
through the structure may be necessary.
Resources Needed
A process engineer using a hand calculator can predict thermal effects with little special
training. Effects on structures require sophisticated thermal modeling.
Available Computer Codes
DAMAGE (TNO, Apeldorn, The Netherlands)
PHAST (DNV, Houston, TX)
TRACE (Safer Systems, Westlake Village, CA)
2.3.3. Explosion Effects
2.3.3.1. BACKGROUND
Purpose
Explosion effect models predict the impact of blast overpressure and projectiles on
people and objects.
Philosophy
Most effect models for explosions are based on either the blast overpressure alone, or a
combination of blast overpressure, duration, and/or specific impulse. The blast
overpressure, impulse and duration are determined using a variety of models, including
TNT equivalency, multi-energy and Baker-Strehlow methods. See Section 2.2 for
details on these models.
Applications
Virtually all CPQRAs of systems containing large inventories of flammable or reactive
materials will need to consider explosion effects. Some analyses may also need to consider condensed phase explosions or detonations of unstable materials. Examples
include the Canvey Study (Health & Safety Executive, 1978, 1981) and Rijnmond
Public Authority (1982) risk assessments. However, in the case of very large explosions, or for explosions at sites near off-site structures, significant offsite damage could
result. Many accident investigations have employed explosion effect models (e.g.,
Sadee et al., 1977).
Since the blast overpressure decreases rapidly as the distance from the source
increases, significant offsite damage from blasts is not expected. Most studies are
directed toward on-site damage.
2.3.3.2. DESCRIPTION
Description of the Technique
Explosion effects have been studied for many years, primarily with respect to the layout
and siting of military munitions stockpiles. Baker et al. (1983) and Lees (1986,1996)
provide extensive reviews of explosion effects and the effects of projectiles. Explosion
effects are classified according to effects on structures and people.
Structures. Overpressure duration is important for determining effects on structures. The positive pressure phase of the blast wave can last from 10 to 250 ms, or
more, for typical VCEs. The same overpressure level can have markedly different
effects depending on the duration. Therefore, some caution should be exercised in the
application of simple overpressure criteria for buildings or structures; these criteria can
in many cases cause overestimation of structural damage. If the blast duration is shorter
than the characteristic structural response times, it is possible the structure can survive
higher overpressures. Baker et al. (1983) discuss design issues relating to the response
of structures to explosion overpressures. AIChE/CCPS (1996b) provides an extensive
review of risk criteria and risk reduction methods for structures exposed to explosions,
and a discussion of blast resistant building design.
Eisenberg et al. (1975) provide a simple probit model to describe the effects on
structures.
Y = -23.8 + 2.92 ln(P°)    (2.3.8)

where Y is the probit and P° is the peak overpressure (Pa).
The probit, Y, can be converted to a percentage using Eq. (2.3.1). The percentage
here represents the percent of structures damaged. More detailed effect models for
structures are based on both the peak overpressure and the impulse (Lees, 1996;
AIChE/CCPS, 1996b).
Tables 2.18a and 2.18b provide an estimate of damage expected as a function of
the overpressure.
The interpretation of these data is clear with respect to structural damage, but subject to debate with respect to human casualties. The Rijnmond (1982) study equates
heavy building damage to a fatal effect, as those inside buildings would probably be
crushed.
People. People outside of buildings or structures are susceptible to
1. direct blast injury (blast overpressure)
2. indirect blast injury (missiles or whole body translation)
Relatively high blast overpressures (>15 psig) are necessary to produce fatality
(primarily due to lung hemorrhage). Eisenberg et al. (1975) provide a probit for fatalities as a result of lung hemorrhage due to the direct effect of overpressure:

Y = -77.1 + 6.91 ln(P°)    (2.3.9)

where Y is the probit and P° is the peak overpressure (Pa).
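Both probits are easy to exercise. The short sketch below back-calculates the peak overpressure corresponding to a 50% effect level (probit Y = 5, which by definition corresponds to 50%):

```python
import math

def overpressure_for_probit(Y, k1, k2):
    """Invert Y = k1 + k2*ln(P) for the peak overpressure P (Pa)."""
    return math.exp((Y - k1) / k2)

# 50% structural damage, Eq. (2.3.8)
P_struct = overpressure_for_probit(5.0, -23.8, 2.92)
# 50% fatality from lung hemorrhage, Eq. (2.3.9)
P_lung = overpressure_for_probit(5.0, -77.1, 6.91)

PSI = 6894.76  # Pa per psi
print(round(P_struct / PSI, 1))  # ~2.8 psi: the familiar "major damage" level
print(round(P_lung / PSI, 1))    # ~21 psi: consistent with ">15 psig" for fatality
```

Note that the structural result is close to the 3 psi criterion used in the simplified approach below, and the lung-hemorrhage result is consistent with the >15 psig statement above.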
It is generally believed that fatalities arising from whole-body translation are due to
head injury from decelerative impact. Baker et al. (1983) present tentative criteria for
probability of fatality as a function of impact velocity. They also provide correlations
for determining impact velocity as a function of the incident overpressure and the ratio
of the specific impulse over the mass of the human body to the % power. Lees (1996)
provides probit equations for whole body translation and impact.
Injury to people due to fragments usually occurs either because of penetration by
small fragments or blunt trauma by large fragments. Baker et al. (1983) review skin
penetration and suggest that it is a function of A/M where A is the cross-sectional area
of the projectile along its trajectory and M is the mass of the projectile. Injury from
blunt projectiles is a function of the fragment mass and velocity. Very limited information is available for this effect.
TNO (1979) suggest that projectiles with a kinetic energy of 100 J can cause
fatalities.
Theoretical Foundation
The probit models are simply a convenient method to fit the limited data. Most effect
models, particularly for human effects, are based on limited, and sometimes indirect
data.
The basis for explosion effect estimation is experimental data from TNT explosions. These data are for detonations and there may be differences with respect to
longer duration deflagration overpressures.
Input Requirements and Availability
The primary input is the blast overpressure (defined as the peak side-on overpressure),
although for structural damage analysis, an estimate of the duration is also necessary.
Projectile damage analysis requires an estimate of the number, velocity, and spatial distributions of projectiles, and is more difficult than overpressure analysis.
Output
The output is the effect on people or structures of blast overpressure or projectiles.
Simplified Approaches
For explosion effects, some risk analysts assume that structures exposed to a 3 psi peak
side-on overpressure, or higher, will suffer major damage, and assume 50% fatalities
within this range (corresponding to a probit value of 5).
2.3.3.3. EXAMPLE PROBLEM
Example 2.37: 3-psi Range for a TNT Blast. 100 kg of TNT is detonated. Determine the distance to the 3-psi limit for structures and 50% fatalities.
Solution: The solution is by trial and error. The procedure to determine the blast
overpressure is described in Section 2.2.1 (see Example 2.19). The procedure is as
follows:
1. Guess a distance.
2. Calculate the scaled distance using Eq. (2.2.7).
3. Use Figure 2.48 or the equations in Table 2.17 to determine the overpressure.
4. Check if the overpressure is close to 3 psi.
The procedure is repeated until an overpressure of 3 psi is obtained. The result is
36.7 m.
A spreadsheet implementation of this problem is provided in Figure 2.98.
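The trial-and-error loop is naturally automated with bisection. The sketch below uses a different published curve fit for the scaled overpressure (the form given by Crowl and Louvar, an assumption here), so the converged distance differs from the 36.7 m obtained with the Table 2.17 equations; the procedure is the same:

```python
import math

W = 100.0                # TNT mass, kg
P_AMB = 101.325          # ambient pressure, kPa
TARGET = 3.0 * 6.89476   # 3 psi expressed in kPa

def overpressure_kpa(z):
    """Peak side-on overpressure (kPa) vs z = R/W^(1/3) in m/kg^(1/3).

    Curve fit to TNT blast data in the form given by Crowl and Louvar,
    assumed here in place of the Table 2.17 equations.
    """
    ratio = 1616.0 * (1.0 + (z / 4.5)**2) / (
        math.sqrt(1.0 + (z / 0.048)**2)
        * math.sqrt(1.0 + (z / 0.32)**2)
        * math.sqrt(1.0 + (z / 1.35)**2))
    return ratio * P_AMB

# Bisection on distance: overpressure decreases monotonically with z here
lo, hi = 1.0, 200.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if overpressure_kpa(mid / W**(1.0 / 3.0)) > TARGET:
        lo = mid     # still above 3 psi: move outward
    else:
        hi = mid     # below 3 psi: move inward

R = 0.5 * (lo + hi)
print(round(R, 1), round(R / W**(1.0 / 3.0), 2))  # distance (m), scaled distance
```

The spread between this result and the 36.7 m of the text illustrates how sensitive the 3 psi range is to the choice of overpressure correlation.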
Example 2.37: 3 psi Range for a TNT Blast

Input Data:
  Mass of TNT:  100 kg

Calculated Results:
  Trial and Error Solution for 3 psi range:
    Guessed distance:    36.72 m              <-- Trial and error to get pressure
    Scaled distance, z:  7.9111 m/kg**(1/3)
  Overpressure Calculation (only valid for 0.0674 < z < 40):
    a + b*log(z):        0.9985635
    Overpressure:        20.68 kPa = 3.0002715 psi   <-- Must be 3 psi
FIGURE 2.98. Spreadsheet output for Example 2.37: 3 psi range for a TNT blast.
2.3.3.4. DISCUSSION
Strengths and Weaknesses
The strength of explosion and projectile effect models is their base of experimental data
and general simplicity of approach. A weakness relates to the difference between
indoor and outdoor effects. People may be killed indoors due to building collapse at
lower overpressure than outdoors due to overpressure alone. A rigorous treatment of
projectile effects is difficult to undertake. Explosions in built-up areas are rarely uniform in their effects. VCEs are often directional, and this directionality is not accounted for in current effect models.
Identification and Treatment of Possible Errors
The relationship between overpressure and damage level is well established for TNT, but this relationship may be in error when applied to VCEs.
Utility
Explosion effect models are easy to use. Projectile effect models are more difficult to
apply.
Resources Needed
Given quantitative results from explosion overpressure models and projectile analysis,
effects can be determined by reference to published data on damage or injury level. No
special computational resources are required.
Available Computer Codes
SAFESITE (W. E. Baker Engineering, San Antonio, TX)
HEXDAM, VASDIP, VEXDAM (Engineering Analysis, Inc., Huntsville, AL)
Several integrated computer codes also include explosion effects. These include:
DAMAGE (TNO, Apeldoorn, The Netherlands)
QRAWorks (Primatech, Inc., Columbus, OH)
2.4. Evasive Actions
2.4.1. Background
Purpose
In the event of a major incident, the consequences to people will probably be less serious than predicted by the release and incident outcome models described in Sections
2.1 and 2.2 and the effect models in Section 2.3. This is not only because of uncertainties in modeling incident outcomes or modeling limitations that may lead to conservative assumptions and results but also because of topographical and physical obstruction
factors, and because of evasive actions taken by people. Evasive actions can include
evacuation, escape, sheltering, and heroic medical treatment. This section addresses the
impact of evasive actions as mitigating factors to a CPQRA study.
Escape from a vapor cloud release is primarily associated with toxic releases. Flammable clouds exist within shorter distances from the source and if ignited present thermal and blast effects beyond the initial cloud dimensions. There is usually very little
Event Probability and Failure Frequency Analysis
This is the second of three chapters that describe the component methods of CPQRA:
consequence techniques, frequency techniques, and risk evaluation techniques. It
describes techniques used to calculate incident frequencies and subsequent consequence probabilities (Figure 1.2).
The chapter is divided into three main sections as shown in Figure 3.1. Section 3.1
describes the use of historical data to calculate incident frequencies. This is an appropriate
method when sufficient, relevant data are available on the incidents of interest. Section
3.2 describes modeling techniques used to estimate likelihoods (frequencies or probabilities) from more basic data when historical data are not available. Within this section, the
main techniques are fault tree and event tree analysis. Finally, Section 3.3 describes common-cause failure analysis, human reliability analysis, and external events analysis.
3.1. Incident Frequencies from the Historical Record
3.1.1. Background
Purpose. In many cases, the incident frequency information required in a full or partial
CPQRA (Figure 1.2) can be obtained directly from the historical record. The number
of recorded incidents can be divided by the exposure period (e.g., plant years, pipeline
[Figure 3.1 is a chart: likelihood (frequency or probability) may be estimated from the historical record (Section 3.1); from fault tree analysis and event tree analysis (Section 3.2); or with the supporting techniques of common-cause analysis, human reliability analysis, and external events analysis (Section 3.3).]
FIGURE 3.1. Organization of Chapter 3: frequency estimation techniques.
mile-years) to estimate the failure frequency. This is a straightforward technique that directly provides the top event frequency without the need for detailed frequency modeling (Section 3.2.1). Event probabilities can similarly be estimated for inclusion in event tree analysis (Section 3.2.2). Examples of the use of historical information are the conditional probability of a vapor cloud explosion (VCE) following a release, or a fire following a pipeline rupture.
We use the term likelihood for the numerical output of this technique; frequencies
or probabilities may be derived using this approach. The units of frequency are the
number of events expected per unit time. Probabilities are dimensionless and can be
used to describe the likelihood of an event during a specified time interval (e.g., 1 year)
or the conditional probability that an event will occur, given that some precursor event
has happened.
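Frequencies and probabilities are commonly connected through the Poisson model (an addition here, not stated in the text): at frequency f, the probability of at least one event in time t is 1 - exp(-f*t), which reduces to the familiar f*t for rare events. A minimal sketch:

```python
import math

def prob_at_least_one(freq_per_year, years=1.0):
    # Poisson model: P(at least one event in t) = 1 - exp(-f * t);
    # for f*t << 1 this is approximately f*t
    return 1.0 - math.exp(-freq_per_year * years)

p_rare = prob_at_least_one(2.0e-3)   # ~0.002: P ~ f*t for rare events
p_common = prob_at_least_one(1.0)    # ~0.632, not 1.0: f*t fails here
```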
Technology. The historical approach is based on recorded incident experience and is not limited by the imagination of the analyst in deriving failure mechanisms, as might
be the case with fault tree analysis. Conversely, rare incidents may not have occurred
unless the population of items is very large. However, a number of criteria have to be
satisfied for the historical likelihood to be meaningful. These include sufficient and
accurate records and applicability of the historical data to the particular process under
review. Provided these criteria are met, which is often difficult, the frequency information is relatively straightforward to calculate. Its simplicity should give added confidence to senior management and others who must review the CPQRA.
Applications. The historical frequency technique is applicable for a number of important cases in CPQRA. It is often used early in the design stage, before details of plant systems and safeguards are defined. Modeling techniques (Section 3.2.1) cannot be applied at this stage.
Similarly, the technique is ideal where failure causes are very diverse and difficult to
predict, such as with transportation incidents. Abkowitz and Galarraga (1985) provide
a typical example applied to maritime transport.
The technique is not limited to the early stages of a design. The simplicity of
approach (given suitability of the data) allows quick, economical frequency estimates
to be generated. Limited safety resources can then be directed to other important parts
of CPQRA (e.g., consequence analysis, risk evaluation).
3.1.2. Description
Description of the Technique. The historical approach is summarized by a five-step
methodology (Figure 3.2).
1. Define context.
2. Review source data.
3. Check data applicability.
4. Calculate incident frequency.
5. Validate frequency.
The conditions when frequency modeling techniques need to be employed are
indicated on the logic diagram (Figure 3.2).
[Figure 3.2 is a flowchart. The requirement is to determine an incident frequency/probability. Step 1, Define Context: clear specification of the incident for analysis. Step 2, Review Source Data: historical accident data (company/national, with adequate descriptions); determine failures and equipment exposure. Step 3, Check Data Applicability: check the effect of technological change, the plant environment, and modified safety procedures; reject nonapplicable data and modify the equipment exposure. Step 4, Calculate Likelihood: likelihood = failures/exposure, modified for technological change, plant environment, and modified safety procedures. Step 5, Validate Likelihood: recheck against known company, industry, and national data, and estimate the accuracy of the value; the answer is the incident frequency or probability. If no relevant sources exist or the sources are not appropriate, the alternative path exits the historical approach and calculates the likelihood using fault tree analysis (Section 3.2.1).]
FIGURE 3.2. Procedure for predicting incident likelihood from the historical record.
Step 1. Define Context. The historical approach may be applied at any stage of a design—conceptual, preliminary, or detailed design—or to an existing facility. After the CPQRA has been defined, the next two steps of CPQRA, system description and hazard identification (Figure 1.2), should be completed to provide the details necessary to define the incident list. These steps are potentially iterative, as the historical record is an important input to hazard identification. The output of this step is a clear specification of the incidents for which frequency estimates are sought.
Step 2. Review Source Data. All relevant historical data sources should be consulted.
The data may be found in company records, government, or industry group statistics
(Section 5.1). It is unlikely that component reliability databases (Section 5.5) will be of
much use for major incident frequencies.
The source data should be reviewed for completeness and independence. Lists of
incidents will almost certainly be incomplete and some judgment will have to be used
in this regard. The historical period must be of sufficient length to provide a statistically
significant sample size. Differences in data gathering techniques and variation in data
quality must also be evaluated.
Incident frequencies derived from lists containing only one or two incidents of a
particular type will have large uncertainties. When multiple data sources are used,
duplicate incidents must be eliminated. Sometimes, the data source will provide details
of the total plant or item exposure (plant-years, etc.). Where the exposure is not available, it will have to be estimated from the total number and age of process plants in
operation, the total number of vehicle-miles driven, etc.
Step 3. Check Data Applicability. The historical record may include data over long
periods of time (5 or more years). As the technology and scale of plant may have
changed in the period, careful review of the source data to confirm applicability is
important. It is a common mistake for designers to be overconfident that relatively
small design changes will greatly reduce failure frequencies. In addition, larger scale
plants, plants that employ new technology (such as heat recovery), or special local environmental factors may introduce new hazards not apparent in the historical record. It is commonly necessary to review incident descriptions and discard those failures not relevant to the plant and scenario under review.
Step 4. Calculate Event Likelihood. When the data are confirmed as applicable and
the incidents and exposure are consistent, the historical frequency can be obtained by
dividing the number of incidents by the exposed population. For example, if there have
been five major leaks from pressurized ammonia tanks in a population of 2500 vessel-years, the leak frequency can be estimated at 2 × 10⁻³ per vessel-year.
Where the historical data and the plant under review are not totally consistent, it is
necessary to exercise judgment to increase or decrease the incident frequency. This is
most easily done if the historical data are categorized by failure cause: only those frequencies that reflect differences between the reference plant(s) and the study plant are modified. A structured procedure for such modification is the Delphi technique (Section 5.5). Where the data are not appropriate, an alternative method, such as fault tree analysis, must be employed (Section 3.2.1).
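The Step 4 arithmetic, using the ammonia tank numbers above, can be sketched in a few lines; the normal-approximation error bounds are an illustrative extra (not from the text), and an exact Poisson interval is preferable for counts this small.

```python
import math

def historical_frequency(failures, exposure, z=1.96):
    # Point estimate: failures divided by the exposed population.
    # The bounds are a rough normal approximation, failures +/- z*sqrt(failures),
    # scaled by the exposure; exact Poisson limits are better for small counts
    f = failures / exposure
    half = z * math.sqrt(failures) / exposure
    return f, max(f - half, 0.0), f + half

# Five major leaks from pressurized ammonia tanks in 2500 vessel-years
f, lo, hi = historical_frequency(5, 2500.0)   # f = 2.0e-3 per vessel-year
```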
Step 5. Validate Frequency. It is often possible to compare the calculated incident
frequency with a known population of plant or equipment not used for data generation. This is a useful check as it can highlight an obvious mistake or indicate that some
special feature has not received adequate treatment.
Theoretical Foundation. The main assumption of the technique is that the historical
record is complete and the population from which the incidents are drawn is appropriate and sufficiently large for the event likelihoods to be statistically significant. The
record of reported occurrences should include the significant failure modes that are difficult to analyze, such as human factors, common-cause failures, management systems,
and industrial sabotage.
Input Requirements and Availability. Historical data have diverse sources and may
be difficult to acquire. Data are of two types: incident data and plant or item exposure
periods. Common sources and their availability are described in Section 5.1.
Output. The output from this analysis is a numerical value for the event likelihood,
sometimes with an indication of error bounds. In the case of a frequency value, this is
equivalent to the top event value in a fault tree analysis. If the output is a probability
(e.g., the likelihood of a flash fire vs. an unconfined explosion from a flammable vapor
cloud), it may be used directly for risk calculations.
Simplified Approaches. Many analysts use a default set of historical event likelihoods
that they have collected over the years from previous projects and studies. This obviates
the need to go back to original sources when a detailed analysis is not required, and it
may be suitable for CPQRA at an early stage.
3.1.3. Sample Problem
The sample problem illustrates the estimation of leakage frequencies from a gas pipeline.
Step 1. Define Context. The objective is to determine the leakage frequency of a proposed 8-in.-diameter, 10-mile-long, high-pressure ethane pipe, to be laid in a
semiurban area. The proposed pipeline will be seamless, coated, and cathodically protected, and will incorporate current good design and construction practices.
Step 2. Review Source Data. The database found to be the most complete and applicable is the gas transmission leak report data collected by the U.S. Department of
Transportation for the years 1970-1980 (Department of Transportation, 1970-1980).
It is based on 250,000 pipe-miles of data, making it the largest such database. It contains details of failure mode and design/construction information. An additional factor
is its availability on magnetic tape, thus permitting computer analysis. Conveniently, it
contains both incident data and pipeline exposure information.
Step 3. Check Data Applicability. The database includes all major pipelines, of mixed
design specification and ages. Thus, inappropriate pipelines and certain nonrelevant incidents must be rejected. The remaining population exposure data are still extensive
and statistically valid. Those data rejected in this example are
• Pipelines
-Pipelines that are not steel.
-Pipelines installed before 1950.
-Pipelines that are not coated, not wrapped, or not cathodically protected.
• Incidents
-Incidents arising at a longitudinal weld.
-Incidents where construction defects and materials failures occurred in pipelines that were not hydrostatically tested.
Step 4. Calculate Likelihood. The pipeline leakage frequencies are derived from the
remaining Department of Transportation data using the following procedure:
1. Estimate the base failure rate for each failure mode (i.e., corrosion, third party
impact, etc.).
2. Modify the base failure rate, as described above, where necessary to allow for
other conditions specific to this pipeline. In particular, the Department of
Transportation failure frequency attributable to external impact is found to be
diameter dependent, and data appropriate for an 8-in. pipeline should be used.
As the pipeline is to be built in a semiurban area, the failure frequency for external impact is subjectively judged to increase by a factor of 2 to reflect higher frequency digging activities. Conversely, the semiurban location is expected to
reduce the frequency of failure due to natural hazards, because of the absence of
river crossings, etc. The frequency of this failure mode is judged to be reduced
by a factor of 2. (Further discussion of the use of judgment may be found in section 5.5.)
Table 3.1 shows the application of Steps 3 and 4 to the raw frequency data. The approximate distribution of leak size (full bore, 10% of diameter, pinhole) by failure mode is then obtained from the database. This distribution is used to predict the frequency of hole sizes likely from the pipeline. Thus, if this distribution were 1, 10, and 89%, respectively, the full bore leakage frequency for the 10-mile pipeline would be

0.01 × (0.66 leaks/1000 pipe mile-years) × 10 miles = 6.6 × 10⁻⁵ per year

TABLE 3.1. Contribution of Failure Mechanisms to Pipeline Example

Failure frequency (per 1000 pipe mile-years)

Failure mode              Raw DOT data   Modified data (inappropriate data removed)   Modification factor (judgment)   Final values
Material defect           0.21           0.07                                         1.0                              0.07
Corrosion                 0.32           0.05                                         1.0                              0.05
External impact           0.50           0.24*                                        2.0                              0.48
Natural hazard            0.35           0.02                                         0.5                              0.01
Other causes              0.06           0.05                                         1.0                              0.05
Total failure frequency   1.44           0.43                                         —                                0.66

* This value is appropriate for an 8-in. pipe.
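The Table 3.1 arithmetic and the full bore frequency estimate can be reproduced in a few lines (variable names are illustrative; rates are per 1000 pipe mile-years):

```python
# Screened DOT failure rates (per 1000 pipe mile-years) and the
# judgment factors applied in Table 3.1
modified = {"material defect": 0.07, "corrosion": 0.05,
            "external impact": 0.24, "natural hazard": 0.02,
            "other causes": 0.05}
factors = {"external impact": 2.0, "natural hazard": 0.5}

final = {mode: rate * factors.get(mode, 1.0) for mode, rate in modified.items()}
total = sum(final.values())                       # 0.66 per 1000 mile-years

# Full bore leaks are ~1% of all leaks for this 10-mile pipeline
full_bore_freq = 0.01 * (total / 1000.0) * 10.0   # 6.6e-5 per year
```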
Step 5. Validate Likelihood. In the United Kingdom, the British Gas Corporation
reportedly had 75 leaks on their transmission pipelines between 1969 and 1977, on a
pipeline exposure of 84,000 mile-years. This gives a leakage frequency of 0.89 per 1000 mile-years, which is consistent with the value given in Table 3.1.
3.1.4. Discussion
Strengths and Weaknesses. The main strength of the technique is that the use of historical event data, where the accumulated experience is relevant and statistically meaningful, has great "depth" as it will include most significant routes leading to the event.
Also, it has a high degree of credibility with nonspecialist users, who have to base
important decisions on the CPQRA. The main weakness relates to accuracy and applicability. Technological change, either in the scale of plant or the design (materials, process chemistry, energy recovery), may make some historical data inapplicable.
Identification and Treatment of Possible Errors. The main source of error in the
estimation of likelihoods for the historical record is the use of data that are inappropriate or too sparse to give statistically meaningful results. Often there are good data on
the number of incidents or failures, but poor data on the population in which these failures have occurred. For these cases, it is necessary to adopt modeling techniques such
as fault tree analysis (Section 3.2.1). The sensitivity of CPQRA results to input data
and the quantification of the resulting uncertainty are discussed further in Section 4.4.
Utility. The technique is not difficult to apply, although data gathering can be time
consuming. The time required to estimate an incident frequency from historical data
can be reduced if the company/user assembles and keeps updated a database of the historical incident data (Section 5.1).
Resources. The analyst should be an engineer because technical judgment is involved.
This is especially important when checking for appropriateness of the data before acting
on it. An in-house information scientist may be able to assist in data gathering. However,
it may be more time and cost effective to use consultants for unusual problems; specialist firms exist for railway, maritime, and other industry-related incident studies (Section 5.1). Industry associations may be able to assist in identifying such expertise.
Available Computer Codes. There are no codes available for the actual performance
of the analysis from historical data. However, there are a number of computerized incident and failure databases available. These include the U.S. Department of Transportation gas pipeline databases (Department of Transportation 1970-1980), those for
liquid pipelines, and others for transportation incidents (Section 5.1).
3.2. Frequency Modeling Techniques
This section introduces the main techniques for modeling the likelihood of incidents
and the probabilities of outcomes in CPQRA: fault tree analysis (FTA), which is used
to estimate incident frequencies (e.g., major leakage of a flammable material) (Section
3.2.1), and event tree analysis, which may be used to quantitatively estimate the distribution of incident outcomes (e.g., frequencies of explosions, pool fires, flash fires, safe
dispersal) (Section 3.2.2).
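Event tree quantification of the kind described here multiplies the initiating event frequency along each branch; the release frequency and branch probabilities in the sketch below are purely hypothetical.

```python
# Hypothetical event tree for a flammable release; at each node the
# branch probabilities sum to one, so the outcome frequencies sum back
# to the initiating frequency
release_freq = 1.0e-4   # per year (assumed initiating event)
p_imm_ign = 0.1         # immediate ignition
p_del_ign = 0.2         # delayed ignition, given no immediate ignition
p_expl = 0.4            # explosion, given delayed ignition (else flash fire)

outcomes = {
    "jet/pool fire":  release_freq * p_imm_ign,
    "VCE":            release_freq * (1 - p_imm_ign) * p_del_ign * p_expl,
    "flash fire":     release_freq * (1 - p_imm_ign) * p_del_ign * (1 - p_expl),
    "safe dispersal": release_freq * (1 - p_imm_ign) * (1 - p_del_ign),
}
```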
If the understanding of the system and the underlying data is poor, modeling techniques, even complex ones, will not improve the accuracy of the final estimate of frequency. The analyst should employ the most appropriate tool for the task.
3.2.1. Fault Tree Analysis
3.2.1.1. BACKGROUND
An essential goal of a CPQRA is to establish the frequency of the identified hazardous
incidents. The historical record provides the most straightforward technique for this
purpose, subject to the conditions of applicability and adequacy of records and databases (Section 3.1). Where the historical information cannot be used, a mechanistic
model of plant component data and operator response can be employed.
In CPQRAs the analyst normally calculates a number of different reliability characteristics. A few of these are expected number of failures per year, probability of failure
on demand, and unreliability. For example, in using fault tree/event tree methods to
estimate the frequency of a large toxic material release, an analyst may have to calculate
(1) the expected frequency of loss of cooling to a reactor (with an exothermic reaction
and potential material release), (2) the probability that interlocks fail to shut down the
reactor on demand, and (3) the probability of failure on demand of an emergency
system scrubber.
Selecting the appropriate reliability parameter for any event appearing in a fault
tree/event tree and determining whether to treat the event as repairable or nonrepairable are important considerations in a CPQRA.
Simple fault tree models can be quantified using the gate-by-gate method
(described in this section). Large or complex fault trees may require the use of the minimal cut set method. The reduction of a fault tree to minimal cut sets is described in
Appendix D. Appendix E defines the typical reliability parameters calculated in a
CPQRA and describes the equations used to calculate these parameters. Selecting the
wrong reliability parameter or improperly treating an event as repairable (or
nonrepairable) can cause the analyst to grossly underestimate (or overestimate) the
expected frequency of an accident. Appendix E provides guidance on the treatment of
an event as repairable or nonrepairable.
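The gate-by-gate method named above can be sketched as follows, under the usual assumption of independent basic events; the two-gate tree and its probabilities are hypothetical.

```python
from functools import reduce

def and_gate(probs):
    # All inputs must fail simultaneously: multiply the probabilities
    return reduce(lambda a, b: a * b, probs, 1.0)

def or_gate(probs):
    # Any input suffices: P = 1 - prod(1 - p_i), which is close to
    # sum(p_i) when every p_i is small (rare-event approximation)
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical top event: loss of cooling AND (interlock fails to shut
# down the reactor OR operator fails to respond)
p_top = and_gate([0.1, or_gate([0.01, 0.05])])   # 0.1 * 0.0595 = 5.95e-3
```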
Purpose. Fault tree analysis permits the hazardous incident (top event) frequency to be
estimated from a logic model of the failure mechanisms of a system. The model is based
on the combinations of failures of more basic system components, safety systems, and
human reliability. An example would be the prediction of the frequency of a major fire
due to failure of a flammable liquid pump that has special valving and fire protection.
Because of the special design features, historical pump fire data are not applicable and
the frequency of fire must be estimated, based on knowledge of pump usage, seal leakage frequency, reliability of valves, fire protection equipment, and operator response.
Technology. Fault trees were first developed at Bell Telephone Laboratories in 1961
for missile launch control reliability. Haasl (1965) further developed the technique at
Boeing. It has been an essential part of nuclear safety analysis since the Reactor Safety
Study (Rasmussen, 1975). Currently, fault trees are finding greater application within
the U.S. chemical process industry.
The underlying technology is the use of a combination of relatively simple logic
gates (usually AND and OR gates, definitions in Table 3.2) to synthesize a failure model
of the plant. The top event frequency or probability is calculated from failure data of
more simple events. The top event might be a BLEVE, a relief system discharging to
atmosphere, or a runaway reaction. As well as providing the top event quantitative information, the fault tree can provide powerful qualitative insight into the potential failure
modes of a complex system through minimal cut set analysis (Appendix D).
A basic assumption in FTA is that all failures in a system are binary in nature. That
is, a component or operator either performs successfully or fails completely. In addition, the system is assumed to be capable of performing its task if all subcomponents
are working. Fault trees do not treat degradation of a system or its components. Similarly, FTA treats only instantaneous failures. Freeman (1983) provides an example of
the inclusion of time delays in fault trees and highlights the difficulties. Time delays are
common in the initiation of real hazardous events, but they are usually omitted in FTA.
Applications. FTA has been used extensively. It has found application in the chemical
process industry, to address safety and reliability problems, during the past few
decades. To date, the most common application in the process industry has been in the
area of reliability, and the analysis of complex interlock or control systems.
The use of FTA in CPQRA differs slightly from a reliability application, because
the top event is usually the frequency of a hazardous incident (confined explosion, leakage of flammable material, etc.).
In the late 1960s Browning (1969) used a fault tree (called a Loss Analysis Diagram,
or LAD) to analyze the safety of process systems. Among chemical companies, ICI
(Gibson, 1977), du Pont (Flournoy and Hazlebeck, 1975), and Air Products (Doelp et
al., 1984) have adopted the technique as part of their safety management programs.
There are several examples of the use of FTA in recent CPQRA studies. Fault trees
were used for frequency analysis in a major risk assessment study carried out for six hazardous installations in the Netherlands (Rijnmond Public Authority, 1982). Other
published examples include an analysis of an ethylene vaporization unit (Hauptmanns,
1980), a propane pipeline (Lawley, 1980), a combustor system (Doelp et al., 1984), a
flammables tank (Ozog, 1985), and a cumene oxidation plant (Arendt et al., 1986b).
The AIChE course on Risk Assessment in the Chemical Industry (Kaplan et al., 1986)
contains several additional examples.
3.2.1.2. DESCRIPTION
Description of the Technique. FTA is described in several references: Haasl (1965),
Nuclear Regulatory Commission (Roberts et al., 1981), ANSI Standard N41.4-1976
TABLE 3.2. Terms Used in Fault Tree Analysis

Event: An unwanted deviation from the normal or expected state of a system component.

Top event: The unwanted event or incident at the "top" of the fault tree that is traced downward to more basic failures using logic gates to determine its causes and likelihood.

Intermediate event: An event that propagates or mitigates an initiating (basic) event during the accident sequence (e.g., improper operator action fails to stop an ammonia leak, but an emergency plan mitigates the consequences).

Basic event: A fault event that is sufficiently basic that no further development is judged necessary (e.g., equipment item failure, human failure, external event).

Undeveloped event: A basic event that is not developed further, either because information is unavailable or because historical data are adequate.

Logic gate: A logical relationship between input (lower) events and a single output (higher) event. These relationships are normally represented as AND or OR gates. AND gates combine input events, all of which must exist simultaneously for the output to occur. OR gates also combine input events, but any one is sufficient to cause the output. Other gate types, which are variants of these and are occasionally used, include the inhibit gate, priority AND, exclusive OR, and majority voting gate. Details of these are given in the introductory texts noted elsewhere.

Likelihood: A measure of the expected occurrence of an event. This may be expressed as a frequency (e.g., events/year), a probability of occurrence during some time interval, or a conditional probability (e.g., probability of occurrence given that a precursor event has occurred).

Boolean algebra: The branch of mathematics describing the behavior of logical functions of variables that are binary in nature: on or off, open or closed, true or false. All coherent fault trees can be converted into an equivalent set of Boolean equations.

Minimal cut set: The smallest combination of component and human failures that, if they all occur, will cause the top event to occur. The failures all correspond to basic or undeveloped events. A top event can have many minimal cut sets, and each minimal cut set may have a different number of basic or undeveloped events. Each event in the minimal cut set is necessary for the top event to occur, and all events in the minimal cut set are sufficient for the top event to occur.
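Once minimal cut sets are known, a common first-pass quantification is the rare-event approximation: sum the products of the basic-event probabilities in each cut set (a slight overestimate of the exact inclusion-exclusion result). The events, probabilities, and cut sets below are hypothetical.

```python
# Hypothetical basic-event probabilities and minimal cut sets; no cut
# set is a superset of another, as minimality requires
basic = {"A": 1.0e-2, "B": 5.0e-3, "C": 2.0e-2, "D": 1.0e-3}
min_cut_sets = [["A", "B"], ["C", "D"], ["A", "D"]]

def cut_set_prob(cut_set, probs):
    # Product of the basic-event probabilities in one minimal cut set
    p = 1.0
    for event in cut_set:
        p *= probs[event]
    return p

# Rare-event approximation: top event probability ~ sum over cut sets
p_top = sum(cut_set_prob(cs, basic) for cs in min_cut_sets)
# 5e-5 + 2e-5 + 1e-5 = 8e-5
```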
(ANSI/IEEE, 1975), Fussell et al. (1974), McCormick (1981), Henley (1981), and
Billington and Allan (1986). These provide a fuller introduction and cover more
advanced topics than is possible in this volume. The objective of this discussion is to
provide a sufficient introduction so that relatively simple fault trees can be constructed
and common errors avoided.
The usual objectives of applying FTA to a chemical process are one or more of the
following:
• estimation of the frequency of occurrence of the incident (or of the reliability of
the equipment)
• determination of the combinations of equipment failures, operating conditions,
environmental conditions, and human errors that contribute to the incident
• identification of remedial measures for the improvement of reliability or safety, determination of their impact, and identification of which measures have the greatest impact for the lowest cost.
[Figure 3.3 is a logic diagram. In the general CPQRA procedure (see Figure 1.3), plant layout, process description, PFDs and P&IDs, equipment design, and fundamental properties feed Step 1, System Description (understand the operation of the system); experience, the historical record, HAZOP, and FMEA feed Step 2, Hazard Identification (identification of the top event). Specific to fault tree analysis are Step 3, Construction of the Fault Tree (develop the failure logic using AND and OR gates, proceeding down to basic events; computer codes are available for large fault trees); Step 4, Qualitative Examination of Structure (minimal cut set analysis, insight into all failure modes, qualitative ranking of importance, susceptibility to common-cause failure); Step 5, Quantitative Evaluation of the Fault Tree (top event frequency by the Boolean or gate-by-gate approach, using reliability data for components and operator response, subject to data uncertainty); and an optional step, Further Quantification (importance analysis, sensitivity, uncertainty).]
FIGURE 3.3. Logic diagram for application of fault tree analysis.
An FTA by itself may satisfy the CPQRA without a full risk analysis (this is equivalent to exiting the CPQRA study at step 8 in Figure 1.3). Table 3.2 provides some preliminary definitions necessary to understand the application of the technique.
The stepwise procedure for undertaking FTA is given in Figure 3.3. This procedure consists of several steps.
1. system description and choice of system boundary
2. hazard identification and selection of the top event
3. construction of the fault tree
4. qualitative examination of structure
5. quantitative evaluation of the fault tree
Some additional studies may be carried out after the completion of the above steps.
These might include sensitivity, uncertainty, and importance analyses (Section 4.5).
Steps 1 and 2 correspond to equivalent steps in Figure 1.3 for the usual sequential
approach to CPQRA.
Step 1. System Description. This is a very important step in the process of Fault Tree
Analysis since an understanding of the causes of undesirable events can be achieved
only through a thorough knowledge of how the system functions. The description
stage is essentially open-ended; it is up to the analyst to state data needs, but the following list is indicative of the information required:
• chemical and physical processes involved in the plant/system
• specific information on the whole process and every stream (e.g., chemistry,
thermodynamics, hydraulics)
• hazardous properties of materials
• plant and site layout drawings
• process conditions (PFDs)
• system hardware (P&IDs)
• equipment specifications
• operation of the plant (operating, maintenance, emergency, start-up, and
shut-down procedures)
• human factors [e.g., operations-maintenance, operator-equipment, and instrumentation (man-machine) interfaces]
• environmental factors.
Process information, hardware information, and human factors elements are critical to the development and analysis of fault trees. The analyst must choose the system
boundary to be consistent with the overall goals of the CPQRA.
Step 2. Hazard Identification. The HEP Guidelines (AIChE/CCPS, 1985) review a
number of methods for identifying hazards. These include preliminary hazard analysis
(PHA), what-if analysis, hazard and operability (HAZOP) studies, and failure modes
and effects analysis (FMEA). The output of this step must be a clear list of top event
incidents selected for FTA. Top events are usually major events such as toxic or flammable material releases, vessel failures, or runaway reactions. Table 1.2 lists the typical
initiating events and incidents that might be the focus of FTA.
The development of fault trees is time consuming. Thus, the list of top events must
be kept to a manageable size. From published studies, 10-20 top events are often adequate to characterize the risk from a single process plant of moderate complexity, but
more may be needed for a specific installation. It is rare that resources would permit the
completion of a FTA on a complete process plant, including all the identified incidents
that could occur. The historical record (Section 3.1) combined with component failure
data (Section 5.2) are commonly used for the bulk of less complex incidents analyzed.
Step 3. Construction of Fault Trees. Three approaches to fault tree construction are
manual, algorithmic, and automatic.
1. Manual Fault Tree Construction. Fault tree construction is an art rather than
a science. General rules developed by practitioners over the past
decade are described in several references, including Henley and Kumamoto
(1981), Roberts et al. (1981), and AIChE/CCPS (1985). However, there are
no specific rules indicating what events or gates to use. Fault trees are logic diagrams that show how a system can fail. Normally a fault tree is constructed
from the top down. An undesired event or outcome is chosen for study. This
event becomes the top event. Beginning with the top event, the necessary and
sufficient causes are identified together with their logical relationship. To
accomplish this, the analyst asks, "How can this happen?" or "What are the
causes of this event?" This process of deductive reasoning is continued until the
analyst judges that sufficient resolution has been obtained to allow for the later
assignment of probabilities or frequencies to the basic events.
For example, the top event might be "failure of room lamp to light." The
fault tree for this top event is constructed by considering why the lamp might
not light. The analyst determines that there are two reasons why the lamp
might not light:
• failure of the bulb to light
• failure of electricity to get to the lamp
The risk analyst then explores the causes of each of these two possibilities. Reasons for "failure of bulb to light" include:
• light bulb burned out
• no light bulb in lamp.
Reasons for "Failure of electricity to get to the lamp" include:
• failure to turn on switch
• lamp not plugged into electrical outlet
• no power in the wall electrical outlet.
If desired, the risk analyst can explore the reasons why there is no power in the
wall outlet. Reasons might include:
• wiring shorted
• fuse blown in basement
• no electrical power in house.
All of the above can be represented as OR gates.
The questioning process continues until the risk analyst is satisfied that the
failure model is adequate to describe the problem under study.
It is obvious that this questioning process could continue almost forever. In
the above simple example, the risk analyst could continue questioning why
there might be no electrical power in the house. Problems with the electrical
power distribution system, with the power generating system, or with the
supply of fuel to the power generating system could be included. This points
out the need for clearly defined boundaries of a study. Problems arising outside
the boundary will not be developed further.
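The nested questioning above maps naturally onto a tree data structure. A minimal sketch in Python (the tuple encoding and the function name are illustrative conveniences, not the format of any FTA tool):

```python
# Each gate is a (gate_type, children) pair; leaves are basic or
# undeveloped events.  The encoding is illustrative, not a standard format.
lamp_tree = (
    "OR", [
        ("OR", [                       # failure of bulb to light
            "light bulb burned out",
            "no light bulb in lamp",
        ]),
        ("OR", [                       # failure of electricity to get to lamp
            "failure to turn on switch",
            "lamp not plugged into electrical outlet",
            ("OR", [                   # no power in the wall electrical outlet
                "wiring shorted",
                "fuse blown in basement",
                "no electrical power in house",   # stops at the system boundary
            ]),
        ]),
    ],
)

def basic_events(node):
    """Collect the leaf (basic or undeveloped) events of a tree."""
    if isinstance(node, str):
        return [node]
    _gate, children = node
    return [leaf for child in children for leaf in basic_events(child)]

print(len(basic_events(lamp_tree)))   # 7 leaf events in this example
```

The undeveloped event "no electrical power in house" is a leaf precisely because the analysis stops at the system boundary (the walls of the house).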
Once the risk analyst has completed the questioning process, a fault tree can be constructed. As a general rule, the standard symbols for fault tree construction should be used. A template of fault tree symbols is available from Berol (RapiDesign R-555 Fault Tree). Since some of these symbols may be unfamiliar to the nonspecialist reader of the CPQRA, it is suggested that extra labels be added for assistance (e.g., the words "and" or "or" inside/beside the gate symbols). Figure 3.4 presents the fault tree symbols that will be used in this volume. Using the symbols in Figure 3.4, the fault tree for the above simple example is given in Figure 3.5.

OR GATE: The output occurs if one or more of the inputs to the gate exists.
AND GATE: The output occurs if all of the inputs to the gate exist simultaneously.
BASIC EVENT: The basic event represents a basic fault that requires no further development into more basic events.
INTERMEDIATE EVENT: The rectangle is often used to present descriptions of events that occur because of one or more other fault events.
HOUSE EVENT: The house event represents a condition that is assumed to exist as a boundary condition (probability of occurrence = 1).
UNDEVELOPED EVENT: The undeveloped event represents a fault event that is not examined further because information is unavailable, its consequences are insignificant, or because a system boundary has been reached.
TRANSFER SYMBOLS: The transfer in symbol indicates that the fault tree is developed further at the occurrence of the corresponding transfer out symbol (on another page). The symbols are labeled to ensure that they can be differentiated.
FIGURE 3.4. Standard fault tree symbols.
In large fault trees, it is common to label each logic gate and basic event with
a unique identifier. For example, the logic gates might be labeled G1, G2, etc.,
and the basic events might be labeled BE1, BE2, etc. These labels are often used
to input the fault tree logic into various computer programs used to compute
the frequency of the top event. Such labels have been added to the fault tree in
Figure 3.5. Note that an undeveloped event BE7 has been added to represent
the condition "no power in the house." The external walls of the house represent the system boundary for this example. A detailed description of how fault
trees are developed is presented in Henley and Kumamoto (1981).
FIGURE 3.5. Fault tree for failure of lamp to light. (Top event: failure of lamp to light. Branch 1, failure of bulb to light: light bulb burned out; no light bulb in lamp. Branch 2, failure of electricity to get to lamp: failure to turn on switch; lamp not plugged in; no electricity in wall outlet — wiring shorted, fuse blown, no power to house.)
It should be noted that fault trees are inherently subjective and may be
incomplete. However, the technique allows the fullest possible expression of
the analyst's understanding of the system and can provide great insight into potential failure modes. Manual construction of fault trees is the most common
approach.
Some common mistakes in manual fault tree construction by beginners are
as follows:
• rapid development of one branch of a tree without systematically proceeding
down level by level (tendency to want to reach basic events too rapidly and
not to use broad subevent descriptions)
• omission of an important failure mechanism, or a false assumption of negligible contribution
• incorrect combinations of frequency and probability into logic gates (see
Step 5—Quantification)
• inappropriate balance between component failures and human errors
• failure to recognize dependence of events.
2. Algorithmic Fault Tree Construction. Several attempts have been made to
devise more systematic methods for the development of fault trees using algorithms. The goal of these approaches has been to construct fault trees that are
complete, but as yet there is no way to guarantee this objective.
The first attempt to formalize fault tree construction was made by Haasl
(1965). Fussell (1973) developed a systematic approach for electrical systems
and suggested the use of models for the individual parts of a system. However,
Fussell et al. (1974b) have pointed out that formal approaches are unlikely to
replace manual construction of fault trees.
An advanced technique sometimes used for the analysis of chemical process
control systems is the directed graph (Digraph) method of Lapp and Powers
(1977). This is another kind of logic diagram. For large systems, Digraphs
become very complicated. Shafaghi et al. (1984) have developed an approach
based on the decomposition of a process system into control loops as opposed
to components. This approach is applicable to continuous processes. Prugh
(1980) has provided a set of generic patterns of fault trees that can be applied to
process systems. The patterns cover, for example, vessel rupture or explosion.
They can then be tailored to specific systems.
3. Automatic Fault Tree Synthesis. The objective of this approach is to enter
process flow diagrams or piping and instrument diagrams into the computer
and obtain fault trees for all conceivable top events. This idea has been pursued
by several groups, and the results have been a number of computer codes that
can generate fault trees.
Examples include the CAT code (Salem et al., 1981), the RIKKE code
(Taylor, 1982), and the Fault Propagation Code (Martin-Solis et al., 1980).
Although there has been some use of these codes, they have not been particularly successful. Andow (1980) describes some of the major difficulties of this
approach. Because of questionable or incomplete results, Koda (1988) cautions
against the use of automatic fault tree construction.
Step 4. Qualitative Examination of Structure. Once the fault tree is constructed, the
structure of the tree can be examined qualitatively to understand the mechanisms of
failure. This information is valuable as it provides a powerful insight into the possible
modes of failure (i.e., all the combinations of events that lead to the top event). This
process is known as minimal cut set analysis (Appendix D). In particular, the effectiveness of safeguards, the qualitative importance of various subevents, and the susceptibility to common-mode failures are highlighted. Roberts et al. (1981) discuss this
examination in detail.
For simple trees consisting of only a few gates, qualitative examination is possible
by inspection. HEP Guidelines (AIChE/CCPS, 1985) outline a straightforward matrix
system. In more complex fault trees, inspection is too difficult and more formal means
must be applied, such as Boolean analysis. Fault trees can be converted into an equivalent Boolean expression defining the top event in terms of a combination of all lower
events. This expression is usually expanded using the laws of Boolean algebra (Appendix D), until it expresses the top event as the sum of all the minimal cut sets. While the
algebra is tedious and potentially error prone for manual analysis, automated procedures are available (e.g., the MOCUS code of Fussell et al., 1974b).
The qualitative importance (essentially a ranking of the number of basic events in
all failure sets) can be determined from the minimal cut sets. The cut sets are ranked in
order of the number of basic events that must be combined to result in the top event. It
is argued that single-event cut sets (single jeopardy) are highly undesirable, as only one
failure can lead to the top event; two-event cut sets (double jeopardy) are better, etc.
Further ranking based on human error or active and passive equipment failure is also
common (AIChE/CCPS, 1985). However, the qualitative approach can be misleading. It is very possible that larger cut sets have a higher failure frequency than smaller
ones. Quantitative evaluation is required to determine the most frequent cause of the
top event.
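The expansion and absorption described above can be sketched for a small coherent tree. In this sketch (the tree encoding and function names are illustrative; production codes such as MOCUS use more efficient algorithms), OR gates union their children's cut sets and AND gates merge one cut set from each child:

```python
from itertools import product

def cut_sets(node):
    """Expand a coherent fault tree (AND/OR gates only) into cut sets."""
    if isinstance(node, str):                    # leaf: a basic event
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(child) for child in children]
    if gate == "OR":                             # union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND: merge one cut set from each child, in every combination
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    """Apply Boolean absorption: drop supersets of smaller cut sets."""
    sets = set(cut_sets(node))
    return {cs for cs in sets if not any(other < cs for other in sets)}

# Repeated event A appears both alone and under an AND gate; absorption
# reduces the cut sets {A} and {A, B} to the single minimal cut set {A}.
tree = ("OR", ["A", ("AND", ["B", "A"])])
print(minimal_cut_sets(tree))                    # {frozenset({'A'})}
```

Note how the repeated event A is handled correctly here, whereas a naive gate-by-gate calculation over the same tree would double-count it.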
Common-cause failures are due to a single event affecting several branches or
events in the fault tree (Section 3.3.1). A common cause might be a power failure disabling several electrical safety systems simultaneously, or a maintenance error
miscalibrating all sensors. If power failure appears in two arms of a fault tree joined by
an AND gate, and the gate-by-gate method (Step 5, below) is followed, the final result
will be calculated incorrectly. The Boolean evaluation will identify and deal with this.
However, there may be many elements not included in the FTA that could result in
common-cause failure (e.g., common manufacturer, common locations). Roberts et
al. (1981) list several common-cause failure categories to consider, as does the documentation for the BACFIRE computer code (Cate and Fussell, 1977).
Step 5. Quantitative Evaluation of Fault Tree. Given the final structure of a fault tree
and estimated frequency or probability for each basic or undeveloped event, it is possible to calculate the top event frequency or probability. This calculation is normally
done using the minimal cut set approach in the Boolean expression discussed in Step 4.
Details of the calculation methods are presented in Appendices D and E. This approach
is applicable to both large and small trees. Full descriptions of these calculations are
given in the standard texts on reliability referenced earlier.
An alternative is the simpler gate-by-gate approach described by Lawley (1980)
and Ozog (1985). As readers may wish to undertake simple FTA using only these
guidelines, the more complex Boolean approach is summarized in Appendix D. The
gate-by-gate approach can be used for large fault trees if dependency (repeated events)
is taken into account. It is susceptible to numerical error in the predicted top event frequency if the tree has a repeated event in different branches of an AND gate.
The gate-by-gate technique starts with the basic events of the fault tree and proceeds upward toward the top event. All inputs to a gate must be defined before calculating the gate output. All the bottom gates must be computed before proceeding to the
next higher level. The use of the gate-by-gate technique is demonstrated in the sample
problem.
The mathematical relationships used in the gate-by-gate technique are given in
Table 3.3. All inputs to a gate are assumed to be statistically independent. In addition,
the fault tree is assumed to be coherent. A coherent fault tree uses only "AND" and
"OR" gates to represent the failure logic. For the methods described in this book, time
delay gates, inhibit conditions, or "NOR" gates are not permitted. The use of these special gates is an advanced topic and is beyond the scope of this volume. These mathematical relationships can be extended to more than two inputs (additions for OR gates and
products for AND gates). When an OR gate has several inputs that are added, summing the input failure rates will overestimate the failure rate of the OR gate. This
TABLE 3.3. Rules for Gate-by-Gate Fault Tree Calculation(a)

Gate   Input pairing   Calculation for output                        Units
OR     PA OR PB        P(A OR B) = 1 - (1 - PA)(1 - PB)              probability
                                 = PA + PB - PA PB
                                 ≈ PA + PB
OR     FA OR FB        F(A OR B) = FA + FB                           time^-1
OR     PA OR FB        not permitted                                 —
AND    PA AND PB       P(A AND B) = PA PB                            probability
AND    FA AND FB       unusual pairing; reform to FA AND PB(b)       —
AND    FA AND PB       F(A AND B) = FA PB                            time^-1

(a) P, probability; F, frequency (time^-1); t, time (usually year).
(b) For an example, see sample problem.
approximation error is negligible for small probabilities and is always conservative.
Several probability terms, but only one frequency, may be brought into an AND gate.
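The Table 3.3 rules for statistically independent inputs can be expressed compactly; a minimal sketch (function names are illustrative, and the unit bookkeeping is left to comments):

```python
def or_gate_prob(*probs):
    """OR gate on independent probabilities: 1 - product(1 - Pi)."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

def or_gate_freq(*freqs):
    """OR gate on frequencies: simple sum (slightly conservative)."""
    return sum(freqs)

def and_gate(*inputs):
    """AND gate: product of inputs.  Per Table 3.3, at most one input may
    be a frequency; the output then carries units of time^-1."""
    out = 1.0
    for x in inputs:
        out *= x
    return out

print(or_gate_prob(0.01, 0.01))   # ≈ 0.0199, vs 0.02 from simple addition
print(and_gate(0.1, 0.05))        # ≈ 0.005 per year (frequency AND probability)
```

The first print shows the conservative overestimate from simple addition: for small probabilities the cross-term PA PB is negligible, so summing is a safe approximation.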
Once a tree has been fully calculated, using either the gate-by-gate technique or the
Boolean Algebra technique, a number of optional quantitative studies are possible
(Figure 3.3). These studies include sensitivity, uncertainty, and importance analyses
(Roberts et al., 1981). Sensitivity analysis is used to determine the sensitivity of the top
event frequency to possible errors in base event data. Uncertainty analysis provides a
measure of the error bounds of the top event. Monte Carlo simulation methods are
commonly employed for uncertainty analysis. Importance analysis ranks the various
minimal cut sets in order of their contribution to the total system failure frequency.
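A Monte Carlo uncertainty pass propagates basic-event uncertainty to the top event by repeated sampling. A minimal sketch follows; the three-event structure, lognormal spreads, and numerical values are assumptions chosen for illustration only, not data from any source:

```python
import random

# Illustrative structure: an event frequency ANDed with two independent
# failure probabilities (frequency x probability x probability).
def top_event_freq(f_demand, p_fail_1, p_fail_2):
    return f_demand * p_fail_1 * p_fail_2

random.seed(1)  # reproducible sampling
samples = []
for _ in range(10_000):
    # Median values with assumed lognormal spread on each basic event.
    f = 300.0 * random.lognormvariate(0.0, 0.3)    # demands per year
    p1 = 1e-2 * random.lognormvariate(0.0, 0.5)
    p2 = 1e-2 * random.lognormvariate(0.0, 0.5)
    samples.append(top_event_freq(f, p1, p2))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median {median:.1e}/yr, 95th percentile {p95:.1e}/yr")
```

The sorted samples give error bounds on the top event frequency directly, which is the output the uncertainty analysis step calls for.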
Some standard reliability definitions for quantitative evaluation are given in Appendix
E. The definitions for reliability/unreliability and availability/unavailability are useful
in specifying values for basic and undeveloped events in fault trees. Further definitions
are presented in an example for unavailability analysis of protective systems (Section 6.2).
This volume provides only an introductory overview of reliability modeling.
Detailed reliability modeling techniques are presented in standard references such as
those listed at the beginning of this section.
Theoretical Foundation. FTA is based on a graphical logical description of the failure
mechanisms of a system. It is rigorously based on the concepts of set theory, probability
analysis, and Boolean algebra. The simpler gate-by-gate analysis is not as rigorous. A key
theoretical foundation in FTA is the assumption that components and systems either
operate successfully or fail completely (i.e., the binary nature of failure). FTA is not easy
to apply to systems that exhibit degraded behavior (partial failures). An important theoretical property is that of coherence. A fault tree is mathematically coherent if all of its
gates are AND and OR gates, with no inhibit gates, time delays, etc.
Input Requirements and Availability. System description and hazard identification
(Steps 1 and 2, Figure 3.3) require detailed knowledge of the system historical and
component failure information. Formalized procedures such as HAZOP are often used
in incident identification (Section 1.4). Before construction of the fault tree (Step 3)
can begin, a specific definition of the top event is required. A detailed understanding of
the operation of the system, its component parts, and the role of operators and of possible human errors is required. Qualitative examination (Step 4) does not require numerical data or component failure rates.
Quantitative estimation of likelihood (Step 5) requires numerical data on component failures rates, protective systems unavailability (fractional dead time), and human
error rates. Table 1.2 shows a range of items for which such data may be required.
Sources of data are discussed in Sections 5.5 and 5.6. Although some of these data may
be used directly, some may need to be modified using expert judgment (Section 5.7).
Protective system unavailability needs to be calculated based on the repair time and the
inspection interval planned. In addition to component and human error data, there
may be need for data on external events (Section 3.3.3; natural events: tornados, earthquakes, etc; and man-caused events: plane crashes, dam failures, etc.). Estimates of the
accuracy of these data are necessary for more detailed uncertainty analysis.
Output. The primary output of the qualitative evaluation is the overall structure of the
failure mechanisms and a list of minimal cut sets. A ranking of the minimal cut sets is
possible based on the number of basic events that must occur to cause the top event.
However, this ranking can be deceptive, and quantitative evaluation will produce more
meaningful results.
The primary output of the quantitative evaluation is the frequency (or probability)
of the top event and lower intermediate events. Gate-by-gate methods allow the direct
calculation of intermediate event probabilities or frequencies. Minimal cut set method
require the separate calculation of the intermediate event frequency or probability. An
importance analysis identifies those basic or intermediate events with the highest
potential to cause the Top event. A sensitivity analysis identifies those basic events to
which the estimated frequency or probability of the top event is most sensitive to
uncertainty in the basic event data. Efforts to improve the accuracy (reduce the uncertainty) of the top event can be concentrated on the most sensitive basic events.
Simplified Approaches. FTA is employed when simpler historical data are not
available or applicable. If only a crude estimate of incident frequency is required, the
fault tree need not be developed to the same degree of resolution as for a detailed reliability study on a complex interlock system. The fault tree would extend downward for
fewer levels, and many of the base events would be undeveloped events rather than
basic events. The gate-by-gate calculation method is appropriate only for simple trees
that have no repeated basic events. More complex trees are usually analyzed using
Boolean methods (Appendices D and E). However, even with Boolean methods,
repeated events must be identified by the analyst. The Boolean methods will not identify the same event if it is given two different designators.
3.2.1.3. SAMPLE PROBLEM
FTA is demonstrated using the example of a leakage from a storage tank developed by
Ozog (1985). Example 1 follows the stepwise procedure outlined in Figure 3.3. Example 2 shows the conversion of a frequency-frequency AND gate pair into a frequency-conditional probability pair (Table 3.3).
EXAMPLE 1
• Step 1. System Description. The P&ID for the storage tank system is given in
Figure 3.6. The storage tank (T-1) is designed to hold a flammable liquid under
a slight positive pressure of nitrogen. A control system (PICA-1) controls pressure.
In addition, the tank is fitted with a relief valve to cope with emergencies. Liquid
is fed to the tank from tank trucks. A pump (P-1) supplies the flammable liquid
to the process.
• Step 2. Hazard Identification. HAZOP was used by Ozog (1985) to identify the
most serious hazard as a major flammable release from the tank. This incident is
the top event that will be developed in the fault tree.
• Step 3. Construction of the Fault Tree. Based on the knowledge of the system and
initiating events in the HAZOP study, the tree is constructed manually. Every
event is labeled sequentially with a B for basic or undeveloped event, M for intermediate event, and T for the top event.

FIGURE 3.6. Flammable liquid storage tank P&ID. (Legend — equipment and valves: FV, flow control valve; T, tank; P, pump; PV, pressure control valve; RV, relief valve; V, valve; 1", 1-inch size. Instruments: P, pressure; T, temperature; L, level; F, flow; I, indicator; C, controller; A, alarm; H, high; L, low.)

The procedure starts at the top event,
major flammable release, and determines the possible events that could lead to
this incident:

M1: Spill during truck unloading
M2: Tank rupture due to external event
B1: Tank drain breaks
M3: Tank rupture due to implosion
M4: Tank rupture due to overpressure
Events M1, M2, M3, and M4 require further development. However, adequate
historical/reliability data exist for Event B1 to allow it to be treated as a basic event. The
analysis proceeds downward, one level at a time, until all failure mechanisms have been
investigated to the appropriate depth. The basic events and undeveloped events are
symbolized by circles and diamonds, respectively. Further development of the undeveloped events is not thought necessary or possible.
The final fault tree (Figure 3.7) is essentially identical to that given by Ozog
(1985). However, several intermediate event boxes have been added for clarity.
Step 4. Qualitative Examination of Structure. The qualitative ranking is best done
using minimal cut set analysis (Appendix D) for this problem. However, inspection
alone shows the five major mechanisms leading to major flammable release. For example, the single events Bl, B3, B4, B5, and B6 all lead to the top event. In this example,
qualitative ranking is of limited benefit as a frequency value is wanted for CPQRA.
In this step, the analyst should review the minimal cut sets to ensure that they represent real, possible, accidents. A minimal cut set that will not cause the top event is an
indication of an error in the construction of the fault tree or in the determination of the
minimal cut sets.
Step 5. Quantitative Evaluation of Fault Tree. For this example, the method of
gate-by-gate analysis is employed to quantify the fault tree of Figure 3.7. The tree must
be carefully scanned for repeated events, as these can lead to numerical error. There are
no repeated events. The analyst must enter a numerical value for frequency (per year)
or probability (dimensionless) into every base event (Sections 5.5 and 5.6 list common
data sources).
The calculation starts at the bottom of the tree and proceeds upward to the top
event. A calculation is presented for the left most branch of the tree to event Ml, spill
during truck unloading. For clarity, only one significant figure is used in this example.
The formulas used are from Table 3.3.
The lowest gate is M9, tank overfill and release via RV-1. The two inputs to this
AND gate are probabilities:

P(M9) = P(B15) x P(B16)
      = (1 x 10^-2) x (1 x 10^-2)
      = 1 x 10^-4
FIGURE 3.7. Fault tree analysis of flammable liquid storage tank (Ozog, 1985).
At the same level as M9 is Gate M10, tank rupture due to reaction. There are four
inputs to this AND gate, all probabilities, and the Table 3.3 formula may be generalized as

P(M10) = P(B17) x P(B18) x P(B19) x P(B20)
       = (1 x 10^-3) x (1 x 10^-2) x (1 x 10^-1) x (1 x 10^-1)
       = 1 x 10^-7
Gates M9 and M10 are inputs to Gate M5, major tank spill. There are two probabilities entering the OR gate:

P(M5) = 1 - [1 - P(M9)][1 - P(M10)]
      ≈ P(M9) + P(M10)
      ≈ (1 x 10^-4) + (1 x 10^-7)
      ≈ 1 x 10^-4
Gate M1 is an intermediate event and is an AND gate with two inputs, a frequency and a probability:

F(M1) = F(B2) x P(M5)
      = (300 yr^-1) x (1 x 10^-4)
      = 3 x 10^-2 yr^-1
In a similar manner, all other frequencies and probabilities may be calculated, up to
the top event. The top event frequency (T), major flammable release, is 3 x 10^-2 yr^-1, or
about one release every 30 years. This would be used in CPQRA according to the procedure
of Figure 1.2.
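The gate-by-gate arithmetic for the left branch above can be verified in a few lines (the basic-event values are the one-significant-figure estimates used in the example; the variable names are illustrative):

```python
# Basic-event inputs for the left branch (one significant figure).
F_B2 = 300.0                                    # truck unloadings per year
P_B15, P_B16 = 1e-2, 1e-2                       # inputs to AND gate M9
P_B17, P_B18, P_B19, P_B20 = 1e-3, 1e-2, 1e-1, 1e-1   # inputs to AND gate M10

P_M9 = P_B15 * P_B16                            # AND gate: 1 x 10^-4
P_M10 = P_B17 * P_B18 * P_B19 * P_B20           # AND gate: 1 x 10^-7
P_M5 = 1 - (1 - P_M9) * (1 - P_M10)             # OR gate, exact form: ≈ 1 x 10^-4
F_M1 = F_B2 * P_M5                              # frequency AND probability
print(f"F(M1) ≈ {F_M1:.1e} per year")           # ≈ 3 x 10^-2 per year
```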
The frequencies of the five major intermediate events leading to this are
M1: Spill during truck unloading        3 x 10^-2 yr^-1
M2: Tank rupture due to external event  3 x 10^-5 yr^-1
B1: Tank drain break                    1 x 10^-4 yr^-1
M3: Tank rupture due to implosion       2 x 10^-3 yr^-1
M4: Tank rupture due to overpressure    2 x 10^-5 yr^-1
From the quantitative evaluation, the failures due to M1 and M3 contribute most
to the top event frequency, and remedial measures would be most productively
employed in these areas.
EXAMPLE 2. CONVERSION OF A FREQUENCY-FREQUENCY
AND GATE PAIRING
Given a particular LPG tank BLEVE frequency of 1 x 10^-6 per year and a nearby
public area usage frequency of 10 times per year (8 hr exposure each), an AND gate frequency combination of BLEVE and people affected should be converted to the frequency
of BLEVE and the conditional probability of people present. This probability is

(10 times/yr x 8 hr)/(365 days/yr x 24 hr/day) = 0.009

The AND gate result = 1 x 10^-6 per year x 0.009 = 9 x 10^-9 per year, the frequency of
affecting the public from the BLEVE.
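The unit bookkeeping in Example 2 can be checked directly (variable names are illustrative):

```python
# AND gate pairing: convert the exposure frequency to a conditional
# probability of presence, then multiply by the BLEVE frequency so the
# gate combines one frequency with one probability (Table 3.3).
bleve_freq = 1e-6                    # BLEVEs per year
visits_per_year = 10
hours_per_visit = 8

p_present = (visits_per_year * hours_per_visit) / (365 * 24)  # ≈ 0.009
outcome_freq = bleve_freq * p_present                         # ≈ 9 x 10^-9 per year
print(f"P(present) ≈ {p_present:.3f}; outcome frequency ≈ {outcome_freq:.1e}/yr")
```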
3.2.1.4. DISCUSSION
Strengths and Weaknesses. FTA is a very widely used technique for reliability analysis.
The theory of FTA has been well developed and there are many published texts and
papers describing its use. Several computer aids are available, and a large number of engineers have been trained in it. A particular advantage of the method is the complementary
information provided from the qualitative and quantitative analysis of the fault tree.
The main weakness is that much effort is usually required to develop the tree, and
there is a potential for error if failure paths are omitted or manual calculation methods
are incorrectly employed. The use of a computer package should virtually eliminate calculation errors. However, some of the benefit in understanding the system that is
obtained in a manual analysis might be lost. FTA requires substantial experience to
generate useful, well-structured trees in a reasonable period of time. There are some
theoretical weaknesses including the assumption of binary failure and the poor treatment of explicit time dependence and demand AND gates (Freeman, 1983). All of
these characteristics are found in chemical plant systems. In fact, many safeguards are
designed on the basis of real time delays in systems.
Identification and Treatment of Errors. Kletz (1981) provides an interesting example of the omission of important failure mechanisms in FTA. Review by other analysts is the best means
of avoiding omissions. The omission of an important failure mechanism is a failure of
the hazard identification step. Input data are subject to error, both in terms of component failure rates and human reliability (Sections 5.2 and 5.6). Such inaccuracies can be
investigated using sensitivity, uncertainty, and importance analyses (Section 4.5).
Other sources of inaccurate FTA come from false assumptions on how the plant systems are operated or tested. An example of such an error is the lack of understanding by
the risk analyst that a plant shutdown is required to test a component that requires a
frequent proof test. The misunderstanding may result in an improper calculation of the
availability of the component. Another possible error is overlooking repeated events
when using the gate-by-gate analysis. The more rigorous Boolean approach, which evaluates
minimal cut sets and resolves repeated events, automatically avoids this problem. Uncertainty analysis is an advanced topic and is well described in the PRA Procedures Guide (NUREG, 1983). Some other common errors include trying to add
frequencies and probabilities or multiplying two frequencies. Because FTA requires
substantial creativity, care, and judgment, its application by inexperienced people often
leads to error. When possible, the top and intermediate event frequencies should be
checked against the historical record. This simple check may identify gross errors,
although the errors may arise from either event data uncertainty or logic errors in the
fault tree.
Utility. Of the five steps listed in FTA, the first two (system description and hazard identification) are more fully described in HEP Guidelines (AIChE/CCPS, 1985). Small
fault tree diagrams are relatively easy to understand. However, larger fault trees are difficult and time-consuming to construct. If an event has been omitted, it is relatively
easy to update the tree. The mathematical concepts of reliability and unreliability, availability and unavailability, failure rates, frequency, and probability can be difficult to
explain to nonspecialists.
Computer programs improve the utility of FTA substantially. Basic event data can
be easily amended, and changes to the structure of the tree are easily accomplished.
Minimal cut set analysis and all quantitative calculations are handled automatically.
However, fully automatic computer generation of fault trees is not a proven tool.
Resources Needed. FTA can be undertaken by trained nonspecialist engineering staffs
for relatively simple systems. The FTA team should include people familiar with the
process design, as well as with the management, operation, and maintenance of the unit
being analyzed. For more complex systems, involving multielement interlock systems,
intricate instrumentation controls, and procedural safeguards, a specialist should be consulted. An FTA computer package is essential for large problems. Precise estimates of
time required to develop a fault tree are obviously dependent on the complexity of the
system. Construction of a single moderately complex tree leading to a single top event
can take 1-3 days by a systems analyst, and several times this for a novice. Similarly, input
data can be difficult to find and several days may be necessary for this task.
Available Computer Codes. Over the past 15 years, many computer codes have been
developed for FTA. Table 3.4 lists a sample of computer codes that have been made
TABLE 3.4. Sample Computer Codes Available for Fault Tree Analysis*

Step 3. Construction of fault tree:
  Rikke (R. Taylor, Denmark); CAT (G. Apostolakis et al., 1978); Fault Propagation (F. P. Lees, UK); Diagraph (S. Lapp and G. Powers, 1977); IRRAS-PC, plotting (EG & G, Idaho); TREDRA (JBF Associates); GRAFTER (Westinghouse); BRAVO (JBF Associates)

Step 4. Qualitative examination:
  IRRAS-PC (EG & G, Idaho); CAFTA + PC (Science Applications Int. Corp.); SAICUT (Science Applications Int. Corp.); MOCUS (JBF Associates); GRAFTER (Westinghouse); BRAVO (JBF Associates)

Step 5. Quantitative evaluation:
  IRRAS-PC (EG & G, Idaho); CAFTA + PC (Science Applications Int. Corp.); SUPERPOCUS (JBF Associates); GRAFTER (Westinghouse); BRAVO (JBF Associates); RISKMAN (Pickard, Lowe, and Garrick)

* R. Taylor, Advanced Risk Analysis, Egern Vej 16, 2000 Copenhagen, Denmark; EG & G Services Inc., P.O. Box 2266, Idaho Falls, ID 83401; F. P. Lees, Dept. of Chemical Engineering, Loughborough University, Loughborough, Leics, UK; Science Applications Int. Corp., 5150 El Camino Real, Los Altos, CA 94022; JBF Associates, 1000 Technology Drive, Knoxville, TN; Westinghouse Risk Management, P.O. Box 355, Pittsburgh, PA 15230; Pickard, Lowe, and Garrick, 2260 University Dr., Newport Beach, CA 92660.
available recently. The table covers the three steps specific to FTA (Steps 3-5, Figure
3.3). Roberts et al. (1981) and Bari et al. (1985) provide a more extensive list of FTA
computer codes suitable for mainframe application.
Computer codes are available that will draw trees, given the input logic. Fault trees
can also be drawn using readily available personal computer graphics packages.
3.2.2. Event Tree Analysis
3.2.2.1. BACKGROUND
Purpose. An event tree is a graphical logic model that identifies and quantifies possible
outcomes following an initiating event. The event tree provides systematic coverage of
the time sequence of event propagation, either through a series of protective system
actions, normal plant functions, and operator interventions (a preincident application),
or where loss of containment has occurred, through the range of consequences possible
(a postincident application). Consequences can be direct (e.g., fires, explosions) or indirect (e.g., domino incidents on adjacent plants).
Technology. Event tree structure is the same as that used in decision tree analysis
(Brown et al., 1974). Each event following the initiating event is conditional on the
occurrence of its precursor event. Outcomes of each precursor event are most often
binary (SUCCESS or FAILURE, YES or NO), but can also include multiple outcomes (e.g., 100%, 20%, or 0% closure of a control valve).
Applications. Event trees have found widespread applications in risk analyses for both
the nuclear and chemical industries. Two distinct applications can be identified. The
preincident application examines the systems in place that would prevent incident-precursors from developing into incidents. The event tree analysis of such a
system is often sufficient for the purposes of estimating the safety of the system. The
postincident application is used to identify incident outcomes. The event tree analysis
can be sufficient for this application. Studies such as the Reactor Safety Study (Rasmussen, 1975) have used preincident event trees to demonstrate the effectiveness of successive levels of protective systems. Some CPI risk assessments (Health and Safety Executive, 1981; World Bank, 1985) use postincident event trees. Protective systems
are also investigated this way. Arendt (1986a) demonstrates the use of event trees to
investigate hazards from a heater start-up sequence. Human reliability analysis uses
event tree techniques (see Section 3.3.2).
3.2.2.2. DESCRIPTION
Description of Technique. Preincident event trees can be used to evaluate the effectiveness of a multielement protective system. A postincident event tree can be used to
identify and evaluate quantitatively the various incident outcomes (e.g., flash fire, UVCE, BLEVE, safe dispersal) that might arise from a single release of hazardous material
(Figure 2.1). Figure 3.8 (from EFCE, 1985) demonstrates these two uses in a chemical
plant context. A preincident example is loss of coolant in an exothermic reactor subject
to runaway. A postincident example is release of a flammable material at a point (X)
and incident outcomes at a downwind location (Y). In fact, the two uses are complementary: the postincident event tree can be appended to those branches of the preincident event tree that result in FAILURE of the safety system. Good descriptions of preincident event trees are given in HEP Guidelines (AIChE/CCPS, 1985) and the PRA Procedures Guide (NUREG, 1983).

[Figure 3.8 shows two trees. The preincident event tree starts from reactor coolant failure (A) and branches on coolant flow alarm working (B), reactor temperature alarm working (C), and reactor dump valve working (D), giving eight sequences that end in either safe shutdown or runaway reaction. The postincident event tree starts from a flammable release at X (A) and branches on ignition at X (B), wind to Y (C), ignition at Y (D), and explosion on ignition (E), giving sequences that end in explosion at X, fire at X, explosion at Y, fire at Y, or safe dispersal.]
FIGURE 3.8. Examples of preincident and postincident event trees. From EFCE (1985).
Fault trees are often used to model the branching from a node of an event tree.
Also, the top event of a fault tree may be the initiating event of an event tree. By computing the frequency of the top event of a fault tree, the corresponding branching or
initiating event frequency can also be estimated. Note the difference in meaning of the
term initiating event between the applications of fault tree and event tree analysis. A
fault tree may have many initiating events that lead to the single top event, but an event
tree will have only one initiating event that leads to many possible outcomes.
The construction of an event tree is sequential, and like fault tree analysis, is
top-down (left-right in the usual event tree convention). The construction begins with
the initiating event, and the temporal sequences of occurrence of all relevant safety
functions or events are entered. Each branch of the event tree represents a separate outcome (event sequence). The sequence is shown in the logic diagram (Figure 3.9).
[Figure 3.9 shows the sequence: Step 1, identify the initiating event; Step 2, identify safety function/hazard and determine outcomes; Step 3, construct the event tree to all important outcomes; Step 4, classify the outcomes in categories of similar consequence; Step 5, estimate the probability of each branch in the event tree; Step 6, quantify the outcomes; Step 7, test the outcomes.]
FIGURE 3.9. Logic diagram for event tree analysis.
Step 1. Identify the Initiating Event. The initiating event, in many CPQRAs, is a failure event corresponding to a release of hazardous material. This failure event will have
been identified by one of the methods discussed in Chapter 1 and in more detail in HEP
Guidelines (AIChE/CCPS, 1985). The initiating event might correspond to a pipe leak,
a vessel rupture, an internal explosion, etc. The frequency of this incident is estimated from the historical record (Section 3.1) or by FTA (Section 3.2.1).
The event tree is used to trace the initiating event through its various hazardous
consequences. It will be simplest for incidents that have few possible outcomes (e.g.,
toxic releases or internal explosions). Releases that are both flammable and toxic may
have many possible outcomes.
Step 2. Identify Safety Function/Hazard Promoting Factor and Determine Outcomes. A
safety function is a device, action, or barrier, that can interrupt the sequence from an
initiating event to a hazardous outcome. A hazard promoting factor may change the
final outcome (e.g., from a dispersing cloud to a flash fire or to a VCE). It is most often
used in postincident analysis.
Safety functions can be of many types, most of which can be characterized as
having outcomes of either success or failure on demand. Some examples are
• automatic safety systems
• alarms to alert operators
• barriers or containment to limit the effect of an accident.
Hazard promoting factors are more varied and include
• ignitions or no ignition of release
• explosion or flash fire
• liquid spill contained in dike or not
• daytime or nighttime
• meteorological conditions.
A heading is used to label a safety function or hazard promoting factor. Most of the
above branches are binary choices. Meteorological conditions may be represented by
ranges of wind speeds, atmospheric stabilities, and wind directions.
The analyst must be careful to list all those headings that could materially affect the
outcome of the initiating event. These headings must be in chronological order of
impact on the system. Thus, headings, such as multiple ignition sources, may appear
more than once in the event tree depending on what is happening in time.
Step 3. Construct the Event Tree. The event tree graphically displays the chronological progression of an incident. Starting with the initiating event, the event tree is constructed (conventionally) left to right. At each heading or node, two or more
alternatives are analyzed (Step 2) until a final outcome is obtained for each node. Only
nodes that materially affect the outcome should be shown explicitly in the event tree.
Some branches may be more fully developed than others. In a preincident analysis, the
final sequence might correspond to successful termination of some initiating event or a
specific failure mode. The listing of the safe recovery and incident conditions is an
important output of this analysis. For a postincident analysis, the final results might
correspond to the type of incident outcome (e.g., BLEVE, UVCE, flash fire; safe
dispersal).
The event headings should be indicated at the top of the page, over the appropriate
branch of the event tree. It is usual to have SUCCESS or YES branch upward and
FAILURE or NO branch downward. Starting with the initiating event, many analysts
label each heading with a letter identifier. Every final event sequence can then be specified with a unique letter combination (Figure 3.6). A bar over the letter indicates that
the designated event did not occur.
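The labeling convention can be sketched as code that enumerates every branch combination of a small tree and builds its letter identifier (the headings and the overbar rendering are ours, chosen only to illustrate the convention described above):

```python
from itertools import product

# Headings after initiating event A; True means the event occurred
# (the SUCCESS/YES branch), False means it did not.
headings = ["B", "C", "D"]

sequences = []
for combo in product([True, False], repeat=len(headings)):
    # An overbar (rendered here with a combining macron) marks an event
    # that did not occur.
    label = "A" + "".join(h if occurred else h + "\u0304"
                          for h, occurred in zip(headings, combo))
    sequences.append(label)

print(sequences)  # 8 unique labels, from 'ABCD' down to 'AB̄C̄D̄'
```

Each of the 2³ = 8 final event sequences receives a unique letter combination, as the text describes.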
Step 4. Classify the Outcomes. The objective in constructing the event tree is to identify important possible outcomes that have a bearing on the CPQRA. Thus, if an estimate of the risk of offsite fatalities is the goal of the analysis, only outcomes relevant to
that outcome (offsite fatalities) need be developed. Branches leading to lesser consequences can be left undeveloped. Where outcomes are of significance, it is often adequate to stop at the incident itself (e.g., explosion, large drifting toxic vapor cloud).
The subsequent risk analysis calculation (Section 4.4) will consider individual influencing factors (e.g., wind direction or atmospheric stability) on possible consequences
(Figure 1.2).
Many outcomes developed through different branches of the event tree will be
similar (e.g., an explosion may arise from more than one sequence of events). The final
event tree outcomes can be classified according to type of consequence model that must
be employed to complete the analysis.
Step 5. Estimate the Probability of Each Branch in the Event Tree. Each heading in the
event tree (other than the initiating event) corresponds to a conditional probability of
some outcome if the preceding event has occurred. Thus, the probabilities associated
with each branch must sum to 1.0 for each heading. This is true for either binary or
multiple outcomes from a node.
The source of conditional probability data may be the historical record (Section
5.1), plant and process data (Section 5.2), chemical data (Section 5.3), environmental
data (Section 5.4), equipment reliability data (Section 5.5), human reliability data
(Section 5.6), and use of expert opinion (Section 5.7). It may be necessary to use fault
tree techniques to determine some probabilities, especially for complex safety systems
encountered in preincident analyses.
Step 6. Quantify the Outcomes. The frequency of each outcome may be determined
by multiplying the initiating event frequency with the conditional probabilities along
each path leading to that outcome. As a check, all the outcome frequencies must sum to the initiating event frequency. The above calculation assumes no dependency among events and no partial successes or failures; either of these conditions complicates the numerical treatment beyond the scope of this book.
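Steps 5 and 6, together with their consistency checks, can be sketched in a few lines (the headings and probabilities below are invented; a real tree would also allow later headings to depend on which earlier branch was taken, rather than expanding a full cartesian product):

```python
from itertools import product

init_freq = 1.0e-4  # initiating event frequency, per year (assumed value)

# Each heading maps its branch outcomes to conditional probabilities.
headings = [
    {"ignition": 0.1, "no ignition": 0.9},                    # binary node
    {"explosion": 0.3, "flash fire": 0.5, "dispersal": 0.2},  # multiple outcomes
]

# Step 5 check: branch probabilities at every heading must sum to 1.0.
for node in headings:
    assert abs(sum(node.values()) - 1.0) < 1e-9

# Step 6: outcome frequency = initiating frequency times the conditional
# probabilities along the path.
outcomes = {}
for path in product(*(h.items() for h in headings)):
    label = " / ".join(name for name, _ in path)
    freq = init_freq
    for _, p in path:
        freq *= p
    outcomes[label] = freq

# Check: outcome frequencies must sum back to the initiating event frequency.
assert abs(sum(outcomes.values()) - init_freq) < 1e-12
```

The final assertion is exactly the check prescribed in Step 6: the conditional probabilities at each node sum to 1.0, so the outcome frequencies must sum to the initiating event frequency.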
Step 7. Test the Outcomes. As with fault trees, poor event tree analysis can lead to
results that are inaccurate (e.g., due to poor data) or incorrect (e.g., important
branches have been omitted). An important step in the analysis is to test the results
with common sense and against the historical record. This is best done by an independent reviewer.
Theoretical Foundation. Event trees are pictorial representations of logic models or
truth tables. Their theoretical foundation is based on logic theory. Further discussions
are given in Henley and Kumamoto (1981) and Lees (1980). The frequency of an outcome is defined as the product of the initiating event frequency and all succeeding conditional event probabilities leading to that outcome.
Input Requirements and Availability. Analysts require a complete understanding of
the system under consideration and of the mechanisms that lead to all the hazardous
outcomes. This may be in the form of a time sequence of instructions, control actions,
or in the sequence of physical events that lead to hazardous consequences (e.g., the
spreading characteristics of a dense vapor cloud).
The starting point in event tree analysis is the specification of the initiating event.
This event may be identified using other CPQRA techniques such as HAZOP, the historical record, or experience. The quantitative evaluation of the event tree requires conditional probabilities at every node. As discussed earlier, these may be based on
reliability data, the historical record, experience, or from fault tree modeling.
Output. The output of event tree modeling can be either qualitative or quantitative.
The qualitative output shows the number of outcomes that result in success versus failure of the protective system in a preincident application. The qualitative output from a
postincident analysis is the number of more hazardous outcomes versus less hazardous
ones. It highlights failure routes for which no protective system can intervene (single-point failures). The quantitative output is the frequency of each event outcome.
These outputs (which might specify BLEVE, flash fire, or VCE frequencies) are employed directly in CPQRA risk calculations.
Simplified Approaches. The event tree technique is a relatively simple approach.
However, it can be used in various levels of detail. The level to use for a particular task
can be selected based on the importance of the event or the amount of information
available.
3.2.2.3. SAMPLE PROBLEM
The sample problem is a postincident analysis of a large leakage of pressurized flammable material from an isolated LPG storage tank. An engineering analysis of the problem
indicates that the potential consequences include BLEVE of the tank if the leak is
ignited (either immediately or by flashback). If the leak does not immediately ignite, it
can drift toward a populated area with several ignition sources and explode (VCE), or
produce a flash fire. Other downwind areas have a lower probability of ignition. The
data relevant to the event tree are given in Table 3.5.
Using Table 3.5, an event tree is developed to predict possible outcomes from the
leakage of LPG. This event tree is not exhaustive. Not every outcome is developed to
completion; some events are terminated at entry points to specific consequence
models. For example, three outcomes are possible from BLEVEs [thermal impact,
physical overpressure, and fragments (Section 2.2.3)]. In practice, these outcomes
would be investigated separately in the BLEVE consequence model calculation.
TABLE 3.5. Sample Event Tree Input Data

Event                                      Frequency or probability*   Source of data
A. Large leakage of pressurized LPG        1 × 10⁻⁴/yr                 Fault tree analysis
B. Immediate ignition at tank              0.1                         Expert opinion
C. Wind blowing toward populated area      0.15                        Wind rose data
D. Delayed ignition near populated area    0.9                         Expert opinion
E. VCE rather than flash fire              0.5                         Historical data
F. Jet flame strikes the LPG tank          0.2                         Tank layout geometry

* These data are for illustrative purposes only.
The event tree for the LPG leak initiating event is given in Figure 3.10. From this,
a total of six outcomes are predicted. These outcomes and their predicted frequencies
are given in Table 3.6.
The total frequency of all outcomes provides a check: it must equal the initiating event frequency of 1 × 10⁻⁴ per year (i.e., 100.0 × 10⁻⁶ per year).
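The outcome frequencies in Figure 3.10 and Table 3.6 can be reproduced directly from the Table 3.5 data (a short sketch; the variable names are ours, and the data remain illustrative only):

```python
# Sequence frequency = initiating frequency x conditional probabilities along
# the path; a bar over a letter in the text corresponds to taking 1 - p here.
A = 1.0e-4   # large LPG leakage, per year
B = 0.10     # immediate ignition at tank
C = 0.15     # wind blowing toward populated area
D = 0.90     # delayed ignition
E = 0.50     # VCE rather than flash fire
F = 0.20     # ignited jet strikes the LPG tank

nB, nC, nD, nE, nF = (1 - B), (1 - C), (1 - D), (1 - E), (1 - F)

outcomes = {
    "BLEVE (ABF)":                      A * B * F,
    "Local thermal hazard (AB notF)":   A * B * nF,
    "VCE near population":              A * nB * C * D * E,
    "Flash fire + BLEVE (near)":        A * nB * C * D * nE * F,
    "Flash fire (near)":                A * nB * C * D * nE * nF,
    "Safe dispersal (near)":            A * nB * C * nD,
    "VCE elsewhere":                    A * nB * nC * D * E,
    "Flash fire + BLEVE (elsewhere)":   A * nB * nC * D * nE * F,
    "Flash fire (elsewhere)":           A * nB * nC * D * nE * nF,
    "Safe dispersal (elsewhere)":       A * nB * nC * nD,
}

total = sum(outcomes.values())
assert abs(total - A) < 1e-12  # all outcomes sum to the initiating frequency
```

For example, the BLEVE sequence gives 10⁻⁴ × 0.1 × 0.2 = 2.0 × 10⁻⁶ per year, and the ten frequencies together sum to the initiating event frequency of 1 × 10⁻⁴ per year.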
3.2.2.4. DISCUSSION
Strengths and Weaknesses. An important strength of the event tree is that it portrays
the event outcomes in a systematic, logical, self-documenting form that is easily
audited by others.
The logical and arithmetic computations are simple and the format is usually compact. Preincident event trees highlight the value and potential weaknesses of protective
systems, especially indicating outcomes that lead directly to failures with no intervening protective measures. Postincident event trees highlight the range of outcomes that
are possible from a given incident, including domino incidents, thereby ensuring that
important potential consequences are not overlooked.
The event tree assumes all events to be independent, with any outcome conditional
only on the preceding outcome branch. Every node of an event tree doubles the
number of outcomes (binary logic) and increases the complexity of classification and
combination of frequency. From a practical standpoint this limits the number of headings that can be reasonably handled to 7 or 8.
Identification and Treatment of Possible Errors. If multiple fault trees are used to
establish the frequencies of various nodes or decision points, common cause failures or
mutually exclusive events can arise that invalidate event tree logic. These problems arise
if the same basic event appears in the fault trees that are used to establish the probabilities of branching at the various event tree nodes. For example, an electrical power failure basic event may appear in several fault trees that support an event tree. Failure by
the risk analyst to recognize and deal with the commonality of the electrical power failure basic event will result in serious errors. Omission of outcomes can lead to serious
[Figure 3.10 shows the event tree for the sample problem. The initiating event A (large LPG leakage, 1 × 10⁻⁴/yr) branches on B (immediate ignition), C (wind to populated area), D (delayed ignition), E (VCE rather than flash fire), and F (ignited jet points at LPG tank), giving ten outcomes: BLEVE (ABF, 2.0 × 10⁻⁶/yr); local thermal hazard (ABF̄, 8.0 × 10⁻⁶/yr); VCE (AB̄CDE, 6.1 × 10⁻⁶/yr); flash fire and BLEVE (AB̄CDĒF, 1.2 × 10⁻⁶/yr); flash fire (AB̄CDĒF̄, 4.9 × 10⁻⁶/yr); safe dispersal (AB̄CD̄, 1.4 × 10⁻⁶/yr); VCE (AB̄C̄DE, 34.4 × 10⁻⁶/yr); flash fire and BLEVE (AB̄C̄DĒF, 6.9 × 10⁻⁶/yr); flash fire (AB̄C̄DĒF̄, 27.5 × 10⁻⁶/yr); and safe dispersal (AB̄C̄D̄, 7.6 × 10⁻⁶/yr). Total: 1 × 10⁻⁴/yr.]
FIGURE 3.10. Event tree outcomes for sample problem.
TABLE 3.6. Sample Event Tree Outcomes and Frequencies

Outcome                Sequences leading to outcome    Frequency (per year)
BLEVE                  ABF                             2.0 × 10⁻⁶ = 2.0 × 10⁻⁶
Flash fire             AB̄CDĒF̄ + AB̄C̄DĒF̄                4.9 × 10⁻⁶ + 27.5 × 10⁻⁶ = 32.4 × 10⁻⁶
Flash fire and BLEVE   AB̄CDĒF + AB̄C̄DĒF               1.2 × 10⁻⁶ + 6.9 × 10⁻⁶ = 8.1 × 10⁻⁶
VCE                    AB̄CDE + AB̄C̄DE                  6.1 × 10⁻⁶ + 34.4 × 10⁻⁶ = 40.5 × 10⁻⁶
Local thermal hazard   ABF̄                             8.0 × 10⁻⁶ = 8.0 × 10⁻⁶
Safe dispersal         AB̄CD̄ + AB̄C̄D̄                    1.4 × 10⁻⁶ + 7.6 × 10⁻⁶ = 9.0 × 10⁻⁶

Total, all outcomes                                    = 100 × 10⁻⁶
error (e.g., domino effect on nearby equipment). Independent review of final event
trees is the best method to identify such faults (Step 7, Figure 3.9).
Errors can arise in the conditional probability data leading to major errors in the
predicted final outcome frequencies. The analyst should document sources of data
employed to allow for subsequent checking.
Utility. Event trees are a straightforward technique to use. They are a graphical form of
a logic table and are easier to understand by nonspecialists than fault trees. Provided the
assumptions of no dependency and total success and failure are met, the calculations are
easy. They are useful for both preincident and postincident analyses, and are especially
helpful in the analysis of sequential systems or in human error problems.
Resources Needed. Except for unusually complicated problems, event trees tend not
to require significant resources. Because protective system designs tend to be very complex (Section 6.2), postincident analyses tend to be easier to apply than preincident
analyses.
Computer Codes Available.
ETA II, Science Applications International Corp., 5150 El Camino Real, Los Altos,
CA 94022
RISKMAN, Pickard, Lowe and Garrick, Newport Beach, CA
SUPER, Westinghouse Risk Management, P.O. Box 355, Pittsburgh, PA 15230.
3.3. Complementary Plant-Modeling Techniques
The previous section (3.2) discusses the analysis of fault trees and event trees, by using
frequency and probability data. For ease of presentation in that section, some factors
that influence the quality of the analysis were deferred. In this section (3.3) we discuss
common-cause failure analysis (3.3.1), human reliability analysis (3.3.2), and external
events analysis (3.3.3). The results of any of these analyses can be used in the frequency
modeling techniques of Section 3.2, and may have a major effect on the results of those
techniques.
3.3.1. Common Cause Failure Analysis
Functional redundancy and diversity are used throughout many industries to improve
the reliability and/or safety of selected systems. There is an increasing trend toward the
use of redundancy and diversity in the CPI, particularly in instrumentation and control
applications. Specifically, companies in the CPI have provided multiple layers of protection (multiple safeguards) to help ensure adequate protection against process hazards. Safeguards include both engineering and administrative controls that help
prevent or mitigate process upsets (e.g., releases) that can threaten employees, the
public, the environment, equipment, and/or facilities. Examples of safeguards include
(1) process alarms, (2) shutdown interlocks, (3) relief systems, (4) hydrocarbon detectors, (5) fire protection systems, and (6) plant process safety policies and procedures.
Using multiple safeguards often reduces risk. However, the very high reliability
theoretically achievable through the use of multiple safeguards, particularly through
the use of redundant components, can sometimes be compromised by single events
that can fail multiple safeguards. For example, all temperature sensors in an emergency
shutdown system can fail because of a miscalibration error during maintenance activities. Events that defeat multiple safeguards and are attributable to a single cause of failure are generally referred to as dependent failure events, and they have consistently been shown to be important contributors to risk. Normally, in a CPQRA, several types of dependent failure events are
addressed explicitly in the failure logic models used to estimate accident frequencies
(e.g., failure of a support system such as instrument air). The causes of dependent failures that are not addressed explicitly, if judged to be important, should be addressed in
a common cause failure (CCF) analysis.
The importance of dependent failures and CCFs in systems analysis has long been
recognized. When multiple safeguards are used to help ensure adequate protection
against process hazards, accidents cannot occur unless multiple failures occur. Multiple
failures can happen as the result of the independent failure of each safeguard; however,
operational experience shows that multiple independent failures are rare. This is easily
understood with the simple, numerical example by Paula et al. (1997b). Consider a
shutdown interlock that consists of three redundant temperature switches—A, B, and
C. Each switch is designed to individually shut down the system upon high temperature. Also, assume that the probabilities that the switches will fail on demand (P{A}, P{B}, and P{C}) are constant and equal to 0.01 (1 in 100 demands). If it is further assumed that failures of these three switches are independent, then the total probability that all switches will fail, P{S}, is given by

P{S} = P{A} × P{B} × P{C} = (0.01)³ = 10⁻⁶ per demand
That is, the system is expected to fail once in every one million demands. Further, if we had assumed that the probabilities of switch failure on demand (P{A}, P{B}, and P{C}) were equal to 0.001 (1 in 1000 demands), which may be difficult but possible to obtain in practical applications, then the system would be expected to fail once in every one billion demands. These are rather unbelievable numbers because, as exemplified
later, systems with two, three, four, or even higher levels of redundancy have failed
often in commercial and industrial applications, including CPI facilities, aircraft, and
nuclear power plants. That is, the assumption of independence among redundant safeguards results in unrealistic, very low estimates for the probability of loss of all safeguards; it gives too much credit for multiple safeguards, potentially causing a gross
underestimation of risk.
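The gap between these theoretical figures and operational experience can be illustrated with the beta-factor model, one common parametric treatment of CCF (the beta value below is assumed purely for illustration, not taken from this book's data):

```python
# Contrast pure independence with a beta-factor CCF adjustment for the
# three-switch shutdown interlock example.
p = 0.01      # failure-on-demand probability of one temperature switch
beta = 0.1    # assumed fraction of switch failures that are common cause

# Independence assumption: all three redundant switches fail together
# only by coincidence.
p_independent = p ** 3                      # about 1e-6 per demand

# Beta-factor model: a fraction beta of each switch's failures defeats all
# channels at once (e.g., a shared miscalibration during maintenance).
p_ccf = beta * p
p_with_ccf = p_ccf + ((1 - beta) * p) ** 3  # the CCF term dominates
```

Even a modest common-cause fraction moves the system failure probability from about 10⁻⁶ to roughly 10⁻³ per demand, three orders of magnitude worse than the independence calculation suggests.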
But what makes the simple probabilistic evaluations presented above unrealistic?
As more complex designs evolved in the 1950s, engineers and reliability specialists discovered that multiple safeguards can also fail because of a single event (a dependent failure event) (Epler, 1969; Laurence, 1960; Siddall, 1957). In the previous example, all
three switches in the high temperature shutdown interlock could be miscalibrated
during maintenance, resulting in the functional unavailability of the entire system.
Because they are attributable to a single cause of failure, these dependent failure events
are often called CCF events. Many authors have used different terminology to describe
this class of events, including "cross-linked failure," "systematic failure," "common
disaster," and "common mode failure" (Edwards, 1979; Watson and Edwards, 1979;
Paula, 1995). This section defines and exemplifies dependent failures and CCFs, and it
provides guidance and quantitative data to account for CCFs when assessing risk for
CPI facilities. Examples where CCF played a major role in accidents are
• Engineering Construction. Hagen (1980) quotes a case of a CCF when a package of 4-year-old diodes in a rod drive system failed during a required trip of a
nuclear reactor, defeating all redundant systems.
• External Environment. Hoyle (1982) reports a silicon tetrachloride incident
due to common cause failure.
• Operation Procedure. Hagen (1980) discusses the Browns Ferry fire incident.
A fire induced by human error defeated several systems.
Other incidents, mainly from the nuclear industry, have been reported by Epler
(1969). More recent reviews are given by Edwards et al. (1979), Fleming et al. (1986),
and Paula et al. (1985).
There are numerous possible sources of dependencies among redundant equipment. Figure 3.11 presents a comprehensive categorization scheme for the causes of
dependent failures, including events external to the chemical process plant (e.g., earthquakes, fires, floods, high winds, aircraft collisions) and events internal to the plant
(e.g., fires and explosions). Some of these events can be treated in a CPQRA by developing specific event tree and fault tree logic models (Budnitz, 1984; NUREG, 1983).
Other causes of dependency include failure of common support systems (common
power supply, common lube oil, common instrument air system, etc.) and functional
dependencies (e.g., loss of raw water system booster pumps on loss of suction from the
discharge of the upstream low-lift pump). Again, most CPQRAs incorporate these
dependencies explicitly in the analysis (i.e., in the event tree and fault tree logic
models).
There are still other important causes of dependent failures in Figure 3.11 that are
generally not explicitly addressed in a CPQRA. They result from harsh environments
[Figure 3.11 is a classification tree. Its top node, "Causes of Dependent Failures in Systems with Redundancy," branches into Engineering (Design: functional deficiencies, realization faults; Construction: manufacture, installation and commissioning) and Operation (Procedural: maintenance and test, operation; Environmental: normal extremes, energetic events). Example causes listed under these branches include inadequate standards, inspection, testing, and quality control; design errors and limitations; operator errors; imperfect repair, testing, calibration, and procedures; temperature, pressure, humidity, vibration, corrosion, contamination, interference, radiation, and static charge; and fire, flood, weather, earthquake, explosion, missiles, electric power, and chemical sources.]
FIGURE 3.11. Classification system for dependent failures (Edwards et al., 1979).
(high temperature, humidity, corrosion, vibration and impact, etc.), inadequate
design, manufacturing deficiencies, installation and commissioning errors, maintenance errors, and other causes. These causes are, in general, so numerous that explicitly
representing them in the quantitative risk analysis models (event trees or fault trees)
greatly increases the size of the CPQRA and can overwhelm the analyst. This group of
dependent failures and any other known dependencies that are not, for whatever
reason, explicitly modeled are denoted residual CCFs.
3.3.1.1. BACKGROUND
Early CCF techniques and studies were mostly either qualitative (Epler, 1969, 1977;
Putney, 1981; Rasmuson et al., 1976, 1982; Rooney et al., 1978; Wagner et al., 1977;
Worrell, 1985) or quantitative (Apostolakis, 1976; Apostolakis et al., 1983; Atwood,
1983a; Fleming, 1974; Fleming et al., 1978; Stamatelatos, 1982; Vaurio, 1981). A
qualitative CCF analysis investigates the factors that create dependencies among redundant components (Paula et al., 1990, 1991). It often includes analysis of the causes of
dependent failures, and attempts to identify those causes most likely to lead to a CCF.
The insights provided by the qualitative analysis are useful in developing recommendations regarding defenses against CCFs.
A quantitative CCF analysis evaluates the probability of occurrence of each postulated CCF event. These probabilities can then be used in the CPQRA. Recent quantitative CCF analyses have also included detailed analyses of available data (e.g., failure
occurrence reports) to help identify CCF events and to estimate parameters for quantitative CCF models (Battle et al., 1983; Edwards et al., 1979; Mosleh et al., 1988; NUS
Corporation, 1983; Paula, 1988; Poucet et al., 1987).
Several models are available for evaluating CCF probabilities. These models are
often referred to as parametric models and include the Beta Factor (Fleming, 1975),
the Multiple Greek Letter (Mosleh, 1988), the Binomial Failure Rate (Atwood, 1980)
and several others. Although there are theoretical differences between these models,
practical applications indicate that model selection is not critical in a CCF analysis.
Analysis results are much more sensitive to the qualitative and data analysis portions of
the CCF analysis (Mosleh et al., 1988; Poucet et al., 1987).
The current consensus of risk assessment experts is that an adequate CCF analysis
should rely on both qualitative and quantitative techniques. The integrated CCF procedure described in this section emphasizes both of these.
Purpose. CCF analysis objectives include the following: (1) identification of relevant
CCF events, (2) quantification of CCF contributors, and (3) formulation of defense
alternatives and stipulation of recommendations to prevent CCFs. The first objective
includes identifying the most relevant causes of CCF events, the second permits comparisons to be made with other contributors to system unavailability and plant risk, and
the third relies extensively on the first two objectives.
Philosophy. The underlying philosophy is to recognize the potential for CCFs (i.e.,
accept that they might exist in the system) and to account for CCFs by making the best
use of available historical experience (including plant-specific and generic data) based
on a thorough understanding of the nature of CCF events.
To understand CCF events and to model them, it is necessary to answer questions
such as the following (Paula et al., 1990): Why do components fail or why are they
unavailable? What is it that can lead to multiple failures? Is there anything at a particular facility that could prevent the occurrence of such multiple failures?
These questions lead to the consideration of three factors. The first is the root cause
of component failure or unavailability. The root cause is the specific event or factor that
may lead to a CCF. A detailed CCF analysis requires proper identification of the root
cause. The degree of detail in specifying the root cause is dictated by how specific an
analysis needs to be, but it is clear that a thorough understanding of CCF events and
how they can be prevented can only come from a detailed specification of the types of
root causes.
Given the existence of the root cause, the second factor is the presence of a linking
or coupling mechanism, which is what leads to multiple equipment failures. The coupling mechanism explains why a particular root cause impacts several components.
Obviously, each component fails because of its susceptibility to the conditions created
by the root cause; the role of the link or coupling mechanism is that it makes those conditions common to several components. CCFs, therefore, can be thought of as resulting
from the coexistence of two factors: (1) a susceptibility for components to fail or to be
unavailable because of a particular root cause and (2) a coupling mechanism that creates the conditions for multiple components to be affected by the same cause.
The third factor increases the potential for CCFs. This is the lack of engineered or
operational defenses against unanticipated equipment failures. Typical tactics adopted
in a defensive scheme include design control, segregation of equipment, well-designed
test and inspection procedures, maintenance procedures, review of procedures, training of personnel, manufacturing quality control, and installation and commissioning
quality control. These tactics may be particularly effective for mitigating specific types
of dependent or CCFs.
As an example of a defensive strategy, physical separation of redundant equipment
reduces the chance of simultaneous failure caused by exposure of the equipment to certain environmental conditions. In this case, the defense acts to eliminate the coupling
mechanism. Other defensive tactics may be effective in reducing the likelihood of independent failures as well as dependent failures by reducing the susceptibility of components to certain types of root causes. Thus, it can be argued that a complete treatment
of CCFs should not be performed independently of an analysis of the independent failures; rather, the treatment of all failures should be integrated.
CCF Definition. For CPI applications, a CCF event is defined as multiple safeguards
failing or otherwise being disabled simultaneously, or within a short time, from the
same cause of failure (Paula et al., 1997b). Thus, three important conditions for an
actual CCF are that (1) multiple safeguards must be failed or disabled (not simply
degraded), (2) the failures must be simultaneous (or nearly simultaneous as discussed
next), and (3) the cause of the failure for each safeguard must be the same.
Within this definition, multiple failures occurring "simultaneously" (or nearly
simultaneously) does not necessarily mean occurring at the same instant in time.
"Simultaneously" means sufficiently close in time to result in failure to perform the
safety function required of the multiple safeguards (i.e., preventing and/or mitigating
the consequences of an accident). For instance, if emergency cooling water is required
from one of two, continuously running, redundant pumps for 2 hours to safely shut
down a reactor, "nearly simultaneous" means "within 2 hours." That is, both pumps
failing any time within the 2-hour mission results in a CCF. For interlock systems that
use redundancy (e.g., the high temperature shutdown interlock discussed earlier),
"nearly simultaneous" often means "within the time between testing of the redundant
equipment." (This assumes that once they occur, failures are detected and corrected
during the next test.)
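This timing definition can be illustrated with a minimal sketch (a hypothetical helper, not from the text): for periodically tested safeguards, two failures count as "nearly simultaneous" when both are latent within the same test interval, since neither is detected and repaired until the next test.

```python
# Sketch (hypothetical helper): decide whether two failures of redundant,
# periodically tested safeguards are "nearly simultaneous," i.e., both are
# latent within the same test interval. Assumes failures are detected and
# repaired at the next test, as stated in the text.

def nearly_simultaneous(t_fail_a, t_fail_b, test_interval):
    """True if both failure times (hours) fall in the same test interval."""
    interval_a = int(t_fail_a // test_interval)
    interval_b = int(t_fail_b // test_interval)
    return interval_a == interval_b

# Example (assumed numbers): monthly (720-h) proof testing of a redundant
# interlock. Failures at 100 h and 600 h are both latent until the 720-h
# test, so the redundant protection is lost in the interim.
print(nearly_simultaneous(100.0, 600.0, 720.0))  # -> True
print(nearly_simultaneous(100.0, 800.0, 720.0))  # -> False (repaired between)
```

For a continuously running mission (the two-pump example above), the same check applies with the 2-hour mission time in place of the test interval.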
Note that the essence of a CCF event is not the cause of failure, which could be
equipment failure, human error, or external damage (e.g., fire or external impact). In
fact, the available literature shows that the causes of CCF events are generally no different from the causes of single, independent failures. The only difference is the existence
of CCF coupling factors that are responsible for the occurrence of multiple instead of
single failures (Mosleh et al., 1988; Paula et al., 1990, 1991, 1995). For example, the
spurious operation of a deluge system can result in the (single) failure of an electronic
component, A, in a certain location of the CPI facility. The same deluge system failure
would probably have resulted in the failure of both redundant components, A and B, if
they were in the same location. The cause of component failure (water damage to electronic equipment) is the same in both cases; CCF coupling (same location in this example) is what separates CCF events from single failure events. Other CCF couplings
include common support system, common hardware, equipment similarity, common
internal environment, and common operating/maintenance staff and procedures.
These CCF couplings are discussed later.
Thus, the essence of a CCF event is the coupling in the failure times of multiple safeguards. This is illustrated in Figure 3.12, which shows the failure times for three redundant safeguards over a period of about 20 years. In case (a), each safeguard has failed
four times, and the times of failure are random (not linked or coupled). The pattern in
Figure 3.12a should be expected if no CCF coupling exists. Figure 3.12b shows the*
failure times for three other safeguards. Just like the safeguards in case (a), each safeguard in Figure 3.12b has failed four times over about 20 years. However, the failure
times are completely coupled in time (i.e., the safeguards always fail at the same time).
The pattern in Figure 3.12b is hypothetical because complete coupling in the failure
times does not occur even if all CCF couplings exist, but Figure 3.12b does illustrate
the essence of a CCF event.
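The contrast between Figures 3.12a and 3.12b can be mimicked with a short, purely illustrative simulation; the sampling scheme and failure counts are assumptions chosen to match the figure's description, not data from the text.

```python
# Illustrative simulation: contrast random, independent failure times with
# completely coupled failure times for three redundant safeguards over
# ~20 years, in the spirit of Figure 3.12. Values are made up.
import random

random.seed(1)
horizon = 20.0  # years of observation

# Case (a): no CCF coupling -- each safeguard draws its own four failure times.
independent = [sorted(random.uniform(0.0, horizon) for _ in range(4))
               for _ in range(3)]

# Case (b): complete coupling -- one set of four failure times shared by all
# three safeguards (the hypothetical limiting pattern of Figure 3.12b).
shared = sorted(random.uniform(0.0, horizon) for _ in range(4))
coupled = [list(shared) for _ in range(3)]

# In the coupled case every safeguard fails at exactly the same instants.
assert coupled[0] == coupled[1] == coupled[2]
```

As the text notes, real CCF coupling produces something between these two extremes: failure times that cluster far more than chance alone would allow, without being perfectly synchronized.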
CCF Coupling. Six CCF coupling types act alone or (more often) in combination to
create a CCF event. Each CCF coupling is discussed and exemplified in the following
paragraphs.
CCF coupling 1: common support system. Several types of safeguards have a functional dependency on support systems, including control systems [distributed control
systems (DCS), programmable logic controller (PLC), etc.] and utilities (instrument
air, electric power, steam, etc.). Although safeguards are often designed to "fail safe"
upon loss of support systems (e.g., isolation valve closing upon loss of control signal or
loss of instrument air), these are not the only failure modes. In fact, intentionally or
unintentionally, loss or degradation of support systems can defeat safeguards in some
[Figure 3.12 plots the failure times of Safeguards 1, 2, and 3 against time in years for cases (a) and (b).]
FIGURE 3.12. Failure times for (a) independent and (b) completely coupled safeguards (Paula et al., 1997b).
applications. This can be a source of coupling if the support systems are common to
multiple safeguards.
For example, if two electric-driven firewater pumps are supplied electric power
from the same motor control center (MCC), they will both be disabled if the MCC
fails. Also, there may still be coupling even when the safeguards rely on separate support systems. For example, it may appear no coupling should exist if pump A gets electric power from MCC A and pump B gets electric power from MCC B. However, it is
possible that coupling factors exist between MCC A and MCC B (e.g., a common
offsite electric feeder to both MCCs A and B). Therefore, it is not enough to provide
separate support systems for multiple safeguards; it must be ensured that CCF couplings within the separate support systems have also been eliminated or reduced.
Note that the common support system coupling factor refers to coupling that results
from safeguards being disabled because of loss or degradation of the support system. It
is also possible that the support system will malfunction in a way that damages the safeguards. For example, a power surge in the electric supply to the two firewater pumps A
and B could damage the electric motor on each pump. This type of coupling is considered with the common internal environment coupling, and thus is excluded from the
common support system coupling.
CCF coupling 2: common hardware. This coupling is similar to the common support
system coupling, but the coupling is the failure of hardware that is common (shared)
by multiple safeguards. A typical example of multiple safeguards with common hardware is two (or more) firewater pumps that take suction from a common header. All
pumps would fail if the header were inadvertently blocked, plugged, or ruptured. As
another example, several pumps were used to help ensure an adequate and continuous
supply of feedwater to steam boilers at the powerhouse for an oil refinery. However,
the inadvertent operation of a single low-level switch in the feedwater surge tank
caused simultaneous tripping of all boiler feedwater pumps.
The common hardware coupling factor has also been observed between redundant instrumentation, control/data acquisition equipment, and (to a lesser degree) protection systems. For example, Paula et al. (1993) discuss four "one-out-of-two"
redundant systems that failed a total of 23 times because of hardware failures in shared
equipment (bus, bus switching, wiring, etc.). In fact, Paula et al. (1993) concluded that
failures within common or shared equipment (e.g., output modules) are one of the
most important contributors to the frequency of failure of fault-tolerant DCS typically
used in CPI facilities.
CCF coupling 3: equipment similarity. Most CCFs observed in several industries
have involved similar equipment. This is primarily due to similar equipment being
affected by common design and manufacturing processes, the same installation and
commissioning procedures, the same operating policies and procedures, and the same
maintenance programs. These commonalities allow for multiple failures that are due to
systematically repeated human errors or other deficiencies. For example, two redundant circuit breakers in the reactor protection system at a nuclear power plant in Germany failed to open. Investigation of the event revealed that, because of a deficiency in
the manufacturing of the breaker contacts, the coating on the contacts melted during
reactor operation and fused the contacts together. Both redundant breakers were manufactured following the same process and procedures, and obviously they were both
susceptible to (and failed from) the same deficiency.
Equipment similarity has also been an important factor in maintenance-related
failure events. For instance, during routine maintenance of a commercial aircraft, a
maintenance mechanic failed to install an O-ring seal in each of the three jet engines.
Shortly after takeoff, all three engines shut down after the lubricating oil was consumed
because of the missing seal. Fortunately, one engine restarted, allowing the pilot to
land. The cause of this incident was that, unknown to the mechanic, the storeroom had
changed the normal stocking procedure and now stocked the O-ring seal separate from
the other components in the lube oil seal replacement kit. (They changed the procedure
because of a packaging change from the part's manufacturer.) The similarity of the
piece-parts (and maintenance procedures) resulted in the mistake being systematically
made on all three engines.
CCF coupling 4: common location. Equipment in the same location may be susceptible to failure from the same external environmental conditions, including sudden, energetic events (earthquake, fire, flood, missile impact, etc.) and abnormal environments
(excessive dust, vibration, high temperature, moisture, etc.). For example, redundant
electronic equipment in a room could fail because of a fire in that location or from high
temperature if the air-conditioning system for that room fails.
Regarding sudden, energetic events, Stephenson (1991) discusses two unrelated
air tragedies (a Japan Air Lines Boeing 747 and a United Airlines DC-10) that resulted
from the loss of redundant hydraulic systems. These systems failed because of damage
to the redundant hydraulic lines in the rudder of each aircraft; in both cases, all hydraulic lines were close together (common location). According to documents from the
National Transportation Safety Board and the Federal Aviation Administration, the
DC-10 accident resulted in 111 fatalities and many injuries when the plane crashed
during an emergency landing at Sioux Gateway Airport, Iowa. It was caused by catastrophic failure of the tail-mounted engine during cruise flight. The separation, fragmentation, and forceful discharge of the stage one fan rotor assembly led to severing or
loosening the hydraulic lines in the rudder of the aircraft. This in turn disabled all three
redundant hydraulic systems that powered the flight controls.
Regarding abnormal environments, operational experience in CPI and other
industrial facilities shows that the common location coupling factor is often strengthened by the equipment similarity coupling factor. This may be because similar equipment has the same (or similar) stress-resisting capacity (strength) against environmental
causes. Thus, similar components are more likely to fail simultaneously if the environment-induced stress exceeds the strength of the components. Dissimilar components
generally have different strengths regarding environmental causes, and the weakest
component is likely to fail first, allowing the operating/maintenance staff to detect and
correct the problem before additional failures occur.
CCF coupling 5: common internal environment. The internal environment sometimes causes or contributes to safeguard failures. Examples of internal environments
include air in an instrument air system, electric current in an electrical distribution
system, water in an emergency cooling system, and fluid in a hydraulic system. These
events can fail multiple safeguards if the internal environment is the same or similar for
these safeguards. An example mentioned earlier is a power surge in the electric supply
to two firewater pumps A and B that could damage the electric motor on each pump. A
more common example in CPI facilities is grass and other debris causing strainers in
river water pumps to plug, resulting in loss of suction to redundant pumps. Redundant
river water pumps have also failed because of accelerated internal erosion from abnormally high concentrations of sand in the water.
Obviously, any set of components subjected to a common internal environment
is susceptible to the CCF coupling common internal environment. In fact, operational experience shows that pneumatically operated valves have often been involved
in CCFs from the internal environment (e.g., moisture in the air supply). Heat
exchangers, pump strainers, and trash racks used in river water systems have also
been involved in CCFs from the internal environment (e.g., sand contamination).
However, this coupling is only weakly associated with other types of environments.
For example, check valves used in clean water service have been less susceptible to
this coupling. Also, CCFs involving electrical equipment are only occasionally associated with the internal environment (electrical supply). This may be due to better
controls (e.g., fault and surge protection in electrical distribution systems) of some
internal environments.
CCF coupling 6: common operating/maintenance staff and procedure. Some catastrophic accidents were the result of human or procedural errors such as
misoperation, misalignment, and miscalibration of multiple safeguards. Theoretically, all safeguards (similar or dissimilar) operated or maintained by the same
staff or addressed by the same procedure (written or otherwise) are susceptible to
failure from a CCF. In the well-publicized accident at the Three Mile Island (TMI)
nuclear power plant in the United States, the plant operators (acting on inadequate
and misleading information) shut down the redundant trains of the emergency core
cooling system (ECCS). The ECCS had started automatically to respond to a small
loss of coolant event, and the operator action eventually led to uncovering the reactor core and core damage.
Operational experience suggests that when misalignment, miscalibration, and
other types of staff/procedural errors result in multiple failures, they often involve similar equipment. That is, this CCF coupling is often strengthened by the equipment
similarity coupling (or vice versa). This is understandable because multiple misalignment and miscalibration errors are more likely to occur when the equipment involved is
similar. For example, the likelihood of inadvertently closing a redundant set of valves A
and B while attempting to close another set of valves C and D is much higher if these
two sets of valves look the same. Also, the common location and equipment similarity couplings together can strengthen the common operating/maintenance staff and
procedure coupling. For example, if an operator misaligns valves in one train of equipment, the likelihood of misaligning the valves on the redundant equipment increases if
the redundant equipment is similar and is in the same location; the operator could rely
on the incorrect alignment of one train to align the other train.
As another example of this coupling, on April 26, 1986, the worst accident in
the nuclear power industry occurred at Chernobyl Unit 4 (Chernobyl-4). It happened during a test designed to assess the reactor's safety margin in a particular set of
circumstances. Descriptions of the details of the incident are somewhat inconsistent,
but it has been established that the automatic trip systems on the steam separators
were deactivated by the operators to allow the test. That is, multiple safeguards were
disabled by the operators. (The ECCS was also isolated before the test, but experts
now believe this had little impact on the outcome of the accident.) Because this type
of reactor has a positive void coefficient [i.e., water turning into steam in the core
increases the reaction rate (and power generation)], controlling pressure and temperature in the core is particularly critical; the misoperation of safeguards (deactivation
of the trip systems) disabled the protection against inadvertent steam generation in
the core. Subsequent actions by the operators while conducting the test resulted in
an uncontrolled generation of steam in the core, causing the reactor power to peak
about 100 times above the design power.
Applications. CCFs should be considered in chemical process industry applications
that rely on redundancy or diversity to achieve high levels of system reliability and process safety. A CCF analysis is likely to be needed for studies of process systems in which
the accident frequency estimates derived from an analysis of independent failures are
very low. This is often the case when a system design makes extensive use of redundancy, voting logic, and so forth. These applications often involve instrumentation and
control systems and redundant mechanical equipment configurations. Normally, if a
CCF analysis is necessary, the emphasis should be on safety systems. Experience indicates that most CCF events have involved standby equipment.
3.3.1.2. DESCRIPTION
This section describes an integrated framework for a CCF analysis (Mosleh et al.,
1988). There are four stages in this framework, as illustrated in Figure 3.13.
We will present an overview of each of the four stages, and then discuss the following portions of the framework in more detail:
• Identification of the groups of components to be included in the CCF analysis
• Identification of the defenses against CCF coupling
• CCF quantification approaches
• Incorporation of CCF events in the fault tree
• Selection of the CCF model
• Estimation of CCF model parameters
• Quantification of CCFs using the simple method by Paula and Daggett (1997b)
Overview of the framework. The integrated framework for a CCF analysis has four
stages:
[Figure 3.13 is a four-stage flowchart. Key inputs (system description, drawings, procedures, component technical manuals, plant event sequence model, and system and generic operating experience data) feed Stage 1 (System Logic Model Development), Stage 2 (Identification of Common Cause Component Groups), Stage 3 (Common Cause Modeling and Data Analysis), and Stage 4 (System Quantification and Interpretation of Results). Key products include basic system understanding, system failure modes, boundary conditions, the logic model, screening criteria, generic root causes and coupling mechanisms, common cause groups with their CCF susceptibilities and defenses, parameter estimators, the system unavailability estimate, principal contributors, corrective actions, and reliability management insights.]
FIGURE 3.13. Framework for Common Cause Analysis (Mosleh et al., 1988).
Stage 1. System Logic Model Development. The objective of this stage, which includes
system familiarization and problem definition, is to construct a logic model that identifies the contributions of basic events that lead to the Top event. Section 3.2 describes
methods for developing these logic models.
Stage 2. Identification of Common Cause Component Groups. The objectives of this
stage include:
• Identifying the groups of system components to be included in or eliminated
from the CCF analysis
• Prioritizing the groups of system components identified for further analysis, so
that time and resources can best be allocated during the CCF analysis
• Providing engineering arguments to aid in the data analysis step (Step 3)
• Providing engineering insights for later formulation of defense alternatives and
stipulation of recommendations in Step 4 (System Quantification and Interpretation of Results)
These objectives are accomplished through the qualitative analysis and quantitative screening steps.
In the qualitative analysis, a search is made for common attributes of components
within a minimal cut set and mechanisms of failure that can lead to common cause
events. Past experience and understanding of the engineering environment are used to
identify signs of potential dependence among redundant components (e.g., component similarity). Experience is also used to identify the effectiveness of defenses that
may exist to preclude or reduce the probability of certain CCF events. The result of this
search is the identification of initial groups of system components to be included in the
analysis. An analysis of the root causes of equipment failure is then performed to substantiate and improve the initial identification. This root cause analysis involves reviewing failure occurrence reports for the plant as well as reports for similar facilities. The
information from the qualitative analysis is used to define CCF events (e.g., CCF of
redundant valves).
Quantitative screening is used to assign generic (and usually conservative) values
to the probability of each CCF event. The system unavailability is evaluated using these
values, and the potential dominant contributors to system unavailability are identified.
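Quantitative screening of this kind might be sketched as follows. The generic beta value of 0.1 and the valve failure probability are illustrative, conservative screening assumptions for the sketch, not values prescribed by the text.

```python
# Sketch of quantitative screening with a generic, conservative beta factor.
# The 0.1 screening value and the valve probability are assumed illustration
# values, not data or requirements from the text.
GENERIC_BETA = 0.1

def screening_ccf_prob(q_single, beta=GENERIC_BETA):
    """Conservative screening estimate of the probability that a single CCF
    event disables the whole redundant group."""
    return beta * q_single

q_valve = 1.0e-2                     # single shutdown-valve failure probability
q_ccf = screening_ccf_prob(q_valve)  # screening CCF probability, 1e-3
q_independent = q_valve ** 2         # two-valve group, independent failures only

# The CCF term (1e-3) dominates the independent double-failure term (1e-4),
# flagging this valve group as a potential dominant contributor that merits
# detailed qualitative and data analysis in Stage 3.
```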
Stage 3. Common Cause Modeling and Data Analysis. The objectives of this stage are
(1) to modify the logic model developed in Stage 1 to incorporate common cause
events and (2) to analyze available data for quantifying these events. This modification
and analysis are accomplished in a four-step procedure.
• Stage 3.1. Incorporation of Common Cause Basic Events. To model CCFs, it is convenient to define common cause basic events in the logic models (e.g., fault
trees). Common cause basic events are those that represent multiple failures of
components from shared root causes. Figure 3.14 illustrates this step for systems
consisting of two redundant components.
• Stage 3.2. Data Classification and Screening. The purpose of this step is to evaluate
and classify event reports to provide input to parameter estimation of the CCF
basic events added to the logic model. This involves distinguishing between failure causes that are explicitly modeled in the event and fault trees and those that
TOP
TOP
A Fails
A and B Fail
Due to a CCF Event
Independent Failure
ofAandB
B Fails
A Fails
B Fails
Failure Model Without CCF Event Considerations
Failure Model With CCF Event Considerations
FIGURE 3.14. Conceptual fault tree model incorporating the common cause failure (CCF)
event.
are to be included in the residual common cause basic events. The sources of data
necessary for this step are event reports on both single and multiple equipment
failures at the plant under analysis as well as similar plants. This review of the data
concentrates on root causes, coupling mechanisms, and defensive strategies in
place at the plant of interest.
• Stage 3.3. Parameter Estimation. Typically, CCF models are used to estimate the
probabilities of CCF events. The analyst can use the information obtained in
Step 3.2 to estimate the parameters of such CCF models. Only the beta-factor
model will be illustrated in this overview. Descriptions and estimators for one
other model are presented later in this section and additional models are presented by Mosleh et al. (1988). The beta-factor model is the most commonly
used parametric model. This model assumes that the failure rate (assumed constant) for each component in a system can be expanded into additive independent and CCF contributions.
λ = λI + λC    (3.3.1)
where λ = component failure rate
λI = component failure rate for independent failures
λC = component failure rate for CCFs
The beta-factor is
β = λC/(λC + λI)    (3.3.2)
If the system consists of identical redundant units, the system CCF rate is βλ. The
following estimator is generally used for β:
β = nC/(nC + nI)    (3.3.3)
where nC = total number of component failures that are due to CCF events and
nI = total number of component failures that are due to independent causes
A basic assumption of the beta-factor model is that a CCF event will result in
the failure of all redundant components in the group being considered. This
assumption often leads to conservative predictions since, for example, a given
CCF event may fail only two out of three components in a group. Some other
CCF models [e.g., the Multiple Greek Letter (MGL) model presented later in this
section] do not incorporate this assumption.
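The beta-factor relations in Eqs. (3.3.1)–(3.3.3) can be sketched in a few lines. The event counts and total failure rate below are made-up illustration values, not data from the text.

```python
# Sketch of the beta-factor model, Eqs. (3.3.1)-(3.3.3). All numeric inputs
# are assumed illustration values.

def beta_estimate(n_ccf, n_indep):
    """Eq. (3.3.3): beta = nC / (nC + nI), estimated from failure counts."""
    return n_ccf / (n_ccf + n_indep)

def split_failure_rate(lam_total, beta):
    """Eqs. (3.3.1)-(3.3.2): split the total rate lambda into independent
    (lambda_I) and common cause (lambda_C = beta * lambda) contributions."""
    lam_ccf = beta * lam_total
    lam_indep = lam_total - lam_ccf
    return lam_indep, lam_ccf

# Assumed data: 40 recorded component failures, 4 attributed to CCF events.
beta = beta_estimate(n_ccf=4, n_indep=36)          # beta = 0.1
lam_i, lam_c = split_failure_rate(1.0e-5, beta)    # total rate 1e-5 per hour

# For identical redundant units, the system CCF rate is beta * lambda = lam_c.
```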
Stage 4. System Quantification and Interpretation of Results. The purpose of this
stage is to synthesize the key outputs of the previous stages to quantify system failure probability. The event probabilities obtained for the common cause
events (as a result of Step 3 of the analysis) are incorporated into the solution for the
unavailability of the systems or, alternatively, into accident sequence frequencies in the
usual way fault tree and event tree models are quantified (Sections 3.2.1, 3.2.2, and
Appendix D). The outputs of this stage include the numerical results and the identification of key unavailability contributors. The key contributors are generally the focus of
recommendations for better defending against CCFs.
Identification of the Groups of Components to be Included in the CCF Analysis.
An important objective of Stage 2 in Figure 3.13 is to identify the groups of components that are susceptible to CCFs. Most CCF analyses consider each group of identical,
redundant components (e.g., redundant shutdown valves, redundant pressure transmitters) as a group of components that are susceptible to CCFs. This is consistent with
operational experience in several industries, which has shown that most CCF events
have affected similar equipment operated and maintained in the same way (Edwards
and Watson 1979; Fleming et al., 1985; Paula et al., 1985; Watson et al., 1979). For
the same reason, most CCF analyses assume that CCF events will not affect dissimilar or
diverse equipment. (One exception is external events such as earthquakes and hurricanes, but external events are typically outside the scope of CCF analyses.)
However, when diverse equipment has piece-parts that are nondiverse (similar),
the equipment should not be assumed to be fully diverse. For example, two redundant,
motor-driven pumps may be from different manufacturers (and thus "dissimilar").
However, the motor starter (or other piece-parts of the pumps' electrical and control
circuit) for these pumps could be from the same manufacturer. The typical approach
here is to redefine the equipment boundary in the fault tree, and model the similar
piece-parts (a motor starter in this example) separately from the pumps. The portions
of the equipment that are similar (motor starters) are susceptible to CCFs, and the portions that are diverse (pump bodies and motor drivers) are not.
The simple guidelines provided in the two previous paragraphs are often adequate
for developing the initial groups of basic events that are susceptible to CCFs. However,
when operational data and resources are available, it is recommended that a detailed
qualitative analysis be done of the system under consideration to support the initial
groupings. Detailed qualitative analysis also helps ensure that no important CCF
events have been overlooked. The scope and depth of the analysis will depend on the
(1) information available (particularly operational data for the equipment of interest),
(2) experience of the analysis team (CCF analysis experience and design, operations,
and maintenance experience), and (3) resources available for the study.
Mosleh et al. (1988) and Paula (1988) present examples of detailed qualitative
analyses of CCFs. (These references also exemplify how the results of the qualitative
analyses can be used to support quantitative analyses.) Next, we discuss what should be
considered in a detailed qualitative analysis. CCFs have occurred because of many different causes. Extensive analyses of several hundred CCF events show that these events
can be grouped into a few classes of causes of failure (Edwards et al., 1979; Paula et al.,
1985 and 1990):
• Design, manufacturing, installation, and commissioning deficiencies
• Human and procedural errors during maintenance, testing, and operation of the
equipment
• Internal (e.g., erosion of valve internals) and external (e.g., excessively high temperature) environment for the equipment
• Energetic external events
Energetic external events can be external to the facility (earthquake, hurricane, aircraft collision, etc.) or internal to the facility (fire, explosion, etc.). Energetic external
events are listed above for completeness, but they are often the subject of special studies
and are not addressed in CCF analyses. The reason for considering external events separately from the CCF analysis is simple: the approaches best suited for an analysis of
external events (e.g., an earthquake) are different from the approaches best suited for
the analysis of other types of CCF events. Also, the type of expertise required to analyze
external events is different from the expertise required in CCF analysis; the analysis
team composition may be different when dealing with external events.
By starting with the comprehensive set of causes of CCFs listed above and analyzing operational data for the system of interest, the qualitative CCF analysis considers:
• The causes of failures applicable to the equipment of interest
• The group of components that could be affected by the occurrence of each cause
• The CCF potential (degree of dependence, or CCF coupling) associated with
each cause/component group of interest
The last item above (CCF coupling) is critical because the causes of CCF events are
generally no different from the causes of single component failures; coupling is the real
factor that separates single and multiple failure events. Table 3.7 illustrates this point by
presenting six actual failure occurrences and corrective actions taken at different plants.
The first two events represent identical problems (at the same plant) resulting in single
and multiple failures. The next two events are examples of personnel failing to restore
redundant equipment following maintenance, again resulting in single and multiple
failures.
Examples such as those in Table 3.7 show that the reason a particular cause affects
several components is often associated with one or more conditions (or CCF coupling
factors) that were the same for all components that failed. Thus, the CCF coupling factors previously defined provide the basis for identifying CCF potential among multiple
safeguards; every set of multiple safeguards applicable to a potential accident scenario
must be reviewed for the existence of CCF coupling. Table 3.8 summarizes key points
in the identification of CCF coupling.

TABLE 3.7. Actual Failure Occurrences and Corrective Actions (Paula et al., 1990)

Plant A
  Event description: One circuit breaker (CB) to a valve tripped during a test on a room ventilation system.
  Failure mechanism/cause: The thermal overload setting on the CB was set too low for the abnormally hot environment.
  Defense/corrective action: The thermal overload setting was increased in the tripped CB and in the CB to a redundant valve.
  Comments: The untripped CB to the redundant valve is in the same room as the tripped CB (Room 104).

Plant A
  Event description: Two CBs to two redundant valves tripped during a test on a room ventilation system.
  Failure mechanism/cause: The thermal overload settings on the CBs were set too low for the abnormally hot environment.
  Defense/corrective action: The thermal overload settings were increased in both of the CBs.
  Comments: Both CBs are in Room 149.

Plant B
  Event description: Auxiliary feed pump A was not delivering an adequate flow of feedwater.
  Failure mechanism/cause: An in-line conical strainer was found in the pump suction line. The strainer was 95% plugged. This event resulted from an installation error (the strainer should have been removed before operation). Strainers were found in the suction line for three other feedwater pumps.
  Defense/corrective action: The strainers were removed.
  Comments: The three other strainers were not plugged and did not result in failures.

Plant C
  Event description: Both emergency service water trains were inoperable.
  Failure mechanism/cause: Strainers became plugged in both trains because of contamination. Because of maintenance oversight, they had not been cleaned often enough.
  Defense/corrective action: Self-cleaning strainers were installed.

Plant D
  Event description: The turbine bypass valve alarm would not clear. An investigation revealed a relay was closed, making the reactor protection system (RPS) subchannel B1 for load reject and turbine valve closure signals inoperable.
  Failure mechanism/cause: During a recent maintenance outage, a pressure switch that operates the relay was isolated. The switch was not returned to its proper position before startup.
  Defense/corrective action: The condition was corrected, and the occurrence was discussed with maintenance and operating personnel.
  Comments: The other RPS subchannels were operable.

Plant E
  Event description: The main control board indication for feedwater flow was discovered to be reading zero.
  Failure mechanism/cause: Personnel left the equalizing valves on the three transmitters open.
  Defense/corrective action: The valves were closed. Personnel were indoctrinated on the removal and restoration of instruments and the observance of indications.
The coupling factors common support system and common hardware are usually apparent on piping and instrumentation diagrams (P&IDs), logic diagrams for
interlock and shutdown systems, and other process safety information (PSI). CPQRA
analysts generally review these diagrams and PSI documents as part of the CPQRA,
and the review should reveal these types of dependencies.
However, CCF analysts should review all of this information in sufficient detail to
identify subtle support system dependencies or hardware dependencies. For example, a
detailed analysis of a fault-tolerant distributed control system (F-T DCS) was performed with instrumentation and control (I&C) specialists and a technician from
Honeywell—the DCS manufacturer. This F-T DCS is a Honeywell TDC 3000 that
controls a fluidized catalytic cracking (FCC) unit in a large refinery. The analysis
involved an in-depth review of the DCS logic diagrams and associated instrumentation, and it revealed some shared instrumentation for interlock systems. Also, the analysis team identified a few shutdown interlocks that were not "fail safe." In addition,
some redundant equipment in the F-T DCS was in the same location, making it susceptible to failure caused by loss of the heating, ventilating, and air-conditioning system.
Any set of similar safeguards (e.g., three identical temperature switches) is susceptible to the coupling factor equipment similarity. However, this coupling factor is not
limited to identical, redundant components. As previously discussed, some "dissimilar"
equipment (e.g., two pumps from different manufacturers) may have piece-parts (e.g.,
motor starter and I&C devices) that are similar, making them susceptible to this coupling
factor. Also, any set of safeguards that (1) are physically in the same location, (2) have
the same or similar internal environment, or (3) are operated or maintained by the
same staff or addressed by the same procedure (written or otherwise), is susceptible to
the following coupling factors: common location, common internal environment,
and common operating/maintenance staff and procedure, respectively.
Identification of the Defenses against CCF Coupling. An important consideration
in the identification of CCFs is the existence or lack of defenses against CCF coupling.
It is obvious from the previous discussion of CCF coupling that a search for coupling is
primarily a search for similarities in the design, manufacture, construction, installation,
commissioning, maintenance, operation, environment, and location of multiple safeguards. A search for defenses against coupling, on the other hand, is primarily a search
for dissimilarities among safeguards. Dissimilarities include differences in the safeguards themselves (diversity); differences in the way they are installed, operated, and
maintained; and differences in their environment and location.
Paula et al. (1990, pages 21-26, and 1997a, Appendix A) discuss defenses against
CCFs in more detail. For example, excellent defenses against the equipment similarity
coupling include functional diversity (the use of totally different approaches to achieve
roughly the same result) and equipment diversity (the use of different types of equipment to perform the same function). Spatial separation and physical protection (e.g.,
barriers) are often used to reduce the susceptibility of multiple safeguards to the
common location coupling.

TABLE 3.8. Key Points in the Identification and Quantification of Coupling (Paula et al., 1997a)

Coupling factor: Common support system; Common hardware
  CCF identification: These CCF couplings are identified by reviewing P&IDs, logic diagrams for interlock and shutdown systems, and other PSI documents associated with the set of multiple safeguards. Additional reviews may be required with specialists on each support system (e.g., DCS specialists, including a representative from the manufacturer). Support system dependencies and common hardware dependencies are usually not of interest if the safeguards "fail safe" upon loss of the support system or common hardware.
  CCF quantification: These coupling factors are highly plant-specific, and plant personnel usually know the frequency of loss of support systems such as electric power and other utility systems. Plant data should be used to evaluate the probability of loss of multiple safeguards resulting from the unavailability of a common support system. Standard CPQRA techniques (e.g., fault tree analysis) and generic failure rate data can be used when plant data are not available (e.g., to evaluate the probability of failure of common hardware).

Coupling factor: Equipment similarity
  CCF identification: Any set of similar safeguards or safeguards that have similar piece-parts is susceptible to this coupling factor.
  CCF quantification: Parametric models (based on empirical data) provide an estimate of the probability of CCF events resulting from this coupling factor. This estimate typically includes the contribution from this coupling factor and contributions from at least some causes considered in the coupling factors common location, common internal environment, and common operating/maintenance staff and procedure.

Coupling factor: Common location
  CCF identification: Any set of safeguards that are physically in the same location is susceptible to this coupling factor.
  CCF quantification: All CCFs caused by sudden, energetic events (earthquake, fire, flood, hurricane, tornado, etc.) should be analyzed using techniques specially designed for the analysis of each type of event. Parametric models are used to analyze CCF events resulting from the other causes (abnormal environments) associated with this coupling factor, including excessive dust, vibration, high temperature, moisture, etc.

Coupling factor: Common internal environment
  CCF identification: Any set of safeguards that have the same or similar internal environment is susceptible to this coupling factor.
  CCF quantification: Parametric models are used to analyze CCF events associated with this coupling factor.

Coupling factor: Common operating/maintenance staff and procedure
  CCF identification: Any set of safeguards (similar or dissimilar) operated or maintained by the same staff or addressed by the same procedure (written or otherwise) is susceptible to this coupling factor.
  CCF quantification: Operator errors during accidents (i.e., misoperation actions) should be analyzed using human reliability analysis techniques. Parametric models are used to analyze CCF events resulting from the other causes (misalignment and miscalibration) associated with this coupling factor.
As another example of defense against CCF coupling, staggering test and maintenance activities offers some advantages over doing these activities simultaneously or
sequentially. First, it reduces the coupling associated with certain human-related failures—those introduced during test and maintenance activities. The probability that an
operator or technician will repeat an incorrect action is lower when test or maintenance
activities are performed months, weeks, or even days apart than when they are performed a few minutes or a few hours apart. A second potential advantage of staggering
test and maintenance activities relates to the maximum exposure time for CCF events.
If multiple safeguards fail because of a CCF event, then evenly staggering these activities reduces the maximum time that the multiple safeguards would be failed because of
that CCF event. (This is true if we assume that this type of failure is detectable by testing and inspecting.)
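The exposure-time argument can be sketched numerically. Assuming a CCF that fails all trains and is revealed by the test of any one train (a simplified model, not from the text), evenly staggered testing shrinks the maximum undetected time from the full test interval to the stagger spacing:

```python
def max_ccf_exposure(test_interval_hr, n_trains, staggered):
    """Maximum time a CCF of all redundant trains can remain undetected.

    Assumes the CCF is detectable by testing any single train and that
    staggered tests are spaced evenly at test_interval_hr / n_trains.
    """
    if staggered:
        return test_interval_hr / n_trains
    return test_interval_hr  # simultaneous (or back-to-back) testing

# Quarterly testing (~2190 h) of three redundant trains
print(max_ccf_exposure(2190, 3, staggered=False))  # 2190
print(max_ccf_exposure(2190, 3, staggered=True))   # 730.0
```

The comparison illustrates why staggering reduces the time at risk, provided the failure mode is actually revealed by the test.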
CCF Quantification Approaches. Table 3.8 summarizes key points in the quantification of CCFs. Three ways are available to quantify CCF events:
• Use CPQRA techniques specially designed for the analysis of the specific causes
of interest
• Use a parametric model (e.g., the Beta factor or the MGL model) (Mosleh et al.,
1988)
• Use a simple method specifically designed to account for CCFs involving safeguards in CPI facilities (Paula et al., 1997b)
The first two CCF couplings in Table 3.8 (common support system and
common hardware) are highly plant-specific, and they can be quantified using standard CPQRA techniques specially designed for the analysis of the specific causes of
interest. These techniques include generic failure rate data (CCPS, 1989) and fault tree
analysis. However, plant personnel usually know the frequency of loss of support systems (instrument air, steam, etc.) and this information should be used to evaluate the
probability of loss of multiple safeguards resulting from the unavailability of a common
support system.
Selected causes associated with the CCF coupling common location should also
be quantified using standard CPQRA techniques specially designed for the analysis of
these causes. Specifically, all CCFs caused by sudden, energetic events (earthquake,
fire, flood, etc.) should be analyzed using techniques specially designed for the analysis
of each type of event. The reason for considering these causes individually is that the
techniques best suited for one type of event (e.g., estimating the frequency of an earthquake) are generally different from the techniques best suited for the other types of
events (e.g., estimating the frequency of a hurricane or tornado). Section 3.3.3, External Event Analysis, presents these techniques in some detail and provides additional
references.
Selected causes associated with the CCF coupling common operating/maintenance staff and procedure should also be quantified using standard CPQRA techniques specially designed for the analysis of these causes. Specifically, operator errors
during accidents (i.e., misoperation actions) should be analyzed using human reliability analysis techniques. This type of human error includes the actions taken during the
TMI and Chernobyl-4 accidents previously discussed. Section 3.3.2, Human Reliability Analysis, presents these techniques in some detail and provides additional
references.
Parametric models use empirical data, and they are typically used to quantify the
remaining CCF coupling (and the causes associated with a coupling that is not analyzed using standard CPQRA techniques). Specifically, parametric models are typically
used to quantify
1. all causes (inadequate design, manufacturing deficiencies, installation and commissioning errors, environmental stresses, etc.) associated with the CCF couplings equipment similarity and common internal environment,
2. the causes related to abnormal environments (excessive dust, vibration, high
temperature, moisture, etc.) associated with the common location CCF coupling, and
3. the causes related to misalignment and miscalibration associated with the
common operating/maintenance staff and procedure CCF coupling.
Parametric models are discussed in more detail later in this section.
Although parametric models have been used in CPQRAs, the detailed and complete quantifications provided by these models are not always required or
cost-effective. Paula et al. (1997b) developed a new, simplified method that can be
used instead of the more complicated parametric models. The simplified method is also
presented later in this section.
Incorporation of CCF Events in the Fault Tree. After CCF events have been identified in Stage 2, they must be incorporated into the fault tree. We will present two
approaches for accomplishing this.1 The first approach consists of replacing each basic
event that represents the failure of a component from a CCF component group with a
small fault tree. The small fault tree that will be used depends on the number of components, n, in the CCF component group.
Figures 3.15 through 3.17 present the fault tree logics for n = 2, 3, and 4. For two
components (A and B) in the CCF component group (n = 2), the logic is an OR gate
with two inputs. The first input represents the independent failure of the component,
and the second input represents both components failing because of a CCF event. If
n = 3, the logic represents the independent failure of the component, the CCF of the
component with one (and only one) of the other two components, and the CCF of all
three components. For n = 4, the logic represents the independent failure of the component, the CCF of the component with one (and only one) of the other three components, the CCF of the component with each set of two (and only two) of the other
three components, and the CCF of all four components. (Similar logics can be developed for n > 4.)
1 Some analysts believe that both approaches for incorporating CCF events in the fault tree are
approximate, and they slightly overestimate the contribution of CCFs because of double counting
of certain types of CCF events. This potential overestimation is discussed in detail by Mosleh et al.
(1989, pages C-1, C-2, and C-3), and it has negligible impact in practical applications.
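The substitution approach can be sketched as a small enumeration. For a component in a CCF group of size n, the replacement OR gate collects the independent failure plus one CCF basic event per subset of the other components (illustrative code, not from the text):

```python
from itertools import combinations

def ccf_basic_events(group, component="A"):
    """List the basic events that replace 'component fails' for a CCF group.

    Follows the pattern of Figures 3.15 through 3.17: the independent
    failure, then CCFs of the component with each subset of the others.
    """
    others = [c for c in group if c != component]
    events = [f"independent failure of {component}"]
    for k in range(1, len(others) + 1):  # CCFs involving k other components
        for combo in combinations(others, k):
            events.append("CCF of " + ", ".join([component, *combo]))
    return events

print(ccf_basic_events(["A", "B"]))        # n = 2: two basic events
print(ccf_basic_events(["A", "B", "C"]))   # n = 3: four basic events
```

For n = 4 the same enumeration yields eight basic events, matching the structure described in the text.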
Figures 3.15 through 3.17 also show the probabilities (Q1, Q2, Q3, and Q4) for each
event in the fault trees. Qk is the probability of a CCF resulting in a specific set of k failures. For example, Q2 is the probability of a CCF of components A and B. Later in this
section, we discuss how to calculate the values of Q1, Q2, Q3, and Q4, which is typically
accomplished using sets of parameters (β, γ, δ, etc.) specifically defined for quantification of CCFs.
The fault tree logic substitution procedure described above is conceptually simple.
Nonetheless, the incorporation of many of these small fault trees into the fault tree for the
system of interest can result in a large fault tree. This is often not a problem because
most fault tree software packages can easily analyze the large fault trees that may result
after the incorporation of CCF events. However, some analysts may not have access to
fault tree software packages or may find it more convenient to analyze the fault tree by
hand. Therefore, an alternative approach for incorporating CCF events in the fault
trees may be useful.
The alternative approach for incorporating CCF events in the fault trees is called
the "pattern recognition" approach (Mosleh et al., 1989). (The simplified method
for quantification of CCFs, presented later in this section, uses this approach.) In the
pattern recognition approach, the analyst evaluates the total probability that a redundant set of components (e.g., three pressure transmitters) will fail according to a success criterion (e.g., two-out-of-three), and then incorporates this total probability
directly into the fault tree. That is, the specific combinations (e.g., components A
and B, components A and C) of failures that will cause the set of redundant components to fail are not modeled explicitly in the fault tree; the fault tree has a single
event that represents the failure of the redundant system from all possible combinations (independent failures, CCFs, and any combination of these). The reader is
referred to the work of Mosleh et al. (1989) for additional information on the pattern recognition approach.
FIGURE 3.15. Fault tree modification to account for CCFs (n = 2). [The top event "Component A fails" is an OR gate with two inputs: "Independent failure of component A" and "CCF of components A and B."]

FIGURE 3.16. Fault tree modification to account for CCFs (n = 3). [The top event "Component A fails" is an OR gate with three inputs: "Independent failure of component A," "CCF of component A and one more component (B or C)," and "CCF of components A, B, and C."]

FIGURE 3.17. Fault tree modification to account for CCFs (n = 4). [The top event "Component A fails" is an OR gate with four inputs: "Independent failure of component A," "CCF of component A and one more component (B, C, or D)," "CCF of component A and two more components (B and C, B and D, or C and D)," and "CCF of components A, B, C, and D."]
Selection of the CCF Model. The previous paragraphs show how to incorporate
CCF events in the fault trees. Also, formulas were introduced in Figures 3.15, 3.16,
and 3.17 for the evaluation of the CCF event probabilities as a function of Q1, Q2, Q3,
and Q4. Qk can be evaluated in several ways. Perhaps the simplest conceptual approach
is to evaluate CCF probabilities directly from field data in the same way equipment failure rates and equipment failure probabilities are evaluated from field data. For example, if a system with two redundant trains of equipment experienced two CCFs in
approximately 120 system demands, the CCF probability Q2 can be estimated directly from these field data using standard reliability techniques:

Q2 = 2 CCFs/120 demands ≈ 0.017/demand
Because of simplicity and consistency with the quantification of the other basic
events in a fault tree, direct evaluation is probably the best approach for quantifying
CCFs whenever statistically significant data are available for the redundant system of
interest or similar systems. However, CCFs are rare, and the analyst typically does not
have sufficient data to estimate failure rates and probabilities directly as illustrated
above. Thus, the analyst must rely on generic data (i.e., combined data from several
systems in different facilities). Generic data are often from systems and equipment that
are not identical to the system/equipment considered in the analysis and/or from systems/equipment used in other industries (e.g., nuclear power plants). Obviously, this
creates uncertainty in the CCF probability estimates.
Another source of uncertainty was first recognized by Fleming (1974 and 1975) in
the early attempts to collect and analyze generic CCF data, and it is still a source of
uncertainty today. Some equipment failure databases do not provide all the information needed in estimating failure rates and/or probabilities of failure on demand. [The
databases available at the time were based on licensee event reports (LERs), which are
submitted to the U.S. Nuclear Regulatory Commission (NRC) by nuclear power
plants (LER, 1987). LERs are still valuable sources of CCF data.] Specifically, some
databases contain information about system/equipment failures attributable to CCFs
and independent failures, but they do not provide the operating time and the total
number of demands for the systems/equipment. That is, most databases provide information to estimate the numerator in the equations for evaluating Qk, but not sufficient
information to evaluate the denominator.
Based on this realization, Fleming (1974) evaluated CCFs indirectly; instead of
attempting to evaluate Qk, he evaluated the ratio of component failures from CCFs to
the total number of failures for the component. This ratio is called the "beta factor."
Then, for n = 2, Q2 can be obtained by multiplying the beta factor by the total probability of failure for the component. Specifically, the Beta Factor Model assumes that the
failure rate (or probability of failure) for a component in a redundant system can be
separated into independent and CCF contributions [Eq. (3.3.1)]:2
λ = λI + λC    (3.3.4)
2 The Beta Factor Model and other parametric CCF models can be defined in terms of failure rates or
probabilities of failure on demand. We will assume the former for the sake of discussion. Defining
the CCF model in terms of failure rates or probabilities of failure on demand may result in
differences in the parameter estimation (e.g., different values of the beta factor).
where λI = component failure rate for independent failures
λC = component failure rate for CCFs
The beta factor is β = λC/(λI + λC) [Eq. (3.3.2)]. Note that the beta factor is defined
at the component level, not at the system level. That is, the beta factor is the fraction of a
component's (not the system's) failure rate that results in simultaneous failure of a
redundant component from the same cause. Other models [e.g., the alpha-factor
model (Mosleh et al., 1988) and the multiple beta-factor model (Bodsberg et al.)] are defined at the
system level, and their factors represent fractions of the system's failure rate that result
in multiple failures.
Fleming and Raabe (1978) also observed that despite the type of equipment
(valve, pump, instrumentation, etc.) and the value of the failure rate, the values of the
beta factors were relatively constant. They postulated that the nearly constant beta factors "may be an inherent characteristic, perhaps directly associated with the current
state of technology." If so, the uncertainty associated with indirect models (beta factor,
alpha factor, etc.) may be relatively low, even when using generic data derived
from other industries. This assertion has not been formally proven. However, the relatively constant values of generic beta factors [and other factors (alpha, etc.)] have been
verified by several authors (Edwards and Watson, 1979; Montague et al., 1984). Also,
indirect models have been used in nearly all CCF analyses of real systems (versus the
analysis of a sample problem to demonstrate a method). These observations (relatively
constant values of factors and wide use of indirect models) suggest some acceptability
of the contention that there is lower uncertainty associated with indirect models than
with direct models.
Since its publication in 1974, the Beta-Factor Model has been the most frequently used
CCF model in reliability and risk assessments in several industries (Montague et al.,
1984). Simplicity of application and availability of operational data to estimate the beta
factor values were certainly important reasons for the popularity of this model. Also,
the Beta-Factor Model was the first CCF model that used operational data; the empirical nature of the model provided relatively high confidence in the final quantitative
results. However, this model has limitations. Specifically, this is a single-parameter
model that does not accurately model redundant systems with three or more trains.
The usual assumption for systems with three or more levels of redundancy is that all
redundant trains fail when a CCF event occurs; this results in overprediction of the
system failure probability.
To address this limitation, several other models have been developed for quantification of CCF probabilities, including the binomial failure rate (BFR), alpha factor,
basic parameter, multiple beta-factor, and MGL models (Bodsberg et al.; Fleming et al., 1984;
Mosleh et al., 1989). For considerable time, there was debate about the best model for
quantitative CCF analysis. This debate was resolved during the Common Cause Failure Reliability Benchmark Exercise (CCF-RBE) (Poucet et al., 1986). The CCF-RBE
was conducted over a 2-year period with 10 teams participating from eight countries.
Each team analyzed the same system using models and data deemed appropriate by the
team. One of the most important conclusions was that "once the qualitative analysis
and system logic model are fixed and the available data are interpreted consistently, the
selection of a parametric model among a relatively large set of tested and tried models is
not particularly important and does not introduce an appreciable level of uncertainty"
(Mosleh, 1991). Therefore, the use of any one of several models is adequate and should
provide analysis results that are consistent with the results that would be obtained using
the other models.
Because it is the most straightforward and widely used extension of the
Beta-Factor Model, we suggest using the MGL model (Fleming et al., 1984). In this
model, other parameters are introduced to account for each additional level of redundancy. For example, for a system with four redundant trains of equipment, the MGL
parameters are defined as follows:
β = Conditional probability of a CCF event that fails at least two components,
given that a specific component has failed
γ = Conditional probability of a CCF event that fails at least three components,
given that a CCF has occurred and affected at least two components
δ = Conditional probability of a CCF event that fails all four components, given
that a CCF has occurred and affected at least three components
Additional parameters [epsilon (ε), mu (μ), etc.] are defined similarly for systems
with higher levels of redundancy. Figures 3.15 through 3.17 show the equations for
evaluating the probabilities of CCFs (Q1, Q2, Q3, and Q4) as a function of the values of
the MGL parameters (for up to n = 4). Estimation of the MGL parameters is
addressed next.
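Figures 3.15 through 3.17 are not reproduced here. As a rough sketch only, the commonly cited MGL expressions for a four-train, nonstaggered system (in the style of NUREG/CR-4780) can be coded as follows; the function name is illustrative, and the exact normalization should be checked against the figures:

```python
from math import comb

def mgl_probabilities(q_t, beta, gamma=0.0, delta=0.0, n=4):
    """Split a component's total failure probability (or rate) q_t into
    CCF contributions Q1..Qn using the Multiple Greek Letter model.

    Uses the common nonstaggered-testing form:
        Qk = (1 / C(n-1, k-1)) * (rho_1 * ... * rho_k) * (1 - rho_{k+1}) * q_t
    with rho_1 = 1, rho_2 = beta, rho_3 = gamma, rho_4 = delta, rho_{n+1} = 0.
    """
    # rho[i] is the conditional probability that a CCF involves at least i
    # components given it involves at least i-1; rho[n+1] = 0 terminates it.
    rho = [1.0, 1.0, beta, gamma, delta][: n + 1] + [0.0]
    q = {}
    for k in range(1, n + 1):
        prod = 1.0
        for i in range(1, k + 1):
            prod *= rho[i]
        q[k] = prod * (1.0 - rho[k + 1]) * q_t / comb(n - 1, k - 1)
    return q

# Example: four redundant trains, q_t = 1e-2, beta = 0.1, gamma = delta = 0.5
q = mgl_probabilities(1e-2, beta=0.1, gamma=0.5, delta=0.5)
# Sanity check: Q1 + 3*Q2 + 3*Q3 + Q4 recovers q_t
```

With `n=2` and only `beta` supplied, the same function reduces to the Beta-Factor Model (Q1 = (1 − β)q_t, Q2 = βq_t), which is consistent with the MGL model being described as its extension.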
Estimation of CCF Model Parameters. An important step in the CCF analysis procedure is the estimation of the CCF model parameters (β, γ, δ, etc.). Mosleh et al.
(1988, page 3-49, and 1989, Appendix C) provide statistical estimators for the MGL
parameters. These references also
• provide guidelines for review, evaluation, and classification of operational data;
• discuss data sources;
• present a procedure for adjusting data from systems of different size (e.g., using
CCF data from systems with four redundant trains to estimate parameters for a
system with three redundant trains); and
• discuss the impact of different testing strategies (i.e., staggered versus
nonstaggered) on the estimators.
These references should be consulted if plant-specific data are available and the analyst
wishes to estimate the MGL parameters from field data. However, plant-specific data
are not available for most CPI applications. In this case, CCF analysts often use generic
data. Montague et al. (1984) and PSI (1997) present more than 80 generic beta factor
values published in the late 1970s and early 1980s for a variety of component types
(pump, diesel generator, air-operated valve, etc.). Many of these values were derived
from actual CCF data from safety systems at nuclear power plants, but these references
also include several beta factor values from other industries (chemical, aircraft, computer applications, and a conventional power plant at an oil refinery). These references do not include data for the other MGL parameters (γ, δ, etc.) because insufficient information was available in the 1970s and 1980s to estimate these parameters.
A CCF database developed recently at the Idaho National Engineering Laboratory
(INEL) (Kvarfordt et al., 1995) contains statistically significant data to evaluate MGL
parameters for systems with up to six redundant trains. INEL's database contains more
than 17,000 failure occurrences involving safety-related components in more than 100
nuclear power plants in the United States. Of these, about 1700 involved CCF events
in a variety of safety-related components.3 Paula (1997c) used these data to estimate
the MGL parameters presented in Tables 3.9 and 3.10.
As mentioned before, CCF models can be defined in terms of failure rates or probabilities of failure on demand, and the parameter estimators may be different in each
case. Specifically, the estimators depend on the testing strategy (staggered versus
nonstaggered) when the CCF model is defined in terms of probabilities of failure on
demand. Another important variable in estimating parameters is system size (n = 2, 3,
4, etc.). Table 3.9 presents the estimators for the MGL parameters for selected component types, testing strategies, and values of n. These estimators apply in either of two
cases: (1) a CCF model defined in terms of failure rates and (2) a CCF model defined in
terms of probabilities of failure on demand, assuming that the testing strategy for the
equipment in redundant systems is nonstaggered. Table 3.10 presents the estimators for
CCF models defined in terms of probabilities of failure on demand, assuming staggered
testing strategy.
The values of the MGL parameters within each of Tables 3.9 and 3.10 are remarkably
similar across component types. For example, the values of the beta factors in
each table are within a factor of about three; the values of the gamma factors and delta
factors are within a factor of less than two. Also, with a few possible exceptions, the
small differences that we do observe in the MGL parameters for different component
types are difficult to explain. That is, we see no strong engineering argument that would have allowed us to postulate these differences before seeing the data in the tables. These differences do not arise from statistical uncertainty, because the number of independent and CCF events used
to derive the MGL parameters is very large.
The few exceptions are the MGL parameters, particularly the beta factor values, for
air/gas operated valves and "other equipment" (heat exchanger, pump strainer, and
trash rack). For example, the beta factors for these component types are about three
times higher than the beta factor values for check valves and motor-operated valves in
Table 3.9. A review of the actual CCF events involving air/gas operated valves, heat
exchangers, pump strainers, and trash racks revealed that many of these events were
associated with the coupling factor common internal environment.
The data in Tables 3.9 and 3.10 suggest that the use of combined data (e.g., "all
valves" or "all equipment") to estimate parameters for equipment that is not in these
tables should not result in significant uncertainty in reliability analyses and risk assessments. That is, the estimates based on combined data seem representative of the estimates for most types of components. For example, the estimates for "all valves" in
Tables 3.9 and 3.10 are probably representative of the MGL parameters for hydraulic-operated valves, which are not shown in the tables. Also, it appears that it is not possible to use judgment to justify estimates other than those from the "combined data"
for any equipment that is not in Tables 3.9 and 3.10. This is because, with the few
exceptions noted, we cannot explain the individual departures from the estimates
obtained from "combined" data. Unless field data exist for a specific type of equipment,
the estimates obtained from "combined" data may be the best estimates for reliability
analyses and risk assessments.

3 CCF events involve multiple equipment failing simultaneously, or within a short period of time,
from the same cause of failure (e.g., maintenance error, design deficiency). Thus, three important
conditions for an actual CCF event are that multiple equipment must be failed (not simply
degraded), the failures must be simultaneous (or nearly simultaneous), and the cause of failure for
each redundant component must be the same. However, there is uncertainty about these
conditions for several events in any CCF database. In these cases, weighting factors are used to
reflect the analyst's confidence about these events being actual CCF events. The 1,700 events
include the actual CCF events as well as the CCF events that involved some uncertainty concerning
these three conditions.

TABLE 3.9. Generic MGL Parameters for Models That Use (1) Failure Rates or (2)
Probabilities of Failure on Demand with Nonstaggered Testing (Paula, 1997c)

Equipment                                                            n    β    γ    δ
Air/gas-operated valve
Check valve
Motor-operated valve
Relief (remotely operated) valve
Safety valve
Combined data for all valve types listed in this table
Equipment, Electrical (battery, battery charger, circuit breaker, and motor)
Equipment, Rotating (diesel generator, pump, and turbine)
Equipment, Other (heat exchanger, pump strainer, and trash rack)
Combined data for all equipment, including some equipment not listed in this table

TABLE 3.10. Generic MGL Parameters for Models That Use Probabilities of Failure on
Demand with Staggered Testing

Equipment                                                            n    β    γ    δ
Air/gas-operated valve
Check valve
Motor-operated valve
Relief (remotely operated) valve
Safety valve
Combined data for all valve types listed in this table
Equipment, Electrical (battery, battery charger, circuit breaker, and motor)
Equipment, Rotating (diesel generator, pump, and turbine)
Equipment, Other (heat exchanger, pump strainer, and trash rack)
Combined data for all equipment, including some equipment not listed in this table
Quantification of CCFs Using the Simple Method by Paula and Daggett
(1997b). The quantification procedures available for CCFs have been briefly
described above. These methods are often used as part of CPQRAs for CPI facilities. However, the detailed and complete quantification of CCF events is not always required or
cost-effective; in some applications, approximate numbers are adequate to support
decisions about safeguards. This section presents a simple method that provides probability estimates in the right "ballpark." The simple method consists of a three-step procedure, performed separately for each set of multiple safeguards:
• Step 1—Review the set of multiple safeguards to identify the CCF couplings and
the defenses that are in place against coupling. Previous discussions in this section
provide guidance for this identification step, and Table 3.8 shows the key points
in the identification of couplings.
• Step 2—Establish the "strength" of the CCF coupling as High, Moderate to
High, Low to Moderate, or Low. Table 3.11 provides guidelines for establishing the coupling strength as a function of the CCF couplings and defenses identified in Step 1.
• Step 3—Evaluate the probability of failure for the set of multiple safeguards using
Table 3.12. This probability depends on the level of redundancy and success
logic (one-out-of-two, one-out-of-three, etc.), the probability of failure on
demand (PFOD) for a single safeguard, the testing/maintenance strategy (staggered versus nonstaggered), and the coupling strength (High, Moderate to
High, Low to Moderate, or Low).
For example, if PFOD = 0.01, the coupling strength is Moderate to High for a set
of three safeguards configured with two-out-of-three success logic, and the safeguards are
tested/maintained on a nonstaggered basis, then the probability of at least two of the three
safeguards failing on demand is 0.003. That is, the probability that at least two safeguards fail on demand is about one-third of the probability of failure for a single
safeguard.
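Step 3 amounts to a table lookup. A minimal sketch follows; the dictionary keys and function name are illustrative, and only the single Table 3.12 cell confirmed by the worked example above is filled in (the remaining cells would be transcribed from the table in the same way):

```python
# Fragment of Table 3.12 keyed by (success logic, testing strategy,
# coupling strength, single-safeguard PFOD). Only the cell used in the
# worked example is shown here.
TABLE_3_12 = {
    ("2oo3", "nonstaggered", "moderate-to-high", 0.01): 3e-3,
}

def multiple_safeguard_pfod(logic, testing, coupling, single_pfod):
    """Step 3 of the simple method: look up the probability that the
    whole set of safeguards fails on demand."""
    return TABLE_3_12[(logic, testing, coupling, single_pfod)]

p = multiple_safeguard_pfod("2oo3", "nonstaggered", "moderate-to-high", 0.01)
# p = 3e-3, about one-third of the single-safeguard PFOD of 0.01
```

Encoding the table this way keeps the screening step auditable: every probability used in a decision can be traced back to a specific cell of Table 3.12.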
3.3.1.3. SAMPLE PROBLEM
This example considers the design of a continuous, stirred-tank reactor (CSTR) that
uses a highly exothermic reaction to produce a chemical compound. The CSTR will be
operated continuously throughout the year and will shut down annually for 2 weeks of
preventive maintenance. It is shown schematically in Figure 3.18.
Stage 1. System Logic Model Development
• System Description. The accident or concern is an upset condition resulting in
a runaway exothermic reaction in the CSTR. The protection against this undesirable event is provided by two CSTR dump valves (Vl and V2) that should open
and quench the reaction mixture in a water-filled sump if the temperature inside
the CSTR rises above a preset limit. The valve actuators are pneumatic and are
controlled by a voting logic unit (VLU). The VLU commands the valves to open
when at least two of three temperature channels indicate a high-high condition.
Each channel has a temperature transmitter (TT), and a temperature switch
TABLE 3.11. Guidelines for Determining the Coupling Strength (Paula et al., 1997b)

High: One or more of the following CCF couplings exist: common support system, common hardware, or common location (sudden, energetic events only), AND the probability of occurrence of the event (support system failure, failure of common hardware, or occurrence of a sudden, energetic event) is in the same order of magnitude as the probability of failure for a single safeguard. (Note: If the probability of occurrence of the event is higher than the probability of failure of a single safeguard, the probability of occurrence of the event dominates and safeguard redundancy is irrelevant.)

Moderate to High: For pneumatically operated valves, heat exchangers, pump strainers, and trash racks, the equipment similarity and common internal environment CCF couplings exist.^a

Low to Moderate: One or more of the following CCF couplings exist: common support system, common hardware, or common location (sudden, energetic events only), AND the probability of occurrence of the event is about one order of magnitude lower than the probability of failure for a single safeguard; OR the equipment similarity coupling and at least one of the following CCF couplings exist: (1) common location (abnormal events only), (2) common operating/maintenance staff and procedure, or (3) common internal environment (except for pneumatically operated valves, heat exchangers, pump strainers, and trash racks).

Low: None of the conditions for High, Moderate to High, or Low to Moderate apply.

^a Although any set of components subjected to a common internal environment is susceptible to the CCF coupling
common internal environment, operational experience shows that the equipment most affected by this coupling
are pneumatically operated valves, heat exchangers, pump strainers, and trash racks. This coupling has been less
significant for other component types such as check valves, electrical equipment (including motor-operated
valves), and rotating equipment (diesel generator, pump, and turbine).
(TSHH). The temperature switches are all set to trip at the same temperature
(high-high).
Every quarter, all the temperature channels will be tested and calibrated on
the same day. In addition, a temperature indicator in the control room allows
detection of sensor and transmitter failures (the operators will be required to
check and record these temperatures every 8-hour shift). However, failures of
the temperature switches will likely go undetected until the next quarterly test.
The valves and the VLU are tested during the annual maintenance by simulating
a signal from all three temperature channels.
• Problem Definition. Only the vessel, the temperature channels, the VLU, the
valves, and valve operators are addressed in this example. The instrument air
(IA) system supplies both pneumatic valve actuators and is assumed to fail on
TABLE 3.12. Probability of Failure for Multiple Safeguards (Paula et al., 1997b)

Level of Redundancy          Nonstaggered Testing                 Staggered Testing
and Success Logic       High    Mod-High Low-Mod  Low       High    Mod-High Low-Mod  Low

One-out-of-two
PFOD = 0.1              5e-02*  3e-02    2e-02    1e-02     4e-02   2e-02    1e-02    1e-02
PFOD = 0.03             2e-02   7e-03    4e-03    9e-04     1e-02   4e-03    3e-03    9e-04
PFOD = 0.01             5e-03   2e-03    1e-03    1e-04     3e-03   1e-03    7e-04    1e-04
PFOD = 0.003            2e-03   6e-04    3e-04    9e-06     1e-03   4e-04    2e-04    9e-06
PFOD = 0.001            5e-04   2e-04    1e-04    1e-06     3e-04   1e-04    6e-05    1e-06
PFOD = 0.0003           2e-04   6e-05    3e-05    <1e-06    1e-04   4e-05    2e-05    <1e-06
PFOD = 0.0001           5e-05   2e-05    1e-05    <1e-06    3e-05   1e-05    6e-06    <1e-06

One-out-of-three
PFOD = 0.1              5e-02   2e-02    1e-02    1e-03     3e-02   1e-02    5e-03    1e-03
PFOD = 0.03             2e-02   6e-03    3e-03    3e-05     1e-02   3e-03    1e-03    3e-05
PFOD = 0.01             5e-03   2e-03    9e-04    1e-06     3e-03   9e-04    3e-04    1e-06
PFOD = 0.003            2e-03   6e-04    3e-04    <1e-06    1e-03   3e-04    1e-04    <1e-06
PFOD = 0.001            5e-04   2e-04    9e-05    <1e-06    3e-04   9e-05    3e-05    <1e-06
PFOD = 0.0003           2e-04   6e-05    3e-05    <1e-06    1e-04   3e-05    1e-05    <1e-06
PFOD = 0.0001           5e-05   2e-05    9e-06    <1e-06    3e-05   9e-06    3e-06    <1e-06

Two-out-of-three
PFOD = 0.1              6e-02   5e-02    4e-02    3e-02     3e-02   4e-02    3e-02    3e-02
PFOD = 0.03             2e-02   1e-02    7e-03    3e-03     1e-02   7e-03    5e-03    3e-03
PFOD = 0.01             5e-03   3e-03    2e-03    3e-04     3e-03   2e-03    1e-03    3e-04
PFOD = 0.003            2e-03   9e-04    5e-04    3e-05     1e-03   5e-04    3e-04    3e-05
PFOD = 0.001            5e-04   3e-04    2e-04    3e-06     3e-04   2e-04    8e-05    3e-06
PFOD = 0.0003           2e-04   9e-05    5e-05    <1e-06    1e-04   5e-05    2e-05    <1e-06
PFOD = 0.0001           5e-05   3e-05    2e-05    <1e-06    3e-05   2e-05    8e-06    <1e-06

One-out-of-four
PFOD = 0.1              5e-02   2e-02    9e-03    1e-04     3e-02   8e-03    3e-03    1e-04
PFOD = 0.03             2e-02   7e-03    3e-03    <1e-06    1e-02   2e-03    8e-04    <1e-06
PFOD = 0.01             5e-03   2e-03    9e-04    <1e-06    3e-03   7e-04    3e-04    <1e-06
PFOD = 0.003            2e-03   6e-04    3e-04    <1e-06    1e-03   2e-04    8e-05    <1e-06
PFOD = 0.001            5e-04   2e-04    9e-05    <1e-06    3e-04   7e-05    2e-05    <1e-06
PFOD = 0.0003           2e-04   6e-05    3e-05    <1e-06    1e-04   2e-05    7e-06    <1e-06
PFOD = 0.0001           5e-05   2e-05    9e-06    <1e-06    3e-05   7e-06    2e-06    <1e-06

Two-out-of-four
PFOD = 0.1              5e-02   3e-02    2e-02    4e-03     3e-02   1e-02    8e-03    4e-03
PFOD = 0.03             2e-02   8e-03    4e-03    1e-04     1e-02   3e-03    1e-03    1e-04
PFOD = 0.01             5e-03   3e-03    1e-03    4e-06     3e-03   1e-03    4e-04    4e-06
PFOD = 0.003            2e-03   8e-04    4e-04    <1e-06    1e-03   3e-04    1e-04    <1e-06
PFOD = 0.001            5e-04   3e-04    1e-04    <1e-06    3e-04   1e-04    4e-05    <1e-06
PFOD = 0.0003           2e-04   8e-05    4e-05    <1e-06    1e-04   3e-05    1e-05    <1e-06
PFOD = 0.0001           5e-05   3e-05    1e-05    <1e-06    3e-05   1e-05    4e-06    <1e-06

Three-out-of-four
PFOD = 0.1              7e-02   7e-02    6e-02    6e-02     6e-02   6e-02    6e-02    6e-02
PFOD = 0.03             2e-02   1e-02    1e-02    5e-03     1e-02   9e-03    7e-03    5e-03
PFOD = 0.01             5e-03   4e-03    3e-03    6e-04     4e-03   2e-03    1e-03    6e-04
PFOD = 0.003            2e-03   1e-03    7e-04    5e-05     1e-03   6e-04    3e-04    5e-05
PFOD = 0.001            5e-04   4e-04    2e-04    6e-06     3e-04   2e-04    9e-05    6e-06
PFOD = 0.0003           2e-04   1e-04    6e-05    <1e-06    1e-04   5e-05    3e-05    <1e-06
PFOD = 0.0001           5e-05   4e-05    2e-05    <1e-06    3e-05   2e-05    9e-06    <1e-06

(Mod-High = Moderate to High coupling strength; Low-Mod = Low to Moderate.)
*Scientific notation: 5e-02 = 5 × 10^-2 ≈ 0.05.
FIGURE 3.18. Simplified system diagram for sample problem. CSTR, continuous stirred-tank reactor; V, valve; TE, temperature element; TT, temperature transmitter; TSHH, temperature switch high-high.
demand with a probability of 0.001 (this system is not analyzed in detail in this
example). Other support systems are not required for successful operation of the
protection system. (The VLU is designed to open the valves on loss of electric
power.)
The top event of interest is "CSTR Fails to Dump following a High Temperature Upset." Successful operation of the protection systems requires operation
of at least two of the temperature channels, the VLU, and at least one of the
valves (including the respective actuator and the instrument air system). External
events such as earthquakes, fires, and floods are beyond the scope of this
example.
• Logic Model Development. Figure 3.19 presents a fault tree for this problem.
The data for this example are presented in Table 3.13. These data were derived
from plants operated by the same company, as part of a previous effort to collect reliability data. That effort took place a few years earlier and did not include
an attempt to collect CCF data.
Stage 2. Identification of Common Cause Component Groups.
• Qualitative Analysis. There are two CCF events of concern in this example: (1)
the CCF of the redundant dump valves and (2) the CCF of the redundant temperature channels. As previously mentioned, the reliability data obtained from a
FIGURE 3.19. Fault tree for sample problem.
previous effort did not address CCFs explicitly. However, the following observations from the data collection study are useful for CCF considerations:
-About 70% of all failures of valves used in this type of service involved blockage
of flow caused by process material plugging the valve inlet or the valve internals.
-The majority of failures involving temperature switches in other plants were
associated with maintenance activities (e.g., maladjusted set points).
• Quantitative Screening. This step is important when performing an analysis of
a complete chemical process plant. In that case, the number of CCF events could
be high and some prioritization would be useful. In this problem, the
Beta-Factor Model will be used to develop preliminary CCF probabilities.
Generic experience indicates that the beta-factor for temperature channels is about
0.1 to 0.2 (Lydell, 1979; Meachum et al., 1983) and that the beta-factor for
pneumatic valves is about 0.2 (Stevenson and Atwood, 1983). Thus, the following preliminary CCF rates and probabilities are derived from the
data in Table 3.13:
TABLE 3.13. Assumed Data for the CCF Sample Problem^a

Equipment                                                Failure rate^b   Probability of failure
                                                         (per year)       on demand^b

Valve (includes vessel-to-valve piping, valves,
and valve operator)                                          0.1
Voting logic unit (VLU)                                      0.005
Instrument air system (IA)                                                    0.001
Temperature sensing element (TE)                             0.3
Temperature transmitter (TT)                                 0.1
Temperature switch (TSHH)                                                     0.025

^a Some cells are intentionally left blank; not all parameters are applicable to all equipment.
^b These values are for illustrative purposes only.
CCF rate for valves = β_VALVE × λ_VALVE = 0.2 × 0.1/year = 0.02/year
CCF rate for temperature sensing elements = 0.2 × 0.3/year = 0.06/year
CCF rate for temperature transmitters = 0.2 × 0.1/year = 0.02/year
CCF probability of failure on demand for temperature switches = 0.2 × 0.025 = 5 × 10^-3
Table 3.14 presents a preliminary evaluation of the protection system unavailability. For example, the results of "CCF of valves V1 and V2 to open" (third
row of Table 3.14) were calculated as follows:

Failure rate (CCF of both valves) = β_VALVE × λ_VALVE = 0.2 × 0.1/year = 0.02/year

Maximum exposure time = 1 year (valves are tested annually)

Probability of failure on demand (PFOD, 1-year exposure)
    = failure rate × maximum exposure time × 0.5
    = 0.02/year × 1 year × 0.5 = 0.01

Contribution to system unavailability = 0.01 (the CCF of both valves is a minimal cut set)

Percentage contribution = (minimal cut set PFOD / system PFOD) × 100
    = (0.01 / 0.023) × 100 ≈ 44%
The results for other Table 3.14 entries were calculated similarly.
According to Table 3.14, CCFs involving valves contribute 44% to the system
unavailability, and CCFs involving the temperature switches contribute 22%.
CCFs involving sensing elements and transmitters contribute negligibly to
system unavailability and are not considered further in this analysis.
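The screening arithmetic above can be reproduced in a few lines. The helper name and dictionary keys below are illustrative; the 0.5 factor is the average-unavailability approximation used in the text, and all input values come from Table 3.13 and the quoted beta factors:

```python
BETA = 0.2  # generic beta factor assumed in the screening step

def pfod_from_rate(rate_per_year, exposure_years):
    """Average probability of failure on demand for a failure that is
    revealed only by periodic testing: rate x exposure x 0.5."""
    return rate_per_year * exposure_years * 0.5

valve_pfod = pfod_from_rate(0.1, 1.0)  # 0.05 per valve, tested annually

contributors = {
    "both valves fail independently":  valve_pfod ** 2,                  # 2.5e-3
    "CCF of valves V1 and V2":         pfod_from_rate(BETA * 0.1, 1.0),  # 1.0e-2
    "voting logic unit fails":         pfod_from_rate(0.005, 1.0),       # 2.5e-3
    "loss of instrument air":          0.001,
    "two switches fail independently": 3 * 0.025 ** 2,                   # ~1.9e-3
    "CCF of temperature switches":     BETA * 0.025,                     # 5.0e-3
}

total = sum(contributors.values())  # ~0.023, as in Table 3.14
percent = {name: 100 * p / total for name, p in contributors.items()}
# The valve CCF contributes about 44% of the total system unavailability.
```

Note that the two CCF terms together account for roughly two-thirds of the total, which is why the screening step singles them out for further analysis.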
TABLE 3.14. Preliminary Evaluation of Protection System Unavailability^a

Contributor to system             Failure rate  Maximum        Probability     Contribution to   Percentage
unavailability                    (per year)    exposure       of failure      system            contribution to
                                                time (years)   on demand       unavailability    system
                                                                               (×10^-3)          unavailability

Valve V1 or V2 fails to open          0.1           1           0.05
Both valves V1 and V2 fail to
open (independently)                                            (0.05)^2            2.5              11
CCF of valves V1 and V2 to open       0.02          1           0.01                10               44
Voting logic unit—failure to
output shutdown signal when
commanded                             0.005         1           0.0025              2.5              11
Instrument air—loss of air
pressure                                                        0.001               1                4
Temperature channel:
  Sensing element                     0.3         8 hr                              ^b
  Transmitter                         0.1         8 hr                              ^b
  Switch                                                        0.025
Two temperature channels fail
to trip (independently)                                         3 × (0.025)^2       1.9              8
CCF of temperature channels
to trip                                                         0.005               5                22
Total system unavailability                                                         22.9             100

^a Some cells are intentionally left blank; not all parameters are applicable to all equipment.
^b Negligible contribution.
Stage 3. Common Cause Modeling and Data Analysis.
• Step 3.1. Incorporation of Common Cause Basic Events. Figure 3.20 shows a modified fault tree for the sample problem. Two CCF events have been added to the
original fault tree.
• Step 3.2. Data Classification and Screening. When failure event reports are available, the analyst should review previous occurrences of failures and postulate
how they could have occurred in the system of interest. This review involves
identifying events whose causes are explicitly modeled in the fault tree. For
example, a failure report may describe a loss of two valves because of the loss of
instrument air; this event is already addressed in the fault tree (Figures 3.19 and
3.20) and should not be considered in evaluating CCFs of valves.
Another aspect that should be investigated is whether there are conditions
(e.g., a different maintenance program) that would make the failures that
occurred at other plants more (or less) likely to occur at the plant being studied. If
FIGURE 3.20. Fault tree for sample problem modified for common cause failure events.
so, the generic data must be adjusted to accommodate those differences. This
topic is discussed in detail in NUREG/CR-4780 (Mosleh et al., 1988).
• Step 3.3. Parameter Estimation. When failure event reports are available to perform Step 3.2, the analyst develops a set of pseudodata, that is, generic data specialized for a particular process plant. NUREG/CR-4780 (Mosleh et al., 1988)
provides guidance on how to develop the pseudodata. These pseudodata can be
used to estimate parameters of any of the available parametric models (e.g., the
beta-factor).
Stage 4. System Quantification and Interpretation of Results
The CCF analysis results are shown in Table 3.14. This table indicates that two CCF
events are important contributors to system unavailability: (1) the CCF of the valves
and (2) the CCF of the temperature switches.
Consider the CCF of the valves first. One possible alternative to reduce the likelihood of this event is to institute a periodic test (e.g., quarterly) of the valves. The test
could involve momentarily opening and then closing each valve and verifying the
proper flow to the sump. The benefit associated with this test is that the exposure time
for the CCF event is reduced. That is, if both valves were indeed failed, this condition
would be detected within, at most, one quarter. This benefit can be evaluated quantitatively by reducing the maximum exposure time for the valve CCF event from 1 to 0.25
years (a quarter) in Table 3.14. The probability of occurrence of the valve CCF event
would be reduced to 2.5 × 10^-3, which is a factor of four lower than the probability value
without the test. Obviously, possible detrimental effects associated with instituting the
test (e.g., excessive valve wear) must be analyzed and compared with the benefits.
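The quantitative benefit of the quarterly test can be checked with a short sketch (the names are illustrative; the rate is the valve CCF rate from the screening step):

```python
# Valve CCF rate from the screening step: beta (0.2) x valve rate (0.1/year)
CCF_RATE = 0.02  # per year

def ccf_pfod(exposure_years):
    """Average CCF probability of failure on demand over the exposure time."""
    return CCF_RATE * exposure_years * 0.5

annual = ccf_pfod(1.0)       # 1.0e-2: CCF revealed only by the annual test
quarterly = ccf_pfod(0.25)   # 2.5e-3: CCF revealed by a quarterly test
factor = annual / quarterly  # 4.0: the factor-of-four reduction cited above
```

Because the probability scales linearly with exposure time in this approximation, any test interval T yields a reduction factor of 1/T relative to annual testing, which is useful when weighing test frequency against the detrimental effects of testing.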
Consider now the CCF of the temperature switches. One possible alternative to
reduce the likelihood of this event is to stagger the (quarterly) testing and adjustment of the temperature channels. That is, each channel will still be tested and adjusted every 3 months,
but these activities will be conducted about 1 month apart (rather than sequentially).
The benefit associated with this alternative is that the chances of maintenance-related
errors affecting multiple channels are reduced. This reduction is attained because similar human errors in each task are less likely to occur if the tasks are performed about 1
month apart than if the tasks are performed sequentially (a few minutes apart). Again,
possible detrimental effects associated with the modified testing and adjustment policy
(e.g., increased cost) must be compared with the benefits.
3.3.1.4. DISCUSSION
Strengths and Weaknesses. The main strength of the technique is that it acknowledges the historical evidence of CCF occurrences in redundancy applications.
Although CCF data are sparse (this is the main weakness of the CCF analysis), there are
sufficient data to indicate that CCF events tend to dominate system unavailability in
those applications where redundancy is used to improve system reliability performance.
Using multiple safeguards in CPI facilities often reduces risk. However, the very
high reliability theoretically achievable by multiple safeguards, particularly with redundant components, can sometimes be compromised by CCF events. CCF events have
consistently been shown to be important contributors to risk, and the frequency of
accident scenarios in CPI facilities may be grossly underestimated if CCFs affecting
multiple safeguards are not taken into account.
An important observation regarding CCFs is that the potential for the occurrence
of CCFs does not imply that using multiple safeguards is ineffective. On the contrary,
the use of multiple safeguards has been shown to reduce risks, and the information presented in this section supports this contention. However, it is important to recognize
that CCFs may limit the theoretical benefits achievable through the use of multiple
safeguards, particularly through the use of redundant components. A good understanding of CCFs provides a more realistic appreciation of risk in CPI facilities by
allowing better decisions to be made about the use of safeguards.
Identification and Treatment of Possible Errors. CCF analysis is limited by the lack
of plant-specific data. Thus, the analysis must rely extensively on generic experience.
There is judgment involved in using generic data for a specific plant. This problem can
be alleviated by using systematic procedures for analyzing generic data (Mosleh et al.,
1988) and by developing high-quality CCF databases.
Utility. A complete CCF analysis offers both quantitative and qualitative insights that
are helpful in establishing defense alternatives to improve availability and safety.
Resources. The CCF analyst should be an engineer experienced in risk assessment
techniques and in the analysis of failure reports. The CCF analysis should be peer
reviewed by CCF experts.
Available Computer Codes. Several computer codes are available for CCF analysis. The recently developed computer program CCF evaluates CCF parameters (e.g.,
MGL parameters) and CCF probabilities (Kvarfordt et al., 1995). The codes
COMCAN (Rasmuson et al., 1982), SETS (Worrell, 1985), WAMCOM (Putney,
1981), and BACKFIRE (Rooney et al., 1978) are useful when performing qualitative
CCF analyses of large, complex systems. The computer code BFR evaluates CCF rates
according to the binomial failure rate (quantitative) model (Atwood et al., 1983b).
3.3.2. Human Reliability Analysis
3.3.2.1. BACKGROUND
Purpose. The primary purpose of human reliability analysis (HRA) in a CPQRA is to
provide quantitative values of human error probability for inclusion in fault tree analysis (Section
3.2.1) and event tree analysis (Section 3.2.2). HRA techniques can also be valuable in
identifying potential recommendations for error reduction.
Technology. A human error is an action that fails to meet some limit of acceptability as
defined for a system. This may be a physical action (e.g., closing a valve) or a cognitive
action (e.g., fault diagnosis or decision making). Some examples where human error
can increase risk from a process plant are
• errors in operations or maintenance procedures that lead to increased demands
on protective systems
Next Page
Previous Page
consistently been shown to be important contributors to risk, and the frequency of
accident scenarios in CPI facilities may be grossly underestimated if CCFs affecting
multiple safeguards are not taken into account.
An important observation regarding CCFs is that the potential for the occurrence
of CCFs does not imply that using multiple safeguards is ineffective. On the contrary,
the use of multiple safeguards has been shown to reduce risks, and the information presented in this section supports this contention. However, it is important to recognize
that CCFs may limit the theoretical benefits achievable through the use of multiple
safeguards, particularly through the use of redundant components. A good understanding of CCFs provides a more realistic appreciation of risk in CPI facilities by
allowing better decisions to be made about the use of safeguards.
Identification and Treatment of Possible Errors. CCF analysis is limited by the lack
of plant-specific data. Thus, the analysis must rely extensively on generic experience.
There is judgment involved in using generic data for a specific plant. This problem can
be alleviated by using systematic procedures for analyzing generic data (Mosleh et al.,
1988) and by developing high-quality CCF databases.
Utility. A complete CCF analysis offers both quantitative and qualitative insights that
are helpful in establishing defense alternatives to improve availability and safety.
Resources. The CCF analyst should be an engineer experienced in risk assessment
techniques and in the analysis of failure reports. The CCF analysis should be peer
reviewed by CCF experts.
Available Computer Codes. There are several computer codes available for CCF analysis. The recently developed computer program CCF evaluates CCF parameters (e.g.,
MGL parameters) and CCF probabilities (Kvarford et al., 1995). The codes
COMCAN (Rasmuson et al., 1982), SETS (Worrell, 1985), WAMCOM (Putney,
1981), and BACKFIRE (Rooney et al., 1978) are useful when performing qualitative
CCF analyses of large, complex systems. The computer code BFR evaluates CCF rates
according to the binomial failure rate (quantitative) model (Atwood et al., 1983b).
3.3.2. Human Reliability Analysis
3.3.2.1. BACKGROUND
Purpose. The primary purpose of human reliability analysis (HRA) in a CPQRA is to
provide quantitative values of human error for inclusion in fault tree analysis (Section
3.2.1) and event tree analysis (Section 3.2.2). HRA techniques can also be valuable in
identifying potential recommendations for error reduction.
Technology. A human error is an action that fails to meet some limit of acceptability as
defined for a system. This may be a physical action (e.g., closing a valve) or a cognitive
action (e.g., fault diagnosis or decision making). Some examples where human error
can increase risk from a process plant are
• errors in operations or maintenance procedures that lead to increased demands
on protective systems
• failure of an operator when called on to restore a plant to a safe condition (e.g.,
by shutdown)
• errors in maintaining, calibrating, and testing control systems and protective systems.
HRA includes the identification of conditions that cause people to err and the estimation of the probability of that error. For HRA, it is always assumed that the operator is not malicious; hence, sabotage is explicitly not considered.
The increasing use of complex computer control systems has produced additional
factors for consideration in human reliability (Section 6.3). Wickens (1984) has provided useful guidance on human factors in control systems.
Applications. HRA techniques originated in the aerospace industry and have been
applied in the nuclear industry. Miller and Swain (1987) list a number of studies conducted in nuclear applications. They provide a worked example for human error associated with an electronic programming device. They also note that a few HRA studies
have been applied in the chemical processing and petroleum industries, but that these
are proprietary reports. DeStreese (1983) discusses applications of human factors engineering to control rooms for LNG and suggests human error probabilities for that situation.
Kletz (1985) uses case studies to identify important factors leading to human
error. Simple qualitative guidelines for human error prediction in process operations are
given by Ball et al. (1985). Bellamy et al. (1986) discuss alternative approaches to
incorporate the results of HRAs into risk assessments, using examples of fault and
event trees.
3.3.2.2. DESCRIPTION
Description of Technique. Many of the applications of HRA are directed to specialists. A broad overview of techniques is given by Miller and Swain (1987). A comprehensive evaluation of HRA techniques is found in Swain (1988). All the techniques
have the following characteristics:
• identification of relevant tasks performed or to be performed (if the plant is at the
design stage) by operators
• representation of each task by some method, such as decomposition of the task
into its principal components to identify
-opportunities for error
-points of interaction with the plant
• use of data derived from historical records or judgment; some techniques have
their own database as well
• identification of the existence of conditions that affect error rates. These conditions are termed performance-shaping factors that take into account stress, training, and the quality of displays and controls used by operators
The results of an HRA are usually expressed in the form of human error probabilities or rates:

Human Error Probability = (Number of errors) / (Number of opportunities for error)    (3.3.5)

Human Error Rate = (Number of errors) / (Total task duration)    (3.3.6)
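As a minimal sketch (the function names are ours, not the book's), these two definitions are simple ratios:

```python
# Eqs. (3.3.5) and (3.3.6) as plain ratio calculations.

def human_error_probability(n_errors: int, n_opportunities: int) -> float:
    """Eq. (3.3.5): number of errors per opportunity for error."""
    if n_opportunities <= 0:
        raise ValueError("need at least one opportunity for error")
    return n_errors / n_opportunities

def human_error_rate(n_errors: int, total_task_duration: float) -> float:
    """Eq. (3.3.6): number of errors per unit of total task time."""
    if total_task_duration <= 0:
        raise ValueError("task duration must be positive")
    return n_errors / total_task_duration

# e.g., 3 miscalibrations observed in 600 calibration tasks:
print(human_error_probability(3, 600))  # 0.005
```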
The major techniques for obtaining human error quantification are given below.
Technique for Human Error Rate Prediction (THERP). THERP was developed for
the nuclear industry and is comprehensively described by Swain and Guttmann
(1983). Figure 3.21 presents a flow chart for use of THERP. The method requires
breaking down a procedure or overall task into unit tasks (task analysis) and combining
this information in the form of event trees. Conditional probabilities of success or failures for each branch of the tree are estimated. The event tree calculations are then performed. Although a database is provided, some judgment is required on the part of the
analyst in assigning probabilities.
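The branch arithmetic behind a THERP event tree can be sketched as follows. The two unit tasks and their failure probabilities are illustrative, not values from the Swain and Guttmann (1983) handbook, and independence between the tasks is assumed:

```python
# THERP-style HRA event tree for two unit tasks performed in sequence.
# Each branch is correct (success) or incorrect (failure). Probabilities
# are ILLUSTRATIVE placeholders; independence is assumed.

hep_a = 0.003   # P(unit task A performed incorrectly)
hep_b = 0.01    # P(unit task B performed incorrectly)

# Failure paths of the binary event tree: either task failing fails the
# overall task. Success requires both tasks to be performed correctly.
p_fail = hep_a + (1 - hep_a) * hep_b
p_success = (1 - hep_a) * (1 - hep_b)

assert abs(p_fail + p_success - 1.0) < 1e-12   # branches are exhaustive
print(f"{p_fail:.5f}")  # 0.01297
```

Real THERP analyses additionally apply performance-shaping factors, dependence adjustments, and recovery factors to these branch probabilities.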
Accident Sequence Evaluation Program (ASEP). A short and conservative method of
HRA is presented below. The method is based on the Accident Sequence Evaluation
Program (ASEP) developed by Swain (1987). This method should be used only for
initial screening of the importance of human error. If the probability of system failure
determined using the shortened procedure is unacceptable, the analyst should contact a
specialist in the field of human reliability engineering for more detailed analysis. The
short method estimates the human error probability for two stages in an accident
sequence:
1. preaccident
2. postaccident
The preaccident screening analysis is intended to identify those systems or subsystems that are vulnerable to human errors. If the probability of system failure is judged
to be acceptable using the method described below, human error is not important. If
the probability of system failure is judged to be unacceptable, a specialist in the field of
human reliability engineering should be consulted for a more detailed analysis.
Once an accident sequence has started, there is a chance that the operators will
detect the problem and correct it before any serious consequences result. For example,
the operators may detect that they have overfilled a reactor and drain off the excess
reactant before heating the batch. If they fail to drain the reactor before heating, the
reactor could be overpressured resulting in a release of toxic material. The postaccident
human reliability analysis is intended to evaluate the probability of the operators
detecting and correcting their error before the toxic material is released.
• Preaccident HEP Analysis. The steps in the preaccident human reliability analysis are presented in Table 3.15. First, those activities that are critical to the safe
operation of the system must be identified. At this step any dependence between
the critical activities should also be identified. For example, if the same maintenance crew performs a test of redundant high-pressure interlock systems on a
chemical reactor at the same time, these tests would be judged to be totally
dependent on each other. If the maintenance crew makes an error in testing the
first interlock, it is assumed that they will also make an error when testing the other interlock systems.

FIGURE 3.21. Overview of human reliability analysis (THERP flow chart):

STEP 1: Plant visit. Familiarization with the operation of the plant, the displays and controls used by operators, and the administrative system.
STEP 2: Review information from fault tree analysis. Check branches of fault trees for human failures affecting the relevant events. For a plant at the design stage, task descriptions must be developed from detailed system information and preliminary procedures.
STEP 3: Talk-through. Familiarization with relevant procedures.
STEP 4: Task analysis. Break down tasks into smaller discrete units of activity.
STEP 5: Develop HRA event trees. Express each unit task sequentially as binary branches of an event tree; each branch represents correct or incorrect performance (Section 3.2.2).
STEP 6: Assign human error probabilities. Data provided in the handbook (Swain and Guttmann, 1983).
STEP 7: Estimate the relative effects of performance-shaping factors. Data provided in the handbook (Swain and Guttmann, 1983).
STEP 8: Assess dependence. Equations for modifying probabilities on the basis of dependence between tasks are provided in the handbook (Swain and Guttmann, 1983).
STEP 9: Determine success and failure probabilities. Total the probabilities for success and failure by multiplying branch probabilities and summing appropriately.
STEP 10: Determine the effects of recovery factors. Operators may recover from errors before they have an effect; recovery factors are applied to dominant error sequences.
STEP 11: Perform a sensitivity analysis, if warranted.
STEP 12: Supply information to fault tree analysis. The output is a human error probability or rate.

TABLE 3.15. Preaccident Human Reliability Screening Analysis Procedure

Step 1: Identify critical human actions that can cause an accident to occur.

Step 2: Assume that the following basic conditions apply relative to each critical human action:
  a. No indication of a human error will be annunciated in the control room.
  b. The activity subject to human error is not checked by a postmaintenance, postcalibration, or postoperation test.
  c. There is no possibility for the person to detect that he (or she) has made an error.
  d. Shift or daily checks of the activity subject to human error are not made or are not effective.

Step 3: Assign a human error probability of 0.03 to each critical activity.

Step 4: If two or more critical activities are required before an accident sequence can occur, assign a human error probability of 0.0009 for the entire sequence of activities. If these two or more critical activities involve two or more redundant safety systems (interlocks, relief valves, etc.), assign a human error probability of 0.03 for the entire sequence of activities. This is a conservative assumption to account for the same operator making the same mistake on multiple safety systems.
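The Table 3.15 screening rules (Steps 3 and 4), including the total-dependence assumption for redundant safety systems, might be coded as follows; the function name and interface are ours:

```python
# Sketch of the ASEP preaccident screening rules in Table 3.15.

def preaccident_screening_hep(n_critical_activities: int,
                              redundant_safety_systems: bool) -> float:
    """Screening human error probability for a sequence of critical activities.

    Step 3: each critical activity gets HEP = 0.03.
    Step 4: two or more independent critical activities -> 0.0009 overall;
            if they involve redundant safety systems, assume total
            dependence (same crew, same mistake) -> 0.03 overall.
    """
    if n_critical_activities < 1:
        raise ValueError("need at least one critical activity")
    if n_critical_activities == 1:
        return 0.03
    return 0.03 if redundant_safety_systems else 0.0009

# Same crew testing two redundant interlocks at the same time (totally dependent):
print(preaccident_screening_hep(2, redundant_safety_systems=True))   # 0.03
# Two genuinely independent critical activities:
print(preaccident_screening_hep(2, redundant_safety_systems=False))  # 0.0009
```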
Once a critical human error has been made, it is conservatively assumed in
Step 2 that there is no way to detect the error. A human error probability of 0.03
is assigned to each critical activity in Step 3. Finally, the probability of multiple
human errors occurring in a particular accident sequence is evaluated in Step 4.
The method in Step 4 conservatively assumes that if more than two independent
critical activities must be done incorrectly for an accident to occur, the additional
tests, checks, or critical activities are either not done or are performed incorrectly.
• Postaccident HEP Analysis. Once an accident sequence has started, the most
important variable is the time the operators have to detect and correct errors.
The chances of a control room crew detecting and correcting a problem are
better when they have 3 hours than if they only have 3 seconds before a serious
condition results. Before corrective action can be taken, the operators must diagnose that there is a problem. Figure 3.22 shows the probability of the control
room operators failing to properly diagnose an abnormal event that is annunciated in the control room, as a function of the time available for diagnosis. The
time available for diagnosis is computed as
Td = Tm - Ta    (3.3.7)

where Td = the time available for control room operators to diagnose that an abnormal event has occurred, Tm = the maximum time available to correctly diagnose that an abnormal event has occurred and to have completed all corrective actions necessary to prevent the resulting incident, and Ta = the time required to complete all postdiagnosis required actions to bring the system under control.
[Figure: probability of failure versus time available (in minutes) for diagnosis of an abnormal event after control room annunciation, Td]
FIGURE 3.22. Probability of failure by control room personnel to correctly diagnose an abnormal event.
The maximum time to diagnose and correct a problem (Tm) must be determined by a detailed analysis of each accident sequence. An analysis of the time
delays created by such factors as the rate of heat transfer, chemical reaction kinetics, or flow rates may be required. This analysis normally requires process engineering support.
The time required to correct the problem (Ta) is next determined. A list is
made of the operator tasks that must be completed to correct the problem created in each accident sequence. The times required to complete each of the operator tasks (including travel time) are determined using Table 3.16. For each
accident sequence, Ta is defined as the sum of the time for the operator to complete all the required tasks.
Once the times Tm and Ta have been estimated, the time available for the operators
to correctly diagnose an abnormal event (Td) is determined using Eq. (3.3.7). The
probability of the operators failing to diagnose the problem is next determined using
Figure 3.22. If the abnormal event is not annunciated in the control room by one or
more signals, the probability of failing to properly diagnose the problem is conservatively assumed to be 1.0.
Once the operators have diagnosed that an abnormal event has occurred and that a
serious incident is going to occur unless action is taken, the operator may fail to correctly deal with the problem. Table 3.17 presents the probabilities of human error for
various conditions or tasks that the operators must perform to prevent the incident
from occurring.
TABLE 3.16. Times Required for Postaccident Activities

Description of activity                                              Time required (min)
Find and initiate written procedure if not committed to memory      5 (a)
Travel plus manipulation time on main control room panel            1
Travel plus manipulation time on secondary control room panels      2
Travel plus manipulation time of a manually operated field system   (b)
Process stabilization                                               (c)

(a) For a procedure to be considered fully committed to memory, the operators must demonstrate proficiency by frequent walk/talk-throughs or testing. For example, the execution of an emergency shutdown procedure that is practiced on a quarterly basis would be considered fully committed to memory.
(b) If available, use actual times determined by walkthrough simulations. Use twice the operator's estimate if no other information is available. Use 5 min if no information is available.
(c) Once the required manipulations have been performed, the system may require some time to stabilize. The length of time must be determined in consultation with process engineers. If no information is available, assume the system instantly returns to a safe condition. However, this assumption must be flagged for further study.
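Estimating Ta from Table 3.16 can be sketched as follows; the task times are the table's, and footnote (b)'s fallback rules are encoded in a helper (the dictionary layout and names are ours):

```python
# Ta per Table 3.16; task times are the table's, layout/names are ours.

TASK_TIMES_MIN = {
    "written procedure": 5,    # find and initiate, if not memorized
    "main panel": 1,           # travel + manipulation, main control room panel
    "secondary panel": 2,      # travel + manipulation, secondary panels
}

def field_task_time(walkthrough_min=None, operator_estimate_min=None):
    """Table 3.16 footnote (b): prefer walkthrough-simulation times, else
    twice the operator's estimate, else 5 min."""
    if walkthrough_min is not None:
        return walkthrough_min
    if operator_estimate_min is not None:
        return 2 * operator_estimate_min
    return 5

# Sample-problem style: written procedure (5 min) + main panel action (1 min)
t_a = TASK_TIMES_MIN["written procedure"] + TASK_TIMES_MIN["main panel"]
print(t_a)  # 6
```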
Techniques Utilizing Expert Judgment. Although all HRA techniques require expert
judgment, some techniques are more heavily dependent on its application. Some structured methods have been developed that are suitable for use in HRA. Bias is a potential
problem in expert judgment, but this can be overcome by applications of techniques
such as paired comparisons. If possible, estimates obtained using expert judgment
should be calibrated with more objective data. A short overview of these, with key references and brief descriptions, follows:
• Absolute Probability Judgment (APJ) (Comer et al., 1984). This method employs
direct estimates of human error probabilities by an individual expert or, preferably, a group of experts.
• Paired Comparisons (PC) (Blanchard et al., 1966; Hunns and Daniels, 1980).
Tasks are presented in pairs to the experts for judgment as to which task has the
highest likelihood of error. Tasks with known human error probabilities are then
used for calibration.
• Influence Diagram Approach (IDA) (Phillips et al., 1985). Numerical evaluations
of the effects of combined influences (e.g., stress, quality of procedures, design)
on human reliability are made by expert judges. These evaluations provide
weightings for direct estimates of human error probabilities provided by the
same judges. Overall error probability can then be calculated.
• Success Likelihood Index Methodology Using Multiattribute Utility Decompositions
(SLIM-MAUD) (Embry et al., 1984). This method requires experts to generate
the important shaping factors and to define the relative likelihood of success for
each member of a set of tasks. Success Likelihood Index values are generated
which can then be converted to human error probabilities using paired comparison techniques.
Maintenance Personnel Performance Simulation (Siegel et al., 1984). This method,
used for maintenance reliability, is based, like THERP, on task analysis. The technique
TABLE 3.17. Human Error Probability in Recovery from an Abnormal Event

Description                                                              Human error probability
Perform a required action outside of the control room                    1.0
Perform a required action outside of the control room while in
  radio contact with the control room operators (a)                      0.5
Perform a critical skill-based or rule-based action when no
  written procedures are available                                       1.0
Perform a critical action under conditions of moderately high stress (b) 0.05
Perform a critical action under conditions of extremely high stress (c)  0.25

(a) The human error probability of 0.5 includes failure of the operator either to properly complete an assigned task or to receive instructions due to a disruption of communications (noise, stress, radio failure, etc.).
(b) Conditions of moderately high stress would occur when dealing with an abnormal event that could result in a major loss of product, shutdown of the process unit, operator employment action such as a reprimand, or other adverse outcomes that are not life endangering to the operators.
(c) Conditions of extremely high stress would occur when dealing with an abnormal event that could result in a major fire, runaway reaction, or toxic chemical release that could kill or seriously injure the operator or his friends.
addresses personnel and task characteristics. Each subtask is analyzed using a set of
algorithms plus a Monte Carlo simulation. The output provides probability of success,
time to completion, operator overload, idle time, and level of stress.
Operator Action Tree (OAT) (Hall et al., 1982). This method provides a means to
evaluate the performance of a plant operator, based on the sequence of tasks, through
ETA. Time available for response is the critical parameter; other performance shaping
factors are omitted.
Steps in the analysis are
1. Identify relevant plant safety functions from system event trees.
2. Identify related operator actions to achieve plant safety functions.
3. Express actions as an OAT (Figure 3.23).
4. Calculate the time available from the first appearance of indications of abnormal conditions to the last point at which starting to take action will be successful.
5. Estimate error probabilities from the time-reliability curve (Figure 3.24). The data for this curve were derived from expert judgment.
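The OAT quantification treats the operator response as successful only if every stage in Figure 3.23 succeeds. A sketch with illustrative branch probabilities (in a real OAT the diagnosis term would come from the Figure 3.24 time-reliability curve):

```python
# Sketch of OAT quantification: the operator succeeds only if each stage
# of Figure 3.23 (observe, diagnose, respond) succeeds. Branch values
# here are ILLUSTRATIVE, not from any published OAT study.

p_fail_observe = 0.01
p_fail_diagnose = 0.05   # in OAT: read from the time-reliability curve
p_fail_respond = 0.02

# Success requires all three stages to succeed; any failure branch fails.
p_success = (1 - p_fail_observe) * (1 - p_fail_diagnose) * (1 - p_fail_respond)
p_operator_failure = 1 - p_success
print(f"{p_operator_failure:.4f}")  # 0.0783
```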
[Figure: event tree with sequential stages (event occurs; operator observes indications; operator diagnoses problem; operator carries out required response), each stage branching to success or failure]
FIGURE 3.23. Basic operator action tree.

[Figure: probability of failure versus time (minutes), with a cut-off for accidents with frequencies less than 1 per year]
FIGURE 3.24. Operator action tree reliability curve.
Human Error Assessment and Reduction Technique (HEART) (Williams, 1986).
This method quantifies the effect of a large number of performance shaping factors on
human error probability. Nominal human error probabilities are provided for a generic
list of tasks. A set of remedial measures for each error producing condition is provided.
Logic Diagram. The 12-step sequence for carrying out a THERP human error analysis is given in Figure 3.21. Other techniques differ from THERP and from one another
principally in Steps 5-8. Methods of task analysis also vary (Step 4). Although shown
as a sequence of discrete steps, iterations are possible with successive refinement as
fuller details and sensitivities of major parameters are identified.
Theoretical Foundation. The techniques are based in part on the psychology of
human behavior. They are derived from empirical models using statistical inference
and have not been adequately validated. Bias is a potential problem in expert judgment,
but this can be overcome by applications of techniques such as paired comparisons.
Input Requirements and Availability. To complete any HRA, a detailed description
of the process system, procedure, and overall task must first be developed. Application
requires either expert judgment, comprehensive data, or direct estimates of human
error probability data depending on the techniques used. The most common human
error probability data are derived from nuclear power plants but can be applied (with
judgment) to chemical plants. For a review of data see Topmiller et al. (1982). Additional sources of human error probability data are
• Swain and Guttmann (1983): database is supplied, which is derived from a
number of sources. It includes time reliability data for fault diagnosis.
• AIR (Munger et al., 1962): derived from empirical data, mainly error display
reading and control operation.
• Aerojet (Irwin et al., 1964): derived from an extension of the AIR database,
which includes expert judgment.
• TEPPS (Blanchard et al., 1966): derived from expert judgment; mainly display-reading and control operation.
Output. The methods provide estimates of human error probabilities or human error
rates for direct incorporation into fault and event trees. They may also identify tasks with
high values of human error, which designers may use to reduce overall error probability.
Simplified Approaches. ASEP and HEART are much simpler and quicker to apply
than THERP or group expert judgment techniques. However, ASEP and HEART are
not as accurate as THERP.
3.3.2.3. SAMPLE PROBLEM
Most of the techniques of HRA are lengthy and difficult to follow for nonexperts. For
this reason, one of the simpler techniques, ASEP, is used for the sample problem.
A CPQRA is being performed on a reactor system (Figure 3.25). Raw material A is
charged in a batch mode from a storage tank to the reactor. The amount of raw material
charged to the reactor is determined by a timer and automatic shutoff valve. The reactor is normally filled to the 50% level. A HAZOP study found that the reactor could be overcharged if the timer failed to signal the automatic shutoff valve to close.

[Figure: reactor with timer-controlled raw material A charge line and automatic shutoff valve; process vent to Scrubber 109; emergency relief vent to Scrubber 109; 150 psig steam; CTW and steam; agitator; circulating pump; heater]
FIGURE 3.25. Piping and instrument diagram for human error probability example problem.

The operator would be alerted by the level alarm when the reactor reaches 70% full. At the high
level alarm, the operator is trained to close the automatic shutoff valve (XV-101) using
a manual override control switch on the control panel. The contents of the reactor
would then be pumped to the next process system to recover raw material A. The process engineer responsible for the system has determined that 15 min after annunciation
of the level alarm, the scrubber would be overfilled with raw material A, resulting in a
release to the atmosphere. The risk analyst wishes to estimate the probability that the
operator fails to properly deal with this situation.
Since this accident sequence is started by a mechanical timer failure, there are no
critical human activities involved in the preaccident stage. Thus, the preaccident analysis procedure of Table 3.15 is skipped. The first step in the postaccident analysis is to
determine how much time the operator has to diagnose that the reactor is overfilling
(Td). The process engineer has determined that the maximum time available for the
operators to diagnose and correct the overfill condition is 15 min (Tm). The time
required to complete all corrective actions (Ta) is estimated using Table 3.16 and presented in Table 3.18.
The time available for the operators to diagnose the situation is
Td = Tm - Ta    (3.3.8)

Td = 15 min - 6 min = 9 min
The probability that the operator fails to diagnose the situation is estimated as 0.55
(Figure 3.22). The probability of the operator making an error during the recovery
phase of the accident is determined using Table 3.17. The risk analyst judges that conditions of moderate stress should be used to evaluate the probability of the operator
failing to find and close the manual override switch. Based on conditions of moderate
stress, the probability of the operators failing to find and close the manual override
switch is 0.05.
The total probability of operator error in this accident sequence is the sum of the
diagnosis and recovery error probabilities. Thus, the total human error probability for
this accident sequence is
Human error probability = 0.55 + 0.05 = 0.60
(3.3.9)
This probability can now be used by the risk analyst in fault tree or event tree calculations. A human error probability of 0.6 would be considered to be extremely high if
this accident sequence was determined to be a major contributor to the system risk.
Design modifications such as the addition of interlocks or better material balance control might be needed. Procedural changes such as the use of written check lists to verify the proper charge to the reactor might also be needed.

TABLE 3.18. Task Analysis for Human Error Probability Example Problem

Task                                                          Time required (min)
Find and initiate written procedure                           5
Find and close manual override switch on main control panel   1
Total time, Ta                                                6
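The sample problem's postaccident arithmetic, end to end; all numbers are from the worked example above:

```python
# ASEP postaccident arithmetic for the sample problem; all inputs are
# the worked example's own numbers.

t_m = 15          # min: max time to diagnose AND correct (process engineer)
t_a = 6           # min: Table 3.18 total time for corrective actions
t_d = t_m - t_a   # Eq. (3.3.8): 9 min available for diagnosis

p_fail_diagnose = 0.55   # Figure 3.22 at Td = 9 min
p_fail_recover = 0.05    # Table 3.17: moderately high stress

# Eq. (3.3.9): total human error probability for the accident sequence
p_human_error = p_fail_diagnose + p_fail_recover
print(t_d, f"{p_human_error:.2f}")  # 9 0.60
```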
3.3.2.4. DISCUSSION
Identification and Treatment of Possible Errors. All of the techniques require some
form of judgment on the part of the analyst. Errors can arise in the identification of
human factors in the FTA, the familiarization of the operators' tasks, the definition of
task analysis, and the selection and application of data. Such errors can be reduced
through independent checks.
Utility. HRA is a specialist topic. It is best utilized for critical systems when the human
error component is important, particularly when there is potential for common mode
failure (e.g., an untrained operator making not one, but several errors which are not
independent). The techniques can be carried out by nonspecialists, given sufficient
understanding of the methods. When human error is determined to be a major contributor to the system risk, a review by a human factors specialist would be warranted.
Resources Needed. CPQRA human reliability analysis is a component technique
within FTA and ETA and increases the resource demands of these analyses. Several
person-weeks could be required to develop THERP human error probability estimates
for limited key areas for a single plant. Given a full understanding of the required tasks,
simpler techniques (e.g., ASEP) may require only a few hours of analysis.
Available Computer Codes.
MAPPS (Maintenance Personnel Performance Simulation): Applied Psychological
Services and Oak Ridge National Laboratory, Oak Ridge, TN.
SLIM (Success Likelihood Index Method) and IMAS (Influence Model Assessment
System): Human Reliability Associates, UK.
3.3.3. External Events Analysis
3.3.3.1. BACKGROUND
Purpose. External events can initiate and contribute to potential incidents considered
in a CPQRA. Although the frequencies of such events are generally low, they may
result in a major incident. They also have the potential to initiate CCFs (Section 3.3.1)
that can lead to escalation of the incident. External events can be subdivided into two
main categories:
• natural hazards: earthquakes, floods, tornadoes, extreme temperature, lightning,
etc.
• man-induced events: aircraft crash, missile, nearby industrial activity, sabotage,
etc.
A partial list of possible external events is presented in Table 3.19 (NUREG, 1983,
1985). The risk analyst must decide which external events will be studied for a particu-
lar problem. The analyst should document which external events were studied and
which were excluded. Utility failures are normally incorporated into the main system
analysis instead of being considered as external events.
Technology. Normal design codes for chemical plants have sufficient safety factors to
allow the plant to withstand major external events to a particular level (e.g., wind loading of 120 mph). The Federal Safety Standards for LNG Facilities (Department of Transportation, 1980) give quantitative design rules for seismic events, flooding, tornadoes,
and extreme wind hazards as follows:
• Seismic. The design should withstand critical ground motions with an annual
probability of 10⁻⁴ or less.
• Flooding. The design should withstand the effects of the worst flooding occurrence in a 100-year period.
• Winds. The design should withstand the most critical combination of wind
velocity and duration having a probability of 0.005 or less in a 50-year period
(annual probability of 10⁻⁴ or less).
Only qualitative guidance is given for extremes of weather, frost heave, etc.
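The wind criterion can be checked arithmetically. A sketch converting the 50-year exceedance probability to an annual probability, assuming statistically independent years:

```python
# Convert "exceedance probability 0.005 in a 50-year period" to an annual
# probability, assuming independent years. This checks the 1e-4 annual
# figure quoted in the design-rule bullets above.

p_50yr = 0.005
n_years = 50

# P(at least one exceedance in 50 yr) = 1 - (1 - p_annual)**50,
# solved for p_annual:
p_annual = 1 - (1 - p_50yr) ** (1 / n_years)
print(f"{p_annual:.2e}")  # 1.00e-04

# For small probabilities, simple division is an excellent approximation:
print(f"{p_50yr / n_years:.2e}")  # 1.00e-04
```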
External events have been treated extensively in the nuclear industry. The PRA Procedures Guide (NUREG, 1983) presents methods for the comprehensive analysis of
external events with the major emphasis on safe shutdown procedures following external events.
Incorporation of a detailed analysis of external events in CPQRA may not be warranted unless the consequences are severe and frequencies high. Therefore, screening
calculations for frequency can prove very useful to establish the level for contribution of
risk from external events compared to other failure scenarios. Based on the results of
the screening calculations, decisions can be made whether to pursue a more detailed
assessment of external events and seek risk reduction measures.
Applications. External events are routinely included in PRAs of nuclear plants (e.g.,
Diablo Canyon PRA, Pacific Gas and Electric, 1977). A good overview of external event
hazard assessment in the nuclear industry with special reference to the Sizewell B Pressurized Water Reactor design is given by Hall et al. (1985). External events have also
been considered in a more limited way in CPQRAs such as the Canvey (Health and
Safety Executive, 1978) and Rijnmond (1982) studies.
3.3.3.2. DESCRIPTION
Description of the Technique. The PRA Procedures Guide (NUREG, 1983) gives a
good description of external event analysis. It lists a range of candidate external events
for consideration. The hazard intensities of external events can be represented by
parameters such as the peak ground acceleration of earthquakes, tornado intensities
(measured per Fujita, 1971), and the kinetic energy of aircraft. The PRA Procedures
Guide (NUREG, 1983) sets out a five-step procedure:
TABLE 3.19. Partial List of External Events

Aircraft impact: Sites less than 3 miles from airports have higher frequencies
Avalanche: Can be excluded for most sites in the United States
Barometric pressure: Rapid changes during hurricanes and severe storms
Coastal erosion: Also review external flooding
Drought: May impact the availability of cooling water for the plant site
External flooding: Review rivers, lakes, streams, and storm water drainage impacts
Extreme winds or tornadoes: Site specific; extreme winds can create large numbers of missiles
Fire: Review locations of flammable-containing systems near the plant site: gasoline storage, LPG, fuel oil, etc.
Fog: May increase frequency of accidents
Forest fire: Review location of plant relative to large areas of trees
Frost: Frost heave may damage foundations of plant structures
Hail: Include with review of possible missile impacts on plant
High tide, high lake level, or high river stage: Include in external flooding review
High summer temperature: Review impact on vapor pressure of chemicals in storage systems
Hurricane: Site specific; include impacts under storm surge and extreme winds
Ice cover: Ice blockage of rivers, loss of cooling, and mechanical damage due to falling ice are possible
Industrial or military facility accident: Site specific; what other facilities are near the plant site?
Internal flooding: Review failure of any large water storage tank on the plant site; blockage of storm water sewers
Landslide: Can be excluded for most sites in the United States
Lightning: Should be considered during design. Computer control systems are vulnerable. May also damage plant power grid
Low lake or river level: May halt raw material and product shipping. Alternative truck or rail shipping may be used
Low winter temperature: Thermal stresses and embrittlement may occur in storage tanks
Meteorite impact: All sites have approximately the same frequency of occurrence
Missile impact: Shrapnel and large pieces of pressure vessels are possible from explosions. Rocks, bolts, and lumber may become missiles as a result of extreme winds
Nearby pipeline accident: Site specific; what pipelines are nearby? Unconfined vapor cloud explosions, spreading pool fires, and toxic chemical releases are possible
Intense precipitation: Include under external and internal flooding
Release of chemicals from onsite storage: Toxic chemicals may impair operators. Corrosive chemicals may damage equipment and instruments
River diversion: Include under low river stage
Sabotage: Disgruntled employee may deliberately damage or destroy vital plant systems
Sandstorm: May damage equipment and block air intakes
Seismic activity: Review earthquake classification of site. May require detailed analysis
Shipwreck: May halt raw material and product shipping. Alternative truck or rail shipping may be used
Snow: Review design load of roofs. May increase frequency of in-plant accidents. Include snowmelt under high river stage and flooding
Soil shrink-swell or consolidation: May damage structure foundations or roads
Storm surge
Include under flooding. Impact of surge may damage structures
Terrorist attack
High explosives and weapons may be used against selected targets.
Essential personnel may be held for ransom or killed
Transportation accidents
Site specific. Accident on major highway may cause evacuation of site
Tsunami
Site specific. Include under flooding and storm surge
Toxic gas
May impair operators
Turbine generated missiles
Review location of high speed rotating equipment
Volcanic activity
May cause extensive downstream flooding. Volcanic ash may damage
equipment and plug air intakes
War
Damage caused by high intensity combat will probably be greater than
that caused by worst credible case from plant site
Waves
Include under external flooding
1. hazard analysis
2. plant system/structural response
3. evaluation of vulnerability
4. plant system and sequence analysis (fault and event trees)
5. consequence analysis
Steps 1, 4, and 5 are treated in the main CPQRA discussion. This section will
emphasize Steps 2 and 3. Figure 3.26 illustrates the approach to external events analysis
in CPQRA. Kaplan et al. (1983) describe the methodology for seismic risk analysis of
nuclear plants in detail and suggest the application of the same methodology to other
external events (e.g., winds and floods).
To assess the impact of external events, the response of plant systems and structures to a specified external hazard intensity is first estimated. The response of interest
is usually a vessel mechanical rupture or failure leading to loss of hazardous material. It
is important to differentiate between failures which might only lead to nonelastic
deformation (failure, to a structural engineer) and those which lead to equipment or
pipe rupture (failure, to a risk analyst).
The results of the analysis are incorporated into the overall plant frequency modeling as direct inputs to fault and event trees (Sample Problem, Section 3.2.1). In a
nuclear power plant PRA the response to external events is usually expressed by a
probabilistic estimate, with uncertainties explicitly considered. In CPQRA, simple discrete point estimates are usually adequate. The application of this technique to each of
the main types of external events is now discussed.
FIGURE 3.26. Logic diagram for external events analysis (based on discrete intensities for failure). The diagram proceeds from identification of the vulnerable item, through structural evaluation of the item, to the hazard intensity required for failure of the vulnerable component (vulnerability); this is combined with the frequency of exceedance of that intensity to give the frequency of failure, which is then incorporated into the CPQRA analysis.
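The logic of Figure 3.26 lends itself to a simple discrete point-estimate calculation. A minimal sketch follows; the function name and hazard-curve values are hypothetical, not data for any real site:

```python
def discrete_external_event_frequency(failure_intensity, exceedance_curve):
    """Figure 3.26 logic in discrete form: read off the annual frequency
    of exceeding the hazard intensity that fails the vulnerable item.
    That exceedance frequency is taken as the failure frequency fed into
    the CPQRA fault and event trees."""
    # exceedance_curve is sorted by increasing intensity; return the
    # frequency at the first tabulated intensity at or above the failure
    # intensity (a conservative reading of a discretized curve).
    for intensity, freq in exceedance_curve:
        if intensity >= failure_intensity:
            return freq
    return 0.0

# Hypothetical hazard curve: (intensity, annual exceedance frequency)
curve = [(0.1, 1e-2), (0.2, 3e-4), (0.3, 5e-5)]
failure_freq = discrete_external_event_frequency(0.2, curve)  # 3e-4 per year
```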
Seismic. The calculation of the risk due to earthquakes requires two functions, one
characterizing earthquakes and the other the plant response. For earthquakes the
annual probability of exceeding a peak ground acceleration at the particular site is
required. Such data may be obtained in the United States from design response spectra
prepared by the U.S. Nuclear Regulatory Commission (NUREG, 1973). For the plant
response, the probability of failure of a particular plant item at a peak ground acceleration (often referred to as fragility curve or vulnerability curve) is required. The fragility
curves are not readily available for chemical industry items. It may be necessary to
undertake specific seismic vulnerability studies on vulnerable items, such as large refrigerated storage vessels for liquefied flammable gases or toxic materials. Since these studies can be expensive and time consuming, it is important to apply this procedure only to
the most serious hazards for which frequency screening calculations indicate a serious
threat.
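Fragility curves of this kind are commonly represented as lognormal functions of peak ground acceleration, characterized by a median capacity and a logarithmic standard deviation. The sketch below illustrates this representation only; the median and beta values are hypothetical, not data for any real vessel:

```python
import math

def fragility(pga, median_capacity, beta):
    """Lognormal fragility: probability of failure of an item at peak
    ground acceleration pga (in g), given its median capacity (g) and
    logarithmic standard deviation beta."""
    if pga <= 0.0:
        return 0.0
    z = math.log(pga / median_capacity) / beta
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical refrigerated storage vessel: median capacity 0.5 g, beta 0.4
p_fail_at_02g = fragility(0.2, 0.5, 0.4)
```

By construction, the probability of failure is exactly 0.5 at the median capacity and rises smoothly with increasing acceleration.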
Extreme Wind. Using weather data, hazard curves can be generated that define the
frequency of exceeding a certain wind speed. These can be combined with a vulnerability curve for the specific item. Alternatively, an engineering analysis can review the item
containing hazardous material and determine the wind speed at which failure would be
expected. In either case, the output is the frequency of occurrence of an external event
capable of causing the failure.
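A wind hazard curve of this kind can be sketched as log-linear interpolation of exceedance frequency between tabulated wind speeds. The curve values and function name below are illustrative assumptions, not site data:

```python
import math

# Illustrative hazard curve: (wind speed in mph, annual exceedance frequency)
HAZARD_CURVE = [(60, 1e-1), (90, 1e-2), (120, 1e-3), (150, 1e-4)]

def exceedance_frequency(speed_mph, curve=HAZARD_CURVE):
    """Annual frequency of exceeding a given wind speed, using
    log-linear interpolation between tabulated curve points."""
    speeds = [s for s, _ in curve]
    if speed_mph <= speeds[0]:
        return curve[0][1]
    if speed_mph >= speeds[-1]:
        return curve[-1][1]
    for (s0, f0), (s1, f1) in zip(curve, curve[1:]):
        if s0 <= speed_mph <= s1:
            t = (speed_mph - s0) / (s1 - s0)
            return 10 ** ((1 - t) * math.log10(f0) + t * math.log10(f1))

# Frequency of winds capable of failing an item whose engineering
# analysis gives a failure wind speed of 105 mph:
freq = exceedance_frequency(105)
```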
Aircraft Impact. Aircraft impact may represent a significant risk in certain areas
(e.g., in the vicinity of airports). The aircraft crash hazard is site specific and the failure
is strongly dependent on the impact kinetic energy of the aircraft. Two types of data are
needed to analyze for aircraft impact: the aircraft crash rate in the site vicinity (per unit
area per year) and the effective target area of the vulnerable item. Crash rates for different categories of aircraft can be obtained from state and national authorities (e.g.,
FAA). The proximity of the site to airfields must be taken into account because crashes
are much more frequent within a radius of approximately 3 miles.
In assessing the effective target area, a number of site-specific factors need to be
taken into account. These factors include the height of buildings and the extent to
which they shield one another. Skidding and near misses should also be evaluated
because aircraft crashes have resulted in skids more than 500 yards long, and near-miss
impact may produce consequences comparable to a direct hit. Other features to be considered include the damage potential of far-flung debris, damage to piping, and the
effects of flammable materials (fuels) aboard the aircraft. The effective target area should not be
over-optimistically limited to the critical vessels.
Three dominant damages should be evaluated in the assessment of aircraft crashes:
• direct impact leading to penetration or perforation
• direct impact or near-misses producing intensive vibrations leading to failure
• direct impact or near-misses leading to fuel fires and deflagration (about
three-quarters of aircraft crashes lead to serious fuel fires)
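The frequency estimate itself is a simple product of the local crash rate and the effective target area. A minimal sketch, using the same kind of values as the sample problem in Section 3.3.3.3 (the function name is assumed for illustration):

```python
def aircraft_impact_frequency(crash_rate_per_km2_yr, target_area_km2):
    """Annual frequency of an aircraft striking the vulnerable item:
    local crash rate (per km^2 per year) times effective target area (km^2)."""
    return crash_rate_per_km2_yr * target_area_km2

# A crash rate of 2e-4 per km^2 per year over an effective target
# area of 0.01 km^2:
freq = aircraft_impact_frequency(2e-4, 0.01)  # 2e-6 per year
```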
External Industrial Activities. Fires, explosions, or releases of flammable materials
from nearby plants may affect the plant under study. Other features to consider include
ship collision if the plant is situated near a waterway, and explosion of flammable materials due to the proximity of transport routes.
Theoretical Foundation. External event techniques are empirically based, because of
their heavy dependence on historical data. The structural engineering considerations
that determine the effects on plant items are based on the same mathematical/physical
premises as any structural design. The absence of experience for rare events such as
earthquakes means that these assessed effects are only approximations. In particular the
modeling of dynamic behavior with large displacements, taking into consideration
plastic deformation capability, is still at a relatively early stage of development.
Input Requirements and Availability. The external event hazard curve or frequency
information is available from a number of sources. The PRA Procedures Guide (NUREG,
1983) provides some guidance. For seismic events a good review is given in the Diablo
Canyon PRA (Pacific Gas and Electric, 1977) and in the Zion PRA (Commonwealth
Edison, 1981). If the site is in an area of high seismic activity, and the level of treatment
warrants it, expert assistance may be required. The ASCE (1980) report summarizing
aircraft impact hazards provides many references for further information. Fujita (1971)
provides data on tornadoes and Simiu et al. (1979) provide extreme wind data for many
sites in the USA. Additional data sources are provided in Section 5.4.
The vulnerability of plant items is more difficult to estimate. Input from structural
and mechanical engineers with experience in dynamic loading calculations is essential.
Output. The output can range from a curve of frequency of event versus plant behavior
to one-to-three discrete failure cases consisting of initiating event frequency and plant
damage level.
Simplified Approaches. The PRA Procedures Guide approach (NUREG, 1983) highlights the probabilistic assessment of external events. A simpler approach is the use of
discrete external event intensities instead of probabilistic ones for defined failures.
3.3.3.3. SAMPLE PROBLEM
The sample problem is taken partly from the Warren Centre Report (1986) and partly
from Hall et al. (1985). It demonstrates a discrete rather than probabilistic assessment.
The problem considers the effects of external events on a site containing several
LPG spheres, whose dimensions and structural supports are shown in Figure 3.23. It is
located in an area subject to seismic activity and away from any airfield. A target area of
100 m2 is assumed (0.01 km2). No other external impact is judged significant.
Problem Statement
Earthquake External Impact
• Mode of failure—tensile breaking of braces followed by column failure due to side
sway and compression.
• Condition for failure—the structure will "fail" when lateral acceleration reaches
0.2 g.
• Annual probability of exceedance—3 × 10⁻⁴ per year for 0.2 g.
• Further factors—engineering judgment suggests that this magnitude of structural
failure has a probability of 0.5 that a significant leak will occur, and 0.1 that the
vessel will rupture.
Extreme Wind/Tornado.
• Mode of failure—as for earthquake
• Condition for failure—a wind speed of 500 mph (mechanical engineering analysis)
• Annual probability of exceedance—the probability of such a wind speed may be
taken as much less than the seismic frequency, thus it may be neglected.
FIGURE 3.23. LPG tank arrangement: 32-ft-diameter sphere with a 1-inch-thick shell, supported on columns. Shell material, 63,000 psi (ultimate tensile strength); column material, 61,000 psi (ultimate tensile strength); total tank mass, 400 tons; 10% vapor space.
Aircraft Impact
• Mode of failure—an aircraft impacting the sphere either breaks the shell at the
point of impact or knocks it over, with results similar to seismic failure.
• Condition for failure—a small aircraft of 6000 lb at 200 knots would be sufficient
to cause the damage (mechanical engineering analysis).
• Annual probability of exceedance—a crash rate of 2 × 10⁻⁴ per km² per year was developed
from local data, and the target area is 0.01 km², giving a frequency estimate of
2 × 10⁻⁶ per year.
• Further factors—engineering judgment is that this magnitude of structural failure
has a probability of 0.5 that only a significant leak will occur, and 0.5 that the
vessel will rupture.
Analysis. These data are used as follows: If fault trees are being applied, a new branch
corresponding to external events could be added leading directly to the top event
through an OR gate (see Sample Problem, Section 3.2.1).
Where significant leakage is the top event:
  seismic event = 0.5 × 3 × 10⁻⁴ per year = 1.5 × 10⁻⁴ per year
  extreme wind event = negligible
  aircraft impact = 0.5 × 2 × 10⁻⁶ per year = 1.0 × 10⁻⁶ per year

Where total sphere failure is the top event:
  seismic event = 0.1 × 3 × 10⁻⁴ per year = 3.0 × 10⁻⁵ per year
  extreme wind event = negligible
  aircraft impact = 0.5 × 2 × 10⁻⁶ per year = 1.0 × 10⁻⁶ per year

In this example, seismic activity is more significant than aircraft impact.
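These contributions can be checked by summing them through the OR gate under the rare-event approximation, in which the contributing frequencies simply add (the variable names below are for illustration only):

```python
# Conditional probabilities (engineering judgment) times the annual
# exceedance frequencies from the problem statement.
seismic_leak = 0.5 * 3e-4      # significant leak given structural failure
wind_leak = 0.0                # negligible relative to seismic
aircraft_leak = 0.5 * 2e-6

# OR gate, rare-event approximation: frequencies add
leak_frequency = seismic_leak + wind_leak + aircraft_leak  # ~1.51e-4 per year

# Total sphere failure as the top event
rupture_frequency = 0.1 * 3e-4 + 0.5 * 2e-6  # ~3.1e-5 per year
```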
3.3.3.4. DISCUSSION
Strengths and Weaknesses. The strengths of the technique are that likelihoods of
occurrence of major hazards will be indicated in the CPQRA. The main weakness lies
in the difficulty of rigorously estimating plant vulnerability because the sophisticated
techniques employed are beyond the scope of normal validation.
Identification and Treatment of Possible Errors. There are many uncertainties in
the analysis of external events due to lack of data and analytical models. The main area
of uncertainty and error relates to the component fragility/vulnerability analysis. Structural design codes are generally conservative and incorporate several safety factors,
which are imbedded in the design rules. Simple extrapolation of these design methods
to the more severe conditions required by external event analysis may be invalid. The
uncertainties with external event analysis are potentially higher than most other parts of
a CPQRA.
Double counting of failure frequencies is possible if historical failure data are combined with external event analysis. Unless the data are carefully checked, the historical
record may already contain several instances of external events. Budnitz (1984) presents
a discussion of the uncertainties in the numerical estimates of risk that may be
expected due to earthquakes, fires, high winds, and floods.
Utility/Resources. Experienced risk analysts are probably required for this task, at
least in a supervisory or consultancy role, and significant structural engineering input is
essential. Other specialists will be required, especially if the sophisticated treatment as
suggested in the PRA Procedures Guide (NUREG, 1983) is used. The time involved
could vary from a few days to several months for a full seismic analysis.
Available Computer Codes. No known codes relevant to the CPI.
3.4.
References
Abkowitz, M., and Galarraga, J. (1985). "Tanker Accident Rates and Expected Consequences in
US Ports and High Seas Regions," Conference on Recent Advances in Hazardous Materials
Transportation Research: An International Exchange. Sponsored by Research and Special Programs Administration DOT, Nov. 10-13, Washington, D.C.
AIChE/CCPS (1985). Guidelines for Hazard Evaluation Procedures. New York: Center for Chemical Process Safety, American Institute of Chemical Engineers.
Andow, P. K. (1980). "Difficulties in Fault Tree Synthesis for Process Plant." IEEE
Transactions on Reliability R-29, 1:2.
ANSI/IEEE (1975). IEEE Guide for General Principles of Reliability Analysis of Nuclear Power
Generating Station Protection Systems. ANSI Standard N41.4-1976 and IEEE Std 352-1975.
New York: Institute of Electrical and Electronic Engineers.
Apostolakis, G. E. (1976). "The Effect of a Certain Class of Potential Common Mode Failures
on the Reliability of Redundant System." Nuclear Engineering and Design, 36.
Apostolakis, G. E., Sale, S. L., and Wu, J. S. (1978). "CAT: A Computer Code for the Automated Construction of Fault Trees." EPRINP-705, Palo Alto, CA: Electric Power Research
Institute.
Apostolakis, G., and Moieni, P. (1983). "A Model for Common Cause Failures." ANS Transactions, Vol. 45, Winter Meeting, San Francisco, CA.
ASCE (1980). Report of the ASCE Committee on Impactive and Impulsive Loads. Proceedings
of the Second ASCE Conference, Civil Engineering and Nuclear Power, Vol. V, Knoxville, TN:
American Society of Civil Engineers.
Arendt, J. S. (1986a) "Determining Heater Retrofit through Risk Assessment." Plant/Operations Progress 5 (4): 228-231.
Arendt, J. S. Casada, M. L., and Rooney, J. J. (1986b). "Reliability and Hazard Analysis of a
Cumene Hydroperoxide Plant." Plant/Operations Progress 5(2): 97-102.
Atwood, C. L. (1980). Estimators for the Binomial Failure Rate Common Cause Model.
NUREG/CR-1401. Prepared for the US NRC by EG&G Idaho, Inc., Idaho Falls, ID.
Atwood, C. L., and Suitt, W. J. (1983b). User's Guide to BFR-Computer Code on Binomial Failure
Rate Common Cause Model. NUREG/CR-2729, EGG-EA-5502.
Atwood, C. L. (1983a). Common Cause Fault Rates for Pumps: Estimates Based on Licensee Event
Reports at U.S. Commercial Nuclear Power Plants, January 1, 1972, through September 30, 1980.
NUREG/CR-2098, EG&G Idaho, Inc., Idaho Falls, ID.
Ball, P., et al. (1985). Guide to Reducing Human Error in Process Operation (Short version),
Safety and Reliability Directorate, R347, Culcheth, UK.
Bari, R. A., et al. (1985), Probabilistic Safety Analysis Procedures Guide. US Nuclear Regulatory
Commission, NUREG/CR-2815.
Battle, R. E., and Campbell, D. J. (1983), Reliability of Emergency AC Power Systems at Nuclear
Power Plants. ORNL/TM-8545 (NUREG/CR-2989), Oak Ridge, TN: Oak Ridge
National Laboratory.
Bellamy, L. J., Kirwan, B., and Cox, R. A. (1986). "Incorporating Human Reliability into
Probabilistic Risk Assessment." 5th International Symposium on Loss Prevention and Safety
Promotion in the Process Industries, Cannes, France.
Berol Corp., RapiDesign R-555, Fault Tree Template, Engle Road, Danbury, CT 06810.
Billinton, R., and Allan, R. N. (1986). Reliability Evaluation of Engineering Systems. Marshfield,
MA: Pitman Publishing, Inc.
Bodsberg, L., Hokstad, P., Berstad, H., Myrland, B., and Onshus, T. Reliability Quantification of
Control and Safety Systems: The PDS-II Method. SINTEF Safety and Reliability, STF75
A93064, Trondheim, Norway.
Blanchard, R. E., Mitchell, M. B., and Smith, R. L. (1966). Likelihood of Accomplishment Scale for
a Sample of Man-Machine Activities. Santa Monica, CA: Dunlap.
Brown, R. V., Kahr, A. S., and Peterson, C. (1974). Decision Analysis for the Manager. New
York: Holt, Rinehart & Winston.
Browning, R. L. (1969). "Analyzing Industrial Risks." Chemical Engineering. October 20,
109-114.
Budnitz, R. J. (1984). "External Initiators in Probabilistic Reactor Accident Analysis—Earthquakes, Fires, Floods, Winds." Risk Analysis, 4(4): 323-335.
Cate, C. L., and Fussell, J. B. (1977). "BACKFIRE—A Computer Code for Common Cause
Failure Analysis." Dept. of Nuclear Engineering, University of Tennessee, Knoxville, TN
(also IBF Associates, Knoxville, TN).
CCPS (1989). Guidelines for Process Equipment Reliability Data, with Data Tables. New York:
Center for Chemical Process Safety of the American Institute of Chemical Engineers.
Commonwealth Edison Company (1981). Zion Probabilistic Safety Study, Chicago, IL.
Comer, M. K., Seaver, D. A., Stillwell, W. G., and Gaddy, C. D. (1984). Generating Human
Reliability Estimates Using Expert Judgment: Paired Comparisons and Direct Numerical Estimation: Vol. 1, Main Report; Vol. 2, Appendices. Nuclear Regulatory Commission.
NUREG/CR-3688, Washington, DC.
Department of Transportation (1970-1980). "Gas Transmission and Gathering Systems and
Leak or Test Failure Reports—Transmission and Gathering Systems." Annual Reports,
Research and Special Programs Administration, Office of Pipeline Safety, U.S. Department
of Transportation, Washington, DC.
Department of Transportation (1980), "LNG Facilities, Federal Safety Standards." U.S.
Department of Transportation. Federal Register, 45(29): 9184-9237.
DeStreese, J. G. (1983), "Human Factors Affecting the Reliability and Safety of LNG Facilities:
Final report," VoIs. 1 and 2. Gas Research Institute Report GRI 81/0106.2. Gas Research
Institute, 8600 West Bryn Mawr Ave., Chicago, IL.
Doelp, L. C., Lee, G. K., Linney, R. E., and Ormsby, R. W. (1984). "Quantitative Fault Tree
Analysis: Gate-by-Gate Method." Plant/Operations Progress, 4(3): 227-238.
Edwards, G. T., and Watson, I. A. (1979). A Study of Common-Mode Failure. Safety and Reliability Directorate, Report R-146, UK Atomic Energy Authority.
EFCE (1985). "Risk Analysis in the Process Industries." Report of the International Study
Group on Risk Analysis, European Federation of Chemical Engineering, EFCE Publications
Series No. 45, available from the IChemE, Rugby, England.
Embrey, D., Humphreys, P. C., Rosa, E. A., Kirwan, B., and Rea, K. (1984). "SLIM-MAUD:
An Approach To Assessing Human Error Probabilities Using Structured Expert Judg-
ment." Brookhaven National Laboratory. NUREG/CR-3518 (BNL-NUREG-51716),
Washington, DC: U.S. Nuclear Regulatory Commission.
Epler, E. P. (1969). "Common Mode Failure Considerations in the Design of Systems for Protection and Control." Nuclear Safety 10(1): 38-45.
Epler, E. P. (1977). "Diversity and Periodic Testing in Defense Against Common Mode Failures." Nuclear System Reliability Engineering and Risk Assessment. Philadelphia, PA: Society
for Industrial and Applied Mathematics.
Fleming, K. N. (1975). "A Reliability Model for Common Mode Failure in Redundant Safety
Systems." Proceedings of the Sixth Annual Pittsburgh Conference on Modeling and Simulation,
April 23-25; General Atomic Report GA-A13284, San Diego, CA.
Fleming, K. N., and Raabe, P. H. (1978). A Comparison of Three Methods for the Quantitative
Analysis of Common Cause Failures. GA-A14568, San Diego, CA: General Atomic Corp.
Fleming, K. N., Mosleh, A., and Deremer, R. K. (1986). "A Systematic Procedure for the Incorporation of Common Cause Events into Risk and Reliability Models," Nuclear Engineering
and Design 93: 245.
Fleming, K. N. (1974). A Reliability Model for Common Mode Failures in Redundant Safety Systems.
GA-A13284, San Diego, CA: General Atomic Corp.
Fleming, K. N. (1975). "A Reliability Model for Common Mode Failures in Redundant Safety
Systems." Proceedings of the Sixth Annual Pittsburgh Conference on Modeling and Simulation.
Fleming, K. N., and Raabe, P. H. (1978). A Comparison of Three Methods for the Quantitative
Analysis of Common Cause Failures. GA-A14568, General Atomic Corp., San Diego, CA.
Fleming, K. N. et al. (1984). Event Classification and Systems Modeling of Common Cause Failures.
American Nuclear Society 1984 Annual Meeting, New Orleans, LA.
Fleming, K., Mosleh, A., and Acey, D. (1985). Classification and Analysis of Reactor Operating
Experience Involving Dependent Events. EPRI NP-3967, Palo Alto, CA: Electric Power
Research Institute.
Flournoy, P. A., and Hazlebeck, D. E. (1975). "DuPont Adopts Fault Tree Analysis to Assess
Plant Reliability." DuPont Innovation, 6(3): 1-5.
Freeman, R. A. (1983). "Problems with Risk Analysis in the Chemical Industry." Plant/Operations Progress, 2(3): 185-190.
Fujita, T. T. (1971), "Proposed Characterization of Tornadoes and Hurricanes by Area and
Intensity." SMRP Research Paper No. 91, Department of Geophysical Sciences, University
of Chicago, Chicago, IL.
Fussell, J. B. (1973). Fault Tree Analysis—Concepts and Techniques. Idaho Falls, ID: Aerojet
Nuclear Company.
Fussell, J. B. Powers, G. J., and Bennetts, R. G. (1974a), "Fault Trees: A State-of-the-Art Discussion." IEEE Transactions on Reliability, R23(l): 51-55.
Fussell, J. B., Henry, E. B., and Marshall, N. H. (1974b). "MOCUS: A Computer Program to
Obtain Minimal Cut Sets from Fault Trees." Report ANCR-1156, Idaho Falls, ID: Aerojet
Nuclear Company.
Gibson, S. B. (1977). "Quantitative Measurement of Process Safety." IChemE Symposium Series
No. 49, 1-11.
Haasl, D. F. (1965). "Advanced Concepts in Fault Tree Analysis." System Safety Symposium, June
8-9, 1965, Seattle: Boeing Company.
Hagen, E. W. (1980). "Common-Mode/Common-Cause Failure: A Review." Nuclear Engineering and Design, Amsterdam: North-Holland Publishing Company.
Hall, R. E., Fragola, J., and Wreathall, J. (1982). "Post Event Human Decision Errors: Operator Action Tree/Time Reliability Correlation." Brookhaven National Laboratory.
NUREG/CR-3010, Washington, DC: U.S. Nuclear Regulatory Commission.
Hall, S. F., Phillips, D. W., and Peckover, R. S. (1985). "Overview of External Hazard Assessment." Nuclear Energy 24(4): 211-227.
Hauptmanns, U. (1980). "Fault Tree Analysis of a Proposed Ethylene Vaporization Unit." Ind.
Eng. Chem. Fundam. 19(3): 300-309.
Health and Safety Executive (1978). Canvey Island/Thurrock Area. 195 pp. London: HMSO.
Health and Safety Executive (1981). Canvey—A Second Report. 130 pp. London, UK: HMSO.
Henley, E. J., and Kumamoto, H. (1981), Reliability Engineering and Risk Assessment,
Englewood Cliffs, NJ: Prentice-Hall.
Hoyle, W. C. (1982). "Bulk Terminals: Silicon Tetrachloride Incident." In Hazardous Materials Spills Handbook (Bennett, G. F., Feates, F. S., and Wilder, I., Eds.), New York:
McGraw-Hill.
Irwin, I. A., Levitz, J. J. , and Freed, A. M. (1964). "Human Reliability in the Performance of
Maintenance." Report No. LRP 317/TDR-63-218, Sacramento, CA: Aerojet Corp.
Kaplan, S., Kazarians, M., and Schroy, J. M. (1986). "Risk Assessment in the Chemical Industry." AIChE Today Series Courses. New York: American Institute of Chemical Engineers.
Kaplan, S., Perla, H. F., and Bley, D. C. (1983). "A Methodology for Seismic Risk Analysis of
Nuclear Power Plants." Risk Analysis, 3(3), 169-180.
Kletz, T. A. (1985). An Engineer's View of Human Error. Rugby, UK: IChemE.
Koda, T., and Henley, E. J. (1988). "On Digraphs, Fault Trees, and Cut Sets." Reliability Engineering and System Safety, pp. 35-61.
Kvarfordt, K., Schierman, B., Clark, R., and Mosleh, A. (1995). Common-Cause Failure Data
Collection and Analysis System, Volume 4-Common-Cause Failures Database and Analysis Software Reference Manual. INEL-94/0064, Idaho Falls, ID: Idaho National Engineering Laboratory.
Lapp, S. A., and Powers, G. J. (1977). "Computer Aided Synthesis of Fault Trees." IEEE Transactions on Reliability, R-26(1), 2-13.
Laurence, G. C. (1960). "Reactor Safety in Canada." Nucleonics 18(10).
Lawley, H. G. (1980). "Safety Technology in the Chemical Industry: A Problem in Hazard
Analysis with Solution." Reliability Engineering, 1, 89-113.
LER (1987). Licensee Event Report System. 10 CFR 50, Federal Register, 50(73).
Lydell, B. (1979). Dependent Failure Analysis in System Reliability: A Literature Survey, RE05-79.
Chalmers University of Technology, Sweden.
Martin-Solis, G. A., Andow, P. K., and Lees, F. P. (1980). "Fault Tree Synthesis for Design and
Real Time Applications." Trans IChemE, 60(1): 14-25.
McCormick, N. J. (1981). Reliability and Risk Analysis. New York: Academic Press.
Meachum, T. R., and Atwood, C. L. (1983). Common Cause Fault Rates for Instrumentation and
Control Assemblies: Estimates Based on Licensee Event Reports at U. S. Commercial Nuclear Power
Plants, 1976-1981. Report NUREG/CR-3289 (EGG-2258).
Miller, D. P., and Swain, A. D. (1987). Handbook of Human Factors (G. Salvendy, Ed.). New York:
Wiley, p. 1874.
Montague, D., and Paula, H. (1984). A Survey of Beta-Factor and C-Factor Applications.
JBFA-LR-104-84, Knoxville, TN: JBF Engineering.
Mosleh, A. et al. (1988). Procedures for Treating Common Cause Failures in Safety and Reliability
Studies. Volume 1: Procedural Framework and Partial Guidelines. NUREG/CR-4780. (EPRI
NP-5613). Washington, DC: U.S. Nuclear Regulatory Commission.
Mosleh, A., et al. (1989). Procedures for Treating Common Cause Failures in Safety and Reliability
Studies. Volume 2: Analytical Background and Techniques. NUREG/CR-4780 (EPRI
NP-5613), U.S. Nuclear Regulatory Commission, Washington, DC.
Mosleh, A. (1991). "Guest Editorial: Dependent Failure Analysis." Reliability Engineering and
System Safety 34(3).
Munger, J. S., Smith, R. W., and Payne, D. (1962). "An Index of Electronic Equipment
Operability Data Store" (AIR-C43-1/62-RP(I)). American Institutes for Research, Pittsburgh, PA.
NUS Corporation (1983). Ringhals 2 Probabilistic Safety Study, Volume 1: Main Report. NUS
Corporation, Gaithersburg, MD. Prepared for Swedish State Power Board.
NUREG (1973). "Design Response Spectra for Seismic Design of Nuclear Power Plants." Regulatory Guide 1.60, U.S. Nuclear Regulatory Commission, Washington, DC.
NUREG (1983), PRA Procedures Guide: A Guide to the Performance of Probabilistic Risk Assessment
for Nuclear Power Plants, 2 vols. NUREG/CR-2300, U.S. Nuclear Regulatory Commission,
Washington, DC. (Available from NTIS).
NUREG (1985). Probabilistic Risk Assessment Course Documentation, "PRA Fundamentals."
NUREG/CR-4350-vl. U.S. Nuclear Regulatory Commission, Washington, DC.
Ozog, H. (1985). "Hazard Identification, Analysis and Control." Chemical Engineering, Feb. 18,
161-170.
Pacific Gas and Electric (1977). "Seismic Evaluation for Postulated 7.5M Hosgri Earthquake,
Amendment 52," Volume V, Units 1 & 2, Diablo Canyon Site. U.S. Nuclear Regulatory
Commission Docket Nos. 50-275 and 50-323.
Paula, H. M., and Campbell, D. J. (1985). Analysis of Dependent Failure Events and Failure Events
Caused by Harsh Environmental Conditions. JBFA-LR-111-85, JBF Associates, Inc., Knoxville, TN.
Paula, H. (1988). "A Probabilistic Dependent Failure Analysis of a D-C Electric Power System in
a Nuclear Power Plant." Nuclear Safety 29(2).
Paula, H. M., and Parry, G. W. (1990). A Cause-Defense Approach to the Understanding and Analysis of Common Cause Failures. NUREG/CR-5460, Sandia National Laboratories, Albuquerque, NM.
Paula, H. M., Campbell, D. J., and Rasmuson, D. M. (1991). "Qualitative Cause-Defense
Matrices: Engineering Tools to Support the Analysis and Prevention of Common Cause Failures." Reliability Engineering and System Safety 34(3).
Paula, H. M., Roberts, M. W., and Battle, R. E. (1993). "Operational Failure Experience of
Fault-Tolerant Digital Control Systems." Reliability Engineering and System Safety 39.
Paula, H. (1995). "Technical Note: On the Definition of Common Cause Failures." Nuclear
Safety 36(1).
Paula, H., Daggett, E., and Guthrie, V. (1997a). A Methodology and Software for Explicit
Modeling of Organizational Performance in Probabilistic Risk Assessment (Report for Task
1-Methodology Development). JBFA-26I.Q1&-94, Final Draft, JBF Associates, Inc., Knoxville, TN.
Paula, H., and Daggett, E. (1997b). "Accounting for Common Cause Failures When Assessing
the Effectiveness of Safeguards." Accepted for presentation at the CCPS/AIChE International Conference and Workshop on Risk Analysis in Process Safety, Atlanta, GA.
Paula, H. (1997c). "Attachment A - CCF Data for Use in the HIPPS Fault Tree and a Procedure
for Selecting CCF Data." Task Order No. 9 Report - CCF Review, Letter Report
JBFA-LR-355.09-93 (prepared for Exxon Production Research Company), JBF Associates,
Inc., Knoxville, TN.
Phillips, L. D., Humphreys, P. C., and Embrey, D. (1985). "A Socio-technical Approach to
Assessing Human Reliability (STAHR)." Appendix D in Selby, D. "Pressurized Thermal
Shock Evaluation of the Calvert-Cliffs Unit-1 Nuclear Power Plant." Oak Ridge National
Laboratory, Oak Ridge, TN.
Poucet, A. et al. (1987). CCF-RBE Common Cause Failure Reliability Benchmark Exercise. Report
EUR 11054 EN, Commission of the European Communities, Luxembourg City.
Poucet, A., Amendola, A., and Cacciabue, P. (1986). Summary of the Common Cause Failure Reliability Benchmark Exercise. Joint Research Center Report, PER 1133/86, Ispra, Italy.
Prugh, R. W. (1980). "Application of Fault Tree Analysis." Chemical Engineering Progress, July,
59-67.
PSI (1997). Hazard Evaluation: Quantitative Risk Assessment Failure Data. Course 206 Notebook, Process Safety Institute, Knoxville, TN.
Putney, B. (1981). "WAMCOM-Common Cause Methodologies Using Large Fault Trees."
EPRI NP-1851.
Rasmussen, N. C. (1975). Reactor Safety Study: An Assessment of Accident Risk in U.S. Nuclear
Power Plants. WASH 1400, NUREG 75/014, U.S. Nuclear Regulatory Commission,
Washington, DC. (Available from NTIS.)
Rasmuson, D. M., Marshall, N. H., Wilson, J. R., and Burdick, C. R. (1976). COMCAN II-A:
A Computer Program for Automated Common Cause Failure Analysis. TREE-1361, EG&G
Idaho, Inc., Idaho Falls, ID.
Rasmuson, D. M. et al. (1982). Use of COMCAN III in System Design and Reliability Analysis.
EGG-2187, EG&G Idaho, Inc., Idaho Falls, ID, October.
Rijnmond Public Authority (1982). A Risk Analysis of 6 Potentially Hazardous Industrial Objects
in the Rijnmond Area-A Pilot Study. D. Reidel, Dordrecht, the Netherlands and Boston, MA
(ISBN 90-277-1393-6).
Roberts, N. H., Vesely, W. E., Haasl, D. F., and Goldberg, F. F. (1981). Fault Tree Handbook.
NUREG-0492, U.S. Nuclear Regulatory Commission, Washington, DC.
Rooney, J. J., and Fussell, J. B. (1978). BACKFIRE II-A Computer Program for Common Cause
Failure Analysis of Complex Systems. University of Tennessee.
Salem, S. L., Apostolakis, G. E., and Okrent, D. (1976). "A Computer Oriented Approach to
Fault Tree Construction," Electric Power Research Institute, EPRI-NP-288, Palo Alto, CA.
Shafaghi, A., Andow, P. K., and Lees, F. P. (1984). "Fault Tree Synthesis Based On Control
Loop Structure." Chemical Engineering Research and Design 62, 101-110.
Siddall, E. (1957). "Reliable Reactor Protection." Nucleonics 15(6).
Siegel, A. I., Bartter, W. D., Wolf, J. J., Knee, H. E., and Haas, P. M. (1984). "Maintenance Personnel Performance Simulation (MAPPS) Model: Summary Description." Applied Psychological Services and Oak Ridge National Laboratory, NUREG/CR-3626, U.S. Nuclear
Regulatory Commission, Washington, DC.
Simiu, E., Changery, M. J., and Filliben, J. J. (1979). "Extreme Wind Speeds at 129 Stations in
the Contiguous United States." NBS Building Science Series 118, U.S. Dept. of Commerce,
National Bureau of Standards.
Stamatelatos, M. G. (1982). "Improved Method for Evaluating Common-Cause Failure Probabilities." Transactions of the American Nuclear Society 43.
Stephenson, J. (1991). System Safety 2000: A Practical Guide for Planning, Managing, and Conducting System Safety Programs. New York, NY: Van Nostrand Reinhold.
Steverson, J. A., and Atwood, C. L. (1983). Common Cause Fault Rates for Valves: Estimates Based
on Licensee Event Reports at U.S. Commercial Nuclear Power Plants, 1976-1980. Report
NUREG/CR-2770 (EGG-EA-5485).
Swain, A. D., and Guttmann, H. E. (1983). Handbook of Human Reliability Analysis with Emphasis
on Nuclear Power Plant Applications. Sandia National Laboratories. NUREG/CR-1278,
Washington, DC: U.S. Nuclear Regulatory Commission.
Swain, A. D. (1987). "Accident Sequence Evaluation Procedure Human Reliability Analysis
Procedure." Sandia National Laboratories, NUEG/CR-4772 (SAND86-1996), Washington, DC: U.S. Nuclear Regulatory Commission.
Swain, A. D. (1988). "Comparative Evaluation of Methods for Human Reliability Analysis."
Prepared for Gesellschaft für Reaktorsicherheit (GRS), Köln, Federal Republic of Germany,
GRS Project RS 688.
Taylor, J. R. (1982). "An Algorithm for Fault Tree Construction." IEEE Transactions on Reliability R-31(2): 137-146.
Topmiller, D. A., Eckel, J. S., and Kozinsky, E. J. (1982). "Human Reliability Data Bank for
Nuclear Power Plant Operations, Vol. 1: A Review of Existing Human Reliability Data
Banks." NUREG/CR-2744, Washington, DC: General Physics Corporation and Sandia
National Laboratories.
Vaurio, J. K. (1981). "Structures for Common Cause Failure Analysis." Proceedings of the International ANS/ENS Meeting on Probabilistic Risk Assessment, Port Chester, NY.
Wagner, D. P., Cate, C. L., and Fussell, J. B. (1977). "Common Cause Failure Analysis Methodology for Complex Systems." Nuclear Systems Reliability Engineering and Risk Assessment,
Philadelphia, PA: Society for Industrial and Applied Mathematics.
Warren Centre (1986). Major Industrial Hazards. The Warren Centre for Advanced Engineering, University of Sydney, Australia (ISBN 0949269-37-9).
Watson, I. A., and Edwards, G. T. (1979). "Common Mode Failures in Redundancy Systems."
Nuclear Technology 46.
Wickens, C. D., and Goettle, B. (1984). "Multiple Resources and Display Formatting: The
Implications of Task Integration." Proceedings of the Human Factors Society 28th Annual
Meeting, Santa Monica, CA: Human Factors Society, pp. 722-726.
Williams, J. (1986). "HEART-A Proposed Method for Assessing and Reducing Human Error."
Advances in Reliability Technology Symposium, Bradford, UK, pp. B3/R/1-13, Warrington,
UK: National Centre of Systems Reliability.
World Bank (1985). Manual of Industrial Hazard Assessment Techniques. Office of Environmental and Scientific Affairs, World Bank, Washington, D.C.
Worrell, R. P. (1985). SETS Reference Manual. NUREG/CR-4213, SAND83-2675, Albuquerque, NM: Sandia National Laboratories.
4. Measurement, Calculation, and Presentation of Risk Estimates
Chapter 1 defines risk as a function of incident consequence and likelihood, Chapter 2
discusses how to estimate incident consequences, and Chapter 3 discusses how to estimate incident likelihood. This chapter combines and draws on the earlier chapters to
present ways to measure, calculate, and present risk estimates.
There is no single way to measure risk or to present an estimate of it. The appropriate choice must be determined from the information and resources available and from the intended audience.
Before considering the mechanics of estimating various risk measures (Section 4.4), we
consider commonly used risk measures (Section 4.1), formats used for presenting risk
estimates (Section 4.2), and guidelines for selection of the risk measure(s) and presentation format(s) to meet the objectives of a study (Section 4.3). A simple sample problem is presented in Section 4.5 to demonstrate risk calculation techniques.
One should always remember that we are dealing with estimates. In order to use
these estimates properly for guiding technical decisions, for advising management, and
for communicating with the public and government, it is essential that the potential
extent of uncertainty be known. This is covered in Section 4.6.
4.1. Risk Measures
Table 1.1 defines risk as a measure of economic loss, human injury or environmental
damage in terms of both the likelihood and the magnitude of the loss, injury or
damage. This chapter describes risk measures which estimate risk of human fatality
caused by the immediate impact of an accident—fire, explosion, or toxic material
release. Other kinds of risk which might result from chemical process incidents are not
discussed. Types of risk not considered in this book include:
• the long-term health effects arising from a single exposure to a toxic gas, which
does not cause immediate serious injury or fatality
• the health effects of chronic exposure to chemical vapors in the atmosphere over
a long time period
• the health effects of acute or chronic exposure to chemicals by various environmental routes such as drinking water contamination, environmental contamination, food supply contamination, and other mechanisms.
In CPQRA, a number of numerically different measures of risk can be derived
from the same set of incident frequency and consequence data. These different risk
measures characterize risk from different viewpoints, for example:
• risk to an individual vs. risk to a group
• risk to varying populations
• simple risk measures containing less information vs. complex measures containing a great deal of information about risk distribution.
This section discusses three commonly used ways of combining incident frequency
and consequence data to produce risk estimates:
• Risk indices (Section 4.1.1) are single numbers or tabulations of numbers
which are correlated to the magnitude of risk. Some risk indices are relative
values with no specific units, which only have meaning within the context of the
risk index calculation methodology. Other risk indices are calculated from various individual or societal risk data sets and represent a condensation of the information contained in the corresponding data set. Risk indices are easy to explain
and present, but contain less information than other, more complex measures.
• Individual risk measures (Section 4.1.2) can be single numbers or a set of risk
estimates for various individuals or geographic locations. In general, they consider the risk to an individual who may be in the effect zone of an incident or set
of incidents. The size of the incident, in terms of the number of people impacted
by a single event, does not affect individual risk. Individual risk measures can be
single numbers, tables of numbers, or various graphical summaries.
• Societal risk measures (Section 4.1.3) are single number measures, tabular sets
of numbers, or graphical summaries which estimate risk to a group of people
located in the effect zone of an incident or set of incidents. Societal risk estimates
include a measure of incident size (for example, in terms of the number of people
impacted by the incident or set of incidents considered). Some societal risk measures are designed to reflect the observation that people tend to be more concerned about the risk of large incidents than small incidents, and may place a
greater weight on large incidents.
4.1.1. Risk Indices
Risk indices are single numbers or tabulations, and they may be used in either an absolute or a relative sense (Section 1.8). Some risk indices represent simplifications of
more complex risk measures, and have units which have real physical meaning (fatal
accident rate, individual hazard index, average rate of death). Others are pure indices
which have no meaningful units, but which are intended to rank different risks relative
to each other (Equivalent Social Cost Index, Mortality Index, Dow Fire and Explosion
Index).
Limitations on the use of indices are that (1) there may not be absolute criteria for
accepting or rejecting the risk, and (2) indices lack resolution and do not communicate
the same information as individual or societal risk measures. Consequence indices
[e.g., Dow Fire and Explosion and Chemical Exposure Indices (Dow, 1994a, b)]
consider risk only in a relative sense. As an example of a use of risk indices for relative
assessment, a table may be developed that compares the equivalent social cost for a
range of possible risk reduction measures; this permits a ranking of these measures on
the basis of social benefit. Examples of the use of risk indices in absolute ways are the
fatal accident rate (FAR) targets that some companies have established.
• The Fatal Accident Rate (FAR) (Lees, 1980) is the estimated number of fatalities per 10^8 exposure hours (roughly 1000 employee working lifetimes). The
FAR is a single number index that is directly proportional to the average individual risk (Section 4.1.2). The only difference numerically is the time period,
which is 1 year (8760 hr) for the average individual risk, so the average individual risk must be multiplied by
a factor of 10^8/(24 x 365) = 1.14 x 10^4 to obtain the FAR.
• The Individual Hazard Index (IHI) (Helmers and Schaller, 1982) is the FAR for
a particular hazard, with the exposure time defined as the actual time that a
person is exposed to the hazard of concern. The IHI estimates peak risk.
• The Average Rate of Death (Lees, 1980) is defined as the average number of
fatalities that might be expected per unit time from all possible incidents. It is
also known as the accident fatality number. Average Rate of Death is a single
number average measure of societal risk.
• The Equivalent Social Cost Index (Okrent, 1981) is a modification of the Average Rate of Death and takes into account society's aversion to large-consequence
incidents.
• The Mortality Index or Number (Marshall, 1987) is used to characterize the
potential hazards of toxic material storage. It is based on the observed average
ratio of casualties to the mass of material or energy released, as derived from the
historical record. It is actually a hazard index rather than a risk index as frequency
of occurrence is not incorporated.
• The Dow Fire and Explosion Index (Dow, 1994a) and the Mond Index (ICI,
1985) estimate relative risk from fires and explosions. These indices can also be
used to estimate the magnitude of potential plant damage from a fire or explosion.
• The Dow Chemical Exposure Index (Dow, 1994b) estimates risk associated
with a single toxic chemical release. Tyler et al. (1996) have proposed an alternative toxicity hazard index.
• The Economic Index measures financial loss and its development is outside the
scope of this volume. The Economic Index may be treated and presented in
essentially the same way as FAR. Companies may have developed specific economic risk targets, and the Economic Index can be compared with them. If there
is no specific target, the relative merits of various risk reduction measures may be
easily ranked. O'Mara, Greenburg, and Hessian (1991) give an example of economic risk calculation.
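The FAR conversion described above can be sketched in a few lines of Python. This is a minimal illustration only; the function names are our own, and the example annual risk value is hypothetical.

```python
# Sketch: converting between average individual risk (per year, assuming
# continuous exposure) and the Fatal Accident Rate (FAR).
# FAR = fatalities per 10^8 exposure hours.

HOURS_PER_YEAR = 24 * 365  # 8760 hr/yr

def far_from_annual_risk(annual_risk):
    """FAR implied by a given average individual risk (per year)."""
    return annual_risk * 1e8 / HOURS_PER_YEAR

def annual_risk_from_far(far):
    """Average individual risk (per year) implied by a FAR."""
    return far * HOURS_PER_YEAR / 1e8

# A hypothetical annual individual risk of 3.5e-4/yr corresponds to a FAR
# of about 4 (the overall British industry value in Table 4.2).
print(far_from_annual_risk(3.5e-4))
```

The factor 10^8/8760 = 1.14 x 10^4 appears directly as the ratio of the two exposure periods.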
4.1.2. Individual Risk
Considine (1984) defines individual risk as the risk to a person in the vicinity of a
hazard. This includes the nature of the injury to the individual, the likelihood of the
injury occurring, and the time period over which the injury might occur.
While injuries are of great concern, there are limited data available on the degrees
of injuries. Thus, risk analysts often estimate risk of irreversible injury or fatality, for
which more statistics are recorded.
Individual risk can be estimated for the most exposed individual, for groups of
individuals at particular places or for an average individual in an effect zone. For a given
incident or set of incidents, these individual risk measures have different values. Definitions of some individual risk measures are given below.
1. Individual risk contours show the geographical distribution of individual risk.
The risk contours show the expected frequency of an event capable of causing
the specified level of harm at a specified location, regardless of whether or not
anyone is present at that location to suffer that harm. Thus, individual risk contour maps are generated by calculating individual risk at every geographic location, assuming that somebody will be present and subject to the risk 100% of
the time (i.e., annual exposure of 8760 hours per year).
2. Maximum individual risk is the individual risk to the person(s) exposed to the
highest risk in an exposed population. This is often the operator working at the
unit being analyzed, but might also be the person in the general population
living at the location of highest risk. Maximum individual risk can be determined from risk contours by locating the person most at risk and determining
what the individual risk is at that point. Alternatively it can be determined by
calculating individual risk at every geographical location where people are present and searching the results for the maximum value.
3. Average individual risk (exposed population) is the individual risk averaged
over the population that is exposed to risk from the facility (e.g., all of the operators in a building, or those people within the largest incident effect zone). This
risk measure is only useful if the risk is relatively uniformly distributed over the
population, and can be extremely misleading if risk is not evenly distributed. If
a few individuals are exposed to a very high risk, this may not be apparent when
averaged with a large number of people at low risk.
4. Average individual risk (total population) is the individual risk averaged over a
predetermined population, without regard to whether or not all people in that
population are actually exposed to the risk. This average risk measure is potentially extremely misleading. If the population selected is too large, an artificially
low estimate of average individual risk will result because much of the population might be at no risk from the facility under study.
5. Average individual risk (exposed hours/worked hours). The individual risk for
an activity may be calculated for the duration of the activity or may be averaged
over the working day. For example, if an operator spends 1 hr per shift sampling a reactor and 7 hr per shift in the control room, the individual risk while
sampling would be 8 times the average individual risk for the entire work day,
assuming no risk for the time in the control room.
Examples of the first four of these measures of individual risks are provided in the
worked examples in Sections 8.1 and 8.2.
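The exposed-hours averaging in item 5 can be sketched numerically. The risk rate used here is a hypothetical placeholder; only the ratio matters.

```python
# Sketch of averaging individual risk over a work shift (item 5 above).
# An operator samples a reactor 1 hr per 8-hr shift; assume all of the
# risk is incurred while sampling and none in the control room.

def average_risk_over_shift(risk_rate_while_exposed, exposed_hr, shift_hr):
    """Average individual risk rate over the full shift."""
    return risk_rate_while_exposed * exposed_hr / shift_hr

risk_while_sampling = 8.0e-4   # hypothetical risk rate during sampling
avg = average_risk_over_shift(risk_while_sampling, exposed_hr=1, shift_hr=8)

# The risk while sampling is 8 times the shift-averaged risk:
print(risk_while_sampling / avg)  # 8.0
```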
4.1.3. Societal Risk
Some major incidents have the potential to affect many people. Societal risk is a measure of risk to a group of people. It is most often expressed in terms of the frequency
distribution of multiple casualty events (the F-N curve). However, societal risk can also
be expressed in terms similar to individual risk. For example, the likelihood of 10 fatalities at a specific location x, y is a type of societal risk measure. The calculation of societal
risk requires the same frequency and consequence information as individual risk. Additionally, societal risk estimation requires a definition of the population at risk around
the facility. This definition can include the population type (e.g., residential, industrial,
school), the likelihood of people being present, or mitigation factors (Section 2.4).
Individual and societal risks are different presentations of the same underlying
combinations of incident frequency and consequences. Both of these measures may be
of importance in assessing the benefits of risk reduction measures or in judging the
acceptability of a facility in absolute terms. In general, it is impossible to derive one
from the other. The underlying frequency and consequence information are the same,
but individual and societal risk estimates can only be calculated directly from that basic
data. This is illustrated in the example in Section 4.4.6.
The difference between individual and societal risk may be illustrated by the following example. An office building located near a chemical plant contains 400 people
during office hours and 1 guard at other times. If the likelihood of an incident causing a
fatality at the office building is constant throughout the day, each individual in that
building is subject to a certain individual risk. This individual risk is independent of the
number of people present—it is the same for each of the 400 people in the building
during office hours and for the single guard at other times. However, the societal risk is
significantly higher during office hours, when 400 people are affected, than at other
times when a single person is affected.
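The office-building example can be sketched as follows. The incident frequency and fatality probability are hypothetical placeholder values chosen only to show the contrast between the two measures.

```python
# Sketch of the office-building example: individual risk is independent of
# occupancy, while societal risk scales with the number of people present.

event_freq = 1.0e-4   # hypothetical frequency of a fatal incident, per year
p_fatality = 1.0      # assume an exposed person is killed (for simplicity)

occupants_day = 400
occupants_night = 1

# Individual risk is the same for every occupant, day or night:
individual_risk = event_freq * p_fatality

# Societal risk (expected fatalities per year in each occupancy mode)
# is 400 times higher during office hours:
societal_day = event_freq * p_fatality * occupants_day      # 4.0e-2
societal_night = event_freq * p_fatality * occupants_night  # 1.0e-4
```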
4.1.4. Injury Risk Measures
Risk to people can be defined in terms of injury or fatality. The use of injuries as a basis
for risk evaluation may be less disturbing than the use of fatalities. However, this introduces problems associated with degree of injury and comparability between different
types of injuries (such as thermal vs explosion vs toxic effects). In a risk assessment dealing with multiple hazards, it is necessary to add risks from different incidents. For
example, how are second degree burns, fragment injuries, and injuries due to toxic gas
exposure combined? Even where only one type of effect (e.g., threshold toxic exposure,
as illustrated in Figure 4.1) is being evaluated, different durations of exposure can
markedly affect the severity of injury. In general, the same calculation techniques can be
used to estimate risk of injury. The only difference is that the consequence and effect
models used to estimate the incident effect zones will be for injury rather than fatality.
Many risk assessments have been conducted on the basis of fatal effects. However,
there are uncertainties on precisely what constitutes a fatal dose of thermal radiation,
blast effect, or a toxic chemical. Where it is desired to estimate injuries as well as fatalities, the consequence calculation can be repeated using lower intensities of exposure
leading to injury rather than death. A simpler approach is to use observed ratios of
[Figure 4.1 plots concentration (volume ppm) against exposure time (minutes), with bands labeled, in order of increasing severity: smell, irritation, coughing, distress, dangerous, and fatal.]
FIGURE 4.1. Typical relationship between injury levels and concentration/exposure for a toxic gas.
deaths to injuries, but this approach is likely to be less accurate. The Canvey risk assessments (Health & Safety Executive, 1978, 1981) assumed an equal number of serious injuries and fatalities. For different incidents, this ratio is highly variable. The Bhopal toxic
chemical release incident caused approximately 2500 fatalities and 20,000 significant
injuries, and about 200,000 persons sought medical treatment. The Feyzin LPG
BLEVE caused 17 fatalities and 80 injuries (Marshall, 1987). However, such ratios are
difficult to compare because the degree of injury is often not adequately defined in the
incident descriptions, and because of the inability to correlate injury and fatality levels
between toxic exposures for most chemicals.
4.2. Risk Presentation
The large quantity of frequency and consequence information generated by a CPQRA
must be integrated into a presentation that is relatively easy to understand and use. The
form of presentation will vary depending on the goal of the CPQRA and the measure
of risk selected. The presentation may be on a relative basis (e.g., comparison of risk
reduction benefits from various remedial measures) or an absolute basis (e.g., comparison with a risk target).
Risk presentation provides a simple quantitative risk description useful for decision making. The number of incidents evaluated in a CPQRA may be very large. Risk
presentation reduces this large volume of information to a manageable form. The end
result may be a single-number index, a table, a graph (e.g., an F-N plot), and/or a risk
map (e.g., individual risk contour plot).
Published risk studies have used a variety of presentation formats, including both
individual and societal risk measures. Typical presentation formats for the risk estimate
measures defined in Section 4.1 are presented in Table 4.1. Examples from the CPI
include the Canvey (Health & Safety Executive, 1978, 1981) and Rijnmond Public
TABLE 4.1. Presentation of Measures of Risk

Indices
  Equivalent social cost index: A single number index value representation
  Fatal accident rate: A point estimate of fatalities/10^8 exposure hours
  Individual hazard index: An estimate of peak individual risk or FAR
  Average rate of death: A number representing the estimated average number of fatalities per unit time
  Mortality index: A single value representation of consequence

Individual risk
  Individual risk contour: Contour lines connecting points of equal risk superimposed over a local map
  Individual risk profile or risk transect: A graph of individual risk as a function of distance from the plant in a specified direction
  Maximum individual risk: A single numerical value of individual risk corresponding to the person at highest risk
  Average individual risk (exposed population): A single numerical value estimating the average risk to a person in the exposed population
  Average individual risk (total population): A single numerical value estimating the average risk to a person in a predetermined population, whether or not all members of that population are exposed to the hazard

Societal risk
  Societal risk curve (F-N curve): A graph of the cumulative probability or frequency of events causing N or more fatalities, injuries, or exposures versus N, the number of fatalities, injuries, or exposures
  Average societal risk: Another term for average rate of death
  Aggregate risk: A term for societal risk to personnel in a building or facility, introduced in API RP 752 (API, 1995)
Authority, 1982) risk studies. The Reactor Safety Study (Rasmussen, 1975) for U.S.
nuclear power plants highlights societal risk.
4.2.1. Risk Indices
Because risk indices are single-number measurements, they are normally presented in
tables. For example, Kletz (1977) has tabulated the FAR for various industries in the
United Kingdom (Table 4.2). Index measures are frequently compared with risk targets (which may be derived from various risk exposures to the general public, for
instance, individual risk from lightning strike).
TABLE 4.2. Fatal Accident Rates in Various Industries and Activities*

Activity                               Fatal accident rate
                                       (fatalities/10^8 exposed hr)
British industry (overall)               4
Clothing and footwear manufacture        0.15
Vehicle manufacture                      1.3
Timber, furniture, and so on             3
Metal manufacture, ship building         8
Agriculture                             10
Coal mining                             12
Railway shunters                        45
Construction erectors                   67
Staying at home (men 16-65)              1
Traveling by train                       5
Traveling by car                        57

* From Kletz (1977).
4.2.2. Individual Risk
Common forms of presentation of individual risk are risk contour plots (Figure 4.2)
and individual risk profiles, also known as risk transects (Figure 4.3) (Considine,
1984; Rijnmond, 1982).
The risk contour plot shows individual risk estimates at specific points on a map.
Risk contours ("isorisk" lines) connect points of equal risk around the facility. Places of
particular vulnerability (e.g., schools, hospitals, population concentrations) may be
quickly identified. The Netherlands Government (1985) requires risk contour plots in
order to satisfy its risk criteria.
The individual risk profile (risk transect) is a plot of individual risk as a function of
distance from the risk source (Figure 4.3). This plot is two-dimensional (risk vs distance) and is a simplification of the individual risk contour plot. Individual risk profile
examples are shown in Section 8.2. In order to use this format, two conditions must be
met: the risk source should be compact (i.e., well approximated by a point source) and
the distribution of risk should be equal in all directions. A candidate for this presentation format is a generic risk assessment for a common hazardous item (e.g., for a pressurized LPG storage tank at a typical retail site). Individual risk profiles (transects) can
also be used to show risk in a particular direction of interest, for example in the direction of a control building.
[Figure 4.2 is a plan-view map showing an industrial site bordered by a river, with superimposed contours of individual risk of fatality, per year.]
FIGURE 4.2. Example of an individual risk contour plot. Note: The contours connect points of
equal individual risk of fatality, per year.
[Figure 4.3 plots individual risk against distance from the plant, in meters.]
FIGURE 4.3. Example of an individual risk profile, or risk transect.
4.2.3. Societal Risk
Societal risk addresses the number of people who might be affected by hazardous incidents. The presentation of societal risk was originally developed for the nuclear industry. The Reactor Safety Study (Rasmussen, 1975), made substantial use of societal risk
graphs, and they have frequently been used for chemical process risk analyses.
A common form of societal risk is known as an F-N (frequency-number) curve. An
F-N curve is a plot of cumulative frequency versus consequences (expressed as number
of fatalities). A logarithmic plot is usually used because the frequency and number of
fatalities range over several orders of magnitude. It is also common to show contributions of selected incidents to the total F-N curve as this is helpful for identification of
major risk contributors.
Figure 4.4 is a sample F-N curve for a single liquefied flammable gas facility. The
facility contains two major parts—a shore-based operation and a marine transfer operation. The F-N curves for these two components of the installation are plotted in Figure
4.4, along with the F-N curve for the total facility. The societal risk F-N curve for the
total facility is equal to the sum of the F-N curves for the two facility components.
Figure 4.5, from the Reactor Safety Study (Rasmussen, 1975), estimates total United
States societal risk from a variety of sources. Occasionally, the societal risk for a single
facility, such as the one in Figure 4.4, will be plotted along with societal risk data for a
large group of people such as the data in Figure 4.5. This comparison is not valid, and
societal risk data should not be presented in this way. The exposed populations are very
different: the entire population of the United States for Figure 4.5, compared to a specific local population for Figure 4.4. Because of the large difference in the exposed population (the "society" for which the societal risk is estimated), single facility F-N curves
should not be presented on the same graph with F-N curves for a large population, and
the data should not be directly compared. Prugh (1992) discusses the application of
F-N curves to the chemical process industries, and provides some suggestions on how
to incorporate consideration of the exposed population into a decision making process
using F-N curves.
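The cumulative F-N construction described above can be sketched as follows. The incident frequencies and fatality counts are hypothetical, and the function name is our own.

```python
# Sketch: building an F-N curve from a list of incident outcome cases,
# each with an annual frequency and an estimated number of fatalities N.

incidents = [
    (3e-4, 2),    # (frequency per year, fatalities) -- hypothetical data
    (1e-4, 10),
    (2e-5, 50),
    (1e-6, 200),
]

def fn_curve(incidents):
    """Return (N, F) pairs, where F is the cumulative frequency of
    incidents causing N or more fatalities."""
    ns = sorted({n for _, n in incidents})
    return [(n, sum(f for f, ni in incidents if ni >= n)) for n in ns]

for n, f in fn_curve(incidents):
    print(n, f)
# F is then plotted against N, both usually on logarithmic axes.
```

Summing the F-N data for separate facility components (as in Figure 4.4) amounts to concatenating their incident lists before applying the same construction.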
The American Petroleum Institute has introduced the term aggregate risk as a
type of societal risk measure in API RP 752, "Management of Hazards Associated with
Location of Process Plant Buildings" (API, 1995). Aggregate risk is defined as "a mea-
[Figure 4.4 shows F-N curves for the shore-based operation, the marine transfer operations, and the total facility risk, plotted against fatalities, N.]
FIGURE 4.4. Example of a societal risk F-N curve.
[Figure 4.5 plots the frequency of incidents resulting in N or more fatalities, F (per year), against fatalities, N, for several sources of risk.]
FIGURE 4.5. Some examples of U.S. societal risk estimates. From Rasmussen (1975).
sure of the total risk to all personnel within a building(s) or within a facility, depending
on the risks being evaluated, who are impacted by a common event, taking into
account the total time spent in the building(s) or facility." Aggregate risk is equivalent
to the societal risk for the personnel in the building or facility. The risk calculation
methods for aggregate risk are the same as for societal risk, but, for aggregate risk as
defined in API RP 752, the population considered is restricted to personnel in the building
or facility being evaluated. CCPS (1996) provides additional discussion of aggregate
risk in the Guidelines for Evaluating Process Plant Buildings for External Explosions and
Fires, including a number of worked sample problems.
Another form of societal risk presentation is a tabulation of the risk of different
group sizes of people affected (e.g., 1-10, 11-100, 101-1000). This is a coarser form
of presentation than the F-N curve. Nonspecialists may find this format easier to interpret than a logarithmic plot. As with any simplification, information may be lost. For
example, if an engineering remedial measure for an important incident did not affect its
frequency of occurrence, but did reduce the number of people affected from 80 to 40,
there would be no change shown in a summary tabulation because the 11-100 band
includes both. An F-N plot would show this change.
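The banding example above is easy to demonstrate numerically (the band boundaries follow the text; the function name is our own):

```python
# Sketch of the loss of information in banded societal risk tabulations.
# A remedial measure reduces an incident's consequence from 80 to 40
# fatalities; both fall in the 11-100 band, so a banded summary shows
# no change, while an F-N plot would.

bands = [(1, 10), (11, 100), (101, 1000)]

def band_of(n):
    """Return the (lo, hi) band containing n fatalities, or None."""
    for lo, hi in bands:
        if lo <= n <= hi:
            return (lo, hi)
    return None

before, after = 80, 40
print(band_of(before) == band_of(after))  # True: the improvement is hidden
```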
The average rate of death index (average societal risk) represents a further simplification of the presentation of societal risk. By reducing the large amount of information
about the distribution of potential incident sizes contained in the F-N curve to a single
number, information about the distribution of societal risk is lost. However, the average societal risk index is easy to understand, and can be useful for many decision
making purposes.
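The collapse from an F-N data set to the average rate of death can be sketched in one line (hypothetical incident data, as in the earlier F-N example):

```python
# Sketch: the average rate of death (average societal risk) reduces the
# incident list behind an F-N curve to a single expected value,
# sum of (frequency x number of fatalities) over all incidents.

incidents = [
    (3e-4, 2),    # (frequency per year, fatalities) -- hypothetical data
    (1e-4, 10),
    (2e-5, 50),
    (1e-6, 200),
]

average_rate_of_death = sum(f * n for f, n in incidents)
print(average_rate_of_death)  # expected fatalities per year
```

Note that many different F-N distributions can share the same average, which is precisely the information lost in this simplification.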
4.3. Selection of Risk Measures and Presentation Format
The selection of risk measure and presentation format is dependent upon a number of
factors. Some studies must produce measures required by external agencies or by company management. In these cases, the type of risk measure may not be negotiable. At
other times, there may be substantial flexibility in making the selection. The same is
true of the presentation format.
4.3.1. Selection of Risk Measures
Factors to be considered in selecting the risk measures to be presented include the following:
• Study objectives. Study objectives are discussed in Section 1.9.2 as a major component of a scope of work document. The study objectives may or may not point to
a specific risk measure, but the scope of work must define the risk measures to be
applied. This selection directly impacts resource and time requirements for the
study.
• Required Depth of Study. The development of a specific measure may be constrained by the depth of study. Table 4.3 presents risk measures that can be developed from the three risk estimation planes in the study cube (Section 1.3). These
relationships present specific constraints in the risk measure selection process.
• End Uses. The selection of risk measures is normally dictated by the planned end
use of the study, but the study objectives may not consider all possible end uses
of the results of the study. Questions raised following completion of the study
may not have been considered when study objectives were finalized. For example, study objectives might include a desire to determine the public risk from an
existing process unit using only a risk index. Following the study, a need may
arise to present study results to a local emergency planning committee. This group
may want to review study results on a different basis and may not understand what
is involved in producing another risk measure. Initial selection, therefore, should
anticipate other end uses beyond those defined by the study objectives.
• Population at Risk. The selection of risk measure may also be constrained by
whether the study is directed at in-plant employees or the surrounding public.
Individual risk is usually estimated for in-plant workers because of their proximity to the risk, but societal risk estimates may also be appropriate for large facilities with diverse working populations.
TABLE 4.3. Risk Measures Possible from Depths of Study

                             Study cube risk measures (Figure 1.4)
Risk measure                 Consequence plane   Frequency plane   Risk plane
Individual risk
  Contour                    No                  No                Yes
  Profile                    No                  No                Yes
  Maximum                    No                  No                Yes
  Average (exposed)          No                  No                Yes
  Average (total)            No                  No                Yes
Societal risk
  F-N curve                  No                  No                Yes
  Aggregate risk             No                  No                Yes
  Average                    No                  No                Yes
Index measures
  Equivalent social cost     No                  No                Yes
  Fatal accident rate        No                  No                Yes
  Individual hazard index    No                  No                Yes
  Mortality index            Yes                 Yes               Yes
  Economic                   Yes                 Yes               Yes
4.3.2. Selection of Presentation Format
There are a limited number of presentation formats for each measure of risk, as Table
4.1 implies. The presentation format should be included in the study objectives (Section 1.9.2). One reason is that there is a major cost difference between generation of
single point estimates versus generation of a series of risk contours. The following factors should be considered in deciding which forms are chosen:
• User Requirements. As with the selection of risk measures, the user may have a
specific need to see risk estimates in a certain format. If so, this format requirement establishes the minimum level of effort required. However, there may be
value in presenting the results in other formats as well.
• User Knowledge. Where the user is unfamiliar with the possible formats, judicious
selection needs to be made through a prompting process where sample formats
are presented to and approved by the user before any effort is made to secure
approval for the scope of work. If a complex format is selected for presentation, it
may be necessary to orient and familiarize the user on the interpretation of the
risk presentation.
• Effectiveness of Communicating Results. No matter what the user's knowledge or
perception of the requirements, it is vital that the presentation communicate the
results in an acceptable fashion. The presentation should be as simple as necessary to ensure comprehension, but not so simple that resolution is lost or that
bias is introduced. It may be necessary to provide additional presentations in
order to satisfy the user's actual needs as well as the perceived ones.
• Potential Unrevealed Uses and Audiences. Oftentimes the results of a CPQRA
may be used for purposes outside the study objectives. These uses may lead to
misinterpretation of the results because of the presentation formats chosen.
Under these circumstances, there may not be time to develop more suitable presentations. It may be prudent to consider the likelihood for such "unofficial" uses
and to provide appropriate presentations as part of the results.
• Need for Comparative Presentations. It may be desirable to present comparisons of
the results of a study with other risk assessments. This type of presentation may
offer the following:
-a comparison of alternate process design or operation options
-a comparison of the current risk estimates with risk estimates of other similar
systems studied previously, to highlight areas for risk reduction or further study
-a comparison of the current risk estimates with other internal risk estimates that
have been previously approved or rejected, or a comparison of the current risk
estimate with other published studies
-a comparison of risk estimates with other voluntary and involuntary risks, to
rank the current risk estimate among these reference values.
4.4. Risk Calculations
This section describes the procedures for calculating individual risk (Section 4.4.1), societal risk (Section 4.4.2), and risk indices (Section 4.4.3). The order of discussion has
been changed from earlier sections in this chapter because the risk index calculations use
some of the information developed in the individual or societal risk calculations.
4.4.1. Individual Risk
The following procedure for calculation of individual risk is based on a discussion by
IChemE (1985). The calculation of individual risk at a geographical location near a
plant assumes that the contributions of all incident outcome cases are additive. Thus,
the total individual risk at each point is equal to the sum of the individual risks, at that
point, of all incident outcome cases associated with the plant:

IR_x,y = Σ (i = 1 to n) IR_x,y,i    (4.4.1)

where
IR_x,y = the total individual risk of fatality at geographical location x, y
(chances of fatality per year, or yr⁻¹)
IR_x,y,i = the individual risk of fatality at geographical location x, y from
incident outcome case i (chances of fatality per year, or yr⁻¹)
n = the total number of incident outcome cases considered in the analysis
The inputs to Eq. (4.4.1) are obtained from

IR_x,y,i = f_i p_f,i    (4.4.2)

where
f_i = frequency of incident outcome case i, from frequency analysis
(Chapter 3) (yr⁻¹)
p_f,i = probability that incident outcome case i will result in a fatality at
location x, y, from the consequence and effect models (Chapter 2)
And the inputs to Eq. (4.4.2) are obtained from

f_i = F_I p_O,i p_OC,i    (4.4.3)

where
F_I = frequency of incident I, which has incident outcome case i as one of its
incident outcome cases (yr⁻¹)
p_O,i = probability that the incident outcome, having i as one of its incident
outcome cases, occurs, given that incident I has occurred
p_OC,i = probability that incident outcome case i occurs given the occurrence
of the precursor incident I and the incident outcome corresponding to
the outcome case i
The calculation of the frequency of incident outcome case i, f_i, requires evaluation
of the incident outcome and incident outcome case probabilities (p_O,i, p_OC,i) given the
frequency of occurrence of the incident I. For example, a release of a nontoxic flammable material (incident) can result in a jet fire, pool fire, BLEVE, flash fire, unconfined
vapor cloud explosion, or safe dispersal if not ignited (incident outcomes). Each of
these outcomes has a conditional probability (p_O,i) associated with it. Some of these
incident outcomes will be further broken down into incident outcome cases depending
on the ignition source location and weather conditions. Each of these incident outcome
cases has a conditional probability of occurrence (p_OC,i). An event tree is commonly
used to evaluate these relationships (sample problem, Section 3.2.2). Figure 4.6 is an
example event tree which illustrates the general application of Eqs. (4.4.2) and (4.4.3).
All individual risk calculation methods are based on these relationships. In general,
these equations must be applied at all locations at which individual risk is to be calculated.
Simplified techniques can reduce the amount of calculation, but accuracy may be sacrificed. However, simplified techniques may be useful in identifying the major contributors
to risk. Once these have been identified, they can be subjected to a more detailed analysis.
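The point calculation implied by Eqs. (4.4.1)-(4.4.3) can be sketched in a few lines of code. All frequencies and probabilities below are invented placeholders for illustration, not values from any actual study:

```python
# Sketch of Eqs. (4.4.1)-(4.4.3): individual risk at one geographical location
# as the sum over incident outcome cases. All numbers are illustrative only.

def outcome_case_frequency(F_I, p_O, p_OC):
    """Eq. (4.4.3): frequency of an incident outcome case (yr^-1)."""
    return F_I * p_O * p_OC

def individual_risk(cases):
    """Eqs. (4.4.1)-(4.4.2): total individual risk at a location (yr^-1).

    `cases` is a list of (f_i, p_f_i) pairs: the incident outcome case
    frequency and the probability of fatality at this location for that case.
    """
    return sum(f_i * p_f_i for f_i, p_f_i in cases)

# Hypothetical incident (F_I = 1e-4/yr) with two incident outcome cases:
f1 = outcome_case_frequency(1e-4, p_O=0.3, p_OC=0.5)   # e.g., flash fire, one weather case
f2 = outcome_case_frequency(1e-4, p_O=0.7, p_OC=0.25)  # e.g., toxic cloud, one weather case
IR = individual_risk([(f1, 1.0), (f2, 0.2)])
print(f"IR = {IR:.2e} per year")
```

A real analysis would repeat this sum at every location of interest, with f_i and p_f,i taken from the frequency and consequence models of Chapters 3 and 2.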
4.4.1.1. INDIVIDUAL RISK CONTOURS AND PROFILES (RISK TRANSECTS)
Two example calculation approaches are presented for estimating individual risk at various geographical locations around a facility and for using this information to generate
risk contours and profiles. A general approach is discussed first, requiring the estimation of individual risk at every location for the study group of incidents, incident outcomes, and incident outcome cases. Ignition source and weather data can be
incorporated in any degree of detail as defined by the depth of study for the analysis.
This approach generally requires computer calculation, but simplified consequence and
effect models may make it feasible to use graphical or hand calculation as illustrated in
the sample problem in Section 8.1.
[Figure 4.6 shows an event tree for a flammable, toxic gas release (incident, frequency F_I): no ignition of the release leads to a toxic vapor cloud (incident outcome, probability p_O,i); a specific wind direction, wind speed, and atmospheric stability class define incident outcome case i (probability p_OC,i); the probability of fatality given the toxic vapor exposure dose is p_f,i, yielding the individual risk contribution IR_i.]
FIGURE 4.6. A sample event tree illustrating individual risk calculations [Eqs. (4.4.2) and
(4.4.3)] for one incident outcome case resulting from a flammable, toxic gas release.
The second approach incorporates simplifying assumptions restricting, for example, the number of weather cases and ignition sources, and is suitable for hand
calculation.
4.4.1.2. GENERAL APPROACH
The general approach requires the application of Eqs. (4.4.1), (4.4.2), and (4.4.3) at
every geographical location surrounding the facility. Figure 4.7 is a logic diagram
showing the calculation procedure. Application of the general approach to a real problem, incorporating detailed treatment of ignition sources and a wide variety of weather
conditions, results in an extremely large number of incident outcome cases. A large
number of individual calculations is required and computer tools are essential. Sophisticated computer programs are available to do these risk calculations.
This procedure requires definition of frequency and effect zones for each incident
outcome case as defined in Chapter 1. Chapters 2 and 3 discuss methods for estimating
consequences and frequencies, respectively. This information is used to estimate the
individual risk for all incident outcome cases at each geographic location, using Eqs.
(4.4.1), (4.4.2), and (4.4.3). The result is a list of individual risk estimates at the geographic locations considered. These risk estimates can then be plotted on a local map.
Risk contours connecting points of equal individual risk can be drawn manually or by
any standard graphics contouring package.
[Figure 4.7 outlines the procedure: starting from the list of study group incidents, incident outcomes, and incident outcome cases (Chapter 1), define the geographic area and individual locations of interest; determine the frequency of all incident outcome cases [Chapter 3 and Eq. (4.4.3)] and the effect zone and probability of fatality at every location in the effect zone for all incident outcome cases (Chapter 2); then, for each geographic location in turn, determine and record the individual risk [Eqs. (4.4.1) and (4.4.2)]; finally, plot the individual risk estimates on a local map and draw individual risk contours connecting points of equal risk.]
FIGURE 4.7. General procedure for calculation of individual risk contours.

When a chemical is both toxic and flammable (e.g., hydrogen cyanide), extreme
caution must be exercised in the definition of incident outcome cases for individual risk
estimation. A release of such a chemical can result in an unconfined vapor cloud explosion or a downwind toxic vapor release. Both events could occur during the same
release. Analysis of an incident with both outcomes is beyond the scope of this volume.
Individual risk contours for fatalities with mitigating factors (shelter, escape to
shelter, or evacuation) can differ by a factor of 10 or more from contours without mitigating factors. Individual risk contours for a particular level of injury will be more distant than fatality contours for the same plant. Study objectives should establish the
basis and form of individual risk calculations, and whether mitigating factors are
included. The potential confusion from using different bases for risk estimation has led
many to estimate individual risk contours for fatalities with no mitigating or presence
factors, providing a consistent basis for studies. However, in many cases this will be
unrealistically conservative, especially if the study results are being compared to absolute risk targets.
4.4.1.3. SIMPLIFIED APPROACHES
The approach described above may be simplified in several ways. For example, the
objectives of a particular study may not require a full knowledge of the geographic distribution of individual risk. Perhaps the study objective can be fulfilled by calculating
individual risk at a few locations of particular interest. For example, a study may be undertaken to
compare the risk for several potential locations for a control building, requiring risk
evaluation only at the specific locations under consideration. In this case the methodology discussed above need only be applied to the locations of interest, greatly reducing
the computational effort. The individual risk estimates for the locations of interest are
exactly the same as would be obtained if the full risk contour map were developed, since
the same calculation is done at each location, but fewer locations are considered. What
is lost is the detailed information about the geographic distribution of risk.
A second simplified approach is based on the following assumptions:
• All hazards originate at point sources.
• The wind distribution is uniform (i.e., the wind is equally likely to blow in any
direction).
• A single wind speed and atmospheric stability class can be used.
• No mitigation factors are considered.
• Ignition sources are uniformly distributed (i.e., ignition probability does not
depend on direction).
• Consequence effects can be treated discretely. The level of effect within a particular effect zone is constant (e.g., 100% fatality). Beyond that zone there is no
effect.
The use of these assumptions results in symmetric risk contours: all risk contours
are circular. Thus, the individual risk determined in a radial direction from the source
defines the risk profile and the risk contour map. This type of individual risk calculation
might be suitable, for example, for a preliminary study of a new plant, before any decisions on the siting of the facility have been made.
Figure 4.8 shows the procedure for individual risk calculation using this simplified
approach. This procedure requires a list of all incidents, incident outcomes, and incident outcome cases considered in the study (Chapter 1). Consequences (effect zones)
and frequencies for all incident outcome cases must then be determined using the
methods outlined in Chapters 2 and 3, respectively. Because the effect zones are
assumed to be discrete (see last assumption above), the effect zone may be defined
simply in terms of a radial distance from the source, within which the effect under consideration (e.g., a certain toxic exposure, injury, fatality) occurs. For those incident
outcome cases affected by wind direction, an estimate of the width of the effect zone in
terms of the angle enclosed (i.e., treat the effect zone as a pie shaped section of a circle)
is also needed, as shown in Figure 4.9.

[Figure 4.8 outlines the simplified procedure: from the list of study group incidents, incident outcomes, and incident outcome cases (Chapter 1), determine the effect zone radius and enclosed angle (if relevant) for each incident outcome case (Chapter 2) and the frequency of each incident outcome case (Chapter 3); select the incident outcome case with the largest effect zone; if wind direction affects the location of the effect zone, reduce the incident outcome case frequency by the direction factor [Eq. (4.4.4)], otherwise use the incident outcome case frequency directly; draw a circle (risk contour) around the origin of radius equal to the effect zone radius for this incident outcome case and assign an individual risk value to the contour [Eq. (4.4.5)]; repeat with the incident outcome case having the next largest zone until all incident outcome cases have been considered, at which point the risk contour map is complete.]
FIGURE 4.8. A simplified procedure for individual risk contours.

Note that this is an extremely conservative calculation (overestimates risk). Compare the area covered by the actual toxic gas plume in
Figure 4.9 to the area of the pie-shaped segment used for the risk calculations. The
result of this is a list of all incident outcome cases for the study, each with its associated
frequency, effect zone radius, and enclosed angle (if needed).
To generate the risk contour map, select the incident outcome case with the largest
effect zone radius and draw the appropriate circle (risk contour) on the map of radius
[Figure 4.9 shows a toxic release point with the wind blowing toward an assumed pie-shaped effect zone of enclosed angle θ, which is considerably wider than the actual plume.]
FIGURE 4.9. Effect zone for an incident outcome case dependent on wind direction for the
simplified individual risk estimation procedure of Figure 4.8.
equal to the effect zone radius. Next, determine if that incident outcome case is affected
by wind direction (for example, a flammable or toxic gas cloud will travel downwind,
but a condensed phase explosion will have equal effects in all directions regardless of
wind direction.) If the incident outcome case is affected by wind direction, the frequency must be reduced by a direction factor accounting for the fact that the wind will
be blowing in any particular direction for only a fraction of the incident outcome case
occurrences. This calculation is equivalent to the development of separate incident outcome cases, based on different weather conditions, in the general approach discussed
above. Because it has been assumed that the wind is equally likely to blow in any direction, it can be shown that the direction factor is equal to θ/360, where θ is the angle
enclosed by the incident outcome case effect zone. The frequency of the incident outcome case affecting any particular location in a particular direction is
f_i,d = f_i (θ_i / 360)    (4.4.4)

where
f_i,d = frequency at which incident outcome case i affects a point in any
particular direction, assuming a uniform wind direction distribution (yr⁻¹)
f_i = estimated frequency of occurrence of incident outcome case i (yr⁻¹)
θ_i = the angle enclosed by the effect zone for incident outcome case i (degrees)
[Figure 4.10 illustrates the simplified calculation for three incident outcome cases
(effect zone radii a > b > c):]

Incident outcome   Frequency f_i   Impact zone         θ_i     f_i,d (yr⁻¹)    Effect zone
case number        (yr⁻¹)          affected by wind?   (deg)   [Eq. (4.4.4)]   radius
1                  x               No                  NA      NA              a
2                  y               Yes                 45      y/8             b
3                  z               No                  NA      NA              c

The contour risks follow from Eq. (4.4.5), working inward:
Risk Contour 1:  IRC_1 = 0 + f_1 = x
Risk Contour 2:  IRC_2 = IRC_1 + f_2,d = x + y/8
Risk Contour 3:  IRC_3 = IRC_2 + f_3 = (x + y/8) + z

FIGURE 4.10. Illustration of the simplified individual risk calculation procedure of Figure 4.8.
Incident outcome cases 1 and 3 are not affected by wind direction. Incident outcome case 2
is wind dependent.
The next step is to assign an individual risk value to the contour. This is equal to the
frequency of the incident outcome case i [adjusted as described by Eq. (4.4.4) if the
wind direction affects the location of the effect zone] added to the individual risk of the
next farther risk contour:
IRC_i = f_i (or f_i,d) + IRC_i-1    (4.4.5)

where IRC_i is the value of individual risk at the contour of the incident outcome case
under consideration (yr⁻¹), IRC_i-1 is the value of individual risk at the next farther
risk contour (yr⁻¹), and f_i and f_i,d are as defined for Eq. (4.4.4).
For the first contour drawn (for the incident outcome case with the greatest effect
zone radius), IRC_i-1 is zero. This procedure is continued until all incident outcome
cases have been considered. The map is in the form of a series of circles surrounding the
facility, each with an associated individual risk value. Figure 4.10 illustrates the application of these calculations.
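Under the stated assumptions, this contour procedure reduces to sorting the incident outcome cases by effect zone radius and accumulating risk inward. The sketch below uses invented frequencies, radii, and enclosed angles purely for illustration:

```python
# Sketch of the simplified contour procedure using Eqs. (4.4.4) and (4.4.5).
# Each case: (frequency yr^-1, effect zone radius m, enclosed angle in degrees
# or None if the case is not wind dependent). All values are illustrative.

cases = [
    (1e-4, 300.0, None),  # case 1: not wind dependent
    (2e-4, 200.0, 45.0),  # case 2: wind dependent, 45-degree effect zone
    (5e-5, 100.0, None),  # case 3: not wind dependent
]

def risk_contours(cases):
    """Return (radius, cumulative individual risk) pairs, outermost first."""
    contours = []
    ir_prev = 0.0  # IRC_(i-1) is zero for the largest effect zone
    for f, radius, theta in sorted(cases, key=lambda c: -c[1]):
        f_eff = f * theta / 360.0 if theta is not None else f  # Eq. (4.4.4)
        ir = f_eff + ir_prev                                   # Eq. (4.4.5)
        contours.append((radius, ir))
        ir_prev = ir
    return contours

for radius, ir in risk_contours(cases):
    print(f"r = {radius:5.0f} m   IR = {ir:.2e} /yr")
```

Each printed pair corresponds to one circular risk contour; the innermost contour carries the sum of all contributions, as in Figure 4.10.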
The results of risk calculations using this method can also be displayed as an individual risk transect (Figure 4.11).
A third method for simplifying individual risk calculation is to assume a simple
step function for the probability of fatality as a function of the physical effect causing
the fatality (e.g., toxic gas concentration, heat radiation, explosion overpressure). The
simplest step function would be to assume a probability of fatality of 1 for a benchmark
physical effect equal to or greater than some threshold, and 0 for a lower value, as illustrated in Figure 4.12.
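A minimal sketch of this step-function approximation follows; the benchmark overpressure used is an arbitrary illustration, not a recommended criterion:

```python
# Step-function dose-response approximation of Figure 4.12: probability of
# fatality is 1 at or above a benchmark physical effect, 0 below it.

def p_fatality(effect, benchmark):
    """Return 1.0 inside the effect zone, 0.0 outside it."""
    return 1.0 if effect >= benchmark else 0.0

# e.g., with an assumed benchmark overpressure of 0.3 bar:
print(p_fatality(0.5, 0.3))  # at or above the benchmark -> 1.0
print(p_fatality(0.1, 0.3))  # below the benchmark -> 0.0
```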
The simplified effect model allows the calculation of a specific geographic region
where the physical effect is equal to or greater than the benchmark value for each incident outcome case. The probability of fatality for all points within this region is equal
to the value defined by the step function used to approximate the actual effect. Figure
4.13 illustrates a simplified effect zone.
[Figure 4.11 plots individual risk against distance as a series of steps.]
FIGURE 4.11. Risk transect for the example illustrated in Figure 4.10.

[Figure 4.12 plots probability of fatality against magnitude of physical effect (e.g., toxic dose, heat radiation dose, explosion overpressure), showing the actual response curve and the step at the benchmark value used to approximate it.]
FIGURE 4.12. Use of a benchmark value to approximate the actual dose-response relationship.
The use of benchmark values for effect models greatly reduces the calculation
burden for individual risk calculation because the risk of fatality is equal for a large
number of geographic locations. Simple graphical techniques can also be used to simplify the calculations, as illustrated in the examples in Sections 4.4.5 and 8.1.
4.4.1.4. OTHER INDIVIDUAL RISK MEASURES
The above discussion reviews the procedures for calculation of individual risk at all geographic locations surrounding a plant, leading to an individual risk contour map. The
[Figure 4.13 shows the boundary at which the physical effect equals the benchmark value: inside the boundary the probability of fatality is 1.0, outside it is 0.]
FIGURE 4.13. Simplified incident outcome case effect zone.
development of these individual risk measures requires knowledge of the local population
only to the extent that the population might affect the realization of a hazard, for example, by providing an ignition source for a flammable cloud. The other individual risk
measures described in Table 4.1 consider the population exposed to the risk.
Maximum Individual Risk. The maximum individual risk is determined by estimating the individual risk at all locations where people are actually present, and searching
the results for the maximum value of individual risk.
Average Individual Risk (Exposed Population). The average individual risk
(exposed population) is determined by averaging the individual risk of all persons
exposed to risk from the facility. It is first necessary to determine the population that
can be affected by at least one incident outcome case. The number of people at each
location within the farthest individual risk contour must be determined. The average
individual risk (exposed population) is then determined by
IR_AV = [ Σ_x,y IR_x,y P_x,y ] / [ Σ_x,y P_x,y ]    (4.4.6)

where
IR_AV = average individual risk in the exposed population (yr⁻¹)
IR_x,y = individual risk at location x, y (yr⁻¹)
P_x,y = number of people at location x, y
Only those locations where people actually are present need be considered, since
P_x,y = 0 at locations where there are no people.
Average Individual Risk (Total Population). Average individual risk (total population) is determined by averaging the individual risk over a predetermined population
without regard to whether or not that entire population is subject to risk from the facility. For example, the predetermined population might be the total population inside a
plant, or the population of a town surrounding the plant. The calculation is the same as
given in Eq. (4.4.6), except the denominator is the total predetermined population:
IR_AV = [ Σ_x,y IR_x,y P_x,y ] / P_T    (4.4.7)

where P_T = total predetermined population for averaging risk (number of people).
This measure of individual risk must be used with caution. Average individual risk
can be made to appear very low by including a large number of people incurring little
or no risk in the predetermined population.
Both of the average individual risk measures discussed above can also be calculated
by dividing the average rate of death index (discussed below) by the population of
interest (either the exposed population or a predetermined total population).
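The contrast between Eqs. (4.4.6) and (4.4.7), and the dilution effect warned about above, can be seen in a short sketch. The location risks and populations below are invented for illustration:

```python
# Sketch of Eqs. (4.4.6) and (4.4.7): average individual risk over the exposed
# population versus over a predetermined total population. Illustrative values.

# (individual risk at location, number of people at that location)
locations = [(1e-4, 10), (1e-5, 50), (1e-6, 200)]

def avg_ir_exposed(locations):
    """Eq. (4.4.6): population-weighted average over exposed people only."""
    num = sum(ir * p for ir, p in locations)
    den = sum(p for _, p in locations)
    return num / den

def avg_ir_total(locations, total_population):
    """Eq. (4.4.7): same numerator, divided by a predetermined total P_T."""
    return sum(ir * p for ir, p in locations) / total_population

print(avg_ir_exposed(locations))       # averaged over the 260 exposed people
print(avg_ir_total(locations, 10000))  # diluted by a 10,000-person total
```

With a hypothetical predetermined population of 10,000 the second average is far smaller than the first, illustrating why the total-population measure must be used with caution.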
Individual risk profiles, or risk transects, are calculated using the same techniques
as individual risk contours. The only difference is the selection of locations for the calculations. For a risk transect, the points are located along a straight line in the direction of
interest. Clearly, this greatly reduces the number of computations required. The second
simplified individual risk calculation method described in Section 4.4.1.3 (and shown in
Figure 4.8) produces individual risk profiles, or risk transects, directly.
4.4.2. Societal Risk
The following procedures for the calculation of societal risk F-N curves are based on a
discussion by the IChemE (1985). All of the information required for individual risk calculation is also required for societal risk calculation, as well as information on the population surrounding the facility. For a detailed analysis, the following may be needed:
• information on population type (e.g., residential, office, factory, school, hospital) for evaluating mitigation factors.
• information about time-of-day effects (e.g. for schools)
• information about day-of-week effects (e.g., for industrial, educational, or recreational facilities)
• information about percentage of time population is indoors for evaluating mitigating factors.
Differing population distributions can be treated using a single weighted average
population distribution, but this underestimates the effects of incidents that affect
infrequent large gatherings of people. The incident frequency for each population distribution is equal to the relative probability of occurrence of that population distribution times the total incident frequency.
4.4.2.1. GENERAL PROCEDURE
Figure 4.14 shows the general procedure for calculation of a societal risk F-N curve.
The steps are the same as for individual risk calculation, through the estimation of consequences (effect zones) and frequencies. Then, it is necessary to combine this information with population data to estimate the number of people affected by each incident
outcome case.
[Figure 4.14 outlines the procedure: from the list of study group incidents, incident outcomes, and incident outcome cases (Chapter 1), determine the frequency of all incident outcome cases [Chapter 3 and Eq. (4.4.3)] and the effect zone and probability of fatality at every location in the effect zone for all incident outcome cases (Chapter 2); then, for each incident outcome case in turn, combine this information with population distribution data to determine the total number of fatalities [Eq. (4.4.8)]; when all incident outcome cases have been considered, the result is a list of all incident outcome cases with their associated frequencies and numbers of fatalities; put the data in cumulative frequency form [Eq. (4.4.9)] and plot the F-N curve.]
FIGURE 4.14. General procedure for calculation of societal risk F-N curves.
The number of people affected by each incident outcome case is given by

N_i = Σ_x,y P_x,y p_f,i    (4.4.8)

where N_i is the number of fatalities resulting from incident outcome case i; P_x,y is the
number of people at location x, y; and p_f,i is as defined in Eq. (4.4.2).
The number of people affected by all incident outcome cases must be determined,
resulting in a list of all incident outcome cases, each with a frequency (from frequency
analysis) and the number of people affected [Eq. (4.4.8)]. This information must then
be put in cumulative frequency form in order to plot the F-N curve.
F_N = Σ_i F_i    for all incident outcome cases i for which N_i ≥ N    (4.4.9)

where F_N is the frequency of all incident outcome cases affecting N or more people, F_i is
the frequency of incident outcome case i, and N_i is the number of people affected by
incident outcome case i.
The result is a data set giving FN as a function of N, which is then plotted (usually
by a logarithmic plot) to give the F-N curve.
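The cumulative step of Eq. (4.4.9) can be sketched directly from a list of (F_i, N_i) pairs; the frequencies and fatality counts below are invented for illustration:

```python
# Sketch of Eq. (4.4.9): turning a list of incident outcome cases, each with a
# frequency F_i and a fatality count N_i [from Eq. (4.4.8)], into cumulative
# F-N data. Illustrative values only.

cases = [(1e-3, 1), (5e-4, 3), (1e-4, 3), (2e-5, 30)]  # (F_i per yr, N_i)

def fn_curve(cases):
    """Return sorted (N, F_N) pairs, F_N = frequency of >= N fatalities."""
    ns = sorted({n for _, n in cases})
    return [(n, sum(f for f, ni in cases if ni >= n)) for n in ns]

for n, F in fn_curve(cases):
    print(f"N >= {n:3d}: F_N = {F:.2e} /yr")
# The (N, F_N) pairs would then be plotted on log-log axes to give the F-N curve.
```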
Table 4.4 (Marshall, 1987) illustrates the type of data tabulation required to construct an F-N curve. In this example, frequency and number of fatalities data for fires in
the United Kingdom (1968-1980) are taken from historical data. The resulting F-N
curve is shown in Figure 4.15. The case studies in Chapter 8 illustrate the calculation of
F-N curves using model estimates rather than historical data.
The societal risk calculation can be extremely time consuming, because fatalities
must be estimated for every incident outcome case. Incidents must be subdivided into
incident outcomes and incident outcome cases to evaluate each weather condition,
wind direction, ignition case, and population case (e.g., day-night). Thus, a single incident may have to be analyzed for W × N × I × P cases (W = number of atmospheric
stability cases, N = number of wind directions, I = number of ignition cases, P =
number of population cases). Given typical values for W (2-6), N (8-16), I (1-3), and
P (1-3), this could result in 16 to 864 incident outcome cases for a single incident. In
practice, representative weather conditions, wind directions, and population cases are
used to approximate the full spectrum of actual conditions. It is usually possible to further reduce the case list by noting symmetry and areas of no population (which can be
neglected for the societal risk calculation).
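The case-count arithmetic quoted above is simply the product of the four factors:

```python
# Quick check of the incident outcome case count, W x N x I x P, quoted above.

def n_outcome_cases(W, N, I, P):
    """Number of incident outcome cases generated by a single incident."""
    return W * N * I * P

print(n_outcome_cases(2, 8, 1, 1))   # low end of the typical ranges -> 16
print(n_outcome_cases(6, 16, 3, 3))  # high end of the typical ranges -> 864
```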
Mitigation factors (Section 2.4) (shelter, escape to shelter, and evacuation) can be
incorporated into the societal risk calculation. These factors will reduce the probability
of fatality [P^- in Eq. (4.4.2)]. They will be different for each incident type (e.g., fire,
explosion, toxic release) and incident duration. Global correction factors applied to
final fatality results, assuming constant mitigation, are not likely to be accurate.
4.4.2.2. SIMPLIFIED PROCEDURE
The amount of calculation required to estimate societal risk by the method of Figure
4.14 can be reduced by limiting the number of weather, wind direction, and population cases considered. This reduces the number of calculations, but sacrifices accuracy.
TABLE 4.4. Number of Fires in Which a Given Number of Fatalities Occurred in the United Kingdom, 1968-1980 (from Marshall, 1987)

[The original table tabulates, for each year 1968 through 1980, the total number of fires with fatalities and the number of fires at each fatality count. Entries marked "na" denote statistics that are not available; columns for 1-5 deaths show frequencies derived only from those years for which statistics are available. Statistics are incomplete for some years because of industrial action, and mean values are based on the number of years for which statistics are available. The summary rows for 1 to 9 fatalities per fire are:]

Number of fatalities per fire (N):     1      2     3     4      5     6     7     8      9
Frequency, events per year:            710    50    17    6.6    1.8   1.23  0.76  0.154  0.154
Frequency of ≥ N fatalities (yr⁻¹):    788.6  78.6  28.6  11.66  5.06  3.46  2.39  1.63   1.48

[Entries continue for larger fatality counts (10 to 37 fatalities per fire), with the cumulative frequency falling from about 1.23 yr⁻¹ at N = 10 to about 0.076 yr⁻¹ at the largest fatality counts; some of these mean values are based upon the mean of a group of values of n. The full tabulation is given in Marshall (1987).]
FIGURE 4.15. Plot of F-N fire statistics for all fires, UK 1968-1980; F is the frequency of events with >= N fatalities per annum, and N is the number of fatalities in a fire. From Marshall (1987).
Another simplification is to assume that the probability of fatality, p_f,i, in Eq. (4.4.8) can have only two values: a constant for all locations x, y within the effect zone, and zero for all locations x, y outside the effect zone, as described for individual risk calculations in Section 4.4.1.3 and illustrated in Figure 4.13. The assumed probability of fatality can have any appropriate value, but it must be taken as constant throughout the effect zone. The simplified procedure is exactly the same as the general procedure of Figure 4.14. However, the number of fatalities for each incident outcome case can be determined graphically rather than by application of Eq. (4.4.8) at every location. The number of fatalities due to incident outcome case i is determined by

1. superimposing a map of the effect zone for incident outcome case i on a population distribution map;
2. counting the number of people within the effect zone; and
3. multiplying by the probability of fatality within the effect zone:

N_i = P_i p_f,i    (4.4.10)

where P_i is the total number of people within the discrete effect zone for incident outcome case i and p_f,i is the discrete value of the probability of fatality within the effect zone for incident outcome case i.
The F-N curve is then generated in the same way as for the general approach. The
simplified societal risk calculation procedure is demonstrated in the example problem
in Section 8.1.
The simplified approach has been used in published risk studies. For example, the
Rijnmond (1982) study estimated fatalities from explosion overpressure by "assuming
that all people indoors and within the 0.3 bar overpressure contour are killed."
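As a minimal sketch of this simplified procedure, the fragment below applies Eq. (4.4.10) to a few incident outcome cases and accumulates the cumulative frequencies for an F-N curve. The function name, case data, and probabilities are illustrative assumptions, not values from this chapter.

```python
def fn_curve(cases):
    """cases: list of (frequency per year, people in effect zone, p_fatality).
    Returns a sorted list of (N, cumulative frequency of >= N fatalities)."""
    # Eq. (4.4.10): N_i = P_i * p_f,i for each incident outcome case i
    outcomes = [(freq, people * pf) for freq, people, pf in cases]
    ns = sorted({n for _, n in outcomes if n > 0})
    # F(N) = sum of frequencies of all cases with N_i >= N
    return [(n, sum(f for f, ni in outcomes if ni >= n)) for n in ns]

# Hypothetical incident outcome cases (not from the text)
cases = [
    (1e-6, 25, 1.0),   # explosion: 25 people in zone, p_f = 1.0
    (1e-5, 10, 0.5),   # toxic cloud, direction A
    (1e-5, 4, 0.5),    # toxic cloud, direction B
]
for n, f in fn_curve(cases):
    print(f"N >= {n:g}: F = {f:.2e}/yr")
```

Plotting these (N, F) pairs on log-log axes gives the F-N curve, as in Figure 4.15.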
4.4.2.3. AGGREGATE RISK
The procedures described in Sections 4.4.2.1 and 4.4.2.2 also apply to the calculation of
aggregate risk as defined by API 752 (API, 1995). The only difference between aggregate risk and a general societal risk calculation is that the population considered is limited
to personnel in a building or facility under evaluation for aggregate risk estimation.
4.4.3. Risk Indices
Calculation of risk indices requires the same input as for individual and societal risks,
but the procedures are different. The results of the calculations of risk indices are presented as single numbers or as tables.
4.4.3.1. AVERAGE RATE OF DEATH
The average rate of death (ROD) is a measure of societal risk and is not relevant to any
specific individual at a particular place. It can be calculated by
Average rate of death = Σ(i=1 to n) f_i N_i    (4.4.11)

where f_i is the frequency of incident outcome case i (yr⁻¹), N_i is the number of fatalities resulting from incident outcome case i, and n is the number of incident outcome cases in the study.
4.4.3.2. EQUIVALENT SOCIAL COST
Okrent (1981) suggests the use of a weighted average rate of death that takes into
account society's perception that multiple-fatality incidents are more serious than a collection of incidents with fewer fatalities. Consequences are raised to a power greater
than 1. This form is known as equivalent social cost.
Equivalent social cost = Σ(i=1 to n) f_i (N_i)^p    (4.4.12)

where p is the risk aversion power factor (p > 1).

If p = 1, equivalent social cost = average rate of death.

For nuclear applications, Okrent suggests a value for p of 1.2. This small value of p imposes a small penalty for multiple-casualty incidents (unless they are very large). For the chemical industry, others have suggested a value for p of 2 (Netherlands Government, 1985).
The difference between the average rate of death and the equivalent social cost can
be illustrated by the following example. Consider an incident that might cause one
fatality every 10 years (Case 1) and another that might cause 100 fatalities once every
1000 years (Case 2):
Case 1: Average rate of death = 10⁻¹ × 1 = 0.1 fatality per year
Case 2: Average rate of death = 10⁻³ × 100 = 0.1 fatality per year

Using p = 2, the equivalent social cost becomes

Case 1: Equivalent social cost = (10⁻¹) × (1)² = 0.1
Case 2: Equivalent social cost = (10⁻³) × (100)² = 10
The second incident now scores much higher.
The units of equivalent social cost are not meaningful. Therefore, the equivalent
social cost is a pure index for comparison of the effects of remedial engineering measures rather than an absolute risk measure. The equivalent social cost index is calculated
for the example problem in Section 8.1.
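The two-case comparison above can be reproduced in a few lines of code; the function name and the case encoding are illustrative, not from the text.

```python
def equivalent_social_cost(cases, p=2.0):
    """Eq. (4.4.12): sum of f_i * N_i**p, with risk aversion power p."""
    return sum(f * n ** p for f, n in cases)

case1 = [(0.1, 1)]     # one fatality every 10 years
case2 = [(1e-3, 100)]  # 100 fatalities once every 1000 years

print(equivalent_social_cost(case1))  # 0.1
print(equivalent_social_cost(case2))  # 10.0
```

With p = 1 both cases score 0.1, matching the average rate of death; with p = 2 the multiple-fatality incident scores 100 times higher.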
4.4.3.3. FATAL ACCIDENT RATE
As defined in Section 4.1, the only difference numerically between the fatal accident rate and the average individual risk is the time period. A factor of 1.14 × 10⁴ (10⁸ exposure hours vs 1 year) is therefore incorporated into Eq. (4.4.6):

FAR = IR_AV (1.14 × 10⁴)    (4.4.13)

where FAR is the fatal accident rate (fatalities/10⁸ exposure hours) and IR_AV is the average individual risk (yr⁻¹) [from Eq. (4.4.6)].
This definition of FAR is for a person who remains at a fixed location where the
individual risk is constant in time. For a person who moves about in an effect zone, the
FAR is calculated by a time-weighted average of the FARs at each point where the
person spends time. Historically, FARs have been used for employee risk assessments.
4.4.3.4. INDIVIDUAL HAZARD INDEX
The IHI represents a peak value of FAR and is estimated by determining the maximum
FAR that a person is exposed to as he moves about in an effect zone. The IHI may be an
appropriate index to calculate for off-site persons at greatest risk (for example, at the
fence line or at the nearest off-site residence).
4.4.3.5. MORTALITY INDEX AND ECONOMIC LOSS
Calculation of these two index measures can be found in other references (Marshall,
1987; Boykin and Kazarian, 1987) and is not discussed here.
4.4.4. General Comments
Calculation of individual or societal risk can be time consuming if a manual approach is
employed for more than a few incident outcome cases. The techniques are straightforward; however, many repetitive steps are involved, and there is a large potential for
arithmetic error. A computer-based approach permits many more incident outcome
cases to be examined than are feasible with manual calculations. In addition, substantial
reduction in effort can be achieved by combining incidents with similar consequences
and by exploiting symmetry.
No special resources are needed for calculation of various risk measures. A computer spreadsheet package would be useful in manipulating the large amount of data
involved in individual and societal risk computation.
4.4.5. Example Risk Calculation Problem
The following sample problem illustrates the risk calculation techniques described in
this chapter. The problem is derived from Hendershot (1989, 1997), and a simple version was published by Theodore, Reynolds, and Taylor (1989). In this example, highly
simplified frequency and consequence data are postulated, so that the actual risk calculation techniques can be highlighted. Chapter 8 contains two case studies which illustrate CPQRA risk calculations in more detail, including incident frequency,
consequence and effect models.
4.4.5.1. BACKGROUND AND GENERAL INFORMATION
A risk assessment is being conducted for a chemical plant. Using various hazard identification techniques it is determined that only two incidents can occur in this unit. Frequency analysis modeling and consequence and effect modeling will not be included in
this example, which is not intended to illustrate these methodologies. Instead, where
frequency, probability, and consequence and effect estimates are needed, highly simplified results are postulated to make the risk calculations extremely simple and easy to
follow.
The following conditions apply to this example calculation:
• All hazards originate at a single point.
• Only two weather conditions occur. The atmospheric stability class and wind
speed are always the same. Half of the time the wind blows from the northeast,
and half of the time it blows from the southwest.
• There are people located around the site. The specific population distribution
will be described later in the example, when the information is needed.
• Incident consequences are simple step functions. The probability of fatality from
a hazardous incident at a particular location is either 0 or 1.
These simple conditions, and the description of the impact zones of incidents as
simple geometric areas, allow easy hand calculation of various risk measures. The techniques used to derive the risk measures from the underlying incident frequency and
consequence information are the same as for a complex CPQRA study using sophisticated models intended to represent the world as accurately as possible. The concepts
are the same; the difference is in the complexity of the models used, the number of incidents evaluated, and the complexity of the calculations.
4.4.5.2. INCIDENT IDENTIFICATION
Potential incidents for analysis are identified by applying appropriate incident identification techniques, including historical information (plant and process specific, as well
as generic industrial experience), checklists, and one or more of the hazard identification methodologies described in Chapter 1 and in the Guidelines for Hazard Evaluation
Procedures (CCPS, 1992). This is perhaps the most critical step in a CPQRA, because
any hazards not identified will not be evaluated, resulting in an underestimate of risk.
The hazard identification and process safety reviews identify only two hazardous incidents which can occur in the facility:
I. An explosion resulting from detonation of an unstable chemical.
II. A release of a flammable, toxic gas resulting from failure of a vessel.
4.4.5.3. INCIDENT OUTCOMES
The identified incidents may have one or more outcomes, depending on the sequence of
events which follows the original incident. For example, a leak of volatile, flammable
liquid from a pipe might catch fire immediately (jet fire), might form a flammable cloud
which could ignite and burn (flash fire) or explode (vapor cloud explosion). The material
also might not ignite at all, resulting in a toxic vapor cloud. Chapter 1 refers to these
potential accident scenarios as incident outcomes. Some incident outcomes are further subdivided into incident outcome cases, differentiated by the weather conditions and wind
direction, if these conditions affect the potential damage resulting from the incident.
The identified incidents for this facility were reviewed to determine all possible
outcomes, using an event tree logic model. Incident I, the explosion, is determined to
have only one possible outcome (the explosion), and the consequences and effects are
unaffected by the weather. Therefore, for Incident I there is only one incident outcome
and one incident outcome case. This can be represented as a very simple (in fact, trivial)
event tree with no branches, as shown in Figure 4.16.
Incident II, the release of flammable, toxic gas, has several possible outcomes (jet
fire, vapor cloud fire, vapor cloud explosion, toxic cloud). For this example, only two
outcomes are assumed to occur. If the gas release ignites there is a vapor cloud explosion. If the vapor cloud does not ignite, the result is a toxic cloud extending downwind
from the release point. Because there are only two possible weather conditions in the
example, three incident outcome cases are derived from Incident II as shown in the
event tree in Figure 4.16.
4.4.5.4. CONSEQUENCE AND IMPACT ANALYSIS
Determining the impact of each incident requires two steps. First, a model estimates a
physical concentration of material or energy at each location surrounding the facility, for example, radiant heat from a fire, overpressure from an explosion, or concentration
of a toxic material in the atmosphere. A second set of models estimates the impact that
this physical concentration of material or energy has on people, the environment, or
property—for example, toxic material dose-response relationships. These models are
described in Chapter 2.
FIGURE 4.16. Event trees for the two incidents in the example risk calculation problem. The event tree for Incident I is trivial, with no branches: I - Explosion, giving one incident outcome and one incident outcome case. The event tree for Incident II (flammable, toxic gas) branches on ignition: IIA - Ignition (Explosion), and IIB - No Ignition (Toxic Cloud), which divides by wind direction into IIB1 - Toxic Cloud to Southwest and IIB2 - Toxic Cloud to Northeast.
For purposes of illustrating risk calculations, consequence and impact models will
not be applied to the facility in the example. Instead, to make calculations easily understood, very simple impact zone estimates for the identified incident outcome cases will
be postulated:
• Incident Outcome Case I (explosion): the explosion is centered at the center point of the facility; all persons within 200 m of the explosion center are killed (probability of fatality = 1.0); all persons beyond this distance are unaffected (probability of fatality = 0).
• Incident Outcome Case IIA (explosion): the explosion is centered at the center point of the facility; all persons within 100 m of the explosion center are killed (probability of fatality = 1.0); all persons beyond this distance are unaffected (probability of fatality = 0).
• Incident Outcome Cases IIB1, IIB2 (toxic gas clouds): all persons in a pie-shaped segment of radius 400 m downwind and 22.5 degrees width are killed (probability of fatality = 1.0); all persons outside this area are unaffected (probability of fatality = 0).
Figure 4.17 illustrates these impact zones.
FIGURE 4.17. Impact zones for example problem incident outcome cases (probability of fatality = 1.0 inside each zone and 0 outside): the 200-m circle for Case I, the 100-m circle for Case IIA, and the downwind pie-shaped segments for Cases IIB1 and IIB2.
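These step-function impact zones can be expressed as simple point-in-region tests. The sketch below is hypothetical, assuming the hazard origin at the coordinate origin and angles measured in degrees counterclockwise from east; the function names are illustrative, not from the text.

```python
import math

def p_fatality_circle(x, y, radius):
    """Probability of fatality for a circular impact zone: 1 inside, 0 outside."""
    return 1.0 if math.hypot(x, y) <= radius else 0.0

def p_fatality_sector(x, y, radius, center_deg, half_width_deg):
    """Pie-shaped downwind segment centered on direction center_deg with
    total angular width 2 * half_width_deg: 1 inside, 0 outside."""
    r = math.hypot(x, y)
    if r == 0:
        return 1.0          # release point itself is inside the zone
    if r > radius:
        return 0.0
    ang = math.degrees(math.atan2(y, x))
    diff = (ang - center_deg + 180) % 360 - 180  # signed angular difference
    return 1.0 if abs(diff) <= half_width_deg else 0.0

# Case I: 200-m circle; Case IIB1: 400-m segment toward the southwest
# (225 degrees), 22.5 degrees total width
print(p_fatality_circle(150, 0, 200))                  # 1.0
print(p_fatality_sector(-200, -200, 400, 225, 11.25))  # 1.0
print(p_fatality_sector(200, 200, 400, 225, 11.25))    # 0.0
```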
4.4.5.5. FREQUENCY ANALYSIS
Many techniques are available for estimating the frequency of incidents (Chapter 3),
including fault tree analysis, event tree analysis, and the use of historical incident data.
For this example, it is assumed that appropriate techniques have been applied with the
following results:
Incident I Frequency = 1 × 10⁻⁶ events per year
Incident II Frequency = 3 × 10⁻⁵ events per year
Incident II Ignition probability = 33%
These estimates, along with the specified weather conditions (wind blowing from the
northeast 50% of the time, and from the southwest 50% of the time) give the frequency estimates for the four incident outcome cases, as shown in the event trees of
Figure 4.18.
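The event-tree arithmetic behind these frequency estimates can be checked with a few lines; the variable names are illustrative. Note that 3 × 10⁻⁵ × 0.33 = 9.9 × 10⁻⁶, which is approximately 1 × 10⁻⁵.

```python
# Event-tree frequency arithmetic for the four incident outcome cases
f_I = 1e-6     # Incident I frequency (per year)
f_II = 3e-5    # Incident II frequency (per year)
p_ign = 0.33   # Incident II ignition probability
p_wind = 0.5   # probability of each wind direction

freq = {
    "I":    f_I,                          # explosion of unstable chemical
    "IIA":  f_II * p_ign,                 # gas release ignites: explosion
    "IIB1": f_II * (1 - p_ign) * p_wind,  # toxic cloud to southwest
    "IIB2": f_II * (1 - p_ign) * p_wind,  # toxic cloud to northeast
}
for case, f in freq.items():
    print(f"{case}: {f:.2e}/yr")
```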
FIGURE 4.18. Event trees, with frequency estimates, for the example problem. Event tree for Incident I: I - Explosion, frequency = 1 × 10⁻⁶ per year. Event tree for Incident II: branches on ignition and wind direction into incident outcome cases IIA, IIB1, and IIB2.