Instrumentation Volume 2
Improving Plant Operation,
Safety and Control
Table of Contents
Sampling Particulate Materials the Right Way...........................................................................................................4
To obtain a representative sample for particle size characterization, adhere to the golden rules of sampling and follow these
best practices
The Direct Integration Method: A Best Practice for Relief Valve Sizing....................................................................12
The approach described here is easier to use, and provides more-accurate results, compared to leading valve-sizing methodologies
Engineering for Plant Safety........................................................................................................................................15
Early process-hazards analyses can lead to potential cost savings in project and plant operations
Managing SIS Process Measurement Risk and Cost.................................................................................................24
With a focus on flowmeters, this article shows how advances in measurement technologies help safety system designers
reduce risk and cost in their safety instrumented systems (SIS) design and lifecycle management
Column Instrumentation Basics..................................................................................................................................32
An understanding of instrumentation is valuable in evaluating and troubleshooting column performance
Control Valve Position Sensors....................................................................................................................................40
Control Valve Performance..........................................................................................................................................41
Common Mistakes When Conducting a HAZOP and How to Avoid Them...............................................................42
An important part of ensuring the success of a HAZOP study is to understand the errors that can cause the team to lose focus
Chemical Process Plants: Plan for Revamps..............................................................................................................47
Follow this guidance to make the most of engineering upgrades that are designed to improve plant operations or boost
throughput capacity
Point-Level Switches for Safety Systems...................................................................................................................53
Industries that manufacture or store potentially hazardous materials need to employ point-level switches to protect people
and the environment from spills
Control Strategies Based On Realtime Particle Size Analysis...................................................................................58
Practical experience illustrates how to achieve better process control
Process Hazards Analysis Methods.............................................................................................................................62
Aging Relief Systems — Are they Working Properly?................................................................................................63
Common problems, cures and tips to make sure your pressure relief valves operate properly when needed
Overpressure Protection: Consider Low Temperature Effects in Design..................................................................69
Understanding the inherent limitations of current over-pressure protection analyses is key to developing a more robust heuristic
Things You Need to Know Before Using an Explosion-Protection Technique..........................................................73
Understanding the different classification methods is necessary to better select the explosion-protection techniques that will be used
Cybersecurity Defense for Industrial Process-Control Systems..............................................................78
Security techniques widely used in information technology (IT) require special considerations to be useful in operational settings.
Here are several that should get closer attention
Plant Functional Safety Requires IT Security.............................................................................................................84
Cybersecurity is critical for plant safety. Principles developed for plant safety can be applied to the security of IT systems
Dilute-phase Pneumatic Conveying: Instrumentation and Conveying Velocity......................................................91
Follow these guidelines to design a well-instrumented and controlled system, and to optimize its conveying velocity
Alarm Management By the Numbers.........................................................................................................................94
Deeper understanding of common alarm-system metrics can improve remedial actions and result in a safer plant
Understand and Cure High Alarm Rates...................................................................................................................100
Alarm rates that exceed an operator’s ability to manage them are common. This article explains the causes for high alarm
rates and how to address them
Wireless Communication in Hazardous Areas.........................................................................................................105
Consider these criteria in deciding where wireless fits in today’s CPI plants and the explosive atmospheres that permeate them
Piping-System Leak Detection and Monitoring for the CPI.....................................................................................110
Eliminating the potential for leaks is an integral part of the design process that takes place at the very onset of facility design
Monitoring Flame Hazards In Chemical Plants........................................................................................................117
The numerous flame sources in CPI facilities necessitate the installation of advanced flame-detection technologies
Integrated Risk-Management Matrices.....................................................................................................................121
An overview of the tools available to reliability professionals for making their organization the best-in-class
Process Safety and Functional Safety in Support of Asset Productivity and Integrity.........................................126
Approaches to plant safety continue to evolve based on lessons learned, as well as new automation standards and technology
Improving the Operability of Process Plants............................................................................................................131
Turndown and rangeability have a big impact on the flexibility and efficiency of chemical process operations
Solids Discharge: Characterizing Powder and Bulk Solids Behavior.....................................................................138
How shear-cell testing provides a basis for predicting flow behavior
Advantages Gained in Automating Industrial Wastewater Treatment Plants........................................................142
Process monitoring and automation can improve efficiencies in wastewater treatment systems. A number of parameters
well worth monitoring, as well as tips for implementation are described
Feature Report
Sampling Particulate
Materials the Right Way
Remi Trottier and
Shrikant Dhodapkar
The Dow Chemical Company
In the chemical process industries
(CPI) it is often necessary to verify
material specification at various
points in the process. In that effort,
it is usually impossible — or at the
very least impractical — to measure
the whole production. Instead, small
samples must be extracted from a
parent population. Such is the case in
particle size characterization of bulk
solids, process streams and slurries.
While truly representative sampling has long been an important goal,
a number of current trends are driving
the incentive for rapid implementation
of top-notch sampling strategies to be
the standard, rather than the exception. These trends include the ever-increasing demand for superior material
quality in the high-technology industries, more-stringent pharmaceutical
regulations and higher environmental
standards, to name a few.
Unfortunately, many sampling
strategies in use today do not take
into account the most modern sampling theories (for more on the history
of sampling strategies, see box, p. 45),
which leads to inaccurate test results
and unrealistic material specifications
that are impossible to verify properly.
The best practices outlined in this
article provide guidelines for collecting representative samples from most
solids handling and processing equipment and then reducing the sample
to the proper size for the analytical
technique used in the measurement.
In addition, an assessment of sampling errors, based on simple statistical theories, illustrates the pitfalls of
sampling methods.
One of the everyday examples of
sampling that all of us can relate to
is when a medical doctor orders blood
to be drawn for routine laboratory
analysis. In this example, we can all
appreciate the two main, necessary
characteristics of the sample:
1. That a relatively small sample is
taken (much smaller than the total
available)
2. That the sample be representative
of the whole (so that the correct diagnosis can be made)
Although both points are extremely
simple concepts, a great deal of diligence is usually necessary to achieve
them. Careless sampling of powders
or slurries often results in a faulty
conclusion, regardless of whether good
analytical techniques are employed. In
that respect, the first item that should
be considered for a particle-characterization study is a sampling protocol
that ensures a representative sample
of the proper size.
Statistics of sampling
The first necessary step for a good sampling program is to define the sample
that is needed and clearly specify how
the sample is taken, including equipment specification. It is important to
keep in mind that in particulate material sampling, the best we can ever
achieve is a random sample where
all particles within the parent population have an equal chance of being
sampled, provided that no systematic bias exists in the sampling process. Since no two samples are identical, even a perfectly extracted (random) sample will always carry a residual error, called the fundamental error (FE), as first postulated by Gy [1]. This is due to the heterogeneity of any particulate
sample that has a distribution of particle sizes. This notion that individual
particles are not identical is referred
to as constitutional heterogeneity
(CH). The higher the upper end of the
distribution, the higher the heterogeneity. The Gy sampling theory can
estimate the variance of this fundamental sampling error due to the CH,
using Equation (1) [2]:
[Equation (1): Gy's formula for the variance of the fundamental error, Var(FE), expressed in the variables defined below]
Where MS is the mass of the sample,
ML is the mass of the parent population from which the sample is taken,
ƒ is a shape factor (0.5 for spheres, 1
for cubes, 0.1 for flakes), ρ is the particle density, cL is the mass fraction of
material in the size class of interest,
d1 is the average particle diameter
in the size class of interest, g is the
granulometric factor [ratio of the diameter corresponding to the 5th percentile of the size distribution to the
diameter corresponding to the 95th
percentile of the size distribution (d05/
d95)], d is the diameter corresponding
to the 95th percentile of the distribution (d95). This allows the calculation
of the fundamental error for any size
class in a distribution. If the mass of
the parent population is much greater
than the sample mass, the term 1/ML
can be dropped from the equation. A
few important highlights from the
above equation:
1. The variance of the fundamental error decreases as the sample size increases. Since the variance is equal to the square of the fundamental sampling error, the fundamental sampling error decreases in proportion to the square root of the sample mass
2. The variance of the fundamental error is a strong function of the coarse end (95th percentile) of the size distribution, as dictated by the d³ term

Example of a sampling problem, with solution
After several customer complaints, an engineer is assigned the responsibility of setting up a sampling protocol for a ground product that frequently does not meet the specification that no more than 5% of the mass, or volume, distribution should be greater than 250 microns (Figure 1). This product is sold in lots consisting of several tons. The specification should be measured at the 99% confidence level. The product has a density of 2.5 g/mL. Assuming that correct sampling techniques were used to obtain a random sample, what is the minimum sample size that needs to be collected and analyzed?

Figure 1. Example of a size distribution with the information necessary to calculate the minimum sample mass (5th percentile = 100 µm; 95th percentile = 250 µm; material specification: < 5% greater than 250 µm)

Solution:
1. Since the mass of the sample is much smaller than the mass of the lot, the equation for the fundamental-error estimation [Equation (1)] can be rearranged to solve for the minimum sample mass [Equation (4)]
2. Measure the size distribution on a volume, or mass, basis to obtain the diameters corresponding to the 5th and the 95th percentiles (Figure 1)
3. The 99% confidence level implies that the value of FE is 0.01. The variance of the fundamental error, Var(FE), is 0.01², or 0.0001. The shape factor (ƒ) can be set at 0.5, assuming that the particles can be approximated by spheres. The particle density (ρ) is 2.5 g/cm³. The fraction of material in the size class of interest (cL) is 0.05 (5% > 250 microns). The average diameter in the size class of interest (d1) can be taken as 275 microns (see Figure 1). The granulometric factor (g), defined as d05/d95, is 100/250 = 0.40 for this distribution. Finally, d, defined as the 95th percentile of the distribution (see Figure 1), is 250 microns. Changing all units to CGS units to obtain the sample mass (MS) in grams yields, from Equation (5), MS = 4.8 g
Please note that not only is a sample of 4.8 g needed, but an analysis technique that can analyze the whole sample must also be utilized.
The above equation can easily be rearranged to provide the minimum sample mass to be used in an analysis. The
sample mass estimate is the minimum
sample size, since additional sources
of error will contribute to the variance
of the total sampling error. It should
be noted that these additional contributors can be minimized through
good sampling practices, and therefore
are controllable to a large extent. Gy
broke down the total sampling errors
into seven basic components as listed
in Table 1.
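To make the rearrangement concrete, here is a minimal Python sketch. Because Equation (1) itself is not reproduced in this text, the sketch assumes the textbook form of Gy's formula, Var(FE) = (1/MS − 1/ML)·f·g·c·d³, with the composition factor approximated as c ≈ ρ(1 − cL)/cL; this assumed form may differ in detail from the article's Equation (1), so its numbers will not necessarily match the worked example in the box.

```python
def min_sample_mass(f, g, rho, c_l, d95_cm, var_fe, m_lot_g=float("inf")):
    """Minimum sample mass (g) from a textbook form of Gy's formula:
    Var(FE) = (1/MS - 1/ML) * f * g * c * d^3, with c ~ rho*(1 - cL)/cL.
    This assumed form stands in for the article's Equation (1).
    All inputs are in CGS units (g, cm)."""
    c = rho * (1.0 - c_l) / c_l          # composition factor, g/cm3
    ihl = f * g * c * d95_cm ** 3        # constitutional heterogeneity term
    return 1.0 / (var_fe / ihl + 1.0 / m_lot_g)

# Inputs patterned on the worked example: f = 0.5, g = 0.40,
# rho = 2.5 g/cm3, cL = 0.05, d95 = 250 microns = 0.025 cm, Var(FE) = 0.01**2
print(min_sample_mass(0.5, 0.40, 2.5, 0.05, 0.025, 1e-4))
```

Note how the two highlights fall out of the code: halving var_fe doubles the required mass, and the d95_cm ** 3 term makes the coarse end of the distribution dominate.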
The mass required to meet a product specification is related to the inherent degree of heterogeneity in the material and the desired level of accuracy and precision. In addition to sampling error, analytical error will also add to the uncertainty of the measurement. With modern particle-characterization instrumentation,
the sampling error will typically become much larger than the expected
analytical error as the top end of the
distribution (95th percentile) exceeds
100 microns. Gy defined each of the
seven error components as an additive
model where the variance of the total
error is as follows:
Var(TE) = Var(FE) + Var(GE) + Var(CE2) + Var(CE3) + Var(DE) + Var(EE) + Var(PE)   (2)
If correct sampling practices are utilized, the terms GE, CE2, CE3, DE,
EE, and PE are minimized, and are
much smaller than the FE term, for
particles sized greater than about
100 microns. This minimization of
the sampling error can only be accomplished through appropriate selection
of sampling equipment for all phases
of the sampling and sub-sampling process. For smaller particle sizes, where
the heterogeneity of the system decreases as the third power of particle
size, sampling typically becomes less
of an issue, and analytical errors take
over. Table 2 outlines the basic steps
for correct sampling.
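Equation (2) invites a simple error budget: if each component's variance can be estimated or bounded, they add directly. A small illustration (the numbers are made up):

```python
# Illustrative error budget per Equation (2): component variances add.
components = {"FE": 1.0e-4, "GE": 1.0e-5, "CE2": 5.0e-6, "CE3": 2.0e-6,
              "DE": 1.0e-6, "EE": 1.0e-6, "PE": 2.0e-6}
var_te = sum(components.values())                 # variance of total error
print(f"Var(TE) = {var_te:.2e}; TE = {var_te ** 0.5:.4f}")
```

With correct sampling practice, the FE entry dominates the budget, which is exactly the stated goal.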
Grab samples should not be used
even if one attempts to mix the bulk
specimen prior to sampling — for example, bulk bags or perhaps a sample
brought to the laboratory. It is simply
not possible to obtain a homogeneous
mix from blending alone, so such a practice cannot properly minimize grouping and segregation errors. Pitard [2] showed
that the variance of the grouping error
can be compared to the variance of the
fundamental error as follows:
[Equation (3): Pitard's relation between Var(GE) and Var(FE) as a function of the number of increments, N]
As a rule of thumb, at least 30 sample
increments (N) are recommended to
minimize GE errors.
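The 1/N scaling behind that rule of thumb is easy to demonstrate. The toy Monte-Carlo below (illustrative only, not from the article) models a segregated lot whose oversize fraction drifts from 2% at the top to 8% at the bottom, and compares one grab against a composite of 30 increments:

```python
import random

random.seed(1)

def increment(position):
    """Oversize fraction found at a given depth (0 = top, 1 = bottom)."""
    return 0.02 + 0.06 * position

def grab():
    """A single grab from one randomly accessible spot."""
    return increment(random.random())

def composite(n=30):
    """Mean of n increments taken at random positions."""
    return sum(grab() for _ in range(n)) / n

trials = 10_000
grab_var = sum((grab() - 0.05) ** 2 for _ in range(trials)) / trials
comp_var = sum((composite() - 0.05) ** 2 for _ in range(trials)) / trials
print(f"variance, single grab:            {grab_var:.2e}")  # ~3e-4
print(f"variance, 30-increment composite: {comp_var:.2e}")  # ~1e-5
```

The composite's variance comes out roughly 30 times smaller, which is why blending alone (which does not change where a grab can reach) is no substitute for taking many increments.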
Correct Sampling
Correct sampling implies following
a few simple rules throughout the
sampling process, as well as using appropriate sampling tools to minimize
the errors identified in the previous
section. Correct sampling practices include the following:
• Taking many samples at regular or
random time intervals (>30 samples), and sub-dividing into smaller
samples for analysis to minimize
grouping and segregation error (GE)
• Using correctly designed sampling tools to minimize delimitation and extraction errors (DE and EE)
• Using common sense and diligence
to minimize sample preparation and
analysis errors (avoid particle settling, agglomeration, dissolution,
and swelling) (PE)
In this section, we will introduce sampling equipment designed to sample
from various solids systems including
static bulk materials, gravity flow systems, mechanical conveying systems,
pneumatic conveying systems, solids-processing unit operations and slurry
systems. The sampling techniques in
different systems are discussed and
recommendations for proper sampling
are provided.
Table 1. Seven basic sampling errors
1. Fundamental Error (FE): Caused by constitutional heterogeneity (CH). Reduce FE by increasing sample size. Note that this is the sample size that not only needs to be sampled, but analyzed in its entirety
2. Grouping and Segregation Error (GE): Incremental samples can be different from each other. Reduce GE by collecting and combining several random sub-samples, taken correctly from the parent lot
3. Long-Range Heterogeneity Fluctuation Error (CE2): Fluctuations in size distribution over time contribute to the heterogeneity. Reduce CE2 by collecting a large number of sub-samples at random or regular intervals to form a composite
4. Periodic Heterogeneity Fluctuation Error (CE3): Periodic fluctuations in size distribution over time contribute to the heterogeneity. Reduce CE3 by collecting a large number of sub-samples at random or regular intervals to form a composite
5. Increment Delimitation Error (DE): Delimitation errors occur when the sampling process does not give an equal probability of selecting all parts of the parent lot. As an example, a grab sample will only sample from accessible parts of the lot, usually the surface. Reduce DE by using properly designed sampling tools and strategies
6. Increment Extraction Error (EE): Since particles are discrete elements of various sizes, they will be forced in or out of the sampling device — even if they are on the sample target boundary. If a particle's center of gravity is within the sampling boundary, it should be part of the sample; otherwise it should not be part of the sample. Reduce EE by using properly designed sampling tools
7. Preparation Error (PE): Sample degradation caused by inadequate preparation, where particles settle, dissolve, aggregate, break or swell during preparation or analysis. Use proper sample handling and dispersion techniques
Sampling process overview
There are usually several stages in
particulate matter sampling, and it
is of paramount importance to maintain the integrity of the sample until
the analysis is carried out. Figure 2
takes us through the stages of a sampling process. Several increments are
taken from the bulk lot using properly
designed sampling equipment as outlined in the next section. The gross
sample may be too large to be sent
to the laboratory, and may need to be
reduced to a more practical weight.
Depending on the measurement technique, and the amount of sample required by the instrument sample delivery system, the laboratory sample
may need to be further sub-divided to
the test sample to be used in its entirety by the instrument. Even at the
laboratory-sample level, which is the
last step before analysis, the common
practice of simply scooping material
out of the container is likely to introduce bias. The overall goal of any
sampling procedure is simple: it is to
obtain a sample with a total sampling
error similar to that expected from the
fundamental sampling error, which is
solely governed by the heterogeneity
of the material — grab sampling at
any level will almost guarantee that
this goal will not be achieved.
Gross sample extraction
Consistent with Gy’s sampling theories, Allen [3] independently proposed
two “Golden Rules” of sampling:
1. Sample a moving stream — sampling of bulk solids at rest should
be avoided.
2. The whole of the stream of powder
should be taken for many small increments in time in preference to
part of the stream being taken for
the whole time.
Applying Gy’s principles and Allen’s
recommendations, extraction of a gross
sample consists of properly extracting
several increments from the parent lot
during processing or handling using
properly designed tools. Each increment
can be defined as the group of particles
extracted from the parent lot during a
single operation cycle of the sampling
device. The final gross sample should
consist of at least 30 such increments.
Static material sampling
Ideally, the sampling should have
been carried out before the material
became a static bulk, which is much
more difficult to correctly sample.
The degree of inhomogeneity will depend on the powder’s history. In the
case of free-flowing material, it is a
safe bet to assume segregation has
taken place during the transfer, and
for non-free flowing material, the degree of inhomogeneity will largely depend on its history.
History of sampling techniques
Sampling became a common, but non-scientific, practice first in the mining industry, then in the pharmaceutical and chemical industries shortly after the industrial revolution. Back in those early days of sampling, although no rigorous theory existed, scientists and engineers used a common-sense approach, based on their intuition and experience, to judge what constituted a good sample. In the mid-19th century, Vezin was the first to introduce the concept of a minimum sample size necessary for obtaining a representative sample, without the benefits of modern sampling theories. He also invented a sampler that bears his name, and is still in use today. It was not until the 1950s that the guessing game in sampling was replaced by a more rigorous discipline, thanks to Gy's [1] development of the statistical theories behind sampling. This offered a structured approach in which all sampling errors are broken down into basic components.

Figure 2. In this sampling process, incremental sampling throughout the sampling and sample-reduction process is practiced to minimize the propagation of sampling errors: increments taken from the bulk or process stream form the gross sample (> kg); sample division yields the lab sample (< kg); a second division yields the test sample (g); and the sample delivery system feeds the measured sample (g to mg). Goal: total error ≈ fundamental error through correct sampling

Figure 3. The sampling thief is one of the simplest devices to extract powder from a static bulk

Table 2. Basic steps for correct sampling
1. Define sample quality: Data quality objective, i.e., the precision and accuracy required for product specification, or quality
2. Define sample size: Gross sample, lab sample, and the actual amount analyzed
3. Define sampling strategy: Equipment, sampling location, sampling frequency, sample reduction
4. Preserve sample integrity: Sample reduction; prevent particle aggregation, attrition, dissolution and swelling
5. Verify that the required data quality can be achieved: Are the equipment and strategy used adequate to meet the data quality objective? Is the sample size analyzed large enough?

The inherent problem with sampling static material is that no equipment
exists that can take a sample where
every particle has an equal chance of
being sampled. There will always be
parts of the bulk that will not be accessible to the sampler.
The workhorse of the bulk sampling
domain remains the thief sampler
(Figure 3), which provides several increments taken at random throughout
the bulk material. This device consists
of a co-axial outer sleeve and an inner
hollow tube with matching grooves to
allow powder flow in the core of the
inner cylinder. In the first step of the
sampling procedure, the inner tube is rotated so that the matching grooves are on opposite sides; then the probe
is inserted in the powder. The second
step consists of twisting the inner
tube to align the two sets of grooves,
thereby allowing powder to flow into
the sampler. Thirdly, the inner tube
is twisted to lock the powder into the
sampler, which is then withdrawn
from the bulk. This procedure is repeated several times to extract several increments to make up the bulk
sample ready for splitting. The shaded
region at the bottom of Figure 3 indicates the region where there is no
chance of sampling, which illustrates
a weakness of this device. Another
source of error to be aware of when using this type of device arises as material is displaced downward by the probe moving through the bulk, thereby causing segregation and preventing an equal probability for all particles to be sampled.
Sampling free-falling streams
The rotary chute sampler, also referred
to as the Vezin sampler, is a multi-purpose device that collects representative
samples from materials (dry powders or
slurries) that are free-falling from pipes,
chutes or hoppers. This sampler is generally a good choice for installation on
loading and unloading equipment, or at
the entrance or exit of material transfer equipment. Various versions of the
Vezin sampler are available in several
sizes from multiple manufacturers. This
device, shown in Figure 4, operates by
one or more cutters revolving on a central shaft, passing through the sample
stream and collecting a fixed percentage
of the total material. A Vezin sampler is
totally enclosed to minimize spillage or
leakage problems. The area between the
sample cutter and the discharge chute
is sealed to prevent possible contamination or sample loss.
As a rule of thumb, incremental extraction errors can be minimized by limiting the cutter speed to 0.6 m/s and by making the cutter opening (the spacing between the sampler's inner walls) at least three times the particle diameter (3d) for coarse material, where d > 3 mm, and at least 10 mm for finer material.

Figure 4. Rotary chute sampler (Intersystems)

Table 3. Questions to consider when selecting a sampler
Material properties:
• Is the material free-flowing?
• Is the material abrasive?
• Is the material friable?
• Does the material have a broad size distribution?
• Is the material dusty?
• What is the largest particle diameter?
• Is the material temperature sensitive?
Process conditions:
• Are the particles dispersed in a gas phase?
• Is the process in a pressurized enclosure?
• Is the process at elevated temperature?
• Is the process wet or dry?
• Is the powder in motion?
• Is the process enclosed?
Sample requirements:
• What sample size is required?
• Are there any sanitary requirements?
• Is automatic sampling required?
• Is a composite sample required?
• Is the sample sensitive to moisture?

Sampling from gravity flow
As shown in Figure 5, gravity flow can be any free-flowing powder or slurry from a conveyor, hopper, launder or unit operation under the influence of gravitational forces. When sampling in such systems, each increment should be obtained by collecting the whole of the stream for a short time. The width of the receiver should be made at least 10 mm or three times the diameter of the largest particles — whichever is larger. The volume of the receiver must be large enough to ensure that the receiver is never full of material. The length of the receiver should be sufficient to ensure that the full depth of the stream is collected. The ladle or receiver should cross the whole stream in one direction at constant velocity. For heavy mass flow, a traversing cutter as a primary sampler, together with a Vezin sampler as a secondary splitter, can usually be applied.

Figure 5. Linear gravity-flow samplers collect samples from free-flowing powders under the influence of gravity (Heath & Sherwood Co.)

Mechanical conveying systems
The conveyor types for mechanical and pneumatic conveying of bulk solids include belt conveyors, screw conveyors, bucket conveyors, vibrating conveyors, and dense- or dilute-phase conveyors. The best position for collecting the samples is where the material falls in a stream from the end of the conveyor. One can then follow the procedure for gravity flow or free-falling streams as noted above. However, if the situation is such that samples have to be taken directly from within the conveying line, several types of sampler have been developed. An example of such samplers, designed to extract samples from belt-conveyor systems, is illustrated in Figure 6. The mid-belt sampler uses a rotating scoop that makes a pass across the moving belt, thereby cutting a clean cross-section of material.

Figure 6. Automatic mid-belt samplers are used with belt conveyors

Figure 7. These illustrations of isokinetic sampling from a pipeline show the sampling velocity (Vs) equal to the process velocity (Vp; left), Vp greater than Vs (middle), and Vp less than Vs (right)

Figure 8. The spinning riffler is comprised of a ring of containers (sample cups) rotating under a powder stream (Intersystems)

Figure 9. The chute riffler splits a sample using a series of alternate chutes

Slurry sampling
The same basic sampling rule, that all particles have an equal chance of being sampled, must also be followed when sampling from slurries. Knowledge of slurry properties and behavior is essential to ensure proper sampling strategies. For instance, sampling a slurry from a point in a tank, or flowing through a pipeline, requires the presence of a homogeneous suspension at the point of sampling, which is dependent on such parameters as particle size and density, fluid density and viscosity, flowrate and pipe diameter [4]. Turbulent flow, which provides mixing, is typically required to keep the slurry well mixed before sampling. Pipelines can be sampled isokinetically using nozzles, provided the slurry is well mixed at the sampling point. Isokinetic sampling (Figure 7) occurs when the average fluid velocity in the sampling tube (Vs) is the same as the surrounding fluid velocity (Vp). No sampling bias is expected during isokinetic sampling. If the process flow velocity is greater than the sampling velocity, particle inertia causes an excess of larger particles to enter the sampling probe, while a process flow velocity smaller than the sampling velocity will cause an excess of larger particles to avoid the probe. Therefore, non-isokinetic sampling will introduce a bias based on the particle size distribution.
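The velocity-matching arithmetic behind isokinetic sampling is straightforward. The sketch below (an illustration, not from the article) sizes a sampling nozzle so that the withdrawal velocity Vs equals the process velocity Vp:

```python
import math

def isokinetic_nozzle_id(pipe_id_m, q_process_m3s, q_sample_m3s):
    """Nozzle inner diameter (m) giving Vs = Vp for the stated flows.
    Vp = 4*Qp/(pi*Dp**2); the nozzle must carry Qs at that same velocity,
    which reduces to Ds = Dp*sqrt(Qs/Qp)."""
    v_p = 4.0 * q_process_m3s / (math.pi * pipe_id_m ** 2)
    return math.sqrt(4.0 * q_sample_m3s / (math.pi * v_p))

# Example: 100-mm pipe carrying 20 L/s, sample draw of 0.2 L/s
print(isokinetic_nozzle_id(0.100, 0.020, 0.0002))  # ~0.010 m (10 mm)
```

In practice, the nozzle opening should still respect the cutter rules given earlier (at least three times the largest particle diameter, and no less than about 10 mm).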
Relevant standards on sampling of particulate materials
ASTM Standards:
ASTM B215-10 Standard Practices for Sampling Metal Powders
ASTM C322-09 Standard Practice for Sampling Ceramic Whiteware Clays
ASTM C50-00(2006) Standard Practice for Sampling, Sample Preparation, Packaging, and Marking of Lime and Limestone Products
ASTM C702/C702M-11 Standard Practice for Reducing Samples of Aggregate to Testing Size
ASTM D140/D140M-09 Standard Practice for Sampling Bituminous Materials
ASTM D1799-03a(2008) Standard Practice for Carbon Black—Sampling Packaged Shipments
ASTM D1900-06(2011) Standard Practice for Carbon Black—Sampling Bulk Shipments
ASTM D197-87(2007) Standard Test Method for Sampling and Fineness Test of Pulverized Coal
ASTM D2013/D2013M-11 Standard Practice for Preparing Coal Samples for Analysis
ASTM D2234/D2234M-10 Standard Practice for Collection of a Gross Sample of Coal
ASTM D2590/D2590M-98(2011)e1 Standard Test Method for Sampling Chrysotile Asbestos
ASTM D345-02(2010) Standard Test Method for Sampling and Testing Calcium Chloride for Roads and Structural Applications
ASTM D346/D346M-11 Standard Practice for Collection and Preparation of Coke Samples for Laboratory Analysis
ASTM D460-91(2005) Standard Test Methods for Sampling and Chemical Analysis of Soaps and Soap Products
ASTM D75/D75M-09 Standard Practice for Sampling Aggregates
ASTM D979/D979M-11 Standard Practice for Sampling Bituminous Paving Mixtures
ASTM E105-10 Standard Practice for Probability Sampling of Materials
ASTM E122-09e1 Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
ASTM E141-10 Standard Practice for Acceptance of Evidence Based on the Results of Probability Sampling
International Standards:
BS 3406: Part 1: 1986, British Standard Methods for Determination of Particle Size Distribution, Part 1: Guide to Powder Sampling, British Standards Institute, London (1986)
ISO/WD 14888, Sample Splitting of Powders for Particle Size Characterisation, International Organization for Standardization, Geneva
ISO 2859, Statistical Sampling, http://www.iso-9000.co.uk/9000qfa9.html, International Organization for Standardization, Geneva (2000)

It is better to sample from a vertical pipe so that particle segregation by gravity can be avoided. In such a situation, the sampler should be located at least ten pipe diameters downstream from any bends or elbows in the pipe. Particle diameter has a strong influence on particle segregation by gravity, since the settling velocity is proportional to the square of the particle diameter. Gravity starts to play an important role at particle diameters greater than roughly 50 microns. The best approach, if possible, is to sample at the discharge, where a cross-stream sampler (Figure 5) may be used as a primary sampler, followed by a Vezin sampler cutter to reduce the sample size. This allows sampling even in the non-ideal case where some segregation may have occurred in the pipe. A large
number of cuts (>30) for both the primary and secondary samplers needs
to be extracted. Not all situations are
alike, and therefore, these samplers
need to be installed and designed
properly to fit the application.
Selection of the proper sampling
equipment may not always be trivial,
and may depend on material properties, type of process, and sample requirements. Table 3 provides a list of
questions to consider when designing
a sampling protocol.
Sample reduction
Powder sampling is typically done
at two levels: a gross sample is taken directly from the process, and then
sub-divided into samples suitable for
the laboratory. The spinning riffler,
as illustrated in Figure 8, has been
widely used for reducing the amount
of powder to be analyzed to a smaller
representative sample. In this commercially available device, a ring of
containers rotates under a powder
flow to be sampled, thereby cutting
the powder flow into several small increments so that each container holds a representative sample. The
spinning riffler is a versatile device
that can handle free-flowing powders,
dusty powders and cohesive powders.
The operating capacity of this device
varies from 25 mL to 40 L. If only
the small capacity spinning riffler is
available, the Vezin sampler can be
used to reduce the gross sample to
the appropriate quantity suitable for
the spinning riffler. The spinning riffler, when properly used, is the most
efficient sample divider available.
Another commonly used device for
sample reduction of free-flowing powders is the chute riffler as shown in Figure 9. It consists of alternating chutes
where half of the material discharges
on one side and the second half on the
other. The total number of chutes represents the number of increments defining the sample. Although the sample
can be processed several times to increase the number of total increments,
it will likely not match the number of
increments performed by the spinning
riffler. As such, the spinning riffler is
the best device for sample reduction
and should be used whenever possible.
Several standards dealing with powder sampling are available from a number of organizations. A comprehensive
list is provided in the box, p. 48.
Summary
Appropriate attention to sampling,
sample size reduction and data analysis is the first step towards obtaining
reliable analytical results from a batch
[5]. To obtain a representative sample,
one must adhere to the golden rules of
sampling and follow the best practices
as outlined in this article.
■
Edited by Rebekkah Marshall
References
1. Gy, Pierre, “Sampling Theory and Sampling
Practice. Heterogeneity, Sampling Correctness, and Statistical Process Control”, 2nd
Ed., CRC Press, Boca Raton, 1993.
2. Pitard, Francis F., “Pierre Gy’s Sampling Theory and Sampling Practice: Heterogeneity,
Sampling Correctness, and Statistical Process
Control”, CRC Press, Boca Raton, 1993.
3. Allen, T., “Particle Size Measurement”, 4th
Ed., Chapman & Hall, London, 1990.
4. Turian, R.M., and Yuan, T.F., Flow of Slurries in Pipelines, AIChE J., Vol. 23, No. 3, pp. 232–243, 1977.
5. Trottier, Remi, Dhodapkar, Shrikant, and
Wood, Steward, Particle Sizing Across the
CPI, Chem. Eng., April 2010, pp. 59–65.
Authors
Remi Trottier is a research
scientist in the Solids Processing Discipline of Engineering & Process Sciences
at The Dow Chemical Co.
(Phone: 979-238-2908; Email:
ratrottier@dow.com). He received his Ph.D. in chemical engineering from Loughborough University of Technology, U.K., and M.S. and B.S.
degrees in Applied Physics
at Laurentian University, Sudbury, Ont. He has
more than 20 years of experience in particle
characterization, aerosol science, air filtration
and solids processing technology. He has authored some 20 papers, and has been an instructor
of the course on Particle Characterization at the
International Powder & Bulk Solids Conference/
Exhibition for the past 15 years.
Shrikant V. Dhodapkar is a
fellow in the Dow Elastomers
Process R&D Group at The
Dow Chemical Co. (Freeport,
TX 77541; Phone: 979-238-7940; Email: sdhodapkar@dow.
com). He received his B.Tech.
in chemical engineering from
I.I.T-Delhi (India) and his
M.S.Ch.E. and Ph.D. from the
University of Pittsburgh. During the past 20 years, he has
published numerous papers on particle technology
and contributed chapters to several handbooks.
He has extensive industrial experience in powder
characterization, fluidization, pneumatic conveying, silo design, gas-solid separation, mixing, coating, computer modeling and the design of solids
processing plants. He is a member of AIChE and
past chair of the Particle Technology Forum.
Engineering Practice
The Direct Integration Method:
A Best Practice for
Relief Valve Sizing
The approach described here is easier to use,
and provides more-accurate results,
compared to leading valve-sizing methodologies
Mark Siegal
Consulting Engineer
Silvan Larson and William Freivald
Valdes Engineering Company
What if someone were to
tell you that there is one
method available for sizing
relief valves that applies
to virtually every situation, including two-phase flow and supercritical
fluids? And what if they told you that
method is more accurate and easier
to use than traditional methods or
formulas? As it turns out, both of
these statements are true. The approach described here — the Direct
Integration Method — involves numerical integration of the isentropic
nozzle equation [1].
From as early as 2005, the “method
of choice” for determining the flow
through a relief valve has been the Direct Integration Method [2]. API 520
has also sanctioned this method due
to its general applicability to any situation where the fluid is homogeneous
[1]. However, because this method is
perceived to be difficult or time consuming, many engineers continue to
opt for older, simplified methods, even
though such methods can produce lessaccurate results. For instance, without
careful analysis, using the traditional
gas-phase equation near a fluid’s
critical point can yield an undersized
valve [3].
Fortunately, thanks to the widespread availability of process simulators and spreadsheet software, numerical integration of the isentropic
nozzle equation is now easier, faster,
and more accurate than other methods for determining the mass flux
through a relief valve. This article discusses the use of process simulators
to simplify the numerical integration
method, and describes the advantages
of numerical integration over other
methods that may be used to calculate
the required relief valve area.
Calculation methods
Isentropic Converging Nozzle
Equation. The calculation of the theoretical mass flux for homogeneous
fluids through a relief valve is generally accepted to be modeled based on
the isentropic converging nozzle. The
isentropic nozzle equation is developed from the Bernoulli equation by
assuming that the flow is adiabatic
and frictionless [4].
G_0 = \rho_n \left[\, 2 \int_{P_n}^{P_0} \frac{dP}{\rho} \,\right]^{1/2} \qquad (1)
The required nozzle area of the relief
valve is calculated using Equation (2).
A = \frac{W}{K_d \, G_0} \qquad (2)
To use Equation (1), the fluid density must be known as a function of pressure at constant entropy over the pressure range encountered in the nozzle.

Figure 1. Today, with the help of spreadsheet programs and simulators, the once-cumbersome Direct Integration Method is easier than ever to use to size relief valves (BS&B Safety Systems)

To solve the integral analytically, an equation of state needs to be
available for the fluid at constant entropy. However, for many fluids, such
an equation is not available for density
as a function of pressure. To overcome
this limitation, various simplifying assumptions were traditionally made to
allow the integral to be solved analytically, rather than by performing a numerical integration.
For instance, for non-flashing liquids, the density is assumed to be constant, and the integral is easily solved.
The traditional vapor-sizing equation
is obtained by assuming the vapor
is an ideal gas with a constant heat
capacity [5]. However, the assumptions required by these methods may
introduce large errors under some
conditions. In contrast, the Direct Integration Method has been shown to
produce more-accurate results.
Direct Integration Method. The
Direct Integration Method uses a numerical method to evaluate the integral in the isentropic nozzle equation
[2]. API 520 proposes the use of the
Trapezoidal Rule, shown below, to calculate the integral:
Nomenclature¹
G0: mass flux, lb/h·in.²
ρ: density, lb/ft³
P0: relieving pressure, psi
Pn: nozzle exit pressure, psi
A: orifice area, in.²
W: relieving mass rate, lb/h
Kd: discharge coefficient, unitless
Pi: pressure at stage i, psi
ρi: density at stage i, lb/ft³
1. Unit conversion may be required, depending on the units selected.
G_n = \rho_n \left[\, \sum_{i=1}^{n} \left( P_{i-1} - P_i \right) \left( \frac{1}{\rho_{i-1}} + \frac{1}{\rho_i} \right) \right]^{1/2} \qquad (3)
The method is performed by using a
process simulator to generate data
points for the fluid density at various
pressures, utilizing an isentropic flash
routine over a pressure range from the
relieving pressure to the exit pressure.
The simulation data are used to determine the theoretical mass flux at each
point.
Using Equation (3), the maximum
mass flux is determined by calculating the mass flux over incrementally
larger pressure ranges, beginning at
the relieving pressure, and observing
where a maximum flux is reached. If
the maximum occurs at the relief-valve
exit pressure (built-up backpressure),
then the flow is not choked. Generally
accurate results can be obtained with
pressure increments as large as 1 psi,
but smaller step sizes can be specified
if desired [2]. Once the mass flux is
determined, the required relief valve
orifice area* can be determined from
Equation (2).
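As a rough illustration of the procedure just described (a minimal sketch, not the authors' spreadsheet tool), the Python script below applies Equation (3) to a table of isentropic pressure-density points. The ideal-gas density function stands in for the flash data that a process simulator would supply:

```python
import math

def direct_integration_flux(pressures, densities):
    """Equation (3): trapezoidal integration of the isentropic nozzle
    equation. Returns the maximum mass flux and the pressure at which it
    occurs (the choked pressure, if above the exit pressure). Inputs are
    in consistent SI units (Pa, kg/m3), pressures descending from P0."""
    best_flux, best_p = 0.0, pressures[0]
    running = 0.0  # running value of 2 * integral(dP / rho)
    for i in range(1, len(pressures)):
        dp = pressures[i - 1] - pressures[i]
        running += dp * (1.0 / densities[i - 1] + 1.0 / densities[i])
        flux = densities[i] * math.sqrt(running)   # G = rho_n * sqrt(...)
        if flux > best_flux:
            best_flux, best_p = flux, pressures[i]
    return best_flux, best_p

# Stand-in for simulator flash data: isentropic ideal gas, rho ~ P**(1/k)
p0, rho0, k = 10.0e5, 8.0, 1.3      # relieving pressure (Pa), density, Cp/Cv
p_exit = 2.0e5                      # built-up backpressure (Pa)
n = 800
ps = [p0 - (p0 - p_exit) * j / n for j in range(n + 1)]
rhos = [rho0 * (p / p0) ** (1.0 / k) for p in ps]

g_max, p_choke = direct_integration_flux(ps, rhos)
w, kd = 5.0, 0.975                  # relieving rate (kg/s); gas Kd guess
area_m2 = w / (kd * g_max)          # Equation (2)
print(f"G_max = {g_max:.0f} kg/(s*m2) at P = {p_choke / 1e5:.2f} bar")
print(f"required orifice area = {area_m2 * 1e6:.1f} mm2")
```

For this ideal-gas stand-in, the maximum flux appears near the classical critical-pressure ratio (about 0.55 times the relieving pressure for k = 1.3), confirming that the loop finds the choke point rather than the exit pressure.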
The value of the discharge coefficient,
Kd, depends on the phase of the fluid
and varies by the manufacturer of the
relief valve. The discharge coefficient
corrects for the difference between the
theoretical flow and the actual flow
through the nozzle. This value is determined empirically for liquid and vapor
and reported by vendors for each make
and model of relief valve. If vendor
data are not available, an initial guess
of 0.975 for gases, or 0.65 for liquids
can be used [1].
For two-phase flow, the liquid-discharge coefficient should be used if
flow in the valve is not choked and the
maximum mass flux will occur at the
relief-valve exit pressure. If the flow is
choked, then the gas-discharge coefficient should be used and the maximum
* While relief valves are designed with a nozzle,
the area at the end of the nozzle is commonly
referred to as the “orifice area”.
mass flux will occur at some pressure
above the relief-valve exit pressure.
This is called the choked pressure [6].
Implementation
It is possible to fully automate the
Direct Integration Method using a
spreadsheet program (such as Microsoft Excel 2010) and a process simulator (such as AspenTech HYSYS 7.2) [7].
Users can automate the process to the
point where all they would need to do is
simply hit a button in the spreadsheet
program and the numerical integration
will be performed on an existing stream
in the simulator using a VBA (Visual
Basic for Applications) program.
First, the spreadsheet is set up to
accept the pressure and density data
for the numerical integration points.
The inlet and outlet pressure points,
pressure step size, and name of relief
stream in the simulator are placed
into specific cells in the spreadsheet,
which are referenced in the VBA code.
The VBA code instructs the simulator
to create a new, ideal expander process block and associated streams in
the simulator. The code then iterates
across the pressure range and modifies the pressure of the expander product stream and automatically exports
the pressure and density data to the
Excel spreadsheet.
For each data point in the spreadsheet, the summand, cumulative sum,
and mass flux are calculated using
Equation (3) with typical spreadsheet
formulas. When a maximum mass flux
is reached, the spreadsheet uses this
maximum flux value to calculate an
orifice size, given the relieving mass
rate and coefficients. Alternatively,
the data can be collected using the
“databook” feature in the simulator
and copied into the spreadsheet using
a simple copy-and-paste operation.
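The driver loop itself is compact. In the sketch below (Python rather than VBA), flash_density is a hypothetical stand-in for the simulator's isentropic flash call, and the default step of 6,895 Pa mirrors the 1-psi increment suggested above; the loop stops as soon as the running mass flux passes its maximum:

```python
import math

def size_orifice(flash_density, p_relief, p_exit, w, kd, dp=6895.0):
    """Step the pressure down from the relieving pressure in ~1-psi
    increments, query the (simulator-supplied) isentropic density at
    each step, and stop once the mass flux passes its maximum.
    Returns the required orifice area (m2) per Equation (2)."""
    p_prev, rho_prev = p_relief, flash_density(p_relief)
    running, g_max = 0.0, 0.0
    p = p_relief
    while p - dp >= p_exit:
        p -= dp
        rho = flash_density(p)
        running += (p_prev - p) * (1.0 / rho_prev + 1.0 / rho)
        g = rho * math.sqrt(running)
        if g < g_max:              # past the choke point: maximum found
            break
        g_max, p_prev, rho_prev = g, p, rho
    return w / (kd * g_max)

# Same hypothetical ideal-gas flash as before
area = size_orifice(lambda p: 8.0 * (p / 10.0e5) ** (1.0 / 1.3),
                    p_relief=10.0e5, p_exit=2.0e5, w=5.0, kd=0.975)
print(f"required orifice area = {area * 1e6:.1f} mm2")
```

Swapping the lambda for a call into a real simulator (for example, through its COM or Python interface) is the only change needed to reproduce the spreadsheet workflow.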
Two-phase relief scenarios
The existing single-phase vapor and
non-flashing liquid methods are relatively easy to calculate, and the resulting predictions are fairly accurate at
conditions well away from the critical
pressure. However, two-phase models
are more difficult to implement. Existing two-phase flow models approximate the pressure-density relationship of the fluid in order to calculate
the integral in Equation (3).
One of the simplest models, the
Omega Method, assumes a linear
pressure-density relationship, with
the omega parameter (ω) representing the slope of the pressure-density
curve. An analytical solution to the
isentropic nozzle equation was developed using the omega parameter to
solve the integral [8].
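For comparison, the omega parameter itself can be estimated from a single extra flash. The sketch below uses a commonly cited two-point form attributed to Leung: flash the fluid isentropically from the relieving pressure P0 to 0.9·P0 and set ω = 9(v9/v0 − 1), where v is specific volume. Treat this form as an assumption to be checked against Ref. 8.

```python
def omega_two_point(v0, v9):
    """Two-point omega estimate (assumed form; see Ref. 8): v0 is the
    specific volume at the relieving pressure P0, and v9 the specific
    volume after an isentropic flash to 0.9*P0. The factor 9 is
    1 / (P0/P9 - 1) with P9 = 0.9*P0."""
    return 9.0 * (v9 / v0 - 1.0)

# Hypothetical flash results, v in m3/kg
print(omega_two_point(v0=0.0150, v9=0.0168))  # ~1.08
```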
The TPHEM Method uses three
pressure-density points to define coefficients for an empirical equation
of state “model” [9]. The empirical
equation is then used to evaluate the
integral numerically. Pressure-density data for these models are often
provided by a process simulator. If
a simulator is available, then it
is much simpler to use the Direct
Integration Method.
The Direct Integration Method is
fundamentally different from the
other methods described here because
it does not generate an explicit equation-of-state model to relate pressure
and density. Instead, pressure and
density data are generated using the
full thermodynamic models available
in the selected process simulator, and
these data are then used to solve the
integral numerically. Since there is
no reliance on a curve-fit pressure-density model, the Direct Integration
Method is more exact and reliable,
assuming the simulator’s thermodynamic model is accurate. Specifically,
there is no chance for inaccuracies
associated with the fluid equation of
state “model” propagating through the
rest of the calculations, resulting in inaccurate mass-flux estimations and, ultimately, an inappropriate relief-valve area [8, 9, 10].
Note that the Direct Integration
Method assumes that the two-phase
fluid is homogeneous, and that the
fluid is in mechanical and thermodynamic phase equilibrium. The homogeneous assumption is valid for most
two-phase reliefs due to high velocity
in the nozzle, which promotes mixing
[2]. The mechanical equilibrium assumption is valid for flashing flows
[2]. The thermodynamic equilibrium
assumption is valid for nozzles with
a length longer than 10 cm [4]. Most
standard relief valves have a nozzle
that is slightly longer than this [11].
Pros and cons
Advantages of this Method. The Direct Integration Method is not bound
by the same constraints as many
other models or methods. Using this
approach, the same method can be
used whether the flow is choked or not
choked, flashing or not flashing, single
or two-phase, close or far from the
critical point, subcooled or supercritical. The only assumptions required
for the Direct Integration Method are
that flow through the relief valve is
isentropic, homogeneous, and in thermodynamic and mechanical equilibrium, although it is possible to adjust
the method to account for mechanical
non-equilibrium or slip [6].
Although most other methods give
unsatisfactory results near the thermodynamic critical point, the Direct Integration Method continues to function
properly [12]. Additionally, many other
concerns that come up when using
relief-valve model equations, such as
determining the heat capacity ratio or
isentropic expansion coefficients, are
no longer relevant since they are inherent to the simulator itself [3].
Downsides to this Method. The Direct Integration Method can produce
overly conservative results in a couple of circumstances, which can lead to under-prediction of the mass flux and selection of an oversized valve. This appears to be an issue only when the fluid is in two-phase frozen flow (no flashing), or the relief valve has a short nozzle and there is flashing flow [2].
This potential limitation can be compensated for in both situations by applying a slip factor. However, at this time, there is insufficient literature available to provide accurate guidance on the value of a slip factor. The accuracy of the calculation is also limited by the accuracy of the physical-property data in the simulator.

FIGURE 2. The Direct Integration Method is not only easy to use, but provides more accurate results when sizing pressure relief valves, since this approach does not rely on a potentially sensitive equation of state model (GE/Consolidated and Allied Valve)

References
1. American Petroleum Inst., “Sizing, Selection, and Installation of Pressure-Relieving Devices in Refineries,” ANSI/API RP 520, 8th Ed., Part 1: Sizing and Selection, Washington, D.C., Dec. 2008.
2. Darby, R., Size safety-relief valves for any conditions, Chem. Eng., pp. 42–50, Sept. 2005.
3. Kim, J.S., H.J. Dunsheath and N.R. Singh, Proper relief-valve sizing requires equation mastery, Hydrocarbon Proc., pp. 77–80, Dec. 2011.
4. Huff, J., Flow through emergency relief devices and reaction forces, J. Loss Prev. Process Ind., Vol. 3, pp. 43–49, 1990.
5. Bird, R.B., and others, “Transport Phenomena,” p. 481, John Wiley, New York, 1960.
6. Darby, R., On two-phase frozen and flashing flows in safety relief valves: Recommended calculation method and the proper use of the discharge coefficient, J. Loss Prev. Process Ind., Vol. 17, pp. 255–259, 2004.
7. AspenTech, Aspen HYSYS Customization Guide, Version 7.2, July 2010.
8. Leung, J.C., The Omega Method for Discharge Rate Evaluation, in “International Symposium on Runaway Reactions and Pressure Relief Design,” G.A. Melhem and H.G. Fisher, Eds., pp. 367–393, AIChE, New York, N.Y., 1995.
9. Center for Chemical Process Safety, “Guidelines for Pressure Relief and Effluent Handling Systems,” AIChE, New York, N.Y., 1998.
10. Diener, R., and J. Schmidt, Sizing of throttling device for gas/liquid two-phase flow, Part 1: Safety valves, Process Safety Prog., Vol. 23, No. 4, pp. 335–344, 2004.
11. Fisher, H.G., and others, “Emergency Relief System Design Using DIERS Technology — The Design Institute for Emergency Relief Systems (DIERS) Project Manual,” p. 91, Wiley-AIChE, 1992.
12. Schmidt, J., and S. Egan, Case studies of sizing pressure relief valves for two-phase flow, Chem. Eng. Technol., Vol. 32, No. 2, pp. 263–272, 2009.

Closing remarks
Using a spreadsheet to import data
from a simulator and to calculate the
summation over a range of pressures
is extremely easy and straightforward. One simply needs to simulate
the relieving stream and perform
a flash operation at each pressure
and capture the required data. Not
only is the Numerical Integration
Method much simpler than the alternatives for two-phase flow, but it
is also more accurate, since it does
not rely on a potentially sensitive
equation-of-state model. There is no
need for a model because physical
property data are generated for each
data point directly from simulation.
In addition, the Numerical Integration Method can be used for single-phase flow and choked or not-choked
conditions. This versatility and ease
of calculation makes Numerical Integration the obvious choice for any
relief valve calculation where physical property data are available in a
process simulator.
■
Edited by Suzanne Shelley
Authors
Mark Siegal (Email: msiegal2@gmail.com) was, until recently, a process engineer at
Valdes Engineering Company
where he was responsible for
process design, process modeling, and emergency relief
system design. He holds a
B.S.Ch.E. from the University
of Illinois at Urbana-Champaign.
Silvan Larson is a principal
process engineer at Valdes
Engineering Company (100
W 22nd St., Suite 185, Lombard, IL 60148; Phone: 630-792-1886; Email: slarson@
valdeseng.com), where he is
responsible for process design
and emergency-relief-system
design. He has more than 30
years of experience in manufacturing and process design
engineering in the chemicals and petroleum refining industries. He holds a B.S.Ch.E. from the University of Wisconsin-Madison and is a registered professional engineer in Illinois.
William A. Freivald is the
manager of process engineering at Valdes Engineering
Company (Phone: 630-7921886; Email: wfreivald@
valdeseng.com). He has more
than 17 years of international
process design experience in
specialty chemicals, gas processing and refining. He holds
a B.S.Ch.E. from Northwestern University and is a registered professional engineer in Illinois.
Feature Report
Engineering for Plant Safety
Early process-hazards analyses can lead to potential cost savings in
project and plant operations
Sebastiano
Giardinella
and Alberto
Baumeister
Ecotek group of
companies
Mayra
Marchetti
Consultant
In Brief
CPI project lifecycle
Process hazards identification
When to use a given method
Safe-design options
Addressing hazards early
Final remarks
The chemical process industries (CPI)
handle a wide variety of materials,
many of which are hazardous by nature (for example, flammable, toxic
or reactive), or are processed at hazardous
conditions (such as high pressures or temperatures). The risks associated with CPI
facilities not only extend to the plant personnel and assets, but can potentially affect the
surrounding population and environment —
sometimes with consequences having regional or international scale, as in the case of
toxic vapor or liquid releases.
It is for this reason that process safety is recognized as a key element throughout the entire life of the plant, and several industry and professional associations and government authorities have issued norms, standards and regulations with regard to this subject.
Process safety, as defined by the Center
for Chemical Process Safety (CCPS), is “a
discipline that focuses on the prevention and
mitigation of fires, explosions and accidental chemical releases at process facilities.
Excludes classic worker health and safety
issues involving working surfaces, ladders,
protective equipment and so on.” [1] Process
safety involves the entire plant lifecycle: from
visualization and concept, through basic
and detailed engineering design, construction, commissioning, startup, operations, re-
vamps and decommissioning.
In each of the plant life phases, different
choices are made by engineers that have a
direct impact on the overall risks in the facility; however, the highest opportunities for
cost-effective risk reduction are present in
the earlier phases of the project. In contrast,
the cost of implementing changes in the later
stages of the project increases dramatically.
Hence, it is important for the design team to
identify risks, and implement effective design
solutions as early as possible.
This article covers some of the typical decisions that the project design team has to
make over the course of a project, with examples of how the incorporation of process
safety throughout the entire design process
can significantly reduce the risk introduced
by a new CPI facility, while also avoiding potential cost-overruns, or unacceptable risk
scenarios at later stages.
CPI project lifecycle
A project for a new chemical process facility
usually involves different phases, which are
outlined here:
A screening or visualization phase. In this
phase, the business need for the plant is
assessed. Typical choices at this stage involve defining plant throughput, processing
technology, main blocks and plant location
(high-level), with the goal of developing a high-level project profile, and a
preliminary business case based on
“ball-park” estimates, benchmarks
and typical performance ranges, in
order to identify project prospects.
A conceptual engineering phase.
In this phase, the design team further develops the concept of the
plant, leading to a more-defined
project description, an improved
capital-cost estimate, and a more-developed business model. At this
stage, the process scheme is defined, along with the characteristics
of the major pieces of equipment and
their location on the layout (which
would ideally be set over a selected
terrain). The needs for raw materials,
intermediate and final product inventories, as well as utility requirements
are also established.
A basic engineering, or front end
engineering design (FEED) phase.
This sets the basis for the future
engineering, procurement and construction (EPC) phase, by generating a scope of work that further
develops the process engineering,
and includes the early mechanical,
electrical, instrumentation and civil/
structural documents and drawings.
This phase also serves to generate a
budget for the construction.
An EPC phase. The EPC phase also
includes the detailed engineering for
the development of the “for construction” engineering deliverables,
the procurement of equipment and
bulk materials, the execution of the construction work, and the pre-commissioning, commissioning and startup of the facilities.
Table 1 shows typical engineering
deliverables, along with their degree
of completion, for each phase of
project development.
After the plant construction is finished, the facility enters the operations phase. At the end of its life, the
plant is decommissioned.
It is a generally accepted fact in project management that decisions made earlier in the project lifecycle have the greatest impact on the total plant life cost; in contrast, the cost of implementing changes in the later stages of the project increases dramatically, as can be seen in Figure 1. The same holds true for overall plant risk, as the impact of decisions on overall facility risk is greatest in the earliest stages of the project.

Figure 1. The relative influence of decisions on total life cost, and cost of implementing changes throughout the project lifecycle

Risks and hazards
A risk can be defined by a hazard, its likelihood (or probability) of occurrence, and the magnitude of its consequence (or impact).
A hazard, as defined by the Center
for Chemical Process Safety (CCPS),
is “an inherent chemical or physical
characteristic that has the potential
for causing damage to people, property or the environment” [2].
Process hazards can be classified
in terms of the following:
1. Their dependence on design
choices:
• Intrinsic — not dependent on
design decisions (that is, always
associated with the operation or
process). For instance, hazards associated with the chemistry of the materials being handled (flammability, toxicity, reactivity and so on); these properties cannot be separated from the chemicals
TABLE 1. Typical Engineering Deliverables and Status per Project Phase
Deliverable | V | CE | BE | DE
Project scope, design basis and criteria | S | P | C | C
Process block diagrams | S | C | C | C
Soil studies, topography, site preparation |  | P | C | C
Construction bid packages |  |  | P/C | C
Plot plan | S | P/C | C | C
Process and utility flow diagrams (PFDs / UFDs) | S/P | P/C | C | C
P&IDs | S | P/C | C | C
Material & energy, utility balances | S | P/C | C | C
Equipment list | S/P | P/C | C | C
Single line diagrams | S/P | P/C | C | C
Data sheets, specifications, requisitions | S | P/C | C | C
Mechanical equipment design drawings and documents | S | P/C | C | C
Piping design drawings and documents |  |  | S/P | C
Electrical design drawings and documents |  |  | S/P | C
Automation and control drawings and documents |  |  | S/P | C
Civil / structural / architectural design drawings and documents |  |  | S/P | C
Cost estimate | C5 | C4 | C3 | C2/C1
Key: V = visualization; CE = conceptual engineering; BE = basic engineering; DE = detailed engineering; S = started; P = preliminary; C = completed; C5, C4, ..., C1 = Class 5, Class 4, ..., Class 1 cost estimate (AACE)
Figure 2. Typical hazards analyses that are used throughout a CPI project lifecycle: visualization (expert judgement, high-level risk identification); conceptual and basic engineering (HAZID, what-if, consequence analysis); detail engineering (HAZOP, LOPA, QRA); construction, precommissioning, commissioning and startup (HAZOP, constructability review, inspections, materials and equipment tests, FAT & SAT, hydrostatic tests); operations (preventive and corrective maintenance checks, periodic instrument and relief-valve calibration checks, periodic hazards analysis); decommissioning (HAZID)
• Extrinsic — dependent on design decisions. As an example: hazards associated with heating flammable materials with direct burners can be avoided by using indirect heating
2. Their source:
• Process chemistry — associated with the chemical nature of the materials (for example, flammability, toxicity, reactivity and so on)
• Process variables — associated with the operating conditions (pressure, temperature), and material inventories. As general rules:
❍ higher pressures increase the impact of potential releases, whereas vacuum pressures increase the probability of air entering the system
❍ higher temperatures increase the energy of the system (and hazards, especially when near the flashpoint or self-ignition temperature), whereas very low temperatures could pose the risks of freezing, formation of hydrates, or material embrittlement
❍ higher material inventories increase the impact of potential releases, whereas lower material inventories reduce response times in abnormal operating conditions
• Equipment failures — associated with damage to plant equipment
• Utility failures — associated with failures in utilities supplied to the facility, such as electricity, cooling water, compressed air, steam, fuel or others
• Human activity — associated with activities by humans over the facility (for example, operator error, tampering with facilities, security threats and so on)
Figure 3. Typical design decisions affecting cost and risk throughout a CPI project lifecycle: visualization (define plant capacity, select technology, define process blocks, decide plant location); conceptual engineering (define process scheme, define equipment and buildings location on the layout, define raw materials, products and intermediate product inventories); basic engineering (select codes and standards for design, define basis of design, define process and controls, define design conditions, define electrical area classification, select materials, design/specify equipment, design buildings, define control and emergency systems, design preliminary relief system); detail engineering (analyze plant hazards and operability, identify layers of protection, assess risk, identify additional safeguard needs, develop construction drawings, verify plant hazards and risks, finalize safeguards design, define commissioning and startup procedures); construction, precommissioning, commissioning and startup (conduct constructability review, conduct inspections, test piping and materials, perform factory acceptance and site acceptance tests (FAT & SAT), calibrate instruments and relief valves, perform hydrostatic tests, train operations personnel); operations (perform preventive and corrective maintenance, periodically check instrument and relief-valve calibration, train new operations personnel, follow work procedures, periodically assess hazards, repeat previous activities for revamps/expansions); decommissioning (assess hazards, follow work procedures, document and signal abandoned facilities, for example, underground piping, ducting and so on)
• Environmental — associated with environmental conditions (for example, earthquakes, hurricanes, freezing, sandstorms and so on)
The likelihood of a risk can be expressed in terms of an expected frequency or probability of occurrence. This likelihood can be either relative (low, medium, high), or quantitative (for instance, 1 in 10,000 years). Quantitative values of the likelihood of different categories of risk, or equipment failures, as well as risk tolerability criteria, can be obtained from literature sources, such as Offshore and Onshore Reliability Data (OREDA), the American Institute of Chemical Engineers (AIChE), the Center for Chemical Process Safety (CCPS), the American Petroleum Institute (API), the U.K. Health and Safety Executive (HSE), the Netherlands Committee for the Prevention of Disasters by Dangerous Materials (CPR), or local government agencies, and they can be especially valuable when performing quantitative or semi-quantitative studies.
TABLE 2. Examples of Changes in Design as Result of Process Hazards Analyses in Different Project Phases

Impact on process definition — Industrial solvents manufacturing facility:
Conceptual engineering: 1. Preliminary process design and equipment characteristics were defined based on process simulations and best engineering practices.
Basic engineering: 2. Tower diameter, reboiler, condenser and pump capacities changed; spare equipment, alternate lines and valves added following what-if analysis.
Detailed engineering: 3. Line routings changed after constructability review, adding pressure drop, which altered pumps and control valves.

Impact on plant layout/area — High-pressure gas plant:
Conceptual engineering: 1. Preliminary plot plan was arranged based on available terrain and recommended equipment spacing. 2. After consequence analysis, plant area was increased by 50% and equipment and buildings were relocated to prevent impact areas from reaching occupied buildings and public spaces.
Basic engineering: 3. Relief systems design required further modifications to plot plan, and an additional 10% of space for flare exclusion area. 4. After QRA, proper safeguards were selected in order to reduce risk contours to tolerable levels in occupied buildings and public spaces, hence reducing space requirement by 25% versus that required by consequence analysis.
Detailed engineering: 5. Location of some lines and equipment was slightly changed as result of constructability review, to allow early operations in parallel with construction.

Impact on automation and controls — Crude-oil central processing facilities:
Conceptual engineering: 1. Only summary description of major control system items developed in conceptual engineering.
Basic engineering: 2. Control system designed according to P&IDs. 3. Approximately 30% more instruments and control loops added as result of HAZOP. 4. The overall system was increased from SIL-1 to SIL-2 after LOPA, as result of one section of the plant handling light ends.
Detailed engineering: 5. Some additional modifications were required after reception of vendor information.
The consequence of a risk can be
expressed in terms of its impact on
several recipients, such as assets,
personnel, society and environment.
The combination of likelihood and
consequence defines the risk. The
risk is then analyzed versus tolerability criteria, either qualitatively (for example, in a risk matrix), or quantitatively (for example, in risk contours).
Company management and the design team may then select measures
to eliminate or reduce individual risks,
if they are not in the tolerable range.
Process hazards identification
An experienced engineering design
team, with proper design basis documentation, and working under approved industry standards and best
engineering practices, is the first
factor in ensuring that plant hazards
can be avoided or reduced as early
as possible in the design.
Aside from the experience of the
team, it is generally accepted that
different methodical approaches can
be applied in a timely manner to the
engineering design process, in order
to detect possible hazards that were
not addressed by the design team.
These structured reviews are called
process hazards analyses (PHAs),
and are often conducted or moderated by a specialist, with participation of the design team, owner’s employees or experienced operators.
Several methodologies exist for
conducting a PHA, each suitable for
specific purposes, processes, and
for certain phases of project development and plant lifecycle (Figure 2).
Below is a brief description of some of
the most used PHAs in the CPI.
Consequence analysis. This is a method to quantitatively assess the consequences of hazardous material releases. Release rates are calculated for the worst-case and alternative scenarios, toxic endpoints are defined, and release duration is determined.
Hazard identification analysis (HAZID). HAZID is a preliminary study that is performed in early project stages, when the hazardous materials, process information, flow diagrams and plant location are known. Its results are generally used later on to perform other hazard studies and to design the preliminary piping and instrumentation diagrams (P&IDs).
What-if. This is a brainstorming method that uses questions starting with "What if...," such as "What if the pump stops running?" or "What if the operator opens or closes a certain valve?" It has to be conducted by experienced staff, who are able to foresee possible failures and identify design alternatives to avoid them.
Hazard and operability study
(HAZOP). This technique has been
a standard since the 1960s in the
chemical, petroleum and gas industries. It is based on the assumption
that there will be no hazard if the
plant is operated within the design
parameters, and analyzes deviations
of the design variables that might
lead to undesirable consequences
for people, equipment, environment,
plant operations or company image.
If a deviation is plausible, its consequences and probability of occurrence are then studied by the HAZOP
team. Usually an external company
is hired to interact with the operator
company and the engineering company to perform this study. There are
at least two methods using matrices
to evaluate the risk (R): one evaluates consequence level (C) times
frequency (F) of occurrence; and the other incorporates exposure (E) as a time value and probability (P) ranging from practically impossible to almost sure to happen. In this method, the risk is found by Equation (1):

R = E × P × C          (1)
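As an illustration of the two matrix approaches, the sketch below evaluates both R = C × F and Equation (1); the rating scales used in the example are assumed placeholders, not values from the article.

```python
# Minimal sketch of the two matrix-based risk evaluations mentioned above:
# R = C x F, and Equation (1), R = E x P x C. Rating scales are illustrative.

def risk_cf(consequence: int, frequency: int) -> int:
    """Consequence level (C) times frequency (F) of occurrence."""
    return consequence * frequency

def risk_epc(exposure: float, probability: float, consequence: float) -> float:
    """Equation (1): R = E x P x C, with exposure (E) as a time value and
    probability (P) from 'practically impossible' to 'almost sure'."""
    return exposure * probability * consequence

# Example: C = 4 and F = 2 on assumed 1-5 scales
print("R (C x F):", risk_cf(4, 2))
# Example: exposed 6 h/d, P = 0.5, C = 15 on assumed rating scales
print("R (E x P x C):", risk_epc(6, 0.5, 15))
```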
Layer-of-protection analysis (LOPA). This method analyzes the probability of failure of independent protection layers (IPLs) in the event of a scenario previously studied in a hazard evaluation, such as a HAZOP. It is used when a plant uses instrumentation independent from operation — safety instrumented systems (SIS) — to assure a certain safety integrity level (SIL). The study uses a fault tree to study the probability of failure on demand (PFD) and assigns a required SIL to a specific instrumentation node. For example, in petroleum refineries, most companies will maintain a SIL equal to or less than 2 (average probability of failure on demand ≥10−3 to <10−2), whereas a nuclear plant will tolerate a SIL 4 (average probability of failure on demand ≥10−5 to <10−4).
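A minimal sketch of the LOPA arithmetic implied above: the initiating-event frequency is reduced by the PFD of each IPL, and the remaining gap to a tolerable frequency sets the required PFD (and hence SIL) of the safety instrumented function. All frequencies and PFDs below are illustrative assumptions, not values from the article.

```python
# Minimal LOPA sketch: mitigated frequency = initiating frequency x PFDs
# of the independent protection layers (IPLs); the residual gap to the
# tolerable frequency sets the SIF's required PFD and SIL.

import math

f_initiating = 0.1                 # initiating events per year (assumed)
ipl_pfds = [0.1, 0.01]             # e.g., relief valve, operator response (assumed)
f_tolerable = 1e-5                 # tolerable frequency, events per year (assumed)

f_mitigated = f_initiating
for pfd in ipl_pfds:
    f_mitigated *= pfd             # each IPL reduces the frequency

required_pfd = f_tolerable / f_mitigated      # PFD the SIF must provide
# SIL n corresponds to PFDavg in [10^-(n+1), 10^-n)
sil = min(4, math.floor(-math.log10(required_pfd)))
print(f"mitigated = {f_mitigated:.1e}/yr, SIF PFD <= {required_pfd:.1e}, SIL {sil}")
```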
Fault-tree analyses. Fault-tree analysis is a deductive technique that
uses Boolean logic symbols (that is,
AND or OR gates) to break down
the causes of a top event into basic
equipment failures or human errors.
The immediate causes of the top
event are called “fault causes.” The
resulting fault-tree model displays the
logical relationship between the basic
events and the selected top event.
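For example, a small fault tree can be evaluated with the AND/OR gate algebra described above, assuming independent basic events; the tree structure and probabilities below are hypothetical.

```python
# Minimal sketch of evaluating a fault tree with Boolean gates. Basic-event
# probabilities and the tree itself are illustrative, not from the article.

def OR(*p):   # the output event occurs if any input occurs
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

def AND(*p):  # the output event occurs only if all inputs occur
    out = 1.0
    for x in p:
        out *= x
    return out

p_pump_trip, p_spare_fails_to_start = 0.1, 0.05
p_power_loss = 0.02
# Top event "loss of flow": (pump trips AND spare fails to start) OR power loss
print(OR(AND(p_pump_trip, p_spare_fails_to_start), p_power_loss))
```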
Quantitative risk assessment
(QRA). QRA is the systematic development of numerical estimates of
the expected frequency and consequence of potential accidents based
on engineering evaluation and mathematical techniques. The numerical
estimates can vary from simple values of probability or frequency of an
event occurring based on relevant
historical data of the industry or
other available data, to very detailed
frequency modeling techniques [4].
The events studied are the release
of a hazardous or toxic material, explosions or boiling liquid expanding vapor explosions (BLEVE).
of this study are usually shown on
top of the plot plan.
Failure mode and effects analysis
(FMEA). This method evaluates the
ways in which equipment fails and the
system’s response to the failure. The
focus of the FMEA is on single equipment failures and system failures.
When to use a given method
Some studies have more impact
in some phases than in others. For
example, if a consequence analysis is not performed in a conceptual or pre-FEED phase, important plot-plan considerations can be missed, such as the need to acquire more land to avoid effects on public spaces, or the fact that the location might sit at a different elevation than the surrounding public places impacted by a flare plume.
Some other studies, like HAZOP, cannot be developed without a control philosophy or P&IDs, and are performed at the end of the FEED or detailed engineering (for best results, at the end of both) to define and validate pressure-safety-valve (PSV) locations and other process controls and instrument safety requirements. QRA or LOPA (or both) are done after HAZOP to validate siting and define safety-instrumented-system SIL levels, and finally meet the level required by the plant.
Figure 2 shows the typical CPI
project phases, with a general indication of when it is recommended
to conduct each study; however,
this may vary depending on the
specific industry, corporate practices, project scope and execution
strategy. AIChE’s CCPS [2] has an
Applicable PHA technique table that
indicates which study to perform in
each project phase, which also includes research and development
(R&D), pilot plant operations, and
other phases not covered in the
present article.
Table 2 includes some real-life examples of how the results of some of
these studies can impact the development of the plant design at different project phases.
Out of the previously mentioned
studies, a properly timed HAZOP,
at the end of the basic engineering
phase, is key to identifying safety
and operability issues that have been
overlooked by the engineering design team, especially when involving
an experienced facilitator and plant
operators in the study, given that
they have a fresh, outsiders’ view
of the project, and they can provide
input on daily operating experience.
Also, the deviations identified in the
HAZOP can serve to detect the need
for additional safeguards that were
TABLE 3. Additional Costs of Changes Associated with HAZOP Recommendations during EPC Phase
Sample project number | Project description | Estimated cost of changes associated with HAZOP recommendations in EPC phase (as % of approved budget) | PHAs and proper safe-design practices implemented in previous design phases?
1 | Gas dehydration unit | 3% | Yes
2 | Gas compression unit | 3% | Yes
3 | Crude oil atmospheric unit | 1% | Yes
4 | Fuel storage tank farm | 2% | Yes
5 | Petrochemical plant relief and flare systems | 2% | Yes
6 | Crude oil dehydration station | 1% | Yes
7 | Crude oil evaluation facilities | 1% | Yes
8 | Heavy crude oil dehydration unit | 3% | Yes
9 | Propane/air injection plant | 1% | Yes
10 | Oil pipeline + two gas compression units | 1% | Yes
11 | New flare system in existing refinery | 1% | Yes
12 | Refinery gas concentration unit revamp | 7% | No
13 | Extra-heavy oil deasphalting unit | 5% | No
14 | Demineralized water plant | 13% | No
15 | Hydrogen compression unit | 35% | No
not considered by the design team. When the recommendations are implemented correctly, and no other changes to the process or plant are made between the preparation of the basic engineering design book and the EPC phase, then a HAZOP significantly reduces the probability of significant cost impacts in the EPC phase as a result of changes due to additional PHAs.
Even though what-if, HAZID and consequence analyses have an impact on the capital cost of the project, the cost of implementing their modifications to the design is typically included in the EPC bidding process, as they are realized at the beginning of the project lifecycle. Fault-tree analysis and LOPA are used to define the redundancy level of controls and instrumentation. The changes derived from these studies generally represent a minor portion of the total capital expenditure. That leaves HAZOP and QRA as the most important studies to identify design improvements to prevent process hazards in the later project phases.
Safe-design options
At the early project phases, it is not
possible to identify all possible risk-reduction measures that could be
included in the design. However, a
safety-oriented design team might
be able to pinpoint sources of project
risk due to lack of data, and opportunities for risk reduction that could
be evaluated in later stages, as the
design progresses and further details
are known.
Some large organizations have distilled their pooled experience into risk checklists and proprietary design standards, thus paving the way for future work. Where organizations have not established their own standards and engineering practices, the design team should look for accepted codes and standards that reflect best engineering practices in a particular field or industry.
Chemical Engineering
The design options include, in descending order of reliability: inherently
safer design, engineering controls
(passive and active) and administrative controls (procedural).
Inherently safer design involves
avoiding or reducing the likelihood of
a hazard in a permanent or inseparable fashion. For example, when designing a centrifugal pump discharge
system, an inherently safer design
would be to specify the design pressure at the centrifugal pump shut-off
pressure, thereby largely reducing
the risk that an increase in the pump
discharge pressure (for example, due
to a blocked outlet) could cause a
rupture in the pipes with consequent
loss of containment.
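As a rough illustration of this design choice, the sketch below estimates a design pressure from the pump shut-off condition. The 1.25 shut-off factor (shut-off head at roughly 125% of rated differential head) is a common rule of thumb assumed here, not a value from the article; vendor pump curves should govern in practice.

```python
# Minimal sketch of the inherently safer sizing choice described above:
# set the discharge-system design pressure at the centrifugal pump
# shut-off pressure. The 1.25 factor is an assumed rule of thumb.

def shutoff_design_pressure(p_suction_max_bar, rated_dp_bar, shutoff_factor=1.25):
    """Design pressure [bar g] = maximum suction pressure + estimated
    shut-off differential pressure of the centrifugal pump."""
    return p_suction_max_bar + shutoff_factor * rated_dp_bar

# Example: 4 bar g maximum suction, 10 bar rated differential pressure
print(shutoff_design_pressure(4.0, 10.0))   # -> 16.5 bar g
```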
Engineering controls are features
incorporated into the design that reduce the impact of a hazard without
requiring human intervention. These
can be classified as either passive
(not requiring sensing and/or active
response to a process variable) or
active (responding to variations in
process conditions). In the previous centrifugal pump example, a
passive solution would be to contain possible leaks within dikes, and
with adequate drainage. Examples
of active solutions could be: a) providing a high-pressure switch associated with an interlock that shuts
the pump down; and b) providing
a pressure safety valve (PSV) designed for blocked outlet.
Administrative controls require
human intervention. These are the
least reliable, because they depend
on proper operator training and response. In the previous example, an
administrative control would be to
require operators to verify that the
valves in the pump discharge lines
are open.
Throughout the engineering phases leading to the EPC phase,
different safe-design choices can be
made, as further information is made
available. Figure 3 shows some of
the typical design choices made by
the engineering team throughout
a chemical process plant lifecycle,
which have direct impact on lifecycle
cost and risk.
In the visualization phase, safety
can be included in the analysis as a
factor to decide key items, such as
production technology and plant location. These key items are typically
selected based on other technical
criteria, such as overall efficiency,
production cost, or vicinity to either
raw materials, or markets (or export
facilities). For instance, when selecting a technology, health, safety and
environmental concerns could be
included as a criterion in the evaluation matrix, by adding positive points
to technologies that reduce risks to
their environment by using less-toxic
materials, operating at lower pressures or temperatures, or yielding
non-toxic byproducts. When selecting a high-level plant location,
management could opt to locate
the plant away from large population
centers, in order to minimize risks to
communities. In this case, planning
authorities also have an important
role in defining allowable land-uses.
In the conceptual engineering phase,
safety can be included in the analysis,
for example, in the following ways:
1. Defining a simple, yet functional
process scheme, as relatively
simple processes have less equipment and consequently lower failure probability (this can conflict
with other design goals); also, the
types of equipment selected can
have an important effect on process safety (for example, selecting
indirect over direct heating).
2. Including safety concerns in the
early layout definition. For instance,
a design by blocks — keeping the
main process, storage, and utility
areas separate from each other —
can reduce overall risk. Other good
practices include: maintaining an
adequate separation between
pieces of equipment; separating
product inventories taking into account their flammability, toxicity or
reactivity, and considering dikes
around tanks containing dangerous materials; placing flares and
vents in locations separate from
human traffic, taking into account
wind direction (for example, so that
flames or plumes are directed farther from personnel or population);
and allowing sufficient plot space
for an adequate exclusion area.
3. Keeping flammable and toxic material inventories to the minimum required to maintain adequate surge/storage capacity and flexibility in shipping.
In the basic engineering or FEED
phase, many design choices are
made over the specific mechanical, piping, electrical, automation
and civil design that impact on the
overall facility risk. The first decision involves selecting the codes
and standards that will be used
for design, and defining the design
basis and criteria for each engineering discipline. Then, throughout the
design, some other decisions may
include: selecting between automated and manual operation, setting equipment and piping design
conditions, defining the electrical
area classification, designing or
specifying equipment, structures
and buildings, defining control and
emergency systems (including appropriate redundancy, where applicable), and designing appropriate
relief systems, among others. Then,
there are equipment- and system-specific hazards and available safeguards that need to be considered.
Ref. 2 contains a comprehensive list
of hazards and safeguards for various types of unit operations.
When hazards have been properly identified and addressed in the
earlier design phases, this reduces
the probability of significant costly
changes being made during the
EPC phase as a result of unsafe
process conditions.
Addressing hazards early
When hazards are identified, and
proper design choices are taken
early in the engineering design to address them, significant benefits can
be obtained.
Table 3 compares the additional
cost of changes arising from recommendations made during a HAZOP
at the EPC phase. The costs are
expressed as a percentage of the
budget that was approved during the bidding stage for projects
of different scope and plant type,
executed by different companies
in different countries, including the
U.S. and Latin America, with approved budgets between $5 million
and $200 million.
The projects are divided into two
categories: a) projects where the
design contractor applied best engineering standards and employed
PHAs at optimum points during the
conceptual engineering and FEED
phases; and b) projects where adequate PHAs and safe-design practices were not applied in the previous
design phases.
As can be seen in Table 3, there
is a significant difference between
the cost of the changes arising from
HAZOP recommendations when
proper safe-design practices and
PHAs were applied during the FEED
phase, and when they were not.
For the first category, changes
were typically in the range of 1 to
3%. In the upper end of this category, changes were higher when the
owner requested some minor modifications to the FEED design without
properly assessing the risks associated with said changes.
As an example, the heavy crude
oil dehydration unit (Project 8) was
designed according to best engineering practices, and adequate
analyses (HAZOP, LOPA) were
conducted during the engineering
phase. However, the owner decided to implement changes in the
design in order to compress the
schedule, by removing several long-lead items that included emergency
shutdown system (ESD) valves and
components, without updating the
PHAs. With the unit in operation,
the owner asked the contractor to
include the ESD items that were in
the original design.
For the second category, changes
exceeded 5%, and in one case
reached as high as 35% of the approved budget. Below is a description of what went wrong in each of
these projects:
The FEED for the refinery gas concentration unit revamp (Project 12) considered manual operation of key pieces of equipment. As a result of a HAZOP during the EPC, the operations had to be automated, which changed the equipment specifications and design. The number of loops added after the HAZOP exceeded the capacity of the controller, and another
one had to be installed.
The extra-heavy oil deasphalting unit
(Project 13) was designed during the
basic engineering phase as a mostly
hand-operated facility, with minimum
supervisory controls. As a result of a
HAZOP during the EPC, the risk was
not tolerable to the owner, and the
whole unit had to be automated.
The demineralized water plant
(Project 14) was delivered by the vendor as a package unit, and no PHAs
were conducted by the vendor. When
received, the plant had many safety
and operability issues and a number
of important modifications had to
be made, including: additional lines,
block and control valves, relief valves
and associated lines, among others.
Aside from the costs associated with
the changes, the project was delayed
by six months.
The hydrogen compression unit
(Project 15) basic engineering design
did not address all of the safety considerations associated with hydrogen handling. Some of the modifications recommended by the HAZOP/
LOPA studies during the EPC phase
included changing the compressor
specification, and increasing the SIL
of the SIS from SIL-1 to SIL-3.
Final remarks
Hazards are present in the CPI;
some are avoidable, while others
cannot be separated from the plant,
as they are tied to the very nature
of the chemicals or the unit operations, or both. However, a proper
design team, one that is trained to
identify hazards, and address them
using the best engineering practices
in safe-design from early on in the
project lifecycle, along with properly timed and executed PHAs, can
be very valuable in avoiding costly
changes during the EPC phase or, even worse, potential damage to persons and the environment.
Edited by Gerald Ondrey
References
1. Center for Chemical Process Safety (CCPS), "Guidelines for Investigating Chemical Process Incidents," 2nd ed., CCPS, AIChE, New York, N.Y., 2003.
2. CCPS, "Guidelines for Engineering Design for Process Safety," 2nd ed., CCPS, AIChE, New York, N.Y., 2012.
3. AACE International Recommended Practice No. 18R-97, Cost Estimate Classification System – As Applied in Engineering, Procurement, and Construction for the Process Industries.
4. American Petroleum Institute (API) Recommended Practice (RP) 752, Management of Hazards Associated with Location of Process Plant Permanent Buildings, 3rd ed., 2009.
5. U.S. Environmental Protection Agency (EPA), "Risk Management Program Guidance For Offsite Consequence Analysis," March 2009.
6. EPA, Chemical Emergency Prevention & Planning Newsletter, Process Hazard Analysis, July–August 2008.
7. Occupational Safety and Health Administration (OSHA), 29 CFR 1910.119, Process Safety Management of Highly Hazardous Chemicals.
Authors
Sebastiano Giardinella is the vice president and co-owner of the Ecotek group of companies (The City of Knowledge, Bldg. 239, 3rd floor, offices A and B, Clayton, Panama City, Republic of Panama; Phone: +507-203-8490; Email: sgiardinella@ecotekgrp.com). He has experience in corporate management, project management, project engineering and process engineering consulting in engineering projects for the chemical, petrochemical, petroleum-refining, oil-and-gas and electrical power-generation industries. He is a certified project management professional (PMP), has a M.Sc. in renewable energy development from Heriot-Watt University (Scotland, 2014), a master's degree in project management from Universidad Latina de Panamá (Panama, 2009), and a degree in chemical engineering from Universidad Simón Bolívar (Venezuela, 2006). He is also professor of project management at Universidad Latina de Panamá, and has written a number of technical publications.
Mayra Marchetti is a senior process engineer, currently working as an independent consultant (Coral Springs, Fla.; Email: mmarchetti@ecotekgrp.com), with more than ten years of experience in the oil-and-gas, petrochemical, petroleum-refining and pharmaceutical industries, and has participated in the development of conceptual, basic and detail engineering projects. She specializes in process simulation, plant debottlenecking and optimization, and relief-systems design. She has a master's degree in engineering management from Florida International University (Florida, 2008), and a degree in chemical engineering from Universidad de Buenos Aires (Argentina, 1996). She has published articles and delivered worldwide seminars focused on the use of simulation tools for the process industry.
Alberto Baumeister is the CEO and co-owner of the Ecotek group of companies (same address as above; Email: abaumeister@ecotekgrp.com). He has experience in corporate management, project management, and senior process consulting in engineering projects for the chemical, petrochemical, petroleum-refining, oil-and-gas, electrical power-generation and agro-industrial industries. He has a specialization in environmental engineering (gas effluents treatment) from the Universidad Miguel de Cervantes (Spain, 2013), a master's diploma in water treatment management from Universidad de León (Spain, 2011), a specialization in management for engineers from Instituto de Estudios Superiores de Administración (Venezuela, 1990), and a degree in chemical engineering from Universidad Metropolitana (Venezuela, 1987). He was professor of the Chemical Engineering School at Universidad Metropolitana between 1995 and 2007, and has written a number of technical publications.
Feature Report
Managing SIS Process
Measurement Risk and Cost
With a focus on flowmeters, this article shows how advances in measurement
technologies help safety system designers reduce risk and cost in their safety
instrumented systems (SIS) design and lifecycle management
Craig McIntyre
and Nathan
Hedrick
Endress+Hauser
IN BRIEF
RISK SOURCES FOR SIS
MAINTAINING LOW FAILURE RISK
EXTENDING PROOF-TEST INTERVALS
TRACEABLE CALIBRATION VERIFICATION
REDUNDANT REFERENCES
LIFECYCLE MANAGEMENT TOOLS
DETECTING PROBLEMS
CONCLUDING REMARKS
Successful implementation and management of a safety instrumented
system (SIS) requires designers and
operators to address a range of
risks. First among these involves the specification of a proven measurement instrument,
such as a flowmeter (Figure 1), and its proper
installation for a given application, an undertaking that is fundamental to achieving the
initial targeted risk reduction.
Second is the definition of the support required to keep the flowmeter (or other measurement subsystem) available at that targeted level of risk reduction throughout the
life of the SIS equipment. The support for the
flowmeter must be defined in the design and
implementation phase.
FIGURE 1. Flowmeters like the one shown here can play key roles in reducing risks with safety instrumented systems (SIS)
Third involves following the recommendations found in the standard IEC 61511/ISA 84 (International Electrotechnical Commission; Geneva, Switzerland; www.iec.ch
and International Society for Automation;
Research Triangle Park, N.C.; www.isa.org),
which provides “good engineering practice”
guidance for SIS development and management. The emerging IEC 61511 Edition 2 introduces some changes to these guidelines,
strengthening emphasis on the requirements
for end users to collect reliability data to
qualify or justify specifications and designs.
This article shows how to address those
risks and describes several tools, capabilities
and procedures that can be considered for
designing and managing a SIS installation in
flow-measurement applications.
Risk sources for SIS
Under IEC 61511-ANSI/ISA 84, operators and SIS designers are required to qualify the appropriateness of a SIS measurement subsystem to be effective in addressing an application-specific safety instrumented function (SIF). This not only includes the initial design of the SIS itself, but the qualification of the measurement subsystem used in that service.
The capture and assessment of data is used to qualify the use of measurement instruments in SIS applications. Even after this qualification, operational data and management of change of these instruments over their lifetimes in SIS applications must still be captured and assessed.

FIGURE 2. Flowmeters with a lower "dangerous undetected" (λdu) FIT and in-situ testing capabilities may allow extension of the interval time needed for proof tests (the plot shows probability of failure on demand over 15 years for a Coriolis flowmeter A at 73 FIT and a typical SIL-capable Coriolis flowmeter B at 160 FIT, against the SIL 1, SIL 2 and SIL 3 bands)
SIS measurement subsystems are typically exposed to challenging process and environmental conditions, so they tend to contribute a higher risk to the availability of the SIS than safety controllers, which are normally installed in a controlled environment.

Maintaining low PFD and λdu
Risk of failure to perform an expected function can come from probabilistic failure sources. For example, this includes the collective probabilistic failures of electronic components in a transmitter. Required maintenance and proof-test procedures must be determined and executed to keep both the probability of failure on demand (PFD) average and the lambda dangerous undetected (λdu; the failure rate for all dangerous undetected failures) fault risk (that is, outside the reach of diagnostics) below a required average risk-reduction target.
FIGURE 3. The figure shows a traceability chain for a mass flowmeter: from the international mass standard and national time standard, through national and secondary standards, a counter/timer and reference masses, down to the flow standard (calibration rig) and the flowmeter itself
Risk of failure to perform an expected function can also come from
systematic failure sources. This
could include damage to a sensor while being tested, for example.
Systematic fault risk may be created
by properties of the process fluids,
operating conditions, build-up, corrosion or other factors. Periodic visual field inspections, calibrations
and maintenance that may need to
be conducted can introduce failure
risk. There is some measure of risk
from (and to) personnel who need to
follow written procedures to conduct
activities in the field and work with
instruments that may need to be removed, transported, repaired, tested
and reinstalled.
It has been stated by one of the
world’s largest chemical companies that “2% of the time we have human intervention, we create a problem.” Another leading
specialty chemical company conducted a study that concluded “4%
of all devices (instruments) that are
proof-tested get damaged during reinstallation.” Reducing the need for
personnel to physically touch a measurement subsystem offers designers an avenue to reduce systematic
failure risk to a SIS.
IEC 61511 Edition 2 points to the
need to specify in the safety requirements specification (SRS) the methods and procedures required for testing SIS diagnostics. SRS clause 10
states some of the requirements for
proof-test procedures — including
scope, duration, state of the tested
device, procedures used to test the
diagnostics, state of the process,
detection of common cause failures,
methods and prevention of errors.
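One way to make such a specification concrete is to capture the clause-10 items as a structured record, as in the hypothetical sketch below; the field names paraphrase the text and do not represent a format defined by IEC 61511.

```python
# Minimal sketch of recording the SRS clause-10 proof-test items listed
# above as a structured record. Fields and sample values are illustrative.

from dataclasses import dataclass

@dataclass
class ProofTestProcedure:
    scope: str                    # what the test covers
    duration_h: float             # expected duration
    device_state: str             # state of the tested device
    diagnostics_procedure: str    # procedure used to test the diagnostics
    process_state: str            # state of the process during the test
    common_cause_checks: str      # detection of common cause failures
    error_prevention: str         # methods and prevention of errors

pt = ProofTestProcedure(
    scope="flowmeter FT-101 sensor and transmitter",   # hypothetical tag
    duration_h=2.0,
    device_state="bypassed, loop in manual",
    diagnostics_procedure="invoke in-situ verification via HART",
    process_state="running at reduced rate",
    common_cause_checks="compare against redundant reference",
    error_prevention="two-person check against written procedure",
)
print(pt.scope)
```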
Measurement subsystems from
several instrument suppliers are now
available with integral redundant selftesting diagnostics that can conduct
continuous availability monitoring.
This means a measurement subsystem may not only have high diagnos52
tic coverage, but also redundancy
— meaning the testing functions are
redundant and continuously checking each other. This redundancy provides a number of benefits for the
lifecycle management of instruments
used in a SIS.
Extending proof-test intervals
Periodic proof-testing of the SIS
and its measurement subsystems
is required to confirm the continued operation of the required SIF,
and to reduce the probability of
dangerous undetected failures that
are not covered by diagnostics. A
proof-test procedure for a flowmeter or other measurement devices
often requires removal of the instrument and its wiring, transportation
to a testing facility, and reinstallation
afterward. In some cases, modern
instrumentation may provide the capability to conduct proof testing insitu, thus eliminating the removal of
equipment and risk of wiring, instrument or equipment damage.
Safety Integrity Level (SIL)-capable measurement subsystems
typically have hardware and software assessments conducted during their development to determine
failure mode effects and diagnostic
analysis and to manage change
processes according to IEC 61508-2 and -3. The λdu and proof-test coverage values, among other safety parameters, are provided in a safety function manual and described in a certificate. Lower λdu values give
system designers greater freedom
when setting measurement subsystem proof-test intervals, because
these intervals contribute a lower
increase in PFD over time.
For example, some Coriolis flowmeters have λdu values in the range of 150 to 178 failures in time (FIT, where 1 FIT = 1 failure in a billion hours). Others, such as two-wire Coriolis flowmeters, have λdu values in the 73 to 89 FIT range. Vortex flowmeters with λdu in the 70 to 87 FIT range are also available. If all other factors were equal, a measurement subsystem with half the FIT value could allow a doubling of the proof-test interval time (Figure 2).
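To see why a lower λdu permits a longer interval, consider the first-order approximation PFDavg ≈ λdu × TI/2 for a single (1oo1) device. This approximation is a standard simplification assumed for the sketch below, not a statement from the article; the FIT values are those quoted in the text.

```python
# Minimal sketch of the proof-test interval trade-off: for a 1oo1 device,
# PFDavg ~= lambda_du * TI / 2, with lambda_du in failures/h and TI in h.

def pfd_avg(lambda_du_fit, interval_years):
    lam = lambda_du_fit * 1e-9              # 1 FIT = 1e-9 failures/h
    hours = interval_years * 8760.0
    return lam * hours / 2.0

for fit, ti in [(160, 5), (73, 10)]:        # flowmeter B vs. flowmeter A
    print(f"{fit} FIT, {ti}-yr interval: PFDavg = {pfd_avg(fit, ti):.1e}")
# Both land near 3.2-3.5e-3, within the SIL 2 band (1e-3 to 1e-2), so the
# lower-FIT meter can run roughly twice as long between proof tests.
```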
Some measurement subsystems
offer the capability to remotely invoke
in-situ proof testing with a high degree of proof-test coverage to reduce
the PFD subsystem contribution.
Given that external visual inspections are sufficient for at least some
proof-test events, these measurement instruments might be proof-tested in-situ without the need to
remove the instrument from service.
Data from these proof-tests can
be transmitted via 4–20-mA HART
connections from the instrument to
and through some safety control
systems to a digital network, such
as Ethernet/IP, where these data
can be captured. In short, the proof-testing event can be invoked, and
related data can be captured, managed and reported through safety
control systems supporting these
capabilities.
In-situ proof testing can create
documented evidence that diagnostic checks have been carried out,
and thereby fulfill the requirements
for documentation of proof-testing,
in accordance with IEC 61511-1,
Section 16.3.3b, “Documentation
of proof testing and inspections.”
When in-situ proof testing can be
engineered into an SIS design, cost
may be reduced compared to the
expense of periodically removing
the instrument from service to perform testing.
FIGURE 4. The diagram illustrates the relationship among the various subsystem elements of a flowmeter: the true mass flow passes through the sensor (sensing element and transducer) and the transmitter (analog-to-digital converter, signal processing and data display) to yield the measured mass flow, or measurand; auxiliary variables (AV) are available at the intermediate elements
FIGURE 5. All measurement results from a particular instrument need to be within the band between the measuring error of the instrument and the maximum permissible error (MPE) for the verification to be considered positive (panel (a) plots measurement error [%] against flow [kg/h] within the MPE band; panel (b) checks auxiliary variables AV1 [mA], AV2 [mV] and AV3 [Hz] against their tolerance intervals)
Traceable calibration verification
Measurement subsystem proof-test
procedures often require calibration verification of the measuring instrument. As operators seek to set
proof-test intervals, they also need
to set associated intervals for calibration verification.
Verification and documentation
to prove that the SIS subsystem
calibration is acceptable normally
requires removal of the subsystem.
This exposes the instrument to dam-
age during removal, transport and
reinstallation. There is also a risk introduced for unrealized damage or
the introduction of an error due to
process shutdowns, which are often
required when an instrument is removed from service.
The measurement subsystem
may need to be calibrated or verified with traceability to an international standard. If an organization is ISO 9001:2008-certified,
it needs to address Clause 7.6a
(Control of monitoring and measuring devices), which states: “Where
necessary to ensure valid results,
measuring equipment shall…be
calibrated or verified at specified
intervals, or prior to use, against
measurement standards traceable
to international or national measurement standards.”
Some measurement instruments
provide certified integral and redundant references that have been
calibrated via accredited and traceable means, and can thus have their
measurement calibration verified in-situ. This eliminates sources of risk
and cost associated with removing
instruments from service, while still
meeting ISO 9001:2008 Clause
7.6a requirements.
Redundant references
Appointed with the task of coordinating the realization, improvement and
comparability of worldwide measurement systems, the International
Bureau of Weights and Measures
(Sèvres, France; www.bipm.org) defines traceability as “the property of a measurement result to be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.” Figure 3 shows a traceability chain for a flowmeter.

FIGURE 6. Cloud- or enterprise network-based lifecycle management tools can provide support documentation for specific instruments (a flowmeter subsystem connects to the safety controller/logic solver over a 4–20-mA d.c. loop with HART communication, per NAMUR NE107 and NE43, and onward via Ethernet/IP to device lifecycle management and field device management records keyed to the instrument serial number)
The term “measurement result”
can be used in two different ways to
describe the metrological features of
a measuring instrument:
1. Measurand (Process Value): Out-
put signal representing the value of
the primary process variable being
measured (that is, mass flow).
2. Auxiliary variable: Signal(s) coming
either from the instrument’s sensor
(transducer) or a certain element of
the transmitter, such as an analog-to-digital (A/D) converter, amplifier, signal-processing unit and so
on. This variable is often used to
transmit current, voltage, time, frequency, pulse and other information.
FIGURE 7. NAMUR NE43 recommendations for 4–20-mA d.c. transmitters (top) and process control systems (bottom) address the risk of mixing different vendor-specific current-range signal levels. Measurement information (M) occupies the 3.8–20.5-mA band (4–20 mA nominal); currents below 3.6 mA or above 21 mA carry failure information (A = alarm state)
Figure 4 illustrates the basic concept and the relation among subsystem elements in a flowmeter.
During the lifecycle of any instrument, it is important to monitor measurement performance on a regular
basis (ISO 9001:2008 Clause 7.6a),
especially if the measurements from
the instrument can significantly impact process quality.
For example, in Figure 4, the process value is defined as mass flow,
and a traceable flow calibration system can be used to perform a proof
test. Typically, the outcome of this
test is seen in calibration certificates
as a graph depicting the relative
measuring error of the instrument
and the maximum permissible error
band. All of the measurement results
are expected to be enclosed within
this band for the verification to be
considered positive (Figure 5a).
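To make the pass/fail criterion concrete, the check can be sketched in a few lines of Python (a minimal illustration, not any vendor's implementation; the flowrates, errors and MPE value below are assumed for the example):

def verify_calibration(errors_by_flow, mpe_pct):
    # Positive verification per the text: every relative measuring error
    # must lie inside the maximum-permissible-error band (see Figure 5a)
    return all(abs(err) <= mpe_pct for err in errors_by_flow.values())

# Relative errors in % of reading at three test flowrates (kg/h), assumed values
points = {1000: 0.05, 5000: -0.08, 9000: 0.12}
print("verification:", "positive" if verify_calibration(points, mpe_pct=0.2) else "failed")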
A second approach (Figure 5b)
consists of assessing the functionality of an instrument by looking at
one or more elements that can significantly impact the process value.
In this case, verification can assist in
assessing the instrument’s functionality by observing the response of
the process variable and the auxiliary
variables. The auxiliary variables are
compared to specific reference values to make sure they are within a
tolerance interval established by the
manufacturer.
Typically, proof testing requires
the flowmeter to be removed from
the process line and examined with
specific equipment, such as a mobile
calibration rig or a verification unit.
This rig or unit needs to be maintained and calibrated by qualified
personnel, thus introducing a costly
and time-consuming procedure.
The process has to be shut down to
perform testing, often resulting in a
loss of production. If removal and reinstallation of the flowmeter are carried out in a hazardous area, safety
issues can arise.
Modern instruments, such as
mass flowmeters, typically have in-situ proof testing built into the devices. While many instrument vendors have similar solutions, there are
significant differences in how they
work. In the cases where flowmeter
hardware and its associated software can conduct in-situ testing, the approach is often different as well. For example, the authors' company embeds the verification functionality in the device electronics of the flowmeter, so removal of the flowmeter is not required.

FIGURE 8. Five standard status states are specified by the NAMUR NE 107 recommendation, each with a standard color and symbol: normal (valid output signal); maintenance required (still valid output signal); out of specification (signal out of the specified range); function check (temporary non-valid output signal); and failure (non-valid output signal)
A key requirement for this type of
verification method is high reliability.
The internal references used to verify
the auxiliary variables must remain
stable and avoid drift during the
service life of the instrument. And if
drift does occur, it must be detected
immediately. The stability of the references can be addressed with durable and high-quality components.
Potential drift can be detected by the
use of an additional, redundant reference, so that each can cross-check
with the other. If one or both references drift out of tolerance, these
cross-checks can trigger an alarm.
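The cross-check logic can be sketched as follows (a simplified Python illustration; in a real device this runs in the flowmeter electronics, and the nominal value and tolerance come from the factory calibration):

def check_redundant_references(ref_a, ref_b, nominal, tolerance):
    # Each reference is compared to its calibrated nominal value, and the
    # two references are compared to each other to catch drift in either one
    a_ok = abs(ref_a - nominal) <= tolerance
    b_ok = abs(ref_b - nominal) <= tolerance
    agree = abs(ref_a - ref_b) <= tolerance
    return "pass" if (a_ok and b_ok and agree) else "alarm: reference drift detected"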
Redundancy of the references is
achieved differently depending upon
the measurement technology:
• Electromagnetic flowmeters use
voltage references because the
primary signal generated by the
sensor is a voltage induced by the
conductive fluid passing through a
magnetic field
• Coriolis, vortex, and ultrasonic
flowmeters use frequency generators (digital clocks) as references
because the primary signals are
measured either by a time period
(the phase-shift in a mass flowmeter or the time-of-flight differential
in an ultrasonic flowmeter), or by
the frequency of an oscillation
(such as the rate of capacitance
swings by the differential switched
capacitor sensor in vortex flowmeters)
In flowmeter models where redundant references are in place,
observing both references drifting simultaneously in the same manner is
very unlikely. On an installed base of
100,000 flowmeters, such an event
is anticipated to occur just once
every 148 years. Put another way, a
device with a typical lifecycle of 20
years would have only a 0.007%
probability of experiencing such a
drift during its life.
Independent, third-party verification of a particular redundant-references approach can be obtained by
organizations such as TÜV Rheinland AG (Cologne, Germany; www.
tuv.com), and verification reports
thus obtained can satisfy the need to
document the approach.
In practice, a verification report
from an independent, third-party organization constitutes the front end
of an unbroken, documented chain
of traceability. Since the internal references remain valid over the lifetime
of the instrument, their own factory
calibration, performed in accredited
facilities and documented, is the next
link in this chain.
In addition, a traceable calibration
of the instrument ensures that the
integrity of the device has not deteriorated during assembly or handling in the plant. Calibration of the
equipment used for calibration in the
factory can then be traced back to
national standards.
In-situ verification is therefore compliant with international standards for
traceable verification.
Lifecycle management tools
ISA 84 and IEC 61511 — in particular, edition 2 clause 11 of the IEC
document — require end users to
collect reliability data to qualify or
justify specification and design data.
According to these documents, data
quality and sources:
• Shall be credible, traceable, documented and justified
• Shall be based on the field feedback existing on similar devices
used in similar operating
environments
• Can use engineering judgment to
assess missing reliability data or
evaluate the impact on reliability
data collected in a different operating environment
Collecting reliability data for SIS
is costly, but lifecycle management
tools are available to reduce the risk
and required time for some of these
activities. Several vendors offer lifecycle management tools that can
work externally or through the safety
system environment. They can also
capture lifecycle events, such as
systematic and probabilistic failures.
If anomalies are detected, SIS components can be repaired or replaced.
The right configurations can then be
uploaded, reducing required time
and risk of errors.
Field device management tools
can work externally or through the
safety system environment to invoke
subsystem proof-testing and calibration verification, and to capture
56
lifecycle events, such as systematic
and probabilistic failures in the measurement subsystems. Subsystems
can be repaired and replaced, and
then the correct configurations can
be uploaded, reducing time and risk
of errors.
At least one field device management tool follows the Field Device
Tool (FDT) standard from the FDT
Group (Jodoigne, Belgium; www.
fdtgroup.org), which provides a unified structure for accessing measurement subsystem parameters,
configuring and operating them, and
diagnosing problems. A logic solver
with HART I/O and HART pass-through management capabilities
can allow such a tool to work with
the measurement subsystem to invoke in-situ proof testing and traceable calibration verification.
Some field-device-management
tools can be used with device lifecycle management tools to aid in subsystem-related data support access
and capture. These tools can also be
integrated with overall lifecycle-management tools.
Several instrument suppliers provide, populate and maintain a realtime Cloud- or enterprise-based
device lifecycle-management tool
connection for individual device-specific support documentation, certificates, history, changes and calibration information.
For example, Figure 6 illustrates
one possible configuration. In this
case, information flows between a
flowmeter subsystem through a logic
solver (safety controller) to a field-device-management tool and device
lifecycle-management software in
the Cloud or on a local server.
In this example, the flowmeter
and logic solver both use NAMUR
NE 43-recommended current loop
signal settings to reduce systematic
risk from mixing the different vendor-specific current-loop signal levels.
Also, the flowmeter and logic solver
both use standard HART Communication commands including the
NAMUR NE 107 recommendation,
which provides five clear actionable
subsystem status indicators.
In the case of the author’s employer, the FDT communicates
through the logic solver with the flowmeter via 4–20-mA HART to monitor the device, to invoke in-situ proof
testing and calibration functions,
and to diagnose problems. The
field-device-management tool communicates via Ethernet/IP to a lifecycle management server installed
within the user’s network or the
Cloud, where all flowmeter data are
stored in accordance with ISA and
IEC standards. The flowmeter data and all associated records are synchronized and maintained via the device's serial numbers.
The goal of this kind of field-device-management software is to enable plant operators to design a system that provides the following:
• Device power and wiring condition monitoring through the logic
solver or safety controller
• Device primary current loop/secondary HART communication and
status management through the
logic solver
• Device repair/replace management through the logic solver
• Device proof testing management
through the logic solver
• Device traceable verification of
calibration management through
the logic solver
• Capture and management of device proof testing, calibration, and
other lifecycle data that may reduce risk and cost in SIS designs
and lifecycle management
Detecting problems
A typical SIL-capable instrument,
such as a flowmeter, connects to
the logic solver or safety controller
via 4–20-mA or 4–20-mA HART.
These signals are also used to indicate problems.
Current signals per NAMUR NE 43
recommendations (Figure 7) convey
measurement and failure information
from the flowmeter to the safety controller via the 4–20-mA loop. Most
every instrumentation and control
system supplier offers options to
support this standard. Essentially,
any flowmeter and logic solver that
follows the NAMUR NE 43 recommendation uses 4–20 mA for the
measurement, and signals of less
than 3.6 mA or greater than 20.5
WWW.CHEMENGONLINE.COM
AUGUST 2016
mA to indicate failures. The benefit
of following this practice is reducing
the risk of mixing different instrument
vendor-specific signal level variations
with different safety controller signal level settings — something that
could happen during a repair or replacement event.
Figure 8 shows the five standard status states specified by the
NAMUR NE 107 recommendation.
The NE 107 recommendation is now
implemented within many HARTenabled devices for standard status communication. Under NE 107,
problems are identified as normal,
failure, out-of-specification, maintenance required and function check.
The purpose of NE 107 is to alert
systems and operations personnel in
an actionable way if a problem exists. When the logic controller sees
an NE 107 status indication change,
it notifies the operator. Field-devicemanagement tools can be used to
provide additional diagnostic data to help identify specific problems.
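A minimal sketch of how the five NE 107 states might be represented and acted on (the state names follow the recommendation as described above; the notification logic is illustrative):

from enum import Enum

class NE107(Enum):
    # The five standard status states (see Figure 8)
    NORMAL = "normal; valid output signal"
    MAINTENANCE_REQUIRED = "maintenance required; still valid output signal"
    OUT_OF_SPECIFICATION = "out of specification; signal out of the specified range"
    FUNCTION_CHECK = "function check; temporary non-valid output signal"
    FAILURE = "failure; non-valid output signal"

def on_status_change(old, new):
    # The logic controller notifies the operator on any status change
    if new is not old:
        print(f"Operator alert: device status is now '{new.value}'")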
Concluding remarks
Implementation of a SIS requires reducing process risk to a targeted minimum while keeping design and lifecycle costs at a reasonable
level. Intelligent instruments and lifecycle management tools can help
process plant personnel reduce risks
and costs associated with a SIS. They can also aid in capturing
reliability data.
Instrumentation suppliers who
serial-number their components are
able to provide operators a realtime
Cloud- or enterprise-based connection between the measurement device
in the field and serial number-based
support documentation, certificates,
history, changes and calibration information. These data are maintained
by the supplier for the user. Additional
user data can be captured, including
service history. This can help reduce
the time required to obtain needed information, as well as reduce the risk
of using the wrong information.
Edited by Scott Jenkins
Authors
Nathan Hedrick is the flow product marketing manager at
Endress+Hauser (2350 Endress
Place, Greenwood, IN. 46143;
Phone: 1-888-363-7377; Email:
nathan.hedrick@us.endress.
com). Hedrick has more than six
years of experience consulting on
process automation. He graduated
from the Rose-Hulman Institute of
Technology in 2009 with a B.S.Ch.E. He began his career with Endress+Hauser in 2009 as a technical support engineer. In 2014, Hedrick became the technical
support team manager for flow, where he was responsible for managing the technical support team covering
the flow product line. He has recently taken on his current position.
Craig McIntyre is the chemical industry manager with Endress+Hauser (same address as above; Phone: 1-888-363-7377; Email: craig.mcintyre@
us.endress.com). McIntyre has
held several positions with
Endress+Hauser over the past 17
years. They include level product
manager, communications product manager and business development manager. Prior
to joining E+H, he was director of marketing for an Emerson Electric subsidiary. McIntyre holds a B.S. degree
in physics from Greenville College and an MBA from the
Keller Graduate School of Management.
Feature Report
Column Instrumentation
Basics
An understanding of instrumentation is valuable in
evaluating and troubleshooting column performance
Ruth R. Sands
DuPont Engineering
Research & Technology
Instrumentation is critical to understanding and troubleshooting
all processes. Very few engineers
specialize in this field, and many
learn about instrumentation through
experience, myth and rumor. A good
understanding of the various types of
instrumentation used on columns is a
valuable tool for engineers when evaluating column performance, starting up
new towers or troubleshooting any type
of problem. This article gives an overview of the common types of instruments used for pressure, differential
pressure, level, temperature and flow.
A discussion of their accuracy, common
installation problems and troubleshooting examples is also included.
The purpose of this article is to provide some basic information regarding
the common types of instrumentation
found on distillation towers so that
process engineers and designers can
do their jobs more effectively.
Introduction
Anyone trying to complete a simple
mass balance around a column understands that process data contain some
error. Closing a mass balance within
10% using plant data is usually considered very good. Generally, some values
must be thrown out when matching a
model to plant data. Understanding
which measured plant data is likely
to be most accurate is invaluable in
making good decisions about a model
of the plant, column performance and
future designs.
The following is a real case and a
telling example of how little the average chemical engineer may understand
about instrumentation. A process engineer with over 20 years of experience
was doing a material balance around a
distillation tower, illustrated in Figure 1.
Figure 1. Which flowmeter is the most
accurate? What is the source of error in
the material balance?
Figure 2. Flush-mounted diaphragm
pressure transmitters are common in
low-temperature services
Based on the material balance, the
engineer concluded that the bottoms
flowrate must be in error and wrote
a work order to have the flowmeter
recalibrated. The instrument group
disagreed heartily. By the end of this
article, the reader will understand the
instrument group’s response.
Pressure
There are three common types of pressure transmitters: flush-mounted diaphragm transmitters, remote-seal diaphragm transmitters and impulse-line transmitters. All use a flexible disk, or diaphragm, as the measuring element. The deflection of the flexible disk is measured to infer pressure. The diaphragm can be made of many different materials of construction, but the disk is thin and there is little tolerance for corrosion. Coating of the diaphragm leads to error in the measurement. The instrument accuracy of all three types of pressure transmitters is similar, usually 0.1% of the span, or calibrated range.

Flush-mounted diaphragms
These pressure transmitters are common in low-temperature services, such as in scrubbers and storage tanks. The process diaphragm, an integral part of the transmitter, is mounted on a nozzle directly on the vessel, and the transmitter is mounted directly on the nozzle.

Remote-seal diaphragm
Used in higher temperature service when the electronics must be mounted away from the process, a flush-mounted diaphragm is installed on a nozzle at the process vessel. A capillary tube filled with hydraulic fluid connects the flush-mounted diaphragm to a second diaphragm, which is located at the remotely mounted pressure transmitter. The hydraulic fluid must be appropriate for the process temperature and pressure. Hydraulic fluid leaks will lead to errors in measurement. Calibration is complex because the head from the hydraulic fluid must be considered. The calibration changes if the transmitter is moved, the relative position of the diaphragms changes or if the hydraulic fluid is changed.

Impulse-line
Impulse-line pressure transmitters can either be purged or non-purged.
Purged, impulse-line
Purged impulse-line pressure transmitters measure purge-fluid pressure to infer the process pressure. Most commonly, the purge fluid is nitrogen, but it can also be air or other clean fluids. The purge fluid is added to an impulse line of tubing to detect pressure at the desired point in the process. The purge fluid enters the process and must be compatible with it. Check valves are required to ensure that process material does not back up into the purge-fluid header. The system must be designed so that the pressure drop through the impulse line is negligible. A pressure transmitter measures the purge-fluid pressure with a diaphragm to infer the process pressure.

Non-purged, impulse-line
Rather than a purge fluid, this type of pressure transmitter uses process fluid. Usually, this style is chosen when the process is non-fouling or it is undesirable to add inerts to the process. One example is a situation where emissions from an overhead condenser vent must be minimized. An impulse line is connected from the desired measurement point in the process to a pressure transmitter, which measures the process pressure at the remote point. The system must be designed so that the pressure drop through the impulse line is negligible. The system designer must consider the safety implications of an impulse-line failure. The consequence of releasing hazardous material from a tubing failure may warrant the selection of a different type of pressure transmitter. Adequate freeze protection on the impulse lines is also important to obtain accurate measurements.

DEFINITIONS
Instrumentation range
The instrumentation range, the scale over which the instrument is capable of measuring, is built into the device by the manufacturer. The purchaser defines the desired measured range, and the vendor should provide a device that is appropriate for the application.

Calibrated range
The calibrated range is the scale over which the instrument is set to measure at the plant. It is a subset of the instrument range. The calibration has a zero and a span. The zero is the minimum reading, while the span is the width of the calibrated range. The calibrated range will simply be referred to as the range at a plant site.

Instrument accuracy
Accuracy = (Error / Scale of Measurement) × 100%
The instrument accuracy is published by the manufacturer in the product documentation, which is easily obtained on-line. A few examples of how accuracy can be expressed are:
• Best-in-class performance with 0.025% accuracy
• ±0.10% reference accuracy
• ±0.065% of span
These examples refer to the ideal instrument accuracy, which is only the accuracy of the measuring device itself. The total accuracy, on the other hand, includes the instrument accuracy plus all other factors that contribute to error in the measured reading as compared to the actual value. These other factors can include digital-to-analog conversions, density errors, piping configurations, calibration errors, vibration errors, plugging and more.

Turndown ratio
The ratio of the maximum to minimum accurate value is an important factor in considering the total accuracy of a measured value.
Turndown ratio = maximum accurate value / minimum accurate value
For example, an instrument with 100:1 turndown and a 0–100-psi instrument range would have the stated instrument accuracy down to 1 psi. Below 1 psi, the instrument might read, but it will have greater inaccuracy.
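The turndown definition in the box translates directly into a check of where the stated accuracy stops applying; here is a minimal Python sketch using the box's own example numbers:

def min_accurate_value(range_max, turndown):
    # Smallest reading at which the stated instrument accuracy still applies
    return range_max / turndown

# The box's example: 100:1 turndown on a 0-100-psi instrument range
print(min_accurate_value(100.0, 100.0))   # 1.0 psi; below this, expect greater inaccuracy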
Example 1. A good example of a problem with impulse-line pressure transmitters can be found in Kister’s Distillation Troubleshooting [2]. Case Study
25.3 (p. 354), contributed by Dave
Simpson of Koch-Glitsch U.K., describes three redundant impulse-line
pressure transmitters used to measure column head pressure. Following
a tray retrofit, operating difficulties
eventually led to suspicion of the head
pressure readings. The impulse lines
and pressure transmitters had been
moved during the turnaround. The
transmitters had been moved below
the pressure taps on the vessel. Condensate filled the impulse lines and
caused a false high reading. Relocating the transmitters to the original
location above the nozzles solved the
problem by allowing condensate to
drain back into the tower.
Transmitters in vacuum service
Pressure transmitters in vacuum service are generally the most problematic, leading to greater inaccuracy in
the measured value. Damage to the
diaphragm can occur from exceeding
the maximum pressure rating of the
instrument. Often, this happens on
startup, or it can happen when performing a pressure test of the vessel.
The diaphragm deflects permanently
and introduces error.
Calibration of vacuum pressure
transmitters is more difficult for instrument mechanics. The operating
range must be clearly defined; for example, is the range 100-mm Hg vacuum, 100-mm Hg absolute, or 650-mm
Hg absolute? Using different measurement scales in the same plant is confusing, and it can make it very hard
for mechanics to calibrate the pressure
transmitters accurately.
Another issue is measuring the
relief pressure. The system designer
must consider the instrument ranges
available and the accuracy of the
measurement for the operating range
versus the relief pressure range. It is
good practice to install a second pressure transmitter on vacuum towers to
measure the relief pressure.
Example 2. An excellent example of
calibration problems is illustrated in
vacuum service in Reference [2]. Case
Study 25.1 (p. 348), contributed by
Dr. G. X. Chen of Fractionation Research, Inc., describes several years
of troubleshooting a steam-jet system
in an attempt to achieve 16-mm Hg
absolute head pressure on a tower. It
was eventually determined that the
calibration of the top pressure transmitter was wrong, and they had been
pulling deeper vacuum than they
thought. The top pressure transmitter
was calibrated using the local airport
barometric pressure, which was normalized to sea-level pressure and was
off by 28-mm Hg.
Differential pressure
Differential pressure can be measured
either with a differential pressure (dP)
meter or by subtracting two pressure
measurements.
Figure 3. (left) Remote-seal diaphragm pressure transmitters are used in high-temperature service
Figure 4. (above) Location of the reboiler return nozzle does not allow for an accurate level reading
Subtracting two pressure readings is not always accurate
enough to obtain a meaningful measurement, so it is important to consider
the span of the anticipated measured
readings. If the dP is a substantial fraction of the top pressure, then it is okay
to subtract the readings of two pressure transmitters. However, if the dP
is a small fraction of the top pressure,
then it will be within the instrument
error of the pressure transmitter.
For example, a column at a plant runs
at 30 psia top pressure. The expected
dP is 2-in. H2O over a few trays. The instrument error for a 0–50 psi pressure
transmitter is 1.4-in. H2O. The measurement is within the accuracy of the
pressure transmitters, and a dP meter
is the appropriate meter to obtain an
accurate measurement. The downside
of dP meters is that very long impulse
lines are required on tall towers.
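The numbers in this example can be verified directly (a sketch; 27.7 in. H2O per psi is the standard conversion factor, and 0.1% of span is the accuracy quoted earlier for pressure transmitters):

IN_H2O_PER_PSI = 27.7                     # approximate conversion factor

span_psi = 50.0                           # 0-50-psi calibrated span
error_in_h2o = span_psi * 0.001 * IN_H2O_PER_PSI
print(round(error_in_h2o, 1))             # 1.4 in. H2O of instrument error

# The expected dP (2 in. H2O) is comparable to the error of a single
# transmitter, so subtracting two pressure readings cannot resolve it;
# a dedicated dP meter is required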
Level
Level and flow are the hardest basic
things to measure on a distillation
tower. Kister reports that tower base
level and reboiler return problems
rank second in the top ten tower malfunctions, citing that “Half of the case
studies reported were liquid levels rising above the reboiler return inlet or
the bottom gas feed. Faulty level measurement or control tops the causes
of these high levels...Results in tower
flooding, instability, and poor separation...Vapor slugging through the liquid also caused tray or packing uplift
and damage.” (Reference 2, p. 145)
Figure 5. Nuclear level transmitters are non-contact devices
One of the main reasons for faulty level indications is that dP meters are the most common type of
level instrument, and an accurate
density is required to convert the dP
reading to a level reading. In many
cases, froth in the liquid level decreases the actual density and causes
faulty readings. Changes in composition or the introduction of a different
process feed with a different density
are cited several times as reasons for
level measurement problems. Plugging of impulse lines and equipment
arrangements that make accurate
readings impossible are also very
common problems.
Differential pressure transmitters
are the most common type of level
transmitter. The accuracy of the instrument is quite good, at 0.1% of
span (calibrated range). Any type of
dP meter can be used: flush-mounted
diaphragms, remote-seal diaphragms,
purged impulse-line, or non-purged
impulse-line pressure transmitters.
The level measurement is dependent
on the density of the fluid:
Height of liquid (ft) = ∆P/ρl

where ∆P is the measured differential pressure and ρl is the liquid density, in consistent units.
An accurate density is required for
calibration. Changes in composition or
the introduction of a process feed with
a different density will cause erroneous readings. Level transmitters suffer from the same problems that occur
in pressure transmitters. Hydraulic
fluid leaks, compatibility of the hydraulic fluid, damage to diaphragms,
and plugging or freezing of impulse
lines are just a few of the problems
that can be encountered with dP level
transmitters.
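The density dependence is easy to quantify; the sketch below uses assumed numbers (not taken from the article) to show how the same dP reading converts to very different levels if the calibration density is wrong:

def level_from_dp(dp, rho):
    # h = dP / rho, in any consistent unit set (e.g., lb/ft2 and lb/ft3)
    return dp / rho

dp = 5.0 * 50.0                          # head from 5 ft of 50-lb/ft3 liquid
print(level_from_dp(dp, 50.0))           # 5.0 ft with the correct density
print(level_from_dp(dp, 40.0))           # 6.25 ft if the calibration assumes 40 lb/ft3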
Figure 6. Non-contact radar level
transmitters generate waves that are
reflected from the surface of the level
back to the transmitter
Example 1. A column in a high-temperature, fouling service began to experience high pressure drop, and the
plant engineers were concerned that
they were flooding the column. Calculations showed that the tower should
not be flooding if the trays were not
damaged. Downcomer flooding was a
possibility if the cartridge trays had
become dislodged and reduced the
downcomer clearance. The tower was
taken down, and internal inspection
revealed no damage to the internals.
It was determined that a false low
level caused the bottoms flow controller to close. This raised the level in the
tower above the reboiler return line
and above the lower column pressure
tap. The column dP meter was reading
the height of liquid above the lower-column pressure tap. Consultation
with the instrument manufacturer
revealed that the remote-seal hydraulic fluid was not appropriate for the
high temperature of the process. The
hydraulic fluid was boiling in the capillary tubes and had deformed the diaphragm, which was also coated from
the fouling service. The level transmitter was switched to a periodically purged
impulse-line dP meter. An automated
high-flow nitrogen purge prevents accumulation of the solids in the impulse
lines and is done once per shift. Logic
was added to the control loop to maintain the previous level reading during
the short nitrogen purge, a method
that has eliminated the problem with
the level.
Figure 7. Guided wave radar level transmitter on a distillation tower [5]
Figure 8. Resistive temperature detectors respond to a temperature change with a change in resistance
Example 2. Another common example of a level transmitter failure is based on the fact that equipment is designed in such a way that an accurate level reading can never be obtained. Though this may be surprising, it is mentioned in Ref. [3], Case
Study 8.4 (p. 149), as illustrated in
Figure 4. A column that was being
retrofitted was originally designed
so that the reboiler return was introduced directly between the two liquid-level taps. The level in the tower
could never be accurately measured,
and it was modified on the retrofit to
rectify this situation.
Nuclear level transmitters
Common in polymer, slurry and
highly corrosive or fouling services,
these instruments work by placing a
radioactive source on one side of the
vessel and a detector on the other side.
The amount of radiation reaching the
detector depends on how much material is inside the vessel. A strip source
and strip detector are more accurate
than a single source, strip detector. A
sketch of a single source, strip detector
is shown in Figure 5. The advantage
of nuclear level transmitters is that
they are non-contact devices, making
them ideal for services where the process fluid would coat or damage other
types of level instruments.
Nuclear level transmitters are more
expensive than other level devices.
They also require permits and a radiation safety officer, so they are often only
used as a last resort. The instrument
accuracy is generally ±1% of span. The
total accuracy depends on how well the
system was understood by the designer
and installer. The thickness of the vessel walls and any other metal protrusions in the measuring range, such as
baffles, must be taken into account in
the calibration, along with the correct
rate of decay of the source. Build-up of
solids in the measuring range will also
result in error.
Figure 9. For vapor or gas applications, orifice flowmeters require temperature and pressure compensation

Radar level transmitters
This type of level transmitter has been used in the
chemical processing industries (CPI) for the last
30 years. They demonstrate high accuracy on oil tankers and have been
used frequently in storage-tank applications. Radar level transmitters are
now being applied to distillation towers but are still more commonly found
on auxiliary equipment, like reflux
tanks. There are contact and non-contact types of radar level instruments.
A non-contact, radar level transmitter generates an electromagnetic wave
from above the level being measured.
The wave hits the surface of the level
and is partially reflected to the instrument. The distance to the surface is
calculated by measuring the time of
flight, which is the time it takes for the
reflected signal to reach the transmitter. Some things that cause inaccuracy
with non-contact radar are: size of the
cone, heavy foaming, turbulence,
deposits on the antenna, and varying
dielectric constants caused by changes
in composition or service. The instrument accuracy is reported as ±5 mm.
Contact radar sends an electromagnetic pulse down a wire to the vapor-liquid interface. A sudden change in
the dielectric constant between the
vapor and the liquid causes some of
the signal to be reflected to the transmitter. The time of flight of the reflected signal determines the level.
Guided wave radar can be used for
services where the dielectric constant
changes, but is not a good fit for fouling
services. A bridle (Figure 7), is used on
distillation towers to reduce turbulence
and foaming and therefore increases
the accuracy of the measurement. Instrument accuracy is ±0.1% of span.
Example 3. A reflux tank on a batch
distillation tower had a non-contact
radar level transmitter. The tower
stepped through a series of water
washes, solvent washes, and process
cuts. The reflux-tank level transmitter gave false high readings during the
solvent wash cycle, which used toluene. The reflux pumps would always
gas off during this part of the process.
The dielectric constants of the various
fluids in the reflux tank, of which toluene had the lowest dielectric constant,
varied by a factor of ten over the cycle, affecting the height of liquid able to be
measured. Larger antennas focus the
signal more and give greater signal
strength. As the dielectric constant decreases, a larger antenna is required to
measure the same height of fluid. The
level transmitter used in this service
was not appropriate for all measured
fluids and could not accurately measure the liquid level when the reflux
drum was inventoried with toluene.
Temperature
There are two common types of temperature transmitters in distillation
service — thermocouples and Resistive Temperature Devices (RTDs).
Both are installed in thermowells.
Thermocouples. The most popular
temperature transmitter, thermocouples, consist of two wires of dissimilar
metals connected at one end. An electric potential is generated when there
is a temperature delta between the joined end and the reference junction. Type J thermocouples, made of iron and constantan, are commonly used in the CPI for measuring temperatures under 1,000°C.

RTDs
The second most-common type of temperature transmitter, RTDs consist of a metal wire or fiber that responds to a temperature change by changing its resistance. Though RTDs are less rugged than thermocouples, they are also more accurate. Typically, they are made of platinum. The instrument accuracy of both thermocouples and RTDs is very good; however, thermocouples have a higher total error than RTDs. The total accuracy of a thermocouple is 1–2°C, with the greater error due to calibration errors and cold-reference-junction error.

It is important to note that, with temperature transmitters, there is a lag in the dynamic response to changes in process temperatures. All temperature measurements have a slow response, because the mass of the thermowell must change in temperature before the thermocouple or RTD can see the change. The lag time will depend on the thickness of the thermowell and on the installation. The thermocouple and RTD must be touching the tip of the thermowell for best performance. If there is an air gap between the thermowell and the measuring device, the heat-transfer resistance of the air will add substantially to the lag time, which is also why temperature transmitters work better in liquid service. The response time for temperature transmitters in liquid service is between 1–10 s, whereas the response time for temperature transmitters in vapor service is about 30 s. Heat-transfer paste is a thermally conductive silicone grease; it has been used with success in some plants to improve the response time of temperature transmitters.

Figure 10. Volumetric flowrate is proportional to the square root of the ∆P, causing high error at less than 10% of span (orifice-plate pressure drop, in. H2O, versus flowrate, gpm; below 10% of scale, error is high for the flow sensor and the dP measurement)

Example. The plant in this example experienced a temperature lag problem. A thermocouple near the bottom of a large tower controlled the steam to the reboiler. The temperature control point had a 10-min delayed response to changes in steam flowrate.
The rest of the column responded to
the change in boilup in about 3 min.
The lag in the control point caused
cycling of the steam flowrate and created an unstable control loop. The
cause was determined to be a thermocouple that was too short for its thermowell. Normally, thermocouples are
spring-loaded to ensure that the tip
is touching the end of the thermowell, but the instrument mechanics had
installed a thermocouple of the wrong
length because they lacked the proper
replacement part. The poor heat
transfer through the air gap between
the end of the thermocouple and the
thermowell caused the delay in temperature response. Replacing the installed thermocouple with one of the
proper length fixed the problem.
Flow
There are many different types of
flowmeters. Here, the types commonly
used in plants will be discussed: orifice
plates, vortex shedding meters, magnetic flowmeters and mass flowmeters.
Orifice plates
Orifice plates are the most common
type of industrial flowmeter. They are
inexpensive, but they also have the
greatest error of all the common types
of flowmeters. Orifice plates measure
volumetric flowrate according to the
following equation:
Q = C(∆P/ρ)^(1/2)
Q is the volumetric flowrate, C is a constant, ∆P is the pressure drop across
the orifice, and ρ is the fluid density.
To obtain an accurate flowrate, an accurate fluid density must be known.
Temperature and pressure compensation are required for vapor or gas
applications and may be required
for some liquids. Figure 9 shows the
Figure 11. Due to impulse
line problems, this "clean" service did not meet standards
equipment arrangement for an orifice
flowmeter with temperature and pressure compensation.
Typical turndown for orifice plates is
10:1. Below 10% of span, the measurement is extremely erroneous because
the volumetric flowrate is proportional
to the square root of the ∆P. At 10% of
span, the meter is only measuring 1%
of the ∆P span (Figure 10).
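The square-root relationship explains the turndown limit; a quick Python sketch makes the scaling explicit:

def dp_fraction_of_span(flow_fraction):
    # Q ~ sqrt(dP), so the dP signal falls with the square of the flow fraction
    return flow_fraction ** 2

for f in (1.0, 0.5, 0.1):
    print(f, "->", dp_fraction_of_span(f))   # 1.0 -> 1.0, 0.5 -> 0.25, 0.1 -> 0.01

# At 10% of the flow span the transmitter sees only 1% of its dP span,
# which is why orifice turndown is limited to about 10:1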
Multiple meters can be used to
overcome the turndown ratio when
high accuracy is required over the
entire span. This is often worth the
effort when measuring the flowrate
of raw materials or final products. At
one plant, three orifice plates in parallel were used to measure the plant-boundary steam flowrate due to the
large span and the accuracy required
at the low end of the range. This resulted in a very complicated system.
There are many common problems
that lead to error in the orifice plate
measurement, including inaccurate
density, impulse-line problems, erosion
of the orifice plate, and an inadequate
number of pipe diameters upstream
and downstream of the orifice plate.
An accurate density is required to
obtain an accurate flowrate. In a plant
that has a process feed that varies
from as low as 12% to as high as 30%
water, the density changes significantly, and therefore an orifice meter
will not provide an accurate reading
without density compensation.
Impulse line problems include
plugging, freezing due to loss of electric heat tracing, and leaking. Condensate filling the impulse lines in
vapor/gas service and gas bubbles
in the impulse lines in liquid service
are also commonly cited. Figure 11
shows a pipe just upstream of an orifice that was in “clean” water service
for two years. There was a filter just
upstream of this section of pipe. The
impulse lines to the orifice-plate flowmeter were completely plugged. This section of pipe was removed and a Teflon-lined magnetic flowmeter was installed instead.
Figure 12. Vortex meters contain a shedder bar that creates vortices downstream when fluid flows past it (left). Depending on the application and pipe size, vortex shedding meters are available in a range of sizes and shapes (right)
Orifice plates can erode, especially
in vapor service with some entrained
liquid. This is common in steam service, and orifice plates should be
checked every three years for wear.
Orifice plates generally need 20
pipe diameters upstream and 10 pipe
diameters downstream of the orifice
plate for the velocity profile to fully
develop for predictable pressure-drop
measurement. This requirement varies with the orifice type and the piping
arrangement. This is rarely achieved
in a plant, which introduces error in
the measurement.
The instrument accuracy of orifice
plates ranges from ±0.75–2% of the
measured volumetric flowrate. Various problems are encountered with
orifice plate installations, and they
have the highest error of all flowmeters. “Orifice plates are, however, quite
sensitive to a variety of error-inducing conditions. Precision in the bore
calculations, the quality of the installation, and the condition of the plate
itself determine total performance.
Installation factors include tap location and condition, condition of the
process pipe, adequacy of straight
pipe runs, gasket interference, misalignment of pipe and orifice bores,
and lead line design. Other adverse
conditions include the dulling of the
sharp edge or nicks caused by corrosion or erosion, warpage of the plate
due to water hammer and dirt, and
grease or secondary phase deposits
on either orifice surface. Any of the
above conditions can change the orifice discharge coefficient by as much
as 10%. In combination, these problems can be even more worrisome and
the net effect unpredictable. Therefore, under average operating conditions, a typical orifice installation can be expected to have an overall inaccuracy in the range of 2 to 5% AR (actual reading)" [6].
Figure 13. The magnetic-flowmeter principle states that the voltage induced across a conductor as it moves at right angles through a magnetic field is proportional to its velocity
Vortex shedding meters
Vortex shedding meters contain a
bluff body, or a shedder bar, that creates vortices downstream of the object
when a fluid flows past it. The meters
utilize the principle that the frequency
of vortex generation is proportional
to the velocity of the fluid. The whistling sound that wind makes blowing
through tree branches demonstrates
the same phenomenon.
The fluid’s density and viscosity are
used to set a “k” factor, which is used
to calculate the fluid velocity from
the frequency measurement. The frequency, or vibration, sensor can either
be internal or external to the shedder
bar. The velocity of the fluid is converted to a mass flowrate using the
fluid density. Therefore, accurate fluid
density is important for accurate measurements. Vortex meters work well
both in liquid and gas service. They
are commonly used in steam service
because they can handle high temperatures. They are available in many
different materials of construction and
can be used in corrosive service.
Vortex meters have lower pressure
drop and higher accuracy than orifice
plates. A minimum Reynolds number (Remin) is required to achieve the
manufacturer’s stated accuracy. Vortex
meters exhibit non-linear operation
as they transition from turbulent to
laminar flow. Typical accuracy above
the Remin is 0.65–1.5% of the actual
reading. In general, the meter size
must be smaller than the piping size
to stay above the Remin throughout
the desired span. The requirements
for straight runs of pipe upstream and
downstream of the meter vary, but
both are usually longer than for orifice
plates. In general, 30 pipe diameters
are required upstream and 15 pipe
diameters downstream. The upstream
and downstream piping must be the
same size pipe as the meter.
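Checking the Remin requirement is a one-line calculation; in the Python sketch below, the Remin value is assumed for illustration, and the real limit should come from the manufacturer:

def reynolds_number(rho, velocity, diameter, viscosity):
    # Re = rho * v * D / mu, in consistent units (kg/m3, m/s, m, Pa*s)
    return rho * velocity * diameter / viscosity

RE_MIN = 20_000                                  # assumed vendor minimum
re = reynolds_number(800.0, 0.3, 0.05, 5e-3)     # a slow, viscous stream
if re < RE_MIN:
    print(f"Re = {re:,.0f} is below Re_min; stated accuracy does not apply")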
There are only a few problems commonly encountered with vortex meters. Older models may be sensitive to
building vibrations, but newer models
have overcome this issue. If the shedder bar becomes coated or fouled, the
internal vibration sensor will cease to
work. This can be avoided by using an
external vibration sensor. The most
common issue is failing to meet the
Remin requirements over the desired
span. At one plant, every vortex meter
was line-sized, which means it was
the same size as the surrounding piping. The flow went into the laminar
region in the desired measured range
in every case. The flow read zero when
it transitioned to laminar, making the
meters useless.
Example. Another good example of
failing to meet the Remin requirements over the desired span happened on a project where a tower that
had been out of service for some time
was recommissioned. The distillate
flowrate was substantially lower than
the original tower design and was in
the laminar flow region over the entire operating range. The distillate
flow was a major control point on the
tower, but the vortex meter could not
read the flowrate. The control strategy had to be changed to work around
this issue until an appropriate meter
could be installed.
Magnetic flowmeters
Faraday’s law states that the voltage
induced across any conductor as it
moves at right angles through a mag-
netic field is proportional to the velocity of that conductor. This is the principle used to measure velocity in magnetic flowmeters, which are commonly referred to as mag meters (Figure 13).
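As a sketch of the conversion chain (illustrative only, not any vendor's implementation): the meter infers velocity from the induced voltage, and volumetric flow then follows from the bore area:

import math

def volumetric_flow(velocity_m_s, bore_m):
    # Q = v * A; the velocity itself comes from Faraday's law, since the
    # induced voltage is proportional to B * D * v
    area = math.pi * bore_m ** 2 / 4.0
    return velocity_m_s * area

print(round(volumetric_flow(2.0, 0.1), 4))   # 0.0157 m3/s for 2 m/s in a 0.1-m bore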
Mag flowmeters measure the volumetric flowrate of conductive liquids. Fluids like pure organics or deionized water do not have a high enough conductivity for a mag meter. An accurate density is required to convert the volumetric flowrate to a mass flowrate. The meters are line-sized, but they have a minimum and maximum velocity to achieve the stated instrument accuracy. A smaller line size may be necessary to achieve the velocity requirements throughout the desired span. The instrument accuracy is quite good, generally at ±0.5% of the actual reading. The error is very high below the minimum velocity. Turndown for newer mag meters is 30:1, but older models will be closer to 10:1.

Mag meters do not have a lot of operating problems. They must be liquid-full to get an accurate reading and are often placed in vertical piping to achieve this. They rarely plug, as they can be specified with Teflon liners, and are often used in slurry service. Mag flowmeters are more expensive to install because they usually require 110-V power.

Figure 14. Mass flowmeters use the Coriolis effect to infer mass flowrate from the measurement of flowtube deflection

Mass flowmeters
Mass flowmeters use the Coriolis effect to measure mass flowrate and density. A very small oscillating force is applied to the meter's flowtube, perpendicular to the direction of the flowing fluid. The oscillations cause Coriolis forces in the fluid, which deform or twist the flowtube. Sensors at the inlet and outlet of the flowtube measure the change in the geometry of the flowtube, which is used to calculate the mass flowrate. The oscillation frequency is used to measure the fluid's density. The temperature of the fluid is measured to compensate for thermal influences and can be chosen as an output of the meter.

The original mass meters were U-tubes, but several different shapes are now available, including straight tubes as shown in Figure 14. Mass flowmeters have the highest accuracy of all the different types of flowmeters, usually ±0.1–0.4% of the actual reading. The measurement is independent of the fluid's physical properties, making mass flowmeters unique in that most flowmeters require the fluid density as an input. Mass flowmeters are insensitive to upstream and downstream pipe configurations. Practical turndown is 100:1, although the manufacturers claim 1,000:1. The density measurement is not as accurate as that of a density meter. Mass flowmeters are generally very reliable and only require periodic calibration to zero them.

Mass flowmeters are on the expensive end to purchase and to install. They require 110-V power. Pressure drop can sometimes be an issue, and the meters are only available in line sizes up to 6 in. Coating of the inside of the flowtube will result in higher pressure drop and can result in loss of range and accuracy if the tube is restricted. Wear and corrosion can result in a gradual change of the mechanical characteristics of the tube, resulting in error. Zero stability was an issue with older meters, but this problem has been solved in newer units.

Example 1. The reflux flowrate on a final product column was an important measurement, and the reliability of the existing flowmeter was questioned. Product literature for mass flowmeters promised high accuracy and low pressure drop. The plant-area engineer coordinated a small project to replace the existing orifice-plate flowmeter with a mass flowmeter. Column performance was very poor after startup. The new meter had to be bypassed to operate the column normally. The overhead condenser was gravity drained, and the new mass flowmeter had enough additional pressure drop to force the liquid level into the condenser tubes and restrict rates — an expensive lesson for a new engineer.

Example 2. Another tower had a mass flowmeter installed on the bottoms flow, which was pumped but not cooled. The mass flowmeter always had erratic readings and was never believed. A closer examination of the system revealed enough pressure drop through the mass flowmeter to result in flashing in the flowtube. The two-phase flow caused the erratic readings.

Epilogue
With a knowledge of the basics of column instrumentation, the question posed in the introduction should seem trivial. Our experienced engineer had concluded that the bottoms flowrate of the column had to be erroneous, but the instrument group had disagreed. The flowmeter in question was a mass flowmeter in relatively clean and non-corrosive service. The other three flowmeters on the column were orifice plates and are known to have a myriad of problems that introduce error.

Summary
Some basic knowledge of instrumentation can be a very valuable troubleshooting and design tool. Gauging whether an instrument installation will ever give accurate readings or whether it is an expensive spool piece is useful in itself. Being able to assess the relative accuracy of two measurements will help determine from which data to draw conclusions. Knowledge of common instrument problems can help in troubleshooting.
Figure 15. With an understanding of the accuracies of mass flowmeters and orifice flowmeters, we revisit the question — which flowmeter is the most accurate? The mass flowmeter is most likely to be the most accurate, even at low flowrates: it is insensitive to piping configuration, and unless flow is erratic due to flashing in the line, or the tube is corroded or severely plugged, the orifice plates are much more likely to have higher error

Get to know the instrumentation on your towers. Gather the manufacturer's information so you can assess
the instrument accuracy. Keep in
mind that the manufacturer’s literature refers to the ideal instrument
accuracy, which is the accuracy of the
measuring device itself. There are
many other factors that contribute
to the accuracy of the reading that is
displayed on the DCS screen or in the
data historian. The total accuracy includes the instrument accuracy plus
all of the other things that contribute
to error in the measured reading as
compared to the actual value. Other
inaccuracies lie in digital to analog
conversions, density errors, piping
configurations, calibration errors, vibration errors, and the list goes on
and on. Check the field installation
to see what types of problems your
meters will experience.
Get to know your mechanics and instrumentation experts at your plant.
Now that you know some of the lingo
of instrumentation, you can better
converse with your instrument engineers and mechanics.
Acknowledgements
This paper is a compilation of instrumentation basics obtained from the references listed below, of troubleshooting experience from many colleagues at DuPont, and of troubleshooting examples from Henry Kister's most recent book, "Distillation Troubleshooting." Much of the technical information and many of the examples come from Nick Sands, Process Control Leader for DuPont Chemical Solutions Enterprise in Deepwater, N.J. Nick has worked for DuPont for 17 years and is a specialist in process control. In addition to Nick, the following DuPont colleagues contributed their instrument war stories, and the author is grateful for their willingness to share their experiences:
• Jim England, DuPont Electronic Technologies (Circleville, Ohio)
• Charles Orrock, DuPont Advanced Fibers Systems (Richmond, Va.)
• Adrienne Ashley, DuPont Advanced Fibers Systems (Richmond, Va.)
• Joe Flowers, DuPont Engineering Research & Technology (Wilmington, Del.)
References
1. Gillum, Donald R., "Industrial Pressure, Level and Density Measurement," Resources for Measurement and Control Series, ISA, 1995.
2. Kister, Henry Z., "Distillation Troubleshooting," John Wiley & Sons, 2006.
3. Spitzer, David W., "Industrial Flow Measurement," Resources for Measurement and Control Series, ISA, 1990.
4. Trevathan, V. L., editor, "A Guide to the Automation Body of Knowledge," ISA, 2006.
5. emersonprocess.com/rosemount
6. omega.com
7. efunda.com
8. us.endress.com
9. spiraxsarco.com

Author
Author
Ruth Sands is a senior consulting engineer for DuPont Engineering Research & Technology (Heat, Mass & Momentum Transfer Group, 1007 Market St., B8218, Wilmington, DE 19898; Phone: 302-774-0016; Fax: 302-774-2457; Email: ruth.r.sands@usa.dupont.com). She has
specialized for the last nine
years in mass transfer unit
operations: distillation, extraction, absorption,
adsorption, and ion exchange. Her activities
include new designs and retrofits, pilot plant
testing, evaluation of flowsheet alternatives, and
troubleshooting. She has 17 years of experience
with DuPont, which includes assignments in
process engineering, manufacturing, and corporate recruiting. She holds a B.S.Ch.E. from West
Virginia University, is a registered professional
engineer in the state of Delaware, and is a member of the FRI Executive Committee.
Control valve
position sensors
Department Editor: Scott Jenkins
The precise monitoring and control of valve position is essential for efficient automation of both discrete and continuous processes. Measurement of valve position provides the data required for the use of advanced control strategies and predictive-maintenance algorithms.
More effective monitoring of valve position has been an area in which considerable progress has been made in improving the performance and reliability of control valves. Modern electrical valve-position indicators offer either mechanical or non-contact switching. The position indicators are typically mounted directly on a valve actuator, or work indirectly using a non-contact remote feedback device.
FIGURE. Limit-switch options: mechanical switch, reed proximity switch and solid-state sensor (NC, NO and COM terminals shown)
Mechanical switches
Most mechanical-switch valve positioners (Figure, top left) utilize some type of rotary potentiometer for converting linear to rotary feedback. These widely used devices are similar to variable resistors. Rotary potentiometers have an arched coil of wire over which an arm, called a wiper, slides. The wiper is attached to the valve cam shaft, and as it moves across the coil of wire, a varying voltage output is produced. The voltage output is proportional to the angle at which the wiper is oriented.
Mechanical switches include contact linkages that are subject to wear over time. The
wear can eventually degrade performance.
Non-contact proximity positioners
Non-contact technology approaches to valve positioning can provide accurate valve-position data without the need for the linkages or levers required by traditional systems. Avoiding mechanical contact in the valve-positioning system addresses some of the performance and cost challenges associated with control valves, including mechanical wear, environmental hazards, human error and inaccurate readings.
Many non-contact proximity positioners (Figure, top right) incorporate a control-loop feedback mechanism based on an analog PID (proportional-integral-derivative) algorithm that has been updated for a digital device. The algorithm incorporates the Ziegler-Nichols (Z-N) tuning procedure, a well-known method for tuning automatic controllers. It is a two-step tuning approach that adjusts how aggressively the valve controller reacts to errors between the process variable and the desired setpoint.
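As a rough sketch of the second step of the Z-N procedure, the classic closed-loop tuning rules convert the ultimate gain and ultimate period found in the first (oscillation) step into PID settings. The function name and example numbers below are illustrative, not taken from any positioner's firmware:

def zn_pid_settings(k_u, t_u):
    """Classic Ziegler-Nichols PID settings from the ultimate
    gain k_u and ultimate period t_u (in seconds) observed at
    sustained oscillation."""
    return {
        "Kp": 0.6 * k_u,    # proportional gain
        "Ti": 0.5 * t_u,    # integral (reset) time, s
        "Td": 0.125 * t_u,  # derivative (rate) time, s
    }

# Example: k_u = 4.0 and t_u = 10 s give Kp = 2.4, Ti = 5 s, Td = 1.25 s
print(zn_pid_settings(4.0, 10.0))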
Hall-effect sensors
A number of non-contact proximity positioners are based on the solid-state Hall effect, and are used to help improve monitoring and control of production processes. The Hall effect refers to a potential difference, known as the Hall voltage, between opposite sides of an electrical conductor through which an electric current is flowing. The Hall effect is created when a magnetic field is applied perpendicular to the current direction.
A sensor using the Hall effect is a transducer that returns a voltage output according to changes in the magnetic field. For valve-position sensing, an integrated Hall-effect sensor and magnet assembly detect the presence, absence and orientation of a magnetic trigger. The sensor is powered by a constant current, and develops a varying electrical potential that is proportional to the flux density of a magnetic field applied perpendicular to the axis of the sensor.
Hall-effect proximity sensors used for valve positioning offer increased reliability in extreme environments. These sensors eliminate all mechanical contact between the valve actuator and the transmitter. Because there are no moving parts within the Hall-effect sensor and magnet, the life expectancy is improved compared to a traditional electromechanical switch.
Reed switches
Some non-contact valve positioners are based on reed switches. A reed switch is an electrical switch that is operated by an applied magnetic field. Reed switches have a pair of electrical contacts on ferrous-metal reeds in a hermetically sealed glass envelope. An applied magnetic field moves the reeds, causing the contacts to either touch or move apart. The contacts can be either normally open, closing when a magnetic field is present, or normally closed, opening in the presence of a magnetic field. Bifurcated reed switches can be used in applications where ultralow-power or capacitive-discharge considerations are in effect.
Benefits
Significant benefits of non-contact valve positioners include the following:
Greater flexibility — Non-contact positioners utilizing Hall-effect sensors provide feedback on valve position without linkages, levers or rotary or linear seals. This allows a remote sensor-head assembly to be mounted a considerable distance from the electronics enclosure, giving engineers increased flexibility and improved safety.
Improved reliability — Safety integrity level (SIL) ratings are higher with non-contact sensors and low-power solenoids. SILs are a measure of safety-system performance. Higher SIL numbers mean better safety performance and higher confidence in the field device.
Lower costs — Non-contact valve positioners have a lower overall total cost of ownership than conventional devices, thanks to the precise positioning capabilities that can be customized by valve application. The cost of ownership is also lowered by ease of calibration and service, and by rich diagnostics for predictive-maintenance signatures.
Increased versatility — Non-contact valve positioners are designed to be compatible with most standard industrial communications protocols, including HART, Foundation Fieldbus, AS-i, Modbus, DeviceNet and Profibus. These devices can help engineers take advantage of the cost savings and increased diagnostic capabilities of networks, along with the advantages offered by improved position sensors.
Notes
This edition of “Facts at Your Fingertips” was adapted from Jack DiFranco’s article, entitled “Advances in Valve Position Monitoring,” which appeared in the December 2007 issue of Chemical Engineering, pp. 46–50.
Control Valve
Performance
Department Editor: Scott Jenkins
Minimizing process variability is an important component of a plant’s profitability. The performance of control valves within process control loops has a significant impact on maintaining consistent processes. This refresher outlines some of the important aspects of control valve performance, including parameters of both the static response and the dynamic response.
Static response
A valve’s static response refers to measurements that are made with data points
recorded when the device is at rest. Key
static-response parameters for control valves
include travel gain, dead band and resolution (Figure 1).
Travel gain (Gx). This term represents the
change in position of the valve closure
member divided by the change in input
signal. Both quantities are expressed as
a percentage of the full valve span. The
closure member is part of the valve trim (the
combination of flow-control elements inside
a valve). Travel gain measures how well the
valve system positions its closure member
compared to the input signal it receives.
Without signal characterization in the valve
system, the travel gain should be 1.0. [1]
Dead band. This term can be defined as
the range through which an input signal
may be varied, with reversal of direction,
without initiating a response (an observable
change in output signal). With respect to
control valve performance, if the process
controller attempts to reverse the position of
the control valve, the valve will not begin
to move until after the controller output has
reversed an amount greater than the dead
band. A large dead band will negatively
impact control performance.
Resolution. This term can be defined as the
minimum amount of change in valve shaft
position when an input is applied. Resolution will cause the control valve to move
in discrete steps in response to small, step
input changes in the same direction. This
occurs as the valve travel sticks (when the
starting friction on the valve shaft is greater
than the friction when the shaft is in motion).
Similar to dead band, a larger resolution
will negatively impact control performance.
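To make the travel-gain definition concrete, here is a minimal sketch using steady-state averages like those in the step test of Figure 2, discussed below; the function name is ours:

def travel_gain(input_start, input_end, stem_start, stem_end):
    """Travel gain Gx = change in stem position divided by change
    in input signal, both in percent of full valve span."""
    return (stem_end - stem_start) / (input_end - input_start)

# Figure 2 steady-state averages: input 35.67 -> 37.84%,
# stem 35.67 -> 37.65%, giving Gx of about 0.91 (the ideal is 1.0)
print(round(travel_gain(35.67, 37.84, 35.67, 37.65), 2))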
Dynamic response
Dynamic response for a control valve is the
time-dependent response resulting from a
time-varying input signal.
Dead time. This term refers to the time after
the initiation of an input change and before
the start of the resulting observable response.
Step response time. This term represents
the interval of time between initiation of an
input-signal step change and the moment
that the dynamic response reaches 86.5%
of its full, steady-state value [1]. The step
response time includes the dead time before
the dynamic response.
Overshoot. This term is the amount by which
a step response exceeds its final, steady-state
value. Overshoot is usually expressed as a
percentage of the full change in steady-state
value. Figure 2 shows the dead time, step
response time and overshoot for a control valve response to a step input change. In this case, stem position in percent of travel is used as the control valve “output.”
Step-change size. The dynamic response of a control valve varies depending upon the size of the input step change. Four “ranges” of step sizes, which help in understanding the static- and dynamic-response metrics, are defined by ANSI/ISA standards:
• Small input steps (Region 1) that result in no measurable movement of the closure member within the specified wait time
• Input step changes that are large enough to result in some control-valve response with each input-signal change, but the response does not satisfy the requirements of the specified time and linearity (Region 2)
• Step changes that are large enough to result in flow-coefficient changes, which satisfy both the specified maximum response time and the specified maximum linearity (Region 3)
• Input steps larger than in Region 3, where the specified magnitude-response linearity is satisfied but the specified response time is exceeded (Region 4)
Region 1 is directly related to dead band and resolution. Region 2 is a highly nonlinear region that causes performance problems and should be minimized. Region 3 is the range of input movements that are important to control performance [1].
FIGURE 1. Dead band and resolution, illustrated here, are key static-response parameters for control valves (c ≤ dead band < d; a < resolution ≤ b; dynamics are not shown)
FIGURE 2. This graph shows the response of a control valve to a step input (reprinted with permission from EnTech Control Valve Dynamic Specification V3.0). Initial steady-state average values, input and stem = 35.67; final steady-state average values, input = 37.84, stem = 37.65; travel gain = 0.91; dead time, Td = 1.6 s; 86.5% of response, T86 = 2.06 s; time to steady state, Tss = 18.3 s; initial overshoot to 38.11 = 23%
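To make the dynamic-response definitions concrete, the following minimal sketch extracts dead time, T86 and overshoot from a recorded step test. It assumes numpy arrays t (time in seconds from initiation of the input step) and y (stem position in percent of span); the function name and the noise threshold are our assumptions:

import numpy as np

def step_metrics(t, y, y0, y_final, noise_band=0.05):
    """Dead time, 86.5% response time (T86) and overshoot, per the
    definitions above. y0 and y_final are the initial and final
    steady-state averages; noise_band is the smallest movement
    counted as an observable response, in percent of span."""
    span = y_final - y0
    moved = np.abs(y - y0) > noise_band
    t_dead = t[moved][0] if moved.any() else float("nan")
    hit86 = np.abs(y - y0) >= 0.865 * abs(span)
    t86 = t[hit86][0] if hit86.any() else float("nan")
    overshoot_pct = 100.0 * (y.max() - y_final) / span
    return t_dead, t86, overshoot_pct

# Figure 2 example: a peak of 38.11 with y0 = 35.67 and
# y_final = 37.65 gives 100*(38.11 - 37.65)/1.98, or about 23%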
Process gain
Process gain is the ratio of the change in
a given process variable to the change in
controller output that caused the change.
To achieve effective process control, the process gain should ideally fall within a certain
range, and should be consistent throughout
the operating range of the valve. When the
process gain is too high, valve non-linearities are amplified by the process gain and
process control performance deteriorates.
When the process gain is too low, the
range of control is reduced. Changes in the
process gain over the range of operation
result in poorly performing regions in the
closed-loop controller response.
Two control-valve features impact process
gain: the size of the valve trim and the inherent flow characteristic of the valve. If the
valve trim is oversized, the process gain will
be higher than it would be for an appropriately sized valve. The valve’s flow characteristic refers to the curve relating percentage of flow to percentage of valve travel. The inherent flow characteristic applies when constant pressure drop is maintained across the valve. Whether the characteristic is linear, quick-opening or equal-percentage will impact both the magnitude and the consistency of the process gain over the operating range [1]. Good control-valve performance depends on proper valve sizing and trim characteristics.
References
1. Beall, James, Improving Control Valve Performance, Chem. Eng., Oct. 2010, pp. 41–45.
2. Emerson Entech, Control Valve Dynamic Specification, Version 3.0, November 1998.
3. Hoop, Emily, Control Valves: An Evolution in
Design, Chem. Eng., August 2012, pp. 48–51.
4. Ruel, M., A simple method to determine control
valve performance and its impacts on control
loop performance, Top Control Inc., Swanton,
Vt., white paper, 2001.
5. International Society of Automation (ISA) and
American National Standards Institute (ANSI).
ANSI/ISA-TR75.25.02-2000, Control Valve Response Measurement from Step Inputs, 2000.
6. Neles-Jamesbury Inc., “The Valve Book,” Neles-Jamesbury, Worcester, Mass., 1990.
7. Skousen, Philip L., Valve Handbook, McGraw
Hill, New York, 1998.
Editor’s note: Portions of this page were adapted
from the article in Ref. 1.
Environmental Manager
Common Mistakes When Conducting a
HAZOP and How to Avoid Them
An important part of ensuring the success of a HAZOP study is to understand the
errors that can cause the team to lose focus
Arturo Trujillo, Walter S. Kessler
and Robert Gaither
Chilworth, a DEKRA Company
Since its inception in the 1960s
and its first official publication in 1977, the Hazard and
Operability Study (HAZOP)
has become one of the most powerful tools for identifying process hazards in the chemical process industries (CPI). Utilizing systems that are
qualitative or even simplified semiquantitative, the HAZOP method has
been increasingly used, not only as a
tool for identifying process hazards,
equipment deficiencies or failures and
operability problems and assessing
their risks, but also as a tool for prioritizing actions and recommendations
for process-risk reduction. Reducing
risk is especially important in ensuring the safety of the personnel who
must work in the plant environment
each day (Figure 1).
The HAZOP methodology is a systematic team-based technique that
can be used to effectively identify
and analyze the risks of potentially
hazardous process operations. It is
the most widely used process hazard
analysis (PHA) technique in numerous industries worldwide, including
petrochemicals,
pharmaceuticals,
oil-and-gas and nuclear, and is used
during the design stages of new processes or projects, for major process
modifications and for periodic review
of existing operations.
A HAZOP is a time-consuming exercise and should be conducted in
such a way to ensure that the results
justify the effort. This article presents
some common mistakes that can
jeopardize a HAZOP team’s task. Frequent or chronic occurrence of these
mistakes indicates potential gaps in
the site’s process-management system. However, it is ultimately the responsibility of the HAZOP facilitator
FIGURE 1. HAZOP studies are useful tools in reducing process risk, and they provide safeguards against
hazardous scenarios for the personnel who must maintain and operate the plant
to correct these mistakes if or when
they occur during the course of the
HAZOP study. Therefore, the selection of an experienced facilitator is
an essential element for assuring the
success of the HAZOP. Without an
adequate depth of knowledge and
experience, the HAZOP can become
a “check the box” exercise.
Benefits of a HAZOP
The advantages offered by HAZOP
over other process-risk analysis
tools are numerous, and include
the following:
• It is a rigorous process; it is structured,
systematic and comprehensive
• It is adaptable to the majority of CPI
and manufacturing operations, in-
cluding those in petroleum refineries (Figure 2) and other oil-and-gas
processing plants, nuclear facilities, and specialty chemical, pharmaceutical and even high-speed
manufacturing plants
• It is team-based and allows the
interchange of knowledge and experience between the participants
• It helps to anticipate potential accidents or harm to employees, the
facility, the environment and the
surrounding community
• It functions as a type of training for
the team’s participants and leader,
who are required by the nature of
the method to look at the process
from a new perspective — not just
from the perspective of “how should
it run?,” but also “how can it fail to
run correctly?”
A HAZOP is time-consuming because it requires the participation
of a multi-disciplinary team over extended timeframes. This investment
of time and personnel, often involving third parties, means that the performance of the HAZOP needs to be
optimized to maximize its value. The
following sections detail some commonly found mistakes that occur
during the planning, execution and
followup stages of a HAZOP.
Planning stage
Mistake 1: Mismanagement of
time-allotment issues. One of the
most frequent mistakes of a HAZOP
is failure to manage the time allotted for the study. A HAZOP is often
scheduled for a set amount of time,
neither by the HAZOP facilitator nor
the team, and sufficient time may not
have been allocated. Furthermore,
there may be little or no flexibility in
the schedule. An insufficient amount
of time for the HAZOP limits discussion and brainstorming and reduces
the quality of the analysis, in turn
leading to some of the mistakes discussed in more detail below.
Estimating the duration of a
HAZOP is not an exact science, and
it requires a good knowledge of the
methodology, the complexity of the
process, the nature of the risks that
can be identified up front and the idiosyncrasies of the group. Although a
HAZOP should not be open-ended in
time allotment, the ideal HAZOP has
some flexibility built into the schedule. The team leader should make an
FIGURE 2. Many processes in the CPI are potentially hazardous if not managed correctly. HAZOP studies
seek to prioritize actions to reduce process risks, and are adaptable across a wide range of
industrial sectors
estimate of the time required for the
team based on the process description and preliminary count of HAZOP
nodes (specific portions or topics of
the study process) so that managers
are aware of the degree of personnel
commitment that will be required.
Mistake 2: Incomplete, inaccurate
or unavailable process safety information. Another common mistake during a HAZOP is not having
all the prerequisite process safety
information (PSI) and other valuable
information available, including outof-date or incomplete information.
This is especially critical regarding
piping and instrumentation diagrams
(P&IDs), current standard operating
procedures (SOPs) and appropriate
data on flammability, combustibility,
reactivity, toxicity and electrostatic
properties of materials in all forms
and phases, as well as compatibility of chemicals with each other and
with the processing equipment. If the
HAZOP is conducted by an external
facilitator, it is the responsibility of the
owner of the process to verify the integrity of the PSI.
Related to this, it is not acceptable
that participants attend the HAZOP
for the purpose of obtaining information on a process or project. HAZOP
participants should be well prepared
to contribute to the discussion and
have all requisite background information with them. It is the responsibility of the facilitator to instruct all
participants that they must come to
the HAZOP prepared.
Mistake 3: Incorrect size of HAZOP
team. The HAZOP team should be
limited in size, ideally five to seven
people, excluding the HAZOP facilitator and the HAZOP scribe or secretary. A team that is too large can
easily lose focus, dwell on a subject
or issue too long, or be disruptive. It
is human nature that all participants
seek to present their perspectives,
but this can lead to excessive discussion. A group that is too small will
not likely include the right expertise
or provide enough different perspectives to evaluate the process hazards and controls adequately or in
the right detail.
Execution stage
Mistake 4: Lack of focus during
the meeting. A HAZOP is a complex exercise that requires the concentrated and coordinated contribution of all the members of the team.
Distractions should be minimized
in order to ensure and maintain the
team’s focus. Therefore, team members should not be allowed to come
and go into and out of the meeting,
take phone calls, answer emails,
or discuss issues not related to the
HAZOP during the sessions. Use of
an offsite venue may be helpful to
prevent plant operations from becoming a distraction.
It is the responsibility of the HAZOP
facilitator to maintain the focus of the
group and keep the HAZOP process
moving by allowing some open discussion on the issue, node and con-
FIGURE 3. It is crucial that a HAZOP be explicitly targeted for the specific process in question, and not
based on previous HAZOPs for similar processes, as process safety information and controls may have
recently changed
sequence at hand, but not letting it
get out of control. Sufficient (but not
excessive) breaks for participants to
eat and drink and conduct activities
not related to the HAZOP, such as
checking their emails and voicemails, should be planned and coordinated. The HAZOP room should
be free from cellphones, and distractions like texting during the HAZOP
exercise should be forbidden.
Mistake 5: Preventing the team
from brainstorming. Another frequent mistake in HAZOPs is to restrict the brainstorming exercise,
which is, after all, the basis (and the
power) of the method. The most
common issues in this area include
the following:
• Omitting key words, parameters
or even nodes, with the argument
that an upper bound for the consequences in this node can be
easily identified, and these maximum consequences are protected
by safeguards. This clearly means
that steps or phases of the HAZOP
procedure will be skipped, and
some process hazards may not be
identified. This violates the HAZOP
methodology and overall purpose
of conducting the HAZOP in the
first place. On many occasions, strict application of the methodology will not identify any hazardous scenarios other than the obvious ones, which have already been listed up front and used as an argument for omitting further analysis. Nevertheless, sometimes a non-obvious scenario will be identified, and this is precisely the purpose of the HAZOP, and this is
where it demonstrates its power
• Carrying out a superficial review of
the combinations of key words and
parameters, listing the most obvious, and often repetitive, causes of
deviation without going into detail.
In other words, repeating the same
causes, parameter after parameter
and node after node, instead of
conducting a more in-depth analysis and discussion
• Carrying out HAZOPs using some
form of prior information — prebuilt templates or the HAZOP
from a similar project, for example.
Again, what the HAZOP is meant
to do is analyze the possible specific risk scenarios (especially the
non-obvious ones) of the process
or project being studied at the time
of the HAZOP (Figure 3). While one
can refer to, or reference previous
material, the HAZOP is to be conducted based upon the current
facility or process, and the equipment, process or controls may have
changed since the last HAZOP
In practice, the quality of a HAZOP
is influenced by the ability of the
HAZOP leader to ask the appropriate questions to ensure that the
team identifies all the hazards of the
process being studied, not only the
most obvious hazards. This ability
is based on the leader’s experience
with the HAZOP technique and his or
her technical skills in process-hazard
identification, as well as human error
and equipment failure potential. It is
the responsibility of the HAZOP facilitator to manage the team and
the HAZOP study process to ensure
that the team stays focused and that
no nodes or hazards are missed by
the team.
Mistake 6: Mistaking the tools for
the process. The HAZOP spreadsheet should not be viewed as a
questionnaire whose boxes all have
to be filled in, even with numerous
repetitions of scenarios. The combination of pairs of key words and parameters is not intended to be an end
in itself, but to encourage discussion
and identify deviations from the desired state. As would be expected,
the same deviation generally causes
the alteration of more than one process parameter, and therefore could
be entered in more than one place in
the spreadsheet. An obvious example is a distillation column, in which
pressure, temperature, composition
and flowrate (of reflux, for example)
are clearly interrelated. Hence, any
change in one of the parameters
automatically causes responses and
changes in the others.
It is not as important for all the
spreadsheet “boxes” to be filled in
as it is for the HAZOP group to work
effectively in identifying all the possible deviations. A HAZOP table is
not and should not be a form-filling
exercise. Rather, it should guide and
structure strategic brainstorming discussion with the intent of identifying
all hazards and operability problems
that may injure employees (Figure 4),
cause damage to property and assets, impact the community or cause
environmental damage.
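As an illustration of how the key-word/parameter pairs seed, rather than replace, the team's discussion, here is a minimal sketch that generates deviation prompts for one node. The guide words and parameters listed are common examples, not a prescribed set, and the node name is hypothetical:

from itertools import product

GUIDE_WORDS = ["No", "More", "Less", "Reverse", "Other than"]
PARAMETERS = ["flow", "pressure", "temperature", "level"]

def deviation_prompts(node):
    """Pair each guide word with each parameter to seed the team's
    brainstorming for one node. Not every pair is meaningful; the
    discussion, not the filled-in table, is the point."""
    for word, param in product(GUIDE_WORDS, PARAMETERS):
        yield f"{node}: {word} {param}? Causes, consequences, safeguards?"

for prompt in deviation_prompts("Reflux line to column C-101"):
    print(prompt)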
Mistake 7: Misrepresenting or
misunderstanding
safeguards.
Documentation of effective and appropriate safeguards is a key step
in the PHA team’s decision whether
additional process-risk reduction
is required for a specific scenario.
Examples of safeguards that are
neither effective nor appropriate are
given below:
• Local instruments that are never
checked by field operators
• Alarms that fail to give the operator
sufficient time to effectively halt the
consequences of the deviation.
Examples include the following:
❍ Alarms that fail
❍ Very generic alarms that are activated in numerous different situations. In this case, the operator has to diagnose which of the multiple options he or she is faced with, thereby losing valuable time for action
❍ Alarms that are activated frequently, often for trivial reasons, and that therefore tend to be ignored by the operators
❍ Alarms where no specific operator response has been given in procedures and training
❍ Cascades of alarms, where “first-in” is not obvious or indicated
• Pressure-relief systems (such as
safety valves and rupture discs) that
were not designed for the case and
process conditions being studied.
Obviously, the purpose of a HAZOP
is not to verify the correct design of
pressure-relief systems. Nevertheless, if there is reasonable doubt,
a recommendation should be issued to check that the scenario for
which it was listed as a safeguard
was one of the design cases for the relief device or system
Operating procedures cannot be
considered safeguards when the
cause giving rise to the scenario is
human error, which presupposes
that the procedure has not been followed properly
Mistake 8: Excessive recommendations. Some HAZOP groups believe that they should issue a recommendation for any scenario that has
negative consequences, whether a
hazard scenario, equipment failure or
operability problem. This is not in the
spirit of the HAZOP method. What a
HAZOP aims to do is identify all of
the hazardous scenarios, determine
the associated risk for each particular scenario and check whether the
process has been duly protected by
the safeguards, and only if there is
not adequate protection, propose
recommendations for doing so.
Mistake 9: Irrelevant recommendations. Sometimes, people will
suggest and utilize HAZOP recommendations as a way to obtain approval for an operational or plant
design improvement that is not necessarily directly related to the safety
of personnel or the release of a hazardous chemical. In many cases,
these changes have already been
evaluated and ruled out for various
reasons. While a HAZOP can and
should include recommendations related to operational and maintenance
issues, the HAZOP’s sole intent is for
the identification of issues, not to find
a solution to the problems or redesign the facility. All recommendations
are made for further investigation and
design considerations. Therefore, the
actual HAZOP is not the best time
or place to deal with these types of
issues. They should be further investigated offline in the correct setting,
and should include the appropriate
personnel in the discussions.
Mistake 10: Excessively lax recommendations. When making recommendations in a HAZOP, it is very
important to utilize the proper wording. Since the HAZOP team is composed of knowledgeable people,
recommendations should be made
that involve action. Two words that
are highly over-utilized are “recommend” and “consider.” “Recommend” is already used in the title for
the column and most of the time, the
team’s brainstorming makes up the
“consideration” aspect of the recommendation being proposed. If additional risk analysis is required, “consider” is an appropriate phrase.
There are often multiple ways to reduce risk and the team’s time should
not be spent analyzing alternatives.
Another common phrase seen in
many HAZOP recommendations is
“Further study on what needs to be
done in order to...” — which in reality
is not specific and can be left open
for interpretation. Most of the time,
recommendations that involve an
action and have a specific purpose
should be made. Start recommendations with strong action words,
such as “install,” “investigate,”
“graph,” or “add.” Additionally, when
wording recommendations, if a recommendation is being made for a
specific reason, include that reason
in the recommendation so it is not
forgotten when the HAZOP report
is written or is being reviewed. The
following are good examples of wellworded recommendations:
• Install a pressure gage and transmitter on the overhead line “L12” of
the distillation column to increase
the SIL level from 1 to 2
• Graph the P/T curve for the reaction process and add the accept-
FIGURE 4. HAZOP studies intend to provide a comprehensive index of the hazards and operability
problems that may cause damage or put employees in danger
able operating range. Utilize this
chart to set appropriate process
alarm and shutdown points
It should be noted how these two
recommendations are very specific
action items and also include the
reason for the action.
On some occasions, there may
be two or more divergent opinions,
and a consensus cannot be reached
during the HAZOP itself. In this case,
both recommendations should be
included in the HAZOP and left for
further investigation or evaluation
by the company, based upon the
information from the HAZOP. For
scenarios such as these, the best
solution — after further investigation
and research is completed — may
be something not even mentioned or
thought about in the HAZOP itself.
Again, it should be reiterated that
except for a few unique situations,
such as the divergent opinion case,
recommendations should be clear,
specific, not open to interpretations
and include the reasoning at the time
that the HAZOP was conducted.
Mistake 11: Trying to solve the recommendation or design the solution during the HAZOP. Another
common mistake that can delay the
HAZOP and cause the group to lose
its focus is trying to solve the problem or redesign the process listed
in the recommendation during the
HAZOP study itself. This is most
common when process-design engineers are team members and they
desire to make the process perfect.
Unless it is a clear and easy solution,
many recommendations require further investigation or other actions to
complete the task, alleviate or minimize the hazard, and close out the
action item based upon the recommendation. It must be remembered
that a HAZOP is a brainstorming
exercise with knowledgeable process personnel from different areas
of the plant, whose task is to identify
hazards or hazardous scenarios and
make practical recommendations
to alleviate or minimize the hazardous scenarios or consequences.
As previously stated, not all recommendations have clear-cut solutions,
and the HAZOP time should not be
wasted with actions that may require
research and further investigation
that only one of the participants, or
a qualified expert, can resolve in the
quiet of his or her own office. Even
HAZOP-recommended changes to a
process should be subjected to the
site’s management-of-change (MOC)
process to prevent the introduction
of new hazards. It is not uncommon
for an incident to be triggered by a
change made for safety reasons.
The HAZOP can and should result in
a list of actions or recommendations,
with the designation of someone responsible for carrying them out, but
not necessarily the final solution or
re-engineering of the plant.
Followup stage
The output of the HAZOP study is
the set of recommendations that are
usually presented to management in
a standardized report format. At this
stage, site management is responsible for responding to each recommendation according to local or site
requirements and the requirements
of applicable standards, such as the
U.S. Occupational Safety and Health
Administration (OSHA) Process Safety
Management (PSM) standard Title
29, CFR Part 1910.119. Site procedures should include regular followup
reports to track recommendations
to their resolution.
Mistake 12: Failure of management to act promptly on each recommendation. Site management
must evaluate each recommendation
according to its technical feasibility,
the risk-reduction benefit versus total
cost of implementation, availability of
alternative solutions and other factors. The PSM standard allows rejection of a PHA recommendation only
for specific causes. Good industry
practices dictate that management
takes prompt action on each recommendation and ensures that all recommendations are tracked to final
resolution and closure.
Mistake 13: Failure to update
HAZOPs when process knowledge
changes. A HAZOP worksheet is a living document. Ideally, it reflects management’s current knowledge of the
process hazards, the consequences
of those hazards and the controls
necessary to reduce the process risk
to a tolerable level. HAZOPs lose their
effectiveness over time when they are
not updated promptly.
Changes in process safety information should result in a PHA review
through the site MOC procedure. The
review will identify any new causes of
a process deviation or operability issues, changes in safeguards for previously documented hazard scenarios,
and possibly new or revised recommendations to address the hazards.
Recent accidents or near misses
on a site process, or a similar process
elsewhere, should trigger a HAZOP
review to ensure that the same or
similar scenario has already been
considered and documented during the most recent HAZOP and that
effective controls are in place to prevent a similar incident from occurring
in the future.
Additional applications
For the sake of simplicity, this article
has focused on common mistakes
observed during the use of the HAZOP
methodology. The discussion in this
article can be equally applied to other
scenario-based methodologies, such
as “what-if” analyses, which can be
carried out at very early stages of the
process lifecycle — HAZOP is typically reserved for late-design stage
or later-lifecycle stages when more
detailed PSI is available. The specific
PSI that is available and the expertise
needed for other hazard evaluation
methodologies may be different, but
the types of mistakes discussed here,
and their prevention, are very similar.
Closing thoughts
OSHA recognizes the HAZOP technique as an acceptable methodology
for conducting PHAs of processes
covered by the PSM standard. Other
regulators around the world also accept the HAZOP methodology as
appropriate for analyzing the existing and potential hazards of a complex process that involves a highly
hazardous substance.
The HAZOP methodology repre-
sents an extremely powerful tool for
the identification, semi-quantification
and mitigation of risks in CPI production facilities with continuous, batch
or semi-batch processes.
The biggest inconvenience of this
technique is its relatively high cost, in
terms of time and people who need
to be involved and participate in the
brainstorming sessions. This high
cost means that the HAZOP needs
to be carried out to optimum effect,
avoiding the sorts of mistakes that
have been discussed in this article. It
is the responsibility of the HAZOP facilitator to make sure the group stays
focused and does not commit any
of these mistakes. Finally, the selection of a knowledgeable and experienced PHA facilitator is a crucial element for assuring the success of the
HAZOP process.
■
Edited by Mary Page Bailey
Authors
Arturo Trujillo is managing director of Chilworth Amalthea, the
Spanish subsidiary of the process
safety division of DEKRA (Nàpols
249, 4ª planta 08013 Barcelona,
Spain; Phone: +34-931-426-029;
Email: arturo.trujillo@dekra.com).
He has facilitated more than 200
HAZOPs, and his specialities include SIL and LOPA. Prior to working at Chilworth, he served as a division manager at
Technip Iberia and as engineering director at Asesoría
Energética. He attended Universitat Politècnica de Catalunya and received a Ph.D. from Johns Hopkins University.
Walter S. Kessler is a senior process safety consultant at Chilworth
Technology Inc. (113 Campus
Drive, Princeton, NJ 08540;
Phone: 832-492-4358; Email:
walter.kessler@dekra.com).
Kessler has 20 years of experience in the petroleum refinery,
gas-processing, specialty-chemical, pharmaceutical, manufacturing and HVACR (heating, venting, air conditioning and
refrigeration) industries, including five years performing
process-safety engineering functions. He was instrumental in the design and construction of several refinery, gas and chemical processing facilities, designing a
pharmaceutical filling process and also has experience
in Six Sigma and lean manufacturing.
Robert L. Gaither is a senior process safety specialist at Chilworth
Technology Inc. (113 Campus
Drive, Princeton, NJ 08540;
Phone: 732-589-6940; Email:
robert.gaither@dekra.com).
Gaither has more than 28 years of
experience in company operations,
regulatory compliance, management consulting and process
safety and risk management. He has led organizations at
site, division and corporate levels to achieve record
safety performance and significant cost savings. Gaither
is trained in HAZOP and SIL/LOPA facilitation. He holds a
Ph.D. and is a certified safety professional (CSP).
Engineering Practice
Chemical Process Plants: Plan for Revamps
Follow this guidance to make the most of engineering upgrades that are designed to
improve plant operations or boost throughput capacity
Koya Venkata Reddy
FACT Engineering and Design Organization

The chemical process industries (CPI) are functioning in an era of globalization, and between the prevailing economic conditions and upheavals in the energy sector, the number of new investments in CPI facilities has fallen in recent years. Many industries are seeking cost reductions by revamping existing plants with minimum investment. The objective is to reduce the cost of production through the use of upgrades and new technologies, to remain competitive in the market. By way of example, if one wants to set up a new complex to produce ammonia and urea, the specific capital cost will be on the order of $666/ton of urea. By comparison, if an existing plant is revamped to raise production from 100% to 120% of rated capacity (that is, adding 20% additional capacity), this can be done at an expenditure closer to $300/ton of incremental production.
This article reviews key concepts, objectives and procedures that are needed to successfully carry out various types of CPI plant revamps.
FIGURE 1. Shown here are typical pump characteristic curves, with three different impeller sizes (530 mm minimum, 570 mm rated and 590 mm maximum), showing capacity versus head, efficiency, and NPSHR versus capacity

The need for revamps
Chemical process plant revamps are typically undertaken for the following reasons:
• To adapt to a change in feedstock composition
• To adopt energy-conserving processes in light of increasing energy costs
• To reduce the fixed-cost components of production, by increasing capacity within the existing facility
• To extend the life of a well-maintained process plant
Similarly, there are many benefits to conducting an appropriate plant revamp. These include the ability to:
• Increase the reliability of equipment, leading to reduced downtime and maintenance costs
• Reduce energy consumption
• Extend useful plant life
• Reduce the cost of production, thereby improving the overall bottom line for the facility
However, experience shows that inefficient implementation of proposed revamp options can lead to failure, so care must be taken to avoid this by building the right team of experts. This team typically includes representatives of the process licensor company, engineering and project-management consultants, and experts from the owner company representing diverse fields, such as operations, project management and maintenance. If sufficient expertise for the proposed revamp is not available internally, one can hire consultants to carry out the feasibility studies and implementation of the revamp on
a turnkey basis. Meticulous planning related to the hookup of tie-in points arising out of expansion schemes can
help to reduce the amount of downtime required to execute the revamp schemes and put the plant back online.
Targeted revamp capacity, change in process
In general, it is possible to increase the rated capacity
of a plant by 10%, with very little added expenditure.
But to increase capacity by 20–50% over the nameplate
capacity, substantial modifications must be taken into
consideration that often involve implementing different
technologies from the ones already applied in the existing plant. When seeking such notable increases in production capacity, plant operators and managers must
not only verify the soundness of the economics, but also
carefully evaluate the potential drawbacks, if any.
Sometimes the existing process path may have to be
changed to enhance the capacity of the plant, since the
current process may not yield the desired efficiency or
conversion rates. Two cases are discussed below.
Example 1. Units that recover liquefied petroleum gas (LPG) from natural gas are designed for a certain composition of feed gas. The need
for a revamp often arises if the gas composition has
changed and the expected recovery of C3/C4 and higher
compounds has become unprofitable. In this case, the
expected recovery of LPG and natural gas liquids (NGLs)
can be achieved by compressing the feedstock to higher
pressures than present levels, or by spiking heavier NGLs
back to the feed gas stream. Thus, such a revamp re-
quires a study to assess the technical and economic feasibility of the different process paths being considered.
Example 2. A feedstock change from naphtha to natural
gas in ammonia plants, hydrogen plants and methanol
plants also necessitates a revamp of the reformer section and front end, but in many cases, the
existing process path can be retained. In this case, the
absorbed duty of the reformer — which tends to be the
major energy-consuming equipment found in the system
— and the burner duties required vis-a-vis the required
reformer absorbed duty are calculated to check their suitability. The maximum skin temperature of the reformer
tubes for the feedstock change must be checked.
In all cases, the existing process path, along with other
options, must be studied in detail to arrive at the most
economical and technically feasible revamp option.
Lifecycle of the plant
The different phases of a plant’s lifecycle must be taken
into consideration when planning a revamp. Such phases
include the following:
1. Incubation stage — Initial stabilization period
2. Growth stage — Optimization and debottlenecking of
operations to improve the efficiency
3. Maturity stage — Attainment of stable operation
4. Declining stage — Realization that plant capacity is
not sustainable because of frequent equipment failures or excessive maintenance requirements
Revamping the plant during Phases 1, 2 or 3 is relatively
easy, whereas revamping a plant during Phase 4, when
the facility is already in decline, requires the engineering
team to adapt many of the modern technology options
to an aging infrastructure, and to replace many equipment components.
Objectives of a revamp
The objectives of a plant revamp should be spelled out
prior to studying the options. Possible objectives could
be the following:
• Enhance capacity from the present operating level to, say, 110%, 120% or 130% of rated capacity
• Reduce production costs
• Reduce pollution
• Reduce the consumption ratios of various raw materials and utilities
• Reduce maintenance costs and increase the onstream
factor
• Upgrade the technology to keep pace with the new
developments, and to increase the plant life
• Minimize plant shutdown
These objectives can be achieved by maximizing efficiency,
yield and conversion of raw materials in various sections.
Specifically, plant revamps are often implemented to improve process optimization, increase energy conservation, improve product quality and expand capacity.
Key revamp procedures
Every revamp project should start by identifying the goals
and actual bottlenecks. A material-and-energy balance
for the base case should be developed to reflect the actual operating conditions. The consumption of various
raw materials, utilities and energy per unit of production
are tabulated. The material-and-energy balance of the
existing operation, and the required revamp plant load,
are prepared.
The existing equipment components are rated for
the revamp conditions, and then changes and required
new equipment are identified. Cost estimates of various
schemes are prepared (after consultation with various
vendors). Feasibility studies, followed by detailed project
reports (DPR), are also prepared. The potential rates of
return of various options are studied. The best option
available (on the grounds of economic sustainability and
technical feasibility) is then selected, so that the basic
engineering design package (BEDP) can be prepared,
and the revamp project implemented.
As noted, successful revamps require assembling
the right revamp team. Typically, such a team consists
of individuals from the process licensor company, consultants for basic engineering and detailed engineering
services, contractors for specific electrical-, mechanicaland instrumentation-related aspects of the project, and
various engineers from the owner’s group (for instance,
those who represent specific disciplines and have a concrete understanding of the current operation).
The following planning steps should be undertaken:
1. Estimate the plant’s inherent capacity from past and recent data. This can be done by identifying weak areas
in the plant (for instance, those that are contributing
to non-realization of rated or required plant capacity),
or by conducting an end-to-end survey of the plant.
Once such a study is carried out, efforts should be
made to predict the potential performance improvements of the plant if the weak areas are rectified.
2. Prepare the process scheme and the equipment data
sheets. Carry out feasibility studies of all options (including both technical and financial aspects of the proposed revamps) and then develop the detailed project
report. Set the target of the revamp in terms of time
(schedule) and cost.
3. Implement the approved revamp. Ideally, the revamp
activities should be carried out during the annual
scheduled turnaround period for the plant, to minimize
unscheduled downtime.
Estimate plant capacity
Many older CPI plants can run at or above the rated capacity continuously for a week or a month. But due to certain operating limitations, and downtime that may arise
from some underperforming equipment, the annual rated
capacity is seldom achieved. Analyzing past operating
data on a monthly basis (for the past 10 years or so) will
reveal which equipment components are most often to
blame for downtime, and are thus affecting overall capacity utilization. Such a study of past data is often called a
weak-area analysis. Similarly, sometimes an end-to-end
survey of the plant (from the plant commissioning to the
present day) is also conducted.
Existing equipment poses both opportunities (in the
form of underutilized capabilities) and challenges (in terms
of limitations). The ability to identify problem areas can
help the team to prioritize their debottlenecking efforts in
order to improve capacity utilization more quickly.
TABLE 1. A typical calculation of Cv, before and after a revamp
Parameter | Unit | Before | After
Flowrate | m3/h | 80 | 100
Density | kg/m3 | 950 | 950
ΔP | kPa | 49.03 | 49.03
ΔP | kg/cm2 | 0.5 | 0.5
N1 | unitless | 0.0865 | 0.0865
Cv | unitless | 128.73 | 160.92
Control valve size | in. | 4 | 6
Pipeline size | in. | 6 | 6
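The Cv values in Table 1 are consistent with the standard liquid-service sizing relation Cv = (Q/N1) × sqrt(SG/ΔP). A minimal sketch reproducing the table's values follows; the function name is ours, and N1 = 0.0865 applies for flow in m3/h and pressure drop in kPa:

def liquid_cv(q_m3h, density_kgm3, dp_kpa, n1=0.0865):
    """Liquid valve coefficient: Cv = (Q/N1) * sqrt(SG/dP), with Q
    in m3/h, dP in kPa and N1 = 0.0865 for this unit set."""
    sg = density_kgm3 / 1000.0  # specific gravity relative to water
    return (q_m3h / n1) * (sg / dp_kpa) ** 0.5

print(round(liquid_cv(80, 950, 49.03), 1))   # about 128.7 (before revamp)
print(round(liquid_cv(100, 950, 49.03), 1))  # about 160.9 (after revamp)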
The weak-area analysis
Understanding current operation is very important for
the successful revamp of a plant. The plant performance
can be evaluated based on the performance data for the
past 10 years, if the plants are relatively old. Otherwise
the plant performance is studied from the beginning to
the present day (using the end-to-end survey).
Two indices, the plant load factor (PLF), and the onstream factor (OSF), are important to scientifically evaluate the plant performance.
PLF = (Actual production × 100) / [(Actual stream days) × (Daily rated capacity)]   (1)

OSF = (Actual stream days × 100) / (Annual design onstream days)   (2)

Overall capacity utilization = (PLF × OSF) / 100   (3)

= (Actual annual production × 100) / (Annual design onstream days × Daily rated capacity)   (4)
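A minimal sketch applying Equations (1)–(3); the function names and the example figures are hypothetical:

def plf(actual_production, stream_days, daily_rated_capacity):
    """Plant load factor, %, per Equation (1)."""
    return actual_production * 100.0 / (stream_days * daily_rated_capacity)

def osf(stream_days, design_onstream_days):
    """Onstream factor, %, per Equation (2)."""
    return stream_days * 100.0 / design_onstream_days

def capacity_utilization(plf_pct, osf_pct):
    """Overall capacity utilization, %, per Equation (3)."""
    return plf_pct * osf_pct / 100.0

# Hypothetical year: 295,000 tons over 310 stream days at a rated
# 1,000 ton/d, against 330 design onstream days
p = plf(295_000, 310, 1_000)       # about 95.2%
o = osf(310, 330)                  # about 93.9%
print(capacity_utilization(p, o))  # about 89.4%, consistent with Eq. (4)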
The performance of the plant is studied based on the
highest PLF and OSF, on both a yearly and monthly basis.
Data on the highest daily production that is achieved with
the present hardware should also be captured.
In addition to the past production performance of the
units, a breakdown of individual equipment must be assessed to identify the weak areas and arrive at the predicted performance in the post-revamp implementation
scenario. The best yearly, monthly and daily performance
must be considered in order to find the target capacity of
the plant and identify the number of stream days that this
target capacity is likely to achieve.
Analysis of historic downtime factors can also provide insight. To assess the feasibility of the plant operating at higher capacity, the best-achieved PLF (on a
monthly basis), and the highest load achieved, should
be considered.
In any process plant, onstream days are lost due to
various factors — including process problems, mechanical breakdown of equipment, raw material shortages,
planned shutdowns, finished product sales, effluent
treatment and byproduct sales (if any). Such lost days —
which contribute to a loss of overall capacity utilization
— should be tabulated, and the associated causative
factors noted and tabulated.
From the weak-area analysis, one can estimate the
inherent capacity potential of the plant and identify individual equipment components or sections that are
becoming a bottleneck to maximum capacity utilization.
Sometimes the plant capacity is affected by external
circumstances, such as feedstock supply issues (for instance, urea plant capacity is impacted by the capacity
of upstream ammonia plants), utility supplies and more.
Dividing these factors into recurring and non-recurring
factors will also provide insight into the priorities needed
to address the problem.
• Internal reasons: Recurring. Examples include process problems, mechanical breakdown of equipment,
planned shutdowns and more
• Internal reasons: Non-recurring. Examples include
lack of finished product sales, effluent treatment, lack
of byproduct sales and more
• External reasons: Recurring. Examples include utility
failure, raw-material shortages and more
• External reasons: Non-recurring. Examples include
worker strikes, natural calamities and more
FFS and RLA analysis
In a chemical process plant, critical equipment and piping must be evaluated for their fitness for service (FFS),
according to API 579 [1], and their potential residual
life analysis (RLA) must also be assessed. The API 579
guidelines are designed to ensure that pressurized critical equipment are operated safely. The ability to establish
the minimum years of residual life of the critical equipment is essential to justify the revamp of old and well-maintained plants.
Use of simulation software
Simulation software can play an important role during the
evaluation of potential revamp options, so its use is recommended to study the competing process-revamp options. Such modeling can help the team to substantially
reduce the time needed to study the technical feasibility
of revamp options. However, great care must be taken to
ensure the use of most appropriate thermodynamic modeling options that are suitable for the plant and its components, fluid properties, process conditions and so on;
otherwise the results can be wrong. Appropriate use of
simulation software can reduce the time required to carry
out the revamp projects, and help the team to identify an
optimized, cost-effective process path, based on an evaluation of proposed process sequence changes given the
various constraints.
The various revamp options are studied from a technical and financial point of view, a suitable process path
is selected and the equipment that create a bottleneck
for the desired revamp option are identified. Once the
additional equipment and piping are identified (per the
proposed expansion schemes), the required hookup
points and tie-in connections must be identified. As
noted, to reduce the impact of these hookups, they should
— wherever possible — be undertaken in conjunction
with short shutdowns that are planned for preventive
maintenance.
TABLE 2. Typical design velocities of fluids in CPI pipelines
Type of line | Allowable velocity (max), m/s
Suction lines for the pump | 1
Discharge lines for the pump | 2–3
Fire water | 5
Gravity lines | 0.6–0.7
Low-pressure gas | 20
High-pressure gas | 15
Low-pressure steam | 20
High-pressure steam | 15

Environmental and safety impacts
Environmental-impact assessment studies should be
conducted during the conceptual stage to evaluate the
positive and negative impacts of the proposed engineering changes on the environment, and to arrive at the solutions to mitigate the adverse impacts, if any.
Safety is always a paramount consideration. The team
must ensure that the proposed plant revamp, and all revised process schemes, conform to the latest codes and
safety norms. Hazard operability (Hazop) studies of the
process schemes during the basic engineering-design
package stage, front-end engineering-design stage, and
the detailed engineering stage should be conducted. During the implementation stage, periodic technical audits
should be conducted to see that the construction is progressing according to design intentions.
Hazardous-area classification drawings of the plant are developed, and existing electrical equipment and instruments are evaluated and changed according to the modified hazardous-area classification of the plant. Quantitative risk analysis (QRA) is also conducted for submission to the statutory authorities, and any onsite and offsite emergency plans must be revised, as needed.
Similarly, a safety integrity level (SIL) analysis should
also be conducted according to BS IEC 61511[3] and
BS IEC 61508 [4]. And, all safety-instrumented functions
(SIF) of the instruments are to be SIL 2 (minimum).
Debottlenecking individual equipment systems
Different strategies are available to debottleneck different
equipment components and systems. Some examples
are discussed below:
Trayed columns. The design data of the distillation
column should be studied, preferably using process
simulation software. The column is simulated for both
the existing operating conditions, and for desired higher
throughput or changed feed composition. The liquid and
vapor rates for each tray, along with their physical properties, are obtained. After obtaining the column profile and the liquid-vapor-traffic details in the column, the tray hydraulics are calculated and suitable recommendations are made regarding changes to the weir height, the number of holes, the hole pitch and the hole diameter (considering the flooding conditions), among others. Tray vendors should be contacted when considering a revamp of the distillation column trays. The team should ensure that the reboiler and condenser are rated for the maximum expected throughput.
Many advanced separation technologies that are available today allow for higher-capacity trays to be retrofitted into distillation columns. Similarly, the suitability of
advanced structured packings can also be considered
when planning a revamp of distillation columns in petroleum refinery and other critical CPI applications. Many
present-day structured packings can help revamped
columns to improve capacity by 40–50%, while reducing
pressure drop across the column.
Packed columns. In the late 1980s, Raschig rings were
popular in chemical process operations. A study of pressure drop of the packed column at the rated capacity
should be carried out to determine the pressure drop per
foot of packed column. Such a study should also identify
the percent flooding velocity at the revamped throughput. If the operating velocity exceeds 80% of flood, the packings are replaced with ones that offer lower packing factors and higher surface area per specified volume. However, adequate wetting of the packing must be ensured, according to design guidelines, and liquid circulation rates must be increased accordingly, if needed.
Packed towers that contain ceramic packings have a tendency to flood at lower gas velocities. Hence, in some cases, such packings may be replaced with steel packings (after conducting a technical suitability check) to help reduce the tendency to flood and increase throughput.

TABLE 2. Typical Design Velocities of Fluids in CPI Pipelines
Type of line | Allowable velocity (max), m/s
Pump suction lines | 1
Pump discharge lines | 2–3
Fire water | 5
Gravity lines | 0.6–0.7
Low-pressure gas | 20
High-pressure gas | 15
Low-pressure steam | 20
High-pressure steam | 15
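The percent-of-flood screening described above reduces to a one-line ratio check. The sketch below is a minimal illustration under assumed numbers; the flooding velocity itself would come from vendor data or a generalized pressure-drop correlation, not from this snippet.

def percent_flood(gas_velocity_m_s: float, flooding_velocity_m_s: float) -> float:
    """Operating superficial gas velocity as a percentage of flood."""
    return 100.0 * gas_velocity_m_s / flooding_velocity_m_s

# Example: revamped throughput raises the superficial gas velocity
# from 1.6 to 2.0 m/s in a bed whose packing floods at 2.3 m/s.
pf = percent_flood(2.0, 2.3)   # about 87% of flood
if pf > 80.0:
    print(f"{pf:.0f}% of flood: consider a lower-packing-factor,"
          " higher-surface-area packing")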
Pumps. Pumps are very important, and they often provide a relatively simple revamp opportunity to take advantage of advancements in pump technology. The throughput required at the desired plant capacity is determined, and the characteristic head-versus-capacity curves, the required net positive suction head (NPSH) and other key characteristics should be studied. Normally, pump manufacturers indicate three impeller sizes (minimum, normal and maximum) that are suitable for any given duty. The possibility of using a larger impeller diameter should be studied, considering the head and capacity requirements (Figure 1).
As the pump capacity increases, the required NPSH (NPSHR) increases. Hence, the available NPSH (NPSHA) should be checked, to avoid cavitation of the pump at higher flows. The motor's suitability should also be verified. Many successful revamps have been carried out by changing the impellers to those with larger diameters. The team should also carry out a design check to ensure that the piping material classification is still suitable for the pump's discharge piping.
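For a first-pass screening of a larger impeller, the standard pump affinity laws (flow scales with impeller diameter, head with its square, power with its cube, at constant speed) can be applied before consulting the vendor curves. The sketch below uses invented numbers; the NPSHR at the new duty point must still be read from the manufacturer's curve.

def rerate_impeller(q1: float, h1: float, p1: float,
                    d1: float, d2: float) -> tuple:
    """Scale flow, head and power from impeller diameter d1 to d2
    using the affinity laws at constant rotational speed."""
    r = d2 / d1
    return q1 * r, h1 * r**2, p1 * r**3

# Example: stepping up from a 200-mm to a 220-mm impeller.
q2, h2, p2 = rerate_impeller(q1=80.0, h1=50.0, p1=15.0, d1=0.200, d2=0.220)

# Cavitation check at the higher flow: NPSHA must exceed NPSHR
# (the vendor-curve value at q2) by a safe margin.
npsha = 6.5        # m, from the suction-system calculation (assumed)
npshr_at_q2 = 5.0  # m, read from the vendor curve at q2 (assumed)
assert npsha > npshr_at_q2 + 0.5, "cavitation risk at revamped flow"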
Instruments. Instruments such as flowmeters (orifice, venturi and mass flowmeters), pressure indicators, temperature transmitters, level instruments and so on should be rated and studied in detail for the proposed changed conditions. Since orifice meters often give rise to higher pressure drop, they may be replaced with mass flowmeters. Similarly, level instruments based on differential pressure can be replaced with non-contact, radar-type level instruments, which tend to be more accurate.
Normally, the orifice plates in flowmeters are maintained with β ratios (the ratio of the orifice-plate bore diameter, d, to the pipeline diameter, D) of 0.3 (minimum) to 0.7 (maximum). The orifice meters are rated for the target throughput, and the pressure drop across the orifice element is determined. If the pressure drop is too high, the orifice plates are changed to those of higher β ratio, to address the pressure-drop issue without changing the transmitter. If the β ratio cannot be kept below 0.7 at the given pressure drop across the primary element, then the orifice plate, the transmitter, or both must be changed.
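This rerating step can be screened quickly. For a square-edged orifice at fixed pipe diameter and fluid conditions, and treating the discharge coefficient and expansibility as constant (a rough approximation), the differential scales as Q²(1 − β⁴)/β⁴. The sketch below uses invented numbers and is a screening aid only; final sizing follows the flow-element standard.

def orifice_dp(dp_ref: float, q_ref: float, beta_ref: float,
               q_new: float, beta_new: float) -> float:
    """Estimate the new orifice differential from a reference point,
    assuming dP ~ Q^2 * (1 - beta^4) / beta^4 at fixed pipe bore."""
    flow_term = (q_new / q_ref) ** 2
    geom_term = ((1 - beta_new**4) / beta_new**4) / \
                ((1 - beta_ref**4) / beta_ref**4)
    return dp_ref * flow_term * geom_term

# 25% more flow through the same beta = 0.5 plate raises a 25-kPa
# differential to about 39 kPa...
print(orifice_dp(25.0, 100.0, 0.5, 125.0, 0.5))
# ...while opening the bore to beta = 0.65 (still below the 0.7 limit
# noted above) brings it back down to roughly 12 kPa.
print(orifice_dp(25.0, 100.0, 0.5, 125.0, 0.65))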
Control valves. The flow through a control valve depends on its capacity, or so-called CV value (Equation 5), which is defined as the flowrate of water, in gal/min, that passes through the valve at a temperature of 60°F with a pressure drop across the valve of 1 psi. The rule of thumb is that CV is roughly 10D² (where D is the size of the control valve in inches). For example, the CV of a 2-in. control valve is roughly 40. The CV value is recalculated according to ISA 75.01.01 [2] with the new flowrate, inlet pressure and allowable pressure drop.
Normally, the control valves in the original design of the plant are kept one size smaller than the pipeline diameter, and their rated flow is specified as 1.7 times the normal flow, or 1.3 times the maximum flowrate. Because this 70% (or 30%) margin is built into the original sizing, an existing control valve will normally be able to handle a revamped target flow that is 20–30% above the design flowrate. If the CV of the control valve is not sufficient, the team may consider either changing the trim of the control valve, or installing one with a higher CV. Equation 5 is used to calculate CV.
C_V = \frac{Q}{N_1} \sqrt{\frac{\rho_1 / \rho_0}{\Delta P}}     (5)

Where:
Q = flowrate through the control valve, m³/h
N_1 = a constant (8.65 × 10⁻²), from ISA 75.01.01-2007 (IEC 60534-2-1 Mod), Table 1 [2]
ρ_1 = density of the fluid, kg/m³
ρ_0 = density of water at 15°C, kg/m³
ΔP = differential pressure across the valve, kPa
Table 1 shows a typical calculation of CV at the before- and after-revamp flowrates, and shows how the existing control valve must be changed to the pipeline size for a 25% increase in flowrate.
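As a quick check of Equation 5 against Table 1, here is a minimal sketch; ρ0 is taken as 1,000 kg/m³, which reproduces the Table 1 figures (the strict value at 15°C is about 999 kg/m³).

from math import sqrt

def cv(q_m3h: float, rho1: float, dp_kpa: float,
       n1: float = 0.0865, rho0: float = 1000.0) -> float:
    """Valve flow coefficient per Equation 5 (Q in m3/h, dP in kPa)."""
    return (q_m3h / n1) * sqrt((rho1 / rho0) / dp_kpa)

print(cv(80.0, 950.0, 49.03))    # ~128.7, the before-revamp value
print(cv(100.0, 950.0, 49.03))   # ~160.9, the after-revamp value

# Per the 10*D^2 rule of thumb, a 4-in. valve offers CV ~ 160, so it is
# marginal at the revamp flow; Table 1 therefore moves to a 6-in. valve
# (CV ~ 360), matching the pipeline size.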
Control valves should also be checked for noise levels. Controllability and rangeability are also important when revamping a valve. Revamps involving control valves should always involve vendor cooperation. If the revamp is not able to bring the process into the controllability range, either the valve should be replaced with a larger one, or a fine feed-control valve can be added in parallel to the existing control valve.
Heat exchangers. The existing heat exchangers should be checked for any excess available surface area, by rating them using standard software modeling packages. In general, an existing heat exchanger provides enhanced heat-exchange capacity if the pressure drop across the tube side or shell side is increased.
If the heat exchanger is downstream of a pump, the team should consider increasing the pump head, which would increase the allowable pressure drop across the heat exchanger. There may be a tradeoff between the operating cost of the pump and the fixed cost associated
TABLE 3. Allowable Pressure and Temperature Ratings, per [7]
Flange rating, per ANSI B16.5 | Allowable pressure (max), kg/cm² | Allowable temperature (max)
150 class | 18.3 | 93.3°C (200°F)
300 class | 47.8 | 93.3°C (200°F)
with changing the heat exchanger. Also, increasing the number of baffles on the shell side to increase the heat-transfer coefficient should be considered. In the case of plate heat exchangers, additional plates can be added to increase the heat transfer, in consultation with the original equipment manufacturer.
Limitation in line sizes. All of the line sizes are checked using standard velocity criteria; typical values are shown in Table 2. The lines are also checked for pressure drop. If the line pressure drop is high, the lines are changed to larger-diameter pipes. Special attention must be given to gravity-flow lines, as the allowable velocity is only in the range of 0.6–0.7 m/s and sufficient slope must be ensured.
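The velocity check is a straightforward screen of each line against the Table 2 limits, as in the sketch below (the pipe inner diameter is illustrative; actual IDs come from the pipe schedule).

from math import pi

TABLE2_MAX_VELOCITY_M_S = {     # from Table 2
    "pump suction": 1.0,
    "pump discharge": 3.0,
    "fire water": 5.0,
    "gravity": 0.7,
    "low-pressure gas": 20.0,
    "high-pressure gas": 15.0,
}

def line_velocity(q_m3h: float, inner_dia_m: float) -> float:
    """Mean velocity, m/s, for a volumetric flow and pipe inner diameter."""
    area_m2 = pi * inner_dia_m**2 / 4.0
    return (q_m3h / 3600.0) / area_m2

# 100 m3/h in a nominal 6-in. line (ID ~ 0.154 m) gives ~1.5 m/s,
# within the 2-3 m/s pump-discharge limit of Table 2.
v = line_velocity(100.0, 0.154)
status = "OK" if v <= TABLE2_MAX_VELOCITY_M_S["pump discharge"] else "upsize the line"
print(f"{v:.2f} m/s: {status}")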
The piping material thickness (according to ASME/ANSI B31.3) and the flange ratings (ANSI B16.5) are checked to be sure they can handle the higher pressure. In some cases, the existing flange rating will be sufficient, as there is often a wide margin available, as shown in Table 3.
Thus, if a line that was designed for 10 kg/cm² is going to experience a pressure of 12 kg/cm² at 90°C, then the 150-class flange rating need not be changed. However, the actual pipe thickness should be measured and checked for its suitability at the revamped design pressure. Sometimes no piping needs to be changed at all; for instance, the design pressure in the revamped condition may be less than that of the original process. One example is an ammonia synthesis section, where pressures have come down from 200 kg/cm² to 140 kg/cm².
Pressure safety valves (PSVs). When the plant runs at a higher revamped capacity, all of the PSVs must be checked according to API 520 [5]. The team must evaluate the nozzle-area suitability and the rating of the inlet and outlet piping, after recalculating the fluid-relieving rates associated with the new throughput. PSVs are changed if they are found to be unsuitable. In the case of a feedstock changeover, PSVs must also be checked for changes in fluid properties, such as molecular weight, compressibility factor and so on.
Compressors. Various options for revamping the compressors should be studied initially, including the following:
1. Installation of a suction booster
2. Installation of a parallel compressor
3. Changing the internals in the low-pressure and high-pressure casings, along with steam-turbine upgrading
4. Providing a chiller at the suction inlet and changing the intercoolers. A chiller reduces the gas temperature, which increases the volumetric capacity of the machine and reduces the power requirement. This option can be applied in cases where the drive would otherwise need to be changed
5. Changing the compressor type. In older-generation urea plants, the reactors operated at 200 kg/cm², and the CO2 fed to them was historically supplied by reciprocating compressors, which incur high energy costs. As the pressures in present-day urea reactors have come down to 135 kg/cm², centrifugal compressors can be used instead, which helps to reduce both operating and maintenance costs
Effluent-treatment plants (ETPs). Worldwide, wastewater-treatment plants are typically designed with high safety margins, to cater to shock loading or sudden peak loading of effluents containing high chemical oxygen demand (COD). However, when a plant is stabilized and optimized, the generation of wastewater containing high COD is drastically reduced.
The following methodology should be adopted when checking the capacity of an ETP that is based on an activated-sludge process during revamp planning:
1. Evaluate existing facilities by collecting operating data
for one month and developing a statistical analysis of
various parameters.
2. Check the design basis and the design volume of the
aeration basin, thickener and clarifier.
3. Evaluate the operating case using the above design
basis.
4. Calculate the energy requirements of the design and
operating cases, and quantify the potential for reduction of electrical energy at various loads.
Flares and knockout drums. Flare systems, including
knockout drums, must be checked before embarking on
a plant revamp. Flares are used to ensure plant safety,
by flaring hydrocarbons in case of emergency conditions
such as power outages, fire or blocked discharge.
When converting an ammonia plant from a liquid fuel (such as naphtha) to natural gas, the properties of the fluid (such as molecular weight, compressibility factor, viscosity and density) undergo a drastic change, with profound effects on the flare height, flare diameter and flare-tip suitability. Calculations must be performed to verify the new case, according to API 521 [6]. The goal is to see whether the existing flare is suitable to handle the changed load and fluid conditions associated with the proposed revamp. Vendor support should be sought, if needed, and the flare design can be checked using manual calculations, spreadsheet calculations and flare-specific computer software.
Reactors. Reactors are the heart of chemical process
operations. Efforts should be made to maximize yield and
conversion rates in the revamp scheme. If, following the reaction, raw materials remain unconverted, they must be separated and recycled back to the reactors. This consumes utilities, thereby increasing energy consumption. If conversion rates are increased via a revamp, the recycle ratios will be drastically reduced.
In one fertilizer complex, a revamp involved the following changes: the introduction of higher-capacity trays in the urea reactor; changing the converter baskets from axial- to radial-type in the ammonia converter; and supplying enriched oxygen to the cyclohexanone reactors in the caprolactam plant, with the introduction of improved safety features. These changes increased the conversion rate, increased overall production and decreased energy consumption.
In the ammonia plant's synthesis section, the synthesis converter pressures were reduced to 135 kg/cm² (from an initial level of 200 kg/cm²), as a result of the introduction of radial basket converters instead of the older-generation axial converters. By retaining the same high-pressure converter shell, one can change the converter baskets to radial ones, which helps to reduce pressure drop.
Catalysts play a vital role in enhancing the reaction rate.
The use of advanced catalysts should be considered,
where possible. For example, in sulfuric acid plants, vanadium pentoxide (V2O5) is typically used as the catalyst. If an improved cesium-promoted catalyst is loaded into the converter, the SO2-to-SO3 conversion can be increased, and the emission of SO2 can be reduced, generally to far below the statutory limits.
Storage tanks. If the process revamp is based on a
“more in/more out” concept — that is, more fluids will be
flowing into and out of storage tanks — then the team
must check the capacity of “breather” valves and emergency vents according to API 2000 [8]. If the breather
valves need to be replaced, the pressure settings may
be adjusted in consultation with vendors, according to
the applicable codes.
Utilities. During any plant revamp, the capacities of key plant utilities, such as demineralized water, instrument air, plant air, steam, power and cooling towers, should also be checked to be sure they will support the proposed revamp. Offsite facilities related to raw-material receiving, tank farms and product storage must also be studied, and the related personnel requirements must be ascertained.
■
Edited by Suzanne Shelley
References
1. American Petroleum Inst., API 579: Recommended Practice for Fitness for Service, 2nd Ed., July 2007.
2. Instrument Soc. of America, ISA 75.01.01-2007 (IEC 60534-2-1 Mod): Flow Equations for Sizing Control Valves, 2007.
3. International Electrotechnical Commission (IEC), BS IEC 61511: Functional Safety: Safety Instrumented Systems for the Process Industry, 2003.
4. International Electrotechnical Commission (IEC), BS IEC 61508: Standard for Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, 2010.
5. American Petroleum Inst., API 520: Sizing, Selection and Installation of Pressure-Relieving Devices, Part 1, 8th Ed., 2008, and Part 2, 5th Ed., 2003.
6. American Petroleum Inst., API 521: Pressure Relieving and Depressurizing Systems, 5th Ed., 2007.
7. ASME/ANSI B16.5: Pipe Flanges and Flanged Fittings, April 2013.
8. American Petroleum Inst., API 2000: Venting Atmospheric and Low-Pressure Storage Tanks, 7th Ed., March 2014.
Author
Koya Venkata Reddy is senior manager, process engineering, at FACT Engineering & Design Organization (FEDO), a div. of
Fertilizers and Chemicals Travancore Ltd. (FACT; Udyogamandal
683501, Kochi, Kerala, India; Phone: +91-484-2568763; Email:
koyareddy@yahoo.com). He has 24 years of experience in chemical plant operations, including expertise in the fields of process
control, process design, process risk analysis, Hazop analysis, process simulations, environmental management and plant revamps.
He is a recipient of FACT’s Merit Award. Reddy holds a Bachelor of
Technology degree from Andhra University (Visakhapatnam) and a
Master of Technology degree in project management from Cochin University of Science
and Technology. He also received an M.B.A. in finance from Indira Gandhi National Open
University (IGNOU; Delhi). He is a lifetime member of the Indian Inst. of Chemical Engineers (IIChE) and a member of the Institution of Engineers (India).
Feature Report
Point-Level Switches for
Safety Systems
Industries that manufacture or store potentially hazardous materials need to employ
point-level switches to protect people and the environment from spills
Bill Sholette
Endress + Hauser
In Brief
The need for point-level switches
The need for testing
Types of point-level switches
Summary
Safety is an important and common subject of discussion in the chemical process industries (CPI) today.
Conversations on safety include
many topics, such as risk assessment, risk
mitigation, and tolerable risk. Acronyms like SIS (safety instrumented systems), SIL (safety integrity level), PFD (probability of failure on demand) and others have become part of the safety lexicon in CPI facilities throughout North America and the world. All of these terms and acronyms can be confusing, obscuring what steps need to be taken to make a facility safe.
Regardless of how the safety concepts are
labeled, there are a few principles that form
the basis for all safety models. Whether you
subscribe completely to the SIS concept or
have developed your own safety procedures
internally, risk assessment and risk mitigation are the two key concepts in any safety model. Determining what may go wrong, and then taking steps to reduce the possibility by adding safety procedures, retention dikes, safety instrumentation and so on, are universal to any safety program.
Figure 1. Preventing overfilling of chemical storage tanks requires proper selection of high-high point-level switches
What follows is an assessment of point-level switches as they are used in overfill-prevention safety programs. We review some basic concepts and look at some of the common technologies used to prevent overfilling of vessels (Figure 1). The positive and negative aspects of each technology are also considered.
The need for point-level switches
Point-level switches are often used in applications designed to prevent accidents. Industries that manufacture or store materials
that are potentially hazardous employ point-
level switches to protect people and
the environment. These industries
include oil-and-gas, chemical and
petrochemical manufacturing. Some
examples of where these safety
switches are used include overfill
and spill prevention on tanks, retention dike level alarms, and seal pot
low-level indication. These critical
safety applications require careful
consideration to make certain the
best technology is provided for the
given application. Technologies that
are robust with few or no moving
parts are preferred. Additionally, a
procedure for testing the integrity of
the switch is critical.
Providing safe and reliable facilities
is a moral and financial responsibility.
Accidents such as Buncefield (2005), the Texas City refinery explosion (2005) and the Elk River chemical spill (2014) can
and must be avoided. By providing
safety systems and instruments to
prevent or mitigate accidental spills
and releases, we protect against injury to people, damage to equipment and environmental harm, and we ensure that the availability of the process is maintained.
Due to the nature of the safety requirement for point-level switches,
they are typically placed in a position
where they may never be used. That
is, for example, a switch for high-level
overfill prevention is located above
the highest point the level should ever
reach. These switches are often called
high-high level because they are
above the stop-fill high-level instrument in the vessel (Figure 2). These
high-high switches may go for years
without ever having the level reach
them because reaching the high-high
switch is an accidental occurrence or
noncompliant condition.
The need for testing
Because of this, it is imperative that
the safety switch has a means of
testing on a regular basis to ensure it
will operate in the event of an actual
emergency. This test must exercise
the entire switch — not just the contact closure or output — to expose
any potential failures, and should
not require raising the product level
to the switch point. Raising the level
up to a high-high switch for test purposes can potentially cause a spill and is therefore considered bad practice. Raising the product level to test the high-high switch is also specifically not permitted per API 2350, the American Petroleum Institute's recommended practice for aboveground storage tanks. API 2350 states that high-high level switches must be tested on a regular basis without raising the level to a dangerously high condition.
Depending on the type of point-level switch being used, the only accepted method to ensure the performance of the switch may be to remove it from the vessel for testing. Removing a switch for testing incurs cost through downtime and lost production, as well as the time and expense for personnel to remove the switch, perform the test and reinstall the switch. There is also the concern that the switch could be damaged during removal and reinstallation, or that the switch may not be reinstalled correctly. Either of these scenarios would negate the test, and the switch failure might then go undetected.
For these reasons, employing a
point-level switch that can be tested
in-situ (Figure 3) should be the first
choice for safety applications.
Testing the switch exercises the
point-level switch and may bring to
light any potential failures. The intent
is to validate the switch, with the goal
being to return the switch as close
as possible to its original installed
condition. That is, the switch should
be validated to “new” condition or
as close as is reasonably possible.
Conceptually, this refers to the probability of failure on demand (PFD). When a point-level switch is first installed, it has a low PFD. Over time, the PFD of the switch increases. Testing the switch re-establishes a lower PFD.
A good analogy for the PFD concept is the purchase of a brand-new car. You park the car in your driveway and retire for the evening. The next morning, you get up and go to start your car. The expectation is that the car will start. This represents a low PFD. Now, leave that same car in your driveway for a year without starting it or performing any maintenance. Trying to start that car after a year may be difficult. This represents a higher PFD. PFD increases with time. Testing provides a way to return to a lower PFD.
Figure 2. Located above the normal stop-fill control, high-high level switches may go for years without seeing the liquid in the vessel
Generally speaking, in-situ testing (testing with the unit installed in the process) may only validate a percentage of the potential failures. This is known as a partial proof test. As such, the PFD recovery is dependent on the proof-test coverage (PTC). The PTC is based on the percentage of failures exercised by the proof test. The higher the PTC percentage, the greater the recovery, and the result is a lower PFD. Since a partial proof test recovers only a percentage of the PFD, it does not return the switch to its original installed state; with each partial proof test, the PFD ends up a little higher than after the previous one. To correct for this drift, a full proof test is required after a determined number of partial proof tests. A full proof test will typically require removing the switch from the process for testing. Clearly, a level switch with a high PTC allows a longer period of in-situ partial proof testing, resulting in cost savings and improved process availability. A simplified numerical illustration of this drift is sketched below.
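The sketch uses a simplified constant-failure-rate model, and the dangerous-undetected failure rate, test interval and PTC are assumed values, not data from any product.

# Simplified PFD-drift model: failures accumulate as lambda_du * t; a
# partial proof test clears only the covered (PTC) fraction of that
# accumulation, so the uncovered remainder keeps growing until a full
# proof test resets everything.

LAMBDA_DU = 2e-6        # dangerous undetected failures per hour (assumed)
TEST_INTERVAL_H = 8760  # one partial proof test per year (assumed)
PTC = 0.90              # proof-test coverage of the partial test (assumed)

covered_h = 0.0    # exposure time cleared by each partial test
uncovered_h = 0.0  # exposure time only a full proof test can clear

for year in range(1, 6):
    covered_h += TEST_INTERVAL_H
    uncovered_h += TEST_INTERVAL_H
    pfd = LAMBDA_DU * (PTC * covered_h + (1 - PTC) * uncovered_h)
    print(f"year {year}: PFD just before the test ~ {pfd:.2e}")
    covered_h = 0.0  # the partial test clears only the covered portion

# The printout rises year over year: that residual, uncovered term is
# the drift that eventually forces a full proof test.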
Types of point-level switches
There are many point-level technologies available for level indication. Because of the critical nature of safety switches, some technologies are better suited than others for this task. Let's take a look at some of these technologies and why they may or may not be good choices for safety applications.
Figure 3. The ability to test in-situ validates the functionality of the safety switch while reducing maintenance and downtime
Float point-level switch. Float
switches, as the name implies, utilize
a float that changes position due to
buoyancy and indicates presence
of a liquid. The float may move on a vertical shaft and trip a magnetically coupled reed switch, or it may pivot on an axis, actuating a mechanical internal switch. The appeal of float switches is that they are simple devices and relatively inexpensive. However, the mechanical nature of a float, with moving parts that can hang up or bind due to coatings, makes them questionable for use in safety applications.
The ability to test a float switch is
also suspect. Some manufacturers
provide a lift arm to physically move
the float to make it change state
from normal to alarm. This test is
insufficient to exercise potential failures, such as leaking floats, and may
not identify binding or heavy coatings. Some test arms are fitted with
magnets that will release if the float
is heavy due to leakage or coating,
but even this precaution is suspect.
As such, the only true way to test
the float switch is to remove it from
the vessel for testing, or to raise the
product level to the high-high switch,
which, as previously discussed, is
not permitted.
Floats are best suited for simple
non-critical applications. Moving
parts and the potential for a lack of
buoyancy are critical failure points.
From the standpoint of safety applications, floats should be avoided.
Ultrasonic gap point-level switch. Ultrasonic gap switches consist of two piezoelectric crystals situated on opposite sides of a gap. One crystal is excited electrically and generates acoustic energy that is directed across the gap toward the second crystal. With air or gas in the gap, the energy is not strong enough to reach the second crystal. Once the gap fills with a liquid, the acoustic energy couples through the liquid, reaches the second crystal and completes the circuit, indicating that the liquid is present.
Ultrasonic gap switches have no
moving parts to wear or hang up,
which is an advantage over mechanical switches, such as floats. However, materials that leave coatings
and materials that have suspended
solids, or are aerated, will block the
acoustic energy, causing a failure.
In-situ testing of ultrasonic gap switches that validates all potential failures is not possible. Some manufacturers provide test buttons that are used to test the switch. This test operates in one of two ways. In some products, there is a second set of crystals that are wired together.
One set of crystals is used for the actual measurement; when the test button is depressed, an acoustic signal travels from one test crystal, through a wire, to the second test crystal, indicating a valid test. The assumption is that if the two test crystals operate properly, so will the measurement crystals.
The second approach to this test is to increase the frequency of the actual measuring crystal, which allows the acoustic energy to travel through the metal of the gap to the second crystal, completing the circuit and validating the test. Neither of these tests addresses one of the most common failure modes of ultrasonic gap switches: coating or plugging of the gap itself. Coatings
in the gap or material plugging the
gap will prevent the acoustic energy
from crossing the gap and indicating
when the liquid is present.
A second common problem with gap switches is disbonding of the crystal. The dual-crystal design will not detect this failure. The second design may detect disbonding, but increasing the frequency could produce a valid test even though the switch may fail to operate at its normal frequency. Also, neither of these test methods will detect potential failures due to liquids with suspended solids or aeration.
Performing a valid test on an ultrasonic gap switch requires removing
it from the vessel and testing it in a
sample of the material from the vessel. For these reasons, ultrasonic gap
switches are best suited for general,
non-critical level applications. They
should not be used for high-high safety
and spill-prevention applications.
Capacitance point-level switch.
Capacitance point-level switches are
based on a capacitor. A capacitor consists of two conductive plates separated by a dielectric insulator. Its capacitance depends on the area of the plates, the distance between the plates and the dielectric constant of the insulating material between them. In a capacitance point-level switch, one plate of the capacitor is the active center rod of the sensing element; the second plate of the capacitor is the vessel
wall, or an added ground rod or plate. As the material in the vessel rises, it covers the sensing element and the capacitance increases. The output of the electronic unit changes state to indicate the presence of material once the capacitance exceeds a preset switch point.
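For reference, the idealized parallel-plate relation behind this behavior (a textbook identity, not stated in the original) is:

C = \frac{\varepsilon_0 \, \varepsilon_r \, A}{d}

where A is the plate area, d is the separation between the plates, ε_0 is the permittivity of free space and ε_r is the relative permittivity of the material between them. Rising liquid replaces vapor (ε_r ≈ 1) with a higher-permittivity material, so C rises past the preset switch point.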
Capacitance point-level switches
have several advantages over previously discussed technologies.
There are no moving parts to wear
or hang up. Internal diagnostics
monitor data, such as the base capacitance. Reduction of the base
capacitance would indicate a wiring
failure or a sensing element that has
lost mass due to damage or corrosion. Failures can result in a switch
going into fault mode or activating
an “alarm” contact.
One disadvantage to capacitance
switches is that they require calibration. Initially, the base capacitance
needs to be balanced, and then an
additional set-point capacitance
is added. There are capacitance
switches that “calibrate” themselves.
These switches follow the same procedure as manual calibration with the
exception that it is done internally in
the electronic unit. If the calibration is
not performed correctly it is possible
that the switch will not respond to increasing material level in the vessel.
There are products available that incorporate testing features, so in-situ testing can be performed to ensure the functionality of the switch. However, the PTC percentage for a capacitance-switch partial proof test tends to be low. The result is that the intervals between required full proof tests tend to be fairly short, increasing downtime and maintenance.
Depending on the application requirement, a capacitance point-level switch may be the best choice for a safety installation. This is particularly true of applications involving extremely viscous materials that coat sensing elements heavily. It is important to make sure the capacitance switch selected provides active coating-rejection technology to compensate for the coatings.
Vibronic (tuning fork) point-level switch for liquids. Vibronic point-level switches, also called tuning forks, operate by vibrating the fork at a resonant frequency in the uncovered state. When process material covers the fork, the frequency shifts down, indicating the presence of the liquid and changing the output of the switch.
Figure 4. Vibronic point-level switches (left) are an active technology with a constant self-check function built in (right) to ensure functional integrity
There are a number of advantages
to vibronic switches (Figure 4). First,
vibronic switches are an active technology. Because they are constantly
vibrating, additional diagnostics are
possible. The frequency of the fork is
monitored to determine the covered
or uncovered state. But changes in
frequency can also indicate damage
or corrosion to the fork, heavy coatings, and objects jammed between
the forks. Any of these conditions will
result in a fault output. The electronic
unit is constantly running self-test
routines to identify these and other
potential faults.
Some manufacturers have developed additional functions to ensure
the operation of the switch in safety
systems. One such function is to
provide a live signal superimposed
on the current signal (Figure 5).
This live signal is constantly switching from one current to another and
back. This switching current certifies
that the current signal is not stuck
and ensures that the current will shift when the fork changes from the uncovered to the covered state.
Figure 5. Vibronic tuning forks with a continuous live signal (a 4–20-mA output with a superimposed live-wire signal to the safety PLC) provide the highest reliability and maximum in-situ proof-test coverage
Vibronic point-level switches have
no moving parts to hang up or wear
out. Additionally, there is no calibration required so you can be sure the
switch is set up properly.
Another advantage to vibronic
point-level switches is the ability to
perform in-situ partial proof tests. These switches employ redundant circuitry, along with the diagnostic capabilities previously discussed. Together, these features result in an extremely high proof-test-coverage percentage. The high PTC makes it possible to test in-situ for an extended period of years without having to perform a full proof test. Some manufacturers provide products that do not require a full proof test for as many as twelve years, greatly reducing testing costs and ensuring process availability.
Continuous technologies. Some
plants rely on continuous level technologies, such as free-space radar,
guided radar or ultrasonics, to provide for overfill prevention or function
as a point-level device. Their thought
process is that with a continuous
level technology, they would know if
something was wrong with the transmitter, because they have a continuous output. In reality, it is possible
that upset conditions in the process,
such as foam, condensation and
buildup, can lock up the signal to a
false value. It should also be noted that API 2350 states that instruments used to prevent accidental overfilling and spills must be separate from the instruments used for tank gauging.
A recent trend in overfill prevention has been to use two continuous
level transmitters in redundancy. The
concept is to poll the two transmitters and to shut down the process if
the level reaches a preset high-high
value, or if the two transmitters differ
by a predetermined percentage. As in
the previously mentioned use of continuous level for overfill prevention,
the thought is that the continuous
signal provides a measure of security
that a point-level switch cannot offer.
However, there are a few concerns
with this approach.
First, as in the previous example,
it is possible for a process condition such as foam, condensation,
or buildup to cause a signal to lock.
Using two different technologies in
redundancy may provide an advantage in that a process condition that
causes one technology to fail may
not affect the other. Using two different technologies can introduce other
problems. For example, the accuracy of a guided-wave radar transmitter may differ from that of a differential-pressure transmitter under the same process conditions. The difference in output, while typically negligible, is often hard for operators to overlook. And there is still the issue of the API 2350 recommended practices.
If the overfill prevention transmitter
needs to be separate from the tank
gauging transmitter, which continuous level transmitter is which? This is
certainly a grey area, since they both
are outputting continuous level and
a high-high trip point. Finally, it is clear that a high-high point-level switch is the most reliable and simple device for overfill requirements. The availability of point-level switches with current output and live signals provides the same assurance as a continuous level transmitter that the switch is operating as required.
Using a point-level switch for high-high indication is recognized as a "best practice" and should be followed for safety overfill applications. The reliability of a point-level switch with a continuous live signal will exceed that of other technologies in demanding critical services. As always, the application will determine the best technology for safety devices.
Summary
There are many point-level switches
available in the market today. We
have reviewed some of the most
common types in this article. As we
have discussed, some technologies
are better suited for safety systems
than others. Technologies with moving parts and those that cannot be
easily tested are best suited for noncritical level applications.
Trends toward using continuous level transmitters for overfill prevention have, on the surface, some appealing merits. However, the advantages of using a separate point-level switch with a live signal are clear, and this approach continues to provide a "best practice" solution to overfill prevention.
From an overall standpoint of sophisticated diagnostics and ease of commissioning with no calibration, vibronic point-level switches are excellent choices for safety systems. The extremely high proof-test coverage and long intervals between full-proof-test requirements result in the highest cost savings and plant availability. From these standpoints, vibronic point-level switches are the clear choice for most safety-system applications.
■
Edited by Gerald Ondrey
Author
Bill Sholette is the Northeast region level product manager at Endress + Hauser (2350 Endress Place, Greenwood, IN 46143; Phone: 888-363-7377; Fax: 317-535-2171; Email: bill.sholette@us.endress.com). He has spent
the last 36 years consulting and
specifying on level-measurement
instrumentation. Sholette received
his certification in management and marketing from Villanova University. He previously worked for Ametek,
where he began his career in sales as a regional sales
manager. Later, he moved on to work at Drexelbrook
Engineering as a product manager. In 2012, Sholette
came to work for Endress+Hauser as the Level Products business manager for the Northeast region. In this
role, he is responsible for technology application and
development of level products. He has also published a
number of white papers and articles pertaining to level
measurement.
Solids Processing
Control Strategies Based On
Realtime Particle Size Analysis
Practical experience
illustrates how
to achieve better
process control
Jeff DeNigris
Malvern Instruments
Alberto Ferrari
Ferrari Granulati
The pursuit of manufacturing excellence is a strong theme across all of the chemical process industries (CPI). Improving operations by eliminating variability and waste, a cornerstone of the Six Sigma approach, is essential for
competitive performance in the global
marketplace. In addition, of course,
health, safety and the environment
(HSE) remains the subject of intense
scrutiny and concern. Within this climate, automation of both analysis and
control can be highly advantageous.
Particle size analysis is a measurement technique that has already successfully completed the transition from
laboratory to line and become an established part of the automation toolkit.
Reliable, online particle-size analysis
systems are now commercially proven
across a number of sectors, on both
wet and dry process streams, with all aspects of project implementation well-scoped and understood. With this maturation has come greater accessibility for sectors that previously faced technical or financial barriers to adoption.
Requirements are different in each
and every case, from a simple single
PID (proportional integral derivative)
control loop to multivariate statistical
control or the rigors of operation within
a highly regulated environment. Nevertheless, the building blocks needed to
fashion an optimal solution are there,
for the vast majority of applications.
Figure 1. Integrated system architecture is illustrated here for an automated mill with an online particle-size analyzer: the analyzer, control PC, PLC and historian database communicate over a wireless TCP/IP network
This article draws on experience
from different industries as they apply
realtime particle-sizing technology, to
demonstrate the various control strategies that it supports and the benefits
that can be derived.
Note: With an online system the
analyzer is typically installed on a
dedicated loop fed from the process
line, while with an inline instrument
the analyzer sits directly in the bulk
process flow. Both approaches provide continuous realtime data, so for the purposes of this article, the term online is used to cover both types of installation.
Moving toward realtime
It should be stated from the outset
that offline analytical capability remains vital. Essential during research,
it is also frequently the norm for final
quality control checks. Furthermore,
certain techniques that yield critical
information have yet to move successfully to the process environment. That
said, for routine process monitoring, an offline regime is far from ideal, and online measurement, if available, may be the preferred option.
Laser-diffraction particle sizing exemplifies a number of commercially
proven, online analytical technologies
with established credentials for realtime monitoring. Today, it is possible to tailor an online particle-sizing solution that closely matches user requirements, from turnkey integration within a sophisticated control platform to sensor-only purchase for the in-house implementation of simple closed-loop control. This ready availability of realtime measurement presents an opportunity to realize a number of important practical gains, even before considering the issue of process control.
Full automation — from sample
extraction through the delivery of
results to a control system — means
that online analysis completely eliminates the issue of operator-to-operator
variability. In addition, the ability to
analyze a much higher proportion of
the process stream improves the statistical relevance of the data. Equally
important is that the benefits of automation come with a dramatic decrease in the amount of manual labor
required for analysis, and a significant
reduction in the containment and exposure risks associated with manual
sampling and analysis.
Investing in realtime measurement
technology may be justifiable simply
on the basis of these practical benefits, especially where the envisioned
set-up is relatively simple. The cost
of installation is offset by savings in
manpower, and the operational team
maintains manual control exactly as
before, using the online system solely
for particle size monitoring. While this
approach may deliver improved operations and variable cost gains, it fails
to fully realize the potential that online instrumentation offers in opening
up the route to automated control. An
efficient control-automation strategy
fully exploits the information stream
provided by realtime measurement
and maximizes return on investment.
Simple closed-loop control
The simplest option when implementing automated control on the basis of continuous data from an online instrument is usually a single-variable PID control loop. Such an approach can prove highly productive and be
an efficient way of automating existing manual control strategies. Even
the implementation of one automated
loop changes the process from being
fixed to becoming responsive. A fixed
process translates, or magnifies, upstream variations on to the product or
downstream process; a responsive one
either erases or reduces their impact.
For milling, a common approach is
to automatically vary either mill speed
or downstream separator variables
(classifier speed, for example) to meet
the product specification (as in the example below). An exactly analogous
strategy in emulsification processes
allows key variables, such as pressure, to be automatically manipulated
in order to control droplet size. Since
online particle-size analyzers are commercially available for both wet and
dry process streams, any unit involving comminution to a defined particle
size can potentially benefit from this
very basic type of automation.
Figure 2. For simple, closed-loop control, online analysis can be used to track particle-size changes induced by varying mill rotor speed over time (in this run, average Dv(10) = 9.16 µm, Dv(50) = 57.58 µm and Dv(90) = 170.54 µm, with rotor speed stepped between 2,600 and 4,000 rpm)
Case study: Simple closed loop control on the basis of realtime data.
An automated mill system recently installed at a commercial pharmaceutical
manufacturing site is shown in Figure
1. It was developed as a widely applicable, validated alternative to manual
mill control using offline particle-size
measurement. One routine operation
in the company is milling an active
pharmaceutical ingredient (API), typically recovered through crystallization,
to a defined particle size. The particle
size distribution of an API is often a
critical quality attribute because of its
impact on clinical efficacy and drug
product manufacturability.
The comminution mill has fast dynamics, making rapid and continuous
data acquisition and interpretation essential. Since the selected online particle-size analyzer has a measurement
rate of four complete particle-size distributions per second, it can efficiently
track even this swiftly changing process in fine detail.
This fully integrated system uses upgraded programmable-logic-controller
(PLC) code and proprietary software
to handle data exchange between the
main hardware units. The operator
interacts with the central controlling
PC via the mill HMI (human machine
interface) and can do the following:
input set points; start and stop the
mill or analyzer remotely; perform
background tests; and receive particle
size results. A closed control loop links
particle size with mill rotor speed, the
principal operating parameter for size
manipulation (Figure 2). A 30-s rolling
average Dv50 (median particle size)
figure from the online particle-size
analyzer drives this loop.
To test the response of the system,
the setpoint for the loop was reduced
from an initial 58 microns to 50 microns, and then back up to the original
value. Despite the absence of comprehensive loop tuning — proportional
(P) only control was used at this stage
— the results were good. Steady operation at 50 microns was established
just 30 s after the change was made,
and the final transition was complete
in under 2 min.
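A minimal sketch of that loop is shown below, assuming a hypothetical gain, nominal speed, limits and I/O stubs; read_dv50 and set_rotor_speed stand in for the analyzer and PLC interfaces, and none of the names or numbers come from the actual installation.

import random
from collections import deque

KP_RPM_PER_MICRON = 40.0              # proportional gain (assumed)
NOMINAL_SPEED_RPM = 3400.0            # base rotor speed (assumed)
SPEED_LIMITS_RPM = (2000.0, 4500.0)   # operating envelope (assumed)

def read_dv50() -> float:
    """Stand-in for the online analyzer: one Dv50 reading, in microns."""
    return 55.0 + random.uniform(-2.0, 2.0)

def set_rotor_speed(rpm: float) -> None:
    """Stand-in for the PLC write of the rotor-speed setpoint."""
    pass

def control_step(window: deque, setpoint_um: float) -> float:
    """One 1-s control step: P-only action on the 30-s rolling Dv50."""
    window.append(read_dv50())
    dv50_avg = sum(window) / len(window)      # 30-s rolling average
    error = dv50_avg - setpoint_um            # oversize -> grind harder
    speed = NOMINAL_SPEED_RPM + KP_RPM_PER_MICRON * error
    speed = min(max(speed, SPEED_LIMITS_RPM[0]), SPEED_LIMITS_RPM[1])
    set_rotor_speed(speed)
    return speed

window = deque(maxlen=30)                     # 30 one-second samples
for _ in range(120):                          # two minutes at 50 microns
    speed = control_step(window, setpoint_um=50.0)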
To mill a new batch with this system, the operator simply selects the
target particle size and feeds material into the mill. Control is then sufficiently tight to largely eliminate
the production of out-of-specification
material. Contrasting this control
scheme with offline or manual analysis quickly brings the multiple benefits into sharp focus:
• When using offline particle-size
analysis, each new batch required a
potentially lengthy iterative process
to determine the appropriate rotor
speed, in order to meet the defined
specification. Eliminating this step
has saved both time and material.
• Prior to automation, the rotor
speed was fixed for each batch,
based on a test sample. However, if
the sample was not representative
of the batch, or if segregation had
occurred, the rotor speed would be
less than optimal and the product
Figure 3. This upgrade to a cement finishing circuit for milling clinker (with gypsum and limestone) helps achieve the fineness required to meet final cement specifications. The controlled variables include particle size (product quality), elevator power (separator feedrate) and exit temperature; the manipulated variables include feedrate, separator speed and the exit water spray and air flowrate (Florida Rock Industries' Thompson Baker cement plant, Vulcan Materials Co.)
inconsistent. Since the automated mill responds to, and compensates for, any variability in the feedstock, the product particle size has become extremely consistent.
Multiple control loops
The next step beyond a single control
loop involves multiple, simple control
loops, which enable the parallel manipulation of a number of variables to
simultaneously meet product quality,
variable cost and throughput goals. For
example, at its plant close to Verona in
Italy, Ferrari Granulati mills very fine marble powders of exemplary quality [1]. Three discrete products are marketed, with Dv50 values in the range of three to eight microns. Here, online particle-size analysis has been used extensively to develop the design of the mill and a control strategy for the milling circuit (the mill and its associated classifier).
The adopted strategy relies largely
on fixing a number of process variables,
at values defined through detailed
optimization trials that are based on
realtime measurement. These values
have been defined for each product.
Two independent, automated control
loops are, however, applied to optimize mill performance and maximize
plant throughput on an ongoing basis.
One loop maintains a prescribed powder depth on the table of the vertical
roller mill to ensure efficient comminution and prevent excessive wear of
the mill. The other controls the rate of
fresh feed to the mill with reference to
the recycle rate from the classifier, to
ensure that the total feedrate to the
mill remains constant; a minimal sketch of this feed-balance logic follows below. In combination with the fixed operating strategy, these loops ensure that exceptional product quality is achieved at a competitive cost.
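Here is that sketch; the total-feed setpoint, limits and names are illustrative, not Ferrari Granulati's actual values.

TOTAL_FEED_SETPOINT_TPH = 12.0       # total feed to the mill, t/h (assumed)
FRESH_FEED_LIMITS_TPH = (0.0, 12.0)  # actuator range (assumed)

def fresh_feed_demand(recycle_tph: float) -> float:
    """Fresh-feed setpoint that holds the total mill feed constant."""
    demand = TOTAL_FEED_SETPOINT_TPH - recycle_tph
    lo, hi = FRESH_FEED_LIMITS_TPH
    return min(max(demand, lo), hi)

# If the classifier returns 4.5 t/h of oversize, feed 7.5 t/h fresh.
print(fresh_feed_demand(4.5))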
As with the previous example, the
architecture of the control loops employed here is relatively simple: one
single process variable manipulated
on the basis of a single measured
variable. Nevertheless, they are highly effective, with each loop efficiently targeting one specific aspect of process performance.
Multivariate process control
Just as a seasoned operator bases
manual control decisions on every
piece of relevant information, the
most sophisticated automated control
relies on an array of data, rather than
a single input. Multivariate process
control systems take in and use data
from a number of sources and, in combination with a process model, provide
multiple outputs, simultaneously manipulating various operational parameters. Such systems work within well-defined boundaries to target optimal operation at all times.
Quite recently, steps have been
taken to reduce one of the barriers to
implementing multivariate process
control: the difficulty of integrating
analyzers from different suppliers.
The new OPC Foundation Analyzer
Device Integration (ADI) specification [2] provides a common standard
for instrument manufacturers. In the future, this should ease the integration of both process and laboratory systems. Enabling software based on this specification is already available commercially. These advances
will reduce the difficulty and cost of
implementing customized multivariate control strategies, bringing the
potential rewards within the reach of
more manufacturers.
Case study: Multivariate control of a heavy commodity milling circuit. In 2006, Vulcan Materials Company (Birmingham, Ala.) made the decision to transform control of its cement finishing circuit (Figure 3). The project involved the following three significant changes:
• Switching from Blaine measurement to laser-diffraction particle-size analysis, with the intention of more precisely targeting cement performance and accessing online technology
• Adopting online, rather than offline, analysis for process control
• Selecting and installing a powerful model-predictive-control package to automate process control
Vulcan Materials installed a proprietary solution for multivariate control and an online laser-diffraction
particle-size analyzer. At the heart
of the control package is a multivariate process model that is tuned using
plant data to accurately predict plant
performance from a range of inputs.
Automatic manipulation of process
variables, on the basis of these predictions, achieves plant performance targets, which are as follows:
• Maintain product quality
• Reduce variability and improve operational stability
• Maximize fresh feedrate, subject to equipment constraints
The process model runs in real time,
employing an integrated steady-state
optimizer and dynamic controller to
drive the system toward optimal operation within the above constraints.
The impact of changing manipulated
variables is projected into the future;
predictive control ensures that multiple performance targets are met
simultaneously and that process outputs are as close as possible to desired
reference trajectories. Optimization
procedures are repeated each time
process values are re-read, following
a change, in order to maintain the future prediction-horizon period. This
is termed “realtime receding horizon
control” and is characteristic of model
predictive control.
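A toy receding-horizon sketch can make the mechanics concrete: project the effect of candidate moves over a horizon, apply only the best first move, then re-read the process and re-optimize. The first-order model, single manipulated variable and tuning below are invented for illustration and bear no relation to the actual multivariate cement-plant model.

A, B = 0.9, 0.5            # toy plant model: x[k+1] = A*x[k] + B*u[k]
HORIZON = 10               # prediction-horizon length, in samples
SETPOINT = 1.0
U_CANDIDATES = [i / 50.0 for i in range(-100, 101)]   # moves in [-2, 2]

def horizon_cost(x: float, u: float) -> float:
    """Cost of applying move u now and holding it over the horizon."""
    cost, xk = 0.0, x
    for _ in range(HORIZON):
        xk = A * xk + B * u
        cost += (xk - SETPOINT) ** 2 + 0.01 * u ** 2
    return cost

x = 0.0
for step in range(8):
    u = min(U_CANDIDATES, key=lambda c: horizon_cost(x, c))
    x = A * x + B * u      # apply only the first move, then re-optimize
    print(f"step {step}: u = {u:+.2f}, x = {x:.3f}")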
Here, a number of loops are operating in tandem (Figure 3). Manipulating clinker feedrate and separator
speed controls product quality. These
same two parameters are also used,
together with air flow through the
mill, to drive separator feedrate (measured as elevator power) toward a defined high limit. Air flowrate through
the mill and exit water spray variables control the temperature of exiting material. Finally, a stabilizing loop
minimizes a function defined as “mill
condition”, calculated from the rate of
change of elevator and mill power. The
online particle-size analyzer measures
product quality in real time, providing
vital information that is used by the
model in combination with an array of
other process measurements.
The installed solution has pushed
the circuit into a new operating regime, uncovering better control strategies than had been identified through
manual operation. Significant benefits
have accrued, including the following:
• A 20% reduction in specific energy consumption
• An improvement of 15% in one-day strength levels (a primary performance indicator)
• An increase in throughput in excess of 15%
These savings are not individually attributable to either the analyzer or the
control package but arise from symbiosis between the two, which has unlocked the full potential of each. The
payback time for the entire project is
estimated at just over one year, based
on energy savings alone.
Conclusions
The automation of process analysis
and its closer integration with plant
operation presents an opportunity for
improved control, reduced risk and financial gain. For online particle-size
analysis, proven automated systems
have developed to the point of widespread availability, making this opportunity financially and technically
feasible across a broad range of manufacturing sectors. Options for investment now range from fully integrated
turnkey solutions to sensor-only purchase for in-house implementation.
Continuous, realtime analysis of a
critical variable, one that directly influences product performance, provides a
platform for developing and implementing the very best automated control strategies for a given application.
Such strategies deliver multiple economic benefits in the form of reduced
waste, increased throughput, enhanced
product quality, reduced manual input
and lower energy consumption. Automating control realizes the full potential of realtime analysis, extracting
maximum return from an investment
in process analytical technology. ■
Edited by Rebekkah Marshall

References
1. Ferrari, A. and Pugh, D., Marble fillers made to measure, Industrial Minerals, August 2008.
2. OPC Foundation Analyzer Device Integration (ADI) specification, http://opcfoundation.org/Default.aspx/02_news/02_news_display.asp?id=740&MID=News

Authors
Jeff DeNigris joined Malvern Instruments in 2005 as national sales manager, process systems, to focus on supplying online, realtime particle-size analyzers to the pharmaceutical, fine-chemical, mineral, toner and cement markets. He graduated in 1989 with a B.S.M.E. degree from Widener University, College of Engineering, in Pennsylvania. He has spent most of his career in the manufacturing sector with top original equipment manufacturers of capital equipment, with particular focus on the plastics industry in the sales and marketing of recycling and reclaim systems, and on the design and implementation of granulation and separation systems.

Alberto Ferrari is the production manager of Ferrari Granulati S.A.S., a leading producer of marble chips and powders, situated in Grezzana (Verona), Italy. He is a specialist in the fields of information technology and industrial automation, having worked for five years as R&D manager at Dellas S.p.A., a producer of diamond tools, before returning to the family company in 1995. He continues to work to improve the efficiency of milling and sieving plants for calcium carbonate minerals.
Facts At Your Fingertips
Process Hazards Analysis Methods
Department Editor: Scott Jenkins
Different methodologies are
available for conducting the
structured reviews known as
process hazards analyses (PHAs) for
new processes. PHAs are often conducted or moderated by specialists,
with participation by the design team,
representatives of the facility owner,
and experienced process operators.
Each PHA method is better suited to a specific purpose, and the methods should be applied at different stages of project development. The
table includes brief descriptions of
some of the most widely used PHA
methods in the chemical process industries (CPI).
When to use different methods
Different types of PHA studies have
varying impact, depending on the design phase in which they are applied.
For example, if a consequence analysis is not performed in a conceptual
or pre-FEED (front-end engineering
and design) phase, important plot-plan considerations can be missed,
such as the need to own more land
to avoid effects on public spaces; or
the fact that the location might have a
different elevation with respect to sea
level than surrounding public places
impacted by a flare plume.
Some other studies, like HAZOP,
cannot be developed without a control philosophy or piping and instrumentation diagrams (P&IDs), and are
performed at the end of the FEED
stage or at the end of the detailed
engineering phase (or for improved
results, at the end of both) to define
and validate the location of pressure
safety valves (PSVs) as well as to
validate other process controls and
instrument safety requirements.
QRA or LOPA evaluations (or both) are undertaken after the HAZOP study to validate siting and to define the safety integrity levels (SIL) that the plant must ultimately meet.
■
Editor’s note: The definitions in the table, and associated
comments, were adapted from the following article: Giardinella, S., Baumeister, A. and Marchetti, M. Engineering for
Plant Safety. Chem. Eng., August 2015, pp. 50–58. An additional reference is the following article: Wong, A., Guillard,
P. and Hyatt, N. Getting the Most Out of HAZOP Analysis,
Chem. Eng., August 1, 2004, pp. 55–58.
Table. Different PHA Methods and Approaches
Consequence analysis: This method quantitatively assesses the consequences of hazardous-material releases. Release rates are calculated for the worst case and also for alternative scenarios. Toxicological endpoints are defined, and the possible release duration is determined
Hazard identification analysis (HAZID): HAZID is a preliminary study that is performed in early project stages, when potentially hazardous materials, general process information, an initial flow diagram and the plant location are known. HAZID results are also generally used later on to perform other hazard studies and to design the preliminary piping and instrumentation diagrams (P&IDs)
What-if method: The what-if method is a brainstorming technique that uses questions starting with "What if...," such as "What if the pump stops running?" or "What if the operator opens or closes a certain valve?" For best results, these analyses should be held by experienced staff, who are able to foresee possible failures and identify design alternatives to avoid them
Hazard and operability study (HAZOP): The HAZOP technique has been a standard since the 1960s in the chemical, petroleum-refining and oil-and-gas industries. It is based on the assumption that there will be no hazard if the plant is operated within the design parameters, and it analyzes deviations of the design variables that might lead to undesirable consequences for people, equipment, the environment, plant operations or company image. If a deviation is plausible, its consequences and probability of occurrence are then studied by the HAZOP team. Usually, an external company is hired to interact with the operating company and the engineering company to perform this study. There are at least two methods using matrices to evaluate the risk (R): one evaluates consequence level (C) times frequency (F) of occurrence; the other incorporates exposure (E) as a time value and probability (P), ranging from practically impossible to almost sure to happen. In the latter method, the risk is found by the following equation: R = E × P × C
Layer-of-protection analysis (LOPA): The LOPA method analyzes the probability of failure of independent protection layers (IPLs) in the event of a scenario previously studied in a hazard evaluation such as a HAZOP. LOPA is used when a plant uses instrumentation that is independent from operations, namely safety instrumented systems (SIS), to assure a certain safety integrity level (SIL). The study uses a fault tree to determine the probability of failure on demand (PFD) and assigns a required SIL to a specific instrumentation node. For example, in petroleum refineries, most companies will maintain a SIL equal to or less than 2 (average probability of failure on demand ≥10⁻³ to <10⁻²), while a nuclear plant may require SIL 4 (average probability of failure on demand ≥10⁻⁵ to <10⁻⁴)
Fault-tree analysis: Fault-tree analysis is a deductive technique that uses Boolean logic symbols (that is, AND or OR gates) to break down the causes of a top event into basic equipment failures or human errors. The immediate causes of the top event are called "fault causes." The resulting fault-tree model displays the logical relationship between the basic events and the selected top event
Quantitative risk assessment (QRA): QRA is the systematic development of numerical estimates of the expected frequency and consequence of potential accidents, based on engineering evaluation and mathematical techniques. The numerical estimates can vary from simple values of probability or frequency of an event occurring, based on relevant historical data of the industry or other available data, to very detailed frequency-modeling techniques. The events studied are the release of a hazardous or toxic material, explosions, or boiling-liquid expanding-vapor explosions (BLEVE). The results of this study are usually shown on top of the plot plan
Failure mode and effects analysis (FMEA): This method evaluates the ways in which equipment fails and the system's response to the failure. The focus of the FMEA is on single equipment failures and system failures
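As a rough illustration of the LOPA arithmetic described in the table, the sketch below multiplies an initiating-event frequency by the PFDs of the independent protection layers and maps the resulting required SIS performance onto the SIL bands quoted above. All numerical values are hypothetical placeholders, not recommendations.

```python
# A minimal, order-of-magnitude LOPA sketch. All frequencies and PFD
# credits below are invented placeholders for illustration only.
initiating_frequency = 0.1     # initiating events per year (assumed)
ipl_pfds = [0.1, 0.01]         # e.g., operator response, relief valve (assumed)
tolerable_frequency = 5e-7     # tolerable events per year (assumed criterion)

mitigated = initiating_frequency
for pfd in ipl_pfds:
    mitigated *= pfd           # protection layers assumed independent

required_pfd = tolerable_frequency / mitigated  # max PFD allowed for the SIS

# Map the required PFD onto the SIL bands quoted in the table (IEC 61511 style)
bands = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}
sil = next((s for s, (lo, hi) in bands.items() if lo <= required_pfd < hi), None)
print(f"mitigated frequency = {mitigated:.1e}/yr, "
      f"required SIS PFD = {required_pfd:.1e} -> SIL {sil}")
```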
Feature Report
Aging Relief Systems —
Are they Working Properly?
Common problems,
cures and tips
to make sure your
pressure relief valves
operate properly
when needed
Sebastiano Giardinella
Inelectra S.A.C.A.
Relief systems are the last line of
defense for chemical process facilities. Verifying their capability to safeguard equipment integrity becomes important as process
plants age, increase their capacities to
adjust to new market requirements,
undergo revamps or face new environmental regulations.
In the past, approximately 30% of
the chemical process industries’ (CPI)
losses could be attributed, at least in
part, to deficient relief systems [1].
Furthermore, in an audit performed
by an independent firm at more than
250 operating units in the U.S., it was
determined that more than 40% of the
pieces of equipment had at least one relief-system-related deficiency [2]. These
indicators underscore the importance
of checking the plant relief systems.
This article presents the most common types of relief system problems
with their possible solutions and offers
basic guidelines to maintain problemfree relief systems.
Common problems in
existing relief systems
Problems and their causes
Relief system problems or deficiencies
can be identified, with respect to the
U.S. Occupational Safety and Health
Admin. (OSHA) regulation 29 CFR
1910.119, as items that do not comply with "recognized and generally accepted good engineering practices" [3] in relief-systems design. The recognized and generally accepted good engineering practices are criteria endorsed by widely acknowledged institutes or organizations, such as the Design Institute for Emergency Relief Systems (DIERS) or the American Petroleum Institute (API). For instance, in the petroleum-refining industry, the accepted good engineering practices are collected in API Standards 520 and 521.

Figure 1. A problem tree for relief systems links symptoms (inadvertent relief-valve blocking; equipment and/or piping failure; vibration in relief lines and headers; relief-valve chattering) to problems (unprotected equipment or piping during overpressure scenarios; malfunction of relief-system components), and traces them through causes such as a lacking, undersized or improperly installed relief device; undersized relief lines or equipment; an incorrect relief-device set pressure; inappropriate relief-line routing; block valves without involuntary-closure prevention; and, at the root, overpressure scenarios unforeseen during design or relief loads higher than foreseen during design
The most common relief-system deficiencies can be classified into one of
three types [2]:
1. No relief device present on equipment with one or more potential overpressure scenarios
2. Undersized relief device present on equipment with one or more potential overpressure scenarios
3. Improperly installed pressure relief device
The first type of deficiency refers to
the lack of any relief device on a piece
of equipment that is subject to potential overpressure. The second type refers to an installed relief device with insufficient capacity to handle the required relief load. The third type encompasses relief devices with incorrect set pressures, the possibility of involuntary blocking, or hydraulic problems. In addition to these problems, other, less frequent ones can be cataloged as miscellaneous deficiencies. A relief-system problem tree is shown in Figure 1.

Table 1. Relief-system problem identification during overpressure-scenario modeling
Undersized relief device:
• Insufficient relief-device area, identified when the calculated relief-device area > the installed relief-device area
Improperly installed relief device:
• Excessive relief-valve inlet-line pressure drop, identified when the friction pressure drop in the pressure-relief-device inlet line > the allowable friction losses (typically 3% of the set pressure)
• Excessive relief-valve backpressure, identified when the relief-valve backpressure > the allowable backpressure (typically 10% of set pressure for conventional valves, or 50% for balanced-bellows valves, considering the backpressure capacity-reduction factor)
• Incorrect relief-valve set pressure, identified when the pressure in the protected vessel or line > the maximum allowable accumulated pressure (typically 10, 16 or 21% above the MAWP for pressurized vessels with single relief valves in non-fire scenarios, multiple relief valves in non-fire scenarios, and relief valves in fire scenarios, respectively), AND the pressure at the PSV inlet < the PSV set pressure
• Excessive line velocity, identified when the line Mach number > the allowable Mach number (typically 0.7)
• Insufficient knockout-drum liquid separation, identified when the effectively separated droplet size at maximum relief load > the allowable droplet size (typically 300–600 µm)
• Excessive flare radiation, identified when the calculated radiation level at a specific point > the allowable radiation level (typically 1,500 Btu/h-ft² where the presence of personnel with adequate clothing is expected for 2–3 min during emergency operations, or 500 Btu/h-ft² where continuous presence of personnel is expected, both including solar radiation)
Miscellaneous:
• Other, less frequent deficiencies
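Because the identification criteria in Table 1 are simple inequalities, they can be screened programmatically. The sketch below encodes three of the rules of thumb (the 3%-of-set inlet-loss limit, the allowable-backpressure fractions and the Mach 0.7 velocity limit) with hypothetical example values; a real evaluation would follow API 520 and 521 in full.

```python
# A hedged screening sketch of the Table 1 installation checks. The limits
# encode the rules of thumb quoted in the table; the example numbers are
# hypothetical and do not come from any real valve.

def check_installation(set_psig, inlet_dp_psi, backpressure_psig, mach,
                       valve_type="conventional"):
    findings = []
    if inlet_dp_psi > 0.03 * set_psig:                      # 3%-of-set rule
        findings.append("excessive inlet-line pressure drop (>3% of set)")
    bp_frac = 0.10 if valve_type == "conventional" else 0.50
    if backpressure_psig > bp_frac * set_psig:              # backpressure rule
        findings.append(f"excessive backpressure (>{bp_frac:.0%} of set)")
    if mach > 0.7:                                          # velocity rule
        findings.append("line Mach number above 0.7")
    return findings or ["no installation deficiencies flagged"]

# Hypothetical 150-psig conventional PSV: 6 psi inlet loss, 20 psig
# backpressure, Mach 0.4 in the tail pipe
for finding in check_installation(150.0, 6.0, 20.0, 0.4):
    print(finding)
```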
In a previous statistical analysis of
272 process units in the U.S., it was
observed that [2]:
• 15.1% of the facilities lacked relief devices on equipment with one or more
potential overpressure scenarios
• 8.6% of the relief devices were undersized
• 22% of the relief devices were improperly installed
Identifying potential problems
There are established work methodologies for identifying potential problems in relief systems. OSHA regulation
29 CFR 1910.119 is based on safety
audits that use techniques such as
process hazard analyses performed at
regular intervals. The work methodology established by this regulation to
identify safety hazards comprises two
basic steps [3]:
1. Process safety data gathering,
which includes the following:
• Process chemical safety data
• Process technology data
• Process equipment data [materials
of construction (MOCs), piping and
instrumentation diagrams (P&IDs),
design standards and codes, design
and basis of design of the relief systems, among others]. As part of these
data, “the employer shall document
that equipment complies with recognized and generally accepted good
engineering practices” [3]
2. Process hazards analysis, which
may include: What-if, hazard and operability (HAZOP) study, failure mode
and effects analysis (FMEA), fault-tree
analysis or equivalent methodologies.
In order to document that the plant
equipment complies with recognized
and generally accepted good engineering practices, the plant management
must validate that the facilities are
protected against potential overpressure scenarios, in accordance with
accepted codes and standards, such
as API standards 520 and 521. An effective relief-system-validation study
comprises the following steps:
1. Plant documents and drawings gathering. The first step involves obtaining and classifying the
existing plant documents and drawings: process flow diagrams (PFDs),
mass and energy balances, product
compositions, equipment and instrument datasheets, P&IDs, relief device
datasheets, relief loads summaries, relief line isometrics, one-line diagrams,
unit plot plan, and so on.
2. Plant survey. The second step consists of inspecting the installed relief
devices to verify that they are free of
mechanical problems, to update and fill out missing data in the plant documents, and to verify consistency between the documents and drawings
and the actual as-built plant. During
plant surveys, other typical indications of relief system problems are the
presence of pockets, leaks or freezing
in relief lines and headers.
3. Overpressure scenario identification. In this step, the P&IDs are
examined in order to identify credible
overpressure scenarios for each piece
of equipment.
4. Overpressure scenario modeling. The fourth step is to model each
credible overpressure scenario. Each
model is developed in accordance with
the chosen reference standard (for
instance, API 520 and 521). The following calculations are typically performed during this step:
• Required relief load for each overpressure scenario
• Required relief-device orifice area
for each overpressure scenario
• Relief line’s hydraulics
• Knockout drum (KOD) liquid-separation verification
• Flare or vent radiation, dispersion
and noise level calculations
The overpressure scenario modeling
can be done in different ways, be it
by hand calculations, spreadsheets
or by the use of steady-state or dynamic relief-system simulation software. The results of the models are
analyzed to identify potential problems. Table 1 summarizes the possible relief system problems and the
ways to identify them on the calculation results.
Available solutions
There are various solutions for each
type of relief system problem. The
available solutions can be classified
as: (a) modification of existing relief
system components, (b) replacement
of existing relief system components,
(c) installation of new relief system
components, or (d) increasing the reliability of the emergency shutdown systems.
The modification of existing relief-system components includes changes made to installed components, without requiring their replacement. Some examples of this type of solution include the following:
1. Recalibrating the pressure relief valve by readjusting the set pressure (a solution to an incorrect set pressure) or the blowdown (a solution to inlet-line friction losses between 3% and 6% of the set pressure)
2. Adding locks to the block valves on relief lines (to prevent involuntary valve closure)
The replacement of existing relief-system components involves substituting inadequate relief-system elements for newer, appropriate ones. Some examples of this solution are the following:
1. Replacing the installed pressure relief valve, either with one with a larger orifice area (a solution to an undersized relief device) or with one of a different type (a solution to excessive backpressure)
2. Replacing relief-line sections to solve hydraulic problems, such as excessive relief-valve inlet-line friction losses, excessive backpressure, excessive fluid velocity and pockets, among others
The installation of new relief-system components involves the addition of relief-system elements that were not included in the original design, such as the following:
1. New pressure relief valves, either on equipment lacking overpressure protection, or as supplementary valves on equipment with undersized relief valves
2. New headers, knockout drums or flares, when the revised relief loads exceed the existing relief-system capacity, or when relief-system segregation (that is, acid flare/sweet flare, or high-pressure/low-pressure flare) is required
Increasing the reliability of the emergency shutdown systems is typically done via implementation of high-integrity protection systems (HIPS), in which redundant instrumentation and emergency shutdown valves are installed in order to cut off the overpressure sources during a contingency. The main advantage of this type of solution is that it can significantly reduce the required relief loads, hence posing an economical alternative to the installation of new relief headers, knockout drums or flares.

Examples of problems in aging systems
What follows are examples of some typical relief-system problems that can be found in aging process facilities, along with the recommended remedies.

Deficiency No. 1
The first type of deficiency is when no relief device is present on equipment with one or more potential overpressure scenarios.
Case 1: New overpressure scenario after pump replacement. In a process unit, a centrifugal pump was replaced with another one with a higher head, without considering the downstream system's maximum allowable working pressure (MAWP). Since the downstream system was designed for the previous pump's shutoff pressure, the installation of a pump with a higher shutoff pressure created a new blocked-outlet scenario. Therefore, the installation of a new pressure safety valve (PSV) was recommended.

Deficiency No. 2
This type of deficiency involves undersized relief devices present on equipment with one or more potential overpressure scenarios.
Case 2: Insufficient orifice area after changes in the stream composition. In a petroleum refinery, a desalter that was originally designed to process heavy crude oil was protected against a potential blocked outlet by a relief valve on the crude outlet. When the refinery started processing lighter crude, simulations showed partial vaporization in the relief valve. The vapor reduced the PSV capacity until it was insufficient to handle the required relief load. In this case, the recommendation was to replace the original PSV with one with a larger orifice and appropriate relief lines.

Deficiency No. 3
The third type of deficiency involves improperly installed pressure relief devices.
Case 3: Excessive backpressure due to discharge-line modifications. An existing vacuum-distillation column's PSV outlet lines were rerouted from the atmosphere to an existing flare header due to new environmental regulations. The installed PSVs were of a conventional type, so with the new outlet-line routing, the backpressure exceeded the allowable limit. A recommendation was made to replace the existing PSVs with balanced-bellows PSVs.

Figure 2. The risk of blocking in a pressure safety valve (PSV) can sometimes be readily identified on P&IDs. Keeping the block valves on PSV lines open via locks (LO) or car seals (CSO) is correct; taking no measures to prevent involuntary PSV blocking is not. Likewise, a PSV installed below the mist eliminator on a separator remains effective even if the mist eliminator becomes clogged, whereas a PSV installed above the mist eliminator is ineffective when the latter gets clogged
Case 4: Incorrect PSV set pressure
due to static pressure differential.
A liquid-full vessel’s relief valve was
set to the vessel’s MAWP; however, the
relief valve was installed several feet
above the equipment’s top-tangent
line. The static pressure differential
was such that the pressure inside the
vessel exceeded the maximum-allowable accumulated pressure before the
PSV would open. The problem was
solved by modifying the existing PSV,
recalibrating it to the vessel MAWP
minus the static pressure differential.
Case 5: Incorrect PSV set pressure
due to higher operating temperature. The temperature of a stream was
increased with the addition of new heat
exchangers, and no attention was paid
to the set pressure of the thermal relief
valve in the line. With the increased temperature, the pipe MAWP was reduced.
The PSV set pressure was lowered to
the new MAWP at the new working
temperature plus a design margin.
Case 6: Risk of blocking the relief
valve. A relief valve can be blocked for
various reasons. Some of the most common include the lack of locked-open
(LO) or car-seal-open (CSO) indications in the PSV inlet- and outlet-line
block valves, and installing the PSV
above the mist eliminator on a separator. Both deficiencies can be readily
identified on P&IDs (Figure 2).
Case 7: Pockets. Relief lines going
to closed systems should be self-draining. It is not uncommon during construction that, due to space limitations, a non-ideal line arrangement is installed, creating pockets on relief lines that may cause liquid accumulation and hamper relief-valve performance (Figure 3).
Figure 3. Non-free-draining arrangements in installed relief lines, such as shown in these two constructions, may cause accumulation of liquids that can hamper relief-valve performance
Deficiency No. 4
The fourth category of deficiencies is a
miscellaneous grouping.
Case 8: Problems in an existing
flare network due to additional
discharges. The additional discharges of various distillation-column
relief valves were rerouted to an existing flare network because of new environmental regulations. The additional
discharges exceeded the system capacity, and the entire flare network and
emergency shutdown system had to be
redesigned by selecting the optimum
tie-in locations for the discharges, and
by implementing HIPS in order to reduce the required relief loads.
Case 9: Sweet and sour flare mixing. When revamping a section of a
process unit’s relief headers, some acid
discharges were temporarily routed
to the sweet flare header in order to
maintain operations. Soon afterwards,
the header backpressure started to increase and scaling was detected upon
inspection. The acid gases could also
generate corrosion, as the sweet flare
header material was inadequate to
handle them.
Case 10: High- and low-pressure
flare mixing. The discharges of low
pressure PSVs located on drums were
routed to the closest flare header, which
was a high pressure header. Since the
design case for relief of the drums was only the fire case, additional discharges were not considered by the designer. However, the general power-failure contingency also affected these drums. When this case
was evaluated, the backpressure was
too high for the installed PSVs, so they
had to be replaced by piloted valves.
Maintaining problem-free relief systems
Some practical guidelines are offered
below to help the plant management
to assess, identify and troubleshoot relief system problems.
Tip No. 1: Assess the risk
Some factors tend to increase the probability and impact of a relief-system failure; Table 2 qualitatively shows some of them. If several of the conditions shown in Table 2 apply, then the plant management should consider planning a detailed study, such as a quantitative risk analysis (QRA) or a relief-system validation study.

Table 2. Conditions that increase the probability and impact of relief-system failure
Conditions that increase the probability of relief-system failure:
• The plant has over 20 years of service
• The plant currently handles products different from those it was originally designed for
• The plant operates at a different load, or at different conditions, from those it was originally designed for
• There have been contingencies that have required the replacement of equipment or lines in the past
• Rotating equipment (pumps, compressors) has been modified (for instance, new impellers) or replaced
• The relief valves have not been checked or validated in the last ten years
• Modifications have been made to existing relief-valve lines (that is, they have been rerouted)
• A complete and up-to-date relief-valve inventory is not available
• The relief load summary has not been updated in the last ten years
• A relief-header backpressure profile is not available, or the existing model has not been updated in the last ten years
Conditions that increase the impact of relief-system failure:
• The plant handles toxic, hazardous or flammable fluids
• The plant handles gases
• The plant operates at high pressures
• The plant operates at high temperatures
• The plant has furnaces, or equipment that adds considerable heat input to the fluids
• The plant has high-volume equipment (such as columns or furnaces)
• The plant has exothermic reactors, or chemicals that could react exothermically in storage
• The plant has large relief valves (8T10), or the relief header has a large diameter
• The plant has a high number of operations personnel
• The plant is located near populated areas
Tip No. 2: Maintaining up-to-date relief-valve information
The plant management should maintain accurate, up-to-date relief-valve
data for maintenance and future reference. The following documents are of
particular interest: (a) relief valve inventory, (b) relief loads summary and
(c) relief header backpressure profile.
Relief valve inventory. The relief
valve inventory is a list that contains
basic information and status for each
relief valve, which should include the
following:
• Valve tag
• Process unit and area
• Location
• Discharge location
• Connection sizes
• Connection rating
• Orifice letter
• Manufacturer
• Model
• Type (conventional, balanced bellows, pilot)
• Set pressure
• Allowable overpressure
• Design case
• Installation date
• Last inspection date
• Last calculation date
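One way to keep such an inventory machine-readable is sketched below. The field names simply mirror the list above and are otherwise arbitrary; the query at the end echoes the ten-year inspection criterion from Table 2.

```python
# A hedged sketch (field names assumed, mirroring the list above) of one
# relief-valve inventory record as a typed structure, so the inventory can
# be queried, e.g., for valves overdue for inspection.
from dataclasses import dataclass
from datetime import date

@dataclass
class ReliefValveRecord:
    tag: str
    process_unit_and_area: str
    location: str
    discharge_location: str
    connection_sizes: str             # e.g., "4 in x 6 in"
    connection_rating: str
    orifice_letter: str               # e.g., "P"
    manufacturer: str
    model: str
    valve_type: str                   # conventional, balanced bellows or pilot
    set_pressure_psig: float
    allowable_overpressure_pct: float
    design_case: str                  # e.g., "blocked outlet", "external fire"
    installation_date: date
    last_inspection_date: date
    last_calculation_date: date

def overdue_for_inspection(valves, years=10):
    """Tags of valves whose last inspection is older than the given number
    of years (Table 2 lists a ten-year-old inspection as a risk factor)."""
    cutoff_year = date.today().year - years
    return [v.tag for v in valves if v.last_inspection_date.year < cutoff_year]
```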
Relief loads summary. The relief
loads summary contains all the overpressure scenarios and relief loads
for each relief device at the plant. The
data in this document can be used to
identify the critical overpressure scenarios in the plant.
Relief-header backpressure profile. A backpressure profile of the entire relief network is valuable when
evaluating the critical contingencies
in the system, as it can be used to
identify relief valves operating above
their backpressure limits.
Tip No. 3: Planning and
executing a relief system study
The execution of a typical, relief-system validation study comprises three
phases: (a) survey and information
gathering, (b) existing relief system
modeling and (c) relief system troubleshooting. The typical deliverables
for each phase are described in Table 3. If the plant management has specific document formats, it should provide them as part of the deliverable description.

Table 3. Relief-system validation study: typical execution phases and deliverables
Survey and data gathering:
• Updated relief-device inventory: a list containing up-to-date, accurate data for each relief device located in the plant. The minimum data to be included are those shown in Tip No. 2, obtained by combining relief-valve manufacturer documentation with onsite inspections
• Updated P&IDs: P&IDs showing the existing installed relief-device information: connection diameters, orifice letter, set pressure, inlet- and outlet-line diameters and block valves
• List of pockets: a document identifying pockets on relief lines, with the appropriate photographs
Existing relief-system modeling:
• Updated relief-loads summary: a list containing the required relief loads for each applicable overpressure scenario of each relief device, the required orifice area and the relieving-fluid properties, based on actual process information
• Updated relief-network backpressure profile: a document showing a general arrangement of the relief headers and subheaders, along with updated backpressure profiles for the major plant contingencies
• Updated relief-device calculations: a document containing the calculations for each relief device under actual operating conditions
Relief-system troubleshooting:
• List of relief-system deficiencies: a document listing all of the deficiencies found in the existing relief system, categorized by type
• Conceptual engineering: a document defining the modifications required to solve the relief-system deficiencies
The study may require a number of
resources that are not readily available in the plant. If the plant management has available personnel but lacks specialized software licenses, then it can assign some of the tasks, such as survey and data gathering, to in-house resources. Tasks requiring
expertise or software packages above
the plant’s capabilities, such as complex distillation column, reactor system or dynamic simulations, should
be outsourced.
A consulting firm should be selected based on its experience in
similar projects, technological capabilities (specialized software licenses) and a reasonable cost estimate. In order for the consulting firm
to deliver an accurate estimate, the
plant management should provide
the scope definition along with sufficient information to identify each
relief device within the scope of the
project, its location and the possible
overpressure scenarios. These data
are available in the relief loads summary and relief device inventory.
One person should be assigned on
the plant management side to manage
the project, along with administrative
personnel, and at least one in charge
of technical issues; the latter should
Chemical Engineering www.che.com July 2010
be available to provide technical information and verify the validity of
the consulting firm’s calculations. The
typical information that the consulting firm will request in order to complete the study includes: relief device
inventory, relief loads summary, relief
device datasheets, mass and energy
balances, PFDs, P&IDs, equipment
datasheets and relief line isometrics
for each evaluated process unit/area.
The consulting firm may also request
process simulations, if available.
Tip No. 4: When modeling, go
from simple to complex
Replacing a relief valve or header
section generates labor, materials, installation and loss of production costs
that can only be justified when the
results of an accurate model identify
the need for it. However, developing
an accurate model for every relief device in the plant can be impractical
and costly, especially if only a small
number of relief devices require replacement at the end.
A practical compromise is to verify
each system starting from a simple
model with conservative assumptions, and developing a more accurate model for those items that do
not comply with the required parameters under such assumptions. This
approach minimizes the time and effort dedicated to unproblematic items, and concentrates on those items that could present problems.
For instance, for a blocked outlet
downstream of a centrifugal pump
and control valve system, the simplest model is to assume a relief load
equal to the pump’s rated capacity. If
the relief-valve orifice area is insufficient under the previous assumption,
the next step would be to read the
required relief load from the pump
curve with the control valve’s rated
discharge coefficient and the valve’s
downstream pressure equal to the relief pressure, ignoring piping friction
losses. If the orifice area still seems
insufficient, then a rigorous hydraulic
calculation of the entire circuit should
be performed to determine the required relief load.
Tip No. 5: Evaluate various
solutions to problems
As was mentioned earlier, there are
multiple solutions that are possible
for a single relief system problem, and
the plant management would natu-
rally wish to implement the quickest,
most practical and least costly one.
For instance, when a relief valve’s
inlet losses are between 3 and 6% of
the set pressure, the valve blowdown
can be adjusted instead of replacing
the entire valve inlet line.
Tip No. 6: What to do after
validation and troubleshooting
A routine revalidation of the relief system's correct operation gives assurance over the integrity of the facilities not only to the plant management, but also to third parties, such as occupational safety organizations and insurance companies. The cost of a relief valve study may very well be paid back by a reduction in the plant insurance premium. Furthermore, the image of a company that cares about the safety of its employees and the environment constitutes an important intangible benefit.
■
Edited by Gerald Ondrey
References
1. American Institute of Chemical Engineers, "Emergency Relief System (ERS) Design Using DIERS Technology", New York, 1995.
2. Berwanger, P., and others, Pressure-Relief Systems: Your Work Isn't Done Yet, www.hydrocarbononline.com, July 7, 1999.
3. Occupational Safety and Health Administration, 29 CFR 1910.119, "Process Safety Management of Highly Hazardous Chemicals".
Author
Sebastiano Giardinella is
a process engineer at Inelectra S.A.C.A. (Av. Principal con
Av. De La Rotonda. Complejo
Business Park Torre Este
Piso 5, Costa Del Este. Panamá. Phone: +507-340-4842;
Fax: +507-304-4801; Email:
sebastiano.giardinella@
inelectra.com). He has six
years’ work experience in
industrial projects with a
special focus in relief systems design and evaluation, equipment sizing, process simulation and
bids preparation. He has participated in several
relief system evaluation studies, revamps and
new designs. Giardinella graduated as a chemical engineer, summa cum laude, from Universidad Simón Bolívar in Venezuela and holds an M.S. degree in project management from Universidad Latina de Panamá. He has taken part as a speaker or coauthor at international conferences and is affiliated with the Colegio de Ingenieros de Venezuela.
Environmental Manager
Overpressure Protection: Consider
Low Temperature Effects in Design
Aubry Shackelford
Inglenook Engineering
Brian Pack
BP North America
In designing and sizing relief-device and effluent-handling systems, one commonly overlooked aspect of performance is examining the potential for
low temperatures that can cause the
components of the system to reach
temperatures below their respective,
minimum-design metal temperatures (MDMT), which may result in
brittle fracture with subsequent loss
of containment. This article points
out limitations of the typical overpressure-protection-analysis philosophy, discusses common sources of
low temperatures for further investigation, and addresses possible design remedies for MDMT concerns.
The primary objectives of a process
engineering evaluation of an effluent handling system (such as a flare
system) include ensuring that operation of the pressure relief devices discharging into the collection system
(flare headers, for example) is not adversely affected; and that the effluent
handling equipment are properly designed to perform safely. The results of
an overpressure-protection design are
the primary input for this engineering
evaluation; however, there are several
potential gaps in the ability of these
data to identify situations in which
the MDMT may be exceeded.
Understanding the inherent limitations of current overpressure-protection analyses is key to developing a more robust heuristic

Figure 1. Temperature drop (°F) relative to stagnation as a result of flowing, plotted against Mach number for ideal-gas specific-heat ratios k = 1.1, 1.2 and 1.3
Current-practice limitations
Common practices for pressure relief and effluent handling are found
in numerous references [1–5]. The
processes for estimating a discharge
temperature and performing the outlet pressure-drop calculations in the
pressure-relief-device discharge piping are limited in their ability to accurately predict flowing temperatures
for many situations.
First, the discharge calculations
are quite often only performed for the
controlling contingency for which the
pressure relief device was sized, which
does not necessarily represent the
most likely cause of overpressure or
the cause resulting in the lowest discharge temperatures.
Second, the outlet pressure-drop
calculations for individual pressure
relief valves consider the outlet discharge piping and potentially exclude
the remaining, downstream piping
system. This practice can result in a
temperature discontinuity between
the calculated discharge temperature
for the individual relief device and
that calculated for the same section
of piping considering the entire down-
stream piping system using an effluent-handling hydraulic model.
Third, the temperature estimates are typically based on isothermal pressure-drop equations and do not account for effects like retrograde condensation.
Fourth, some simplifications of the
calculations that are used for the purposes of estimating the outlet pressure drop do not represent flashing
effects (for example, highly subcooled
flashing liquids are often choked at
the bubblepoint; therefore, the sizing
of the valve may assume the backpressure is at the bubblepoint).
Finally, the temperature estimates
tend to be based on either relieving
temperatures or isenthalpic flashes
from relief conditions, which do not
account for kinetic energy effects.
These effects can be substantial if the
developed velocity in the outlet piping
is high and can be compounded when
there are multiple relief devices discharging simultaneously into a collection system, or when large diameter
differences exist between the tail-pipe
and the main effluent header.
Temperature drop. Figure 1 shows the temperature drop from the stagnation temperature (T_stagnation) caused by the kinetic energy developed during adiabatic compressible flow of an ideal gas, as a function of the Mach number, for ideal gases having different ideal-gas specific-heat ratios (k) (see Ref. 6, Equation 6-128). For the purposes of illustrating the temperature drop, a stagnation temperature of 0°F (460 R) was chosen.

Figure 2. Typical schematic of an NGL processing facility showing common areas of potential MDMT issues: (1) process upsets in the NRU can cause low flowing temperatures to propagate downstream, past the gas/gas exchanger, which is typically designed based on normal operating temperatures; (2) relief of unstabilized condensate, a flashing liquid; (3) breakthrough of high-pressure gas or flashing liquid into the flash tank; (4) tube rupture in the chillers causes relief of expanded gas that starts cold and is cooled further by JT effects; (5) valve specifications for appropriate MDMT are common issues; (6) common discharge lines can cause increased velocities, further reducing flowing temperatures, and depressuring is an isentropic process that results in low flowing temperatures; (7) potential relief of flashing volatile liquids that can cool significantly on flashing
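The curves in Figure 1 follow the adiabatic ideal-gas relation cited above (Ref. 6), T = T_stagnation/(1 + ((k - 1)/2)M^2). The short sketch below evaluates that relation on the same 0°F (460 R) stagnation basis and reproduces the plotted drops at Mach 1.

```python
# Minimal sketch: temperature drop below stagnation for adiabatic,
# compressible ideal-gas flow, T = T0 / (1 + (k - 1)/2 * M^2), with
# T0 = 460 R (0°F), matching the basis used for Figure 1.

def temperature_drop_F(mach, k, t0_rankine=460.0):
    """Drop below the stagnation temperature, in °F (equal to the °R drop)."""
    t_flowing = t0_rankine / (1.0 + 0.5 * (k - 1.0) * mach**2)
    return t0_rankine - t_flowing

for k in (1.1, 1.2, 1.3):
    print(f"k = {k}: drop at Mach 1.0 = {temperature_drop_F(1.0, k):.0f} °F")
# Prints drops of about 22, 42 and 60°F, consistent with Figure 1
```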
It is useful to note that while a stagnation temperature of 0°F seems unlikely for many cases, this stagnation
temperature is established after the
fluid has been relieved into the collection system (in other words, after the
isentropic process of flowing through
the pressure-relief-valve nozzle and
the subsequent adiabatic process of
expanding from the nozzle throat to
the total backpressure that results in
Joule-Thomson (JT) cooling, both of
which can result in significantly lower
stagnation temperatures of the fluid
entering into the discharge piping).
Additional limitations. Additional
gaps in the overpressure protection
analysis include the common practice
of not considering the potential for
46
pressure relief valves to leak, or the
effects of other inputs to the effluent
handling system (such as pressure
control valves, depressuring valves,
pump seals and manual purge valves).
A leaking pressure-relief valve is typically considered an operational and
mechanical issue, not a cause of overpressure that needs to be evaluated for
the sizing of the pressure relief valve
or for the effects on the downstream
collection system; however, many of us
in the warm Gulf Coast region of the
U.S. recognize an ice-covering as indicative of a leaking valve, and the fluids
used in the evaluation of the pressurerelief-device sizing may not be representative of the normal process fluid
(for example, the external fire case,
which is a common design basis).
Pressure control valves may also be
called upon to “relieve” fluids, yet are
commonly not accounted for in overpressure protection studies based on
the desire to not include the positive
response of control systems in preventing overpressure. In actual situations,
the basic process-control systems are
expected to function as intended, and
Chemical Engineering www.che.com July 2012
thus represent a more likely source of
fluid input to the collection system.
In addition, these control valves are
not necessarily sized to handle the full
flow of an overpressure scenario, resulting in flow from both the control
valve and the pressure relief valve,
thereby exacerbating velocity effects.
Finally, depressuring is a dynamic
process, releasing fluids of different
pressures and temperatures as a function of time. Considering the most
likely behavior of a depressuring system to be an isentropic expansion of
the residing fluid, the inlet fluid temperatures can drop significantly as the
depressuring progresses.
Low temperatures
While the potential for low flowing
temperatures falling below the MDMT
exists in a variety of processing facilities, the issue is especially apparent in natural-gas processing facilities, where high-pressure, low-temperature, low-molecular-weight gases and volatile liquids are present.
Design considerations. Based on
recent evaluations of several natural-
gas processing facilities with ethane
recovery capabilities, the authors
have identified several common areas
of concern that may provide a starting point for other gas processors’
investigations into this aspect of collection system design, as well as for
process piping. These areas include the following: multiple inputs (such as pressure relief devices or control valves) discharging into subheaders having diameters close in size to the individual discharge-piping diameter; flashing-liquid relief (unstabilized condensate, natural gas liquids [NGL] or liquid propane); internal-boundary-failure cases (tube rupture, for example) in gas chillers; cryogenic drain operations (such as draining an expander casing for maintenance); pressure-relief-device MDMT specifications not commensurate with the discharge-piping MDMT; and pressure relief devices or vents on the outlet of cryogenic cold-box sections, where the normal process fluid is at elevated temperatures, yet during process upsets may experience significantly lower temperatures.
Figure 2 provides an overview of
these common areas of concern related
to low flowing temperatures. NGL
and propane processing-and-storage
equipment are examples of commonly
overlooked systems that can achieve
low flowing-discharge temperatures. Such equipment usually has pressure relief devices that are sized based on an external fire case, yet it also has the potential for relieving the liquid, whether due to blocked discharges, leaking relief valves or depressuring.
Alternative solutions. While the
design issues related to low flowing
temperatures can be dealt with by
specifying appropriate metallurgy,
there are other alternatives for consideration. These alternatives can
include identifying ways to eliminate the cause of overpressure in
the first place (for example, prevention of overfilling of vessels), mitigation of relieving conditions causing
the low temperature excursion via
safety instrumented systems (SIS),
performing mechanical stress analyses to establish a better estimate of
the MDMT per ASME B31.3 (with replacement of components not covered
by stress analysis as needed), adding supplemental fluid (such as gas
or methanol) to raise the stagnation
temperature, rerouting the discharge
to a different location (such as to the
atmosphere), or conducting Charpy
testing on the piping in question to
establish the actual MDMT.
For potentially leaking pressurerelief valves, the options also include
recognizing the additional consequences in a risk-based inspection
protocol, installing rupture disks, or
adding skin thermocouples and low
temperature alarms on the discharge
piping to notify personnel of leakage
before the MDMT is crossed.
Final analysis
In summary, established overpressure-protection-analysis philosophies are not well suited to identifying possible material concerns resulting from process-fluid flashing and depressuring. Relief-device and effluent-handling sizing conventions and simplified calculation methodologies limit the ability of the designer to recognize potential MDMT concerns. Understanding the inherent limitations of current overpressure-protection-analysis practice is key to developing a more robust overpressure-protection-analysis heuristic, one that more fully recognizes the effects of low-temperature flashing on material design.
It is the experience of the authors
that modification of the typical overpressure-protection-analysis philosophy to identify and propose alternative solutions for conditions resulting
in excursions beyond the MDMT is prudent in promoting enhanced facility process-safety management.
■
Edited by Dorothy Lozowski
References
1. API Standard 520, "Sizing, Selection, and Installation of Pressure-Relieving Devices in Refineries, Part I — Sizing and Selection", 8th Ed., December 2008.
2. API Standard 520, "Sizing, Selection, and Installation of Pressure-Relieving Devices in Refineries, Part II — Installation", 5th Ed., August 2003.
3. ISO 23251:2006 (API Standard 521), "Petroleum, Petrochemical and Natural Gas Industries — Pressure-relieving and Depressuring Systems", 1st Ed., August 2006.
4. Coats, and others, "Guidelines for Pressure Relieving Systems and Effluent Handling", 1st Ed., Center for Chemical Process Safety of the American Institute of Chemical Engineers, 1998.
5. Gas Processors Suppliers Association, "Engineering Data Book", 12th Ed., 2004.
6. Perry, R.H., Green, D.W., and Maloney, J.O., editors, "Perry's Chemical Engineers' Handbook", 7th Ed., McGraw-Hill, 1997.
7. ASME B31.3-2008, "Process Piping", December 31, 2008.
Authors
Aubry Shackelford is a
principal engineer for Inglenook Engineering, Inc., which
provides process engineering consulting with a specific
focus on process safety management (15306 Amesbury
Lane, Sugar Land, TX 77478;
Email: aubry@inglenookeng.
com; Phone: 713-805-8277).
He holds a B.S.Ch.E. from
Northeastern University and
is a professional engineer licensed in the state of
Texas and the province of Alberta (Canada).
Brian Pack is the area engineering-support team leader
for the Mid-Continent operations area in North America
gas region for BP America
Production Co. (501 Westlake Park Blvd., Houston, TX
77079; Email: brian.pack@
bp.com; Phone: 281-366-1604).
He holds a B.S.Ch.E. from the
University of Oklahoma and
is a professional engineer licensed in the states of Texas and Oklahoma.
Note
The views in this paper are entirely the authors' own and do not necessarily reflect the views of BP America Production Co. or its affiliates.
Environmental Manager
Things You Need to Know Before Using an
Explosion-Protection Technique
Understanding the different classification methods is necessary to better select the
explosion-protection techniques that will be used
FIGURE 1. Shown here is a typical example of a Class I hazardous area utilizing division methods of area classification: a gasoline storage tank without a floating roof inside a circular dike, with Class I, Division 1 extending 5 ft around the vent and Class I, Division 2 extending from the 5-ft to the 10-ft (3-m) radius and over the diked area

FIGURE 2. The example hazardous area shown in Figure 1 is here classified according to the zones, with Zone 0 inside the tank, Zone 1 at the vent and Zone 2 in the surrounding area
Robert Schosker
Pepperl+Fuchs
Explosion protection is essential for many companies, and
those companies have decision makers. But before any
decisions can be made, there are
some important factors one must
consider. These factors include what is most efficient and economical, as well as the basics of explosion protection, so that the decision makers are headed in the right direction.
We will highlight many of the different
“things to know,” but first, let’s step
back in time and take a look at the
background of explosion protection.
Backdrop
After World War II, the increased
use of petroleum and its derivatives
brought the construction of a great
number of plants for extraction, refining and transformation of the chemical substances needed for technological and industrial development.
The treatment of dangerous substances, where there exists the
risk of explosion or fire that can be
caused by an electrical spark or hot
surface, requires specifically defined
instrumentation located in a hazardous location. It also requires that
the interfacing signals coming from
a hazardous location are unable to
create the necessary conditions to
ignite and propagate an explosion.
This risk of explosion or fire has
been the limiting factor when using
electrical instrumentation because
energy levels were such that the
energy limitation to the hazardous
location was difficult, if not impossible, to obtain. For this reason, those
parts of the process that were considered risky were controlled with
pneumatic instrumentation.
Moving forward
Now let’s move forward 70 years,
where almost everything you can
think of can be found at the touch of a
finger. From pneumatics to quad core
processors, information gathering
has definitely changed, but the principles for working in or gathering information out of a hazardous area remain the same. It's just that today
we have multiple options. In order to
exercise those options, we must first
determine if the danger of an explosion exists and how severe it may be.
What is a hazardous area?
Hazardous areas are most frequently
found in places where there is a possibility of an emission of flammable
gas or dust. Such a release can occur in normal operation, in the event of a fault, or due to wear and tear of seals or other components.
Now the risk of an ignition of an
air/gas mixture in this hazardous
area depends on the probability of
the simultaneous presence of the
following two conditions:
• Formation of flammable or explosive vapors, liquids or gases, or combustible dusts or fibers mixed with the atmosphere, or accumulation of explosive or flammable material
• Presence of an energy source (electrical spark, arc or surface temperature) that is capable of igniting the explosive atmosphere present
Table 1. Defining areas for Divisions

Class | Type of material
Class I | Locations containing flammable gases, flammable liquid-produced vapors, or combustible liquid-produced vapors
Class II | Locations containing combustible dusts
Class III | Locations containing fibers and flyings

Table 2. The breakdown of Classes into subgroups

Class | Subgroup | Atmospheres
Class I | Group A | Atmospheres containing acetylene
Class I | Group B | Atmospheres containing hydrogen and flammable process gases with more than 30 vol.% H2, or gases or vapors posing a similar risk level, such as butadiene and ethylene oxide
Class I | Group C | Atmospheres such as ether, ethylene or gases or vapors posing a similar risk
Class I | Group D | Atmospheres such as acetone, ammonia, benzene, butane, cyclopropane, ethanol, gasoline, hexane, methanol, methane, natural gas, naphtha, propane or gases or vapors posing a similar threat
Class II | Group E | Atmospheres containing combustible metal dusts, including aluminum, magnesium, and their commercial alloys, or other combustible dusts whose particle size, abrasiveness and conductivity present similar hazards in the use of electronic equipment
Class II | Group F | Atmospheres containing combustible carbonaceous dusts, including carbon black, charcoal, coal or coke dusts that have more than 8% total entrapped volatiles, or dusts that have been sensitized by other materials so that they present an explosion hazard
Class II | Group G | Atmospheres containing combustible dusts not included in Group E or Group F, including flour, grain, wood, plastic and chemicals
Determining hazardous areas in
a plant is normally performed by
experts from various disciplines. It
may be necessary for chemists, process technologists, and mechanical engineers to cooperate with an
explosion-protection expert in order
to evaluate all hazards. The possible
presence of a potentially explosive
atmosphere as well as its properties
and the duration of its occurrence
must be established.
Understanding terms such as minimum ignition energy (MIE), upper and lower explosive limits (UEL/LEL), flash point and ignition temperature will also provide clearer direction on how severe a hazardous area might be.
In any situation involving an explosive material, the risk of ignition
must be taken into account. In addition to the nominal rating of materials under consideration, parameters
related to the process involved are
especially important in the evaluation. For example, the risk of explosion may be caused by the evaporation of a liquid or by the presence of
liquid sprayed under high pressure.
It is also important to know which
atmospheric conditions are present
normally and abnormally. The range
of concentration between the explosion limits generally increases as
the pressure and temperature of the
mixture increases.
Divisions and zones
Once it has been determined that a
hazardous area exists, it now needs
to be classified. While the physical
principles of explosion protection
are the same worldwide and are
not differentiated, there are two distinct models to define your hazardous area — divisions and zones — both of which are accepted and utilized worldwide.
Table 3. The Division Method

Division | Class I (gases and vapors), in accordance with NEC 500.5 and CEC J18-004 | Class II (flammable dust or powder), in accordance with NEC 500.6 and CEC 18-008 | Class III (flammable fibers or suspended particles), in accordance with NEC 500.5 and CEC 18-010
Division 1 | Areas containing dangerous concentrations of flammable gases, vapors or mist continuously or occasionally under normal operating conditions | Areas containing dangerous concentrations of flammable dusts continuously or occasionally under normal operating conditions | Areas containing dangerous concentrations of flammable fibers or suspended particles continuously or occasionally under normal operating conditions
Division 2 | Areas probably not containing dangerous concentrations of flammable gases, vapors or mist under normal operating conditions | Areas probably not containing dangerous concentrations of flammable dusts under normal operating conditions | Areas probably not containing dangerous concentrations of flammable fibers or suspended particles under normal operating conditions
In rather simple terms, we can differentiate between the International
Electrotechnical Commission (IEC;
Geneva, Switzerland) (zones) and
the North American (division) procedures. The differences lie in the categorization of hazardous areas, the
design of apparatus, and the installation technology of electrical systems.
The categorization of these areas is
carried out in North America in accordance with the National Electrical
Code (NEC) NFPA 70, article 500.
The European Zone practice is described in IEC/EN 60079-10.
So how does each work? First let’s
start at the basics, and then we’ll
cover each individually.
Defining the area
Hazardous location or area classification methods specify the danger of fire or explosion hazards that
may exist due to flammable gases,
vapors, or liquids within a plant or
working environment. These are explained by defining the type of hazardous material present, severity of
the hazard, and probability of the
hazard. It may also depend on the
likelihood of the hazard, risk of an
explosion, and the boundaries of the
hazardous location.
This is usually determined by a
HAZOP (hazard and operability)
study and documented on a set
of electrical plot plans on record in
every plant.
For divisions, the type of material
is given by a class designation, as
shown in Table 1. These can be broken down further into sub-groups,
as shown in Table 2.
Once we have determined the hazardous material we are working with, the probability of an explosion and boundaries must also be taken into consideration.
FIGURE 3. Explosion-proof protection is based on the explosion-containment concept, whereby the enclosure is built to resist the excess pressure created by an internal explosion
FIGURE 4. In purging or pressurization protection, a dangerous air/gas mixture is not allowed to penetrate the enclosure containing the electrical parts that can generate sparks or dangerous temperatures
The division method
is divided into two areas: Division 1
and Division 2 (Table 3). These were
created in 1947 when the NEC first
recognized that different levels of risk
exist in hazardous locations. Figure 1
shows a typical example of a Class
I hazardous area utilizing Division
methods of area classification.
In comparison to the division-based area classification, which is
prevalent throughout North America,
the zone-based architecture prevails
in the rest of the world.
Zones are similar in nature to divisions, in that the type of hazardous material present, the severity of the hazard, and the probability and boundaries of the hazard must be determined.
Zones are in accordance with IEC/
EN 60079-10, which states that any
area in which there is a probability of
a flammable gas or dispersed dust
must be classified into one of the
areas shown in Table 4.
Similar to the division method of
area classification, zones can be
better rationalized by looking at the
example shown in Figure 2.
With a slightly different approach,
IEC 60079-0 requires apparatus to
be subdivided into two groups, as
shown in Table 5.
The groups indicate the types
of danger for which the apparatus
has been designed. Group I is intended for mines. Group II concerns
above-ground industries (electrical
apparatus for hazardous areas with
potentially explosive gas (dust) atmosphere except firedamp hazardous mining areas) and is subdivided
into II G (gases) and II D (dusts).
Similar to divisions, the zones
offer a sub material classification
as well. Table 6 shows how this
approach compares to the North
American equivalent.
Finally, when classifying your hazardous area, whether by divisions or zones, you must also consider the maximum surface temperature of the equipment that can go into the hazardous area. The maximum surface temperature must be below the minimum ignition temperature of the gas/dust present.
In North America as in Europe, six temperature classes are differentiated, T1 to T6. The classes T2, T3 and T4 are divided into further subclasses, as indicated in Table 7.
Table 4. Defining areas by Zones

Zone | Type of material
Zone 0 | An area in which an explosive air/gas mixture is continuously present or present for long periods of time
Zone 1 | An area in which an explosive air/gas mixture is likely to occur in normal operation
Zone 2 | An area in which an explosive air/gas mixture is unlikely to occur; but if it does, only for short periods of time
Zone 20 | An area in which a combustible dust cloud is part of the air permanently, over long periods of time or frequently
Zone 21 | An area in which a combustible dust cloud in air is likely to occur in normal operation
Zone 22 | An area in which a combustible dust cloud in air may occur briefly or during abnormal operation

Table 5. Apparatus Groups per IEC 60079-0

Group | Apparatus
Group I | Apparatus to be used in mines where the danger is represented by methane gas and coal dust
Group II | Apparatus to be used in surface industries where the danger is represented by gas and vapor, subdivided into three groups: A, B and C. These subdivisions are based on the maximum experimental safe gap (MESG) for an explosion-proof enclosure or the minimum ignition current (MIC) for intrinsically safe electrical apparatus
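To make Table 4 concrete, the short sketch below maps a material type and its presence frequency to a zone. It is a hypothetical helper written for illustration only; the function name and the presence-frequency labels are assumptions, not part of any standard or vendor tool.

```python
# A minimal sketch of the zone logic in Table 4 (IEC/EN 60079-10).
def classify_zone(material: str, presence: str) -> str:
    """Map a material type ('gas' or 'dust') and its presence
    frequency to the corresponding zone from Table 4."""
    zones = {
        ("gas", "continuous"): "Zone 0",
        ("gas", "likely in normal operation"): "Zone 1",
        ("gas", "unlikely, short periods only"): "Zone 2",
        ("dust", "continuous"): "Zone 20",
        ("dust", "likely in normal operation"): "Zone 21",
        ("dust", "unlikely, short periods only"): "Zone 22",
    }
    return zones[(material, presence)]

print(classify_zone("gas", "likely in normal operation"))  # Zone 1
```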
FIGURE 5. Intrinsic safety is based on the principle of preventing an effective source of ignition
In Europe, the apparatus are
certified on the basis of design
and construction characteristics.
From a practical point of view, the
two systems are equivalent, even
if there are minor differences, but
before you run out and choose the
most convenient method for you, it
is important that you consult your
local authority having jurisdiction to
learn what method is allowed or, in
fact, preferred.
The initial steps to determine
whether a hazardous area exists and
classify that area may seem rudimentary to some, but they are important
as they now open up the multiple
methods of protection, which may
or may not be allowed, depending
on whether you classified your area
by divisions or zones.
Protection methods
There are three basic methods of
protection — explosion containment,
segregation and prevention.
Explosion containment. This is the
only method that allows the explosion
to occur, but confines it to a well-defined area, thus avoiding the propagation to the surrounding atmosphere.
Flameproof and explosion-proof enclosures are based on this method.
Segregation. This method attempts
to physically separate or isolate the
electrical parts or hot surfaces from
the explosive mixture. This method
includes various techniques, such
as pressurization, encapsulation,
and so on.
Prevention. Prevention limits the
energy, both electrical and thermal,
to safe levels under both normal operation and fault conditions. Intrinsic
safety is the most representative
technique of this method.
Table 6. Sub material classification for Zones

Material | Apparatus classification, Europe (*IEC) | Apparatus classification, North America | Ignition energy
Methane | Group I (mining) | Class I, Group D | –
Acetylene | Group IIC | Class I, Group A | > 20 µJ
Hydrogen | Group IIC | Class I, Group B | > 20 µJ
Ethylene | Group IIB | Class I, Group C | > 60 µJ
Propane | Group IIA | Class I, Group D | > 180 µJ
Conductive dust (metal) | Group IIIC* | Class II, Group E | –
Non-conductive dust (carbon) | Group IIIB* | Class II, Group F | –
Cereal/flour | Group IIIB* | Class II, Group G | –
Fibers/suspended particles | Group IIIA* | Class III | –
My application requirements
Now the questions really start racing
in: Which should I use? Which one
offers the best protection? What if
all of my equipment is not low powered? My plant is already using a
technique; can I use another protection method? Can they co-exist?
Who makes that decision? Why
should I use one method over the
other? Can I use two methods at
the same time? So many questions,
all of which are very important, and
with a little understanding of your
process, they will guide you to the best method(s) to use.
Hazardous-area protection method
selection depends on three important factors: (1) area classification, (2)
the application and (3) the cost of the
protection method solution.
Area. Area classification depends on
the type of hazardous substances
used, operating temperature, and
explosion risk due to how often the
dangerous substance is present in
the atmosphere and the boundary
of the substance from various parts
of the process. Area classification
is determined by either the division
method or zone method.
Application. Application characteristics also affect which protection
method is used. For example, some
methods are more appropriate for
large equipment protection, while others are more appropriate for high-power applications.
Cost. Cost is also an important factor for many engineers. For example, if their application requires Division 2 protection, they may not want to purchase more expensive equipment rated for Division 1. For that reason, it is important to understand the interplay of all three factors — classification, application and cost — in helping users find the ideal solution to match their needs.
Table 7. Temperature classes

Tmax, °C | Tmax, °F | T class in N.A.*
450 | 842 | T1
300 | 572 | T2
280 | 536 | T2A
260 | 500 | T2B
230 | 446 | T2C
215 | 419 | T2D
200 | 392 | T3
180 | 356 | T3A
165 | 329 | T3B
160 | 320 | T3C
135 | 275 | T4
120 | 248 | T4A
100 | 212 | T5
85 | 185 | T6
*N.A. = North America
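Because the T-class check recurs in every classification exercise, a short worked sketch may help. It encodes Table 7 and returns the hottest permissible class for a given gas ignition temperature; the helper name is an assumption, and a real selection would also apply whatever margins the local authority having jurisdiction requires.

```python
# T-class limits in degrees C from Table 7, sorted hottest to coolest.
T_CLASSES = [(450, "T1"), (300, "T2"), (280, "T2A"), (260, "T2B"),
             (230, "T2C"), (215, "T2D"), (200, "T3"), (180, "T3A"),
             (165, "T3B"), (160, "T3C"), (135, "T4"), (120, "T4A"),
             (100, "T5"), (85, "T6")]

def required_t_class(gas_ignition_temp_c: float) -> str:
    """Return the least restrictive T class whose maximum surface
    temperature stays below the gas ignition temperature."""
    for t_max, t_class in T_CLASSES:
        if t_max < gas_ignition_temp_c:
            return t_class
    raise ValueError("No standard T class is cool enough")

print(required_t_class(305))  # T2: 300 C maximum surface temperature
```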
In addition to considering the normal functioning of the apparatus,
eventual malfunctioning of the apparatus due to faulty components must
be a consideration. And finally, all
those conditions that can accidentally occur, such as a short circuit,
open circuit, grounding and erroneous wiring of the connecting cables,
must be evaluated. The choice of a specific protection method depends on the degree of safety needed for the type of hazardous location, such that the probability of the simultaneous presence of an adequate energy source and a dangerous concentration of an air/gas mixture is as low as possible.
None of the protection methods
can provide absolute certainty of
preventing an explosion. Statistically, however, the probabilities are so low that not even one explosion has been verified where a standardized protection method was properly installed and maintained.
The first precaution is to avoid
placing electrical apparatus in hazardous locations. When designing
a plant or factory, this factor needs
to be considered. Only when there is
no alternative should this application
be allowed.
Choosing the best method
After carefully considering the above, we can look at three of the more popular methods of protection: XP (explosion proof/flameproof), purging and pressurization, and intrinsic safety.
Although these are the most commonly used methods in the division
area classification, there are many
other options when an area is classified using zones, but for now we will
concentrate on the above as they
are most commonly used.
XP. The explosion-proof protection
method is the only one based on the
explosion-containment concept. In
this case, the energy source is permitted to come in contact with the dangerous air/gas mixture. Consequently,
the explosion is allowed to take place,
but it must remain confined in an enclosure built to resist the excess pressure created by an internal explosion,
thus impeding the propagation to the
surrounding atmosphere.
The theory supporting this method
is that the resultant gas jet coming
from the enclosure is cooled rapidly through the enclosure’s heat
conduction and the expansion and
dilution of the hot gas in the colder
external atmosphere. This is only
possible if the enclosure openings
or interstices have sufficiently small
dimensions (Figure 3).
In North America, a flameproof enclosure (in accordance with IEC) is, as a rule, equated with the "flameproof" designation. In both cases, the housing must be designed for 1.5 times the explosion overpressure. The North American "explosion proof" (XP) version must withstand a maximum explosion overpressure of 4 times.
Furthermore, in North America,
the installation regulations (NEC 500)
specify the use of metal conduit for
the field wiring installation. It is also
assumed here that the air-gas mixture
can also be present within the conduit system. Therefore, the resulting
explosion pressures must be taken
into consideration. The conduit connections must be constructed according to specification and sealed
(that is, lead seals) with appropriate
casting compound. The housing is
not constructed gas-tight. Of course,
large openings are not permitted on
the enclosure, but small ones are inevitable at any junction point. Some
of these gaps may serve as pressure
relief points. Escaping hot gases are
cooled to the extent that they cannot
ignite the potentially explosive atmosphere outside the housing. Ignition
is prevented if the minimum temperature and minimum ignition energy
of the surrounding potentially explosive atmosphere is not reached. For
this reason, the maximum opening
allowed for a particular type of joint
depends on the nature of the explosive mixture and width of the adjoining surfaces (joint length).
The classification of a flameproof enclosure is based on the gas group and
the maximum surface temperature
which must be lower than the ignition
temperature of the gas present.
Purging or pressurization. Purging or pressurization is a protection
method based on the segregation
concept. This method does not allow
the dangerous air/gas mixture to penetrate the enclosure containing electrical parts that can generate sparks or
dangerous temperatures. A protective
gas — air or inert gas — is contained
inside the enclosure with a pressure
slightly greater than that of the external atmosphere (Figure 4).
The internal overpressure remains
constant with or without a continuous flow of the protective gas. The
enclosure must have a certain degree of tightness; however, there
are no particular mechanical requirements because the pressure supported is not very high.
To avoid pressure loss, the protective gas supply must be able to
compensate during operation for
enclosure leakage and access by
personnel where allowed (the use of
two interlocked doors is the classical
solution). Because it is possible for
the explosive atmosphere to remain
inside the enclosure after the pressurization system has been turned
off, it is necessary to expel the remaining gas by circulating a certain
quantity of protective gas before restarting the electrical equipment.
The classification of the electrical
apparatus must be based on the
maximum external surface temperature of the enclosure, or the maximum surface temperature of the internal circuits that are protected with
another protection method and that
remain powered even when the protective gas supply is interrupted.
The purging or pressurization
technique is not dependent upon the
classification of the gas. Rather, the
enclosure is maintained at a pressure
higher than the dangerous external
atmosphere, preventing the flammable mixture from coming in contact
with the electrical components and
hot surfaces inside.
In the U.S., the term “pressurization” is limited to Class II applications. This is the technique of supplying an enclosure with clean air or an
inert gas, with or without continuous
flow, at sufficient pressure to prevent
the entrance of combustible dusts.
Internationally, the term “pressurization” refers to a purging technique
for Zones 1 and 2.
The divisional model of the purging
protection method is based on the
reduction of the classification inside
the enclosure to a lower level. The
following three types of protection (X, Y and Z) are identified in relation to the hazardous-location classification and the nature of the apparatus (summarized in the sketch after this list).
• Type X: reduces the inside of the
enclosure from Division 1 to a nonhazardous state that requires an
automatic shutdown of the system
in case of pressure loss
• Type Y: reduces the inside of the
enclosure from Division 1 to Division 2
• Type Z: reduces the inside of the enclosure from Division 2 to a nonhazardous state, requiring alarm signals only
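For quick reference, the sketch below restates the three purge types as a lookup table. The dictionary structure is illustrative only; Type Y's required response to a loss of purge pressure is not specified above, so it is deliberately left empty here.

```python
# Compact restatement of the divisional purge types described above.
PURGE_TYPES = {
    "X": {"reduces": "Division 1 -> nonhazardous",
          "on_pressure_loss": "automatic shutdown of the system"},
    "Y": {"reduces": "Division 1 -> Division 2",
          "on_pressure_loss": None},  # not specified in the text above
    "Z": {"reduces": "Division 2 -> nonhazardous",
          "on_pressure_loss": "alarm signals only"},
}

def purge_summary(purge_type: str) -> str:
    entry = PURGE_TYPES[purge_type]
    return (f"Type {purge_type}: {entry['reduces']}; "
            f"on pressure loss: {entry['on_pressure_loss']}")

print(purge_summary("X"))
# Type X: Division 1 -> nonhazardous; on pressure loss: automatic shutdown of the system
```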
Intrinsic safety. Finally, intrinsic
safety is based on the principle of
preventing an effective source of ignition. The electrical energy is kept
below the minimum ignition energy
required for each hazardous area
(Figure 5).
The intrinsic safety level of an electrical circuit is achieved by limiting
current, voltage, power and temperature; therefore, intrinsic safety is
limited to circuits that have relatively
low levels of power. Of critical importance are the stored amounts of energy in circuits in the form of capacitance and inductance. These energy
storage elements must be limited
based on the voltage and current
levels present in a particular circuit
or make-break component.
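As a rough worked example, the stored energy can be estimated with the familiar formulas E = 1/2 CV^2 and E = 1/2 LI^2 and compared against the minimum ignition energies from Table 6. This is a simplification for illustration only; an actual intrinsic-safety assessment uses standardized spark-test curves and safety factors, not a bare energy comparison.

```python
def stored_energy_uj(c_farads=0.0, v_volts=0.0, l_henries=0.0, i_amps=0.0):
    """Worst-case stored energy in microjoules:
    E = 1/2*C*V^2 + 1/2*L*I^2."""
    return (0.5 * c_farads * v_volts**2 + 0.5 * l_henries * i_amps**2) * 1e6

# Minimum ignition energies from Table 6, in microjoules
MIE_UJ = {"Group IIC (acetylene, hydrogen)": 20,
          "Group IIB (ethylene)": 60,
          "Group IIA (propane)": 180}

# Example circuit: 100 nF at 24 V plus 1 mH carrying 100 mA (~33.8 uJ)
e = stored_energy_uj(c_farads=100e-9, v_volts=24, l_henries=1e-3, i_amps=0.1)
for group, mie in MIE_UJ.items():
    verdict = "below MIE" if e < mie else "exceeds MIE"
    print(f"{group}: {verdict} ({e:.1f} uJ vs {mie} uJ)")
```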
In normal operation and in the
event of a fault, no sparks or thermal
effects may occur that could lead to
the ignition of a potentially explosive
atmosphere. Intrinsically safe circuits
may therefore be connected and
disconnected by experts during operation (even when live), as they are
guaranteed to be safe in the event of
a short circuit or disconnection.
Intrinsic safety is the only ignition-protection class that allows connectors to be opened and intrinsically
safe apparatus to be removed and
replaced by an equivalent device in
a hazardous area. Because of the
level of freedom this brings, intrinsic
safety has become one of the most
important methods of protection in
the industrial automation industry.
Final remarks
Each method offers its own advantages and disadvantages, and in most
cases no one method will be or can
be the only method used in a process
plant. Generally, this mixed system
does not present installation difficulty
if each of the protection methods is
appropriately used and is in compliance with the respective standards.
No matter how you classify your
plant or which method of protection
you choose, it is always important
to remember that the method you
choose today may not necessarily
be the appropriate choice tomorrow.
Evaluate, choose and protect not
only to keep your plant safe, but to
keep your personnel safer.
Edited by Gerald Ondrey
Author
Robert Schosker is the product manager/team lead for intrinsic safety (IS), remote I/O, HART, signal conditioners, power supplies and surge protection at Pepperl+Fuchs Inc. (1600 Enterprise Parkway, Twinsburg, OH 44087; Phone: 330-425-3555; Fax: 330-425-4607; email: rschosker@us.pepperl-fuchs.com). Since joining the company in 1995, Schosker has been focused on technology and product-related support, and is involved in a wide range of activities and roles including certifications, sales and marketing. He has been the key lead in many IS and HART projects resulting in the development of new products for intrinsic safety and HART infrastructure. Schosker holds a B.S.E.E. from the University of Akron.
Cybersecurity Defense
for Industrial ProcessControl Systems
Security techniques widely used in information technology (IT) require special considerations to
be useful in operational settings. Here are several that should get closer attention
Mike Baldi
Honeywell Process Solutions
In Brief
Cyber threats and consequences
Defense in depth
Adapting to the needs of operational technology
Risk-analysis solutions
Next-generation firewalls
Endpoint protection
Looking to the future

Figure 1. Expansion of the Industrial Internet of Things (IIoT) and cloud storage offers benefits, but raises security concerns
Industrial cybersecurity risks are widely
appreciated. In April, the deputy director of the U.S. National Security Agency,
Rich Ledgett, warned that industrial
control systems (ICS) and other critical infrastructure assets remain vulnerable to
attack (Figure 1). Robust cyberdefense of
industrial facilities remains an ongoing challenge for the chemical process industries
(CPI). The convergence between the world
of information technology (IT) and the world
of operational technology, in which control systems for industrial facilities reside,
has brought tremendous benefits, along
with more complex security concerns. The
same convergence, however, has allowed
the industrial world to adopt cyberdefense
techniques that have been widely used in
IT. This article discusses several key cybersecurity IT tools that can help industrial
facilities establish a layered cybersecurity
system for its operations.
Cyber threats and consequences
The Stuxnet worm, a computer virus that infamously affected Iran’s nuclear centrifuges,
and the damage due to a cyberattack of a
German steel mill reported in 2014 are evidence that cyberattacks can have physical,
real-world impacts. But it is not necessary
to prompt an explosion to cause significant
disruption. A cyber attack on Ukraine’s electric power grid, and subsequent widespread
power failure last December, was evidence
of that.
As NSA’s Ledgett put it, “You don’t need
to cause physical harm to affect critical infrastructure assets.”
Cybersecurity risks are not easily addressed, however. One challenge is the
increasing sophistication of attacks. The
German government report on the steel
mill incident, for example, noted that the attackers demonstrated not only expertise in
conventional IT security, “but also detailed
technical knowledge of the industrial control
systems and production processes used in
the plant.”
Moreover, once the tools and knowledge
to enable such attacks are developed, they
are often quickly commoditized and shared,
allowing others with fewer technical skills to
use them.
Another challenge, however, is simply the
increasing vulnerabilities introduced by the
growth of intelligent, connected devices in industrial control systems. As Chris Hankin, director of the Institute for Security Science and
Technology (ISST) at Imperial College, London (www.imperial.ac.uk/security-institute),
remarked recently: “Almost every component
of such systems now has fully functional computing capability and most of the connections
will now be Ethernet, Wi-Fi or will be using
Internet protocol.”
The growth of the Internet of Things — and the Industrial Internet of Things (IIoT) in particular — is adding to
both the number of devices and their connectivity. Today, the IT research and advisory
company Gartner Inc. (Stamford, Conn.;
www.gartner.com) estimates 6.4 billion connected devices are in use worldwide. By
2020, it forecasts, that total will reach 20.8
billion. Moreover, heavy industries such as
utilities, oil and gas, and manufacturing are
among the leading users. Each device and
connection expands the possible attack surface for cyberattacks.
Closely connected to the increasing number of connected devices is the growth of
the network of remote computer servers
casually known as the “Cloud,” which provides access to infinitely scalable computing
power and storage. The Cloud provides an
opportunity to store and process the large
volumes of data resulting from the proliferation of connected devices, such as with the
IIoT. Again, however, it introduces new connection and communication channels that
would-be cyberattackers will try to exploit.
Figure 2. A layered approach to cybersecurity, with several types of different cyberdefenses, should be the objective of industrial control systems
Defense in depth
In fact, the security issues related to the IIoT
and Cloud storage result from the longer-term challenges surrounding the convergence between the IT and operational technology (OT) worlds. Open platforms and the
proliferation of third-party and open-source
software in industrial control systems has
long brought the power and efficiencies from
the enterprise side of the business to the
process side. But along with those benefits,
the convergence also brings associated security concerns.
To complicate matters, while the vulnerabilities on both sides — enterprise and operations — may be similar, the solutions are
often not directly transferable. The priorities
of each are necessarily different: while confidentiality can be prioritized in the enterprise;
availability and integrity must, for the most
part, take priority on the OT side. In practice, a security solution cannot be allowed to shut down operator access to data or devices that are essential to the safe running of
the plant, even if the security of those data is
at risk of being compromised.
ISST’s Hankin acknowledged this reality in
his speech: “While there has been a convergence between the two worlds [IT and OT],
particularly in the past five years, there are
major differences, such as the fact the industrial control systems (ICS) tend to have
to operate in a time-critical way; they have to
Figure 3. Risk analysis enables the prioritization of cybersecurity risks so that limited resources can be applied intelligently
operate around the clock; and edge clients,
such as sensors and actuators, are becoming much more important” (Figure 2).
In essence, the options for ensuring security are more limited in the OT world. This is
partly why the concept of “defense in depth”
is so important to industrial security: without
the option of configuring protection mechanisms to potentially inhibit system availability, it is even clearer in an OT setting that no
single security solution can provide complete
protection. A layered approach that employs
several different defenses is the better goal.
Such an approach means that if (or rather,
when) one layer fails or is bypassed, another
may block the attack. Defense in depth
makes it more difficult to virtually break into
a system, and, if it includes active monitoring
and a good incident-response plan, promotes quicker detection and responses that
minimize the impact where an attack does
breach security.
This also means that — perhaps even
more so than in the IT world — security in an
operational setting cannot rely solely on software. As in all operations, success is only
achieved through a combination of people,
processes and technology.
Adapting to the needs of OT
Notwithstanding these points, though, security developments in the IT world do prove
valuable to operations. Provided the priorities of OT users are accommodated, and the
solutions are implemented in an appropriate
framework, recent IT developments offer significant potential to boost security in the OT
world of industrial facilities.
Four recent technologies, in particular, are
worth looking at in more detail:
• Risk-analysis technologies that enable
plants to prioritize investments in cybersecurity
• Next-generation firewalls, which can bring
about radical improvements in network
protection
• Application whitelisting and device control
to protect individual end nodes
• Advanced analytics, focused on using
“big data” to detect and predict
cyberattacks
The first three are already seeing significant uptake, and accompanying security
benefits, among industrial users. The last offers a glimpse at how industrial cybersecurity
is likely to continue to develop in the future,
based on IT trends. It also demonstrates
how the increasing connectivity and elastic
computing power embodied by the IIoT and
the Cloud can contribute to the security challenges they have done so much to highlight.
Risk analysis solutions
A key value of risk analysis is that it recognizes that resources are finite. Plant owners
face numerous choices about where and
how to apply security controls and solutions.
Risk analysis techniques provide a way to
quantify, and therefore prioritize, cybersecurity risks, to ensure that limited resources are
applied effectively and efficiently to mitigate
those that are most severe.
That quantification is aided by the existence of standard definitions of risk from
bodies such as the International Organization for Standardization (ISO; Geneva, Switzerland; www.iso.org) and the National Institute of Standards and Technology (NIST;
Gaithersburg, Md.; www.nist.gov). The former defines risk as “the potential that a given
threat will exploit vulnerabilities of an asset
or group of assets, and thereby cause harm
to the organization.” The latter characterizes
risk as “a function of the likelihood of a given
threat source’s exercising a particular potential vulnerability, and the resulting impact of
that adverse event on the organization.”
Cybersecurity risk is therefore a function
of vulnerabilities, threats and potential consequences of a successful compromise. By
accepting this as a definition, risk can be
quantified and prioritized.
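A minimal sketch of what such a quantification might look like is shown below. The risk formula (likelihood times impact) and all of the device names and numbers are illustrative assumptions, not values taken from ISO, NIST or any product.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    device: str
    likelihood: float  # 0..1: chance the threat exploits the vulnerability
    impact: float      # 1..10: consequence to the plant or business

    @property
    def risk(self) -> float:
        # One simple reading of the NIST definition: likelihood x impact
        return self.likelihood * self.impact

findings = [
    Finding("historian server", likelihood=0.6, impact=4),
    Finding("safety controller", likelihood=0.1, impact=10),
    Finding("operator station", likelihood=0.4, impact=7),
]

# Prioritize mitigation by descending risk score
for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"{f.device}: risk = {f.risk:.1f}")
```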
In practice, vulnerabilities will always exist
— whether in the form of a software bug
or due to weak passwords or poor system
configuration. They cannot be entirely eliminated. Threats, meanwhile, constantly vary,
and will be driven not just by the availability
of malicious software or technical knowledge, but also by the motivation and means
of potential attackers. The consequences of
exploiting a specific threat have to be calculated into a relative risk score for each vulnerability (Figure 3). Owner-operators of industrial control systems can then determine
what level of risk to mitigate, and which
risks they are willing to accept — their
risk appetite.
Since vulnerabilities and threats continually evolve and expand (with 200,000
new variants of malware identified every
day, for example), the process must be
continuous. Automating the risk-analysis process brings significant benefits to
the security of a plant.
Risk-analysis software does so, and
enables users to monitor networks and
system devices in realtime (Figure 4).
By consolidating complex site-wide
data, risk-analysis software significantly
improves the ability to detect threats
and identify vulnerabilities. Perhaps
more importantly, by calculating the
risk for each device in realtime, it enables prioritization of risks by their potential impact to the plant or business.
It also provides a realtime update when
the risks change due to new threats
or vulnerabilities to the system. Combined with well-configured alerts, users
can assign resources more efficiently,
and respond more effectively and more
quickly to risks.
In the IT world, risk-analysis and
risk-management solutions have seen
widespread uptake, but there are difficulties in simply transposing these to
an industrial setting. First, the requirements and competencies of the users
— control engineers and operators,
as opposed to IT staff — are different.
An OT risk-analysis tool must present
results that are meaningful to non-security specialists who operate the ICS
around the clock.
Second, allowance has to be made
for the OT environment. Many traditional
vulnerability assessment (VA) tools used
in enterprise systems may be unsuitable
(and possibly unsafe) when applied to
network activity in an ICS.
This is because they probe aggressively to test for vulnerabilities, launching a variety of network packets directed at every possible port on an
end node. The responses are used to
determine the state of each port, and
whether the protocols are actively supported. A database of known vulnerabilities is then used to match the responses, and then further scanning of
the device is attempted.
There are two key problem areas with
this technique.
• Non-standard network traffic into
poorly managed ports can cause unintended consequences — including
locking up a communications port,
tying up resources on the end node,
or even hanging up an entire end
node. This type of probing can reveal
weaknesses in the configuration or
programming of applications that result in unintended consequences
• Network scanning can increase the
load on an end node to an unmanageable level, resulting in a denial
of service (with the node unable to
complete normal operation), or even
a node crash. To avoid this vulnerability, scanners must be “throttled”
properly to protect both the end
nodes as well as the network latency
and bandwidth
An IT VA tool may therefore introduce
risks to the safe operation of an ICS, as
much as it may identify them.
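To illustrate what "throttling" means in practice, here is a toy, rate-limited port check. It is only a sketch of the pacing idea under stated assumptions (hypothetical address and port list): a qualified OT-aware scanner does far more, and even gentle probing should never be run against a production ICS without vendor guidance.

```python
import socket
import time

def throttled_port_check(host, ports, delay_s=1.0, timeout_s=0.5):
    """Probe one port at a time with a fixed pause between attempts,
    so a fragile end node is never flooded with traffic."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout_s)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
        time.sleep(delay_s)  # pacing protects the node and the network
    return open_ports

# Example (hypothetical lab address and common ICS ports):
# open_ports = throttled_port_check("192.168.1.50", [102, 502, 44818])
```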
Essentially, realtime risk analysis in an
OT environment must be tailored to ensure that it never interferes with normal
plant operation or control. It must also
provide realtime, actionable information
that can be used by operators, security
administrators and business leaders.
VA tools tailored to the ICS environment are now becoming available, and
are seeing good uptake. With the scale
of the cybersecurity challenge continually growing, they are likely to become
an increasingly important tool in helping
operators focus and tailor their cybersecurity strategies.
Figure 4. By compiling complex, site-wide data, risk-management software can improve the ability of plants to detect threats and identify vulnerabilities
Next-generation firewalls
In IT systems, firewalls are among the
most widely used cybersecurity measures. While antivirus software protects
the end nodes, the firewall monitors and
controls network traffic based on configured security rules to detect and prevent
network-based cyberattacks. For most businesses,
cybersecurity strategy.
Next-generation firewalls (NGFWs) significantly enhance the protection capabilities
of these systems. In addition to traditional
network protection which restricts access
to a particular port or address, NGFWs include deep packet inspection of network
traffic in realtime.
Increased analysis of the content of network traffic (not just the source and destination addresses) facilitates a range of additional defenses:
• Application profiling — tracking application behavior to raise alerts or interrupt
communications displaying abnormal
behavior, or patterns associated with
known malware
• Protocol support — including, in industrial NGFWs, most industrial control-system protocols, such as Modbus, DNP3, OPC and HART. This allows the NGFW to be configured to restrict protocols to only specific functions, such as restricting the ability of applications using Modbus to write to certain registers, or restricting all write commands coming into the ICS (see the sketch after this list)
• Potential to interface with the ICS domain
controller to identify the user associated
with specific application traffic on the
plant control network and to block unauthorized users
• Advanced threat detection (on high-end
NGFW), based on network traffic patterns, and signatures of known malware
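As a concrete illustration of the protocol-support bullet above, the toy filter below inspects a Modbus/TCP frame and drops inbound write commands. It is not a real NGFW rule syntax, just a sketch of the deep-packet-inspection idea: function codes 5, 6, 15 and 16 are the common Modbus write codes, and the frame layout follows the standard MBAP header, with the function code at byte 7.

```python
# Toy deep-packet-inspection rule: block Modbus/TCP writes into the ICS.
MODBUS_WRITE_CODES = {5, 6, 15, 16}  # write coil(s) / write register(s)

def allow_modbus_packet(payload: bytes, inbound_to_ics: bool) -> bool:
    """In Modbus/TCP, the 7-byte MBAP header is followed by the
    function code, so payload[7] identifies the requested operation."""
    if len(payload) < 8:
        return False  # malformed frame: drop it
    function_code = payload[7]
    if inbound_to_ics and function_code in MODBUS_WRITE_CODES:
        return False  # block all write commands coming into the ICS
    return True

# Function code 6 (write single register) arriving from outside:
frame = bytes([0, 1, 0, 0, 0, 6, 17, 6, 0, 10, 0, 99])
print(allow_modbus_packet(frame, inbound_to_ics=True))  # False
```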
The potential benefits of NGFWs may even
be greater in an OT than IT setting. Network
traffic in the OT environment is typically
more “predictable,” with most communication channels clearly defined. That makes it
possible in many cases to more tightly lock
down communications traffic on an ICS —
and easier to determine deviations from normal network traffic patterns.
Again, there are significant challenges,
though: an NGFW can decode some, but
not all, encrypted traffic, for example. ICS
owners also need to coordinate the NGFW
selection with their process control vendors
to ensure the correct configuration and to ensure that network performance and traffic latency are not affected during critical operations.
However, the potential rewards make this
worthwhile. An NGFW not only provides
tighter control of network traffic, but more
intelligent control: it is as much about letting
desirable traffic through as detecting and
blocking threats.
More sophisticated control gives
plant operators not only increased protection,
but also the confidence to allow connections
they would otherwise feel forced to block: to
enable and control access for an increasing
range of applications; to facilitate authorized
personnel using mobile devices; and to promote collaboration across the enterprise with
controlled access to realtime data.
End-point protection
Application whitelisting (AWL) is another staple in traditional cybersecurity approaches. It
protects individual end nodes by restricting
the files that can be executed to only those
specifically authorized to run.
Its value is well recognized. Whitelisting ranks first among the top four strategies listed by the Australian government intelligence agency, the Signals Directorate, and
last October, NIST published a guide to
whitelisting for businesses.
As the NIST guide notes, the power of
application whitelisting comes from its prescriptiveness: “Unlike security technologies,
such as antivirus software, which block
known bad activity and permit all other, application whitelisting technologies are designed to permit known good activity and
block all other.”
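The core of the idea fits in a few lines: a default-deny check against a list of known-good file hashes. The sketch below is illustrative only (the hash value and file names are placeholders); commercial AWL products hook the operating system so that an equivalent check runs before every file execution.

```python
import hashlib

# Approved executables, identified by file hash (placeholder value).
WHITELIST = {
    "3f786850e387550fdab836ed7e6dc881de23001b",  # e.g., approved hmi.exe
}

def may_execute(path: str) -> bool:
    """Permit execution only if the file's hash is on the whitelist;
    anything unknown is blocked by default."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    return digest in WHITELIST

# if not may_execute("C:/apps/new_tool.exe"): block it and log the attempt
```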
Added to this, whitelisting avoids some of
the maintenance required for technologies
like antivirus software or intrusion prevention/detection systems (IPS or IDS). Such
“blacklisting” technologies require frequent
updates to the “known bad” signatures; DAT
files (binary data files with .dat filenames) for
antivirus solutions are updated daily with
new “known malware” signatures. More sophisticated malware, meanwhile, is being
designed to evade detection by signature-based security protections.
Application whitelisting therefore represents a strong additional line of defense
against malware that is designed to add new
software or modify existing software on an
end-node. It can also offer some protection
for obsolete operating systems no longer
supported by new security patches (such as
Windows Server 2003 and Windows XP operating systems).
There are challenges for an ICS, however. Whitelisting takes time to set up and
configure in all systems. The difficulty lies in
ensuring that all applications that need to
be run on a particular node are enabled (or
not blocked). In an ICS, the risks of blocking
or impacting normal operations are often
greater, however. If improperly configured, a
whitelisting solution can prevent normal operations, causing operators to lose visibility
or control of the plant. It must therefore be
tightly integrated into the control system
operation, because it is active before every
file execution on the system.
To minimize the risk, the AWL solution
should be fully qualified by the ICS vendor
or end user before use. Most solutions also
offer various operation modes: monitoring
or observation, in which users can monitor
unauthorized file execution without blocking
any operations; “self approval” — in which
message pop-ups enable users to override
any blocked executable; and full implementation in which whitelisting policies are fully
executed and enforced. The last should
only be used after the site has validated the
whitelisting configuration against all normal
plant usage scenarios.
Where this is done, however, whitelisting
has proven an effective and safe solution in
industrial settings, bringing cybersecurity benefits similar to those realized in the IT world. In addition to managing executable files, whitelisting solutions increasingly
offer a wide range of functionality:
• Managing USB (universal serial bus) and
removable storage devices, allowing
users to restrict USB device usage by
vendor, serial number or function (restricting to read-only, for example)
• Extending device management capability to control wireless, Bluetooth and all
plug-and-play devices on the system
• Protecting access to the local registry
• Managing access to non-executable files
• Protecting against malicious behavior
of programs in memory (such as buffer
overflows)
• Controlling execution of scripts or ActiveX
controls
• Executing files with reputation-based
decisions
• Tracking processes changing files on the
system
Like NGFWs, application whitelisting is a
mature technology and integral part of most
IT cybersecurity strategies. Increasingly, the
same is becoming true in the OT space.
Looking to the future
Advanced analytics, by contrast, remains
resolutely immature in the industrial environment. It is, however, an important emerging
technology that once again offers significant
potential for OT systems.
While the value of risk analysis is that it
recognizes resources for cybersecurity are
finite, the value of advanced analytics is that
it accepts that complete security is unachievable. With the threat landscape constantly
evolving, it is impossible to completely mitigate all threats to the ICS.
Those that have the potential to do the
most harm will be those threats of which
organizations remain unaware. The faster
plants can detect malicious actors on the
system or network, the faster they can address them and minimize the damage.
Advanced analytics uses big data tools
to monitor and analyze a whole range of
information sources, from email and social
media, to network flows and third-party
threat feeds. With this information, it can
identify abnormal patterns that indicate attacks or intrusions. Not only can advanced
analytic techniques detect recognized
threats, but they can also
predict new, emerging dangers. Such systems, for example, can automatically notify
users of a cyberattack occurring on a related system elsewhere in the world — in
realtime — enabling them to take precautions to protect their own sites.
While advanced analytics are increasingly
important in cybersecurity, there is little uptake to date in the OT world. That, however,
is likely to change — as it has with other key
technologies in the IT realm. Convergence
between IT and OT means the challenges
facing the two are often similar. As long as
industrial users pay due regard to the distinctive requirements of process control systems, there is no reason the solutions for OT
cannot draw on the lessons that have been
learned. In time, it may have insights to share
with IT as well.
Edited by Scott Jenkins
Author
Mike Baldi is a cybersecurity solutions architect at Honeywell Process Solutions (1860 West Rose Garden Lane, Phoenix, AZ 85027; Email: mike.baldi@honeywell.com; Phone: 602-293-1549). Baldi has worked for Honeywell for over 36 years. He led a team providing technical support for industrial process-control systems and advanced applications, and was the lead systems engineer for HPS system test. Baldi joined the HPS Global Architect team in 2009, and became the chief cybersecurity architect for HPS and the lead architect for the HPS Cyber Security Center of Excellence. He led the design-for-security initiative, integrating security into HPS products and the HPS culture. He was also the primary focal point for HPS product and customer security issues, and for HPS product security certifications and compliance. Baldi recently moved to the Honeywell Industrial Cyber Security organization as a cybersecurity solutions architect. Baldi holds a B.S. degree in computer science, an MBA degree in technology management, and is CISSP certified.
Editor's note: For more information on cybersecurity in the CPI, visit our website (www.chemengonline.com) and see articles by Andrew Ginter (Chem. Eng., July 2013) and Eric C. Cosman (Chem. Eng., June 2014).
Plant Functional Safety
Requires IT Security
Cybersecurity is critical for plant safety. Principles developed for plant safety can be applied to
the security of IT systems
Peter Sieber
HIMA Paul Hildebrandt GmbH
In Brief
Safety and security standards
What requires protection?
Applying safety principles to security
Integrating BPCS and SIS
IT security and safety recommendations
When the Stuxnet computer
worm attacked programmable
logic controllers (PLCs) at Iranian nuclear facilities running
an integrated system, centrifuges were
commanded to literally rip themselves
apart. This clear demonstration of the link
between cybersecurity and safe industrial
operations was a worldwide wakeup call for
plant managers, IT and automation managers, safety engineers and many others.
Of course, smaller-scale attacks are much
more likely, and they are happening. At one
plant, where system maintenance was carried out remotely, a cyber attack from abroad
revealed the vulnerability of using simple
username/password authentication for remote access. The attack was discovered
only after the data transmission volume exceeded the company’s data plan.
Cyber-related safety risks do not necessarily result from criminal activity. During the
commissioning of one plant, for example,
the failure of engineering software during
the recompiling of the memory mapped
input (MMI) following a plant shutdown led
to a situation in which an incorrect modification was loaded into an integrated safety
controller, and then activated.
These incidents demonstrate the need for
specific IT security improvements, and at the
same time, raise broader questions about
Figure 1. Under a model put forth under IEC standard 61511, an industrial process is surrounded by a series of risk-reduction layers that act together to lower risk
the relationship between cybersecurity and
plant safety:
1. Can the “insecurity” of integrated control
systems influence the functional safety of
a plant?
2. What needs to be protected?
3. Can the principles developed for functional
safety be applied to security?
This article considers these questions and
includes operational examples and specific
recommendations for improving security and
safety at industrial facilities.
Safety and security standards
The International Electrotechnical Commission (IEC; Geneva, Switzerland; www.iec.ch)
standard IEC 61508 is the international standard of rules for functional safety of electrical, electronic and programmable electronic
safety-related systems. According to IEC
61508, functional safety is “part of the overall
safety that depends on functional and physical units operating correctly in response to
their inputs.”
By this definition, the answer to the first
question posed earlier — Can the “insecurity” of integrated control systems influence
the functional safety of a plant? — has to be
“yes.” In the examples cited above, vulnerabilities to people and facilities were introduced. Clearly, functional safety was compromised, and while security breaches may
not have led to deaths or injuries, there is
no evidence to suggest that such a situation
could not occur in the future.
Even ruling out malicious threats, the fact
remains that IT security-based vulnerabilities
can be found in all kinds of automation systems. This includes the safety-related system itself and the distributed control system
(DCS), of which the safety system may be a
part. This is one reason why so many safety
experts call not only for the physical separation of safety instrumented system (SIS) and
DCS components, but also for different engineering staffs or vendors to be responsible
for each.
To answer the other questions, we need
to highlight two other standards. One is the
international standard IEC 61511 for SIS
in the process industries. Whether independent or integrated into an overall basic
process control system (BPCS), the SIS is a
fundamental component of every industrial
process facility.
In this model, the industrial process is surrounded by different risk-reduction layers,
which collectively lower the risk to an acceptable level (Figure 1). The risk reduction
claim for the safety layer is set by the safety
integrity level (SIL).
The first line of protection for any plant is
the control and monitoring layer, which includes the BPCS. By successfully carrying
out its dedicated function, the BPCS reduces
the risk of an unwanted event occurring.
Typically, IEC 61511 stipulates that the risk
reduction claim of a BPCS must be larger
than 1 and smaller than 10. A risk-reduction
capability of 10 corresponds to SIL 1.
The cyberattack and IT vulnerability prevention layer includes the SIS. The hardware
and software in this level perform individual
safety instrumented functions (SIFs). During
the risk and hazard analyses carried out as
part of the basic design process of every
plant, the risk-reduction factor to be achieved
by the protection layer is determined.
In most critical industrial processes, the
SIS must be rated SIL 3, indicating a riskreduction factor of 1,000, to bring the overall
risk to an acceptable level.
At the mitigation layer, technical systems
are allocated, allowing mitigation of damages in case the inner layers of protection
fail. In many cases, mitigation systems are
not encountered as being part of the safety
system, as they are only activated after an
event (that should have been prevented)
happens. However, in cases where the mitigation system is credited as part of defining
additional measures, it may be covered by
the safety evaluation as well.
Now consider the IEC standard for cybersecurity. IEC 62443 covers the security techniques necessary to stop cyberattacks involving networks and systems at industrial facilities.
What requires protection?
According to the most recent version of IEC 61511, the answer to the
question of what needs to be protected is that both norms and physical structures need to be protected.
As for norms, the standard calls for
the following:
• SIS security risk assessment
• Making the SIS sufficiently resilient
against identified security risks
• Securing the performance of the
SIS system, as well as diagnostic
and fault handling, protection from
unwanted program alterations,
data for troubleshooting the SIF,
and bypass restrictions so that
alarms and manual shutdown are
not disabled
• Enabling/disabling of read/write
access via a sufficiently secure
method
• Segregation of the SIS and BPCS
networks
As for the structural requirements,
IEC 61511 instructs operators to
conduct an assessment of their SIS
related to the following:
• Independence between protection
layers
• Diversity of protection layers
• Physical separation between different protection layers
• Identification of common-cause
failures between protection layers
One other IEC 61511 note has
particular bearing on the issue of
cybersecurity and plant safety. The
standard states: “Wherever practicable, the SIF should be physically
separated from the non-SIF.” Also,
the standard demands that countermeasures be taken for foreseeable threats.
Applying safety principles
The IEC 61511 (safety) and IEC
62443 (security) standards coincide
on the demand for independent layers of protection. Together, these
standards prescribe:
• Independence between control
systems and safety systems
• Reduction of systematic errors
• Separation of technical and management responsibility
• Reducing common-cause errors
The standards also reinforce that
anything and everything within the
Chemical Engineering
system is only as strong as its weakest link. When using embedded
safety systems, all hardware and
software that could impair the safety function should be treated as being part of the safety function.
IEC 61511 requires different, independent layers of protection.
Unifying two layers of protection will require a new risk-reduction evaluation to prove that the required overall risk reduction is still achieved when the two protection layers are combined.
Integrating BPCS and SIS
As an illustrative example, assume
that a risk analysis of a given process has led to the conclusion that a
SIL-3-compliant SIS is required. The
traditional approach implies that a
risk reduction of greater than 1,000
and less than 10,000 will be achieved.
The risk reduction is partly covered by
the BPCS (up to 10, as per IEC 61511)
and by the SIS (1,000 in a SIL-3-compliant solution).
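The arithmetic behind these claims is easy to check. Below is a minimal sketch in Python, assuming the usual IEC 61511 convention that each SIL step spans one order of magnitude of risk reduction; the function name and the (low, high] boundary convention are illustrative choices, not from any standard or vendor API.

```python
# A minimal sketch, assuming each SIL step corresponds to one order of
# magnitude of risk reduction. The function name and the (low, high]
# boundary convention are illustrative choices.

def sil_for_rrf(rrf: float) -> str:
    """Return the SIL band covering a given risk-reduction factor (RRF)."""
    bands = [(10, 100, "SIL 1"), (100, 1_000, "SIL 2"),
             (1_000, 10_000, "SIL 3"), (10_000, 100_000, "SIL 4")]
    for low, high, sil in bands:
        if low < rrf <= high:
            return sil
    return "outside defined SIL bands"

bpcs_rrf = 10      # maximum BPCS claim per IEC 61511
sis_rrf = 2_000    # a SIL-3-compliant SIS (RRF between 1,000 and 10,000)

print(sil_for_rrf(sis_rrf))    # -> SIL 3
# Independent protection layers multiply their risk-reduction factors
print(bpcs_rrf * sis_rrf)      # combined claim -> 20000
```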
In the integrated solution, there will be
common components for the BPCS and
SIS. Depending on the individual setup,
this will be either the central processing
unit (CPU), input-output (I/O) buses or
(parts of) the solution software (for example, the operating system) and symbol libraries.
The argument could be made that different components (of the same make)
may be used for the SIS and BPCS.
However, if common elements (such as
operating systems and buses) are used,
the systematic capabilities of such components may need to comply with the
requirements mentioned above.
It should also be kept in mind that using components such as CPUs with freely configurable software on board – and using these same components for different tasks – may not be considered sufficient leveraging of the integrity level of the solution.
These commonly used components, in order to comply with the initial risk-reduction requirements, will need to maintain a risk reduction of greater than 1,000 but less than 10,000. Practically, this means SIL 4, which is currently an unachievable level.
Engineering’s key role in security
The quality of engineering processes, tools and associated services may be even more important to overall safety results than BPCS and SIS hardware. Proper engineering includes the following aspects:
• Reducing complexity by splitting tasks into independent modules
• Properly defining and verifying interfaces
• Testing each module intensively
• Maintaining the “four-eyes” principle when reviewing engineering documents and results of implementation tasks, according to IEC 61508-1, paragraph 8.2.18
Application of this strategy requires engaging the various parties to make sure that potential deficiencies in each task are identified and corrected. While integrated tools can support the effectiveness of engineering processes, addressing aspects like common-cause failures requires first narrowing integration to a sustainable level. This helps maintain both efficient engineering processes and functional safety at the required level.
The previous comments about BPCS and SIS independence and diversity also apply to engineering tools. A potential hidden failure of the engineering tool may impair the desired reduction in overall risk.
There are two types of integrated solutions: those that have a common configuration database for the SIS and BPCS, and those that have independent databases for the SIS and BPCS but use the same data-access mechanisms. Both solutions have the disadvantage of a common cause for potential failures, which could affect both the BPCS and SIS.
The engineering tool for safety systems should overcome these issues by
remaining independent (to the greatest
extent reasonably possible) from the
hardware and software environment.
This is accomplished by having the complete functionality of the safety engineering tool, running in a Windows software
environment, implemented in a way that
allows it to be independent from Windows functions. This concept allows
maximum protection from errors and
creates a trusted set of engineering data
that can be used to program the SIS.
Nevertheless, the engineering tool
should allow integrated engineering by
maintaining interfaces that permit automated transfer of configuration data
(tag-oriented information as well as logic-oriented data) from third-party systems into the trusted set of engineering
data used for programming the SIS.
Furthermore, having the same engineers in charge of programming the DCS
and safety system ignores the proven
benefits of the checks and balances of
independent thinking. For this reason,
IEC 61508 sets recommendations
for the degree of independence of parties involved in design, implementation
and verification of the SIS.
IT security recommendations
Cybersecurity and plant safety are so
intertwined in the connected world of industrial processes that an equal commitment to both is required to achieve the
needed protection. Following the recommended international standards for functional safety for PLCs (IEC 61508), safety
instrumented systems (IEC 61511) and
cybersecurity (IEC 62443) provides a
path to a safe, secure facility.
For the most robust security and
reduced safety risks, the author advocates the traditional approach of
standalone SIS and BPCS units — ideally from different vendors — versus an
integrated BPCS/safety system from
the same vendor.
For valid security and safety reasons,
it is also good practice for companies to
consider an independent safety system
built on a proprietary operating system.
Of course, such a system can and should
be completely compatible with DCS
products. Additionally, it should feature
easy-to-use engineering tools with fully
integrated configuration and programming and diagnostic capabilities.
Applying these recommendations and
adhering to international standards for
separate BPCS and SIS systems help
plant operators meet their obligation to
protect people, communities, the environment and their own financial security.
The good news is that hardware, software and expertise are available today
to help operators meet their obligations
for the full lifecycle of their plants.
n
Edited by Scott Jenkins
Author
Peter Sieber is vice president for
global sales and regional development
for HIMA Paul Hildebrandt GmbH (Albert-Bassermann-Strasse 28, 68782
Bruehl, Germany, Phone +49-6202
709-0, p.sieber@hima.com), a leading
specialist in safety automation systems.
Sieber is participating in the ongoing
effort by the steering committees working on functional safety and IT security
standards, IEC 61508 and IEC 62443, respectively. He has
been actively involved in the development of the definition of
both functional safety guidelines and IT security guidelines
for process automation applications.
Solids Processing
Dilute-phase Pneumatic Conveying:
Instrumentation and
Conveying Velocity
Follow these guidelines
to design a well-instrumented
and controlled system, and to
optimize its conveying velocity
Amrit Agarwal
Consulting Engineer
Dilute-phase pneumatic conveying systems must be
operated in a certain sequence and have sufficient
instrumentation and operating controls to assure reliable operation
and prevent problems. This article
discusses two subjects that are important for successful dilute-phase
conveying. Discussed below are design guidelines for instrumentation and controls that can prevent
operating problems, such as pipeline plugging, downtime, equipment
failure, high power consumption,
product contamination and more.
The article also provides a simple
methodology for finding out if the
presently used conveying velocity is too low or too high, and for
making the required changes in
this velocity.
The required instrumentation
depends on the degree of automation that is necessary, and whether
the system is to be controlled locally or remotely. When manual
control of the conveying system
is used, problems can arise, especially if the operators do not have a
thorough understanding of the design and of the required operating
method of the conveying system, or
if they do not pay close attention to
day-to-day operation of the system.
For conveying systems — where
even a single error can result in a
large financial loss — a well-instrumented and automated control system is highly recommended.
FIGURE 1. This figure is a schematic flow diagram of the conveying system with run and position lights to show the operating condition of each component of the system
Process logic description
Feeding solids into a conveying line
that does not have an airflow with
sufficiently high conveying velocity
will result in plugging of the line.
To prevent this, solids must be fed
into the conveying line only after
the required airflow has been fully
established. This requirement is
met by allowing the solids feeder
to start only after the blower has
been running for at least five
minutes. To do this, the rotary-valve motor should be interlocked
with the blower motor so that the
blower motor has run for five minutes before the rotary-valve motor
can start.
When the conveying system is
running, the rotary-valve motor
must stop immediately in the event
that the blower motor stops for any
reason. If the rotary valve is not
stopped, solids feed will continue
and will plug the pipeline below
the feeder. To remove this plug, the
pipeline will need to be opened. This
required control option is implemented by interlocking the rotary-valve motor with the blower motor
so that the rotary-valve motor stops
when the blower motor stops.
Should the conveying system need
to be stopped, certain steps must be
followed: The first step is to stop the
solids feed, after which the blower
is allowed to run until the conveying line is empty and the blower
discharge pressure has come down
to the empty-line pressure drop. Do
not stop the blower and the solids
feed at the same time.
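The three interlock rules just described can be summarized in a short sketch. The Python below assumes hypothetical blower and rotary_valve objects with start(), stop(), is_running() and pressure() methods; these names, the pressure value and the polling interval are illustrative stand-ins for the real control system, not an actual API.

```python
import time

BLOWER_PURGE_S = 5 * 60   # blower must run five minutes before feed starts
EMPTY_LINE_DP = 1.5       # empty-line pressure drop, psi (illustrative)

def start_conveying(blower, rotary_valve):
    blower.start()
    time.sleep(BLOWER_PURGE_S)     # establish full conveying airflow first
    if blower.is_running():
        rotary_valve.start()       # only now may solids enter the line

def blower_trip_interlock(blower, rotary_valve):
    # Call on any blower stop: the solids feed must stop immediately,
    # otherwise the line below the feeder will plug.
    if not blower.is_running():
        rotary_valve.stop()

def stop_conveying(blower, rotary_valve):
    rotary_valve.stop()            # always stop the solids feed first
    while blower.pressure() > EMPTY_LINE_DP:
        time.sleep(10)             # convey out the solids left in the line
    blower.stop()                  # never stop blower and feed together
```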
When a conveying cycle has
been completed and the solids flow
into the conveying line has been
stopped, the blower motor must
continue to run for at least a few
more minutes to ensure that all of
the solids that are still inside the
conveying line have been conveyed
to the destination bin. If these solids are allowed to remain in the
conveying line, they may plug the
line when the system is restarted.
These solids may also cause contamination if a different solid is
conveyed in the next cycle.
Solids feed must stop immediately
if the normal operating pressure of
the blower increases by 10% and
continues to rise. This is because
the pressure increase is most likely
due to the conveying line starting to
plug. If the ongoing feed stream is
not stopped, the pressure will keep
increasing, making the plugging
situation worse.
After stopping the feed, the
blower is allowed to run for about
five minutes in an effort to flush the
plug. If the plug does not flush out
and the blower pressure remains
high, the blower motor should be
stopped. The plug is then removed
by tapping the pipeline to find the
plug location and opening up the
plugged section of the pipeline.
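A sketch of this trip logic follows. The 10% threshold and the roughly five-minute flush period come from the text above; the callables (read_pressure, stop_feed, stop_blower), the normal_pressure input and the polling interval are hypothetical placeholders for the real control system.

```python
import time

FLUSH_TIME_S = 5 * 60   # give the blower about five minutes to flush a plug

def high_pressure_feed_trip(read_pressure, stop_feed, stop_blower,
                            normal_pressure, poll_s=5):
    last = read_pressure()
    while True:
        p = read_pressure()
        if p > 1.10 * normal_pressure and p > last:   # 10% high and rising
            stop_feed()                               # likely start of a plug
            time.sleep(FLUSH_TIME_S)
            if read_pressure() > 1.10 * normal_pressure:
                stop_blower()   # plug did not flush; locate and open the line
            return
        last = p
        time.sleep(poll_s)
```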
Solids feed must also be stopped
if the receiving bin or silo becomes
full, as indicated by its high-level
light and alarm. If the feed is continued, the bin will overfill and the
solids will back up into the conveying line, causing pluggage.
If a conveying line has diverter
valves, the position of the diverter
valves must be set up in a “through”
mode or in a “divert” mode before
starting the blower and the solids
feed. If the destination bin or silo
is changed for the next conveying
cycle, the diverter valves position
must be changed before the conveying blower and the rotary valve
are started.
Graphic control panel. In the
central control room, a graphic
panel (Figure 1) should be provided to show a schematic diagram
of the conveying system, starting from the air supply blower
to the receiving bins or silos.
This panel should have the following lights:
• Run lights to indicate the operating status of the blower motor and
the rotary-valve motor
• Position lights to indicate the divert or through position of the diverter valves
• Position lights to indicate the low
and high levels in the receiving
bin or silos
• Run lights to show the operating
status of the bin vent filters/dust
collectors
Figure 1 shows in one glance how
the conveying system has been set
up, and the operating status of all
components of the system.
Monitoring conveying air pressure. Conveying pressure is a key
parameter in pneumatic conveying systems. It must be regularly
monitored from the control room
as well as locally at the blower.
For measurement of the conveying
pressure, a locally mounted pressure indicator should be provided at
the blower discharge. If the blower
is located far away from the rotary
valve, a second pressure indicator
should be provided just upstream of
the rotary valve.
These two measurements will
show the overall pressure being
provided by the blower, and the
pressure drop in the conveying line.
In addition to local pressure indicators, these pressure measurements
should also be provided in the
control room using pressure
transmitters.
Digital pressure indicators are
better than the analog type, because
they can show the pressure much
more accurately, to two decimal places. These pressure measurements should be archived on the
computer so that historical data are
available if needed in the future.
An alarm for high blower-discharge
pressure should also be provided in
the control room.
Monitoring blower discharge air
temperature. A locally mounted
temperature indicator should be
provided at the blower discharge,
and also at the blower after-cooler
discharge if an air cooler is used.
This temperature is needed to
carry out calculations for the “as-built” conveying system. If this air temperature can affect the conveying characteristics of solids
being conveyed, it must be monitored closely.
Rotary-valve motor interlocks
with the blower motor. A manually adjustable timer with a selector switch should be provided in
the control room to provide three
functions: 1) Automatically stop the
rotary valve if the conveying pressure starts to increase (indicating
start of formation of a line plug); 2)
Allow the blower motor to continue
to run for the selected time, such
as 10 to 15 minutes (in an effort
to clear the line plug); and 3) Restart the rotary-valve motor if the
conveying pressure falls to the normal pressure.
Diverter valves. Position lights are
provided in the control room graphic
panel to indicate if the valves are in
the “through” or “divert” position.
Receiving bins. Low- and high-level lights are provided in the
graphic panel for the receiving bins.
An alarm should be provided in the
control room to indicate high level
in the bins. At the high level, the rotary valve motor should be stopped
automatically.
Bin vent filters/dust collectors.
The bin vent filters or the dust collectors on the bin vents must be
running before the conveying system is started. A “run” light for the
filter should be provided in the
graphic panel.
Pressure drop indicators should
be installed locally to show the pressure drop across the filter elements.
Their locations should be easily accessible to the operating staff. For
conveying materials that have high
dust loading, alarms for low- and
high-pressure drops should be provided in the control room. The low-pressure-drop alarm would indicate a ruptured filter element, and the high-pressure-drop alarm would indicate a completely clogged filter element.
Instrumentation checklist
A summary of the instrumentation
requirements, as described above, is
provided below:
For the blower:
• Local and control room mounted
running lights for the blower
motor
• Local pressure indicator at the
blower discharge
• Local temperature indicator at
the blower discharge
• Local temperature indicator at
the blower after-cooler discharge,
for applications using a cooler
• Pressure transmitter at the blower discharge with a pressure indicator in the control-room control panel, and computer storage of pressure data
• Control room alarm for high blower discharge pressure
• Blower motor interlocks with the rotary-valve motor
For the rotary valve:
• Local and control-room-mounted running lights for the rotary-valve motor
• Control-room-located, manually adjustable timer for starting and stopping the rotary-valve motor
• Interlocks with the blower motor
For the diverter valves:
• Position lights to indicate “through” and “divert” positions
• Hand switches for control room operation of valve positions
Receiving bin:
• Low-level and high-level switches with indicating lights for the receiving bins
• Control room alarm to indicate high level in the bin
Bin vent filters/dust collectors:
• Running lights for the bin vent filters or dust collectors
• Local pressure-drop indicator
• Alarms for low- and high-pressure drop across filter elements (optional)
Graphic control panel:
• Graphic panel showing the conveying system route with run lights for the blower motor and rotary-valve motor, position lights for the diverter valves, low- and high-level lights for the receiving bins, and run lights for the bin-vent filters
FIGURE 2. This Zenz diagram plots the log of pressure drop per unit pipeline length against the log of gas velocity, and shows the relationship of conveying velocity with conveying pressure at different solids-loading rates W1 and W2. The solids-loading rate is the solids-conveying rate divided by the internal cross-sectional area of the conveying pipeline. For these two loading rates, the figure also shows the transition points (Points D and G) at which the conveying system migrates from dilute to dense phase. For solids-loading rate W1, as the conveying velocity is reduced, the conveying system's operating point moves from Point C to Point D in the dilute phase, and then in the dense phase from Point D to Points E, F and G. Similarly, for the solids-loading rate W2, the operating point moves from Point H to Point G in the dilute phase, and then in the dense phase from Point G to Points K, L and M
FIGURE 3. This figure shows the design of the vent air system for venting out a portion of the blower airflow to determine saltation velocity
Finding the conveying velocity
Along with conveying pressure, conveying velocity is perhaps the most
important variable in pneumatic
conveying. After a conveying system has been installed and is going
through startup, its conveying velocity should be checked to make sure
it is not too low or too high, and is
about equal to the conveying velocity that is required. If the conveying velocity is too low, it may cause
line plugging problems; if it is too
high, it will result in higher particle
attrition, pipeline wear, and higher
energy usage.
The conveying velocity used in
the conveying system’s design calculations may be too low or too high
because it is difficult to find a reliable method to determine its correct value. This value depends upon
many variables, such as solids particle size, bulk-solids density, solids-to-air ratio, air density, pipeline diameter and others. Presently, there
are two methods to find the conveying velocity. The first method is to
use equations to calculate saltation
velocity (the gas velocity at which
particles will fall out of the gas
stream). These equations have been
developed by researchers to find
the impact of the above-mentioned
variables on saltation velocity. As
they are based on research work
that is carried out in small-scale
test equipment in a laboratory, they
do not cover the entire range of solids and all of their properties. These
equations can be found in published
books and literature.
The second method is to use conveying velocity values that are available in published literature, such as
those given in Table 1. It should be
noted that these published values are applicable only to the pneumatic conveying systems from which they were derived, and may or may not be applicable to new conveying systems. This is because the conveying velocity for a particular conveying system depends on the values of various factors and variables, such as solids particle size, particle-size distribution, particle density, air density, solids conveying rate, pipeline diameter and more. In addition, the published values give no information on the values of the variables on which they are based.
TABLE 1. COMMONLY USED CONVEYING VELOCITIES
Material: Conveying velocity, ft/min
Alum: 5,100
Alumina: 3,600
Bentonite: 3,600
Bran: 4,200
Calcium carbonate: 3,900
Clay: 3,600
Coffee beans: 3,000
Coke, petroleum: 4,500
Corn grits: 4,200
Corn, shelled: 3,300
Diatomaceous earth: 3,600
Dolomite: 5,100
Feldspar: 5,100
Flour (wheat): 3,600
Fluorspar: 5,100
Lime, hydrate: 2,400
Lime, pebble: 4,200
Malt, barley: 3,300
Nylon, flake: 4,200
Oats, whole: 4,200
Paper, chopped: 4,500
Polyethylene pellets: 4,200
Polyvinylchloride, powder: 3,600
Rice: 4,800
Rubber pellets: 5,900
Salt cake: 5,000
Salt, table: 5,400
Sand: 6,000
Soda ash, light: 3,900
Starch: 3,300
Sugar, granulated: 3,600
Trisodium phosphate: 4,500
Wheat: 3,300
Wood flour: 4,000
A proposed method
This third method is based on running a test on the as-designed and
built conveying system to determine
the true value of the solids saltation
velocity. The value of the saltation
velocity obtained by the test will be
accurate because it is based on the
properties of the solids being conveyed and on the as-designed and
built conveying system. This value
is then used to determine the value
of the conveying velocity.
This test requires gradually reducing the airflow that goes into the
conveying line so that the conveying
velocity continues to decrease until
it reaches saltation conditions. The
Zenz diagram (Figure 2) shows both
the dilute- and dense-phase conveying regimes, and the saltation
velocity interface between them. As
shown, the conveying pressure is at
a minimum at the saltation velocity.
In the test, the airflow and hence
the conveying velocity is reduced
until this minimum pressure point
is reached, after which the pressure
starts to increase.
The equipment required for this
test is shown in Figure 3. A vent
line is installed in the air-supply
line at the discharge of the blower.
Its purpose is to vent off to the atmosphere some of the conveying air
that is being supplied by the blower.
In this vent line, a flow-control valve
with a flow indicator is used to control the airflow that is to be vented
out. The airflow that is vented out
is then subtracted from the air supplied by the blower to determine
the airflow going to the conveying
line. The conveying velocity is then
calculated based on this airflow and
pipeline diameter.
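The bookkeeping for the test reduces to one formula: conveying velocity equals net airflow divided by the pipe's internal cross-sectional area. A minimal sketch, with illustrative numbers, is shown below; the 30% margin applied at the end anticipates the safety factor recommended later in the article.

```python
import math

# Flows are in actual ft3/min at line conditions; the diameter is the
# pipe's internal diameter in inches. All numbers are illustrative.

def conveying_velocity(blower_acfm, vent_acfm, pipe_id_in):
    """Superficial gas velocity (ft/min) in the conveying line."""
    area_ft2 = math.pi * (pipe_id_in / 12.0) ** 2 / 4.0
    return (blower_acfm - vent_acfm) / area_ft2

# Example: 4-in line, 350 acfm from the blower, 30 acfm vented
v = conveying_velocity(350.0, 30.0, 4.0)    # ~3,667 ft/min for these inputs

# At the minimum-pressure point found by the test, v is the saltation
# velocity; the article recommends roughly a 30% margin above it.
design_velocity = 1.30 * v                  # ~4,767 ft/min here
print(round(v), round(design_velocity))
```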
To run this test, the conveying
system is started and run at full
capacity for a few minutes to bring
it to steady-state conditions. Keeping the solids flowrate constant, the
vent valve is manually and gradually opened to start venting a few
cubic feet per minute of the conveying air, reducing the conveying airflow and the conveying velocity.
A close watch is kept on the discharge-pressure indicator installed
at the blower outlet. This pressure
will keep falling with the decrease
in airflow, but as shown in Figure
2, its value will eventually reach a
point after which it will start to increase. The objective of the test is to
find the airflow at that point. The
vent airflow is gradually increased
until this point is reached and the
pressure, instead of falling, starts to
increase. This is the minimum pressure point beyond which the conveying system migrates to dense-phase
conveying. At this point, the solids
reach their saltation velocity.
The saltation velocity value obtained by the test is increased by a
safety factor of about 30% to select
an appropriate value for the conveying velocity. Solids velocity always
decreases when solids flow through
a bend. This decrease can be 5 to
20% depending on the properties of
the solid being conveyed. Unless the
conveying velocity is high enough,
such a decrease can result in saltation of the solids and plugging of
the bend or its downstream conveying line.
This test-derived optimum conveying velocity is compared with the
velocity that is actually being used.
If the actual velocity currently in
use is lower, then the blower speed
is increased to match the optimum
conveying velocity; if it is higher,
then the blower speed is decreased.
The change in speed is determined
from the blower performance curve.
The speed change is implemented
by changing the belts and sheaves
of the blower.
■
Edited by Suzanne Shelley
Author
Amrit Agarwal is a consulting engineer with Pneumatic Conveying Consulting
(7 Carriage Rd., Charleston,
WV 25314; Email: polypcc@
aol.com). He retired from The
Dow Chemical Co. in 2002,
where he worked as a resident
pneumatic-conveying and solids-handling specialist. Agarwal has more than 40 years of
design, construction, operating and troubleshooting experience in pneumatic
conveying and bulk-solids-handling processes. He
holds an M.S. in mechanical engineering from the
University of Wisconsin, Madison, and an MBA
from Marshall University (Huntington, W. Va.).
He has written a large number of articles and
given classes on pneumatic conveying and bulk
solids handling.
Alarm Management By
the Numbers
Deeper understanding of common alarm-system metrics can improve remedial actions and
result in a safer plant
Kim VanCamp
Emerson Process
Management
In Brief
Alarm management performance metrics
Alarm system example metrics
Average alarm rates
Peak alarm rate
Alarm priority distribution
Alarm source contribution
Stale alarms
Closing remarks
Figure 1. A better understanding of alarm system metrics can lead to more focused remedial actions and help to make the
plant safer
Do you routinely receive “alarm management performance” reports,
or are you expected to monitor
a managerial dashboard equivalent? What do you look for and what does it
mean? We all know that fewer alarms mean
fewer operator interruptions and presumably
fewer abnormal process or equipment conditions. But a deeper understanding of the
more common alarm-management metrics
can yield greater insight, leading to more focused remedial actions and ultimately to a
safer, better performing plant (Figure 1).
This article reviews the now well established benchmark metrics associated with
the alarm-management discipline. Most articles previously published on alarm management cover alarm concepts (for example,
defining a valid alarm), alarm management
methods (for instance, rationalization techniques), justification (such as the benefits of
investing in alarm management) and tools
(including dynamic alarming enablers). This
article provides a different perspective. Written for process plant operation managers or
others that routinely receive alarm management performance reports, this article aims
to explain the most common metrics, without requiring an understanding of the alarmmanagement discipline in depth.
Alarm-management KPIs
The first widely circulated benchmark metrics, or key performance indicators (KPIs), for
alarm management relevant to the chemical
process industries (CPI) were published in the
Table 1. Example of typical alarm performance metrics, targets and action limits (metric: target; action limit)
Average alarm rate per operator (alarms per day): < 288; > 432
Average alarm rate per operator (alarms per hour): < 12; > 18
Average alarm rate per operator (alarms per 10 minutes): 1–2; > 3
Percent of 10-minute periods containing > 10 alarms: < 1%; > 5%
Maximum number of alarms in a 10-minute period: ≤ 10; > 10
Percent of time the system is in flood: < 1%; > 5%
Annunciated priority distribution (low priority): ~80%; < 50%
Annunciated priority distribution (medium priority): ~15%; > 25%
Annunciated priority distribution (high priority): ~5%; > 15%
Percent contribution of top 10 most frequent alarms: < 1% to ~5%; > 20%
Quantity of chattering and fleeting alarms: 0; > 5
Stale alarms (number of alarms active for more than 24 hours): < 5 on any day; > 5
1999 edition of the Engineering Equipment
and Materials Users Association publication
EEMUA-191 Alarm Systems – A Guide to
Design, Management and Procurement [1].
Later works from standards organizations,
such as the 2009 publication International Society of Automation (ISA) 18.2 Management
of Alarm Systems for the Process Industries
[2] and the 2014 publication IEC 62682 Management of Alarm Systems for the Process Industries [3], built upon EEMUA-191 and
have furthered alarm-management thought
and discipline. For example, they provide a
lifecycle framework for effectively managing alarms and establish precise definitions
for core concepts and terminology. Yet fifteen years later, little has changed regarding
the metrics used to measure alarm-system
performance. This consistency in measurement has been positive in many respects,
leading to the wide availability of generally
consistent commercial alarm analytic reporting products, from both control-system
vendors and from companies that specialize
in alarm management. Consequently, selection of an alarm-analysis product may be
based on factors such as ease of use, integration and migration, reporting capabilities,
price, support availability and so forth; with
reasonable certainty that the KPIs derived
from the chosen product can be interpreted
consistently and compared across sites and
across differing process control, safety and
other open platform communications (OPC)-capable alarm-generating sources.
In addition to defining the KPI measurements, the EEMUA-191, ISA-18.2 and
IEC62682 publications also suggest performance targets, based in large part on the
practical experience of the companies participating in the committees that contributed
to each publication. As an example, these
publications state that an average long-term
rate of new alarms occurring at a frequency
of up to 12 alarms per hour is the maximum
manageable for an operator. Suggested
performance levels such as this can provide a reasonable starting point if you are
just beginning an alarm-management program. But before deciding what constitutes
a reasonable set of targets for your site, you
should also consider other firsthand inputs,
like surveying your operators and reviewing
in-house studies of significant process disturbances and alarm floods. Note that more
research into the human factors that affect
operator performance is needed to validate
and potentially improve on the current published performance targets. Important work
in this area is ongoing at the Center for Operator Performance (Dayton, Ohio; www.
operatorperformance.org).
Alarm system example metrics
A typical alarm-performance report contains
a table similar to Table 1, where the metrics
and targets are based upon, and in many
cases, copied directly from, the EEMUA-191, ISA-18.2 and IEC 62682 publications. It
is also common to see locally specified action limits based on a site’s alarm philosophy.
When a target or action limit is exceeded, it
is important to ask: what problems are likely
contributing to the need for action, and what
are the actions? These questions are the
focus of the following discussion.
Average alarm rate
The average alarm rate is a straightforward measure of the frequency with which new alarms are presented to the operator, expressed as an average count per day, per hour or per 10-minute interval. As alarm frequency increases, an operator’s ability to respond correctly and in time to avoid the ultimate consequence of inaction decreases. If the rate is excessively high, it is probable that some alarms will be missed altogether or the operators will ignore them, thus eroding their overall sense of concern and urgency. So clearly it is an important metric.
Figure 2. Timeline views of the data can reveal periods where alarm performance is not acceptable. The chart plots daily alarm counts by priority (critical, warning, advisory) for May 6 to May 31, 2009; average alarm rates on a per-hour basis: overall, 16.5; during alarm floods, 100.7; excluding alarm floods, 7.9
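As a concrete illustration of how this KPI can be computed, here is a minimal Python sketch; it assumes alarm annunciation times have been exported from a historian as a list of datetime stamps, and the bucketing scheme and demo data are illustrative.

```python
from collections import Counter
from datetime import datetime

def alarms_per_interval(timestamps, interval_s=600):
    """Count new alarms in each 10-minute bucket (keyed by epoch // 600)."""
    return Counter(int(t.timestamp()) // interval_s for t in timestamps)

def average_rate(timestamps, interval_s=600):
    buckets = alarms_per_interval(timestamps, interval_s)
    first, last = min(buckets), max(buckets)
    n_intervals = last - first + 1      # include empty intervals in the span
    return sum(buckets.values()) / n_intervals

# Demo data: 30 alarms spread over one hour
events = [datetime(2016, 3, 1, 8, m) for m in range(0, 59, 2)]
print(average_rate(events))   # -> 5.0 alarms per 10-minute period
```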
Averages can be misleading, however, because they provide no sense of the peaks in
the alarm rate, making it difficult to distinguish
“alarm floods” from steady-state “normal”
operation. Consequently, most alarm performance reports supplement this basic KPI
value with a timeline view or separate calculation of alarm rates for both the times when
operation is normal and for times of an alarm
flood. Figure 2 presents a typical example.
The average alarm rate of 16.5 alarms per
hour exceeds the target KPI value of 12 from
Table 1, but is slightly less than the action
limit of 18 per hour, and so might not raise
concern, while the timeline view shows that
there are significant periods of time where
the performance is unacceptable.
Common contributors to an excessively
high alarm rate include the following:
• The alarm system is being used to notify
the operator of events that do not constitute actual alarms, such as communicating informational “for your information”
messages, prompts, reminders or alerts.
According to ISA-18.2, an “alarm” is an indication to the operator that an equipment
malfunction, process deviation or abnormal condition requiring a timely response is occurring
• Chattering or other frequently occurring
nuisance alarms are present. These often
originate from non-process alarm sources
of marginal interest to the operator, such
as field devices or system hardware diagnostics. Chattering alarms can also indicate an incorrect alarm limit or deadband
• Redundant alarms, where multiple alarms
are presented when a single abnormal situation occurs. An example is when a pump
is shut down unexpectedly, generating a
pump fail alarm in addition to alarms for low
outlet flow and low discharge pressure
• A problem with the metric calculation is occurring. A correct calculation only counts
new alarms presented to the particular
operator or operating position for which
the metric is intended, taking into consideration any by-design threshold settings or
other authorized filtering mechanisms that
cause fewer alarms to be presented to the
operator than may be recorded in system
event logs
Peak alarm rate
The two metrics — the percentage of
10-minute periods with more than 10
alarms, and the percent of time spent in
an “alarm flood” state — are calculated differently, but are highly similar in that they
quantify how much of the operator’s time
is spent within the highly stressful circumstance of receiving more alarms than can
be managed effectively.
EEMUA-191 defines the start of an alarm
flood as a 10-minute period with more than
10 new alarms, continuing through subsequent 10-minute intervals until reaching
a 10-minute interval with fewer than five
new alarms. Equally acceptable is to define a flood simply as a 10-minute period
with more than 10 new alarms. Often, an
alarm-performance report will supplement
these two metrics with a pie chart (Figure
3) that segments the report period into
10-minute periods that are categorized into named alarm-rate ranges,
such as acceptable, manageable, demanding and unacceptable.
Figure 3. Pie charts can supplement alarm performance reports and give information on how much time is spent in the acceptable range. New alarm activation rate distribution categories: acceptable (0–1 per 10 min.), manageable (2–4 per 10 min.), demanding (5–9 per 10 min.) and unacceptable (≥10 per 10 min.)
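The EEMUA-191 flood definition quoted above translates directly into code. A minimal sketch, assuming the alarm counts have already been binned into consecutive 10-minute periods (the input data are illustrative):

```python
def flood_periods(counts, start_limit=10, end_limit=5):
    """Yield (start_index, end_index) for each flood, indices inclusive.

    A flood starts with a period containing more than start_limit new
    alarms and continues until a period has fewer than end_limit.
    """
    in_flood = False
    start = None
    for i, c in enumerate(counts):
        if not in_flood and c > start_limit:
            in_flood, start = True, i
        elif in_flood and c < end_limit:
            yield (start, i - 1)
            in_flood = False
    if in_flood:
        yield (start, len(counts) - 1)

counts = [2, 14, 22, 9, 6, 3, 1, 30, 2]   # alarms per 10-minute period
print(list(flood_periods(counts)))        # -> [(1, 4), (7, 7)]
```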
Another commonly included metric in
the alarm-performance report, the peak
number of alarms within a 10-minute
period, is a straightforward measure
of the degree of difficulty of the worst-case alarm flood for the operator. In
poorly performing alarm systems, it is
common to see peak alarm counts in
a 10-minute period that exceed 250,
a total that would overwhelm even the
most highly skilled operator.
Common contributors to high peakalarm-rate frequency and severity include the following items:
• Multiple redundant alarms for the
same abnormal condition. The optimum situation is of course that any
single abnormal event will produce
just one alarm, representing the best
choice in terms of operator comprehension and the quickest path to take
remedial action. This requires study of
alarm causes and often leads to the
design of conditional, first-out or other
form of advanced alarming logic
• Cascading alarms. The sudden
shutdown of equipment often triggers automated actions of the control
system, which in turn, triggers more
alarms
• False indications. When routine
transitions between process states
occur, the alarm system is not usually
designed to “follow the process,” so
it can therefore produce a multitude
of false indications of an abnormal
condition. Thus, logic is typically
required to detect state changes and
suppress or modify alarms accordingly
Some systems provide specialized
alarm views that present alarms in a
graphical pattern to aid an operator’s
comprehension of peak alarm events
and their associated causality, supplementing the classic alarm list to help provide a built-in layer of defense against the
overwhelming effects of an alarm flood.
Alarm priority distribution
When faced with multiple alarms, the
operator must decide which to address
first. This is — or should be — the basis
for assigning priority to an alarm. Most
systems will employ three or four priorities: low, medium, high and very-high.
Figure 4. When the number of high-priority alarms exceeds that of low-priority alarms, the methodology of how alarms are assigned priority should be evaluated. Alarm priority distribution in the example shown: high, 51.8%; medium, 39.4%; low, 8.7%
There are a number of well accepted
methods for assigning priority, the most
common being a systematic guided
(selection-based) consideration of the
severity of the consequence of inaction
combined with the time available for the
operator to take the required action.
Conventional wisdom says that the annunciated alarm-priority distribution experienced by the operator for low-, medium- and high-priority alarms should
be in an approximate ratio of 80, 15 and
5%. Ultimately however, the goal should
be to guide the operator’s determination
of the relative importance of one alarm
compared to another, based on their
importance to the business.
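Checking a system against the 80/15/5 guideline is a simple tally. A minimal sketch, assuming annunciated alarms are available as a list of priority labels (the demo data and target values are illustrative):

```python
from collections import Counter

TARGETS = {"low": 80.0, "medium": 15.0, "high": 5.0}   # guideline percentages

def priority_distribution(priorities):
    """Return the percentage of annunciated alarms at each priority."""
    counts = Counter(priorities)
    total = sum(counts.values())
    return {p: 100.0 * counts.get(p, 0) / total for p in TARGETS}

# Demo data for a skewed system
events = ["low"] * 45 + ["medium"] * 35 + ["high"] * 20
for p, pct in priority_distribution(events).items():
    print(f"{p}: {pct:.1f}% (guideline ~{TARGETS[p]:.0f}%)")
```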
Figure 4 illustrates a situation where
the number of high-priority (critical)
alarms being presented to the operator
far exceeds the low-priority (advisory)
alarms, suggesting the need to review
the consistency and methodology of
the priority assignment.
Common contributors to out-of-balance alarm-priority distributions include
the following:
• Alarm prioritization (a step in the rationalization process) has not been performed and alarm priorities have been
left at their default values
• Misuse of the priority-setting scheme
to classify alarms for reasons other
than providing the operator with a tiebreaker during alarm peaks. For example, using priority to classify alarms
by impact categories, such as environmental, product quality, safety/
health, or economic loss
• Lack of discipline in setting priority based on consideration of direct
(proximate) consequences rather than
ultimate (unmitigated) consequences.
While it may be the case that a designed operator action could fail, followed by a protective system failure,
followed by a subsequent incorrect
human response, such what-if considerations are likely to lead to a vast skewing of alarm priorities toward critical
Figure 5. A small number of alarm sources can often account for the majority of alarms. In the example shown, the ten most frequent alarm sources account for roughly 80% of the cumulative alarm count
Alarm source contribution
The percent of alarms coming from the topten most frequent alarm sources relative to
the total alarm count is a highly useful metric for quantifying, identifying and ultimately
weeding out nuisance alarms and alarmsystem misuse. This is especially true if the
alarm performance report covers a range of
time where operations were routine and without significant process upsets or equipment
failures. The top-ten alarm sources often
provide “low-hanging” fruit for alarm-management performance improvement. They
are a handful of alarms, which if addressed,
will create a noticeable positive change for
the operator.
Figure 5 shows a pattern observed in
many control systems, where as few as
ten alarm sources (like a control module or
transmitter) out of the many thousands of
defined alarm sources, collectively account
for about 80% of all of the alarms presented
to the operator. In this example, the first
alarm source (FIST111) alone was responsible for 15% of all of the alarms presented
to the operator.
Another related metric is the count of
chattering alarms — alarms that repeatedly
transition between the alarm state and the
normal state in a short period of time. The
specific criteria for identifying chattering
alarms vary. The most common method is
to count alarms that activate three or more
times within one minute.
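That counting rule is easy to implement with a sliding window. A minimal sketch, assuming activation events are available as (tag, time) pairs; the tags and timestamps are illustrative:

```python
from collections import defaultdict

def chattering_sources(events, window_s=60, threshold=3):
    """Return tags that activated `threshold` or more times in one window."""
    by_source = defaultdict(list)
    for tag, t in events:
        by_source[tag].append(t)
    chattering = set()
    for tag, times in by_source.items():
        times.sort()
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window_s:
                chattering.add(tag)
                break
    return chattering

events = [("LI-101.HI", 0), ("LI-101.HI", 20), ("LI-101.HI", 45),
          ("TI-204.HI", 0), ("TI-204.HI", 500)]   # (tag, epoch seconds)
print(chattering_sources(events))                 # -> {'LI-101.HI'}
```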
When the top-ten alarm sources generate
over 20% of all the alarms presented to the
operator, it is a strong indicator that one or
both of the following is the case:
• Some of those alarms are nuisance alarms
— alarms that operators have come to expect, and in most cases, ignore or consider to be informational
• The alarm system is being misused to (frequently) generate operator prompts based
on routine changes in process conditions
or operating states that may or may not
require action
Eliminating chattering alarms is generally
straightforward, using signal-conditioning
features found in most control systems,
such as on-delay, off-delay and hysteresis
(deadband).
Stale alarms
A stale alarm is one that remains annunciated for an extended period of time, most
often specified as 24 hours. Stale alarms
are surprisingly challenging to quantify.
Metrics based on event histories require the
presence of both the start and ending alarm
event in order to compute an alarm’s annunciated duration. There is no event representing the attainment of a certain age of
an annunciated alarm. Thus, it is common
to miss counting stale alarms if their activation event or all-clear event falls outside
the range of dates and times covered in the
event history. Consequently, there are alternate methods for quantifying stale alarms,
such as periodic sampling of the active
alarm lists at each operator workstation, or
simply counting the number of alarms that
attained an age greater than the threshold
age. Given this variation in methods, it is important to exercise caution when comparing stale-alarm metrics across different sites
that may be using different alarm-analytic
applications.
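One of the alternate methods mentioned above, counting alarms whose annunciated age exceeded a threshold, might look like the sketch below; clamping missing activation or all-clear events to the report window is one illustrative way of handling the incomplete-event problem the text describes.

```python
STALE_S = 24 * 3600   # most common stale-alarm threshold: 24 hours

def stale_alarm_count(intervals, window_start, window_end):
    """intervals: list of (activated, cleared) epoch times; cleared may be
    None for alarms still active at the end of the report window."""
    count = 0
    for activated, cleared in intervals:
        start = max(activated, window_start)        # clamp to the window
        end = min(cleared if cleared is not None else window_end, window_end)
        if end - start > STALE_S:
            count += 1
    return count

# Three alarms over a three-day window starting at t = 0 (illustrative)
day = 24 * 3600
print(stale_alarm_count([(0, 2 * day), (day, day + 600), (0, None)],
                        0, 3 * day))   # -> 2
```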
In addition to being hard to quantify, stale
alarms can also be some of the most difficult
nuisance alarms to eliminate. Thus in some
respects the upward or downward trend in
stale alarm counts provides an informal indication of the overall ongoing health of the
alarm management program.
Common contributors to stale alarm
counts include the following:
• Routine transitions between process states
where the alarm system is not designed to
adapt and therefore provides false indications of an abnormal condition
• Alarms associated with standby or idle
equipment
• Alarms configured to monitor conditions no
longer relevant or available, an indicator of
poor management-of-change processes
• Alarms that are essentially latched due to
excessive application of hysteresis
• Alarms that persist beyond the called-for
operator action, waiting for maintenance
action. This likely constitutes an incorrect
use of the alarm system, using it as a recording method for outstanding maintenance actions
In conjunction with reviewing the number
of stale alarms or the list of stale alarms, it is
also important to review what alarms have
been manually suppressed (thus removing
them from the view of the operator). Suppressing the alarm will remove a stale alarm
from the alarm list (effectively reducing the
number of stale alarms), but will not address
the underlying condition.
Closing remarks
This article touches on just some of the key
alarm-system performance metrics and
what the numbers represent, in terms of the
issues that lie behind them and possible actions to address them. With this understanding, periodic reviews of alarm-performance
reports should lead to more focused actions
that can improve operator effectiveness and
thereby reduce the risks for economic loss,
environmental damage or unsafe situations.
For further reading on these and other alarm
performance metrics, including suggested
methods for corrective action, one outstanding resource is Ref. 4. n
Edited by Scott Jenkins
References
1. EEMUA Publication 191 — Alarm Systems: A Guide to Design, Management and Procurement, Third edition, Engineering Equipment and Materials Users Association, 2013.
2. ANSI/ISA-18.2-2009 — Management of Alarm Systems for the Process Industries, approved June 23, 2009. ISBN: 978-1-936007-19-6.
3. IEC 62682, Management of Alarm Systems for the Process Industries, International Electrotechnical Commission, 2014.
4. International Society of Automation, Technical Report ISA-TR18.2.5, Alarm System Monitoring, Assessment and Auditing, ISA, 2012.
Author
Kim VanCamp is the DeltaV marketing product manager for alarm management at Emerson Process Management (8000 Norman
Center Drive, Bloomington, MN 55437; Phone:
1-952-828-3500; Email: Kim.VanCamp@
emerson.com). He joined Emerson in 1976
and has held senior assignments in manufacturing, technology, field service, customer
service, service marketing and product marketing. VanCamp is a voting member of the
ISA-18.2 committee on Management of Alarm Systems for the Process Industries and has published multiple papers on alarm management. He holds a bachelor’s degree in electrical engineering
from the University of Nebraska.
Part 2
Understand and Cure
High Alarm Rates
Alarm rates that exceed an operator’s ability to manage them are common. This article explains
the causes for high alarm rates and how to address them
Bill Hollifield
PAS Inc.
In Brief
Alarm rates
Averages can be misleading
Bad actor alarm reduction
Alarm rationalization
Alarm management work processes
Concluding remarks
Modern distributed control systems (DCS) and supervisory control and data acquisition (SCADA)
systems are highly capable at
controlling chemical processes. However,
when incorrectly configured, as is often the
case, they also excel at another task — generating alarms. It is common to find alarm
rates that exceed thousands per day or per
shift at some chemical process industries
(CPI) facilities (Figure 1). This is a far greater
number than any human can possibly handle
successfully. This article examines the nature
of the problem and its cure.
The alarm system acts as an intentional interruption to the operator. It must be reserved
for items of importance and significance. An
alarm should be an indication of an abnormal
condition or a malfunction that requires operator action to avoid a consequence. Most
alarm systems include interruptions that
meet this definition, but also many miscellaneous status indications that do not.
Figure 1. Alarm rates on the order of thousands per day are not uncommon in some CPI facilities
A major reason for this situation is that control system manufacturers make it very easy to create an alarm for any imaginable condition. A simple analog sensor, such as one
for temperature, will likely have a dozen alarm
types available by simply clicking on check
boxes in the device’s configuration. Without
following sound alarm-management principles, the typical results are over-alarming,
nuisance alarms, high alarm rates and an
alarm system that acts as a nuisance distraction to the operator rather than a useful tool.
Whenever the operators’ alarm-handling
capacity is exceeded, operators are
forced to ignore alarms, not because they
want to do so, but because they are not able
to handle the number of alarms. If this is the
case, the average, mean, median, standard
deviation, or other key performance indicators (KPIs; see Part 1, p. 50) for alarms do
not matter, because plant managers have no
assurance that operators are correctly ignoring inconsequential alarms or are paying attention to the ones that matter. This situation
contributes to many major accidents.
Alarm rates
The International Society of Automation (ISA;
Research Triangle Park, N.C.; www.isa.org)
Standard 18.2 on alarm management identifies the nature of the problem and offers a
variety of assessment measurements. An
important measurement is the rate of alarms
annunciated to a single operator.
Figure 2 shows an overloaded alarm system. The difference between the two lines is
the effect of including or removing only 10
individual high-rate nuisance alarms. This is
a common problem that is discussed later in
the article.
To respond to an alarm, an operator must
detect the alarm, investigate the conditions
causing the alarm, decide on an action, take
the action and finally, monitor the process
to ensure that the action taken resolves the
alarmed condition. These steps take time
and some must necessarily be executed sequentially. Others can be performed in parallel as part of a response to several alarms
occurring simultaneously.
Given these steps, handling one alarm in
10 minutes (that is, approximately 150 over a
24-h period) can generally be accomplished
without the significant sacrifice of other operational duties, and is considered likely to be
acceptable. A rate greater than 150 per day
begins to become problematic. Up to two
alarms per 10-minute period (~300 alarms/
day) are termed the “maximum manageable.”
More than that may be unmanageable.
The acceptable alarm rates for small periods of time (such as 10 minutes or one hour)
depend on the specific nature of the alarm,
rather than the raw count. The nature of the
response varies greatly in terms of the demand upon the operator’s time. The duration
of time required for an operator to handle an
alarm depends upon the particular alarm.
As an example, consider a simple tank with
three inputs and three outputs. The tank’s
high-level alarm occurs. Consider all of the
possible factors causing the alarm and what
the operator has to determine:
• Too much flow on inlet stream A, or B or C
• Too much combined flow on streams
A-B, A-C, B-C or A-B-C
• Not enough flow on outlet stream D, E or F
• Not enough combined flow on streams
D-E, D-F, E-F or D-E-F
• Several more additional combinations of
the above inlet and outlet possibilities.
The above situation takes quite a while
to diagnose, and involves observing trends
of all of these flows and comparing them to
the proper numbers for the current process
situation. The correct action varies highly
with the proper determination of the cause
or causes. The diagnosis time varies based
upon the operator’s experience and involvement in previous similar situations.
Process control graphics (human-machine interfaces; HMIs) play a major role in
effective detection of abnormal situations
and responses to them. Using effective
HMIs, an operator can quickly and properly ascertain the cause and corrective action for an abnormal situation. However, the
quality of the HMI varies widely throughout
the industry. Most HMI implementations
are little more than a collection of numbers
sprinkled on a screen while showing a piping and instrumentation diagram (P&ID),
making diagnosis much more difficult. For
more discussion on this topic, search the
Internet for the term “High-Performance
HMI,” or see the comprehensive white
paper cited in Refs. 1 and 2.
As a result, the diagnosis and response
to a simple high-tank-level alarm becomes
quite complicated. Given the tasks involved,
it might only be possible to handle a few
such alarms in an hour.
Other alarms are simpler, such as, “Pump
412 should be running but has stopped.”
The needed action is very direct: “Restart the
pump, or if it won’t restart, start the spare.”
Operators can handle several such alarms
in 10 minutes. It takes less time to
assess and work through the situation.
Response to alarm rates of 10 alarms per
10 minutes (the threshold of a “flood”) can
possibly be achieved for short periods of time
— but only if the alarms are simple ones. And
this does not mean such a rate can be sustained for many 10-minute periods in a row.
During flood periods (Figure 3), operators are
likely to miss important alarms. Alarm rates
per 10 minutes into the hundreds or more,
lasting for hours, are common. What are the
odds that the operator will detect the most
important alarms in such a flood? Alarm floods can make a difficult process situation much worse, and are often the precursors to major upsets or accidents.

[Figure 3. During alarm flood periods, it is very likely that operators will miss important alarms. The chart shows annunciated alarms per 10 minutes over 8 weeks: 820 separate floods (an alarm flood is 10 or more alarms in 10 minutes), a highest 10-minute rate of 144, a highest count in a single flood of 2,771, a longest flood duration of 19 hours, and several peaks above 1,000.]
Averages can be misleading
Alarm performance should generally be
viewed graphically rather than as a set of
averages. Imagine that during one week,
your alarm system averaged 138 alarms per
day and an average 10-minute alarm rate of
0.96. That would seem to be well within the
bounds of acceptability. But the data producing those average numbers could look
like that shown in Figure 4.
The first flood lasted 40 minutes with 118
alarms. The second flood lasted 30 minutes
with 134 alarms. How many of those alarms
were likely to be missed? A simplistic answer
(but good enough for this illustrative purpose)
is to count the alarms that exceed 10 within
any 10-minute period for the duration of each
flood, which, for the current example, would
be a total of 182. In other words, despite
these seemingly great averages (averages that many plant managers would consider strong alarm-system performance and would be happy to achieve), the alarm
pattern still puts the operators in the position
of likely missing almost 200 alarms. Missing
so many alarms can result in improper operator actions and undesirable consequences
— perhaps quite significant ones.
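The simplistic counting rule just described is easy to automate. The Python sketch below is one illustrative way to do it, assuming alarms are binned into fixed 10-minute periods rather than a rolling window; applied to the two floods above, it reproduces the total of 182.

from collections import Counter
from datetime import datetime, timedelta

def likely_missed(alarm_times: list[datetime], manageable: int = 10) -> int:
    # Bin alarms into fixed 10-minute periods and count everything beyond
    # the ~10 alarms per period an operator can plausibly handle.
    bin_size = timedelta(minutes=10).total_seconds()
    bins = Counter(int(t.timestamp() // bin_size) for t in alarm_times)
    return sum(n - manageable for n in bins.values() if n > manageable)

# For the example above: 118 alarms across four 10-minute bins plus 134
# across three bins gives (118 - 40) + (134 - 30) = 182 likely missed.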
It is easy to plot such data, as in Figure 5.
During an eight-week period, almost 21,000
alarms were likely to be missed. A weekly
view of such data will likely gain the
attention of management, whereas viewing
the overall averages alone would indicate that
things are satisfactory when they are not.
Bad actor alarm reduction
Many types of nuisance alarm behaviors
exist, including chattering (rapidly repeating), fleeting (occurring and clearing in very
short intervals), stale, duplicate and so forth.
Alarms with such behaviors are called “bad
actors.” The most common cause of high
alarm rates is the misconfiguration of specific alarms, resulting in unnecessarily high
alarm occurrence rates. Commonly, 60–80%
of the total alarm occurrences on a system
come from only 10–30 specific alarms. Chattering alarms and fleeting alarms are both
common. Simply ranking the frequency of
alarms will identify the culprits. Finding and
correcting these rate-related nuisance behaviors will significantly reduce alarm rates
with minimal effort.
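The ranking itself is nearly a one-liner in most tools. Below is a minimal Python sketch, assuming the alarm log has been exported as a list of tag names, one entry per annunciation; the function name is illustrative.

from collections import Counter

def bad_actor_pareto(alarm_tags: list[str], top_n: int = 10):
    # Rank configured alarms by occurrence count and report the share of
    # the total load carried by the top N.
    top = Counter(alarm_tags).most_common(top_n)
    share = sum(n for _, n in top) / len(alarm_tags)
    return top, share

# In the Figure 6 example discussed below, the ten most frequent alarms
# carry 76% of the load, so share would come back as roughly 0.76.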
In the example data shown in Figure 6,
76% of all alarm occurrences came from
only 10 individual configured alarms. In fact,
the top two alarms make up 50% of the
total load, with about 48,000 instances in
30 days. Alarms are never intentionally designed to annunciate so frequently, but they
do. In this configuration, they would not perform a useful function; rather, they would be
annoying distractions.
Many of these were chattering alarms. In
summarizing 15 alarm-improvement projects
at power plants, the author’s employer found
that 52% of all alarm occurrences were associated with chattering alarms. Proper application of alarm deadband and alarm on-delay/off-delay time settings usually corrects
the chattering behavior. The calculations for
determining those settings are straightforward (but beyond the scope of this article).
Much more detailed information for solving
all types of nuisance alarm problems can be
found in Ref. 3.
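The article defers the deadband and delay-time calculations to Ref. 3, but the filtering mechanism itself can be sketched. Below is an illustrative Python model of on-delay/off-delay behavior; the 15-s and 30-s defaults are assumptions for the example, not recommended settings.

def apply_delays(samples, on_delay=15.0, off_delay=30.0):
    # samples: (time_s, condition_active) pairs at a fixed scan rate.
    # The alarm annunciates only after the condition persists for
    # on_delay seconds, and clears only after it stays healthy for
    # off_delay seconds, which suppresses chattering.
    annunciated = False
    pending_since = None
    out = []
    for t, active in samples:
        if active != annunciated:
            if pending_since is None:
                pending_since = t
            if t - pending_since >= (on_delay if active else off_delay):
                annunciated = active
                pending_since = None
        else:
            pending_since = None
        out.append((t, annunciated))
    return out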
Alarm rationalization
The other cause of high alarm rates requires
more effort to address. Most alarm systems
are initially configured without the benefit of
a comprehensive “alarm philosophy” document. This document sets out the rules
for determining what kinds of situations qualify for alarm implementation.
It specifies methods for consistently determining alarm priority, controlling alarm
suppression, ongoing performance
analysis, management of change, and
dozens of other essential alarm-related
topics.
Systems created without such a document are usually inconsistent collections of “true alarms” along with
many other items, such as normal status notifications that should not use the
alarm system. Such non-alarms diminish the overall effectiveness of the system and diminish the operator’s trust in
it. They must be purged. While it may
be easy to spot things that clearly have
no justification for being alarms by looking at the list of most frequent alarms,
a comprehensive alarm rationalization is
needed to ensure the consistency of the
overall alarm system.
With alarm rationalization, every existing alarm is compared to the principles
in the alarm philosophy document and
is either kept, modified or deleted. Setpoints or logical conditions are verified.
Priority is assigned consistently. New
alarms will be added, but the usual
outcome of rationalization is a reduction in configured alarms by 50–75%.
Since the alarm-management problem was identified in the early 1990s,
thousands of alarm systems have undergone this process and achieved the
desired performance.
After the bad actor reduction and the
rationalization steps, alarm rates are
usually within the target limits. A typical
result is shown in Figure 7. Significant
process upsets, particularly equipment
trips, may still produce some alarm
floods, which can be addressed in Step
6 listed below.
The 2009 publication of the ISA-18.2
Alarm Management Standard includes
both having an alarm philosophy document and performing alarm rationalization as mandatory items. For a comprehensive white paper on understanding
and applying ISA-18.2, see Ref. 4.
Alarm management work process
There is an efficient seven-step plan for
improving an alarm system, proven in
more than 1,000 improvement projects
in plants throughout the world.

[Figure 4. Different alarm data can generate similar average alarm rates, and the average rate may not tell the full story. The chart shows annunciated alarms per 10 minutes over 7 days, with one flood of 118 alarms over 40 minutes and another of 134 alarms over 30 minutes.]

Steps 1–3 are simple, and often done simultaneously as an initial improvement effort
with fast, high-impact results.
Step 1: Develop, adopt and maintain
an alarm philosophy. A comprehensive
guideline for the development, implementation and modification of alarms,
an alarm philosophy establishes basic
principles for a properly functioning
alarm system. It provides an optimum
basis for alarm selection, priority setting, configuration, response, handling
methods, system monitoring and many
other topics.
Step 2: Collect data and benchmark
the alarm system. Measuring the existing system against known, best-practice
performance indicators identifies specific deficiencies, such as various types
of nuisance alarms, uncontrolled suppression, and management-of-change
issues. A baseline is established for improvements measurement.
Step 3: Perform “bad actor” alarm
resolution. Addressing a few specific
alarms can substantially improve an
alarm system. Bad actor alarms, which
can render an alarm system ineffective,
are identified and corrected to be consistent with the alarm philosophy. An
ongoing program to identify and resolve
nuisance alarms is necessary.
Step 4: Perform alarm rationalization.
Alarm rationalization is a comprehensive
review of the alarm system to ensure it
complies with the principles in the alarm
philosophy. This team-based effort reexamines existing and potential alarms
configured on a system. Alarms to be
added, deleted and reconfigured are
identified, prioritized and documented.
The resulting alarm system has fewer
configured alarms and is consistent and
documented with meaningful priority
and setpoint values.
[Figure 5. Despite sound averages for alarm rates, it can still be the case that many alarms could be missed during alarm flood periods. The chart shows alarms per day likely to have been missed over 8 weeks: Week 1, 3,885; Week 2, 2,281; Week 3, 2,728; Week 4, 1,903; Week 5, 2,173; Week 6, 1,443; Week 7, 2,253; Week 8, 4,260; total, 20,926.]

[Figure 7. Alarm rates can usually be brought into target limits by alarm rationalization and bad-actor reduction steps. The chart shows alarms per day before and after rationalization: average before, 2,417; average after, 249 (an 89% reduction).]

Step 5: Implement alarm audit and enforcement technology. Once an alarm system is rationalized, its configuration must not change without authorization. Because DCS systems can be easily
changed by a variety of sources, they often
require mechanisms that frequently audit
(and enforce) the approved configuration.
Step 6: Implement advanced alarm management. Certain advanced alarm capabilities may be needed on some systems to
address specific issues. For example, statebased alarming monitors the current process
state, and alarm settings are dynamically altered in predetermined ways to match the
alarming requirements of that process state.
Alarm flood suppression temporarily eliminates the expected and distracting alarms
from a unit trip, leaving the relevant alarms
that assist the operator in managing that
post-trip situation. Such advanced methods
can ensure that the alarm system is effective
even in abnormal situations.
Step 7: Control and maintain the improved system. An effective alarm system
requires an ongoing and typically automated
program of system analyses that may include KPI monitoring and the correction of
problems as they occur.
[Figure 6. In many cases, the most frequently occurring alarms make up the bulk of the total alarm load. The Pareto chart shows alarm counts and cumulative percentage for the most frequent annunciated alarms over 30 days (Tag1.Alarm through Tag10.Alarm); ten alarms make up 76% of the total alarm loading.]
Concluding remarks
The various problems with alarm systems are
well recognized and there are proven solutions
to these problems. The principles from these
solutions have been successfully applied to
thousands of alarm systems worldwide. The
alarm management body of knowledge is mature. Solving alarm-system problems simply
requires the will and effort to do so.
■
Edited by Scott Jenkins
References
1. Hollifield, B. and Perez, H., Maximize Operator Effectiveness: High Performance HMI Principles and Best Practices, Part 1 of 2, PAS Inc., Houston, 2015.
2. Hollifield, B. and Perez, H., Maximize Operator Effectiveness: High Performance HMI Case Studies, Recommendations, and Standards, Part 2 of 2, PAS Inc., Houston, 2015.
3. Hollifield, B. and Habibi, E., The Alarm Management Handbook, 2nd Ed., PAS Inc., Houston, 2010.
4. Hollifield, B., Understanding and Applying the ANSI/ISA 18.2 Alarm Management Standard, PAS Inc., Houston, 2010.
Author
Bill Hollifield is the principal consultant at PAS Inc. (16055 Space Center Blvd., Suite 600, Houston, TX 77062; Phone: 281-286-6565; Email: bhollifield@pas.com). He is responsible for alarm management and high-performance HMI. He is a member of the
ISA-18 Alarm Management committee, the
ISA-101 HMI committee, and is a co-author of
the Electric Power Research Institute’s (EPRI)
Alarm Management Guidelines. Hollifield is
also coauthor of the Alarm Management Handbook and The High
Performance HMI Handbook, along with many articles on these topics. Hollifield has a dozen years of international, multi-company experience in all aspects of alarm management and effective HMI
design consulting for PAS, coupled with 40 years overall of industry
experience focusing on project management, chemical production
and control systems. Hollifield holds a B.S.M.E. from Louisiana Tech
University and an MBA from the University of Houston. He’s a pilot
and has built his own plane (with a high-performance HMI).
Feature Report
Wireless
Communication
in Hazardous Areas
Stephan Schultz
R. Stahl

Consider these criteria in deciding where wireless fits in today's CPI plants and the explosive atmospheres that permeate them
Wireless communications have
great potential in the chemical process industries (CPI)
because they do away with
complex and costly cable installations
and enable completely new applications. And while a recent wave of successful demonstrations has begun to
emerge in the CPI (for more, see CE,
Nov. 2009, p. 17–23), a number of hurdles stand in the way of a completely
wireless Utopia. In most cases, the
totally reliable, uncompromised availability of a production plant remains a
paramount objective, and it will therefore likely take some more time before
radio transmissions of critical signals
in control loops take root.
One impediment often cited as a limit
for wireless solutions is power. In fact,
many process applications basically
rule out wireless field devices without an independent, onboard source
of power. Granted, there have been a
number of promising approaches in
this regard, which are based on consumption-optimized electronic circuits
and alternative sources of power using
accumulators or solar cells, or on so-called energy harvesting, where energy
is recovered from vibration, temperature fluctuations, and so on.
At the same time, there are a range
of ancillary functions in almost any
plant today for which wireless communications truly are already a boon.
In these cases, power is not an insurmountable hurdle because the power
requirements are low enough to maintain battery life of five or more years.
Meanwhile, the use of wired power
should not be ruled out automatically.
In existing plants, power sources are
around nearly every corner, so the cost
of wiring for power is not nearly as
significant as the cost of the wiring for
the control signals themselves.
A look at typical routines in process
plants will identify the potential ancillary application areas with a view to
how and how much they may benefit.
Once a case is made for wireless technology in general for these purposes,
users are faced with various solutions to
choose from for actual implementations.
And last but not least, there are additional safety considerations for applications in hazardous areas. All of these
aspects will be discussed in order to enable users to make informed choices, or
to at least prime themselves for further
consultations with specialist manufacturers or systems solution providers.
Application areas
Logistics and supply chain
State-of-the-art logistics solutions
depend on systems that acquire data
on the flows of goods with the highest
possible degree of precision, and preferably at the very instant when stock
items are taken out or replenished.
In the CPI, many raw materials and
products are transported in containers
such as drums, tanks, intermediate
bulk containers (IBCs), and so on. Most
containers are marked with either
barcodes or RFID (radio-frequency
identification) tags. Acquiring RFID
tag information is an obvious model
application for wireless technology. As
yet, though, most reading devices
used for this purpose are handheld
terminals with a cable that curtails
their operation. Portable radio devices
capable of both acquiring data and
passing it on via wireless link to MES
(manufacturing execution system)
and ERP (enterprise resource planning) servers save time and costs, and
increase data reliability due to exact
and nearly instant data acquisition.
RFID tags can be expected to increase
their foothold in the CPI due to reliability and safety benefits, since one
key RFID advantage over barcodes is
that even smudged and stained labels
are still legible. Also, there are other
convenient features that previous solutions could not provide; for instance,
data can be written to the tags more
than once and it is possible to acquire
several tags at the same time.
Maintenance and monitoring
Anyone in the field who is servicing
a plant is likely to benefit from using
portable devices with a connection to
a central management system, since
doing so enables optimization of typical routines and measurements. For
example, maintenance instructions
can be automatically dispatched because all relevant information can be
provided via radio to a portable handheld device that service engineers can
carry with them in the field. Staff are
then able to inspect equipment as
needed and, upon completion or even
while they are taking care of a maintenance task, enter the results of the
inspection, or repairs made, directly
into the portable device. Those data
are then instantly available in a central database and can be utilized, for
instance, for documentation purposes
or even to speed up billing. Similar
advantages apply to operating and
monitoring tasks in industrial plants.
Portable devices make it possible to
read realtime measured values and
therefore keep an eye on the actual
state of the production plant onsite.
At the same time, operators in the
field have access to ancillary information such as maintenance schedules,
operating instructions, ATEX or other
hazardous area certificates and much
more. As a result, routine procedures
can be modified to become considerably more efficient.
Security and asset management
Using radio transmission, camera systems or sensors at distant measuring
points — for instance, within pump
stations — can be integrated into the
site’s human-machine-interface (HMI)
concept at a low cost and can be readily displayed where needed. While process signals in the narrow sense are
absolutely needed to ensure proper
control of a plant, a host of other measured values may be useful only for
operative improvements or preventive maintenance. Radio transmission
is a good alternative for such signals
not only if they are particularly hard
to acquire any other way, but, more
broadly, for all kinds of non-critical
asset management data.
For the time being, HART communication is most commonly implemented
to transmit signals that are only used
for process optimization and similar
purposes. Wireless solutions are well
suited to satisfy the growing demand
for higher-level asset management.
There is good cause for its increasing
importance in the process industry:
live information about the current
state of production equipment in a
plant in as much detail as possible
gives staff a better means to anticipate imminent plant failures and to
adjust maintenance intervals to actual needs.
Networking Options
Whatever functions are to be enabled,
all wireless network installations require thorough planning, which starts
with the definition of the requirements
for the wireless network. A range of
aspects has to be considered, including bandwidth, mobility, hardware requirements in terms of realtime signal
transmission, the encryption system,
information-technology (IT) department demands and so on.
1. Using a floor plan, it is possible, in
principle, to assess the radio frequency
(RF) coverage in the area with the aid
of planning programs. However, practical experience shows that the effort
to emulate the complete structure of
a CPI plant is too high. Experience is
the key to success. Meanwhile, users
deploying a new network also have to
know exactly which wireless systems
are already in use in the same place
and in neighboring areas. The location
and selection of the antennas can then
be established.
2. In the next step, the deployment
plan should be verified by a so-called onsite survey. This is a live onsite inspection of the area to check
the values previously determined on
the computer in the real environment,
using a portable access point. In this
confirmation process, some additional
information can be gathered that
cannot be anticipated in a floor plan,
such as the effects of vehicles passing
through, or of mobile containers that
may have appeared in the way in unexpected places. The survey will also
allow users to realistically determine
the effective bandwidth in the central
and the outer areas of RF coverage.
3. Finally, the RF system can be installed, commissioned, and put through
a final test under real operating conditions to avoid unpleasant surprises.
While the many steps of this procedure might appear to drive expenses
up, they have indeed proven to be by
far the most reliable way to ensure
that a new wireless system really
works as expected and brings about
the desired process improvements.
Obviously, wireless communications
can be implemented using a variety
of different radio technologies. As is
so often the case, there is no one standard that meets all requirements.
[Figure 1. State-of-the-art Wireless LAN systems ensure secure data handling, unlike earlier versions, which were easier to crack]
Most users will therefore have to examine at least some of the following
options to assess whether they fit the
application at hand.
Wireless LAN
All radio technologies currently available on the market have specific advantages and disadvantages. It is
worth noting, however, that the most
widely used solutions have originated
in the office IT sector and were not
genuine developments for industrial
applications. Wireless LAN (local area
network) is the most prominent case
in point. In an industrial environment,
Wireless LAN (WLAN) is quite suitable for use with portable equipment,
such as barcode scanners or handheld
HMI devices. It provides the greatest bandwidth (for IEEE 802.11b, 11
Mbit/s, or IEEE 802.11g, 54 Mbit/s
gross data throughput) and is designed for the transmission of Ethernet-based protocols. It is important to
keep in mind, though, that most CPI
applications only require bandwidth
in the 100–500 kbit/s range.
In a WLAN network, an access client, such as a PDA (personal digital
assistant), can also roam from one
access point to another without any
interruption in transmission. This
means users carrying portable devices
can move freely around the site without losing their connections to the network. State-of-the-art WLAN systems
also ensure secure data handling, unlike earlier versions, which only relied
on Wired Equivalent Privacy (WEP), WLAN's original, outdated encryption method that was very easy to break by brute force.

[Figure 2. This WirelessHART gateway is suitable for Zone 2 areas]
GPRS on public GSM networks
The General Packet Radio Service
(GPRS) enables the transfer of data
packets on the public radio networks
that were originally built for cellphone voice communications, and
have since been enhanced for other
data transfer. GPRS is, for instance,
the basis for the popular Blackberry
technology. In CPI applications, the
service can be used for remote maintenance and remote monitoring functions in pumping stations, remote
tank farms, centrifuges, compressors
and other machines. Unlike WLAN,
GPRS operates on a licensed frequency
range, which means that less interference occurs in the radio connection
than in the frequency bands used by
most other established wireless-datacommunications standards. Since it is
based on the existing, fully developed
Global System for Mobile Communications (GSM) mobile networks, GPRS
requires no extra investments for a
purpose-built, self-run radio system.
On the contrary, GPRS connections
constitute communication routes that
are totally independent from a company’s own, existing IT infrastructure.
Also, the technology can be used for
additional services, such as alerting
responsible staff via text message or
Email in case of a malfunction.
Some restrictions and weak points
must be taken into account, however.
GPRS is, for instance, not yet universally available worldwide. In some
countries, such as Japan and Korea,
GPRS coverage will remain unavailable, since these countries’ mobile
radio networks do not use the GSM
standard. A more extensive coverage
and better bandwidth will only be
achieved when the followup technology
to GPRS, the so-called UMTS service,
becomes well established around the
world. Last but not least, with a net bandwidth of only 50 kbit/s, GPRS is also considerably slower than WLAN and other radio protocols.

Bluetooth
Bluetooth does not provide a bandwidth that can match WLAN network
performance, but recent systems do
achieve transmission rates of up to
2 Mbit/s. In addition, due to its synchronous communication modes, Bluetooth provides a very good basis for
realtime applications. One key Bluetooth feature is the frequency hopping spread spectrum (FHSS) scheme,
which makes this technology significantly less susceptible to interference
than WLAN. FHSS also provides some
additional protection against eavesdroppers. Bluetooth works well for
networks with up to eight users, while
greater numbers will require increased
technical efforts. Bluetooth radio consumes less power in operation than
WLAN. Due to its characteristics, it
is particularly suitable for integrating
fixed devices such as HMI stations or
sensors. Like WLAN, Bluetooth boasts
specifications that have been internationally agreed upon, which ensures
that devices from different manufacturers are fully, or at least to a great
extent, compatible with each other.
WirelessHART and ISA 100.11a
WirelessHART and ISA 100.11a are
standards dedicated to sensor networks in the CPI. Both promise to connect field devices
of various vendors onto one network.
The network structure could be point-to-point, star or — the most interesting way — meshed. The meshed
structure offers two advantages. First
of all, if a field device is installed out
of the range for a direct connection to
the gateway, it may use a neighboring field device as a repeater. This
method extends the communication
range. Secondly, the meshed structure
enables a self-healing of the network
in case of interruptions, which could
happen, for instance, due to delivery or
service trucks parking in front of a device. The first field trials have proven
that the technology and components
are ripe for industrial use.
The WirelessHART and ISA 100.11a committees are working together to find a way to merge the two standards or enable interoperability. This would remove one of the last obstacles to the success of wireless technology in the CPI.
Coexistence
While the industrial-scientific-medical
(ISM) frequencies (the radio bands at
2.4 GHz used by most common wireless solutions) are licence-free and
therefore help to reduce operating
costs, they do have the disadvantage
that they must be shared by different applications. The standardization
forums are aware of this fact and
have come forth with some adequate
approaches for resolving potentially
problematic side effects. For instance,
Bluetooth’s adaptive frequency hopping scheme enables an operation of
WLAN and Bluetooth networks at the
same time in the same environment.
WirelessHART and ISA 100 enable
a so-called blacklisting of channels that are occupied by other wireless
applications. Given thorough and sensible wireless network planning and
deployment, interference can be practically eliminated in most scenarios.
Besides the more or less established
standards just discussed, there are
numerous other proprietary protocols.
However, users will more often than
not be inconvenienced by them due to
incompatibilities between devices from
different vendors. Based on the existing standards for WLAN, Bluetooth, WirelessHART and ISA 100 technology, various committees and organizations in several countries have been working, and continue to work, to improve
standardization and provide users and
manufacturers with implementation
guidelines. Major protagonists include
the German VDI/VDE GMA working committee 5.21, the ZVEI, and a
Namur subcommittee. Contributions
in this field also come from organizations such as the ISA’s (Instrumentation, Systems and Automation Society
of America) SP100 committee and the
HCF’s (HART Communication Foundation) WirelessHART.
Hazardous areas
Radio devices emit electromagnetic radiation that is clearly a possible source
of ignition in an explosive atmosphere.
The main risk lies in the induction of
electrical currents in metallic objects
or electronic circuits that are inadequately protected from electromagnetic
interference (EMI). These currents
can result in excessively high temperatures and the formation of sparks.
Other dangers, such as direct ignition
of an explosive atmosphere, are much
less relevant. IEEE studies on electromagnetic radiation in hazardous
areas have shown that even RF with
power of 6 W can become a potential
hazard in terms of induction in metal
objects. Because of this danger, the IEC
60079-0 (2008) and the upcoming EN
60079-0 for continuous high frequency
sources limit the maximum permitted
transmitting power in wireless networks that are operated in hazardous
areas. The location of a wireless node
in Zone 0, 1 or 2 can be disregarded and
has no relevance for the limit, since an
RF signal will obviously not stop at the
boundary between two zones.
Safe emission levels
[Figure 3. An external antenna in wireless units, such as this access point, is currently required to attain an individual ATEX certification for use in hazardous areas]
The threshold is set to a value between 2 and 6 W emitted power, with
the lower end applying to atmospheres
with group IIC explosive gases, such
as hydrogen or acetylene. WLAN, Bluetooth, WirelessHART and ISA 100
all predominantly use the aforementioned ISM bands at 2.4 GHz, which
are restricted to low-power radio
transmissions anyway. More specifically, WLAN access points using this
band are limited by RF regulations to
no more than 100 mW. Fortunately,
Bluetooth, WirelessHART and ISA
100 transmissions typically require
only about 10 mW of transmitting power in the first place. At face value, all of these technologies therefore need, or can make do with, significantly less power than the maximum allowed by the standard.
However, the so-called antenna gain
must also be factored into the calculation, as the ignition risk is also defined
by the magnitude of the field strength.
Antenna gain is a parameter that describes the concentration of radio energy emitted in a specific direction.
Such directional gain increases as radio
emissions in other directions decrease
because the total energy emitted remains the same. Antenna gain is measured in relation to a specific reference.
If the gain value is stated in dBi units,
then it refers to an isotropic radiator,
or omnidirectional radiator (the theoretical model of an antenna that evenly
distributes energy in all directions
from a point source). Typical values for
rod antennas and directional antennas
are between 5 and 9 dBi. Users have to
take antenna gain into account when
they refer to the values given in the
tables in IEC 60079-0.
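Because those tables apply to radiated power, a quick check combines transmitter power and antenna gain. The Python sketch below is illustrative only; the 2-W and 6-W figures come from the discussion above, while the function names and example numbers are assumptions.

import math

IIC_LIMIT_W = 2.0      # lower threshold, group IIC gases (see above)
DEFAULT_LIMIT_W = 6.0  # upper threshold from the discussion above

def radiated_power_watts(tx_power_mw: float, antenna_gain_dbi: float) -> float:
    # Add the antenna gain to the transmitter power in dBm, then convert
    # the effective radiated power back to watts.
    eirp_dbm = 10 * math.log10(tx_power_mw) + antenna_gain_dbi
    return 10 ** (eirp_dbm / 10) / 1000.0

# A 100-mW WLAN access point with a 9-dBi antenna radiates about 0.79 W
# in its main lobe -- below the 2-W group IIC threshold, but no longer
# negligible next to it. A 10-mW WirelessHART-class node with a 5-dBi
# antenna radiates only about 0.03 W.
print(radiated_power_watts(100, 9))  # ~0.794
print(radiated_power_watts(10, 5))   # ~0.032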
Suitable Zone 1 device designs
With few exceptions, automation components and devices currently available on the market must not be used
in Zone 1* right out of the box. This restriction is largely a consequence of the
rapid pace of development for new devices, which are released in very short
intervals and are therefore often affected by incomplete standardization.
One possible solution to the problem
is to install such RF equipment, which lacks Zone 1 approval, in housings featuring a flameproof enclosure, using the Ex d type of protection or another suitable type. The majority of all Ex d enclosures are made of
metal, which shields electromagnetic
radiation from the antenna as a side
effect. Obviously, not just any antenna
can be installed inside a housing of
this type without additional measures.
In some cases, a housing with a glass
pane can be used in combination with
a directional antenna installed within.
However, tests have shown that only
antennas specially matched to a particular type of flameproof enclosure
will actually work well, since the signal loss is otherwise excessive.
Another possible option is the use of
external antennas. However, hazardous area requirements demand that
special explosion-protected antennas be installed in this case.
They usually have to be designed for
increased safety (Ex e) protection, because, in the event of a short circuit
between the power supply and the
output or input stage in the RF device,
no excessively high currents or voltages are allowed to coincide with the
explosive atmosphere without protection. Zone 1 GPRS modems typically
have GSM antennas connected via
an Ex i interface, and also feature an
intrinsically safe Ex i SIM card slot.
One way to do away with most limitations concerning the choice of antenna
would be antenna breakouts for devices in encapsulated housings that implement Ex ib (intrinsically safe) type protection, which would allow for communication via an intrinsically safe HF signal. Such solutions are
currently in development. Once they
actually become available, users will
finally have access to the full range of
standard antennas.
* For a one-page reference card on hazardous area classifications, see http://www.che.com/download/facts/CHE_0507_Facts.pdf
■
Edited by Rebekkah Marshall
Author
Stephan Schultz is senior
product manager automation, isolator and wireless
at R. Stahl (Am Bahnhof 30,
74638 Waldenburg, Germany;
Phone: +49 7942-943-4300;
Fax: +49 7942-943-404300;
Email: stephan.schultz@stahl.
de; Website: www.stahl.de).
Cover Story
Piping-System Leak Detection
and Monitoring for the CPI
Eliminating the potential for leaks is an integral part
of the design process that takes place at the very
onset of facility design
W. M. (Bill) Huitt
W.M. Huitt Co.
Leaks in a chemical process industries (CPI) facility can run
the gamut from creating a
costly waste to prefacing a catastrophic failure. They can be an
annoyance, by creating pools of liquid on concrete that can become a
possible slipping hazard and housekeeping problem, or a leak that can
emit toxic vapors, causing various
degrees of harm to personnel. In
some cases a leak may be a simple
housekeeping issue that goes into
the books as a footnote indicating
that a repair should be made when
resources are available. In other
cases it can become a violation of
regulatory compliance with statutory consequences, not to mention
a risk to personnel safety and the
possible loss of capital assets.
Understanding the mechanisms
by which leaks can occur and prioritizing piping systems to be checked
at specific intervals based on a few
simple factors is not only a pragmatic approach to the preventive
maintenance of piping systems, but
is part of a CPI’s regulatory compliance. This includes compliance
under both the U.S. Environmental Protection Agency (EPA) Clean
Air Act (CAA; 40CFR Parts 50 to
52) and the Resource Conservation
and Recovery Act (RCRA; 40CFR
Parts 260 to 299). We will get into more detail on these regulations, as well as on the leak detection and repair (LDAR) requirements within them, as we move through this discussion.
When discussing anything to do
with government regulations, the
terminology quickly turns into an
“alphabet soup” of acronyms. The
accompanying box lists, for easy reference, the titles and acronyms that
will be used in this discussion.
Leak mechanisms
Eliminating the potential for leaks
is an integral part of the design
process that takes place at the very
onset of facility design. It is woven
into the basic precept of the piping
codes because it is such an elemental and essential component in the
process of designing a safe and dependable piping system.
Piping systems, as referred to
here, include pipe, valves and other
inline components, as well as the
equipment needed to hold, move and
process chemicals. Why then, if we
comply with codes and standards,
and adhere to recommended industry practices, do we have to concern
ourselves with leaks? Quite pointedly, it is because much of what we
do in design is theoretical, such as
material selection for compatibility,
and because in reality, in-process
conditions and circumstances do
not always perform as expected.
Whether due to human error or
mechanical deficiencies, leaks are
a mechanism by which a contained
fluid finds a point of least resistance
and, given time and circumstances,
breaches its containment. What we
look into, somewhat briefly, are two
general means by which leaks can
occur; namely, corrosion and mechanical joint deficiencies.
Corrosion. Corrosion allowance
ACRONYMS
AVO = Audio/visual/olfactory
CAA = Clean Air Act
HAP = Hazardous air pollutants
HON = Hazardous organic NESHAP
LDAR = Leak detection and repair
LUST = Leaking underground storage tank
NEIC = National Enforcement Investigations Center
NESHAP = National Emission Standard for Hazardous Air Pollutants
NSPS = New Source Performance Standards
RCRA = Resource Conservation and Recovery Act
SOCMI = Synthetic organic chemical manufacturing industry
TSDF = Treatment, storage and disposal facilities
UST = Underground storage tank
VOC = Volatile organic compounds
(CA) is used as an applied factor
in calculating, among other things,
wall thickness in pipe and pressure
vessels. The CA value assigned to
a material is theoretical and predicated on four essential variables:
material compatibility with the
fluid, containment pressure, temperature of the fluid and velocity
of the fluid. What the determination of a CA provides, given those
variables, is a reasonable guess at
a uniform rate of corrosion. And
given that, an anticipated loss of
material can be assumed over the
theoretical lifecycle of a pipeline
or vessel. It allows a reasonable
amount of material to be added into
the equation, along with mechanical allowances and a mill tolerance
in performing wall thickness calculations. The problem is that beyond the design, engineering, and construction phase of building a facility, the in-service reality of corrosion can be very different.
Corrosion, in the majority of
cases, does not occur in a uniform
manner. It will most frequently
occur in localized areas in the form
of pits, as erosion at high-impingement areas, as corrosion under
insulation, at heat-affected zones
(HAZ) where welding was improperly performed, causing a localized
change to the mechanical or chemical properties of the material, and
in many other instances in which
unforeseen circumstances create
the potential for corrosion and the
opportunity for leaks in the pipe
itself or in a vessel wall. Because
of that incongruity, corrosion is an
anomaly that, in reality, cannot
wholly be predicted.
Corrosion-rate values found in
various published resources on the
topic of material compatibility are
based on static testing in which a
material coupon is typically set in
a vile containing a corrosive chemical. This can be done at varying
temperatures and in varying concentrations. After a period of time,
the coupon is pulled and the rate
of corrosion is assessed. That is a
simplification of the process, but
you get the point. When a material
of construction (MOC) and a potentially corrosive chemical come
together in operational conditions,
the theoretical foundation upon
which the material selection was
based becomes an ongoing realtime
assessment. This means that due
diligence needs to be paid to examining areas of particular concern,
depending on operating conditions,
such as circumferential pipe welds
for cracking, high-impingement
areas for abnormal loss of wall
thickness, hydrogen stress-corrosion cracking (HSCC), and others.
The LDAR program does not
specify the need to check anything
other than mechanical joints for potential leaks. Monitoring pipe and
vessel walls, particularly at welds
that come in contact with corrosive
chemicals, is a safety consideration
and practical economics. Performing cursory examinations for such
points of corrosion where the potential exists should be made part
of any quality assurance or quality
control (QA/QC) and preventive
maintenance program.
Mechanical joints and open-ended pipe. Mechanical joints
can include such joining methods
as flanges, unions, threaded joints,
valve bonnets, stem seals and clamp
assemblies. It can also include
pump, compressor and agitator
seals. Other potential points of transient emissions include open-ended
piping, such as drains, vents, and
the discharge pipe from a pressurerelief device. Any of these joints or
interfaces can be considered potential leak points and require both
monitoring and record-keeping documentation in compliance with the
EPA’s LDAR program.
Mechanical joints can leak due to
improper assembly, insufficient or
unequal load on all bolts, improperly selected gasket type, sufficient
pressure or temperature swings
that can cause bolts to exceed their
elastic range (diminishing their
compressive load on the joint), and
an improperly performed “hot-bolting” procedure in which in-service
bolts are replaced while the pipeline
remains in service. “Hot bolting” is
not a recommended procedure, but
is nonetheless done on occasion.
Pump, compressor and agitator seals can develop leaks where shaft misalignment plays a part. If the shaft is not installed within recommended tolerances, or if it becomes misaligned over time, there is a good possibility the seal will begin to fail.

The LDAR program
Promulgated in 1970 and amended
in 1977 and 1990, the Clean Air
Act requires that manufacturers producing or handling VOCs
develop and maintain an LDAR
program in accordance with the
requirements set forth under the
Clean Air Act. This program monitors and documents leaks of VOCs
in accordance with Method 21 —
Determination of Volatile Organic
Compound Leaks.
Table 1 provides a listing of key
elements that should be contained
in an LDAR program. Those elements are described as follows:

Table 1. Elements of a Model LDAR Program
• Written LDAR compliance
• Training
• LDAR audits
• Contractor accountability
• Internal leak definitions
• Less frequent monitoring
• First attempt at repair
• Delay of repair compliance assurance
• Electronic monitoring and storage of data
• QA/QC of LDAR data
• Calibration/calibration-drift assessment
• Records maintenance
Written LDAR compliance. Compile a written procedure declaring
and defining regulatory requirements that pertain to your specific
facility. This should include recordkeeping certifications; monitoring
and repair procedures; name, title,
and work description of each personnel assignment on the LDAR team;
required procedures for compiling
test data; and a listing of all process
units subject to federal, state and
local LDAR regulations.
Training. Assigned members of
the LDAR team should have some
experience base that includes work
performed in or around the types of
piping systems they will be testing
and monitoring under the LDAR
program. Their training should include familiarization with Method
21 and also training as to the correct procedure for how to examine
the various interface connections
they will be testing. They should
also receive training on the test
instrument they will be using and
how to enter the test data in the
proper manner. All of this needs to
be described in the procedure.
LDAR audits. An internal audit
team should be established to ensure that the program is being carried out on a routine basis in an efficient and comprehensive manner
in accordance with the written procedures. A third-party audit team is
brought in every few years to confirm that internal audits are being
carried out in the proper manner
and that all equipment that should
be included in the monitoring is
listed as such. It also ensures that
the tests are being carried out properly and that the test results are
entered properly.
Contractor accountability. When selecting an outside contractor to perform internal LDAR
audits for a facility or when bringing in an outside contractor to inspect the work of the internal audit
team, it is recommended that the
contract be written in a manner
that places appropriate responsibility on that contractor. In doing
so there should be penalties described and assessed as a result
of insufficient performance or inaccurate documentation of prescribed testing and documentation
procedures. Expectations should
be well defined and any deviation
from those prescribed norms by a
third-party contractor should constitute a breach of contract. In all
fairness, both parties must understand exactly what those expectations are.
Internal leak definitions. Internal leak definitions are the maximum parts per million, by volume
(ppmv) limits acceptable for valves,
connectors and seals, as defined by
the CAA regulation governing a facility. For example, a facility may be
required to set an internal leak-definition limit of 500 ppm for valves
and connectors in light liquid or gas/
vapor fluid service and 2,000 ppm
internal leak definition for pumps
in light liquid or gas/vapor fluid
service. “Light liquid” is defined
as a fluid whose vapor pressure is
greater than 0.044 psia at 68°F.
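Applied in software, an internal leak definition is just a per-component threshold test, as in the minimal Python sketch below; the component categories and ppm limits mirror the example values above and would, in practice, be taken from the governing regulation.

def exceeds_internal_leak_definition(component: str, reading_ppmv: float) -> bool:
    # Example limits from the text: 500 ppm for valves and connectors,
    # 2,000 ppm for pumps, in light-liquid or gas/vapor service.
    limits = {"valve": 500.0, "connector": 500.0, "pump": 2000.0}
    return reading_ppmv > limits[component]

print(exceeds_internal_leak_definition("valve", 620.0))   # True
print(exceeds_internal_leak_definition("pump", 1500.0))   # False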
Less frequent monitoring. Under
some regulations it is allowed that
a longer period between testing is
acceptable if a facility has consistently demonstrated good performance (as defined in the applicable
regulation). For example, if a facility has consistently demonstrated
good performance under monthly
testing, then the frequency of testing could be adjusted to a quarterly
test frequency.
First attempt at repair. Upon detection of a leak, most rules will require that a first attempt be made
to repair the leak within five days
of detection; if unsuccessful, any follow-up attempts need to be finalized
within 15 days. Should the repair
remain unsuccessful within the 15-day time period, the leak must be
placed on a “delay of repair” list and
a notation must be made for repair
or component replacement during
the next shutdown of which the
leaking component is a part.
Delay of repair compliance assurance. Placing a repair item on
the “delay of repair” list gives assurances that the item justifiably belongs on the list, that a plan exists
to repair the item, and that parts
are on hand to rectify the problem.
It is suggested that any item being
listed in the “delay of repair” list automatically generate a work order
to perform the repair.
Electronic monitoring and storage of data. Entering leak-test
data into an electronic database
system will help in retrieving such
data and in utilizing them in ways
that help provide reports highlighting areas of greater concern to areas
of lesser concern. Such information
can help direct attention and resources away from areas of least
concern, while mobilizing resources
to areas of greater concern. This enables a much more efficient use of
information and resources.
QA/QC of LDAR data. A well
written LDAR program will include
a QA/QC procedure defining the
process by which it is assured that
Method 21 is being adhered to, and
that testing is being carried out in
the proper manner and includes the
proper equipment and components.
This also includes the maintenance
of proper documentation.
Calibration/calibration-drift
assessment. LDAR monitoring
equipment should be calibrated in
accordance with Method 21. Calibration-drift assessment of LDAR
monitoring equipment should be
made at the end of each monitoring work shift using approximately
500 ppm of calibration gas. If, after
the initial calibration, drift assessment shows a negative drift of more
than 10% from the previous calibration, all components that were
tested since the last calibration
with a reading greater than 100
ppm should be re-tested. Re-test all
pumps that were tested since the
last calibration having a reading of
greater than 500 ppm.
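That re-test rule is straightforward to script against the shift's readings. The Python sketch below is illustrative; the tag names and data layout are assumptions, and the thresholds are the ones just quoted.

def drift_retest_list(drift_pct, component_readings):
    # If the end-of-shift check shows a negative drift of more than 10%,
    # re-test components reading > 100 ppm and pumps reading > 500 ppm.
    if drift_pct >= -10.0:
        return []
    retest = []
    for tag, kind, reading_ppm in component_readings:
        limit = 500.0 if kind == "pump" else 100.0
        if reading_ppm > limit:
            retest.append(tag)
    return retest

# A -12% drift forces re-tests of the 150-ppm flange and the 620-ppm pump,
# but not the 80-ppm valve or the 450-ppm pump.
print(drift_retest_list(-12.0, [("FLG-101", "flange", 150.0),
                                ("VLV-205", "valve", 80.0),
                                ("P-310", "pump", 620.0),
                                ("P-311", "pump", 450.0)]))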
Records maintenance. Internal
electronic record-keeping and reporting is an essential component to
a well-implemented LDAR program.
It is an indication to the NEIC that
every effort is being made to comply
with the regulations pertinent to a
facility. It provides ready access to
the personnel associated with the
program, the test data, leak repair
reports and so on.
Testing for leaks
Results, when using a leak detection monitor, are only as accurate
as its calibration and the manner in
which it is used. Calibration is discussed in the next section, “Method
21.” To use the monitor correctly, the
auditor will need to place the nozzle
or end of the probe as close as possible to the flange, threaded joint, or
seal interface as follows:
• In the case of a flange joint test: 180 deg around the perimeter of the flange joint at the interface
• In the case of a threaded joint test: 180 deg around the perimeter of the interface of the male/female fit-up
• If it is a coupling threaded at both ends, check both ends 180 deg around the perimeter
• If it is a threaded union, then check both ends and the body nut 180 deg around the perimeter
• In the case of a valve test:
  - 180 deg around the perimeter of all end connections if anything other than welded
  - 180 deg around the perimeter of the body flange
  - 180 deg around the perimeter of the body/bonnet interface
  - 180 deg around the perimeter of the stem packing at the stem
• In the case of a rotating-equipment shaft seal test: 180 deg around the perimeter of the interface of the seal and the shaft

[Figure 1. Progress is slowly being made to clean up leaking underground storage tanks under the RCRA program. The chart shows the national backlog (confirmed releases minus cleanups completed) by fiscal year, declining from 191,242 in 2002 to 87,983 in 2011.]
Method 21
Method 21, under 40 CFR Part 60,
Appendix A, provides rules with
respect to how VOCs are monitored and measured at potential
leak points in a facility. Those potential leak points include, but are
not limited to: valves, flanges and
other connections; pumps and compressors; pressure-relief devices;
process drains; open-ended valves;
pump and compressor seals; degassing vents; accumulator vessel
vents; agitator seals and access door
seals. It also describes the required
calibration process in setting up the
monitoring device. Essentially any
monitoring device may be used as
long as it meets the requirements
set forth in Method 21.
Cylinder gases used for calibrating a monitoring device need to be
certified to be within an accuracy
of 2% of their stated mixtures. It is
recommended that any certification
of this type be filed in either digital
form or at the very least as a hard
copy. There should also be a specified shelf life of the contents of the
cylinder. If the shelf life is exceeded,
the contents must be either re-analyzed or replaced.
Method 21 goes on to define how
to test flanges and other joints, as
well as pump and compressor seals
and various other joints and interfaces with the potential for leaks.
There are two gases required for
calibration. One is referred to as a
“zero gas,” defined as air with less
than 10 ppmv (parts per million
by volume) VOC. The other calibration gas, referred to as a “reference gas,” uses a specified reference
compound in an air mixture. The
concentration of the reference compound must approximately equal
the leak definition specified in the
regulation. The leak definition, as
mentioned above, is the threshold
standard pertinent to the governing regulation.
Monitoring devices
A portable VOC-monitoring device
will typically be equipped with a
rigid or flexible probe. The end of
the probe is placed at the leak interface of a joint, such as a flange,
threaded connection or coupling,
or at the interface of a pump, compressor, or agitator seal where it
interfaces with the shaft. With its
integral pump, the device, when
switched on, will draw in a continuous sample of gas from the leak-interface area into the monitoring
device. The instrument’s response
or screening value is a relative
measure of the sample’s concentration level. The screening value is
detected and displayed in parts per
million by volume or, if the instrument is capable and the degree of accuracy is needed, in parts per billion by volume (ppbv).
The detection devices operate on
a variety of detection principles.
The most common are ionization,
infrared absorption and combustion. Ionization detectors operate
by ionizing a sample and then measuring the charge (that is, number
of ions) produced.
Two methods of ionization currently used are flame ionization
and photoionization. The flame ionization detector (FID) theoretically
measures the total carbon content
of the organic vapor sampled. The
photoionization detector (PID)
uses ultraviolet light to ionize the
organic vapors. With both detectors, the response will vary with
the functional group in the organic
compounds. PIDs have been used to
detect equipment leaks in process
units in SOCMI facilities, particularly for compounds such as formaldehyde, aldehydes and other oxygenated chemicals that typically do
not provide a satisfactory response
on a FID-type unit.
Operation of the non-dispersive
infrared (NDIR) detector is based
on the principle that light absorption characteristics vary depending
on the type of gas. Because of this,
NDIR detection can be subject to
interference due in large measure
to such constituents as water vapor
and CO2, which may absorb light
at the same wavelength as the targeted compound. This type of detector is typically confined to the detection and measurement of a single component. For that reason, the instrument is preset, by means of optical filters, to the known wavelength at which the targeted compound absorbs infrared radiation. For example, an instrument set to a wavelength of 3.4 micrometers can detect and measure petroleum fractions, such as gasoline and naphtha.
The combustion-type analyzer is
designed to measure either thermal
conductivity of a gas or the heat produced as a result of combustion of the
gas. Referred to as hot-wire detectors
or catalytic oxidizers, combustion-type monitors are nonspecific for gas mixtures. If a gas is not readily combustible, as is the case for formaldehyde and carbon tetrachloride, there may be a reduced response or no response at all.
Due to the variability in the sensitivity of the different monitoring
devices, the screening value does
not necessarily indicate the actual
total concentration at the leak interface of the compound(s) being
detected. The leak interface is the
immediate vicinity of the joint
being tested — the point at which
the end of the probe is placed. Response factors (RFs), determined
for each compound by testing or taken from reference sources, then correlate the actual concentration of a compound to the concentration detected by the monitoring device. As mentioned previously, the monitoring device must first be calibrated using a certified reference gas containing a known compound at a known concentration, such as methane or isobutylene. RFs at an actual concentration of 10,000 ppmv have
been published by the EPA in a
document entitled “Response Factors of VOC Analyzers Calibrated
with Methane for Selected Organic
Chemicals.”
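As a worked illustration of how an RF is applied, the short sketch below converts a screening value into an estimated actual concentration, assuming the usual convention that RF equals actual concentration divided by instrument reading. The RF value and reading are hypothetical; in practice the RF comes from testing or from the EPA reference document cited above.

```python
# Minimal sketch: correcting a Method 21 screening value with a response factor.
# Convention assumed here: RF = actual concentration / instrument reading, so an
# RF greater than 1.0 means the detector under-responds to that compound.

def actual_concentration(screening_ppmv: float, response_factor: float) -> float:
    """Estimate the true VOC concentration at the leak interface."""
    return screening_ppmv * response_factor

reading = 4_200.0   # instrument screening value, ppmv (hypothetical)
rf = 2.3            # hypothetical RF for the compound of interest
print(f"Estimated actual concentration: {actual_concentration(reading, rf):,.0f} ppmv")
```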
Method 21 requires that any selected detector meet the following
specifications:
• The VOC detector should respond
to those organic compounds being
processed (determined by the RF)
• Both the linear response range
and the measurable range of the
instrument for the VOC to be
measured and the calibration gas
must encompass the leak definition concentration specified in the
regulation
• The scale of the analyzer meter
must be readable to ±2.5% of
the specified leak definition
concentration
• The analyzer must be equipped
with an electrically driven pump
so that a continuous sample is
provided at a nominal flowrate of
between 0.1 and 3.0 L/min
• The analyzer must be intrinsically safe for operation in explosive atmospheres
• The analyzer must be equipped
with a probe or probe extension
not to exceed 0.25 in. outside diameter with a single end opening
for sampling
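These criteria lend themselves to a simple pre-survey checklist. The sketch below encodes the numeric limits from the list above; the Analyzer fields and the interpretation of "encompass the leak definition" as an upper-range check are illustrative assumptions, not a vendor API.

```python
# Sketch of a pre-survey check against the Method 21 instrument criteria
# listed above. Thresholds come from the text; the data structure is assumed.
from dataclasses import dataclass

@dataclass
class Analyzer:
    responds_to_target_vocs: bool   # verified via response factors
    linear_range_ppmv: float        # upper end of linear response range
    meter_readability_ppmv: float   # smallest readable scale division
    pump_flow_lpm: float            # continuous sample flowrate, L/min
    intrinsically_safe: bool
    probe_od_in: float              # probe outside diameter, inches

def meets_method_21(a: Analyzer, leak_definition_ppmv: float) -> bool:
    return (
        a.responds_to_target_vocs
        and a.linear_range_ppmv >= leak_definition_ppmv          # range encompasses leak definition
        and a.meter_readability_ppmv <= 0.025 * leak_definition_ppmv  # readable to +/-2.5%
        and 0.1 <= a.pump_flow_lpm <= 3.0
        and a.intrinsically_safe
        and a.probe_od_in <= 0.25
    )

fid = Analyzer(True, 50_000.0, 100.0, 1.0, True, 0.25)
print(meets_method_21(fid, leak_definition_ppmv=10_000.0))  # True
```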
Table 2 – Federal Regulations That Require a Formal LDAR Program With Method 21
40 CFR Part | Subpart | Regulation Title
60 | VV | SOCMI VOC Equipment Leaks NSPS
60 | DDD | Volatile Organic Compound (VOC) Emissions from the Polymer Manufacturing Industry
60 | GGG | Petroleum Refinery VOC Equipment Leaks NSPS
60 | KKK | Onshore Natural Gas Processing Plant VOC Equipment Leaks NSPS
61 | J | National Emission Standard for Equipment Leaks (Fugitive Emission Sources) of Benzene
61 | V | Equipment Leaks NESHAP
63 | H | Organic HAP Equipment Leak NESHAP (HON)
63 | I | Organic HAP Equipment Leak NESHAP for Certain Processes
63 | J | Polyvinyl Chloride and Copolymers Production NESHAP
63 | R | Gasoline Distribution Facilities (Bulk Gasoline Terminals and Pipeline Breakout Stations)
63 | CC | Hazardous Air Pollutants from Petroleum Refineries
63 | DD | Hazardous Air Pollutants from Off-Site Waste and Recovery Operations
63 | SS | Closed Vent Systems, Control Devices, Recovery Devices and Routing to a Fuel Gas System or a Process
63 | TT | Equipment Leaks – Control Level 1
63 | UU | Equipment Leaks – Control Level 2
63 | YY | Hazardous Air Pollutants for Source Categories: Generic Maximum Achievable Control Technology Standards
63 | GGG | Pharmaceuticals Production
63 | III | Hazardous Air Pollutants from Flexible Polyurethane Foam Production
63 | MMM | Hazardous Air Pollutants for Pesticide Active Ingredient Production
63 | FFFF | Hazardous Air Pollutants: Miscellaneous Organic Chemical Manufacturing
63 | GGGGG | Hazardous Air Pollutants: Site Remediation
63 | HHHHH | Hazardous Air Pollutants: Miscellaneous Coating Manufacturing
65 | F | Consolidated Federal Air Rule — Equipment Leaks
264 | BB | Equipment Leaks for Hazardous Waste TSDFs
265 | BB | Equipment Leaks for Interim Status Hazardous Waste TSDFs
Federal regulations
There are federal regulations that
pertain to monitoring for VOCs
and require the implementation of
a formal LDAR program in concert
with the rules of Method 21. There
are other federal regulations that
require the rules of Method 21, but
do not require a formal LDAR program. Tables 2 and 3 list those various regulations.
It is the manufacturer’s responsibility to determine which regulations it must comply with. Those specific regulations, coupled with the Method 21 requirements, will define the LDAR
program and help establish a comprehensive and detailed procedure.
RCRA
The Solid Waste Disposal Act of
1965 was amended in 1976 to include the Resource Conservation
and Recovery Act (RCRA), which
encompassed the management of
both hazardous waste and solid
waste. Prompted further by ever-increasing concern about groundwater contamination, this act was
again amended in 1984 to address
underground storage tanks (USTs)
and associated underground piping
under Subtitle I. This Amendment
regulates the construction, monitoring, operating, reporting, recordkeeping, and financial responsibility for USTs and associated underground piping that handle petroleum and hazardous fluids.

Table 3 – Federal Regulations That Require the Use of Method 21 But Not a Formal LDAR Program
40 CFR Part | Subpart | Regulation Title
60 | XX | Bulk Gasoline Terminals
60 | QQQ | VOC Emissions from Petroleum Refinery Wastewater Systems
60 | WWW | Municipal Solid Waste Landfills
61 | F | Vinyl Chloride
61 | L | Benzene from Coke By-Products
61 | BB | Benzene Transfer
61 | FF | Benzene Waste Operations
63 | G | Organic Hazardous Air Pollutants from SOCMI for Process Vents, Storage Vessels, Transfer Operations, and Wastewater
63 | M | Perchloroethylene Standards for Dry Cleaning
63 | S | Hazardous Air Pollutants from the Pulp and Paper Industry
63 | Y | Marine Unloading Operations
63 | EE | Magnetic Tape Manufacturing Operations
63 | GG | Aerospace Manufacturing and Rework Facilities
63 | HH | Hazardous Air Pollutants from Oil and Gas Production Facilities
63 | OO | Tanks — Level 1
63 | PP | Containers
63 | QQ | Surface Impoundments
63 | VV | Oil/Water, Organic/Water Separators
63 | HHH | Hazardous Air Pollutants from Natural Gas Transmission and Storage
63 | JJJ | Hazardous Air Pollutant Emissions: Group IV Polymers and Resins
63 | VVV | Hazardous Air Pollutants: Publicly Owned Treatment Works
65 | G | CFAR — Closed Vent Systems
264 | AA | Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities — Process Vents
264 | CC | Owners and Operators of Hazardous Waste Treatment, Storage and Disposal Facilities — Tanks, Surface Impoundments, Containers
265 | AA | Interim Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities — Process Vents
265 | CC | Interim Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities — Tanks, Surface Impoundments, Containers
270 | B | Hazardous Waste Permit Program — Permit Application
270 | J | Hazardous Waste Permit Program — RCRA Standardized Permits for Storage Tanks and Treatment Units
As of 2011, there were 590,104
active tanks and 1,768,193 closed
tanks in existence in the U.S. Of the
still active tanks, 70.9% were in
significant operational compliance.
This means that they were using
the necessary equipment required
by current UST regulations to prevent and detect releases and were
performing the necessary UST system operation and maintenance.
In 1986, the Leaking Underground Storage Tank (LUST) Trust
Fund was added to the RCRA program. The trust financing comes
from a 0.1¢ tax on each gallon of
motor fuel (gasoline, diesel or biofuel blend) sold nationwide. The
LUST Trust Fund provides capital
to do the following:
• Oversee cleanups of petroleum releases by responsible parties
• Enforce cleanups by recalcitrant
parties
• Pay for cleanups at sites where
the owner or operator is unknown,
unwilling, or unable to respond,
or those that require emergency
action
• Conduct inspections and other release prevention activities
In Figure 1 the progress being
made by the program can readily
be seen. In 2002, RCRA was looking
at 142,709 LUST sites — sites that
were flagged for cleanup. Throughout the following nine years, 2002
through 2011, 54,726 of those sites
were cleaned, leaving 87,983 still
targeted for cleanup.
Within the RCRA program there
are requirements that impact design, fabrication, construction, location, monitoring and operation of
USTs and associated underground
piping. The EPA provides a number of internet resources containing a great deal of information on the various CFR Parts. 40 CFR
Part 260 contains all of the RCRA
regulations governing hazardous
waste identification, classification,
generation, management and disposal.
Listed wastes are divided into the
following group designations:
• The F group — non-specific source
wastes found under 40 CFR
261.31
• The K group — source-specific
wastes found under 40 CFR
261.32
• The P and U group — discarded
commercial chemical products
found under 40 CFR 261.33
Characteristic wastes exhibit one or more of four characteristics defined in 40 CFR Part 261, Subpart C, as follows:
• Ignitability, as described in 40
CFR 261.21
• Corrosivity, as described in 40
CFR 261.22
• Reactivity, as described in 40 CFR
261.23
• Toxicity, as described in 40 CFR
261.24
Table 4 provides a listing of additional CFR parts that further
define the regulations under the
Resource Conservation and Recovery Act.
Final remarks
I am fervently against overregulation and watch with keen interest
the unfolding debate occurring on
Capitol Hill over the amendment
to the Toxic Substances Control
Act (TSCA), for example. But the
improved safety, clean air, clean
water, and cost savings realized
from the CAA and RCRA programs
are four major returns on investment that come back to a manufacturer from the investment in a good
leak-detection program. Whether
monitoring and repairing leaks
above ground, in accordance with
the CAA, or below ground, in accordance with the RCRA, it is, simply
put, just good business. As alluded
to at the outset of this article, leaks
in hazardous-fluid-service piping systems have served, in many
cases, as an early-warning indicator
of something much worse to come.
At the very least, such leaks can
contribute to air pollution, groundwater contamination, lost product
revenue, housekeeping costs, and a
risk to personnel — a few things we
can all live without.
■
Edited by Gerald Ondrey
Author
W. M. (Bill) Huitt has been
involved in industrial piping design, engineering and
construction since 1965. Positions have included design engineer, piping design instructor, project engineer, project
supervisor, piping department supervisor, engineering
manager and president of W.
M. Huitt Co. (P.O. Box 31154,
St. Louis, MO 63131-0154;
Phone: 314-966-8919; Email: wmhuitt@aol.
com), a piping consulting firm founded in 1987.
His experience covers both the engineering and
construction fields and crosses industry lines
to include petroleum refining, chemical, petrochemical, pharmaceutical, pulp & paper, nuclear
power, biofuel and coal gasification. He has written numerous specifications, guidelines, papers,
and magazine articles on the topic of pipe design
and engineering. Huitt is a member of the International Society of Pharmaceutical Engineers
(ISPE), the Construction Specifications Institute
(CSI) and the American Society of Mechanical Engineers (ASME). He is a member of the
B31.3 committee, a member of three ASME-BPE
subcommittees and several task groups, ASME
Board on Conformity Assessment for BPE Certification where he serves as vice chair, a member
of the American Petroleum Institute (API) Task
Group for RP-2611, serves on two corporate specification review boards, and was on the Advisory
Board for ChemInnovations 2010 and 2011, a multi-industry conference and exposition.
Table 4 – Resource Conservation and Recovery Act (RCRA) Information
40 CFR Part | Regulation Title
260 | Hazardous Waste Management System: General
261 | Identification and Listing of Hazardous Waste
262 | Standards Applicable to Generators of Hazardous Waste
264 | Standards for Owners and Operators of Hazardous Waste Treatment, Storage and Disposal Facilities
265 | Interim Status Standards for Owners and Operators of Hazardous Waste Treatment, Storage and Disposal Facilities
266 | Standards for the Management of Specific Hazardous Wastes and Specific Types of Hazardous Waste Management Facilities
267 | Standards for Owners and Operators of Hazardous Waste Facilities Operating Under a Standardized Permit
270 | EPA Administered Permit Programs: The Hazardous Waste Permit Program
272 | Approved State Hazardous Waste Management Programs
273 | Standards for Universal Waste Management
279 | Standards for the Management of Used Oil
280 | Technical Standards and Corrective Action Requirements for Owners and Operators of Underground Storage Tanks (UST)
281 | Approval of State Underground Storage Tank Programs
282 | Approved Underground Storage Tank Programs
Environmental Manager
Monitoring Flame Hazards In Chemical Plants
The numerous flame sources in CPI facilities necessitate the installation of
advanced flame-detection technologies
Ardem Antabian
MSA — The Safety Company
Fire is a primary and very real
threat to people, equipment
and facilities in the chemical
process industries (CPI), especially in the refining and storage of
petrochemicals. Failing to detect flames, combustible-gas leaks or flammable chemical spills can have dire consequences, including loss of life and catastrophic plant damage.
The monitoring of flame hazards is
mandated by the U.S. Occupational
Safety and Health Administration
(OSHA; Washington, D.C.; www.
osha.gov) through its comprehensive Process Safety Management
(PSM) federal regulation. Internationally, the European Union (E.U.)
splits gas and flame safety responsibilities between E.U. directives and
European standards organizations,
including the European Committee
for Electrotechnical Standardization
(Cenelec; Brussels, Belgium; www.
cenelec.eu), the International Electrotechnical Commission (IEC; Geneva, Switzerland; www.iec.ch) and
several other bodies.
Many accidents are the result of either failing to implement these standards properly with suitable flame-detection equipment or failing to train employees to follow related safety procedures consistently. In either case, it is important to understand the many different sources of flame hazards, the detection sensor technologies that can warn of imminent danger and the proper location of flame detectors in today’s complex chemical plants.

FIGURE 1. Flame detectors can detect light emissions at specific wavelengths across the UV, visible and IR spectrum to distinguish between actual flames and false alarm sources

FIGURE 2. Flame detectors, such as those shown here, implement ultraviolet and infrared detection technologies
In the petrochemical plant environment, the range of potential flammable hazards is expansive and
growing as materials and processes
become more complex. These hazards have led to the development of
more sophisticated combustible-gas
and flame-sensing technologies with
embedded intelligence that can better detect the most common industrial fire sources, some of which are
listed in Table 1.
Principles of flame detection
Industrial process flame detectors
detect flames by optical methods,
including ultraviolet (UV) and infrared (IR) spectroscopy and visual
flame imaging. The source of flames
in CPI plants is typically fueled by
hydrocarbons, which when supplied with oxygen and an ignition
source, produce heat, carbon dioxide and other products of combustion. Intense flames emit visible, UV,
and IR radiation (Figure 1). Flame
detectors are designed to detect
the emission of light at specific
wavelengths, allowing them to discriminate between flames and false
alarm sources.
Flame-sensing technologies
The flame safety industry has developed four primary optical flame-sensing technologies: UV, UV/IR,
multi-spectrum infrared (MSIR),
and visual flame imaging (Figure 2).
These sensing technologies are all
based on line-of-sight detection of
radiation emitted by flames in the
UV, visible and IR spectral bands.

Table 1. Common Industrial Fire Sources: alcohols, diesel fuels, gasoline, kerosene, jet fuels, ethylene, hydrogen, liquefied natural gas (LNG), liquefied petroleum gas (LPG), paper, textiles, solvents, sulfur and wood
Process, safety and plant engineers must choose from among
these technologies to find the device that is best suited to their individual plant’s requirements for
flame monitoring by deciding upon
the importance of the detection
range, field of view, response time
and immunity against certain false
alarm sources.
Ultraviolet/infrared (UV/IR). By integrating a UV optical sensor with
an IR sensor, a dual-band flame detector is created that is sensitive to
the UV and IR radiation emitted by
a flame. The resulting UV/IR flame
detector offers increased immunity
over a UV-only detector, operates at
moderate speeds of response, and
is suited for both indoor and outdoor use.
Multispectral infrared (MSIR). Advanced MSIR flame detectors combine multiple IR detector arrays with
neural network intelligence (NNT).
They provide pattern-recognition capabilities that are based on training
to differentiate between real threats
and normal events, thus reducing
false alarms. MSIR technology allows
area coverage up to six times greater
than that of more conventional UV/IR
flame detectors.
NNT is based on the concept of artificial neural networks (ANNs), which
are mathematical models based on
the study of biological neural networks. A group of artificial neurons
in an ANN process information and
actually change structure during a
learning phase. This learning phase
allows ANNs to model complex relationships in the data delivered by
sensors in a quick search for patterns that results in pattern recognition (Figure 3).
Flame detectors with NNT operate similarly to the human brain;
they have thousands of pieces of
data stored in their memories from
hundreds of flame and non-flame
events observed in the past. These detectors have been trained through NNT intelligence to recognize flames based upon those data, and to determine whether they are real events or potential false alarm sources.

FIGURE 3. Many flame detectors employ technology based on artificial neural networks (ANNs) to more accurately analyze flames
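A minimal sketch of the idea behind Figure 3 follows: several sensor channels feed a small feed-forward network whose output is a flame-likelihood score. The weights below are random placeholders; a production detector ships with weights trained on recorded flame and non-flame events, as described above.

```python
# Toy illustration of Figure 3: four sensor channels -> hidden layer -> output.
# Weights are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # input layer -> hidden layer
b1 = np.zeros(6)
W2 = rng.normal(size=(6, 1))   # hidden layer -> output
b2 = np.zeros(1)

def flame_score(sensors: np.ndarray) -> float:
    """Forward pass: 4 sensor readings -> probability-like flame score."""
    hidden = np.tanh(sensors @ W1 + b1)
    return float(1.0 / (1.0 + np.exp(-(hidden @ W2 + b2))))  # sigmoid output

reading = np.array([0.8, 0.6, 0.7, 0.2])  # normalized channel intensities (made up)
print(f"Flame score: {flame_score(reading):.2f}")  # alarm if above a set threshold
```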
Visual flame-imaging detectors. The design of visual flame detectors relies on standard charge-coupled device (CCD)
image sensors, commonly used in
closed-circuit television cameras,
and flame-detection algorithms to
establish the presence of fires. The
imaging algorithms process the live
video image from the CCD array
and analyze the shape and progression of would-be fires to discriminate between flame and non-flame sources.
Visual flame detectors with CCD
arrays do not depend on emissions from carbon dioxide, water
and other products of combustion
to detect fires, nor are they influenced by fire’s radiant intensity. As
a result, they are commonly found
in installations where flame detectors are required to discriminate
between process fires and fires resulting from an accidental release
of combustible material.
Visual flame detectors, despite
their many advantages, cannot
detect flames that are invisible to
the naked eye, such as hydrogen
flames. Heavy smoke also impairs
the detector’s capacity to detect
fire, since visible radiation from the
fire is one of the technology’s fundamental parameters.
Flame detection requirements
When configuring a flame-detection
system and evaluating the available
technology alternatives, there are
many performance criteria that must
be considered. The following sections outline some of these important detector criteria.
False alarm immunity. False alarm
rejection is one of the most important considerations for the selection
of flame detectors. False alarms are
more than a nuisance — they are
both productivity and cost issues. It
is therefore essential that flame detectors discriminate between actual
flames and benign radiation sources,
such as sunlight, lighting fixtures, arc
welding, hot objects and other non-flame sources.
Detection range and response
time. A flame detector’s most
basic performance criteria are detection range and response time.
Depending on a specific plant-application environment, each of the
alternative flame-detection technologies recognizes a flame within
a certain distance and a distribution of response times. Typically,
the greater the distance and the
shorter the time that a given flamesensing technology requires to detect a flame, the more effective it is
at supplying early warning against
fires and detonations.
Field of view (FOV). Detection
range and FOV define area coverage per device. Like a wide-angle
lens, a flame detector with a large
field of view can take in a broader
scene, which may help reduce the
number of flame detectors required
for certain installations. Most of today’s flame detector models offer
fields of view of about 90 to 120
deg (Figure 4).
FIGURE 4. Field of view is an important factor to consider in the installation of flame-detection equipment. This diagram shows the distance at which a flame can be detected at various angles; for example, at 0 deg, a flame can be detected at 230 ft, while at a 50-deg angle, it can be detected at 50 ft

FIGURE 5. Three-dimensional mapping of a facility is useful in determining the most appropriate installation locations for flame detectors

Self diagnostics. To meet the highest reliability standards, continuous
optical-path monitoring (COPM) diagnostics are often built into optical flame detectors. The self-check
procedure is designed to ensure
that the optical path is clear, the
detectors are functioning, and additionally, that the electronic circuitry
is operational.
Self-check routines are programmed into the flame detector’s
control circuitry to activate about
once every minute. If the same fault
occurs twice in a row, then a fault is
indicated via a 0–20-mA output or
a digital communications protocol,
such as HART or Modbus.
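The two-strike logic described above can be sketched in a few lines. The run_self_test and signal_fault callbacks are hypothetical placeholders standing in for the detector's internal diagnostics and its 0–20-mA or digital fault output.

```python
# Sketch of the self-check scheme described above: test roughly once a minute
# and declare a fault only if the same failure occurs on two consecutive checks.
import time

def copm_watchdog(run_self_test, signal_fault, interval_s: float = 60.0):
    """run_self_test() returns None when healthy, or a fault code string.
    signal_fault(code) would drive the analog output or send a HART/Modbus
    message; both callbacks are placeholders for illustration."""
    last_fault = None
    while True:
        fault = run_self_test()
        if fault is not None and fault == last_fault:
            signal_fault(fault)          # same fault twice in a row -> indicate it
        last_fault = fault
        time.sleep(interval_s)
```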
SIL/SIS standards. When plant
safety engineers choose detectors
certified to safety integrity levels
(SIL) and properly integrate them
into safety-instrumented systems
(SIS), they have added yet another layer of safety. Certification
to these standards plays a valuable
role in effective industrial gas and
flame detection.
Normative standards establish
minimum requirements for the design, fabrication and performance of
flame detectors and other safety devices as necessary to maintain protection of personnel and property.
The ANSI/ISA S84.00.01 standard
was enacted to drive the classification of SIS for the process industries
within the U.S., as well as the norms
introduced by the IEC (IEC 61508
and IEC 61511).
Together, these standards have
introduced several specifications
that address safety and reliability
based on optimizing processes for
risk. The IEC 61508 standard is a
risk-based approach for determining the SIL of safety-instrumented
functions. Unlike other international
standards, IEC 61508 takes a holistic approach when quantifying
the safety performance of electrical control systems — the design
concept, the management of the
design process and the operations
and maintenance of the system
throughout its lifecycle are within
the scope.
Location and installation
A variety of processes and sources
within the plant environment can
lead to flame and fire incidents,
including leaking tanks, pipes,
valves, pumps and so on. Accurate detection while avoiding false
alarms is also important because
false alarms result in unnecessary
process or plant shutdowns, slowing production and requiring time-consuming reviews, paperwork
and reporting.
False alarms can, over time, provide a false sense of security, because employees can become complacent if alarms go off frequently for
no apparent reason and are continually ignored. The problem is that personnel alone cannot really determine
the difference between a false alarm
and a serious accident that is about
to happen.
Fixed flame- and gas-detector
systems are designed and installed
to protect large and complex areas
filled with process equipment from
the risks of flames, explosions and
toxic gases. For these systems to be
effective, their location and installation are important so that they offer
a high likelihood of detecting flame
and gas hazards within monitored
process areas.
Three-dimensional mapping. Determining the optimal quantity and
location of flame and gas detectors is therefore critical to ensure
the detection system’s effectiveness. Flame and gas three-dimensional mapping is a solution that
assists in the evaluation of flame
and gas risks within a process facility and also reduces these risks
toward an acceptable risk profile.
Flame and gas mapping includes
the placement of detectors in appropriate locations to achieve the
best possible detection coverage
(Figure 5).
The use of three-dimensional
flame and gas mapping helps
plant, process and safety engineers in a number of ways. First,
mapping helps to increase plant
safety by improving the likelihood
of detecting flame and gas hazards. Also, it allows facilities to
quantify their risk of a fire or a gas
leak, and then assess the overall
effectiveness of their flame- and
gas-detection coverage. For new
installations, mapping can help improve the design of new fire and
gas systems to mitigate risks from
accidental gas releases or fires.
For existing installations, mapping
provides a method for assessing
the risk-reduction performance
of existing fire- and gas-detector
systems and recommends ways to
improve coverage.
Mapping assists facilities in understanding their risk of a fire or
a gas leak, and then allows them
to optimize their flame- and gas-detection protection layout by
recommending the appropriate
detector technologies, detector
locations and quantities. Mapping
also equips the engineer with the
means to measure detection improvements when small incremental design changes are made. Mapping can therefore help to minimize
overall system costs.
With mapping, determining detector layouts becomes much simpler,
because mapping provides a methodical and systematic approach
for determining the areas with the
highest likelihood of flame and gas
risks. Understanding the locations
and likelihood of risks will help remove guesswork and uncertainties
from engineering.
Once the optimal locations are
determined for the placement of
the flame detectors, then installation depends on the type of flame
detector chosen. Most optical-type
flame detectors are placed high and
are pointed downward either inside
or outside buildings or structures to
monitor tanks and pipelines running
throughout the plant.
Wrapping up
In order to protect chemical processes and plants from flame hazards, it is important to understand
the basic detection sensor technologies and their limitations. Defining
the type of potential hazard fuels,
the minimum fire size to be detected
and the configuration of the space
to be monitored through three-dimensional hazard mapping can influence the choice of instrument.
When reviewing a plant’s flame-safety protection level, be sure to
ask for assistance from any of the
flame detection equipment manufacturers. They have seen hundreds, if not thousands, of plants
and their unique layouts, which
makes them experts in helping to
identify potential hazards and the
best way to prevent accidents.
Remember, too, that no single
flame-detection sensing technology is right for every potential plant
layout and hazard. For this reason,
adding multiple layers of flame- and
gas-detection technology provides
a multi-sensory approach that increases detection reliability and also
can prevent false alarms.
■
Edited by M. Bailey and D. Lozowski
Author
Ardem Antabian is currently the
OGP (Oil & Gas Products) segment manager at MSA — The
Safety Company (26776 Simpatica Circle, Lake Forest, CA
92630; Email: Ardem.Antabian@
msasafety.com; Phone: 949-268-9523; Website: www.
msasafety.com). Antabian joined
the company in 1999, and has
held various positions, including global assignments in
Dubai, U.A.E. and Berlin, Germany. He also helped develop the company’s advanced-point and open-path
infrared gas detectors, as well as its multi-spectral
infrared flame detector. Antabian holds dual B.S. degrees in chemical engineering and chemistry from
California State University, Long Beach.
Feature Report
Part 2
Integrated Risk-Management Matrices
An overview of the tools available to reliability professionals for making their
organizations best-in-class
In Brief
Reliability, historically
Reliability, today
Risk-mitigation approaches
How do we measure risk?
KPIs and risk
Nathanael Ince
PinnacleART
Since the 1960s, process facility operators have made concerted efforts to improve the overall reliability
and availability of their plants. From
reliability theory to practical advancements
in non-destructive examination and condition-monitoring techniques, the industry has
significantly evolved and left key operations
personnel with more tools at their disposal
than ever before. However, this deeper arsenal of tools, coupled with more stringent
regulatory scrutiny and internal business
pressure, introduces a heightened expectation of performance. Now, more than ever,
companies recognize that best-in-class reliability programs not only save lives but increase the bottom line. These programs are
also one of the foremost “levers” for C-level
personnel to pull when trying to contend in a
competitive environment.
With this in mind, a best-in-class reliability
organization combines state-of-the-art theory,
software and condition-monitoring techniques
with a strong collaboration of departments
and associated personnel. An independent
risk-based inspection (RBI) program or reliability-centered maintenance (RCM) program
no longer suffices as cutting-edge. Rather, the
inspection department (power users of RBI)
and maintenance department (power users
of RCM) are integrating with process, operations, capital projects and other teams to form
an overall reliability work process for the success of the plant.
To highlight reliability’s growing prominence
within process facilities, this article addresses
the following:
• A brief history of reliability practices in the
20th and 21st centuries
• Examples of current reliability program tools
• A characterization of three different
risk-mitigation applications that
are currently applied in process
facilities
• The case for ensuring these risk
mitigation frameworks are working
together
• The value of key performance
indicators (KPIs) in providing
transparency and accountability
to the effectiveness of these risk
mitigation frameworks
Reliability, historically
When one thinks about process reliability, a variety of definitions come
to mind. However, it has come a
long way since the early 20th century. From the 1920s to the 1950s,
reliability went from being classified
as “repeatability” (how many times
could the same results repeat) to dependability (hours of flight time for an
engine), to a specific, repeatable result expected for a duration of time.
Through the 1950s age of industrialization, reliability’s evolving definition
was still very much focused on design and not as much on operations
or maintenance. Then in the 1960s,
the airline industry introduced the
concept of reliability-centered maintenance (RCM), pushing the idea that
the overall reliability of a system included not only the design, but also
the operations and maintenance of
that system. In other words, reliability
engineering was now stretching into
other departments, mandating that
the overall risk of failure was tied to
multiple aspects of the asset’s lifecycle. As a result, several different departments and individuals cooperated
to ensure they attained reliability.
The concept of RCM pushed
through some industries quicker
than others. While it started with the
airlines, it flowed quickly into power
generation, petrochemical and petroleum-refining operations thereafter.
Fast-forward to 1992, and another
facet, called process-safety management (PSM), was introduced into the reliability picture. In response to a growing
perception of risk related to hazardous
processes, the Occupational Safety
and Health Administration (OSHA) issued the Process Safety Standard,
OSHA 1910.119, which includes the
following 14 required elements:
• Process-safety information
• Process hazard analysis
• Operating procedures
• Training
• Contractors
• Mechanical integrity
• Hot work
• Management of change
• Incident investigation
• Compliance audits
• Trade secrets
• Employee participation
• Pre-startup safety review
• Emergency planning & response
The intent of the regulation was to
limit the overall risk related to dangerous processes, and “raise the bar”
for compliance expectation for facilities with these “covered” processes.
At that point, it became law to fulfill
these 14 elements, and to ignore
them, or to show negligence to these
steps in the event of a release, implied the possibility of criminal activity.
In other words, if those responsible
in the event of a release were found
to be negligent in these items, they
could go to jail. The other business
implication of this standard was that it
meant that other individuals, and departments, now had a part to play in
reliability and overall process safety.
While reliability was confined to
designing equipment that could last
a certain time and coupling it with a
non-certified inspector to make general observations in the 1950s, by the
mid-1990s, reliability had become a
much more complex, integrated and
accelerated science.
Reliability today
With the greater expectation on today’s programs, department managers (including reliability, mechanical-integrity or maintenance managers)
face a powerful, but often intimidating array of tools available to them for
improving their reliability programs.
Examples are listed in Table 1.

Table 1. Example Mechanical-Integrity and Maintenance-Program Improvements
Mechanical integrity improvements | Maintenance/reliability improvements
Assessments and audits | Assessments and audits
Damage/corrosion modeling | Preventive and predictive maintenance
Risk-based inspection | Equipment hierarchies and data cleanup
Inspection data management and trending | Operator-driven reliability (rounds)
Piping circuitization | Mobile platforms
Integrity operating windows | Reliability operating windows
Corrosion monitoring locations (CML) and thickness management locations (TML) optimization | Maintenance data/order management (computerized maintenance-management system; CMMS)
Asset retirement calculation | Spare parts optimization
Corrosion under insulation (CUI) program | Reliability-centered maintenance
Utilizing advanced non-destructive evaluation | Reliability-centered design
Continuous condition monitoring | Repair procedures
While this only represents a subset of the options available to the
manager, all of these activities aim at
doing the following:
1. Reducing the risk of unplanned
downtime.
2. Limiting safety and environmental
risk.
3. Ensuring compliance with regulatory standards.
4. Doing steps one through three for
the least cost possible.
To summarize, the goal of these
managers is to put in place and execute a plan that identifies and
mitigates risks as efficiently as possible. To do that, one has to systematically identify those risks in addition
to the level to which those risks must
be mitigated. If this is done correctly,
the design, inspections, preventative
maintenance, operational strategies,
and other program facets should
all be aligned in attaining steps one
through four.
Risk-mitigation approaches
Since the 1960s, there have been
substantial effort put into figuring out how best to characterize both
downtime and loss-of-containment
risk in a facility so that appropriate
and targeted mitigation actions can
be taken at the right time. That being
said, there are three common risk
identification and mitigation frameworks that are currently being used
in process facilities today. These include process hazard analysis (PHA),
risk-based inspection (RBI), and reliability-centered maintenance (RCM).
Let’s briefly characterize each.

Figure 1. This graphical risk matrix shows the areas covered by process hazard analysis (PHA/HAZOP/QRA), risk-based inspection (RBI) and reliability-centered maintenance (RCM), with likelihood of failure on the horizontal axis and consequence of failure on the vertical axis
PHA. The PHA came out of OSHA’s
PSM standard and is one of the 14
elements listed above. Every five
years, subject matter experts come
together for a couple of weeks and
identify the major events that could
happen at different “nodes” in a unit.
The general idea is to use guidewords to systematically focus the
team on the identification of process
deviations that can lead to undesirable consequences, the risk ranking
of those deviations, and the assignment of actions to either lower the
probability of those failures or the
consequence if the failures do occur.
A PHA does not identify maintenance strategies or detailed corrosion-mitigation or -identification strategies; it focuses on safety rather than unit
reliability. In the end, the major deliverable is a set of actions that have to
be closed out to ensure compliance
with the PSM standard. Typically, this
process is owned and facilitated by
the PSM manager or department.
RBI. RBI arose from an industry
study in the 1990s that produced
API (American Petroleum Institute)
580 and 581, which describe a systematic risk identification and mitigation framework that focuses only on
loss of containment. For this reason,
when an equipment item or piping segment (typically called “piping
circuit”) is evaluated, the only failure
that is of concern to the facility is the
breach of the pressure boundary.
As an example, the only failure
mode evaluated on a pump would
typically be a leak in the casing or
the seal. The consequence of those
losses can be business, safety or
environmental, and while a variety
of software packages and spreadsheets can be used to accomplish
the exercise, the deliverable is an RBI
plan targeting the mitigation of lossof-containment events.
In addition, a best-in-class RBI
program will not just be a systematic
re-evaluation of that plan every five
or ten years, but an ongoing management strategy that updates the
framework whenever the risk factors change. Therefore, if an equipment item’s material of construction was
changed, insulation was added to
an asset, or a piece of equipment
was moved to a different location,
a re-evaluation of the asset loss-ofcontainment risk and an associated
update of the RBI plan would be appropriate. Typically, this process is
owned and facilitated by the inspection or mechanical integrity manager
or department.
RCM. As mentioned earlier, RCM
was spawned out of the aviation industry, but the focus was to identify
a proactive maintenance strategy
that would ensure reliability and that
performance goals were met. While
this has been loosely codified in SAE
(Society of Automotive Engineers)
JA1011, there are a variety of methods and approaches and therefore
RCM isn’t as controlled as RBI.
However, much like RBI, the RCM
study itself aims at identifying the different failure modes of an asset, the
effects of those failure modes, and the
probabilities of those failure modes
occurring at any given time. Once
the potential failure causes are identified, strategies are recommended
that mitigate the failure mode to acceptable levels. Unlike RBI, RCM
accounts for all failure modes relating to loss of function, including loss
of containment (although it typically
outsources this exercise to the RBI
study), and the end deliverable is a
set of predictive maintenance, preventative maintenance, and operator
activities that lower loss-of-function
risks to acceptable levels. Typically,
this process is owned and facilitated
by the maintenance or reliability manager or department.
How do we measure risk?
While it’s not uncommon for a single
facility to run PHA, RBI and RCM at
once, it raises the question: which one
is right? To find the answer, let’s briefly
discuss risk matrices. A risk matrix is a
tool that allows one to associate individual assets, failure modes or situations with specific levels of risk. There is
both a probability of an asset failing and
a consequence of an asset failing, and
each is represented by one axis on the
matrix. The multiplication of both probability of failure and consequence of
failure (represented by the actual location of the asset on the matrix) equals
risk. What’s interesting is that many
facilities that are utilizing multiple-risk
frameworks in their facility are utilizing
multiple-risk matrices. This again raises the question: which one is right?
Figure 1 is a risk matrix that is much
larger than the typical 4 × 4 or 5 × 5
risk matrix, but it shows each of the
previously discussed risk frameworks
on one larger matrix. The probability
of failure is on the horizontal axis, and
the consequence of failure is shown
on the vertical axis.
As shown, the frameworks reveal
the following characterization for
each of the three covered risk mitigation frameworks:
• PHA — High consequence of
failure events but lower probability
that they will happen (an example
would be an overpressure on a
column with insufficient relief-system capacity)
• RBI — Medium consequence of
failure events (loss of containment)
and a medium probability that they
will happen (an example would be
a two-inch diameter leak of a flammable fluid from a drum)
• RCM — Low consequence of
failure events (loss of function) but
a higher probability that they will
happen (an example would be a
rotor failure on a pump)
While these frameworks generally operate in different areas on the
matrix, they are still standardized to a
consistent amount of risk. The need to
include all three risk-management tools
into one standard matrix is twofold:
1. Making sure the data, calculations and actions coming from one
study are properly informing the
other studies.
2. Ensuring that the actions being
produced by each framework are
being prioritized appropriately, as
determined by their risk.
Making sure each of the three frameworks communicates with the others is a step that is commonly omitted in facilities and programs. Many
times, facilities spend millions of dollars building out and managing these
frameworks, but there is often overlap
between them and data gathered for
one framework could be utilized for
another framework. As an example,
an inspection department representative should be present to ensure the
RBI study is aiding the PHA effort.
In addition, prioritizing risk between
each framework is another challenge. A plant manager is not wholly
concerned about each individual risk
framework but rather a prioritized list
of actions with those action’s projected return-on-investment (whether
it is reduction of risk, a reduction of
cost, or a reduction of compliance
fines). The objective of the integrated
and organization-wide risk mitigation
system should be that all possible
failures must be identified, assessed,
properly mitigated (whether through
design, maintenance, inspection, or
operations) and monitored in order
of priority with an expected amount
of return. If a consistent risk matrix is
used effectively, this can inform single
asset or system decisions and continue to ensure reliability value is being
driven consistently across the facility.
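As a minimal sketch of a single, consistent matrix, the function below bins the product of a probability-of-failure score and a consequence-of-failure score into risk bands. The 5 x 5 scale, the band cutoffs and the example scores are illustrative assumptions, not values taken from any standard.

```python
# Sketch of one shared risk matrix for PHA, RBI and RCM:
# risk = probability-of-failure score x consequence-of-failure score.

def risk_band(pof: int, cof: int) -> str:
    """pof and cof run from 1 (lowest) to 5 (highest); cutoffs are assumed."""
    score = pof * cof
    if score >= 20:
        return "Extreme"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Same scale, three frameworks: all three land in a comparable band.
print(risk_band(pof=2, cof=5))  # PHA territory: rare but severe -> Medium
print(risk_band(pof=3, cof=3))  # RBI territory: loss of containment -> Medium
print(risk_band(pof=5, cof=2))  # RCM territory: loss of function -> Medium
```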
KPIs and risk
A good set of key performance indicators (KPIs) is needed as well
to help identify root causes and
guide programmatic decisions.
Once systematic risk management,
production loss, and enterprise-resource-planning (ERP) systems are properly set up, roll-up KPIs
can be reported regularly that reveal the overall trending of the reliability program and drive specific
initiatives with targeted results (risk
reduction, cost reduction or compliance satisfaction).
For example, at any point in time,
the plant (or group of plants) could
see the total risk of loss-of-containment and loss-of-function events across its units and assets, the total planned and unplanned downtime across the plant with associated causes, and the total cost associated with running those programs, broken out by activity, area and other
helpful specifics. When one or many
of those roll-up KPIs reveal concerns,
sub KPIs should be accessible to explore the root cause of those risks,
downtime or costs. It’s from this KPI
drill-down, empowered by synthesized risk frameworks, that targeted
initiatives and actions can be driven.
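The roll-up and drill-down idea can be sketched simply: sum risk by unit for the top-level KPI, then break any flagged unit down by failure type. The records and the threshold below are made-up illustrations, not data from any facility.

```python
# Sketch of a roll-up KPI with drill-down into sub-KPIs by failure type.
from collections import defaultdict

assets = [  # illustrative records only
    {"unit": "Crude", "failure": "loss of containment", "risk": 120.0},
    {"unit": "Crude", "failure": "loss of function", "risk": 45.0},
    {"unit": "FCC", "failure": "loss of containment", "risk": 310.0},
    {"unit": "FCC", "failure": "loss of function", "risk": 95.0},
]

rollup = defaultdict(float)
for a in assets:
    rollup[a["unit"]] += a["risk"]          # top-level KPI: total risk per unit

for unit, total in sorted(rollup.items(), key=lambda kv: -kv[1]):
    print(unit, total)
    if total > 200.0:                        # assumed KPI threshold triggers drill-down
        for a in assets:
            if a["unit"] == unit:
                print("  ", a["failure"], a["risk"])
```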
Summary
Reliability programs have come a
long way in 100 years, and reliability
professionals have more tools than
ever at their disposal to increase
overall plant availability and process
safety. To drive systematic improvements in plant reliability with all these
different tools, it is essential for facilities to get the data-management
strategy right, to synthesize their
approach to measuring, reporting
and mitigating risk, and to roll it up in
a KPI framework that combines risk,
cost and compliance reports.
■
Edited by Gerald Ondrey
Author
Nathanael Ince is client solutions
director, supporting the Solutions
Department of Pinnacle Advanced
Reliability Technologies (One Pinnacle Way, Pasadena, TX 77504;
Phone: +1-281-598-1330; Email:
nathanael.ince@pinnacleart.com).
In this capacity, he works closely
with his team of solutions engineers to ensure the department is
building and implementing the best asset integrity and
reliability programs for PinnacleART’s clients. With more
than eight years on the PinnacleART team, Ince is an
expert source on mechanical integrity, including proper
assessment and implementation of risk-based mechanical-integrity programs. Ince has a B.S.M.E. degree from
Texas A&M University.
Environmental Manager
Process Safety and Functional Safety in
Support of Asset Productivity and Integrity
Approaches to plant safety continue to evolve based on lessons learned, as well as
new automation standards and technology
Luis Durán
ABB Process Automation Control Technologies
In the chemical process industries
(CPI), one incident can have a tremendous impact on the people
in the plant, the communities
around it, the environment and the
production asset.
This article outlines how learning from past incidents continues
to drive the development of both
newer standards, as well as new approaches to process automation as
it relates to plant safety and security.
Learning from incidents
Today, there is a lot of information available about process incidents and industrial accidents from
sources such as the Chemical Safety
Board (www.csb.gov), Industrial
Safety and Security Source (www.
isssource.com) or Anatomy of an Incident (www.anatomyofanincident.
com). Regardless of the source, and
considering the amount of public
discussion that takes place, particularly following the very large and visible industrial incidents, it’s important
to learn from them and to seek opportunities to improve and
prevent these incidents from happening again (Figure 1).
The impact of incidents and accidents on people, the environment and
plant assets is significant. According
to a Marsh LLC (www.marsh.com)
publication [1], there is evidence that
the petrochemical sector suffered a
terrible period in terms of accidents
between 1987 and 1991 (Figure 2).
The losses (property damage of the
production assets, liabilities and so
on) recorded in that period were
about ten times worse than previous
periods (1976–1986) and about 3.5
times worse than following periods
(1992–2011).

Figure 1. Safety should always be a priority at a process plant
On the positive side, the Marsh report shows that there has been improvement in the sector after 1992.
This improved safety can be attributed, in part, to the introduction of
the process safety management
(PSM) standards.
Taking a closer look, it is evident that the significant loss for the
1987 –1991 period was dominated
by three explosion events, two of
which were vapor-cloud explosions
and account for 70% of the total
losses for this timeframe. The key
takeaway from this is that a single
incident can have a tremendous
impact on the people in the plant,
the communities around it, the environment and, last but not least, the
production asset.
In 1992, the U.S. Occupational
Safety and Health Administration
(OSHA; www.osha.gov) — the
agency tasked with safety of personnel — issued the Process Safety
Management of Highly Hazardous Chemicals standard (29 CFR
1910.119). This regulation set a requirement for hazard management
and established a comprehensive
PSM program that integrates technology, procedures and management practices. The introduction
of this standard may be credited
with improving process safety performance in U.S. hydrocarbon processing facilities.
Defining safety
In industry, safety is defined as a reduction of existing risk to a tolerable
or manageable level; understanding
risk as the probability of occurrence
for that harmful incident and the
magnitude of the potential harm. In
many cases, safety is not the elimination of risk, which could be impractical or unfeasible.

Figure 2. The 1987–1992 period was exceptionally bad for the petrochemical sector due to a few major accidents (Source: Marsh LLC [1])
Although the CPI must accept
some degree of risk, that risk needs
to be managed to an acceptable
level; which in turn makes safety a
societal term as well as an engineering term. Society establishes what
is commonly accepted as safe and
engineers have to manage risk by
introducing risk-reduction methods
including human elements, such
as company culture and work processes and technologies that make
the production facilities an acceptable place to work and a responsible
neighbor in our communities.
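To make this definition concrete, a small worked example follows: given an assumed unmitigated event frequency and a tolerable target frequency, the required risk reduction is simply their ratio. All numbers are illustrative, not values from any standard or regulation.

```python
# Worked illustration of managing risk down to a tolerable level.
# Both frequencies are made-up values for the sake of the arithmetic.
unmitigated_frequency = 1e-2  # harmful events per year, before protection layers
tolerable_frequency = 1e-5    # target accepted by society/the company

required_risk_reduction = unmitigated_frequency / tolerable_frequency
print(f"Required risk-reduction factor: {required_risk_reduction:,.0f}")  # 1,000
```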
The CPI has applied learnings
from numerous events over the last
40 years. These incidents and accidents have resulted in changes to
regulations and legislation and have
driven the adoption of best practices
that address the known factors at
the root of those events.
A lot of the best practices are related to understanding and evaluating hazards and defining the appropriate risk reduction, including
measuring the effectiveness of the
methodologies or technologies used
in reducing the risk.
Risk-reduction methods using
technology — including digital systems — have received extensive coverage in trade publications over time
as they are important contributors to
process safety and plant productivity. However, it is critical to recognize
human factors and their impact on
process safety in the design, selection, implementation and operation
of technology.
Chemical Engineering
Figure 3. Operators at a modern control room monitor both the operation
of the process as well as the safety and security of the plant
Connecting PSM and FS
Organizations, such as OSHA, recognize Functional Safety Standard
ISA 84 as a Recognized and Generally Accepted Good Engineering
Practice (RAGAGEP) and one way
to meet the PSM requirements defined in 29 CFR 1910.199. Applying ISA 84 is more than purchasing
a technology with a given certification or using a particular technology
scheme or architecture. Industry best
practices such as ISA 84 consider a
great deal of applied learning. ISA
84 is a performance-based standard
and describes multiple steps before
and after selecting and implementing
safety system technologies. These
steps — commonly referred to as
the safety lifecycle — are also the result of applying lessons learned from
incidents and events.
Research (as documented in the
book “Out of Control” [2]) has shown
that many industrial accidents have
their root cause in poor specification or inadequate design (about
58%). Additionally, users should
consider that installing a system is
not the “end of the road,” but rather
another step in the lifecycle of the
facility. Approximately 21% of incidents are associated with changes
after the process is running, and
about 15% occur during operation
and maintenance.
ISA 84’s grandfather clause
It is well-known that Functional
Safety Standard ISA 84.01-2004
contains a grandfather clause based on OSHA regulation 1910.119. This clause allows users to continue the use of pre-existing
safety instrumented systems (SIS)
that were designed following a previous RAGAGEP, and to effectively keep their older equipment as long
as the company has determined
that the equipment is designed,
maintained, inspected, tested and
operated in a safe manner. As indicated by Klein [3], that does not
mean that the existing system can
be grandfathered and ignored from
that point forward.
The intent of the clause is for
the user to determine if the PSM-covered equipment, which was designed and constructed to comply
with codes, standards or practices
no longer in general use, can continue to operate in a safe manner,
and to document the findings.
Therefore, the emphasis should be
on the second part of the clause,
which states that “the owner/operator shall determine that the
equipment is designed, maintained, inspected, tested and operated in a safe manner.” And that
determination is a continuous effort
that should be periodically revised
until said equipment is removed
from operation and replaced with a
system that is designed in line with
current best practices.
Another consideration is that the
clause would cover not only hardware
and software, but also management
and documentation, including maintenance, all of which should follow current standards — that is, the most recent version of ISA 84 or IEC 61511.
Emerging technologies
The last few decades have seen
technology changing in all aspects
of humankind's daily activities. Process automation and safety automation have not escaped from such changes (Figure 3). Nevertheless, technology-selection criteria should respond to the risk-reduction needs of the manufacturing facility and consider the improvements that some of these technologies offer, such as enabling better visualization of the health of the production asset.

Figure 4. This diagram illustrates the concept of functionally independent protection layers
The new breed of systems not
only addresses the need to protect
plant assets, but allows users to
bring safety to the center stage, side
by side with the productivity of the
plant, in many cases by eliminating
technology gateways and interfaces
that were common a few years ago.
There are also new developments,
particularly in software, that help
prevent human errors in the design,
and that guide users to fulfill industry
best practices using standard off-the-shelf functionality. Off-the-shelf products avoid the errors that complex manual programming and configuration can introduce.
Although productivity and profitability of many manufacturing processes limit the rate of change in the
process sector, whenever there is an
opportunity, facilities should explore
modern technologies and determine
if they are a good fit. One should not assume that a system must not be touched behind the shield of "grandfather clauses" that are believed to justify maintaining the system "as-is." Once again, despite the comfort provided by known technologies, such as general-purpose programmable logic controllers (PLCs), it is important to keep in mind that those platforms might not satisfy the current risk-reduction requirements in the facility, and a significant investment might be required to maintain the risk-reduction performance over the lifecycle of the plant asset.
Also, users will need to develop new
competencies in order to understand new risk-reduction requirements and apply the next generation of technology accordingly.
Performance-based safety standards (IEC 61508 and IEC 61511/
ISA 84) have changed the way
safety systems should be selected.
The days of simply choosing a certified product, or selecting a preferred
technology architecture should be
behind us; today’s system selection
is driven by performance requirements and the risk-reduction needs
of the plant.
Understand the hazards
Although this has nothing to do with the safety system technology itself, it is critical in the selection process to understand the scope of the process hazards and to determine the necessary risk reduction. This
should be done to create the safety
requirements specification (SRS)
necessary to start a system selection. Even when replacing an existing system, this is critical because
the risk profile of the plant may have
changed since installation.
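As a rough illustration of how the required risk reduction feeds the SRS and system selection, the sketch below (mine, not the article's) encodes the low-demand risk-reduction bands that IEC 61508/61511 associate with each safety integrity level; the function name and the example value are hypothetical.

# Minimal sketch (an assumption-labeled illustration, not from the article):
# map a required risk-reduction factor (RRF) from the hazards analysis to
# the SIL band whose low-demand PFDavg range covers it, per IEC 61508/61511.

def required_sil(rrf: float) -> str:
    """Return the SIL band covering the given risk-reduction factor."""
    if rrf < 10:
        return "below SIL 1 (other protection layers may suffice)"
    elif rrf < 100:
        return "SIL 1"    # PFDavg between 1e-1 and 1e-2
    elif rrf < 1000:
        return "SIL 2"    # PFDavg between 1e-2 and 1e-3
    elif rrf < 10000:
        return "SIL 3"    # PFDavg between 1e-3 and 1e-4
    else:
        return "SIL 4 (rarely implemented in the CPI)"

print(required_sil(500))  # hypothetical RRF -> "SIL 2"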
There has been a long-standing requirement that a safety system must use a different (or diverse) technology from its process-automation counterpart to avoid common-cause
failures. But most safety systems
rely on component redundancy
(hardware fault tolerance [HFT]) to
meet reliability and availability requirements, introducing a degree
of common-cause failure directly
into the safety system. Rather than
redundancy, modern systems now
provide a diversity of technologies
designed into logic solvers and
input/output (I/O) modules, along
with a high degree of diagnostics,
to allow a simplex hardware configuration to meet safety integrity level
(SIL) 3 requirements.
Product-implementation diversity is also key. Even though most safety systems are manufactured by process-automation vendors, organizational diversity between the two product teams is only the first level of separation. Within the safety product team, leading suppliers will also separate the design group from the product-development group, and again from the product-testing group.
Systematic capabilities
Systematic capabilities address how
much protection against human factors is built into the safety system.
Users should look for the following:
• Certified software libraries that
offer functions according to the SIL
requirements of the application
• Compiler restrictions to enforce
implementations according to the
SIL requirements
• User-security management to
separate approved from non-approved users for overrides, bypass
and other key functions
• Audit-trail capability to record
and document changes to aid in
compliance with functional safety
standards
Separate, interfaced or integrated
Typically based on the SRS and other business needs, it is important to define one of these three integration philosophies. Integrated systems offer many key benefits, drawing on common capabilities of the process automation system not related to the safety functions directly. But being only interfaced, or even kept completely separate, are also options that need to be thoroughly considered.
Most times, system interfaces are not designed, implemented or tested in accordance with industry best practices or current functional safety standards, and therefore they have an impact on the performance of the system. It has been common to ignore safety requirements on these interfaces. Failure of these interfaces should not compromise the safety system.
Integrated control and safety (Figure 5) is a modern alternative to previous point solutions that takes into consideration the best practices and solves issues related to interface design, implementation and maintenance, both in compliance with functional safety standards and at a lower cost over the lifecycle.

Figure 5. Integrated control and safety is a modern alternative to traditional point solutions

Protection layers
The use of multiple protection layers, or functionally independent protection layers (Figure 4) to be precise, is common in industry. These include technology elements such as the process control system and alarms. Safety instrumented systems are a last resort to prevent a hazard from escalating.
There are additional layers that mitigate the impact of a hazard or contain it. Once more, there are other layers of protection that are not based on technology, but on work processes or procedures, and these might be even more critical than the technology in use.

Network security
The extended use of networked systems is also territory for potential vulnerabilities. A lot of ground has been covered in this area over the last five years; industry has experienced the emergence of standards to address new threats and the accelerated development of a strong relationship between safety and security. To satisfy the security requirements of a system network, the user should do the following:
• Perform a full vulnerability assessment/threat modeling and testing of the different subsystems
• Define the best security mechanism for each of those subsystems to cover any identified gaps
• Perform a full vulnerability assessment/threat modeling and testing of the entire interfaced architecture
For users of an interfaced system, which could be "secured" using "air gaps," the key is establishing a security management system (SMS) for the interface architecture and supporting it over the system lifecycle.
Defense-in-depth in security
The principle of “defense in depth”
(Figure 6) means creating multiple
independent and redundant prevention and detection measures. The
security measures should be layered, in multiple places, and diversified. This reduces the risk that the
system is compromised if one security measure fails or is circumvented.
Defense-in-depth tactics can be
found throughout the SD3 + C security framework (secure by design,
secure by default, secure in deployment, and communications).
Examples of defense-in-depth
tactics include the following:
• Establishing defenses for the perimeter, network, host, application
and data
• Security policies and procedures
• Intrusion detection
• Firewalls and malware protection
• User authentication & authorization
• Physical security
The key message is that, as with safety, security is not resolved only by certification, and it is not an isolated activity after product development is completed. Security
is part of the design considerations
early in the process and must be
supported over the site lifecycle.
Summary
Although following the functional
safety standards is not a silver bullet,
it’s a good place to start the journey
to improve safety in the process sector. If your industry requires compliance with OSHA regulation 1910.119,
for the automation portion of any
project, complying with the requirements of ISA 84 is a way to address
PSM requirements.
Adopting ISA 84 is more than selecting a certified or SIL-capable logic
solver or having a given redundancy
scheme on the instrumentation. It
requires a lifecycle approach that
starts with the hazards analysis and
defines the required risk reduction. It
also involves evaluating technologies
that better address the hazards and
reduce the risk, as well as considering the technical requirements to
mitigate risk to an acceptable level.
Although existing systems can be
grandfathered, they can’t be ignored
from that point forward. Rather, it is
a continuous effort that should be
periodically revised until the equipment is removed from operation and
replaced with a system designed following current best practices.
When it’s time for selecting a new
risk-reduction technology, consider
that choosing a given technology
scheme is not enough to address the
functional safety requirements. Assuming that your existing technology or a "replacement in kind" still complies with the safety requirements of your process might lead to a "false sense of safety." Consider the new breed of systems that not only addresses the need to protect plant assets, but also allows users to bring safety to the center stage, side by side with the productivity of the plant — in many cases by eliminating technology gateways and interfaces that were common a few years ago, thereby also reducing lifecycle cost and maintenance efforts.

FIGURE 6. The concept of "defense in depth in security" is illustrated here
The selection criteria should begin
with a proper understanding of the
hazards and a technology assessment to address human factors, avoidance of common-cause factors that could disable the safety instrumented system, and the integration of process safety information with the process automation systems; this integration is possible and must be done right.
As with safety, security (or network security) is not resolved only by certification, and it is not an isolated activity after product development is completed; it is part of the design considerations early in the process and must be supported over the site lifecycle.
Edited by Gerald Ondrey
References
1. Marsh LLC, The 100 Largest Losses 1972-2011:
Large Property Damage Losses in the Hydrocarbon
Industry, 22nd ed., New York, N.Y., 2012.
2. Health and Safety Executive (HSE), "Out of Control:
Why Control Systems Go Wrong and How to Prevent
Failure," HSE, London, 2003; available for download
at www.hse.gov.uk.
3. Klein, Kevin L., "Grandfathering, It's Not About Being Old, It's About Being Safe," ISA, Research Triangle Park, N.C., 2005; presented at ISA Expo 2005, Chicago, Ill., October 25–27, 2005.
4. Durán, Luis, Safety does not come out of a box, Control Engineering, February 2014.
5. Durán, Luis, Five things to consider when selecting a
safety system, Control Engineering, October 2013.
6. Durán, Luis, The rocky relationship between safety
and security, Control Engineering, June 2011.
Author
Luis Durán is the global product manager, Safety Systems at
ABB Inc. (3700 W. Sam Houston
Parkway South, Houston, TX
77042; Phone: 713 587 8089;
Email: luis.m.duran@us.abb.com). He has 25 years of experience in multiple areas of process automation and over 20
years in safety instrumented
systems. For the last 12 years, he has concentrated on technical product management and product marketing management of safety automation products,
publishing several papers in safety and critical control systems. Durán has B.S.E.E. and M.B.A. degrees
from Universidad Simon Bolívar in Caracas, Venezuela and is a certified functional safety engineer (FSE)
by TÜV Rheinland.
Engineering Practice
Improving the Operability of Process Plants
Turndown and rangeability have a big impact on the flexibility and efficiency of
chemical process operations
Mohammad Toghraei
Consultant
During the design of a chemical process plant, the main focus is on which process units or unit operations must be integrated to convert the feed streams into product stream(s). Design engineers work to achieve this goal; however, in terms of making sure the plant operates smoothly, which is equally important for operation engineers and operators, design engineers face a set of less well-known parameters.
There are five primary process
parameters in each plant — flow,
(liquid) level, pressure, temperature,
and composition. Composition can
be considered a collective term that
reflects all parameters (chemical and
physical), and provides an indicator
of the quality of the stream. Composition can be used to describe
the moisture of a gas stream or the
octane number of a gasoline stream,
or even the electric conductivity of a
water stream.
During operation, equipment process parameters generally deviate
from the design values (normal level)
over time. Five levels can be defined
for each process parameter: normal level, high level, high-high level,
low level and low-low level. In essence, the operational parameters
of a plant relate to the behavior of
the plant between the low level and
high level of each parameter of the
individual equipment components,
individual units or the entire plant.
In most cases, the operability of a
plant can be defined using at least
three key parameters: flexibility in
operation, resistance against surge
(or upset) and the speed of recovery
from upset.
Maintaining operating flexibility
Flexibility of operation in this context
means the ability of a plant to operate
reliably across a wide range of flowrates without sacrificing the overall
quantity or quality of product(s).
From a process standpoint, a
chemical process plant is a combination of equipment, utility networks
and control systems. To design a
plant with sufficient flexibility, each
of these three elements needs to
allow flexibility. Generally speaking,
the control system (including control valves and sensors) and utility network should offer the largest
amount of operating flexibility, while
the equipment itself could offer the
lowest amount of flexibility (Figure 1).
This requirement for larger flexibility
for control items and utility network
considerations is important because
of the supporting role of the utility system and the controlling role
played by instruments in a plant.
Two important concepts are used
to quantify flexibility: turndown (TD)
ratio and rangeability. These are
discussed below, and illustrated in
Figure 2.
Turndown ratio
The flexibility of equipment or a plant
can be defined using the TD ratio.
The most common definition for TD
ratio is “ratio of the normal maximum
parameter (numerator) to the normal
minimum parameter (denominator).”
However, the meaning of “normal
maximum parameter” and “normal
minimum parameter” is not always
clear, and the interpretation may vary in different companies and plants (this is discussed below).
For an individual equipment component, or multi-component equipment systems, low-flow or low-capacity operation happens frequently
over the lifetime of a plant. The reduced-capacity operation may be
intentional or accidental.
For instance, reduced-capacity
operation could be planned for the
purpose of off-loading the equipment
for inspection, testing, monitoring,
or even to support the shutdown of downstream equipment. But it may also occur accidentally due to, for example, a drop in feed flowrate.

FIGURE 1. Different elements of a plant need different levels of operating flexibility. Since the utility network provides support duty to the equipment, it needs a higher turndown ratio. Control valves and other instruments have a duty to take care of equipment across a wider operating range; thus they require an even higher rangeability
But process plant operators like
to know by how much the flowrate
of the equipment (and in the larger
sense, the entire plant capacity) can
be decreased without compromising the process goal or generating
off-specification product. Thus, TD
ratio can be defined as the ratio of
high flow to low flow, as shown in
Equation 1.
TD ratio = QHigh/QLow    (1)

where:
QHigh = the flowrate of the system at the high level
QLow = the flowrate of the system at the low level
The numerical value of the TD ratio
is typically reported as a ratio, such
as 2:1.
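As a quick worked illustration (a minimal sketch, not from the article), Equation (1) can be evaluated directly; the figures reuse the centrifugal-pump example given later in this article (100 m3/h capacity, 30 m3/h minimum flow).

# Minimal sketch of Equation (1): turndown (TD) ratio.

def turndown_ratio(q_high: float, q_low: float) -> float:
    """TD ratio = flowrate at the high level / flowrate at the low level."""
    if q_low <= 0:
        raise ValueError("low-level flowrate must be positive")
    return q_high / q_low

# Centrifugal pump: capacity 100 m3/h, minimum flow 30 m3/h
print(f"TD ratio = {turndown_ratio(100.0, 30.0):.1f}:1")  # -> 3.3:1 (about 3:1)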
It is important to note that the denominator term is the flowrate at the low level, not the low-low level. This is important as it is the differentiator
between the concept of TD ratio
and rangeability, which is discussed
later. Generally, flowrate in low level
(as shown in Figure 2) is considered
to be the minimum level of flow at
which the process goals can still
be reached.
TABLE 1. TURNDOWN RATIO OF SELECT EQUIPMENT
Item | Turndown ratio
Pipe | Large, but depends on the definition of maximum and minimum flow
Storage containers (tanks or vessels) | Very large; the maximum value is the total volume of the container, but the minimum value could be dictated by a downstream component. For example, a centrifugal pump may dictate a minimum volume to provide required NPSH
Centrifugal pump | Typically 3:1 to 5:1
Positive-displacement pump | Theoretically infinite
Heat exchanger | Small, depends on the type; for instance, less than 1.5:1
Burner [1] | Depends on the type; for example: pressure-jet type, ≈2:1; twin fluid-atomizing type, >8:1

TABLE 2. UTILITY SURGE CONTAINER TO PROVIDE TD RATIO
Utility | Surge container | Residence time
Instrument air (IA) | Air receiver | 5–10 min or higher, depending on whether it is connected to UA or not
Utility water (UW) | Water tank | Several hours
Utility steam (US) | Utility steam cannot be stored for a long time without condensing; the options for storing steam are the steam drum of a boiler or, if a conventional boiler is not available, a vessel acting as an "external steam drum"
Utility air (UA) | No dedicated container; could "float" with IA
Cooling water (CW) | Cooling tower basin | Depends on the size of the network
Cooling/heating glycol | Expansion drum | Depends on the size of the network

TABLE 3. TURNDOWN RATIO OF SELECT INSTRUMENTS
Item | Turndown ratio
Flowmeter: orifice-type | 3:1 [2]
Flowmeter: vortex-type | 10:1 to 50:1 [2]
Flowmeter: Coriolis-type | 5:1 to 25:1 [2]
Control valve | Depends on type and characteristics; generally 50:1, and less than 100:1

TABLE 4. ARBITRARY VALUES OF FLEXIBILITY PARAMETERS
Element | Low flexibility | Medium flexibility | High flexibility
Equipment (TD ratio) | <1.2:1 to 2:1 | 2:1 to 3:1 | 5:1 to 8:1
Instruments, control valves (rangeability) | ≈4:1 | 10:1 to 30:1 | 20:1 to 100:1

However, there is another interpretation of TD ratio that is often used
by operations staff. During operation, people expect the TD ratio to
answer the question in this scenario:
“My plant is running normally and
all parameters are normal. However, occasionally, because of different reasons (including shortage of
feed, reduced plant or unit capacity), the flowrate falls. What is the
minimum value I can withstand
without compromising the quality of
the product?”
They basically interpret the TD ratio
so that the numerator is the “normal
level parameter” (and not the “high
level parameter”). However, the difference in the interpretation does not
generate a big difference in numerical
value of TD ratio, as the normal and
high level of parameters are often
not very far from each other. Due to
this potential confusion, the TD ratio
should be considered an approximate parameter and not a precise
number. In general, the academic definition of TD ratio uses a high-to-low setup, while in the field, operators often define TD ratio using normal-to-low values.
The TD ratio can be defined for
parameters other than flowrate, but
it generally refers to flowrate. One reason for this is that flowrate is often the most important parameter of a plant, helping to define the economy of the system. The other reason is that the flowrate might
be influenced by constraints outside of the plant (for instance, a lack
of stored feed), which the control
system cannot necessarily adjust
(thus making a reduction in flowrate
unavoidable).
While the TD ratio is not always a
requested parameter, and is often
not mentioned in project documents
for design purposes, operators are
usually looking for a TD ratio of at least
2:1 for a plant. The required TD
ratio could be as high as 3:1 or 4:1
for a plant.
Equipment flexibility
The TD ratio can also be determined
for a given piece of equipment, using
other values that are stated for the
component. For example, even when
a TD ratio is not explicitly stated for a
centrifugal pump, when the pump is
said to have a capacity of 100 m3/h
and a minimum flow of 30 m3/h, this
means that the centrifugal pump has
a TD ratio of about 3:1.
The TD ratio of a reciprocating
pump could theoretically be defined
as infinite because it can work over a
very wide range of flows. However, in
practice, such a pump cannot handle
any flowrate that fails to fill the cylinder of the pump in one stroke. Partial
filling of the cylinder may cause some
damage to mechanical components
of the pump over the long term. Thus
the minimum required flow is a function of cylinder volume and stroke
speed of a specific pump.
The TD ratio for pipelines presents
a more complicated situation. With
piping systems, there are several different ways to define the minimum
flow. For instance, it could be defined
as the minimum flow that does not
fall into the laminar flow regime. Or,
it could be considered as the minimum flow that keeps a check valve
open (if a check valve is used).
For liquid flows in pipes, the minimum flow is more commonly interpreted as the minimum flow that
makes the pipe full, or the sealing
flowrate (that is, no partial flow), or a
flow threshold below which the fluid
will freeze in an outdoor pipe. If the
flow bears suspended solids, the
minimum flowrate could be defined
as that at which sedimentation of
suspended solids may occur.
Table 1 provides examples of
typical values and rules of thumb
regarding the TD ratio for various
types of process equipment. Note
that in Table 1, the TD ratio of storage containers is relatively large. This
high TD helps to explain why large
containers are used for surge dampening as part of a typical plant-wide
control system.
FIGURE 2. Process plants typically define different threshold values for flowrate levels, and set appropriate alarms and trips when the threshold values of this important parameter are reached. The concepts of turndown ratio and rangeability are shown in relation to these key threshold flowrate values

In some cases, deciding on a required TD ratio needs good judgment. One example is chemical-injection packages. The TD ratio
is important for chemical-injection
packages to protect against chemical overdosing or underdosing.
Chemical-injection packages typically provide a TD ratio of about 100:1
or lower. In some cases, 10:1 can
be provided by stroke adjustment,
and another 10:1 through the use
of a variable frequency drive (VFD)
to control the motor. But the question that arises is why such a large
TD ratio is necessary if the host flow
experiences, for example, only a 2:1
TD ratio. This high TD ratio is generally desired because of uncertainty
in the required chemical dosage and
the variety of available chemicals.
The required dosage of a chemical
depends on the type of chemical
and the host stream properties.
Thus, during the design phase of a
project, the designer doesn’t exactly
know what the optimum dosage
would be, even though a chemical
provider recommends a specific dosage. Often, he or she prefers to conservatively have a chemical-injection
system with a high TD ratio.
There is generally less uncertainty
when using chemicals of known
composition, rather than proprietary
mixtures. If the dosage is fairly firm
and the chemical used is a non-proprietary type, the TD ratio could be
decreased, to reduce the overall cost
of the chemical-injection system.
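The roughly 100:1 figure quoted above comes from stacking the two adjustment mechanisms multiplicatively; a short sketch (mine, not the article's) makes the arithmetic explicit.

# Sketch: independent turndown mechanisms stack multiplicatively.

def combined_td(*ratios: float) -> float:
    """Overall TD ratio from independent, multiplicative mechanisms."""
    total = 1.0
    for r in ratios:
        total *= r
    return total

# Stroke adjustment (10:1) combined with a VFD on the motor (10:1):
print(f"{combined_td(10.0, 10.0):.0f}:1")  # -> 100:1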
Utility network flexibility
The flexibility of a utility network is
also defined by the TD ratio. As mentioned above, when a plant requires
a TD ratio of, say, 2:1, the TD ratio of
the utility network should be higher.
To accommodate a larger TD ratio,
the utility network generally requires
containers to absorb fluctuations
that may be caused by utility usage
changes in process areas. Table 2
provides additional details to support this concept.
Different segments of a utility network experience different levels of
turndown, and consequently each
segment may need a different TD
ratio. For instance, as shown in Figure
3, the main header could need the
minimum TD ratio, while sub-headers
may need a higher TD ratio.
The good news is that achieving a
high TD ratio for the utility network
and related instruments is not difficult. The overall utility network is
mainly a series of pipe circuits that
inherently show a large TD ratio. If
instruments are included in the utility network, this poses no problem.
Many instruments (including control
valves and sensors) have an intrinsically large TD ratio — generally
greater than 20:1.
Instrument rangeability
Instruments typically need to operate over a wider range of process
conditions than other equipment or
utilities. This is because their duty is
not limited to normal operation, or a
band defined by low and high values. Rather, they have to be operational across the entire, wider band
from low-low to high-high threshold
values. Therefore, rangeability, R,
can be defined as:
R = QHigh-high/QLow-low    (2)

Where:
QHigh-high = the flowrate of the instrument or control valve at the high-high level threshold value
QLow-low = the flowrate at the low-low level threshold value
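In the same spirit as the TD-ratio sketch earlier (an illustration, not from the article), Equation (2) can be evaluated for an instrument's threshold flows; the numbers below are hypothetical, chosen to fall in the control-valve range of Table 3.

# Minimal sketch of Equation (2): rangeability of an instrument.

def rangeability(q_high_high: float, q_low_low: float) -> float:
    """R = flowrate at high-high threshold / flowrate at low-low threshold."""
    if q_low_low <= 0:
        raise ValueError("low-low threshold flowrate must be positive")
    return q_high_high / q_low_low

# Hypothetical control valve: 150 m3/h at high-high, 3 m3/h at low-low
print(f"R = {rangeability(150.0, 3.0):.0f}:1")  # -> 50:1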
For control valves, the formula is a
bit different because a control valve
is a device that passes flow and
also drops the pressure of the flow.
Thus, the rangeability cannot be defined only as a function of flowrate
— pressure drop also needs to be
incorporated. The rangeability of
control valves is a function of the
control-valve flow coefficient (Cv).
Rangeability can also be defined for other parameters, such as temperature, but rangeability with regard to flowrate is generally the most important. Table 3 shows
some typical rangeability values for
commonly used instruments.
It should be stressed that TD ratio
and rangeability are two separate parameters, for two separate systems.
They cannot be used interchangeably and attempts to relate or convert them to each other do not have
much meaning.
Providing required flexibility
There are three main ways that one
can provide a specific TD ratio for
process equipment, and each is discussed below:
• Using equipment with an inherently high TD ratio
• Replacing equipment with multiple
similar, smaller-capacity equipment in a parallel arrangement
• Providing a recirculation route
Using equipment with an inherently higher TD ratio. Some process elements have an inherently
higher TD ratio. Two of them, tanks
and pipes, were mentioned above.
It is not always easy to recognize
if a piece of equipment has an inherently high or low TD ratio. However,
the following rules of thumb can be
used as guidelines:
• Smaller-volume equipment tends
to have a smaller TD ratio than
larger-volume equipment
• Equipment with internal baffles
tends to have a lower TD ratio
(a good example is some gravity
separators, such as baffled skim
tanks)
• Equipment in gas service may
show a higher TD ratio than equipment used in liquid service
• Equipment with an internal weir
(especially fixed ones) may have a
very low TD ratio
• Equipment that uses some properties of the inlet stream for their
functioning, may have a lower TD
ratio. For example, in cyclones or
hydrocyclones, the energy of the
inlet stream (“energy” as a property of the inlet stream) is used to
generate centrifugal force, so any
reduced flow will reduce the centrifugal force, which may reduce
the effectiveness of the system
• Equipment containing loose, porous media may show a lower TD
ratio in liquid service, and the TD
ratio may be lower when the porous media is comprised of larger
solid particle sizes. Examples
include sand filtration systems,
catalyst contactors and related
systems
• Despite a common misconception, perforated-pipe flow distributors do not necessarily have limited TD ratios [3]

FIGURE 3. Shown here is a map of turndown ratio for a typical utility network. The pipes closer to the utility generation system (main header) need less turndown ratio compared to sub-headers and branches
As noted, the utility network should
have a relatively large TD ratio. Fortunately, utility networks consist mainly
of pipes in different sizes, which have
inherently large TD ratios. If control
valves are needed on the network,
their lower TD ratios may generate
bottlenecks. In such situations, it
may be necessary to install parallel
control valves with split control, because of the required large TD ratio.
Using parallel equipment. Instead of using a single component with a capacity of 100 m3/h, this technique uses an arrangement that employs two parallel components, each with a capacity of 50 m3/h.
By doing so, a TD ratio of at least
2:1 can often be provided. It should
be noted that the equipment by itself
may have some inherent TD-ratio capability, which may have to be added
to the provided 2:1 TD ratio.
For example, instead of using one
shell-and-tube heat exchanger with
the capacity of 100 m3/h, three heat
exchangers — each with a capacity of 33 m3/h — can be used to achieve a TD ratio of at least 3:1.
The TD ratio may actually be higher
because each shell-and-tube heat
exchanger has an inherent TD ratio
too, even though it is very small. This
technique has additional benefits.
The parallel arrangement provides
higher availability for the system,
because the failure of two or three
parallel equipment components is
less likely than the potential for failure
when the system relies on a single
equipment component.
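To see how the parallel arrangement stacks with each unit's own flexibility, consider the following rough sketch (mine, under the stated assumptions, not a formula from the article): if units can be taken out of service individually, the overall maximum is all n units at full rate and the overall minimum is one unit at its own minimum flow, so the overall TD ratio is roughly n times the per-unit TD ratio.

# Rough sketch: overall TD ratio of n identical parallel units, assuming
# units can be valved in and out individually and each unit's inherent
# turndown is the only other constraint.

def parallel_td(n_units: int, unit_capacity: float, unit_td: float) -> float:
    q_max = n_units * unit_capacity   # all units at full rate
    q_min = unit_capacity / unit_td   # a single unit at its minimum flow
    return q_max / q_min

# Three 33-m3/h heat exchangers, each with a small inherent TD of 1.2:1:
print(f"{parallel_td(3, 33.0, 1.2):.1f}:1")  # -> 3.6:1 (at least 3:1)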
Using two control valves in parallel in a single control loop (through a
“split range” control) is also another
example of this technique in the area
of instrumentation.
However, there are some disadvantages associated with this technique. In particular, capital cost and
operating cost considerations may
rule against it.
Providing recirculation pipe. Implementing a recirculation pipe from
the equipment outlet to its inlet is
a widely used method to increase
the TD ratio of the system. In many cases a pump, and definitely a control system, are needed to implement this technique. As long as you
can afford an extra pump and control
system on the recirculation pipe, this
technique can be used. The recirculation pipe needs a control system,
otherwise all flow goes through the
recirculation pipe back to the inlet of
the unit of interest (Figure 4).
One example of this technique is using a minimum-flow line for a centrifugal pump. A centrifugal pump with a capacity of 100 m3/h and a minimum flow of 30 m3/h (thus, with a TD ratio of about 3:1) can be equipped with a minimum-flow line and an appropriate control system to increase its TD ratio. If the minimum-flow line and the control system are designed to handle a maximum flowrate of 30 m3/h, it means the TD ratio of the pump can theoretically be increased to infinity, by zeroing the minimum flow.
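A small sketch (an illustration consistent with the pump example above, not code from the article) makes the effect concrete: the recycle stream keeps the pump itself above its minimum flow while the net delivered flow can drop all the way to zero.

# Sketch: delivered-flow turndown of a 100-m3/h centrifugal pump fitted
# with a minimum-flow (recirculation) line sized for 30 m3/h.

PUMP_MIN = 30.0     # m3/h, minimum stable flow through the pump
RECYCLE_MAX = 30.0  # m3/h, capacity of the minimum-flow line

def recycle_needed(delivered: float) -> float:
    """Recycle flow required so the pump never runs below PUMP_MIN."""
    return max(0.0, PUMP_MIN - delivered)

for q in (100.0, 30.0, 10.0, 0.0):
    r = recycle_needed(q)
    if r > RECYCLE_MAX:
        print(f"deliver {q:5.1f} m3/h: not achievable with this line")
    else:
        print(f"deliver {q:5.1f} m3/h: pump runs at {q + r:5.1f} m3/h "
              f"(recycle {r:4.1f} m3/h)")
# Delivered flow can reach zero, so the TD ratio on delivered flow is
# theoretically unbounded.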
Another example is a vertical
falling-film evaporator. This type of
evaporator has a vertical tube bundle
that is similar to the ones found in a
shell-and-tube heat exchanger. The
tube-side flow is two-phase flow. The
liquid flows down by gravity, and the
vapor (of the same liquid) is pushed
down by liquid drag. The flow inside
the tubes is an “annular regime,”
meaning the liquid covers the internal perimeter of tubes and the vapor
is in the center of the tubes.
In the case of low flow, there is a
chance of “dry patches” forming on
the tube’s internal surface. Because
of this, vertical, falling-film evaporators are typically equipped with recirculation pipes to provide a minimum
practical TD ratio (Figure 5).
However, this method cannot be
applied to all equipment. For example, it is not a good technique to increase the TD ratio of a furnace or fired heater, because recirculation of fluid around a furnace may increase the furnace coil temperature and cause burnout if the firing system doesn't have a sufficient TD ratio. Table 4 provides some rules of thumb to gauge the flexibility of different elements of a process plant.
Resistance against surge
While TD ratio refers to the static behavior of a plant, there are two additional parameters (resistance against
surge, and speed of recovery from
upset) that refer to its dynamic
behavior. The discussion here places less emphasis on dynamic theory, covering only practical aspects of dynamic behavior.
A process upset could result from
a surge. Surge can arbitrarily be defined as the deviation of a parameter
(such as flowrate) beyond its normal
level. The final value of the parameter may or may not be in a band
between high level and low level and
the change often occurs quickly.
When a parameter moves quickly,
an upset could happen. The surge/
upset could be defined for each parameter including flowrate, temperature, pressure and even composition.
A surge in the composition is often
called a slug. Level surge is generally
a consequence of other surges, and it can be dampened in surge-equalization tanks or drums.

FIGURE 4. By providing a recirculation pipe, the turndown ratio of a piece of equipment can be increased. If the fluid pressure is not enough, a pump (or compressor) may be needed, and a control system is definitely needed
Surge can also be defined by its
shape (in a diagram of parameter
change versus time), and by its magnitude. The magnitude of surge can
be stated as a relative number or
an absolute number. For example, a flow surge of 2% per minute is a relative number, and means that the flowrate increases or decreases by 2% each minute. In
another example, a system can be
said to be resistant to temperature
surge (thus no upset conditions will
be generated) as long as any potential surge remains less than 2°C per minute (an absolute value).

FIGURE 5. Shown here is a system for brine recirculation in a vertical falling-film evaporator. The brine-recirculation line in the vaporizer plays an important control role. Without the recirculation line, the vaporizer has a very narrow turndown ratio, which is not generally acceptable for optimal operation
A 2%-per-minute surge means
that the flowrate could start at 100
m3/h and then increase to 102 m3/h,
then to 104 m3/h and so on. Or the
surge may start at 100 m3/h and
then decrease to 98 m3/h, then 96
m3/h and so on.
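As a small worked example (a sketch of the arithmetic above, not from the article), the trajectory can be generated directly; here the 2% step is taken on the initial flowrate, matching the 100, 102, 104 m3/h progression.

# Sketch: flowrate trajectory under a 2%-per-minute surge, with the 2%
# applied to the initial flowrate (100 -> 102 -> 104 m3/h, as in the text).

def surge_trajectory(q0: float, pct_per_min: float, minutes: int,
                     rising: bool = True) -> list[float]:
    step = q0 * pct_per_min / 100.0 * (1.0 if rising else -1.0)
    return [q0 + i * step for i in range(minutes + 1)]

print(surge_trajectory(100.0, 2.0, 3))                # [100.0, 102.0, 104.0, 106.0]
print(surge_trajectory(100.0, 2.0, 3, rising=False))  # [100.0, 98.0, 96.0, 94.0]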
Some systems show different behavior against surge, when it is a
positive surge (an increase in the parameter value), or a negative surge
(a decrease in the parameter value). Therefore, it is a good idea to clarify which is meant. For example, an API separator could
be more resistant to the impact of
decreasing inlet stream compared to
the impact of increasing the inlet stream.
The first line of defense against a surge is provided by
the control system or control valves. However, control
valves alone cannot totally eliminate a surge, but will
only stop a surge from impacting a downstream system. Ultimately, the surge needs to be handled, but by
other methods.
There are basically two surge-management methods
that can be implemented for each piece of equipment
or group of equipment in a plant:
• Boxing-in a surge in a specific equipment component or series of equipment
• Transferring the surge to an external or auxiliary
system
Understanding the applicability of each of these techniques requires some knowledge about the inherent
dynamic characteristics of the systems from a process
control viewpoint. The three dynamic features of each
equipment or unit are resistance, capacitance and inertia (dead time) [4]. A brief qualitative explanation of
these three features is presented next.
If a system is more dominantly a “resistance” type,
this system will be able to prevent the surge from transferring to downstream equipment. A piece of pipe is
one example of a resistance-type element. A pipe
could inherently stop the surge if it is narrow enough.
However, because a pipe's main function is to transfer
fluid, the designer generally sizes the pipe based on
its duty (transferring fluid) and then, if needed, a control
valve is placed on the pipe to stop a potential surge.
The capability of a system to dampen the surge depends on the “capacitance characteristics” of the system. The higher the capacitance characteristic, the
more it is able to dampen a surge.
Here, a capacitance-type element refers to any element that can be used to temporarily store excess
mass (such as liquid volume or gas pressure) or energy
(such as thermal or chemical energy).
For instance, large-volume equipment generally has a higher capacitance feature. Implementing a surge
tank, equalization tank, surge drum (or even pond) is
one means of providing a system with sufficient capacity to dampen the surge.
Another example of using a high-capacitance system
is when transferring a surge to heat-exchange media.
Utility heat exchangers use streams such as cooling
water, steam, and other media, to transfer the heat
to or from process streams. These utility streams are
also able to absorb a temperature surge in the system.
The capacitance feature of a utility network can be provided in part by pipes in the network (the pipes function mainly as resistance elements but they have some
capacitance too), and also their surge tank, as
discussed above.
A system is called robust against upset when it can
tolerate a large surge (as defined for each process parameter) and no upset occurs, thereby allowing the
process to proceed smoothly.
If an upset cannot be tolerated, one solution is to
implement a rate-of-change control loop in the system.
The following list provides some general rules of thumb
on the capability of a system to handle surges:
1. Generally speaking, equipment with larger volume
and fewer internals is better able
to dampen upsets.
2. Containers with plug-flow regime are more susceptible to upset from surge compared to mixed-flow-regime containers.
3. Equipment that exerts centrifugal effect on the process fluid is more sensitive to upset (examples include centrifuges and centrifugal pumps).
4. Containers that hold loose media
are less robust against upsets.
5. Non-flooded containers can handle and dampen a surge better
than flooded containers.
Speed of recovery from upset
The speed of recovery from an
upset situation primarily depends on
the dynamic characteristics of the
system, and more specifically, the
“process dead time” and “process time constant” of a system. The dead time is a result
of inertia characteristics of the
system, while the process time constant is a function of capacitance and
resistance features of the system.
A larger dead time or time constant
means the system requires a longer
time to recover from an upset.
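As a hedged rule-of-thumb illustration (my sketch, not the article's method), for a roughly first-order system the recovery time after a step upset can be estimated as the dead time plus about four time constants, the point at which a first-order response has covered roughly 98% of the way back.

# Sketch: rough recovery-time estimate for a first-order-plus-dead-time
# system. All numbers are hypothetical.

def recovery_time(dead_time: float, time_constant: float,
                  n_tau: float = 4.0) -> float:
    """Dead time plus ~4 time constants (~98% recovery, first-order)."""
    return dead_time + n_tau * time_constant

# Hypothetical unit: 2 min dead time, 5 min time constant
print(f"{recovery_time(2.0, 5.0):.0f} min to recover")  # -> 22 min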
However, in addition to this inherent characteristic of a system, other
features can also impact (and decrease) the speed of recovery from
an upset. Sometimes these features
(rather than the dynamic behavior of
the system) govern the behavior of
the system. For example, a hot lime
softener within a water-treatment
system has an established sludge
blanket. It takes time to “heal”
a broken sludge blanket if an upset
creates “breaks” in it.
Another example is "vessel-media" systems. These are systems that are used in operations such as
ion exchangers, loose-media filtration systems, packing-type absorption towers, catalyst beds and so
on. A big surge in flow may displace
the media in a way that leads to flow
channeling. Putting the displaced
media back into a homogenous form
takes time.
Similarly, a surge to a biological system will generally require a long recovery time, because a surge in temperature or a slug of a toxic chemical may kill a large portion of the biomaterial growing there.
Edited by Suzanne Shelley
References
1. Mullinger, P., and Jenkins, B., “Industrial and Process
Furnaces,” 1st Ed., Amsterdam: Butterworth-Heinemann,
2008, p. 171.
2. Upp, E., and LaNasa, P., "Fluid Flow Measurement," 2nd Ed., Gulf Professional Publishing, Boston, 2002, pp. 157–158.
3. Perry, R., Green, D. and Maloney, J., “Perry's Chemical
Engineers' Handbook,” 7th Ed., McGraw-Hill, New York,
1997, pp. 6–32.
4. Liptak, B., “Instrument Engineers Handbook — Vol 2.
Process Measurements and Analysis,” 4th Ed., CRC
Press, Boca Raton, 2003. Chapter 2.
Author
Mohammad Toghraei, is an instructor and consultant with Engrowth Training (Email: mohtogh@
gmail.com; Phone: 403-8088264; Website: engedu.ca), based
in Calgary, Alberta, Canada. He
has more than 20 years of experience in the chemical process industries. Toghraei has published
articles on different aspects of
chemical process operations. His main expertise is in
the treatment of produced water and wastewater from
petroleum industries. He holds a B.S.Ch.E. from Isfahan University of Technology (Iran), and an M.Sc. in environmental engineering from Tehran University (Iran).
He is a professional engineer (PEng) in the province of
Alberta, Canada.
Solids Processing
Solids Discharge: Characterizing
Powder and Bulk Solids Behavior
How shear-cell testing provides a basis for predicting flow behavior
Robert McGregor
Brookfield Engineering Laboratories
Powder jams are the once-in-a-month catastrophe that
can bring processing operations to a standstill. Whether
it’s erratic flow behavior or complete
stoppage of powder discharge, the
consequence is the same. Shutdown
may be necessary before startup can
take place. Why? Formulations often
involve multiple component powders
blended together. If the flow becomes
disrupted, one of the possible consequences is segregation of components. Smooth and continuous flow
of powder from start to finish is the
operating goal to minimize the onset
of other problems like segregation.
Traditional testing techniques
used to predict flow performance,
such as flow cup, angle-of-repose
measurement and tap test, actually
have limited relevance to whether a
powder will flow. They are relatively
affordable in terms of equipment
purchase and easy for operators to
use. The data, however, do not predict whether reliable discharge will
take place from the storage vessels
containing the powder.
FIGURE 1. Three common types of flow behavior for powder in a bin are mass flow (1a), core flow or funnel flow (1b) and rathole formation (1c)

Shear cells for testing powder flow
have been used in the minerals industry for decades. Recent improvements in the design of this equipment
and the processing power available
in today’s personal computers (PCs)
make them more affordable and user
friendly. The bottom line is that shear
cells can predict powder flow behavior using a proven scientific principle
that measures inter-particle sliding
friction. Mathematical calculations
embedded in the software used
with shear cells provide estimates
for “arching dimension” in mass flow
and “rathole diameter” in core flow.
These values become design limits
for hopper openings and half angle.
This article addresses the rheology of powder-flow behavior and
explains how the shear cell is used
to make these types of powder measurements and calculations for storage equipment design (see also, "A Pragmatic Approach to Powder Processing," Chem. Eng., August 2015, pp. 59–62).
Types of powder flow
In a perfect world for powder processors, “mass flow” would take place
all the time when powder discharges
from a container. Figure 1a shows
how particles move uniformly downward in lockstep with one another as
the fill level in the bin reduces. The
fundamental principle is referred to
as “first in, first out.” One obvious
advantage is that blends of powders
retain their component ratio without
segregation. This is one of the most
important considerations for formulators who must ensure that final
product has the intended makeup
as designed in research and development (R&D).
More typical of powder processing in most plant operations is “core
flow” or “funnel flow” as shown in
Figure 1b. Particles at the top of the
container move toward the center
and then downward through the
middle, discharging out the hopper
well before the powder that had been
further down in the vessel. Larger
particles have a tendency to move
more readily than smaller particles,
potentially resulting in segregation.
This type of behavior is called “last in,
first out.” One possible unfortunate
consequence is that powder around
the outer wall of the vessel becomes
stagnant, consolidates over time,
and then becomes lodged in place. This type of structure is referred to as a "rathole," shown in Figure 1c. The rathole may extend from top to bottom of the bin, and may change in opening diameter as a function of powder depth.

FIGURE 2. The flow cup test is relatively easy to set up and perform, and the data are used to calculate the Carr index, Equation (1), and Hausner ratio, Equation (2)
Processors prefer mass flow for
obvious reasons. Cohesive materials will generally exhibit core flow
in plant equipment as originally designed. The hopper wall angle and
its material of construction have a direct impact on flow behavior. Therefore the challenge is to manage the
problem with the equipment that
exists, which means modifying the
formulation, or redesigning the bin
equipment, if practical.
Traditional tests for flowability
As mentioned earlier, there are three common methods for predicting flow: flow cup, angle of repose and the tap test.
Flow cup. The most popular testing method is the flow cup, which is quick and easy to use. The cup is basically an open cylinder with a removable disc that is inserted into the bottom (Figure 2). A family of discs, each with a different hole diameter in the middle, is provided with the cup. Once the disc is in place, the cup is filled with powder and the operator observes whether the material discharges through the hole. Processors may know from experience what difficulties they are likely to face depending on the hole diameter that is needed to allow the powder to discharge from the cup. In a practical
sense, this instrument is used as a
“go” or “no-go” indicator for powder
processing on a regular basis.
Angle of repose. This is a simple
test method that observes powder in
a pile and measures the angle of the
pile relative to horizontal. Note that
both the angle-of-repose method
and the flow-cup test work with powders that are loosely consolidated.
They do not attempt to evaluate the
powder as it settles, which is what
happens when powder is placed in a
containment vessel of any kind. This
phenomenon, called “consolidation,”
is an important distinction to keep in
mind because it has direct impact on
how flow behavior can change.
Tap test. The tap test takes a cylinder of powder and shakes it to determine how much settling will occur.
The change in volume of the powder
from start to finish is a measurement
of the powder’s tendency to consolidate. The “loose fill” density, ρpoured,
of the powder at the start of the test
is calculated by dividing the cylinder volume into the weight of the
sample. The “tap density,” ρtapped,
is calculated by dividing the reduced
volume of powder at the end of the
test into the sample weight. The two
density values are compared to one
another, giving an indicator for the
consolidation that can take place
over time when the powder settles.
Two standard calculations that are
typically used by industry to evaluate
tap test data are called Carr Index
(Carr%) and Hausner Ratio (HR), as
defined in Equations (1) and (2):

Carr% = 100 × (ρtapped - ρpoured)/ρtapped    (1)

HR = ρtapped/ρpoured    (2)
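A short sketch (mine, not from the article) shows the arithmetic; the density values are hypothetical, and the closing comment reflects commonly published Carr-index guidance, so treat it as indicative only.

# Sketch: Carr index and Hausner ratio from tap-test densities.
# The density values below are hypothetical.

def carr_index(rho_poured: float, rho_tapped: float) -> float:
    """Carr% = 100 * (tapped - poured) / tapped."""
    return 100.0 * (rho_tapped - rho_poured) / rho_tapped

def hausner_ratio(rho_poured: float, rho_tapped: float) -> float:
    """HR = tapped density / poured density."""
    return rho_tapped / rho_poured

rho_poured, rho_tapped = 0.48, 0.60  # g/mL, hypothetical powder
print(f"Carr index    = {carr_index(rho_poured, rho_tapped):.1f}%")    # 20.0%
print(f"Hausner ratio = {hausner_ratio(rho_poured, rho_tapped):.2f}")  # 1.25
# As a rough, commonly cited guide: Carr% below ~15 suggests good flow,
# while values above ~25 suggest poor flow (higher consolidation tendency).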
Shear cell test for flowability
Shear cells measure the inter-particle friction of powder materials. This type of test has direct application to predicting flow behavior in gravity discharge for powders stored in vessels of any kind. Shear cells were first applied to powders and bulk solids in the minerals industry over 50 years ago. More recent advancements in the use of computers to automate testing, and improvements in shear-cell design, have allowed this type of instrument to become more commonplace throughout the powder-processing industries.
The current popular design is the annular shear cell. Powder is placed into a ring-shaped cell called the “trough,” shown in Figure 3a, weighed in order to calculate the “loose fill” density, and then placed onto a test instrument such as that shown in Figure 3b. The lid, which fits on top of the cell, is attached to the upper plate on the instrument and can be one of two types:
1. The vane lid (Figure 3c) has individual pockets separated by vanes.
2. The wall-friction lid (Figure 3d) is a flat surface and is made of a material similar to the hopper wall in the powder storage vessel on the production floor. Examples might include mild steel, stainless steel or Tivar.
FIGURE 3. For shear-cell testing, powder is placed into the ring-shaped trough of the annular shear cell (3a), which is placed into a commercial powder flow tester (3b) that uses either a vane lid (3c) or a wall-friction lid (3d)
Basic operation of the instrument
during the test procedure is to bring
the lid down onto the powder sample
and compress the material to a specified pressure. This action consolidates
the powder, forcing the particles to
move closer to one another. With the
powder in this compressed state, the
trough rotates at a low speed — perhaps 1 rpm. The following is observed,
depending on the lid in use:
1. The vane lid, which is attached
to a torsional spring, rotates with
the trough as long as the frictional
force between powder particles
is greater than the torsion in the
spring. When the lid stops moving
with the trough, the torsion in the
spring exceeds the inter-particle
friction. The moment when this
stoppage in lid movement takes
place defines the yield stress between powder particles and is a
measure of what is referred to as
“failure strength” of the powder.
2. The wall-friction lid behaves in
a similar fashion to the vane lid
while measuring the sliding friction between the powder particles
and the surface material of the lid.
When rotation of the wall lid stops
during the test, the yield stress for
powder flow on this particular surface is established.
Movements of trough and lid during
the shear-cell test are very small and
almost unobservable to the naked
eye. Increasing consolidating pressures are applied to the powder sample to construct a picture of how the
powder’s failure strength will change.
This equates with vessels that have
increasing fill levels of material.
Three key graphs
Basic tests run with the shear cell address flow behavior of powder in gravity discharge from a storage vessel. The following summarizes the three primary graphs used to characterize flow behavior.
Flow function. The flow-function test evaluates the ability of the powder to form a cohesive arch in the hopper that could restrict or prevent flow out of the opening in the bottom. The resulting data form the flow-function graph (Figure 4), which shows how the failure strength for the powder changes as a function of increasing consolidating stress (height of powder-fill level in the vessel). Industry has agreed to classify regions of flow behavior as shown in the figure, ranging from “free flowing” to “non-flowing.” As might be expected, many powders exhibit “cohesive” or “very cohesive” flow and are likely to be problematic in terms of processability.
FIGURE 4. The flow-function graph (unconfined failure strength versus major principal consolidating stress, both in kPa) shows how the failure strength for the powder changes as a function of increasing consolidating stress; regions range from “free flowing” through “easy flowing,” “cohesive” and “very cohesive” to “non-flowing”
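To make the classification in Figure 4 concrete: a common bulk-solids convention (the Jenike flow index, ffc, the ratio of major principal consolidating stress to unconfined failure strength) draws the region boundaries roughly at 1, 2, 4 and 10. The minimal sketch below uses those conventional boundaries; they are standard practice, not values taken from this article:

    # Sketch: classify a flow-function test point using the Jenike flow
    # index (ffc = sigma_1 / sigma_c). Boundary values (1, 2, 4, 10) are
    # common bulk-solids convention, not taken from this article.
    def classify_flowability(sigma1_kpa: float, sigma_c_kpa: float) -> str:
        """sigma1 = major principal consolidating stress; sigma_c =
        unconfined failure strength (both in kPa, as in Figure 4)."""
        if sigma_c_kpa <= 0.0:
            return "free flowing"       # no measurable failure strength
        ffc = sigma1_kpa / sigma_c_kpa  # higher index = easier flow
        if ffc > 10.0:
            return "free flowing"
        if ffc > 4.0:
            return "easy flowing"
        if ffc > 2.0:
            return "cohesive"
        if ffc > 1.0:
            return "very cohesive"
        return "non-flowing"

    print(classify_flowability(6.0, 2.5))  # -> cohesive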
Wall friction. The wall-friction test measures the flowability of the powder on the material comprising the hopper wall. Data from the wall-friction test (Figure 5) show how the effective friction angle for the hopper wall to allow gravity-driven powder flow on its surface changes as a function of consolidating stress (height of powder-fill level in the vessel). Experience indicates that friction angles below 15 deg give relatively easy flow behavior, whereas friction angles above 30 deg are cause for concern. Data from this test may also have some correlation with findings obtained in the angle-of-repose test described earlier in this article.
FIGURE 5. Data from the wall-friction test (friction angle, in deg, versus normal stress, in kPa) show how the effective friction angle for the hopper wall to allow gravity-driven powder flow on its surface changes as a function of consolidating stress
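The 15-deg and 30-deg experience values above are easy to automate for routine checks. A minimal sketch follows; the angle calculation assumes the usual definition of the wall friction angle as the arctangent of shear stress over normal stress, and the test-point values are hypothetical:

    import math

    # Sketch: wall friction angle from one wall-friction-lid test point,
    # flagged against the 15-deg / 30-deg experience values cited above.
    def wall_friction_angle_deg(shear_kpa: float, normal_kpa: float) -> float:
        return math.degrees(math.atan(shear_kpa / normal_kpa))

    def flag(angle_deg: float) -> str:
        if angle_deg < 15.0:
            return "relatively easy flow expected"
        if angle_deg <= 30.0:
            return "intermediate: review hopper angle and wall material"
        return "cause for concern"

    angle = wall_friction_angle_deg(0.9, 2.0)  # hypothetical test point
    print(f"{angle:.1f} deg: {flag(angle)}")   # 24.2 deg: intermediate...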
Density. The density of powder in a vessel will vary depending on the consolidating stress, which in turn is a function of the fill level. Figure 6 shows an example. If the density increases by more than 50% relative to the “loose fill” condition, then there is an expectation that flow problems may exist. Note that the density test will very likely have a point on the curve that correlates with the findings in the tap test described earlier in this article.
FIGURE 6. The density of a powder in a vessel (bulk density, kg/m3, versus major principal consolidating stress, kPa) will vary depending on the consolidating stress, which in turn is a function of the fill level
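The 50% criterion can be scripted directly; a minimal sketch with hypothetical densities:

    # Sketch of the 50% density criterion described above, with
    # hypothetical densities in kg/m3.
    def density_increase_pct(rho_loose: float, rho_at_stress: float) -> float:
        return 100.0 * (rho_at_stress - rho_loose) / rho_loose

    increase = density_increase_pct(rho_loose=400.0, rho_at_stress=650.0)
    if increase > 50.0:  # article's rule of thumb
        print(f"+{increase:.0f}% vs. loose fill: flow problems may exist")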
Data analysis
Parameters of interest that can be calculated from the data in the above tests include the following:
1. The arching dimension is the length of a bridge section that the powder has sufficient strength to create in the hopper section of a vessel. If the bridge is longer than the dimension of the opening, flow restrictions may result.
2. The rathole diameter is the potential diameter of a hole in the center of the vessel through which powder will move when the type of behavior is “core flow.” The rathole diameter may change in value as a function of the powder depth in the vessel. Powder particles that are located radially outside of this diameter may become lodged in place over time and potentially not flow at all.
3. The hopper half angle is the required angle, relative to vertical in the hopper section, that is needed to achieve mass-flow behavior.
These three values can be used for the design of powder storage equipment, or to characterize reference powders that constitute benchmarks for future production batches.
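As an illustration of how these parameters feed into equipment design, the sketch below sizes a conical-hopper outlet against cohesive arching. The formula and the geometry factor are the classical Jenike criterion from standard bulk-solids practice, not something given in this article:

    # Sketch: minimum conical-hopper outlet diameter against cohesive
    # arching, using the classical Jenike criterion
    # B = H(theta) * sigma_crit / (rho_b * g). The formula and the factor
    # H(theta) ~ 2.0-2.4 are standard practice, not from this article.
    G = 9.81  # gravitational acceleration, m/s2

    def min_outlet_diameter_m(sigma_crit_pa: float,
                              bulk_density_kg_m3: float,
                              h_theta: float = 2.2) -> float:
        return h_theta * sigma_crit_pa / (bulk_density_kg_m3 * G)

    # Example: critical failure strength 1.5 kPa, bulk density 600 kg/m3
    print(f"outlet diameter >= {min_outlet_diameter_m(1500.0, 600.0):.2f} m")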
Concluding remarks
Shear cells provide a scientific basis
for analytically predicting flowability
of powder in gravity discharge. Their
use is becoming more accepted
because improved designs for the
instrument make them affordable,
user friendly, and automatic in operation under control of a computer.
The most notable change in the past
year is the reduction in time needed
to run a standard flow-function test
from 45 min to 15 min. Productivity
gains with the current generation of
instrumentation certainly give rise to
their potential use in quality control
as well as R&D. The chemical process industries on the whole view
the shear cell as an important tool
for improving rapid scaleup of new
formulations into full production.
Edited by Gerald Ondrey
Acknowledgement
All photos courtesy of Brookfield Engineering Laboratories, Inc.
Author
Robert McGregor is the general
manager, global marketing and
sales for High-End Laboratory Instruments at Brookfield Engineering Laboratories, Inc. (11 Commerce Blvd., Middleboro, MA
02346; Phone: 508-946-6200
ext 7143; Email: r_mcgregor@
brookfieldengineering.com; Web:
www.brookfieldengineering.com).
He holds M.S. and B.S. degrees in mechanical engineering from MIT (Cambridge, Mass.; www.mit.edu).
Advantages Gained in
Automating Industrial
Wastewater Treatment Plants
Process monitoring and automation can improve efficiencies in wastewater
treatment systems. A number of parameters well worth monitoring, as well as tips for implementation, are described
JP Pasterczyk
GE Water & Process
Technologies
There is growing interest in automating wastewater treatment processes across a broad range of
industries. In particular, a paradigm
shift is starting in automating industrial
wastewater treatment in various sectors of
the chemical process industries (CPI), such
as foods (especially grain processing, sugars, sweeteners and edible oils), beverages
(mainly soft drink bottlers and breweries),
and hydrocarbon and chemical processing
(particularly petroleum and petrochemical
plants). The driving forces behind this evolution are economic. Wastewater process
optimization most often leads to a more
efficient use of chemicals, reduced energy
consumption and less solid waste.
Most wastewater-treatment systems use a
common sequence of steps (Figure 1), with
the purpose of first removing solid materials in the influent wastewater, recovering lost product, removing solids, fats, oils and greases (FOG), treating the water biologically, and chemically enhancing flocculation, coagulation and physical removal of the biological solids and sludge. The clarified and
decanted wastewater is the effluent that may
undergo tertiary treatments to be further oxidized or disinfected, or to undergo additional purification, including by granular activated carbon (GAC) or membrane separation, before reuse or discharge to a public sewer or open body of water.
FIGURE 1. Most wastewater treatment systems use a common sequence of steps to treat influent wastewater and then discharge, store or reuse it in line with local regulations. Automating this approach helps an operator more effectively manage and treat wastewater, saving time and money in the process
A fully optimized industrial wastewater-treatment plant will operate at a lower total cost of materials, labor and energy to do the following:
• Remove or reduce large solids and particulate matter (primary)
• Remove or reduce fats, free oil (and grease), dispersed oil and emulsions
• Remove organic materials efficiently (secondary) and withstand higher variable loading, with enhanced biological activated-sludge systems, through:
❍ Control of dissolved oxygen levels, minimizing energy required for aeration
❍ Maintaining food-to-mass ratio, pH and nutrient balance, minimizing chemical usage and system upsets
• Produce a readily settleable biological floc (small microbial mass) that takes less energy to coagulate and separate (Figure 2)
• Generate minimal volume of sludge and biosolids to dewater, minimizing energy, chemical usage and disposal costs
• Disinfect pathogens and produce effluent water quality for reuse or below discharge limits to an open body of water, waterway or public wastewater treatment plant
More advanced integration of
technologies can be applied to meet
requirements for reuse, whether
within the facility (for example, wash
water), for irrigation and agricultural
purposes, or higher-purity applications like clean-water utilities. Depending upon the reuse application
and corresponding water quality requirements, tertiary disinfection for
pathogens and final polishing with
GAC or reverse osmosis (or both)
may be needed.
Implementing process control
In general industry, process automation is ubiquitous and integral to
upstream control mechanisms and
production yield. Statistical process
control (SPC) can use process analytical technology to generate high-value data in real time and near-real time, and is critical to closely controlling processes, quality and maximum production yield. There is a prevailing
interest across industries to identify
opportunities to gain process knowledge by understanding process effluent streams. These waste streams
combine to become the wastewater
treatment influent. Companies are
investing in multiple tools, devices,
analyzers and sensors, and integrating these measurements into process automation and control systems for the wastewater treatment
plant (WWTP). They are looking at
collecting useful data with the right
parameters, and applying SPC tools,
previously reserved for production
purposes, to continually analyze and
optimize their wastewater treatment
processes. The proper design and
execution of experiments can help
show the pertinent relationships between multiple parameters that yield
the best process performance. The
application of this empirical process
knowledge can translate into significant performance improvements
and efficiencies.
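As one concrete example of applying SPC tools to wastewater data, the sketch below runs a simple Shewhart-style 3-sigma check on influent TOC readings; the baseline values and resulting limits are hypothetical:

    from statistics import mean, stdev

    # Sketch: Shewhart-style 3-sigma check on influent TOC readings.
    # Baseline data and limits are hypothetical, plant-specific values.
    baseline_toc_mg_l = [120, 118, 125, 122, 119, 121, 124, 117, 123, 120]
    center = mean(baseline_toc_mg_l)
    sigma = stdev(baseline_toc_mg_l)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma  # control limits

    for i, toc in enumerate([121, 126, 158, 119]):     # new readings
        if not (lcl <= toc <= ucl):
            print(f"reading {i}: TOC {toc} mg/L outside [{lcl:.0f}, {ucl:.0f}]")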
Process parameters
Depending upon the physical and
chemical characteristics of waste
streams, a number of treatment
modules are employed to remove, reduce and change waste-stream constituents, including, but not limited to, the following:
• Bar screens and strainers for grit
and particles
• API (American Petroleum Institute)
separators and corrugated plate
separation for free oil and grease
• Chemicals and dissolved or induced gas (or air) flotation for oily solids and emulsified oils
• Biological activated sludge and
advanced membranes for organics, nitrogen and heavy metals
• Physical and chemical clarification
and advanced membranes for
microbial flocs
• Chlorine (gas, hypochlorite and
chlorine dioxide solution) and
ozone for trace organics and
pathogens
• Granular activated carbon (GAC)
for organics
• Chemical disinfection for pathogens (typically chlorination)
• UV (ultraviolet) for pathogens,
trace organics and residual
ozone destruction
• Chemical pH neutralization
• Reverse osmosis for inorganics
and minerals
By employing a combination of
discrete (grab) and online measurements before, after and at intermediate process points, each
module’s performance can be
monitored and improved over time.
Some of the parameters measured
by the available probes, meters,
sensors and analyzers include:
flow, pH/ORP (oxidation-reduction
potential), conductivity, dissolved
oxygen (DO), suspended solids,
specific ions [for example: nitrogen
(ammonia, nitrates, nitrites), phosphorus (phosphates), chlorine],
total organic carbon, sludge density index and turbidity.
Free oil and grease: Before introduction of the waste stream to the
biological or activated sludge system, free oil and grease should be
removed or reduced to below a
maximum threshold of 50 mg/L, and
ideally below 25 mg/L, to avoid interfering with the microbial activity.
Some of the negative repercussions
of allowing excess levels of free oil
to come into contact with the biomass are rapid oxygen depletion,
encapsulation of the bacteria, and
foaming. Depending upon the levels of free oil and the geometry of the oil droplets, one can use API separators or corrugated-plate separation. Dispersed and emulsified oils are removed and reduced through a combination of chemicals for lowering pH and enhancing the dissolved or induced gas flotation unit(s).
Organic carbon: The influent organic carbon loading is a key process parameter for a WWTP, and has historically been quantified using chemical oxygen demand (a 2-h test) or biochemical oxygen demand (a 5-day test; BOD5). With the availability of online process instrumentation for
total organic carbon (TOC) analysis,
a direct measurement of the organic
concentration can be used to improve downstream performance.
Specifically, by knowing the exact
values of TOC, the plant can be operated to accommodate variation in
the amount of organics, and remove
them efficiently. For instance, there
is often an introduction of chemicals
(such as potassium permanganate,
hydrogen peroxide or chlorine) after
primary solids removal to reduce the
total oxygen demand, often referred
to as pre-oxidation. This step can
be eliminated with lower influent organic concentrations, or minimized
by using it only when the load is
above a threshold limit based on the
plant’s treatment capacity.
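The threshold logic described above can be reduced to a few lines; a minimal sketch with a hypothetical, plant-specific limit:

    # Sketch of the pre-oxidation threshold logic described above. The
    # 200 mg/L limit is hypothetical and would be set from the plant's
    # treatment capacity.
    PREOX_TOC_THRESHOLD_MG_L = 200.0

    def preoxidation_enabled(influent_toc_mg_l: float) -> bool:
        return influent_toc_mg_l > PREOX_TOC_THRESHOLD_MG_L

    for toc in (150.0, 240.0):
        state = "ON" if preoxidation_enabled(toc) else "OFF"
        print(f"TOC {toc:.0f} mg/L -> pre-oxidation {state}")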
Dissolved oxygen: In a biological or
activated sludge system, there is an
opportunity to adjust the amount of
dissolved oxygen generated by the
aeration system to a level commensurate with the organic load, while
avoiding excessive aeration that can
shear or tear the biological flocs,
which in turn reduces the overall effectiveness of organics and biosolids removal. Continuous monitoring
of influent organic loading and dissolved oxygen levels in select zones
of the activated sludge basin provides an opportunity to optimize the aeration system, the largest energy expense in the operation of a WWTP.
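A real installation would typically run a cascade PID on the blowers, but the idea of pacing aeration to measured DO can be sketched with a simple proportional trim; the setpoint and gain below are hypothetical:

    # Sketch: proportional trim of blower output to hold dissolved oxygen
    # (DO) near setpoint without over-aerating. Setpoint and gain are
    # hypothetical; a plant would more likely use a tuned cascade PID.
    DO_SETPOINT_MG_L = 2.0  # assumed basin target
    KP = 15.0               # % blower output per mg/L of DO error

    def blower_output_pct(do_measured_mg_l: float,
                          base_output_pct: float = 50.0) -> float:
        error = DO_SETPOINT_MG_L - do_measured_mg_l
        return max(0.0, min(100.0, base_output_pct + KP * error))

    print(blower_output_pct(1.4))  # low DO -> more air (59.0%)
    print(blower_output_pct(3.0))  # high DO -> less air (35.0%)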
Food-to-mass ratio: Industrial
wastewater-treatment systems are
looking at the ratio of organic load
or “food,” to the total biomass present in the biological system. The
biomass of the mixed liquor can be
estimated by measuring mixed liquor suspended solids and sludge
density. This F:M or food-to-mass
ratio, is a critical process control
parameter that can indicate system
overload or when there are insufficient organics to “feed” the microbial population. The plant operation
can use near realtime information
and take actions to address and improve process conditions before they become a stress to the biological system.
FIGURE 2. Wastewater treatment often involves settling of solids in a tank such as this one
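The F:M calculation itself is straightforward; the sketch below uses the conventional definition (influent BOD load over biomass inventory estimated from MLSS), with illustrative numbers:

    # Sketch of the conventional F:M calculation (the article describes
    # the ratio but does not give a formula). Numbers are illustrative.
    def f_to_m(flow_m3_d: float, bod_mg_l: float,
               basin_volume_m3: float, mlss_mg_l: float) -> float:
        """F:M in kg BOD per kg MLSS per day (mg/L * m3 / 1000 = kg)."""
        food_kg_d = flow_m3_d * bod_mg_l / 1000.0       # influent BOD load
        mass_kg = basin_volume_m3 * mlss_mg_l / 1000.0  # biomass inventory
        return food_kg_d / mass_kg

    # 5,000 m3/d at 300 mg/L BOD; 2,500-m3 basin at 3,000 mg/L MLSS
    print(f"F:M = {f_to_m(5000.0, 300.0, 2500.0, 3000.0):.2f} per day")  # 0.20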
Nutrient addition: The organic or carbon
loading can be used to assure the most
appropriate levels of nutrients, specifically
nitrogen and phosphorus, and improve the
efficiency of the biological system. The proportion of carbon to nitrogen to phosphorus, commonly referred to as the CNP ratio,
conventionally follows 100:10:1 (using BOD5
instead of carbon). The amount of nitrogen
or phosphorus present in a system depends
upon the upstream processes and can be
optimized using chemical addition, often
through pH control. For example, if there is a
deficient amount of phosphorus, and a basic
pH, phosphoric acid can be used to reduce
the pH while supplementing the phosphorus
concentration. Supplemental nitrogen can
be added using nitric acid, urea or anhydrous ammonia.
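A minimal sketch of a nutrient check against the 100:10:1 (BOD5 basis) CNP rule described above; the measured values are illustrative:

    # Sketch: nutrient deficits from the 100:10:1 CNP rule (BOD5 basis)
    # described above. Measured values are illustrative.
    def nutrient_deficit_mg_l(bod5: float, nitrogen: float, phosphorus: float):
        n_required = bod5 * 10.0 / 100.0  # 10 parts N per 100 parts BOD5
        p_required = bod5 * 1.0 / 100.0   # 1 part P per 100 parts BOD5
        return (max(0.0, n_required - nitrogen),
                max(0.0, p_required - phosphorus))

    n_add, p_add = nutrient_deficit_mg_l(bod5=400.0, nitrogen=25.0,
                                         phosphorus=2.0)
    print(f"supplement N by {n_add:.0f} mg/L, P by {p_add:.0f} mg/L")  # 15, 2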
Clarification: The flocculation and coagulation steps, which allow small microbial flocs to form and join together for removal by clarification (Figure 3), are achieved through a combination of chemical addition and physical separation. The chemical feedrates are typically flow-paced, metered in direct proportion to the system flowrates. By utilizing online organic measurements, the chemical addition can be “trimmed” for better performance at a lower chemical cost.
FIGURE 3. The flocculation and coagulation steps, which allow small microbial flocs to form and join together for removal by clarification, are achieved through a combination of chemical addition and physical separation
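Flow-paced feed with an organics “trim” can be sketched as follows; the base dose, reference TOC and trim bounds are hypothetical, plant-specific numbers:

    # Sketch: flow-paced coagulant feed with an online-organics "trim."
    # Base dose, reference TOC and trim bounds are hypothetical.
    BASE_DOSE_MG_L = 30.0       # design dose at the reference TOC
    REFERENCE_TOC_MG_L = 100.0

    def coagulant_feed_kg_h(flow_m3_h: float, toc_mg_l: float) -> float:
        trim = toc_mg_l / REFERENCE_TOC_MG_L     # proportional correction
        trim = max(0.5, min(1.5, trim))          # bound the trim action
        return flow_m3_h * BASE_DOSE_MG_L * trim / 1000.0  # kg/h

    print(f"{coagulant_feed_kg_h(200.0, 80.0):.1f} kg/h")  # lower TOC -> 4.8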
Nitrogen removal: Systems with
excess nitrogen can employ a biological or membrane-enhanced
nitrification/denitrification process
after the aerobic, activated sludge
system. Nitrifying bacteria can convert ammonia nitrogen to nitrite,
then nitrate, which can then be denitrified to nitrogen gas. These bacteria are more sensitive to process
changes, particularly temperature,
and may require an alternate food source, such as methanol or molasses, as a supplement when nitrogen levels are low. Online nitrogen
and organic measurements can be
used to regulate the amount of organic food sources used in these
applications.
Heavy metals: Some residual
heavy metals, such as arsenic and
selenium, can be removed through
chemical, physical, biological and/
or membrane-enhanced processes.
These processes may require a combination of pretreatment, pH control and physical treatment steps.
Final polishing and purification: Tertiary
treatment typically refers to final polishing,
but can be interpreted differently by industry and is dependent upon the composition
of the water and the next purpose, whether
some form of reuse or discharge. Disinfection can be accomplished by several different chemical and physical methods, such as
chlorine gas, sodium or calcium hypochlorite solution, chlorine dioxide, ozone, and
UV light (254 nanometer wavelength). After
disinfection, the end-of-pipe purpose will determine if additional treatment is necessary.
Some industrial utilities have reused wastewater with a GAC step to adsorb organics
and excess chlorine, and reverse-osmosis
membrane separation to remove inorganics
and trace organics, achieving higher purity.
Managing process upsets: Upsets in the
wastewater process can affect removal efficiencies at each treatment step. More severe
upsets can overload a system, even leading to the loss of an entire activated-sludge
biomass. The cost and time to reseed and
restore lost biomass are significant, often
upwards of tens of thousands of dollars and
several months. Real-time and near-real-time detection can also be used to prevent or mitigate
the negative impact of process upsets. In
the case of an unexpected event or excessive “shock” load to the system, the influent,
online TOC measurement can be used to
automatically divert to an equalization basin
or temporary storage vessel, sometimes referred to as a calamity tank.
Effluent discharge monitoring: Meeting regulatory requirements for effluent discharge levels is critical to any business operation. There are continuous monitors for
many of the common effluent-wastewater-quality characteristics, including pH, dissolved oxygen, total dissolved solids, total
suspended solids, and total organic carbon
(often used to trend chemical and biochemical oxygen demand). Finally, effluent pH for
discharge should almost always be neutral,
ideally pH 6.8–7.2.
Solids disposal: The biosolids produced
from management of the activated
sludge volume in the aeration basins
and during clarification are typically dewatered using a belt press
or centrifuge, before being used as
fertilizer or disposed of as waste.
The cost of sludge handling and dewatering, in energy, chemical usage
and disposal, is often the second
highest expense in a wastewater
treatment facility, after aeration. The
ability to use the dewatered sludge
as fertilizer is dependent upon the
content of undesirable constituents,
such as heavy metals or residual
pathogens, including fecal coliforms
such as E. coli (Escherichia coli).
Instead of land application for agricultural purposes, the solid waste
can be compacted or incinerated
(or both) to reduce volume for disposal. A more sustainable approach
is sending the sludge to anaerobic
digesters to produce methane gas,
which can be fed to gas-fired turbines to generate electricity.
Implementing process analytics
The data for each measured parameter can be tracked through
a data collection and visualization
system. A wide range of commercially available software, as well as
discrete supervisory control and
data acquisition (SCADA) systems,
are employed by treatment facilities
to monitor critical and complementary water-quality characteristics.
With these tools, each treatment
module indicates the measured parameters before, during and after
treatment, while steady-state conditions can be established to better detect and anticipate upset and
sub-optimal conditions. Many parameters integrate into a feedback
or feed-forward loop for chemical
feed, becoming statistical process
control applications. New multivariate relationships can be tested and inferred through sound experimental design and intrinsically valid statistical analyses. Good process data lead to process understanding, and SPC brings and maintains
processes in control. Empirical evidence can support or modify preliminary assumptions and control
schemes. This acquired learning
can be impacted by changes in
the upstream processes, as well as
seasonal variations in environmental conditions such as ambient temperature and rainfall.
By employing continuous process
monitoring tools and integration to
automation and process control systems, more industries are finding better ways to effectively manage and
treat their process and wastewater
effluents. This automation provides
more predictable and controllable processes, reducing the frequency of upsets and assuring a more consistent
effluent that meets discharge requirements. The efficiency of the biological
system to remove organics depends
upon the quality of the upstream processes — oil and grease and solids
removal, and the controllable, ambient conditions, such as dissolved
oxygen, food-to-mass ratio and nutrient balance (CNP ratio). Utilization of
process analytical instrumentation and
automation controls enables these
facilities to reduce total chemical and
energy consumption, and solid waste
disposal, by maintaining the dynamic
treatment system in an optimal operational state.
Edited by Dorothy Lozowski
Author
J.P. Pasterczyk is the corporate key accounts manager, analytical instruments, for GE Water &
Process Technologies (6060 Spine
Road, Boulder, CO 80301-3687;
Email: john.pasterczyk@ge.com;
Phone: 720-622-0166). He has
25 years of international experience in water and wastewater
treatment, from water quality
monitoring to pretreatment, biological treatment processes and disinfection. Pasterczyk has spent the last
17 years with GE’s Analytical Instruments, primarily focused on total organic carbon analysis and integration
of water quality monitoring with process automation in
petroleum refining and petrochemicals, chemical, municipal water, pharmaceutical and semiconductor industries. He is an expert in industrial wastewater treatment,
applied statistics, statistical process control and optimization, Lean Six Sigma methods and advanced quality
management systems. Pasterczyk received a B.S. degree in physics from Drexel University and a Master of
Engineering degree from the Lockheed Martin Engineering Management Program at the University of Colorado, specializing in business performance excellence
and applied statistics/Six Sigma.