Risk Analysis, Vol. 20, No. 1, 2000
Categorizing Risks for Risk Ranking
M. Granger Morgan, H. Keith Florig, Michael L. DeKay, and Paul Fischbeck (Carnegie Mellon University, Pittsburgh, PA)
Any practical process of risk ranking must group hazards into a manageable number of categories. Defining such categories requires value choices that can have important implications
for the rankings that result. Most risk-management organizations will find it useful to begin
defining categories in terms of environmental loadings or initiating events. However, the resulting categories typically need to be modified in light of other considerations. Risk-ranking
projects can benefit from considering several alternative categorization strategies and drawing upon elements of each in developing their final categorization of risks. In principle, conducting multiple ranking exercises by using different categorizations could be interesting and
useful. In practice, agencies are unlikely to have either the resources or patience to do this,
but other groups in society might. Done well, such additional independent rankings could
add valuable inputs to democratic risk-management decision making.
KEY WORDS: Risk ranking; risk categorization
1. INTRODUCTION
The past decade has witnessed growing interest in
risk ranking. The U.S. Environmental Protection
Agency (EPA) has been especially active, running three
ranking exercises at the agency level(1–3) and supporting
almost 50 local and regional “comparative risk” ranking
projects through its Regional and State Planning Bureau.(4–6) In an effort to gauge priorities of nonexperts
and nonstakeholders, many of these exercises have involved laypersons. Elsewhere in the U.S. government,
the Occupational Safety and Health Administration has
recently undertaken a broad priority-setting exercise
that involves considerable input from stakeholders in
labor and industry, as well as from the scientific community.(7) In addition, the Federal Aviation Administration
has launched an effort to prioritize its safety initiatives.(8)
Some foreign governments have also pursued ranking of national environmental priorities.(9,10)
Several choices must be made when preparing to rank risks. Risk-management agencies are typically responsible for thousands of specific hazards. Comparing and ranking such large numbers of risks systematically is infeasible, especially for lay groups. Therefore, one must first group risks into a manageable number of categories. Then, because risk is a multiattribute concept,(11) one must choose a set of risk attributes against which to evaluate each category. Finally, one must develop descriptions of each category of risks in terms of the attributes. These descriptions, which we refer to as risk summary sheets, are needed to provide concise and systematic information on each risk to those who perform the ranking.

This paper addresses the problem of grouping risks into categories. (In other writings, we address the issues of choosing attributes,(12) preparing risk summary sheets,(13) and methods that individuals and groups can use to rank risks.(14)) We begin with a brief critique of the categorizations used in past risk-ranking exercises in the environmental domain. We then review selected literature on the general subject of categorization. From this review, it will become apparent that
no unique categorization exists for risks, and that different categorizations will best serve different objectives. In the balance of the paper, we use this and
other insights from the literature and our own risk-ranking experience to prescribe a practical strategy
that we believe agencies and other groups should use
when categorizing risks in preparation for risk ranking.
Our practical knowledge of risk ranking comes
from three sources: the literature on a number of
ranking efforts;(5,15–19) our own work in assisting Allegheny County, Pennsylvania, in formulating and
conducting a ranking exercise;(20,21) and studies that
we have conducted in developing and using an experimental test bed involving 22 health and safety risks
to children in a fictitious middle school.(13) The test
bed has proven valuable in conducting empirical
studies of alternative ranking procedures. However,
because the school environment is both familiar and
well bounded, the task of grouping school risks into a
manageable number of categories was relatively
straightforward. For the diverse set of risks managed
by large government agencies, the task of categorizing risks becomes much more complex.
2. CATEGORIZATION IN PAST
ENVIRONMENTAL RISK RANKINGS
The problem of categorizing risks for ranking
has received little formal attention in the many national and local comparative risk exercises that EPA
has spawned to date. The risk categories that have
been used in these exercises are fairly similar to each
other, largely because many of these projects have
started with a list of core environmental problems
published in an EPA guidebook on comparative
risk.(4) Unfortunately, EPA’s basis for risk categorization is not entirely clear. Drinking-water supplies and
groundwater are singled out as resources that can be
degraded, but “air” is not. Some point sources of pollution are categorized by the entity responsible for
creating the risk, but others are not. Municipal waste
water and solid waste are distinguished from industrial waste water and solid waste, but no such public/
private distinction is made for sources of air pollution. A number of categories are defined by the physical agent directly responsible for harm. These include
radon, “other” radiation, particulate matter, sulfur
dioxide, ozone, pesticides, lead, and carbon dioxide.
But hundreds of other agents (e.g., trihalomethanes,
itself a category of agents) are not explicitly listed.
One risk category, accidental chemical releases, distinguishes between routine and accidental events.
Others do not. Finally, RCRA and Superfund hazardous waste sites are distinguished from each other,
presumably because each is covered by a different
administrative program, even though other characteristics of the risks from RCRA and Superfund sites
are very similar. EPA’s guidebook on comparative
risk does not offer a rationale for choosing these particular categorizations, nor are alternative schemes
explored. A survey of participants in past comparative-risk projects revealed criticisms that “problem areas
were too broad, lacked common measurement criteria,
and failed to distinguish past and present activities.”(16)
The lack of attention to the implications of alternative risk categorizations in past risk-ranking exercises is a problem because the results of risk-ranking
efforts can be very sensitive to the way that risks are
divided up in the first place. In addition, using mixed
criteria for defining categories (e.g., affected resource
in some cases and human health endpoint in others)
can result in biases of inclusion and exclusion, as well
as double counting. This may not be a problem if, as
noted by Minard, agencies that sponsor comparative-risk exercises “are putting a higher premium on using
the list as a risk-communication tool rather than on
its intellectual consistency”(5) or are using ranking as a
vehicle to promote stakeholder interaction. However, if an agency wants to use the results of a risk-ranking project as input to risk management, explicit
analysis of the basis for choosing risk-categorization
schemes is clearly needed.
3. CATEGORIZATION IN THE
PSYCHOLOGY, NATURAL SCIENCE,
AND RISK LITERATURES
The substantial psychological literature on categorization is primarily concerned with how people
learn existing classification schemes and how they develop classifications for sets of objects or concepts.
Most of the work prior to the late 1970s adopted a
tradition that Komatsu(22) has termed the similarity-based view. This tradition holds that the inclusion of a
particular instance in a category is based upon the
similarity of that instance to some abstract specification. As originally formulated, the similarity-based
view “held that all instances of a concept share common properties, and that these common properties
were necessary and sufficient to define the concept.”(23) Later literature in this tradition adopts less
restrictive, sometimes probabilistic, formulations in
which (1) individual instances bear a “family resemblance” to an idealized category member referred to
as a “prototype,” (2) categories are constructed on
the basis of similarity comparisons among the individual instances or “exemplars,” or (3) categories
result from the use of a cognitive “schema” that often includes both the prototype and the individual
exemplars.(22,23)
A second tradition, which Komatsu terms the explanation-based view of categorization, developed in
the 1980s. In this literature, which draws upon cognitive schemas and knowledge-based representations
of the world, categorizations are viewed as constructed on the basis of relationships that exist between instances and categories. In the words of Keil,(24)
“coherent belief systems, or ‘theories,’ are critical to
understanding the nature of concepts and how they
develop.” Arguing in this tradition, Medin and
Ortony(25) suggest that “people act as if their concepts
contain essence placeholders that are filled with ‘theories’ about what the corresponding entities are” and
that “these theories often provide or embody causal
linkages to more superficial properties.” These authors suggest that “organisms have evolved in such a
way that their perceptual (and conceptual) systems
are sensitive to just those kinds of similarity that lead
them to the deeper and more central properties.”
They term this formulation psychological essentialism, arguing that although real physical objects may
not have Aristotelian essences, people’s cognitive
models of such objects do.
One general characteristic of the explanation-based view of categorization is that objects, which on
superficial consideration have little or nothing to do
with each other, may be linked to form a category by
some underlying scenario or knowledge structure.
Medin and Ortony(25) illustrate this point by describing whales as mammals that look like fish. Barsalou(26)
gives an example of a category consisting of children,
pets, photo albums, family heirlooms, and cash—the
set of things one should grab on leaving the house in
the event of a fire. Clearly this is a category that takes
on meaning only because of an underlying set of
knowledge and beliefs. Indeed, it is a category that is
meaningful only in a particular decision context that
clearly defines the actor’s goals for categorization.
Komatsu(22) notes that “the explanation-based view
casts doubt on the assumption that concepts can be
understood independently of the inferences they enter into. It may even cast doubt on the assumption
that concepts are stable representations. Abandoning
these two assumptions makes the study of concepts
enormously more complicated.”
In the natural sciences, the objective of classification has frequently been to develop a “natural system” that is “believed to be in accord with nature.”(27)
However, here too issues arise regarding the features
that should be used to form the categories, whether
these features are directly observable (e.g., similarity
in bone structure), or depend upon some deeper
structure (e.g., similarity in genetic code). Sokal(27)
draws attention to the distinction between monothetic
classification systems, “in which the classes established
differ by at least one property that is uniform among
the members of each class” and polythetic systems in
which “individuals or objects that share a large portion of their properties . . . do not necessarily agree in
any one property.” Note that this distinction has
strong parallels with the development of the similarity-based view mentioned previously.
Cvetkovich and Earle(28) have identified two motivations for classification in environmental psychology. The first, which they term essentialist, aims to
“devise systems that accurately reflect categories of
phenomena (taxa of animals or plants for example)
as they exist in nature.” The second motivation, which
they term constructivist, is based upon human decisions constrained by knowledge of the world. This approach “considers classification systems as aids to
thinking and communicating and assumes that there
is no single generally best classification system. The
quality of a classification system depends upon how
well the functions of analysis and communication are
performed. Thus, whether one system is better than
another depends upon the specific aims of its user.”3
Cvetkovich and Earle review a large number of alternative systems that have been proposed by different
authors for classifying hazards. Three things are apparent from this review: (1) As with the explanation-based
psychological approaches described by Komatsu, risk
classification schemes can vary greatly depending
upon the underlying perspective and knowledge used
in their construction; (2) different objectives are best
served by different classification schemes; and (3)
given the inherent multivariate nature of risk, virtually all classification schemes that have been proposed are, in Sokal’s terms, highly polythetic.
3 Note the confusion in nomenclature. What Cvetkovich and Earle term “constructivist” is very similar to what Medin and Ortony term “essentialist,” whereas what Cvetkovich and Earle term “essentialist” is very similar to what Komatsu terms the “similarity-based view.”
4. CRITERIA FOR EVALUATING
CATEGORIZATIONS
Although experts may be able to describe a risk
objectively in terms of specified attributes, deciding
which of two hazards is more risky requires value
judgments. For example, in our middle-school test
bed,(13) commuting to and from school by private car,
bicycle, or on foot produces the largest expected
number of annual deaths, whereas transporting hazardous materials over nearby rail and road corridors
produces a much lower expected number of annual
deaths. On the other hand, in the unlikely event that
it happened, a catastrophic hazardous material transportation accident could kill many more students
than the largest imaginable commuting accident. Which
of these risks someone is more concerned about depends in large part on a value judgment about the relative importance of annual expected mortality versus
large numbers of fatalities in low probability accidents.4
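To make this trade-off concrete, the short Python sketch below uses purely hypothetical event probabilities and consequence sizes (not the test-bed estimates) to show how two hazards can swap places depending on whether one ranks by expected annual deaths or by worst-case consequences.

    # Illustrative only: hypothetical numbers, not estimates from the test bed.
    hazards = {
        # name: (annual probability of a fatal event, deaths if the event occurs)
        "commuting":        (0.01,  3),    # relatively frequent, small consequences
        "hazmat_transport": (1e-6, 30),    # very rare, larger consequences
    }

    for name, (p_event, deaths_per_event) in hazards.items():
        expected_annual_deaths = p_event * deaths_per_event
        print(f"{name:18s} expected annual deaths = {expected_annual_deaths:.1e}, "
              f"worst single event = {deaths_per_event} deaths")

    # Ranking by expected annual deaths puts commuting first; ranking by
    # catastrophic potential puts hazmat transport first. Choosing between
    # the two orderings is a value judgment, not a calculation.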
Judgments about the relative importance of different attributes are not the only value judgments required in risk ranking. The process of categorizing a
large number of hazards into a manageable number of
categories also requires value judgments. Because different people and organizations have different concerns about risks, there can be no entirely value-free
way to classify risky phenomena.(29) In the case of the
school, should commuting in school buses be combined in the same category with commuting in private
cars, bicycling, and walking? Should the common cold
be included in the same category as more serious infectious diseases such as pneumonia and meningitis?
Should risks from chlorine transport on the nearby
railroad be combined with risks from on-site chlorine
gas, which is stored in pressure tanks at the school for
treatment of the water in the swimming pool?5 The choices one makes about such options can have important consequences for the rankings that result.

4 In our middle-school test bed, hazardous material transport yields roughly 10⁵ fewer expected annual fatalities than commuting. The largest plausible hazardous material accident kills roughly 10 times as many students as the largest plausible commuting accident. In risk-ranking exercises that we have run, commuting is ranked as one of the risks of greatest concern, whereas hazardous materials transport receives low rankings.

5 In our middle-school test bed, school-bus accidents (which yield, on average, about 0.0002 deaths per year) are categorized separately from private commuting (which yields about 0.03 deaths per year). The risks associated with driving, walking, and bicycling are reported separately in the risk summary sheet. Chlorine is not treated separately from other hazardous material accidents, although it is singled out as a major contributor to risk in the discussion in the risk summary sheet. Risks from chlorine used to treat swimming pool water are included in a separate risk summary sheet that describes “accidents.”
5. CHOICE OF CATEGORIES SHOULD
REFLECT RISK-MANAGEMENT
OBJECTIVES
Given the wide variety of dimensions along which
risks can be considered similar or dissimilar, we believe
that similarity-based categorization of risks is inherently ambiguous. An explanation-based approach, in
which risks are characterized in accordance with the
goals of the organization that desires the ranking, is
likely to be much more useful.
For regulatory agencies charged with protecting
health, safety, or the environment, we find it helpful
to think about alternative categorization strategies in
terms of causally linked risk processes.(30–33) These begin with human (or possibly natural) activities that
can give rise to environmental loadings (such as the
release of a pollutant) or accident-initiating events.
These, in turn, lead to exposure and effects processes,
which are then perceived and valued by people. In
previous writings on risk categorization, both Cvetkovich and Earle(28) and Webler et al.(29) have adopted a
similar strategy, although they did not consider the
role of categorization in risk ranking per se. Figure 1
illustrates these causally linked risk processes and,
using air-pollution risks as an example, lists alternative criteria that could be used as a basis for categorization at various stages.
Fig. 1. Examples of alternative categorization schemes that could be employed at different points along the chain of risk processes. The examples given are for air-pollution risks, but the basic categorization strategies can apply to any domain of risk.

A consideration of each element in this diagram leads to additional factors that might be used in creating categories. For example, in addition to the kind of facility that creates the environmental loading
(power plant, steel mill, etc.), one might consider the
facility’s organizational or legal status (profit, nonprofit, cooperative, municipal, etc.). In addition to the
age, sex, and race of the persons affected by the pollutant, one could consider more complex historical
and relational social factors (families, economically
depressed areas, culturally unique artifacts, etc.).
Risks differ in terms of where along the chain of
risk processes management intervention is most commonly or efficiently applied. Interventions might focus
on modifying human activities, reducing inventories,
limiting the release of hazardous materials, shielding
or otherwise protecting those who could be exposed,
ameliorating damage once it has occurred, or compensating victims. Because most risk-management organizations devote most of their attention to the early
stages of the chain, in most cases it makes sense for
them to begin the process of categorization by sorting
risks according to human activities, environmental
loadings, or initiating events.
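As a rough illustration of this strategy (the risk records, stage labels, and groupings below are invented for the sketch rather than taken from Fig. 1), each risk can be described at every stage of the chain, and the choice of which stage to group on yields a different categorization of the same set of risks.

    from collections import defaultdict

    # Hypothetical records: each risk carries a descriptor for several stages
    # of the causal chain (activity -> loading -> exposure -> effect).
    risks = [
        {"name": "power plant SO2",   "activity": "electricity generation",
         "loading": "sulfur dioxide", "exposure": "ambient air", "effect": "respiratory illness"},
        {"name": "diesel exhaust",    "activity": "freight transport",
         "loading": "fine particles", "exposure": "ambient air", "effect": "respiratory illness"},
        {"name": "residential radon", "activity": "housing",
         "loading": "radon gas",      "exposure": "indoor air",  "effect": "lung cancer"},
    ]

    def categorize(risks, stage):
        """Group risks by their descriptor at one stage of the chain."""
        groups = defaultdict(list)
        for r in risks:
            groups[r[stage]].append(r["name"])
        return dict(groups)

    # Grouping on different stages yields different categorizations.
    print(categorize(risks, "loading"))   # categories defined by pollutant released
    print(categorize(risks, "effect"))    # categories defined by health endpoint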
However, categorization that is based only on
these early steps in the chain will probably not incorporate all the attributes that regulators and the public
find important. Having begun categorization in terms
of environmental loadings or initiating events, one
will typically need to modify categories in light of
other considerations. The revision process that we
went through to categorize the 22 risks in our middle-school test bed is described in Florig et al.(13) Figure 2 illustrates how such logic might apply to the categorization of risks associated with air pollution.

Fig. 2. Illustration of how consideration of various factors along the chain of risk processes can modify an initial categorization on the basis of a choice of loading (or initiating event).
The final list of categories in Fig. 2 looks fairly
standard, and it is tempting to stop at this stage. However, we believe that developing several alternatives
before settling on a final categorization is useful. Figure 3 illustrates the logic that could be used in developing such alternative categorizations for the case of
air pollution.
In addition to the factors illustrated in Figs. 1 and
2, various other considerations, including agency administrative organization and regulatory statute, may
sometimes be relevant. These and other considerations are discussed in the next section.
Careful consideration of several alternative categorizations encourages broad systematic thinking
about what is important about the risks that are to be
categorized and ranked. In choosing a final categorization, it is typically desirable to draw upon elements of
several of the alternatives that have been considered.
Fig. 3. Several alternatives to the categorization shown in Fig. 2. Although an organization responsible for managing air-pollution risk is unlikely to use any of these categorizations in place of that developed in Fig. 2, it might well wish to refine the categorization of Fig. 2 in light of insights gained by
considering these alternatives. Private organizations, with different
agendas, might wish to conduct their own ranking exercises by using such alternative categorizations.
In principle, conducting multiple ranking exercises by using different categorizations could be interesting and useful. In practice, agencies are unlikely
to have either the resources or the patience to engage
in multiple rankings. But of course, there is no reason
why careful, well-executed risk rankings should be the
sole domain of risk-management agencies. Although
risk-management agencies will probably want to operate with categories that are derived from an initial
consideration of loadings and initiating events, organizations that are more concerned with equity or the
distribution of social and economic power may wish
to commission rankings that are performed by using
very different classification schemes. If done well,
such rankings could add valuable inputs to democratic risk-management decision making.
6. WORKING OUT THE DETAILS
Criteria for judging the merits of specific risk categorizations have been mentioned previously,(4,34,35)
but these have not been fleshed out to any extent. In
addition to reflecting risk-management objectives as
argued previously, an ideal risk-categorization scheme
should be logically consistent, compatible with administrative systems, equitable, and compatible with
human cognitive limitations. Table I elaborates on
these objectives. Applying the criteria of Table I to
the task of developing categories is complicated because these criteria often conflict. In this section, we
draw upon our experience in developing the middle-school test bed and in assisting with risk-ranking
exercises in Allegheny County, Pennsylvania, to discuss some of the difficulties and conflicts that can
arise and identify some of the subtle ways in which
category development can influence how risks are
framed and compared.
6.1. Logical Consistency
Risk categories should be exhaustive, encompassing all relevant risks. But to assure that nothing
falls through the cracks, category definitions might
need to be quite exact. Creating lists of subcategories,
and even sub-sub-categories, can be helpful in assuring that inclusion and exactness criteria are satisfied.
Note that these subcategory lists would be used only
to help define the categories and would not, themselves, be ranked.
Risk categories should be mutually exclusive so
that management priorities derived from risk rankings
can be unambiguously targeted. Using the subcategory exercise described previously can help identify
overlaps between categories. In our middle-school
test bed, we identified a number of overlapping risks
in the process of preparing the risk summary sheets
describing each risk category. For instance, in the process of preparing the risk summary sheets for “Allergens” and “Food Poisoning” in our middle-school
test bed, we realized that risks from school cafeteria
food containing allergens could fit in either of these
categories.
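A minimal sketch of that bookkeeping, with illustrative category and subcategory labels loosely following the allergen example above: list candidate subcategories under each category and flag any subcategory that appears under more than one category, since such an entry signals an overlap that the definitions need to resolve.

    # Hypothetical subcategory lists used only to test the category definitions;
    # the subcategories themselves would not be ranked.
    categories = {
        "Allergens":      {"pollen", "mold", "allergens in cafeteria food"},
        "Food poisoning": {"bacterial contamination", "allergens in cafeteria food"},
    }

    # Record which categories each subcategory appears under.
    seen = {}
    for category, subcategories in categories.items():
        for sub in subcategories:
            seen.setdefault(sub, []).append(category)

    # Any subcategory listed under two or more categories marks an overlap.
    overlaps = {sub: cats for sub, cats in seen.items() if len(cats) > 1}
    print(overlaps)   # {'allergens in cafeteria food': ['Allergens', 'Food poisoning']}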
Table I. Desirable Attributes of an Ideal Risk-Categorization System for Risk Ranking
Categories for risk ranking should be:
Logically consistent
• Exhaustive so that no relevant risks are overlooked.
• Mutually exclusive so that risks are not double-counted.
• Homogeneous so that all risk categories can be evaluated on the same set of attributes.
Administratively compatible
• Compatible with existing organizational structures and legislative mandates so that lines of authority are clear and management actions at
cross purposes are avoided.
• Relevant to management so that risk priorities can be mapped into risk-management actions.
• Large enough in number so that regulatory attention can be finely targeted, with a minimum of interpretation by agency staff.
• Compatible with existing databases, to make best use of available information in any analysis leading to ranking.
Equitable
• Fairly drawn so that the interests of various stakeholders, including the general public, are balanced.
Compatible with cognitive constraints and biases
• Chosen with an awareness of inevitable framing biases.
• Simple and compatible with people’s existing mental models so that risk categories are easy to communicate.
• Few enough in number so that the ranking task is tractable.
• Free of the “lamp-post” effect, in which better understood risks are categorized more finely than less understood risks.
Sometimes making risk categories distinct is
difficult because some risks are coupled in a risk-management sense so that risk-management measures that affect one category can reduce or exacerbate risks in another category. For example, measures
to reduce the health and ecological impacts of sulfur
dioxide emissions from coal burning can accelerate
climate warming by removing fine particles from the
atmosphere, thus decreasing the planetary albedo.
Such links between categories weaken their independence and complicate the task of deriving risk-management priorities from risk rankings.
Risk categories should be defined so that they
can be evaluated on a common set of attributes.
Sometimes finding a common set of attributes to describe all risk categories of interest is difficult because the categories are so diverse. For instance, the
attributes of behavioral risks such as drug abuse
and sexually transmitted diseases are quite different from those of environmental risks such as radon
and lead dust from old paint. Yet a school district
examining school risks might want to include both
behavioral and environmental risks in the same
ranking exercise.
6.2. Administrative Concerns
Most people would agree that the risks addressed by risk ranking within an agency should include at least the set of risks for which the agency has
a legislative mandate. Often, however, agencies reach
beyond the minimum set that is required by law.
EPA’s attention to naturally occurring radon and the
Food and Drug Administration’s (FDA’s) attention
to tobacco are examples of such behavior. In selecting risk categories for risk ranking, one might ask
how far outside an agency’s legislative mandate risk
categories should be defined. If EPA includes radon
in its ranking exercise, should it also include other
“natural” environmental hazards such as soil dust
particles or natural allergens in the air? Because an
important objective of ranking is to examine the adequacy of current agency priorities, we think being
more expansive and inclusive is generally wise.
Making categories compatible with technical,
regulatory, and organizational elements can sometimes lead to awkward divisions of risks. For instance,
indoor tobacco smoke is regulated in public buildings
but not in private homes. Radon from industrial
sources and mill tailings is regulated, but radon in
homes is not. Therefore, risks from indoor air pollution in homes might be placed in a different category
than identical risks elsewhere.
What about risks that are currently unknown or
poorly understood and therefore not specifically covered by existing legislation or organizations? These
might include both risks that are already addressed generically by legislation (e.g., carcinogenic effects from
human exposure to new industrial chemicals) and
those that are not (e.g., many types of unknown risks
to ecosystems). By including a general category of
poorly known but suspected risks (e.g., endocrine disrupters, power-frequency electric and magnetic fields),
agencies may be able to develop some sense of the
value that the public places on exploratory research.
Priorities derived from risk ranking ultimately
need to be mapped into risk-management actions.
This can be facilitated if risk categories are defined to
be management relevant, as described in Section 4.
Some risks have fallen completely off the radar
screen because, like boiler explosions, strategies for
their adequate management were adopted years ago,
or because, like horse manure on city streets, technological or societal changes have made them irrelevant
today. Other risks, such as radon, are currently only
partially controlled, even though additional management actions would in many cases be cost effective.
Still other risks, such as collisions between small cars
and sport-utility vehicles, are relatively new and thus
may not have been adequately addressed. In defining
risk categories, it is important to consider the extent
to which uncontrolled, partially controlled, fully controlled, or obsolete risks are to be included in the
analysis.
Data on population exposures to risks and risk
consequences are often collected by organizations
different from those responsible for risk management. Even when data collection and management
are done by the same agency, the office responsible
for data collection might not use “applicability to risk
management” as a strong criterion for designing data
collection, preferring instead to focus on criteria such
as tradition or convenience. As a result, databases are
often mis-aimed, too coarse, or too sparse to permit
agencies to divine some risk categories that might be
of interest. Risk ranking may offer a useful vehicle
for reevaluating an agency’s data-collection practices.
6.3. Equity Concerns
Risk categories can carry ethical implications.
Categories can be drawn to highlight risks produced
by a particular moral agent (e.g., industry, nature).
Categories can be drawn on the basis of some quality
of fairness, such as the extent to which those at risk
consent to being exposed or the extent to which the
benefits of the risky activity accrue to persons other
than those at risk. Categories might also be chosen on
the basis of specific attributes of a population such as
special vulnerabilities or past experience (as in arguments for environmental justice). Even when such
factors are not included in the process of categorization, considering them in later stages of risk ranking
is still possible by incorporating equity and controllability as attributes of each risk category.
6.4. Cognitive Concerns
The categorization of risks can have powerful
effects on the cognitive frames that people use in risk
ranking. Particular risks can be made to seem significant or insignificant, simply by shifting the risk bundles presented for ranking. Indoor air pollution, for
instance, may seem to have very different health impacts depending on whether radon is included among
the pollutants. Similarly, indoor air pollution may
seem to be dreaded or not depending on whether asbestos is included. EPA has noted this bundling effect
with respect to ground-water:
Ground-water can be contaminated by numerous
sources, such as pesticides, leaking underground storage tanks, salt/sand mixtures used to de-ice slippery
roads, solid and hazardous waste sites, improper storage of hazardous materials at all sorts of commercial
and industrial facilities, and residential septic systems.
Questions arise as to how to allocate these risks. At first
glance, counting ground-water risks in the problem
areas where they occur would seem to be the best approach. For example, ground-water contamination associated with hazardous waste sites would be counted
as one of the risks for the hazardous waste problem area. However, it has been found from past comparative risk projects that ground-water risks become
“lost” among a number of different problem areas as a
result. This has created difficulties in communicating
a comprehensive picture of the risks posed by and to
ground-water.(4)
Changing the way that issues are categorized can
also significantly alter the salience of particular attributes. A categorization example that does not involve risk is instructive. In the 1970s, the federal government’s decision to impose waterway user fees was
due, in part, to changes in perception resulting from
reclassifying navigable waterways as transportation
corridors rather than as natural resources.
. . . one [Hill staffer] attributed the passage of a waterway user charge partly to a change in categories. As he
put it, “Navigation always used to be considered a part
of water resources policy. It is only recently that it has
come to be thought of as transportation.” This change
had an important effect. If waterways are categorized
as water resources, then such navigation projects as
locks and dams, canals, and dredging would be compared to such projects as reclamation and irrigation.
But if waterways are viewed as corridors of transport,
then a barge’s use of locks and dredged channels without paying any user fee is unfair when compared to the
fuel taxes paid by users that finance highway construction and maintenance, the ticket tax on airline travel
that finances airport construction, and the railroads’
own expenditures for right-of-way construction.(36)
Examples in which risk recategorization could
greatly affect attitudes toward management include
efforts by FDA to reclassify tobacco as a drug rather
than as an agricultural commodity and efforts by
Mothers Against Drunk Driving and various gun-control groups to frame their respective issues in
terms of public health or child safety.
Category definitions should be easy to communicate, but because risks overlap each other in complex
ways, finding simple rules for separating them is often
difficult. For instance, indoor air pollution would
seem to be a risk category that is simple to define. But
complications arise when one considers whether or
not this category should include radon, direct inhalation of tobacco smoke, infiltration of outdoor pollutants, or other risks that the category label might not
obviously subsume. To be accurate, therefore, category definitions must often contain a number of
conditions and caveats. Such complications can make
categories more difficult to communicate.
Cognitive limitations restrict the number of risk
categories that people can thoughtfully rank to perhaps a few dozen. Yet, typically, regulatory organizations focus at finer levels, with the result that they
must set priorities among hundreds of separate risks.
One way to deal with this problem is to adopt an iterative hierarchical approach to risk ranking. For example, once “air pollution” has been identified as important in a ranking exercise, one could run a second
exercise in which the risks to be ranked are all the different kinds of air pollution. Note that subcategory
rankings can be used to set priorities only within a
category and not across categories. Some subcategories of high-ranked categories may actually be of
less concern than some subcategories of low-ranked
categories. Agencies are unlikely to have the resources to do very much hierarchical risk ranking,
but a few carefully chosen elaborations could go a
long way to improving the utility of ranking as
an input to an agency’s risk-management decision
making.
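The sketch below (hypothetical category names and ranks) shows the sense in which a second-round, within-category ranking is informative only inside its own category.

    # Hypothetical first-round ranking of broad categories (1 = greatest concern).
    category_rank = {"air pollution": 1, "drinking water": 2, "habitat loss": 3}

    # Second-round exercise: rank subcategories within the top category only.
    within_air_pollution = {"fine particles": 1, "ozone": 2, "air toxics": 3}

    # Valid use: set priorities among the kinds of air pollution.
    print(sorted(within_air_pollution, key=within_air_pollution.get))

    # Invalid use: comparing "air toxics" (ranked 3rd within air pollution) with
    # "drinking water" (ranked 2nd overall); the two rankings are on different
    # scales, so no cross-category conclusion follows.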
An agency’s ability to categorize risks depends
greatly on the amount of technical knowledge available. There is a natural tendency for agencies (and
others) to categorize well-known risks in finer detail
than poorly understood risks. At EPA, for instance,
risks from carcinogenic chemicals in the environment
have been treated in more detail than risks from hazardous but noncarcinogenic chemicals. Similarly,
sources of human health risks have been categorized
more finely than sources of ecological risks. In selecting a classification system, agencies should ask whether
this “lamp-post effect” serves their management
needs.
The knowledge of those doing the risk ranking
may constrain the types of categories that are most
appropriate. Groups of lay persons, for instance, may
not find meaning in risk categories that depend on
distinctions that are too technical (e.g., primary and
secondary particulate air pollutants). Although lay
persons may be able to best understand categories
based on health endpoints or hazards that most immediately affect them, less salient categories (e.g.,
based on legislative boundaries) might
still be workable, as long as the categories are understandable. Before finalizing the categories that will be
used, having the categories reviewed by representatives of the groups that will use them in ranking
would be wise.
7. CONCLUSIONS

Even within a specific domain of risk such as air pollution or airline safety, the world is filled with numerous known and potential hazards. For this reason, any practical process of risk ranking must group hazards into a manageable number of categories. Defining categories requires value choices. These choices can have important implications for the rankings that result.

Because a similarity-based approach to risk categorization is inherently ambiguous, we believe that it makes more sense for risk-management organizations to categorize risks according to the goals of the organization and the risk-management decisions that they face. In most cases, these organizations will find it useful to begin defining categories in terms of human behavior, environmental loadings, and initiating events. However, before a set of risks will be useful in risk ranking, categories that result from such an
approach will almost always need to be modified in light of other considerations.

Agencies or organizations conducting risk-ranking projects would be well-advised to develop several alternative categorizations before settling on a final choice. Careful consideration of several alternative categorizations encourages broad systematic thinking concerning what is important about the risks that are to be categorized and ranked. In choosing a final categorization, drawing upon elements of several of the alternatives considered will typically prove useful.

In principle, conducting multiple ranking exercises by using different categorizations could be interesting and useful. Although government agencies are unlikely to do this, other organizations in the society might. Such additional independent rankings could add valuable inputs to democratic risk-management decision making.

ACKNOWLEDGMENTS

The authors thank A. Bostrom, J. C. Davies, A. Finkel, B. Fischhoff, K. Jenni, R. Kasperson, J. Long, and K. Morgan for helpful discussions. This work was supported by grants from the National Science Foundation (SRB-9512023), the Electric Power Research Institute (W02955-12), the Alcoa Foundation, and the Chemical Manufacturers Association.

REFERENCES
1. U.S. Environmental Protection Agency, Unfinished Business:
A Comparative Assessment of Environmental Problems, Office of Policy Analysis, Washington, DC (1987).
2. U.S. Environmental Protection Agency, Reducing Risk: Setting Priorities and Strategies for Environmental Protection,
Science Advisory Board, Washington, DC (1990).
3. U.S. Environmental Protection Agency, Integrated Environmental Decisionmaking in the 21st Century, Draft report, Science Advisory Board, Washington, DC (1999).
4. U.S. Environmental Protection Agency, Guidebook to Comparing Risks and Setting Environmental Priorities, Office of
Policy, Planning, and Evaluation, Washington, DC (1993).
5. R. A. Minard, “CRA and the States: History, Politics, and Results,” in J. C. Davies (ed.), Comparing Environmental Risks—
Tools for Setting Government Priorities (Resources for the Future, Washington, DC, 1996).
6. K. Jones, A Retrospective on Ten Years of Comparative Risk,
Green Mountain Institute for Environmental Democracy,
Montpelier, Vermont (1997).
7. U.S. Occupational Safety and Health Administration, Announcement of Results of the OSHA Priority Planning Process, Washington, DC (1995).
8. J. T. McKenna, “Industry, FAA Struggle to Steer Agenda,” Aviation Week & Space Technology (April 27), 60–61 (1998).
9. Environment Canada, Environmental Issue Ranking: A Proposed Priority Setting Methodology for Environment Canada,
Economics & Conservation Branch, Ecosystem Sciences &
Evaluation, Conservation & Protection Service, Ottawa
(1993).
10. New Zealand Environment Ministry, Towards Strategic Environmental Priority Setting: Comparative Risk Scoping Study,
Wellington (1996).
11. B. Fischhoff, S. Watson, and C. Hope, “Defining Risk,” Policy
Sciences 17, 123–139 (1984).
12. K. E. Jenni, Attributes for Risk Evaluation, Ph.D. dissertation,
Carnegie Mellon University, Pittsburgh (1997).
13. H. K. Florig, M. G. Morgan, K. Morgan, et al., An Experimental Test Bed for Studies of Risk Ranking by Lay Groups, prepared for Electric Power Research Institute by the Department of Engineering and Public Policy, Carnegie Mellon
University, Pittsburgh (1996).
14. K. Morgan, M. L. DeKay, P. S. Fischbeck, et al., Methods for
Risk Ranking by Individuals and Groups, Department of Engineering and Public Policy, Carnegie Mellon University, Pittsburgh (1999).
15. J. C. Davies, Comparing Environmental Risks: Tools for Setting
Government Priorities (Resources for the Future, Washington,
DC, 1996).
16. D. L. Feldman, Environmental Priority-Setting through Comparative Risk Assessment, presented at the Annual Conference of the Association for Public Policy Analysis and Management, Pittsburgh, 1996 (unpublished).
17. D. L. Feldman, R. Perhac, and R. A. Hanahan, Environmental
Priority-Setting in U.S. States and Communities: A Comparative Analysis, Energy, Environment, and Resources Center,
University of Tennessee, Knoxville (1996).
18. Green Mountain Institute, Web site (http://www.gmied.org)
listing over 100 reports that describe various state, local, and
foreign risk-ranking exercises (1998).
19. R. Minard, K. Jones, and C. Paterson, State Comparative Risk
Projects: A Force for Change, Northeast Center for Comparative
Risk Assessment, Vermont Law School, South Royalton (1993).
20. P. S. Fischbeck and D. Piposzar, Development of a County
Risk-Ranking Process: Risks Like Politics Are Local, presented at the Society of Risk Analysis Annual Meeting, New
Orleans, 1996 (unpublished).
21. P. S. Fischbeck and D. Piposzar, Evolution of a County-Wide,
Risk-Ranking Process: Using the Web to Begin a Dialogue,
presented at the Society of Risk Analysis Annual Meeting,
Washington, DC, 1997 (unpublished).
22. L. K. Komatsu, “Recent Views of Conceptual Structure,” Psychological Bulletin 112(3), 500–526 (1992).
23. D. L. Medin and E. E. Smith, “Concept and Concept Formation,” Annual Review of Psychology 35, 113–138 (1984).
24. F. C. Keil, Concepts, Kinds and Cognitive Development (MIT
Press, Cambridge, Massachusetts, 1989).
25. D. Medin and A. Ortony, “Psychological essentialism,” in S.
Vosniadou and A Ortony (eds.), Similarity and Analogical Reasoning (Cambridge University Press, New York, 1989).
26. L. W. Barsalou, “Ad Hoc Categories,” Memory and Cognition
11, 211–227 (1983).
27. R. Sokal, “Classification: Purposes, Principles, Progress, Prospects,” Science 185(4157), 1111–1123 (1974).
28. G. Cvetkovich and T. C. Earle, “Classifying Hazardous
Events,” Journal of Environmental Psychology 5, 5–35 (1985).
29. T. Webler, H. Rakel, O. Renn, et al., “Eliciting and Classifying
Concerns: A Methodological Critique,” Risk Analysis 15(3),
421–436 (1995).
30. M. G. Morgan, “Choosing and Managing Technology-Induced
Risk,” IEEE-Spectrum 18(12), (December), 53–60 (1981).
31. C. Hohenemser, R. W. Kates, and P. Slovic, “The Nature of
Technological Hazard,” Science 220, 378–384 (1983).
32. C. Hohenemser, R. E. Kasperson, and R. W. Kates, “Causal
Structure,” in R. W. Kates, C. Hohenemser, and J. X. Kasperson (eds.), Perilous Progress (Westview Press, Boulder, Colorado, 1985).
33. T. C. Earle and G. Cvetkovich, Risk Judgment and the Communication of Hazard Information: Toward a New Look in the
Study of Risk Perception (Battelle Human Affairs Research
Center, Seattle, Washington, 1983).
34. B. Fischhoff, “Ranking Risks,” Risk: Health, Safety and Environment 191(Summer), 191–202 (1995).
35. M. G. Morgan, B. Fischhoff, L. Lave, et al., “A Proposal for Ranking Risk within Federal Agencies,” in J. C. Davies (ed.), Comparing Environmental Risks—Tools for Setting Government Priorities (Resources for the Future, Washington, DC, 1996).
36. J. W. Kingdon, Agendas, Alternatives and Public Policies (Little, Brown and Company, Boston and Toronto, 1984).