
Ashley Coles
Political Ecology
December 6, 2006
Truth and/or consequences: How scientific information simultaneously builds and
erodes trust among non-experts
Scientists and policy makers have long recognized that the lay public’s perception
of risk depends largely on trust in the science and the scientists themselves. Trust
therefore plays a crucial role in determining whether certain policies based on scientific
findings will gather support or opposition among the populace. This proves especially
relevant for cases involving risk assessment and risk management. The notorious
difficulty of maintaining and rebuilding trust as necessary has inspired scientists and
politicians alike to seek new ways to gain the confidence of the public, including
widespread access to both information and open scientific debates. Despite extensive
efforts in these directions, mixed reactions have left the scientists and policy makers
uncertain as to how to appease an apparently irrational public. This paper will explore
how increased exposure to scientific information and controversy became the expected
solution to lack of trust, as well as how it may work to either build or erode trust.
Trust is important.
Trust holds such strong influence over an individual’s risk perception because it
serves as an adaptation to an increasingly complex environment. To reduce the necessity
for learning every detail about every possible risk, individuals place trust in those whom they believe to have both the knowledge and the willingness to share accurate information
about particular risks (Slovic, 1999; Siegrist and Cvetkovich, 2000; Savadori et al., 2004;
Lang and Hallman, 2005). Not only does trust in industry and scientists regarding particular hazards simplify the experiences of individuals, but individuals who hold such trust also tend to perceive fewer risks and greater benefits associated with those hazards (Siegrist and Cvetkovich, 2000).
The fragility of trust makes the issue much more salient, particularly in matters of
risk management. Events that destroy trust tend to have more visibility and carry more
weight than those that build trust, and sources that provide trust-reducing information are
also perceived as more credible than those that provide trust-building information. Even when studies fail to find measurable risk, the very fact that researchers are looking for it increases risk perception, because risk carries more cognitive weight than no risk (Slovic, 1999).
Additionally, “distrust, once initiated, tends to reinforce and perpetuate distrust” (Slovic, 1999, p. 698). Once an institution has come to be distrusted, individuals tend to avoid contact with it or automatically reject even trust-building information. The resilience of public opinions formed on the basis of distrust thus often creates a barrier that prevents experts from communicating new or amended hazard information. The search for appropriate media and messages for risk communication becomes futile when the public lacks trust in the messenger, because, as Slovic (1999) asserts, “[i]f trust is lacking, no form or process of communication will be satisfactory” (p. 697).
The processes perpetuating distrust mentioned above tend to operate on single
individuals, but distrust may also spread through social networks. The social
amplification of risk framework (SARF) provides a theoretical model for how
information about hazards and risk perceptions may be amplified or attenuated as people
consult with their social networks (Kasperson, 1992; Masuda and Garvin, 2006). Rumors and information are propagated insofar as they fit preconceived mental models, beliefs, or values (Kasperson, 1992). This means that if an individual already trusts the source, they will circulate trust-building information and downplay trust-reducing information, whereas if they do not trust the source, they will focus on the trust-reducing information instead.
So how do we build trust?
Historically, laypersons accepted scientific discoveries as fact without question, a pattern that began to change with the rise of individual reflexivity in late modernity (Beck et al., 1994). Beck uses the term individualization to describe the
transition from the general acceptance of risks associated with industrialization to a
cautious monitoring of agencies and institutions. In a sense, individuals began to
withdraw from the protection of the mother’s wing and instead question the motives and
competency of the experts. Individualization thus requires the experts to earn and
maintain what Giddens refers to as active trust, which “depends upon a more institutional
‘opening out,’” or discussion and self-disclosure with the public (Beck et al., 1994, p.
187). Jasanoff (1990) described this process as it occurred within the United States,
where the public eventually developed the perception that policy makers were somehow
corrupting the scientific process and thus the findings that led to regulations. At the same
time, responsibility for policies would not suit scientists with little social knowledge, and
“decisions cannot be wholly legitimate if they are comprehensible only to the initiated” (p. 11). The compromise thus created the role of the scientist-as-advisor to policy makers.
The lack of trust in experts stems partially from their tendency to leave the public out of risk assessment and management practices under the assumption that laypersons lack rationality and do not understand probability and uncertainty. Much research carried out
under the psychometric paradigm rests on this assumption and attempts to discover how
and why the public errs in judgment so that experts may work to correct
misunderstandings (Jasanoff, 1998; Slovic, 1999; Frewer, 2004). As Jasanoff (1998)
points out, however:
The psychometric paradigm has fostered a strikingly asymmetrical view of the
goals and methods of risk perception research. There is, to begin with, a widely
held assumption that factors affecting lay perceptions deserve closer inspection
than factors affecting expert judgments, which are considered less likely to be
tainted by subjective bias (p. 92).
In other words, the psychometric approach aims to change the perspectives of the lay
public and bring them into alignment with that of the “objective” and “rational” experts.
Though this critique is certainly neither the first nor the only suggestion that scientists do not operate free of bias and context, many scientists adhere to a belief in their own objectivity and maintain the psychometric paradigm for matters of risk. As a result, risk managers provide more
information and instruction to the public, hoping that education will cure the masses of
ignorance and irrationality (Douglas, 1992). However, as we will see, education has the
potential to work against risk managers by reducing public trust.
Many risk management agencies attempt to establish legitimacy by creating a
power distance between their experts and the lay public, but this action potentially strips them of that very legitimacy. Risk managers believe they produce and provide more accurate information if they rely only upon expert, objective, bias-free data. Members of the public, however, perceive that bureaucracies deliberately seek to serve their own needs at the expense of the people. At the very least, they assume that these organizations fail to
recognize public needs because they are so far removed from both daily situations and
extreme events (Sáenz, 2003; Paton and Johnston, 2001). Schematic drawings by both
experts and laypersons differ in the relative placement of risk management agencies, the
public, and hazards themselves (Parker and Handmer, 1998; Handmer, 2001).
Individuals are more focused on their homes, families, and closest social networks than
on the community at large, while the opposite is true for the agency. The agency tends to situate itself as central and to some degree above the public and the hazards, a position that allows it to monitor and control both. It assumes responsibility for handing out warnings in a top-down manner, delivering universal information and advice equally to a community, despite varying needs and vulnerabilities. Community members
see themselves as central and much closer to the problem than the agency, which reflects
reality more accurately in nearly every case. To them, the agency is so far removed from the actual situation that it cannot possibly know what is best for the community, and in many cases this may be true (Parker and Handmer, 1998; Handmer, 2001). As a result,
individuals tend to seek confirmation of hazard information from the sources they do
trust, such as friends and family (Mileti, 1995).
These findings seem to support the theory that allowing the public more access to
scientific information and debate will increase trust in scientists and risk managers. Proof
of objectivity, transparency, and competency through more exposure to science may
indeed build the public’s confidence. Risk managers must ascend a slippery slope to gain
trust, however. The climb is slow and arduous, and a single mistake will send them
hurtling towards public distrust. Revealing the details and uncertainties of science in
public arenas creates yet another opportunity to erode the already fragile trust in experts.
The existence of fields like science communication, as well as current practices that attempt to enlighten the public about scientific matters, clearly demonstrates that the experts want the public to trust them. Before offering more information or access, however,
experts would do well to understand the ways in which these practices may either
improve or destroy public trust.
Trust building through exposure
Proof of objectivity serves as one of the main justifications for exposing the
public to more scientific information and controversy. Experts rely heavily on the
concept of objectivity in science for legitimacy. Not only do they believe in the objectivity of science and scientists, but they also believe that they must appear objective to the lay public. As mentioned before, this often leads to the exclusion of non-scientists from
the production of science. Kuhn (1970) explains the reasoning behind the historical
practice of keeping science within the scientific community by clearly defining the
“peers” from whom a scientist should seek legitimacy. In order to maintain authority,
scientists must appear “uniquely competent,” both as individuals and as a group, to know
the rules of science and how to evaluate the validity of its findings (p. 168). Allowing
other players such as government or the public to weigh in on such matters challenges
their very expertise, which therefore challenges the necessity of their existence. Jasanoff
(1990) refers to the sequestering of science into a closed domain as boundary work, and
describes how it thus establishes scientists as the only suitable judges of scientific work.
Left out of the scientific community and its business, the public may begin to
question the objectivity of hidden processes (and possible hidden agendas) that somehow
produce policy-relevant findings that affect their lives. The expert’s answer to this
concern, as described in Latour (1987), is to throw wide the doors of the laboratory and
allow the citizens to see for themselves. Should a non-scientist still doubt the reality of
their own observations, the scientist may prove objectivity by opening any black boxes
demanded of them until the inquisitor either concedes or loses interest. By placing
trust in the experts, laypersons relieve themselves of the laborious task of opening each
and every black box ever constructed.
Experts must exercise such proofs of objectivity with some restraint, however, because trying to appear too objective, and thus neglecting social implications, comes across as both suspicious and arrogant. The public may interpret such actions as an attempt to disguise
bias in probabilities and jargon, perhaps realizing the inherent value-ladenness of
scientific endeavors (Anthony, 2004; Frewer, 2004). According to Anthony (2004),
encouraging wider stakeholder participation in science-based policy-making increases
transparency by exposing more biases among the stakeholders, including scientists, and keeps excessive partiality from influencing decisions. Opening the discussion to
the public also suggests that the agency has public interests in mind, as evidenced by the
invitation to share opinions.
Perhaps even more compelling is the study by Bakir (2006) that assessed levels of
trust in agencies debating the optimal method of offshore oil structure disposal. A
European division of Shell had conducted numerous risk assessments and determined that deep-sea disposal of the Brent Spar structure carried low occupational and environmental
risk, not to mention a much lower cost than onshore disposal. Unfortunately, Shell
provided virtually no transparency by having their own scientists perform the studies and
shrouding the process in secrecy. The environmental advocate organization Greenpeace
disputed the risk assessment and began a campaign to prevent what they considered a
high-risk disposal option. The public perceived Greenpeace as competent, transparent, concerned for the public, and honest, thus meeting each of the four conditions that promote trust outlined by Lang and Hallman (2005). Citizens also held the organization in such high moral regard that news that Greenpeace had presented false information in its campaign did not substantially reduce trust in the organization, while news of a scientific error in Shell’s risk assessment further wounded Shell’s image. By the end of the ordeal,
Shell agreed to recycle the structure by using the materials to build a quay extension that
would serve a human function while also reducing energy and emissions associated with
manufacture. This action opened the door for two-way communication and showed an interest in the public’s environmental concerns, rather than the company’s own financial concerns, which helped rebuild a portion of the trust that had previously been lost (Bakir,
2006).
Trust erosion through exposure
While such arguments seem quite persuasive, some research would suggest that
more exposure to scientific information and controversy has the potential to actually
decrease public trust. Although airing scientific disagreements in full view of the public
may increase transparency, it also tends to emphasize uncertainty and self-interest
(Jasanoff, 1990; Slovic, 1999). A system in which different ideas must compete for
acceptability among the public produces a lack of consensus, which then leads to the
rejection of science as an authority (Jasanoff, 1990; Douglas, 1992; Slovic, 1999). This
occurs through what Gottweis (2002) refers to as “delegitimizing cycles” of debate,
which cause many individuals to view the lack of certainty as a lack of good science
(Slovic, 1999). If scientists themselves fail to unanimously agree on sources or amounts
of risk, they can hardly expect the public to place faith in any of the options, much less
the “correct” one. Thus, experts should exercise caution when engaging in public debate
over issues of risk.
Some experts have attempted to avoid losing public trust over issues of uncertainty by leaving uncertainty out of their communications entirely (Jensen and Sandøe, 2002). However, presenting information without any mention of uncertainty appears to imply full certainty, which raises a red flag for many members of the public. To those who recognize that science offers no absolute certainty, and even to some who do not, experts presenting their results as certain appear untrustworthy and arrogant (Jensen and Sandøe,
2002). When an expert presents information as highly certain, doubt shifts from the message to the source, and the public questions the ability of the expert to
provide accurate, unbiased risk assessments (Frewer, 2004). This becomes especially
dangerous in cases where the results are ultimately proven wrong, because the loss of trust that follows even an implicit claim of certainty will remain long in the public memory
(Slovic, 1999; Jensen and Sandøe, 2002).
Following the suggestion that revealing more science to the public will build trust,
some experts approached the problem of uncertainty by including it in their discussions.
Rather than assume the public would not understand, they attempted to educate the public
about uncertainties and probabilities in both data and results. Frewer (2004) recommends
including both the source and magnitude of uncertainty in risk assessments for effective
communication to the public. However, while risk assessors may know the major sources of uncertainty and, to some degree, their magnitudes, the complexity of both human and natural systems precludes any possibility of accounting for every factor that produces uncertainty, let alone fully quantifying its magnitude. Johnson and Slovic (1998) found that providing a
range of uncertainty, rather than a single number, made most respondents view the
agency as more honest, and a slight majority found the agency more competent. In a later
discussion of this study as a preface to follow-up research with similar results, Johnson
(2003) noted that “[a] substantial minority, however, found that the use of ranges made government seem dishonest or incompetent” (p. 782). Respondents of the later study
suggested reasons for the industry to provide the range, including insufficient data and
incompetence, but about a third of the respondents believed that the range of uncertainty
was an attempt by the industry to confuse and deceive the public by making the risk
appear lower (Johnson, 2003). Either of these conclusions will reduce public confidence
in the results of risk assessments.
Another source of distrust arises when the message appears to promote the vested
interests of the communicator. In many cases, the public sees disagreements among experts as each expert promoting their own self-interest or that of their employer (Johnson and
Slovic, 1998; Johnson, 2003; Frewer, 2004). Frewer (2004) notes that if people who already distrust the source perceive its intentions as self-interested, trust will decrease even more, regardless of the message. When conflicting values work to erode trust in risk managers, “[t]rying to address risk controversies primarily with more science is, in fact, likely to exacerbate conflict” (Slovic, 1999, p. 699). Experts should carefully consider whether
the arguments they attempt to fortify match the concerns of the public, or risk appearing
self-interested rather than civic-interested (Jensen and Sandøe, 2002). Such wariness on the part of the public is not unfounded, since individuals may be exploited if the wrong
protective actions or equipment are promoted in the name of private gain (Douglas and
Wildavsky, 1982; Wisner et al., 2004).
Even a shared opinion among a “majority” of experts fails to hold much sway
when the public perceives a potential to serve vested interests. Johnson and Slovic
(1998) concluded that distrust of the government carried over to scientists who might be
on its payroll, since the agreement of a majority of scientists with the government did
not increase respondents’ trust in the government. A modification of this study by
Johnson (2003) again found that “‘[m]ajority’ scientific opinion was not by itself persuasive, and people tended to assume the worst if scientists disagreed” (p. 782).
Conclusions
Risk managers know well the importance of trust for gaining public support for
regulatory policies, and the difficulty of sustaining trust over time. Leaving the public
out of scientific matters to “ensure objectivity” effectively created a power distance that
left the public clamoring for more transparency and proof of competence. Experts then
came to the logical conclusion that providing more exposure to scientific information and
controversy would build public trust by causing experts to appear more objective,
transparent, competent, and honest overall. Such exposure can also erode trust, however, by emphasizing scientific uncertainties in both the data and the results, as well as self-interest on the part of the experts.
Risk managers must realize that they cannot educate the doubt out of the public
by converting them from their supposedly irrational beliefs to those of the supposedly
objective scientists. After all, experts and non-experts alike are bound within the effects
of their cultural context (Slovic, 1999). Douglas (1992) argues that probability is learned through culture, not through instruction from an agency, and points out that “…in a democracy education is not expected to change political commitments” (p. 31). Experts must also
consider that public opinion varies across populations and constantly evolves (Gottweis,
2002). Most of the studies mentioned above attempted to uncover patterns across large
populations without considering the nuances of particular groups. A wise approach to the
problem thus lies within studies of culture, which have the potential to provide insight
into the varied sources and magnitudes of distrust among the public. Through such
investigations, experts may learn whether exposure to more science will lead to an
increase or decrease in trust as it varies among different cultural groups.
References
Anthony, R. (2004). “Risk communication, value judgments, and the public-policy
maker relationship in a climate of public sensitivity toward animals: Revisiting
Britain’s Foot and Mouth crisis.” Journal of Agricultural and Environmental
Ethics 17: 363-383.
Bakir, V. (2006). “Policy agenda setting and risk communication: Greenpeace, Shell,
and issues of trust.” Press/Politics 11(3): 67-88.
Beck, U., Giddens, A., and Lash, S. (1994). Reflexive Modernization. Stanford, Stanford
University Press.
Douglas, M. (1992). Risk and Blame: Essays in Cultural Theory. New York, Routledge.
Douglas, M. and Wildavsky, A. (1982). Risk and Culture: An Essay on the Selection of
Technical and Environmental Dangers. Berkeley, University of California Press.
Frewer, L. (2004). “The public and effective risk communication.” Toxicology Letters
149: 391-397.
Gottweis, H. (2002). “Gene therapy and the public: A matter of trust.” Gene Therapy 9:
667-669.
Handmer, J. (2001). “Improving flood warnings in Europe: A research and policy
agenda.” Environmental Hazards 3: 19-28.
Jasanoff, S. (1990). The Fifth Branch: Science Advisers as Policymakers. Cambridge, Harvard University Press.
Jasanoff, S. (1998). “The political science of risk perception.” Reliability Engineering
and System Safety 59: 91-99.
Jensen, K.K. and Sandøe, P. (2002). “Food safety and ethics: The interplay between
science and values.” Journal of Agricultural and Environmental Ethics 15: 245-253.
Johnson, B.B. (2003). “Further notes on public response to uncertainty in risk and
science.” Risk Analysis 23(4): 781-789.
Johnson, B.B. and Slovic, P. (1998). “Lay views on uncertainty in environmental health
risk assessment.” Journal of Risk Research 1(4): 261-279.
Kasperson, R. E. (1992). “The social amplification of risk: Progress in developing an
integrative framework.” In Social Theories of Risk. S. Krimsky and D. Golding (Eds.). Westport, CT, Praeger Publishers: 412.
Kuhn, T.S. (1970). The Structure of Scientific Revolutions. Chicago, University of
Chicago Press.
Lang, J.T. and Hallman, W.K. (2005). “Who does the public trust? The case of
genetically modified food in the United States.” Risk Analysis 25(5): 1241-1252.
Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers Through
Society. Cambridge, Harvard University Press.
Masuda, J.R. and Garvin, T. (2006). “Place, culture, and the social amplification of risk.”
Risk Analysis 26(2): 437-454.
Mileti, D. S. (1995). “Factors related to flood warning response.” US-Italy Research
Workshop on the Hydrometeorology, Impacts, and Management of Extreme
Floods. Perugia, Italy.
Parker, D.J. and Handmer, J.W. (1998). “The role of unofficial flood warning systems.”
Journal of Contingencies and Crisis Management 6(1): 45-60.
Paton, D. and Johnston, D. (2001). “Disasters and communities: Vulnerability, resilience,
and preparedness.” Disaster Prevention and Management 10(4): 270-277.
Sáenz Segreda, L. (2003). “Psychological interventions in disaster situations.” In Early
Warning Systems for Natural Disaster Reduction. J. Zschau and A. Küppers
(Eds.). Berlin, Springer-Verlag: 119-124.
Savadori, L., Savio, S., Nicotra, E., Rumiati, R., Finucane, M., and Slovic, P. (2004).
“Expert and public perception of risk from biotechnology.” Risk Analysis 24(5):
1289-1299.
Siegrist, M. and Cvetkovich, G. (2000). “Perception of hazards: The role of social trust
and knowledge.” Risk Analysis 20(5): 713-719.
Slovic, P. (1999). “Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield.” Risk Analysis 19(4): 689-701.
Wisner, B., Blaikie, P., Cannon, T., and Davis, I. (2004). At Risk: Natural Hazards,
People’s Vulnerability and Disasters. New York, Routledge.