Capturing Travel Behavior during Exceptional Events

8TH INTERNATIONAL CONFERENCE ON SURVEY METHODS IN
TRANSPORT: ANNECY, FRANCE, MAY 25-31, 2008
Resource paper for Workshop A4:
CAPTURING TRAVEL BEHAVIOR DURING
EXCEPTIONAL EVENTS
Earl J. Baker, Florida State University, Tallahassee, Florida, USA
INTRODUCTION
For more than half a century survey methods have been applied to study how people
respond to warnings of potential disasters. For some types of threats, responses include
evacuation behavior: whether people evacuate, how promptly they depart, where they go,
and how they get there. Many of the studies have been undertaken specifically to provide
inputs to transportation modeling aimed at calculating the time required to complete a
successful evacuation. Other studies have been motivated by the need to understand how
to manage evacuations by increasing or decreasing the number of people leaving, for
example. Although the latter group of studies hasn’t been aimed specifically at
transportation applications, many of them have implications for transportation. A variety
of data collection methods have been employed and applied to a range of hazards. This
review traces the beginnings and evolution of survey research applications to disaster
evacuations, summarizes the behaviors addressed in the studies, lists methods that have
been employed, discusses challenges posed to data collection efforts, and recommends a
number of topics for research. Most examples cited in the review deal with hurricanes in
the United States. That is the author’s own field of expertise, and more studies have been
conducted about evacuation behavior in hurricanes than in other hazards.
HISTORY AND EVOLUTION OF EVACUATION SURVEYS
During the 1950’s federal civil defense officials in the U.S. were concerned about how
American citizens and emergency organizations would respond in the event of a nuclear
attack. The closest analogy was response to disasters, so a research program was
undertaken to document how the public and emergency groups behaved when faced with
warnings of a potential disaster and how they functioned during and after the event
(Barton, 1969). Of particular concern was whether people would take warnings seriously
and take appropriate self-protective actions. A related issue dealt with how people would
respond to a second or third warning for the same kind of hazard, after earlier warnings
had not been followed by occurrence of the event. This was called the “cry-wolf”
syndrome and continues to be a policy concern today. Officials worry that people will
become complacent and fail to take warnings seriously after a number of “false alarms.”
From a transportation standpoint, fewer people evacuating would result in fewer trips being generated. Surveys were conducted with the public following both false alarms and actual disasters to document how people responded. Other focuses of the
research centered on the warning process: credibility of the warning source, how the
warning was worded, how the warning was disseminated, and characteristics of recipients
of the warning. Early studies followed floods, hurricanes, tornadoes, and chemical
accidents. Most of the early surveys concentrated on the warning process and whether
people took protective action. One exception was a survey following Hurricane Carla, in
which respondents were also asked whether they left their community when they
evacuated (Moore et al., 1963).
The next phase of evacuation survey studies was motivated by learning how people responded to threats of floods, tornadoes, and hurricanes in order to prepare better for future events of the same kinds. The U.S. National Weather Service was a principal user of the information and funded at least a couple of studies itself. Much of the emphasis of studies conducted in the 1960’s and 1970’s was on how to maximize response rate – that is, the number of people responding to warnings. In the case of hurricanes and floods, that meant persuading everyone told to evacuate to actually do so. But
it was also during this period that behaviors were included in surveys that dealt with a
greater range of evacuation variables. Specifically, surveys began asking about when
evacuees departed, where they went, and the transportation they used (Wilkinson and
Ross, 1970).
It was also during the 1960’s and 1970’s that academic social science researchers began
studying warning response and evacuation as research specialties, beyond a response to
the need for policy applications. Sociologists saw disaster research as a special case of
how individuals and organizations behave and interact under stress. In geography,
hazards research became a stand-alone field of study, either as a subset of how societies
interact with the natural and built environments or as a type of spatial behavior. Much of the research focused on aspects of warnings and the warning process, and on the experience, demographics, and personality attributes of the warned populace (Drabek,
1986).
In 1979 an accident at the Three Mile Island (TMI) nuclear power plant in Pennsylvania prompted the evacuation of nearby residents. There were no plans for evacuating people living around nuclear power
plants in the U.S. at the time. Decision making by public officials was uncoordinated and
uncertain, there were no inventories of numbers of people living within various distances
of TMI, and when evacuation recommendations were issued, many people not told to
evacuate did so on their own initiative. The spontaneous evacuation of people living
beyond the areas told to evacuate was labeled shadow evacuation, and it persists as an
evacuation concern today, both for nuclear power plants and other hazards. Several
surveys were conducted with people living around TMI to document their warning
response and to explain the reasons for their actions (Lindell et al., 1985).
Predictably, all the nuclear power plants in the U.S. soon were required to develop
evacuation plans, and the plans involved transportation modeling to calculate the times
required for populations at risk to reach safety. The modeling required assumptions about
the number of people evacuating, how quickly they would leave, where they would go,
and how they would get there. For a few nuclear plants, surveys were conducted with
residents to gauge their evacuation intentions (Lindell et al., 1985).
At nearly the same time two U.S. agencies, the Federal Emergency Management Agency
(FEMA) and the U.S. Army Corps of Engineers (USACE), embarked on a series of
collaborative studies to provide technical data that state and local emergency
management officials could use to develop better evacuation plans for hurricanes. The
effort is usually referred to as the Hurricane Evacuation Study (HES) program. The HES
is a comprehensive endeavor that provides computer simulation of hurricanes to identify
the areas in coastal communities that would need to evacuate (hazard analysis),
inventories the number of households and other facilities that would need to evacuate
(population analysis), estimates the demand for public shelters and identifies facilities
that could be used to house evacuees before and during the storm (shelter analysis), and
calculates the time required to clear the road network of evacuating vehicles
(transportation analysis). Both the shelter and transportation analyses require assumptions
about how the threatened population will respond, and in most HES projects, the local
population is surveyed to gather information that will help predict those behaviors
(behavioral analysis). The HES program has also conducted post-storm assessments, in
which local officials are interviewed and the public are surveyed, in part to assess the
accuracy and utility of HES products previously provided and in part to gather
information that will help refine the HES products. A great deal of the survey work
conducted about evacuation behavior has been a result of the HES program which
continues (Baker, 2000).
Today survey research about evacuation behavior is a well-developed field of study with
both academic and applied motives. Studies exist not only for hurricanes, floods, and nuclear power plants, but also for wildfires, hazardous material accidents, and dam failures.
Sample sizes have gotten larger and the range of behaviors addressed has expanded. In
1999 Hurricane Floyd prompted the largest evacuation in U.S. history and resulted in lengthy, problematic “commute times” for many evacuees from south Florida through North Carolina. One response to Floyd was an increased interest, among transportation researchers outside the social sciences, in modeling evacuation events, which created an increased demand for evacuation behavioral data. The broadened community of
researchers has brought a healthy new perspective to analyses of behavioral surveys
dealing with evacuations.
BEHAVIORS ADDRESSED IN EVACUATION SURVEYS
It isn’t feasible to list all the topics and behaviors addressed in evacuation surveys, but
there are certain behaviors that are typically included and certain issues that recur. The
main behaviors about which people are asked in surveys are evacuation participation rate,
evacuation timing, type of refuge, location of refuge, and vehicle use. The inclusion of
these behaviors is driven partly by the demand for this information in transportation and
shelter analyses.
Participation rate refers to the percentage of a population that evacuates. Evacuation
means leaving one’s residence or lodging to go someplace safer. Sometimes this behavior
is referred to as trip generation, but at least in the HES context it is a multiplier used as an
input into trip generation. Participation rates vary as a function of actions taken by public
officials, location vulnerability within a community, severity of the threatening agent, and
certain demographic, socioeconomic, and psychological factors. A persistent finding is
that too few people evacuate from the most vulnerable locations and too many evacuate
from relatively safe locations. In planning for evacuations and calculating the time
required for an evacuation, analysts often ignore the most probable participation rates and
assume that 100% of the population being told to evacuate will do so. They reason that
the plan should provide sufficient time and shelter capacity in case everyone at risk does
evacuate. Even in that case shadow evacuation (people leaving from areas not told to
evacuate) must be accounted for. Consequently, “predicted” evacuation clearance times
and shelter use usually exceed observed values.
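
As a hedged illustration of how participation-rate and shadow-evacuation assumptions feed a transportation analysis, the following Python sketch computes evacuating households under a 100% planning assumption and a most-probable assumption; the zone sizes and rates are hypothetical, not values from any cited study.

# Minimal sketch: converting participation-rate assumptions into evacuating
# households for a clearance-time calculation. All numbers are illustrative.

def evacuating_households(households_in_zone, participation_rate,
                          households_outside_zone, shadow_rate):
    """Households generating evacuation trips from inside the area told to
    evacuate plus shadow evacuation from outside it."""
    inside = households_in_zone * participation_rate
    shadow = households_outside_zone * shadow_rate
    return inside + shadow

# Planning case: assume 100% compliance inside the zone plus some shadow
# evacuation from safer areas nearby (rates are hypothetical).
planning = evacuating_households(20_000, 1.00, 50_000, 0.15)

# "Most probable" case reflecting typical survey findings (also hypothetical).
expected = evacuating_households(20_000, 0.75, 50_000, 0.10)

print(f"planning case: {planning:,.0f} households")
print(f"expected case: {expected:,.0f} households")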
Evacuation timing refers to when evacuees depart their residences or other origins. From
the emergency management perspective, it isn’t sufficient to convince people to evacuate,
because if too many wait too long to depart, there will be insufficient time for at least a
portion of the population to reach safety. This is usually displayed as a cumulative
response curve, showing the cumulative percentage of eventual evacuees who have left
by a particular time. The curve is compared to the timing of other events such as the
issuance of evacuation notices and the onset of dangerous conditions. In most
evacuations few people evacuate before evacuation notices are issued, then they leave as
quickly as they believe they need to leave. If an evacuation notice is issued, say, two days before the anticipated onset of dangerous conditions, and officials communicate no urgency of immediate departure, evacuees will take the entire two-day period to evacuate. Some of the later evacuees wait to see if evacuation will actually prove to be
necessary, because the threat might not materialize. When prompt evacuation is urgent
and officials communicate that necessity successfully, evacuees leave much more
quickly, resulting in a much steeper response curve (Baker, 2000; Sorensen, Vogt, and
Mileti, 1987). Evacuation plans usually focus on the minimum amount of time necessary
to complete a successful evacuation, given the size of the evacuating population, the
roadway network, and other factors. Therefore clearance time modeling does not usually
incorporate the longer-duration response curves if they reflect earlier-than-necessary,
precautionary timing of evacuation notices.
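
The cumulative response curve can be represented by a simple S-shaped function. The Python sketch below uses a logistic form, with hypothetical midpoint and steepness parameters, to contrast a slow, low-urgency evacuation with a rapid, high-urgency one; this is one convenient functional form for illustration, not a form prescribed by the studies cited.

# Illustrative cumulative departure (response) curves. A logistic S-curve
# expresses the fraction of eventual evacuees departed by hour t after the
# evacuation notice; midpoints and steepness values are hypothetical.
import math

def cumulative_departures(t_hours, midpoint, steepness):
    """Fraction of eventual evacuees who have departed by t_hours."""
    return 1.0 / (1.0 + math.exp(-steepness * (t_hours - midpoint)))

for label, midpoint, steepness in [("low urgency", 30.0, 0.15),
                                   ("high urgency", 10.0, 0.60)]:
    curve = [cumulative_departures(t, midpoint, steepness) for t in range(0, 49, 6)]
    print(label, [f"{p:.0%}" for p in curve])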
In most evacuations the majority of evacuees go to the homes of friends and relatives,
followed by hotels and motels, and then public shelters operated by government and
disaster organizations (Drabek, 1986). This variable is usually called type of refuge. In
one sense it is of greater interest to public safety officials than to transportation analysts.
Government has a responsibility for public safety and attempts to ensure that enough
public shelter space is provided to accommodate the demand. Once demand is projected,
organizations strive to identify and manage safe, suitable facilities. However, the
locations of types of refuge will affect trip assignments. In some communities one or
more categories of refuge can be provided locally, but in other communities evacuees
will need to leave the local area to reach the refuge.
Location of refuge indicates the geographical location where evacuees will seek refuge.
In most evacuations a mixture of local and out-of-town destinations is used. Trip
assignments in transportation analyses need to identify the distribution of trips to specific
local places or areas as well as ascertaining the out-of-town locations to which evacuees
will travel. It is common, at least in hurricanes, for many evacuees to travel distances that
are much greater than necessary to reach safety, but in some locations people must travel
long distances to reach places that offer both safety and refuge. A related behavior is the choice of evacuation routes. Routes are asked about in some surveys, but many transportation analysts infer them from origins and destinations. Route choice can also be managed by public officials by closing some routes, using contraflow, or giving right-of-way to designated streets and roads (Dow and Cutter, 2002).
Transportation analysts need to know the number of vehicles on roads, not the number of
people, so surveys typically measure vehicle usage. This can be expressed as an average
number of vehicles per household (e.g., 1.5) or as a percentage of the vehicles available
to the household (e.g., 70%). Sometimes surveys also identify how many evacuating households pull trailers or take motorhomes, to account for vehicles that might have an unusual impact on traffic flow, lane volume, or vulnerability (e.g., instability in strong winds). For hurricanes, in the great majority of evacuations, between 65% and 75% of the
available vehicles are used by evacuees. Some people don’t have transportation of their
own, and some of those individuals will need assistance from public agencies or
organizations. Surveys usually identify the extent of those needs (Baker, 2000).
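
A minimal sketch of how these vehicle-use findings translate into vehicle trips follows; the household count and vehicles available per household are hypothetical, and the 70% usage rate is simply a value inside the 65-75% range noted above.

# Sketch of turning household vehicle-use findings into evacuating vehicles.
evacuating_households = 15_000                  # hypothetical
vehicles_available_per_household = 2.1          # hypothetical
share_of_available_vehicles_used = 0.70         # within the 65-75% range
trailer_or_motorhome_share = 0.05               # hypothetical

vehicle_trips = (evacuating_households
                 * vehicles_available_per_household
                 * share_of_available_vehicles_used)
oversize_units = evacuating_households * trailer_or_motorhome_share

print(f"evacuating vehicles: {vehicle_trips:,.0f}")
print(f"trailers/motorhomes: {oversize_units:,.0f}")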
The bulk of survey instruments are made up of questions about variables that will refine,
explain, or predict variations in the behaviors enumerated in the previous discussion. The
specific variables that are included depend on the application of the data, the user of the
information, and the perspective of the researcher. For example, local public safety
officials might be interested in how they can educate the public to change unwanted
behaviors, how they can word and disseminate evacuation notices to achieve desired
results, how they need to manage traffic to overcome behavioral tendencies, and what
sorts of special accommodations they might need to make for evacuees needing medical
care or who will bring pets. A few examples of specific issues addressed in surveys include the effect of probability information on evacuation, the relationship between home safety and evacuation, the effect of pet ownership on evacuation, the implications of evacuation fatigue following multiple evacuations, evidence of the cry-wolf syndrome, evacuation expenditures, and the preference for and effect of refuges of last resort.
APPROACHES TO DATA COLLECTION
Surveys about evacuation behavior are conducted both before and after evacuations using
a variety of techniques. The following discussion describes several approaches and lists
some of their advantages and disadvantages.
The earliest evacuation surveys were conducted almost exclusively with door-to-door
interviews. Questionnaires (i.e., interview schedules) were structured, with both open and
closed-ended questions. It isn’t clear how individuals were selected for interviewing,
although addresses could have been chosen at random. It’s more likely that a spatially
systematic scheme was employed (e.g., every third house). Response rates (i.e., people
agreeing to participate in the survey) were generally good. The face-to-face nature of the
interviewing sometimes fostered interaction between interviewer and respondent that
facilitated follow-up questions and in-depth explanations and elaborations. Door-to-door
interviewing is still practiced occasionally, but less and less over time because of
concerns over cost, time requirements, and safety of interviewers. If no one is home the
first time an interview is attempted at a residence, interviewers should go to the same
address on a different day of the week and different time of day, up to perhaps four times,
before replacing it with another address. That can be prohibitively expensive. American
society is different today than it was in the 1950’s, 60’s, and even 70’s. Residents are
more cautious about strangers knocking on their doors, and interviewers face greater risks
in certain neighborhoods. Still, some populations might be accessible only by going to
their residences, and certain types of questionnaires (e.g., involving graphics) require the
respondent to view objects before responding.
Occasionally people are interviewed in high-traffic volume venues of convenience, hence
the name convenience samples. Interviewers have approached patrons in parking lots of
“big box” department stores, for example. The motivation is usually cost, and the
technique is usually justified by the argument that the clientele of the establishment
represents a “cross section” of the community. Socioeconomic information can be
gathered about respondents so the sample can be compared to the general population, and
the sample might be weighted to make it more representative. Other examples of
convenience samples are interviewing tourists on boardwalks and interviewing attendees
of an event such as a “hurricane expo” where vendors exhibit products of interest to
residents of hurricane-prone communities.
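
One common way to weight such a sample is post-stratification: each respondent is weighted by the ratio of his or her group's share of the general population to that group's share of the sample. The Python sketch below uses hypothetical housing-tenure categories and shares purely to show the mechanics.

# Minimal post-stratification sketch for weighting a convenience sample.
# Categories, shares, and respondent records are hypothetical.
population_share = {"owner": 0.65, "renter": 0.35}
sample_share     = {"owner": 0.80, "renter": 0.20}   # convenience-sample skew

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Respondent records: (tenure group, said they would evacuate)
respondents = [("owner", True), ("owner", False), ("renter", True)]
weighted_yes = sum(weights[g] for g, yes in respondents if yes)
weighted_n = sum(weights[g] for g, _ in respondents)
print(f"weighted intended-evacuation rate: {weighted_yes / weighted_n:.0%}")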
One of the first surveys conducted specifically to provide data for hurricane planning was
a newspaper survey (Southwest Florida Regional Planning Council, 1983). Questions
were printed about whether people would evacuate, where they would go, and when they
would leave. Readers were asked to cut out the survey, complete it, and mail it to an
address. The technical data report for the HES acknowledged the non-random nature of
the newspaper survey, but employed the results for planning anyway. One seldom sees
newspaper surveys in printed media today.
Mail surveys are probably the second most commonly employed data gathering technique
for evacuation surveys today. Researchers who employ mail surveys are confident that they provide reliable results and yield larger samples than telephone surveys at the same
cost. A certain level of education is required, which might exclude the less educated from
the survey, although demographics of respondents can be compared to the general
population. The length of the questionnaire must be limited, and branching questions
(e.g., “if…go to”) are difficult to include. One of the greatest concerns about mail surveys
is that they require a high level of motivation to respond. There are follow-up methods
that can eventually get a completion rate that might be comparable to telephone surveys,
but it is still possible that the survey participants are people whose views and behaviors
are atypical with respect to the survey topic. People who evacuated, for example, might
be more likely than others to participate in the survey, possibly because they are
interested in sharing their opinions and experiences during the evacuation or simply
because the same characteristics that led them to evacuate are more likely to motivate
them to talk to an interviewer about evacuation. Demographics are not generally good
predictors of most evacuation behaviors, so there is no way to use demographics to
correct the bias.
The great majority of surveys conducted about evacuations today are done by telephone.
There are many firms and centers in the business of conducting telephone surveys on a
daily basis about political and consumer issues, and their resources can be applied to
evacuation subjects. In many cases random-digit dialing can be employed, but in other
instances sampling is allocated spatially, to reflect vulnerability to the hazard in question.
Although vulnerability levels can be assigned after data collection is finished if address
information is gathered, there might be too few responses in certain locations. Many
interviewing organizations employ computer-assisted telephone interviewing (CATI)
systems that reduce interviewer error on branching questions and create the database as
the interview is finished. Although it is possible to conduct interactive, open-ended
interviews by phone, in which responses are categorized for analysis after the fact, most
telephone surveys are extremely structured and provide little opportunity for probing and
free-form explanations. Nonresponse rate is an increasing problem for telephone surveys.
Even though interest and cooperation are typically greater for evacuation surveys than for
consumer and political surveys, nonresponse overall is still a concern. In some locations
twelve phone numbers are needed for every completed interview (Downs, 2008). Some
numbers are simply out of date or incorrect, but many people don’t answer at all, letting
their answering machines screen their calls. After someone answers the call, the
completion rate is roughly 34%. As with mail surveys, one worries that the people
agreeing to answer questions about evacuation are atypical of the general population with
respect to how they evacuate.
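
The figures just cited imply a straightforward dialing budget, sketched below in Python; the target of 400 completed interviews is a hypothetical example.

# Back-of-the-envelope dialing requirements implied by the figures above:
# roughly 12 phone numbers per completed interview, and about a 34%
# completion rate once someone answers.
target_completes = 400                        # hypothetical target
numbers_per_complete = 12
completion_rate_after_answer = 0.34

numbers_needed = target_completes * numbers_per_complete
answered_calls_needed = target_completes / completion_rate_after_answer

print(f"phone numbers to draw: {numbers_needed:,}")
print(f"answered calls needed: {answered_calls_needed:,.0f}")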
Internet surveys are uncommon in the evacuation field, but will probably gain in
popularity in the future. They contain sampling biases which are generally
acknowledged. They’re confined to people who at least have access to the internet and most likely consist largely of people who use it routinely. Solicitations to participate can be sent to
email lists but can also be distributed by website postings, mailings, newspaper
advertisements, and even telephone calls. On top of the sample bias, there is still the
nonresponse problem that also plagues other data collection methods. At least two
internet surveys concerning hurricane evacuation are being planned that will employ
some of the same questions used in a telephone survey of the general population.
Comparison of results should provide some insights into differences yielded by the two
approaches.
Sometimes respondents are contacted a second time or even more frequently to compare
their responses over time and from one evacuation event to another (Dow and Cutter,
1998). The respondents are often called panels, but sometimes the response sets are
simply called longitudinal data. It is certainly valuable to know what the same person did
in more than one evacuation. However, that can frequently be accomplished in a single
interview if more than one evacuation preceded the interview. The panel approach offers
certain advantages, but it raises certain concerns as well. One survey might ask
respondents what they would do in an evacuation, for example, and if an evacuation
ensued, contacting the same person again would permit comparison of the intended and
actual responses. Panels also provide data about responses in multiple evacuations that
had not occurred at the time of the initial data collection. However, one worry is that
respondents become atypical after being interviewed even once, and almost certainly
twice. If they are told that they will be contacted again in the future, they might become
even more atypical. Part of the apprehension is similar to the “Hawthorne effect” in
which experimental subjects behave differently when aware that they are participating in
an experiment. Another worry is that the very act of talking with a respondent for 10 to
20 minutes about evacuation causes the person to become sensitive to issues they were
unaware of before the interview. Their subsequent evacuation behavior could change as a
result. One hurricane researcher reported that her panel was becoming more “hurricane
savvy” with repeated evacuations. Other observers worried that the panelists might be
becoming more “survey savvy” as a result of repeated interviews.
Many evacuation surveys ask people what they would do rather than what they actually
did. This is necessary in locations that haven’t experienced an evacuation and might be
relevant in places that have. Most of the social science literature calls this survey
information “intended response” data but in the transportation field it is sometimes called
“stated preference” data. It can be collected using any of the methods described above but
a distinction is made here simply to differentiate between surveys that ask about intended
as opposed to actual behavior. From the earliest days of intended-response surveys on all
subjects, there have been questions about the correspondence between what people say
and what they do. Although there have been reports of close agreement in some pre-event
and post-event evacuation surveys (Kang, Lindell, and Prater, in press), most of the
evidence suggests significant disagreement. In hurricane surveys, for example, people
overstate the likelihood that they will evacuate in low-threat scenarios, without
evacuation notices being issued. They also overstate the likelihood that they will use
public shelters. On the other hand intended vehicle use matches well with actual vehicle
use (Baker 2000). A persistent policy issue is whether the sort of shadow evacuation that
occurred around TMI will occur in other nuclear power plant evacuations. Stated
preference surveys suggest that it will, but there is the possibility that it might be
suppressed with effective public information (Lindell et al., 1985). A nuclear power plant
in New York was constructed but never operated, in part because of concerns about
shadow evacuation. Some of the earliest HES efforts took intended-response survey data
at face value and used it without modification as inputs into the transportation and shelter
analyses. More recent attempts to derive planning assumptions for those analyses employ
a mix of actual response data, modified intended response data, and statistical models that
capture general relationships between behaviors and certain predictor variables.
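
Such statistical models are often regression-type models relating a behavior to predictor variables. The sketch below shows the general structure with a logistic regression; the predictors (hearing an official notice, believing the home is unsafe, living in a mobile home) and the data are fabricated placeholders chosen only to illustrate the modeling approach, not any published result.

# Hedged sketch of a predictor-variable model: logistic regression relating
# evacuation (0/1) to hypothetical predictors. Data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: heard_official_notice, believes_home_unsafe, mobile_home (all 0/1)
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 0],
              [1, 1, 1], [0, 0, 1], [1, 0, 1], [0, 1, 0]])
y = np.array([1, 1, 1, 0, 1, 0, 1, 0])   # evacuated?

model = LogisticRegression().fit(X, y)

# Predicted evacuation probability for a household that heard a notice,
# believes its home is unsafe, and does not live in a mobile home.
print(model.predict_proba([[1, 1, 0]])[0, 1])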
Another valuable source of data about evacuations comes from mechanical observation,
not surveying. Traffic counters are used to record traffic volume over time at numerous
roadway locations during evacuations. A couple of decades ago, traffic count data was relatively sparse and unreliable. Counters sometimes relied on electrical power, which was occasionally disrupted before the evacuation was complete, so the data wasn’t stored. In some instances software automatically discarded traffic count
data during an evacuation because it was out of the range of values considered reliable.
Today traffic counts are monitored during evacuations, revealing traffic volumes, average
speeds, and trends in departures. In Florida real-time traffic count data is compared to
results of HES products to monitor whether evacuations are proceeding as anticipated.
CHALLENGES TO CONDUCTING EFFECTIVE EVACUATION SURVEYS
Many challenges facing data collection for evacuation surveys are the same as those facing other sorts of surveys. Other challenges are more specific to the topics and issues that need to
be addressed in evacuation surveys.
Nonresponse bias continues to cast doubt on the representativeness of evacuation
survey data. Because demographic variables are not well correlated with evacuation
behaviors, adjustments to results can’t be made confidently on that basis. All forms of
data collection suffer from nonresponse bias, and nonresponse is getting worse with the
profusion of telemarketing and political polling. People who participate in evacuation
surveys are likely to be people who are most interested in the subject, the most informed,
and people who have given the topic the most thought.
There are subgroups of populations that are more difficult to reach than others. Many
poor individuals don’t have telephones and don’t possess the educational skills to respond
to mail questionnaires. Language is often a barrier, especially in certain neighborhoods of
major cities (Lindell and Perry, 2004). Any single hard-to-reach subgroup of the
population might not constitute a large enough percentage of the total population to
significantly affect overall clearance times and shelter demand, but collectively, they
might. Moreover, public officials have responsibilities for the safety of each pocket of the
population.
Sample sizes have gotten larger and larger in evacuation surveys, as users have insisted
on more and more disaggregation of the data. While this isn’t a challenge in and of
itself, it results in higher costs. A recent telephone survey in Florida dealing with
evacuation cost $29 per completed interview, not including survey design or report
preparation (Downs, 2008). Each coastal county had 400 completed interviews, and each
non-coastal county had 150. The survey cost for data collection alone was $545,000
(US). The client required “statistically valid” data in each county.
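
For scale, the cost arithmetic works out roughly as in the sketch below; the split of counties is an assumption (Florida is commonly described as having 35 coastal and 32 non-coastal counties), chosen because it reproduces a total close to the reported $545,000.

# Rough cost arithmetic for a county-by-county survey design like the
# Florida example above. County counts are assumptions, not from the source.
cost_per_complete = 29
coastal_counties, completes_per_coastal = 35, 400
inland_counties, completes_per_inland = 32, 150

total_completes = (coastal_counties * completes_per_coastal
                   + inland_counties * completes_per_inland)
print(f"completed interviews: {total_completes:,}")
print(f"data-collection cost: ${total_completes * cost_per_complete:,}")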
Cell phones have created a challenge for certain types of telephone surveys as well as for
telephone-based emergency notification of evacuations. Evacuations for many events are
based on spatial variations in vulnerability to the events. Hurricane evacuations and
riverine flood evacuations are ordered primarily for areas that would flood dangerously,
and some flood-prone areas would flood more dangerously than others. Nuclear power
plant evacuations would be called for within certain distances of the facility, referred to
as Emergency Planning Zones (EPZs). Many evacuation surveys are designed to meet
quotas of responses from predetermined evacuation zones. To do that by phone,
telephone numbers within the target zones are identified prior to calling. This has
traditionally been accomplished by using reverse telephone directories, in which an
address can be entered, and the directory will display a phone number for the address.
More recently Geographical Information System (GIS) overlays of evacuation zone
boundaries have been used in conjunction with address and phone databases to yield
phone numbers in certain zones. However, cell phone numbers don’t usually match a
physical address like land lines do. As more and more people rely on cell phones
exclusively, they will be excluded from spatially-targeted evacuation surveys by phone.
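
A hedged sketch of the GIS overlay step follows, using the geopandas library (version 0.10 or later for the 'predicate' keyword); the file names and column names ('zone_id', 'phone') are hypothetical stand-ins for whatever zone layers and address/phone databases a particular study uses.

# Point-in-polygon sketch: attach evacuation zones to geocoded phone listings
# and draw a calling sample for one target zone. Inputs are hypothetical.
import geopandas as gpd

zones = gpd.read_file("evacuation_zones.shp")             # zone polygons with 'zone_id'
listings = gpd.read_file("geocoded_phone_listings.shp")   # points with a 'phone' column

listings_in_zones = gpd.sjoin(listings, zones, how="inner", predicate="within")

zone_a_numbers = listings_in_zones.loc[
    listings_in_zones["zone_id"] == "A", "phone"
].sample(n=1000, random_state=1)                          # draw the call list
zone_a_numbers.to_csv("zone_a_call_list.csv", index=False)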
Tourists and other transient populations pose particular problems for data collection.
They can be interviewed prior to an event, but it is extremely difficult to contact them
after an evacuation. They don’t have local residences where the evacuation occurred, and
tourists have returned home by the time an evacuation survey can be mounted. At least
one study identified tourists who were present during an evacuation by inspecting
registrations at public attractions and then attempted to contact them by mail (Drabek,
1996). Attempts to obtain contact information from accommodations have been largely
unsuccessful due to concerns about privacy.
A secondary sort of challenge associated with evacuation surveys is the use to which the
data will be put. Social science research on evacuation was initially concerned with
identifying factors associated with evacuation behavior and warning response, with the
aim of discovering ways to enhance the level of appropriate responses. The application of survey data to clearance time calculations and projections of shelter demand has led to the need to predict very precise numerical values. In some jurisdictions such as North Carolina
and Florida in the U.S., regulations governing whether new residential developments will
be allowed to occur are tied to hurricane evacuation clearance times and shelter demand.
Thus the quality of data upon which predictions are made must be legally defensible.
RESEARCH NEEDS
As with most topics, there is no shortage of candidate ideas for “further research.” The
list compiled by one person might vary greatly from that of another, based on the
experiences, perspectives, and disciplinary blinders of each. Moreover, even from a
single perspective, no list will be exhaustive. The following list is offered with those
caveats.
The collection of real-time survey data during an evacuation is almost totally absent.
Traffic counters do provide real-time data, but for a limited range of variables and for
only certain locations. Surveys following evacuations often ask respondents to explain
certain behaviors, such as why they did or didn’t evacuate. The surveys are often
conducted months or even years following the event, and interviewee responses are
subject not only to memory failure but to “reconstructions” of their own experiences and
decisions, based on discussions with others and media accounts of the event. Real-time
data collection would permit accounts of behaviors and decision processes as they occur.
Real-time data collection would also yield more complete and accurate measures of
information that people are receiving during the threatening event. This includes media
broadcasts, but also official pronouncements by public officials, which can be used to
help explain variations in certain evacuation behaviors.
Finally, real-time survey data could provide information useful to public officials managing the evacuation, for example by identifying subgroups of the population that are not responding as officials believe they need to respond, and why.
Survey techniques employed for most evacuation surveys today are designed to achieve
large sample sizes at minimal cost. They rely heavily on closed-ended questions that lend themselves to ready data entry suitable for statistical analysis. The methods seldom allow
for in-depth, interactive follow-up questions and probing for insights not anticipated prior
to the survey instrument being designed. “Qualitative” research is probably an abused
term, used to legitimize anecdotal data, but good qualitative methods are underutilized in
evacuation research. When coupled with large-sample, structured survey data, the
addition of focus group data and unstructured interview data might provide insights to
patterns in survey data that otherwise go undetected. At least one study used ethnographic
survey methods to construct a decision tree for evacuation decisions (Gladwin, Gladwin,
and Peacock, 2001).
Both intended response and actual response surveys about evacuation behavior are
common, and there have been a few comparisons between results obtained by the two
methods. Some of the better-established generalizations were cited previously. One
behavior that merits more comparative work as well as explanatory modeling work is
destination choice. Many surveys ask people where they would go, or where they went, in terms of geographical destinations, particularly the distribution of local vs. out-of-town trips and the percentage of out-of-town trips to various cities or counties. A better, more systematic comparison of the two data sets is needed to indicate
the extent to which the stated preference data can be utilized to make trip assignments. If
the intended response data doesn’t predict well or if it is absent, more statistical modeling
of actual destination distributions is needed to assess the ability to make trip assignments
with predictor variables.
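
To make the trip-assignment use concrete, the sketch below converts survey-derived destination shares into a simple trip table; the shares and the vehicle-trip total are hypothetical.

# Converting destination-choice shares from a survey into a trip table for
# assignment. All values are illustrative.
destination_shares = {
    "local (in-county)": 0.40,
    "adjacent county": 0.25,
    "major inland city": 0.25,
    "out of state": 0.10,
}
evacuating_vehicle_trips = 22_000   # hypothetical output of earlier steps

trip_table = {dest: round(evacuating_vehicle_trips * share)
              for dest, share in destination_shares.items()}
for dest, trips in trip_table.items():
    print(f"{dest}: {trips:,}")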
Whether people evacuate depends mainly on whether they believe they would be unsafe staying in place. Misconceptions about vulnerability contribute to shadow
evacuation but they also contribute to under-evacuation from high-risk locations.
Surprisingly little is known about why people believe they would be safe or not, beyond
having confidence in the relevant forecast information. Belief that one’s home would or
would not be safe in a hurricane, for example, is a strong predictor of evacuation, but
many people have misconceptions about the safety of their homes (Baker, 2002), and
researchers can’t explain why. A better understanding of this belief could help reduce
shadow evacuation and enhance evacuation from high-risk areas.
A related issue is the failure to hear evacuation notices from public officials. Authorities
vary with respect to being able to compel evacuation (i.e., mandatory evacuation orders),
but people who say they heard from officials that they should or must evacuate are much
more likely than others to leave. Many residents in areas told to evacuate, however, say
they did not hear that they were supposed to go. In some instances the failure to hear
evacuation notices or to comprehend to whom they applied is understandable. But in
other cases it seems incomprehensible that residents did not hear, either directly or
indirectly, that they were being told to leave. This too is a strong predictor of evacuation
participation rate (Baker, 2002), and research has not explained why so many people fail
to hear evacuation notices.
Experimental methods, even “pencil-and-paper” experiments, are uncommon in
evacuation research. There have been experimental survey designs to assess the effect of
hurricane probability forecasts on evacuation response (Baker, 1995) and to assess the
effect of colorized satellite imagery on response (Sherman-Morris, 2005). In general,
though, few studies have attempted to present sets of experimentally controlled attributes
when eliciting intended-response survey data. These methods have the potential to
replicate the sorts of decisions and tradeoffs that people have to make in evacuation
situations. In such cases they should provide insights into the effect of those attribute
variables on evacuation decisions and facilitate better predictions from intended-response
data.
More creative methods or combinations of data collection methods need to be employed.
Wilmot (2004) has proposed a variety of approaches used to study transportation
behaviors other than evacuation and described how they could be used for evacuation
research. All of the proposals have a great deal of merit. If combined with more
traditional methods, they could broaden both the scope and depth of understanding about evacuation behavior, while being benchmarked against measures whose reliability is
already known.
The final recommendation is a strong appeal for collaborative research. Social scientists and transportation
engineers and other transportation specialists often have different but complementary
perspectives and tool sets for conducting evacuation research. The state of knowledge can
only improve as a result of interdisciplinary collaboration and fresh looks at old
problems.
REFERENCES
Baker, E. J. (1995). Public Response to Hurricane Probability Forecasts. The Professional
Geographer, 47, 137-147.
Baker, E. J. (2000). Hurricane Evacuation in the United States. In Storms, Volume 1 (R. Pielke, Jr. and R. Pielke, Sr., eds.), Chap. 16, Routledge.
Baker, E. J. (2002). Social Impacts of Tropical Cyclone Forecasts and Warnings. World
Meteorological Bulletin, 51, 229-235.
Barton, A. H. (1969). Communities in Disaster: a Sociological Analysis of Collective
Stress Situations, Doubleday.
Dow, K. and S. L. Cutter (1998). Crying Wolf: Repeat Responses to Hurricane
Evacuation Orders. Coastal Management, 26, 237-252.
Dow, K. and S. L. Cutter (2002). Emerging Hurricane Evacuation Issues: Hurricane
Floyd and South Carolina. Natural Hazards Review, 3, 12-18.
Downs, P. (2008). Personal communication, Tallahassee, Florida, Kerr & Downs
Research, Inc.
Drabek, T. E. (1986). Human Systems Responses to Disaster: an Inventory of
Sociological Findings, Springer-Verlag.
Drabek, T. E. (1996). Disaster Evacuation Behavior: Tourists and Other Transients,
Natural Hazards Research and Information Center, University of Colorado.
Gladwin, C. H., H. Gladwin, and W. G. Peacock (2001). Modelling Hurricane
Evacuation Decisions with Ethnographic Methods, International Journal of Mass
Emergencies and Disasters, 19, 117-143.
Kang, J. E., M. K. Lindell, and C. S. Prater (in press). Hurricane Evacuation Expectations
and Actual Behavior in Hurricane Lili. Journal of Applied Social Psychology.
Lindell, M. K., et al. (1985). Planning Concepts and Decision Criteria for Sheltering and
Evacuation in a Nuclear Power Plant Emergency, AIF/NESP-031, Atomic
Industrial Forum.
Lindell, M. K. and R. W. Perry (1992). Behavioral Foundations of Community
Emergency Planning, Hemisphere Publishing Corporation.
Lindell, M. K. and R. W. Perry (2004). Communicating Environmental Risk in
Multiethnic Communities, Sage.
Mileti, D. S. and J. H. Sorensen (1990). Communication of Emergency Public Warnings,
ORNL-6609, Oak Ridge National Laboratory.
Moore, H.E., et al. (1963). Before the Wind: A Study of Response to Hurricane Carla,
NAS/NRC Disaster Study #19, National Academy of Sciences.
Rogers, G. O. and J. H. Sorensen (1989). Warning and Response to Two Hazardous
Materials Transportation Accidents in the U.S., Journal of Hazardous Materials,
22, 57-74.
Ruch, C. et al. (1991). The Feasibility of Vertical Evacuation, Program on Environment
and Behavior Monograph #52, University of Colorado Institute of Behavioral
Science.
Sherman-Morris, K. (2005). Enhancing Threat: Using Cartographic Principles to Explain
Differences in Hurricane Threat Perception. The Florida Geographer, 36, 61-83.
Sorensen, J. H., B. M. Vogt, and D. S. Mileti (1987). Evacuation: an Assessment of
Planning and Research, ORNL-6376, Oak Ridge National Laboratory.
Southwest Florida Regional Planning Council (1983). Hurricane Evacuation Study.
North Fort Myers, Florida.
Wilkinson, K.P. and P. J. Ross (1970). Citizens’ Responses to Warnings of Hurricane
Camille, Social Science Research Center Report 35, Mississippi State University.