Mixed-Mode Methods for Conducting Survey Research

Herbert M. Baum, Ph.D.; Anna Chandonnet, M.A.; Jack Fentress, M.S., M.B.A.; and Colleen Rasinowich, B.A.
www.datarecognitioncorp.com
In November 2010 the Federal government held the Workshop on the Future of Federal Household Surveys, where Dr. Don Dillman presented a paper entitled “Making New Data Collection Modes Effective in Household Surveys” (Dillman, 2010). In 2011, Public Opinion Quarterly published an article entitled “The Future Modes of Data Collection” (Couper, 2011). Both of these distinguished authors note that there is growing interest in mixed-mode surveys because of the limitations posed by each of the major existing modes (paper/pencil, personal, telephone, and web).
The typical limitations cited by mode are:
• Cost – paper/pencil and personal
• Time – paper/pencil and personal
• Coverage – telephone and web
• Decreasing response rates – paper/pencil and telephone
• Different preferences of respondents – all modes
To overcome these limitations, purchasers of surveys as well as the survey research community have been seeking innovative solutions. In addition to its impact on the quality of the data being gathered, mixed-mode methodology is important because, with declining response rates, there is greater interest in offering respondents options for how they want to respond to a survey. To increase response rates, survey research needs to shift its focus from the researcher’s needs to those of the respondent. For example, people are accessing information via the Internet, but increasingly doing so via smartphones. Can we use that to invite people to participate in a survey?
What is a mixed-mode survey?
Couper (2011) presents a lengthy discourse on what is meant by mode and notes that there can be as much variation within a mode as between modes. For example, when conducting telephone interviewing, is the survey being conducted using computer-assisted telephone interviewing (CATI) or interactive voice response (IVR)? The former has a live person conducting the interview while the latter does not. Among CATI surveys there may be differences in how the sample was selected, such as random digit dialing (RDD) versus dual-frame sampling; thus we need to acknowledge that intra-mode variations need to be considered as well. Intra-mode variation is not covered in this paper.
For this paper, mode refers to the four general survey methods: paper/pencil, telephone (CATI or IVR), personal (including computer-assisted personal interviewing – CAPI), and web. We define mixed-mode as using one survey instrument with two or more data collection modes (e.g., gathering data via paper/pencil and the web). By this definition, conducting multiple surveys of a population by different modes is not mixed-mode.
That definition leads to the questions:
• When is it best to use a mixed-mode survey?
• What challenges are posed when fielding a survey using mixed modes?
The answer to the former relates to the survey instrument, coverage, access, and respondent preferences. The latter relates to formatting the instrument and to the introduction of bias in one mode that is not present in another (e.g., satisficing or social desirability bias).
In this white paper we review what a mixed-mode survey is, when it is best to use that approach, the challenges posed in conducting a mixed-mode survey, the potential benefits of a mixed-mode survey, and DRC’s experience in conducting mixed-mode surveys.
When should you use a mixed-mode approach?
There is no general answer other than letting the potential survey instrument dictate how the survey is best conducted. As a result, there are instances when the length and complexity of the survey dictate using only one mode. Specifically, it would not be advisable to conduct adaptive conjoint or other complex interviews on paper. Complicated skip patterns are best suited to an interviewer conducting either CATI or CAPI, or possibly to a web mode that leverages available analytic software. Cost and time would dictate which of these methods you would select.
Similarly, there are instances when the mode selected requires a mixed-mode approach. For a general population web survey, there is potential bias related to coverage issues. As of 2010, Internet access at home was limited to 71% of the population, and this percentage varies by both age and race (Smith, 2010). To correct for this differential in access, potential respondents without Internet access must be offered an alternative means of responding (e.g., paper/pencil), making the survey dual-mode by design. If, however, you had a closed population, one where all the potential respondents are identified, including email addresses (e.g., college students or employees of a company), a single-mode web-based survey might be the most appropriate choice.
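To make the coverage concern concrete, here is a minimal sketch of the arithmetic of coverage bias for a web-only survey. Only the 71% coverage rate comes from the figure cited above (Smith, 2010); the group means are hypothetical values chosen for illustration.

```python
# Coverage bias for a web-only survey: a minimal illustration.
# Only the 71% coverage rate comes from the paper (Smith, 2010);
# the group means below are hypothetical.

coverage = 0.71        # share of the population with home Internet access
mean_online = 3.8      # hypothetical mean response among those with access
mean_offline = 3.2     # hypothetical mean response among those without

# The true population mean is the coverage-weighted average of both groups.
true_mean = coverage * mean_online + (1 - coverage) * mean_offline

# A web-only survey reaches only the online group, so its estimate
# centers on mean_online; the bias is driven by the excluded group.
web_only_estimate = mean_online
bias = web_only_estimate - true_mean  # = (1 - coverage) * (mean_online - mean_offline)

print(f"True mean: {true_mean:.3f}")
print(f"Web-only estimate: {web_only_estimate:.3f}")
print(f"Coverage bias: {bias:.3f}")
```

Offering a paper/pencil option to the offline 29% removes this bias term entirely, which is the rationale for the dual-mode design described above.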
In addition to theoretical reasons for using either a single- or mixed-mode approach, there are the pragmatic considerations of cost and time. To boost response rates and keep the cost of the survey reasonable, it might be decided to follow up nonrespondents to a paper/pencil survey with CATI. By collecting mail responses first, you limit the number of calls that need to be made, as the sketch below illustrates.
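The sketch that follows compares a CATI-only design against a mail-first design with CATI follow-up of nonrespondents. All unit costs and response rates are hypothetical assumptions chosen only to show the structure of the calculation, not actual fielding costs.

```python
# Rough cost comparison: CATI-only vs. mail-first with CATI follow-up.
# All unit costs and response rates below are hypothetical.

n = 10_000                  # sampled cases
cati_cost = 25.00           # assumed cost per attempted CATI case
mail_cost = 4.00            # assumed cost per mailed paper/pencil case
mail_response_rate = 0.40   # assumed share responding by mail

# Design A: attempt every case by CATI.
cati_only_total = n * cati_cost

# Design B: mail everyone first, then call only the mail nonrespondents.
mail_nonrespondents = n * (1 - mail_response_rate)
mixed_total = n * mail_cost + mail_nonrespondents * cati_cost

print(f"CATI only:       ${cati_only_total:,.0f}")
print(f"Mail + CATI f/u: ${mixed_total:,.0f}")
```

Under these assumptions the mixed design saves roughly a quarter of the fielding cost while still attempting every sampled case.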
What are the challenges?
The challenge is to avoid introducing bias into the results by using different modes to gather the data; thus mode effects become an important design consideration, with the goal of minimizing their impact. The literature on mixed modes focuses primarily on differences between paper/pencil and the web, probably because these two modes deliver the survey instrument via visual stimuli. In 2000, Young et al. found virtually no differences between the paper/pencil and web modes of a survey. Church (2001) went beyond the paper/web comparison and also considered another respondent-administered mode (IVR). He found that two dual-mode surveys, one utilizing paper-and-pencil and web and the other using paper-and-pencil and IVR, showed a small percentage of unique variance in the data when comparing modes. Church recommends that, with equivalent data quality across modes, one should choose the methodology based on project parameters (e.g., culture fit or ease of implementation).
A mixed-mode study that uses both self-administered and interviewer-administered questions would be prone to bias introduced by the use of an interviewer. Dillman et al. (2009) reported “that for the satisfaction-dissatisfaction questions … respondents to the aural modes are significantly more likely than are respondents to the visual modes to give extreme positive responses, a difference that cannot be accounted for by a tendency toward recency effects with the telephone.” This type of effect is known as social desirability bias (i.e., providing a positive response because somebody is asking the question). Chang and Krosnick (2009) compared an Internet survey with a telephone interview and found that the “telephone data manifested more random measurement error, more survey satisficing, and more social desirability response bias than did the Internet data.”
Kraut, Oltrogge, and Block (1998) observed social desirability bias when they found that the telephone mode of an employee opinion survey produced significantly more favorable results than the paper/pencil mode of the same survey. These studies indicate a “measurement effect” that is present when questions are asked by an interviewer as opposed to being self-administered. Social desirability bias can be minimized to the extent that questions are factual, or by using the phone as a mode without an interviewer (IVR). The latter is substantiated by Knapp and Kirk (2003), who found no significant differences in responses to sensitive questions among paper/pencil, web, and automated touch-tone phone response modes.
As noted, mixed-mode methods might be used to increase response rates. Dillman et al. (2009) utilized different combinations of modes for the initial phase and for the follow-up to nonrespondents (second phase): “Phase 1 data collection was conducted by telephone interview, mail, interactive voice response, and the Internet, while Phase 2 consisted of nonrespondents to Phase 1, and was conducted by telephone or mail. In general, results from [the] study suggest that switching to a second mode is an effective means of improving response.”
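The arithmetic behind that phase-switching gain is simple: nonrespondents to the first mode get an independent second chance to respond. The sketch below uses hypothetical rates, not figures from the Dillman et al. (2009) study.

```python
# Cumulative response rate when phase 1 nonrespondents are
# re-approached in a second mode. Rates below are hypothetical.
rr_phase1 = 0.45  # assumed phase 1 response rate (e.g., mail)
rr_phase2 = 0.30  # assumed rate among phase 1 nonrespondents (e.g., CATI)

overall = rr_phase1 + (1 - rr_phase1) * rr_phase2
print(f"Overall response rate: {overall:.1%}")  # 61.5% under these assumptions
```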
If conducting a mixed-mode survey, should there be a different instrument for each mode? Doing so could introduce response bias associated with the survey instrument itself. Dillman and Christian (2003) recommend using unimode construction, where the instrument is written for the more restrictive environment. This is the practice employed by DRC. If conducting a survey via paper/pencil and using CATI to increase the response rate, the latter would be considered the more restrictive environment. Because CATI is administered aurally, questions must be fairly short and answer options brief, so that respondents can remember both the question and the answers as they respond. A paper/pencil survey can have longer questions and answer options because the respondent can go back and review them.
Though not a challenge directly related to mixed-mode design, many organizations are considering changing the mode they use to conduct a survey. If longitudinal trends are important, such a change can introduce bias. If a test of both modes is not made, then at a minimum all reports should indicate when the change occurred.
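Where a side-by-side test of both modes is feasible, even a simple comparison of response distributions can flag a mode effect before trend lines are broken. The sketch below applies a standard two-proportion z-test to the share of a key response in each mode; the counts are hypothetical, and the test is a generic statistical check rather than a procedure prescribed by the studies cited above.

```python
# Simple mode-effect check: compare the proportion of a key response
# (e.g., "very satisfied") between two modes with a two-proportion z-test.
# The counts below are hypothetical.
from math import erf, sqrt

def two_proportion_z_test(hits1, n1, hits2, n2):
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical results: phone respondents look more positive than mail ones,
# consistent with the social desirability findings discussed earlier.
z, p = two_proportion_z_test(hits1=420, n1=800,   # phone: 52.5% very satisfied
                             hits2=360, n2=800)   # mail:  45.0% very satisfied
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p would suggest a mode effect
```

A significant difference would not say which mode is “right,” only that trend reports should note the mode change, as recommended above.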
What are the benefits of conducting a mixed-mode survey?
As noted earlier, there are pragmatic benefits, usually cost and time. A mail survey with CATI follow-up may increase the response rate, and it will be much less expensive than conducting the entire survey via CATI. De Leeuw (2005) notes, “Survey designers choose a mixed-mode approach because mixing modes gives an opportunity to compensate for the weaknesses of each individual mode at affordable cost.”
Offering a paper survey option alongside a web survey may help to alleviate some concerns about web surveys, such as implementation issues (bandwidth), confidentiality, perceived versus actual accessibility, respondent burden, tolerance for technological glitches, perceived versus actual cost savings, data representativeness and accuracy, and lower response rates (Church, 2001). This is not meant to imply that there are no benefits to conducting a web survey. The benefits include: quicker turnaround time; lower item nonresponse; the ability to use longer open-ended questions (Kwak & Radler, 2002); the time and cost savings associated with not printing/mailing; and fewer transcription and scanning errors, because the data are returned in an electronic format (Kaplowitz, Hadlock, & Levine, 2004).
As noted above, there is some evidence that a mixed-mode approach does increase response rates. This has been demonstrated in general population surveys, special
populations, among professionals, and in establishment surveys (de Leeuw, 2005). A mixed-mode approach also demonstrates to respondents that you respect their time and are offering them options that may allow them to respond when it is most convenient for them. This is particularly relevant for branded surveys (customer satisfaction or loyalty). If you are using technology in innovative ways (e.g., IVR or web surveys), clients will view your approach as “leading edge,” while also offering a traditional option shows that you are attempting to appeal to a spectrum of respondents. Increasing the response rate has two benefits: it reduces the likelihood of nonresponse bias, and it decreases the standard errors if you are weighting the data. The latter occurs because, when making population estimates, there has to be a weight accounting for nonresponse, and a larger weight usually means larger standard errors.
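One way to see the weight–variance link is Kish’s approximate design effect for unequal weighting, deff ≈ 1 + CV²(w), where CV(w) is the coefficient of variation of the weights. The sketch below is a minimal illustration under hypothetical cell counts: the lower-response design produces more variable nonresponse-adjustment weights and therefore a larger variance inflation.

```python
# Nonresponse-adjustment weights and Kish's approximate design effect,
# deff ~= 1 + CV^2(w). Cell sizes and response rates are hypothetical.
from statistics import mean, pstdev

def kish_deff(weights):
    """Approximate variance inflation from unequal weights."""
    cv = pstdev(weights) / mean(weights)
    return 1 + cv ** 2

def nonresponse_weights(sampled, responded):
    """One base weight of sampled/responded per respondent in each cell."""
    weights = []
    for n_samp, n_resp in zip(sampled, responded):
        weights += [n_samp / n_resp] * n_resp
    return weights

# Two hypothetical designs, each with three weighting cells.
high_rr = nonresponse_weights(sampled=[500, 500, 500], responded=[400, 350, 300])
low_rr  = nonresponse_weights(sampled=[500, 500, 500], responded=[250, 150, 100])

print(f"deff, higher response: {kish_deff(high_rr):.3f}")  # ~1.01
print(f"deff, lower response:  {kish_deff(low_rr):.3f}")   # ~1.15
```

Under these assumptions, the low-response design inflates sampling variance by roughly 15% versus about 1% for the high-response design, which is why boosting response through a second mode can pay off in precision as well as in bias reduction.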
Overall, the benefits of using a mixed-mode approach, when feasible, are many. The challenges, with proper attention and expertise, can be overcome or minimized.
DRC’s experience in conducting mixed-mode surveys
DRC has a long history of providing mixed-mode expertise on a variety of survey types (e.g., 360 feedback, employee opinion, customer satisfaction, parent satisfaction with school services) for both commercial and federal clients. As thought leaders on this topic, DRC has presented or supported several multi-mode research papers at the following conferences: the Society for Industrial and Organizational Psychology (SIOP) Annual Conference (Christianson DeMay & Armstrong, 2000; Christianson DeMay & Toquam-Hatten, 2001), the International Military Testing Association Annual Conference (Elig & Waller, 2001; Quigley, Riemer, Cruzen, & Rosen, 2000), and the American Association for Public Opinion Research (AAPOR) Annual Conference (Randolph, 2001). As a supplement to this white paper, an additional DRC white paper is available with a specific focus on comparing paper-and-pencil and web survey modes (Fenlason & Christianson DeMay, 2002).
For additional information on this topic, or to work with DRC’s professional staff on your mixed-mode survey, please contact Jon Leinen, Vice President, Business Development, Survey Services group, by email (JLeinen@DataRecognitionCorp.com) or by telephone (763.268.2542).
References
Chang, L., & Krosnick, J. A. (2009). National surveys
via RDD telephone interviewing versus the internet:
Comparing sample representativeness and response quality. Public Opinion Quarterly, 73(4), 641-678.
Christianson DeMay, C., & Armstrong, S. (2000,
April). Dual process surveying: A technical perspective. In W. H. Macey (chair), Making the move to
internet-based surveys: Lessons from experience.
Practitioner Forum presented at the annual conference of the Society for Industrial and Organizational
Psychology, New Orleans, LA.
Christianson DeMay, C., & Toquam-Hatten, J. (2001, April). 360-degree assessment at The Boeing Company: A dual-process (web/paper) approach. In M. M. Harris (chair), The internet and I-O psychology: Applications and issues. Practitioner Forum presented at the 16th annual conference of the Society for Industrial and Organizational Psychology, San Diego, CA.
Church, A. H. (2001). Is there a method to our madness? The impact of data collection methodology on
organizational survey results. Personnel Psychology, 54,
937-969.
Couper, M.P. (2011). The future modes of data collection. Public Opinion Quarterly, 75(5), 889-908.
de Leeuw, E. D. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics,
21(2), 233-255.
Dickinson, J. R., Faria, A. J., & Friesen, D. (1994).
Live vs. automated telephone interviewing: Study finds
automated interviews yield comparable results for less
money. Marketing Research, 6(1), 28-34.
Dillman, D. A., & Christian, L. M. (2003). Survey mode as a source of instability in responses across surveys. Retrieved 01/13/05 from http://survey.sesrc.wsu.edu/dillman/papers.htm
Dillman, D. A., Phelps, G., Tortora, R., Swift, K.,
Kohrell, J., Berck, J., & Messer, B. L. (2009). Response
rate and measurement differences in mixed-mode surveys
using mail, telephone, interactive voice response, and the
internet. Social Science Research, 38(1), 1-18.
Dillman, D. A. (2010). Making new data collection modes effective in household surveys. Presented at the Workshop on the Future of Federal Household Surveys, November 4-5, Washington, DC.
Elig, T., & Waller, V. (2001). Internet versus paper
survey administration: Impact on qualitative responses. Paper presented at the 43rd Annual Conference
of The International Military Testing Association,
Canberra, Australia.
Fenlason, K. J., & Christianson DeMay, C. (2002). Paper-and-pencil and web surveys: Different methods, same results? Published electronically at http://www.datarecognitioncorp.com/survey/websur.html
Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68, 94-101.
Knapp, H., & Kirk, S. A. (2003). Using pencil and
paper, Internet and touch-tone phones for self-administered surveys: Does methodology matter? Computers in
Human Behavior, 19, 117-134.
© 2012 Data Recognition Corporation.
Kraut, A. I., Oltrogge, C. G., & Block, C. J. (1998,
April). Written vs. telephone surveys of employees: Are
the data really comparable? Paper presented at the 13th
annual conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
Kwak, N., & Radler, B. (2002). A comparison between
mail and web surveys: Response pattern, respondent
profile, and data quality. Journal of Official Statistics,
18(2), 257-273.
Quigley, B., Riemer, R. A., Cruzen, D. E., & Rosen, S.
(2000). Internet versus paper survey administration:
Preliminary findings on response rates. Paper presented at the 42nd Annual Conference of the International Military Testing Association, Edinburgh, U.K.
Randolph, J. (2001). Internet versus paper survey
administration: Impact on qualitative responses. Presented at the American Association for Public Opinion Research Annual Conference, Montreal, Quebec.
Smith, A. (2010, August 11). Home broadband 2010. Pew Research Center’s Internet & American Life Project. Washington, DC.
Young, S. A., Daum, D. L., Robie, C., & Macey, W. H.
(2000, April). Paper versus Web survey administration:
Do different methods yield different results? Paper presented at the 15th annual conference of the Society for
Industrial and Organizational Psychology, New Orleans,
LA.
www.datarecognitioncorp.com
© 2012 Data Recognition Corporation.