RESPONDENT BURDEN

Norman M. Bradburn, University of Chicago
As the number and complexity of sample surveys increase, concern for the burden that such surveys place on respondents has also increased. Perhaps the most noticeable manifestation of this concern has been the Paperwork Commission, which was specifically set up to find ways to reduce the number of forms that citizens have to fill out. During its deliberations, an attempt was made to reduce the number of forms by 10 percent. The consequences of this across-the-board reduction fell disproportionately on sample surveys undertaken for research and evaluation purposes, although they constitute a relatively small part of the paperwork required of citizens by the federal government. While there was consternation among practitioners of survey research during the reduction, the net effect of the Commission's activities has been to increase our concern for the potential burden we may be placing on respondents when we undertake surveys. I am less sure that it has in fact reduced the amount of paperwork.
I am tempted to start by saying that the concept of "respondent burden" is like that of "the weather": everyone talks about it, but no one does anything about it. But this is an overstatement on both sides. If we mean by "talking about it" doing some research or writing about respondent burden, it can hardly be said that everyone, or even very many, are doing anything about it. I know of only one study (and that is still in the proposal stage), by Laure Sharp of the Bureau of Social Science Research, which will be directly and specifically focused on respondent burden. Searches of relevant abstracts and other indexing systems do not show respondent burden as a category that is used to organize methodological studies. There is some research on response rates in which length or difficulty of the questionnaire are variables, but investigations of respondent burden seem to be indirect at best.

On the other hand, it would be unfair to suggest that no one does anything about it, since survey research professionals are concerned with the completeness and accuracy of their data. Since they must depend on the cooperation of respondents to obtain complete and accurate data, they must always be sensitive to those factors that might decrease cooperation. On the whole they are vigilant about things that might make respondents feel imposed upon or feel that the survey is in some way burdensome. Indeed, I would argue that it is because of their day-to-day concern for the potential burden that they place on respondents that there is little self-conscious research on the issue. It is so much a part of everyday practice that it is not seen as a topic in need of research.

If "respondent burden" is not a developed concept in the research literature, what can we say about it? Let me start by describing my general way of thinking about survey research interviews, so that we will have a way of thinking about the problem of respondent burden that can be related to other aspects of survey research.

We begin with the notion that the research interview is a two-role social system governed by general norms about the behavior of the actors. The two roles are those of respondent and interviewer. The roles are joined by the common task of giving and obtaining information. In the most general sense, the quality of the data is the criterion by which to judge how effectively the task has been carried out.

The interview is a social encounter. It is not immune from the general norms that prevail when people voluntarily participate in social events. The researcher is asking the respondent to provide information. We must pay attention to what motivates respondents to participate in an interview and to what we, as researchers, can do to increase or decrease that motivation, particularly the motivation to perform the respondent role well. In general, we stress contribution to knowledge and/or civic duty as reasons for participating in research. Such reasons appear to be fairly powerful ones, as evidenced by the relatively high cooperation rates for serious studies.

But the interview may also be an enjoyable social event in its own right when conducted by trained interviewers who can put respondents at their ease and listen to them sympathetically. E. Noelle-Neumann (1976) has pointed out the importance of proper questionnaire construction for motivating the respondent to participate actively in the interview and to make the effort to give accurate data. Some questionnaires may be boring or tedious, and attention should be given in the design of questionnaires to creating an interesting and enjoyable experience for the respondents. In particular, the researcher's desire to get extra data fairly cheaply should not be allowed to add so much to a questionnaire that it puts off respondents and reduces their willingness to participate fully in the research enterprise. If the task is not to be perceived as a burdensome one, attention must be paid to the needs of the respondent as well as to those of the researcher.
Since it is the task that defines the relationship between the actors in the research interview, the notion of respondent burden is most naturally related to variations in the nature of this task. As the task becomes more difficult, ceteris paribus, the burden on the respondent increases. On the other hand, since the task is defined as obtaining information from the respondent, and the demand characteristics of the situation (Orne, 1969) are such as to require the respondents to give accurate information if they are to be good respondents, more difficult tasks may be interpreted as more challenging and interesting and subjectively perceived as less burdensome. In discussing the variables that we tend to think of in connection with respondent burden, we should consider the conditions under which a particular type of task may be viewed as more or less burdensome. "Burdensomeness" is not an objective characteristic of the task, but is the product of an interaction between the nature of the task and the way in which it is perceived by the respondent.

In considering variables related to respondent burden, I shall divide the discussion into four main headings: 1) the length of the interview; 2) the amount of effort required of the respondent; 3) the amount of stress on the respondent; and 4) the frequency with which the respondent is interviewed.

1. Length. Interviews and questionnaires differ greatly in their length as measured by the number of questions, the number of words per question, the number of pages or other measures of bulk, and the total length of time to complete the interview. Most investigators think of total length of time to complete the interview or questionnaire as the measure of length. It is typically this figure that is told to respondents when their cooperation is solicited. Interviews may run from a few minutes to three hours or more. While I know of no data on the distribution of the length of interviews in the survey field, my guess is that the mean is around one hour with a standard deviation of about fifteen minutes. The tail on the upper end is probably quite long. Of course, if one considers repeated interviews, the total length of time given by the respondent can be much greater. A current longitudinal study of medical care expenditures conducted for the National Center for Health Statistics requires more than ten hours of interview time per respondent, although the time is distributed over more than a year.

There is no simple relationship between the length of an individual interview and data quality. Within the range of forty-five minutes to one and one-half hours, there does not appear to be a clear effect either on response rates or on breakoffs, although systematic evidence on the matter is not easy to come by. Nor is there any belief that even substantially shorter interviews have a better completion rate. The experienced field workers I have spoken with believe that while length per se does not have much to do with completion rates, at least within these ranges, the longer the interview schedule, the more difficult it is to achieve a high completion rate; that is, length does have some relation to effort, and thus to costs, in getting a high completion rate.

Bradburn and Mason (1964) failed to find any position effects on sections of a fairly long (average 1.25 hour) interview schedule. When sections of the schedule were systematically rotated, those that appeared near the end of the interview did not show any effects of respondent fatigue or less willingness to cooperate. There is some evidence from Noelle-Neumann (1976), however, that the use of filtered questions may affect responses to single items. She provides some data indicating that following up a particular response with another question, e.g., "If yes: In what way?", may reduce the number of people who will say "yes" or give any opinion at all. It is not clear whether this effect is produced by the respondents' desire to avoid the follow-up question (and thus reduce the burden of answering) or by the interviewer's cluing respondents that answering a particular way will prolong the interview.

There is a general feeling that telephone interviewing imposes greater time limitations on the interview than does personal contact. The evidence for this belief, however, is not great. At a 1976 Airlie House conference (NCHSR, 1977), the consensus of the participants was that telephone interviews up to an average of one hour were quite possible without adverse effects on data quality. I am not sure that there is much experience with longer telephone interviews, but it is not immediately clear that longer ones are out of the question. It does seem likely that longer telephone interviews will need careful scheduling with respondents so that they are not inconvenienced by having their telephones tied up for a long time. Here again, a longer interview that was perceived by respondents as very important could very well result in a high cooperation rate, although I expect that it would take a higher level of justification to get respondent cooperation.
Intuitively, one would expect that the strongest relationship between length (at least apparent length) and response rate would be with self-administered questionnaires. I have heard several researchers maintain with great conviction that it is extremely important that self-administered questionnaires not only be short, but also appear to be short. Operationally, this advice leads to reducing the number of pages in the questionnaire to an absolute minimum, even at the cost of crowding more onto a single page. Two studies (Champion and Sear, 1969; Sheth and Roscoe, 1975), however, provide evidence that there is no significant difference in response rate between short and long questionnaires, at least within the range of three to nine pages. Dillman (1978) reports that mail questionnaires greater than twelve pages get much lower response rates.

But we should also consider the other side of the coin. Ordinarily, when we are in a position to afford longer interviews, it is because the study has been judged of sufficient importance to justify a bigger budget. Whatever it is about the study that contributed to the judgment of importance may also work on the researchers and interviewers to increase their efforts to insure high completion rates, and to influence the respondents so that they are willing to make a greater effort to contribute to the study. If length is correlated with importance, and importance is correlated with higher completion rates, we might even find a mild positive correlation between length and response rate.
Even though length did not affect completion rates on a particular study, it might have an effect on follow-up studies with the same respondents. It is difficult to come up with any good evidence one way or the other, since most investigators who are planning longitudinal studies worry about the follow-up rates and adjust their data collection aspirations with such rates in mind. There is at least anecdotal evidence from one NORC study in which the original interviews were up to three hours in length. A ten-year follow-up study was conducted with a subsample of the respondents. The length of the initial interview was still remembered by many respondents and may have played a role in some refusals for the follow-up study.

On the other hand, the Consumer Expenditure Survey, which is a very long questionnaire with repeated interviews, has a high completion rate (90 percent), and few respondents complain about the survey when reinterviewed. Respondents may be interviewed for two or more hours, five times a year. The survey covers detailed expenditures, including items that are sometimes unreasonable for particular respondents (e.g., asking poor or elderly respondents about purchases of airplanes or snowmobiles), and asks respondents to refer to records and to prepare themselves for the follow-up interviews. The survey, however, is used to form the basis of the cost of living index, which has significant income implications for large numbers of people. Both interviewers and respondents may consciously or unconsciously use this information to justify the expenditure of so much effort.

In sum, there is no clear evidence that interview length is in itself an important contributor to response rate, although it may have some impact on item response.

2. Respondent effort. As with length, the amount of effort required of the respondent in answering questions in a survey differs considerably. Respondents may be asked their opinion on matters with which they are familiar and to which they can respond without much thought. On the other hand, they may be asked for complicated and detailed information about finances (e.g., the Housing Allowance Supply Experiment) or expenditures (e.g., the Consumer Expenditure Study, the Medical Care Cost Study). They may be asked to assemble records in their own homes, or they may be asked to come into a central testing site to take tests or submit to a medical examination. To some extent differences in effort are correlated with length, but it is possible to have long interviews that do not require any greater effort on the part of the respondent than a short interview, other than that entailed by the greater number of questions themselves. Since it takes time to assemble records or to go to a central examining location, it is almost always the case that studies requiring great effort on the part of the respondent will also take more time. I know of no studies that try to sort out the effects of total time from those of effort.

The use of records has complicated effects on the level and accuracy of reporting (see Sudman and Bradburn, 1974, Chapter 3) and, properly used, can improve overall data quality. As with the case of length, the request to use records may indicate the greater importance attached to the study and thus emphasize the demand characteristics for "good" respondents to cooperate and provide the most accurate data they can. I do not know of any evidence that asking the respondent to go to greater trouble in the form of consulting his records leads to a lower completion rate.
Effort, as measured by coming into a central examining station, is also an important variable. High completion rates have been obtained even under conditions requiring respondents to make considerable expenditures of time and effort to come to an examining location, as for example with the National Health Examination Survey (HES), which requires respondents to come to a mobile testing station and undergo an extensive physical examination. Response rates on this study were high (between 87 and 96 percent) on the first three cycles.

In 1971, however, when the HES was expanded to include responsibility for measuring and monitoring the nutritional status of the U.S. population, the response rate dropped to around 64 percent (NCHS, 1975). It is not clear what factors were responsible for this drop. One hypothesis is that the addition of the nutritional portion of the survey lowered the appeal of the study to the respondents, either because the study was now longer and/or because nutrition is deemed less important. The effect of the change in the HES could partially be offset by respondent remuneration, but it may be that some threshold of effort was reached that began to have a serious effect on the response rate.

From the fragmentary evidence, I would conclude that when greater effort is required of the respondent, particularly when it means going to some special location for testing, response rates may suffer somewhat, and greater efforts on the part of the researchers will be needed to insure high completion rates. On the other hand, data quality may increase. Again, as with length, if respondents perceive the study as particularly important, they may be willing to expend greater effort and perform the role of a good respondent.

3. Respondent stress. By respondent stress, I mean the amount of personal discomfort that a respondent undergoes during the course of the interview. Such discomfort may arise from the content of the questions, such as might result from embarrassing or ego-threatening questions or from those that might provoke emotionally laden responses, or from other activities, such as mental or physical tests, that might be part of the data collection operation. Other things being equal, one might expect that greater respondent stress would be associated with lower completion rates and/or lower validity of data.

Respondent stress as a variable is more difficult to deal with than variables such as length or effort. While length and effort are fairly constant across all respondents, stress probably involves much more individual variance. Although we think of some topics as more threatening or sensitive than others, e.g., illegal behavior, sex, drug use, there still seem to be substantial individual differences in sensitivity to topics. Thus the stratagems for coping with differences in respondent stress may have to depend on finer tuning or on adjustments based on the data from the individual respondent, rather than on some more general procedure that would apply to all respondents.

The relationship between stress and completion rates is difficult to determine. It is difficult to know how much respondents are warned in advance about the potentially stressful nature of the material or, even when there are efforts to explain the nature of the interview more fully, how much the respondent actually takes in of what he is being told. With the increased concern for a workable definition of informed consent, some experimental work has been conducted to determine empirically the effects of differing levels of initial explanation about the content of interviews. Since most refusals occur before the respondents know what the survey is about, the problem seems to be more one of "informed refusal" than informed consent (see Singer, 1978).

Johnson and Delamater (1976) report on a study undertaken for the Commission on Obscenity and Pornography and on several experiments they conducted on response effects in sex surveys. They conclude that there is some differential effect on completion rates within demographic groups, but that cooperation is not obviously more problematic in sex surveys than in surveys on other topics. Even if it were true that the sensitivity of topics had little effect on response rates, either for the interview as a whole or for specific questions, it still might be the case that respondents evade stressful questions by underreporting. Underreporting may be particularly likely for topics that have many subsidiary questions filtered through a general question of the type "Have you ever done X?" If respondents deny ever having done X, they then avoid a whole series of questions about frequency, amount, dates, etc. In a recent methodological experiment, we found evidence suggesting that such evasion does occur among respondents who find particular topics anxiety provoking (Bradburn, Sudman, Blair and Stocking, 1978). There are more ways to evade a question than outright refusal. Even with complete anonymity, as with the random response technique, we know that there is still substantial underreporting of threatening events (Locander, Sudman, and Bradburn, 1976).
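For readers unfamiliar with the random response technique mentioned above, the sketch below shows the logic of a Warner-type randomized response estimator, one standard form of the technique; the coin bias and the counts are illustrative assumptions, and this is not necessarily the exact variant used by Locander, Sudman, and Bradburn.

    # Minimal sketch of a Warner-type randomized response estimator.
    # Each respondent privately flips a biased coin: with probability p they
    # answer the sensitive question ("Have you done X?") truthfully, and with
    # probability 1 - p they answer the opposite question ("Have you NOT done X?").
    # The interviewer records only "yes" or "no" and never learns which question
    # was answered, yet the population proportion can still be estimated.

    def warner_estimate(n_yes: int, n_total: int, p: float) -> float:
        """Estimate the true proportion pi of people who have done X.

        Under the design, P(yes) = p*pi + (1 - p)*(1 - pi), so
        pi = (P(yes) - (1 - p)) / (2p - 1), provided p != 0.5.
        """
        lam = n_yes / n_total  # observed proportion of "yes" answers
        return (lam - (1.0 - p)) / (2.0 * p - 1.0)

    # Illustrative numbers only, not figures from any study cited in the text:
    # 1,000 respondents, coin bias p = 0.7, and 460 "yes" answers.
    print(round(warner_estimate(460, 1000, p=0.7), 3))  # prints 0.4

The design buys privacy at a statistical price: as p approaches one-half, the denominator 2p - 1 shrinks and the estimate becomes much noisier. The point of the passage above is that even this formal guarantee of anonymity does not eliminate underreporting of threatening events.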
4. Frequency of being interviewed. I have already touched on the problem of repeated interviews under the discussion of length. Clearly, repeated interviews as part of a single longitudinal study pose problems of respondent burden. The difficulties in maintaining high completion rates in longitudinal studies are well known. Many of the difficulties come from locational problems with a mobile population, and some come from maintaining cooperation. On the whole, however, the fact that respondents have previously responded to an interview is the best predictor of subsequent participation, given that they can be located. After several waves of interviewing, one has probably gotten a sample of cooperative respondents who will continue to participate. By that time they know what they are in for, even if the exact number of waves was not known in the beginning.

There is another source of burden to some respondents about which more should be known. I mean here the problem of being repeatedly drawn into samples for different and independent studies. As long as one is thinking about national probability samples, the probability of a household falling into two independently drawn samples is small. Survey research organizations, such as NORC, make sure that the same segment of households is not drawn more than once in five years. Even with the overlap of the major PSU's, overburdening the same households with interviews does not yet seem to be a problem.
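The claim that double selection is rare for national household samples, but not for the small special populations taken up below, is easy to check with a back-of-the-envelope calculation; the population and sample sizes in the sketch that follows are illustrative assumptions, not figures from the paper.

    # Rough illustration of the overlap argument. All population and sample
    # sizes are illustrative assumptions, not figures from the paper.

    def expected_overlap(population: int, sample_a: int, sample_b: int) -> float:
        """Expected number of units that fall into both of two independent
        simple random samples drawn from the same population."""
        p_both = (sample_a / population) * (sample_b / population)
        return population * p_both

    # Two independent national surveys of 1,500 households each, drawn from
    # roughly 75 million households: about 0.03 households are expected to be
    # selected twice, i.e., usually nobody is.
    print(expected_overlap(75_000_000, 1_500, 1_500))

    # Two independent surveys of 500 physicians each, drawn from a specialty
    # of 2,000: about 125 physicians are expected to be asked twice.
    print(expected_overlap(2_000, 500, 500))

The same arithmetic drives the point about specialties, mayors, and department chairmen later in this section: as the population shrinks toward the sample size, every independent study begins to look like a census.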
A recent study by the Survey Research Center and the Bureau of the Census (Goldfield et al., 1977) asked about the frequency of receiving questionnaires in the mail, telephone interviews, or requests for personal interviews. Data from this study indicate that about half of the respondents (54 percent) reported survey contacts of some kind in the last four or five years.

There are, however, classes of respondents who are more frequently selected in samples and for whom the burden may be perceived as high. When the population is relatively small, as, for example, a single professional group such as physicians (or, more particularly, the specialties) or the incumbents of a particular position, such as mayors of cities or members of Congress, the probability of falling into samples for independent studies becomes fairly high. When the population is very small, as with chairmen of psychology departments, the temptation to do a census is overwhelming, and thus one becomes a respondent in every study done on that population.

In the medical area we appear to be reaching a point at which guidelines need to be developed about the number of surveys a particular respondent should be asked to participate in over a given period of time. High response rates for physicians can still be obtained even when the length and amount of effort required are high, as for example in the National Ambulatory Medical Care Survey (NAMCS), which requires physicians to fill out a questionnaire for each of their outpatients for a week. With considerable effort and support by the relevant professional societies, response rates averaging 80 to 85 percent are maintained each week. One of the elements in maintaining that rate is the promise to the physicians that they will not be asked to be respondents in the NAMCS study more than once in two years. As surveys of medical care practitioners become part of a routine monitoring of the medical care system, procedures to protect respondents against overinterviewing will have to be worked out. Otherwise, we run the risk of a major revolt from segments of the population that will undermine the entire data collection process.

Conclusion. I have tried to outline some of the issues with regard to respondent burden that are of importance in enhancing the quality of data collected in surveys. The major theme throughout is that respondents seem to be willing to accept high levels of burden if they are convinced that the data are important. In general, it seems to me that the problem is not whether there is a burden level which respondents will not tolerate, but rather how to relate the level of burden to the importance of the data. To a considerable extent, this is controlled by the amount of funding available, since greater respondent burden usually requires more extensive efforts to insure high response rates and good quality data.

One problem that is not easily related to budgetary control is the increasing use of surveys among specialized populations. In some respects these surveys may have high importance, but they become burdensome just because the population is so small and the probability of multiple interviews is high. Given the decentralized system of funding and conducting research, it is difficult to see how the overworking of some classes of respondents is to be prevented. But I think we must give some serious attention to the matter, or it may be determined for us by others. The recent experience with the attempt to cut down the amount of data supplied by citizens does not suggest a welcome precedent.
References

Bradburn, N. M., and Mason, William. The effect of question order on responses. Journal of Marketing Research, Vol. 1, 1964, pp. 57-61.

Bradburn, N. M., Sudman, S., Blair, E., and Stocking, C. Question threat and response bias. Public Opinion Quarterly, Vol. 42, 1978, pp. 221-234.

Champion, D. J., and Sear, A. M. Questionnaire response rate: a methodological analysis. Social Forces, Vol. 47, No. 3, March 1969, pp. 335-339.

Dillman, Don A. Mail and Telephone Surveys. New York: Wiley Interscience, 1978.

Johnson, W. T., and Delamater, J. D. Response effects in sex surveys. Public Opinion Quarterly, Vol. 40, 1976, pp. 165-181.

Locander, W., Sudman, S., and Bradburn, N. M. An investigation of interview method, threat and response distortion. Journal of the American Statistical Association, Vol. 71, No. 354, 1976, pp. 269-275.

National Center for Health Services Research. Advances in Health Survey Methods. DHEW Publication No. (HRA) 77-3154, 1977.

National Center for Health Statistics. A study of the effect of remuneration upon response in the Health and Nutrition Examination Survey. Vital and Health Statistics, Series 2, No. 76, October 1975.

Noelle-Neumann, E. Die Empfindlichkeit demoskopischer Messinstrumente. In Allensbacher Jahrbuch der Demoskopie, 1976. Wien: Verlag Fritz Molden, 1976.

Orne, M. Demand characteristics and the concept of quasi-controls. In R. Rosenthal and R. L. Rosnow (eds.), Artifact in Behavioral Research. New York: Academic Press, 1969, pp. 143-179.

Sheth, J., and Roscoe, A. M. Impact of questionnaire length, follow-up methods, and geographical location on response rate to a mail survey. Journal of Applied Psychology, Vol. 60, No. 2, 1975, pp. 252-254.

Singer, Eleanor. Informed consent. American Sociological Review, Vol. 43, No. 2, April 1978, pp. 144-161.

Sudman, S., and Bradburn, N. M. Response Effects in Surveys: A Review and Synthesis. Chicago: Aldine Publishing Co., 1974.