PREFACE
2002 NSAF Nonresponse Analysis is the seventh report in a series describing the methodology of
the 2002 National Survey of America’s Families (NSAF). The NSAF is part of the Assessing the
New Federalism project at the Urban Institute, conducted in partnership with Child Trends. Data
collection for the NSAF was conducted by Westat.
The NSAF is a major household survey focusing on the economic, health, and social
characteristics of children, adults under the age of 65, and their families. During the third round
of the survey in 2002, interviews were conducted with over 40,000 families, yielding information
on over 100,000 people. The NSAF sample is representative of the nation as a whole and of 13
states, and therefore has an unprecedented ability to measure differences between states.
About the Methodology Series
This series of reports has been developed to provide readers with a detailed description of the
methods employed to conduct the 2002 NSAF. The 2002 series of reports includes the following:
No. 1: An overview of the NSAF sample design, data collection techniques, and estimation methods
No. 2: A detailed description of the NSAF sample design for both telephone and in-person interviews
No. 3: Methods employed to produce estimation weights and the procedures used to make state and national estimates for Snapshots of America's Families
No. 4: Methods used to compute, and results of computing, sampling errors
No. 5: Processes used to complete the in-person component of the NSAF
No. 6: Collection of NSAF papers
No. 7: Studies conducted to understand the reasons for nonresponse and the impact of missing data
No. 8: Response rates obtained (taking the estimation weights into account) and methods used to compute these rates
No. 9: Methods employed to complete the telephone component of the NSAF
No. 10: Data editing procedures and imputation techniques for missing variables
No. 11: User's guide for public use micro data
No. 12: 2002 NSAF questionnaire
About This Report
Report No. 7 describes analysis conducted to gain insight into the characteristics of
nonrespondents to the 2002 NSAF and to assess the impact of nonresponse on NSAF
statistics. The analysis includes a detailed breakdown of call attempts to determine how
effectively the sample was worked, and an investigation into the level of effort in terms of
both refusal conversion and call attempts. In addition, a sociodemographic comparison of
respondents and nonrespondents was done using census block group information obtained for
both groups.
For More Information
For more information about the National Survey of America’s Families, contact:
Assessing the New Federalism
Urban Institute
2100 M Street, NW
Washington, DC 20037
E-mail: nsaf@ui.urban.org
Web site: http://anf.urban.org/nsaf
Tim Triplett
CONTENTS

Chapter
1. INTRODUCTION
   Demographic Comparisons of Hard-to-Interview Respondents
   Recent Trends and Developments
2. NSAF RESPONSE RATE OVERVIEW
3. EVALUATION OF THE CALL ATTEMPTS EFFORT
4. EVALUATION OF THE EFFORTS TO CONVERT REFUSALS
5. SOCIODEMOGRAPHIC STUDY OF SURVEY NONRESPONDENTS
   Comparison of Screener Respondents versus Nonrespondents
   Comparison of Extended Interview Respondents versus Nonrespondents
   Conclusions
6. SUMMARY
REFERENCES

Tables
2-1 National Response Rates, by Round
3-1 Call Attempt Regression Output
3-2 Across-Round Call Attempt Comparisons
4-1 Initial Refusal Regression Output
5-1 Household Screener Response, Refusal, and Other Nonresponse Rates by Neighborhood Characteristics
5-2 Household Extended Response, Refusal, and Other Nonresponse Rates by Neighborhood Characteristics
5-3 Linear Regression

Charts
3A Call Attempt Efforts and Adult Response Rates
3B Screener: Average Number of Calls to Complete
3C Extended: Average Number of Calls to Complete
4A Adult Response Rates Before and After Refusal Conversion Attempts
1. INTRODUCTION
This report analyzes the characteristics of nonrespondents to the National Survey of America's
Families (NSAF). Conducted in 1997, 1999, and 2002, the NSAF provides a comprehensive look
at the well-being of children and nonelderly adults, and it reveals sometimes striking differences
among the 13 states studied in depth. The survey provides quantitative measures of child, adult,
and family well-being in America, with an emphasis on persons in low-income families. To
reach respondents both with and without household landline telephones, the NSAF used a
random-digit dial (RDD) sample of telephone households and an area sample of nontelephone
households. While numerous survey strategies were employed to reduce the number of
nonrespondents, there was a steady decline in response rates across the three rounds of
NSAF data collection.
NSAF data collection. As a result, this paper assesses the impact of nonresponse on the NSAF
statistics, particularly the 2002 estimates. The next section of this report provides more
information on the NSAF response rates, while this introduction reviews the nonresponse
literature.
Response rates in many respects measure people’s willingness to participate in a study. The most
important factor in getting a good response rate is making additional contact attempts
(Newcomer and Triplett 2004). Many organizations use a variety of contact attempt and refusal
conversion strategies for each of their studies. There is no consensus, however, among survey
organizations on the maximum number of contact attempts or the amount of refusal conversion
needed.
Efforts to reduce the number of nonrespondents in a study are usually related to two factors:
budget and time. Given a limited budget, making a large number of contact attempts or
attempting to convert refusals can prove economically unfeasible. Similarly, a study that must be
completed in a short period may not allow for enough time to make a large number of attempts.
Numerous studies have investigated the issue of how much additional effort an organization
should make in attempting to reduce nonresponse. They support Kish’s (1965) dicta that new
responses must be numerous enough to justify the effort and that decreasing the proportion of
nonresponse is important only if it also reduces its effect. The first assumption is an issue of cost
and is addressed in this paper by looking at the costs of calling back telephone numbers versus
using additional RDD telephone numbers to obtain the required number of completed interviews.
The second assumption is an issue of whether the respondents reached after multiple call
attempts or refusal conversion differ from respondents who never refused to cooperate. In sections 3
and 4, it will become evident that those reached after refusal conversion and multiple call
attempts do indeed differ.
Demographic Characteristics of Hard-to-Interview Respondents
Research by James Massey and colleagues (1981) found that respondents who initially refuse
but later in the study complete an interview are disproportionately persons 65 years or older. This
study also found that male respondents were more difficult to reach than female respondents.
William C. Dunkelberg and George S. Day (1973) found that the first attempt to contact
respondents yields only 25 to 30 percent of the final sample. They also observed a rapid decline
in completing interviews after three attempts. Dunkelberg and Day (1973), however, found that
over 20 percent of interviews required more than four attempts. They also found that the
demographic characteristics of respondents reached on the first few contacts differed from those
found at home later in the interviewing process. Specifically, they note that younger adults,
higher-income adults, and respondents from larger cities require more contact attempts. The
findings from their research are consistent with the findings in this report. What makes that
somewhat surprising is that Dunkelberg and Day’s (1973) research was based on “personal”
interview data from the 1967 and 1968 “Survey of Consumer Finances” conducted by the
University of Michigan—more than 35 years ago.
Research on telephone studies has consistently shown that with more call attempts the final
sample becomes younger and includes higher proportions of male and black respondents (Blair
and O’Rourke 1986; Traugott 1987; Merkle 1993; Shaiko 1991; Triplett 2002). The data
analyzed in this report support these findings. Some of the call attempt research has found that
highly educated respondents require more call attempts, but this finding is not consistent with the
results from the analysis done in this report. A report from the 1986 U.S. National Crime Survey
did not find differences in gender and race between the initial households and the follow-up
households in which there were unanswered telephone numbers (Sebold 1988). The fact that the
follow-up group consisted of unanswered telephone numbers may explain why no differences by
gender and race were found.
The U.S. National Crime Survey, however, did find that the initial interviews contained
proportionally more respondents 65 years or older.
The NSAF was conducted in 1997, 1999, and 2002, while many of the studies reviewed for this
research were from telephone studies conducted in the 1970s and 1980s. However, results
regarding the number of calls needed to complete an interview, difficulties reaching certain
demographic groups, and percentage of the sample completed in the first three calls have
maintained similar patterns. These results are quite similar to Groves and Kahn’s (1979) findings
in their comparison of telephone versus personal interviews. Over time, however, the number of
call attempts needed to reach 18- to 24-year-old respondents has increased significantly;
nonwhite respondents are also becoming harder to contact. In addition, fewer interviews are now
completed in the first three call attempts. Another clear difference from telephone studies prior to
the 1990s is that the large number of telephone numbers that were never answered (categorized
as ring no-answer) has been replaced by an increase in telephone numbers that are coded as
having reached an answering machine. This paper does not examine this issue in detail, but the
results tend to support the hypothesis that answering machines have not had much effect on
telephone surveys. Basically, home recorders seem to have substituted for the ring no-answer
group. Since the residential status of most answering machines can often be determined, the
answering machine has improved our ability to estimate response rates. Several papers look more
closely at the effects of home recorders, raising some concerns over the effect home recorders
may have on the representativeness of general population samples (Oldendick 1993; Piazza
1993; Tuckel and O'Neil 1996; Link and Oldendick 1999).
It took approximately nine months to complete data collection for each round of the NSAF,
always starting in February. Therefore, there was little or no data collected in the months of
November, December, and January. Prior research has shown that response rate varies depending
on which month you are calling (Steeh et al. 1983). A more recent study, however, has found
that the time of the year the data collection takes place has little effect on the sample efficiency
and response rates (Losch et al. 2002). Results from previous call attempt research (Triplett
2002) have shown that while studies conducted in the winter and summer do not differ
significantly in terms of response rates, they do provide a few unique call attempt results (such as
reaching young people is easier in winter and reaching black respondents is easier in summer).
Making repeated call attempts is a necessary part of reducing nonresponse. In determining the
optimal number of call attempts, one should be aware of the work that Michael Weeks and James
T. Massey have done in determining the optimal times to contact sample households (Weeks et
al. 1980; Massey et al. 1996). This work has led to the development of optimal time scheduling
for telephone surveys (Weeks et al. 1987; Greenberg and Stokes 1990). To determine the optimal
number of call attempts for a general population random digit dialing study, a survey research
operation should be aware of the optimal times to reach the majority of its sample. It is from this
work that we get clear data showing the advantages of contacting respondents during the evening
on weekdays or on weekends rather than calling during the day on weekdays. By following the
recommendations of this research, an organization can begin to minimize the overall call
attempts needed on a study.
Refusal conversion increases response rates and usually changes the final demographic sample
distribution. Several studies, however, show issues related to data quality when comparing the
information provided by reluctant respondents (respondents that initially refused to participate in
a study) to respondents who never refused to cooperate (Cannell and Fowler 1963; Blair and
Chun 1992; Lavrakas et al. 1992; Triplett et al. 1996; Teitler et al. 2003). Therefore, in
determining the appropriate refusal conversion effort, an organization needs to consider more
than just the response rate.
While these studies do not argue for the elimination of refusal conversion, they do raise a
concern over the accuracy of reporting on difficult survey questions. Refusal conversion involves
waiting for a better time to call and try again (Groves and Couper 1998). One study argues that
the optimal number of days to wait is approximately one week for most refusals, and two weeks
for refusals where the actual respondent refused the interview (Triplett et al. 2001). The need to
wait before attempting refusal conversion makes it more difficult to convert refusals with people
who refuse late in the study or are hard to contact by phone.
Recent Trends and Developments
In the most recent decade, the telephone system in the United States has undergone rapid change,
making it more difficult for survey research firms using RDD sampling methods to identify
residential households (Roth et al. 2001; Triplett and Abi-Habib 2005). As the
telecommunication world changes it is important to readdress the efficiency of RDD sampling
(Tucker et al. 2002). Since 2002, new laws allow cellular phone subscribers to transfer their
landline phone numbers over to their cell phones. There is also a general increase in the number
of households that use only cellular phones. As a result, future telephone nonresponse analysis
will need to consider telecommunication coverage issues. Fortunately for the 2002 NSAF, the
estimated coverage rate (the percentage of households with a non-cellular, or landline, phone
number) for the RDD sample was higher than 95 percent.1 Thus, this report does
not address these coverage issues.
1. Federal Communications Commission, Telephone Subscribership in the United States: Data through November 2002. Released April 2003. http://www.fcc.gov/Bureaus/Common_Carrier/Reports/FCC-State_Link/IAD/subs1102.pdf.
Another recent trend that also held true for the NSAF (Brick et al. 2003) is a decline in survey
response and, in particular, telephone survey response rates (Steeh et al. 2001; Newcomer and
Triplett 2004; Curtin et al. 2005). The current trend is much worse than any other historic period
of falling response rates (Steeh 1981; Curtin et al. 2005). What is not clear is the effect of this
decline in response rates on our survey estimates. The main purpose of this report is to begin to
answer that question.
Understanding the effect of nonresponse on the NSAF will be a good indicator of nonresponse
issues in general. At the same time, the NSAF has a number of unique sample design features
that make it hard to generalize some of this study's findings. The next section provides an
overview of the NSAF. More information about the survey is available in other reports in the
NSAF methodology series, particularly Report No. 1, 2002 NSAF Survey Methods and Data
Reliability.
2. NSAF RESPONSE RATE OVERVIEW
The NSAF sample had two parts. The main sample consisted of a random-digit dial survey of
households with telephones and a supplementary area probability sample of households without
telephones. In both the RDD and area samples, interviews were conducted in two stages. First, a
short, five-minute screening interview was conducted to determine household eligibility. Once
the household was deemed eligible, a second, more detailed, 27- to 50-minute extended
interview was conducted to ask about the main survey items of interest. Since the NSAF 2002
area sample was quite small (n = 649) and had a relatively high response rate (79.4 percent), this
nonresponse analysis will focus on the RDD telephone component.
The telephone component of the 2002 NSAF used a list-assisted method to select the RDD
sample of telephone numbers, and it used computer-assisted interviewing (CATI) for screening
and interviewing. Featuring large sample sizes in each of 13 states with additional sample drawn
from the balance of the nation, the NSAF allowed researchers to produce both state and national
estimates. Telephone households that completed the screener survey were subsampled for
inclusion in the extended survey. The subsampling
rates varied depending on the state, the presence of children in the household, and responses to
income screening questions. All households with children that were classified as low-income
households were sampled, while higher-income households with children and all households
without children (but with someone under 65) were subsampled.
The initial 2002 RDD sample consisted of 556,651 telephone numbers. Of the 556,651 telephone
numbers called, 133,503 households were screened, and detailed extended telephone interviews
were conducted in 39,220 households with 43,157 persons under the age of 65. Prior rounds of
the NSAF yielded somewhat more extended interviews (46,798 in 1997 and 45,040 in 1999), a
difference that can in part be attributed to the lower third-round response rate.
response rates by round, with separate calculations for the screener interviews, extended adult
interviews, and extended child interviews. In addition, this table shows how the response rate
was much lower for the RDD telephone sample than the area probability sample.
Table 2-1. National Response Rates, by Round

Sample type           Round 1   Round 2   Round 3
All
  Screener            77.8      76.7      66.0
  Child extended      84.1      81.4      83.5
  Adult extended      76.9      77.5      78.7
RDD sample only
  Screener            77.4      76.3      65.5
  Child extended      83.4      80.5      83.0
  Adult extended      79.4      77.0      78.2
Area sample only
  Screener            87.0      89.2      81.6
  Child extended      95.7      96.2      95.0
  Adult extended      92.5      93.0      92.5
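The extended rates shown in Table 2-1 are conditional on completing the screener, so the overall rate for an extended interview is approximately the product of the screener rate and the conditional extended rate. The short Python sketch below illustrates this arithmetic for the RDD sample; treating the composite rate as a simple product is an assumption made here for illustration (Report No. 8 in this series documents the official rate calculations).

# Sketch: composite RDD response rates, assuming the extended rates in
# Table 2-1 are conditional on screener completion.
rdd = {
    "Round 1": {"screener": 77.4, "child": 83.4, "adult": 79.4},
    "Round 2": {"screener": 76.3, "child": 80.5, "adult": 77.0},
    "Round 3": {"screener": 65.5, "child": 83.0, "adult": 78.2},
}
for rnd, r in rdd.items():
    # Overall rate = screener rate x conditional extended rate.
    print(rnd,
          f"overall adult: {r['screener'] * r['adult'] / 100:.1f}%,",
          f"overall child: {r['screener'] * r['child'] / 100:.1f}%")

Under this convention, for example, the round 3 overall adult rate is roughly 65.5 x 78.2 / 100, or about 51 percent, which makes the third-round screener decline the dominant factor in overall nonresponse.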
The most notable thing from this table is the decline in the screener response rate for the third
round of data collection. This occurred despite additional efforts made during the third round to
improve response rates. The decline in the screener response rate in 2002 mirrored the declining
response rates that other similar telephone surveys were experiencing.
There were a number of small changes in the survey design between rounds that could have had
some impact on response rates. Only a change in the incentive strategy, however, had as its
primary goal an overall improvement in response rates. This change involved including a $2 bill
with the advance letter or offering the $2 to those telephone households for which an address
was not obtained. Unfortunately, the $2 advance incentive produced only a slight gain, and the
gain was not statistically significant. The strategy of sampling refusals in the third round, which
allowed only more experienced interviewers to work on refusal conversion, also had a positive
impact on response rates. However, this gain was also small and not statistically significant.
Finally, only one reverse address matching service was used to obtain mailing addresses for
sampled telephone numbers in rounds 1 and 2, whereas three services were used in round 3. This
increased the percentage of listed addresses to which advance letters could be sent. However,
sending a greater percentage of advance letters did not significantly affect response rates, in part
because of the lower reliability of the matches found through the alternative sources.
3. EVALUATION OF THE CALL ATTEMPTS EFFORT
Making additional call attempts is an integral part of any effort to reduce nonresponse. For
instance, if a maximum 15-call-attempt cutoff rule had been used on the NSAF, response rates in
all three rounds of data collection would have been less than 50 percent. Simply going from three
to five call attempts improved the response rate 12.1, 18.1, and 14.2 percentage points in 1997,
1999, and 2002, respectively. Going from 10 to 15 call attempts increased the response rate 16.8,
13.1, and 12.5 percentage points in 1997, 1999, and 2002, respectively. Interestingly, the
response rate after five calls does not correlate perfectly with the final response rate. For
example, after five call attempts, the 1999 study had by far the highest response rate, with 2002
and 1997 roughly equal. Of the first and third rounds, however, the 1997 survey had the higher
final response rate. This demonstrates that a good final response rate can be achieved on studies
that do not get off to a good start. Chart 3A shows, for each round of data collection,
how much response rates increase when additional call attempts are made.
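As an illustration of how such cutoff comparisons can be computed from call records, the sketch below tabulates the response rate that would have been achieved under different maximum-attempt rules. The call-history layout and field names are hypothetical, and the simple denominator here ignores the eligibility adjustments used in the official NSAF rates.

import pandas as pd

# Hypothetical call-history extract: one row per sampled case, with the
# attempt number on which the case was completed (NaN if never completed).
calls = pd.DataFrame({
    "completed":           [1, 1, 0, 1, 1, 0, 1, 0],
    "attempt_of_complete": [1, 4, None, 12, 22, None, 6, None],
})

def rate_at_cutoff(df, cutoff):
    """Response rate if calling had stopped after `cutoff` attempts."""
    completes = ((df["completed"] == 1) &
                 (df["attempt_of_complete"] <= cutoff)).sum()
    return 100 * completes / len(df)

for cutoff in (3, 5, 10, 15, 20):
    print(cutoff, "attempts:", round(rate_at_cutoff(calls, cutoff), 1), "%")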
There was no firm maximum call attempt rule for the NSAF. Many households received more
than 60 call attempts. In all three rounds, the data consistently show that after 20 call attempts,
additional calls were markedly less successful in improving the response rate. Even so, the final
response rates for the NSAF increased on average 4 percentage points from making more than 20
call attempts. In contrast, after 15 call attempts, an additional five call attempts increased the
NSAF response rate between 6 and 8 percentage points. From a purely cost-benefit perspective, a
cutoff point of 20 call attempts would seem reasonable. If a cutoff rule had been imposed,
however, additional sample would have had to be screened to reach the desired sample size.
Since achieving high response rates on the screener interview was a much more difficult task,
increasing call attempts beyond 20 calls to complete already screened households was the
appropriate strategy. In addition, the study had a lengthy, nine-month data collection period, so it
was possible not only to make more than 20 calls to hard-to-reach households but also to spread
out these call attempts over different times and days. Finally, this research shows that the
interviews completed after 20 call attempts differ from those completed with fewer than 20 calls,
which again supports the decision not to impose a maximum call rule.
What are the characteristics of people who are difficult to reach in telephone surveys? As with
most RDD samples, male respondents required more call attempts to complete an interview than
female respondents (Triplett 2002). Similar to the 10-year trend discovered in prior national call
attempt research, the 2002 NSAF indicates that women are becoming just as difficult to reach as
men (Triplett 2002). While respondents from the Northeast required more contacts on average to
complete, it was the state of Florida whose respondents required the most call attempts to
complete on the NSAF. Asian respondents required more call attempts, as did foreign-born
residents. For both the 1997 and 1999 NSAF study, education of the respondent was not a factor
in determining how easy or difficult a person was to reach. In the 2002 study, however, lesseducated respondents did require more call attempts to complete an interview. It also took more
call attempts to complete interviews with respondents who have never been married or who are
married but are currently separated or do not have a spouse in the household. As expected,
reaching respondents who had full-time jobs required more call attempts than interviewing those
not employed. Finally, interviewing respondents age 18 to 24 required significantly more call
attempts.
A somewhat surprising result is that other than finding it hard to contact respondents whose
family income was below 50 percent of the federal poverty level, income was not an issue in
terms of call attempts needed to complete the interview. Chart 3B shows the average number of
call attempts needed to complete the 2002 NSAF screener survey with the groups that were more
difficult to reach. Chart 3C shows the groups that were harder to reach for the 2002 NSAF
extended survey. For the most part, the same groups
that required more call attempts to complete the screener also required more call attempts to
complete the extended interview.
How does the number of call attempts alter the final demographic distribution? Completing
interviews in households that require more call attempts increased the number of interviews
completed in urban areas and households with children under 18. Additional call attempts did
not, however, have any effect on reaching households that were at or below the federal poverty
level. Also, increasing the number of call attempts increased the percentage of respondents
without a high school degree and increased the percentage of black respondents interviewed.
Completing interviews in households that were previously ring no answer increased the number
of interviews in rural areas, single-adult households, and households with no children under 18.
In addition, reducing the number of ring no answers remaining in the non-completed sample
increased the percentage of less-educated respondents and respondents age 55 and older
interviewed.
What about the cost of additional call attempts? By far the largest cost associated with telephone
data collection is paying for the interviewers’ and supervisors’ time. Another significant cost is
the phone charges. Both these cost factors are affected by how many telephone calls are made.
To minimize costs, one could ignore response rate and try to minimize the total call attempts
needed to complete the target number of interviews. Calling would stop when the probability of
the next call attempt yielding an interview begins to fall. For the NSAF, this did not begin to
occur until after the 20th call attempt. So from a purely cost-to-complete approach, one could
argue for giving up on completing an extended interview after the 20th call attempt. The
increase in nonresponse, had the NSAF design not permitted making more than 20 call attempts,
would have been approximately 5 percentage points in 1997 and 4 percentage points in 1999.
A better strategy for minimizing costs would be to allow call attempts beyond 20 for households
in which a resident of that household requested that someone call back. This was, in fact, the
strategy for the NSAF, where only a random sample of the numbers that were never answered
was called more than 20 times for survival estimation.2 Telephone numbers that yielded a live
contact, however, were never retired except in the event of multiple refusals or in situations in
which the number stopped working. In round 1, 19,175 extended interviews were completed (almost 20
percent of all completes) after 20 call attempts, while in round 3, 6,880 extended interviews were
completed (16 percent of all completes) after 20 call attempts. The decline in the percentage
needing more than 20 call attempts was a result of the implementation of a more efficient calling
strategy that spread out the call attempts over a long period (see Cunningham et al. 2005).
However, even 16 percent of the total completes occurring after 20 call attempts is quite large
and certainly argues for continuing to try to reach identified residential telephone households
well beyond the 20-call limit imposed on non-contacted sample. In fact, in 2002, 7.5 percent of
all completes needed more than 30 call attempts, and 3.5 percent of completes needed more than
40 call attempts. This is why no upper call attempt limit was set in order to minimize
nonresponse from family respondents who are difficult to reach.
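A minimal sketch of the survival-estimation idea described in footnote 2, using invented counts: a random subsample of never-answered numbers is called well past the cutoff, the residential share among subsample numbers that are eventually resolved is computed, and that share is applied to the full never-answered pool when building the response rate denominator. The actual NSAF estimator may have been more elaborate.

# Invented counts for illustration only.
never_answered_total = 10_000   # ring no-answer numbers at the cutoff
subsample_resolved   = 400      # subsampled numbers eventually resolved
resolved_residential = 120      # of those, confirmed residential

# Estimated share of never-answered numbers that are residential.
residential_rate = resolved_residential / subsample_resolved   # 0.30

# Residential households assumed to be hiding in the never-answered pool;
# these are added to the response rate denominator.
extra_denominator = residential_rate * never_answered_total
print(f"Treat ~{extra_denominator:.0f} never-answered numbers as residential")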
One problem with looking at the call attempts needed to reach different demographic groups is
that demographic characteristics are correlated with one another. For instance, it took fewer call
attempts to reach respondents with children and respondents who are married. Since there is a
strong correlation between marital status and having children, it is difficult to tell which is more
important in determining the average call attempts.
One solution is to estimate a multiple regression equation that looks at the joint effect
of all the key demographic variables. In this paper, an ordinary least squares regression was run
on the combined 1997 to 2002 NSAF data. The dependent variable was the total number of calls,
and the independent variables were the demographic variables immigration status, marital status,
census region, foreign-born status, age, gender, education, ethnicity, race, labor force status,
poverty level, hours worked, and number of family members in various age groups. Two other
independent variables were previous refusal status and the year of the survey (see table 3-1).
Means were substituted for missing data, and there were no missing data for the total number of
call attempts.
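A sketch of the model just described, using Python's statsmodels; the variable names follow Table 3-1, while the input file and its layout are assumptions. Note that the default OLS summary reports unstandardized coefficients; the standardized betas in Table 3-1 would come from refitting on z-scored variables.

import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of the combined 1997-2002 respondent file.
df = pd.read_csv("nsaf_1997_2002.csv")

predictors = ["UIMMSTAT", "UMARSTAT", "UREGION", "U_USBORN", "AGE", "SEX",
              "UBCPSED", "UBETH", "UBRACE", "U_LFSR", "U_SOCPOV", "U_USHRS",
              "UAGE1", "UAGE2", "UAGE3", "UAGE4", "UAGE5", "UAGE6",
              "UAGE7", "UAGE8", "REFUSAL", "YEAR"]

# Mean substitution for missing predictor values, as described in the text.
X = sm.add_constant(df[predictors].fillna(df[predictors].mean()))
y = df["TNC"]   # total number of calls; no missing data per the text

print(sm.OLS(y, X).fit().summary())   # B, std. error, t, and p, as in Table 3-1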
The regression results are shown in table 3-1. The regression analysis shows that race and
ethnicity were two of the most important factors in determining the number of call attempts, with
nonwhite and Hispanic respondents requiring more call attempts. In addition, the regression
analysis showed that more call attempts were needed to reach unmarried respondents and
employed respondents. Having family members over age 65 was also a factor in reducing the
number of call attempts. Less important factors that still predicted more call attempts include
being young, foreign born, working long hours, or having children age 6 to 17. Having young
children age 0 to 5 was not a significant factor in estimating the call attempts needed to complete
the extended interview, nor was the respondent's gender.

2. Survival estimation was a method employed to approximate what percentage of non-contacted telephone numbers were residential households. This estimate was important in calculating the final response rate.
Table 3-1. Call Attempt Regression Output
Dependent variable: total number of calls needed to complete the interview
Sample: all three rounds of the NSAF, 1997–2002 (R SQUARE = .161)

Independent variables                             B         Std. error   Beta      T          Sig T
(Constant)                                        335.56    26.72                  12.56      0.00
UIMMSTAT Immigration status                       -0.606    0.180        -0.029    -3.366     0.001
UMARSTAT New marital status                       0.224     0.017        0.044     12.860     0.000
UREGION Region                                    -0.281    0.026        -0.029    -10.892    0.000
U_USBORN Foreign- or U.S.-born                    2.631     0.300        0.076     8.761      0.000
AGE Age                                           -0.039    0.004        -0.042    -9.602     0.000
SEX Gender                                        -0.083    0.065        -0.004    -1.284     0.199
UBCPSED Education level, CPS                      0.277     0.050        0.016     5.574      0.000
UBETH Hispanic                                    -2.041    0.102        -0.061    -19.949    0.000
UBRACE Race (2 category)                          -1.924    0.078        -0.068    -24.561    0.000
U_LFSR Labor force status recode                  0.817     0.035        0.067     23.401     0.000
U_SOCPOV Social family income as % of poverty     0.144     0.028        0.017     5.039      0.000
U_USHRS Hours worked per week this year           0.027     0.002        0.030     11.027     0.000
UAGE1 No. family members age 0–5                  -0.001    0.046        0.000     -0.018     0.985
UAGE2 No. family members age 6–17                 0.144     0.030        0.015     4.856      0.000
UAGE3 No. family members age 18–24                -0.046    0.054        -0.003    -0.853     0.393
UAGE4 No. family members age 25–34                0.019     0.064        0.001     0.305      0.760
UAGE5 No. family members age 35–44                0.037     0.061        0.003     0.609      0.542
UAGE6 No. family members age 45–54                -0.092    0.062        -0.006    -1.475     0.140
UAGE7 No. family members age 55–64                -0.423    0.080        -0.020    -5.264     0.000
UAGE8 No. family members age 65+                  -1.002    0.094        -0.029    -10.643    0.000
REFUSAL                                           4.968     0.036        0.361     137.230    0.000
YEAR                                              -0.162    0.013        -0.032    -12.101    0.000
Some of the five-year NSAF call attempt trends (1997 to 2002) are shown in table 3-2. In the
five years between the first and third rounds of the NSAF, the average number of call attempts
needed to complete the screener interview almost doubled, while the number of calls needed to
complete the extended interview declined. The increase in call attempts needed to complete the
screener is consistent with the literature on what was happening to most RDD telephone studies
during the same period. It is less clear, however, why there was a decline in the average call
attempts needed to complete the NSAF extended interview. Some of the change could be
explained by the use of a more efficient calling strategy (Cunningham et al. 2005). But that
strategy involved how to reach households that do not answer the phone, which should have a
greater impact on the screener interview. More likely the decline was a combination of two
things: more people who are difficult to reach refused to complete the screener, and thus never
became eligible for the extended interview; and, since more screener call attempts were made,
more information about when to call back to complete an extended interview was available. As
shown in table 3-2, over 90 percent of households were screened by the fifth call attempt in
rounds 1 and 2, whereas barely more than 70 percent of households were screened after five call
attempts in round 3. On the bright side, there seems to have been some benefit to the additional
work on completing screener interviews, in that it helped inform when to make the calls to
complete the extended interviews.
Table 3-2. Across-Round Call Attempt Comparisons

                                     Round 1, 1997   Round 2, 1999   Round 3, 2002
Mean number of call attempts
  Screener                           2.61            2.21            4.95
  Extended                           10.91           11.12           9.58
Median number of call attempts
  Screener                           2.00            1.00            3.00
  Extended                           7.00            7.00            6.00
Response rates
  Screener                           77.4%           76.7%           66.0%
  Extended (adults)                  79.9%           77.5%           78.7%
  Extended (child)                   84.1%           81.4%           83.5%
% completed on first call attempt
  Screener                           49.9%           59.2%           25.3%
  Extended                           0.7%            0.0%            0.0%
% completed on third call attempt
  Screener                           79.3%           85.0%           55.6%
  Extended                           19.7%           24.6%           27.3%
% completed on fifth call attempt
  Screener                           90.0%           92.9%           71.3%
  Extended                           38.1%           41.3%           46.5%
% completed on tenth call attempt
  Screener                           97.3%           98.3%           88.0%
  Extended                           65.2%           65.3%           70.5%
4. EVALUATION OF THE EFFORTS TO CONVERT REFUSALS
Refusal conversion in telephone surveys is a standard practice at most survey organizations and
accounts for a significant percentage of the final sample. The rationale for refusal conversion is
to increase response rate and hence reliability. But, in achieving this goal, we must also be alert
to potential unintended effects on data quality. There have been analyses of how reluctant
responders differ from others on the distribution of their answers to substantive questions, as
well as how these two types of respondents compare demographically. An analysis of differences
between responder groups in a large study reported by Lavrakas and others (1992) found some
demographic differences. These lines of research, however, do not address the question of
whether reluctant respondents may have other response behaviors that bear on the quality of the
data obtained from them.
Forty years ago, Cannell and Fowler (1963) found that reluctant respondents provided less
accurate data. They attributed this effect mainly to lower respondent motivation. Citing this
result some years later, Bradburn (1984) states the issue more generally, suggesting a possible
effect of interviewer persistence on response behaviors. He asserted, “There are... a number of
people who end up responding because they have given up trying to fend off the
interviewer...[and]…go through the interview quickly—in other words do it but don’t work
hard.” Of course, it may also be that these respondents who are reluctant to participate also
simply have less interest in the survey topic (Groves et al. 2004). While it would be difficult to
disentangle these possible effects, both are likely to be in the direction of decreased effort on the
part of respondents in answering questions.
The type of question—for example, a simple yes-no item versus an open-ended question—may
affect the amount of cognitive effort required. Effort may also vary by recall task, such as a
question that asks about a simple attribute such as the respondent’s age versus asking for the
respondent’s detailed medical history. In addition to these factors, effort may be affected simply
by how motivated the respondent is to provide an answer. This reduced effort may be stated in
terms of cognitive strategies respondents use. One cognitive strategy that a respondent may use
is to provide the minimum response that will satisfy the interviewer and allow the interview to
proceed, with the hope of ending it as quickly as possible. Krosnick and Alwin (1987) have
termed this general behavior for minimizing cognitive effort “satisficing.” In a survey interview,
this could result in such respondent behaviors as increased item refusals or “don’t know”
responses, more primacy and recency effects in selecting from a list of response categories, and
reduced completeness of answers to open-ended questions.
Fortunately, the concern that respondents who initially refused the NSAF may have provided less
accurate data applies less to the 2002 study than to the prior two rounds. As seen in chart 4A,
the overall percentage of completed interviews that were the result of converted refusals declined
in 2002. This may seem odd given that the number of people refusing to complete the NSAF was
much higher in 2002, but it can at least in part be explained by the change in how monetary
incentives were used. In 2002, the majority of households were sent a $2 bill in advance, instead
of the prior-round strategy of sending or offering $5 only to people who refused the screener. So
the expectation was that the advance payment would reduce refusals and that those who did
refuse would be more difficult to convert.
As it turned out, providing $2 at the initial attempt to complete the screener worked about as well
on response rates as a $5 treatment at refusal conversion (Cantor et al. 2003). A benefit of the
incentive is that it drew attention to the advance materials: post-interview questions found that
people who received the $2 in advance were more likely to have read the advance letter
describing the project. It is unclear how much the incentive added to the perceived benefits of
participating in the survey, but by reducing the number of initial refusals, the prepaid incentive
used in round 3 likely increased the accuracy of respondents' answers.
How did refusal conversion alter the final NSAF demographic distribution? The extended refusal
conversion efforts in all three rounds increased the percentage of male respondents relative to
female respondents. While the refusal conversion of those who refused the screener had
practically no impact on the final distribution of race, the extended refusal conversion efforts did
increase the final number of black families interviewed in all three rounds. In the first two rounds
of the NSAF, extended refusal conversion efforts helped increase the percentage of interviews
with low-income families, though attempts to convert screener refusals had no impact on the
percentage of families found to be low income. In the third round, both the screener and extended
refusal conversion efforts had little impact on the final percentage of low-income families
interviewed. In all three rounds, the extended refusal conversions did increase the percentage of
interviews done with families that did not have any children under 18. This was important despite the
emphasis the survey placed on collecting information about children, since response rates were
lower among adult-only households.
As in the previous section on call attempts, there are expected correlations among demographic
groups. For instance, refusal conversion increased the percentage of male respondents, and male
respondents are more likely to be interviewed in households without children; the percentage of
interviews with families without children therefore also increased as a result of refusal
conversion. An ordinary least squares regression model similar to the model run in the previous
section was run for the combined 1997 to 2002 NSAF data. The dependent variable this time was
previous refusal status, and the independent variables were the demographic variables
immigration status, marital status, census region, foreign-born status, age, gender, education,
ethnicity, race, labor force status, poverty level, hours worked, and number of family members in
various age groups. Two other independent variables were the year of the survey and the total
number of call attempts (see table 4-1). Means were substituted for missing data, and there were
no missing data for the total number of call attempts.
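Table 4-1's dependent variable is a 0–3 refusal-status code. Below is a small sketch of how such a code might be constructed from per-case refusal flags; the flag names are hypothetical, and the model itself would then be fit exactly as in the previous section.

import numpy as np
import pandas as pd

# Hypothetical per-case refusal flags from the call-history file.
cases = pd.DataFrame({
    "refused_screener": [0, 1, 0, 1],
    "refused_extended": [0, 0, 1, 1],
})

# 0 = never refused, 1 = refused screener only,
# 2 = refused extended only, 3 = refused both.
cases["REFUSAL_STATUS"] = np.select(
    [(cases.refused_screener == 1) & (cases.refused_extended == 1),
     cases.refused_extended == 1,
     cases.refused_screener == 1],
    [3, 2, 1],
    default=0,
)
print(cases)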
The regression results are shown in table 4-1. The regression analysis shows that converted
refusals were less likely to be families that had children and more likely to be families that had
proportionately more older adults. The regression also indicates that refusal conversion actually
did not help in increasing the number of respondents who work or the respondents who reported
having never been married. In addition, refusal conversion did not effectively increase the
number of non-U.S. citizens interviewed. Refusal conversion was much more effective with
less-educated respondents. While refusal conversion was more effective among the non-Hispanic
population, refusal conversion did not have a differential impact for low-income or black
families.
Table 4-1. Initial Refusal Regression Output
Dependent variable: refusal status (0 = never refused, 1 = refused screener, 2 = refused extended, 3 = refused both)
Sample: all three rounds of the NSAF, 1997–2002 (R SQUARE = .146)

Independent variables                             B         Std. error   Beta      T          Sig T
(Constant)                                        38.67     1.96                   19.75      0.00
UIMMSTAT Immigration status                       -0.068    0.013        -0.045    -5.172     0.000
UMARSTAT New marital status                       -0.007    0.001        -0.020    -5.828     0.000
UREGION Region                                    -0.006    0.002        -0.008    -3.122     0.002
U_USBORN Foreign- or U.S.-born                    0.048     0.022        0.019     2.194      0.028
AGE Age                                           0.001     0.000        0.013     2.960      0.003
SEX Gender                                        -0.027    0.005        -0.016    -5.706     0.000
UBCPSED Education level, CPS                      -0.031    0.004        -0.025    -8.552     0.000
UBETH Hispanic                                    0.070     0.008        0.029     9.326      0.000
UBRACE Race (2 category)                          0.005     0.006        0.002     0.805      0.421
U_LFSR Labor force status recode                  -0.016    0.003        -0.018    -6.204     0.000
U_SOCPOV Social family income as % of poverty     0.002     0.002        0.003     1.037      0.300
U_USHRS Hours worked per week this year           0.000     0.000        -0.004    -1.536     0.125
UAGE1 No. family members age 0–5                  -0.003    0.003        -0.003    -0.909     0.364
UAGE2 No. family members age 6–17                 -0.004    0.002        -0.005    -1.703     0.089
UAGE3 No. family members age 18–24                0.018     0.004        0.014     4.617      0.000
UAGE4 No. family members age 25–34                0.023     0.005        0.022     4.974      0.000
UAGE5 No. family members age 35–44                0.047     0.004        0.047     10.523     0.000
UAGE6 No. family members age 45–54                0.066     0.005        0.059     14.567     0.000
UAGE7 No. family members age 55–64                0.098     0.006        0.064     16.714     0.000
UAGE8 No. family members age 65+                  0.108     0.007        0.043     15.633     0.000
YEAR                                              -0.019    0.001        -0.052    -19.690    0.000
TNC (total number of calls)                       0.027     0.000        0.368     137.230    0.000
5. SOCIODEMOGRAPHIC STUDY OF SURVEY NONRESPONDENTS
The impact of nonresponse on survey estimates depends on the number of nonrespondents and
the extent to which nonrespondents differ from respondents on survey items
of interest (Gray et al. 1996; Groves 1989; Keeter et al. 2000). Whether survey estimates are
unbiased depends on the assumption that nonrespondents are missing at random (MAR). When
nonrespondents differ from respondents, the MAR assumption does not hold and nonresponse
error may be present in some of the survey estimates.
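For a sample mean under a deterministic (fixed respondent/nonrespondent) view of nonresponse, this dependence can be written compactly. The decomposition below is the standard one (see, e.g., Groves 1989), not a formula taken from this report; W_nr denotes the nonrespondent share of the sample.

% Bias of the respondent mean under deterministic nonresponse:
% \bar{y}_r = respondent mean, \bar{y}_{nr} = nonrespondent mean,
% W_{nr} = n_{nr}/n = nonrespondent share of the sample.
\operatorname{bias}(\bar{y}_r) = \bar{y}_r - \bar{y} = W_{nr}\left(\bar{y}_r - \bar{y}_{nr}\right)

The bias is therefore small only when nonresponse is limited, when nonrespondents resemble respondents, or both, which is exactly what the block group comparisons in this section attempt to assess.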
Little empirical evidence exists to evaluate the MAR assumption because any information about
nonrespondents must come from administrative records or other auxiliary sources. This section
of the nonresponse report compares respondents and nonrespondents from the Urban Institute’s
2002 National Survey of America’s Families using auxiliary data derived from the 2000 U.S.
decennial census. The address information that was obtained for both NSAF respondents and
nonrespondents enabled merging of census block group level data. A census block is the smallest
geographical area for which census data are collected. A clustering of blocks forms the
geographically larger census block group and, at the next level, several block groups combine to
form a census tract. Although the Census Bureau’s goal is for each block group to contain 400
housing units, block groups generally vary between 250 and 550 housing units. For
confidentiality reasons, census data are publicly available at the block group level (U.S. Census
Bureau 2003).
Linking census block group data to the NSAF allows comparisons of block group characteristics
of respondents to those of nonrespondents. For this research, neighborhood classifications are
defined as being above or below the national averages on a set of characteristics. For example, a
neighborhood that has a Hispanic population above the national average of 12.5 percent is
characterized as a Hispanic neighborhood. Learning the extent to which nonrespondents’
neighborhood characteristics differ from those of respondents sheds light on the appropriateness
of the assumption that nonrespondents are missing at random and the presence of nonresponse
bias in the sample.
Census block groups for each NSAF household (respondent and nonrespondent households)
were obtained using reverse directory call back services. These services identify a household’s
address through its telephone number. Of the initial NSAF telephone sample of over a
half-million telephone numbers, 214,181 were identified as residential. Of these residential telephone
numbers, 75.7 percent (or 162,157) were matched to an address using reverse directory call back
services and subsequently matched to the census block group in which the address is found.
Future research may explore the remaining households by matching them to census data at the
zip code level,3 but the current research only examines nonresponse in the block group matched
NSAF households.
3. Using telephone exchange information, the most populous zip code within an exchange could be assigned to each household; one could then either use zip code–level census data or assign the most populous block group within that zip code.
Once matched to census data, NSAF respondent and nonrespondent households were
characterized as located in block groups above or below the national average on the indicators
below; a classification sketch follows the list.

• Racial composition
• Linguistic isolation, foreign-born, and U.S. citizenship
• Poverty rate
• Receipt of public assistance
• Education
• Urbanization indicator
• Employment
• Time spent living in neighborhood
• Home ownership
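Here is a minimal sketch of the classification step, assuming a household file already matched to block-group identifiers and a census extract of block-group percentages; all file names, column names, and thresholds other than the 12.5 percent Hispanic figure cited above are assumptions.

import pandas as pd

# Assumed inputs: NSAF cases matched to block groups via reverse directory
# lookup, and a 2000 census extract of block-group percentages.
households  = pd.read_csv("nsaf_matched.csv")     # case_id, block_group, ...
blockgroups = pd.read_csv("census_bg_2000.csv")   # block_group, pct_hispanic, ...

# National-average thresholds; 12.5% Hispanic per the text, other values
# would come from 2000 census tabulations.
national_avg = {"pct_hispanic": 12.5}

merged = households.merge(blockgroups, on="block_group", how="left")
for col, avg in national_avg.items():
    # A block group above the national average defines the neighborhood type.
    merged[col + "_above_avg"] = merged[col] > avg

print(merged[["case_id", "pct_hispanic_above_avg"]].head())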
Neighborhood characteristics are explored for respondents and nonrespondents at the screener
interview and extended interview levels. Nonrespondents at both levels are broken down into
refusals and other nonresponse categories. The latter category includes passive refusals and
households that did not respond because they were difficult to contact. Households selected for
the extended interview are a subset of those who completed the screener interview. Criteria for
selection into the extended interview sample include the presence of at least one householder
under the age of 65, the presence of at least one child under the age of 17, and family income
below 200 percent of the federal poverty level. Due to these sampling criteria, analysis of
nonresponse at the extended interview is less comparable to other national telephone surveys of
the general population than analysis of nonresponse at the screener level.
Comparison of Screener Respondents versus Nonrespondents
Most telephone nonresponse occurs before the first question of a survey is even asked. Similar to
other national telephone surveys, the NSAF experiences its highest rates of nonresponse when
attempting to complete the screener survey (Curtin et al. 2005). Table 5-1 shows the variation in
household screener completion, refusal, and other nonresponse rates by neighborhood type. The
above-average and below-average subcolumns indicate the rate of survey response, refusal, or
other nonresponse for each neighborhood characteristic. Given the large sample size, small
differences found in this table are significant.
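To see why, consider a two-proportion z-test on completion rates; the counts below are illustrative, chosen only to be of the same order as the table's n's.

from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic for independent samples."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)        # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# A 1-point gap in completion rates with roughly 80,000 cases per group:
z = two_prop_z(0.66, 80_000, 0.65, 80_000)
print(round(z, 1))   # about 4.2, well beyond the 1.96 cutoff for p < .05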
Table 5-1. Household Screener Response, Refusal, and Other Nonresponse Rates by Neighborhood Characteristics

(Screener completes: overall rate = 65.9%, n = 106,804. Screener refusals: overall rate = 26.6%, n = 43,134. Screener other nonresponse: overall rate = 7.5%, n = 12,219. "Above" and "below" give the rate among households in block groups above or below the national average on each characteristic.)

Neighborhood characteristic              Completes         Refusals          Other nonresponse
                                         Above    Below    Above    Below    Above    Below
Urban                                    64.4%    70.3%    27.3%    24.6%    8.4%     5.1%
Rural                                    70.3     64.6     24.9     27.1     4.9      8.3
White                                    65.9     65.8     27.3     24.9     6.9      9.3
Black                                    66.0     65.8     25.5     26.9     8.5      7.3
Asian                                    62.3     67.0     27.6     26.3     10.1     6.7
Other race                               66.2     65.8     24.5     26.9     9.3      7.3
Hispanic                                 64.5     66.2     25.8     26.8     9.7      7.0
Spanish language isolation               64.4     66.2     25.9     26.7     9.7      7.1
Asian language isolation                 62.6     66.7     27.3     26.4     10.2     6.9
Linguistically isolated: any language    62.6     66.8     26.7     26.6     10.7     6.6
Foreign born                             61.3     67.6     27.9     26.1     10.8     6.3
Noncitizen                               61.8     67.2     27.2     26.4     11.0     6.4
Same house in 1995                       66.2     65.4     26.9     26.2     6.9      8.4
Different house in 1995                  65.1     66.4     26.5     26.7     8.4      6.9
High school degree only                  67.7     64.5     25.9     27.2     6.5      8.3
College degree or higher                 62.9     68.0     28.7     25.1     8.4      6.9
Employed—all                             65.1     66.5     27.3     26.0     7.6      7.5
Employed—females                         65.4     66.2     27.0     26.3     7.6      7.5
Receives public assistance               66.7     65.8     24.4     26.8     8.9      7.4
Less than 50% of FPL                     66.5     65.6     25.1     27.2     8.4      7.2
Less than 100% of FPL                    67.2     65.3     24.8     27.4     8.1      7.3
Less than 200% of FPL                    68.0     64.8     24.4     27.8     7.6      7.48
Owner occupied                           66.6     64.7     27.0     25.9     6.5      9.4
Renter occupied                          64.3     66.7     26.2     26.8     9.5      6.5
Rural neighborhoods tend to have higher household screener completion rates than urban
neighborhoods, although this appears to be largely the result of a lower other nonresponse
rate. This suggests that people from rural areas are almost as likely to refuse a survey but have
higher response rates because they are much easier to contact. Over half of the 5.7 percentage point
increase in the rural completion rate can be accounted for by the 3.4 percentage point decrease in
the other nonresponse rate. Perhaps more telling is that the other nonresponse rate for urban
neighborhoods is almost twice as high as it is in rural neighborhoods. This implies that
nonresponse in urban neighborhoods is not only a problem of high refusal rates but also of
making contact with respondents.
There were no differences found in screener completion rates between white and black
neighborhoods. This is surprising given that NSAF weighting adjustments give greater weights
to black non-Hispanics. This anomaly could mean that increasing the share of black
non-Hispanics through post-stratification might create a slight overrepresentation of black
respondents living in nonblack neighborhoods, which could affect estimates if their answers
differ significantly from those of black respondents living in black neighborhoods. This potential
problem is similar to one raised in the 1997 NSAF nonresponse analysis (Groves and Wissoker
1999), where there was some evidence that by treating black respondents and nonrespondents the
same, the weights may have overstated the number of black low-income households. Though this
potential problem was investigated, the report did not find evidence that it in fact occurred. While
completion rates are roughly the same in black and white neighborhoods, the source of
nonresponse varied for these two groups. White neighborhoods had a higher refusal rate, while
black neighborhoods suffered from a higher other nonresponse rate. This indicates that potential
respondents in black neighborhoods are more difficult to reach, but once reached they can be
more cooperative.
Asian and Hispanic neighborhoods both had lower completion rates than white and black
neighborhoods. These differences may be explained by the low completion rates found in
neighborhoods with high levels of linguistically isolated households and by the low completion
rates of neighborhoods with above-average numbers of foreign-born and noncitizen residents. In
fact, the screener completion, refusal, and other nonresponse rates of Hispanic and Asian
neighborhoods are mirrored by the language isolation measures for those groups. The screener
completion rate in Spanish-language linguistically isolated neighborhoods is higher than in
Asian-language linguistically isolated neighborhoods, a difference that may be attributable to the
availability of a Spanish translation of the NSAF survey. The NSAF is available in English and Spanish
only. Presumably, households that are linguistically isolated in a language other than Spanish
would be unable to respond or even to refuse the survey request, explaining the higher other
nonresponse rate and lower refusal rate of Asian-language linguistically isolated neighborhoods.
Transient households, those that had moved to the current neighborhood in the past five years,
did not differ much in their willingness to complete the NSAF screener from more established
households. However, neighborhoods with above-average numbers of transient households had a
higher nonresponse rate than neighborhoods where people had lived for more than five years.
This finding is fairly intuitive, since transient respondents are often thought to be
more difficult to contact and interview. While this research supports the notion that nonresponse
increases in transient neighborhoods, the difference of 1.1 percentage points is modest.
Education tended to have a negative impact on response rates. The data show that screener
completion rates in less-educated neighborhoods where respondents tended to have only a high
school degree or equivalent are nearly 5 percentage points higher than in neighborhoods with
above-average numbers of college graduates. Highly educated respondents not only tend to
refuse the survey request at a higher rate than less-educated respondents but are also more
difficult to contact, as evidenced by their higher refusal and other nonresponse rates.
Neighborhoods with higher overall employment and higher female employment had only slightly
lower screener completion rates than the overall average for the survey. This is a positive
finding, given that previous NSAF nonresponse analysis found that employment negatively
impacted response rates (Black and Safir 2000; Triplett et al. 2002). Both the long field period
(nine months) and the selection of any adult to complete the screener probably offset the
difficulty that most national telephone surveys experience in attempting to reach employed
respondents.
Neighborhoods with more than the national average number of households receiving public
assistance, as well as poor neighborhoods, had slightly higher completion rates than the overall
average screener completion rate for the survey. This finding seems somewhat counterintuitive,
but there may be a topic interest effect among poorer households that increases their propensity
to respond to the NSAF. Research has demonstrated that respondent interest in the topic of a
survey can affect survey participation decisions (Groves et al. 2004).
Finally, neighborhoods that have a high concentration of homeowners tend to have higher
screener completion rates than high rental occupancy neighborhoods. This difference may be
explained by the higher other nonresponse rate in high rental occupancy neighborhoods and may
indicate that renters are more difficult to contact. This finding is consistent with the trend seen
among respondents in transient neighborhoods, where renting rather than owning a home is
likely to be more common.
Comparison of Extended Interview Respondents versus Nonrespondents
Households that completed an extended interview are a subset of the households that completed
the screener interview. Research on extended interview nonresponse in the NSAF is less
comparable to other household surveys because not all high-income and childless households
were asked to complete an extended interview. In addition, the respondent was often the adult
most knowledgeable about a child in the household. However, the findings in this section should
be useful for studies collecting data on children or low-income populations.
Table 5-2 shows variation in response rates by neighborhood characteristics at the extended
interview level. As before, the above- and below-average columns indicate the rates of response,
refusal, or other nonresponse to the extended interview by neighborhood type. The sample sizes
are still large, so again even the small differences found in this table are statistically significant.
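Before turning to the table, the following is a minimal sketch in Python of how such an above-/below-average panel can be tabulated. All column names and figures are invented for illustration; the actual NSAF files and the census block group match are described earlier in this chapter.

    import pandas as pd

    # One row per screened household: extended-interview disposition plus the
    # matched block group percentage for one characteristic (invented names).
    hh = pd.DataFrame({
        "disposition": ["complete", "refusal", "complete", "other", "complete"],
        "pct_owner_occupied": [85.0, 40.0, 70.0, 30.0, 90.0],
    })

    # Split block groups at the national average for the characteristic
    # (hypothetical figure), then tabulate the rates within each group.
    national_avg = 66.0
    hh["group"] = ["above average" if p > national_avg else "below average"
                   for p in hh["pct_owner_occupied"]]
    rates = (hh.groupby("group")["disposition"]
               .value_counts(normalize=True)
               .unstack() * 100)
    print(rates.round(1))  # completion, refusal, and other nonresponse rates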
Table 5-2. Household Extended Response, Refusal, and Other Nonresponse Rates by
Neighborhood Characteristics

                                  Extended Completes    Extended Refusals     Extended Other
                                  Overall Rate = 84.3%  Overall Rate = 10.4%  Nonresponse
                                  (n = 33,919)          (n = 4,179)           Overall Rate = 5.3%
                                                                              (n = 2,137)
Neighborhood                      Above      Below      Above      Below      Above      Below
characteristics                   average    average    average    average    average    average
Urban                             83.4%      86.6%      10.6%       9.7%       6.0%       3.7%
Rural                             86.9       83.4        9.7       10.6        3.5        6.0
White                             85.4       81.9       10.4       10.4        4.2        7.7
Black                             82.2       84.9       10.9       10.2        6.9        4.9
Asian                             82.1       85.0       10.9       10.2        7.0        4.8
Other race                        82.3       84.7        9.0       10.6        8.8        4.7
Hispanic                          82.4       84.8        9.3       10.7        8.3        4.5
Spanish language isolation        82.3       84.8        9.3       10.7        8.5        4.6
Asian language isolation          82.3       84.8       10.2       10.4        7.5        4.8
Linguistically isolated:
  any language                    81.7       85.1        9.8       10.6        8.5        4.4
Foreign born                      80.8       85.6       10.7       10.3        8.6        4.2
Noncitizen                        81.1       85.3       10.3       10.4        8.6        4.3
Same house in 1995                84.1       84.5       11.0        9.6        4.9        5.9
Different house in 1995           84.6       84.1        9.6       11.0        5.9        4.9
High school degree only           85.1       83.7       10.2       10.5        4.7        5.8
College degree or higher          83.6       84.7       11.5        9.8        4.9        5.5
Employed—all                      84.8       83.9       10.6       10.2        4.5        5.9
Employed—females                  85.0       83.7       10.4       10.4        4.6        5.9
Receives public assistance        84.0       84.3        9.0       10.5        7.0        5.2
Less than 50% of FPL              83.8       84.5        9.5       10.8        6.7        4.7
Less than 100% of FPL             84.1       84.4        9.5       10.8        6.4        4.8
Less than 200% of FPL             84.5       84.1        9.3       11.1        6.2        4.7
Owner occupied                    84.9       83.3       10.7       10.0        4.4        6.8
Renter occupied                   83.2       84.9       10.0       10.6        6.9        4.5
A comparison of the screener and extended response and nonresponse rates shows that
completion rates are much higher for the extended interview, even though the extended interview
often takes more than 45 minutes to complete. This suggests that questionnaire length
has little impact on response rates. Additionally, other nonresponse makes up about one-third of
the overall extended nonresponse but only 28 percent of the overall screener nonresponse. This is
likely a result of the added difficulty of scheduling a long interview with a chosen respondent,
whereas the screener interview could be completed by any householder 18 years of age or older.
While differences in the extended completion rate by neighborhood characteristics were
generally smaller than the differences found at the screener level, they tended to follow the same
patterns as the screener interview completion rates. There were a few exceptions, such as a
decline in the completion rates of black neighborhoods and very poor neighborhoods (income
below 50 percent of the federal poverty level). However, neighborhoods with high employment
rates maintained response rates above the overall rate for the survey, while at the screener level
the opposite was true. This finding may be attributable to the NSAF data collection design of
administering the survey to the adult in the family who is most knowledgeable about the sampled
child(ren). In addition to providing information about the child(ren), the most knowledgeable
adult provides information about him- or herself and any spouse or partner in the household,
thereby removing the need to schedule time to administer the survey to the spouse or partner.
In our descriptive analysis of neighborhood characteristics, most of the findings have shown
some relationship between neighborhood type and the propensity to respond. To further the
research, a regression model is used to answer the question, “Does knowing the percentage
of neighbors with a certain characteristic help us predict a household’s likelihood of
participating?” The results of this model are shown in table 5-3.
The results indicate that knowing any of the block group characteristics used in this model would
provide some insight into predicting nonresponse. The results still show that rural areas predict
higher response, but this factor seems small compared with some other predictors of higher
response (below poverty, owner occupancy, and receipt of public assistance). The reason
households in rural areas are more likely to respond may have more to do with the characteristics
of rural areas than with the household simply being located in a rural area. In this model, a
greater percentage of either college graduates or high school–educated residents predicts lower
response rates. Thus, the only real surprise is that the sign of the high school–only coefficient is
the opposite of what one would have guessed from the descriptive analysis. Finally, the
descriptive analysis did not find much association between response and the percentage of black
or employed residents; the regression confirms this in that these two characteristics have only
slight predictive power.
Table 5-3. Linear Regression
Dependent variable: screener disposition (0 = nonrespondent, 1 = respondent)
Independent variables: percentage of the block group with that characteristic (values 0 to 100)

                              Unstandardized   Standardized                    95% Confidence
                              coefficients     coefficients                    interval for B
                                      Std.                                    Lower     Upper
                              B       error    Beta         t        Sig.     bound     bound
(Constant)                    .567    .016                  34.758   .000      .535      .598
Black                        -.034    .007     -.012        -4.670   .000     -.049     -.020
Rural                         .044    .004      .028        11.493   .000      .036      .051
Hispanic                     -.115    .015     -.034        -7.587   .000     -.144     -.085
Different house in 1995       .058    .010      .015         5.615   .000      .038      .079
Less than 100% of FPL         .124    .020      .023         6.247   .000      .085      .163
Other race                    .151    .030      .022         5.022   .000      .092      .211
Owner occupied                .098    .007      .043        13.193   .000      .083      .112
High school degree only      -.123    .022     -.022        -5.594   .000     -.166     -.080
Receives public assistance    .299    .110      .008         2.725   .006      .084      .513
College degree or higher     -.212    .014     -.066       -15.171   .000     -.239     -.185
Employed—all                  .053    .012      .011         4.280   .000      .029      .077
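For readers who wish to replicate this type of model on their own data, the following is a
minimal sketch in Python. The variable names and synthetic data are hypothetical stand-ins for
the restricted NSAF case file, which would carry one row per sampled household with its 0/1
screener disposition and the matched census block group percentages.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in for the case file: block group percentages (0-100)
    # attached to each sampled household (hypothetical variable names).
    rng = np.random.default_rng(seed=42)
    n = 5_000
    cases = pd.DataFrame({
        "pct_black": rng.uniform(0, 100, n),
        "pct_rural": rng.uniform(0, 100, n),
        "pct_college": rng.uniform(0, 100, n),
        "pct_owner_occupied": rng.uniform(0, 100, n),
    })

    # Response propensity with signs loosely mimicking table 5-3 (higher
    # college share -> lower response; higher owner occupancy -> higher).
    propensity = (0.57
                  - 0.002 * cases["pct_college"]
                  + 0.001 * cases["pct_owner_occupied"]
                  + 0.0005 * cases["pct_rural"]
                  - 0.0003 * cases["pct_black"])
    cases["responded"] = rng.binomial(1, propensity.clip(0.01, 0.99))

    # Linear probability model: OLS of the 0/1 disposition on the block
    # group percentages, so each B is the change per percentage point.
    X = sm.add_constant(cases.drop(columns="responded"))
    fit = sm.OLS(cases["responded"], X).fit()
    print(fit.summary())  # B, std. error, t, Sig., and 95% CI, as in table 5-3

A logistic regression would be the more conventional choice for a binary outcome today; the
linear specification is sketched here only because it matches the coefficients reported above.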
Conclusions
Most of the findings show only modest differences in completion rates by neighborhood type.
However, even differences of a couple of percentage points could introduce enough bias into
some survey estimates to affect the overall findings. In addition, the differences in completion
rates by neighborhood characteristics are probably somewhat conservative, in that the households
that could not be matched to block group data are probably more transient and therefore would
tend not to respond to the survey.
Some of the potential bias associated with differential response by neighborhood type is dealt
with by the undercoverage and post-stratification weights developed for the NSAF (Bethlehem
2002). The NSAF methods and response rate evaluation report did some evaluation of the
potential bias owing to nonresponse (Brick et al. 2003). However, that report was fairly vague
about how much increasing nonresponse rates bias the survey estimates. This paper goes one step
further in that it identifies types of neighborhoods that have differing response propensities
(rural, Asian, Hispanic, highly educated, owner occupied). A natural next step would be to look
at the completed cases’ block group characteristics using the survey weights. Unfortunately,
most screener nonresponse occurs where post-stratification information is unavailable, and
post-stratification is not as effective if nonrespondents differ from respondents, which does
appear to be at least partially true here.
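To make the post-stratification mechanics concrete, the following toy sketch (hypothetical cell
names and control totals, not the NSAF’s actual weighting cells) scales respondent weights so
that the weighted totals match external census controls:

    import pandas as pd

    # Respondents with their current weights and a post-stratification cell
    # (hypothetical cells and numbers, for illustration only).
    resp = pd.DataFrame({
        "cell":   ["urban", "urban", "rural", "rural"],
        "weight": [1200.0, 1300.0, 800.0, 700.0],
    })
    controls = pd.Series({"urban": 3000.0, "rural": 2000.0})  # census totals

    # Ratio adjustment: the weighted total in each cell is forced to the control.
    weighted = resp.groupby("cell")["weight"].sum()
    resp["ps_weight"] = resp["weight"] * resp["cell"].map(controls / weighted)
    print(resp.groupby("cell")["ps_weight"].sum())  # now matches the controls

As the text notes, this repair helps only to the extent that respondents and nonrespondents
within a cell resemble each other.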
6. SUMMARY
Survey research methodologies have improved over the past two decades, bolstered by both
technological advances and lessons learned through expanded experience. Telephone surveys
such as the National Survey of America’s Families have been very popular because they often
yield high response rates and lower item nonresponse, provide more control over question
ordering, and allow surveyors to use skip patterns and recall information during the interview.
Higher cooperation rates and the ability to reach people by phone had been two major advantages
of telephone surveys, but both advantages are now eroding; this erosion is the primary reason the
NSAF response rate in 2002 was 10 percentage points lower than it was in 1997. Still, telephone
surveys remain a very viable mode of collecting survey data, though changing technology, such
as answering machines, cell phones, and call screening, has made it more difficult to achieve
high response rates on telephone surveys.
While there is no such thing as an “official” acceptable response rate, response rates are the
industry’s standard by which people judge the quality of a survey. Surveys that achieve a
response rate of 70 percent or higher are generally regarded as high quality, and nonresponse is
not usually a concern. Studies such as the NSAF that have response rates between 50 and 70
percent often benefit from a nonresponse weighting adjustment to reduce potential nonresponse
bias. Nonresponse adjustment involves a weighting factor that, when used in analysis, increases
the impact of the data collected from people whose characteristics are similar to those of the
nonrespondents. Determining the characteristics of nonrespondents involves comparing
respondents to census estimates as well as examining the characteristics of respondents from
whom interviews were the most difficult to obtain. While nonresponse is primarily a function of
people refusing to participate, difficult-to-reach respondents also contribute to the overall
nonresponse rate. These two problems make it imperative for a quality telephone survey to
include both refusal avoidance and refusal conversion strategies, and to schedule enough time to
make multiple calls at varying times and days of the week in order to contact difficult-to-reach
respondents.
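As a generic illustration of such an adjustment (a minimal weighting-class sketch with invented
data, not the NSAF’s actual procedure), respondent weights within each class are inflated to
cover the weight of that class’s nonrespondents:

    import pandas as pd

    # One row per sampled case: base weight, response flag, and an adjustment
    # class built from characteristics related to response (hypothetical names).
    cases = pd.DataFrame({
        "base_weight": [1.2, 1.2, 0.8, 0.8, 1.0, 1.0],
        "responded":   [1,   0,   1,   1,   0,   1],
        "adj_class":   ["urban", "urban", "rural", "rural", "urban", "rural"],
    })

    # Adjustment factor = total base weight / respondent base weight, by class.
    totals = cases.groupby("adj_class")["base_weight"].sum()
    resp_totals = (cases[cases["responded"] == 1]
                   .groupby("adj_class")["base_weight"].sum())
    factor = totals / resp_totals

    respondents = cases[cases["responded"] == 1].copy()
    respondents["nr_adj_weight"] = (respondents["base_weight"]
                                    * respondents["adj_class"].map(factor))
    print(respondents[["adj_class", "base_weight", "nr_adj_weight"]])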
The lower the response rate, the more vulnerable a study is to nonresponse bias. Unlike sampling
error, the effect of nonresponse error on the quality of a survey is very difficult to quantify,
because survey researchers do not know whether the nonrespondents differ from the respondents
in how they would have answered the survey. While researchers will never know for sure how a
nonrespondent would have answered the questions asked, they can predict how that absence
from the study may affect survey estimates by studying how the characteristics of
nonrespondents differ from those of respondents.
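A standard way to formalize this, familiar from the nonresponse literature (e.g., Groves 1989),
expresses the bias of the respondent mean as the product of the nonresponse rate and the
respondent-nonrespondent difference:

    \text{bias}(\bar{y}_r) = \frac{n_m}{n}\,(\bar{y}_r - \bar{y}_m)

where \bar{y}_r is the respondent mean for a survey variable, \bar{y}_m is the (unobserved)
nonrespondent mean, n_m is the number of nonrespondents, and n is the total sample size.
Substantial bias requires both a high nonresponse rate and a genuine difference between the two
groups, which is why comparing the characteristics of respondents and nonrespondents is so
informative.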
To discover who nonrespondents are, it is important to understand that people who refuse have
different characteristics than people who are simply very difficult to contact. The number of call
attempts needed to finalize a case is a common measure of how difficult a respondent is to reach;
checking whether anyone in the household ever refused the survey is the common measure of
respondents’ willingness to participate. Hence, this report looks separately at the
difficult-to-contact respondents and the respondents who initially refused to complete the
survey in order to better understand the nature of NSAF nonrespondents.
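As a minimal sketch of how these two measures might be derived from a call-history file
(invented column names and an arbitrary attempt threshold, not the NSAF’s actual disposition
coding):

    import pandas as pd

    # One row per call attempt; completed cases are classified afterward.
    calls = pd.DataFrame({
        "case_id": [1, 1, 1, 1, 2, 2, 3],
        "result":  ["no answer", "no answer", "no answer", "complete",
                    "refusal", "complete", "complete"],
    })

    per_case = calls.groupby("case_id").agg(
        attempts=("result", "size"),
        ever_refused=("result", lambda r: bool((r == "refusal").any())),
    )
    # Flag hard-to-contact cases, e.g., those needing more than three attempts.
    per_case["hard_to_reach"] = per_case["attempts"] > 3
    print(per_case)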
The hard-to-reach and less cooperative respondents shared many characteristics: for instance,
they were more likely to be foreign born, never married, Hispanic, and working full time. But
there were striking differences, especially with respect to education. For the NSAF, less-educated
respondents were harder to reach, while more-educated respondents were easier to reach but less
cooperative. It was much easier to reach rural households, but while rural families traditionally
have been more cooperative, in the 2002 NSAF the difference was small and not significant.
Similarly, larger households were easier to contact but slightly less cooperative. Somewhat
surprising, and certainly good news for the NSAF study, was that while low-income families
were more difficult to reach, they were more cooperative, though not significantly so.
Finally, only modest differences were found when census tract information was used to compare
the 2002 nonrespondents with the respondents. This is encouraging, and given that the survey
weights include some post-stratification to match key census control totals, these small
differences are not likely to have a significant impact on the survey estimates. Nevertheless, this
census comparison suggests there will likely be less precision in the survey estimates for certain
subpopulations, such as Asian, Hispanic, urban, and higher-educated respondents.
References
Abi-Habib, Natalie, Tamara Black, Simon Pratt, Adam Safir, Rebecca Steinback, Timothy
Triplett, Kevin Wang, et al. 2005. 2002 NSAF Collection of Papers. Methodology Report
No. 6. Washington, DC: The Urban Institute.
Abi-Habib, Natalie, Adam Safir, and Timothy Triplett. 2004. 2002 NSAF Survey Methods and
Data Reliability. Methodology Report No. 1. Washington, DC: The Urban Institute.
———. 2004. 2002 NSAF Public Use File Data Documentation Guide. Methodology Report
No. 11. Washington, DC: The Urban Institute.
Abi-Habib, Natalie, Adam Safir, Timothy Triplett, and Pat Cunningham. 2003. 2002 NSAF
Questionnaire. Methodology Report No. 12. Washington, DC: The Urban Institute.
Alexander, Charles H. 1988. “Cutoff Rules for Secondary Calling in a Random Digit Dialing
Survey.” In Telephone Survey Methodology, edited by Robert M. Groves, R. M. Kahn, L.
Lyberg, James T. Massey, Joseph Waksberg, and William L. Nicholls II (113–26). New
York: Wiley.
Beaumont, Jean-Francois. 2000. “An Estimation Method for Nonignorable Nonresponse.”
Survey Methodology 26(2).
Bethlehem, Jelke G. 2002. “Weighting Nonresponse Adjustments Based on Auxiliary
Information.” In Survey Nonresponse, edited by Robert M. Groves, Don Dillman, John L.
Eltinge, and Roderick J.A. Little (275–87). New York: Wiley.
Black, Tamara, and Adam Safir. 2000. “Assessing Nonresponse Bias in the National Survey of
America’s Families.” In Proceedings of the Survey Research Methods Section (816–21).
Alexandria, VA: American Statistical Association.
Blair, Johnny, and Young I. Chun. 1992. “Quality of Data from Converted Refusers in
Telephone Surveys.” Paper presented at the American Association for Public Opinion
Research, St. Petersburg, Florida.
Blair, Johnny, and D. O’Rourke. 1985. “Alternative Callback Rules in Telephone Surveys: A
Methods Research Note.” Paper presented at the American Association for Public
Opinion Research, St. Petersburg, Florida.
Brick, Michael J., Bruce Allen, Pat Cunningham, and David Maklan. 1996. “Outcomes of a
Calling Protocol in a Telephone Survey.” In Proceedings of the Survey Research
Methods Section (142–48). Alexandria, VA: American Statistical Association.
Brick, Michael J., Pam Broene, David Cantor, David Ferraro, Tom Hankins, Carin Rauch, and
Teresa Strickler. 2000. 1999 NSAF Response Rates and Methods Evaluation.
Methodology Report No. 8. Washington, DC: The Urban Institute.
Brick, Michael J., Ismael Flores Cervantes, and David Cantor. 1998. 1997 NSAF Response Rates
and Methods Evaluation. Methodology Report No. 8. Washington, DC: The Urban
Institute.
Brick, Michael J., David Ferraro, Carin Rauch, and Teresa Strickler. 2003. 2002 NSAF Response
Rates and Methods Evaluation. Methodology Report No. 8. Washington, DC: The Urban
Institute.
Brick, Michael J., David Ferraro, Teresa Strickler, and Benmei Liu. 2003. 2002 NSAF Sample
Design. Methodology Report No. 2. Washington, DC: The Urban Institute.
Brick, Michael J., Jill Montaquila, and Fritz Scheuren. 2002. “Estimating Residency Rates for
Undetermined Telephone Numbers.” Public Opinion Quarterly 66(1): 18–39.
Cannell, C. F., and F. J. Fowler. 1963. “Comparison of a Self-Enumerative Procedure and a
Personal Interview: A Validity Study.” Public Opinion Quarterly 27(2): 250–64.
Cantor, David, Bruce Allen, Pat Cunningham, Michael J. Brick, P. Slobasky, and Genevieve
Kenney. 1997. “Promised Incentives on a Random Digit Dial Survey.” In Nonresponse in
Survey Research, Proceedings of the Eighth International Workshop on Household
Survey Nonresponse, edited by A. Koch and R. Porst (219–28). Mannheim, Germany.
Cantor, David, Pat Cunningham, Timothy Triplett, and Rebecca Steinback. 2003. “Comparing
Incentives at Initial and Refusal Conversion Stages on a Screening Interview for a
Random Digit Dial Survey.” In Proceedings of the Survey Research Methods Section.
Alexandria, VA: American Statistical Association.
Cantor, David, Kevin Wang, and Natalie Abi-Habib. 2003. “Comparing Promised and Pre-Paid
Incentives for an Extended Interview on a Random Digit Dial Survey.” In Proceedings of
the Survey Research Methods Section. Alexandria, VA: American Statistical Association.
Converse, Nate, Adam Safir, and Fritz Scheuren. 2001. 1999 NSAF Public Use File
Documentation. Methodology Report No. 11. Washington, DC: The Urban Institute.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2000. “The Effect of Response Rate on the
Index of Consumer Sentiment.” Public Opinion Quarterly 64(4): 413–28.
———. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter-Century.”
Public Opinion Quarterly 69(1): 87–98.
Dillman, Don, John L. Eltinge, Robert M. Groves, and Roderick J. A. Little. 2002. “Survey
Nonresponse in Design, Data Collection, and Analysis.” In Survey Nonresponse, edited by
Groves et al. (3–26). New York: Wiley.
Dunkelberg, William C., and George Day. 1973. “Nonresponse Bias and Callbacks in Sample
Surveys.” Journal of Marketing Research 10 (May): 160–68.
Elliott, M. R., and Roderick J. A. Little. 2000. “Subsampling Callbacks to Improve Survey
Efficiency.” Journal of the American Statistical Association 95: 730–38.
Goldstein, Kenneth M., and M. Kent Jennings. 2002. “The Effect of Advance Letters on
Cooperation in a List Sample Telephone Survey.” Public Opinion Quarterly 66: 608–17.
Greenberg, B. S., and S. L. Stokes. 1990. “Developing an Optimal Call Scheduling Strategy for a
Telephone Survey.” Journal of Official Statistics 6: 421–35.
Groves, Robert M. 1989. Survey Costs and Survey Errors. New York: Wiley.
Groves, Robert M., and Mick P. Couper. 1998. Nonresponse in Household Interview Surveys.
New York: Wiley.
Groves, Robert M., Don Dillman, John L. Eltinge, and Roderick J. A. Little, eds. 2002. Survey
Nonresponse. New York: Wiley.
Groves, Robert M., F. J. Fowler, Mick P. Couper, James M. Lepkowski, Eleanor Singer, and
Roger Tourangeau, eds. 2004. Survey Methodology. New York: Wiley.
Groves, Robert M., and Robert L. Kahn. 1979. Surveys by Telephone: A National Comparison with
Personal Interviews. New York: Academic Press.
Groves, Robert M., and L. Lyberg. 1988. “An Overview of Nonresponse Issues in Telephone
Surveys.” In Telephone Survey Methodology, edited by Groves et al. (191–211). New
York: Wiley.
Groves, Robert M., Stanley Presser, and S. Dipko. 2004. “The Role of Topic Interest in Survey
Participation Decisions.” Public Opinion Quarterly 68(1): 2–31.
Groves, Robert M., Eleanor Singer, and Amy D. Corning. 2000. “Leverage-Saliency Theory of
Survey Participation.” Public Opinion Quarterly 64(3): 299–308.
Groves, Robert M., Eleanor Singer, Amy D. Corning, and Ashley Bowers. 1999. “A Laboratory
Approach to Measuring the Effects on Survey Participation of Interview Length,
Incentives, Differential Incentives, and Refusal Conversion.” Journal of Official Statistics
15: 251–68.
Groves, Robert M., and Douglas Wissoker. 1998. Early Nonresponse Studies of the 1997
National Survey of America’s Families. Methodology Report No. 7. Washington, DC:
The Urban Institute.
Hansen, Morris H., and William N. Hurwitz. 1946. “The Problem of Non-Response in Sample
Surveys.” Journal of the American Statistical Association 41(236): 517–29.
Keeter, Scott, Carolyn Miller, Andrew Kohut, Robert M. Groves, and Stanley Presser. 2000.
“Consequences of Reducing Nonresponse in a Large National Telephone Survey.” Public
Opinion Quarterly 64(2): 125–48.
Kish, Leslie. 1965. Survey Sampling. New York: Wiley.
Lavrakas, Paul J., Sandra L. Bauman, and Daniel M. Merkle. 1992. “Refusal Report Forms
(RRFs), Refusal Conversions and Non-response Bias.” Paper presented at the American
Association for Public Opinion Research conference, St. Petersburg, Florida.
Lin, I-Fen, and Nora Cate Schaeffer. 1995. “Using Survey Participants to Estimate the Impact of
Nonparticipation.” Public Opinion Quarterly 59(2): 236–58.
Link, Michael W., and Robert W. Oldendick. 1999. “Call Screening: Is It Really a Problem for
Survey Research?” Public Opinion Quarterly 63(4): 577–89.
Losch, Mary E., Aaron Maitland, Gene Lutz, Peter Mariolis, and Steven C. Gleason. 2002. “The
Effect of Time of Year of Data Collection on Sample Efficiency: An Analysis of
Behavioral Risk Factor Surveillance Survey Data.” Public Opinion Quarterly 66: 594–
607.
Lynn, Peter, Paul Clarke, Jean Martin, and Paul Sturgis. 2002. “The Effects of Extended
Interviewer Efforts on Nonresponse Bias.” In Survey Nonresponse, edited by Groves et
al. (135–47). New York: Wiley.
Massey, James T., P. R. Barker, and S. Hsiung. 1981. “An Investigation of Response in a
Telephone Survey.” In Proceedings of the Survey Research Methods Section (63–72).
Chicago, IL: American Statistical Association.
Massey, James T., Charles Wolter, Siu Chong Wan, and Karen Liu. 1996. “Optimum Calling
Patterns for Random Digit Dialed Telephone Surveys.” In Proceedings of the Survey
Research Methods Section (426–31). Alexandria, VA: American Statistical Association.
Merkle, Daniel M., Sandra L. Bauman, and Paul J. Lavrakas. 1993. “The Impact of Callbacks on
Survey Estimates in an Annual RDD Survey.” In Proceedings of the Survey Research
Methods Section (1070–75). Alexandria, VA: American Statistical Association.
Newcomer, Kathryn E., and Timothy Triplett. 2004. “Using Surveys.” In Handbook of Practical
Program Evaluation, 2nd edition, edited by Joseph S. Wholey, Harry P. Hatry, and
Kathryn E. Newcomer (chapter 9). San Francisco: Jossey-Bass.
Neyman, J. 1938. “Contribution to the Theory of Sampling Human Populations.” Journal of the
American Statistical Association 33(201): 101–16.
Oldendick, Robert W. 1993. “The Effect of Answering Machines on the Representativeness of
Samples in Telephone Surveys.” Journal of Official Statistics 9(3): 663–72.
Oldendick, Robert W., and Michael W. Link. 1994. “The Answering Machine Generation.”
Public Opinion Quarterly 58(2): 264–73.
Piazza, Tom. 1993. “Meeting the Challenges of Answering Machines.” Public Opinion
Quarterly 57(2): 219–31.
Rao, P. S. R. S. 1983. “Callbacks, Follow-ups, and Repeated Telephone Calls.” In Incomplete
Data in Sample Surveys, vol. 1, edited by William G. Madow, I. Olkin, and D.B. Rubin
(173–204). New York: Academic Press.
Roth, Shelly B., Jill Montaquila, and Michael J. Brick. 2001. “Effects of Telephone
Technologies and Call Screening Services on Sampling, Weighting and Cooperation in a
Random Digit Dial (RDD) Survey.” In Proceedings of the Survey Research Methods
Section. Alexandria, VA: American Statistical Association.
Safir, Adam, Rebecca Steinbach, Timothy Triplett, and Kevin Wang. 2002. “Effects on Survey
Estimates from Reducing Nonresponse.” In Proceedings of the Survey Research Methods
Section. Alexandria, VA: American Statistical Association.
Scheuren, Fritz. 2000. “Quality Assessment of Quality Assessment.” In American Statistical
Association 2000 Proceedings of the Section on Government Statistics and Section on
Social Statistics (44–51). Indianapolis, IN: American Statistical Association.
Sebold, Janice. 1988. “Survey Period Length, Unanswered Numbers and Nonresponse in
Telephone Surveys.” In Telephone Survey Methodology, edited by Groves et al. (247–
56). New York: Wiley.
Shaiko, R. G., D. Dwyre, M. O’Gorman, J. M. Stonecash, and J. Vike. 1991. “Pre-election
Political Polling and Non-Response Bias Issue.” International Journal of Public Opinion
Research 3: 86–99.
Singer, Eleanor. 2002. “The Use of Incentives to Reduce Nonresponse in Household Surveys.”
In Survey Nonresponse, edited by Groves et al. (163–77). New York: Wiley.
Singer, Eleanor, John Van Hoewyk, and Mary Maher. 2000. “Experiments with Incentives for
Survey Participation on the Survey of Consumer Attitudes.” Public Opinion Quarterly
64(2): 171–88.
Smith, Tom. 1983. “The Hidden 25 Percent: An Analysis of Nonresponse on the 1980 General
Social Survey.” Public Opinion Quarterly 47(3): 386–404.
Stasny, Elizabeth A. 1988. “Modeling Nonignorable Nonresponse in Categorical Panel Data with
an Example in Estimating Gross Labor Force Flows.” Journal of Business and Economic
Statistics 6(2): 207–19.
Steeh, C. G. 1981. “Trends in Nonresponse Rates, 1952–1979.” Public Opinion Quarterly 45(1):
40–57.
Steeh, C. G., Robert M. Groves, R. Comment, and R. Hansmire. 1983. “Report on the Survey
Research Center’s Survey of Consumer Attitudes.” In Incomplete Data in Sample
Surveys, vol. 2, edited by William G. Madow, I. Olkin, and D. B. Rubin (377–81). New
York: Academic Press.
Steeh, C. G., Nicole Kirgis, Brian Cannon, and Jeff DeWitt. 2001. “Are They Really as Bad as
They Seem? Nonresponse Rates at the End of the 20th Century.” Journal of Official
Statistics 17: 227–47.
Teitler, Julien O., Nancy E. Reichman, and Susan Sprachman. 2003. “Costs and Benefits of
Improving Response Rates.” Public Opinion Quarterly 67(1): 126–38.
Traugott, Michael W. 1987. “The Importance of Persistence in Respondent Selection for Preelection Surveys.” Public Opinion Quarterly 51(1): 48–57.
Triplett, Timothy. 2002. “What Is Gained from Additional Call Attempts and Refusal
Conversion and What Are the Cost Implications?” Washington, DC: The Urban Institute.
http://mywebpages.comcast.net/ttriplett13/tncpap.pdf.
Triplett, Timothy, and Natalie Abi-Habib. 2005. “Determining the Probability of Selection for a
Telephone Household in a Random Digit Dial Sample Design Is Becoming Increasingly
More Difficult.” In 2002 NSAF Collection of Papers (1-11–1-17). Washington, DC: The
Urban Institute.
Triplett, Timothy, Johnny Blair, Teresa Hamilton, and Yun Chiao Kang. 1996. “Initial
Cooperators vs. Converted Refusers: Are There Response Behavior Differences?” In
Proceedings of the Survey Research Methods Section (1038–41). Alexandria, VA:
American Statistical Association.
Triplett, Timothy, Adam Safir, Kevin Wang, Rebecca Steinbach, and Simon Pratt. 2002. “Using
a Short Follow-up Survey to Compare Respondents and Nonrespondents.” In
Proceedings of the Survey Research Methods Section. Alexandria, VA: American
Statistical Association.
Triplett, Timothy, Julie Scheib, and Johnny Blair. 2001. “How Long Should You Wait Before
Attempting to Convert a Refusal?” In Proceedings of the Survey Research Methods
Section. Alexandria, VA: American Statistical Association.
Tuckel, Peter, and Harry O’Neil. 1996. “New Technology and Non-response Bias in RDD
Surveys.” In Proceedings of the Survey Research Methods Section (889–94). Alexandria,
VA: American Statistical Association.
Tucker, Clyde, James M. Lepkowski, and Linda Piekarski. 2002. “List-Assisted Telephone
Sampling Design Efficiency.” Public Opinion Quarterly 66(3): 321–38.
U.S. Census Bureau. 2002. 2000 Census of Population and Housing, Summary File 3: Technical
Documentation. Washington, DC: U.S. Census Bureau.
———. 2003. Census 2000 Geographic Terms and Concepts. Washington, DC: U.S. Census
Bureau.
Vaden-Kiernan, Nancy, David Cantor, Pat Cunningham, Sarah Dipko, Karen Molloy, and
Patricia Warren. 1999. 1997 NSAF Telephone Survey Methods. Methodology Report No.
9. Washington, DC: The Urban Institute.
Waksberg, Joseph. 1978. “Sampling Methods for Random-Digit Dialing.” Journal of the American
Statistical Association 73(361): 40–46.
Ward, James C., Bertrand Russick, and William Rudelius. 1985. “A Test of Reducing Callbacks
and Not-at-Home Bias in Personal Interviews by Weighting At-Home Respondents.”
Journal of Marketing Research 22: 66–73.
Warren, Patricia, and Pat Cunningham. 2003. 2002 NSAF Telephone Survey Methods.
Methodology Report No. 9. Washington, DC: The Urban Institute.
Weeks, M. F. 1988. “Call Scheduling with CATI: Current Capabilities and Methods.” In
Telephone Survey Methodology, edited by Groves et al. (403–20). New York: Wiley.
Weeks, M. F., B. L. Jones, R. E. Folsom Jr., and C. H. Benrud. 1980. “Optimal Times to Contact
Sample Households.” Public Opinion Quarterly 44(1): 101–14.
Weeks, Michael F., Richard A. Kulka, and Stephanie A. Pierson. 1987. “Optimal Call
Scheduling for a Telephone Survey.” Public Opinion Quarterly 51(4): 540–49.