Partisan Bias among Interviewers
WORD COUNT: 2748
RUNNING HEADER: Partisan Bias among Interviewers
Andrew Healy (corresponding author)
Loyola Marymount University
Department of Economics
One LMU Drive, Room 4229
Los Angeles, CA 90045
(310) 338-5956
ahealy@lmu.edu
Neil Malhotra
Stanford University
Graduate School of Business
655 Knight Way
Stanford, CA 94305
(408) 772-7969
neilm@stanford.edu
Author Note
ANDREW HEALY is an associate professor of economics at Loyola Marymount University,
Los Angeles, CA, USA. NEIL MALHOTRA is an associate professor of political economy in
the Graduate School of Business at Stanford University, Stanford, CA, USA. We thank the
editors and anonymous reviewers for helpful comments. The authors declare no conflicts of
interest. *Address correspondence to Andrew Healy, Loyola Marymount University, Department
of Economics, One LMU Drive Room 4229, Los Angeles CA 90045, USA; email:
ahealy@lmu.edu.
Supplementary materials are available online at http://poq.oxfordjournals.org/.
Abstract
Survey researchers have long observed that demographic characteristics of interviewers such as
race and gender affect survey responses as well as response rates. Building on this work, we
address a previously unexplored question: Do interviewers’ partisan leanings bias data collection
in telephone surveys? To do so, we leverage a unique dataset in which interviewers reported their
partisan identifications before the survey was fielded. We find that interviewers are more likely
to ascribe positive experiences to interviews with co-partisans. However, we find little evidence
to suggest that interviewer partisanship biases interviewer ratings of respondents’ personal
characteristics, the answers provided by respondents, item nonresponse, or the measurement
error associated with responses.
Survey researchers have long observed that interviewers in face-to-face and telephone
surveys can affect respondents’ answers to questions. Most existing research on the subject has
focused on how observable demographic characteristics of interviewers such as gender and race
affect survey responses as well as response rates (e.g. Hatchett and Schuman 1975; Cotter et al.
1982; Finkel et al. 1991; Kane and Macaulay 1993; Catania et al. 1996; Davis 1997a, 1997b;
Huddy et al. 1997). Due to social desirability pressures, people may change their responses on
sensitive items (e.g. gender or racial issues) to project a certain image to the interviewer. This
line of research has produced several valuable suggestions, including the importance of
diversifying interviewer pools and adjusting for interviewer effects.
Building on this work, we address a previously unexplored question: Do interviewers’
partisan leanings bias data collection in telephone surveys? To do so, we leverage a unique
dataset in which interviewers reported their party identifications before the survey was fielded.
We analyze whether, conditional on the partisanship of respondents, interviewer partisanship
leads to biased data collection. Unlike interviewer gender, and perhaps race, interviewers’ partisan leanings cannot easily be inferred by respondents in telephone surveys. Nonetheless, there are several potential mechanisms by which interviewer partisanship may affect data collection. In political surveys, interviewers learn the partisanship of respondents as they answer questions, potentially introducing both conscious and unconscious measurement error and bias. It is also
possible that respondents can deduce the partisanship of interviewers from the interviewer’s
reactions to their answers or via demographic correlates such as the interviewer’s race or gender
and consequently provide responses they believe to be desirable to interviewers.1 In addition to interviewer ideology potentially biasing responses, recruitment success may vary depending on whether the respondent is a co-partisan, producing differing nonresponse errors. Similarly, after a respondent agrees to participate in a survey, interviewer partisanship may predict item nonresponse, particularly for politically charged questions.

1. The possibility of interviewer ideology impacting responses is suggested by Rice’s (1929) early study. Analyzing a 1914 survey of homeless individuals, he found that respondents interviewed by a prohibitionist interviewer were three times more likely to attribute their condition in life to “liquor” compared to those interviewed by a non-prohibitionist interviewer.
Even if the responses themselves are not affected, there may be concerns that
interviewers affect data collection by submitting biased interview ratings of respondents.
Understanding whether interviewer ratings are biased by partisanship is important given that
these ratings have been used extensively in political science as measures of respondent
information and political sophistication (e.g. Zaller 1985; Rahn et al. 1994; Bartels 1996, 2005;
Delli Carpini and Keeter 1996; Gilens 2001; Brewer 2003). In addition, partisan bias on the part
of interviewers may affect data quality via lowered rapport with respondents, decreasing their
motivation to be well-engaged throughout the survey process. Indeed, recent research has
suggested that interviewer attitudes—not simply demographic characteristics—can influence
data collection (Durrant et al. 2010; Jackle et al. 2013). Other research has suggested that the
rapport interviewers build with respondents can affect data quality (e.g. Kahn and Cannell 1957;
Ross and Mirowsky 1984).
The issue of partisan bias among interviewers has not received a great deal of attention
from survey researchers, particularly compared to the effects of race and gender. We argue that
this is an important research design consideration given that many public opinion surveys ask
about political items, and previous research has shown that attitudinal predispositions can affect
judgments, evaluations, and social interactions (e.g. Bartels 2002; Taber and Lodge 2006;
Iyengar et al. 2012). Further, even as many aspects of survey methodology become more
standardized and transparent, the proliferation of surveys from partisan organizations (with
potentially biased interviewers) points to the importance of understanding whether partisan
interviewers introduce bias.
Accordingly, this research note addresses three questions:
(1) Is interviewer partisanship related to how interviewers rate respondents’ personal
characteristics or the interview itself?
(2) Is interviewer partisanship related to the answers provided by respondents?
(3) Is interviewer partisanship related to measurement error in those responses?
Data and Measures
Data. This research note leverages a unique dataset, the 2006 American National Election
Studies (ANES) Pilot Study.2 To the best of our knowledge, this is the only national dataset of a
representative probability sample of Americans that asked interviewers to report their party
identifications. Interviewers also answered a series of questions about the respondents they
surveyed and the interview experience. Full question wordings are provided in the appendix.
2. The telephone survey was conducted by Schulman, Ronca & Bucuvalas, Inc. (SRBI) with a nationally representative sample of U.S. adults between November 13, 2006, and January 7, 2007. SRBI is a professional survey research firm whose interviewers are highly experienced and well trained. The sample consisted of 1212 respondents who participated in the pre-election 2004 ANES Time Series Study; 675 interviews were completed, yielding a re-interview rate of 56.3%. Given that the AAPOR RR1 response rate of the time series study was 66.1%, the cumulative response rate was 37.2%.
Interviewer Partisanship. Interviewer party identification was measured on a seven-point
scale: strong Democrat, not strong Democrat, leaning Democrat, Independent, leaning
Republican, not strong Republican, and strong Republican. Respondents did not learn their
interviewers’ responses to this question. We combine all Democrats into a single category and all
Republicans into a single category (pooling leaners with stated partisans); Independents are
excluded from the analysis.3 Of the 14 interviewers included in the analysis, 8 were Democratic
identifiers and 6 were Republican identifiers. 50.7% of analyzed interviews were conducted by
Democratic identifiers.
3. Following Keith et al. (1992), we pooled Democratic and Republican leaners with stated partisans. We also conducted the analyses excluding leaners and obtained similar results in terms of the magnitude of the coefficients, although the estimates are less precise due to the smaller sample size and number of clusters.

Respondent Partisanship. Respondents were also asked to report their party identification on the same seven-point scale.4 As with interviewers, strong partisans, not strong partisans, and leaners were pooled together, and pure Independents were excluded. 57.2% of analyzed respondents identified as Democrats.

4. The question wording was: “[Generally speaking/As of today], do you think of yourself as a Republican, a Democrat, an Independent, or what?” Partisan identifiers were then asked, “Would you call yourself a strong Republican/Democrat or a not very strong Republican/Democrat?” All others were asked, “Do you think of yourself as closer to the Republican Party or the Democratic Party?” Half of respondents were randomly assigned to receive the introductory phrase “Generally speaking,” while the other half received “As of today.” There was no statistically significant difference in reported partisanship by question wording (p = .49 for a two-tailed t-test).
Altogether, 29.1% of the interviews in our sample matched Democratic interviewers with Democratic respondents, 21.6% matched Democratic interviewers with Republican respondents,
28.1% matched Republican interviewers with Democratic respondents, and 21.2% matched
Republican interviewers with Republican respondents.
Interviewer Ratings of Respondents. At the conclusion of the interview, interviewers
were asked to report four characteristics of respondents’ competence: effort at answering
questions, difficulty with understanding questions, difficulty with coming up with answers, and
intelligence.
Interviewer Ratings of the Interview Experience. Interviewers were also asked to describe
seven aspects of the interviewing experience. They evaluated whether the respondent was
reluctant, cooperative, suspicious, worried, concerned, and interested. Additionally, interviewers
rated the respondent’s enjoyment of the interview.
Survey Responses. Although there are many possible questions to analyze, we focus on
whether interviewer partisanship is related to respondents’ reports of their approval of President
George W. Bush, since this question has clear partisan valence. In addition to overall approval,
respondents were asked to report approval of how Bush was handling the economy, foreign
relations, and terrorism.5
5. The approval questions were asked in a branched format, first by asking people to report whether they approved or disapproved of the president’s performance and then ascertaining the strength of the (dis)approval. Accordingly, we analyze both the three-point scale measured by the initial question and the full seven- or nine-point scale constructed from both the initial question and the follow-ups. Respondents were randomly assigned to respond on either a seven-point or nine-point scale. For all four measures, we observed no difference in the mean approval ratings across the two scales by experimental condition: overall approval (p = .83), economy approval (p = .97), foreign relations approval (p = .72), and terrorism approval (p = .26).
Response Order Manipulations. The survey contained 44 response order manipulations
that were administered to either the entire sample or a randomly selected portion of the sample.
The questions for which response orders were manipulated are listed in the appendix. We test
whether response order effects are greater when interviewers are from a different party than
respondents, as could occur if data reliability is compromised due to decreased rapport.
Construct Validity Tests. Responses to some items should be highly correlated with one
another. For example, people were asked about their overall approval of President Bush in
addition to their approval of him on individual issues: the economy, foreign affairs, and
terrorism. We test whether the match between interviewer partisanship and respondent
partisanship is related to consistency across these kinds of questions.
All variables are recoded to lie between 0 and 1, with 0 representing the least favorable
evaluation (e.g. interviewer rating that the respondent was “not at all interested” in the interview)
and 1 representing the most favorable evaluation (e.g. interviewer rating that the respondent was
“extremely interested” in the interview). Accordingly, we can interpret difference-in-difference
estimates as representing percentage point changes in the dependent variable.
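To make these codings concrete, the following minimal sketch (in Python, with invented variable names rather than the actual ANES codes) pools leaners with stated partisans, drops pure Independents, and rescales a five-point rating to the 0 to 1 interval:

```python
import numpy as np
import pandas as pd

# Hypothetical illustration of the codings described above; column names
# are invented for demonstration, not the actual ANES variable codes.
df = pd.DataFrame({
    # 7-point party ID: 1 = strong Dem, ..., 4 = pure Independent, ..., 7 = strong Rep
    "pid7": [1, 3, 4, 6, 7],
    # 5-point interviewer rating: 1 = least favorable, ..., 5 = most favorable
    "rating5": [2, 5, 3, 4, 1],
})

# Pool leaners with stated partisans; pure Independents (4) are excluded.
df["dem"] = np.select([df["pid7"] <= 3, df["pid7"] >= 5], [1.0, 0.0],
                      default=np.nan)
df = df.dropna(subset=["dem"])

# Rescale the rating to [0, 1] so regression coefficients read as
# percentage-point changes.
df["rating01"] = (df["rating5"] - 1) / 4
print(df)
```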
Empirical Strategy
To assess partisan bias in telephone interviewing, we estimate OLS regressions of the
form:
Y = α + β1Pi + β2Pr + β3(Pi × Pr) + ε    (1)
where r indexes respondents, i indexes interviewers, Y represents some response by either a respondent or an interviewer (e.g. an interviewer rating or a survey response), Pi is a dummy variable indicating whether the interviewer is a Democrat (Republican interviewers are the omitted category), Pr is a dummy variable indicating whether the respondent is a Democrat (Republican respondents are the omitted category), and ε is stochastic error.6 Standard errors are clustered by interviewer. The parameter of interest is β3, which represents the relevant difference-in-difference estimate: for example, the difference between Democratic interviewers’ assessments of Democratic vs. Republican respondents and Republican interviewers’ assessments of Democratic vs. Republican respondents. In other words, β3 recovers (ȳdi,dr − ȳdi,rr) − (ȳri,dr − ȳri,rr), where ȳxi,yr represents the response provided by an interviewer (x)-respondent (y) combination, x ∈ {Democratic interviewer (di), Republican interviewer (ri)}, and y ∈ {Democratic respondent (dr), Republican respondent (rr)}.

6. Given that the dependent variables are ordinal rating scales, we also estimated all models using ordered logistic regression and obtained similar results. We report OLS regressions here for two reasons. First, estimates from OLS can be easily interpreted as conditional means, and we do not have to make the additional proportional odds assumption that ordered logistic regression requires; accordingly, many econometricians prefer OLS over models that require additional, untestable assumptions even when predicting limited dependent variables (see Angrist and Pischke 2009). Second, we can easily and substantively interpret the coefficient estimates, which is not straightforward when estimating generalized linear models (GLMs) with interaction terms (see Ai and Norton 2003).
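As an illustration of how equation (1) might be estimated, the following sketch fits the OLS model with interviewer-clustered standard errors on simulated data; the column names and data-generating process are hypothetical, not the actual study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600

# Simulated stand-in for the survey data: 14 interviewers, as in the study.
df = pd.DataFrame({"int_id": rng.integers(0, 14, n),
                   "dem_resp": rng.integers(0, 2, n)})   # P_r
df["dem_int"] = (df["int_id"] < 8).astype(int)            # P_i (8 Dem interviewers)
# Outcome on the 0-1 scale with an arbitrary interaction effect.
df["y"] = (0.6 + 0.02 * df["dem_int"] * df["dem_resp"]
           + rng.normal(0, 0.2, n)).clip(0, 1)

# Equation (1): Y = a + b1*P_i + b2*P_r + b3*(P_i x P_r) + e,
# with standard errors clustered by interviewer.
fit = smf.ols("y ~ dem_int * dem_resp", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["int_id"]})
print(fit.params["dem_int:dem_resp"])    # b3: the difference-in-difference
print(fit.pvalues["dem_int:dem_resp"])
```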
To assess whether estimates from equation (1) are confounded by omitted variables correlated with either interviewer or respondent characteristics, we also estimate a more saturated version of the equation:
Y = α + β1Pi + β2Pr + β3(Pi × Pr) + γ′xi + δ′zr + ε    (2)
where xi is a vector of interviewer characteristics (gender, age, race, education, experience,
political interest) and zr is a vector of respondent characteristics (gender, age, race, education).
As mentioned above, it is important to include these controls because respondents may infer the
interviewer’s party identification from demographic cues.
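Continuing the simulated example above, the saturated specification in equation (2) simply adds the covariate vectors to the same formula; the added columns below are again hypothetical stand-ins for the interviewer and respondent traits:

```python
# Extending the simulated DataFrame from the previous sketch with
# hypothetical covariates: x_i (interviewer traits) and z_r (respondent traits).
df["int_female"] = (df["int_id"] % 2).astype(int)
df["resp_age"] = rng.integers(18, 90, n)

fit2 = smf.ols("y ~ dem_int * dem_resp + int_female + resp_age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["int_id"]})
print(fit2.params["dem_int:dem_resp"])   # b3 with controls included
```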
To assess whether interviewer partisanship leads to greater measurement error, we must estimate regression models that include three-way interaction terms. For example, we estimate the following model to assess whether order effects are less pronounced when an interviewer is interviewing a co-partisan:

Yr = α + β1Pi + β2Pr + β3Oi + β4(Pi × Pr) + β5(Pi × Oi) + β6(Pr × Oi) + β7(Pi × Pr × Oi) + ε    (3)

where Yr is a survey response and Oi is a dummy variable representing whether the response options were read in reverse order (e.g. “not well at all” read first), with the omitted category representing items where the response options were read in the standard order (e.g. “extremely well” read first). We coded response options read first in the standard order as lower numbers so that positive coefficients would indicate primacy effects. The parameter of interest is β7, the three-way interaction between response order, interviewer partisanship, and respondent partisanship. We also estimate versions of equation (3) including xi and zr.7

7. To assess correlational validity, we estimated a regression analogous to equation (3) in which the response-order variable is replaced by the respondent’s answer to a criterion item theoretically related to the survey response (the dependent variable).
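A sketch of equation (3) in the same style: the response-order dummy enters a full factorial with the two partisanship dummies, and the coefficient on the triple interaction corresponds to β7 (simulated data, hypothetical names):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({"int_id": rng.integers(0, 14, n),
                   "dem_resp": rng.integers(0, 2, n),
                   "rev_order": rng.integers(0, 2, n)})  # O: options read reversed
df["dem_int"] = (df["int_id"] < 8).astype(int)
df["y"] = 0.5 + rng.normal(0, 0.2, n)                    # null effect by design

# The * operator expands to all main effects plus every two- and
# three-way interaction, matching equation (3).
fit3 = smf.ols("y ~ dem_int * dem_resp * rev_order", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["int_id"]})
print(fit3.params["dem_int:dem_resp:rev_order"])          # beta_7
```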
Results
We present here results for the three questions of interest mentioned above.
(1) Is interviewer partisanship related to how interviewers rate respondents’ personal
characteristics or the interview itself?
Interviewer partisanship was unrelated to interviewer ratings of respondent
characteristics. The results are presented in Table 1. To explicate the findings, we describe one
rating in detail. The first row, for example, reports how the correspondence between interviewer
and respondent partisanship is related to the interviewer’s assessment of whether a respondent
was hard-working during the interview. As reported in the first four columns, Democratic
interviewers rated Democratic respondents as 2% more hard-working than Republican
respondents, an insignificant difference (p=.65, two-tailed). Republican interviewers likewise did
not rate Democratic respondents differently from Republican respondents, approximately a 0%
difference (p=.95). The difference-in-difference estimate is therefore 2% − 0% = 2%, which is
also statistically insignificant (p=.73).
In models excluding respondent and interviewer control variables, the difference-in-difference estimates for all four interviewer ratings are small (between −2% and 2%) and
indistinguishable from zero. As shown in the last three columns of Table 1, results are similar
when including controls for interviewer characteristics and respondent characteristics. Given the
small number of clusters, we also estimated the standard errors via bootstrapping (Cameron,
Gelbach, and Miller 2008) and found similar results (see rightmost column).
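For concreteness, the following is a rough sketch of one simple variant, a pairs cluster bootstrap that resamples whole interviewer clusters with replacement; the wild cluster bootstrap that Cameron, Gelbach, and Miller (2008) recommend for settings with few clusters differs in its details:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def cluster_bootstrap_se(df, n_boot=999, seed=0):
    """Pairs cluster bootstrap SE of the diff-in-diff coefficient.

    Assumes columns y, dem_int, dem_resp, int_id; a rough sketch only,
    not the wild cluster bootstrap of Cameron, Gelbach, and Miller (2008).
    """
    rng = np.random.default_rng(seed)
    groups = {cid: g for cid, g in df.groupby("int_id")}
    ids = list(groups)
    draws = []
    for _ in range(n_boot):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([groups[c] for c in sampled], ignore_index=True)
        # Skip degenerate draws in which one party has no interviewers.
        if boot["dem_int"].nunique() < 2:
            continue
        fit = smf.ols("y ~ dem_int * dem_resp", data=boot).fit()
        draws.append(fit.params["dem_int:dem_resp"])
    return np.std(draws, ddof=1)
```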
Conversely, we did observe instances where there was partisan bias in how interviewers rated characteristics of the interview itself. For instance, as shown in the first row of Table 2, Democratic interviewers perceived co-partisan respondents as 1% more interested than opposite-party respondents. However, Republican interviewers perceived co-partisan respondents as 7% more interested than opposite-party respondents. The difference-in-difference estimate is therefore 1% − (−7%) = 8%. This estimate achieves standard levels of statistical significance (p=.04).
(p=.04). Examining the most saturated models including both interviewer and respondent
controls, we find significant difference-in-difference estimates (at p<.05 or below) for three
interviewer ratings that were associated with positive experiences: cooperativeness, interest, and
enjoyment. The point estimates range from 7-8%. Partisan bias for ratings associated with
negative experiences (e.g. whether the respondent was suspicious) is much weaker. Overall,
interviewers were more likely to associate positive concepts with the interview experience when
they talked with co-partisans rather than opposing partisans.
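The difference-in-difference arithmetic can be reproduced directly from the cell means; using the “Interested” row of Table 2:

```python
import pandas as pd

# Cell means from the "Interested" row of Table 2.
cells = pd.DataFrame(
    {"dem_resp": [0.64, 0.65], "rep_resp": [0.63, 0.72]},
    index=["dem_interviewer", "rep_interviewer"])

dif = cells["dem_resp"] - cells["rep_resp"]            # 0.01 and -0.07
did = dif["dem_interviewer"] - dif["rep_interviewer"]  # 0.01 - (-0.07)
print(round(did, 2))                                   # 0.08: the 8-point estimate
```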
(2) Is interviewer partisanship related to the answers provided by respondents?
Similar to how respondents might be less likely to provide sexist or racist responses to
female or minority interviewers, respectively, it may be that respondents are more likely to
provide pro-Bush responses to Republican interviewers and anti-Bush responses to Democratic
interviewers. However, we found no evidence that interviewer partisanship is associated with
respondent reports of President Bush’s overall approval or approval in specific domains. As
shown in Table 3, in the models estimated with all controls, none of the eight approval measures
exhibited statistically significant difference-in-difference estimates and four were signed
oppositely from the theoretical expectation.8
8. Interviewer partisanship also has no significant effect on the propensity of respondents to either not answer questions or provide noncommittal answers (see Online Appendix 1).

(3) Is interviewer partisanship related to measurement error?
As mentioned above, we assessed whether interviewer partisanship is associated with two forms of measurement error: response order effects and lack of correlation between theoretically related variables. In both cases, we found little evidence of partisan bias.
Of the 44 difference-in-difference estimates for the response order manipulations, only 2 were statistically significant at the p<.10 level and none were significant at the p<.05 level. These are roughly the results we would expect to see by chance alone in the absence of any true effect. Indeed, the probability of observing 42 of 44 null effects if a true effect existed (the Type II error rate) would be extremely small. The findings are similar when controlling for interviewer and
respondent characteristics. Full results are presented in Online Appendix 2 (Table A1).
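As a back-of-the-envelope check on the chance-alone benchmark, if all 44 tests were independent and every null hypothesis were true, the number of estimates significant at p<.10 would follow a Binomial(44, 0.10) distribution:

```python
from scipy.stats import binom

# Under 44 independent null hypotheses, each test falls below p < .10
# with probability 0.10.
n_tests, alpha = 44, 0.10
print(binom.mean(n_tests, alpha))    # expected significant tests: ~4.4
print(binom.cdf(2, n_tests, alpha))  # P(2 or fewer significant) ~ 0.17
```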
Similarly, we found that interviewer partisanship was at most slightly related to
respondents’ consistency across related questions. To investigate this issue, we considered the
consistency between respondents’ approval of President Bush’s overall job performance and
their approval of his performance in three specific domains: the economy, foreign affairs, and
terrorism. For the first two domains, the coefficient of interest is close to zero. For the last
domain, we found that the relationship between overall Bush approval and terrorism-specific approval is marginally stronger when respondents are interviewed by co-partisans. The findings
are similar when controlling for interviewer and respondent characteristics. Full results can be
found in Online Appendix 2 (Tables A2 and A3).
Conclusion
Utilizing a dataset with unique information on interviewer partisanship, we investigated
whether that partisanship is related to the interview experience and the data that the survey
generates. In contrast to other findings that interviewer characteristics like race and gender can
have significant effects, our results suggest a relatively minor role for interviewer partisanship.
While partisanship is related to interviewers’ subjective evaluations of the interview experience,
it is associated with little else. It has no relation to interviewers’ ratings of respondent characteristics such as intelligence, increasing our confidence in prior research that has leveraged such measures. Most importantly, it has little to no correlation with respondents’ reported political attitudes or the consistency with which those answers were reported.
Why are partisan interviewing effects not as large as those previously found for race and
gender? As mentioned above, one possibility is that respondents cannot easily infer the
interviewer’s partisanship. Indeed, our findings suggest that interviewers may do a good job of
keeping their personal opinions out of the survey process. These results support arguments for the reliability and validity of survey responses. Normatively, they suggest that Americans are less subject to social desirability concerns when discussing political issues (as opposed to matters related to gender and race) and more open to debate and discussion. The survey interview, from the perspective of both respondent and interviewer, may therefore approximate a normative ideal of democratic deliberation (Gutmann and Thompson 1996) rather than a venue of contention and suspicion. Future research is needed to explore why interviewer partisanship differs from race and gender, as well as the methodological and substantive implications of these differences. It would also be interesting to extend the design of this analysis to other sorts of respondent and interviewer attitudes, such as religious beliefs.
Interviewers, like all human beings, bring prior biases into the survey experience. It is
therefore important to understand whether these biases affect data collection. That we find little
partisan bias for standard survey questions provides confidence in the conclusions that we draw
from public opinion surveys in general, and the ANES in particular. Future scholarship can
extend this research to study face-to-face interviewing and other types of interviewer attitudes
besides partisanship. Additionally, while the data we analyzed were collected by highly skilled
interviewers from a major research firm, partisan bias may be more pronounced with new and
perhaps less well-trained interviewers.
Appendix: Question Wordings
Interviewer Ratings of Respondent Characteristics
Work Hard. How hard did the respondent work to answer questions? Extremely hard, very hard,
moderately hard, slightly hard, or not hard at all?
Understand. How difficult was it for the respondent to understand the questions? Extremely
difficult, very difficult, moderately difficult, slightly difficult, or not difficult at all?
Answers. How difficult was it for the respondent to come up with answers to the questions?
Extremely difficult, very difficult, moderately difficult, slightly difficult, or not difficult at all?
Intelligent. How intelligent would you say the respondent is? Much above average, a little above
average, average, a little below average, or much below average?
Interviewer Ratings of the Survey Experience
Cooperative. How cooperative was the respondent? Extremely cooperative, very cooperative,
moderately cooperative, slightly cooperative, or not cooperative at all?
Concerned. How concerned did the respondent seem to be that you might not be who you say
you are, or that you are not really doing what you said you are doing? Extremely concerned,
very concerned, moderately concerned, slightly concerned, or not concerned at all?
Interested. How interested was the respondent in the interview? Extremely interested, very
interested, moderately interested, slightly interested, or not interested at all?
Enjoy. How much did the respondent enjoy the interview? A great deal, a lot, a moderate
amount, a little, or not at all?
Reluctant. How reluctant to begin the interview did the respondent seem? Extremely reluctant,
very reluctant, moderately reluctant, slightly reluctant, or not reluctant at all?
Suspicious. When you started speaking to the respondent, how suspicious did [HE/SHE] seem to
be about who you are and why you were calling? Extremely suspicious, very suspicious,
moderately suspicious, slightly suspicious, or not suspicious at all?
Worried. During the interview, how worried did the respondent seem about reporting personal
information? Extremely worried, very worried, moderately worried, slightly worried, or not
worried at all?
Survey Responses
Overall Bush Approval. Do you approve, disapprove, or neither approve nor disapprove of the
way George W. Bush is handling his job as president? [If “approve”/”disapprove”: Do you
approve/disapprove strongly or not strongly?/Do you approve/disapprove extremely strongly,
moderately strongly, or slightly strongly?] [If “neither approve nor disapprove”: Do you lean
toward approving, lean toward disapproving, or do you not lean either way?]
Economy Approval. Do you approve, disapprove, or neither approve nor disapprove of the way
George W. Bush is handling the economy? [If “approve”/”disapprove”: Do you
approve/disapprove strongly or not strongly?/Do you approve/disapprove extremely strongly,
moderately strongly, or slightly strongly?] [If “neither approve nor disapprove”: Do you lean
toward approving, lean toward disapproving, or do you not lean either way?]
Foreign Relations Approval. Do you approve, disapprove, or neither approve nor disapprove of
the way George W. Bush is handling our relations with foreign countries? [If
“approve”/”disapprove”: Do you approve/disapprove strongly or not strongly?/Do you
approve/disapprove extremely strongly, moderately strongly, or slightly strongly?] [If “neither
approve nor disapprove”: Do you lean toward approving, lean toward disapproving, or do you
not lean either way?]
Terrorism Approval. Do you approve, disapprove, or neither approve nor disapprove of the way
George W. Bush is handling terrorism? [If “approve”/”disapprove”: Do you approve/disapprove
strongly or not strongly?/Do you approve/disapprove extremely strongly, moderately strongly, or
slightly strongly?] [If “neither approve nor disapprove”: Do you lean toward approving, lean
toward disapproving, or do you not lean either way?]
Response Order Analyses
The variable codes for which response order was randomly rotated in the 2006 ANES Pilot
Study were: V06P501, V06P502, V06P503, V06P509, V06P510, V06P511, V06P512,
V06P513, V06P514, V06P515, V06P519, V06P533, V06P534, V06P535, V06P536, V06P537,
V06P538, V06P539, V06P540, V06P541, V06P542, V06P543, V06P544, V06P630, V06P631,
V06P632, V06P634, V06P639, V06P644, V06P645, V06P646, V06P647, V06P648, V06P649,
V06P656, V06P657, V06P658, V06P659, V06P770, V06P771, V06P772, V06P773, V06P809,
V06P810
References
Ai, Chunrong, and Edward C. Norton. 2003. “Interaction Terms in Logit and Probit Models.”
Economics Letters. 80:123-129.
Angrist, Joshua D., and Jörn-Steffen Pischke. 2009. Mostly Harmless Econometrics: An
Empiricist’s Companion. Princeton, NJ: Princeton University Press.
Bartels, Larry M. 1996. “Uninformed Votes: Information Effects in Presidential Elections.”
American Journal of Political Science. 40:194-230.
Bartels, Larry M. 2002. “Beyond the Running Tally: Partisan Bias in Political Perceptions.”
Political Behavior. 24:117-150.
Bartels, Larry M. 2005. “Homer Gets a Tax Cut: Inequality and Public Policy in the American
Mind.” Perspectives on Politics. 3:15–31.
Brewer, Paul R. 2003. “Values, Political Knowledge, and Public Opinion about Gay Rights: A
Framing-Based Account.” Public Opinion Quarterly. 67:173-201.
Cameron, A. Colin, Jonah B. Gelbach, and Douglas L. Miller. 2008. “Bootstrap-Based
Improvements for Inference with Clustered Standard Errors.” Review of Economics and
Statistics. 90:414-427.
Catania, Joseph A., Diane Binson, Jesse Canchola, Lance M. Pollack, Walter Hauck, and
Thomas J. Coates. 1996. “Effects of Interviewer Gender, Interviewer Choice, and Item
Wording on Responses to Questions Concerning Sexual Behavior.” Public Opinion
Quarterly. 60:345-375.
Cotter, Patrick R., Jeffrey Cohen, and Philip B. Coulter. 1982. “Race-of-Interviewer Effects in
Telephone Interviews.” Public Opinion Quarterly. 46:278-284.
Davis, Darren W. 1997a. “The Direction of Race of Interviewer Effects among African-Americans: Donning the Black Mask.” American Journal of Political Science. 41:309-322.
Davis, Darren W. 1997b. “Nonrandom Measurement Error and Race of Interviewer Effects
among African Americans.” Public Opinion Quarterly. 61:183-207.
Delli Carpini, Michael X., and Scott Keeter. 1996. What Americans Know About Politics and
Why it Matters. New Haven, CT: Yale University Press.
Durrant, Gabriele B., Robert M. Groves, Laura Staetsky, and Fiona Steele. 2010. “Effects of
Interviewer Attitudes and Behaviors on Refusal in Household Surveys.” Public Opinion
Quarterly. 74:1-36.
Finkel, Steven E., Thomas M. Guterbock, and Marian J. Borg. 1991. “Race-of-Interviewer
Effects in a Preelection Poll: Virginia 1989.” Public Opinion Quarterly. 55:313-330.
Gilens, Martin. 2001. “Political Ignorance and Collective Policy Preferences.” American
Political Science Review. 95:379-396.
Gutmann, Amy, and Dennis F. Thompson. 1996. Democracy and Disagreement. Cambridge,
MA: Harvard University Press.
Hatchett, Shirley, and Howard Schuman. 1975. “White Respondents and Race-of-Interviewer
Effects.” Public Opinion Quarterly. 39:523-528.
Huddy, Leonie, Joshua Billig, John Bracciodieta, Lois Hoeffler, Patrick J. Moynihan, and
Patricia Pugliani. 1997. “The Effect of Interviewer Gender on the Survey Response.”
Political Behavior. 19:197-220.
Iyengar, Shanto, Gaurav Sood, and Yphtach Lelkes. 2012. “Affect, Not Ideology: A Social
Identity Perspective on Polarization.” Public Opinion Quarterly. 76:405-431.
Jackle, Annette, Peter Lynn, Jennifer Sinibaldi, and Sarah Tipping. 2013. “The Effect of
Interviewer Experience, Attitudes, Personality and Skills on Respondent Cooperation
with Face-to-Face Surveys.” Survey Research Methods. 7:1-15.
Kahn, Robert L., and Charles F. Cannell. 1957. The Dynamics of Interviewing: Theory,
Technique, and Cases. New York: John Wiley & Sons.
Kane, Emily W., and Laura J. Macaulay. 1993. “Interviewer Gender and Gender Attitudes.”
Public Opinion Quarterly. 57:1-28.
Keith, Bruce E., David B. Magleby, Candice J. Nelson, Elizabeth Orr, Mark C. Westlye, and
Raymond E. Wolfinger. 1992. The Myth of the Independent Voter. Berkeley, CA:
University of California Press.
Rahn, Wendy M., Jon A. Krosnick, and Marijke Breuning. 1994. “Rationalization and
Derivation Processes in Survey Studies of Political Candidate Evaluation.” American
Journal of Political Science. 38:582-600.
Rice, Stuart A. 1929. “Contagious Bias in the Interview: A Methodological Note.” American
Journal of Sociology. 35:420-423.
Ross, Catherine E., and John Mirowsky. 1984. “Socially-Desirable Response and Acquiescence
in a Cross-Cultural Survey of Mental Health.” Journal of Health and Social Behavior.
25:189-197.
Taber, Charles S., and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation of Political
Beliefs.” American Journal of Political Science. 50:755-769.
Zaller, John. 1985. “Pre-Testing Information Items on the 1986 NES Pilot Survey.” Report to the
National Election Study Board of Overseers.
Table 1: Partisan Bias in Interviewer Ratings of Respondent Characteristics
____________________________________________________________________________________________________________
             Dem. Interviewers              Rep. Interviewers              No Interviewer    Interviewer      Interviewer and
                                                                           Controls          Controls         Respondent Controls
             Dem.   Rep.           p-      Dem.   Rep.           p-       Dif-in-  p-       Dif-in-  p-      Dif-in-  p-      Bootstrap
             Resp.  Resp.  Dif.    value   Resp.  Resp.  Dif.    value    Dif      value    Dif      value   Dif      value   p-value
Work Hard    0.65   0.63    0.02   0.65    0.52   0.52    0.00   0.95     0.02     0.73     0.05     0.39    0.04     0.45    0.44
Understand   0.77   0.82   -0.05   0.02    0.80   0.85   -0.05   0.15     0.00     0.95     0.01     0.80    0.02     0.59    0.62
Answers      0.79   0.84   -0.05   0.19    0.80   0.83   -0.03   0.24    -0.02     0.63    -0.01     0.75    0.00     0.97    0.94
Intelligent  0.68   0.71   -0.03   0.21    0.59   0.64   -0.05   0.19     0.02     0.63     0.02     0.66    0.03     0.49    0.49
____________________________________________________________________________________________________________
Table 2: Partisan Bias in Interviewer Ratings of the Survey Experience
____________________________________________________________________________________________________________
             Dem. Interviewers              Rep. Interviewers              No Interviewer    Interviewer      Interviewer and
                                                                           Controls          Controls         Respondent Controls
             Dem.   Rep.           p-      Dem.   Rep.           p-       Dif-in-  p-       Dif-in-  p-      Dif-in-  p-      Bootstrap
             Resp.  Resp.  Dif.    value   Resp.  Resp.  Dif.    value    Dif      value    Dif      value   Dif      value   p-value
Interested   0.64   0.63    0.01   0.81    0.65   0.72   -0.07   0.01     0.08     0.04     0.09     0.01    0.08     0.01    <0.01
Concerned    1.00   0.97    0.02   0.05    0.99   0.99    0.00   0.96     0.02     0.12     0.03     0.08    0.03     0.08    0.07
Cooperative  0.91   0.90    0.02   0.50    0.77   0.82   -0.06   0.07     0.07     0.04     0.06     0.03    0.07     0.03    <0.01
Enjoy        0.62   0.62    0.00   0.91    0.64   0.69   -0.05   0.04     0.06     0.13     0.07     0.02    0.07     0.02    0.02
Reluctant    0.93   0.93    0.01   0.64    0.96   0.96    0.00   0.76     0.01     0.56     0.00     0.93    0.01     0.73    0.75
Suspicious   1.00   0.99    0.01   0.31    0.98   0.98    0.01   0.64     0.00     0.80     0.01     0.60    0.01     0.58    0.64
Worried      0.98   0.97    0.00   0.93    0.98   1.00   -0.02   0.03     0.02     0.27     0.02     0.20    0.02     0.20    0.12
____________________________________________________________________________________________________________
Table 3: Partisan Bias in Survey Responses of Bush Approval
____________________________________________________________________________________________________________
                                Dem. Interviewers              Rep. Interviewers              No Interviewer    Interviewer      Interviewer and
                                                                                              Controls          Controls         Respondent Controls
                                Dem.   Rep.           p-      Dem.   Rep.           p-       Dif-in-  p-       Dif-in-  p-      Dif-in-  p-      Bootstrap
                                Resp.  Resp.  Dif.    value   Resp.  Resp.  Dif.    value    Dif      value    Dif      value   Dif      value   p-value
Bush Approval (3-pt)            0.12   0.66   -0.54   0.00    0.14   0.65   -0.51   0.00    -0.03     0.45    -0.02     0.57   -0.03     0.50    0.61
Bush Approval (cont)            0.14   0.64   -0.50   0.00    0.16   0.62   -0.46   0.00    -0.04     0.20    -0.04     0.28   -0.04     0.22    0.34
Bush Economy Approval (3-pt)    0.24   0.85   -0.61   0.00    0.22   0.76   -0.54   0.00    -0.07     0.26    -0.06     0.33   -0.06     0.33    0.39
Bush Economy Approval (cont)    0.26   0.79   -0.53   0.00    0.24   0.73   -0.49   0.00    -0.03     0.43    -0.03     0.55   -0.02     0.55    0.59
Bush For. Rel. Approval (3-pt)  0.17   0.59   -0.42   0.00    0.12   0.62   -0.50   0.00     0.09     0.30     0.10     0.24    0.10     0.27    0.24
Bush For. Rel. Approval (cont)  0.17   0.57   -0.39   0.00    0.15   0.59   -0.45   0.00     0.05     0.44     0.06     0.37    0.06     0.40    0.37
Bush Terrorism Approval (3-pt)  0.32   0.86   -0.54   0.00    0.28   0.81   -0.53   0.00    -0.01     0.84    -0.01     0.84    0.00     0.94    0.96
Bush Terrorism Approval (cont)  0.32   0.81   -0.49   0.00    0.30   0.80   -0.50   0.00     0.01     0.64     0.01     0.66    0.02     0.62    0.57
____________________________________________________________________________________________________________