Dick Cheney? Didn’t He Shoot Somebody?
Measuring Political Knowledge in the 2008 American National Election Study
Michael D. Martinez
Stephen C. Craig
Department of Political Science
University of Florida
Gainesville, FL 32611-7325
Abstract: While many scholars have illustrated the importance of political knowledge in shaping
public opinion and vote choice, the measurement of political knowledge is still being refined.
Office recognition questions are a common component of general political knowledge measures,
and have appeared regularly in the American National Election Study time series. The 2008
ANES included redacted verbatim responses to the recognition questions, which allows us to
determine the extent of “partial knowledge” about some public officials. Our coding of these
responses reveals evidence of considerable “partial recognition,” which is positively associated
with other measures of political knowledge but not with higher levels of issue voting.
Prepared for presentation at the Annual Meetings of the American Political Science Association
(Washington, DC), September 2-5, 2010. © American Political Science Association. We are
grateful to Will Hicks for his research assistance on this project.
While many scholars have illustrated the importance that political knowledge plays in
shaping public opinion (Althaus 1996, 1998; Brewer 2003; Gibson and Caldeira 2009; Gilens
2001; Krosnick and Brannon 1993; Kuklinski et al. 2000; Sturgis 2003) and vote choice (Bartels
1996; Basinger and Lavine 2005; Goren 1997), the measurement of this key concept is still being
refined. Since Delli Carpini and Keeter’s (1996) analyses of the levels and correlates of political
knowledge in the American public, office recognition questions have been a common component
of general knowledge measures. These questions, which are intended to elicit basic awareness of
people in government, simply ask respondents if they know “what job or political office” is now
held by a few prominent leaders of the day. Questions similar to these that have appeared
regularly in the American National Election Study time series since 1986 show variation in
recognition rates across time and between political figures, but correct responses typically scale
well both with one another and with other measures of political knowledge (Delli Carpini and
Keeter 1993).
Nevertheless, these measures are not without criticism. Some scholars wonder whether
knowledge of officeholders and constitutional provisions is essential for citizens to effectively
discharge their democratic duties (Lupia 2006), or whether the “pop quiz” items embedded in the
ANES and other omnibus surveys truly reflect how knowledgeable citizens would appear to be if
given sufficient time and motivation to retrieve basic political facts stored in memory (Prior and
Lupia 2008). Others argue that open-ended recall questions pose very demanding tasks for
survey respondents, who often do much better at picking the names of prominent government
officials out of a lineup in a closed-ended format (Gibson and Caldeira 2009; Mondak 2001;
Mondak and Davis 2001) or at recognizing names that are supplied on a list or ballot (Parker
1981; Tedin and Murray 1979). Noting both (1) the possibility that many people may have
partial knowledge about some officeholders and (2) systematic variations in the propensity to
venture a guess, Mondak has strongly urged replacing open-ended questions with closed-ended
formats and discouraging (or at least not encouraging) “don’t know” responses. Alternatively,
defenders of the standard format maintain that the evidence establishing that “don’t knows”
conceal partial knowledge is weak and caution researchers that, if it “ain’t broke, don’t fix it”
(Sturgis, Allum, and Smith 2008, 100).
However, the coding of answers to these questions appears to be problematic as well.
Prompted by Gibson and Caldeira’s report of relatively higher awareness of William Rehnquist
in their own national survey, the ANES Principal Investigators and staff discovered a number of
issues that may have reduced the reliability and validity of the office recognition questions as
indicators of political knowledge. Their investigation revealed
• subtle differences between the instructions given to coders in 2000 and those given in 2004, with no record of any instructions prior to 2000;
• many interviewers did not follow instructions in 2004 to transcribe responses that they deemed either incorrect or only partially correct; and
• the codes used did not reflect answers that might be deemed correct but incomplete.
On this latter point, for example, responses in 2000 that identified Rehnquist as “Chief Justice of
the Supreme Court” were coded as correct – but nine people who said that he was “the Supreme
Court justice in charge” (or used similar language, omitting the words “Chief Justice”), along
with nearly 400 others who named him as a justice of the Supreme Court (without reference to
his role as Chief) or simply as a judge, were all coded as being no better informed than those
who might have identified Rehnquist (hypothetically) as Chairman of the Federal Reserve,
Mayor of Phoenix, or Canadian Prime Minister. Counting the “justice” answers as partially
correct, which they arguably were, would likely result in a more flattering characterization of the
public’s awareness about the Supreme Court and politics in general (Krosnick et al. 2008).
Prior to 2008, the codes released by ANES to the scholarly community indicated only
whether a given response was judged by the research staff to be a correct identification of the
office held, an incorrect identification, don’t know, or refused to answer. In early releases of the
2008 study, ANES did not provide codes for the office recognition variables but instead supplied
redacted transcriptions of actual responses to the open-ended questions regarding what offices
were held by Nancy Pelosi, Dick Cheney, Gordon Brown, and John Roberts. Thus, we can now
determine whether some “incorrect” responses might be concealing hints of political knowledge
that were previously hidden in the ANES staff coding. In this paper, we report our own coding
and analysis of those responses, show the extent of “partial knowledge” captured in them, and
assess whether “partial knowledge” answers are associated with other indicators and correlates of
political knowledge.
Coding
We begin with a typology of responses to the office recognition questions included in the
2008 American National Election Study. While any social science research methods text will
note that the virtue of open-ended survey questions is in allowing respondents to voice what they
think or know in their own words, that also presents a challenge for coding those responses into
categories that are amenable to comparisons and analysis. Coding schemes are based on the
judgment of the researchers and, as we have been reminded in this process, decisions about how
to classify particular cases are sometimes close calls. There is no doubt that our coding decisions
are subject to debate and criticism, just as much as those of ANES staff who labored to code the
office recognition (and other open-ended) questions in previous surveys. Nevertheless, we hope
that readers will find the explanation of our coding scheme to be clear enough to meet the
intersubjectivity standard and, further, that our publication of the codes for individual responses
will invite other scholars to replicate and improve upon our classification. 1
Post-election respondents were asked “what job or political office” was held by Nancy
Pelosi, Dick Cheney, Gordon Brown, and John Roberts, in that order. Those who confessed that
they didn’t know in response to any of the identification questions were probed with “Well, what’s
your best guess?” We coded each response into one of the following categories:
Clearly correct: These responses identify the exact leadership position held by the person,
though they may also include some additional extraneous correct or incorrect information
about him or her. Responses that identified Pelosi as Speaker of the U.S. House, Cheney as
Vice President, Brown as Prime Minister of Britain (or the United Kingdom or England),
and Roberts as Chief Justice were coded as clearly correct, even if the respondent provided
other comments that might slightly misplace the figure in the political spectrum. These
included handfuls of responses along the lines of Pelosi was “Speaker of the House over
the Senate,” Cheney was “supposed to be Vice President but he's really the President,” and
Brown likely held a position in “England, Tony Blair’s position – PM?”
Leadership: These responses do not cite the specific office held by the political figure, but
identify her or him as a political leader in the appropriate branch of government or as a
party leader. Examples include Pelosi as the person who “controls the floor of the House of
Representatives,” “Majority Leader of the House,” or “Chairman of the House”; Cheney as
“Second in Command,” “behind the President,” or “No. 2 Man”; Brown as “Head guy in
England,” “Head of government in the British Isles,” or “Prime Minister of London”; and
Roberts as “Head Supreme Court justice,” “Head of the Supreme Court,” or “Chief of Staff
of the Supreme Court.” In each of these cases, the respondent did not use the formal title of
the office, but conveyed some understanding of the figure’s role or place as a political
leader.
Accurate: These responses identify the branch or locale of the officeholder, but fail to
specifically note his or her leadership role. Examples include answers that pegged Pelosi as
a “Representative from California,” a “Democratic Congresswoman,” or “in Congress or
the Senate”; vaguely recalled Cheney (“didn't he shoot somebody? He's in the White
House”); noted that Brown was “Algo del parlamento (something in Parliament)” or a
“British elected official”; and noted that Roberts was a “Supreme Court guy” or “Supreme
Court judge.” All of these responses are technically accurate, but could be regarded as
incomplete in that they do not convey a sense that the person is a leading figure in
government.
Party only: These responses only name the party the figure belongs to or, in a few cases,
mention a political ally of the figure without any reference to an office that the person
holds. This category includes answers that identify Pelosi only as a “Democrat” or indicate
that “Right now I don't know, but she was working for the Obama campaign”; Cheney
only as a “Republican”; or Roberts as a “Republican” or “connected with the Republican
Party with Bush,” or “on the White House staff? advisor or some such thing?” Note that
the latter response contained some incorrect placements of Justice Roberts, but correctly
noted his connection to the Republican president who appointed him. There were no
respondents who identified Brown only by his association with the British Labour Party.
Ballpark: Answers here indicate some leadership role or policy with which the figure is
associated, but the specific office is lacking or misplaced and the response is technically
inaccurate. For example, responses that point to Pelosi as “Senate Majority Leader,”
“President of Congress,” “the leader of Congress,” “the Democratic head of something,” or
say that “she opposed the president (Bush) in just about everything, but cannot think of
what she did” might convey some recognition of her role, even if the details are inaccurate.
Similarly, in our judgment, individuals who recognized Cheney as playing some role in
foreign or defense policy (“Secretary of State,” “Secretary of Defense,” “oversees the
military,” “Cabinet”) without naming him as Vice President might have been conveying at
least a vague awareness of the role Cheney actually did play in the Bush administration;
and those who noted that Brown was “President of Great Britain,” “Parliament president,”
or “some kind of deal with another country?” might have some awareness of the British
Prime Minister.
Ballpark recognitions of John Roberts fell into three subcategories. The first of these
consisted of responses that identified Roberts as a judge, but not specifically as a Justice on
the Supreme Court (“maybe in the courts,” “District Judge,” “he is a federal Justice”). The
second set of answers identified him as being in the legal profession, but not as serving on
the bench (“lawyer,” “Attorney General”). The third set identified John Roberts as the
television journalist who shares the same name as the Chief Justice (“CNN reporter,”
“newscaster,” “TV guy,” “CBS news”). This third set of answers, of course, could be
regarded as correct and a reflection of poor question wording, though it is an empirical
question as to whether people who recognize John Roberts, the reporter, are as politically
aware on other measures as those who recognize John Roberts, the Chief Justice.
Incorrect: These responses clearly misidentify the office, geographic origin, or party of the
target figure without providing any additional correct or ballpark information. Examples
are identifications of Pelosi as “in the Senate,” “Secretary of State,” “Federal Reserve,”
“Governor of Alaska,” or “Republican”; Cheney as “Congressman,” “House Speaker,”
“Democrat,” “lobbyist,” or “military man”; Gordon Brown as “head of FEMA,” “Attorney
General,” “Supreme Court Judge,” or “Republican Senator, Oregon”; and John Roberts as
“Oral Roberts’ son,” “Governor,” “Prime Minister of Australia,” or “Evangelist”. 2
Don’t Know: These included stated “don’t know” responses, as well as non-specific
references (“big wig,” “someone important,” “politician,” “assistant to somebody,” “public
speaker”) unless other correct or incorrect information was also provided.
Refused and Missing: ANES interviewers noted that a few respondents were assisted by
another person present during the interview, which we regarded as “missing” because there
is no way for us to know whether or how the respondent might have responded without the
other person’s assistance. Wild codes 3 in the ANES data are also coded as missing.
Recognition rates
Not surprisingly, and consistent with prior studies, recognition rates vary by office,
personality, and tenure. Table 1 shows that the sitting Vice President was much more widely
identifiable than the other political leaders, with nearly three-fourths (73.4 percent) of the 2008
sample identifying Dick Cheney’s position correctly; oddly, this figure is down from the 84.5
percent who correctly identified his office in 2004. 4 Only about half as many people (37.7
percent) identified Nancy Pelosi as Speaker, and Gordon Brown was virtually a non-figure for
most Americans; just six percent of ANES respondents identified Brown as the British Prime
Minister (or an equivalent description), over half confessed they didn’t know him at all, and
another third provided some incorrect identification. Similarly, only 5.5 percent of respondents
said that John Roberts was the Chief Justice, half made no guess, and nearly a third reported
something that we categorized as clearly incorrect. Pelosi, who received a great deal of attention
as the first woman to serve as Speaker, was much more widely recognized than her predecessor,
Dennis Hastert, was in 2004, while the recently installed Chief Justice and British Prime Minister
were less well known than their predecessors who had just completed several years of service in
their respective positions. 5
Table 1 about here
Evidence of partial knowledge also varies across these four political figures. Roberts
appears to be more recognizable than the 5.5 percent “correct” identification rate would suggest
when we consider that 9.4 percent of respondents identified him as a member of the Supreme
Court (without specifically noting his leadership role), and several others pegged him as the
Court’s leader (without specifically referring to the office of Chief Justice). Thus, the total
number of respondents who recalled that Roberts had some affiliation with the U.S. Supreme
Court (15.4%) in 2008 was strikingly similar to the 15.1% of respondents who did so in a 2006
national survey (Gibson and Caldeira 2009, 434). A few other respondents identified Roberts as
a Republican, a judge (without reference to the Supreme Court), or a lawyer, and another handful
(0.8 percent) identified the television reporter by the same name.
Pelosi’s overall recognition rate would be greater than 50 percent if we include the
respondents who recognized her as a legislative or partisan leader (without mentioning the office
of Speaker; 4.0 percent), a legislator (without reference to her role as a leader in the institution;
7.4 percent), a Democrat (0.4 percent), or in a leadership role that is in the legislative “ballpark”
(1.7 percent). In contrast, taking partial knowledge into account would not do much to change
the high visibility rate for Vice President Cheney, or the low recognition rate for Prime Minister
Brown.
Validation of “partial recognition” as “partial knowledge”
Our main research question is whether this evidence of partial knowledge conveys
enough information to be usable in an overall measure of political knowledge in the 2008 ANES.
The approach used is to perform a series of convergent validity tests, in which we assess the
degree to which the office recognition response categories correspond to other measures of
political knowledge that are present in the survey. In particular, we expect that those who answer
each office recognition question “correctly” will be
H1: more likely to respond to that question without needing a probe.
H2: more likely to answer the other office recognition questions correctly. We compute the
mean number of “clearly correct” identifications of Vice President Cheney, Prime Minister
Brown, and Chief Justice Roberts for each category of response to the Pelosi recognition
question, and comparable figures for each category of response to the Cheney, Brown, and
Roberts questions. Scores in each instance range from 0 (no other correct identifications) to
3 (correct identifications for all other figures).
H3: less likely to give incorrect or “don’t know” answers, or to decline to give answers, to
other office recognition questions. We compute the mean number of clearly incorrect
responses, don’t knows, or refusals regarding the other three figures by each category of
response; this is nearly a mirror image of the variable described in H2, except that “partial”
knowledge responses are not counted as either correct or incorrect.
H4: more often rated as “very” or “fairly” high by interviewers in their “general level of
information about politics and public affairs.” These assessments were not contaminated
by the office recognition questions, as the former were obtained following the pre-election
wave and the latter during the post-election interviews;
H5,6: more likely to correctly identify the Democrats as the party that held a majority of
seats in the House of Representatives and in the Senate prior to the 2008 election;
H7: more likely to place Obama correctly relative to McCain on the liberal-conservative
scale (Obama more liberal), aid to blacks (Obama more supportive of government aid), and
the following “old form” issue scales: abortion (Obama more pro-choice), spending-services (Obama more supportive), health insurance (Obama more supportive of
government involvement), jobs-standard of living (Obama more supportive of government
guarantee), and environment-jobs (Obama more supportive of environmental protection).
This variable ranges from zero to seven based on the old form-issue scale questions asked
of half the sample; 6
H8: more likely to place Obama correctly relative to McCain on liberal-conservative scale
(Obama more liberal), aid to blacks (Obama more supportive of government aid), and on
the following “new form” issue scales: spending-services (Obama more supportive of
increases), and defense spending (Obama less supportive of increases). This variable
ranges from zero to four based on the new form-issue scale questions asked of the other
half of the sample. 7
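As a concrete illustration of the H2 computation, the sketch below (using hypothetical coded data, not the actual ANES file) groups respondents by their Pelosi response category and averages the number of other figures correctly identified:

```python
from collections import defaultdict

# Hypothetical coded responses: (pelosi_category, n_other_correct), where
# n_other_correct counts clearly correct identifications of Cheney, Brown,
# and Roberts (0-3). Illustrative values only, not ANES data.
coded = [
    ("clearly_correct", 3), ("clearly_correct", 2), ("clearly_correct", 3),
    ("leadership", 2), ("leadership", 1),
    ("accurate", 1), ("accurate", 0),
    ("incorrect", 1), ("incorrect", 0),
    ("dont_know", 0), ("dont_know", 0),
]

def mean_other_correct(rows):
    """Mean number of other correct identifications per response category (H2)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for category, n_correct in rows:
        sums[category] += n_correct
        counts[category] += 1
    return {cat: sums[cat] / counts[cat] for cat in sums}

means = mean_other_correct(coded)
```

The H3 variable is computed the same way, substituting the count of incorrect, don’t know, and refused responses for the count of correct ones.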
We see a consistent pattern in Table 2a showing that respondents who clearly identified
Nancy Pelosi as Speaker of the House had the highest scores on each of the other indicators of
political knowledge. They were, as predicted, most likely to answer the Pelosi identification
question without a probe, most likely to correctly identify Cheney, Brown, and Roberts (and
least likely to misidentify them or to refuse an answer), most likely to receive high political
information ratings from ANES interviewers, most likely to identify Democrats as the majority
party in both the House and Senate, and most likely to place Obama and McCain in correct
relative positions on issue scales. At the other end of the spectrum, those who pleaded “don’t
know” regarding Pelosi (or gave ambiguous, “someone important,” answers) generally scored
the lowest on other indicators of political knowledge, even lower than those who incorrectly
identified the Speaker.
Table 2 about here
In between, there is evidence that “partially correct” responses about Pelosi do reflect
“partial knowledge,” though this appears to be the case more for some manifestations of partial
knowledge than for others. People who identified Pelosi as a legislative or partisan “leader,” for
example, were more knowledgeable on seven of our eight indicators than were those who were
in the “ballpark” or who simply identified Pelosi “accurately” as a legislator with no reference to
leadership; in fact, the latter group was most similar to those who offered incorrect guesses about
Pelosi’s office. While there are too few “party only” identifications of Pelosi and “refusals” to
support separate analysis of those categories, we may safely conclude that the ability to clearly
and correctly identify Pelosi’s leadership position indicates a high level of knowledge – but that
some “partially correct” answers seem to convey more knowledge than incorrect or no answers.
As we saw in Table 1, a large majority of ANES respondents were able to correctly
identify Dick Cheney as the incumbent Vice President, and, not surprisingly, Table 2b shows
that they scored significantly higher on our other indicators of political knowledge than did those
who gave incorrect or “don’t know” replies to this relatively easy question. Unlike our analysis
of the Pelosi identifications, there was not much of a difference on other measures of knowledge
between those who guessed wrong about Cheney’s position and those who said “don’t know.”
The “don’t knows” scored higher on four of the eight indicators than did the clearly incorrect
respondents, lower on three others, with one tie. There is some indication that the one
subcategory of “partially correct” respondents with enough cases to analyze (“ballpark”
identifications of Cheney as in the Cabinet or having something to do with defense or foreign
policy) reflects some limited level of knowledge. By no means would we regard these
respondents as well-informed political junkies, as they were below average on all eight of the
other measures of knowledge, but the “ballpark” respondents did score higher, on average, for
seven of the indicators than those who gave clearly incorrect or no responses to this simple
question. Again, some “partially correct” responses seem to convey a certain amount of political
information, though the particular subcategories are different for the Pelosi and Cheney
variables.
The Gordon Brown identification question was a difficult one in 2008, as Brown was
identified as British Prime Minister by only 6.0% of the ANES respondents. Table 2c shows that
people who correctly answered this difficult question were much higher than average on all other
indicators of knowledge. As was the case with our analysis of responses to the Pelosi question,
however, those who offered incorrect guesses about Brown’s office scored generally higher than
those who said “don’t know,” though not as high as respondents who refused to answer. There
were too few “partially correct” identifications of Brown to analyze separately.
The John Roberts question also was relatively hard, as only 5.5% named him as “Chief
Justice.” Again, not surprisingly, those respondents also scored very high on our other indicators
of knowledge, as shown in Table 2d. As was the case with identifications of Brown, individuals
who provided incorrect answers were generally more knowledgeable on our other measures than
those who said “don’t know” or the equivalent, but not as knowledgeable as those who refused.
The one “partially correct” subcategory with enough cases to analyze (9.4% of the sample)
consists of respondents who identified Roberts as a member of the Supreme Court, but who
failed to note his leadership position. As a group, those individuals were well above average on
the other measures of political knowledge, but slightly below the people who explicitly stated
that Roberts was Chief Justice.
Our results thus far suggest that essentially the same manifestations of partial recognition
may reflect different overall levels of knowledge; in other words, there is no consistent pattern
across all categories of partial recognition that we defined for these four political figures in the
2008 ANES recognition battery. Knowing that Nancy Pelosi was a member of Congress, for
example, while technically accurate, does not appear to connote as much knowledge as
recognition of her role as a leader in Congress – nor, for that matter, as much as being able to
“accurately” identify John Roberts as a Supreme Court justice (but not Chief).
Keeping this inconsistency in mind, it would nonetheless be helpful to determine how
valuable partial recognition might be in constructing an overall measure of political knowledge.
We estimate this from a series of models in which the dependent variable is another measure of
knowledge, and the three independent variables are the number of clearly correct identifications
of the political figures, the number of partially correct identifications (whether as leader, accurate,
ballpark, or party only), and the number of incorrect identifications (all scores ranging from 0 to
4). Each coefficient reflects the knowledge level of a category relative to the excluded category
of no recognition (DK, blanks, and refusals). The ratio of the partially correct coefficient to the
clearly correct coefficient gives us some sense of the value of partial recognition relative to
clearly correct recognition. The four equations presented in Table 3 are
• an ordered logit model of the interviewer’s subjective rating of political information (ranging from very low to very high);
• a generalized linear model estimate (based on a Poisson distribution) of the number of correct answers as to which party held a majority in the U.S. House and Senate (ranging from 0 to 2);
• a generalized linear model estimate (based on a Poisson distribution) of the number of correct placements of Obama and McCain on the ANES scales (identified in H7 above) for the half-sample that heard the “old form” questions; and
• a generalized linear model estimate (based on a Poisson distribution) of the number of correct placements of Obama and McCain on the ANES scales (identified in H8 above) for the half-sample asked the “new form” questions.
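The three predictor counts entering these models can be built directly from the coded categories. A minimal sketch (category labels follow our coding scheme above; the example respondent is hypothetical):

```python
# Partial credit pools the leadership, accurate, ballpark, and party-only
# codes; DK, refusals, and missing form the excluded baseline category.
PARTIAL = {"leadership", "accurate", "ballpark", "party_only"}

def predictor_counts(categories):
    """Map one respondent's four coded answers (Pelosi, Cheney, Brown,
    Roberts) to counts of clearly correct, partially correct, and
    incorrect identifications, each ranging from 0 to 4."""
    n_correct = sum(c == "clearly_correct" for c in categories)
    n_partial = sum(c in PARTIAL for c in categories)
    n_incorrect = sum(c == "incorrect" for c in categories)
    return n_correct, n_partial, n_incorrect

# A hypothetical respondent who named Pelosi's office exactly, put Cheney
# "in the Cabinet" (ballpark), missed Brown, and called Roberts a Supreme
# Court judge (accurate) scores 1 correct, 2 partial, 1 incorrect:
counts = predictor_counts(["clearly_correct", "ballpark", "incorrect", "accurate"])
```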
In each of the model estimates, both clearly correct and partially correct coefficients are
significantly different than zero, indicating that clearly correct identifications and partially
correct identifications are more closely associated with higher levels of political knowledge than
completely absent identifications. As one would expect, the coefficients for clearly correct
identifications are greater than those for the partially correct; ratios of the latter to the former
vary from just over half to a little over three-fourths, centered at around 63%. Incorrect
identifications imply little additional political knowledge over (but not less knowledge than)
absent identifications. The associated coefficients for incorrect identifications are all positive and
are significant in two of the four models, but relatively small in all four.
Table 3 about here
Political Knowledge and Issue Voting
We also provide a construct validity test of the effects of partial recognition on issue
voting. Prior research leads us to expect that people with higher levels of political knowledge
will be more likely to align their vote choices with their issue preferences (Delli Carpini and
Keeter 1996; Jessee 2010), so the question is whether partial knowledge about these political
figures contributes to higher levels of issue voting. For each respondent, an “issue score” was
calculated as a principal components factor score of issue preferences, with higher values
representing preferences for less government spending and services (v083105), more defense
spending (v083112), greater reliance on the private sector for health insurance (v083119), less
dependence on government to ensure jobs and a good standard of living (v083128), greater self-reliance by blacks to help themselves (v083137), more concern for jobs over the environment
(v083154), easier access to guns (v083164), more government restrictions on abortion
(v085086), and not allowing gays and lesbians to adopt children (v083213). 8 We estimate a logit
model of presidential vote choice as a function of the issue score, the number of clearly correct
identifications, the number of partially correct identifications, the number of incorrect
identifications (scores ranging from 0 to 4 in each instance), and the interactions between issue
score and the identification variables. We expect that the coefficient for the issue score variable
will be positive and significant, which would show that people with conservative issue
preferences were more likely to vote for McCain. Also, if recognition of the political figures
connotes political knowledge that facilitates issue voting, coefficients on the interaction terms
should be positive and significant as well (indicating greater issue effects for those with more
recognitions).
The estimated model is shown in Table 4. As predicted, (a) the issue score coefficient is
positive and significant, indicating that even among people with no recognition of the four
political figures, conservative issue positions were associated with a preference for McCain; and
(b) the effect of issues on voter choice is magnified by the number of clearly correct
identifications of Pelosi, Cheney, Brown, and Roberts, as the interaction coefficient for correct *
issue score is also positive and significant. The effect of partially correct recognitions on issue
voting, however, as reflected by the coefficient of that interaction term, is positive but not
significant; in other words, there is no reliable evidence that partial knowledge increases the
likelihood of issue voting over no recognition of these political figures. Incorrect identifications
also did not contribute to higher levels of issue voting, as the coefficient for that interaction term
is not reliably different than zero. Once again, the coefficient is positive but trivial; wrong
identifications are not associated with either a lower or higher probability of casting an issue
vote relative to no identification.
Table 4 about here
The relative magnitudes of these effects are better illustrated in Table 5, which shows the
predicted probabilities of someone with strong but not extreme conservative preferences (an
issue score of +1) voting for McCain, varying the number of clearly and partially correct
identifications, and assuming no incorrect identifications. Our hypothetical conservative voter
with no knowledge (partial or otherwise) of the four figures has a 67% probability of voting for
the Republican candidate; that probability increases to 93% for those who are fully aware of the
positions held by Pelosi, Cheney, Brown, and Roberts. The effects of partial recognition on issue
voting are considerably weaker, as the probability of a McCain vote by a hypothetical
conservative with no clearly correct and four partially correct identifications is only 72.7%,
marginally less than the comparable probability for a conservative with only one clearly correct
and no partially correct identifications (76.2%).
Table 5 about here
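The Table 5 entries can be reproduced by plugging an issue score of +1 and the recognition counts into the logit equation estimated in Table 4 and applying the inverse-logit transformation. A short sketch (using the rounded point estimates, so the last digit can differ slightly from the table):

```python
import math

# Point estimates from Table 4
CONST, B_ISSUE = -0.416, 1.123
B_CLEARLY, B_CLEARLY_X = 0.006, 0.453
B_PARTIAL, B_PARTIAL_X = -0.043, 0.110
B_INCORRECT, B_INCORRECT_X = -0.244, 0.075

def p_mccain(issue=1.0, clearly=0, partial=0, incorrect=0):
    """Predicted probability of a McCain vote given issue score and recognition counts."""
    logit = (CONST + B_ISSUE * issue
             + (B_CLEARLY + B_CLEARLY_X * issue) * clearly
             + (B_PARTIAL + B_PARTIAL_X * issue) * partial
             + (B_INCORRECT + B_INCORRECT_X * issue) * incorrect)
    return 1.0 / (1.0 + math.exp(-logit))

print(round(100 * p_mccain(), 1))            # 67.0: no recognitions
print(round(100 * p_mccain(clearly=1), 1))   # 76.2: one clearly correct
print(round(100 * p_mccain(clearly=4), 1))   # 92.7: four clearly correct
print(round(100 * p_mccain(partial=4), 1))   # ~72.6 from rounded coefficients (Table 5: 72.7)
```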
Discussion
Our analysis of the redacted open-ended responses to the 2008 office recognition
questions suggests that the strict coding apparently used by ANES in previous years may have
concealed considerable partial knowledge about some prominent political figures. In particular,
fewer people said that John Roberts was the Chief Justice of the Supreme Court than said he was
on the Supreme Court, and a fair number of people said something accurate about Nancy Pelosi,
the legislator, without specifically mentioning her title of Speaker of the House. These results
suggest that the portrait of Americans’ political knowledge about their leaders could be framed a
little more positively than has sometimes been the case in the past, but it is also important not to
exaggerate that conclusion. Depending on the political leader, different kinds of partial
identification seemed to correspond to different levels of political knowledge, but none of the
partial identification categories that we examined connoted the same level of political knowledge
as “correct” identifications of their formal leadership titles. In other words, not all manifestations
of partial recognition of these officeholders are equal, but they all are partial.
Moreover, with respect to our single construct validity test, we did not see any evidence
that those who were able to provide only partial recognition of these political leaders were
substantially more likely to engage in issue voting than people who could not recognize the
leaders at all. Correct identification of the officeholders’ formal leadership titles did correspond
to statistically greater congruence between issue preferences and voter choice than did no
identifications at all, but partial recognition of their political roles did not. We could imagine
other construct validity tests (see, for example, Mondak 2001), but these preliminary results
suggest that vague knowledge about the roles played by our political leaders does not correspond
to the ability to use other kinds of information in making political evaluations and decisions in
the same way that more precise knowledge does. Researchers who have previously used political
knowledge scales constructed in part from only “strictly correct” identifications of political
leaders can take some solace in this finding, but scholars who are interested in the development
and application of political knowledge scales should begin to investigate other processes where
partial knowledge may affect political choice.
This analysis reminds us that looking at what respondents actually say can sensitize us to
the limits of the validity of our measures. Like Art Linkletter's kids, survey respondents can say
the darnedest things: some apparently confused Nancy Pelosi with Sarah Palin, recalled that
Dick Cheney shot someone (without noting that he was also Vice President), and guessed that
Gordon Brown might be somehow related to the godfather of soul, James Brown.
Yet we also read hundreds of responses that could have been considered correct or partially
correct under a liberal coding scheme, and scores of others that we judged incorrect but may
have revealed fragments of political knowledge. Our ability to do this kind of analysis rested on
ANES’s decision to release the redacted texts of the actual responses to these questions, which
we hope will become a precedent for the future.
Table 1
Percentages of Responses to Office Recognition Questions

                       Nancy     Dick      Gordon    John
                       Pelosi    Cheney    Brown     Roberts
Clearly Correct          37.7      73.4       6.0       5.5
Leadership                4.0       0.1       0.3       0.5
Accurate                  7.4       0.7       0.3       9.4
Party Only                0.4       0.7       0.0       0.5
Ballpark                  1.7       1.8       0.1        na
Ballpark - judge           na        na        na       0.6
Ballpark - lawyer          na        na        na       0.8
Ballpark - reporter        na        na        na       0.8
Incorrect                16.2       8.2      36.1      29.2
DK                       31.6      14.7      53.6      49.5
Refused                   1.0       0.3       3.5       3.3
Weighted N             2098.4    2096.1    2096.1    2098.7
Table 2a
Other Indicators of Political Knowledge by Pelosi Recognition

              Wtd N   No probe  others  others  Int     House   Senate  Old     New
                                right   wrong   rating  maj     maj     scales  scales
Speaker         792    83.5%     1.25    1.48   78.1%   66.4%   53.6%    5.39    3.00
Leader           84    73.6%     1.03    1.72   72.5%   55.6%   48.8%    5.20    2.85
Accurate        155    19.4%     0.64    2.25   45.7%   23.4%   16.2%    3.02    2.28
Ballpark         37    50.8%     1.04    1.77   64.6%   31.9%   33.0%    4.72    2.77
Party only*       8     7.1%     0.08    2.39   13.1%   46.5%    9.5%    2.17    1.00
Incorrect       341    22.2%     0.65    2.23   40.9%   25.4%   26.1%    3.36    2.24
D.K./Blank      662     7.4%     0.50    2.43   30.0%   18.7%   16.1%    3.17    1.78
Refused*         20    68.1%     0.55    2.36   20.8%   21.5%   17.7%    1.73    0.85
All cases      2102    43.4%     0.85    1.99   53.1%   40.0%   33.5%    4.12    2.41

* Based on fewer than 30 cases (italicized in the original).
Table 2b
Other Indicators of Political Knowledge by Cheney Recognition

                 Wtd N   No probe  others  others  Int     House   Senate  Old     New
                                   right   wrong   rating  maj     maj     scales  scales
Vice President    1538    83.6%     0.66    2.04   63.1%   47.2%   39.7%    4.58    2.64
Leader*              2   100.0%     0.86    2.14   86.1%   86.1%    0.0%    5.17     NA
Accurate*           16    52.3%     0.00    2.83   17.3%    7.2%   11.5%    2.60    2.79
Ballpark            38    37.3%     0.17    2.46   50.6%   28.2%   26.8%    3.54    2.32
Party only*         15    46.2%     0.00    2.50   23.9%   29.5%   25.7%    4.37    1.34
Incorrect          173    37.6%     0.02    2.68   22.8%   17.8%   13.6%    2.49    1.98
D.K./Blank         308    12.3%     0.02    2.92   23.9%   20.0%   16.4%    2.90    1.57
Refused*             7    67.9%     0.00    2.70   19.7%   20.5%    9.8%    2.38    1.33
All cases         2102    67.9%     0.49    2.24   53.1%   40.0%   33.5%    4.12    2.41

* Based on fewer than 30 cases (italicized in the original).
Table 2c
Other Indicators of Political Knowledge by Brown Recognition

                 Wtd N   No probe  others  others  Int      House   Senate  Old     New
                                   right   wrong   rating   maj     maj     scales  scales
Prime Minister     127    79.9%     2.86    0.04    88.0%   80.5%   63.0%    6.31    3.18
Leader*              7    55.3%     1.68    0.00    28.8%   89.7%   64.8%    7.00    2.03
Accurate*            5     0.0%     1.65    0.28    50.8%   43.1%   43.1%    1.69    3.70
Ballpark*            2     0.0%     1.52    0.00   100.0%   12.3%   12.3%    5.70    3.00
Party only*          0       NA       NA      NA       NA      NA      NA      NA      NA
Incorrect          758    25.4%     1.17    1.59    56.2%   39.3%   34.2%    4.28    2.58
D.K./Blank        1124    17.5%     0.96    1.91    46.3%   34.6%   28.5%    3.80    2.16
Refused*            74    85.0%     1.49    1.41    64.6%   51.1%   44.7%    3.85    2.71
All cases         2102    26.6%     1.17    1.65    53.0%   39.8%   33.4%    4.12    2.41

* Based on fewer than 30 cases (italicized in the original).
Table 2d
Other Indicators of Political Knowledge by Roberts Recognition

                     Wtd N   No probe  others  others  Int     House   Senate  Old     New
                                       right   wrong   rating  maj     maj     scales  scales
Chief Justice          115    85.3%     2.93    0.02   89.1%   81.5%   68.3%    5.97    3.19
Leader*                 10    77.3%     1.74    0.26   68.4%   67.1%   48.8%    6.70    3.13
Accurate               196    75.7%     1.78    0.09   82.1%   72.2%   62.8%    5.78    3.16
Ballpark, attorney*     17    59.3%     1.54    0.23   56.8%   82.2%   43.9%    5.94    3.14
Ballpark, judge*        13    73.2%     1.40    0.51   78.5%   52.0%   49.1%    4.94    3.21
Ballpark, reporter*     17    66.7%     1.70    0.26   60.5%   66.0%   61.5%    2.89    3.02
Party only*             11    35.0%     0.29    1.21   34.5%   28.9%    6.5%    3.29    1.40
Incorrect              612    37.6%     1.10    1.65   53.5%   37.0%   30.6%    4.22    2.45
D.K./Blank            1039    17.1%     0.87    1.99   42.2%   29.0%   24.5%    3.55    2.09
Refused*                69    84.0%     1.41    1.48   63.3%   49.9%   43.0%    3.57    2.69
All cases             2102    35.9%     1.17    1.54   53.0%   40.0%   33.5%    4.12    2.41

* Based on fewer than 30 cases (italicized in the original).
Table 3
Measures of Political Knowledge as Functions of Office Recognition

Dependent variable:       Interviewer rating           Congress majority
Estimation:               Ordered logit                GLM (Poisson)
                          Coeff.   s.e.    sig.        Coeff.   s.e.    sig.
Intercept                 omitted                      -1.168   0.067   0.000
Clearly correct            1.078   0.049   0.000        0.498   0.026   0.000
Partially correct          0.688   0.081   0.000        0.307   0.048   0.000
Incorrect                  0.114   0.037   0.002        0.031   0.027   0.265
AIC                       5489.7                       4380.0
Number of cases             2068                         2083
Ratio (part | clearly)     63.9%                        61.6%

Dependent variable:       Old scales                   New scales
Estimation:               GLM (Poisson)                GLM (Poisson)
                          Coeff.   s.e.    sig.        Coeff.   s.e.    sig.
Intercept                  0.972   0.036   0.000        0.472   0.046   0.000
Clearly correct            0.280   0.016   0.000        0.222   0.020   0.000
Partially correct          0.149   0.028   0.000        0.165   0.038   0.000
Incorrect                  0.025   0.015   0.099        0.059   0.019   0.002
AIC                       4561.9                       3375.0
Number of cases             1038                         1045
Ratio (part | clearly)     53.3%                        74.2%
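The "Ratio (part | clearly)" rows in Table 3 are the partially correct coefficient expressed as a percentage of the clearly correct coefficient in the same model. Recomputing them from the rounded coefficients essentially reproduces the reported values (small discrepancies reflect coefficient rounding):

```python
# (clearly correct coeff., partially correct coeff., reported ratio in %) per Table 3 model
models = {
    "Interviewer rating": (1.078, 0.688, 63.9),
    "Congress majority":  (0.498, 0.307, 61.6),
    "Old scales":         (0.280, 0.149, 53.3),
    "New scales":         (0.222, 0.165, 74.2),
}
for name, (clearly, partial, reported) in models.items():
    ratio = 100 * partial / clearly
    print(f"{name}: {ratio:.1f}% (reported: {reported}%)")
```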
Table 4
Logit Model of Voter Choice by Issue Score, Office Recognition, and Interactions

                                     Coefficient   std. err.   sig.
Constant                                -0.416        0.195    0.033
Issue score                              1.123        0.275    0.000
Clearly correct identifications          0.006        0.107    0.952
Issue score * Clearly correct            0.453        0.160    0.005
Partially correct identifications       -0.043        0.197    0.827
Issue score * Partially correct          0.110        0.292    0.705
Incorrect identifications               -0.244        0.094    0.010
Issue score * Incorrect                  0.075        0.139    0.592
AIC                                     755.94
Number of cases                            760
Table 5
Predicted Probabilities of a Conservative Voting for McCain

Number                      Number Clearly Correct
Partially
Correct          0        1        2        3        4
0              67.0%    76.2%    83.6%    88.9%    92.7%
1              68.5%    77.4%    84.5%    89.6%      NA
2              69.9%    78.6%    85.3%      NA       NA
3              71.3%    79.7%      NA      NA       NA
4              72.7%      NA      NA      NA       NA
References
Althaus, Scott. 1996. "Opinion polls, information effects, and political equality: Exploring
ideological biases in collective opinion." Political Communication 13: 3-21.
___. 1998. "Information effects in collective preferences." American Political Science Review
92: 545-558.
Bartels, Larry M. 1996. "Uninformed votes: Information effects in presidential elections."
American Journal of Political Science 40: 194-230.
Basinger, Scott J., and Howard Lavine. 2005. "Ambivalence, information, and electoral choice."
American Political Science Review 99: 169-184.
Brewer, Paul R. 2003. "Values, political knowledge, and public opinion about gay rights: A
framing-based account." Public Opinion Quarterly 67: 173-201.
Delli Carpini, Michael X., and Scott Keeter. 1993. "Measuring political knowledge: Putting first
things first." American Journal of Political Science 37: 1179-1206.
___. 1996. What Americans know about politics and why it matters. New Haven: Yale
University Press.
Gibson, James L., and Gregory A. Caldeira. 2009. "Knowing the Supreme Court? A
reconsideration of public ignorance of the high court." Journal of Politics 71: 429-441.
Gilens, Martin. 2001. "Political ignorance and collective policy preferences." American Political
Science Review 95: 379-396.
Goren, Paul. 1997. "Political expertise and issue voting in presidential elections." Political
Research Quarterly 50: 387-412.
Jessee, Stephen A. 2010. "Partisan bias, political information and spatial voting in the 2008
presidential election." Journal of Politics 72: 327-340.
Krosnick, Jon A., and Laura A. Brannon. 1993. "The impact of the Gulf War on the ingredients
of presidential evaluations: Multidimensional effects of political involvement." American
Political Science Review 87: 963-975.
Krosnick, Jon A., Arthur Lupia, Matthew DeBell, and Darrell Donakowski. 2008. "Problems
with ANES questions measuring political knowledge." Accessed at
http://www.electionstudies.org/announce/newsltr/20080324PoliticalKnowledgeMemo.pdf.
Kuklinski, James H., Paul J. Quirk, Jennifer Jerit, David Schwieder, and Robert F. Rich. 2000.
"Misinformation and the currency of democratic citizenship." Journal of Politics 62: 790-816.
Lupia, Arthur. 2006. "How elitism undermines the study of voter competence." Critical Review
18: 217-232.
Mondak, Jeffery J. 2001. "Developing valid knowledge scales." American Journal of Political
Science 45: 224-238.
Mondak, Jeffery J., and Belinda Creel Davis. 2001. "Asked and answered: Knowledge levels
when we will not take 'don't know' for an answer." Political Behavior 23: 199-224.
Parker, Glenn R. 1981. "Interpreting candidate awareness in U.S. congressional elections."
Legislative Studies Quarterly 6: 219-233.
Prior, Markus, and Arthur Lupia. 2008. "Money, time, and political knowledge: Distinguishing
quick recall and political learning skills." American Journal of Political Science 52: 169-183.
Sturgis, Patrick. 2003. "Knowledge and collective preferences: A comparison of two approaches
to estimating the opinions of a better informed public." Sociological Methods & Research
31: 453-485.
Sturgis, Patrick, Nick Allum, and Patten Smith. 2008. "An experiment on the measurement of
political knowledge in surveys." Public Opinion Quarterly 72: 90-102.
Tedin, Kent L., and Richard W. Murray. 1979. "Public awareness of congressional
representatives: Recall versus recognition." American Politics Quarterly 7: 509-517.
Van Buuren, Stef, and Karin Oudshoorn. 1999. "Flexible multivariate imputation by MICE."
Accessed at http://web.inter.nl.net/users/S.van.Buuren/mi/docs/rapport99054.pdf.
Endnotes

1. Our codes are available in an Excel file at
http://www.clas.ufl.edu/users/martinez/apsa10/ANES2008Officerecs.xlsx.

2. A few respondents apparently recalled people with names similar to those of our four leaders,
including Michael Brown (dubbed “Brownie” by President Bush), who was Undersecretary of
Emergency Preparedness and Response (FEMA Director) from 2001 to 2005; Gordon Smith,
who served two terms (1997-2009) as a U.S. Senator from Oregon; and John Howard, who was
Australian Prime Minister from 1996 to 2007.

3. “Wild codes” refers to the few responses that we could not interpret, including some with only
single-digit numbers in the response field.

4. This comparison is based on ANES staff coding for 2004.

5. According to staff coding in the 2004 ANES, Hastert was named as Speaker by 9.3 percent of
the sample, Dick Cheney as Vice President by 84.5 percent, Tony Blair as British Prime Minister
by 62.5 percent, and William Rehnquist as Chief Justice by 27.9 percent.

6. The 2008 ANES included several question wording experiments and divided the sample into
randomly assigned “old form” and “new form” subsamples. Both “old form” and “new form”
respondents were asked to place the major party candidates on the liberal-conservative scale
(v083070a and v083070b) and on the aid to blacks scale (v083139a and v083139b). “Old form”
respondents were also asked to place the candidates on the abortion (v085089a and v085089b),
spending-services (v083107a and v083107b), health insurance (v083121a and v083121b),
jobs-standard of living (v083130a and v083130b), and environment-jobs (v083156a and
v083156b) scales.

7. “New form” respondents were asked to place the major party candidates on the
spending-services (v083110x and v083111x) and defense spending (v083117x and v083118x)
scales.

8. The analysis here is for “old form” respondents only, as most of these questions were asked of
about half the sample. Missing values were imputed using the MICE algorithm (Van Buuren and
Oudshoorn 1999), and the issue score is the mean factor score derived from the factor analyses of
five replicate datasets.