Looking at Presidential Polling After the Election

PRE-ELECTION POLLS IN 2012
Clyde Tucker
Outline
• Review of the Polls
• Issues in Pre-election Polls
• Electoral Choice in Survey Research
National Presidential Polls

Poll                      MoE   Obama   Romney   Spread
Final Result                    50.9    47.3     +3.6 D
RCP Average 10/31-11/5          48.8    48.1     +0.7 D
Politico/GWU              3.1   47      47       Tie
Rasmussen                 3.0   48      49       +1.0 R
IBD/TIPP                  3.7   50      49       +1.0 D
CNN/OR                    3.5   49      49       Tie
Gallup                    2.0   49      50       +1.0 R
ABC/Wash. Post            2.5   50      47       +3.0 D
Monmouth/Braun            2.6   48      48       Tie
NBC/Wall St J             2.6   48      47       +1.0 D
Pew Research              2.2   50      47       +3.0 D
State Presidential Polls vs. Final Results

             RCP Average                  Final Result
State   Obama   Romney   Spread     Obama   Romney   Spread
OH      50.0    47.1     +2.9 D     50.7    47.7     +3.0 D
FL      48.2    49.7     +1.5 R     50.0    49.1     +0.9 D
VA      48.0    47.7     +0.3 D     51.2    47.3     +3.9 D
NH      49.9    47.9     +2.0 D     52.0    46.4     +5.6 D
NC      46.2    49.2     +3.0 R     48.4    50.4     +2.0 R
MI      49.5    45.5     +4.0 D     54.2    44.7     +9.5 D
WI      50.4    46.2     +4.2 D     52.8    45.9     +6.9 D
PA      49.4    45.6     +3.8 D     52.0    46.6     +5.4 D
IA      48.7    46.3     +2.4 D     52.0    46.2     +5.8 D
CO      48.8    47.3     +1.5 D     51.9    46.1     +5.8 D
NV      50.2    47.4     +2.8 D     52.4    45.7     +6.7 D
2012 Senate Races: Final Results vs. RCP Averages

Race                    Final Result   Spread    RCP Avg.      Spread
VA: Kaine vs. Allen     52.5 / 47.5    +5.0 D    48.6 / 46.8   +1.8 D
MA: Warren vs. Brown    53.7 / 46.3    +7.4 D    50.0 / 47.0   +3.0 D
ND: Heitkamp vs. Berg   50.5 / 49.5    +1.0 D    43.3 / 49.0   +5.7 R
MT: Tester vs. Rehberg  48.7 / 44.8    +3.9 D    47.3 / 47.7   +0.4 R
Issues in Pre-election Polls
• Cell Phone Only Households
  – At least a third of households are cell only
  – Increases to about half when “cell mostly” households are included
  – Excluding cell-only households restricts access to young voters
    • 22% of adults in the Census are 18-29 years old
    • Only 6% of respondents are 18-29 in Pew landline samples
• Increasing Nonresponse
  – Over the last two decades, response rates in telephone surveys have been in rapid decline
  – Contributing to this decline have been the following:
    • A growing number of single-person households
    • Changing technologies: answering machines, caller ID, and the decline in the number of households with a landline
Steep Decline in Response Rates Over the Past 15 Years

                     1997   2000   2003   2006   2009   2012
Contact rate           90     77     79     73     72     62
Cooperation rate       43     40     34     31     21     14
Response rate          36     28     25     21     15      9

Rates calculated using AAPOR’s CON2, COOP3, and RR3. Rates are typical for Pew Research surveys conducted in each year. From “Reflections on Election Polling and Forecasting from Inside the Boiler Room,” prepared for the CNSTAT public seminar by Scott Keeter (www.people-press.org).
– Peter Miller, a former President of AAPOR, said trying to weight away nonresponse can create unknown errors
– Even if the demographics of telephone respondents are similar to the voting age population as a whole, there can still be substantial differential nonresponse
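A minimal sketch of the demographic weighting Miller is describing, with hypothetical age distributions (the 22% and 6% figures for 18-29 year olds echo the cell-phone bullets earlier; the other cells are invented). Post-stratification can match the population margins, but it cannot recover the opinions of the people who never answered:

```python
# Minimal post-stratification sketch: weight respondents so the
# sample's age distribution matches the population's. The age
# groups and most of the percentages below are hypothetical.

sample_share = {"18-29": 0.06, "30-49": 0.30, "50-64": 0.34, "65+": 0.30}
population_share = {"18-29": 0.22, "30-49": 0.34, "50-64": 0.26, "65+": 0.18}

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)  # e.g. 18-29 respondents get weight ~3.7

# The catch Miller points to: if the 18-29 respondents who do answer
# differ politically from the 18-29 nonrespondents, inflating their
# weight reproduces the margin but not the missing opinions.
```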
But Sizeable Differences in Civic and
Political Engagement
Pew
Research
Standard
survey
Governme
nt
surveys
In the past year…
%
%
Volunteered for an
organization
55
27
Contacted a public official
31
10
Talked with neighbors
weekly or more
58
41
From “Reflections on Election Polling and Forecasting from Inside the Boiler Room,” Prepared for the CNSTAT public
seminar by Scott Keeter.
www.people-press.org
10
– The biggest problem, of course, would come from differential nonresponse by vote choice
– There has been evidence of this in exit polls
– However, one saving grace may be that nonrespondents are less likely to vote
– This is probably true given nonrespondents’ lower levels of civic engagement
• Determining Likely Voters
  – Likely voter screens are usually based on answers to a series of questions, including political interest, past voting history, length of time in the community, and likelihood of voting
  – The likely voter screen is used either to exclude respondents with a low score or to assign a probability of voting to each respondent, often aided by information from voter validation studies (a stylized version is sketched after this list)
  – But what is the result?
  – In his recent presentation at NAS, Scott Keeter provided evidence that using a likely voter screen resulted in a difference between the candidates that was closer to the final result in the 2008 Presidential Election
  – Also, as we have already seen, the difference between the candidates among likely voters in the final 2012 Pew poll was very close to the actual result
  – Yet this is not always the case
  – Registered voter numbers should exhibit less variance than likely voter numbers, suggesting an evaluation exercise:
  – Begin with a baseline for both registered voters and likely voters shortly before the conventions
  – Track both series up until election day
  – Examine the variance of the likely voter series, determining which screener questions contribute most to the variance
  – Carry out this exercise over several elections
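To make the mechanics concrete, here is a stylized cutoff-style likely voter screen. The questions, point values, and cutoff are all hypothetical; no pollster's actual screen is being reproduced:

```python
# Stylized likely-voter screen: score each respondent on a few
# screener items and keep those above a cutoff. Questions, scoring,
# and the cutoff are all hypothetical.

def lv_score(r):
    score = 0
    score += {"high": 2, "medium": 1, "low": 0}[r["interest"]]
    score += 2 if r["voted_last_election"] else 0
    score += {"definitely": 2, "probably": 1, "no": 0}[r["intent"]]
    return score

respondents = [
    {"interest": "high", "voted_last_election": True,
     "intent": "definitely", "choice": "Obama"},
    {"interest": "low", "voted_last_election": False,
     "intent": "probably", "choice": "Romney"},
    {"interest": "medium", "voted_last_election": True,
     "intent": "definitely", "choice": "Romney"},
]

CUTOFF = 4
likely = [r for r in respondents if lv_score(r) >= CUTOFF]
print(len(likely), "of", len(respondents), "pass the screen")

# The probabilistic variant keeps everyone but weights each vote
# by an estimated turnout probability, e.g. lv_score(r) / 6.
```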
• Allocating Undecideds
  – There are essentially three ways to allocate the undecideds
    • Evenly divide them between the candidates (most often the two leading ones)
    • Assign undecideds proportionately, based on the proportions for each candidate among the decided voters
    • Assign each undecided voter based on the characteristics of that voter (models will differ depending on the analyst)
  – Again, the correct allocation is probably dependent on the specific election context; the first two rules are sketched below
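The first two allocation rules are simple arithmetic; a sketch with hypothetical toplines follows (the third, model-based rule would replace this arithmetic with per-voter predictions):

```python
# Allocating undecideds under the two simple rules, using
# hypothetical toplines: candidate A 48, candidate B 46, undecided 6.

a, b, undecided = 48.0, 46.0, 6.0

# Rule 1: even split between the two leading candidates.
even_a, even_b = a + undecided / 2, b + undecided / 2   # 51.0, 49.0

# Rule 2: proportional to each candidate's share of the decided vote.
prop_a = a + undecided * a / (a + b)                    # ~51.1
prop_b = b + undecided * b / (a + b)                    # ~48.9

print(even_a, even_b)
print(round(prop_a, 1), round(prop_b, 1))
```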
An Example from Nate Silver: Presidential Vote in Virginia

                        Obama   Romney
Polling Avg.            48.2    46.9
Adj. Polling Avg.       48.8    46.7
State Fundamentals      48.2    47.4
Now-cast                48.8    46.8
Overall Avg.            48.5    47.4
Projected Vote          50.7    48.7
Allocated Undecideds     2.2     1.3
• House effects
  – Political bias (can be incorporated at any stage)
  – Methodological sophistication
  – Funding
  – Mode
    • Interviewer conducted (dual frame or landline only)
    • Robo calls
    • Internet polls
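One common way to quantify a house effect, sketched here with invented margins: compare each pollster's average margin with the average across all polls from the same period:

```python
# House-effect sketch: a pollster's house effect is its average
# deviation from the all-poll average over the same period. The
# margins below (Obama minus Romney, in points) are invented.

polls = {
    "Pollster A": [+1.0, +2.0, +1.5],
    "Pollster B": [-2.0, -1.0, -1.5],
    "Pollster C": [+1.0, +0.5, +0.0],
}

overall = (sum(sum(m) for m in polls.values())
           / sum(len(m) for m in polls.values()))

for name, margins in polls.items():
    house = sum(margins) / len(margins) - overall
    print(f"{name}: house effect {house:+.1f}")

# A consistently positive or negative house effect can reflect any
# of the sources above: political lean, methodology, funding, or mode.
```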
The Tyranny of the Electoral Choice in Survey Research
• The importance of electoral choice
  – A very important behavior: the best-known product of survey research (the unemployment rate is probably in second place)
  – A gold standard is available
• The problem with electoral choice
  – Only one of many products of survey research
  – A binary choice
  – A constrained distribution
    • At least 40% (probably more now) can be initially assigned to each party
    • In a poll with 1,000 respondents, the margin of error (+/- 3%) covers a good part of the remaining difference between the candidates
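For concreteness, the arithmetic behind the +/- 3% figure, along with the often-overlooked point that the margin of error on the spread between two candidates is roughly twice that on a single candidate's share:

```python
import math

# 95% margin of error for a single proportion near 50% in a simple
# random sample of n = 1,000.
n, p = 1000, 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"MoE for one candidate's share: +/-{moe:.1%}")   # ~ +/-3.1%

# The spread between two candidates moves twice as fast as either
# share, so its margin of error is roughly double.
print(f"MoE for the spread: +/-{2 * moe:.1%}")          # ~ +/-6.2%
```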
An Interesting Alternative to a National Presidential Poll
• Using state-based models to predict the electoral vote margin
• Two methods offered by Nate Silver at the New York Times and Drew Linzer at Emory University
• Both correctly predicted the electoral vote margin, relying in part on averages of polls in individual states
• Silver’s method combines an overall measure of economic performance, state polls, and recent state electoral history to forecast each state race
• He then uses these estimates to determine the most likely electoral vote margin
• Although the economic measure and past state voting history have a great effect early in the campaign season, they decline in importance relative to the polls as the election draws near
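To make the shifting weights concrete, here is a toy blending rule; the weighting function is invented for illustration, since Silver's actual weights depend on poll volume and other factors. The example inputs are the Virginia adjusted polling average and state fundamentals from the table above:

```python
# Toy version of the polls-vs-fundamentals blend. The weighting
# function below is invented, not Silver's formula.

def blended_forecast(poll_avg, fundamentals, days_to_election):
    # Polls dominate near election day; fundamentals dominate early.
    poll_weight = 1.0 / (1.0 + days_to_election / 30.0)
    return poll_weight * poll_avg + (1 - poll_weight) * fundamentals

for days in (150, 60, 7):
    est = blended_forecast(poll_avg=48.8, fundamentals=48.2,
                           days_to_election=days)
    print(f"{days:3d} days out: {est:.2f}")
```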
• Linzer’s method starts with a prior based on a modified structural forecast and then relies only on the state polls and recent state electoral history, in a process he calls Dynamic Bayesian Forecasting
• This is a hierarchical model, incorporating random-walk priors, that borrows strength from trends in the frequently polled states to make estimates for states with little polling
• As in Silver’s model, electoral history takes a back seat to the polls as the election approaches
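A highly simplified illustration of the random-walk idea, reduced to a single state with no hierarchy: latent support is assumed to drift slowly from day to day, and each poll pulls the estimate toward the observed share in proportion to its precision. All numbers are invented, and this bare Kalman-style update is only a cartoon of Linzer's full model:

```python
# Bare-bones random-walk update for one state's latent support.
# Each day the latent mean keeps its value but gains variance
# (the random-walk prior); each poll then shrinks the estimate
# toward the observed share in proportion to its precision.

mu, var = 48.0, 4.0          # prior mean and variance (invented)
DRIFT_VAR = 0.05             # daily random-walk variance (invented)

# (day, observed_share, poll_variance) -- hypothetical polls
polls = [(1, 48.5, 2.5), (5, 49.0, 2.5), (20, 50.0, 2.5)]

day = 0
for poll_day, y, poll_var in polls:
    var += DRIFT_VAR * (poll_day - day)   # diffuse between polls
    day = poll_day
    gain = var / (var + poll_var)         # Kalman gain
    mu = mu + gain * (y - mu)
    var = (1 - gain) * var
    print(f"day {day:2d}: estimate {mu:.2f} (var {var:.2f})")
```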
• Both methods rely on simulations of model predictions of the electoral outcome to establish a probability distribution for determining the chance that each candidate has of winning
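In outline, that simulation step can look like the following sketch, which uses the RCP state averages from the table above as the state means; the error structure, the set of contested states, and the safe electoral vote count are simplifying assumptions, not either modeler's actual inputs (real models also include correlated national error):

```python
import random

# Minimal electoral-vote simulation: draw each state's Democratic
# margin from a normal distribution around a model estimate, tally
# electoral votes, and count wins over many runs.

STATES = {  # state: (electoral votes, mean D margin, std. dev.)
    "OH": (18, 2.9, 3.0),
    "FL": (29, -1.5, 3.0),
    "VA": (13, 0.3, 3.0),
    "CO": (9, 1.5, 3.0),
}
SAFE_DEM_EV = 263   # electoral votes assumed safe for the Democrat
SIMS = 10_000

wins = 0
ev_sum = 0
for _ in range(SIMS):
    ev = SAFE_DEM_EV + sum(
        votes for votes, mean, sd in STATES.values()
        if random.gauss(mean, sd) > 0
    )
    ev_sum += ev
    wins += ev >= 270

print(f"P(Dem win) = {wins / SIMS:.1%}, mean EV = {ev_sum / SIMS:.0f}")
```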