Public Opinion and Survey Analysis
Module VII: Polls and Elections
Friday, July 8, 2010
David Crow, Associate Director
UC Riverside, Survey Research Center
david.crow@ucr.edu
Polls and Elections
(Asher, Chap. 7)
• Types of Polls
- Who sponsors the survey?
- What is the survey used for?
• Campaigns' Use of Polls to Influence Voters
- Jumping on the bandwagon?
• Media Coverage of Campaigns
– Off to the races → horse-race instead of substantive coverage
• Why Polls Go Wrong
• Effects of Polls on Elections
Types of Polls: Benchmark
• Benchmark Surveys: Initial poll to assess candidate
viability; serves as baseline against which to measure
progress
Asks about
1) Name recognition
2) Electoral strength vis-à-vis opponent
3) Support for Incumbent
Problems: typically done very early in a campaign, so
- people are unlikely to know much about the challenger
- economic and political circumstances could change
Typically leaked only if they show a candidate doing well
Types of Polls: Trial Heat
• Trial Heat Questions: Series of questions in a survey
that pair candidates in hypothetical races
→ E.g., "If elections were held today, would you vote for A or B?"
Problems:
1) Abet the "horse race" focus of media coverage
2) Circumstances change between the survey and election day
3) Is it a measure of candidate or party appeal? Not clear in the case of lesser-known candidates
4) Leaves out relevant alternatives → three two-way races (between A and B, B and C, and A and C) are not the same as one three-way race
5) The pairs chosen have a "framing effect" on the race
Types of Polls: Tracking Poll
• Tracking Poll: Polls undertaken daily close to election day → goal is to monitor trends (rises and falls) in candidate support
- Rolling sample: A new sample is drawn each day, so the reported trend is a "moving average" over several days → "smooths out" random fluctuation
Problems:
- extraordinary events could swing preferences wildly for one day, throwing off the average over several days
- sometimes volatile in their predictions → small samples can lead to wild fluctuations in, e.g., the distribution of party preferences
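The "moving average" idea behind a rolling sample can be sketched in a few lines. This is a minimal illustration, not any pollster's actual procedure; the daily support figures are invented.

```python
# Minimal sketch of a rolling ("moving") average over daily poll results.
from collections import deque

def rolling_average(daily_support, window=3):
    """Average each day's result with the previous `window - 1` days."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` days
    averages = []
    for value in daily_support:
        buf.append(value)
        averages.append(sum(buf) / len(buf))
    return averages

# One volatile day (28) is damped in the 3-day rolling series,
# but note it still pulls down the average for the next two days.
daily = [40, 42, 41, 28, 43, 42]
print(rolling_average(daily))
```

The same smoothing that absorbs one-day shocks is what makes the average lag behind a genuine, lasting shift in preferences.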
Types of Polls: Focus Groups
• Focus Group: in-depth interviews with small group of
people chosen to represent broad demographic
groups (e.g., “soccer moms”)
Used for: 1) debate preparation; 2) political marketing; 3) survey item development; 4) gauging public sentiment on issues → e.g., the 1988 Willie Horton ad
Problems: although they can be a valuable tool, the media often fail to recognize their methodological limits → not representative; "groupthink" → people's opinions in a group setting may differ from their opinions in isolation
Types of Polls: Exit Polls
• Exit Poll: Interviews of voters as they leave the
polling place;
advantage: we know people actually voted
Problems:
- Don't take absentee ballots into account
- Non-response bias → some groups are more willing to participate than others
- Social desirability
- Do poll results in Eastern states affect outcomes and turnout in the West? → 1980 presidential election
Types of Polls: “Push” Polls
• Push Poll: Tendentious questions that feed
misleading or false information to voters in an
attempt to sway vote against a candidate
→ Propaganda disguised as scientific research
Protecting against "push polls": 1) ask for the name of the organization conducting the poll; 2) legitimate polls are longer; 3) the number of calls is far higher in a push poll
How Candidates Use Polls
• Gauge the viability of a candidacy:
commission private poll;
bad news might dissuade a candidate from running
• Get Campaign Donations:
news releases of good results as
proof of viability, possibility of winning
• Bad Results?:
cast aspersions on the credibility of polls; three common responses: 1) too early to tell; 2) attack pollsters and journalists; 3) counter the results with other polls. Also, question the methodology
• Manipulate Polls to Promote a Candidacy:
– Selectively leak results
– Tweak survey design to get desired results →
» "priming" voters to evaluate candidates on favorable traits
» Sample selection
» Supplying negative information on an opponent
Polls and Presidential Selection
• Ubiquity of polls in all phases of presidential campaigns → statewide polls in primary states, national polls, etc.
– Pollsters become part of the campaign team
– Strategy based on (or takes into account) poll results
• Do polls reflect the current state of public opinion, or do they sway vote choice?
- Reporting poll results creates "momentum" and feedback cycles
- Affects fund-raising ability
Polls and Presidential Elections
• Instant Polls: instantaneous or overnight polls → the media's focus on immediate reaction decreases polling quality
– Quality is proportional to the number of days in the field
– Proportional to the number of respondents
– Interviews only people available at a certain time
• Polls as Filters: third-party and lesser-known candidates can't become known, because they can't reach the threshold of vote preferences (15% in 1980, as set by the League of Women Voters) required to participate in debates, or are excluded outright; case of Ross Perot in 1992
– Other standards: national organization, newsworthiness, enthusiasm for candidacies (as demonstrated by polls)
– BUT should just anyone who wants to participate be able to? Debates should not be cluttered with frivolous candidates → who sets and enforces standards? The bipartisan commission is exclusive
Media Reporting of Polls
• Overemphasis on poll results
• Undue magnification of small shifts in preferences → these are random fluctuations, not true changes in preferences
• Subtleties of poll methodology go unreported
• Media focus on their own polls and ignore other pollsters → averages over polls are more accurate than any individual poll
Polls’ Effects on Voters
• Do polls mobilize or demobilize voters?
- “Bandwagon” effect: Polls rally voters behind candidate in lead,
discourage people from voting for trailing candidates
- “Underdog effect”: voters rally behind candidate who is behind
• Evidence:
- Experimental designs: poll preferences, expose voters to other poll results, then poll again → inconclusive: de Bock (1976) found evidence of discouragement; Marsh (1984), of an underdog effect
- Survey questions: ask voters about their exposure to polls, and directly whether polls influence their choice; problems → imperfect recall, and people are unwilling to admit they are influenced by polls
• Polls can also have indirect effects → influence on campaign contributions, media coverage, other attitudes
Polls Are Sometimes Wrong!
• Most Are Pretty Accurate: however, some notorious mistakes (and run-of-the-mill problems)
- Literary Digest, 1936 → self-selected sample predicted an Alf Landon (who?) victory over FDR
- 1948, Dewey vs. Truman → quota sampling (not probability sampling) and a premature cut-off to polling predicted a Dewey victory, despite a narrow lead (5%) and a downward trend
- 1980 Reagan victory → predicted a tight race instead of a 10% Reagan landslide; polling failed to pick up a last-minute surge in Reagan support among undecideds
- 1992 general election → conflicting poll results; criticism over sampling, the allocation of undecideds to candidates, and the shift from "registered" to "likely" voters
- 1996 Clinton victory → overestimated margin of victory; widely varying polls for the House
Why Polls Are Wrong: Timing
• The closer a poll is to the election, the more likely it is to be right
– Later polls capture the effects of last-minute events and campaign activities → Crespi (1988) found this was the most important factor, followed by margin of victory (wider margin → more accurate polls) and turnout (higher turnout → more accurate)
– Earlier polls are likelier to reflect candidates' name recognition
• Polls do better when they 1) identify likely voters well and 2) track voter preferences over time
• E.g., Bush Sr.'s win in the New Hampshire primaries:
– Most polls stopped too early
– Tracking polls (CBS) did better at predicting the margin of victory
• Media focus on their own polls and ignore other pollsters → averages over polls are more accurate than any individual poll
Why Polls Are Wrong: Undecideds
• Three reasons for an "undecided" response
– Voters genuinely cannot make up their minds → lack of information
– Unwilling to make a choice
– Unwilling to reveal their choice to the pollster
• Polls that do not account for undecideds may not yield the best predictions
– E.g., the Wilder race in Virginia (the "Wilder effect") and Dinkins in NY: "undecided" masked race effects in which whites who intended to vote for the white candidate (or were talking with a black interviewer) said they were "undecided"
• Methods for dealing with the "undecided" problem:
– "Secret ballot" method pioneered by Gallup: simulates a real election by giving voters a ballot; the interviewer does not see the choice (only used in face-to-face interviews)
– Ignore undecideds and tabulate results only for those who state a preference → problem: some undecideds will vote, and their choices may differ from those of respondents with stated preferences
– Allocate undecideds in proportion to stated preferences → not good when one candidate is better known than the others
– Allocate undecideds based on other factors, such as party ID or sociodemographics
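The proportional-allocation method can be sketched in a few lines. This is an illustrative sketch only; the candidate shares are invented.

```python
# Minimal sketch of allocating undecided respondents in proportion
# to the stated preferences of decided respondents. Figures are invented.

def allocate_undecideds(stated, undecided_share):
    """Distribute the undecided share proportionally across candidates.

    `stated` maps candidate -> share of the whole sample; the stated
    shares plus `undecided_share` should sum to 1.
    """
    decided_total = sum(stated.values())
    return {
        cand: share + undecided_share * (share / decided_total)
        for cand, share in stated.items()
    }

# 10% undecided, split in proportion to the 60/30 decided split.
print(allocate_undecideds({"A": 0.60, "B": 0.30}, 0.10))
```

Note how the method simply preserves the decided ratio, which is exactly why it fails when one candidate's higher name recognition has inflated that ratio.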
Why Polls Are Wrong: Turnout
• Likely Voter Problem: which respondents will really vote? "Social desirability" leads respondents to overstate their probability of voting; problem → the preferences of voters and non-voters are probably different
• Methods for assessing likely voters:
– Self-reported intention to vote
– Registration status
– Reported frequency of past voting
– Knows where the polling site is
– Interest in politics
– Interest in/awareness of the campaign
– Intensity of candidate preferences
• Estimates can be based only on those classified as "likely voters", or weighted by the predicted probability of voting
• Different polling agencies use different combinations of these
factors
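The two approaches just mentioned (screening to classified likely voters vs. weighting by predicted turnout probability) can be contrasted in a short sketch. The respondents, classifications, and probabilities are invented for illustration.

```python
# Minimal sketch contrasting two likely-voter corrections:
# (1) restrict the estimate to respondents classified as likely voters,
# (2) weight every respondent by a predicted probability of voting.

def screened_share(respondents, candidate):
    """Share for `candidate` among respondents flagged as likely voters."""
    likely = [r for r in respondents if r["likely"]]
    return sum(r["choice"] == candidate for r in likely) / len(likely)

def weighted_share(respondents, candidate):
    """Share for `candidate`, weighting each respondent by p(vote)."""
    num = sum(r["p_vote"] for r in respondents if r["choice"] == candidate)
    return num / sum(r["p_vote"] for r in respondents)

sample = [
    {"choice": "A", "likely": True,  "p_vote": 0.9},
    {"choice": "B", "likely": True,  "p_vote": 0.8},
    {"choice": "B", "likely": False, "p_vote": 0.4},
    {"choice": "A", "likely": False, "p_vote": 0.1},
]
print(screened_share(sample, "A"))  # 0.5 among the two likely voters
print(weighted_share(sample, "A"))
```

Screening throws away the lower-probability respondents entirely, while weighting keeps them with reduced influence, so the two methods can give different estimates from the same data.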
Why Polls Are Wrong: Changing
Economic and Political Environment
• Changing circumstances (economic crisis, campaign intensity, scandal) could motivate some citizens to vote who might otherwise not
• E.g., 1982 midterm election: polls underestimated Democratic turnout motivated by the economic recession → labor unions and ethnic advocacy groups had effective get-out-the-vote (GOTV) operations
• E.g., 1994 midterm: Republican "Contract with America" → polls underestimated the Republican victory; Clinton's campaigning had the reverse effect of mobilizing Republicans
Electoral Forecasts and Likely Voters
• One of the most prominent goals of pre-election polls is to forecast the electoral result.
• However, this is complicated because those interviewed are not the people who will vote, but voting-age adults
• That is, there is a kind of coverage error, because the sampling frame includes people who are not part of the population of interest
How to determine likely voters?
Self-reported probability of voting
• Ask directly about the intention to vote.
• The typical question is "How likely is it that you will vote in the next election?"
• Definitely
• Very likely
• Somewhat likely
• Not very likely
• Not at all likely
• 1) Respondents who answered "very likely" or below, for example, are dropped from the dataset, and vote intention is tabulated only for those who said "definitely"; OR
• 2) numeric values (arbitrary, or based on previous surveys) are assigned to the probabilities, and the data are weighted by those probabilities
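Option 2 above can be sketched by mapping each answer category to a numeric turnout probability and weighting vote intention by it. The category-to-probability mapping below is invented for illustration, not taken from any survey.

```python
# Hedged sketch: assign numeric turnout probabilities to self-reported
# likelihood categories and weight vote intention by them.
# The mapping is an invented example.
LIKELIHOOD = {"definitely": 0.90, "very likely": 0.70,
              "somewhat likely": 0.40, "not very likely": 0.15,
              "not at all likely": 0.02}

def weighted_shares(respondents):
    """`respondents`: list of (candidate, likelihood_category) pairs."""
    totals = {}
    for candidate, category in respondents:
        totals[candidate] = totals.get(candidate, 0.0) + LIKELIHOOD[category]
    grand = sum(totals.values())
    return {c: round(v / grand, 3) for c, v in totals.items()}

sample = [("A", "definitely"), ("B", "somewhat likely"),
          ("B", "very likely"), ("A", "not at all likely")]
print(weighted_shares(sample))
```

Because the mapping is arbitrary unless validated against previous surveys, different mappings can shift the estimated shares, which is the weakness the slide flags.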
How to determine likely voters?
Past electoral participation
• Ask whether the respondent voted in the last election (or in past elections)
• Different types of elections: federal, state, municipal, etc.
• A higher weight is assigned to those who voted — again, either arbitrarily or based on external information (post-election surveys, vote validation, etc.)
How to determine likely voters?
Attitudes toward politics
• Ask about:
• Interest in politics
• Attention paid to the electoral campaign
• Whether they talk about politics with other people
• Other kinds of political activity (participating in campaigns, contacting their representatives, etc.)
• Typical question: "How interested are you in politics?"
• A lot
• Somewhat
• A little
• Not at all
Gallup's Method
• Seven questions: six on the respondent's history of electoral participation and one on "how much thought have you given to the upcoming election"
• One point is assigned per question, yielding a scale from 0 to 7.
• Based on internal "vote validation" studies (in which Gallup staff consult local electoral records to verify that people who said they would vote actually voted — turnout is public information in the U.S.), it was determined that those scoring 7 on the scale had a 55% probability of voting.
• The sample is weighted by these probabilities.
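A Gallup-style scale can be sketched as summing seven binary items and mapping the score to a turnout probability. The probability table below is invented except for the 55% figure the slide reports for a score of 7; Gallup's actual validated mapping is not given here.

```python
# Hedged sketch of a Gallup-style likely-voter scale: sum seven 0/1 items
# into a 0-7 score, then look up an assumed turnout probability.
# All probabilities other than the 0.55 for a score of 7 are invented.
TURNOUT_PROB = {7: 0.55, 6: 0.45, 5: 0.35, 4: 0.25,
                3: 0.15, 2: 0.10, 1: 0.05, 0: 0.02}

def likely_voter_weight(answers):
    """`answers`: list of seven 0/1 item responses; returns a weight."""
    score = sum(answers)
    return TURNOUT_PROB[score]

print(likely_voter_weight([1, 1, 1, 1, 1, 1, 1]))  # top score -> 0.55
```

Each respondent's vote intention would then be weighted by this probability when tabulating candidate shares.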
Other Methods of Assessing the Probability of Voting
• Other surveys:
• Estimate the probability of voting from post-election surveys using multivariate logistic regression
• Use those probabilities to compute weights
• Apply the weights to a pre-election survey
• Weighting with population information:
• The likely-voter problem is one of differential non-response → those who will not vote answer the survey in a higher proportion than those who will
• A pre-election survey can be post-stratified:
• Define strata such as age, sex, race, state, etc.
• Use census data (or another source) to obtain the population proportions of those variables
• Adjust the sample proportions to match the population proportions
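The post-stratification step can be sketched as computing, for each stratum, the ratio of its population share to its sample share. The strata and all figures below are invented for illustration.

```python
# Minimal sketch of post-stratification: reweight sample cells so they
# match known population shares. Strata and figures are invented.

def poststratify(sample_counts, population_shares):
    """Return a weight per stratum: population share / sample share."""
    n = sum(sample_counts.values())
    return {s: population_shares[s] / (sample_counts[s] / n)
            for s in sample_counts}

# Young respondents are underrepresented (30% of the sample vs. an
# assumed 45% of the population), so they get a weight above 1.
weights = poststratify({"18-39": 300, "40+": 700},
                       {"18-39": 0.45, "40+": 0.55})
print(weights)
```

In practice the population shares would come from the census (or another external source), and the weights would multiply each respondent's contribution to the candidate-share estimates.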
Comparative Table of Likely-Voter Corrections

Candidate | IFE   | CD     | CD w/ Likely Voter | MPS   | MPS w/ Likely Voter + Weights | MPS w/ Likely Voter + ID | MPS w/ Interest Weights
FCH       | 36.7% | 29.4%  | 29.2%              | 31.4% | 36.7%                         | 34.0%                    | 35.3%
AMLO      | 36.1% | 27.4%  | 31.3%              | 37.1% | 33.4%                         | 37.6%                    | 35.6%
RMP       | 22.7% | 27.02% | 25.8%              | 26.2% | 24.9%                         | 27.4%                    | 26.7%
CD = Citizen Disenchantment (June 2006, N=509)
CD Likely Voter = only those who said they would "definitely" vote
MPS = Mexico 2006 Panel Study (May 2006, N=1,914)
MPS Likely Voter = only those who said they were "totally sure" they would vote
MPS ID = only those who had a valid voter ID
MPS Interest Weights = 2 for "a lot" of interest, 1.33 for "some", 0.67 for "a little", and 0 for "none"