Traugott & Lavrakas, The Voter's Guide to Election Polls. 2nd ed.

Introduction
1. Horse race journalism—low-level use of polls
2. Push polls—campaign techniques disguised as polls
3. Public opinion—no common definition
a. do you have to make your opinion "public" (express it)?
b. or is it just the opinions of the public, collected (tabulated)?
c. or is it collected opinions on issues in the public arena?
4. "Founding fathers" of polling: Gallup, Roper, Crossley
5. News media organizations begin to do polling on their own
6. Standards set by the American Association for Public Opinion Research and the National Council on Public Polls (see Appendix A)

Chap. 1—What are Polls and Why are They Conducted?
1. Collecting data by asking questions of people
2. Reasons for polling
a. candidates: issues, candidate evaluation, vote intention
b. media: description, news, determine who to cover, project winner
c. academics: analysis and understanding
d. businesses: decisions on what to do next
3. Types of surveys and types of questions:
a. data collected in-person, by phone, by mail, or by internet
b. design: cross-sectional, longitudinal, or panel
c. types of questions: long, short; closed-ended, open-ended, or probe
4. Junk polls: respondents self-selected, interest group ads, push polls
5. Why not just count everyone?
a. usually prohibitively expensive because of size of population
b. can't find everyone, so it really isn't more accurate
c. takes too long
6. Who polls?
a. academics
b. businesses and other organizations (e.g., universities)
c. interest groups
d. media (news organizations, entertainment media)
e. governments, political candidates, Presidents, and parties

Chap. 2—What are Election Polls and How are They Conducted?
1. Types of political polls: baseline, trial heat, tracking, exit
2. Polling errors: non-random sample, quota sample, bad timing/volatility, finding "likely voters," the "undecideds," and "early voting"
3. The vote intention (factors affecting)
a. registered or not
b. how interested
c. voting history
d. knowledge/information about election
4. What do you do with the "undecided" voters in a survey (and the "no answer/refusals" and "apathetics")? (a short code sketch follows this list)
a. divide equally between candidates?
b. use party ID to decide, divide the rest equally?
c. give greater weight to the challenger rather than the incumbent?
d. ignore them; leave them out of the mathematical calculations?
e. use same percentages as for those who express a preference?
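
A minimal Python sketch, not from the book, of how rules (a) and (e) above change a reported trial-heat result; all percentages are made up:

    # Two common ways to allocate the "undecided" share in a trial-heat poll.
    def allocate_evenly(cand_a, cand_b, undecided):
        """Rule (a): split the undecided share equally between the candidates."""
        return cand_a + undecided / 2, cand_b + undecided / 2

    def allocate_proportionally(cand_a, cand_b, undecided):
        """Rule (e): give undecideds the same split as the decided respondents."""
        decided = cand_a + cand_b
        return (cand_a + undecided * cand_a / decided,
                cand_b + undecided * cand_b / decided)

    # Hypothetical trial heat: 46% vs. 42%, with 12% undecided.
    print(allocate_evenly(46, 42, 12))          # (52.0, 48.0)
    print(allocate_proportionally(46, 42, 12))  # roughly (52.3, 47.7)

Both rules total 100%, but the reported gap differs, so how undecideds were handled matters when comparing polls.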
5. Tracking polls and their problems (sketch below)
a. nightly polls, small Ns; they are combined to get a "rolling average"
b. seldom or never attempt to recontact those not at home the 1st time
c. samples may not be comparable (registered vs. likely voters); different results may occur and be confused with shifts in the public's opinion
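
A minimal Python sketch of the "rolling average" in (a), assuming equal weight for each night; the nightly figures are hypothetical:

    def rolling_average(nightly, window=3):
        """Average each night with up to window-1 preceding nights."""
        out = []
        for i in range(len(nightly)):
            chunk = nightly[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    nightly_support = [44, 48, 43, 47, 45]   # small-N nightly estimates
    print(rolling_average(nightly_support))  # [44.0, 46.0, 45.0, 46.0, 45.0]

Real tracking polls typically weight nights by sample size; this equal-weight version only shows why the published series looks smoother than the nightly numbers.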
6. Does learning about polling results affect the electorate?
a. an example of the "Hawthorne effect"
b. bandwagon effect vs. the underdog effect
7. Exit polls (sketch after this list)
a. based on face-to-face interviews outside of a polling place
b. both the precinct and the people voting are chosen at random
c. primary uses: to predict and explain election outcome (same day)
d. concern: does announcing an election result affect those who have not voted yet? What can be done about it, if anything?
e. sample size much higher than for a normal poll
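
One common way to implement (b), sketched in Python with hypothetical precinct names and skip interval: sample precincts at random, then take every k-th voter leaving each sampled precinct:

    import random

    def plan_exit_poll(precincts, n_precincts, skip_interval):
        chosen = random.sample(precincts, n_precincts)   # stage 1: random precincts
        plan = {}
        for p in chosen:
            start = random.randint(1, skip_interval)     # random starting voter
            plan[p] = f"every {skip_interval}th voter, starting with voter {start}"
        return plan

    precincts = [f"precinct_{i}" for i in range(1, 201)]
    print(plan_exit_poll(precincts, n_precincts=5, skip_interval=10))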

Chap. 3—How Do Candidates and Others Use Polling Data?
1. Presidential candidates use polls throughout the campaign(s)
a. issue research—assessing voter interest and preferences
b. issue research—how to communicate with the voter on issues
c. assessing a candidate's image with the voter and changing it for the better
d. to show potential donors that the candidate can win with their $upport
e. high poll numbers translate into more media coverage
2. Question wording—or just why are we polling anyway?
3. Polls that are not really polls—and worse
a. push polls
b. SUG: soliciting under the guise of polling (aka FRUG: fundraising under the guise of polling)
c. telemarketing (telling someone you are polling when you are selling)
d. canvassing—calling people to learn if they support your candidates
4. Who pays any attention to polls?
a. who doesn't??
b. politicians (candidates and incumbents)
c. political parties
d. news media
e. interest groups
f. some government agencies are required to use polling data; census
g. voters and the public generally

Chap. 4—How Do News Organizations Collect and Use Polling Data?
1. Why do media organizations conduct their own polls?
a. editorial control over question content and timing of interviews
b. polls shape their news decisions, story content
c. it is prestigious to be big enough to do your own poll
d. the advertising depts. have many phones not being used at night
e. cost-sharing has led to media units combining to do polling
f. rarely done in-house—usually contracted out to a polling firm
2. What are the standard methods of media polls?
a. short: usually 10 to 20 minutes, sometimes less
b. closed-end questions
c. option for call-back follow-up (non-anonymous then)
d. CATI (computer-assisted telephone interviews): on-screen data collection
e. CRAP (computerized response audience polls; 900-number call-in polls)
f. no call-backs when R cannot be interviewed on first attempt
3. How are the data analyzed? (example below)
a. marginals (look at the totals for each question)
b. cross-tabulation (look at two or more questions at once)
c. a third question serves as a "control"
d. news reports are "data rich but analysis poor": little more than marginals
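
A minimal Python sketch of (a) and (b), using a handful of hypothetical respondents:

    from collections import Counter

    respondents = [
        {"vote": "A", "gender": "F"}, {"vote": "B", "gender": "M"},
        {"vote": "A", "gender": "M"}, {"vote": "A", "gender": "F"},
        {"vote": "B", "gender": "F"}, {"vote": "B", "gender": "M"},
    ]

    # (a) marginals: totals for one question at a time
    print(Counter(r["vote"] for r in respondents))   # Counter({'A': 3, 'B': 3})

    # (b) cross-tabulation: two questions at once (vote intention by gender)
    print(Counter((r["gender"], r["vote"]) for r in respondents))

Adding a third key to the tuple (say, party ID) gives the "control" in (c): the same table recomputed within each category of the control variable.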

Chap. 5—Why Do Pollsters Use Samples?
1. What are the principles of sampling and sample design?
a. a sample is a portion of a population; a random sample is one where every member of the population has a known probability of being in the sample
b. target population: what you want to have information on
c. sampling frame: how you identify members of the target population
d. elements: the individual members of the population and the frame
2. Sampling for Election Polls
a. Target populations: eligible voters, registered voters, likely voters
b. Sampling frames: area samples and random digit dialing
i. area samples use states + census tracts + blocks + households + people
ii. RDD uses states + telephone exchanges + phone nos. + people
iii. typically excluded: Alaska, Hawaii, people without phones
iv. new problems: cell phones, computer lines
v. special populations: lists of members, classes, voter registration
c. weighting: how many voters at each sample point? how many phone lines?
d. non-probability: accidental or self-selected or snowball (p. 61)
3. How large does the sample need to be? (worked example below)
a. how large is the population about which you will draw inferences and how rarely does it occur in your sample frame? (often use filter questions)
b. what is the margin of error you wish to have when you report findings?
c. the larger the sample, the smaller the margin of error (see pp. 60, 149)
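
The standard 95% margin of error for a proportion under simple random sampling, evaluated at p = 0.5 (the worst case pollsters usually quote), makes (b) and (c) concrete; a quick Python check:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 1000, 1500):
        print(n, round(100 * margin_of_error(n), 1), "%")
    # 400 respondents -> about 4.9%, 1000 -> about 3.1%, 1500 -> about 2.5%

Quadrupling the sample only halves the margin of error, which is one reason many media polls stop around 1,000 to 1,500 respondents.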
4. Sample methods include (sketch below)
a. simple (or equal probability) random sample; uses random numbers
b. systematic random sample; uses a single random number, skip interval
c. stratified random sample: uses layers (strata) before drawing the sample
d. weighting: a priori to obtain needed numbers
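
A minimal Python sketch of designs (a)-(c), drawn from a hypothetical frame of registered voters:

    import random

    frame = [f"voter_{i}" for i in range(10_000)]   # hypothetical sampling frame

    # (a) simple random sample: every element has an equal chance
    simple = random.sample(frame, 500)

    # (b) systematic sample: one random start, then a fixed skip interval
    interval = len(frame) // 500
    start = random.randrange(interval)
    systematic = frame[start::interval]

    # (c) stratified sample: split the frame into strata, then sample within each
    strata = {"urban": frame[:6_000], "rural": frame[6_000:]}
    stratified = [v for members in strata.values()
                  for v in random.sample(members, 250)]

    print(len(simple), len(systematic), len(stratified))   # 500 500 500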
5. Sources of error (example after this list)
a. response rate (non-contact, refusals); replacement is required to reach N
b. sampling error; corrections applied a posteriori (e.g., weighting)
c. errors in sample frame (new houses, new numbers, disconnected numbers)
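
One common a-posteriori correction, sketched in Python with hypothetical shares: post-stratification weights that bring the sample back in line with known population benchmarks:

    # Respondents in under-represented groups count for more in the analysis.
    population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # known benchmarks
    sample_share     = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}  # what the poll got

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    print(weights)   # {'18-34': 1.5, '35-64': 1.0, '65+': 0.67} (approximately)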

Chap. 6—How Do Interviews Take Place? Advantages and Disadvantages?
1. By telephone, using CATI (computer has questions and answers); quality control possible (overseer present on line or interview taped); very fast, data ready instantly; can change interviewer to accommodate language of R
2. By mail (slow but cheap; often can ask more questions; self-administered; not sure who is answering/sampled; junk mail gets tossed, so do long polls); no interviewer interaction (race, sex, age) or danger; only one language likely
3. In-person, at home (expensive; can use visual aids/answer cards; cannot ask "too personal" questions; can ask many more questions and follow-ups; can collect some data w/o asking a question—by observation); uses middle-aged women as interviewers; race a factor in some polls as is language
4. In person, at polling place (exit polls; brief; self-administered)
5. How anonymous, confidential?
a. anonymous: researchers do not even know name of R; no follow-up
b. confidential: researchers know name but won't identify R; can follow up
c. therefore panel studies cannot be anonymous; must be able to call back

Chap. 7—How Are Questionnaires Put Together?
1. Purposes: describe, predict, explain
2. Closed-end questions have fixed alternative answer choices
a. two choices: yes-no
b. five choices: strongly agree to strongly disagree
c. pseudo-scales: feeling thermometer (0° to 100°)
d. but do the choices match what R wants to give as an answer?
e. suggests answers when R has never thought of the matter before
f. advantages: self-coding (esp. mail surveys); machine-readable
g. easier to administer (quicker, less coding error; ask more Qs)
3. Open-end questions ask R to answer in own words; probes
a. trade-off: more accurate, but less comparable to other Rs' answers
b. must be coded and converted to machine-readable data
c. less consistency in interviewer wording (probes)
4. Open or closed, wording of Qs makes a BIG difference in answers
a. balanced (symmetrical) vs. unbalanced (only imply other half of idea)
b. "hot button" words (e.g., abortion, drugs, death, rape)—emotions
c. changes in wording over time will affect distribution of answers
d. words people do not understand (jargon; mistaken for other words)
e. KISS (no double negatives, awkward construction)
f. double-barreled Qs: tap only one concept per question
g. introductory phrases provide information but likely tilt answers
h. fixed answer choices also must be carefully worded to avoid bias
i. anchors at end points add a sense of scope, may spread out answers
5. What to do about the Undecideds, Don't Knows, and No Answers
a. should there be such an explicit answer choice provided to R?
b. should there be a "filter" to omit some Rs from answering?
c. does R "not know" between two choices or all choices?
d. should interviewer press for an answer? can this be recorded?
e. is an answer meaningful even if it is the 1st time R has thought about it?
6. Question order—built-in source of bias ("order effect")
a. every question is asked in the context of every previous question
b. Rs will recall previous answers and this will affect future answers
c. "recall" of information versus "recognition" in the way Qs are asked
d. demographic questions: 1st and easy or last because they're personal?
e. pretest (e.g., split-half) to assess wording problems; rotate Q order

Chap. 8—How Do Media Organizations Analyze Polls?
1. Some special terms:
a. constants and variables, independent and dependent
b. categories, cell frequencies, marginal frequencies, and point estimates
c. controlling for one (or more) variables
2. Trends and the here and now (sketch below)
a. analyzing the present (one-shot, cross-sectional polls)
b. looking for trends, time 1 to present: serial surveys versus panel studies
c. question wording should be the same or close to the same
d. net change versus the components of change (who changed?)
e. what caused the change?
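
A minimal Python sketch of (d), using a hypothetical two-wave panel: net change can be zero even when many individual respondents switched sides.

    from collections import Counter

    wave1 = ["A", "A", "B", "B", "A", "B", "A", "B"]   # vote intention at time 1
    wave2 = ["A", "B", "B", "A", "A", "B", "B", "A"]   # same respondents at time 2

    net_change = wave2.count("A") - wave1.count("A")   # all a serial survey can show
    turnover = Counter(zip(wave1, wave2))              # components of change (who moved)

    print(net_change)   # 0: no net change for candidate A
    print(turnover)     # yet 4 of the 8 respondents changed their intention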

Chap. 9—How Can I Evaluate Published Poll Results?
1. Keep in mind these sources of bias:
a. who is the sponsor and what's their bias?
b. what organization conducted the poll; what's their bias?
c. how was the poll conducted and was it biased? (see f.)
d. what were the specific questions and the question order?
e. when was the poll conducted; what may have influenced R's answers?
f. did the poll follow industry standards and codes of ethics?
2. What were the technical methods of the poll?
a. sample method
b. sample size
c. response rate

Chap. 10—Common Problems and Complaints
1. Do polls measure "real" attitudes? How much measurement error?
2. Polls can be written to achieve the desired result—bias not always clear
3. Non-random surveys really are not surveys at all
a. push polls
b. call-in polls
c. automated phone polls (Computerized Response Audience Poll)
4. What if you are polled? Should you participate?
a. who is the sponsor and what's their bias and their purpose?
b. what organization is conducting the poll? are they legit?
c. how is the poll being conducted and is there a bias in that?
d. what are the specific questions and the question order?
e. is it anonymous, confidential, or don't you care?
f. is it really a survey at all or something disguised as a poll?
g. how long will it take and do you have the time (now)?
h. if you are sampled at random, you represent a LOT of people!
5. Some "polls" just want to find out how many people care
a. do you watch/like a TV show or not?
b. who reads this magazine and will you re-subscribe?
c. what kind of car will you buy next and when?

Epilogue
The public shares a responsibility for seeing that polls are done and used reasonably.
American Association for Public Opinion Research (AAPOR)
1. Requires good (scientific) design; avoid bias and misleading Qs and As
2. AAPOR will monitor and disclose unethical polling behavior
3. Has standards for disclosure of methods and analysis
National Council on Public Polls (NCPP)
1. Polls must disclose polling methods so the public may evaluate them
2. NCPP will not criticize any polling organization for its methods
3. NCPP may certify that a poll meets NCPP's Principles of Disclosure