Methodology
July 2009 Political Survey/Media Update
Prepared by Princeton Survey Research Associates International for the Pew Research Center for the People and the Press
July 2009

SUMMARY

The July 2009 Political Survey, sponsored by the Pew Research Center for the People and the Press, obtained telephone interviews with a nationally representative sample of 1,506 adults living in the continental United States. The survey was conducted by Princeton Survey Research Associates International (PSRAI). The interviews were conducted in English by Princeton Data Source, LLC from July 22 to July 26, 2009. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ±3.3%. Details on the design, execution and analysis of the survey are discussed below.

DESIGN AND DATA COLLECTION PROCEDURES

Sample Design

A combination of landline and cellular random digit dial (RDD) samples was used to represent all adults in the continental United States who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications. Numbers for the landline sample were drawn with equal probabilities from active blocks (area code + exchange + two-digit block number) that contained three or more residential directory listings. The cellular sample was not list-assisted, but was drawn through systematic sampling from dedicated wireless 100-blocks and shared-service 100-blocks with no directory-listed landline numbers.

Contact Procedures

Interviews were conducted from July 22 to July 26, 2009. As many as 7 attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. When necessary, at least one daytime call was made to each phone number in an attempt to find someone at home. Interviewing was spread as evenly as possible across the five days in the field.

For the landline sample, half of the time interviewers first asked to speak with the youngest adult male currently at home. If no male was at home at the time of the call, interviewers asked to speak with the youngest adult female. For the other half of the contacts, interviewers first asked to speak with the youngest adult female currently at home. If no female was available, interviewers asked to speak with the youngest adult male at home. For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was an adult and in a safe place before administering the survey. Cellular sample respondents were offered a post-paid cash incentive for their participation.

WEIGHTING AND ANALYSIS

Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results. A two-stage weighting procedure was used to weight this dual-frame sample. A first-stage weight of 0.5 was applied to all dual-users to account for the fact that they were included in both sample frames.1 Respondents who were identified as either landline only or cell phone only were assigned a first-stage weight of 1. Respondents whose phone use status could not be determined were assigned their sample's average first-stage weight.

The first stage of weighting also included an adjustment for landline respondents that corrected for unequal probabilities of selection based on the number of adults in the household. An adjustment factor of one was assigned to landline respondents who live with no other adults. Those who live in a household with one other adult were assigned an adjustment factor of two, and those living in households with more than one other adult were assigned an adjustment factor of three.

1 Dual-users are defined as [a] landline respondents who either have a working cell phone or live with someone who has a cell phone, or [b] cell phone respondents who have at least one working telephone inside their home that is not a cell phone.
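To make the first-stage weighting concrete, the sketch below computes the weight described above from a respondent's sample frame, phone-use status, and household size. The function name, argument names, and category codes are assumptions made for this illustration, not PSRAI's production procedure.

```python
# Illustrative sketch of the first-stage weight described above.
# Names and category codes are assumed for this example only.

def first_stage_weight(frame, phone_status, other_adults=0):
    """Return the first-stage weight for one respondent, or None if the
    phone-use status is unknown (such cases later receive the sample's
    average first-stage weight).

    frame        -- "landline" or "cell" (which RDD frame the case came from)
    phone_status -- "dual", "landline_only", "cell_only", or "unknown"
    other_adults -- number of other adults in the household (landline cases)
    """
    if phone_status == "unknown":
        return None

    # Dual-users are covered by both frames, so they are down-weighted by 0.5;
    # landline-only and cell-only respondents keep a weight of 1.
    w = 0.5 if phone_status == "dual" else 1.0

    # Landline cases: correct for selecting one adult within the household.
    # No other adults -> factor 1, one other adult -> 2, two or more -> 3.
    if frame == "landline":
        w *= min(other_adults + 1, 3)

    return w

# Examples: a dual-user reached on a landline in a two-adult household,
# and a cell-only respondent reached on a cell phone.
print(first_stage_weight("landline", "dual", other_adults=1))  # 0.5 * 2 = 1.0
print(first_stage_weight("cell", "cell_only"))                 # 1.0
```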
The second stage of weighting balanced sample demographics to population parameters. The sample was balanced, by form, to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. The White, non-Hispanic subgroup was also balanced on age, education and region. The basic weighting parameters came from a special analysis of the Census Bureau's 2008 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States. The population density parameter came from Census 2000 data. The cell phone usage parameter came from an analysis of the July-December 2008 National Health Interview Survey.2

Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm (an illustrative sketch of this raking step follows Table 1). Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.

2 Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2008. National Center for Health Statistics. May 2009.

Table 1: Sample Demographics

                               Population    Unweighted    Weighted
                               Parameter     Sample        Sample
Gender
  Male                         48.4%         46.8%         48.4%
  Female                       51.6%         53.2%         51.6%

Age
  18-24                        12.6%          7.7%         12.8%
  25-34                        17.9%         10.3%         15.9%
  35-44                        18.8%         13.7%         18.2%
  45-54                        19.5%         19.8%         20.1%
  55-64                        14.8%         20.3%         15.1%
  65+                          16.4%         26.5%         16.6%

Education
  Less than HS Graduate        14.3%          5.8%         10.6%
  HS Graduate                  34.9%         29.3%         36.3%
  Some College                 23.9%         24.6%         23.9%
  College Graduate             26.9%         39.4%         28.2%

Race/Ethnicity
  White/not Hispanic           69.0%         77.7%         69.6%
  Black/not Hispanic           11.4%          8.2%         10.6%
  Hispanic                     13.5%          6.2%         11.4%
  Other/not Hispanic            6.1%          5.9%          6.0%

Region
  Northeast                    18.6%         18.5%         18.5%
  Midwest                      22.1%         26.3%         22.6%
  South                        36.7%         36.9%         37.6%
  West                         22.6%         18.3%         21.3%

County Pop. Density
  1 - Lowest                   20.1%         23.4%         20.3%
  2                            20.0%         23.4%         20.8%
  3                            20.1%         19.3%         20.4%
  4                            20.2%         18.7%         20.2%
  5 - Highest                  19.6%         15.2%         18.3%

Phone Use
  LLO (landline only)          13.6%         12.0%         13.4%
  Dual - few, some cell        49.7%         60.2%         48.9%
  Dual - most cell             15.9%         19.5%         15.9%
  CPO (cell phone only)        20.8%          7.6%         20.8%
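The Deming Algorithm used by Sample Balancing is a form of iterative proportional fitting ("raking"): weights are repeatedly scaled so that the weighted distribution of each variable matches its population target. The sketch below shows the general technique on two made-up margins; the data, variable names, and target shares are illustrative and are not the ASEC, Census, or NHIS parameters used for this survey.

```python
# Minimal raking (iterative proportional fitting) sketch, in the spirit of the
# Deming Algorithm described above. Data and targets are illustrative only.
import pandas as pd

def rake(df, base_weight, targets, max_iter=50, tol=1e-6):
    """Scale weights until weighted category shares match each target margin.

    df          -- respondent-level data with one column per raking variable
    base_weight -- column name of the starting (first-stage) weight
    targets     -- {variable: {category: target_share}}; every category in the
                   data should appear in its variable's target dictionary
    """
    w = df[base_weight].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for var, margin in targets.items():
            shares = w.groupby(df[var]).sum() / w.sum()    # current weighted shares
            factors = {cat: margin[cat] / shares[cat] for cat in margin}
            w = w * df[var].map(factors)                   # rescale each category
            max_change = max(max_change, max(abs(f - 1.0) for f in factors.values()))
        if max_change < tol:                               # converged on all margins
            break
    return w

# Illustrative usage with made-up respondents and targets.
df = pd.DataFrame({
    "sex": ["M", "F", "F", "M", "F"],
    "region": ["South", "West", "South", "Northeast", "Midwest"],
    "w1": [1.0, 0.5, 1.0, 2.0, 1.0],
})
targets = {
    "sex": {"M": 0.484, "F": 0.516},
    "region": {"Northeast": 0.186, "Midwest": 0.221, "South": 0.367, "West": 0.226},
}
df["w2"] = rake(df, "w1", targets)
```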
Effects of Sample Design on Statistical Inference

Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called "design effect" or deff represents the loss in statistical efficiency that results from systematic non-response. The total sample design effect for this survey is 1.71.

PSRAI calculates the composite design effect for a sample of size n, with each case having a weight w_i, as:

    deff = \frac{n \sum_{i=1}^{n} w_i^2}{\left( \sum_{i=1}^{n} w_i \right)^2}        (formula 1)

In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect (√deff). Thus, the formula for computing the 95% confidence interval around a percentage is:

    \hat{p} \pm 1.96 \sqrt{deff} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}        (formula 2)

where p̂ is the sample estimate and n is the unweighted number of sample cases in the group being considered.

The survey's margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample: the one around 50%. For example, the margin of error for the entire sample is ±3.3%. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 3.3 percentage points away from their true values in the population. The margin of error for estimates based on either survey form is ±4.6%. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
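As a worked example of formulas 1 and 2, the short sketch below computes deff from a set of weights and the design-corrected margin of error around an estimate. The weight values shown are made up for illustration; only the reported deff of 1.71 and n = 1,506 come from the survey itself.

```python
# Worked example of formula 1 (design effect) and formula 2 (adjusted 95% CI).
# The weights below are illustrative, not the survey's actual weights.
import math

def design_effect(weights):
    """deff = n * sum(w_i^2) / (sum(w_i))^2  -- formula 1."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

def margin_of_error(p_hat, n, deff, z=1.96):
    """Half-width of the design-corrected 95% CI around p_hat -- formula 2."""
    return z * math.sqrt(deff) * math.sqrt(p_hat * (1 - p_hat) / n)

weights = [0.6, 1.0, 1.4, 0.9, 2.1, 1.0]   # made-up analysis weights
print(round(design_effect(weights), 2))

# With the reported deff of 1.71 and n = 1,506, the largest margin of error
# (the one around a 50% estimate) works out to roughly 3.3 points:
print(round(100 * margin_of_error(0.5, 1506, 1.71), 1))   # ~3.3
```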
Callback Interview

On July 27, after the main interviewing was completed, we attempted to call back all survey respondents and ask them a short series of questions about breaking news that happened during the field period. Callback interviewing was conducted at Braun Research. A total of 480 respondents completed the callback interview. These cases were weighted separately to match the usual general population parameters. In addition, we also balanced the callback sample distributions of presidential approval (Q2) and party identification (PARTY and PARTYLN) to match the weighted distributions from the final sample. In the dataset this weight variable is called WEIGHT_CB.

RESPONSE RATE

Table 2 reports the disposition of all sampled telephone numbers ever dialed from the original telephone number samples. The response rate estimates the fraction of all eligible respondents in the sample that were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:3

  o Contact rate – the proportion of working numbers where a request for interview was made4
  o Cooperation rate – the proportion of contacted numbers where a consent for interview was at least initially obtained, versus those refused
  o Completion rate – the proportion of initially cooperating and eligible interviews that were completed

Thus the response rate for the landline sample was 17 percent. The response rate for the cellular sample was 20 percent. Table 3 reports the sample disposition for the callback survey. The response rate for the callback survey was 32 percent. A worked example that recomputes the landline response rate from the Table 2 counts follows Table 3.

3 PSRAI's disposition codes and reporting are consistent with the American Association for Public Opinion Research standards.

4 PSRAI assumes that 75 percent of cases that result in a constant disposition of "No answer" or "Busy" are actually not working numbers. For the callback survey we assumed that all of these numbers were actually working.

Table 2: Sample Disposition

                                             Landline      Cell
T     Total Numbers Dialed                   22493         6750

OF    Non-residential                         1705          138
OF    Computer/Fax                            1091            5
OF    Cell phone                                26            0
OF    Other not working                      10965         2627
UH    Additional projected not working        1076          217
      Working numbers                         7630         3763
      Working Rate                            33.9%        55.8%

UH    No Answer / Busy                         359           72
UONC  Voice Mail                              1724          733
UONC  Other Non-Contact                         39            5
      Contacted numbers                       5508         2953
      Contact Rate                            72.2%        78.5%

UOR   Callback                                 484          430
UOR   Refusal                                 3683         1772
      Cooperating numbers                     1341          751
      Cooperation Rate                        24.3%        25.4%

IN1   Language Barrier                         176          127
IN2   Child's cell phone                         0          242
      Eligible numbers                        1165          382
      Eligibility Rate                        86.9%        50.9%

R     Break-off                                 36            5
I     Completes                               1129          377
      Completion Rate                         96.9%        98.7%

      Response Rate                           17.0%        19.7%

Table 3: Callback Sample Disposition

T     Total Numbers Dialed                    1506

OF    Non-residential                            0
OF    Computer/Fax                               1
OF    Cell phone                                 0
OF    Other not working                          4
UH    Additional projected not working           0
      Working numbers                         1501
      Working Rate                            99.7%

UH    No Answer / Busy                         562
UONC  Voice Mail                               159
UONC  Other Non-Contact                          2
      Contacted numbers                        778
      Contact Rate                            51.8%

UOR   Callback                                 211
UOR   Refusal                                   87
      Cooperating numbers                      480
      Cooperation Rate                        61.7%

IN1   Language Barrier                           0
IN2   Child's cell phone                         0
      Eligible numbers                         480
      Eligibility Rate                       100.0%

R     Break-off                                  0
I     Completes                                480
      Completion Rate                        100.0%

      Response Rate                           32.0%
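The sketch below illustrates how the three component rates combine by recomputing the landline response rate from the Table 2 disposition counts. The helper function and its argument names are ours, introduced for this example.

```python
# Recompute the landline response rate in Table 2 as the product of the
# contact, cooperation, and completion rates. Counts are taken from Table 2;
# the helper function itself is illustrative, not PSRAI code.

def response_rate(working, contacted, cooperating, eligible, completes):
    contact_rate = contacted / working          # 5508 / 7630  = 72.2%
    cooperation_rate = cooperating / contacted  # 1341 / 5508  = 24.3%
    completion_rate = completes / eligible      # 1129 / 1165  = 96.9%
    return contact_rate * cooperation_rate * completion_rate

landline = response_rate(working=7630, contacted=5508, cooperating=1341,
                         eligible=1165, completes=1129)
print(f"{landline:.1%}")   # ~17.0%, matching the reported landline response rate
```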