Data Differences Across Panels: How Does Opinion Outpost Compare?

701 EAST TIMPANOGOS PKWY, BUILDING M
OREM, UT 84097
PHONE: 801-373-7735
In October 2008, AAPOR (the American Association for Public Opinion Research) commissioned a
panel of industry experts to develop recommendations for using online panels. Because internet panels
represent a convenience sampling frame, the commission cautioned that "there currently is no generally accepted
theoretical basis from which to claim that survey results using samples from non-probability online panels are
projectable to the general population. Thus, claims of 'representativeness' should be avoided when using these
sample sources" (2010, p. 5). Furthermore, the commission cautioned that "users of online panels should understand
that there are significant differences in the composition and practices of individual panels that can affect survey
results" (2010, p. 6).
Despite these concerns, the AAPOR commission noted instances where the use of online panels has provided
"results that are as accurate as, or more accurate than, some surveys using probability samples" (2010, p. 4). The
widespread, and growing, use of online panels among commercial researchers confirms the usefulness of
the methodology. Given the pitfalls of working with a convenience sample, how can researchers ensure the most
accurate results when using an online panel? AAPOR recommends that "researchers should choose the panels they
use carefully" (2010, p. 6).
One of the earliest criticisms of online panels was a lack of data consistency. Data was often inconsistent within
a single panel source and almost always inconsistent across distinct panel sources. While within-panel data
consistency has emerged as the minimum standard that separates quality panel providers from their second-tier
competitors, cross-panel data differences remain. Because panel building and maintenance are conducted with
proprietary techniques and technology, cross-panel data differences are more difficult to address. Understanding
cross-panel differences, and blending samples using that understanding, is becoming the new quality benchmark for
top-tier online sampling firms.
What causes within- and cross-panel data differences? In November 2008, Ron Gailey presented important findings
on the determinants of panel data differences at the Research Industry Summit in Chicago. In 2006 and 2007, as
head of research for Washington Mutual, Gailey completed 29 studies comprising 40,000 interviews using online panel
sample. Over two years, his results showed a 30% drop in demand for WaMu's financial products. Sales data
did not support the survey results, so he undertook an unprecedented investigation of his sampling frame. He
found that "in all cases…changes in demand were explained by increasing respondent tenure" (2008). More specifically,
new panelists offered higher scores than their more tenured counterparts.
Following up on Gailey's research, Steve Gittelman and Elaine Trimarchi of Mktg Inc. discovered that tenure is a proxy for
panelist activity. In other words, panel membership conditions panelists as they repeatedly participate in the survey
research process (2009). More specifically, a more active panelist provides consistently lower scores than a less active
panelist. Furthermore, less active panelists attrite at a higher rate than more active panelists. This means that, as a panel
matures, the remaining panelists are those most interested in the survey research process, and the sampling frame looks
less like the underlying population with respect to panelist activity.
Gittelman and Trimarchi identified three variables that measure panelist activity: frequency of survey-taking,
number of panel memberships, and tenure. Understanding, and controlling, the distribution of these three variables
over time is what leads to consistent online data. This control can be exercised within a source by carefully designing
the panelist recruitment program (and following through with ongoing investment), or across sources by quota
sampling, as sketched below.
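To make this concrete, here is a minimal sketch of quota sampling on the three activity variables. The field names, bucket cut-offs, and target shares are hypothetical assumptions for illustration, not any panel's actual scheme.

```python
# A minimal sketch of quota sampling on the three panelist-activity
# variables. Field names, bucket cut-offs, and target shares are
# illustrative assumptions, not any panel's actual scheme.
import random
from collections import defaultdict

def activity_bucket(p):
    """Classify a panelist as low/medium/high activity using three proxies:
    survey-taking frequency, number of panel memberships, and tenure."""
    score = (p["surveys_per_month"] > 4) \
          + (p["panel_memberships"] > 2) \
          + (p["tenure_days"] > 365)
    return ("low", "medium", "high")[min(score, 2)]

def quota_sample(panelists, n, targets):
    """Draw roughly n panelists so each activity bucket hits its target share."""
    by_bucket = defaultdict(list)
    for p in panelists:
        by_bucket[activity_bucket(p)].append(p)
    sample = []
    for bucket, share in targets.items():
        k = min(round(n * share), len(by_bucket[bucket]))
        sample.extend(random.sample(by_bucket[bucket], k))
    return sample

# e.g., hold the activity mix constant from wave to wave:
# quota_sample(frame, 1000, {"low": 0.5, "medium": 0.3, "high": 0.2})
```

Holding the bucket shares constant from wave to wave is what keeps the sample's activity profile, and therefore its scores, stable over time.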
Data Differences Across Panels: How Does Opinion Outpost Compare?
Figure 1, taken from our most recent audit report, compares the survey-taking behavior of Opinion Outpost
panelists with the average across the dozens of US panels Mktg Inc. has audited.

[Figure 1: Survey-taking behavior, Opinion Outpost panelists vs. the average across audited US panels]
Opinion Outpost panelists are, as measured by survey-taking behavior, clearly less conditioned than their
counterparts on other panels. What does this mean for your data? It means that, on average, scores (including
purchase intent, ad awareness, and other common metrics) will be higher using the Opinion Outpost panel than
other panels. To put these findings in context, it is worth repeating the findings of the AAPOR task force: because
panel sampling is not based in the formal theories of probability sampling, we cannot tell which type of panelist is
more representative of the broader population. However, there are reasons to believe less conditioned panelists are
better than highly conditioned panelists.

Simply put, panelists participate in survey research more than members of the general population. Less conditioned
panelists will be more representative on this behavioral trait than conditioned panelists. We also believe that, by
respecting the time of our panelists and offering a reasonable number of survey opportunities without spamming,
they will participate in your research with more care and thought than highly conditioned panelists.
These findings are important. Opinion Outpost has participated in much of Gittelman and Trimarchi's
research, becoming one of the first organizations to explore this phenomenon. We have also retained
Mktg Inc. to regularly audit our panels and, as a third party, measure and report on variables salient
to data consistency. Mktg Inc. has recognized Opinion Outpost with awards for data consistency, and its panel
audit reports provide the foundation for understanding data differences between Opinion Outpost and other panel
sources. Mktg Inc. audit reports for our panels are available upon request.
Understanding the cause of many data differences across panel sources, and even that there are data
differences, raises a number of practical questions:

If I need data to match other panel sources, can I request more conditioned panelists?
Yes, we will match a conditioning profile to your other sample sources. We profile our panels on common
conditioning variables and can sample based on those variables. Operationally, we recommend using
panelist tenure. Tenure is the easiest conditioning variable to implement because most panels maintain a join date
for each member, and join date yields a current measure of tenure that never needs to be updated.
Additionally, we will pass the panelist tenure variable in the survey URL, so you can quota sample without ever
asking an additional question, as the sketch below illustrates.
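For illustration, the snippet below reads a tenure value from a survey entry URL and checks it against tenure quotas. The parameter name "tenure", its units, and the bucket boundaries are assumptions for this sketch; confirm the actual variable name and format with your Account Manager.

```python
# Sketch: quota sampling on panelist tenure passed in the survey URL.
# The "tenure" parameter name and bucket boundaries are hypothetical.
from urllib.parse import urlparse, parse_qs

TARGETS = {"lt_6mo": 200, "6mo_2yr": 200, "gt_2yr": 100}  # target completes
fills = {bucket: 0 for bucket in TARGETS}

def tenure_bucket(days):
    if days < 180:
        return "lt_6mo"
    return "6mo_2yr" if days < 730 else "gt_2yr"

def admit(entry_url):
    """Return True if this respondent's tenure bucket still has quota open."""
    params = parse_qs(urlparse(entry_url).query)
    days = int(params["tenure"][0])  # tenure in days, supplied in the URL
    bucket = tenure_bucket(days)
    if fills[bucket] >= TARGETS[bucket]:
        return False  # quota full: route to an over-quota page
    fills[bucket] += 1
    return True

# admit("https://survey.example.com/s?id=42&tenure=412")
```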
Are there panels that exhibit extreme data differences?
Yes. Poor operating practices cause some panels to suffer from pervasive data quality issues, which often
lead to large cross-panel data differences. Fortunately for panel buyers, a few firms are committed to
quality work. Assessing the efficacy of proprietary panel operating practices can be difficult; furthermore, company
history, price points, and marketing literature are unreliable signals of quality. We recommend looking for panels
whose quality is certified by multiple independent third-party programs such as MarketTools' TrueSample, Mktg Inc.'s
Consistent Track, Promedius, and others. These third-party certifications provide credible signals of quality. It is
also a good idea to test panel sources and monitor results over time.
Is reliability the best measure of data quality?
Producing consistent data requires a great deal of skill. Because claims of validity for panel-derived results cannot
be scientifically grounded, we support the continued use of reliability as a litmus test of data quality. Wherever
external benchmarks are available, however, we recommend supplementing data reliability with data weighting or
quota sampling on conditioning variables to ensure maximum accuracy, as in the sketch below.
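As a sketch of the weighting option: where an external benchmark supplies the target distribution of a conditioning variable, each respondent can be weighted by the ratio of the benchmark share to the sample share of their cell. The buckets and shares below are illustrative assumptions.

```python
# Minimal cell-weighting sketch: weight = benchmark share / sample share.
# Bucket labels and benchmark shares are illustrative assumptions.
def cell_weights(sample_buckets, benchmark_shares):
    """Return one weight per respondent, aligning the sample's bucket
    distribution with the external benchmark."""
    n = len(sample_buckets)
    sample_shares = {b: sample_buckets.count(b) / n for b in set(sample_buckets)}
    return [benchmark_shares[b] / sample_shares[b] for b in sample_buckets]

# A sample over-representing highly active panelists gets down-weighted:
weights = cell_weights(
    ["high", "high", "high", "medium", "low"],
    {"low": 0.5, "medium": 0.3, "high": 0.2},
)
# -> high ≈ 0.33, medium = 1.5, low = 2.5
```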
Does Opinion Outpost offer a discounted or free pilot project to assess differences across panels?
Yes. Your Account Manager will help design a testing and transition program for each tracking survey where you
bring Opinion Outpost in as a source.
Our goal is to help you follow AAPOR's recommendation to "choose carefully." Whether your research study
is online, on the phone, or via automated telephony, we want to ensure you receive valid and actionable results.
Please do not hesitate to contact your Account Manager with any questions about panel data differences or any other
sampling questions.
REFERENCES:

Baker, Reg, et al. "AAPOR Report on Online Panels." aapor.org, March 2010. Web. <http://aapor.org/AM/Template.cfm?Section=AAPOR_Committee_and_Task_Force_Reports&Template=/CM/ContentDisplay.cfm&ContentID=2223>. Accessed 21 July 2010.

Gittelman, Steve, and Elaine Trimarchi. "On the road to clarity: Differences between sample sources." Mktg Inc., 2009. Web. <http://surv.mktginc.com/mktgV2/pdf/Casro-The%20road%20to%20clarity%201-30-09.pdf>. Accessed 21 July 2010.

Gailey, Ron. "Sample factors that influence data quality." Presented at the Research Industry Summit, Chicago, 7 November 2008.