Supplementary Materials

To get a better sense of the typical age in online samples, we conducted a simple survey.
We looked at four recent studies addressing the use of online data collection in
psychology (Germine et al., 2012; Crump, McDonnell, & Gureckis, 2013; Paolacci &
Chandler, 2014; Peer, Vosgerau, & Acquisti, 2014), as well as at all studies that cited these
articles and ran cognitive or perception experiments. This resulted in a total of 22 articles:
20 used MTurk samples, one used Open University, and one used the TestMyBrain
platform. It is telling that 4 of these articles did not report the subjects' age, suggesting
that age was not considered important. Of the remaining studies, most reported the
average age and 2 reported the median age. The average reported age across these samples
was 33.6 years (SD = 3.0), which matches the mean age of our own sample (33.7). This
suggests that most current online studies sample the entire population of subjects
available online, resulting in samples that are older than those in studies with
undergraduate students. However, age is generally not treated as a potential dimension of
interest: comparisons of online vs. lab results typically focus on average patterns of
effects, or on means, variances, and internal reliability (Germine et al., 2012; Crump et al.,
2013), and do not consider differential item functioning (DIF). While we could have
matched the samples, this was not our goal: our goal was to assess the extent to which the
VETCar functioned the same way in typical lab and online samples, which do differ in age.
Table 1. Reported mean (or median) age and platform for the 22 surveyed articles.

Year   Author        Mean age   Platform
2012   Mason         32.0       MTurk
2012   Germine       27.5       TestMyBrain
2013   Crump         NA         MTurk
2014   Scurich       31.0       MTurk
2014   Weissman      31.7       MTurk
2014   Hornsby       33.0       MTurk
2014   Prather       33.0       MTurk
2014   Peer          33.3       MTurk
2014   Verkoeijen    37.0       MTurk
2014   Rowell        41.0       Open University
2014   Mueller       NA         MTurk
2015   Gilbert       32.0       MTurk
2015   Rouse         32.3       MTurk
2015   Schley        33.0       MTurk
2015   Kleinberg     34.4       MTurk
2015   Jung          34.9       MTurk
2015   Liu           35.2       MTurk
2015   Pan           36.7       MTurk
2015   Mitra         NA         MTurk
2015   Hauser        NA         MTurk
2015   Otto          NA         MTurk
2015   Ward          NA         MTurk

Mean                 33.6
SD                    3.0
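
As a check, the summary statistics at the bottom of Table 1 can be reproduced from the
reported ages. The short Python sketch below is ours, not taken from any of the cited
studies; it excludes the NA entries and, as the match with the reported value suggests,
uses the sample (n - 1) standard deviation.

# A minimal sketch (not from any cited study) reproducing the Table 1 summary
# statistics from the reported ages; NA entries are excluded.
from statistics import mean, stdev

ages = [32.0, 27.5, 31.0, 31.7, 33.0, 33.0, 33.3, 37.0, 41.0,
        32.0, 32.3, 33.0, 34.4, 34.9, 35.2, 36.7]

print(f"mean = {mean(ages):.1f}")   # 33.6
print(f"SD   = {stdev(ages):.1f}")  # 3.0 (sample SD, n - 1 in the denominator)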
Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical
Turk as a tool for experimental behavioral research. PLoS ONE, 8(3), e57410.
Germine, L., Nakayama, K., Duchaine, B. C., Chabris, C. F., Chatterjee, G., & Wilmer, J. B.
(2012). Is the Web as good as the lab? Comparable performance from Web and lab in
cognitive/perceptual experiments. Psychonomic Bulletin & Review, 19(5), 847-857.
Gilbert, S. J. (2014). Strategic offloading of delayed intentions into the external environment. The
Quarterly Journal of Experimental Psychology, (ahead-of-print), 1-22.
Hauser, D. J., & Schwarz, N. (2015). Attentive Turkers: MTurk participants perform better on
online attention checks than do subject pool participants. Behavior Research Methods, 1-8.
Hornsby, A. N., & Love, B. C. (2014). Improved classification of mammograms following
idealized training. Journal of Applied Research in Memory and Cognition, 3(2), 72-76.
Jung, E. J., & Lee, S. (2015). The combined effects of relationship conflict and the relational self
on creativity. Organizational Behavior and Human Decision Processes, 130, 44-57.
Kleinberg, B., & Verschuere, B. (2015). Memory detection 2.0: The first web-based memory
detection test. PLoS ONE, 10(4).
Liu, A. S., Kallai, A. Y., Schunn, C. D., & Fiez, J. A. (2015). Using mental computation training
to improve complex mathematical performance. Instructional Science, 1-23.
Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon's Mechanical
Turk. Behavior Research Methods, 44(1), 1-23.
Mitra, T., Hutto, C. J., & Gilbert, E. (2015, April). Comparing Person- and Process-centric
Strategies for Obtaining Quality Data on Amazon Mechanical Turk. In Proceedings of
the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1345-1354).
Mueller, M. L., Dunlosky, J., Tauber, S. K., & Rhodes, M. G. (2014). The font-size effect on
judgments of learning: Does it exemplify fluency effects or reflect people’s beliefs about
memory? Journal of Memory and Language, 70, 1-12.
Otto, A. R., Skatova, A., Madlon-Kay, S., & Daw, N. D. (2014). Cognitive control predicts use of
model-based reinforcement learning. Journal of Cognitive Neuroscience, 27(2), 319-333.
Pan, S. C., Pashler, H., Potter, Z. E., & Rickard, T. C. (2015). Testing enhances learning across a
range of episodic memory abilities. Journal of Memory and Language, 83, 53-61.
Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a
Participant Pool. Current Directions in Psychological Science, 23(3), 184-188.
Peer, E., Vosgerau, J., & Acquisti, A. (2014). Reputation as a sufficient condition for data quality
on Amazon Mechanical Turk. Behavior Research Methods, 46(4), 1023-1031.
Prather, R. W. (2014). Numerical discrimination is mediated by neural coding
variation. Cognition, 133(3), 601-610.
Rouse, S. V. (2015). A reliability analysis of Mechanical Turk data. Computers in Human
Behavior, 43, 304-307.
Rowell, N. E., Green, A. J., Kaye, H., & Naish, P. (2015). Information Reduction—More than
meets the eye? Journal of Cognitive Psychology, 27(1), 89-113.
Schley, D. R., & DeKay, M. L. (2015). Cognitive accessibility in judgments of household energy
consumption. Journal of Environmental Psychology, 43, 30-41.
Scurich, N., & Shniderman, A. (2014). The Selective Allure of Neuroscientific Explanations.
PLoS ONE, 9(9), e107529.
Verkoeijen, P. P., & Bouwmeester, S. (2014). Is spacing really the “friend of
induction”? Frontiers in Psychology, 5.
Ward, E. J., & Scholl, B. J. (2015). Inattentional blindness reflects limitations on perception, not
memory: Evidence from repeated failures of awareness. Psychonomic Bulletin &
Review, 22(3), 722-727.
Weissman, D. H., Jiang, J., & Egner, T. (2014). Determinants of congruency sequence effects
without learning and memory confounds. Journal of Experimental Psychology: Human
Perception and Performance, 40(5), 2022-2037.