Teaching of Psychology, 35: 45–50, 2008
Copyright © Taylor & Francis Group, LLC
ISSN: 0098-6283 print / 1532-8023 online
DOI: 10.1080/00986280701818516
Efficacy of Personal Response Systems (“Clickers”)
in Large, Introductory Psychology Classes
Beth Morling, Meghan McAuliffe, Lawrence Cohen, and Thomas M. DiLorenzo
University of Delaware
Four sections of introductory psychology participated in a
test of personal response systems (commonly called “clickers”). Two sections used clickers to answer multiple-choice
quiz questions for extra credit; 2 sections did not. Even
though we used clickers very minimally (mainly to administer quizzes and give immediate feedback in class), their use
had a small, positive effect on exam scores. On anonymous
course evaluations, students in 1 clicker section reported
that regular attendance was more important, but otherwise, students in clicker sections (compared to traditional
sections) did not report feeling significantly more engaged
during class. We suggest that future researchers might combine clicker technology with other, established pedagogical
techniques.
Personal hand-held responders, commonly called
“clickers,” are one of the latest trends in technology
for teaching (Beatty, 2004; Duncan, 2005). Clickers
are one potential tool for increasing interactive engagement with material, and courses that use interactive engagement show higher levels of concept learning
(Hake, 1998). The instructor poses a question to the
class (using Microsoft PowerPoint) and students respond with hand-held responders. After students have
responded to the question, the instructor displays a
histogram of the class’s responses. Anecdotal evidence
(based on faculty interest at teaching conferences) indicates a rising use of clickers on college campuses, but
do clickers help students learn? Do they help students
feel more engaged?
According to some researchers, students like clickers, and students also believe clickers make them feel
more engaged. For example, on course evaluations,
students at one university reported that clickers had
more benefits than downsides (Draper & Brown, 2004).
However, in most studies on clickers, researchers do
not compare clicker groups to nonclicker comparison
groups, so demand characteristics might explain the
findings. When asked, “how useful do you think the
handsets are?” (Draper & Brown, 2004) or if “clickers helped me learn” (Duncan, 2005), students might
overestimate the benefits of clickers, because such introspection is notoriously faulty
(Nisbett & Wilson, 1977).
In this study, we compared four large sections of introductory psychology (two instructors taught two sections each). For each of the two instructors, one section
used clickers and one section did not. The instructors
used the clickers to administer multiple-choice questions on the reading; to display histograms of the question results; and, when relevant, to correct widespread
misunderstandings. The primary dependent measures
were exam scores (within each instructor, exams in the
two sections contained identical items) and self-reports
of interest and engagement collected via anonymous
course evaluations at the end of the semester.
Method
Participants and Design
Participants were introductory psychology students
(N = 1,290) at the University of Delaware enrolled in
one of four large sections. Each section had approximately 320 students.
At our university, introductory psychology attracts
mostly first-year students (80%), but upper class students also enroll. First-year students are, in essence,
randomly assigned to introductory psychology sections.
During summer orientation sessions, entering students
make a list of courses they wish to take in the fall without regard to class time or professor. Then a computer program assigns them to a section of each course they requested. Although not purely random, such assignment of students to a section is more random than that of upper class students, who might have registered for specific sections based on their personal preferences. Because the nonsystematic assignment of first-year students provides good experimental control, we used only first-year students in our analyses of exam performance.

Table 1. Responses (Ms) to Engagement Items Added to Anonymous Online Course Evaluations

                                                           Dr. A                          Dr. B
Item                                             Traditional  Clicker    t      Traditional  Clicker    t
Percentage of students responding to evaluations     68%        70%                  77%       67%
I paid attention in class, stayed engaged^a           3.67       3.73    0.59         3.42      3.46    0.43
Regular attendance was important in this class^a      3.10       3.33    1.74+        2.79      3.08    2.52*
I enjoyed coming to class^a                           2.67       2.78    0.97         3.10      3.26    1.61
How often did you read before class?^b                2.91       3.12    1.87+        2.57      2.60    0.37
How many classes did you miss?^c                      3.71       3.94    1.49         3.51      3.77    1.70+

Note. The df for t values range from 432 to 486. All students (first-year and upper class students) are included in these Ms. +p < .10. *p < .01.
^a Item was answered on a 5-point scale anchored by 1 (strongly disagree) and 5 (strongly agree). ^b Item scale: 1 = never; 2 = once or twice; 3 = about half the time; 4 = about 75% of the time; 5 = every day. ^c Item scale: 1 = I missed one or more times a week; 2 = I missed about once a week; 3 = I missed about once every two weeks; 4 = I missed between 3 and 5 times total; 5 = I missed 0 to 2 classes total.
Clicker and Traditional Sections
Two of the sections (taught by Dr. A) met Monday,
Wednesday, and Friday for 50 min, at 9:05 a.m. and
10:10 a.m. The other two sections (taught by Dr.
B) met Tuesday and Thursday for 75 min, at 9:30
a.m. and 12:30 p.m. Both professors taught using an
interactive lecture style. Both professors taught the
earlier section without clickers (“traditional” sections)
and the later section with clickers. In clicker sections,
at the beginning of class, the instructor posted five
multiple-choice, fact-based questions, based on the
day’s required reading. Students earned extra credit
for answering these questions correctly. Later in the
class period, if relevant, the instructor would briefly
elaborate on a clicker question that most students had
misunderstood. Other than this change, instructors
taught the two sections identically.
Students in the nonclicker sections of the class were
able to obtain the same number of extra credit points
as those in the clicker sections. For extra credit, these
students could participate in an extra research study or
read a portion of a chapter of the textbook not assigned
on the syllabus.
Materials
We used radio-frequency clickers from eInstruction's Classroom Performance System. Textbook publisher Allyn and Bacon also provided technical support. All sections used the same textbook (Gerrig & Zimbardo, 2005). The four sections covered similar material, but not always in the same topic order.
Students in clicker sections purchased their responders
bundled with their textbook.
Dependent Measures
Exams. Each instructor gave four multiple-choice
exams (there was no comprehensive final exam).
Within an instructor’s class, the two sections answered
identical items (i.e., the same exam was presented at
9:05 and at 10:10). However, to reduce cheating, instructors distributed four different exam question orders in each exam session. In addition, exams in the
earlier sections were labeled as Forms 1 through 4, and
in the later sections were labeled as Forms 5 through
8. We analyzed the percentage of questions answered
correctly.
Self-reports of engagement. Students completed anonymous, semester-end course evaluations
online. In addition to the standard questions used for
all courses, we wrote five questions specifically to measure engagement (see Table 1 for the items).
Table 2. Exam Ms and SDs for Clicker and Traditional Sections

                 Exam 1          Exam 2          Exam 3          Exam 4
               M       SD      M       SD      M       SD      M       SD
Traditional   71.13   12.9    69.04   12.2    70.04   13.0    69.44   11.8
Clicker       72.69   11.7    69.62   11.6    70.82   14.0    72.43   11.4

Note. Ns range from 560 to 574 for traditional sections and 476 to 482 for clicker sections (some students missed an exam). The values are based on first-year students only. Clicker sections scored significantly higher than traditional sections on Exam 1 and Exam 4 (see text).
Results
Exam Scores
We conducted a 2 × 2 × 4 mixed MANOVA
with instructor (Dr. A and Dr. B, between participants), clicker use (traditional vs. clicker, between
participants) and exam (four exams, within participants) as the independent variables (we present Greenhouse-Geisser values here). We eliminated students
who eventually withdrew from the course. Our analysis showed significant main effects for clicker use, F(1, 1027) = 6.21, p = .013, partial η² = .006, and exam, F(3, 3081) = 16.68, p < .01, partial η² = .016, which were qualified by a Clicker Use × Exam interaction, F(3, 3081) = 3.46, p = .02, partial η² = .003 (see
means in Table 2). We conducted four post-hoc contrasts comparing clicker to traditional sections for each
exam separately, using the MS error term from the interaction and Bonferroni adjusted p values. These contrasts were significant for Exam 1, t(3081) = 3.01, p <
.05, Cohen’s d = 0.13, and Exam 4, t(3081) = 5.78,
p < .05; Cohen’s d = 0.26. Thus, exam scores were
higher for clicker sections than for traditional sections,
but the main effect was driven by scores for Exams 1
and 4. The effect size for clicker use was very small.
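As an illustration of how these effect sizes relate to the summary data in Table 2, the following sketch (in Python) approximates Cohen's d from the table's means and SDs. It uses a pooled-SD denominator and assumed group sizes (570 traditional and 480 clicker students, roughly the midpoints of the ranges in the table note), whereas the contrasts above used the MS error term from the interaction, so it only approximates the reported values of 0.13 and 0.26.

```python
import math

def cohens_d(m_trad, sd_trad, n_trad, m_click, sd_click, n_click):
    """Approximate Cohen's d (clicker minus traditional) with a pooled SD.

    The contrasts reported in the text used the MS error term from the
    Clicker Use x Exam interaction as the denominator, so these pooled-SD
    values only approximate the reported effect sizes.
    """
    pooled_var = ((n_trad - 1) * sd_trad**2 + (n_click - 1) * sd_click**2) / (
        n_trad + n_click - 2
    )
    return (m_click - m_trad) / math.sqrt(pooled_var)

# Summary statistics from Table 2; group sizes are assumed midpoints of the
# ranges given in the table note (560-574 traditional, 476-482 clicker).
n_trad, n_click = 570, 480
print(round(cohens_d(71.13, 12.9, n_trad, 72.69, 11.7, n_click), 2))  # Exam 1: ~0.13
print(round(cohens_d(69.44, 11.8, n_trad, 72.43, 11.4, n_click), 2))  # Exam 4: ~0.26
```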
An Exam × Instructor interaction, F(3, 3081) = 26.29, p < .01, partial η² = .025, showed that for Dr.
A, Exam 1 scores were higher than scores for Exams 2,
3, and 4, whereas for Dr. B, Exam 1 and 3 scores were
higher than those for Exams 2 and 4. No other main
effects or interactions were significant.
To test whether the clicker effect was different for
upper class students and first-year students, we conducted a mixed ANOVA with the full sample, using
a 2 (clicker section) × 2 (professor) × 2 (class: first-year or upper class student) × 4 (exam) design. The
effects for clickers did not significantly interact with
the class variable, suggesting that the effectiveness of
clickers did not depend on being a first-year or upper class student. In addition, the class variable did
not interact with any other effects reported in the
primary analysis (which analyzed first-year students
only).
Self-Reports of Engagement
Because we received only summary output (overall Ms and SDs, not individual responses) from the
course evaluation items, we were unable to conduct a
2 × 2 (Professor × Clicker Use) ANOVA on the five
engagement items. Consequently, we did independent-groups t tests for the five self-report items (see Table 1). In addition, because only summary output was
available for these questions, our analysis included all
levels of students (first-year and upper class students).
Of 10 possible comparisons, 1 emerged significantly in
favor of clickers (“Regular attendance was important
in this class” for Dr. B, Cohen’s d = .15), three were
marginally in favor of clickers (these three were related
to class attendance and reading before class), and six
showed no difference.
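Because only summary output was available, these comparisons must be computed from means, SDs, and group sizes rather than raw responses. The sketch below (Python) illustrates this with scipy's ttest_ind_from_stats for the Dr. B attendance item; the means come from Table 1, but the SDs and group sizes are placeholders (Table 1 reports only means and a df range of 432 to 486, not per-item SDs or exact ns), so the output will not reproduce the reported t of 2.52.

```python
from scipy.stats import ttest_ind_from_stats

# Independent-groups t test computed from summary statistics alone
# (means, SDs, group sizes), as needed when only course-evaluation
# summary output is available. Means are the Dr. B "regular attendance
# was important" item from Table 1; the SDs and group sizes below are
# placeholder assumptions, so this sketch will not reproduce t = 2.52.
result = ttest_ind_from_stats(
    mean1=3.08, std1=1.00, nobs1=240,  # clicker section (SD and n assumed)
    mean2=2.79, std2=1.00, nobs2=240,  # traditional section (SD and n assumed)
    equal_var=True,                    # pooled-variance (Student's) t test
)
print(result.statistic, result.pvalue)
```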
Discussion
Exam Performance
Our data suggest that using clickers to quiz students
in class on material from their reading resulted in a
small, positive effect on exam performance in large introductory psychology classes. We had a large sample
size and took advantage of the near-random assignment
of students to sections, reducing the possibility of selection effects. The outcome did not depend on which
professor was teaching.
In our study, the instructors used clickers very
minimally—to administer quizzes, publicly display the
results, and quickly correct any widespread misunderstandings. Thus, our study shows that using clickers
modestly can result in a small gain in exam performance. The effect we found might be mediated by a
number of mechanisms. Clickers may have provided
opportunities for interactive engagement, an empirically supported method for promoting concept learning (Hake, 1998). Students may have been motivated
by comparing their performance to their peers. Students may have simply benefited from being more prepared for class or from extra practice with the types of
questions that might be on the exam. Future studies of
clickers might be able to evaluate some of these specific
mediators.
An alternative explanation for the improved exam
performance in the clicker sections is that students in
the earlier section might have shared exam questions
with students in later sections. To investigate this explanation, we compared the first three exam scores
from the Spring 2006 semester (i.e., the semester after
the one in which we collected the present data) for
Dr. B, who taught two traditional sections of introductory psychology in consecutive time periods, again with
the same exams. Exam scores in the later section were
higher for the first test, t(477) = 2.88, p < .05, Cohen’s
d = .27, but not significantly different in the second
and third tests, ts(476) = –1.17, –1.28, ns, in contrast
to the consistently higher scores for clicker sections on
all exams in our study. This outcome provides some evidence against the “question-sharing” explanation for
these results.
Student Engagement
Although Dr. B reported that students “got a kick
out of them,” clickers had only marginal effects on
self-reports of student engagement, attendance, and
reading in this study—effects that may be attributable
to Type I error. Students in clicker sections reported
that class attendance was more important, but they
did not report feeling significantly more engaged than
students in traditional sections. Another possible indicator of engagement is attendance over the semester.
Although professors did not take attendance, we inspected the participation rates for the clicker quizzes,
and attendance neither increased nor decreased over
the semester. It is possible that past studies indicating
that students thought clickers help them learn might
have been exaggerated by demand characteristics. Another explanation of the nearly null effect on engagement is that this was the first time Dr. A and Dr. B had
used clickers; Duncan (2005) reported that students
rate clickers higher when instructors are more experienced with them (perhaps because they write better
conceptual questions or because they have mastered
the technological details).
Areas for Future Study
Methodologically, our study was a strong test of the
effectiveness of clickers by themselves, unconfounded
by other pedagogical tools. However, future research
should test the impact of combining clickers with
other, well-established pedagogical methods.
One suggestion is to combine clickers with concept inventories; that is, standardized, multiple-choice
questions that ask students about core concepts (see
Foundation Coalition, n.d., for an example in engineering). Concept inventories include as distractors
the most common wrong answers (e.g., “Which of the four correlations shows the strongest prediction: (a) r = −.73; (b) r = .62; (c) r = .25; (d) r = .10?”; adapted from Chew, 2004a). Chew (2004a, 2004b) has
promoted the use of such conceptual questions in the
psychology classroom.
Group discussion is another tool to combine with
clickers. In our study, students worked on questions
individually. Research might show that clickers lead
to greater benefits when combined with cooperative
learning. For example, Duncan (2005) suggested that
students can enter their answers to a question both
before and after small-group discussion.
Another teaching method that can be supported
by clickers is just-in-time teaching (JiTT). Dr. A and
Dr. B used clickers to supplement their lectures with a
small amount of JiTT. JiTT is a method in which the
instructor adapts the course lesson—sometimes right
in the middle of class—to get at what students do not
understand (e.g., Beekes, 2006). A future study might
amplify the use of JiTT in combination with clicker
questions.
Practical Considerations
To instructors who are considering using clickers
in their classrooms, we recommend they consider how
they will grade clicker performance. Because students
in our study could earn extra credit points for answering correctly, we may have inadvertently induced
an evaluative focus. Students in the clicker sections
expressed anxiety about whether their responses were
being correctly recorded, so the students may have
seen the clickers as evaluation tools, not as learning
tools. In addition, several students in our study openly
“cheated” by asking their neighbors in class what the
right answer was, even though the instructors had
not expressly allowed such discussion. We can think
of two interpretations of the “cheating” we observed.
If students simply copied each other, such cheating
probably undermined the true learning value of
clickers (Brothen & Wambach, 2001; Daniel, 2006).
On the other hand, if asking one’s neighbors about the
right answer introduced some cooperative learning to
the students and made students think more deeply,
it might have actually increased student engagement
with the material. We encourage instructors to think
about how they might handle potential student
cheating on clicker questions.
Finally, we note that there are less expensive options that might be just as effective as clickers. For
instance, a professor can quickly assess the understanding of a group of students by having them display color-coded index cards in response to a question (Kellum,
Carr, & Dozier, 2001). Similarly, professors can foster
group interaction and engagement with material using scratch-off quizzes (Epstein et al., 2002) without
having to purchase, register, and maintain the clicker
responders. Online quizzes, too, may give the same (or
greater) benefit when used according to their best practices (Brothen, Daniel, & Finley, 2004).
Conclusion
In a design with near-random assignment, across
two different professors, using clickers resulted in a
small improvement in exam scores. Based on our results, instructors should not expect large improvements
in exam performance if they use clickers mainly to administer and display reading quiz questions in introductory psychology classrooms. However, future research
might establish that clickers have greater impact when
they are combined with techniques such as cooperative
learning or concept inventories.
Although students in one section reported that attendance was important, clickers did not otherwise
improve reports of student engagement. We found little support for the engagement hypothesis, suggesting
that clickers are not, by themselves, sufficient to increase subjective reports of engagement. Future studies
could test whether adding clickers to well-documented
teaching methods will increase subjective student engagement, as well as learning.
References
Beatty, I. (2004). Transforming student learning with classroom communication systems. Educause Center for Applied
Research Bulletin, 3, 1–13.
Beekes, W. (2006). The “millionaire” method for encouraging participation. Active Learning in Higher Education, 7,
25–36.
Brothen, T., Daniel, D. B., & Finley, D. L. (2004).
Best principles in the use of on-line quizzing. Retrieved
March 23, 2007, from http://teachpsych.org/resources/pedagogy/onlinetesting.pdf
Brothen, T., & Wambach, C. (2001). Effective student use
of computerized quizzes. Teaching of Psychology, 28, 292–
294.
Chew, S. L. (2004a). Student misconceptions in the psychology classroom. Essays from E-xcellence in Teaching,
Vol. 4. Retrieved May 8, 2007, from http://teachpsych.org/resources/e-books/eit2004/eit04-03.pdf
Chew, S. L. (2004b). Using conceptests for formative assessment. Psychology Teacher Network, 14(1), 10–12.
Daniel, D. B. (2006, May). Do teachers still need to teach?
Textbook-related pedagogy and student learning preferences.
Paper presented at the Teaching of Psychology preconference of the annual convention of the Association for
Psychological Science, New York.
Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal
of Computer Assisted Learning, 20, 81–94.
Duncan, D. (2005). Clickers in the classroom: How to enhance
science teaching using classroom response systems. New York:
Addison-Wesley.
Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K.
A., Hendel, R. A., Epstein, B. B., & Brosvic, G. M. (2002).
Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological
Record, 52, 187–201.
Foundation Coalition. (n.d.). Concept inventory assessment instruments. Retrieved March 23, 2007, from
http://www.foundationcoalition.org/home/keycomponents/concept/index.html
Gerrig, R. J., & Zimbardo, P. G. (2005). Psychology and life
(17th ed.). Boston: Pearson.
Hake, R. R. (1998). Interactive-engagement vs. traditional
methods: A six-thousand-student survey of mechanics test
data for introductory physics courses. American Journal of
Physics, 66, 64–74.
Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001).
Response-card instruction and student learning in a
college classroom. Teaching of Psychology, 28, 101–
104.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than
we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259.
Notes
1. Portions of this research were presented at the National Institute on the Teaching of Psychology, January 2006.
2. We are grateful to Allyn and Bacon for providing
extensive technical and personal support for this project.
We specifically acknowledge the help of Sarah Ergen and
Christine MacKrell. Lauren Reiss helped manage data,
and Christopher Sanger provided significant technical
support.
3. Send correspondence to Beth Morling, Department of
Psychology, University of Delaware, Newark, DE 19716;
e-mail: morling@udel.edu.