I have narrowed the context of my project down to the following paper:

Understanding Student Disengagement in Peer-Instruction Classrooms
Steven Pollock
Dep't of Physics, CU Boulder, CO 80309
Steven.Pollock@colorado.edu
Abstract: Peer instruction techniques in large classrooms demonstrably improve student
learning, but simple flaws in implementation can noticeably reduce the technique's
effectiveness. Through a variety of student surveys and interviews, we investigate and
categorize the underlying cognitive and procedural issues that limit the effectiveness of peer
instruction methods, and suggest classroom strategies at a variety of levels to improve them.
Physics Education Research [e.g. Arons97, Hake98, Hestenes92, McDermott91, 93] argues
compellingly that traditional large lectures can fail with regard to student attitudes about
physics, classroom engagement, and performance on tests of conceptual understanding. A
promising approach, advocated by Eric Mazur's group and others, is "Peer Instruction"
[Fagen02, Mazur97, Meltzer02, Wyckoff01], which involves frequent in-lecture multiple-choice
conceptual questions with paired ("peer") discussion and votes. Mazur's research
[Crouch01] has focused on the resulting gains in conceptual understanding. However, at our
institution, the half dozen faculty experimenting with these interactive-engagement classroom
techniques have been uniformly disturbed by a noticeable fade in student participation over
time. We have begun an action research program to investigate this fade, in order to provide
a theoretical basis for improving this promising classroom technique. Our approach is
largely survey-based, which at a minimum provides rich and detailed student comments. Our
results are meant to guide further investigation and suggest possible interventions.
Our data came from two large introductory physics courses: an algebra-based Electricity and
Magnetism course (Fa'01, 136 students) aimed at health science majors, and a calculus-based
course (Sp'02, 307 students) aimed at engineers. We used a "colored card" method of
student voting: multiple-choice answers are assigned colors, and students raise an index card
to vote. This provides a quick visual survey of the class with minimal expense or effort. Using
electronic feedback can have a positive effect on student participation, but it adds significant
costs for both students and departments. Since many schools do not yet have this option, we
consider it important to investigate student participation effects without the benefit of
expensive technology. Our data consisted of six distinct types: classroom observation, an
end-of-semester anonymous survey with Likert-scale questions, end-of-semester quantitative
assessment of conceptual learning, faculty interviews, focus groups of former students, and
online open-ended "participation questions". The latter provided by far our richest source of
information. More details about our data collection can be found elsewhere [Pollock03].
The bottom line is the following: in all cases we studied where peer instruction was
implemented using colored cards (rather than electronic feedback), student participation
tended to fade with time, and faculty were uniformly discouraged by their perception that the
classroom dynamic was changing for the worse. However, our students' performance on CSEM
(Conceptual Survey in Electricity and Magnetism [Maloney01]) exam questions exceeded
the comparable base rate from the CSEM survey by an average of 20-30%, indicating that
conceptual understanding was not objectively hindered by this classroom approach. Student
surveys about the use of concept tests in lecture were also overwhelmingly positive, both in
terms of "affect" (over 90% positive feelings about concept testing in lecture) and in their
perception of the benefits of such a technique for learning physics (just under 2% "not
beneficial" in both classes). See [Pollock03] for more details. Faculty interviews at our
institution, and with faculty nationwide via a web forum on peer instruction [Crouch02],
tended to focus on a single idea: students simply get tired over time, and repetitive use
of a single classroom technique is bound to "fade". Although it may be tempting to conclude
that the fade is thus just a result of unavoidable "human nature", our discussions with
students brought out hints of deeper and more varied explanations of the phenomenon, many
of which suggest classroom methods that could improve student participation levels and
maintain the effectiveness of peer instruction.
Our open-ended online survey questions provided the richest data, allowing us to code and
classify a variety of distinct effects involved in the fade. In the end, we have organized
these into five categories, each of which has distinct implications for classroom
approaches to peer instruction. The first category, "fatigue", is the primary assumption of
the interviewed faculty. Interestingly, almost no students specifically mentioned this in their
written responses to our questions about their participation, and it was discussed only briefly
and in passing during focus groups. There is no question that fatigue must play some role:
when one uses the same classroom approach for 15 weeks, the novelty certainly wears off. It
should be no surprise that students who are increasingly stressed, studying for other classes,
and suffering from a persistent lack of adequate sleep will not show up as consistently for
class, and will be more passive and quiet when they are there. If this is the sole issue, varying
the pace and style of concept tests, adding different in-class activities, and introducing novel
elements into the course as the semester progresses will all help. Nevertheless, this effect is
quite difficult to mitigate entirely. Because students still claim to enjoy and benefit from
peer-instruction activities right up to the end of the semester, we have found that the simple
act of asking them whether they are tired of concept tests and want them to end can shake
them up and reinvigorate their participation.
The second category is "logistical" in nature. When using cards to vote in a class with
hundreds of students, there is no easy mechanism for making participation explicitly
count. Individuals are quite anonymous, and students quickly deduce that it is not worth the
effort required to talk and vote. One solution to this problem is switching to an electronic
feedback device. These are becoming common and dropping in price, but they add their own
logistical complications - one being that it takes longer to collect votes with a large
audience response system than with a quick flash of cards. Short of this expensive and
complicated solution, we have begun experimenting with, e.g., telling students that the online
participation question will ask how their neighbor's solution differed from their own.
Credit for these online questions is pass/fail - we don't try to grade 500 essays, but
automatically give credit for any reply. Even so, this encourages students to participate
at least on those occasions.
We have developed another approach to the logistical problem of anonymity, which has also
proven to be fairly effective. When one starts to notice some "fade" (or preferably even
before), a variation on the voting procedure can be introduced. Students are required to get
into small groups, "voting blocs", and each bloc is required to vote the same color. If a bloc
cannot agree on an answer, its members have no choice - majority rules! With electronic
feedback, students forced to vote "wrong" by their neighbors then don't get credit, but even
with colored cards, the frustration of having to vote "wrong" often induces students to
participate more actively, argue more firmly, and defend their opinions.
Yet another important and distinct logistical issue is the time allotted to group
discussions in class. It is difficult to monitor the progress of a large number of small pairings
or groups in a big lecture hall, and the tendency among most faculty is to end the discussion
as soon as the sound level begins to drop noticeably. It did not occur to us that this had a
negative impact until I began soliciting written student feedback, asking e.g. "if you did
not participate in class today, why not?" Fully 25% of students in the class spontaneously
brought up the issue that they felt pressed for time during concept tests. Most argued that
without enough time, it was frustrating and pointless to start a discussion, and many gave up
and just waited quietly. This is clearly important to know, and one solution may be to
give more time earlier in the semester, when the conversations are still more energetic. When
using electronic feedback, this issue is easier to handle - the systems indicate how many
students have replied, and if one allows the discussion to continue until some significant
fraction have answered, the time problem may be self-regulated by the students themselves.
The third broad category of effects contributing to the fade is "social issues". Several students
in the focus groups argued that it was not considered "cool" to raise cards and participate
actively. Such active participation is not part of the standard social climate in typical
classrooms at our large public institution. In the beginning, many participated due to the
novelty and excitement, but as time went on, many began to notice the small but growing group
of "hold-outs" who don't participate. This fed on itself - the fewer people who talked and
voted, the less socially acceptable it was to continue doing so. There were other, distinct
social issues involved as well. Most students informed me that they quickly became embarrassed
talking to strangers if they were at all confused about a point. Sooner or later, there came a
time when they didn't feel comfortable talking, and after that it was difficult for them to
start up again.
One cannot easily change social constructs in a classroom, but being aware of them suggests
some useful strategies. We now begin the semester with "ice-breaking" activities, in which
students are required to find out something about their neighbors, including their names. This
may not come naturally to physicists, but from our classroom observations it had a marked
impact on participation levels. We now repeat such activities at regular intervals - students
don't always sit in the same groups, and if attendance begins to dwindle later in the semester,
people are physically farther apart and may suddenly arrive in class without their usual
support structure of friends. One can place remote parts of the large lecture hall "off limits"
late in the semester if attendance drops, to maintain a higher density, and actively encourage
reticent students to speak to their neighbors. Singling out a student in front of the class is
counterproductive, but quiet individual cajoling often brings students together into groups.
These social issues also provide fodder for what I call "classroom propaganda": the explicit
explanations and verbal motivation one provides students to encourage participation.
Explaining why discussion with neighbors is a useful tool is critical. Mazur and others
emphasize the necessity of explaining why we use these classroom techniques [Mazur97,
Meltzer02], but I didn't appreciate how important this can be until I began reading, e.g.,
students' statements that they refused to talk unless they "knew the answer", so as to avoid
embarrassing themselves in front of strangers.
A fourth category is "epistemological" in nature. Not surprisingly, few students were able to
articulate this idea explicitly in focus groups or online responses, but their replies often
indicated they didn't yet understand what it means to learn physics. Unfortunately, the very
nature of a multiple-choice question does not emphasize the idea that the process of
discussion is important to learning. Why should any student bother with a discussion if the
answer is all that matters? One must constantly remind students that articulation is an
essential part of learning, that hearing alternative viewpoints can strengthen understanding,
and that nobody is expected to know all solutions right away. It may be that the multiple-choice
format should not be the sole basis of in-class peer discussions, despite the ease of
collecting the results. One can have students occasionally discuss more open-ended, even
subjective issues, and collect their work either by large-class "call out" techniques or by
having them write up their ideas in an online participation activity due after class.
Developing "how do you know" questions (which can still be multiple choice, or might
dominate the important follow-up discussion of concept tests) is essential. If students are
expected to be able to discuss their reasons on homework or exams, the in-class peer
discussion becomes more useful and meaningful to them. A related issue, explicitly
mentioned by many students in their online writings, was their growing realization that they
could just sit quietly during discussion periods and wait for "the answer". In response, I
carefully chose occasions on which I would not reveal the answer to a multiple-choice
question. I let students know in advance: the question mattered, but the peer discussion was
their only opportunity to work on it (besides office hours). This was a powerful motivator,
especially once it became clear that such questions were more likely to show up on exams.
Although this can be extraordinarily frustrating for some students, choosing questions that
they are prepared to understand on their own, and explaining that they need to make sense of
the material rather than simply memorizing answers, can help ease their annoyance at not
having an answer handed to them every time.
Awareness of these epistemological issues may prompt the professor to cajole, encourage,
and demonstrate why participation is acceptable, useful, and necessary. Asking my students
in online questions to explain the value of in-class discussions yielded many insightful
answers, and participation after instituting these questions was measurably less "faded" than
in previous classes. Encouraging discussion, and teaching what "arguing" means to a physicist,
are necessary steps for students to engage appropriately with the material and each other.
Our fifth category consists of "pedagogical" issues. Students repeatedly stated they would not
discuss a problem if they were still seriously confused about the concept: it was not just
embarrassment, but a simple inability to articulate their thoughts about the problem, that
prevented them from speaking to their peers. As the semester progressed, this happened more
frequently, compounding the "fade" problem. Breaking complex concept questions into parts
can help here. I believe further research into the nature of, and differences between, concept
tests is necessary - what kinds of questions encourage student discussions? What is the
character of a "good" concept question? One may need to provide more frequent "confidence
builders" as the semester wears on, to consolidate what students do know and to reassure
them that they are still capable of making sense of progressively more complex physics topics.
In summary, the case for concept tests is compelling: I can no longer imagine not using them
in my classes. It is a shame that our students begin to feel a little dispirited near the end of a
course, hence our research into this fade. It is unlikely such research will lead to a "trick" or
simple solution, but learning more about what induces students to participate actively in a
large classroom setting is critical. A significant aspect of this study has been the introduction
of online written questions regarding pedagogical issues. This tool has broad uses, is popular
with students, and yields valuable insight into student attitudes. Another key result has been
the refinement of what I call "propagandizing" in my classroom: making explicit the reasons
behind classroom methodology and actively pushing participation. Indeed, the act of
investigating this problem helped mitigate the problem! Making visible those aspects of
learning that I want students to develop - through active propaganda, and by asking students
to think and write about their purpose and value - may have improved student understanding
and, consequently, engagement. Many students indicated they were thrilled that I cared
enough to ask and respond. This is a case of "making a large class feel smaller", as is peer
discussion itself - surely a central issue in student engagement, worth pursuing,
understanding, and improving.
Acknowledgment: This work was undertaken as part of my participation in the Carnegie
Academy for the Scholarship of Teaching and Learning.
References:
Arons, A. 1997, "Teaching Introductory Physics", Wiley
Crouch, C., Mazur, E. 2001, American Journal of Physics 69, 970-977
Crouch, C. 2002, private communication.
Fagen, A.P., Crouch, C., Mazur, E. 2002, Physics Teacher 40, 206-209
Hake, R. 1998, American Journal of Physics 66, 64-74
Hestenes, D., Wells, M., Swackhamer G. 1992, Physics Teacher 30, 141-158
Maloney, D., et al. 2001, American Journal of Physics Supplement 69, S12-S23
Mazur, E. 1997, "Peer Instruction: A User's Manual", Prentice Hall.
(See also http://galileo.harvard.edu)
McDermott, L.C. 1991, American Journal of Physics 59, 301-315
McDermott, L.C. 1993, American Journal of Physics 61, 295-298
Meltzer, D., Manivannan, K. 2002, American Journal of Physics 70, 639-656
Novak, G., et al. 1999, "Just-in-Time Teaching", Prentice Hall
(see also http://webphysics.iupui.edu/jitt/jitt.html)
Pollock, S. 2003, Article submitted to Journal of College Science Teaching.
(See also http://kml2.carnegiefoundation.org/html/poster.php?id=26)
Wyckoff, S. 2001, Journal of College Science Teaching 30(5), 306-312