Genre, Peer Assessment, and Motivation

by Jamison Lee
Genre theory privileges the view that each writing task is distinct in almost every way.
With this in mind, one does not simply write a piece in a particular genre without first
researching that genre’s specific conventions. For instance, if my employer asked me to
write a feasibility report or a compliance report, I would have to do considerable research before
I could identify the traditional strictures governing the form and content of those genres.
If this sounds very similar to the notion of a “rhetorical situation,” you probably have a base
understanding from which to build—though this depends on whom you ask! Some have
questioned how “genre” is conceptually different from what is referred to as the “rhetorical
situation,” and I do not have a satisfactory answer. In my humble view, they share certain
components, and it seems to me that one can be used productively to foster a cursory
understanding of the other. However, I am far from willing or qualified to argue that point with
the leading scholars in those fields.
Learning and employing genre theory has influenced my pedagogical philosophy,
particularly as concerns assessment. Having been encouraged to develop my assessment
strategies beyond the traditional summative grading of a final paper, I began to dig into
assessment theory and consider what was most important to me about assessment. I began to ask
structural questions, such as “Who does assessment serve?” “Who should it serve?” “How
do our methods of assessment affect the students’ learning?” and “What are our options?”
Upon consideration of my assessment practices, I have ultimately come to this
conclusion: if students must receive grades, the process by which those grades are assigned must
be one that enhances their learning. As Huot asserts, “Instructive evaluation requires that we
involve students in all phases of the assessment of their work” (171). Cavanagh and Styles
go one step further, charging that “the teacher who refuses to involve his or her
students in the various steps of a well-designed evaluation approach is cheating the students of a
priceless learning opportunity” (68).
Hence, I have more recently adopted a community-based assessment pedagogy model
and have involved my students in deciding upon central writing criteria, creating the rubric, and
grading. At the heart of community-based assessment pedagogy (in a composition classroom) is
this central divergence from more conventional assessment structures: “[Students] can’t be
passive, can’t simply accept criteria or assignments, nor can they write the way they’ve usually
written in the past. My students must debate and decide on all the important decisions regarding
their writing in the course from start to finish. The class is about them learning not me teaching”
(Inoue 221). Inoue goes on to state that “allowing our students to assess themselves for real is
pedagogically sound” because:
Assessing for our students only hamstrings their progress by making
pronouncements on their writing, halting reflection and self-assessment—it keeps
them from doing the very things we want them to be able to do: assess and
understand language, write and understand writing, conceptualize hermeneutical
acts. (230)
Here, Inoue underscores that this assessment pedagogy is meant to serve the students.
Additionally, granting our students agency at each level of the assessment process shows
them that we value their input and proves it by involving them in crucial
aspects of the educational process. An important corollary is that this pedagogy encourages
students to take charge of their education, to take steps toward intellectual autonomy.
How might this magical assessment method be employed? There are, of course, many
variations; the following is an example of one iteration:
Sample Process
1. Read about, research, and discuss the most salient genre-specific components.
2. Facilitate the creation of rubric(s).
3. Give feedback on first drafts and return for revisions.
4. Collect final drafts and distribute for peer assessment.
5. Collect and assess students’ grade reports.
If the bulk of students’ grades rests on their grade reports, the students’ compositions
then play the role of sample papers; the diminishment of extrinsic rewards allows freer and,
ideally, more creative exploration of their research topics. Their peer assessment and grade
reports function as a demonstration of their learning and are accordingly assessed by the
instructor for thoroughness and accuracy. Still, one question seems to remain: Should the
students’ assessments of one another’s work actually enter our grade books? That is, if Rachel
has given Jeremy a “D,” can we trust her judgment? Many contextual questions necessarily
contribute to the answer(s): Did Rachel provide sufficient support for her assessment? Was her
assessment of Jeremy’s work quite different from others who assessed the same piece? Were the
students’ grade reports named or anonymous? (These are factors that I explored through
empirical pedagogical research, and I would love to discuss them, if you believe it would be a
productive use of our group’s time.)
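
Whether a peer grade like Rachel’s “D” deserves a place in the grade book can be partly
informed by simple arithmetic whenever several students assess the same piece. The sketch
below is a hypothetical illustration, not a description of my actual classroom procedure: the
names, the four-point grade scale, and the divergence threshold are all my own assumptions.

```python
# Hypothetical sketch: flag a peer grade that diverges sharply from the
# other assessments of the same paper. The letter-grade scale and the
# 1.5-point threshold are illustrative assumptions, not classroom policy.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def flag_outliers(assessments, threshold=1.5):
    """assessments maps assessor name -> letter grade for ONE paper.
    Returns the assessors whose grade sits more than `threshold` grade
    points away from the mean of everyone else's grades."""
    flagged = []
    for name, letter in assessments.items():
        others = [GRADE_POINTS[g] for n, g in assessments.items() if n != name]
        if not others:
            continue  # a lone assessor has no one to diverge from
        mean_of_others = sum(others) / len(others)
        if abs(GRADE_POINTS[letter] - mean_of_others) > threshold:
            flagged.append(name)
    return flagged

# Jeremy's paper, assessed by three (hypothetical) peers:
peer_grades = {"Rachel": "D", "Sam": "B", "Priya": "B"}
print(flag_outliers(peer_grades))  # ['Rachel']
```

A grade flagged this way would not be discarded automatically; it would simply prompt the
contextual questions above: did the assessor support the judgment, and how does it compare
with the other reports on the same piece?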
Additional Complementary Supplemental Auxiliary Postscript Addendum:
The Science of Motivation
Intuitively, we suspect that extrinsic motivators (e.g., grades, money) engender
motivation. While this is often true for simple mechanical tasks, the opposite holds for creative
endeavors: extrinsic rewards have repeatedly been shown to diminish the likelihood of success
and to impede productivity in tasks that require creativity, and writing is undoubtedly among
them. (For a primer on this, I refer you to “Dan
Pink on the surprising science of motivation,” hosted by Ted.com.) With this in mind, my goal
is to free the students to write a more creative paper, to discover more as they write and to take
their paper in the direction they wish without being concerned that they will get a bad grade if
their interests and creative inclinations lead them down a divergent path.
Of course, what counts as creative thinking has been hotly debated, and my
construal of “creative thinking” is quite inclusive; still, I stand by the view that our creative
faculties are called upon more often in writing than in assessing, which I have experienced as
an interpretive rather than a creative act. With that in mind, it is important to separate the
two in terms of exactly what we are assessing and how. That is, I have tried to place the grading
emphasis on the students’ assessments of one another’s work. I’ve tried to allow them to
demonstrate their knowledge of genre components by assessing the work of their peers and
writing a thorough grade report, with examples from the text to back up their judgments.
Works Cited
Cavanagh, Gray, and Ken Styles. “Evaluating Written Work.” English Quarterly 16.2 (1983):
63-68. Print.
Huot, Brian. (Re)Articulating Writing Assessment. Logan, Utah: Utah State University Press,
2002. Print.
Inoue, Asao. “Community-Based Assessment Pedagogy.” Assessing Writing 9 (2005): 208-238.
Web. 4 September 2010.
Pink, Dan. “Dan Pink on the Surprising Science of Motivation.” Ted: Ideas Worth Spreading.
Ted.com. Aug. 2009. Web. 7 November 2009.