Improving Student Learning and the Student Experience of Feedback Using Formative Peer-Assessment
Dr Martin A. Sharp
Dr Laura A. Mitchell
Dept Psychology and Allied Health Sciences
Glasgow Caledonian University
Cowcaddens Road
Glasgow, G4 0BA
martin.sharp@gcu.ac.uk
Dept Psychology
Bishop's University
2600 College St., Sherbrooke, QC, J1M 1Z7
Canada
laura.mitchell@ubishops.ca
1. INTRODUCTION
Assessment forms an essential part of what we do within the teaching-learning environment (Mutch, 2003). It is also an aspect of our work that is frequently more complex than is sometimes appreciated (Brown et al., 1997). Certainly students take assessment very seriously and, as Brown & Knight note, ‘Assessment defines what students regard as important, how they spend their time, and how they come to see themselves as students’ (1994, p.12). Disappointingly, issues concerning assessment and feedback continue to be rated very poorly across the HE sector on the National Student Survey. This is especially pertinent for the psychology programme at GCU where, during the 2010/2011 academic year, only 28% of students believed feedback had been prompt and 35% believed feedback had helped them clarify issues with which they were struggling (HEFCE, 2011).
At the outset of the process we describe here, the overwhelming majority of assessment in the GCU
psychology programme was summative. Hounsell et al. (2005) make a prescient comment about the
challenges faced with this kind of assessment in which "assignments tend to be crowded towards the end of
the course, leaving students with little or no scope to benefit from a tutor’s feedback on their work" (p.3).
However, there has been what Hounsell et al. (ibid.) describe as a "resurgence of interest in formative
assessment" (p.2). The importance of this type of assessment, which assists students as they are engaging
in the process of learning (i.e. assessing ‘for’ learning), is summed up by Sadler (1989): "…students have to be able to judge the quality of what they are producing and be able to regulate what they are doing during the doing of it" (p.121). Relating these issues to performance in summative tasks, Black & Wiliam (1998) observe that formative assessment can have a substantial impact upon student learning. Indeed, Ramsden (2003) suggests that a lack of feedback on progress encourages surface rather than deep approaches to learning (cf. Marton and Säljö, 1976).
Formative assessment takes many forms, of course. Peer assessment is one such type of innovative
practice which aims to improve the quality of student learning and facilitate the development of autonomous
learners (McDowell and Mowl, 1996). We believed that the introduction of a formative peer-assessment regimen, grounded in a robust pedagogical literature, would therefore involve students not merely as passive recipients of summative grades but as active, independent learners (Biggs, 1999). Moreover, in making critical judgments about the work of others, students gain insight into their own performance, an increasingly necessary skill for university study and professional life. Consequently, our curriculum
innovation deals with two inter-related issues: formative peer-assessment and peer-feedback. In the
following sections we describe the process, illustrate a selection of our findings in relation to engagement
and highlight a number of challenges that we sought to overcome.
2. PROCESS
Smyth (2004) suggests "Building students’ knowledge of how and why assessment takes the form it does,
raising awareness of ongoing as well as final processes, and revealing how critical thinking about
assessment is an integral part of the learning process…" (p.370). Consequently, whilst the primary focus of
the project was to incorporate a peer-reviewed formative assessment component into an honours level
biopsychology module, the project involved a complementary seminar series that aimed to scaffold the peer
assessment element.
2.1 Peer-Assessment Component
Students conducted a laboratory-based biopsychology project over several weeks as part of their
coursework (they were offered a choice of pre-designed projects; all of which both consolidated and
extended theoretical material introduced during the lecture series). Once the project was complete they
wrote and submitted a 3,000-word lab report to be formatively graded by another student who had conducted the same project. In these dyads, students were given a week to provide detailed, constructive written feedback suggesting how the report might be improved. Additionally, each dyad was subsequently asked to meet in order to justify the grades awarded and engage in a discussion concerning the feedback provided. As King (1999) observes, providing constructive individual feedback is not just a mechanism for delivering a judgment or evaluation; it should be designed and utilized to provide insight. In light of this, at the point of
summative assessment (teaching week 11), students submitted a revised and amended lab report along with
a pro-forma detailing how they had engaged with the feedback they received and in what way they had
subsequently re-formulated their lab-report. Students were thus provided the opportunity to engage in a
process of feed-forward.
2.2 Parallel Seminars
During a seminar series, which ran in parallel to the lab practicals, students were asked to undertake a
number of tasks related to assessment and feedback. First, they were asked to consider and discuss the nature of assessment at university and to reflect upon their own engagement with it. Second, they debated the criteria for providing and receiving useful feedback. Third, students were placed into smaller groups and asked to identify the concerns they had with the formative peer-review process. Once the issues were identified, the small groups were asked to describe means of resolving, or at least ameliorating, these concerns. Broadly the same issues arise with every cohort; they are discussed in Section 4.
In a subsequent seminar, students were asked to individually mark an illustrative lab report provided by the
tutor. Then, in small groups, they marked an element of the same report (i.e. introduction, methodology, results or discussion) and were asked to reach a consensus on the grade, for which they provided a verbal justification to the remainder of the class. Each group additionally presented details of constructive feedback for their respective section (and here they had to take into account what they had previously said they wanted from good-quality feedback). The tutors, who had also graded the report during the same session, then gave their own mark. Where a discrepancy between the marks arose, as it inevitably did, it was debated and a resolution attempted, arriving finally at a negotiated understanding of what was required for the summative report. Thus the entire process became a comprehensive act of calibration.
Reflection on assessment procedures is a necessary part of the student learning process (Smyth, 2004) and
there are a number of benefits that emerge from engagement with this process. It encourages deeper
approaches to learning, assists in developing critical thinking skills, facilitates the development of
autonomous learners, develops valuable generic skills and allows students a non-threatening environment in
which to obtain useful feedback on their work without the anxiety of worrying about grades. Moreover,
it embeds into the curriculum a number of institutional and sector-wide enhancement themes.
3. FINDINGS
Although we collected more comprehensive data, for the purposes of this guide there are really three issues we would like to address. First, do students like it (and why wouldn't they)? Second, do they engage with the process (i.e. is it worth it, for both students and staff)? Finally, do students want to see formative assessment and peer-review in other parts of the degree programme? The findings reported here are for three successive cohorts and, unless otherwise stated, are based on 325 cases. When considering whether something like this might work for you, it is worth noting that the cohort had specific characteristics: 80% were female, 20% male, and 77% were under 25.
3.1 Did students appreciate it?
Why wouldn't they? Well, students act increasingly as strategic learners (not a bad thing, necessarily) and
seem ever more disgruntled at being asked to participate in any learning activity that doesn't yield an explicit
summative grade. This echoes MacLellan (2000) who suggested that students thought assessment was
more about grading and had very little to do with improving their own learning. And yet, here we have a
process for which they received no summative marks, which required a significant time contribution, which
required sustained critical thinking and reasoning (i.e. stuff that is generally quite hard to do really), for which
there was no 'expert' tutor input, and which ultimately asked them to spend even more time re-drafting
reports - sometimes quite significantly. So, in answer to the question 'why wouldn't students appreciate this opportunity?', it seems we can provide a litany of reasons. And yet, appreciate it they did, as the
responses to the following question bear out.
Q7. I appreciate the opportunity to submit work and receive informal feedback before submitting for
my final grade (92% agreed or strongly agreed).
[Bar chart: distribution of responses to Q7 (n = 325) across Disagree, Not Sure, Agree and Strongly Agree.]
3.2 Did students engage with the process and was it effective?
Ok, great, so they like or at least appreciate the idea of the process. There seems little point in implementing
this practice, however, if students are not going to engage or if they are going to engage only superficially.
The following three questions sought to ascertain the extent to which students put time and effort into
actually providing feedback to their peer-review partner, whether they engaged with the feedback they
received and whether they thought both providing and receiving written and oral feedback helped them in
preparing their summative reports. Taken together the responses demonstrate high levels of engagement.
Q4. Providing feedback on someone else’s report was beneficial in helping me think about what was
required for my own report (86% either agreed or strongly agreed).
[Bar chart: distribution of responses to Q4 across Strongly Disagree, Disagree, Not Sure, Agree and Strongly Agree.]
Q6. I read the feedback carefully and try to understand what the feedback is saying about how I
might improve my report (95% either agreed or strongly agreed).
[Bar chart: distribution of responses to Q6 across Strongly Disagree, Disagree, Not Sure, Agree and Strongly Agree.]
Q5. The process of receiving written feedback on a formative lab report was beneficial in helping me
prepare for the final report (88% either agreed or strongly agreed).
[Bar chart: distribution of responses to Q5 across Strongly Disagree, Disagree, Not Sure, Agree and Strongly Agree.]
3.3 Did students want to see formative peer-assessment elsewhere?
Or, put another way: it works great for us, but is it worth your spending time and energy attempting it? In an effort to broaden
the use of formative assessment and peer-review within the degree programme we sought to ascertain the
extent to which this would be welcomed by students. Results indicate a high level of agreement that there should be more formative assessment in other modules.
Q9. I would like to see more formative assessment in other modules (74% agreed or strongly agreed).
[Bar chart: distribution of responses to Q9 across Strongly Disagree, Disagree, Not Sure, Agree and Strongly Agree.]
3.4 Formative/Summative Grades
We had hoped to demonstrate an improvement in grades from the peer-assessed formative lab report to the
one assessed summatively. Whilst there was indeed clear evidence of an improvement overall, this was not
consistent across all students (approximately 20% demonstrated either no improvement or a lower summative grade). By cross-marking a small number of these reports in an attempt to understand what was happening, we ascertained that some students had been awarded a formative grade which was too high, bearing little relation to the actual quality of the work. The extent to which this altered students' perceptions of the relative merit of their reports, and how they subsequently approached the summative assessment, was not immediately clear. However, as this has implications for student motivation, performance and engagement with the summative version of the lab report, it is worth considering why it might have occurred. Whilst some grade inflation is likely to have resulted from either a lack of experience in assigning grades or a lack of confidence in undertaking the task, a second explanation came to light through semi-structured interviews with students. Sex differences appeared to play some role, with those assigning overly high grades tending to be mainly female. To some extent this was inevitable, given the make-up of the cohort, but for a variety of complex reasons female students may be more reluctant to assign poor grades to other females in their peer group. Whilst the finding requires additional exploration, we suggest a small number of amendments to the procedure. It may be that triads are a more appropriate way of assessing formative reports than dyads, as this changes the interpersonal dynamic significantly. In addition, a generic degree classification, rather than a specific percentage grade, may be more appropriate as the unit of formative assessment.
4. CHALLENGES
Ok, it sounds great. I'm sold and want to give it a go; is it really problem-free? The short answer is, of course, not entirely! Whilst it should be relatively easy to use the whole process, or to extract and amend elements that you think would work well in your own module, there are a number of issues that are worth considering and paying careful attention to in order for the process to work effectively. The following challenges are those identified by students during the scaffolding seminars on assessment. They have been identified by successive cohorts as things that concern them and which, if left unattended, would undermine the success of the entire process.
4.1 Problems with peer-review partner
The value of the entire process depends upon students engaging and displaying mutual respect for each other's contribution. Unfortunately, we come up against the frustrating 'free-rider' problem. In essence, for us
this means that a small but persistent number of students will be quite content to receive feedback from their
peer but will be either tardy in providing it themselves, engage in a superficial manner, or simply not engage
at all. Hence, two people are disadvantaged in every dyad in which this occurs (three if you count the module
leader who has to sort this out). This cannot be left un-addressed and there are a number of ways of
overcoming or minimizing the effects of this challenge. You could, for example, award marks for the
feedback produced. However, this robs the process of its formative element (which is after all its raison
d'être) and requires significant additional staff input. You could appeal to students' better nature (difficult, to be honest; we tried). The solution we finally employed is multi-stranded. First, we debate this problem in
the seminars and emphasize the nature of mutual respect, notions of shared scholarship and highlight the
problems caused if students do not participate fully. In essence, we try and create an environment that is
focused on shared scholarship. Whilst this reduces the number of students who do not participate to a very small percentage (less than 2%), we have recently made the submission of the formative report, and the provision of written and oral feedback, a prerequisite for submission of the final summative report; this is written into both the module guide and the module descriptor. Of course this final step, whilst effective in
ensuring every student participates, doesn't necessarily encourage willing engagement. As such it is the
silent partner in the process. The debates about contribution and developing a sense of being part of an
academic community remain the most important aspect for encouraging effective participation.
4.2 Poor quality reviews
Related to the issue above is the problem of peer-reviewers providing poor-quality or even inaccurate feedback. At the outset of our seminar discussions, students do not, on the whole, perceive themselves to be subject specialists. As such, they are often extremely wary about having what they consider another non-expert marking their work. Indeed, they seem to regard validation from the tutor as the only legitimate means of receiving feedback. As there is the potential for students to mislead each other about their performance on the
formative report (invariably unintentional rather than malicious), this is a problematic area, no question. In
some respects this is central to the philosophy underpinning the introduction of peer-reviewed formative
assessment in the biopsychology module because the entire edifice collapses if students do not believe they
are going to receive useful, meaningful feedback. As such, we deal at length with this issue in the supporting
seminars. Not only do we explore issues of expertise (students mark lab reports that correspond to the same
experiment they conducted, and so are able to bring significantly more expertise to the process than they
initially think) but we take the position that poor quality feedback may not be the disaster it initially seems to
be (it's not ideal, obviously). Through discussion and reflection we attempt to persuade the students to see
feedback not as an absolute indicator of quality via a direct transmission of expert knowledge, but as a
dialectic process; something they can reflect upon and challenge if necessary (in the same way that those of us who publish in academic journals don't take reviewers' comments as gospel - it is entirely appropriate to make your case to an editor if you can justify why you were correct and the reviewer incorrect). We stress that this intersection, where they are forced to engage with feedback and make a decision about its veracity, is precisely where learning takes place. Students, at this juncture, usually begin to think differently about the entire process of assessment. So, by eliminating the stamp of authority and introducing diverse, possibly conflicting feedback, students are required to exercise their critical judgment in deciding what information to
conflicting feedback, students are required to exercise their critical judgment in deciding what information to
accept and reject.
4.3 Stealing Ideas
If we believe the rhetoric, our students should be eager to be fully engaged in collaborative learning and embrace notions of shared scholarship. The reality? Students seem to hate sharing their work with anyone, especially other students. Why this disparity? Does it matter, and how can we stop it derailing our peer-review innovation? This is another issue, along with closely related concerns about plagiarism, that is raised
every year and that we deal with in our seminar series. Once students have identified this as an area of
concern they are asked to consider that there is no monopoly on where ideas come from; that a strong
student has nothing to fear from another student making use of their ideas. Indeed this exchange of ideas is
a central aspect of belonging to an academic community. We discuss notions of criterion vs. normative
marking and reassure students that their own grades will not be affected by another student incorporating their ideas into a lab report, and that this is not the same as plagiarism (except of course where it is, but this hasn't
happened yet). Students appear reassured by the criterion/normative argument and by the fact that tutors
mark lab reports in the same pairs in which they were marked formatively, making any plagiarism easy to spot.
Moreover, there is a clear and transparent process in place and students are encouraged to highlight
concerns at any stage.
5. SUMMARY
As academics we all understand that assessment is important, both to students and ourselves. It can also be
difficult to get right. The curriculum innovation described here - a peer-reviewed formative assessment in an
honours level biopsychology module - was overwhelmingly well-received. It was demonstrably effective, with
students appreciating the opportunity to have work formatively peer-marked prior to it being graded
summatively, and also reporting high levels of engagement. In addition, having undertaken the process
students reported an appetite for increased formative assessment throughout the programme.
However, introducing a curriculum innovation such as this requires a watchful eye and careful navigation of
issues that can easily derail the process. Amongst the most important issues for students were the potential
for copying, a perception of not being subject experts and apprehension over the possibility of poor quality
feedback. Students, if they are to benefit fully from this process, must be allowed to voice their concerns and
attempt some resolution of these issues. Fortunately, as demonstrated, these worries are not
insurmountable. Further to this, we are very clear in our own minds that the success of this formative peer-review process continues to be dependent upon two elements: the preparatory seminars, in which students are allowed to voice their concerns and explore resolutions, and the quality of feedback provided by peers.
We finish with two quotes which we believe sum up the experience for the majority of students.
‘At first I thought it was a complete waste of time. Having done the formative assessment I now
appreciate the effectiveness of the exercise, not only for improving the quality of work, but also as a way
of moderating workload and improving time management’.
‘I would like to be able to do formative assessment on other modules too. I may even arrange to do it
with some fellow students. I found it very helpful as it helped give me objectivity about my work; the
feedback was very helpful’.
If colleagues would like to attempt something similar to the process described, or even to take constituent parts and integrate them into their own modules, then Dr Martin Sharp would be very happy either to help with design or simply to chat about the process in more detail.
Reference List
Biggs, J. B. (1999). Teaching for Quality Learning at University. Buckingham: Society for Research into Higher Education and Open University Press.
Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.
Brown, E., Gibbs, G. & Glover, C. (2003). Evaluating tools for investigating the impact of assessment regimes on student learning. Bioscience Education E-journal, 2-5. http://bio.ltsn.ac.uk/journal/vol2/beej-25.htm
Brown, G., Bull, J. & Pendlebury, M. (1997). Assessing Student Learning in Higher Education. London: Routledge.
HEFCE (2011). National Student Survey. http://www.hefce.ac.uk/learning/nss/data/2011/
Hounsell, D., Hounsell, J., Litjens, J. & McCune, V. (2005). Enhancing guidance and feedback to students: findings on the impact of evidence-informed initiatives. Symposium at the European Association for Research on Learning and Instruction (EARLI) 11th Biennial Conference, August, Nicosia, Cyprus.
King, K. (1999). Career focus: giving feedback is central to doctor training. British Medical Journal, 318(6).
Marton, F. & Säljö, R. (1976). On qualitative differences in learning: 1. Outcome and process. British Journal of Educational Psychology, 46, 4-11.
McDowell, L. & Mowl, G. (1996). Innovative assessment: its impact on students. In G. Gibbs (Ed.), Improving Student Learning Through Assessment and Evaluation (pp. 131-147). Oxford: The Oxford Centre for Staff Development.
Mutch, A. (2003). Exploring the practice of feedback. Active Learning, 4(1). York: ILTHE.
Ramsden, P. (2003). Learning to Teach in Higher Education (2nd ed.). London: RoutledgeFalmer.
Re-Engineering Assessment Practices in Scottish Higher Education (REAP). http://www.reap.ac.uk/
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
Smyth, K. (2004). The benefits of student learning about critical evaluation rather than being summatively judged. Assessment and Evaluation in Higher Education, 29(3), 369-378.