Society for Research into Higher Education

Improving Academic Feedback – Turning the ship around with the power of 1000 students
D Hope, H Cameron
D Hope, Fellow in Medical Education, Centre for Medical Education, University of
Edinburgh, Edinburgh, UK
H Cameron, Director of the Centre for Medical Education, Centre for Medical Education,
University of Edinburgh, Edinburgh, UK
Part 1 Abstract
Efforts to improve staff-to-student feedback are hampered by a lack of empirical data on best
practice and piecemeal, small-scale engagement with students. Our large, multi-year project
followed an evidence-based approach and was informed and co-developed by all 1,300
students on the programme. We organized class-wide discussions on feedback to explore
different delivery methods. We collected information on feedback satisfaction, academic
performance, and demographics to identify feedback satisfaction correlates and determine the
feedback techniques most likely to improve performance. We specifically targeted
underperforming students to ensure they were represented. In total, over 1,000 students
participated in some capacity. Collectively this represented a thorough evaluation of and
revision to feedback in the medical school. Staff and students disagreed on what was needed,
and data showed few associations between feedback satisfaction and performance. Student
engagement is crucial to enhancing feedback and performance, and an evidence-based
approach is necessary to test the effect of innovations.
Part 2 Outline
Staff-to-student feedback is typically regarded as a critical component of education (Poulos &
Mahony, 2008), and the right type of academic feedback has the power to more than
double the effect of standard schooling (Hattie & Timperley, 2007), but empirical research on
how best to deliver feedback is limited. Significantly, there is a growing body of evidence
suggesting traditional approaches to feedback do not improve performance. In a randomized
trial comparing constructive criticism with praise following an attempt at a surgical
procedure, Boehler et al. (2006) found praised students were happier with their
feedback than those who received constructive criticism. However, praised students failed to
improve their performance when repeating the task, while the constructively criticized group
did. Parkes, Abercrombie and McCarty (2012) found that following a ‘feedback sandwich’
where constructive criticism is placed between two ‘slices’ of praise, students felt more
confident but their performance did not improve. Evidence such as this suggests there is a
risk of improving satisfaction without improving performance – a serious safety risk in
medical contexts and a disservice to students.
Our project was designed to engage all students on a medical programme through a number
of large-scale initiatives, to identify ways of improving feedback satisfaction whilst also
improving performance, and to evaluate innovations. We pursued a number of approaches. Firstly, we
organized large class-wide discussions on feedback satisfaction with all year groups in the
MBChB. Secondly, we delivered a large, detailed questionnaire on feedback to students.
Thirdly, we targeted underperforming students to ensure they were represented. Finally, we
began to systematically log all new efforts at feedback to monitor their effectiveness. Each
proved extremely helpful in developing our feedback techniques.
The class-wide discussions revealed intriguing differences between the approaches of students
and staff. Students emphasized the need for detailed, individualised feedback and suggested
class-wide feedback sessions were too general to be of use. However, proposals to run
detailed, individualised, computer-based feedback sessions on examinations out of hours, as a
means of accommodating them within a busy timetable, were viewed negatively by students. Staff
also initially thought immediate feedback at the end of examinations would be useful
to students, but this was widely viewed by students as excessively stressful.
engagement therefore led to a productive dialogue with some surprising findings and ensured
dead-ends were avoided. We intend to repeat this exercise every year to ensure active and
routine engagement on the subject of feedback.
Our large feedback questionnaire was delivered to over 800 students over the course of two
years. This comprised a detailed personality inventory (Goldberg, 1992), measures of
academic performance, feedback satisfaction and demographic information. We also trialled
for the first time a set of questions designed to test perspectives on feedback in a more
detailed way than simply recording satisfaction. Example items included ‘Having clear
departmental standards for feedback is …’ and ‘Being given motivation to improve is …’ In
2012-13, we validated this scale against a shorter, previously published set of questions
trialled in another medical school (Murdoch-Eaton & Sargeant, 2012). This has produced
findings showing links between personality and feedback satisfaction (Hope & Cameron,
2012), as well as allowing us to test for maturational and demographic effects. No
association between feedback satisfaction and performance was found. Our results have
shown the need to routinely test feedback effectiveness to ensure that performance improves as
well as satisfaction. They have also further highlighted student preferences for detailed feedback
wherever possible.
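A minimal sketch of how such association tests might be scripted for routine use is shown below; the file name and column names ('satisfaction', 'exam_score', 'year_of_study') are hypothetical illustrations, not taken from the study itself.

    # Minimal sketch, assuming questionnaire responses are exported to a CSV
    # with hypothetical columns 'satisfaction', 'exam_score' and 'year_of_study'.
    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("feedback_questionnaire.csv")

    # Association between feedback satisfaction and academic performance
    r, p = stats.pearsonr(responses["satisfaction"], responses["exam_score"])
    print(f"satisfaction vs performance: r = {r:.2f}, p = {p:.3f}")

    # Maturational effect: compare mean satisfaction across year groups
    groups = [g["satisfaction"].values for _, g in responses.groupby("year_of_study")]
    f_stat, p_year = stats.f_oneway(*groups)
    print(f"year-of-study effect: F = {f_stat:.2f}, p = {p_year:.3f}")

Scripting the tests in this way makes it straightforward to repeat them each year as new cohorts complete the questionnaire.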
We noted that respondents to our large feedback questionnaire consistently outperformed
non-respondents by approximately half a grade. Given the inherently greater risk of failure
for students near the pass/fail boundary, we decided to target students who were, or had
previously been, close to this boundary. Seeking their views was considered a priority to
ensure feedback was not driven solely by the most successful students. This exercise has been
extremely productive but also revealed the tendency for such students to disengage: out of an
initial sample pool of 80, fewer than 10% had responded within ten days of an initial request
for interview. This emphasized the need not just to monitor the simple response rate to feedback
exercises, but also to examine carefully whether all student ability levels are represented.
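The respondent/non-respondent gap noted above is also straightforward to monitor routinely. The sketch below shows one way such a comparison might be run; again, the file and column names are illustrative assumptions rather than the school's actual data.

    # Minimal sketch, assuming a hypothetical cohort table with one row per student,
    # an 'exam_score' column and a 'responded' flag for questionnaire participation.
    import pandas as pd
    from scipy import stats

    cohort = pd.read_csv("cohort_with_response_flags.csv")
    responded = cohort["responded"].astype(bool)

    respondents = cohort.loc[responded, "exam_score"]
    non_respondents = cohort.loc[~responded, "exam_score"]

    # Mean difference and Welch's t-test between the two groups
    diff = respondents.mean() - non_respondents.mean()
    t, p = stats.ttest_ind(respondents, non_respondents, equal_var=False)
    print(f"respondents outperform non-respondents by {diff:.1f} marks (t = {t:.2f}, p = {p:.3f})")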
During our discussions with staff and students it emerged that many module organizers
would apply feedback innovations – such as new styles of presenting information or new
formative examinations – in an ad hoc way. In order to monitor performance and test the
effectiveness of different strategies, we recorded each item in a feedback log and organized
short questionnaires assessing popularity and level of participation among the students.
Events included the use of PeerWise – a question-writing tool that enables students to create
questions and test each other’s knowledge – as well as class-wide reviews of questions and new
formative examinations. By monitoring all events, we could share knowledge of best practice with
other staff members and identify what did and did not work well for students.
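As an illustration of how such a log might be kept in a structured, shareable form, the sketch below records one entry per innovation alongside participation and satisfaction figures from the short follow-up questionnaires. The field names and file name are illustrative assumptions, not the schema actually used in the medical school.

    # Minimal sketch of a structured feedback log; field names are hypothetical.
    import csv
    import os
    from dataclasses import dataclass, asdict

    @dataclass
    class FeedbackEvent:
        module: str            # module in which the innovation was trialled
        description: str       # e.g. "PeerWise question-writing exercise"
        participation: int     # number of students taking part
        satisfaction: float    # mean rating from the short follow-up questionnaire

    def log_event(event: FeedbackEvent, path: str = "feedback_log.csv") -> None:
        """Append one feedback innovation to a shared CSV log."""
        new_file = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(asdict(event)))
            if new_file:
                writer.writeheader()   # write a header only once, for a new log
            writer.writerow(asdict(event))

    log_event(FeedbackEvent("Year 2 module", "New formative examination", 212, 4.1))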
This work, combined with other innovations beyond the scope of this discussion, has led to
an ongoing and substantial revision to the delivery of feedback in the medical school. In
particular, there has been a shift towards providing a challenging, fully standard-set formative
examination within each module designed to mimic the characteristics of the final
assessment. This has been fully discussed with students and follows on both from their
comments and our examination of their responses to survey items. An empirical test of the
effectiveness of this technique will be trialled in March-April 2013. The discussions with
students have emphasized not only what works, but also what does not work – and that staff and
students have very different ideas about how best to approach student feedback. The work
has repeatedly emphasized the need for ongoing student engagement and routine empirical
research on feedback in order to deliver sustained improvements in feedback quality. To the
maximum extent possible, we aim to make the work described here routine and apply it every
year.
References
Boehler, M. L., Rogers, D. A., Schwind, C. J., Mayforth, R., Quin, J., Williams, R. G., &
Dunnington, G. (2006). An investigation of medical student reactions to feedback: a
randomised controlled trial. Medical Education, 40(8), 746-749. doi: 10.1111/j.1365-2929.2006.02503.x
Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure.
Psychological Assessment, 4, 26-42.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research,
77(1), 81-112.
Hope, D., & Cameron, H. (2012). Feedback, Personality and Academic Performance. Paper
presented at the AMEE conference, Lyon, France.
Murdoch-Eaton, D., & Sargeant, J. (2012). Maturational differences in undergraduate
medical students’ perceptions about feedback. Medical Education, 46(7), 711-721.
doi: 10.1111/j.1365-2923.2012.04291.x
Parkes, J., Abercrombie, S., & McCarty, T. (2012). Feedback sandwiches affect perceptions
but not performance. Advances in Health Sciences Education, 1-11. doi:
10.1007/s10459-012-9377-9
Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: the students’ perspective.
Assessment & Evaluation in Higher Education, 33(2), 143-154. doi:
10.1080/02602930601127869