A novel means of assessing evidence-based medicine skills
Eseosa Asemota, Abigail Winkel, Dorice Vieira & Colleen Gillespie
What problems were addressed? Residents are expected to learn and demonstrate competency in using the medical literature to guide their practice.1 Although medical students are instructed in the principles of constructing appropriate questions and searching the literature for answers, it is difficult to observe and assess these skills objectively, especially among doctors-in-training, for whom it is particularly critical to establish good practices. Objective structured clinical examinations (OSCEs) offer the appealing potential to assess a range of aptitudes in the setting of a realistic patient interaction.
What was tried? Twenty-four obstetrics and gynaecology
residents in postgraduate years (PGY) 1–4 participated in
a five-station OSCE that assessed a range of abilities
through a set of scenarios involving standardised patient
(SP) interactions and focused skills testing. In the medical literature station, the patient presented for counselling after her sister had been diagnosed with a BRCA-1
gene mutation. The residents were given instructions to
use the materials available through the online medical
library to locate evidence to facilitate educating the
patient about her risk for cancer and the effect of riskreducing surgery. The station had three parts: a literature
review by the resident (8 minutes); interaction with the
SP (8 minutes), and feedback with a faculty member
(5 minutes). Using a remote desktop software program, a
faculty member, who is also a clinical librarian, observed
the resident’s search. Following the interaction with the
patient, the faculty member discussed the framing query
used by the resident and provided feedback. The faculty
member rated the search according to source of information, framing query and strength of data based on a 3point scale (Does not meet expectations, Meets expectations, Exceeds information). The data were analysed
using IBM SPSS Version 19 (IBM Corp., Armonk, NY,
USA). Interactions at the station were videotaped so that
they could be used for further training and research.
What lessons were learned? All residents were able to complete the station and received ratings of at least 'Meets expectations' on all items. Residents received a rating of 'Exceeds expectations' if they used PubMed Clinical Queries, EMBASE or the Cochrane Library of Systematic Reviews and demonstrated a systematic approach to gathering data by using a real clinical question, or located prospective studies or clinical reviews. For all 24 residents, the mean ± standard deviation (SD) score for 'Exceeds expectations' was 37 ± 35%; mean ± SD scores by PGY group were 44 ± 39% for PGY-1 residents, 47 ± 29% for PGY-2 residents, 41 ± 41% for PGY-3 residents, and 18 ± 31% for PGY-4 residents. Although these results are not statistically significant given the small sample size, it is notable that junior residents performed better than senior residents, and that performance varied greatly among individual residents. This shows us that residents' use of the medical literature may be assessed using an OSCE format, and that it may be worthwhile to perform an objective assessment of this skill in order to evaluate residents' ability in this important area. Further evaluation of this method will help to refine the scoring system and establish the validity of the assessment, in addition to determining the areas in which additional evidence-based training is needed.
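For readers who want the arithmetic behind these summaries, the following minimal sketch shows how the mean ± SD of per-resident 'Exceeds expectations' percentages could be computed, overall and by PGY group. It is written in Python rather than the SPSS package the authors used, and the scores in it are hypothetical placeholders, not the study data.

    from statistics import mean, stdev

    # Hypothetical per-resident scores: the percentage of station items each
    # resident had rated 'Exceeds expectations', keyed by postgraduate year.
    # These numbers are placeholders, not the data reported in the text.
    scores_by_pgy = {
        1: [80.0, 20.0, 60.0, 15.0, 45.0, 40.0],
        2: [50.0, 75.0, 30.0, 45.0, 60.0, 20.0],
        3: [90.0, 10.0, 55.0, 0.0, 35.0, 55.0],
        4: [0.0, 40.0, 10.0, 25.0, 5.0, 30.0],
    }

    # Overall mean ± SD across all residents.
    all_scores = [s for group in scores_by_pgy.values() for s in group]
    print(f"All residents: {mean(all_scores):.0f} ± {stdev(all_scores):.0f}%")

    # Group-level mean ± SD, as reported per PGY year in the text.
    for pgy in sorted(scores_by_pgy):
        group = scores_by_pgy[pgy]
        print(f"PGY-{pgy}: {mean(group):.0f} ± {stdev(group):.0f}%")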
REFERENCE
1 Accreditation Council for Graduate Medical Education. ACGME Outcome Project: General Competencies. 1999. http://www.acgme.org/outcome/comp/compFull.asp. [Accessed 15 July 2012.]
Correspondence: Eseosa Asemota, Department of Internal Medicine, New York University Medical College, New York, New York 10010, USA. Tel: +1 857 417 4841; E-mail: eseosa@post.harvard.edu
doi: 10.1111/medu.12160
Assessment and feedback dialogue in online distance learning
Rola Ajjawi, Susie Schofield, Sean McAleer & David Walker
What problems were addressed? Dundee's Centre for Medical Education has delivered a paper-based Masters in Medical Education for over 20 years. Our new online distance learning programme, which commenced in November 2011, already has 569 students (70% UK-based, 90% medically qualified). Flexibility, including provision for rolling enrolment and the absence of assignment deadlines, is valued by students, but hampers social learning opportunities. The InterACT (Interaction and Collaboration via Technology) project tackles the lack of assessment and feedback dialogue and the isolation felt by both students and assessors.

Without dialogue, teachers may invest time in producing feedback which students may never use or may have difficulty understanding.1 Feedback often arrives after students have progressed to the next module, limiting the opportunity for dialogue or redress to assist students' knowledge building.
Self-evaluation and 'feedforward' opportunities are compromised.

Because of the programme's size, we employ a distributed model of marking whereby academic staff on and off campus assess and provide feedback. There is no continuity in terms of establishing a relationship with the student or a shared context for the assessment task.1 Assessors are unsure about the usefulness of their feedback and about whether similar issues have been identified before.
What was tried? InterACT was designed to provide a longitudinal repository of feedback and student reflection on previous assignments in the new online course. To encourage the use of feedback in future assignments, assessments were blueprinted against learning outcomes and reviewed to promote better sequencing.

For each assignment, students complete a cover page on which they evaluate their work qualitatively against the assignment's criteria, request specific feedback, and identify how previous feedback informed the current work. Tutors provide feedback about the assignment and respond to students' self-evaluations, thus establishing dialogue.

Students then upload their marked assignments into their personal journals and answer four questions relating to their interaction with and understanding of the feedback.
Explicit comparison of external feedback against internal evaluative judgements of one's work is the essence of informed self-assessment.1 An e-mail alerts the tutor to a student's posting in his or her journal, and the dialogue can continue as required. The journal acts as a repository for all of the student's work throughout the programme. It is accessible to all course tutors but, among students, only to its owner. Tutors can quickly access previous feedback, which supports a longitudinal and programmatic approach to assessment. The process is introduced in the induction module to ensure familiarity.
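To make this loop concrete, the sketch below models the cover page, journal entry and e-mail alert as a simple data structure. It is an illustration under stated assumptions only: every class and function name here is hypothetical, and InterACT itself lives in the programme's online learning environment rather than in code like this.

    from dataclasses import dataclass, field

    def notify_tutor(student_id: str, assignment_id: str) -> None:
        # Stand-in for the automatic e-mail alert that invites the tutor
        # to continue the dialogue.
        print(f"Alert: student {student_id} posted reflections on {assignment_id}")

    @dataclass
    class CoverPage:
        self_evaluation: str   # qualitative judgement against the assignment criteria
        feedback_request: str  # the specific feedback the student asks for
        feedforward_note: str  # how previous feedback informed the current work

    @dataclass
    class JournalEntry:
        assignment_id: str
        cover_page: CoverPage
        tutor_feedback: str = ""
        # Answers to the four questions on interacting with and
        # understanding the feedback.
        reflection_answers: list = field(default_factory=list)

    @dataclass
    class StudentJournal:
        student_id: str
        entries: list = field(default_factory=list)  # repository across the programme

        def post(self, entry: JournalEntry) -> None:
            self.entries.append(entry)
            notify_tutor(self.student_id, entry.assignment_id)

    # Illustrative usage: one assignment cycle for a hypothetical student.
    journal = StudentJournal("s123")
    journal.post(JournalEntry(
        "Assignment 1",
        CoverPage("Met criteria 1-3; weaker on criterion 4.",
                  "Is my argument coherent?",
                  "Tightened structure following last module's feedback."),
    ))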
What lessons were learned? Tutors find the process valuable for dialogue with students; it allows tutors to receive immediate feedback on their feedback, clarifies issues with the assessment and module content, and efficiently closes the evaluation loop. Preliminary student evaluations indicate that students understand the pedagogical rationale and appreciate the opportunity to ask further questions about their work.
The process must be easy for students, tutors and administrators in order to ensure engagement. Automatic alerts and reminder e-mails stimulate participation. We are now evaluating InterACT, examining stakeholder satisfaction and value as well as improvements in student learning. We will introduce a patchwork assessment in which students identify how they meet exit outcomes at certificate and diploma levels using evidence from their journal reflections, and we are exploring opportunities for peer evaluation.
REFERENCE
1 Boud D, Molloy E. Rethinking models of feedback for learning: the challenge of design. Assess Eval High Educ 2012;1–15.
Correspondence: Rola Ajjawi, Centre for Medical Education, University of Dundee, Room 9, Tay Park House, 484 Perth Road, Dundee DD2 1LR, UK. Tel: +44 1382 381973; E-mail: r.ajjawi@dundee.ac.uk
doi: 10.1111/medu.12158
Promoting evaluation culture through partnership and interactive tools
Omayma Hamed & Stacey Friedman
What problems were addressed? Curriculum evaluation was appraised in the Faculty of Medicine, King Abdulaziz University. Stakeholders were minimally engaged and found improvement tools inefficient. Evaluation was confined to a student end-of-course questionnaire. Processes and tools were not standardised; results were not interpreted within context, were communicated only to higher management, and were not utilised.
The goal of an evaluation system is improvement of curriculum design, implementation and impact. Strategic, operational and social aspects must be considered. We used a holistic approach to set up a comprehensive evaluation system that engages all stakeholders (faculty members, students, staff and leaders) through a relationship-centred approach and secures a culture of learning through evaluation.
What was tried? The engagement of stakeholders, including socially and politically powerful players, in all stages of the evaluation process (design, implementation, interpretation, reporting, follow-up) resulted in a comprehensive evaluation system based on multiple perspectives. It comprises three interdependent cycles: (i) programme design; (ii) ongoing course review with recommended remedial actions, and (iii) continuous quality improvement reporting, with improvement action plans (IAPs) at course, curriculum stage and institutional levels.
To secure stakeholders' adherence and preserve efficiency, a feedback–feedforward cycle was established by designing an electronic curriculum spreadsheet. It guaranteed e-interaction by: providing asynchronous counselling; providing needs-based learning materials; providing documentation; evaluating the alignment of objectives, teaching and assessment strategies; weighing the teacher- and student-related workload against course duration; observing performance progress by comparing evaluation results using simple charts and triangulated results, and monitoring the IAPs. 'e-Blueprint' software was designed to help faculty members improve the assessment of students.
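As an illustration of the kind of alignment check such a spreadsheet can automate, the hypothetical sketch below flags course objectives that lack a matching teaching activity or assessment item. All identifiers are invented for the example and do not describe the actual tool.

    # Hypothetical alignment check: flag objectives with no teaching
    # activity or no assessment item mapped to them.
    objectives = {"O1": "take a focused history",
                  "O2": "interpret basic laboratory results"}
    teaching = {"O1": ["bedside tutorial"], "O2": []}             # activities per objective
    assessment = {"O1": ["OSCE station 3"], "O2": ["MCQ set B"]}  # items per objective

    for oid, description in objectives.items():
        gaps = []
        if not teaching.get(oid):
            gaps.append("no teaching activity")
        if not assessment.get(oid):
            gaps.append("no assessment item")
        if gaps:
            print(f"{oid} ({description}): {', '.join(gaps)}")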
To guarantee monitoring of the system, we standardised, coordinated and publicised processes, validated the diverse and learner-centred tools, and triangulated and interpreted results within context in an affirmative way. Strategically, our approach aimed to engage the leadership and curriculum committees early in the process, and to align evaluation with quality assurance and accreditation standards.
What lessons were learned? Intertwining humanistic with analytic approaches pulled in various stakeholders.