University of Leicester

Version 1.1 – 12th May 2011
11/EM/0186
Dr. Damian Roland – Evaluating outcomes of interventions that change practice
Full Title
Refining evaluation methodologies for interventions that change practice: A model to assess the
effectiveness of an e-learning tool.
Summary
A plethora of implementation strategies and educational programmes aim to improve
the clinical practice of health care professionals. Improving the integration of evidence-based
practice and high-quality care will improve patient safety and clinically relevant outcomes. A
number of these strategies are delivered via information technology systems and labelled as ‘e-Health’
or ‘e-Learning’. Assessing their impact is difficult, as clear objective outcomes are not always
available and are often multifactorial in nature. A number of theoretical constructs exist for evaluating
an intervention designed to improve performance. No single approach has been universally adopted,
owing to the wide range of individual and organisational factors that affect outcomes before, during
and after the intervention (Bates 2004). An evaluation model which enables the
effectiveness (the reasons behind the change in outcomes) to be identified would permit a more
systematic evaluation. As e-health resources become limited in the current economic
climate, such a tool would provide an efficacy specification for a health care related intervention.
Aims and Objectives
The main research question to be addressed is: “Which evaluative methods enable the effectiveness of
an e-learning intervention to be assessed?”
This project has the following aims:
1. Definition, via literature review, of relevant outcome measures for assessing an audio-visual
training intervention
2. Production of an evidence-based online training package using audiovisual descriptions of
feverish children
3. A quantitative description of the training package, observing participants' performance on
pre-defined outcome measures
4. A qualitative description of the effectiveness of the training package, via interviews with
participants at the extremes of performance
Background
Frameworks for assessing training and educational interventions have been well formulated in a
number of scientific disciplines. There are several key works in the literature, with Kirkpatrick's
(1976) training evaluation model being the best known, although not necessarily the most robust (Bates
2004). It is defined by four distinct levels of outcome, approached in a stepwise fashion.
Subsequent authors have criticised the model as too simplistic (Ford and Kraiger
1995; Salas and Cannon-Bowers 2001). However, the Kirkpatrick model remains a valid
methodology, with systematic reviews using the process to examine training effectiveness (Arthur
2003). Regardless of the approach, it is important to differentiate between evaluation, a method of
measuring learning outcomes, and effectiveness, a more theoretical approach to understanding the
reasons behind those outcomes (Alvarez 2004). This work proposes an evaluation
methodology, based loosely on the Kirkpatrick model but mindful of its limitations, which allows
effectiveness to be determined. The four key domains of the Kirkpatrick model which may be applied
to a medical intervention are learner satisfaction, learner knowledge, learner behaviour change and
organisational change. Although others have argued that contextual factors not classified under these
domains may be significant (Bates 2004), their effectiveness is not easily quantified, e.g. the nature of
interpersonal support in the workplace for skill acquisition (Bates 2000). This study will concentrate on
whether effectiveness can be determined across these four domains as the initial body of
research. Other domains will be considered in light of an ongoing literature review of the
outcome measures used to investigate the training intervention employed in the proposed evaluative
model. Other evaluative methods may subsequently be investigated if results indicate
discriminatory effects across these more extensively studied domains.
The concept of e-Learning, loosely defined as “the use of internet technologies to enhance knowledge
and performance” (Ruiz 2006), is now well established in the medical community. It has uses in
under- and post-graduate education, but also in improving communication and delivering interventions. Dr.
Roland has a particular interest in patient video clips (PVCs): video footage demonstrating a
particular sign or symptom. The advent of high-fidelity but relatively inexpensive
recording equipment has enabled the production of footage of more than adequate quality for
interpretation. PVCs are used in all areas of medicine, from demonstrating practical procedures to
enhancing clinical skills. A recent National Patient Safety Agency report encouraged the widespread
availability of “Spotting the Sick Child” (NPSA 2009). This is a website containing a large selection of
PVCs of unwell children and was produced by members of the research team. Evidence that PVCs
improve students' cognitive evaluation in problem-based learning, compared with written
material, exists in the medical education literature (Balslev 2008, De Leng 2007, Kamin 2003).
Theoretically the use of PVCs in medical education when compared to standard written text has many
advantages. These include but are not restricted to (De Leng 2007):
a) A video provides a more holistic picture of a patient. ‘Cues’ in a written text to demonstrate
a particular condition or feature are frequently obvious and do not represent actual pattern
recognition seen in clinical practice
b) Videos enable the viewer to experience the patient in their own way not filtered by a third
party’s perspective and biases.
c) Video conveys many non-verbal cues
d) Adult learners are now used to obtaining information from multiple media
Feedback on “Spotting the Sick Child” has been positive; however, the project team is aware that little, if
any, evidence exists showing patient benefit or change in clinical practice among post-graduate staff.
Validating an evaluative methodology enabling effectiveness to be demonstrated would allow Dr.
Roland to subsequently investigate the features of PVCs which improve understanding, knowledge
acquisition and confidence.
To test our evaluative framework we have selected a specific example where physician judgement is
important and in which the research team has considerable experience in the use of PVCs: the NICE
guideline on the management of feverish children. This guideline is described on the “Spotting the Sick
Child” website. By focusing on one part of the product, the methodologies developed in this
work can ultimately be used to test the whole product. The NICE guideline on the management of the feverish child
provides information on how to identify the risk of serious illness according to a visually appealing
traffic light system of evidence based signs and symptoms. Some of these are poorly demonstrated
by the written word and require experience in paediatric emergency care to apply objectively. Our
hypothesis is that audio-visual representation of the traffic light system would improve understanding
of, and adherence to, the guideline. Without a robust evaluative framework this hypothesis cannot be
tested in a valid manner; as current evidence in this area is very limited, this study is both novel and
necessary.
Plan of Investigation
The study is spread over three stages: stages one and two occur in the initial 18 months, and stage
three in the latter 18 months.
Stage One – a) Narrative Summary
It is important that the evaluative model is challenged by an intervention which is multi-dimensional.
The intervention must also be evidence based and use principles and techniques which are
recognised as standard practice. This is to ensure that the model is testing an intervention that is
likely to be used again in the future. Building on work the principal investigator has already performed,
a thorough literature review will be completed to provide an evaluation of audiovisual materials which
change practice. The search strategy will identify the health care settings in which educational patient
video clips (PVCs) have been utilised and the outcome measures that have been used to assess them.
The literature review will be undertaken across medical and educational databases and relevant
search engines. The review has commenced under the guidance of Sarah Sutton, a Senior Clinical
Librarian at the University of Leicester. Preliminary work shows that a formal systematic meta-analysis
of the outcomes of the use of PVCs will not be possible, owing to the inadequate and heterogeneous
nature of the available articles. A narrative summary of the published literature will therefore be performed using
the guidance of the Economic and Social Research Council (ESRC) methods programme.
Independent review of each article will be performed by Dr. Roland (principal investigator) and Dr.
Holger Wahl (paediatric registrar) with the project team providing clarification on any areas of
uncertainty.
Stage One – b) Production of PVC focused educational package
A web based educational module will be developed based on the traffic light system of the NICE
Feverish Illness in Children Guidelines. The traffic light system lists a number of key physiological and
behavioural characteristics placed in order of risk with the required intervention or disposition
described for each colour. The module will demonstrate each of these with a video clip example. This
will last approximately 45 minutes and will be accompanied by a guideline based knowledge test. The
module will be produced by Wild Knowledge (an education informatics company). The formatting
will be based on feedback from users of the “Spotting the Sick Child” product, which was
developed by HERADU (a University of Leicester education initiative), and on the evidence synthesis.
This work is easily adapted from the “Spotting the Sick Child” project and will not require
considerable primary development by Dr. Roland. Both the educational package and knowledge test
will be reviewed by a standard setting group of leading academics, educators and clinicians in the
field of paediatric acute care. This will contain:
Dr. Ffion Davies (Chair of the Intercollegiate Group of Paediatric Emergency Services)
Dr. Matthew Thompson (Clinical Lecturer and GP)
Dr. Peter Barry (Senior Lecturer in Paediatric Intensive Care)
Dr. Colin Powell (Senior Lecturer in Child Health)
Dr. Andrew Long (RCPCH Officer for Assessment)
Stage Two – Development of the model
Kirkpatrick's evaluation framework (Kirkpatrick 1998) is an established method of assessing the impact
of an educational intervention. The Kirkpatrick model defines four levels; each evaluation will be
tested on groups of junior doctors for feasibility and informal appraisal prior to the before-and-after
study.
i. Satisfaction with the intervention – Will be measured via a previously developed e-learning
satisfaction questionnaire (Wang 2003). A topic specific version will be devised in collaboration with
Prof. Richard Baker, who has experience in this methodology. To capture the attitudes
and experiences of junior doctors' interactions with febrile children, focus groups will be undertaken
under the supervision of Dr. David Matheson, who has a particular interest in this area. The key
outcomes of these focus groups will be incorporated into the Likert-based questionnaire to
enable determination of change in perceived confidence as a result of the educational package.
ii. Learning – Will be measured via a web-based knowledge test assessing participants' ability to
correctly assign signs and symptoms according to the NICE Feverish Illness in Children guideline, via
pre-recorded PVCs. Management decisions appropriate to the signs and symptoms will also be
noted. Completion of this test will be observed by an independent invigilator.
iii. Learner Behaviour – Individual observation of the doctor-patient interaction with reference to the
NICE Feverish illness in Children guideline. A checklist of relevant questions and examinations based
on the RCPCH Feverish illness in Children Audit (awaiting publication) will be developed to be used
by nursing staff. A retrospective review of 293 case notes has already been performed to determine
intra-doctor variation in performance. This will be used to determine the number of assessments
required to obtain stable estimates of reliability. Working with Dr. Graham Martin (a social scientist at
the University of Leicester) steps will be taken to limit ritualised compliance which may be inherent
within the observations.
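The proposal does not state which formula will be used to derive the number of assessments from the observed intra-doctor variation. One common choice for this kind of reliability planning is the Spearman-Brown prophecy formula; the sketch below is a hypothetical illustration under assumed reliability values, not the study's stated method:

```python
def assessments_needed(reliability_single: float, reliability_target: float) -> float:
    """Spearman-Brown prophecy formula: the number of repeated observations
    needed to raise the reliability of a single observation (e.g. an ICC
    estimated from the retrospective case-note review) to a target level."""
    return (reliability_target * (1 - reliability_single)
            / (reliability_single * (1 - reliability_target)))

# Illustrative values only: if one observed assessment has reliability 0.4,
# a target reliability of 0.8 would require about 6 assessments per doctor.
print(round(assessments_needed(0.4, 0.8), 2))  # 6.0
```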
iv. Organisational Change/Patient Outcome – Departmental audit of guideline compliance using the
proformas developed by the RCPCH in their national audit of the feverish illness in children
guidelines.
Stage Three - Training Intervention and measurement of effectiveness
The proposed model, shown in Figure One, has been developed by the project team to enable
collation of all features of the Kirkpatrick evaluation model as well as providing a process by which to
judge effectiveness. Further outcomes will be added to the model if the literature review identifies
measures which do not belong to the domains defined by Kirkpatrick but whose effectiveness can
nonetheless be judged. Newly
inducted junior doctors (Foundation Year 2 and Specialty Trainee 1 doctors) commencing in the
Leicester Royal Infirmary in August 2012 will be recruited to the study; those who have completed
more than 4 months' paediatrics at a post-graduate level, or who hold any part of the DCH or
MRCPCH examinations, will be excluded. All will undergo the evaluative measures prior to undertaking the
educational package. They will repeat these measures at two weeks and three months. Analysis of
performance will take place across the following domains. A key feature of the project design is the
ability of HERADU to design a website delivering the intervention which also collates questionnaire
responses and knowledge test results.
i) Change in attitude in managing feverish children (via questionnaire)
ii) Change in performance (via knowledge test)
iii) Change in adherence with best practice (via observed behaviour)
iv) Change in audited departmental guideline adherence
Figure One
Whilst potential confounders are acknowledged, completing the assessments immediately following
the intervention and three months later will allow implicit learning via service provision to be partially
accounted for.
Statistical advice has been provided by Dr. Nick Taub, a medical statistician at the NIHR Research
Design Service in Leicester. In any given year 35-40 junior doctors commence in the Emergency
Department. These numbers would provide adequate information to enable the feasibility and scope
of the four outcomes to be examined. However, the primary purpose of the study is not to validate the
intervention but to elucidate whether the tool can examine the reasons behind differing outcomes. This
will be determined by semi-structured interviews with those doctors at the extremes of
performance. These will explore how the audio-visual module modified their practice and enable
definition of the benefits of, and barriers to, effective utilisation. It is anticipated that approximately the top and
bottom ten percent of participants will be interviewed, although the final determination will depend on
the spread of results from the quantitative analysis. It is important that a range of experiences is
explored, but the project team recognise the time requirements of transcription and thematic analysis
within a fixed time period. The outcome of the interviews will be to identify common themes for each
of the evaluative measures used. The research question will be addressed by determining which
evaluative method produces the greatest participant awareness of effectiveness (whether positive or
negative) and whether this correlates with actual performance. The completion of stage three will:
i) Enable a description of the model to be published proposing which evaluative methods provide the
most useful information on perceived effectiveness
ii) Provide guidelines on the production of similar audio-visual modules
iii) Enable power calculations to be formulated for future randomised controlled trials of PVC
interventions.
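As a hypothetical illustration of the kind of power calculation stage three would inform, the sketch below uses the standard normal-approximation sample-size formula for a two-arm comparison; the effect size of 0.5 is an assumed value for illustration, not an estimate from this study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect a standardised effect size
    (Cohen's d) in a two-sided two-sample comparison, using the normal
    approximation to the sample-size formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Assumed medium effect (d = 0.5), 5% significance, 80% power.
print(n_per_group(0.5))  # 63 participants per arm
```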
References
Alvarez K, Salas E, Garofano C. An integrated model of training evaluation and effectiveness. Human
Resource Development Review 2004; 3(4): 385–416
Balslev T, de Grave W, Muijtjens A M M, Eika B and Scherpbier A J J A. The development of shared
cognition in paediatric residents analysing a patient video versus a paper patient case. Adv in Health
Sci Educ 2008
Bates R A, Holton E F III, Seyler D A, Carvalho M A. The role of interpersonal factors in the
application of computer-based training in an industrial setting. Human Resource Development
International 2000; 3(1): 19–43
Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of
beneficence. Evaluation and Program Planning 2004; 27: 341–347
De Leng B A, Dolmans D H J M, Van de Wiel MWJ, Muijtjens A M M, Van der Vleuten C PM. How
video cases should be used as authentic stimuli in problem based medical education. Medical
Education 2007; 41:181-188
Ford, J. K., & Kraiger, K. The application of cognitive constructs and principles to the instructional
systems design model of training: Implications for needs assessment, design, and transfer.
International Review of Industrial and Organizational Psychology 1995; 10: 1 – 48.
Kamin C, O'Sullivan P, Deterding R, Younger M. A comparison of critical thinking in groups of
third year medical students in text, video and virtual PBL case modalities. Acad Med 2003; 78: 204–211
Kirkpatrick D. Evaluation of training. In: Craig R L (Ed), Training and development handbook: A
guide to human resource development. New York: McGraw-Hill, 1976: 317–319
Kirkpatrick D. Evaluating Training Programs. 2nd ed. San Francisco: Berrett-Koehler, 1998.
NPSA. Review of patient safety for children and young people. London: National Patient Safety
Agency, 2009
Pearson G A (Ed). Why Children Die: A Pilot Study 2006; England (South West, North East and West
Midlands), Wales and Northern Ireland. London: CEMACH, 2008
Ruiz J, Minzter M, Leipzig R. The Impact of E-Learning in Medical Education. Acad Med. 2006;
81:207–212.
Salas E, Cannon-Bowers J A. The science of training: A decade of progress. Annual Review of
Psychology 2001; 52: 471–497
Schon D A. The Reflective Practitioner: How professionals think in action. New York: Basic Books, 1983
Wang Y-S. Assessment of learner satisfaction with asynchronous electronic learning systems.
Information & Management 2003; 41: 75–86