Medical Teacher, Vol. 27, No. 6, 2005, pp. 509–513

Formative assessment: a key to deep learning?

ALISON RUSHTON
University of Birmingham, Edgbaston, Birmingham, UK

SUMMARY A paradigm shift in assessment culture has emphasized the importance of formative assessment. The existing evidence identifies feedback as the central component of formative assessment. Feedback provides information about the gap between the actual and desired levels of performance. The existing evidence suggests various characteristics of effective feedback, for example, ensuring that feedback is construct-referenced and student-referenced. An exploration of the existing educational literature provides evidence for the emphasis on formative assessment. This paper evaluates the pedagogical implications of formative assessment for deep learning. A constructivist approach, emphasizing the principles of adult learning and placing emphasis on the student, is advocated. However, in applying the wider educational literature to healthcare, it is questionable whether the paradigm shift in assessment culture has occurred, as the majority of the existing literature is centred on summative assessment.

Introduction

Many purposes and roles of assessment are utilized throughout the existing educational systems, the most acknowledged of which is the evaluation of learning outcomes through summative assessment. Gipps (1994) describes a paradigm shift within assessment from a testing to an assessment culture, and despite this historical focus on summative assessment there is considerable evidence supporting the importance of formative assessment, as highlighted by Black and Wiliam's (1998) pivotal review. Formative assessment can be considered as a construct, although contemplation of its description is problematic, as evidenced by the literature. A commonly used definition describes formative assessment as: "encompassing all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged" (Black & Wiliam, 1998, pp. 7–8).

This paper explores feedback as central to formative assessment and its links to deep learning, seeking to explore the educational literature and its pedagogical lessons for healthcare educational practice, acknowledging from the outset that there is minimal attention to formative assessment within the healthcare literature. The centrality of feedback to formative assessment is supported by a synthesis of meta-analyses that found feedback to produce the most powerful single effect on achievement (Hattie, 1987). Ramprasad (1983) defined feedback as information about the existing 'gap' between the actual level and the reference level of performance, stressing that information was only 'feedback' if used to alter the gap. Messick (1975) had previously documented the necessity for information gained to close the gap to be construct-referenced, and therefore related to a developmental framework in the area being addressed. For example, a student might be provided with a tool to assist planning of the physical examination of a patient, enabling the development of clinical reasoning skills when assessing a patient, with the tool then forming the basis of subsequent discussion. Sadler (1989) highlighted that progress is inhibited if this gap is too wide, and emphasized the role of the student in taking action to affect the gap.
Feedback can usefully be considered as possessing two key components: the teacher providing feedback and the student receiving feedback (Hattie & Jaeger, 1998). This necessitates consideration of the different ways in which feedback may be provided and perceived, dependent upon an individual's model of self-esteem. Biggs (1998) argues that the effectiveness of formative assessment is dependent upon the student's accurate perception of the gap, as well as their motivation to address it. This argument is facilitated from a constructivist perspective that views the student's involvement in the process as essential, and therefore advocates the use of strategies such as self-assessment. Sadler (1998) identifies this important dimension of feedback as the student's interpretation of it, suggesting that students need to be educated to interpret feedback in the context of their current and future work. By definition, formative assessment is criterion-referenced, which Harlen and James (1997) develop further to also describe it as ipsative (student-referenced). As the aim of the formative assessment process is to assist the development of the learner, an ipsative assessment therefore permits consideration of an individual's particular circumstances, as highlighted above. This focus on the quality of feedback interestingly excludes some aspects of healthcare practice described as formative assessment, for example, feedback on class standings/test scores in isolation, with no reference to content, as described in some papers (Houghton & Wall, 2001; Khan et al., 2001; Sibert et al., 2001; Townsend et al., 2001).

Feedback

Black and Wiliam (1998) perceive feedback as central to formative assessment, defining it as: "Any information that is provided to the performer of any action about that performance" (p. 53). Feedback is part of the interactive components of teaching and learning and can therefore be seen as central to pedagogy. There are many ways in which teachers can provide feedback to assist the development of student learning. The important issue is that, whatever the selected method, it must be able to provide information about what the student does and does not know, as well as providing direction for improvement (Hattie & Jaeger, 1998). Feedback can be provided on an individual and a group basis. Interestingly, this is not an issue explored in detail in the literature, although Hattie (1987) found that the combination of feedback and individualization produced a powerful effect size for achievement. However, it was acknowledged that, based upon existing work, the key to the effect size was the feedback itself. Task-centred feedback, as compared with feedback linked to personal self-esteem (i.e. goal-orientated), has been shown to have the most positive effect on attitudes and achievement (Black & Wiliam, 1998). Little of the healthcare literature addresses this issue, with few papers planning individual and task-centred feedback, with the exception of Rahman (2001). The experience of the teacher is therefore an important factor in formative assessment.
An experienced teacher will possess skills, knowledge, attitudes, awareness of standards, and expertise in evaluative skills (Sadler, 1998) that have contributed to their tacit professional knowledge. An experienced teacher will also have developed automaticity (Chi et al., 1988) in key aspects of practice, so that they are more able to invest time in providing feedback to students. Referring back to Black and Wiliam's (1998) narrow definition of feedback, it can therefore be seen clearly that feedback that enhances learning must be wider than this. Hattie and Jaeger (1998) develop this to define feedback as the "provision of information related to the understanding of the constructions that students have made from the learned/taught information", and as "polymorphous, referring to subsequent information aimed at assisting the learner in meeting the goals of the learning process" (p. 113). Their considered use of the word 'subsequent' emphasizes the ongoing dimension to feedback, suggestive of a continued rather than a one-off process, an issue that is often overlooked in the healthcare literature, where formative assessment is frequently seen as a single event, in a similar way to summative assessment (Houghton & Wall, 2001; Khan et al., 2001; Sibert et al., 2001; Townsend et al., 2001).

Learning

Learning can be considered to encompass 'deep learning' that includes understanding and interpretation (Entwistle & Entwistle, 1991), although the authors acknowledge that teachers and institutions encourage the lower levels of learning through an essentially quantitative approach to assessment. Gipps (1994) recognized the potential that assessment has for affecting learning, and the intricate links between the two are now widely recognized and inform pedagogy. However, in the literature formative assessment is linked more strongly to teaching than to learning. The literature highlights many claims regarding the positive effects of formative assessment on learning, although Torrance and Pryor (1998) justifiably contend that these claims are overstated and under-theorized, particularly when considering deep learning. Further work applying the existing theories to practice is therefore required. Klenowski (1996) emphasizes the importance of teachers' awareness of the interrelationship between the three areas of assessment, curriculum and pedagogy, highlighting that a move to encourage formative assessment necessitates changes in curriculum and pedagogy. There are many aspects of classroom interaction that contribute to formative assessment, such as discourse, questioning, giving tests and observation. Black and Wiliam (1998) also discuss documented changes in pedagogy that have taken a strategic approach to developing the use of formative assessment, for example the development of mastery learning, curriculum-based assessment and portfolio systems. In healthcare in particular, portfolio systems (Friedman Ben David et al., 2001; Pitts et al., 2002) and an emphasis on problem-based learning have illustrated this development (Schwartz et al., 2001). There is some evidence to suggest that students prefer frequent testing (Iverson et al., 1994) and that feedback provided by frequent testing can improve learning (Scheerens, 1991). A review of the evidence (Peckham & Roe, 1977) found that earlier studies saw the effects as beneficial to learning and student motivation, but that later research suggested that the benefits were dependent upon other variables, including the context.
In a meta-analysis of the literature, Bangert-Drowns et al. (1991a) found that students taking more than one test in a term scored higher than those taking no tests. Interestingly, they also found that the degree of improvement reduced as the frequency of testing increased. Dempster (1992), in a review of the literature on tests to facilitate learning, highlighted key issues to ensure proper use of testing, for example testing material soon after delivery, frequent and cumulative use of tests, and providing feedback soon after testing. In their meta-analysis, Bangert-Drowns et al. (1991b) found that feedback had the greatest influence on performance if provided prior to provision of the answers. There is considerable literature addressing this area, but there is considerable variation between the existing studies, which limits the validity of using such meta-analyses to inform practice.

Application of theory to practice

Torrance and Pryor (1998) define two heuristic models of 'convergent' and 'divergent' assessment to facilitate understanding of formative assessment. Convergent assessment is characterized by a behaviourist, rigid focus on teaching using a pre-planned programme (Bloom, 1971), with formative assessment considered as continuous or repeated summative assessment. This teacher-centred approach deconstructs knowledge, uses hierarchies of learning and testing of the parts, and is commonplace in the healthcare literature (Houghton & Wall, 2001; Khan et al., 2001; Sibert et al., 2001; Townsend et al., 2001), although the ongoing approach to formative assessment is absent. In contrast, divergent assessment is characterized by a constructivist approach with an adaptable process placing emphasis on the student (student-centred). For this model the intention is to teach in the zone of proximal development (Vygotsky, 1978), contributing to a joint assessment process between the teacher and the student (Pryor & Torrance, 1996), the argument being that formative assessment taking place in the zone between student and teacher facilitates the best performance. The process involves the teacher and student collaborating to enable the best performance by the student, something that is recognized in teaching but merits further attention in assessment, again highlighting the importance of the student within the process. There are to date few examples in the healthcare literature that draw attention to this issue. Some studies, for example Rahman (2001), go some way towards addressing this issue by using a 1:1 discussion to analyse performance, although they remain focused on the assessment of learning outcomes in the absence of summative assessment. In translating the constructivist approach into activity in a classroom, the emphasis moves towards key issues of teacher-student interaction, understanding of the effect of the process on the student, the scaffolding of learning to progress tasks, collaboration and being forward focused, and the 'appropriation' of learning (Torrance & Pryor, 1998). Formative assessment can therefore be seen as a dynamic, interactive and evolving process, emphasizing its complexity (Lidz, 1995), with the teacher as a facilitator. It is therefore central to pedagogy, emphasizing the necessity of linking the information from assessment to context (Tittle, 1994).
Harlen and James (1997) support this by arguing that validity is essential, but that an assessment is not formative unless it is followed through with action for the development of learning. This supports the move to authentic assessment in education (Guba & Lincoln, 1989). Current educational models within post-secondary education, such as lifelong learning, authentic learning and self-directed learning, support a constructivist approach. This is reflected in students' active involvement in their own development (Savery & Duffy, 1995), the movement from a focus on teaching to a focus on learning, and from a teacher-centred to a student-centred approach. A practical emphasis on group learning also facilitates the educational benefits of cooperative learning (Dewey, 1938). Key premises are that students are self-directed and autonomous adult learners (Knowles, 1990), with the concept of critical reflection being pivotal (Brookfield, 1987). In addition, developing Vygotsky's work, the literature acknowledges that post-secondary education encourages transformation: a process whereby the student is facilitated in an exploration of how they view themselves and the world (Mezirow, 1994).

The self-evaluative skills of the student are therefore integral to learning. The philosophy of student-centred learning supports the development of this process, with feedback an integral component (Klenowski, 1995). Courses therefore need to develop a student's self-evaluation as part of the process of formative assessment. A limitation of this approach, however, is the student who lacks the metacognitive skills to accurately evaluate their own learning. Depending upon the nature of a particular lesson, it is necessary to adapt the relationship between teacher and student to facilitate formative assessment. For example, in a session exploring a student's clinical reasoning skills, metacognition is a key component and the students themselves are therefore an important resource for feedback.

Cowie and Bell (1999) usefully classified formative assessment as either planned or interactive, with interactive being formative assessment that occurs spontaneously in a classroom in an unplanned way. Using a combination of observation, interview and survey, the authors described how the teachers used planned formative assessment to assess progress of the whole class, and interactive formative assessment to mediate learning. The dimensions of the interactive model were all influenced by the teachers' previous experiences and pedagogical approach. Interestingly, the teachers saw interactive formative assessment as central to teaching and learning; the process is implicit in healthcare education, but not explicit within the existing literature.

It has been argued that the move towards the modularization of courses has reduced the opportunity for formative assessment (Yorke, 1998), although this could be debated as having the opposite effect, by focusing formative assessment on the subject area contained within a single module. As long as formative assessment is built into the planned curriculum there should be no reason why the opportunity for it is reduced. What does favour Yorke's opinion is that formative assessment will be limited by the separation of different subject areas into different modules, as commonly seen in the theoretical modules of healthcare courses, and the 'whole' may be missed from the formative assessment (and summative assessment!).
It can therefore be argued that developing formative assessment should involve a much wider perspective of reviewing the whole model of assessment developed for a particular course, aiding coherence across a course. This is an aspect explored explicitly in healthcare curriculum planning, where all components of the curriculum are subsequently integrated in clinically-based learning modules in which boundaries are removed. In the current climate of lifelong learning, Boud (2000) argues that a new concept of 'sustainable assessment' needs to be recognized, encompassing the characteristics required to underpin activities of lifelong learning. This concept develops the role of assessment to equip students with the preparation required to continue independent assessment of their future learning experiences. At the centre of this argument lies the premise that assessment is a key feature of lifelong learning, and in light of the previous discussion perhaps many assessment strategies inhibit this development at present. Boud (2000) therefore views the process of self-assessment as pivotal, although consideration of this meta-process is limited to the recent literature. In exploring this pedagogically, the creation of a climate using interactive feedback can perhaps develop lifelong learning.

Conclusion

Gipps (1994) drew attention in the wider educational literature to the necessity of a move away from a testing culture towards an assessment culture, with emphasis on the evaluation of learning. It is clear from the above discourse that there has been a change in the consideration of assessment and its associated issues over the years. As the existing evidence suggests, however, further changes are required in practice to enable effective development of formative assessment, involving consideration of teaching and learning strategies and, in particular, the provision of feedback. As illustrated in the above discussion, a search of the healthcare literature found few empirical studies addressing formative assessment. In addition, although these studies purported to discuss formative assessment, when judged against the definitions and debate of the wider educational literature the application of the paradigm shift to healthcare can be questioned on the basis of the research available to date. To be successful, the development of formative assessment must therefore be established within models of pedagogy. This necessitates a further move away from the current emphasis on the procedures and products of assessment to an emphasis on the processes of assessment and learning.

Practice points

• Formative assessment is an important process to enable learning, and in particular deep learning.
• Feedback is the central component of effective formative assessment.
• Further consideration of formative assessment in healthcare education is required to change our existing assessment culture, which places emphasis on summative assessment.

Notes on contributor

ALISON RUSHTON, EdD, MSc, Grad Dip Phys, Cert Ed, Dip TP, MMACP, is a Lecturer in Physiotherapy at the University of Birmingham, UK. Alison contributes to undergraduate physiotherapy education and postgraduate education for all healthcare professionals, through teaching and research. Alison is the Programme Director for the MSc Advancing Practice programmes. This work was completed as part of a Doctorate in Education at Warwick University, which had its central focus on the clinical component of healthcare professional education at Masters level.

Correspondence: Alison Rushton, Lecturer/Programme Director, School of Health Sciences, 52 Pritchatts Road, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. Tel: 0121-415-8597; email: a.b.rushton@bham.ac.uk
References

BANGERT-DROWNS, R.L., KULIK, J.A. & KULIK, C.C. (1991a) Effects of frequent classroom testing, Journal of Educational Research, 85, pp. 89–99.
BANGERT-DROWNS, R.L., KULIK, C.L.C., KULIK, J.A. & MORGAN, M.T. (1991b) The instructional effect of feedback in test-like events, Review of Educational Research, 61, pp. 213–238.
BIGGS, J. (1998) Assessment and classroom learning: a role for summative assessment? Assessment in Education, 5, pp. 103–110.
BLACK, P. & WILIAM, D. (1998) Assessment and classroom learning, Assessment in Education, 5, pp. 7–75.
BLOOM, B.S. (1971) Mastery learning, in: J.H. BLOCK (Ed.) Mastery Learning: Theory and Practice (New York, Holt, Rinehart & Winston).
BOUD, D. (2000) Sustainable assessment: rethinking assessment for the learning society, Studies in Continuing Education, 22, pp. 151–167.
BROOKFIELD, S. (1987) Developing Critical Thinkers (Milton Keynes, Open University Press).
CHI, M., GLASER, R. & FARR, M. (1988) The Nature of Expertise (Hillsdale, NJ, Erlbaum).
COWIE, B. & BELL, B. (1999) A model of formative assessment in science education, Assessment in Education, 6, pp. 101–116.
DEMPSTER, F.N. (1992) Using tests to promote learning: a neglected classroom resource, Journal of Research and Development in Education, 25, pp. 213–217.
DEWEY, J. (1938) Experience and Education (London, Collier Macmillan).
ENTWISTLE, N.J. & ENTWISTLE, A.C. (1991) Forms of understanding for degree examinations: the pupil experience and its implications, Higher Education, 22, pp. 205–227.
FRIEDMAN BEN DAVID, M., DAVIS, M.H., HARDEN, R.M., HOWIE, P.W., KER, J. & PIPPARD, M.J. (2001) AMEE medical education guide No. 24: portfolios as a method of student assessment, Medical Teacher, 23, pp. 535–551.
GIPPS, C. (1994) Beyond Testing: Towards a Theory of Educational Assessment (London, Falmer Press).
GUBA, E.G. & LINCOLN, Y.S. (1989) Fourth Generation Evaluation (Newbury Park, CA, Sage Publications).
HARLEN, W. & JAMES, M. (1997) Assessment and learning: differences and relationships between formative and summative assessment, Assessment in Education, 4, pp. 365–379.
HATTIE, J.A. (1987) Identifying the salient factors of a model of student learning: a synthesis of meta-analyses, International Journal of Educational Research, 11, pp. 187–212.
HATTIE, J. & JAEGER, R. (1998) Assessment and classroom learning: a deductive approach, Assessment in Education, 5, pp. 111–122.
HOUGHTON, G. & WALL, D. (2001) Trainers' evaluations of the West Midlands formative assessment package for GP registrar assessment, Medical Teacher, 22, pp. 399–405.
IVERSON, A.M., IVERSON, G.L. & LUKIN, L.E. (1994) Frequent, ungraded testing as an instructional strategy, Journal of Experimental Education, 62, pp. 93–101.
KHAN, K.S., DAVIES, D.A. & GUPTA, J.K. (2001) Formative self-assessment using multiple true-false questions on the Internet: feedback according to confidence about correct knowledge, Medical Teacher, 23, pp. 158–163.
KLENOWSKI, V. (1995) Student self-evaluation processes in student-centred teaching and learning contexts of Australia and England, Assessment in Education: Principles, Policy and Practice, 2, pp. 145–163.
KLENOWSKI, V. (1996) Connecting assessment and learning, British Educational Research Association Annual Conference (Lancaster University), September 12–15.
KNOWLES, M. (1990) The Adult Learner: A Neglected Species, 4th edn (London, Kogan Page).
LIDZ, C.S. (1995) Dynamic assessment and the legacy of L.S. Vygotsky, School Psychology International, 16, pp. 143–153.
MESSICK, S. (1975) The standard problem: meaning and values in measurement and evaluation, American Psychologist, 30, pp. 955–966.
MEZIROW, J. (1994) Understanding transformational theory, Adult Education Quarterly, 44, pp. 222–235.
PECKHAM, P.D. & ROE, M.D. (1977) The effects of frequent testing, Journal of Research and Development in Education, 10, pp. 40–50.
PITTS, J., COLES, C., THOMAS, P. & SMITH, F. (2002) Enhancing reliability in portfolio assessment: discussion between assessors, Medical Teacher, 24, pp. 197–201.
PRYOR, J. & TORRANCE, H. (1996) Teacher-pupil interaction in formative assessment: assessing the work or protecting the child? The Curriculum Journal, 7, pp. 205–226.
RAHMAN, S.A. (2001) Promoting learning outcomes in paediatrics through formative assessment, Medical Teacher, 23, pp. 467–470.
RAMPRASAD, A. (1983) On the definition of feedback, Behavioral Science, 28, pp. 4–13.
SADLER, D.R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18, pp. 119–144.
SADLER, D.R. (1998) Formative assessment: revisiting the territory, Assessment in Education, 5, pp. 77–84.
SAVERY, J.R. & DUFFY, T.M. (1995) Problem-based learning: an instructional model and its constructivist framework, Educational Technology, 35, pp. 31–37.
SCHEERENS, J. (1991) Process indicators of school functioning: a selection based on research literature on school effectiveness, Studies in Educational Evaluation, 17, pp. 371–403.
SCHWARTZ, P., MENNIN, S. & WEBB, G. (2001) Problem-based Learning: Case Studies, Experience and Practice (London, Kogan Page).
SIBERT, L., MAIRESSE, J., AULANIER, S., OLOMBEL, P., BECRET, F., THIBERVILLE, J., PERON, J., DOUCET, J. & WEBER, J. (2001) Introducing the objective structured clinical examination to a general practice residency programme: results of a French pilot study, Medical Teacher, 23, pp. 383–388.
TITTLE, C.K. (1994) Towards an educational psychology of assessment for teaching and learning: theories, contexts, and validation arguments, Educational Psychologist, 29, pp. 149–162.
TORRANCE, H. & PRYOR, J. (1998) Investigating Formative Assessment: Teaching, Learning and Assessment in the Classroom (Buckingham, Open University Press).
TOWNSEND, A.H., MCILVENNY, S., MILLER, C.J. & DUNN, E.V. (2001) The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance, Medical Education, 35, pp. 841–846.
VYGOTSKY, L. (1978) Mind in Society (London, Harvard University Press).
YORKE, M. (1998) The management of assessment in higher education, Assessment and Evaluation in Higher Education, 23, pp. 101–116.