Learning from assessment:
insights about student learning
from programme level evidence
Dr Tansy Jessop, TESTA Project Leader
Launch of the Teaching Centre
School of Politics and International Relations
University of Nottingham
15 May 2014
TESTA premises
1) Assessment drives what students pay attention to, and defines the actual curriculum (Ramsden, 1992).
2) Feedback is significant (Hattie, 2009; Black and Wiliam, 1998).
3) The programme is central to influencing change.
Thinking about modules
modulus (Latin): small measure
“interchangeable units”
“standardised units”
“sections for easy constructions”
“a self-contained unit”
How well does IKEA 101 packaging work for Sociology 101?

Furniture                        | Student Learning
---------------------------------|------------------------------
Bite-sized                       | Long and complicated
Self-contained                   | Interconnected
Interchangeable                  | Distinctive
Quick and instantaneous          | Slow, needs deliberation
Standardised                     | Varied, differentiated
Comes with written instructions  | Tacit, unfathomable, abstract
Consumption                      | Production
What is TESTA?
Transforming the Experience of Students through Assessment
• HEA-funded research project (2009-12)
• Seven programmes in four partner universities
• Maps programme-wide assessment
• Engages with Quality Assurance processes
• Diagnosis – intervention – cure
TESTA ‘Cathedrals Group’ Universities
Edinburgh
Canterbury Christ Church
Edinburgh Napier
Glasgow
Greenwich
Sheffield Hallam
University of the West of Scotland
Lady Irwin College, University of Delhi
TESTA
“…is a way of thinking about assessment and feedback”
Graham Gibbs
Based on assessment principles
• Time-on-task
• Challenging and high expectations
• Students need to understand goals and standards
• Prompt feedback
• Detailed, high quality, developmental feedback
• Dialogic cycles of feedback
• Deep learning – beyond factual recall
TESTA Research Methods
(Drawing on Gibbs and Dunbar-Goddet, 2008, 2009)
• Programme audit
• Assessment Experience Questionnaire
• Focus groups
• Programme team meeting
Case Study X: what’s going on?
• Mainly full-time lecturers
• Plenty of variety in assessment, no exams
• A reasonable amount of formative assessment (14 instances)
• 33 summative assessments
• Masses of written feedback on assignments (15,000 words)
• Learning outcomes and criteria clearly specified
…looks like a ‘model’ assessment environment
But students:
• Don’t put in a lot of effort, and distribute their effort across few topics
• Don’t think there is a lot of feedback or that it is very useful, and don’t make use of it
• Don’t think it is at all clear what the goals and standards are
• …are unhappy
Case Study Y: what’s going on?
• 35 summative assessments
• No formative assessment specified in documents
• Learning outcomes and criteria wordy and woolly
• Marking by global, tacit, professional judgements
• Teaching staff mainly part-time and hourly paid
…looks like a problematic assessment environment
But students:
• Put in a lot of effort and distribute their effort across topics
• Have a very clear idea of goals and standards
• Are self-regulating and have a good idea of how to close the gap
Two paradigms…
Transmission model
Social constructivist model
Focus Group data
• In pairs/groups, read through quotes from student focus group data on a particular theme.
• What problems does the data imply?
• What solutions might a programme develop to address some of these challenges?
• A3 sheets are provided to tease out challenges and solutions.
Student voice data
Challenges
Solutions
Theme 1: Formative is a great idea but…
• If there weren’t loads of other assessments, I’d do it.
• If there are no actual consequences of not doing it, most students are going to sit in the bar.
• I would probably work for tasks, but for a lot of people, if it’s not going to count towards your degree, why bother?
• The lecturers do formative assessment but we don’t get any feedback on it.
Theme 2: Assessment isn’t driving and distributing student effort
We could do with more assessments over the course of the year to make sure that people are actually doing stuff.
We get too much of this end or half way through the term essay type things. Continual assessments would be so much better.
So you could have a great time doing nothing until like a month before Christmas and you’d suddenly panic. I prefer steady deadlines, there’s a gradual move forward, rather than bam!
Theme 3: Feedback is disjointed and modular
• The feedback is generally focused on the module.
• It’s difficult because your assignments are so detached from the next one you do for that subject. They don’t relate to each other.
• Because it’s at the end of the module, it doesn’t feed into our future work.
• You’ll get really detailed, really commenting feedback from one tutor and the next tutor will just say ‘Well done’.
Theme 4: Students are not clear about goals and standards
• The criteria are in a formal document so the language is quite complex and I’ve had to read it a good few times to kind of understand what they are saying.
• Assessment criteria can make you take a really narrow approach.
• I don’t have any idea of why it got that mark.
• They read the essay and then they get a general impression, then they pluck a mark from the air.
• It’s a shot in the dark.
• We’ve got two tutors – one marks completely differently to the other and it’s pot luck which one you get.
Main findings
1. Too much summative; too little formative
2. Too wide a variety of assessment
3. Lack of time on task
4. Inconsistent marking standards
5. ‘Ticking’ modules off
6. Poor feedback: too little and too slow
7. Lack of oral feedback; lack of dialogue about standards
8. Instrumental reproduction of materials for marks
Summative-formative issues
1. Students and staff can’t do more of both.
2. Reductions in summative – how many is enough?
3. Increase in formative – and make sure it is valued and required.
4. Debunking the myth of two summative assessments per module.
5. Articulating the rationale with students, lecturers, senior managers and QA managers.
1. Examples of ramping up formative
The case of the under-performing engineers (Graham, Strathclyde)
The case of the cunning (but not litigious) lawyers (Graham, somewhere)
The case of the silent seminar (Winchester)
The case of the lost accountants (Winchester)
The case of the disengaged Media students (Winchester)
2. Examples of improving ‘time on task’
The case of low effort on Media Studies
The case of bunching on the BA Primary
3. Engaging students in reflection through improving feedback
The case of the closed door (Psychology)
The case of the one-off in History (Bath Spa)
The case of the Sports Psychologist (Winchester)
The conversation gambit
4. Internalising goals and standards
The case of the maverick History lecturer (a dove)
The case of the highly individualistic creative writing markers
Changes
Programmatic Assessment Design
Feedback Practice
Paper processes to people talking
Impacts
• Improvements in NSS scores on A&F – from bottom quartile in 2009 to top quartile in 2013
• Three programmes with 100% satisfaction ratings post-TESTA
• All TESTA programmes show some upward movement on A&F scores
• Programme teams are talking about A&F and pedagogy
• Periodic review processes are changing for the better.
www.testa.ac.uk
References
Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education. 1(1): 3-31.
Gibbs, G. & Dunbar-Goddet, H. (2009) Characterising programme-level assessment environments that support learning. Assessment & Evaluation in Higher Education. 34(4): 481-489.
Hattie, J. & Timperley, H. (2007) The power of feedback. Review of Educational Research. 77(1): 81-112.
Jessop, T. & Maleckar, B. (in press) The influence of disciplinary assessment patterns on student learning: a comparative study. Studies in Higher Education.
Jessop, T., El Hakim, Y. & Gibbs, G. (2014) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different assessment patterns. Assessment & Evaluation in Higher Education. 39(1): 73-88.
Jessop, T., McNab, N. & Gubby, L. (2012) Mind the gap: An analysis of how quality assurance processes influence programme assessment patterns. Active Learning in Higher Education. 13(3): 143-154.
Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education. 35(5): 501-517.
Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science. 18: 119-144.