Teaching and Learning: What really works?

We are constantly trying to drive up standards of teaching and learning with
new approaches, preferably those with a strong evidence base. But is ‘What
Works?’ the right question? Should we really be asking ‘How do good teachers
get better?’ Elaine Hall reflects on the messages from a meta-analysis of
teaching and learning interventions
In teaching and learning, we are increasingly concerned with improving students’
attainment through focused interventions and it is becoming the responsibility of
individual teachers, subject leaders or teaching and learning coordinators to select
the best intervention for their students from a dazzling array of options. But how
and where to start? How can teachers make informed choices? One important tool at
our disposal is meta-analysis, though it is only one of a range of tools that teachers
need.
Meta-analysis is a method for pooling the quantitative estimates of effects of
interventions from multiple studies to give a more reliable and precise estimate of
their benefits (or potential harm). Comparing these estimates across different types
of interventions can also pinpoint which aspects of interventions offer the most
potential in the classroom. Meta-analysis is proving to be a useful approach to
addressing the key question of ‘What works?’ because it provides an opportunity
for ‘ground-clearing’ and priority-setting: by offering comparative information
about how well different interventions work, it helps teachers make informed decisions.
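The statistical machinery is more elaborate in practice, but the core pooling step of any meta-analysis can be illustrated in a few lines. The sketch below (in Python, with invented study figures) uses simple fixed-effect, inverse-variance weighting, so that studies with more precise estimates count for more in the combined effect size:

    def pooled_effect(effects, variances):
        """Fixed-effect (inverse-variance) pooled effect size."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
        return pooled, se

    # Three hypothetical studies of the same intervention: each reports an
    # effect size and the sampling variance of that estimate.
    d, se = pooled_effect([0.30, 0.55, 0.42], [0.010, 0.025, 0.015])
    print(f"pooled d = {d:.2f} (SE = {se:.2f})")  # pooled d = 0.39 (SE = 0.07)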
Hattie – author of the largest meta-analysis – made the purpose of this clear:
• ‘We need to make relative statements about what impacts on student work.
• We need estimates of magnitude as well as statistical significance – it is not good
enough to say that this works because lots of people use it etc, but that this
works because of the magnitude of impact.
• We need to be building a model based on these relative magnitudes of effects.’
(Hattie, 2003)
Glossary
Intervention
By intervention we mean any change or programme of change implemented in
the classroom. It could be formal or informal and might be initiated by the
individual teacher, the school or an outside organisation.
Meta-analysis
A meta-analysis looks across the results of several studies that
address a set of related research questions, with the aim of drawing
generalisable conclusions.
Magnitude of impact is measured by effect size, a statistical calculation of the
amount of change (for a discussion of effect sizes please see the links section at the
end of this article). The enormous meta-analysis of 500,000 effect sizes conducted
by John Hattie enables us to place interventions relative to others and to ‘normal
development’. Of course, some interventions have minimal, or even negative,
effects; but in terms of positive effect sizes, we must judge them against the
following key benchmarks:
0.1 – (maturation) the yearly improvement from normal development without any
teaching
0.25 – (teaching) the average effect of having a teacher, any teacher, regardless of
the quality of instruction
0.42 – (mean) the average effect size for all educational interventions
0.5 – (minimum for intervention) since better-than-average effects should be a basic
requirement for expending resources
0.8 – (significant effects) where the impacts start to become visible to the ‘naked
eye’.
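These benchmarks are standardised mean differences (Cohen’s d): the gap between the intervention and comparison group means, divided by their pooled standard deviation. A minimal sketch of the calculation for a single study, again with invented figures:

    from math import sqrt

    def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """Standardised mean difference between intervention and comparison."""
        pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                         / (n_t + n_c - 2))
        return (mean_t - mean_c) / pooled_sd

    # Hypothetical test scores: a 6-point gain against a 12-point spread
    # lands exactly on the 0.5 'minimum for intervention' benchmark.
    d = cohens_d(mean_t=68.0, sd_t=12.0, n_t=30, mean_c=62.0, sd_c=12.0, n_c=30)
    print(f"d = {d:.2f}")  # d = 0.50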
The 0.42 ‘average intervention effect size’ is equivalent to raising the average
attainment of a class by about 16 percentage points. This sounds quite good, but it
is important to remember that, since this is the mean, many interventions out there
are more effective than this. For this reason, Hattie considers 0.5 the minimum for
an intervention to be considered ‘educationally significant’. It is worth noting that
in medicine, where effect sizes are commonly used, the bar is set much higher:
effect sizes approaching 1.0 are required to provide a warrant for action.
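The 16-point figure comes from the normal distribution: a pupil moved up by 0.42 standard deviations now outscores about 66 per cent of an equivalent comparison group, rather than 50 per cent. A quick check of the benchmarks above, assuming normally distributed scores:

    from math import erf, sqrt

    def normal_cdf(x):
        """P(Z <= x) for a standard normal variable."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    for d in (0.25, 0.42, 0.80):
        shift = normal_cdf(d) - 0.5
        print(f"d = {d:.2f}: average pupil moves past {shift:.0%} of the comparison group")
    # d = 0.25 -> 10%, d = 0.42 -> 16%, d = 0.80 -> 29%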
So which interventions work better than others? Equally, which interventions have
little evidence of positive effects? The table below gives details of some of the
high- and low-performing approaches, and may cause many readers concern about
what is and is not included (the full list is available online; see the Links
section). It is important to emphasise that meta-analysis tells us the effect that these
interventions have on pupil attainment, not whether something is a good idea or
not, not whether these interventions improve teacher or pupil morale and
engagement, not whether they make schools operate more efficiently or more
pleasantly. However, if the purpose of the interventions in the low-performing
column is to raise attainment, then we might want to think again.
The effect of different interventions on student attainment
High performing innovations ‘top ten’ / Effect size
Feedback / 1.13
Prior Ability / 1.04
Instructional Quality / 1.00
Direct instruction / 0.82
Remediation feedback / 0.65
Student disposition / 0.61
Class environment (culture) / 0.56
Challenge/goals / 0.52
Peer tutoring / 0.50
Mastery learning / 0.50
Low performing innovations ‘bottom ten’ / Effect size
Retention / -0.15
Television / -0.12
Physical elements of school / -0.05
Team teaching / 0.06
Behavioural objectives / 0.12
Finances/money / 0.12
Individualisation / 0.14
Audio visual aids / 0.16
Ability grouping / 0.18
Programmed instruction / 0.18
It seems pretty clear that areas where schools have been directed to spend a
considerable amount of their time and energy – new buildings and ability grouping
– are not going to ‘drive standards up’. Indeed, those implementing the next wave of
change, under the banner of ‘personalised learning’ should be aware of the low
leverage of individualisation. Throwing money at the problem also appears to be
less effective than having a teacher in the room (0.12 compared with 0.25). Other
popular approaches, such as testing (0.3) and ICT (0.31), also have a below-average impact.
There are also some high leverage factors over which teachers have relatively little
influence: social and demographic factors and prior ability will always determine a
significant amount of each student’s success. However, it is clear that there are very
important effects from the quality of the teaching and the quality of the
reinforcement of learning, through feedback conversations with teachers and peers.
There is certainly mounting evidence that adopting approaches which make aspects
of thinking explicit or which focus on particular kinds of thinking are successful at
raising attainment (particularly metacognitive approaches or cognitively
demanding interventions such as problem solving and hypothesis testing or those
that enhance surface and deep learning). Is this a sufficiently compelling argument
for using only interventions from Hattie’s ‘top ten’?
Experience suggests that the use of meta-analysis can only be one part of the
decision-making process. The Research Centre for Learning and Teaching at
Newcastle University conducted an analysis of thinking skills interventions and
found there is both strong evidence for the value of thinking skills – particularly in
relation to cognitive and curricular gains – and areas where evidence is less
coherent. For thinking skills interventions, it appears that the effects on learners are
age and context specific. The blended results of a meta-analysis are not sufficient for
teachers’ needs. Teachers need to act in a particular context and to make a
particular choice about which approach is likely to work for them in their school –
and meta-analysis data can help them only on the first part of that journey.
Indeed, when Hattie broke down the teaching and learning factors in his
meta-analysis by ‘who’ rather than ‘what’, he discovered that the influence of
teachers on student outcomes was extremely important.
Moreover, he found that the differences in teacher practice linked in with the
evidence from successful interventions. The meta-analysis evidence suggests that
successful innovations tend to be structured around feedback loops which
encourage teachers to reflect and improve: Hattie’s significant innovations group is
packed with things that tell the teacher ‘How am I doing?’ (not just ‘How are the
pupils doing?’). This suggests that merely ‘doing formative assessment’ to pupils is
not sufficient to provide this feedback: evidence-based teaching and learning
depends on how teachers perceive and use the information.
‘If students do not know something, or cannot process the information, this should be a
cue for teacher action... Merely ascribing to the student the information that they can
or cannot do something is not as powerful as ascribing to the teacher what they have
or have not taught well.’ (Hattie, 2005, p17)
Tight feedback loops focus the teacher’s attention on what is in their sphere of
influence, give information that can be put to use immediately and increase the
positive reinforcement between learners in the classroom.
A study of the difference between experienced and expert teachers (Hattie, 2003 –
see results below) found that the distinction lay not in the level of content
knowledge or procedural expertise but in a series of dimensions and attributes of
pedagogical practice. The difference between experienced and expert teachers was
not reflected simply in their teaching but in the quality of the students’ learning.
Many of the facets of expert teacher practice were strongly related to feedback and
the quality of relationships between teachers and students. The remaining factors
relate to the individual teacher’s readiness to engage with feedback and
relationships: his or her personal and professional ‘comfort zone’.
Dimensions of expert teachers’ attributes
Identify essential representations of their subject
• Have deeper representations about teaching and learning
• Adopt a problem-solving stance
• Anticipate, plan and improvise as required
• Are better decision makers
Guide learning through classroom interactions
• Create an optimal climate for learning
• Have a complex perception of classrooms
• Are more context dependent and aware of situations
Monitor learning and provide feedback
• More adept at monitoring and provide more relevant feedback
• Test hypotheses about learning problems or teaching strategies
• Work more automatically
Attend to affective outcomes
• Have high respect for students
• Are passionate about learning and teaching
Influence student outcomes
• Develop students’ self-regulation and esteem
• Provide appropriate challenge
• Have positive influences on student achievement
• Enhance surface and deep learning
There is evidence that not all interventions are equal and that some have greater
leverage than others. It is becoming clear that the individuals and groups of teachers
operating the interventions are at least as important a part of the equation as the
content or approach. Meta-analyses may enable us to make smart choices about
‘what’; it then becomes vital to focus upon ‘who’ and ‘how’.
Elaine Hall is a senior researcher in the Research Centre for Learning and Teaching
at Newcastle University
Further information
A very clear introduction to effect size by Robert Coe is available from
www.cemcentre.org/renderpage.asp?linkID=30325016
A good summary of Hattie’s meta-analysis
www.learningandteaching.info/teaching/what_works.htm
The Thinking Skills meta-analysis
http://eppi.ioe.ac.uk/EPPIWebContent/reel/review_groups/thinking_skills/t_s_rv2
/t_s_rv2.pdf
References
• Hattie, J, Biggs, J, and Purdie, N (1996) ‘Effects of Learning Skills Interventions on
Student Learning: A Meta-analysis.’ Review of Educational Research, 66, 99-136
• Hattie, J (2003) ‘Teachers Make a Difference: What is the Research Evidence?’
Paper presented at the Australian Council for Educational Research
Conference ‘Building Teacher Quality: What Does the Research Tell Us?’, 19-21 October 2003, Melbourne. Available for download from
www.acer.edu.au/workshops/documents/Teachers_Make_a_Difference_Hatt
ie.pdf
• Hattie, J (2004) ‘Factors that Influence Children’s Learning: Results from a Study
Involving 500 Meta-analyses.’ Paper presented at the ESRC Seminar ‘Effective
Educational Interventions’, University of Newcastle upon Tyne, 8 July 2004
• Hattie, J (2005) ‘What is the Nature of Evidence that Makes a Difference to
Learning?’ Paper presented at the Australian Council for Educational
Research Conference ‘Using Data to Support Learning’ 7-9 August 2005,
Melbourne. Available for download from
www.acer.edu.au/workshops/documents/Hattie.pdf
• Higgins, S, Hall, E, Baumfield, V, and Moseley, D (2005) Thinking Skills Approaches
to Effective Teaching and Learning: What is the Evidence for Impact on
Learners? London: EPPI-Centre