Session 4 - Analysis and trial management

Session 4: Trial management
Recruitment and retention: the role of evaluators
Meg Wiggins (IoE)
Recruitment and Retention: the evaluator’s role
Meg Wiggins – Institute of Education, London
Recruiting schools
Retaining schools
PROJECT EXAMPLES
EXAMPLE 1
Intervention: 30 hours of primary school classroom chess teaching, delivered by external CSC tutors
Cluster trial, randomised at school level
Evaluation team at IoE: John Jerrim (Lead), Lindsey Macmillan, John Micklewright
Process evaluation - Meg Wiggins, Mary Sawtell, Anne Ingold
Chess in Schools - Recruitment
• Community organisation – small central staff team
• Recruitment expectations – return to known ground
• Recruitment reality – IoE provided lists of schools selected on FSM % criteria, in their chosen LAs
• Capacity issues, limited understanding about RCTs,
huge enthusiasm for the evaluation
Chess in Schools – Recruitment 2
Nearly reached target of 100 primary schools within a tight timeframe
Succeeded by tenacious, labour-intensive direct contact by phone
• Often before school; strategies for speaking directly to head teachers
• Ditched letters and emails as the first approach
• Brought in dedicated person to recruit
Chess in Schools – Recruitment 3
As evaluators we assisted recruitment by:
• Providing extra schools from which to recruit
• Providing extra time for recruitment
• Channeling enthusiasm - providing focus
Chess in Schools – Retention in study
• Study designed to limit retention challenges
• Influenced by learning from earlier IoE EEF evaluations
• No testing within schools; use of NPD data
• Collection of UPNs before randomisation
Chess in Schools – Retention in study
• Pre-randomisation baseline head teachers’ survey
• Showed some confusion about the trial and intervention
• Limited evaluation involvement in development of materials used in recruitment of schools
• How much were they used?
• Lack of forum for cascading study information beyond head/SLT
Chess in Schools – Retention in intervention
Most intervention schools adopted the programme
CSC tell us that nearly all have completed the full 30-week intervention
• End-of-intervention survey of tutors & teachers pending, to confirm this
Case study work flagged variation in schools re: lessons replaced by the intervention
• Important to the study; not critical for schools/Chess tutors
Chess in Schools – Lessons learnt
• Beyond recruitment – importance of a forum for cementing the key study messages within schools
• Tension between the role of impartial evaluator observing from a distance and that of partner in achieving a successful intervention and evaluation
• Plan some interim formal means of assessing implementation and intervention retention
• Design of the study means that retention issues remain minimal
EXAMPLE 2
Early Language Learning & Literacy (ELLL) Project
Intervention: Training primary class teachers to deliver a curriculum of French lessons as well as follow-up activities linking the learning of French to English literacy.
Cluster trial, randomised within schools at class level, across two year groups (3 & 4)
IoE evaluation team: Meg Wiggins (Lead), John Jerrim, Shirley Lawes, Helen Austerberry, Anne Ingold
Early Language Learning - Recruitment
Design of study influenced by:
– Tight study timeline – curriculum changes – required post-intervention testing
– Extremely short recruitment window prior to commencement of teacher training
– Capacity to deliver intervention to limited numbers
Challenges in determining inclusion criteria for schools
• Key issues around specialist language teachers and within-schools randomisation design
• Over-burdening of London schools – EEF issue
Early Language Learning - Recruitment
Compromises reached:
– Outside organisation brought in to recruit
– London schools allowed
– Relaxation of ban on specialist teachers (slight!)
Close liaison between CfBT and evaluation team
– Case-by-case recruitment
– Development of detailed recruitment materials – FAQs
Minimum target of 30 schools exceeded – 46 randomised
Early Language Learning - Retention
Immediate post randomisation drop out: 9 schools
• 2 couldn’t attend teacher training dates
• 2 schools disagreed with randomisation
• 5 never responded to invitation to teacher training
Additionally, 4 schools dropped one year group, but stayed in the trial with the other year group
Within one week – 46 schools reduced to 37!
Early Language Learning - Retention
Evaluation team attended each training session and explained the study to intervention teachers
– Found almost no knowledge of the study had been cascaded down by heads
– Emphasised randomisation and no diffusion
– Answered many questions! Learnt from them!
– Provided teachers with an FAQs sheet
– Explained plans for end-of-year testing
Early Language Learning - Retention
• Used additional training events to continue evaluation presence
• All 37 schools have delivered (most of) the intervention
• Organising testing dates (mostly by email) has been fairly straightforward
– Lots of messages back and forth to finalise
– Testing begins Tuesday
Early Language Learning – Lessons Learnt
• Tight recruitment period led to inclusion of schools that weren’t committed. Role of external recruitment agency?
• Tension between confusing schools with contacts from both programme and evaluation teams vs. not having evaluation messages clearly conveyed.
• Need to ensure evaluation messages reach those who deliver interventions, not just the Heads.
• Allow time and resources for communicating with schools at every stage – no shortcuts to personal contact.
Do our experiences tally with yours?
Audience discussion
Task - table discussion and feedback
What one top tip or suggestion would you make for recruitment, retention or communication with schools?
My conclusions
• Design with recruitment and retention at the fore
• There is no substitute for direct contact between the evaluation team and schools – allocate resources accordingly
• Be flexible – balance rigour with practicality. Choose your battles!
Session 4: Analysis and reporting
Analysis methods and calculating effect sizes
Ben Styles (NFER)
Analysis Plans: A cautionary tale
Michael Webb (IFS)
Analysis and effect size
Ben Styles
Education Endowment Foundation
June 2014
Analysis and effect size
• How design determines analysis methods
• Brief consideration of how to deal with missing data
• How to calculate effect size
‘Analyse how you randomise’
• Pupil randomised
• The ideal trial: t-test on attainment
• Usually have a covariate: regression (ANCOVA)
• Stratified randomisation: regression with stratifiers as covariates (sketched below)
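As a rough illustration, here is a minimal sketch of the ANCOVA approach in Python with statsmodels; the column names posttest, baseline, intervention and stratum are hypothetical, not from any EEF dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pupil-level data: one row per pupil, with a 0/1 treatment
# indicator, a baseline test score, and the stratification variable.
df = pd.read_csv("trial_data.csv")

# ANCOVA: regress attainment on the intervention indicator, adjusting for
# the baseline covariate; under stratified randomisation, add the
# stratifiers as covariates too.
res = smf.ols("posttest ~ intervention + baseline + C(stratum)", data=df).fit()

# The coefficient on 'intervention' is the covariate-adjusted difference
# in means between the groups.
print(res.params["intervention"], res.bse["intervention"])
```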
‘Analyse how you randomise’
• Cluster randomised (think about an imaginary very small trial to understand why)
• t-test on cluster means
• Regression of cluster means with baseline means as a covariate (sketched below)
• ‘It’s the number of schools that matters’
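A sketch of the cluster-means versions, under the same hypothetical column names plus a school identifier:

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical pupil-level file

# Collapse to one row per school: cluster means of outcome and baseline.
clusters = (df.groupby(["school", "intervention"], as_index=False)
              [["posttest", "baseline"]].mean())

# t-test on cluster means: each school contributes a single observation,
# which is why the number of schools is what matters.
t, p = stats.ttest_ind(clusters.loc[clusters.intervention == 1, "posttest"],
                       clusters.loc[clusters.intervention == 0, "posttest"])

# Or regression of cluster means, with the baseline cluster mean as covariate.
res = smf.ols("posttest ~ intervention + baseline", data=clusters).fit()
```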
BUT
• If we have an adequate number of schools in the trial, say 40 or more
• We have a pupil-level baseline measure
• We can use the baseline to explain much of the school-level variance
• Multi-level analysis (sketched below)
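A sketch of the multi-level alternative (random intercept for school), again with hypothetical names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")

# Random intercept for each school; the pupil-level baseline covariate
# explains much of the school-level variance, restoring power.
md = smf.mixedlm("posttest ~ intervention + baseline",
                 data=df, groups=df["school"])
res = md.fit()
print(res.summary())
```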
Missing data
• Prevention is better than cure
• Attrition is running at about 15% on average in EEF trials
• Using ad hoc methods to address the problem can lead to misleading conclusions
• http://educationendowmentfoundation.org.uk/uploads/pdf/Randomised_trials_in_education_revised.pdf
• Baseline characteristics of analysed groups
• Baseline effect size (sketched below)
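A minimal sketch of the last two checks – comparing the baseline characteristics of the analysed groups and computing a baseline effect size – using the same hypothetical column names as the earlier sketches:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("trial_data.csv")

# 'Analysed groups': only pupils with both baseline and outcome present.
analysed = df.dropna(subset=["baseline", "posttest"])
t = analysed.loc[analysed.intervention == 1, "baseline"]
c = analysed.loc[analysed.intervention == 0, "baseline"]

# Baseline effect size: a noticeably non-zero value suggests attrition has
# left the analysed groups unbalanced at baseline.
sp = np.sqrt(((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
             / (len(t) + len(c) - 2))
baseline_es = (t.mean() - c.mean()) / sp
```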
Effect size
• We need a measure that is universal
• The difference between the intervention group mean and the control group mean
• As measured in standard deviations (in symbols below)
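In symbols (the standard definition; the pooled SD formula matches the spreadsheet shown later):

```latex
\mathrm{ES} = \frac{\bar{x}_1 - \bar{x}_2}{\mathrm{SD}_{\mathrm{pooled}}},
\qquad
\mathrm{SD}_{\mathrm{pooled}}
  = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```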
Effect size
• See EEF analysis guidance at http://educationendowmentfoundation.org.uk/uploads/pdf/Analysis_for_EEF_evaluations_REVISED3.pdf
• Write a spreadsheet that does it for you
For use in pupil-randomised trials

Parameter | Description | Value
x1bar - x2bar | Intervention group mean minus control group mean. Usually the regression coefficient for intervention. | 0.229249
Standard error of effect | SE of regression coefficient | 0.704391
Degrees of freedom for CI | Same as for residual mean square degrees of freedom | 347
Standard deviation of treatment group | Only cases included in the regression model. | 8.794
Standard deviation of control group | Only cases included in the regression model. | 9.132
Number of cases in treatment group | Only cases included in the regression model. | 175
Number of cases in control group | Only cases included in the regression model. | 180

Calculation | Value
Pooled outcome SD | 8.967
Correction factor | 0.998
Raw CI (upper) | 1.615
Raw CI (lower) | -1.16
Hedges' g | 0.03
CI (upper) | 0.18
CI (lower) | -0.13
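A minimal sketch of the same spreadsheet logic in Python, reproducing the worked example above; the small-sample correction used here is Hedges' J, which appears to be what the 0.998 factor is:

```python
import numpy as np
from scipy import stats

# Inputs from the worked example above.
diff = 0.229249        # x1bar - x2bar: regression coefficient for intervention
se = 0.704391          # standard error of that coefficient
dof = 347              # residual degrees of freedom for the CI
s1, n1 = 8.794, 175    # treatment group SD and n (cases in the model)
s2, n2 = 9.132, 180    # control group SD and n

# Pooled outcome SD (~8.967).
sd_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Raw confidence interval on the original test scale (~1.615, ~-1.16).
tcrit = stats.t.ppf(0.975, dof)
raw_upper = diff + tcrit * se
raw_lower = diff - tcrit * se

# Hedges' small-sample correction (~0.998) and Hedges' g (~0.03).
j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
g = j * diff / sd_pooled

# Standardised CI (~0.18, ~-0.13).
ci_upper = j * raw_upper / sd_pooled
ci_lower = j * raw_lower / sd_pooled
```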
But what about multi-level models?
• Difference in means is still the model coefficient for intervention
• But the variance is partitioned – which do we use?
• And the magnitude of the variance components changes depending on whether we have covariates in the model – with or without?
Arrggh!
We want comparability
• Always think of any RCT as a departure from the ideal trial
• We want to be able to compare cluster trial effect sizes with those of pupil-randomised trials
• We want to meta-analyse
Which variance to use
• Pupil-level
• Before covariates
This is controversial
• ‘Before’ or ‘after’ covariates means two different things
• At York on Monday, opinion leant towards total variance, but pupil-level is better for meta-analysis
• Report all the variances and say what you do (see the sketch below)
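A sketch of how the choice plays out with statsmodels' MixedLM, using the same hypothetical data; taking the variances from a covariate-free model reflects the 'before covariates' position:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")

# Treatment effect from the adjusted multi-level model...
adj = smf.mixedlm("posttest ~ intervention + baseline",
                  data=df, groups=df["school"]).fit()
coef = adj.params["intervention"]

# ...variances from a model *without* covariates ('before covariates').
unadj = smf.mixedlm("posttest ~ 1", data=df, groups=df["school"]).fit()
pupil_var = unadj.scale                      # pupil-level residual variance
school_var = float(unadj.cov_re.iloc[0, 0])  # school-level variance component

es_pupil = coef / np.sqrt(pupil_var)                # pupil-level denominator
es_total = coef / np.sqrt(pupil_var + school_var)   # total-variance denominator
# Report all the variances and say which denominator you used.
```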
Conclusions
• A well-designed RCT usually leads to a relatively simple analysis
• Some of the missing data methods are the domain of statisticians
• Be clear about how you calculate your effect size
Analysis Plans: A cautionary tale
Michael Webb (IFS)