
Protocol for the DfE-commissioned evaluation of the Frontline pilot
Jonathan Scourfield, Teresa de Villiers, Mark Hadfield, Morag Henderson, Sally Holland,
Paul Kinnersley and Nina Maxwell
Cardiff University
Updated version 12 January 2015
Introduction
The Department for Education (DfE) is funding the Frontline pilot, initially in Manchester
and London, from September 2013 to 2017. Frontline’s mission is to ‘transform the lives of
vulnerable children by recruiting and developing outstanding individuals to be leaders in
social work and broader society’ (Frontline, 2014). This paper presents the protocol for the
DfE-commissioned independent evaluation of the Frontline pilot. The evaluation has the
following aims, as set out by DfE:
1. Assess whether Frontline is successful in attracting high quality graduates
2. Examine the quality of the delivery of Frontline to assess whether the key elements are
being delivered to a high standard
3. Measure objectively how well Frontline prepares participants to be outstanding social
workers
Sustained UK interest in researching the outcomes of social work education is beginning to
emerge (Carpenter, 2011). However, comparative research is rare: most studies measure
change in a single group, without the added strength of a comparison group, and most rely on
trainees' self-report. Although direct observation of social work practice is expected as part
of practice learning on qualifying programmes, it relies on the subjective judgement of
individual practice assessors. The standardised assessment of actual or simulated practice,
for education or research purposes, is rare in social work. Examples of its use are described
by Petracchi (1999), Badger and MacNeil (2002), Forrester et al. (2008) and Bogo et al.
(2012). The rarity of these examples contrasts
with the situation in medicine, where the standardised assessment of clinical skills using real
and simulated patients is routine in assessment for medical degrees and is also used for
research (e.g. Kinnersley and Pill, 1993).
For this commissioned evaluation we propose a quasi-experimental design, comparing
Frontline trainees with social work students who are about to qualify on regular programmes.
Two comparison groups are used for different elements of the evaluation. These are (1) a
typical sample of students about to qualify from the full range of mainstream programmes
and (2) students in high tariff universities. The second group have been highlighted not
because of any assumption of programme quality but because of necessary assumptions about
the postgraduate student market. There are no data available via the Higher Education
Statistics Agency (HESA) to inform the selection of comparator universities; HESA do not
collect data from postgraduates on class of first degree. However, if the postgraduate student
market is at all similar to the undergraduate (UG) market, we might assume that the best
qualified students will be drawn to Masters programmes in the same universities that have the
highest entry standards at UG level, meaning that these students would be the most
comparable with Frontline participants in terms of academic background. Frontline’s
admissions criteria include an upper second class degree or higher and at least 300 UCAS
points from the top three A-levels or equivalent. The Guardian newspaper university league table (The
Guardian, 2013) was consulted. This found the top eight English universities for all-subject
UG entry tariff, out of those with postgraduate social work courses, to be identical to the list
of Russell Group universities offering such courses. We considered all-subject tariff to be
more relevant than UG social work tariff, as the latter is less likely to be widely known by
applicants.
The evaluation received ethical approval from the School of Social Sciences Research Ethics
Committee in Cardiff University in May 2014. Key ethical underpinnings of the study are as
follows:
• No student, service user, social work practitioner or university staff member will suffer
any detriment to their career, educational opportunities or service entitlement as a result of
taking part or declining to take part in the study, and this will be made explicit in
information sheets.
• Data are held securely and anonymously and in accordance with the Data Protection Act.
Contact details for participants, gathered for the purpose of arranging telephone interviews,
will be stored separately and it will not be possible to link data with individuals.
• All participants will be provided with clear and accessible information about the purpose
of the study and what will happen to any data they provide. It will be made clear to
participants that they may withdraw their consent to participate at any time during the data
collection process.
• Confidentiality will be maintained unless researchers have grounds to suspect that a child
or vulnerable adult may be at risk of harm.
The evaluation has an advisory group, convened by the Department for Education,
comprising representatives from local authorities, the College of Social Work and Frontline
staff, academic experts in social work education, a consultant social worker and a Frontline
participant. The following protocol is structured around the three evaluation aims set out by
the DfE.
1. Assess whether Frontline is successful in attracting high quality graduates
A survey will be conducted with Frontline Cohort One, Frontline Cohort Two and a similar
number of new first year students on Masters programmes in high UG tariff universities in
England. The survey data set will consist of: data on demographics and academic and work
background already held by Frontline and available to the evaluation team; the same data
collected from students in high tariff universities; and data from both groups about the history
of their interest in social work and career aspirations.
The comparison will involve descriptive statistics and bivariate analysis with appropriate
statistical tests. Multivariate statistical analysis will be used if possible, depending on the
distribution of the sample. In addition, HESA data will be analysed to compare the
demographics and backgrounds of Frontline students with those of all social work students
on undergraduate and postgraduate courses in England.
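As an illustration only of the intended bivariate comparisons, the following minimal Python
sketch cross-tabulates group membership against one categorical background variable and
applies a chi-squared test of association. The data and variable names (e.g. degree_class) are
invented for illustration and are not drawn from the survey.

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey extract: one row per respondent
df = pd.DataFrame({
    "group": ["frontline"] * 4 + ["high_tariff"] * 4,
    "degree_class": ["first", "2:1", "first", "2:1",
                     "2:1", "first", "2:2", "2:1"],
})

# Cross-tabulate group membership against degree class
table = pd.crosstab(df["group"], df["degree_class"])
print(table)

# Chi-squared test of association, one 'appropriate statistical test'
# for two categorical variables
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")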
2. Examine the quality of the delivery of Frontline to assess whether the key elements
are being delivered to a high standard
This aim will be met by research on Frontline students only. Quality of delivery will be
assessed through qualitative research: observation of the Summer Institute and case studies of
practice learning.
Summer Institute
A basic audit will compare the curriculum of Frontline with that of two high tariff university
programmes. Programme handbooks will be used as a starting point, to compare the number
of learning and contact hours and the scope of programme content. Assessment tasks will
also be compared.
Researchers with experience of delivering social work education will visit the Frontline
Summer Institute in July-August 2014 and teaching days in Autumn and Winter 2014-15 for
participant observation of six individual days' teaching. These observation days will be timed
to focus on distinctive elements of the Frontline curriculum, such as the teaching of skills for
delivering evidence-based interventions and leadership training. During some of the
observation days, informal focus groups will be conducted with students where possible, to
gauge their reactions to the teaching.
Case studies
Sampling
A ‘case’ will be a Frontline Cohort One student unit. Four case studies will be conducted. To
ensure these case studies provide an adequate cross-section of the local authorities involved,
they will be selected on the basis of the factors likely to affect effective implementation. The
following criteria will be applied in order; although this is purposive sampling, restricted
random selection will be used where possible within the categories the criteria define (an
illustrative sketch follows the list):
1. The first and most important sampling criterion is local authorities’ inspection record for
children’s social care services. Authorities with a range of outcomes will be selected – if
possible the sample will consist of one each from the categories of outstanding, good,
requires improvement and inadequate, using mean scores across all domains inspected.
2. A second criterion is the number of local authority student units per local authority, as
although most local authorities have only one unit, some have two and some have three.
3. The third sampling criterion is geography. It is anticipated that three authorities will be
selected from the South East, if possible with one of these coming from outside London, and
a fourth from the Manchester area. This selection will roughly match the distribution of local
authorities with Frontline units. If the numbers of students in the selected authorities prove to
be lower than the expected four per unit, a fifth case study could be selected.
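As a minimal sketch of the restricted random selection described above, the following Python
fragment draws one local authority at random within each inspection-outcome category. The
authority names and their groupings are invented placeholders, not the actual sampling frame.

import random

# Placeholder sampling frame: authorities grouped by inspection outcome
authorities = {
    "outstanding":          ["LA-A", "LA-B"],
    "good":                 ["LA-C", "LA-D", "LA-E"],
    "requires improvement": ["LA-F", "LA-G"],
    "inadequate":           ["LA-H"],
}

random.seed(2014)  # fixed seed so the draw is reproducible and auditable
# One case study site drawn at random within each category
sample = {rating: random.choice(sites) for rating, sites in authorities.items()}
print(sample)

In practice the random draw would be constrained further by the second and third criteria
(number of units per authority and geography).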
For Cohort Two, two of the Cohort One authorities will be revisited, to ascertain whether any
changes to the Frontline programme proposed as a result of the evaluation have been
responded to effectively. In addition, two new authorities will be recruited in order to reach a wider range
of staff involved with Frontline, especially consultant social workers. Decisions about Cohort
Two sampling will be taken in Summer 2015, after consultation with the Department and the
project Advisory Group.
Case study design
The objectives of the case studies are:
• A. to assess whether the key elements of Frontline are being delivered to a high standard;
• B. to understand how the different elements of the programme interact in practice.
Objective A: To assess whether the key elements of Frontline are being delivered to a high
standard
The assessment of the quality of Frontline delivery will be based on monitoring the
implementation fidelity of the key innovative aspects of the programme and then judging the
quality of each aspect. This assessment will be in three stages.
• Stage 1: An audit framework will be created to assess fidelity of implementation. This
audit framework will be developed through attendance at the summer school, a review of
Frontline documentation, and discussion with members of the Frontline team. It will
operationalize the key innovative aspects of the Frontline programme into the actual
processes and interactions that should be taking place within each participant unit.
• Stage 2: The audit framework will be used during the case studies in order to ensure data
are collected on the timing, frequency and content of key interactions and processes, and
the roles undertaken by individuals, such as consultant social workers.
• Stage 3: The data generated by the framework will be analysed in order to make an
assessment of which aspects of the Frontline programme have been effectively
operationalized. The assessment of the quality of these processes will make reference to
the Frontline competences.
Objective B: To understand how the different aspects of the programme interact in practice
To understand how the different aspects of Frontline interact, and their relative effectiveness
in supporting the overall aims of the Frontline programme, Guskey’s (2000) development of
the Kirkpatrick (1994) model of professional development evaluation will be used. This
five-level model will support the research team's ability to synthesize data collected at
different points in time, from different participants, and across the four case studies. The
five levels are: participants' reactions; participants' learning; organisational support and
change; participants' use of new knowledge and skills; and outcomes for service users and
services.
Data collection
The case studies for Cohort One will be facilitated by two visits to each case study site.
Cohort One first visit
There will be one initial visit (in the second half of October 2014) to each case study site.
This will be a set-up visit to clarify the evaluation protocols and procedures, apply the audit
framework to develop an initial overview of how Frontline is being implemented, identify
key data sources, and collect basic contact information on participants. In addition it will be
used to set up on-going data collection processes that will continue up until the second visit.
The first visit to each Frontline unit will be used to set up telephone interviews with:
- Consultant social workers (n=4)
- Social work managers and/or practitioners (n≈16)
- Other senior managers (n≈6)
- Lecturers/supervisors (n≈4)
- Frontline participants (n=16)
- Frontline specialists (n=3).
These interviews will take place in October-December 2014. The focus and content of these
interviews will be based upon the audit framework and the Guskey model, together with any
emergent issues arising from them.
Cohort One second visit
In January 2015 there will be a second visit to each case study site. By this visit the initial
analysis of telephone interviews will have led to a draft overview of the implementation of
Frontline and an initial understanding of how different aspects are interacting. These visits
will be an opportunity to assess the validity of these developing perspectives.
Data collection during the second visit will include:
- A focus group with Frontline participants in each site in order to validate and deepen the
emerging analyses.
- Face-to-face interviews with consultant social workers in order to discuss emergent
themes and issues.
- Observation of interactions within the Frontline units.
In the period from April to August 2015, further telephone interviews will be conducted with
Frontline participants and consultant social workers about their experiences of the latter half
of the Frontline training period. There will also be interviews with key Frontline staff
(including Frontline specialists) about their perceptions of the first year of operation and what
changes might be needed for Cohort Two.
Also, Frontline participants in case studies will be asked to approach all the families on their
caseloads about their willingness to take part in a short telephone interview about their
experience of working with a Frontline student. These interviews will be undertaken by the
research team and will focus on the family member’s views of the Frontline participant’s
abilities, including their relationship skills and efficiency. Rating scales with Likert-style
response categories will be used, as well as some open questions. All families who express a
willingness to take part will be interviewed; the sample size is therefore unknown. At least
one adult in each family will be interviewed, with an aim to interview fathers as well as
mothers. Older children and young people will also be offered the opportunity to take part in
separate interviews.
Cohort Two
Two of the same four case study unit settings will be continued for Cohort Two, but they will
be slimmed down in comparison with Cohort One. Two new case studies will also be
identified, in discussion with the Department and the Advisory Group. Data collection will be
primarily targeted on problematic issues arising from Cohort One.
The priority for Cohort Two will be to collect data from Frontline participants, consultant
social workers, managers and lecturers. There will be no data collection from children and
families in Cohort Two. Telephone interviews will be undertaken with key staff responsible
for managing Frontline (including Frontline specialists), focused on modifications that have
been made to the programme as a result of the Cohort One experience.
Analysis
Interviews and focus groups will be recorded, with permission, and transcribed. NVivo
software will be used to facilitate thematic coding. The coding frame will be structured
around the audit framework.
3. Measure objectively how well Frontline prepares participants to be outstanding social
workers.
This element of the evaluation will be quasi-experimental, with agreed aspects of the
Frontline participants’ practice skills and understanding being compared with those of
students in high tariff universities and a group of students representative of the overall
population of entrants to the social work profession (including students on masters and
undergraduate programmes in a range of universities). The research team will build on the
use of simulated patients in medical education and research (Objective Structured Clinical
Examination – OSCE) to create a standardised simulated social work client interview,
supplemented by a linked written assessment task. Both interview and written task will be
assessed in order to compare the practice skills and understanding of Frontline and the two
comparison groups of students.
Development and validation of a simulated client interview and assessment schedule
An instrument for assessing qualifying students' skills via a simulated client interview will
be developed on the basis of consensus-building. A 'Delphi' process will be undertaken with
social work academics, practice educators, practitioners and service users, in order to reach a
consensus about a system for scoring practice quality. The Delphi method consists of a series
of individual consultations with domain experts, interspersed with controlled feedback of the
experts' opinions (Dalkey and Helmer, 1963).
The Delphi will start from the assessment tools for generic social work skills that have
already been developed and validated by Marian Bogo and colleagues in Canada
(http://research.socialwork.utoronto.ca/hubpage/resources-2). These tools map on to the
Professional Capabilities Framework, the Health and Care Professions Council’s standards of
proficiency and the Chief Social Worker’s Knowledge and Skills document. The Delphi
process will start from these existing scales as they have already been through a process of
validation and will consider the extent to which they work for a UK context.
The Delphi group will consist, in equal proportions, of practitioners, service users, practice
educators and social work lecturers. These experts will be recruited via the College of Social
Work (practitioners), the Joint University Council Social Work Education Committee (social
work lecturers), the National Organisation of Practice Teaching (practice educators) and
Voices from Care (care-experienced young people who have had social workers, some also as
parents).
Following the Delphi process, the reliability and validity of the simulated client assessment
will be tested by piloting the interview and written assessment with Cardiff University MASW
students and piloting the 'blind' rating of these by experienced practice educators. The final
version of the assessment instrument will be sent to the Delphi group to seek confirmation of
face validity.
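The protocol does not commit to a particular reliability statistic. As one hedged illustration,
agreement between two practice educators blind-rating the same pilot interviews could be
summarised with a weighted kappa, as in the Python sketch below; the ratings shown are
invented placeholders on a 0-4 scale.

from sklearn.metrics import cohen_kappa_score

# Invented placeholder ratings for ten pilot interviews (0-4 scale)
rater_1 = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]
rater_2 = [3, 3, 4, 1, 2, 2, 1, 4, 3, 1]

# Quadratic weighting penalises large disagreements more than small ones
kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")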
Data collection
The research team will recruit 70 about-to-be-qualified social work students from each of
three groups – a) Frontline Cohort One, b) Masters programmes in high UG tariff universities
and c) a representative sample of undergraduate and Masters programmes – to take part in
two simulated client interviews and a written assessment related to these. The sample size is
based on the following calculation:
• Assuming that practice quality is scored on a scale of 0-100, to detect a mean difference
between two groups of 7 points, with a standard deviation of 15, would require a sample
size of 73 in each group, with a significance level of .05 and 80% power. This level of
difference between groups would give a moderate effect size (Cohen's d) of 0.46.
Recruiting exactly this number would imply spurious precision, because the as-yet-unknown
standard deviation will affect the statistical power. The sample has therefore been rounded
to 70 per group.
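The stated calculation can be checked with standard power-analysis software. The short
Python sketch below, using statsmodels, reproduces the figure of 73 per group from the
implied effect size of d = 7/15 ≈ 0.47.

from statsmodels.stats.power import TTestIndPower

d = 7 / 15  # standardised mean difference (Cohen's d), approximately 0.47
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(round(n_per_group))  # approximately 73 participants per group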
The simulated client interviews will assess students’ communication skills in particular.
There will be two ten-minute interviews, with the same scenarios being used across all three
groups of participating students. Actors will be trained and scripted to take service user roles
in the interviews. It is anticipated that the same actors will be used across all three student
groups. One case scenario will feature a parent and the other a teenager. Their understanding
and response will be further assessed by written tasks related to both interviews, which will
be undertaken immediately after the interviews, within a reasonable time limit. The written
assessment tasks will require students to write about what they thought was the nature of the
presenting problem and what they would have done to help. Space for writing will be limited
to reduce the burden on participants. There will also be one or more questions asking them to
reflect on their own performance: what went well and what they could have improved on.
If their programme has specialist options, the non-Frontline students taking part in the
simulated client test should be following a child and family pathway (or equivalent).
Simulated client events for undergraduates and postgraduates will be run at their universities
where possible. Students participating in the simulated client events will be offered £30 cash
in recognition of their time, on the basis that this is demanding data collection which is
clearly over and above any expectations of their social work training. They will also be
offered feedback in the form of (1) a copy of their video, (2) their individual scores for all
domains assessed, (3) mean scores across all students and (4) generic commentary on the
quality of practice across all 210 students.
Experienced practice educators will rate each video, without knowledge of group
membership. These individuals will be trained in the process through marking and
moderation of the pilot interviews from Cardiff MASW students.
The students taking part in the simulated client exercise will also be asked to rate their social
work self-efficacy (Holden et al., 2002) and will be asked about constraints on their experience
of training, such as caseloads, caring responsibilities and paid employment.
Analysis
Analysis of variance (ANOVA) will be used, which allows any number of groups to be
compared in a single test to determine whether there are significant differences between any
two of them. ANOVA is an extension of the two-sample t-test to multiple groups; both
methods compare variability between groups with variability within groups.
In the first instance, a one-way ANOVA will be used to answer the question 'are there
significant differences between the three groups of social work students?' However, ANOVA
only identifies whether there is a difference in means, so in order to answer the question
'which group is significantly different from which?' Tukey's Honestly Significant Difference
(HSD) test will be used. The partial eta-squared statistic will then be calculated to identify
how important any difference is from a practical point of view.
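The following Python sketch illustrates this analysis pipeline (one-way ANOVA, Tukey's
HSD, then partial eta-squared) on invented scores for three groups of 70. The group labels
and score distributions are placeholders, not predictions of the study's results.

import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented placeholder data: 70 practice-quality scores per group
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["frontline", "high_tariff", "representative"], 70),
    "score": np.concatenate([rng.normal(65, 15, 70),
                             rng.normal(60, 15, 70),
                             rng.normal(58, 15, 70)]),
})

# One-way ANOVA: is there any difference among the three group means?
groups = [g["score"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey's HSD: which pairs of groups differ significantly?
print(pairwise_tukeyhsd(df["score"], df["group"]))

# Partial eta-squared (equal to eta-squared in a one-way design):
# SS_between / (SS_between + SS_within)
grand_mean = df["score"].mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
print(f"partial eta^2 = {ss_between / (ss_between + ss_within):.3f}")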
It is important to note that this is not a comparison of like with like. There are considerable
differences between Frontline and regular social work programmes in terms of, for example,
generic or specialist content and level of financial support, both for students directly and for
programme infrastructure. Such differences will need to be taken into account in interpreting
the results of any comparisons. Analysis of covariance will allow the research team to
control for the effect of important factors such as students' caring responsibilities, paid work
commitments and caseloads when comparing the three groups.
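A hedged sketch of the analysis of covariance follows, again with invented data: the group
comparison is adjusted for a single hypothetical covariate (weekly hours of paid work). In the
actual analysis the covariates would be those measured from participating students.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented placeholder data with one covariate
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["frontline", "high_tariff", "representative"], 70),
    "paid_work_hours": rng.uniform(0, 20, 210),
})
df["score"] = 60 - 0.4 * df["paid_work_hours"] + rng.normal(0, 15, 210)

# ANCOVA expressed as a linear model: score ~ group + covariate
model = smf.ols("score ~ C(group) + paid_work_hours", data=df).fit()
print(anova_lm(model, typ=2))  # group effect adjusted for the covariate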
Report
The final report will be published online by the Department for Education.
References
Badger, L.W. and MacNeil, G. (2002) Standardized clients in the classroom: a novel
instructional technique for social work educators. Research on Social Work Practice, 12:
364-374.
Bogo, M., Regehr, C., Katz, E., Logie, C., Tufford, L. and Litvack, A. (2012) Evaluating an
Objective Structured Clinical Examination (OSCE) adapted for social work. Research on
Social Work Practice, 22 (4): 428-436.
Carpenter, J. (ed.) (2011) Special issue on the outcomes of social work education. Social
Work Education, 30 (2).
Dalkey, N. and Helmer, O. (1963) An experimental application of the Delphi method to the
use of experts. Management Science, 9 (3): 458-467.
Forrester, D., Kershaw, S., Moss, H. and Hughes, L. (2008) Communication skills in child
protection: how do social workers talk to parents? Child and Family Social Work, 13:
41-51.
Frontline (2014) Mission statement on website, accessed 12 July 2014:
http://www.thefrontline.org.uk/about-frontline/what-drives-frontline
Guardian, The (2013) University league table 2014:
http://www.theguardian.com/education/table/2013/jun/03/university-league-table-2014
Guskey, T.R. (2000) Evaluating Professional Development. Thousand Oaks, CA: Corwin
Press.
Holden, G., Meenaghan, T., Anastas, J. and Metrey, G. (2002) Outcomes of social work
education: the case for social work self-efficacy. Journal of Social Work Education, 38:
115-133.
Kinnersley, P. and Pill, R. (1993) Potential of using simulated patients to study the
performance of general practitioners. British Journal of General Practice, 43 (372):
297-300.
Kirkpatrick, D.L. (1994) Evaluating Training Programs: The Four Levels. San Francisco:
Berrett-Koehler.
Lane, C., Huws-Thomas, N., Hood, K., Rollnick, S., Edwards, K. and Robling, M. (2005)
Measuring adaptations of motivational interviewing: the development and validation of the
behavior change counseling index (BECCI). Patient Education and Counseling, 56:
166-173.
Petracchi, H.E. (1999) Using professionally trained actors in social work role-play
simulations. Journal of Sociology and Social Welfare, 26 (4).