Keep Well National Evaluation
Research Design and Specification
National Evaluation of Keep Well
1. Introduction
The following paper summarises the framework for the national evaluation of Keep Well, with
particular reference to work to be conducted in years 1 and 2. It starts by stating the original
aim of the evaluation, as described in the original tender document, and identifies the key
challenges in addressing it. It then outlines the purpose of the proposed evaluation
framework and describes the types of questions that will be addressed. The methods that will
be employed are described and costed; the paper concludes by highlighting those areas that
the commissioned evaluation does not intend to cover.
2. Original aim of the evaluation
The tender document prepared by NHS Health Scotland sets out the following aim for the
commissioned national evaluation of Keep Well:
To build knowledge about the feasibility/challenges of delivering Prevention 2010 [Keep Well]
and the effectiveness of different approaches to engagement and service re-design with a
view to incorporating the lessons learned from the pilots into subsequent waves of
implementation.
The original tender document also listed a wide range of more specific objectives for the
evaluation. This list, together with an indication of whether each objective is met by the
framework proposed by the evaluation team, is provided in Appendix 1.
3. The central challenge of the stated aim
There are many challenges in evaluating complex interventions but the most significant is in
relation to learning about the effectiveness of an intervention (or ‘what works’). This difficulty
arises for a number of reasons, including:
•  Specifying in detail what the programme consists of;
•  The likely variation in what is implemented at a local level (a lack of 'model fidelity' to the national plan and to the evidence base);
•  The feasibility of implementing robust monitoring/measurement systems;
•  The identification of robust and acceptable indicators of success; and
•  The problem of making valid comparisons between intervention and control areas due to 'policy diffusion' or complex policy contexts.
A focus on effectiveness prior to establishing consensus on the ‘evaluability’ of an intervention
risks producing invalid learning and a lack of guidance on programme improvement. For this
reason we propose a two-phase evaluation that focuses on formative learning prior to a
possible assessment of effectiveness being made. This will allow us to provide a much more
detailed understanding of what Keep Well consists of and to develop a more sophisticated set
of hypotheses about key processes and linkages in different settings that are worthy of further
investigation.
4. The proposed framework and its purpose
Table 1 below sets out the two phases of the evaluation, their central purpose and the type of
questions that they will address. The first phase, conducted in years one and two (2007–2009), will:
•  Describe Keep Well and set out its planned and potential unintended consequences (explication);
•  Assess the comparability of the programme between pilots and with the national model (model fidelity);
•  Identify key lessons about implementation, including unintended consequences (knowledge development); and
•  Reach conclusions about the feasibility of conducting an effectiveness study in a second phase of the evaluation (assessment of evaluability).
Depending on the findings of the first phase, the second phase (conducted in year three
2009-2010) will assess the impact of particular aspects of Keep Well by undertaking practice
level case studies to provide richer learning about how the intervention works in different
contexts for particular groups. (See Appendix 2 for a set of potential issues that might be
covered by such an approach).
Assessment of the additional impact of Keep Well, over and above other on-going policy
developments (e.g. the GMS contract), will be conducted over phases 1 and 2 of the evaluation.
Work conducted during phase 1 will assess the availability and robustness of data from
non-Keep Well control areas and, if suitable, these data will be used to compare performance
between Keep Well and non-Keep Well areas. This work will continue in phase 2 of the evaluation, to allow
for data collection and analysis of mid-term outcomes.
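Purely as an illustration of the kind of pilot versus control comparison envisaged here (it does not form part of the commissioned work), the sketch below contrasts a routinely collected indicator between Keep Well and non-Keep Well practices before and after implementation; the column names (practice_id, keep_well, period, chd_qof_points) are hypothetical placeholders for whatever indicators are finally agreed.

```python
# Illustrative sketch only: a before/after comparison of a routinely collected
# CHD indicator between Keep Well and non-Keep Well practices.
# All column names are hypothetical placeholders for the agreed indicators.
import pandas as pd

def keep_well_comparison(df: pd.DataFrame) -> pd.DataFrame:
    """Mean indicator by group and period, plus a simple difference-in-differences.

    Expects one row per practice per period with columns:
      keep_well (bool), period ('pre' or 'post'), chd_qof_points (numeric).
    """
    means = (df.groupby(["keep_well", "period"])["chd_qof_points"]
               .mean()
               .unstack("period"))
    means["change"] = means["post"] - means["pre"]
    # Additional change in Keep Well practices over and above control practices.
    did = means.loc[True, "change"] - means.loc[False, "change"]
    print(f"Additional change in Keep Well practices: {did:.2f} points")
    return means

if __name__ == "__main__":
    example = pd.DataFrame({
        "practice_id": [1, 1, 2, 2, 3, 3, 4, 4],
        "keep_well":   [True, True, True, True, False, False, False, False],
        "period":      ["pre", "post"] * 4,
        "chd_qof_points": [62, 71, 58, 66, 60, 63, 57, 59],
    })
    print(keep_well_comparison(example))
```

Whether such a comparison is defensible will depend on the phase 1 findings about data quality and the comparability of control areas.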
Table 1: Purposes of the Keep Well evaluation

Phase 1

Purpose: Explication
Key questions:
•  How and why is the intervention likely to bring about change?
•  When are programme impacts likely to occur?
•  What are the components of Keep Well? Which of these components are stable over time? Which components "evolve" over time?
•  What are the linkages between Keep Well and other interventions delivered in the same settings?

Purpose: Assessing fidelity
Key questions:
•  Is there consistency and commonality in approaches and implementation across the sites to define a national Keep Well framework?
•  Is there consistency and commonality with the national model of Keep Well?
•  Were the planned approaches and implementation by the funders and individual sites conducive to developing a coherent Keep Well framework?

Purpose: Knowledge development
Key questions:
•  What lessons have we learned about reach, engagement and adherence for future initiatives? For example, what are the consequences (intended and unintended) of the strategies adopted?
•  What are the actual pathways that programmes take?
•  What are the challenges inherent in implementing the approaches which programmes take?

Purpose: Assessment of evaluability
Key questions:
•  Is an assessment of the merit of Keep Well possible? Note that the answer to this question will depend on the stability of the intervention, the feasibility of having a control group, the clarity with which Keep Well is planned and implemented, and the quality of core data sets.
•  Is there enough commonality across sites to implement a multi-site evaluation?

Phase 2

Purpose: Assessment of merit and worth of Keep Well
Key questions:
•  Does Keep Well improve the identification, engagement and adherence of patients?
•  Does Keep Well improve individual-level outcomes?
•  Does Keep Well lead to improvements in practice- and population-level outcomes?
5. Conducting the Phase 1 evaluation
The first phase of the evaluation, which will run from April 2007 until April 2009,[1] will be
undertaken by two work packages running concurrently. These are:
Work Package 1 – Tracking national and pilot theories of change; and
Work Package 2 - Tracking the impact of Keep Well on “anticipatory care” in the target
population using secondary data analysis.
Prior to commencing the formal evaluation, the evaluation team, alongside Health Scotland,
will work with local pilots and national stakeholders to identify key outcomes and success
indicators. This will encourage consensus about how judgements about Keep Well’s success
are made.
Work Package 1 – Tracking national and pilot theories of change (Leads: Mhairi
Mackenzie and Sanjeev Sridharan).
The main objectives and research questions to be addressed in work package 1 are detailed
in Table 2 below. In summary, this package will use a Theories of Change[2] approach to
explicate the range of national and local theories about how Keep Well will reach its
objectives. It will also assess the degree of fit between local and national theories, provide
learning about the process of programme implementation and help to identify future promising
components for evaluation in Phase 2. At the end of the first phase of evaluation we will be
able to provide a better understanding of multiple models of reach across Keep Well sites and
be in a position to understand the challenges and benefits of providing anticipatory care to
deprived and hard to reach populations within the contemporary policy context for primary
care.
[1] The time lag between commissioning the national evaluation and its launch is required to obtain ethical approval and to recruit research staff. This is not uncommon in policy evaluation. However, the core evaluation team will use this time to build relationships with the pilot areas in advance of the formal evaluation start date.

[2] Connell, J. & Kubisch, A. (1998) 'Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects and problems'. In Fulbright-Anderson, K., Kubisch, A. & Connell, J. (eds) New approaches to evaluating community initiatives. Volume 2: theory, measurement, and analysis. Washington DC: The Aspen Institute.
Table 2. Work Package 1: Tracking National and Pilot Theories of Change

Objective: To assess closeness of fit between national and pilot level theories (explication and model fidelity)
Key question: To what extent are underlying rationales and expectations shared across pilot areas and between pilot and national levels?

Objective: To test the 'goodness' of pilot level rationales for the approach to Keep Well, using the Aspen criteria of testability, feasibility and doability (knowledge development)
Key question: To what extent are pilot level theories testable, feasible and doable?

Objective: To provide an integrated framework for tracking the links between Keep Well activities, processes and outcomes (explication and knowledge development)
Key question: To what extent are pilot and national level theories of change in relation to reach, engagement, adherence and behaviour change actually implemented?

Objective: To provide a framework for comparing approaches across pilot areas
Key question: If agreed indicators of success are established, to what extent can we identify those features at a pilot level associated with the most successful strategies for reach, engagement, adherence and behaviour change?

Objective: To provide a framework for identifying unintended consequences (knowledge development)
Key question: To what extent does Keep Well drive attention away from existing patients with known CHD (a focus on the new 20% rather than the existing 80% in the practice population) and from other health priorities, such as mental health, to cardio-vascular disease?

Objective: To provide a rationale for more detailed case studies (evaluability)
Key question: What are the challenges and drivers influencing change perceived at a pilot level?

Objective: To provide an assessment of the overall evaluability of Keep Well
Key question: To what extent can a complex multi-site intervention such as Keep Well be evaluated?

Objective (Phase 2): To develop a framework for learning about the merit and worth of Keep Well
Key question: To be focused on 4 or 5 of the most promising case studies, as identified during phase 1 of the evaluation.
Risks
•  That, whilst some high-level outcomes for the project have been agreed, consensus is still needed on anticipated timelines, thresholds and the feasibility of their measurement.
•  That it proves too difficult to establish agreed indicators of success.
•  That routine data (both practice and community contact) are not available on time to populate the theories of change prospectively.
•  That stakeholders do not engage with the process because of time constraints.
•  That formative feedback is delayed by implementation delays.
Proposed Fieldwork for Work Package 1.
April 2007 – March 2008: Explication, Assessment of Model Fidelity,
Knowledge Development.
Fieldwork for Work Package 1 will be conducted at a national level and across the five pilot
sites, at the level of pilot implementation. Extensive fieldwork within practices will not be
feasible, due to the large number of practices (approximately ninety) involved across the
pilot sites. A Gantt chart showing the breakdown of the fieldwork by time is shown in
Appendix 3.
April – September 2007: Data Collection[3]
National level Theory of Change interviews (n=8)
This will involve interviews with key stakeholders chosen for their role in overseeing the
national implementation of Keep Well. Potential interviewees will include Frances Wood
(HISD, Scottish Executive); other members of SEHD charged with implementation of Keep
Well; and John Howie (Keep Well Programme Manager, NHS Health Scotland).
Observation of national steering group meetings (n=2 to 4)
Following advice from stakeholders, members of the evaluation team will attend a selection of
meetings thought to be key in the implementation of Keep Well. Such observation of the
process of implementation will allow the evaluation team to develop a greater understanding
of the national context within which Keep Well is operating and may identify areas which
require further explication through detailed pilot-level fieldwork.
Pilot site Theory of Change interviews (n=10 per pilot)
Approximately 10 interviews will be conducted with each pilot site (a total of at least 50
interviews). Interviewees will include the pilot manager and pilot lead in each site. Other
interviewees may include GPs and/or nurses involved in the local implementation of the pilots
and individuals identified through initial preparatory work undertaken with the pilots prior to
the formal start of the evaluation. The purpose of these interviews will be to explore issues
relevant to the implementation of Keep Well in each pilot site, e.g. the process of professional
negotiation, and how different interventions were selected across each pilot site prior to the
initiation of Keep Well; the rationale for the selection of the particular models in each pilot site;
the selection of key success criteria; the barriers and supports to the implementation of Keep
Well in each site.
[3] Fieldwork from April – September 2007 will focus on explicating national and local models of Keep Well. This will pick up on early process learning as well as allowing the evaluation team to start to answer questions about model fidelity within and across pilots and between national and pilot programme models.
These interviews will be conducted early in Phase 1 of the evaluation. Stakeholders will be
contacted again towards the end of Phase 1 to re-visit these issues and to explore how the
implementation of Keep Well has developed in their site.
Observation at pilot level steering group meetings (n=2 to 4)
As with the national level data collection, members of the evaluation team will attend a
selection of meetings thought to be key in the implementation of Keep Well within each pilot
site. These observations will again allow the evaluation team to develop a greater
understanding of the local contexts within which each pilot is operating.
Documentary review
Documentary evidence about the establishment, priority setting and early work of the pilots
will be collected for analytical critique. This evidence will include the initial and final tender documents
submitted to Health Scotland; documents outlining priority setting; documents detailing the
operational working of the pilots. Other documents will be identified during the interviews.
Email diaries to key stakeholders will also be used as an additional form of documentary
evidence, charting significant developments (both positive and negative) during the
implementation of the pilots.
October – December 2007: Development and conduct of analyses
During this phase of the work, all interview data will be transcribed for analyses. Analyses will
draw together Work Package 1 and Work Package 2 of this study in an integrated framework,
in which the findings from one approach can inform and illuminate the findings from the other.
Qualitative data will be transcribed verbatim and entered into an appropriate qualitative data
package, for example NVIVO or Atlas.ti. Field notes from the interviews and observation of
meetings, along with documentary evidence, will be scanned into Word, then imported into
NVIVO or Atlas.ti. We have established procedures for collating and preparing qualitative data
for analysis, including transcription and anonymisation, which will be undertaken at this time.
This preparatory work will result in the establishment of six integrated datasets: one set of
national data; one set for each pilot site. Analyses will be conducted initially within each
dataset; findings will then be compared across datasets, e.g. comparing different approaches
to reach across the pilot sites.
Analyses will be conducted within each dataset to allow the production of a national interim
report and five pilot-level interim reports. During this time, a meeting will be held with key stakeholders within
each pilot site to discuss preliminary findings and the evaluation team’s interpretation of
findings. These discussions will be fed back into the process of analyses.
January – March 2008: Report Writing and Dissemination
Following analyses, a set of six interim reports will be written which address explication, make
an assessment of model fidelity at national and pilot levels, and address knowledge
development nationally and within the sites. These reports will be disseminated in both written
format and through meetings with each of the pilot sites.
April 2008 – March 2009: Explication, Assessment of Model Fidelity,
Knowledge Development & Assessment of Evaluability
April – September 2008: Data Collection[4]
This phase will follow the same data collection methods outlined above. A repeat
set of Theory of Change interviews will be conducted with those individuals interviewed the
previous year. Some additional interviews will be conducted with individuals involved in the
establishment of the Wave 2 pilots (date and location to be confirmed). Observations of
meetings and documentary review will be conducted as previously outlined.
October – December 2008: Data Analysis
Data will be drawn together from the pilot sites and analysed as previously described. Data
from Work Package 1 will be integrated with that from Work Package 2, to inform the
evaluability assessment.
January – March 2009: Report Writing and Dissemination
Production of a final report for Phase 1, integrating data from all six reports above
and incorporating an evaluability assessment. Dissemination will include reporting meetings to
the Scottish Executive and to each of the pilot sites.
[4] Fieldwork from April – September 2008 will focus on how initial plans are unfolding and on the types of barriers and drivers of change. Stakeholders (both national and local) will be asked to consider the degree to which emerging data support, amend or refute their original models of how Keep Well should operate.
Work Package 2 – Tracking the impact of Keep Well on ‘anticipatory care’ in
the target population using secondary data analysis (Leads: Kate O’Donnell,
with Matt Sutton)
The main objectives and research questions to be addressed in Work Package 2 are detailed
in Table 3 below. In summary this package will use a range of routine and Keep Well data,
collected at the level of practice populations, to determine the extent to which the programme
is meeting its intended short and medium-term objectives. While many of these data will be
collected as part of routine monitoring, the added value from the evaluation will be to link
these data explicitly with the qualitative approaches used in Work Package 1, using these
data to populate the theories of change articulated through Work Package 1. This work
package will assess the feasibility of using existing data collection systems to answer questions of
merit and worth, and will assess the merit of different approaches within the pilots to achieving
key indicators, including reach, assessment and interventions, setting the progress made by
each pilot site in the context of the populations served and the structural characteristics of
general practice/primary care service provision within each site.
Table 3 – Phase 1, Work Package 2: Tracking the impact of Keep Well on "anticipatory care" in the target population using routine data (Lead: Catherine O'Donnell with Matt Sutton)

Objective: To monitor pilot sites' ability to identify patient target groups within practices
Key questions:
•  What is the size of the practice patient population within the target groups?
•  What proportion of the population do practices/pilots identify within each of the target populations (reach), e.g. those in the target age group; those on a CHD register; those attending for a KW health check?

Objective: To measure the proportion of patients within each target group approached and engaged by practices and pilots
Key questions:
•  What proportion of identified patients are approached by practices and pilots?
•  How many of those patients attend practices for Keep Well health checks (reach & engagement)?

Objective: To describe the health profiles of identified patients attending for Keep Well health checks
Key questions:
•  What risk factors do such patients have?
•  What is the level of co-existing disease?

Objective: To describe the processes associated with Keep Well health checks
Key question:
•  What processes are put in place following a Keep Well health check, e.g. prescribing; signposting; referral?

Objective: To assess the ability of routinely available data to address the merit of Keep Well
Key questions:
•  What data are available to address such questions?
•  How do pilot sites vary in terms of the populations served and the characteristics of general practice/primary care service provision in each site?
•  Do these characteristics explain potential variation in the ability of pilots to identify, reach and engage with the eligible population?

Objective: To analyse how Keep Well practices perform over time, from before the inception of the intervention
Key questions:
•  What were the QOF levels of achievement and prevalence in Keep Well practices before and after the implementation of the intervention?
•  What were the levels of achievement in other areas, e.g. CHD prescribing; referrals and/or hospital admissions?

Objective: To explain the extent to which the Keep Well dataset and other routinely collected data can address the above questions
Key question:
•  How readily can data from different sources be linked and used to assess the impact of Keep Well on anticipatory care?

Objective (Phase 2): How do Keep Well practices perform compared to other non-Keep Well practices in terms of CHD performance?
Key questions:
•  What are the QOF levels of achievement in Keep Well practices and non-Keep Well practices?
•  What are the recorded prevalence levels of CHD?
Risks
•  That routine data from either the KW core dataset or from ISD are not fully available in order to measure the impact of Keep Well activities.
•  That practice-level data are not sensitive enough to measure differences over time.
•  That non-Keep Well activities (e.g. QOF, enhanced services) "swamp" intervention effects.
Proposed Fieldwork for Work Package 2.
April – September 2007: Identification and assessment of available data.
The following national, area and practice-level routine datasets will be assessed for their
ability to help answer the above objectives:
•  ISD datasets on practice, GP and population characteristics.
•  Payment data on individual practices, including data on nGMS for Essential and Additional Services and Locally Enhanced Service payments.
•  Process and outcome data from ISD, including prescribing data; outpatient referrals; admissions data for emergency and elective procedures.
•  Mortality data.
•  QOF data, including points achievement across the QOF domains and QOF prevalence data.
•  CHD Directly Enhanced Service dataset.
•  Keep Well dataset.
An initial assessment will be made of data availability, quality and suitability to answer the
above objectives e.g. the feasibility of developing pilot level profiles of performance using the
above datasets. The evaluation team will work with the Keep Well Information Sub-group and
with data managers/analysts within each pilot site to identify areas of common interest where
the evaluation team can further develop or support the analysis and reporting being
conducted at a local level.
In addition to developing approaches to analysis within each pilot, an assessment will be
made of the routine data available from non-Keep Well areas to determine if it is possible to
track changes in CHD care across pilot and control areas. However, the lack of appropriate
comparator practices, in terms of the level of socio-economic deprivation within non-Keep
Well practices, may make such a comparison inappropriate.
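To illustrate how the adequacy of potential comparator practices might be checked (a sketch only, not part of the costed evaluation, and using hypothetical column names practice_id, keep_well and simd_score rather than any agreed dataset), one simple approach is to look, for each Keep Well practice, for the non-Keep Well practice with the closest deprivation score and flag matches where the gap is too wide:

```python
# Illustrative sketch only: checking whether plausible deprivation-matched
# comparator practices exist. Column names (practice_id, keep_well, simd_score)
# are hypothetical placeholders for the routine data actually available.
import pandas as pd

def nearest_deprivation_match(practices: pd.DataFrame,
                              max_gap: float = 2.0) -> pd.DataFrame:
    """For each Keep Well practice, find the closest non-Keep Well practice by
    deprivation score and flag matches whose gap exceeds `max_gap`."""
    kw = practices[practices["keep_well"]]
    controls = practices[~practices["keep_well"]]
    rows = []
    for _, p in kw.iterrows():
        gaps = (controls["simd_score"] - p["simd_score"]).abs()
        best = gaps.idxmin()
        rows.append({
            "kw_practice": p["practice_id"],
            "control_practice": controls.loc[best, "practice_id"],
            "deprivation_gap": gaps.loc[best],
            "acceptable": gaps.loc[best] <= max_gap,
        })
    return pd.DataFrame(rows)
```

If few matches fall within an acceptable deprivation gap, this would support the concern above that a pilot/control comparison may be inappropriate.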
April – September 2007: Construction of analytical profiles of pilot sites.
Using the data identified above, an analytical profile of each pilot site will be constructed. This
will include a full description of the practices participating within each site in terms of
population served (e.g. population demographics; Keep Well target population as a
percentage of overall Keep Well practice populations served; measures of limiting long-term
illness; standardised mortality rates and premature mortality); practice characteristics (e.g.
WTE GPs and practice nurses; participation in voluntary activities such as SPICE, practice
accreditation programmes, GP training); referral rates for CHD-related activities; emergency
medical admissions; CHD-related prescribing; QOF achievement. Keep Well practices will be
compared with non-participating practices in their Health Board, to determine the extent
of need in Keep Well practices. Pilot sites will also be compared with each other to determine
if pilots which have adopted a more inclusive approach, in terms of the number of practices
included in the pilot, have different characteristics from other pilot sites.
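As an indication of how such an analytical profile might be assembled from practice-level routine data (illustrative only; the measures and column names health_board, keep_well, list_size, wte_gps, qof_points and chd_admission_rate are hypothetical), the sketch below summarises participating and non-participating practices within each Health Board:

```python
# Illustrative sketch only: profiling Keep Well practices against
# non-participating practices in the same health board, using practice-level
# routine data. All column names here are hypothetical placeholders.
import pandas as pd

PROFILE_MEASURES = ["list_size", "wte_gps", "qof_points", "chd_admission_rate"]

def pilot_profiles(practices: pd.DataFrame):
    """Mean of each profile measure by health board and Keep Well status,
    plus the Keep Well minus non-Keep Well gap within each board."""
    summary = (practices
               .groupby(["health_board", "keep_well"])[PROFILE_MEASURES]
               .mean()
               .round(1))
    gap = (summary.xs(True, level="keep_well")
           - summary.xs(False, level="keep_well"))
    return summary, gap
```

The actual profiles will use whichever measures prove available and robust in the assessment of routine data described above.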
October – December 2007: Development and conduct of analyses
Quantitative analyses will be conducted in parallel with the analysis in Work Package 1 and
used to populate emergent Theories of Change across each pilot site. A range of analyses
will be considered, e.g. the ability of practices to improve in terms of reach, assessment and
the delivery of appropriate interventions will be assessed by comparing the levels achieved at
various time points with those levels achieved at the beginning of Keep Well, e.g. at 3- or 6-monthly intervals.
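The following sketch illustrates the kind of comparison against baseline levels described above (again illustrative only; the quarter, eligible, invited and attended_check fields are hypothetical stand-ins for the Keep Well core dataset):

```python
# Illustrative sketch only: tracking reach and engagement at successive time
# points against the baseline quarter. Field names are hypothetical
# placeholders for the Keep Well core dataset fields actually agreed.
import pandas as pd

def reach_over_time(monitoring: pd.DataFrame) -> pd.DataFrame:
    """Per-quarter reach and engagement proportions, with change from the
    first (baseline) quarter."""
    by_quarter = monitoring.groupby("quarter")[["eligible", "invited",
                                                "attended_check"]].sum()
    by_quarter["reach"] = by_quarter["invited"] / by_quarter["eligible"]
    by_quarter["engagement"] = by_quarter["attended_check"] / by_quarter["invited"]
    baseline = by_quarter.iloc[0]
    by_quarter["reach_change"] = by_quarter["reach"] - baseline["reach"]
    by_quarter["engagement_change"] = (by_quarter["engagement"]
                                       - baseline["engagement"])
    return by_quarter.round(3)
```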
Changes in CHD-related care, such as prescribing or referrals, will be analysed over time
starting from the point of implementation of Keep Well, to determine if there are changes in
the level of CHD-related care. Such approaches will also be used to assess changes in care
delivered by comparator non-Keep Well practices, if such control sites are identified in the
identification and assessment of available data.
January – March 2008: Report Writing and Dissemination
These analyses will inform the set of six interim reports to be written. Combining the findings
from Phase 1 Work Packages 1 and 2, these reports will address explication, make an
assessment of model fidelity at national and pilot levels, and address knowledge
development nationally and within the sites. These reports will be disseminated in both written
format and through meetings with each of the pilot sites.
April 2008 – March 2009: Explication, Assessment of Model Fidelity,
Knowledge Development & Assessment of Evaluability[5]
The on-going development of Work Package 2 will be determined by the availability and
suitability of the data to answer questions of effectiveness, e.g. are some models of reach
more effective at reaching the target population than others? This phase of the work will
identify key questions of effectiveness within pilots' approaches to reach, assessment
and the provision of interventions, and will also analyse the performance of practices within
the pilots with regard to CHD care in general.
Appendix 3 contains a Gantt chart outlining the various phases of the work described here.
Reporting in Phase 1
Reporting will take the form of both verbal and written communication. Regular monitoring
structures and protocols will be established to ensure that the evaluation can report into the
ongoing development of both wave 1 and 2 pilots. We will, for example, provide a verbal
update to the Evaluation Advisory Group every 2-3 months. In addition, an interim report
summarising the main features of theories articulated by stakeholders will be produced at the
end of year one.
A formative report integrating work packages 1 and 2 at the end of year two will:
•  Critique pilot level and national theories;
•  Identify barriers, challenges and unintended consequences;
•  Integrate available monitoring data;
•  Critique the evaluability of the Keep Well Programme; and
•  Identify potential approaches to the phase 2 evaluation.
[5] Fieldwork from April – September 2008 will focus on how initial plans are unfolding and on the types of barriers and drivers of change. Stakeholders (both national and local) will be asked to consider the degree to which emerging data support, amend or refute their original models of how Keep Well should operate.
6. What is not being covered and why
Evaluations rarely, if ever, meet all stakeholder expectations of what should be covered. At
this stage we think that it may be helpful to be explicit about some of the aspects of the
programme and its implementation that are not currently part of the proposed approach in
phase one (years one and two).
•  Primary data collection: There will be no primary data collected across participating practices or patients since this would be prohibitively complex and expensive.
•  A critique of individual practice level theories of change: For reasons of both time and money it will not be feasible to undertake a detailed critique of each practice's approach to Prevention 2010. Instead it will be assumed that those at a pilot level will be able to map out variations within their own area.
•  Patient experiences: The views of patients will not be sought in phase 1 of the evaluation as the priority expressed by both Health Scotland and the National Evaluation Team is for learning in the first instance about project implementation.
•  Patient compliance with/adherence to the advice provided: While this is clearly important, it is beyond the scope of phase one (years one and two) of the evaluation.
•  Exploration of the quality of the interaction between patient and health care professional: This would require observational approaches which would be prohibitively expensive.
Agreement will be reached with Health Scotland, by the end of year one, as to the more
precise content of the work to be undertaken in phase 2 (year 3 of the overall evaluation).
The work conducted in phase 1 will lead to the identification of key case study areas worthy of
in-depth study and analysis in phase 2. As outlined in Appendix 2, possible areas of interest
include an exploration of changes made at a practice level in terms of structure and
organisation that contribute to the mainstreaming of Keep Well activities; the impact of Keep
Well from staff and/or patient perspectives; and the extent to which Keep Well has redressed
inequitable service provision. Collection and analyses of routine data, initiated in phase 1, will
be continued during phase 2. A potential timetable for Phase 2 work is contained in Appendix 3.
7. Our approach to supporting local evaluation
We have a commitment to supporting pilot areas (but not practices) in developing their own
local monitoring systems and will also host a number of co-learning events where practice
and pilot level staff can share good practice and difficulties in monitoring change.
8. Costs and staffing
A breakdown of costs and staffing is contained in Appendix 4. The contribution of staff to
each Work Package is summarised in the Gantt Chart in Appendix 3.
The National Evaluation Team
Kate O’Donnell
Mhairi Mackenzie
Steve Platt
Sanjeev Sridharan
20 December 2006
Appendix 1: How the proposed evaluation responds to the original brief
1. Describe and document the following across the pilot practices, including variations across the pilot areas.
Covered by proposed framework: ✓

2. Assess the feasibility/success in using GP practice records to identify the target population.
Covered by proposed framework: ✓

3. Assess to what extent local practices were successful in reaching the target population.
Covered by proposed framework: ✓

4. Assess the barriers/challenges, feasibility, acceptability, effectiveness and cost-effectiveness of different methods of engagement in reaching the target population.
Covered by proposed framework: x
•  Whilst barriers and challenges will be identified, questions of effectiveness assume a stable intervention. As discussed within the document, the evaluation team will make an assessment of the evaluability of Keep Well in relation to its effectiveness and, where possible, will assess its impact.
•  The acceptability of the programme to recipients is not included within the costed proposal but may be prioritised by stakeholders as an important add-on to the existing evaluation.

5. Assess the barriers/challenges of different methods of engaging with primary care professionals and how they were addressed.
Covered by proposed framework: ✓

6. Assess the contribution and effectiveness of the communications strategy in engaging health professionals and the target population.
Covered by proposed framework: x (as above)

7. Assess the barriers/challenges, feasibility, effectiveness and cost-effectiveness of different approaches to service redesign in creating more time for primary care professionals to spend with the target patient group and improving quality of care.
Covered by proposed framework: x (as above)

8. Assess the level of uptake and concordance/compliance/adherence with the recommended interventions among the target population.
Covered by proposed framework: ✓

9. Document and analyse the impact of P2010 on modifying risk factor levels (eg, smoking, blood pressure, cholesterol, diabetes management and other key lifestyle risk factors).
Covered by proposed framework: x
•  These data will not be collected through the Keep Well data set and the proposed study does not include primary data collection from patients.

10. Understand the individual, practice and CHP level factors associated with adherence to, and concordance/compliance with, treatment and reductions in CVD risk factors over the course of the pilot.
Covered by proposed framework: ✓

11. Document and analyse the impacts of P2010 on NHS and non-NHS services (eg, increased demand, prescription costs).
Covered by proposed framework: ✓
•  This will be done utilising routinely collected data. If the programme is not found to be evaluable in relation to its effectiveness then this objective will be unable to address questions of attribution.

12. Document and analyse the impact of P2010 on NHS costs in the short-term and long-term, including the impact on hospital admissions and the cost per life saved.
Covered by proposed framework: ✓
•  This will be done utilising routinely collected data. If the programme is not found to be evaluable in relation to its effectiveness then this objective will be unable to address questions of attribution.
Appendix 2: Outcome focused case studies at a practice level – potential
questions for future exploration
Potential Areas of Interest
•  To explore the changes made at a practice level in terms of structure and organisation
•  To explore the most promising approaches and their links to improved adherence/compliance
•  To explore the impact of participating in Keep Well on practice staff
•  To explore the impact of Keep Well from a patient perspective
•  To identify the mechanisms put in place to incorporate Keep Well activities into routine practice
•  To identify unintended consequences for practices participating in Keep Well
•  To investigate the extent to which Keep Well has redressed inequitable service provision
Appendix 3: Gantt Chart: Phase 1
(Tasks run across quarters from Jan-Mar 07 to Jan-Mar 09; the staff responsible for each task are shown.)

Work Package 1
•  Recruitment of staff (Mar 07 only)
•  Ethics application & research governance (Mar 07 only)
•  Office set up (Mar 07 only)
•  National Theory of Change Interviews I – SRF
•  Pilot Theory of Change Interviews I – SRF & RF1
•  Observation of National Steering Group Meetings I – SRF
•  Observation of Pilot Steering Group Meetings I – SRF & RF1
•  Documentary review I – RF1 & RF2
•  Transcription of interview tapes – Admin
•  Development of data analysis framework – Team
•  Data analysis – Team
•  Production of pilot interim reports – Team
•  Production of national interim reports – Team
•  National Theory of Change Interviews II – SRF
•  Pilot Theory of Change Interviews II – SRF & RF1
•  Observation of National Steering Group Meetings II – SRF
•  Observation of Pilot Steering Group Meetings II – SRF & RF1
•  Documentary review II – RF1 & RF2

Work Package 2
•  Identification of available datasets – RF2
•  Assessment of available datasets – RF2
•  Analytical profile of pilot sites – RF2
•  Development and conduct of initial analyses – RF2
•  Population of theories of change frameworks – SRF & RF2
•  Production of pilot interim reports – Team
•  Production of national interim reports – Team
•  Development and conduct of effectiveness analyses – RF2
•  Production of final report for Phase 1 – Team

Work Package 1 led by MM and SS; Work Package 2 led by KOD, with Matt Sutton.
All aspects of the work will involve the core team of KOD, MM, SP and SS in supervising and advising the Research Fellows.
Appendix 3: Gantt Chart: Phase 2
(Tasks run across quarters from Apr-Jun 08 to Jan-Mar 10; the staff responsible for each task are shown.)

•  Formulation of case study rationale – Team with HS
•  Ethical approval & governance – Team
•  Identification of case studies (n = 4-5) – Team
•  Recruitment of case studies – Team
•  Collection of data from case studies – SRF, RF1, RF2
•  Analyses and case study reports – Team
•  Collection & analyses of routine data from all sites – RF2
•  Final report on Phases 1 & 2 – Team

Phase 2 to be led by KOD & MM.
Full work package to be agreed during Year 1 of the evaluation.
Appendix 4: Costs

Central Project Team (all figures £)

Columns: Mar-07 | Apr 07 - Mar 08 | Apr 08 - Mar 09 | Apr 09 - Mar 10 | Total

Staff time
KOD (30%): - | 18927 | 20092 | 21326 | 60345
MM (30%): - | 16762 | 17797 | 18894 | 53453
SP (10%): - | 7792 | 7792 | 7792 | 23376
SS (10%): - | 4722 | 4864 | 4912 | 14498
Subtotal: 6666 | 48203 | 50545 | 52924 | 158338

Staff
Senior Research Fellow (100%): 0 | 42443 | 46488 | 50905 | 139836
Research Fellow 1 (100%): 0 | 34365 | 37563 | 41092 | 113020
Research Fellow 2 (100%): 0 | 34365 | 37563 | 41092 | 113020
Administrator (100%): 0 | 23257 | 25436 | 27814 | 76507
Subtotal: 0 | 134430 | 147050 | 160903 | 442383

Other Costs
Consumables [1]: 1000 | 3000 | 4000 | 4000 | 12000
Recruitment: 2500 | 0 | 0 | 0 | 2500
Travel [2]: 0 | 4000 | 4000 | 4000 | 12000
Conference Costs [3]: 0 | 2000 | 2000 | 2000 | 6000
Equipment (Year 1 only) [4]: 4000 | 0 | 0 | 0 | 4000
Consultancy: Matt Sutton (6 days per yr @ £800): 0 | 4800 | 4800 | 4800 | 14400
Subtotal: 7500 | 13800 | 14800 | 14800 | 50900

University Costs
GU Estate Costs @ 50%: 1830 | 21961 | 25218 | 26731 | 75740
GU Indirect Costs @ 50%: 7020 | 84237 | 96733 | 102537 | 290527
EU Estate Costs: 144 | 1730 | 1874 | 1874 | 5622
EU Indirect Costs: 722 | 8668 | 9390 | 9390 | 28170
Subtotal: 9716 | 116596 | 133215 | 140532 | 400059

Total: 23882 | 313029 | 345610 | 369159 | 1051680
1. Consumables: Covers all project running costs, e.g. telephone calls; photocopying; printer cartridges; digital recording equipment for SRF and RF1; paper, envelopes and postage.
2. Travel: Covers travel of all team members to team meetings; travel to meetings with Health Scotland; travel to pilot sites for interviews, planning meetings, debriefing meetings, etc.
3. Conference costs: Costs to send team members to national and/or international meetings to disseminate findings from the evaluation.
4. Equipment: Costs of purchasing a laptop/desktop computer and printer for each researcher and the administrator (£800-1000 per person, depending on specification of computer).
N.B. The costs of the items detailed above are not covered by the University costs, which are generally retained centrally to pay for central services, heating, electricity etc. Individual departments do not have budgets to cover such costs, instead expecting them to be costed as part of the project.