
Evaluability Assessments: Achieving Better Evaluations, Building Stronger Programs
Nicola Dawkins, PhD, MPH
ICF Macro
Project Team
Robert Wood Johnson Foundation
Laura Leviton, PhD
Centers for Disease Control and Prevention
DNPAO – Laura Kettel Khan, PhD
DASH – Leah Robin, PhD and Seraphine Pitt Barnes, PhD, MPH, CHES
DACH/PRC – Jo Anne Grunbaum, EdD
Centers for Disease Control and Prevention Foundation
Danielle Jackson, MPH, John Moore, PhD, RN, and Holly Wethington, PhD
Macro International Inc.
David Cotton, PhD, MPH, Nicola Dawkins, PhD, MPH, Karen Cheung, MPH, Mary Ann Hall, MPH,
Thearis Osuji, MPH, and Starr Rice, BA
The findings and conclusions presented are those of the authors
and do not necessarily represent the official position of the agencies.
Will Discuss Today
• Introduction to EA and comparison with full evaluation
• Purpose of the Early Assessment project
• A unique process for using multiple EA methods
• Steps in the project
• Results
• Insights and conclusions
Evaluability Assessment
• Assesses:
1. Underlying program logic
2. Current state of program implementation
3. Feasibility of conducting a rigorous, outcomes-focused evaluation or other types of evaluation
Evaluability Assessment Decision Flow
1. Is the intervention promising? → Yes
2. Does the intervention have program design integrity and realistic, achievable goals? → Yes
3. Is the intervention implemented as intended and at an appropriate developmental level? → Yes
4. To answer the evaluation questions: (1) Is there a feasible design? (2) Are data available or feasible to collect? → Yes
→ Evaluable intervention
If the answer at any step is no: assist in improvement of program design, implementation, and evaluation characteristics.
CDC Framework for Program Evaluation
Steps:
1. Engage stakeholders
2. Describe the program
3. Focus the evaluation design
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned
Evaluability Steps Compared to CDC’s Evaluation Framework
• Engage stakeholders → Involve stakeholders and intended users
• Describe the program → Clarify program intent; determine program implementation
• Focus the evaluation design → Work with stakeholders to prioritize key evaluation questions
• Gather credible evidence → Explore designs and measurements
• Justify conclusions; ensure use and share lessons learned → Agree on intended uses
Multiple EA Example
• Convene a panel of experts to identify and review potential environmental programs and policies
• Assess the readiness of environmental programs and policies for evaluation
• Synthesize findings and share promising practices with the field
• Develop a network of public health and evaluation professionals with the skills to conduct evaluability assessments
Unique Systematic Screening and Assessment (SSA) Method
1. CHOOSE priorities (input: guidance; product: focus)
2. SCAN environmental interventions (inputs: nominations, existing inventories, descriptions; product: brief descriptions)
3. REVIEW AND IDENTIFY interventions that warrant evaluability assessment (input: expert review panel; product: list of interventions)
4. EVALUABILITY ASSESSMENTS of priority interventions (input: distributed network of practitioners/researchers; product: report on each intervention)
5. REVIEW AND RATE interventions for promise/readiness for evaluation (input: expert review panel; product: ratings and reports)
6. USE information (communicate with all stakeholders; products: constructive feedback, plan for rigorous evaluation)
7. SYNTHESIZE what is known (product: report of intervention and evaluation issues)
Systematic Process
Year 1 nominations received vs. met inclusion criteria:
• After School/Daycare: 81 received, 34 met criteria
• Food Access: 55 received, 23 met criteria
• School District Local Wellness Policies: 146 received, 58 met criteria
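For a concrete view of the screening yield, here is a minimal sketch of the Year 1 pass-through rates, using the counts from the slide above; the rate calculation itself is an illustration added here, not part of the original deck.

```python
# Year 1 screening funnel: nominations received vs. those meeting the
# inclusion criteria (counts taken from the slide above).
year1 = {
    "After School/Daycare": (81, 34),
    "Food Access": (55, 23),
    "School District Local Wellness Policies": (146, 58),
}

for category, (received, met) in year1.items():
    print(f"{category}: {met}/{received} = {met / received:.0%} met criteria")

total_received = sum(r for r, _ in year1.values())
total_met = sum(m for _, m in year1.values())
print(f"Overall: {total_met}/{total_received} = {total_met / total_received:.0%}")
```

Across all three categories, roughly 4 in 10 nominations (115 of 282) survived the inclusion screen.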
Systematic Process Cont’d
• Expert panel selected 26 using these criteria (a hypothetical scoring sketch follows the list):
– Potential impact
– Innovativeness
– Reach
– Acceptability to stakeholders
– Feasibility of implementation
– Feasibility of adoption
– Sustainability
– Generalizability/transportability
– Staff/organization capacity for evaluation
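The deck does not say how the panel combined these criteria into a selection; purely as an illustration, here is one hypothetical way ratings across the nine criteria could be tallied to rank nominations. The criterion names come from the slide; the rating scale, example nominations, and scores are invented.

```python
# Hypothetical tally: rank nominations by their mean expert rating (1-5)
# across the nine selection criteria. This is NOT the SSA panel's actual
# procedure, which the deck does not specify; it is only an illustration.
CRITERIA = [
    "potential impact", "innovativeness", "reach",
    "acceptability to stakeholders", "feasibility of implementation",
    "feasibility of adoption", "sustainability",
    "generalizability/transportability", "capacity for evaluation",
]

def mean_rating(ratings):
    """Average rating across all nine criteria."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Two made-up nominations, each rated on every criterion by a panelist.
nominations = {
    "Corner store produce program": dict.fromkeys(CRITERIA, 4),
    "After-school activity policy": dict.fromkeys(CRITERIA, 3),
}
for name in sorted(nominations, key=lambda n: mean_rating(nominations[n]), reverse=True):
    print(f"{name}: {mean_rating(nominations[name]):.1f}")
```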
Selected Programs and Policies (Year 1)
• 7 After School/3 Daycare Programs
– 5 programs: PA time, nutritious snacks
– 4 programs: PA time, nutrition education
– 1 policy: PA, nutrition, TV screen time
• 10 Food Access Programs
– 5 farmers’ markets
– 3 supermarket or corner store programs
– 2 restaurant programs
• 6 School District Local Wellness Policies
– All selected addressed PA and nutrition
Evaluability Assessment
• Review of documents
– Draft logic model
• 2-3 day site visit
– Interviews: program description, logic model, staffing, funding, sustainability, evaluation activities
– Observations
– TA/debriefing session
• Reports and recommendations
• Follow-up TA call with CDC experts
Readiness for Evaluation
• Review of site visit reports identified four classifications:
1. Ready for stand-alone outcome evaluation
2. Appropriate for cluster evaluation
3. Theoretically sound but needs further development
4. Technical assistance needed in specific areas
Results for Year 1
• Expert panel determined:
– 14 ready for stand-alone outcome evaluation
– 2 best suited for cluster evaluation
– 3 theoretically sound but needing further development
– 6 needing TA in specific areas
Results for Year 1, Cont’d
• Dissemination of results from Year 1
• Full evaluation planned for the New York City Daycare Policy
Discovering Practice-Based Evidence
• The SSA Method builds the evidence base through practice-based evidence
• Year 1: 282 nominations → 26 EAs → 9 with high potential impact, ready for evaluation
Year 2
• Year 2 completed EAs of 27 initiatives
Nominations received, met inclusion criteria, and selected:
• After School/Daycare: 86 received, 27 met criteria, 13 selected
• Food Access: 29 received, 11 met criteria, 8 selected
• Comprehensive School PA: 39 received, 7 met criteria, 2 selected
• Built Environment for PA: 22 received, 14 met criteria, 4 selected
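As a quick arithmetic check on these Year 2 counts, here is a short sketch that totals each stage of the funnel and computes the end-to-end yield; the totals and percentage are computed here and are not in the original slide.

```python
# Year 2 screening funnel: (received, met criteria, selected) per category,
# with counts taken from the table above.
year2 = {
    "After School/Daycare": (86, 27, 13),
    "Food Access": (29, 11, 8),
    "Comprehensive School PA": (39, 7, 2),
    "Built Environment for PA": (22, 14, 4),
}

received, met, selected = (sum(stage) for stage in zip(*year2.values()))
print(f"received={received}, met criteria={met}, selected={selected}")
# -> received=176, met criteria=59, selected=27
print(f"end-to-end yield: {selected / received:.0%}")  # roughly 15%
```

The totals line up with the funnel on the next slide: 176 nominations and 27 completed EAs.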
Discovering Practice-Based Evidence
• The SSA Method builds the evidence base through practice-based evidence
• Year 2: 176 nominations → 27 EAs → 11 with high potential impact, ready for evaluation
Key Lessons Learned
• Use an expert panel for diverse perspectives
• Solicit broadly to maximize return
• Include programs/policies beyond the start-up phase to ensure implementation
• Centralize oversight for methodological integrity
• Provide technical assistance as an incentive to sites
Recap: It’s a Process
1. Choose priorities for the scan
2. Scan environmental programs & policies
3. Review and identify those that warrant
evaluability assessment
4. Evaluability assessment of programs & policies
5. Review and rate for promise and readiness for
evaluation
6. Use information:
• Position for rigorous evaluation
• Feedback to innovators
• Cross-site synthesis
Overview of General EA vs. SSA Method
• What is the same?
– Review documents
– Discuss with stakeholders
– Develop logic model
– Iterate the process
– Determine what can be evaluated
• What is different?
– EA as one component of a process of discovery
– SSA Method explicitly provides feedback to innovators
– SSA Method provided insights on clusters of projects
– SSA Method helped identify policies and programs worthy of further attention
The Cost-Savings Factor
• Of 458 innovations nominated across both years:
– 174 met criteria for inclusion
– 53 were selected for evaluability assessments
– 20 were of high potential impact and ready for stand-alone evaluation
• Yet all of the nominations were viewed as important by stakeholders.
• Had every nomination gone straight to full evaluation, only about 4% (20 of 458) would have been likely to demonstrate success (see the arithmetic sketch below).
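To make the 4% figure and Conclusion 1’s “at least 20 evaluations” claim explicit, here is a quick worked check; the counts come from the slide above, and the arithmetic is added here for illustration.

```python
# Two-year screening funnel from the deck:
# 458 nominated -> 174 met inclusion criteria -> 53 received EAs
# -> 20 of high potential impact and ready for stand-alone evaluation.
nominated, met_criteria, assessed, ready = 458, 174, 53, 20

# Hit rate if every nomination had been evaluated directly:
print(f"hit rate: {ready / nominated:.1%}")                 # about 4.4%

# Equivalently, evaluations needed per likely success:
print(f"evaluations per success: {nominated / ready:.1f}")  # about 22.9
```

Since 458 / 20 ≈ 23, an unscreened portfolio would need on the order of 20-odd full evaluations to surface one likely success, which is the basis for Conclusion 1.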
Conclusion 1
• Without a systematic process, one would need to conduct at least 20 evaluations to discover 1 that might be successful.
• The process is cost-effective for funders and decision makers.
• It reduces uncertainty about evaluation investments.
Conclusion 2
• Innovators found the process very helpful.
• Evaluability assessment plays a program development role.
Conclusion 3
• Themes and issues emerged for clusters of policies and programs.
• Evaluability assessments can be configured to cast new light on:
– developments in the field
– families or clusters of policies and programs
Impact on the Field of Prevention
• “Translating practice into evidence”
• A new method of topic selection and program identification
• Researchers very engaged by learning about practice
• Stimulated discussion of new research agendas
Nicola Dawkins
NDawkins@ICFI.com