Evaluating complex development interventions in real-time: the African Institutions initiative
Sonja Marjanovic
on behalf of the evaluation and learning team
March 2012
Measuring Impact of Higher Education for Development Conference,
London
Outline
– Background and context: what influences evaluation design
– Evaluating the African Institutions initiative: design and methods, challenges and opportunities
– For take-away (food for thought): potentially relevant areas for consideration in developing indicators for complex research capacity-building interventions
The evaluation approach is influenced by many factors, including:
• Why are we evaluating?
• How complex is the intervention being evaluated?
Why evaluate?
• Accountability
• Advocacy
• Learning
• Informing strategy /
steering management
The interventions we evaluate vary in complexity
• Simple, complicated, complex (e.g. Rogers 2008; Campbell et al., 2007)
• Several criteria determine an intervention's complexity level and have implications for evaluation:
– Context in which the intervention is being developed and deployed
– Evidence base on success factors for the intervention
– Component complexity of the intervention
– Ability to specify outputs upfront (range, predictability and probability)
The African Institutions initiative is a complex intervention
• Many uncertainties in the intervention context (e.g. socioeconomic, political)
– Can lead institutions manage funding effectively?
– Will political instability interfere with the intervention?
• Evidence base on success factors is mixed and fragmented
– Some elements are more tried and tested than others (e.g. individual vs. institutional, network)
– More is known about challenges than solutions
• Component complexity: many interdependent parts must function together for the intervention to be sustainable
– Alignment of parts is not straightforward (skills, programmes, management, infrastructure)
– Intervention and context are highly interdependent
• Ability to specify the full range of outcomes upfront with high certainty is limited
– High propensity for adaptation and change over time
– Unforeseen consequences
Real-time evaluation is of particular benefit for complex
interventions and uncertain environments
• When ongoing learning and informing programme implementation are important!
• When evaluation has multiple objectives – summative and formative
• For high-risk initiatives where adaptability is important
For the African Institutions initiative, we are using real-time
approaches to:
1. Evaluate the performance of each consortium and
ultimately the initiative as a whole
2. Legacy and learning: extract lessons learnt from the
initiative and disseminate insights
3. Help support networking efforts of consortia to improve
learning, strengthen shared experiences and promote resource sharing
Rooted in tried and tested theory; flexible, bespoke, participative, and objective in its inferences
Our evaluation is based on theory of change and realist evaluation methods
• Theory of change surfaces the perceived causal mechanisms through which an initiative is intended to deliver benefits, and the underlying assumptions (e.g. Weiss, 1995; Ling et al., 2012)
• What is each consortium trying to achieve?
• How are they hoping to achieve their objectives?
• Why do they think their approach will work?
• Realist evaluation emphasises intervention contexts (e.g. Pawson and
Tilley, 1997)
– What works, for whom, in what circumstances
– Engages local expertise and insights
– Relationship-building, time, training, listening to African voices
Logic modelling maps sequences of activities
that connect actions to intended consequences
• Helps stakeholders specify and agree on intended outcomes, outputs, activities
and inputs, and necessary conditions
• But milestones are NOT set in stone – they are a guide
[Logic model diagram: indicators and measures are specified at each stage – input, process, output and outcome – for each of four capacity-building themes]
1. Capacity-building in scientific skills and careers
2. Capacity-building in research management, governance, administration
3. Capacity-building related to physical and ICT infrastructure
4. Learning, linkage, and exchange: communications and networking
• Helps set milestones and develop SMART indicators for evaluating progress
• Relates inputs and processes to outcomes: examining causal effects and mechanisms, ‘linking constructs’ (McDavid & Hawthorn, 2006)
• Examines evaluation criteria: relevance, efficiency, effectiveness, utility and sustainability of an intervention
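As a purely illustrative aside, the structure the logic model imposes – capacity-building themes, stages (input, process, output, outcome), and indicators with measures and milestones at each stage – can be sketched as a simple data structure for tracking purposes. The sketch below assumes nothing about the initiative's actual tooling or KPI framework; all class names and the example indicator are hypothetical.

```python
# Illustrative sketch only: a minimal representation of a logic-model
# indicator framework (themes x stages x indicators). The indicator
# shown is a hypothetical example, not the initiative's actual KPI set.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

STAGES = ("input", "process", "output", "outcome")

@dataclass
class Indicator:
    description: str                 # what is being measured
    measure: str                     # how it is measured (count, rating, documented evidence)
    milestone: Optional[str] = None  # a guide only, not set in stone

@dataclass
class ThemeFramework:
    theme: str
    indicators: Dict[str, List[Indicator]] = field(
        default_factory=lambda: {stage: [] for stage in STAGES}
    )

    def add(self, stage: str, indicator: Indicator) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.indicators[stage].append(indicator)

# Hypothetical usage for theme 1 (capacity-building in scientific skills and careers)
skills = ThemeFramework("Capacity-building in scientific skills and careers")
skills.add("output", Indicator(
    description="Take-up of PhD and MSc scholarships",
    measure="number of scholarships taken up per year",
))
```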
How are we implementing the evaluation?
• Work Package 1, Part A – establishing where a consortium is coming from:
– Establishing relationships
– Assessing baseline research capacity at institutions
• Work Package 1, Part B – understanding where it is heading:
– Specifying intervention logic
– Risk management & SWOT
– Evaluation framework/indicators
– Milestones/targets
• Work Package 2 – understanding how things are going over time:
– Ongoing co-evaluation and interim reporting (annual KPI framework, quarterly engagement elements)
• Work Package 3 – supporting networking and exchange
• Work Package 4 – learning and sharing the learning:
– Endline and initiative-wide assessments/lessons learned
• Throughout: establishing and nurturing mutual understanding and cooperative relationships
Complexities of real-time evaluations in development contexts – for evaluators and initiative participants
• Managing consortia requests for
advice on strategic direction
• Impacts of political turbulence on
timeliness of evaluation evidence and
project management
• Cultural challenges and historical
sensitivities
• Limited baseline evaluation (not monitoring)
capacity in consortia (some exceptions)
• Competing demands on staff time –
mobilising engagement, staff turnover
• Importance of designated posts,
succession planning, training
But also many opportunities from the participative approach and
benefits for interventions
• Building local evaluation capacity
• Knowledge management in networks
and building organisational memory
• Providing timely evidence to increase
chances of programme success and
sharing learning
– Highlighting areas where adaptation is needed
– Sharing how others address similar issues
• Annual deliverables useful for ongoing
fundraising!
A TAKEAWAY – FOOD FOR THOUGHT – FOR YOUR OWN TIME
The following slides share examples of the types of issues that indicators of research capacity building for HE interventions in development contexts might explore...
Some points to consider
• These are just some examples of areas of capacity building we are exploring in this initiative.
• They might apply more widely to other higher education interventions in development efforts, but this will obviously be context-dependent
• Longer-term outcomes are often aspirational and not fully within the control of a single initiative. However, it is still important to examine contributions towards them
• Important considerations: attribution vs contribution; time-lags
• The slides that follow cover examples of process, output and outcome
indicators
Examples only – training and empowerment of individuals to conduct and lead research:
• Take-up of post-doc, PhD, and MSc scholarships
• Completion rates and changes in drop-out dynamics
• Are there clear criteria, roles and responsibilities for supervisors?
• Is there feedback on the quality of supervision and training courses?
• Numbers, types and distribution of researchers (newly trained, or existing with new skills) across the career pathway in a region?
• Evidence of new and improved training programmes accredited by institutions
• Is there better access to existing training opportunities in the region (e.g. linked to opportunities for credit transfer and more inter-institutional collaboration)?
• Evidence of knowledge outputs – e.g. publications and citations as evidence of scientific impact
• Evidence of improved ability to obtain third-party funding for research sustainability in the region’s institutions?
Examples only – strengthening career development prospects at universities (institutional receptiveness and support):
• Are there advocacy efforts for research support in institutions, with Deans and Vice-Chancellors? With Ministries?
• Evidence of continued professional development training opportunities in institutions
• Institutionalisation of research positions
• Evidence of research being valued? Is there increased demand for it?
– Systems for merit-based promotion?
– Greater availability of competitive small grant schemes over time in the region?
– Dedicated research time supported by institutions?
– Supervision quality monitored and rewarded?
• Third-party funding for research sustainability
Examples only – improving research governance, management and administration capacity:
• Is there improved access to training in research management (e.g. grant-writing, financial management, ethics, project management, supervision, publication writing)?
• Evidence of improved governance structures, management systems, policies, and procedures in institutions?
– Transparent processes for distribution of funding, be it based on merit or equity?
– Better guidelines for monitoring of supervision and training quality embedded in departments and faculty?
• Better knowledge management systems (tracking people, grants, publications)?
• Are research management and administration staff with new and improved skills embedded at institutions?
• Is there evidence of more coordinated use of support structures within institutions?
Examples only – physical infrastructure:
• Is the process for distributing infrastructure funding based on clear criteria and the needs of the individual, institution and region?
• Is there greater sharing of available resources within institutions and between projects?
• Is there evidence of impact from infrastructure investments on research and training quality?
– e.g. research which would not be possible without new infrastructure
Examples only – equitable and sustainable South–South and South–North networks:
• Are planned networking interventions being implemented? Is there take-up?
– e.g. staff and student exchanges; joint supervision, cross-appointments
• Levels and diversity of collaboration over time – in training, research, dissemination, and skills and resource sharing?
• Network sustainability – longer-term aspirations:
– Collaborative publications and grants by partners are sustained?
– Increased commitment to research by ministries and policy makers who see the value of outputs?
Criteria for a fit-for-purpose evaluation approach:
• Grounded in tried and tested theory
• Bespoke:
– Evaluation has multiple objectives: accountability, learning, advocacy
– Consortia have a mix of common and unique features
– A.I.I. is multidimensional capacity building (individuals, institutions, networks)
• Participative (co-evaluation):
– To maximise relevance and learning, and enable timely action on evidence: formative and summative
– Engages and evaluates consortia and the Trust
– Aims to support a self-improving system (e.g. Narayan, 1993; Cousins and Whitmore, 2004)
• Flexible:
– To be able to address the complexity of the initiative and potentially changing priorities, contexts, resources
• Practical:
– Feasible, balances breadth and depth
• Objective and independent