From Building Evaluation Capacity to
Supporting Evaluation Capacity
Development:
Key Conclusions of an exploratory study conducted
in DRC, Niger and South Africa
Dr. Michele Tarsilla
OPEV Week - African Development Bank
December 6, 2012
E-mail: mitarsi@hotmail.com
Presentation Outline
- Background on Evaluation Capacity Building (ECB) and Evaluation Capacity Development (ECD)
- Two key research questions
- Conclusions
  - ECD targeting
  - New ECD definition/conceptualization
Key ECB and ECD Issues
1) Lack of both consensus on, and clarity of, the definition and goals of Evaluation Capacity Development (ECD), and a need for more empirical research;
2) Biased targeting of past ECD programs aimed at strengthening in-country evaluation capacity;
3) Definitions are key, and so are strategies to make ECD processes more inclusive. In addition, understanding and acting upon processes (outside the training venues) is of utmost importance.
First Research Question

To what extent is Evaluation Capacity Development (ECD) distinct from Evaluation Capacity Building (ECB) in international development contexts?
What is Capacity?

Capacity as a latent attribute:
Capacity is a potential state. It is elusive and transient. Performance, in contrast, is about execution and implementation, or the result of the application/use of capacity (Morgan, 2006, p. 6).
How do ECB and ECD relate?
- 69% (n=51) of the 75 respondents who were asked about the relationship between ECB and ECD perceived that there was a difference between the two;
- 69% (n=51) of the 75 respondents who were asked what they associated ECB with the most answered "trainings";
- 64% (n=48) of the 75 respondents who were asked about ECB sustainability stated that ECB was no longer effective or sustainable;
- Nearly 59% (n=44) of the 75 respondents who were asked what they thought about the concept of ECB stated that the idea of "building" capacity did not do justice to the fact that there is already capacity in country.
Evaluation Capacity Building
According to respondents, ECB's key objectives were:
- To enhance data collection and reporting skills within donor-funded projects: 73% (n=65) of the 90 in-country practitioners;
- To improve project performance and attain the expected project results: 54% (n=49) of the 90 in-country practitioners;
- To promote the use of the logical framework: 77% (n=69) of respondents.
Evaluation Capacity Development
According to respondents, ECD's primary objectives were:
- To create a sustainable in-country evaluative culture: 73% (n=65) of interviewed field practitioners;
- To reduce countries' dependence/reliance on external technical support: 67% (n=60) of interviewed in-country practitioners;
- To promote the use of evaluation within national governments: 59% (n=53) of interviewed in-country practitioners.

"Unlike ECB, which has a more individual-focused approach, ECD is about people and systems" (South African government official)
Second Research Question

What are the key criteria that need to be taken into account in order to enhance ECD targeting?
SFAR Framework
- Sphere
- Function
- Actors
- Role

SFAR Framework
Source: Tarsilla, 2012

SFAR in Niger
Conclusions

Key Conclusions
- New definitions of ECB and ECD
- Key variables to evaluate the effectiveness of ECD programming and processes
New ECB Definition
A necessary (but not sufficient) condition for ECD to take place. ECB mainly consists of a vast array of trainings and coaching activities (some of which are short-term in nature) aimed at building capacity:
- especially where capacity is either very low or thought not to be in place yet;
- among a discrete number of individuals working either for or within organizations and/or institutions that develop, commission, manage, conduct and/or use evaluation.
ECB is normally implemented at only one or two of the three ECD levels.
ECD Continuum:
Where does ECB lie?
ECD Continuum:
When ECB Gets Creative
New ECD Definition (I)

ECD is a process consisting of both the integrated enhancement and the maintenance over time of:
- individuals' knowledge, skills and attitudes;
- organizations' capabilities; and
- institutions' readiness,
towards contextually relevant planning, management, implementation, and use of evaluation at any level (global, regional, national or sub-national).
ECD Continuum:
Where does ECD lie?
You can combine interventions across levels. However, ECD-savvy strategies are implemented at all three levels.
New ECD Definition (II)
More specifically, ECD aims at both individual and collective transformational learning in the pursuit of three primary goals:
- strengthening the technical quality and ownership of evaluation processes;
- enhancing the local authenticity and cultural appropriateness of the evaluation approaches, methods and tools used in-country;
- increasing the use of evaluation as a way to improve development interventions in a variety of sectors.
New ECD Definition (III)

- In order for ECD to be successful, it is critical that ECD-savvy strategies be implemented in either a simultaneous or an intentionally sequenced fashion;
- ECD-savvy strategies are specifically aimed at promoting conditions that enable ECD among a variety of actors operating in two different spheres (within and outside of national governments) and characterized by different functions (operational and policy- or decision-making) and roles (consumers and providers of evaluation).
New ECD Definition (IV)

In an effort to enhance ECD ownership and sustainability, it is important that sphere-crossing entities, whether national (e.g., Voluntary Organizations Promoting Evaluation, or VOPEs) or multi-national (regional evaluation associations), actively contribute to ECD-savvy strategies.
Key Principles
- Evaluation of the processes (not just the performance) inherent to ECD stakeholders (e.g., inter-organizational dynamics, availability of “learning space” within an institution, resilience at times of crisis) is key to identifying a priori ECD enabling factors or barriers;
- Assessing the level of ECD needs, interests and motivation across spheres, and across functions within spheres, is critical to customizing and sequencing ECD programs;
- Gauging the quality and degree of inclusiveness of ECD targeting is instrumental in identifying any possible inequity in ECD programming (and the strategies available to address it);
- Disseminating details on the budget resources available for, and effectively spent on, the implementation of ECD programming is key to cost-benefit and cost-effectiveness analyses;
- Assessing the feasibility of ECD success given the established timeframe is also relevant.
Evaluating ECD Effectiveness (I)
BEFORE THE IMPLEMENTATION OF AN ECD PROGRAM:
1. Conduct an evaluation capacity needs assessment within each sphere;
2. Conduct a diagnostic of evaluation processes within each sphere (use secondary data as appropriate);
3. Gauge the level of knowledge of monitoring and familiarity with results-based management (RBM).
DURING OR RIGHT AFTER IMPLEMENTATION:
1. Assess the short-term results (such as increases in knowledge and development of technical skills) produced by ECD activities at each of the three levels.
Evaluating ECD Effectiveness (II)
A LITTLE AFTER IMPLEMENTATION:
1. Start systematically assessing the medium- and longer-term results produced by ECD activities implemented at the organizational and institutional levels;
2. Conduct individual or group follow-up interviews, and make the best use of online tools to foster reflection and conversation on the lessons/challenges resulting from ECD programs.
THROUGHOUT ALL ECD PROGRAMMING PHASES:
1. Assess the internal processes (e.g., the type and quality of interactions among actors in different spheres and with different functions, as well as the degree to which the targeted individuals could be both providers and consumers at once);
2. Monitor the assumptions underlying your Theory of Change;
3. Be very systematic in your assessment of the mediating factors (both those included in your Theory of Change and others that you might identify during implementation).
Study Limitations
- The applicability of the conclusions might be limited;
- The validity of the framework could only be enhanced by testing it in a larger number of countries;
- The ideas collected among the practitioners interviewed in the course of data collection may not be representative of the whole membership.
Thank you!
Contact:
Michele Tarsilla, Ph.D.
International Evaluation Advisor and
Capacity Development Specialist
E-mail: mitarsi@hotmail.com