Ten years of Evaluability at the IDB
Paris, 16 November, 2010
Yuri Soares, Alejandro Pardo, Veronica Gonzalez and Sixto Aquino
Preliminaries


Evaluability is the “ability of an intervention to demonstrate in
measurable terms the results it intends to deliver” (IDB, 2002, 2006,
2010).
In terms of Development Effectiveness, the Bank’s capacity to
manage for results depends to a great extent on having operations
that feature the characteristics needed for results to be measured, as
well as on an understanding of the main factors affecting the process
by which those results are generated.
IDB Experience



The IDB’s Evaluation Office has produced three assessments of project
evaluability: 2001, 2005, and 2009. These exercises systematically
reviewed the universe of IDB projects approved during these years.
Thus, the institution now has longitudinal data on project evaluability.
The 2005 Evaluability Report recommended that evaluability standards
be introduced as a criterion for project approval.
This recommendation was adopted and included in the institution’s
mandate as part of its Ninth General Capital Increase (2010). The
institution is currently moving toward implementing this mandate, and
OVE has been instructed to perform evaluability assessments on a
yearly basis.
Evaluability Dimensions



Evaluability assessment comprises nine dimensions, grouped into
substantive and formal dimensions.
Substantive dimensions assess the proper identification of, and
linkages between, the conceptual elements of an intervention.
Formal dimensions cover the “classical” measures of evaluability,
such as the identification of indicators and baselines.
Substantive: Diagnostic, Objectives, Logic, Assumptions and risks
Formal: Output baselines, Outcome baselines, Output indicators, Outcome indicators, Monitoring and evaluation
Substantive Dimensions

Diagnosis: Evidence-based identification of the
problem and its root causes.

Objectives: Identification of what the project expects to
achieve. Objectives must be S.M.A.R.T. (Specific,
Measurable, Agreed upon, Realistic, Temporal).

Logic: Why this particular intervention and why not
others? Causal chain: components → create conditions
→ produce outputs → achieve outcomes.

Risks: Quality of the analysis identifying
assumptions and risks, and of risk evaluation,
follow-up, and mitigation.
Formal Dimensions

Outcome indicators: measures of expected results during and/or
at the end of the project

Output indicators: measures of the expected products delivered as
part of the operation

Indicators must be mutually exclusive, valid, adequate, and
reliable

Baselines for outcomes: Ex-ante assessments of conditions
expected to change as a result of the project

Baselines for outputs: Ex-ante assessments of the goods and
services present prior to the project

Monitoring and Evaluation: Identification of systems and
resources for data collection.
Protocol

To ensure the proper application of the exercise, a protocol
was designed and implemented, consisting of three steps:



Write-up of findings. Project assessments are done by peer
review. Reviewers meet, discuss the proposed operation, and
produce a note reporting findings on each evaluability
dimension.
Collegial review. The findings of the note are then discussed by a
collegial group in the office. This same group reviews all projects
to ensure consistency across projects.
Scoring. Once a final draft is produced, the team and the collegial
group agree on a score for each dimension. Scores are assigned
on a 1-4 scale, with two adequate and two inadequate
categories, based on a scoring guide.
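
Purely as an illustration, here is a minimal sketch of how a scoring record under this protocol could be represented. All names are hypothetical: the snake_case dimension identifiers are merely derived from the nine dimensions listed earlier, the EvaluabilityScore class and is_adequate helper are inventions for the example, and the assumption that scores 3 and 4 are the two adequate categories stands in for the actual scoring guide, which is not reproduced here.

```python
from dataclasses import dataclass

# The nine dimensions listed earlier, split into substantive and formal groups.
SUBSTANTIVE = ("diagnosis", "objectives", "logic", "assumptions_and_risks")
FORMAL = ("output_baselines", "outcome_baselines", "output_indicators",
          "outcome_indicators", "monitoring_and_evaluation")
DIMENSIONS = SUBSTANTIVE + FORMAL


@dataclass
class EvaluabilityScore:
    """Agreed scores for one project: one per dimension, on the 1-4 scale."""
    project_id: str
    scores: dict  # dimension name -> score in {1, 2, 3, 4}

    def __post_init__(self):
        missing = set(DIMENSIONS) - set(self.scores)
        if missing:
            raise ValueError(f"unscored dimensions: {sorted(missing)}")
        for dim, score in self.scores.items():
            if dim not in DIMENSIONS:
                raise ValueError(f"unknown dimension: {dim}")
            if score not in (1, 2, 3, 4):
                raise ValueError(f"score out of range for {dim}: {score}")

    def is_adequate(self, dim):
        # Assumption: scores 3 and 4 are the two adequate categories.
        return self.scores[dim] >= 3
```

Constructing a record with a missing dimension or an out-of-range score fails immediately, which echoes the protocol’s concern with applying the same nine-dimension scale consistently to every project.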
Principles

The method adheres to a series of principles:




Quality. Reviews are done by peers; the review panels include
staff with knowledge of the sector and the problem at hand.
Independence. To avoid conflicts of interest, staff who may have
had involvement in a project do not participate in the review.
Accountability. A manager oversees the overall exercise and is
responsible for its quality.
Consistency. All reviews are validated by the same collegial
group, so as to ensure consistency across different project
reviews.
Results
An Operations microscope


The review tracks the evaluability of the Bank’s principal
production function: the design of lending operations.
This can provide insights into specific problems
encountered in projects, in order to improve them. For
example:



Identify what efforts are needed to ensure that problem situations
are accurately sized, and that the Bank has evidence regarding
the appropriateness of its intervention models.
Identify overlooked risks that can affect the viability of
interventions, in order to ensure better risk management.
Address and quantify “classical” evaluability questions, such as
whether outcome indicators are adequately defined, sufficient for the
scope of the operation’s objectives, and supported by baseline data.
These questions have clear implications for the institution’s
monitoring and data collection efforts.
An Institution microscope

Our experience with evaluability shows that it can also
be a valuable tool for looking at how the Bank operates.
We have done this by analyzing the link between
evaluability trends and (see the sketch after this list):
• changes in the quality review process,
• variations in the incentives faced by teams and managers,
• the impact of organizational changes on project quality,
• the allocation of staff and resources, and
• fluctuations in the lending portfolio.
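
As a rough sketch of what mining that longitudinal record can look like, the snippet below computes, per review year, the share of projects scoring in an adequate category on a given dimension. The data values, the 3-4 adequacy cutoff, and the adequacy_trend function are assumptions made for the example, not OVE’s actual method or results.

```python
# Hypothetical longitudinal data: review year -> per-project score records
# (dimension name -> 1-4 score), as produced by a scoring step like the one above.
scores_by_year = {
    2001: [{"logic": 2, "outcome_indicators": 1},
           {"logic": 3, "outcome_indicators": 2}],
    2005: [{"logic": 3, "outcome_indicators": 2}],
    2009: [{"logic": 4, "outcome_indicators": 3}],
}


def adequacy_trend(data, dimension):
    """Share of projects per review year with an adequate score
    (assumed here to be 3 or 4) on the given dimension."""
    return {
        year: sum(1 for p in projects if p[dimension] >= 3) / len(projects)
        for year, projects in sorted(data.items())
    }


print(adequacy_trend(scores_by_year, "logic"))
# {2001: 0.5, 2005: 1.0, 2009: 1.0}  (toy numbers, not IDB findings)
```

Trends computed this way could then be read against the institutional variables above, such as changes in the quality review process.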



Oversight example. In 2005 and in 2009, the IDB analyzed the
functioning of the institution’s quality control function. Findings
included that managers, Bank committees, and peer review bodies
were not providing sufficient guidance to project teams, and in almost
no case were they providing oversight of evaluability-related issues of
project design. Divisions formally responsible for quality control
were also mostly absent from the review process.
National systems example. The 2005 assessment found evidence that
the institution’s bureaucratic incentives were not aligned with those of
borrowers. OVE recommended using evaluability to
determine the Bank’s value added in project design, in order to inform
decisions on the increased use of national planning systems.
Risk example. In 2005 and in 2009 the evaluability assessment
looked at how the institution managed risk at the project level. The
results showed that for sovereign-guaranteed operations risks
were rarely identified, while for private sector operations risks were
always front and center, but focused exclusively on financial risks,
reflecting repayment-related incentives.
Concluding remarks



Evaluability provides CEDs with an opportunity to play an
important role in assessing and influencing the way
development institutions discharge their mandates.
Evaluability assesses critical elements of the quality of an
organization’s work as it relates to its capacity to
manage for results.
Evaluability can be used as a tool for looking at how
institutions are organized and for steering critical
institutional improvement processes.