Building Evaluation Capacity in Africa: Strategies and Challenges

Dr. Frannie A. Léautier, Executive Secretary, The African Capacity Building Foundation (ACBF)
AfDB Evaluation Week on the theme "Evaluation for Development", December 3 – 6, 2012, Tunis, Tunisia

How should evaluation be structured: "catching the short dash but sustaining the long march"

Order of Presentation
• General overview of evaluation capacity issues in Africa
  – Introduction (the challenge)
  – Status of evaluation in Africa
  – Why evaluate?
  – Levels and dimensions of evaluation capacity
  – Opportunities
  – Strategies
  – Challenges
• The ACBF case
• Conclusion and way forward

General Issues on Evaluation Capacity in Africa

Introduction: The Challenge
[Figure: from terrain, demography, infrastructure and administrative units, through the production environment and constraints, production systems and performance, and interventions/responses, to linkage with macro models; illustrated with maize yield response to fertilizer application and irrigation, crop suitability, and market-access layers. Source: HarvestChoice/IFPRI 2010]

Strategic Choices and Evaluation
• There are generally two approaches to strategy making, the root method and the branch method, and evaluation practice needs to be aligned to the strategy approach in order to capture impacts.
• The root method relies on the ability to define objectives very well, outline a comprehensive range of options, evaluate those options, and select the one that maximizes attainment of the objective.
• The branch method involves building out step by step, in small degrees, from the current situation. It is the state of practice used by leaders in political and complex environments, and is often referred to as "muddling through".
• Lindblom (1959) presents the virtues of muddling through in his example of dealing with inflation, and Kay (2009) illustrates how companies have used these strategies to very different effect, offering individuals lessons for managing portfolio risks by understanding why some companies succeed and others fail.
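To make the contrast between the two approaches concrete, the small sketch below (not part of the original presentation; the objective function, option set, starting point and step size are purely illustrative assumptions) compares a root-method choice, which scores a full set of options against a stated objective, with a branch-method search that moves in small increments from the current situation and keeps only the changes that improve it.

```python
# Illustrative contrast between the "root" and "branch" approaches to strategy
# making. The objective function, option set, starting point and step size are
# hypothetical; they are not drawn from the presentation.

def objective(policy: float) -> float:
    """Hypothetical payoff of a policy choice (higher is better)."""
    return -(policy - 7.0) ** 2  # assume the best attainable position is 7.0

def root_method(options: list[float]) -> float:
    """Root method: define the objective, enumerate a comprehensive set of
    options, score each one, and pick the option that maximizes attainment."""
    return max(options, key=objective)

def branch_method(current: float, step: float = 0.5, rounds: int = 20) -> float:
    """Branch method ("muddling through"): move in small increments from the
    current situation, keeping a change only if it improves the outcome."""
    for _ in range(rounds):
        for candidate in (current + step, current - step):
            if objective(candidate) > objective(current):
                current = candidate
                break
    return current

if __name__ == "__main__":
    print("Root method choice:  ", root_method([2.0, 5.0, 7.0, 9.0]))  # 7.0
    print("Branch method result:", branch_method(current=2.0))          # reaches 7.0 in small steps
```

The design point is that evaluation under the root method can compare outcomes against objectives stated up front, whereas evaluation under the branch method has to track trends and learning across successive small steps, which is the theme of the muddling-through slide that follows.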
Evaluation and the Science of Muddling Through
• If we were to use the science of muddling through, what would matter would be trends in results achievement and the sustainability of results.
• There would also be a premium on piloting and on learning from trials that can be monitored and scaled up.
• Such an approach would suit very well post-conflict and fragile states, investments with long gestation periods such as infrastructure and education, and investments in building capacity.
• Growing out a results chain gradually and systematically over time would be valued, as it would demonstrate a resilient and robust approach to results.

The Challenge (cont'd)
• Understanding the root causes, impediments and enablers of Africa's development requires good data analysis.
• Going beyond physical achievement to also gauge long-term achievement and implications for the future requires good quantitative and qualitative information.
• The need to deal with Africa's problems of yesterday, today and tomorrow simultaneously requires evidence-based information.

Status of Evaluation in Africa
• Evaluation capacity varies across the continent; the statement that evaluation capacity is lacking cannot be applied across the board.
• There are increasing expectations for countries to develop national evaluation systems.
• Planning and monitoring have received more resources and have been strengthened more than evaluation.
• Technical soundness is a cornerstone of credibility, but it does not guarantee use.
• Evaluation of public policies and programs must be embedded in the political process; both the technical and the political dimensions of evaluation need to be considered.

Why Evaluate?
• Evaluation is a powerful tool for public accountability and learning.
• African governments have improved their knowledge and appreciation of the value of evaluations in enhancing the efficiency and effectiveness of public policies, programs and projects.
• Evaluation provides a means of assessing which public initiatives work well, which do not and, most importantly, why.
• Evaluation provides a credible way of demonstrating the outcomes of government effort in a transparent and consistent manner, including how public resources were used and what informed the prioritization in their allocation.
• "Speaking the truth to the king."

Levels of Evaluation Capacity (OECD paper, 2006)
• Individual level: experience, knowledge and technical skills.
• Organizational level: systems, procedures and rules.
• Enabling environment: the institutional framework, power structure and influence, which shapes organizations and individuals through the incentives it creates.
• Systemic factors, i.e. the relationships between the enabling environment, organizations and individuals.
Successful capacity development requires not only skills and organizational procedures, but also incentives and good governance.

Seeing Through a Lens: the Supply Lens and the Demand Lens (Michael Q. Patton, 2010)

Dimensions of Evaluation Capacity
Evaluation capacity can also be categorized into three dimensions:
• Capacity for conducting evaluations: conducting an evaluation involves both producing the study and communicating and disseminating it, which requires specialized technical capacity.
• Capacity for managing evaluations: managing evaluations requires a broad understanding of evaluation but can be done without the specialized skills needed to conduct one.
• Capacity for using evaluations: the capacity to use evaluations is completely different; users of evaluations are mainly decision-makers and, in some cases, policymakers.
Building Evaluation Capacity: Key Considerations
• Evaluation capacity must be "unbundled": different evaluation capacities should be taken into account, and a one-size-fits-all approach should be avoided. It is important to distinguish between the capacity to manage evaluations, the capacity to conduct them, and the capacity to use them. These are all different capacities, and it is not practical to lump them together under the single term "capacity".
• Individual training on how to conduct evaluations is not sufficient for the development of national evaluation capacity: for quite some time evaluation capacity was reduced to "the capacity to carry out evaluations", and to a certain extent this continues today. Experience shows that enhancing individual capacities without strengthening the organization and the enabling environment can be counterproductive, as individual experts may be frustrated by institutional arrangements and processes.
• Also needed is the capability to use evaluations for learning and for adapting methods to objectives.

Opportunities
• Growing interest among universities and national and regional research institutions in providing evaluation services. This offers opportunities to work with these institutions to further develop capacities and promote specialized training in evaluation.
• Increasing appreciation and demand from African governments to reinforce institutional capacities for evaluation policy and evaluation coordination at the national level.
• Increasing pressure on governments to be transparent and accountable in the use of national resources and to demonstrate the results of their policies, programs and projects.
• The complexity of governance in the modern world requires officials to have more knowledge for optimal decision-making.
• Strong and growing demand from donors and civil society organizations (a requirement for accountability of public action).

Challenges
The current state of evaluation capacity across the continent is broadly rooted in two main challenges:
• The first is the low demand for credible evidence about performance and the scant use of the information generated through evaluation efforts to inform public decision-making. Of particular concern are, on the one hand, the poor quality of the evidence generated by M&E systems and, on the other, the lack of interest from legislative bodies, citizens and the media, key players in democracies with the authority to demand accountability for results vis-à-vis public investments. (National Evaluation Capacities: Proceedings from the International Conference, UNDP, 2011)
• The second is the poor integration of the institutions and actors associated with the effective evaluation of public policies, programmes and institutions, as well as the lack of convergence among the cycles of the various public administration processes relevant to broad M&E efforts, such as planning, budgeting and personnel.

Demand and Supply of Evaluation
• Excess supply, or surplus, is the condition in which the quantity of evaluation capacity supplied exceeds the quantity demanded at the current price (Europe and North America).
• Asia is at equilibrium, where the quantity of evaluation capacity supplied equals the quantity demanded.
• Excess demand is the condition Africa faces, where the quantity of evaluation capacity demanded exceeds the quantity supplied at the current price.
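The market framing above can be made concrete with a small numerical sketch. The linear demand and supply schedules and the "current price" below are invented for illustration only, not data from the presentation; the point is simply that excess demand is the gap between the quantity of evaluation capacity demanded and the quantity supplied at the prevailing price.

```python
# Toy illustration of the demand-and-supply framing of evaluation capacity.
# The linear schedules and the "current price" are invented numbers used only
# to show what excess demand, excess supply and equilibrium mean here.

def quantity_demanded(price: float) -> float:
    """Hypothetical demand schedule: demand for evaluation capacity falls as its price rises."""
    return 100.0 - 4.0 * price

def quantity_supplied(price: float) -> float:
    """Hypothetical supply schedule: supply of evaluation capacity rises with its price."""
    return 10.0 + 2.0 * price

current_price = 5.0
gap = quantity_demanded(current_price) - quantity_supplied(current_price)

if gap > 0:
    print(f"Excess demand of {gap:.0f} units: the 'Africa' case on the slide")
elif gap < 0:
    print(f"Excess supply of {-gap:.0f} units: the 'Europe & N. America' case")
else:
    print("Equilibrium: the 'Asia' case, quantity supplied equals quantity demanded")
```

With these made-up schedules the gap at the current price is 60 units of excess demand; closing it means either expanding supply (training, institutions, data systems) or strengthening demand so that the market clears, which is what the strategies that follow aim to influence.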
Challenges (cont'd)
The third problem is the lack of harmony between donors and national evaluation systems: donors tend to use their own evaluation systems rather than country systems, in order to ensure visibility of their efforts.

These broad challenges are manifested through:
• Weak demand for managing for development results (MfDR);
• Weak human resources (inadequate numbers of evaluation professionals);
• Weak statistical capacity;
• The absence of a national evaluation policy and an incentivizing regulatory framework;
• Weak management capacity of the government;
• Low participation of non-government stakeholders in evaluations; and
• A lack of specialized training programs in evaluation.

Strategies
• Support national systems: support countries to develop and use in-built quality assurance mechanisms, comply with evaluation norms and standards, and set up codes of conduct and ethical principles for evaluation. It is also important to balance the use of self-assessments (which may compromise independence and create conflicts of interest) with independent evaluation, as with ZANSTAT and ZIMSTAT.
• Develop and strengthen networks of evaluation practitioners and national evaluation capacities to ensure continuous capacity building through knowledge and experience sharing and peer learning. Support the development and implementation of national evaluation associations, such as AfCOP.
• Develop and implement comprehensive capacity-building programs on evaluation with selected higher-level institutions in Africa (in progress for agriculture under EWA).

Strategies (cont'd)
• Stimulate demand for evaluation with a focus on utility, while also addressing supply issues: skills, procedures, methodology, data systems and manuals.
• Organise regional and higher-level fora for decision makers; work with the media.
• Facilitate the design and implementation of specialized evaluation training programs at selected higher education institutions across the continent.
• Promote the involvement of policy institutes, think tanks and non-state actors in the national evaluation process, including through short-term practical training and the conduct of evaluations.

The Case of ACBF

Results and Efficiency: Long Term
[Diagram: ACBF's long-term results and efficiency chain: identify an appropriate institution, select individuals/champions to support, choose the areas to strengthen or build, and secure resources to deliver; reform processes and procedures and train and develop skills through TA, training, institutional support and knowledge management, with sustained funding over time (the bulk of spending on personnel costs, degrees and institutional infrastructure, with some spending on idea generation and spreading and on networking around capacity-building issues); leading to strengthened platforms, networks and dialogue, increased engagement, improved discourse, research, analysis and dissemination, strengthened economic governance, sustainable think tanks and CSOs, increased demand, greater accountability, strengthened policy debate, improved quality, credibility and reputation, policy influence, and value for money in the long term.]

When to Evaluate? Portfolio at Risk

Evaluation of Complex Networks: The Attribution Challenge

Results: what do independent evaluations show about the process of change?
• Economic policy analysis and management
• Financial management and accountability
• Public administration
• Governance and accountability
• Knowledge and learning

Results in Knowledge and Learning
• Knowledge products that have transformed practice (RECs Study, Zimbabwe Currency Reform, ACIR).

Features of the ACBF Results-Efficiency Frontier: Think Tanks
• Results have been achieved with relatively little support from ACBF.
• Individual interventions in policy units and think tanks are relatively small compared with other projects.
• The interventions focus on short routes to results, but also on longer-term change and impact, including in systems and processes.
• ACBF support has in many ways been significantly cost-effective and efficient.
• Support has resulted in both planned and unplanned positive outcome-level results.

Results and Efficiency: The Special Role of ACBF
• "The tendency by most donors is to target their support to areas which are relatively easier to generate results as well as easier to justify supporting. Consequently, the focus of most donor support has tended to be biased towards areas deemed relatively cost-effective and efficient."
• "ACBF on the other hand understands the African context... invests in elements that 'make or break an institution' and yet they are elements whose specific benefits are difficult to trace, measure and quantify."
• Going forward, ACBF will select which instruments work best in which contexts, monitor and track trends in the use of expatriate expertise for the services produced, and shift to broader country support.

Conclusions and Way Forward

Conclusion
The lack of sufficient evaluation capacity in Africa has left important puzzles unexplained, such as:
• High average annual economic growth, yet lower real per capita income today than in 1970 and more than 500 million people still living below the poverty line.
• Dependency on external and food aid coexisting with growth in domestic revenues and food surpluses in many African countries.

Way Forward
There is need for:
• Greater investment at the individual level, where evaluation capacity is weakest, more investment in enhancing Africa's evaluation knowledge base, and better ways of using existing skills and resources.
• Greater utilization and rationalization of existing evaluation capacity, and improved mobilization of resources to enhance overall evaluation capacity in Africa.
• Enhanced governance and leadership transformation at the individual and institutional levels.

Way Forward (cont'd)
There is need for:
• A culture of responsibility, mutual accountability and commitment to performance, excellence and results, supported by monitoring and evaluation.
• Continuous assessment of evaluation capacity in order to respond to gaps and adapt to new and emerging M&E challenges.
• A continental (African) evaluation capacity-building initiative that is practical, responsive to needs, and motivated and designed by the continent itself (this is, in fact, long overdue).