Monitoring & Evaluation Workshop

MONITORING AND EVALUATION OVERVIEW

MONITORING AND EVALUATION / General Objectives

FOR MONITORING:
• Ensure adequate programme management: regular monitoring of inputs and outputs; information-based management decisions.
• Ensure adequate quality of service: definition of minimum service standards and quality-of-service indicators/goals; process monitoring; real-time, dynamic correction mechanisms.

WHY MONITORING?
• To improve the delivery of services to beneficiaries.
• To inform policy makers, the wider public, and stakeholders about progress and achievements.

MONITORING AND EVALUATION / General Objectives

FOR EVALUATION:
• Measure programme impact on selected outcomes: results focus vs. process focus.
• Uphold accountability and compliance: whether or not the work has been carried out as agreed and in compliance with established standards.
• Provide opportunities for stakeholder feedback: to provide input and, as a result, improve the overall programme.

MONITORING VS. EVALUATION

FREQUENCY
  Monitoring: regular, continuous
  Evaluation: periodic
COVERAGE
  Monitoring: all project cycle phases
  Evaluation: selected aspects of the programme
DATA
  Monitoring: sample-based and universal
  Evaluation: sample-based
DEPTH OF INFORMATION
  Monitoring: tracks implementation (WHAT); identifies operational challenges (HOW)
  Evaluation: tailored, often to performance and impact (WHY)
COST
  Monitoring: spread out
  Evaluation: can be high
UTILITY
  Monitoring: continuous improvement, management
  Evaluation: major programme decisions (POLICY, OBJECTIVES)
WHO
  Monitoring: internal (Ministry and programme M&E unit)
  Evaluation: independent (firms, consultants)

INTERNAL MONITORING IN SCTP / Project Cycle Process

Targeting: eligible/ineligible households; appellant households; re-ranked eligible households; complete/incomplete forms.
Enrolment: attendance of selected households at the third community meeting; new enrolments (replacements); households with main and/or alternative receivers.
Transfers: collected/uncollected transfers per term; households with arrears per term.
Case Management: pending/resolved claims; claims regarding non-transfers, wrong transfer amounts, or transaction errors.
Updates: modified transfer amounts.

ANALYSIS OF MIS-BASED INDICATORS / Example

PROCESS | NAME OF THE INDICATOR | MEASURE | FILTER
General | Main receivers who are household heads | Total number | By district; gender: female
General | Secondary-school-going children | Total number | By district: Mchinji, Machinga, Salima and Neno
General | Single and double orphans | Total number and % | By gender; by district
Targeting | Beneficiary household members | Total number | By district; gender: male
Enrolment | Beneficiary children enrolled at secondary level | Total number | By age: 15-17 years; district: Chitipa; by TAs
Transfers | Amount paid | Amount | By transfer agency

PROCESS MONITORING OVERVIEW / Definition

An independent verification at various stages of the SCTP Project Cycle, to identify whether the
process in the field is being carried out according to the Operational Manual (OM) and the respective technical annexes. It can cover any phase of the project cycle: Targeting, Enrolment, Transfers, Compliance Verification, and Case Management. Differences can point to potential problems either in the operational design or in the way the operation is carried out in the field, and solutions should be suggested accordingly.

PROCESS MONITORING OVERVIEW / Process Monitoring and Process Evaluation Differences

WHO
  Process monitoring: M&E Unit of the Ministry with District Officials
  Process evaluation: independent consultant/firm
SCOPE
  Process monitoring: sub-processes within a project cycle phase
  Process evaluation: processes and sub-processes of a project cycle
COVERAGE
  Process monitoring: clusters and TAs
  Process evaluation: districts, entire programme areas
METHODS/TOOLS
  Process monitoring: mainly qualitative
  Process evaluation: qualitative and quantitative
WHEN
  Process monitoring: when a specific issue/problem/challenge is identified
  Process evaluation: upon contract, regular reviews, when macro issues/challenges are identified
EXAMPLE
  Process monitoring: a complaint that not all steps of the targeting process were followed in a given cluster
  Process evaluation: the e-payment process is not appropriate for SCTP transfer receivers
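The MIS-based indicator analysis illustrated earlier (total counts under district, gender, or age filters) can be sketched as a simple filtered count over exported MIS records. This is a minimal illustration only: the field names and sample records are hypothetical, not the actual SCTP-MIS schema.

```python
# Minimal sketch of an MIS-based indicator: count records matching a set
# of filters. Field names and data below are illustrative assumptions.

def count_indicator(records, **filters):
    """MEASURE 'Total Number' under the given FILTER key/value pairs."""
    return sum(
        1 for r in records
        if all(r.get(key) == value for key, value in filters.items())
    )

# Hypothetical MIS extract (not real programme data).
records = [
    {"district": "Mchinji", "gender": "female", "role": "main_receiver", "head": True},
    {"district": "Mchinji", "gender": "male",   "role": "member",        "head": False},
    {"district": "Salima",  "gender": "female", "role": "main_receiver", "head": False},
]

# Indicator: main receivers who are household heads, filter gender = female.
n = count_indicator(records, role="main_receiver", head=True, gender="female")
print(n)  # 1
```

The "Total Number and %" measure follows the same pattern, dividing the filtered count by the count under the broader filter.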
PROCESS MONITORING / How to Execute Process Monitoring

Identification Stage
  STEP 1: Identification of the need
  STEP 2: Establish questions
Planning Stage
  STEP 3: Specify design/methodology
  STEP 4: Create work plan
Implementation Stage
  STEP 5: Field research
  STEP 6: Analyse data
Reporting/Feedback Stage
  STEP 7: Document findings
  STEP 8: Disseminate information
  STEP 9: Feedback for programme operational improvement

CONCLUSIONS
• Process monitoring can only uncover possible reasons why a programme activity is or is not working.
• The findings from process monitoring are the basis for improvements in the effectiveness and efficiency of safety net programmes.
• A sound evaluation can be done with careful planning and some basic math skills.

PROCESS MONITORING OVERVIEW / Methodology
• Ministry and District Officials identify issues/challenges/problems relating to the designed project cycle in a given area.
• Both agree on and specify the process to be followed:
  - the issue identified, its justification, and context;
  - a review of the process, using the OM and the corresponding technical annex, to understand the differences that may be occurring with respect to the identified issue;
  - the question to be answered;
  - the source of the information to support the investigation;
  - the method to be used: interviews, observations, focus groups, and examination of programme records;
  - the personnel involved; and
  - the possible coverage.
• Work plan and budget.

PROCESS MONITORING OVERVIEW / Methodology: Qualitative Analysis Methods
• Individual interviews
• Group interviews
• Direct observation
• Focus groups
• Beneficiary score cards

SPOT CHECKS OVERVIEW / Definition

Spot Checks verify the quality of the data received by the programme through the SCTP-MIS.
To do this, a sample of the total population is taken and compared with information available at the source to verify whether the two coincide. To test the truthfulness of information recorded in the MIS, qualitative and quantitative verification methods are applied:
• Quantitative methods: statistical formulas applied to see whether there are any significant differences.
• Qualitative methods: focus groups, interviews, and other qualitative methods applied to understand and/or confirm significant statistical differences.

SPOT CHECKS OVERVIEW / Advantages and Disadvantages

ADVANTAGES
• Helps verify that information entered into the MIS is accurate.
• Promotes accountability.
• Reasons for differences are analysed, and measures are identified to correct them.

DISADVANTAGES
• Verification is localised and limited.
• Relatively expensive and time consuming.

SPOT CHECKS OVERVIEW / Objectives

1. Evaluate, through probability sampling techniques, the accuracy and validity of the data registered in the MIS, as reported by programme actors.
• Accuracy: data in the MIS matches data gathered during Spot Check implementation.
• Validity: verified data must be right, with processes applied efficiently.

2. If significant differences are found, the reasons must be analysed using process monitoring techniques:
• possible errors that occurred during data collection and entry;
• possible fraud;
• others?
Example: Are beneficiary households actually receiving the respective benefit amount?
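The quantitative side of a Spot Check described above (sampling, then testing for significant differences) can be sketched with two textbook formulas: a sample-size rule for estimating a proportion, and a one-sided z-test on the observed mismatch rate between MIS records and field-verified values. The 2% tolerance and the counts below are illustrative assumptions, not SCTP policy values.

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96):
    """Sample size for estimating a proportion: n = z^2 * p(1-p) / e^2
    (defaults: 95% confidence, +/-5% margin, worst-case p = 0.5)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def mismatch_z(mismatches, n, tolerance=0.02):
    """One-sided z-statistic: is the MIS-vs-field mismatch rate
    significantly above the acceptable tolerance?"""
    p_hat = mismatches / n
    se = math.sqrt(tolerance * (1 - tolerance) / n)
    return (p_hat - tolerance) / se

n = sample_size()                    # 385 households to sample
z = mismatch_z(mismatches=18, n=n)
# z above the 5% one-sided critical value (1.645) flags a significant
# difference, to be followed up with the qualitative methods above.
print(n, z > 1.645)  # 385 True
```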
SPOT CHECKS OVERVIEW / Methodology
• Ministry and District Officials identify possible problems in the internal monitoring SCTP-MIS module.
• The question (hypothesis) to be analysed is defined.
• The type of analysis method is selected to verify statistical differences.
• Work plan and budget.
• Field work.
• Calculation of statistical differences.
• Qualitative work to understand/confirm statistical differences.

SPOT CHECKS / How to Execute the Spot Check Process

Identification Stage
  STEP 1: Identifying the problem and its consequences/risks
  STEP 2: Defining possible reasons for the problem
Planning Stage
  STEP 3: Formulating questions and statistical hypotheses
  STEP 4: Designing an instrument (form) to collect information during field work
  STEP 5: Defining possible sources of actual valid information
  STEP 6: Determining the geographic area for intervention and the investigation's scope
  STEP 7: Defining the sample size
  STEP 8: Creating a work plan
Implementation Stage
  STEP 9: Field work
  STEP 10: Analysing results
Reporting/Feedback Stage
  STEP 11: Accepting or rejecting possible reasons for the problem
  STEP 12: Qualitative investigation to confirm reasons for the problem
  STEP 13: Feedback for operational programme improvement

IMPACT EVALUATION / Definition
• Assesses the changes in participants' well-being that can be attributed to a specific programme/intervention.
• Determined by comparing the outcomes of programme participants with the outcomes other individuals experience in the absence of the programme (non-participants).
Participants = treatment group; non-participants = control/comparison group.

IMPACT EVALUATION / Objectives
• To provide feedback to help improve the design of programmes and policies.
• To estimate the magnitude of effects with clear causation:
• Did impacts vary across different groups, regions, or over time?
• How could programme design be modified to increase impact?
• To learn how effective the programme is compared with alternative interventions.
• To generate lessons learned to inform the decision-making process.

IMPACT EVALUATION / Why Conduct an Impact Evaluation?

The information generated by an impact evaluation enables policy makers and programme managers to answer the following questions:
1. Does the programme achieve its intended goal(s)?
2. Should this pilot programme be scaled up? Should this large-scale programme be continued?
3. Are the changes in outcomes a result of the programme, or are they a result of some other factors occurring simultaneously?
4. Do programme impacts vary across groups of intended beneficiaries, across regions, and over time?
5. Does the programme have any unintended effects, either positive or negative?
6. How effective is the programme in comparison with alternative interventions?
7. Is the programme worth the resources it costs?

IMPACT EVALUATION / Methodology

A good impact evaluation examines all elements in the causal chain (inputs, activities, outputs, and outcomes) and the assumptions linking them, combining factual and counterfactual analysis. This involves using mixed methods.

White, H. (2012), "The Use of Mixed Methods in Randomized Control Trials", 3ie.

IMPACT EVALUATION PROCESS
STEP 1: Initial considerations
STEP 2: Preparation of evaluation tasks
STEP 3: Evaluation research
STEP 4: Reporting and dissemination
STEP 5: Management response
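The counterfactual comparison at the heart of an impact evaluation, comparing treatment-group outcomes against control-group outcomes, can be sketched as a difference in means with a Welch t-statistic. The outcome figures below are invented for illustration; a real evaluation also needs a credible design for selecting the control group.

```python
import math
import statistics

def impact_estimate(treatment, control):
    """Estimated impact (difference in means) and Welch t-statistic."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treatment) / len(treatment)
                   + statistics.variance(control) / len(control))
    return diff, diff / se

# Hypothetical outcome per household (e.g. monthly food expenditure).
treatment = [12.1, 14.3, 13.8, 15.0, 12.9, 14.6]  # programme participants
control   = [11.0, 12.2, 11.8, 12.5, 11.4, 12.0]  # comparison group

impact, t = impact_estimate(treatment, control)
# A positive impact estimate with a large t-statistic suggests the
# programme raised the outcome relative to the counterfactual.
```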