CRITIQUING A TERMS OF REFERENCE:
SAMPLE TORS -- EVALUATION OF THE MULTIPLE
INDICATOR CLUSTER SURVEYS (MICS)
I. INTRODUCTION
A. Background
At the World Summit for Children (WSC) on September 30, 1990, heads of state of participating countries
agreed to a plan of action to achieve a set of goals in child (and women's) health and well-being to be
reached by the year 2000. They also committed to measuring progress toward those goals. The UN system
was given the task of helping countries work toward all these goals and measure that progress. UNICEF
was asked to be the lead agency in this process.
As a step toward these goals, countries developed National Programmes of Action. Later, a set of goals for
the mid-decade was agreed, first by the members of the Organisation of African Unity (OAU), and
subsequently by most other countries. All these activities increased the demand for high-quality, timely data,
and focussed attention on the availability and quality of data in developing countries. In many countries,
routine reporting systems were found to be inadequate and, in all cases, did not collect information on some
of the goals. Although all countries carry out censuses, and many also carry out national household surveys,
such surveys and censuses are often expensive, cumbersome, and can take considerable time to yield
results.
Accordingly, UNICEF led a process to try to develop an affordable, fast, and reliable household survey
system that would fill gaps in knowledge and update available data. UNICEF worked closely with other UN
agencies, primarily the UN Statistical Division, WHO and UNESCO as well as UNFPA, UNDP and the World
Bank and other centres of excellence (the London School of Hygiene and Tropical Medicine and the Centers
for Disease Control) to develop a model questionnaire to measure standard indicators for the mid-decade
goals (MDG).
Data were collected using a probability-sampling cluster survey approach, to take advantage of existing
national technical capacity. The surveys were designed to monitor progress at national and sub-national
levels, for use in advocacy, policy making and programming, as well as to collect standardised information
for international reporting.
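The general shape of the two-stage cluster design described above can be sketched in a few lines of code. This is a minimal illustration of the technique (probability-proportional-to-size selection of clusters, then a fixed take of households per cluster), not the actual MICS sampling procedure; the function name and parameters are invented for the example.

```python
import random

def pps_cluster_sample(clusters, n_clusters, households_per_cluster, seed=0):
    """Illustrative two-stage cluster sample.

    Stage 1: systematic selection of clusters with probability
    proportional to size (PPS).
    Stage 2: simple random sample of a fixed number of households
    within each selected cluster.

    `clusters` maps cluster name -> list of household IDs.
    """
    rng = random.Random(seed)
    sizes = {name: len(hhs) for name, hhs in clusters.items()}
    total = sum(sizes.values())

    # Stage 1: lay systematic selection points over the cumulative sizes.
    step = total / n_clusters
    start = rng.uniform(0, step)
    points = iter(start + i * step for i in range(n_clusters))

    selected, cumulative = [], 0
    point = next(points, None)
    for name in clusters:
        cumulative += sizes[name]
        while point is not None and point < cumulative:
            selected.append(name)
            point = next(points, None)

    # Stage 2: draw households within each selected cluster.
    return {name: rng.sample(clusters[name],
                             min(households_per_cluster, sizes[name]))
            for name in selected}
```

Because large clusters are more likely to be selected at stage 1 while the same number of households is taken from each at stage 2, every household ends up with a roughly equal overall chance of selection, which is what makes the design "self-weighting".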
A handbook on MICS, "Monitoring Progress Toward the Goals of the World Summit for Children", was
published by UNICEF in January 1995. This handbook lays out all the stages of carrying out a MICS, from
deciding whether a MICS is necessary, to communicating the final results and suggesting ways that those
results could be used. The handbook was supplemented by workshops at regional and sub-regional levels,
an international network of consultants, and an electronic forum where questions could be posed to a group
of people who had been involved in the development of MICS.
The methodology was designed to be flexible, to take into account differing country conditions, such as
variations in the available resources or quality of existing sample frames. The questionnaire was also
intended to be adaptable to local needs. It was designed in "modules" (one module for each goal), which
could be added to or deleted from the survey as circumstances dictated. If a separate cluster-sample survey
was carried out using some or all of the questionnaire modules for the express purpose of collecting data on
the MDG, we refer to it as a "stand-alone MICS." Such surveys were carried out in about 60 countries. In
another 40 countries, modules were added to existing country surveys (of whatever type), or were used to
modify existing questionnaires on MDG topics. This could cover a variety of surveys, from the Demographic
and Health Surveys (which already include measures of many indicators in the MICS questionnaire) to
household economic surveys (where all modules could be added from the standard MICS questionnaire to
form a separate section of the survey). To make the evaluation more manageable, this assessment will be
restricted to "stand-alone MICS", and to comparing this technique to other data collection options.
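The modular design described above amounts to assembling a country questionnaire from a standard set of per-goal modules, dropping or adding modules as circumstances dictate. A minimal sketch of that idea follows; the module names and questions are invented for illustration and are not the actual MICS instrument.

```python
# Hypothetical standard modules: one module per goal, each a list of questions.
STANDARD_MODULES = {
    "water_sanitation": ["Main source of drinking water?",
                         "Type of toilet facility?"],
    "immunization": ["Does the child have a vaccination card?"],
    "education": ["Is the child currently attending school?"],
}

def build_questionnaire(include=None, extra_modules=None):
    """Assemble a country questionnaire from the standard modules.

    Keeps only the modules named in `include` (all of them if None),
    then appends any locally developed `extra_modules`.
    """
    selected = {name: questions for name, questions in STANDARD_MODULES.items()
                if include is None or name in include}
    selected.update(extra_modules or {})
    return selected

# Example: a country drops the education module and adds a local one.
survey = build_questionnaire(
    include=["water_sanitation", "immunization"],
    extra_modules={"malaria": ["Did the child sleep under a bed net?"]})
```

The same mechanism covers both cases mentioned in the text: a "stand-alone MICS" uses some or all of the standard modules on their own, while adding the modules to an existing household survey corresponds to appending them as a separate section.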
UNICEF M&E Training Resource Module 3.1.3
Critiquing a TOR (MICS) 1/5
The MICS have produced much new data, especially from countries that were notably data-poor. These
data are already being used (after a preliminary assessment) in UNICEF publications and planning.
However, an in-depth objective assessment of the MICS is needed in order to consider fully the potential
uses of this data collection technique in the future, both within and outside UNICEF.
B. The rationale for the evaluation
This evaluation of the MICS is planned because:
- There is a political commitment to achieve the WSC Goals and to adhere to the Convention on the Rights of the Child. A cost-effective way of monitoring progress will be important to sustain this commitment;
- Considerable resources have been invested in the implementation of MICS, both by member states and UNICEF;
- The development of an effective system for monitoring progress toward goals for children and women is a UNICEF Board priority. We need to assess the future role MICS could play in this system; and
- National statistical offices, other agencies and organisations are considering or already using the MICS methodology, and want to know about its strengths and weaknesses.
II. OBJECTIVES
The overall objective is to assess whether and how the MICS could be adapted for future (sustainable)
monitoring of the situation of children, especially the WSC goals and the implementation of the Convention
on the Rights of the Child (CRC). The evaluation will review the experience gained in different countries in
conducting the MICS and utilising the information collected. It will identify the lessons learned — with
respect to its effectiveness, efficiency, and capacity building — from the actual implementation of the MICS
in order to make rational decisions for future use. More specifically, the evaluation has the following
objectives:
- Examine the preparation, training, co-ordination and follow-up of MICS at the global and regional levels;
- Assess all stages of implementation of MICS at the country level;
- Identify the conditions within UNICEF Country Offices and counterparts that contributed to their success (or lack of success) in carrying out the MICS;
- Determine whether information produced by the MICS is of internationally acceptable quality, and the extent to which the data have been used;
- Document how, and how much, MICS was adapted to local conditions;
- Assess whether the process of implementing the MICS resulted in national capacity building (and hence in sustainability);
- Compare the MICS methodology, execution and cost with other data-collection mechanisms;
- Identify factors that led some countries to decide against doing a MICS;
- Identify specific areas in which MICS will be useful in the future; and
- Identify ways to further improve the MICS.
III. KEY ISSUES
In light of the overall objective of assessing the future possible uses for MICS, the following issues will be
addressed:
A. How was MICS managed at global and regional level?
- What is the quality of the MICS tools developed at HQ?
- How was the MICS introduced in countries, and how much of a sense of ownership was developed in governments?
- What "lead-time" was given in the introduction of MICS, and how did that affect quality and capacity building?
- How was training carried out?
- How was information exchanged between and within the different levels of UNICEF, including exchange of advice and instruction as well as questions and results?
- What was the management process for MICS? For example, how was interaction with countries planned and executed, how did MICS fit into the country programme evaluation, and how were MICS followed up and commented on?
B. How was the MICS implemented?
- In what ways was the standard MICS instrument adapted to local conditions? And to what degree? Which modules were dropped or added, and why?
- What strategy was used to implement MICS (e.g. relying mainly on consultants or outside contractors; the amount and breadth of government involvement)?
- Were the MICS executed according to plan? If not, why not?
- What problems were encountered in the fieldwork?
- What were the major cost elements of the MICS?
- What financial resources did national counterparts, UNICEF, and other agencies and donors invest?
C. What were the outputs?
- Did the survey produce data of good enough quality for the intended use of monitoring MDGs at the international, national and sub-national levels? For example, did the survey encounter statistical problems that affected data quality?
- Were the results produced in a timely fashion?
- What problems were faced in analysis and report writing?
D. How were the MICS results used?
- Who (in UNICEF, government, NGO and other donor communities at national and international levels) uses the data from the MICS? And who decides not to use MICS data?
- Are the data used only for MDG reporting, or do uses go beyond that? If so, what?
E. Was capacity built?
- How has survey capacity increased, both in UNICEF (at headquarters, regional and Country Office levels) and in local counterparts?
- How has understanding of data issues (e.g. the use and abuse of data, the constraints of data) improved in the UNICEF office and among counterparts?
- Are policy makers and planners better able to make informed decisions for children as a result of MICS?
F. Which future monitoring needs could best be served by MICS?
- How does MICS compare to other possible methods of data collection for monitoring the WSC goals or the CRC? Other methods include administrative data-collection systems, sentinel community surveillance, and other household surveys.
- Can the MICS be adapted from its MDG focus to monitor other indicators?
- For what purposes would it be most suited in terms of its appropriateness and cost-effectiveness?
IV. EVALUATION METHODOLOGY
This evaluation will rely on three complementary methodologies to try to cover both the breadth and depth of
the experience: (1) a desktop review of existing documentation; (2) a questionnaire-based survey of all the
UNICEF offices, regardless of what kind of survey, if any, they conducted; and (3) key informant interviews
through field visits to certain countries for more in-depth knowledge.
A. Desktop review
The evaluation will examine all MICS reports and data available in headquarters, including regional or
Country Office material, all trip reports (both by HQ and RO staff) to countries in connection with MICS, the
database of countries intending to perform MICS, the MICS costing study, and any other documentation
available. This will allow the first assessment of data quality (indicating any need for more detailed data
analysis), and of cost-effectiveness.
B. Questionnaire survey
All countries that carried out a MICS will be sent a questionnaire seeking more detail about the MICS
process in their country, including questions on capacity building within the UNICEF office, use of data within
the office, adaptation and cost analysis of the MICS, timeliness of the results, collaboration with other
agencies, and views of UNICEF staff on the usefulness of the exercise. Countries that chose not to do a
MICS will be asked what data collection methodologies were used for measuring MDGs, the reasons leading
to that decision, and the experience of using those methodologies.
The main objective of this questionnaire-based survey is to complement the information available in HQ. No
attempt will be made to solicit government counterpart opinions in this survey.
C. Key informant interviews
Interviews at the HQ, regional and country levels will complement the questionnaires and the desktop review,
providing a much deeper understanding of the process in-country, and allowing some issues to be
investigated in more detail. Naturally, resources do not permit this kind of analysis to be done in more than a
few countries.
At the HQ and regional levels

All key players — policy makers, technical experts, and programme managers — involved in the
development and implementation of the MICS will be interviewed in person or by telephone. They
include staff from UNICEF (e.g., the Monitoring and Statistics Unit of EPP and selected Programme
Sections), the UN Statistical Office, CDC, WHO, UNFPA, UNDP and academic institutions. Regional
institutions (e.g., ECA, CREDESA) will also be contacted as far as possible.

At the country level

Interviews with people from UNICEF, NGOs, government (statistical, policy-making and line
ministries) and other agencies will be carried out in six selected countries. The countries will be
chosen using the following criteria: geography, socio-economic level, efficacy of the data collection
system, and quality of the MICS.
UNICEF Country Offices will need to play a key role in organising and facilitating the field visits, which will be
useful for offices’ own assessment of their MICS.
V. EXPECTED OUTPUTS
This evaluation should produce the following:
- A first report from the desktop review;
- An analysis of the questionnaire survey;
- Individual country reports;
- One final synthesis report; and
- A dissemination seminar.
VI. USE OF THE EVALUATION RESULTS
- Within UNICEF headquarters, to guide further development of the MICS methodology, especially its adaptation to future measurement needs (such as the WSC goals and the CRC);
- In regional and field offices, to help decision makers determine the suitability of the MICS instrument for their monitoring needs;
- To inform discussion with other agencies on monitoring joint goals (especially in the context of the Common Country Assessment);
- To provide information to other agencies so they can make informed decisions on the suitability of MICS for their measurement needs;
- To draw lessons about capacity building in data collection, analysis and use; and
- To provide input to future discussion of a global monitoring strategy.
VII. EVALUATION TEAM MEMBER QUALIFICATIONS
A team of four consultants will conduct this evaluation, dividing into two teams for the country visits. The
team members should collectively have the following qualifications:
- Knowledge of survey methodology;
- Experience in carrying out surveys in developing countries;
- Good quantitative, interview and analytical skills;
- Experience in evaluation involving extensive field work;
- Excellent writing skills; and
- Appropriate language skills.
Also, the team members should not have been involved in the MICS exercise.
VIII. COST OF THE EVALUATION
The major part of the evaluation will be funded by EPP. Additional resources will be sought from regional
and country offices, especially for the field visits.
IX. TIMETABLE
Event                                            Timing
1. Draft TOR                                     December 1996 – April 1997
2. Selection of Consultants                      End of May 1997
3. Desktop review (two weeks)                    End of June 1997
   Questionnaire design/distribution             End of July 1997
4. Field visits
   a. Preparation (one week) in NY               End of July 1997
   b. Field work (three weeks)                   August 1997
   c. Report writing (two weeks)                 August – September 1997
5. Data analysis and overall report
   writing (one month)                           September 1997
6. Review of the report                          October 1997
7. Final report                                  November 1997