
DRAFT for use with SEE project;
consult UNICEF team for citation
Simulations for Equity in Education – Background, methodology, results from first two years1

1 SEE team: UNICEF HQ – Matthieu Brossard (mbrossard@unicef.org), Daniel Kelly (dkelly@unicef.org); Project consultant – Annababette Wils (babette@babettewils.net). World Bank team: Quentin Wodon (qwodon@worldbank.org)
Chapter 1. Introduction
As the 2015 milestones of Education for All and the Millennium Development Goals approach, there has been enormous progress towards universal basic education worldwide. The proportion of out-of-school children
of primary school age in developing countries declined from 20 percent in 1999 to 12 percent in 2010.
The decline was even more marked in low-income countries - from 42 to 20 percent in the same time
period (GMR, 2012). In these countries, where education used to be unavailable to all but the elite, it is
now accessible for most children.
Still, millions of children who should be in school are not. More and more, the children who remain
excluded from school are the marginalized: the poor, the nomadic, those from minorities, the disabled,
and girls. Among those disadvantaged children who have gained entry to school, dropout rates remain abysmally high and their learning is far below that of more privileged students.
As it has become more widespread, in some ways education has become less equal, in particular within
countries. In some countries with very low levels of education, gaps in the average years of schooling
widened between the poorest and the wealthiest groups. For example, in Burundi, the gap in average
years of schooling between the poorest 20% of 20-24 year olds and the wealthiest 20% increased from
2.0 years in 2000 to 2.6 in 2010; in Burkina Faso the gap rose from 3.2 to 3.3 years.2 Learning gaps also
widened in some places. Among 14 countries in southern and eastern Africa that participated in the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ) assessments, the difference in scores between children from higher socio-economic status (SES) groups and those from lower SES groups more than doubled, from 37 points in 2000 to 87 points in 2005.3
The growth in education inequality is of great concern because it violates children’s right to an
education and condemns many to lives of lower opportunity. It is also of concern from a development
perspective as continued education inequality and education exclusion will likely dampen economic
growth and guarantee continued prevalence of deep poverty (see e.g. UNICEF, 2015). For these
reasons, many international and national organizations have made pro-equity approaches an imperative in their education programming. In its education strategy document "Learning for All", the World Bank calls for investments to improve learning and to remove inequity. Similarly, in UNICEF's strategic plan for 2014-17, the first item is a refocus on equity.
Despite agreement that pro-equity programming is necessary, there is not an agreed-upon, concrete
understanding of what pro-equity education programs would look like. It is not for lack of ideas: many
programs have been proposed and tried to help marginalized children succeed in school. Yet, we do not
know enough about the relative benefits of different types of programs or about their relative cost-effectiveness, or even about what process to use to identify the most cost-effective basket of interventions for different contexts. In order to make progress on pro-equity programming, concrete and evidence-based answers to these issues are crucial.

2 We compared the average years of basic schooling (grades 1-9) for 20-24 year olds in the lowest income quintile to those in the highest income quintile. A few examples: in Burkina Faso in 1992, average years of basic schooling was 0.5 and 4.7 for those in the lowest and highest income quintiles respectively; in 2010 it was 1.1 and 5.4 years respectively. The gap between the two groups increased from 3.2 to 3.3 years. In Burundi, average years of schooling of the two income groups in 2000 was 1.8 years vs. 3.8 years, and in 2010 it was 2.9 years and 5.5 years; the gap increased from 2.0 to 2.6 years. These numbers actually under-estimate the increase in inequality because they do not account for the rise in access to upper secondary and tertiary education, which is more skewed to wealthier children than basic education.
3 Across the 14 countries, the average difference in the reading scores of the lowest SES vs. the highest SES was 37 points in 2000, but had increased to 87 points by 2005. The mean score in these assessments is always set to 500 with a standard deviation of 100; thus the increase in differences represents a real rise in learning inequality.
Pro-equity programs need to be able to stand up to challenges from competing budget needs.
Funding education for the poor and the disadvantaged is often regarded as more costly and less efficient
than focusing on more easily accessible groups. Strong evidence and analysis are critical in order to
show that investments in the under-privileged can have a large payback. Such evidence will be an
important advantage in efforts to advocate for the rights of out-of-school children, whose voices often go unheard.
Requirements for evidence on cost-effectiveness are also coming from funders, such as recent new rules
from the United Kingdom Department for International Development (DFID). Decentralization has
similarly created a demand for tools to help sub-national and district managers make decisions on
programming with limited budgets.
But, while evidence is necessary and required, research findings on pro-equity and cost-effective
interventions are often ignored. For example, despite widespread evidence that mother-tongue
instruction in the early years improves learning significantly, it is not implemented in many countries.
One practical barrier to using evidence from research is that it is difficult to translate results from
studies into actual expected outcomes for particular countries or districts, and to project expected costs
and benefits of scaled-up implementation of programs.
For all of these reasons, it is clear that 1) efforts are needed to better analyze existing evidence on pro-equity interventions and 2) tools are also needed to help planners more easily utilize existing evidence
for their planning. In order to provide such analysis and tools, the World Bank and UNICEF agreed to
work together.
The purpose of this partnership is to create tools that can help governments and organizations answer
the following questions:
1. What programs and interventions are most efficient and cost-effective to ensure more education success – school access and learning – for marginalized children?
2. Can targeting marginalized children be more cost-effective than business as usual?
The new initiative, called the Simulations for Equity in Education (SEE) project, aims to foster analysis and
knowledge exchange between stakeholders in education and help identify programs and interventions
that promote equity in education while also meeting cost-effectiveness objectives. Cornerstones of the
effort are the Simulations for Equity in Education model, and a database of evidence on interventions in
education. The expected outcome is an increased understanding of what pro-equity interventions work,
and under what circumstances.
Figure 1 schematically shows a view of the overall planning process and where the tools developed in
the SEE project – the model and the database -- can be used. The starting point for equity planning is an
understanding or mapping of education outcomes of out-of-school or at-risk children – be it in a
Situation Analysis, a Country Status Report or other work, that includes the identification of barriers or
bottlenecks and a list of proposed interventions. Once the bottlenecks in education outcomes are
understood, the tools from the SEE project can be used to identify the most cost-effective and high-impact interventions, given the local context. Although SEE can be used to identify options that are
cost-effective and have the highest impacts, ultimately, planning decisions will have to be made through
a policy dialogue, followed by implementation in the field.
Figure 1. Schematic view of planning process and where the SEE model can provide useful insights.
[Flow diagram: Mapping of education outcomes of out-of-school or at-risk children → Identify bottlenecks → Possible interventions → Identify high-impact, cost-effective interventions → Policy dialogue to decide interventions to be implemented → Implementation. The SEE project tools are most likely used at the step of identifying high-impact, cost-effective interventions.]
The project was launched in September 2011. Some highlights from the project in its first two years
include the following. First, the World Bank and UNICEF began compiling robust, quality evidence on the effectiveness of education interventions from around the world, particularly from developing countries, and started analyzing this material to identify the impacts of interventions on marginalized groups. Second, after the two organizations agreed on a conceptual approach, UNICEF took the lead in developing the SEE model, and an early version was piloted in Ghana in October 2011.
A second pilot took place in Burkina Faso in May 2012, and other pilots are being implemented in
Pakistan, Bolivia and Sao Tome. Training workshops have been held in Ghana, Senegal, USA, Bolivia, and
Pakistan. The project is now also integrated into a larger pro-equity initiative funded by the Global
Partnership for Education.
This background report summarizes some of the findings from the first two years of the project, and provides guidance on how to use the tools. The report is organized as follows: an overview of insights gained from the database of evidence (Chapter 2); a description of the SEE model (Chapter 3); results from the pilot study in Ghana, in which we used the model to identify cost-effective and pro-equity interventions (Chapter 4); and a step-by-step user's guide for the SEE model (Chapter 5). An appendix includes a review of the models that the team used as the basis for the new SEE tool.
Chapter 2. What does existing
evidence tell us on how to reach
vulnerable children?
Over the decades, a growing number of studies has been undertaken to understand how effective different interventions are at improving school outcomes. The literature counts hundreds, if not thousands, of studies. At intervals, governments and donor organizations use such studies to direct programming – which is an important purpose of the research. Yet, there is a sense that the available studies are under-utilized and that the answers they give us are often ambiguous.
The intention of UNICEF and the World Bank to compile the research into searchable databases is a
renewed effort to help policy makers make better use of existing knowledge about interventions for
education in developing countries. The value added from the effort is a) to provide a selection of the
best research, and b) to compile it in a way that will make it easier for policy makers to find studies that
are relevant to their particular situation and plans.
Starting in late 2011, the SEE project teams at UNICEF and the World Bank collected and added over 300
studies to an intervention database, with results from 67 countries and 85 different interventions.
These studies all met certain quality requirements – in general only studies using certain research
methodologies were included (random control studies, econometric modeling, propensity score
matching, before-after analysis) – and many studies were reviewed that did not end up in the databases.
Thus, the evidence compiled is grounded in solid metrics.
2.1 How studies were selected – the branched search
A chief objective of a meta-analysis such as this one is to identify high-quality studies that have been
able to isolate the impacts of a particular intervention from other changes and differences between
groups (Pillemer and Light, 1980). A common approach to identifying such studies starts with a
systematic review: the universe of studies is collected in a broad Internet search by keywords typically
resulting in thousands of articles; analysts then comb through the list, discarding studies that don’t meet
particular criteria and end up with a much smaller set (Glewwe et al., 2011; Birdthistle et al., 2011; Leroy et al., 2011 use this approach). Another approach, which is more time-efficient but more limited in scope, is to report on a particular series of studies (e.g. Kremer and Holla, 2008, reporting on the Poverty Action Lab random control studies).
Our approach, to make the best use of limited time but still achieve a broad scope, has been to use what
one might call a branched search. Such a search is more focused than the systematic review, but has a
similarly wide scope in that materials are culled from the whole universe of studies. The branched
search starts from a few recent, respected publications, and branches out from those using the citations.
Each branch starts with the first study, from which key references are tracked. The key references in the
second tier of publications are then tracked, and a third and possibly a fourth level are
included. At each subsequent level, the references are older, and by the third or fourth level, the age of
citations has risen to 10–20 years. When one branch is completed, the search returns to level one and
starts anew. Typically, there is a small number of highly significant works that resurface in many studies.
The idea behind this approach is that by starting with a known, respected author or research group,
further references are already filtered through the quality-control lenses of these researchers, to which
are added the quality-control lenses of the team compiling the studies.
While this type of search appears incomplete – as opposed to a systematic review – it is extremely time-efficient, and it turns out to be fairly comprehensive as well. A one-day branched search for studies
concerning the impacts of water, sanitation and hygiene (WASH) programmes on education indicators
unearthed half of the relevant WASH articles also identified in a much longer effort by Birdthistle et al.
(2011). We also identified about half of the studies ultimately included in Glewwe et al.’s (2011)
systematic review in a very short time period.
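As an aside, the branched search can be pictured as a depth-limited walk over citation links. The sketch below is purely illustrative (the get_key_references lookup is a hypothetical placeholder; in the actual project, tracking references is done by hand):

```python
def branched_search(seed_studies, get_key_references, max_depth=4):
    """Collect studies by following key citations outward from trusted seeds.

    get_key_references(study) returns the key citations of a study
    (a hypothetical lookup; in practice this step is manual expert work).
    """
    collected = set(seed_studies)
    frontier = list(seed_studies)
    for _ in range(max_depth):
        next_tier = []
        for study in frontier:
            for ref in get_key_references(study):
                if ref not in collected:  # highly cited works resurface; count them once
                    collected.add(ref)
                    next_tier.append(ref)
        frontier = next_tier
    return collected

# Toy citation graph: each study maps to its key references
citations = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(sorted(branched_search(["A"], lambda s: citations[s])))  # ['A', 'B', 'C', 'D']
```

The depth limit mirrors the observation that by the third or fourth tier the citations are 10–20 years old, at which point the branch is closed and the search returns to a new seed.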
A second quality filter for our analysis deals with the quantitative requirements of the database. All of
the studies that are added to the database require the following information: 1) clear description of
treatment group(s); 2) value of the education outcome without the intervention; and 3) value of the
education outcome with the intervention. Upon more careful perusal, it turns out that a good number of
studies are not able to provide these values. For example, many studies in a potentially interesting series
by the World Food Programme could not be included in the database because they described only the
education outcomes of children who were receiving school meals.
Examples of the branches that were started in our branched search are: the literature review compiled by UNICEF in 2011 (Takahashi); the 2011 Education for All Global Monitoring Report section, 'Levelling the playing field', on interventions for marginalized groups; the CREATE Pathways to Access series under the direction of Keith Lewin; and the MIT Poverty Action Lab.
2.2 Categories of interventions and outcomes
The range of education-related interventions that have been researched is wide – 85 were found in our search – and it is useful to organize them into larger groups. Below are seven such larger groups. The first three groups are supply-side interventions, the last three are demand-side interventions, and the middle group can be viewed as either supply- or demand-side.
Table 1. Seven groups of education interventions used in the UNICEF intervention database.
Traditional supply-side school inputs -- Teachers, teacher qualifications, books and materials, classrooms, other infrastructure.
Beyond traditional school inputs -- Improved sanitation facilities, transportation to school, school safety measures, playgrounds.
Teacher monitoring and support -- Interventions to reduce teacher absenteeism, increase hours in class teaching, reduce teacher violence, improve morale and dedication.
Alternative teaching and learning methods -- Remedial teaching, computer-aided teaching, small group learning, child-based learning, mother-tongue instruction, scripted programs to teach reading and basic math.
Health interventions -- Nutrition to support cognitive capabilities, intestinal worm treatments, screening for disabilities.
School readiness -- Early childhood interventions including preschool and kindergarten and support to parents to stimulate youngsters.
Income support -- Cash transfers, fee abolition, scholarships, meals, free books or uniforms.
The education outcomes studied are far fewer in number than the interventions. It is useful to
distinguish here between higher-level education outcomes and intermediate ones. The higher-level
objective of education systems is to provide young people with skills necessary for their adulthood, and
those outcomes are commonly measured by such indicators as: years of schooling, highest level of
education attained, and skills as measured by learning benchmarks reached or exceeded. Interventions
often do not impact those outcomes directly, but rather affect intermediate outcomes that are their
proximate determinants, such as, entry rates, dropout rates, repetition rates, school attendance, and
learning scores. For example, cash transfers may ultimately improve school graduation rates (higher-level outcome), but in the immediate term do so by increasing entry rates and by reducing absenteeism
and dropout (intermediate outcomes). Studies tend to focus on these intermediate outcomes,
presumably because changes to the intermediate outcomes can be seen more immediately, and
because it is easier to trace the interventions' effects. We found that most intervention studies look at the following intermediate outcomes:
- School entry rates, including on-time entry
- Repetition rates
- Dropout rates
- Absenteeism
- Learning measures.
What does the evidence tell us about the impacts of interventions on those intermediate outcomes, in
particular with regards to vulnerable children? And, what do the studies imply about how one can best
compute the (cost-) effectiveness of interventions?
Three general insights have emerged from this review so far and they are discussed in turn in the next
sections:
1. In general, vulnerable children benefit more from interventions than less vulnerable children.
2. The size of intervention impacts is roughly proportional to the size of education gaps.
3. Context and implementation are important contributors to intervention effectiveness.
2.3 In general, vulnerable children benefit more from interventions, but not
always
The studies show that in general vulnerable children benefit more from interventions than less
vulnerable children. We might expect this result for two reasons:
First, many interventions address barriers that vulnerable children face but that non-vulnerable children do not (or face to a lesser degree). For example, cash transfers address the poverty barrier to schooling; this is a barrier that wealthier children do not have. Or, school ramps address an access barrier for children in wheelchairs or on crutches, a barrier which does not exist for children who can walk.
Second, even if the interventions address barriers that exist for all children – for example, teacher
quality – vulnerable children may benefit more because they are starting from a lower base. Say better
teacher quality was shown to lower the dropout rate by 10 percent. If the benchmark dropout rate for
non-vulnerable children is 3 percent, the better teachers translate into a 0.3 percentage point decline.
Comparatively, if the benchmark dropout rate for vulnerable children is 20 percent, the 10 percent
benefit of higher quality teachers translates into a 2-percentage point drop, a much greater effect than
on the non-vulnerable group in the example.
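The arithmetic of this hypothetical example can be written out directly (a minimal sketch; the 10 percent effect and the baseline dropout rates are the illustrative figures from the text, and the function name is our own):

```python
def absolute_drop(baseline_rate, relative_effect):
    """Percentage-point decline implied by a relative effect on a baseline rate."""
    return baseline_rate * relative_effect

# Hypothetical 10 percent relative reduction in dropout from better teachers
print(round(absolute_drop(3.0, 0.10), 2))   # 0.3 point drop for non-vulnerable children
print(round(absolute_drop(20.0, 0.10), 2))  # 2.0 point drop for vulnerable children
```

The same relative effect thus yields an absolute change almost seven times larger for the group starting from the higher dropout rate.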
The evidence suggests that often, vulnerable children benefit more from an intervention than non-vulnerable children, but there are also many instances where the opposite is true. Thus, pro-equity programming
is not necessarily cost-effective across the board: it is critical to identify the right programs to achieve
pro-equity outcomes in a cost-effective manner.
The remainder of this section reviews the evidence for the above finding.
There are relatively few studies that look at how one particular intervention affects different groups of
children, to answer the question: is it more cost-effective to provide certain interventions to vulnerable
children, or not?
Much of the research on particular interventions, especially random control trials, focuses on
marginalized groups of children. Supply-side interventions are typically studied in poor or rural areas, in
villages where there are no schools, where teachers are known to be absent, classes are large, books are
absent, learning levels are low. Demand-side interventions typically focus on groups of vulnerable
children – transfers for poor children, micronutrient supplements and treatments to malnourished or
sick groups of children, and so forth. This makes sense given that the children of concern are those who
are falling behind. But it does imply that there is a certain vulnerability bias in the research. From these
studies we cannot directly compare the effects among vulnerable children to what they would be among
the more privileged, and cannot determine which group it would be more cost-effective to provide for.
Another branch of research, namely econometric studies (such as those conducted for Country Status Reports), treats the population of children and school pupils as an aggregate, in which the effects of particular characteristics are studied. We typically learn, for example, that socio-economic status affects intermediate school outcomes, and that particular school inputs (say school proximity) may or may not affect school outcomes. It is far less common to find econometric studies that look at differential effects for sub-groups, for example, whether school proximity matters more, or less (or not at all), to vulnerable children as compared to privileged children.
That said, a small sub-set of education intervention studies includes two or more different sub-groups of
children. Table 2 below shows some examples. The first column describes the intervention, the
outcome measured and the country. The second column describes the two groups included in the study
– one vulnerable, one less so. The next two columns show the outcomes measured – without the
intervention in the third and with the intervention in the fourth column. The fifth and last column
shows the difference in outcomes attributed to the intervention.
In seven of the eleven examples shown, interventions have larger impacts for vulnerable children. This
is true for: mother-tongue instruction in Guinea Bissau and Bolivia; school proximity in Comoros;
preschool in Guinea; village schools in Pakistan and in Afghanistan; and cash transfers in Mexico. This
clear and important finding suggests that in many cases, focusing interventions on vulnerable children is
not only important for equity, but is also likely more cost-effective (assuming the interventions studied
cost the same for both groups of children).
The finding comes with a big caveat, because it is far from universal. In four of the eleven comparisons
shown, the impacts are greater for non-vulnerable children: transition to lower secondary in Niger;
cognition scores following preschool in Cape Verde; reading program in the Gambia; survival rates
following free uniforms in Kenya.
The number of studies here is too small to support statistical inference; the comparison mainly serves to make the point that higher impacts for vulnerable children are common, but far from universal.
Table 2. Comparisons of impacts of interventions on vulnerable vs. less vulnerable children, from nine
selected studies that analyzed two groups.
Intervention (outcome, country, source) | Sub-group | Without intervention | With intervention | Difference
Mother-tongue instruction impact on math and reading scores in grade 1, in Guinea Bissau (Hovens, 2003) | Poor | 39 | 44 | 5
 | Non-poor | 45 | 44 | -1
Mother-tongue instruction impact on transition to lower secondary in Niger (Hovens, 2003) | Unschooled parents | 57 | 67 | 10
 | Civil servants | 72 | 86 | 14
Mother-tongue instruction impact on pupils over-age (%) in Bolivia (Sanjinez et al., 2005) | Very monolingual | 39 | 31 | 8
 | Low monolingual | 19 | 16 | 3
School within 30 minute walk impact on school entry by age 9, Comoros (Pole de Dakar, 2013) | Q1 | 74 | 84 | 10
 | Q5 | 97 | 98 | 1
Preschool impact on cognitive scores in Guinea (Jaramillio and Tjietjen, 2001) | SES1 | 85 | 90 | 5
 | SES3 | 94 | 96 | 2
Preschool impact on cognitive scores in Cape Verde (Jaramillio and Tjietjen, 2001) | SES1 | 84 | 93 | 9
 | SES3 | 87 | 100 | 13
EGRA program impact on reading comprehension in the Gambia (CSR, 2011) | SES 1 | 26 | 44 | 18
 | SES 4 | 46 | 68 | 22
Free school uniforms impact on survival to G6 in Kenya (Duflo et al., 2006) | Girls | 52 | 59 | 7
 | Boys | 66 | 74 | 8
School in village impact on enrollment in rural Pakistan (Lloyd et al., 2002) | Girls | 35 | 52 | 17
 | Boys | 86 | 87 | 1
School in village impact on enrollment in rural Afghanistan (Burde and Linden, 2012) | Girls | 28 | 74 | 47
 | Boys | 39 | 76 | 37
Cash transfers impact on secondary school enrolment in Mexico (Rawlings and Rubio, 2005) | Girls | 67 | 75 | 8
 | Boys | 73 | 78 | 5
2.4 The size of the intervention impacts is correlated with the size of the education gaps
For any given intervention there are considerable differences in the effects found from one study to the
next. Some of these differences are related to how the interventions are implemented or to
sociocultural contexts. But another source of difference is the size of the gaps that the interventions are
intended to address. One of the most important – but almost completely overlooked – observations to
be garnered from reviews of existing studies is that the absolute size of the intervention impact tends to
be larger for groups of children with larger disadvantages or gaps in their schooling.
Figure 2 shows the example of two interventions and two outcomes – but other intervention and
outcome combinations show similar patterns. The absolute effects of preschool in the left graph and
school meals in the right are measured in percentage point change to dropout rates and pupil
absenteeism respectively. On the x-axis, we show the initial (no intervention) values for dropout rates
and absenteeism; and along the y-axis, the percentage point change with the interventions. There is a
clear, positive relationship in both graphs between the initial values of dropout and absenteeism, and
the size of the impact.
The left-side graph shows dropout rates in primary school with no preschool intervention along the x-axis, and the change in dropout rates for those children who had preschool. Most of the studies looked at short-term changes – dropout rates in grades 1 and 2 – but one followed children for 7 years (Kagitsibasi et al., 2001, Turkey) and found undiminished effects.
The right-side graph shows absenteeism (percent of days students were not in school) without school feeding programs, compared to the change in absenteeism rates when school feeding programs were introduced. The programs differed in nature – some provided take-home rations and others in-school cafeterias – and also differed in the amount of food distributed and the requirements for eligibility. Despite these differences, we still see a clear correlation between the initial education gap (absenteeism) and the effects.
Figure 2. Impact of preschool on dropout rates and school meals on absenteeism show the positive correlation between the initial gap (dropout and absenteeism) and the impact (percentage point change). Data points on preschool from studies in Colombia, India, Nepal, and Turkey; points on meals from studies in Bangladesh, Cambodia and Kenya. [Left panel: x-axis, starting dropout rate; y-axis, percentage point dropout reduction. Right panel: x-axis, absenteeism without meals; y-axis, percentage point reduction in absenteeism with meals.]
Sources: Lal and Wati, 1989; Kagitsibasi et al., 2001; Nimnicht and Posada, 1986; Save the Children and UNICEF, 2003; Finan et al., 2010; Ahmed and del Ninno, 2002; Nielsen et al., 2010; WFP, 1999.
If it is true that the initial values strongly affect the absolute change caused by the interventions, this means that the best way to measure the effect of an intervention is as a proportional decline in the gap; measuring the effect of an intervention in absolute terms can be quite misleading. For example, in the evaluation of the food program in Cambodia, the World Food Programme found a two-percentage-point reduction of absenteeism – seemingly small. These are the two points at the bottom left of the second graph in Figure 2 – pre-intervention absenteeism was 10 percent and it dropped to 8 percent with the school feeding program. But the small absolute change likely has much to do with the low starting value; the program was actually very effective, as it reduced absenteeism by twenty percent.
Effectiveness as a measure for intervention impact
A general take-away from the above finding is that absolute changes – measured in percentage points – are often a poor measure of intervention effectiveness, and that a more sensible way to report the difference is in relative terms: by what percentage an education gap was reduced as a result of the intervention. We can call this measure effectiveness.
This measure of effectiveness has its corollary in the medical literature, where it is common to speak of
the effectiveness of an intervention as the percentage reduction of the incidence of a medical problem.
A similar approach would make sense in education.
The presentation of intervention impacts as effectiveness is not the norm in education research; but it is
often possible to compute the values for effectiveness from other information presented in papers and
reports. Below are two examples of a calculation of effectiveness.
Effectiveness of preschool calculation. UNICEF and Save the Children conducted a study of the impacts of preschool for poor children in Nepal. In their 2003 study, the authors reported that among poor children who had not attended preschool, only 75 percent entered primary school. In contrast, among a peer group that was provided with preschool, 95 percent enrolled in primary school upon reaching the appropriate age. The non-entry gap was thus reduced from 25 percent to 5 percent, a 20-percentage-point drop. From this information, we can compute that in this study the preschool's effectiveness for reducing non-entry was 80 percent (20/25).
Effectiveness of school proximity. Burde and Linden (2009) looked at the impact of village schools on enrolment in Afghanistan. Two peer groups of villages without schools were compared; in one group, a village school was built, while the control group received no schools. At the end of a year, the out-of-school (non-enrolment) rate of girls in villages with a new school was 26 percent, compared to 82 percent for girls who lived in villages where no new school was built. The effectiveness of having a village school was to reduce girls' non-enrolment by 66 percent. For boys, the non-enrolment numbers were 24 percent and 61 percent, respectively, implying an effectiveness of 49 percent.
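The worked examples above reduce to one formula: the gap reduction divided by the initial gap. A minimal sketch (the function name is our own; the Nepal preschool figures and the Cambodia school-feeding figures are used because they divide out exactly):

```python
def effectiveness(gap_without, gap_with):
    """Proportional reduction of an education gap attributable to an intervention."""
    return (gap_without - gap_with) / gap_without

# Nepal preschool study: non-entry fell from 25 to 5 percent among poor children
print(effectiveness(25, 5))   # 0.8 -> 80 percent effectiveness

# Cambodia school feeding: absenteeism fell from 10 to 8 percent
print(effectiveness(10, 8))   # 0.2 -> a 20 percent reduction
```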
Education measures can be presented in two ways: as an outcome – enrolment, entry, survival, learning
achievement – or as its opposite, a gap – out-of-school, non-entry, dropout, failure to learn. From the
discussion above, it appears that it is useful to look at the effectiveness of interventions as reductions in
education gaps, rather than as increases in positive outcomes. The reason is practical: effectiveness
measured as reducing gaps provides more reliable predictions of interventions’ impact in other
contexts. Measuring impact as reductions in gaps also has its corollary in the medical literature, where
the effectiveness of interventions is more commonly measured as reductions of incidence, morbidity and
mortality, rather than as increases in the absence of an illness.
To provide an example of the difference between measuring effectiveness on a gap rather than on an
outcome, let’s look at the example of the Nepali preschools again. The impact of the preschools
can be measured as a reduction in non-entry (a gap) or as an increase in entry rates (outcome). We
calculated that the effectiveness of the intervention on the gap (non-entry) was a reduction of 80
percent. If we looked at it the other way around, the intervention increased school entry rates (the
outcomes) from 75 percent to 95, a 27 percent increase.
Say that policy makers want to implement the same preschool program to another district in Nepal, but
one where the school entry rates were already 90 percent. Which measure of effectiveness would
provide a more plausible prediction of the impact of preschools – the 80 percent reduction of the gap,
or the 27 percent increase of the outcome? The first measure predicts that in the new district non-entry
will decline by 80 percent from 10 to 2 percent of children; the second measure predicts that entry will
increase by 27 percent from 90 to 114 percent of children.
The calculation using the effectiveness on the gap gives us the smaller absolute change with the smaller
gap – which corresponds to the earlier finding that impacts are proportional to the gaps. Conversely,
the calculation using the effectiveness on the outcome gives us a larger absolute change in the district
with higher entry rates to begin with, and more than 100 percent of children going to school – a nonsense result.
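The two ways of transferring the Nepal result can be compared directly in code; a sketch using the figures from the text (26.7 percent is the unrounded version of the 27 percent growth above):

```python
def predict_gap_based(new_gap, eff_on_gap):
    """Apply a gap-reduction effectiveness (%) to a new context's gap (%)."""
    return new_gap * (1 - eff_on_gap / 100)

def predict_outcome_based(new_outcome, growth_on_outcome):
    """Apply an outcome-growth rate (%) to a new context's outcome (%)."""
    return new_outcome * (1 + growth_on_outcome / 100)

# New district: 90% entry, i.e. a 10% non-entry gap
print(round(predict_gap_based(10, 80), 1))        # 2.0 -> entry rises to 98%
print(round(predict_outcome_based(90, 26.7), 1))  # 114.0 -> impossible, over 100%
```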
2.5 Modes of implementation and context matter a lot to intervention
effectiveness
A third insight from the literature review is that the effectiveness of interventions can vary quite a lot
from one study and context to another – context matters. For example, the effectiveness of school
feeding programs on reducing absenteeism, shown in Figure 2 above, is: 22, 18, 25, 19, 41, 29, 44, and
12 percent (taking the points from left to right in order). The highest value is almost four times the lowest.
With some intervention/outcome combinations, the range can be even larger.
Figure 3 shows the effectiveness of 11 interventions on reducing the dropout rate. The interventions
include four from the category “Income support” – school feeding, fee abolition, cash transfers and free
uniforms; school readiness; teaching methods – mother tongue instruction; and five traditional school
inputs – school proximity, pupil teacher ratio, materials and buildings, female teachers, and textbooks.
Within each intervention, there is clearly quite a wide variation in effectiveness – in many cases, wider
than the 12-44 percent effectiveness range found for school feeding and absenteeism.
Figure 3. Effectiveness of 11 interventions on reducing dropout rates.
[Figure: bar chart of effectiveness (0 to 1 scale) of interventions to reduce dropout rates: school feeding, fee abolition, cash transfers, free school uniforms, preschool, school proximity, mother-tongue instruction, pupil teacher ratio, materials and buildings, female teachers, textbooks.]
It is clear from the graph that many of the interventions can be very effective at reducing dropout rates
– there are some high levels of effectiveness shown. The potentially high-impact interventions are:
three of the four income support measures (meals, fee abolition, and transfers), preschool, mother-tongue instruction, and school proximity. In contrast, the other four traditional school inputs – pupil-teacher ratio, female teachers, materials and buildings, and textbooks – show only a low effectiveness level. Even those low values
overstate what was found on the effectiveness of these measures; most of the studies that include
these measures are not in the figure because the effects were statistically insignificant. It is useful to
highlight the relatively low effectiveness of traditional school inputs vs. the potentially high effectiveness
of alternative programs, given continued emphasis on traditional inputs.
Another immediate insight from the graph is that the measured variation of effectiveness is wide. For
preschool for example, the range is from 0 to 86 percent effectiveness; for school feeding the range is
from 0 to 52 percent, and this is typical. Such ranges suggest that it may be almost impossible to predict
what the effectiveness of those interventions will be if applied in a new context. This is hardly the kind
of guidance that we were hoping for from the literature.
What causes such wide variation? And, more practically, can we distinguish between situations of low
and high effectiveness?
Analysis of the variation of preschool effectiveness on dropout
A full review of all the studies here, analyzing the role of context and other factors that determine
effectiveness, is beyond our scope; but it is useful to take one set of studies
looking at one intervention as an illustration, to see whether any progress can be made in narrowing the
range of effectiveness values. As an example, we take the effects of preschool on dropout rates. This
group of results has the largest range of effectiveness, from 0 to 86 percent.
Table 3 shows selected results from the seven studies on the effectiveness of preschool for reducing
dropout rates. The studies hail from three continents and include both very poor and middle-income
countries and children. Two of the studies were random control trials; four were comparative
propensity matching studies using sampling or household surveys; and one was a macro-econometric
study of 24 countries in Africa. The programs showed some consistency; most of them were five-day
programs with 3-4 hours of preschool daily, and some of them included nutritional and/or parent
counseling benefits. The macro-economic study included all programs that were considered “preschool”
by national administrative data. The studies measured the impacts of preschool after 1, 2, 3 or 7 years.
Thus, there is a considerable range of information.
At first blush, there does not appear to be a clear pattern separating the low-effectiveness from high-effectiveness preschools, but two meta-studies provide useful guidance – Blok et al. (2005) and Leroy
et al. (2011). These two meta-studies suggest three main modifiers for preschool effects on dropout
rates:
1. Quality of the programs
2. Competing forms of childcare
3. Grade of measurement.
Quality of the programs. It almost goes without saying that not all preschool programs are alike, and
that some will be more effective than others. Blok et al. found that it is difficult to distinguish quality
(except ex-post, in the effects), but that centers tend to be more effective than home-based programs.
Competing forms of childcare. The “net”, or value-added, effect of preschools is conditional on the
alternatives, say Leroy et al. If children who are not in the treatment group are getting other,
stimulating care, the net difference with the preschool group of children is smaller (or nil). This factor
can explain why children from higher socio-economic groups benefit less from preschool – their at-home
environment is more likely to be an on-par alternative.
Grade measured. In particular with regard to dropout, the grade measured is significant (Leroy et al.,
based on Berlinski et al., 2007). On average, the measurable impact of preschool rises over time. In
particular, if dropout rates in the first grades are low, children who fail to reach the learning benchmarks
of those early grades tend to repeat classes rather than leave school. The preschool effects in lower
grades will be to reduce repetition, rather than dropout – in fact, in the study that found no effect of
preschool on dropout in Grade 1 (Martinez et al., 1986), the impact of preschool on lowering repetition
was strong.
What happens when we apply these three filters to the preschool studies in Table 3? We can eliminate
four points: the effectiveness for high-caste children in Lal and Wati (13%); and the effectiveness for
grade 1 dropout in Martinez et al., Save the Children, and Nimnicht and Posada (0%, 50%, and 26%). We
may further eliminate the macro-economic study of 24 African countries, as it is the least specific, and
most likely to be influenced by unmeasured sub-national factors (such as household selection of
preschool attendance). The range of preschool effectiveness on dropout is thus reduced from 0–86
percent to 43-86 percent. That leaves us with a range of a factor of two. This is relatively small given
that we have put in no controls for quality of care and are spanning cultures and school systems in three
continents. It is suggestive of a consistent impact of preschool around the globe for children from lower
socio-economic groups. It also suggests that analysis of studies can limit the range of variability of
effectiveness. A more limited effectiveness range means that we can, with greater confidence, transfer
the results of these studies to our expectations of how much preschool will impact dropout in other,
new contexts.
Table 3. Selected results from seven studies on the effectiveness of preschool for reducing dropout
rates.

Lal and Wati (1986), India – Matching study.
Study looks at the “impact of ICDS program (3 hrs of preschool per day) and compared ICDS and non-ICDS children from 4 rural villages with respect to enrollment, drop-out by grade 3, and school performance” (from Meyers and Landers, 1989).
Control values for % dropped out: Grade 1-3, by caste. Low: 35%; Medium: 25%; High: 8%.
Effectiveness found: Grade 1-3, by caste. Low: 46%; Medium: 80%; High: 13%.

Jaramillo and Mingat (2003), Africa – Multivariate regression of macro indicators.
Macro-econometric multivariate regression of effects of preschool on survival and repetition, while controlling for GDP and other factors. Data included national level values from 24 African countries.
Effectiveness found: 86%.

Kagitcibasi, Sunar, and Bekman (2001), Turkey – Longitudinal RCT.
Random control study of 280 poor children in Istanbul, Turkey, on the long-term effects of an educational early learning home environment. The results show impacts of parent enrichment and preschool programs.
Control values for % dropped out: 7 years: 33%.
Effectiveness found: 7 years: 58%.

Nimnicht, G., P. Posada (1986), Colombia – Matching study.
The study looked at children age 0-7 in four communities, and compared children who were receiving PROMESA interventions (health/nutrition, mother education, preschool) with children who did not. Cited in Meyers and Landers, 1989.
Control values for % dropped out: Grade 1: 23%; Grade 1-2: 60%; Grade 1-3: 66%.
Effectiveness found: Grade 1: 26%; Grade 2: 43%; Grade 3: 52%.

Martinez, S., S. Naudeau, V. Pereira (1986), Mozambique – Random control trial.
2-year random control study of 76 poor, rural communities in Gaza province, looks at the impacts of preschool (3.25 hrs/day) on primary enrolment, learning, siblings, parents. Of the 76 communities, 30 received preschools for children 36-59 months.
Control values for % dropped out: Grade 1: 4%.
Effectiveness found: 0% – no significant effect for dropout in grade 1.

Save the Children and UNICEF (2003), Nepal – Matching study.
In 2000 Save the Children collected school information on children in districts in eastern Nepal, where some children were receiving ECD programs, including preschool. SC compared the outcomes of children who had gone to preschool with those who had not.
Control values for % dropped out: Grade 1: 22%; Grade 2: 9%.
Effectiveness found: Grade 1: 50%; Grade 2: 67%.

Berlinski, S., S. Galiani, M. Manacorda (2007), Uruguay – Multivariate regression of household survey.
Study uses household survey data with information on years of preschool attendance, to estimate the impacts of preschool on primary attendance and grade attainment for children age 7-15. Preschool in Uruguay is 4 hrs/day and can be 1, 2, or 3 years. Study found rising impacts of preschool with age.
Control values for % dropped out: Age 15: 29%.
Effectiveness found: Age 15: 86%.
2.6 The intervention effectiveness matrix
Many interventions are effective at reducing multiple education gaps. For example, preschool or
kindergarten for poor children improves entry, reduces dropout and repetition, and increases learning –
a four-fold effectiveness. We devised a simple tool to incorporate such multiple impacts: the
effectiveness matrix. The effectiveness matrix contains columns with the education flow variables that
can be directly impacted by the interventions – non-entry, late entry, dropout, repetition, and learning
rates. In the rows, we put a list of interventions. Each of the cells in the matrix provides the
effectiveness of one intervention on changing one flow variable.
An example of a few rows of the effectiveness matrix used for Ghana is shown in Table 4. The
interventions can be grouped into those affecting supply (S) and demand (D). Many of the demand- side
interventions are most effective for a particular target group – these are children who experience a
barrier that the intervention is intended to reduce – which is included in the effectiveness matrix.
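In code, such a matrix can be held as a simple mapping from intervention to its per-gap effectiveness values and target group; the sketch below uses two of the Ghana rows, with field names of our own choosing:

```python
# Illustrative effectiveness matrix, keyed by intervention.
# "side" is S (supply) or D (demand); effectiveness values are % gap reductions.
effectiveness_matrix = {
    "Kindergarten construction and attendance": {
        "side": "D",
        "target": "Poverty",
        "effects": {"non_entry": 80, "late_entry": 48},
    },
    "School meals/year": {
        "side": "D",
        "target": "Poverty",
        "effects": {"dropout": 25, "repetition": 25},
    },
}

# One cell of the matrix: one intervention crossed with one flow variable.
kg = effectiveness_matrix["Kindergarten construction and attendance"]
print(kg["effects"]["non_entry"])  # 80
```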
Table 4. Example of effectiveness matrix: S designates a supply-side intervention; D a demand-side
intervention; the last column specifies the target group (sources: SEE-Ghana model). Values are
effectiveness as a % reduction of each gap.

Interventions: Non-entry / Late entry / Dropout / Repetition / Failure to learn / Barrier targeted
S – Enough textbooks per pupil: – / – / – / – / – / All
D – Kindergarten construction and attendance: 80% / 48% / 46% / 63% / 14% / Poverty
D – School meals/year: – / – / 25% / 25% / 38% / Poverty
D – Mother-tongue instruction: – / – / – / – / – / Minority language
In pilots and training workshops on the SEE project, the question has inevitably arisen: Where do we find
these effectiveness values? It is not practical to conduct effectiveness studies separately for each
application; a more efficient approach is to utilize existing studies, and where possible, to select ones
that were conducted in a context similar to the country where SEE is being applied. For help navigating
the large literature on interventions in education, users can turn to the UNICEF and the World Bank
intervention effectiveness databases.
Chapter 3. Simulations for equity in
education – SEE:
A model to target interventions for cost-effective and pro-equity outcomes
The literature review itself provides useful insights and information that can be used for effective
planning of pro-equity programs. However, if a policy group wishes to consider multiple interventions
and compare their cost-effectiveness; to consider multiple target groups; or to decide on an optimal
scale, the computations quickly become complicated. For example, in Ghana at a stakeholder meeting
on education, the group identified two main bottlenecks – learning and dropout – and for each of these,
identified 3 to 5 possible interventions (UNICEF, 2011). For learning, the stakeholder group identified
the following: enforcing teacher management to increase time on task and teacher attendance;
improving curriculum and teaching learning materials; analysis of factors contributing to learning
achievement (this was later refined to include mother-tongue instruction, books, certified teachers).
The next step would be to identify which of these interventions is most cost-effective, and how to best
distribute them over different risk-groups to maximize cost-effectiveness and impact. Given the variety
of variables – risk-groups, bottlenecks, and interventions – it is almost impossible to manually compute
an optimal strategy balancing cost-effectiveness, overall impact, and equity effects. The SEE model
developed by UNICEF and the World Bank aims to aid this kind of more complex decision-making and
cost-effectiveness optimization.
3.1 Use and purpose of SEE model
The tool provides users a virtual (if simplified) replication of a country or regional context within which
to test out cost-effectiveness and impacts of different sets of education interventions and pro-equity
targeting, and to compare them against each other. Users input information on the specific context
under analysis – be it a country or a sub-national region – including different sub-groups, existing school
access and learning outcomes, pre-existing interventions. They then input alternate baskets of
interventions -- scenarios. The model provides expected education improvements, equity change, costs,
and cost-effectiveness for each scenario so that they can be compared.
The focus of this analytical tool is on equity, and specifically, its proposed use is to identify the
interventions that are most cost-effective for increasing access and learning for poor and marginalized
children who are today deprived of their right to a quality education. Its purpose is to respond to
questions like: Which interventions are most cost-effective within the context of a particular country or
region? What targeting strategies are most cost-effective, e.g. which risk-groups stand to benefit the
most from proposed interventions? What is the optimal cost-effective scale for an intervention – one
that reaches the greatest number of children while minimizing the drawbacks of declining marginal
returns? Considerable space is given in the user interface to target interventions to specific risk-groups
and to view cost-benefits across different groups. The model is designed to be used in a
consultative process within countries and can be applied in a variety of contexts.
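The tension between reach and declining marginal returns can be sketched with a toy calculation. All parameter values and the saturation curve below are invented for illustration; they are not taken from the SEE model:

```python
import math

# Illustrative diminishing-returns curve for an intervention: the more units
# deployed, the fewer additional children each new unit reaches.
def children_reached(units, saturation=10000, rate=0.0003):
    return saturation * (1 - math.exp(-rate * units))

def cost_effectiveness(units, unit_cost=50):
    """Children reached per dollar spent at a given scale."""
    return children_reached(units) / (units * unit_cost)

# Cost-effectiveness falls as scale grows, even as total reach rises.
for units in (1000, 5000, 10000):
    print(units, round(cost_effectiveness(units), 4))
```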
Within a dialogue, SEE can provide a logical and consistent framework to discuss alternate paths
towards a pro-equity education program. Its results can be used to advocate for a particular set of
effective interventions. Within national and regional planning, SEE can provide a cost-effectiveness tool
to complement financial planning models. As input to national and sub-national government planning,
the model can be used to analyze the relative cost-effectiveness of proposed interventions – whether
more (and better) teachers, books, materials and schools, or special programs such as meals, preschool,
mother-tongue instruction, transfers, etcetera. The model can be applied at the sub-national level, to
consider interventions beyond basic provisions from the central government, or even to consider the
impacts of just one program.
A proposed path for using the model is shown in Table 5. It is based on experience with the Marginal
Budgeting for Bottlenecks model, a tool developed for health by UNICEF and the World Bank (Devinfo,
2015). The MBB model is discussed briefly in the next section. It was implemented in 35 countries. For
each country or application, the process is adapted to the local situation.
Table 5. Proposed path for integrating SEE model into a pro-equity strategy development process,
based on experience with the Marginal Budgeting for Bottlenecks (MBB) model for health interventions.
Task: Orientation stakeholder meeting
Participants: 20-30 stakeholders from Ministries, UNICEF, donors, community. The purpose of the
meeting is to describe why bottleneck and equity analysis and planning are important; to provide a
basic orientation to the bottleneck/equity/modeling approach; early identification of data needs; and
early identification of participants in the process. This meeting can also be used to identify a small
team to collect data.

Task: Data collection
Participants: small technical team, consisting of one or two people from a local university or the
Ministry of Education, and a UNICEF or World Bank partner. The team will collect data and input it in
the SEE model. After data is in, the data team can do some first bottleneck analysis and perhaps some
simulations to prepare for the next stakeholder meeting.
Outcome: the data and bottleneck overview for the next stakeholder meeting.

Task: Stakeholder meeting to identify bottlenecks and strategy options
This meeting is ideally with the same stakeholder group as before, perhaps extended to 30 or 40, to
include more community members (e.g. teachers, headmasters) and others. In this meeting, the
education bottlenecks and the risk groups are identified, as well as intervention strategies. This can be
done in small group work, where each group works on one bottleneck but comes up with multiple
strategies (interventions, baskets of interventions). A core team to do simulations is established – it
will include the data person(s) and, in addition, one or two members from the Ministry.
Outcome: buy-in to the process; understanding of key bottlenecks; agreement on key intervention
strategies to test; and a core team to do simulations. The core team includes the data team (above)
plus ~2 planners from government.

Task: Simulation work with SEE on intervention strategies
The core team runs simulations based on the strategies identified in the stakeholder meeting, which
provide results on relative cost-effectiveness, overall impact and equity impacts of the different
strategies.
Outcome: the core team identifies effective and pro-equity paths forward.

Task: Documentation of effective strategies
The core team writes up the results of the stakeholder meeting (rationale, bottlenecks, intervention
strategies identified) and the outcomes of the simulation work, including the effective and pro-equity
paths identified.
Outcome: a solid document backing up the most effective strategies.

Task: Stakeholder meeting to present and discuss simulation results
The core team presents the document and findings on effective strategies to the same group of
stakeholders as the previous meeting. Stakeholders provide refinements to the document and findings,
which will be revised accordingly. At this meeting, the stakeholders can provide a recommendation to
the government / donors concerning which programs to implement.

Task: Optional – community roll-out
After the second stakeholder meeting, and if the government/donors have decided to pursue the
recommended programs, it would be possible to create buzz with a community roll-out of the proposed
plans with local gov’t, media etc.
3.2 Schematic overview of the SEE model
Before the SEE project for education, UNICEF and the World Bank had collaborated on a similar initiative
for pro-equity programming in health. That initiative developed the Marginal Budgeting for Bottlenecks
(MBB) model, which provided policy makers the opportunity to compare the cost-effectiveness of
different health interventions, and the impacts of different targeting strategies for vulnerable groups.
The MBB model was widely used and was applied in 35 countries (World Bank et al., 2009). It was also
the basis for an influential report that found pro-equity health programming to be a more cost-effective
and rapid way to reach the Millennium Development Health goals than business as usual (“Narrowing
the Gap”, UNICEF 2010).
The MBB and SEE model share a common approach to cost-effectiveness for marginalized groups. Both
measure the effectiveness of interventions on gaps in similar ways, as described in the previous section
on intervention effectiveness to reduce gaps. Both characterize effective access to services as the result
of a hierarchy of conditions. The hierarchy is based on a conceptual model of health services by
Tanahashi (1978). The hierarchy is shown in Table 6 with examples of measures used in the MBB and
SEE models.
Table 6. Hierarchical determinants of effective use of services, following Tanahashi (1978) and with
examples from health and education. These determinants are used in the SEE model.
Availability.
Health measures (examples): health centers within reach and supplied with doctors (human resources) and medicines (materials).
Education measures (examples): schools within reach and supplied with teachers (human resources) and learning materials.

Utilization.
Health: people go to the health centers.
Education: children enter the schools.

Continuity.
Health: people go for all necessary visits, e.g. pregnant mothers go throughout pregnancy.
Education: children complete the necessary school level.

Effective use.
Health: better health outcomes, e.g. more live and healthy births, lower maternal mortality.
Education: school outcomes, e.g. children learn and acquire useful skills.
At any level along the way up this hierarchy, access can be constrained; there are bottlenecks. For
example, schools may not be within walking distance for a proportion of eligible children; or, a
proportion of children never enter school even if there is one in the village; or, children drop out early;
or, even those who finish a particular level have very low levels of learning. In most developing
countries, there are bottlenecks at one or more levels. For vulnerable children, the bottlenecks are often
more severe.
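A bottleneck along this hierarchy can be located by comparing the share of children clearing each successive level; a minimal sketch with invented rates (not from any SEE application):

```python
# Share of children clearing each level of the Tanahashi-style hierarchy,
# conditional on having cleared the previous level (illustrative numbers).
levels = [
    ("availability", 0.90),   # school within reach
    ("utilization", 0.75),    # of those, share who enter
    ("continuity", 0.60),     # of entrants, share completing the level
    ("effective use", 0.40),  # of completers, share reaching learning standard
]

# The most severe bottleneck is the level with the largest relative loss,
# i.e. the smallest conditional rate.
bottleneck = min(levels, key=lambda lv: lv[1])
print(bottleneck[0])  # effective use
```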
Underlying the bottlenecks are barriers; there may be multiple barriers affecting a particular bottleneck,
and groups of vulnerable children face barriers that other children do not. Some illustrations of barriers
relevant to the utilization of available schools are:
 Poverty – very poor children may not be able to pay to go to school;
 Disability – the school may not have adapted programs for disabled children;
 Gender – there may only be a school for boys;
 Nomadism – the child may be nomadic and in the village only a few months of the year.
Effective interventions reduce those barriers, which in turn minimizes bottlenecks to the effective
access to services. For example, providing glasses to children with poor eyesight (disability barrier) can
improve their learning and retention in school, reducing continuity and effectiveness bottlenecks (see
Glewwe et al., 2012, for a study on the distribution of glasses).
The SEE model combines the theory of hierarchical access to services; bottlenecks in that access; and
barriers as moderators of the bottlenecks into a comprehensive framework for analysis.
In a few bullet points, the structure of the SEE model can be captured as follows:
 At the core of SEE is a hierarchical access model of students’ access to school and progress through
the school. Mirroring the Tanahashi hierarchy of determinants, outcomes are determined by the
availability of schools, the probability of school entry (utilization), probability of progress through all
the grades (continuity), and the probability of learning well (effectiveness). Together, this process
is called the education life cycle.
 The education life cycle is simulated grade by grade using conventional projection methods.
Specifically, progress through the education life cycle is determined by a) intake rates and b)
promotion and repetition rates, while c) learning outcomes are determined by learning rates.
 To accommodate vulnerabilities and barriers of different groups of children, the SEE model divides
the population into up to four risk groups with different expected education life cycle outcomes.
Moreover, within risk-groups, the model includes barriers that further impede education life-cycle
progress. For example, a population may be divided into two risk groups, urban and rural children;
and within these two risk groups, poverty may be included as a barrier to education.
 Projected progress through the education life-cycle is linked to interventions. The user can set
scenarios that assume the implementation of a set of interventions (by risk-group). Depending on
the effectiveness of the interventions (effectiveness is found in the literature discussed in the
previous section) and the scale of the interventions assumed in the scenarios, projected progress
through the education cycle is improved for each risk-group.
 The costs of each scenario are computed depending on the assumed scale of the interventions and
unit costs.
 Cost-effectiveness is obtained by dividing changes to the school life cycle outcomes with the costs.
 The calculations are made separately for each risk group, and can be aggregated into overall
outcomes.
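The bullet points above can be sketched in miniature. The multiplicative rule for combining several interventions and all of the numbers below are our own illustrative assumptions, not the SEE model's exact formulas:

```python
# Sketch of one risk group's education life cycle under interventions.

def reduced_gap(gap, interventions):
    """Shrink a gap (%) by each intervention's effectiveness * effective coverage.

    interventions: list of (effectiveness %, effective coverage 0-1) pairs;
    assumed here to combine multiplicatively.
    """
    for eff, coverage in interventions:
        gap *= 1 - (eff / 100) * coverage
    return gap

def project_cohort(entrants, dropout_by_grade, dropout_interventions):
    """Follow a cohort grade by grade, applying intervention-reduced dropout."""
    pupils = entrants
    for gap in dropout_by_grade:
        gap = reduced_gap(gap, dropout_interventions)
        pupils *= 1 - gap / 100
    return pupils

# 1000 entrants; baseline dropout gaps of 10%, 8%, 6% in grades 1-3;
# two interventions (46% and 25% effective) each reaching 50% of the group.
completers = project_cohort(1000, [10, 8, 6], [(46, 0.5), (25, 0.5)])
print(round(completers))  # 847
```

Dividing the gain in completers by the scenario's cost would then give the cost-effectiveness figure described above.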
Figure 4 shows a schematic flow diagram of these relationships. A fuller, formal explanation of the
model follows below.
Figure 4. Schematic overview of calculations for each risk group in SEE model and the combination of
the four risk groups into one model.
[Figure: for each risk group X, simulated impacts of interventions to reduce barriers feed into intermediate outcomes (intake, promotion, repetition, learning rates), which feed into school life-cycle outcomes (e.g. school entry rates, survival and completion, learning scores); together with the simulated costs of interventions, these yield cost-effectiveness. Simulations for risk groups 1 through 4 are combined into aggregated outcomes.]
3.3 Data requirements
It is clear that the SEE model requires a considerable amount of data. Information is needed about
values for entry, progression (promotion and repetition) and learning; that information is needed for
separate risk-groups in the population; information on barriers faced within the risk-groups is necessary;
and finally, evidence in an effectiveness matrix for the selection of interventions to be analyzed. While
at first glance the data list may seem formidable, much of the data needed are from sources that are
widespread – school administrative data, household surveys, population estimates, planning documents
for costs of programs.
Specifically, the data requirements of the SEE model are as follows.
1. Number of pupils by grade for up to four risk groups. Up until now, users have generally selected
different regional and/or gender groups that have distinct education outcomes – usually one or two
risk groups with distinctly worse education outcomes, and a residual group with more average
outcomes. This choice has been primarily driven by the availability of administrative pupil data by
regional groups and by gender.
2. Values for the different levels of access -- % of children with a school nearby, entry rates, repetition
and promotion, and learning outcomes. Also, initial values for teachers and textbooks. All values are
needed by risk group.
3. Prevalence of barriers, in each of the risk groups – for example, the prevalence of poverty, minority
language, disability – and the relative reductions of entry, repetition, dropout and learning for
children with these risk factors compared to those without. This can usually be obtained from
household surveys.
4. Population data – the number of school-age children, including school-entry age and primary school
completion age, and the population growth rate, based on a recent population census or credible
estimate.
5. Intervention effectiveness and other information – which barrier is targeted by the intervention (e.g.
addresses poverty, minority language, discrimination, disability); the existing reach of the
intervention; costs; and the effectiveness values for the effectiveness matrix. This is obtained from a
literature review or the intervention literature databases compiled by UNICEF and the World Bank.
Many countries have much of this data available through administrative education data systems,
household surveys, and population censuses. The data for the effectiveness matrix for interventions is
usually not available from one country or region, but must be pieced together judiciously, using some
research from other countries.
It is clear that not all of the data requirements will be met in all developing countries. A reliable
population count based on a recent Census is unavailable, for example, in a number of developing
nations – such as DR Congo and Pakistan. The administrative data on pupils, repetition and promotion
are unavailable or unreliable in others – such as the Central African Republic. Studies on the
effectiveness of many interventions have been done in only one or two places and may not be
applicable to all contexts, and some interventions may not be covered by any research. Such data
constraints limit effective planning, including limiting the use of planning tools such as SEE. It is
important to recognize these constraints and to advocate for resources to collect more of the
information that is necessary for effective planning.
3.4 Model interface and user interaction with the tool
The SEE tool has been programmed in Excel, which is widely available in developing countries. Its use is described in more detail in the User’s Guide in Chapter 5. Here, we present a quick preview. The tool consists of three main pages. Views of the model pages are shown in Figure 5.

Figure 5. Screen-shots of the user interface of the SEE model – main menu (dark blue cells); scenario setting matrix (light blue section); viewing results (green cells); entering data (orange cells).
3.5 Formal description of the SEE model
This section describes the model more formally. It proceeds in the order of the computations as they
are executed in the model, so that a reader who has the tool can follow along more easily. The order of
this description is as follows:
1. Effective coverage of interventions – In scenarios, users set interventions, the number of units to be
implemented and how the interventions will be targeted across risk-groups, as well as within risk
groups to children with particular barriers. Effective coverage is the percentage of children within a
risk-group who are effectively reached by the interventions.
2. Impacts of the interventions – The model computes how much the interventions reduce gaps in
school availability, entry, repetition, dropout, and learning, based on effective coverage and the
effectiveness of the interventions. This section describes the relative (percent) change in gaps,
combined impacts of multiple interventions, and absolute change in education gaps.
3. Calculations of improvements in school outcomes for each risk group -- increases in the percentage
of children entering school, completion, attaining learning standards, as well as high-level outcome
measures such as enrolment rates, and out-of-school children.
4. Calculation of overall costs of the interventions and the cost-effectiveness of each intervention by
putting outcomes and the costs together.
3.5.1. Effective coverage of interventions by risk group
The interventions are set separately for up to four risk-groups. The risk-groups highlight certain groups
of children who are disproportionally affected by barriers. Historically, the risk-groups have often been
defined as disadvantaged regions or districts, mainly because data tends to be disaggregated
geographically. But, it would also be possible to define risk-groups based on other distinguishing
markers.
The effective coverage of the intervention – the proportion of children who benefit from the intervention relative to the overall need – is an important determinant of the intervention’s impact. The effective coverage of an intervention is determined by the scale of the intervention, the size of the population – in particular, the size of the sub-group the intervention is intended to reach (e.g. girls, or children with a particular disability) – and by how well the intervention is targeted at children with the barriers it is intended to reduce.
To begin with, the gross scale of the interventions is simply how many units of the intervention are
implemented, multiplied by the number of children each unit reaches. For example, 40,000 school
meals daily reach 40,000 children, while 100 schools, each reaching 200 pupils, cover 20,000 children.
But, there may be some wastage: gross scale may not equal the effective scale. This is particularly the case with interventions that affect only a sub-group of children facing particular barriers. For
example, cash transfers are linked to the poverty barrier; transfers help only children for whom income
prevents their attending school. We define effective scale as the number of units of an intervention
that are actually reaching children who are helped by it. Effective scale depends on how well targeted
interventions are. Say we have a program of 10,000 cash transfers and a population with a 30% poverty
rate. If the transfers are not targeted, there is a random 30% chance that a poor child will benefit from
them, so only 3,000 transfers will actually make a difference; the effective scale is 3,000 transfers. If,
through better programming, the effective targeting is 90% instead, then the effective scale is the 9,000
transfers reaching poor children. The effective targeting rate is the percentage of units that are effectively used. It is the greater of the intentional targeting rate (90% in our example) and the barrier’s prevalence (30% in the example). The effective scale of an intervention is the product of the gross scale and the effective targeting rate:
s^{ef}_{i,j} = s_{i,j} * t^{ef}_{i,j}, where t^{ef}_{i,j} = max(T_i, prev_{b,j})

where t^{ef}_{i,j} is the effective targeting rate of intervention i in risk group j; T_i the intentional targeting rate; prev_{b,j} the prevalence of the affected barrier b in risk group j; s^{ef}_{i,j} the effective scale of the intervention; and s_{i,j} the gross scale of the intervention. Note that if an intervention affects all children or pupils (such as teachers) then t^{ef}_{i,j} equals 1 and s^{ef}_{i,j} = s_{i,j}.
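As an illustration, the targeting rule above can be sketched in a few lines of Python (the function name and layout are ours, not part of the Excel tool; the figures follow the cash-transfer example):

```python
def effective_scale(gross_scale, intentional_targeting, barrier_prevalence):
    """Effective scale = gross scale x effective targeting rate, where the
    effective targeting rate is the greater of the intentional targeting
    rate and the barrier's prevalence (the chance of random reach)."""
    t_ef = max(intentional_targeting, barrier_prevalence)
    return gross_scale * t_ef

# 10,000 cash transfers in a population with a 30% poverty rate:
print(effective_scale(10_000, 0.0, 0.30))   # untargeted: ~3,000 reach the poor
print(effective_scale(10_000, 0.90, 0.30))  # 90% targeted: ~9,000
```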
It is easy to assume highly effective targeting in scenarios, and doing so can dramatically increase the
computed impact and cost-effectiveness of interventions. In real life, targeting may not be so simple and
the administrative costs may be high. During the consultative process that is an important part of
planning in general and working with SEE in particular, stakeholders will need to consider realistic
targeting levels and, possibly, add those costs to the baseline data.
Scaling interventions up increases their impact, but with an obvious limit: the size of the school-age population. Scaling up beyond the population size is simply a waste of resources. When the interventions affect only the schooling outcomes of particular marginalized children, that upper population ceiling is further constrained by the number of marginalized children. Continuing the cash transfer example: if the overall school-age population is 150,000, with a poverty rate of 30%, the intervention can, at maximum, improve school outcomes for 45,000 children.
Thus, effective scale of an intervention needs to be modified to include the upper population size limit:
s^{ef'}_{i,j} = min(s_{i,j} * t^{ef}_{i,j}, pop_j * prev_{b,j})
where pop_j * prev_{b,j} is the size of the vulnerable (or marginalized) population with the barrier, b, that the intervention is intended to reduce. Note that, for interventions that impact all children (such as teachers), prev_{b,j} is equal to 1. When an intervention is implemented at a scale that exceeds the population size, the average impact per unit and per dollar spent is simply reduced (because some units are wasted).
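The population ceiling extends the same sketch with a min() (again an illustration, not the SEE code; the 45,000 ceiling corresponds to a 30% poverty rate in a school-age population of 150,000):

```python
def capped_effective_scale(gross_scale, intentional_targeting,
                           barrier_prevalence, pop):
    """Effective scale, capped at the size of the vulnerable population
    (pop * prevalence); prevalence is 1 for universal interventions."""
    t_ef = max(intentional_targeting, barrier_prevalence)
    return min(gross_scale * t_ef, pop * barrier_prevalence)

# Even 200,000 well-targeted transfers cannot help more children than
# there are poor children in the population:
print(capped_effective_scale(200_000, 0.90, 0.30, 150_000))  # ~45,000
```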
The effective coverage of each intervention, i, within each risk group, j, is equal to:

v^{ef}_{i,j} = s^{ef'}_{i,j} / pop_j,
where v is the coverage rate, i is the intervention, and j is the risk group. We use the overall population
as the denominator, rather than the vulnerable population, because in a later step, the coverage rates of
multiple interventions, each targeting perhaps different vulnerable populations, are combined.
If there is an existing coverage rate for the intervention – there already are preschools, or books, or
teachers – then only the additional (marginal) changes in the coverage rate, or the difference between
the initial and the projected coverage, cause future changes:
Δv^{ef}_{i,j,0→t} = s^{ef'}_{i,j,t} / pop_{j,t} − s^{ef'}_{i,j,0} / pop_{j,0}

where Δv^{ef}_{i,j,0→t} is the change in the coverage rate from time zero to time t in the scenario.
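In code, the marginal coverage change is a difference of two ratios (a sketch with illustrative numbers of our own):

```python
def coverage_change(s_ef_t, pop_t, s_ef_0, pop_0):
    """Change in effective coverage from the start year to year t:
    new coverage rate minus initial coverage rate."""
    return s_ef_t / pop_t - s_ef_0 / pop_0

# Scaling up from 10,000 to 60,000 effective units while the school-age
# population grows from 150,000 to 160,000:
print(coverage_change(60_000, 160_000, 10_000, 150_000))  # ~0.31
```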
3.5.2. Impacts of the interventions
As discussed above in review of the intervention effectiveness literature and in the overview of the
model, we look at interventions through the lens of how well they reduce gaps. For example, preschool
has been found to decrease non-entry, to reduce repetition and dropout, and to reduce failure to learn.
The effectiveness matrix, also discussed above, captures how effective interventions are at reducing
different education gaps.
To calculate the impact of an intervention on an education gap, first, we calculate the impact of the
intervention as the percent change in an education gap, and then the absolute change to the gap.
The impact, or percent change in a gap, is equal to the product of the effective coverage rate and the effectiveness of the intervention (from the intervention effectiveness matrix). For example, to take the example from Nepal again, preschool was found to reduce dropout rates by 43 percent (if applied to all children in a group). Say that effective coverage is increased by 30 percentage points. The expected impact, then, is to reduce dropout rates by 43% * 30% ≈ 13 percent. More formally, the equation is:
m_{i,o,j,t} = e_{i,o} * Δv^{ef}_{i,j,0→t}

where m is the expected relative impact of intervention i on the education gap o, and e_{i,o} is the effectiveness of intervention i at reducing education gap o.
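With the Nepal preschool numbers, the computation is a single multiplication (a sketch; the figures come from the example above):

```python
effectiveness = 0.43    # preschool cuts dropout by 43% at full coverage
delta_coverage = 0.30   # effective coverage raised by 30 percentage points
impact = effectiveness * delta_coverage
print(round(impact, 2))  # 0.13: a ~13% relative reduction in dropout
```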
As mentioned above, many interventions are linked to particular vulnerability barriers and affect only
the group of children with those barriers. The link is incorporated in the effective coverage. But there is
an additional link to the barriers; namely, the education gaps of vulnerable groups are often greater
than the average. The greater gaps of the vulnerable groups amplify the impacts of the interventions.
For example, the average repetition rate in north Ghana in 2012 was 6.7 per cent. From household surveys, we found that the repetition rates of poor children were 70% higher than those of non-poor children. For north Ghana, this meant that the repetition rate of poor children is estimated to be 7.8 per cent4, or 17 per cent higher than the average repetition rate (7.8/6.7 ≈ 1.17). This matters because – recalling our central insight about intervention impacts – the size of the intervention impact is proportional to the education gaps. If the gaps of vulnerable groups are larger than the average values, the expected intervention impact needs to be augmented by the ratio of vulnerable-child to average values. We need to update the impact equation to:
m^{v}_{i,o,j,t} = β_{b,o,j} * e_{i,o} * Δv^{ef}_{i,j,0→t}

where m^{v}_{i,o,j,t} is the impact of intervention i with consideration of the higher education gaps in vulnerable groups, and β_{b,o,j} is the ratio of education gap o in the vulnerable group with barrier b to the average value of education gap o (in the repetition example above, this number would be 1.17). When the intervention affects the whole population, or the gap of the vulnerable group is the same as the average, then β_{b,o,j} = 1.

4 The proportion of poor children in north Ghana was 65%. The estimate was obtained by algebraic substitution, solving for the repetition rate of the poor, x, in: average repetition 6.7 = 65% * x + (1 − 65%) * x/1.7, where x/1.7 is the repetition rate of the non-poor (the rate of the poor being 70% higher).
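The footnote’s algebra generalizes to a one-line solve. This sketch (the helper name is ours) reproduces the north Ghana repetition numbers:

```python
def gap_of_vulnerable(avg_gap, share_vulnerable, ratio):
    """Back out the vulnerable group's gap x from the group average,
    avg = share*x + (1-share)*(x/ratio), where ratio says how much
    larger the vulnerable gap is (1.7 = the poor repeat 70% more)."""
    return avg_gap / (share_vulnerable + (1 - share_vulnerable) / ratio)

x = gap_of_vulnerable(6.7, 0.65, 1.7)   # repetition rate of the poor
beta = x / 6.7                          # amplification factor
print(round(x, 1), round(beta, 2))      # 7.8 1.17
```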
Combined impact of multiple interventions
Thus far, we’ve considered only the effects of one intervention. In reality (and in scenarios), more than
one intervention may be implemented simultaneously. Given the complex nature of why children do or
do not attend school and learn there, multiple approaches are important. To the extent that
interventions may be addressing the same education gaps, the aggregate impact of all the interventions
together will be modified through interactions.
If there are multiple interventions, there can be complementarity. For example, there might be complementarity between satellite schools (which increase school proximity and access) and mother-tongue instruction (which improves retention and learning). In northern Ghana, only 70 per cent of children have a school nearby, yet 81 per cent of children enter school (demand for education is thus high, as many children walk to distant schools). If only mother-tongue instruction is
implemented, it benefits just those 81 per cent of children who enter school. But if mother-tongue
instruction is combined with building satellite schools, and school intake increases to nearly 100 per cent
of all children, then the potential impact of the mother-tongue instruction is increased because it can
now be scaled up to reach all children. The two interventions are complementary because they affect
different education gaps: one acts on entry, while the other acts on learning and retention. Taken
together, more children go to school, stay and learn.
This kind of complementarity is automatically computed in the SEE model because impacts of
interventions cascade through the entire education life cycle of children – thus, satellite schools cause
more children to enter grade 1; and mother-tongue instruction increases retention and learning for the
larger pupil population (this is explained more below).
On the other hand, there can also be substitution when two interventions affect the same education
gap: once a child is helped (e.g., the bottleneck is removed) due to one intervention, further
interventions do not affect that education gap for that child. In the extreme, the impact of a particular
intervention can be considerably eroded by substitution. For example, say that cash transfers in a
poverty-stricken region X practically eliminated non-entry. In this case, an additional intervention to
support school entry, e.g., providing school lunches, would not have much, if any, further impact (at
least on entry).
The substitution effect can be modeled with a simple chain function, like the one used in SEE’s sister model, the health Marginal Budgeting for Bottlenecks (MBB) model, which computes mortality reductions from multiple medical interventions with a chain function. Taking substitution into account, the aggregate impact of all interventions acting on one particular education gap, m̄_{o,t}, is:

m̄_{o,t} = 1 − (1 − m^{v}_{1,o,t}) * (1 − m^{v}_{2,o,t}) * (1 − m^{v}_{3,o,t}) * … * (1 − m^{v}_{n,o,t})
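A minimal sketch of the chain function (the helper name is ours):

```python
from functools import reduce

def combined_impact(impacts):
    """Aggregate impact of several interventions on one education gap,
    allowing for substitution: 1 - (1-m1)*(1-m2)*...*(1-mn)."""
    return 1.0 - reduce(lambda acc, m: acc * (1.0 - m), impacts, 1.0)

# Two interventions that each close 25% of a gap combine to 43.75%,
# not 50%, because the second acts only on the remaining gap:
print(combined_impact([0.25, 0.25]))  # 0.4375
```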
Absolute change to school entry, repetition, dropout and learning
The impact of the interventions, as described thus far, is the relative change to the original education gap, but not yet the absolute change. Say dropout rates in the starting year were 5 percent annually, and
the combined impact of all interventions is to reduce the dropout rate by 25 percent. Then, the
absolute change in dropout rates is 1.25 percentage points (5 percent * 25 percent). The new dropout
rate is 5 – 1.25 or, 3.75 percent. The equation for the computation of absolute changes to school
utilization (proportion of children who enter school if one is available), late entry rates, repetition,
dropout, and learning is the product of impact and the education gap:
∆x_{o,0→t} = x_{o,0} * m̄_{o,t}

where ∆x_{o,0→t} is the absolute change in the education gap from time 0 to t, and x_{o,0} is the education gap o at time 0. This equation is used to compute changes to school utilization (the proportion of children who enter school if one is available), late entry, repetition, dropout, and learning rates.
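The dropout example, written out as a sketch:

```python
dropout_0 = 0.05   # 5% annual dropout in the starting year
m_bar = 0.25       # combined relative impact of all interventions
change = dropout_0 * m_bar        # absolute change in the gap
new_dropout = dropout_0 - change
print(round(change, 4), round(new_dropout, 4))  # 0.0125 0.0375
```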
Absolute change to availability of schools, teachers, and books
A slightly different equation is used to compute absolute changes to the supply variables – schools,
teachers, and books/materials. In the model, school supply is expressed as the percentage of children
with a school nearby, or as the number of classrooms relative to the size of the school age population.
In the case of school availability, the gap is the percentage of children with no school in the vicinity, but it makes less intuitive sense for the impact of building schools to be proportional to this gap. There is a 0/1
relationship to having a school nearby – either it is there, or it is not. Each new school fully adds its
capacity to the total number of children who have a school nearby, regardless of the availability gap.
Some caveats to this general point exist: At some point, the density of schools is sufficiently high that
there is a crowding effect, but we can imagine that good planning on school placement can delay this
effect. Poor placement will cause overlaps between the catchment area for existing schools and the
new school – in this case only a portion of the new school capacity would be added to children with a
school nearby. The model does not take account of this effect directly (although users can indirectly
model it in the assumptions for costs, which could include wastage through overlap).
Setting aside those caveats for the moment, the change to the availability for schools is calculated as:
∆a_{j,t} = (Sch_j * Cap) / Spop_j

where ∆a_{j,t} is the increased percentage of children with a school (or classroom) available; Sch_j is the number of new schools built for risk group j; Cap is the pupil capacity of each school; and Spop_j is the school-age population.
The same equation is used for teachers and for books, by substituting new teachers/books for schools
and the number of pupils reached per teacher/book for capacity.
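For illustration, with hypothetical numbers of our own:

```python
new_schools, capacity, school_age_pop = 100, 200, 150_000  # hypothetical
delta_availability = new_schools * capacity / school_age_pop
print(round(delta_availability, 3))  # 0.133: 13.3 points more coverage
```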
3.5.3. Change to education outcomes based on interventions, using the school life cycle approach
The education gaps affected by the interventions in the SEE model are changes to basic resources
(schools, teachers, books), entry, progression, and learning rates. These are proximate determinants,
which lead to higher-level education outcomes such as enrolment rates, survival and completion rates, and the percentage of children who reach desired learning benchmarks or pass an exam. It is convenient to compute changes to these higher-level outcomes based on changes to the proximate determinants with an education life-cycle simulation (also called cohort projection), like the ones used in many national education budget and planning models.
The life-cycle simulation is done separately for each risk group, and so we can easily track the equity
impacts of different interventions by comparing outcomes for the different risk groups to each other.
We can think of the education life cycle as a path of hierarchical steps to schooling and learning.
Learning of course starts at birth, but the SEE project picks up the education life cycle at preschool.
Starting there, the next step in the education life cycle is entry in first grade; and then progress through
the grades of primary school over time is computed as determined by repetition, dropout and
promotion rates. The school life cycle ends with dropout or reaching the final level, which in the 2012
version of SEE is at the maximum grade 9 (but can be set to a lower grade).
A highly simplified school life cycle is shown in Figure 6. It presents a group of 100 children. All of them,
100 percent, enter school in 2012. At the end of the year, 80 percent are promoted to grade 2; in the
figure, we see them in the 2013 column. Also in 2013, 10 percent of the original 100 are repeating, we
see them at the top of the 2013 column. An additional 10 percent have left school in 2013– they are no
longer included in the graph. Say, we keep the same promotion and repetition rate for all the grades.
The figure shows how the original 100 school entrants progress through each grade. A few children drop
out each year, and by 2017, out of the original 100 children, 32 are in Grade 6 and an additional 20 are
one grade behind, in Grade 5. The flows through school life cycle here are determined by the entry
rates, repetition and promotion rates. The interventions can change any one or a combination of these
determinants – and thus change the education outcomes.
The interventions can be set in scenarios, as discussed in the previous sections, and the model computes the projected changes to the intake, repetition, and promotion rates through the impacts. A change in these rates causes the flow of pupils to change. For example, if the promotion rate increased to 85 percent (through a set of interventions), then by 2017 there would be 44 children in grade 6 instead of 32.
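The cohort arithmetic behind Figure 6 can be reproduced with a short simulation (a minimal re-implementation for illustration, not the SEE spreadsheet itself):

```python
def project_cohort(entrants, promotion, repetition, grades, years):
    """Track one cohort of school entrants through the grades, holding
    promotion and repetition rates constant; dropout is the residual."""
    pupils = [[0.0] * (grades + 1) for _ in range(years)]  # [year][grade]
    pupils[0][1] = entrants
    for t in range(1, years):
        pupils[t][1] = pupils[t - 1][1] * repetition  # repeaters only
        for g in range(2, grades + 1):
            pupils[t][g] = (pupils[t - 1][g - 1] * promotion
                            + pupils[t - 1][g] * repetition)
    return pupils

flows = project_cohort(100, 0.80, 0.10, grades=6, years=6)
print(int(flows[5][6]), int(flows[5][5]))  # 2017: 32 in grade 6, 20 in grade 5
print(int(project_cohort(100, 0.85, 0.10, 6, 6)[5][6]))  # 44 if promotion is 85%
```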
Figure 6. Simplified school life cycle: 100 children who enter school in 2012 and have repetition rates of 10 per cent and promotion rates of 80 per cent throughout their progress through school. [Figure: grid of pupil counts by grade (1–6) and year (2012–2017); intake 100%, repetition 10%, promotion 80%; by 2017, 32 pupils are in Grade 6 and 20 in Grade 5.]
In the model, the equations for availability, entry, completion and learning are as follows.
School availability, utilization and entry.
The first equation in the school life cycle concerns the new entrants into grade 1. Underlying school entry are factors relating to school supply – how many children have a school available within reach – and school demand – expressed as the utilization of the schools (the percentage of children with a school nearby who enter school). By definition, the entry rate is the product of the two – the percentage of children with a school available times the utilization rate of nearby schools:
e_{j,t} = min(1, a_{j,t} * u_{j,t})

Interventions can change a_{j,t} (the availability) and u_{j,t} (the utilization). The availability of schools is equal to the initial availability plus the construction of new schools, or:
a_{j,t} = a_{j,0} + ∆a_{j,t} = a_{j,0} + (Sch_j * Cap) / Spop_j
The utilization of available schools is equal to the initial utilization plus the decline in non-utilization (the utilization gap, or 1 − u_{j,0}), as explained above. In an equation, the utilization of schools is:

u_{j,t} = u_{j,0} + (1 − u_{j,0}) * m̄_{j,t} (if u_{j,0} < 1)
School demand can be so high that children walk long distances to go to school, or crowd the available classrooms, so that the percentage of children going to school actually exceeds the theoretical school capacity. In these cases, we divide the population into two groups. The first group, equal to a_{j,t}, has a school available, and of these, we say that the utilization rate is 100 percent. The second group does not have a school available, and is equal to 1 − a_{j,t}. The initial utilization rate of this group is equal to (u_{j,0} − a_{j,0}) / (1 − a_{j,0}).
Let’s take an example from north Ghana: 81 percent of girls enter school, although only 70 percent have a school nearby. The utilization rate is 81/70, or 116 percent. We divide the girls into two groups: the first, 70 percent, all go to school, accounting for 70 percentage points of school entry. The second group (30 percent of the girls) accounts for 81 − 70 = 11 percentage points of school entry. The school utilization rate of children with no school nearby is thus 11/30, or 37 percent. When there are interventions that increase school utilization, they affect only the group where utilization is less than 100 percent (since demand is already saturated in the other group). In an equation, this is:
u_{j,t} = a_{j,0} + (1 − (u_{j,0} − a_{j,0}) / (1 − a_{j,0})) * m̄_{j,t}
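The north Ghana split, sketched in code (figures from the example above):

```python
availability, entry = 0.70, 0.81  # share with a school nearby; entry rate
# Children with a school nearby are assumed to use it fully; the residual
# utilization applies to the group without a school nearby:
residual_utilization = (entry - availability) / (1 - availability)
print(round(residual_utilization, 2))  # 0.37
```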
The number of children who enter school in risk group j in year t – the new first graders – is equal to:

E_{j,t} = e_{j,t} * Epop_j

where Epop_j is the entry-age population.
Late school entry can be accommodated if the user has entered pupils by age and grade. These can be
used to compute the start year distribution of school entrants by age (this is done automatically in the
model). Taking this distribution, the entrants in each age group are equal to:
E_{a,j,t} = τ_{a,j,t} * E_{j,t}

where E_{a,j,t} is the number of entrants in age group a, and τ_{a,j,t} is the proportion of entrants who are in age group a. When an intervention reduces over-age entry, we reduce τ_{a,j,t} for all ages above the official school entry age using the usual impact equation: ∆τ_{a,0→t} = τ_{a,0} * m̄_{a,t}. The τ for the official entry age is then the residual: 1 minus the sum of the reduced entry shares at the older ages.
School progression and completion.
The remaining flows through the life cycle model can be represented in just two equations. These tell us
the number of pupils in each grade in every year. The first equation is for pupils in grade 1. These are
equal to the entrants plus the repeaters from grade 1 last year. The second equation is for all the other
grades, where pupils are equal to those who were promoted from the prior grade last year plus the
repeaters. Formally:
Grade 1: P_{j,g=1,t} = E_{j,t} + P_{j,g=1,t−1} * r_{j,g=1,t−1}

Grade 2 and higher: P_{j,g>1,t} = P_{j,g−1,t−1} * p_{j,g−1,t−1} + P_{j,g,t−1} * r_{j,g,t−1}

where P_{j,g,t} are the pupils in grade g in year t; r_{j,g,t} is the repetition rate; and p_{j,g,t} is the promotion rate (the repeaters in grade g come from the same grade in the previous year).
The SEE model computes the effects of interventions on repetition and dropout, but not on promotion. However, since promotion, repetition, and dropout together always sum to one (any child in school must do one of these three by the end of the year), promotion can be written as the residual of dropout and repetition; we can thus compute changes to promotion indirectly via changes in repetition and dropout rates.
We can also do these computations separately for pupils of each age in each grade – taking each age-specific group of entrants as the start of a new sub-cohort line. The subsequent age-specific pupil projections can be used to compute out-of-school indicators (e.g. the percentage of school-age children not in school). There are additional calculations in the SEE model that account for the age of the pupils, but they will not be discussed here, except to say that they are basically an expanded school life-cycle model where each grade is divided by age of pupils and where adjustments are made so that there are never more pupils of a particular age than there are children in that age group.
Life cycle model for learning outcomes
The school life-cycle model shown in Figure 6 is a standard, commonly used model, but we are not aware of a similar model for learning. Therefore, a simple but
efficient new learning model is proposed. The proposed learning outcome measure is: the percentage of
children who meet a particular standardized learning benchmark. The benchmarks themselves can be
tailored to any particular context and could include: passing an end-of-primary school examination,
reaching at least reading level 4 in the SACMEQ (Southern and Eastern Africa Consortium for Monitoring
Educational Quality) assessment or able to read 50 words per minute, for example. It should be a value
that is measured in the country concerned, not one taken from another country. Interventions in
learning should increase the proportion of children who reach those benchmarks.
Many learning interventions have been studied, and many are effective at increasing learning outcomes.
But, it is notoriously difficult to transpose findings on effective learning interventions to a context
different from the one studied. Let’s take some examples.
An increase in the availability of textbooks has been found to increase learning outcomes. For example,
the country status report of Chad (World Bank, 2007) found that providing pupils with one additional
textbook would increase the pass rates of the primary examination from 64 to 68 percent. Say we
implement a More Books program to raise the textbooks available to a target level where all pupils have
100 percent of the required books. Assuming More Books is well executed, how much will the
percentage of 6th graders passing the exam increase? In the first year, it would be 4 percent, as
measured in the CSR. In the second year (assuming a continued higher supply of books) it will be higher,
because we now have 6th graders who have benefited from two years of additional books. And so forth,
until by the year six of the program, we finally reach the terminal effect of More Books on a cohort that
has benefited from the better supply throughout their entire school career. How do we account for this
accumulation over time in a model?
Another example: local remedial teachers provide after-school help to young pupils falling behind in reading. In India, these so-called balakshis improved the reading scores of participants by 0.54 of a standard deviation (Banerjee et al., 2005). The program is only for first graders. Hopefully, the impacts of better reading by the end of first grade will improve learning in later grades, and, by the end of primary school, a higher percentage will pass an exam or assessment. In this case, how do we account for a delayed impact of the balakshis on end-of-primary learning outcomes?
Because learning is a very complex process, it will be impossible to fill in the details of exactly what happens with these and other interventions, and we need to look for a simpler approach. Both of the examples
suggest that a life-cycle approach, one that accounts for the cumulative impacts of learning over time, is
useful.
The idea behind the proposed approach is the following. First, learning is a cumulative process; second,
in each grade, there is a continuum of the level of mastery of the school material among the pupils;
third, in each grade, the distribution of mastery can change depending on what happens in the
classroom. In a classroom with an excellent teacher, more pupils will catch up to the expected levels; in
a classroom with poor teaching, more pupils will fall behind. By the end of primary school, the
distribution of learning among all pupils is the accumulation of 4-7 years of catching up and falling
behind. In almost all cases, we don’t know what the learning levels were in each grade, much less in
each classroom, so a simplifying assumption has to be made. The assumption made in the SEE learning
model is that the proportion of children falling off the learning track changes linearly from grade 1 to the
end of primary. This is not necessarily correct; it is simply the easiest and most conservative assumption
available in the absence of more detailed information.
As an example, let us take the 2011 sixth grade National Education Assessment (NEA) in Ghana. Only 44
percent of the sixth-grade students reached minimum reading competency. If we assume that 100
percent of children started school ready to learn (optimistic, and could be set lower): What proportion
of pupils fell off the learning-track in Ghana in each grade, to ultimately end up with 44 percent having
minimum reading competency? There are 6 grades. If we assume a linear decline, then in each grade the proportion of pupils off-track increased by (100 − 44)/6 = 9.3 percentage points. The estimated proportion of
pupils with on track learning in each grade in Ghana is shown in Figure 7.
Figure 7. Estimation of the proportion of Ghana pupils with on track learning, based on the 44 percent
who had reached minimum competency in Grade 6 according to the National Education Assessment,
2011.
More formally, the estimated proportion of pupils falling off-track in each grade is equal to:

ol_{j,g,0} = (r_{j,0} − l_{j,ḡ,0}) / ḡ

where ol_{j,g,0} is the average percentage of pupils in risk group j who fall off the learning track in each grade; r_{j,0} is the percentage of children who are school-ready when they enter grade 1 (the default is 1); and l_{j,ḡ,0} is the proportion of pupils in the terminal grade, ḡ, who reached the set learning standard in the starting year.
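The Ghana estimate can be sketched as follows (assuming, as in the text, that 100 per cent of children start school ready to learn):

```python
school_ready, on_track_final, last_grade = 1.00, 0.44, 6  # Ghana NEA 2011
off_track_per_grade = (school_ready - on_track_final) / last_grade
on_track = [school_ready - off_track_per_grade * g
            for g in range(1, last_grade + 1)]
print(round(off_track_per_grade * 100, 1))  # 9.3 points lost per grade
print(round(on_track[-1], 2))               # 0.44 still on track by grade 6
```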
Interventions, such as More Books or the balakshi program, reduce rates of failing to learn, according to
the general pattern for the impact of interventions:
∆ol_{j,g,0→t} = ol_{j,g,0} * m̄_{o=ol,t}
Some interventions, such as the balakshi program, may affect learning only of one or two grades; while
other interventions affect learning in all grades – that information is provided as part of the data entry
about interventions.
Taking the changes in the falling-off-track rates, new anticipated values for the final learning outcome can be computed as a chain function:

l_{j,ḡ,t} = r_{j,0} * (1 − ol_{j,1,t}) * (1 − ol_{j,2,t}) * … * (1 − ol_{j,ḡ,t}), where ol_{j,g,t} = ol_{j,g,0} − ∆ol_{j,g,0→t}
Advantages of using the life-cycle approach.
The outputs of the life-cycle computations can be used to compute
a variety of higher-level indicators. For example, the completion
rate can be computed as the number of pupils in the last grade
divided by the population at the official end-of-school age; the
survival rate to the last grade is computed from dropout and
repetition rates; and the expected completion rate is the survival
rate times the entry rate. Enrolment rates are computed as pupils
divided by the school-age population: the gross enrolment rate
counts all pupils, while the net enrolment rate counts only pupils
of school age. The number of out-of-school children is the
difference between the school-age population and the number of
pupils of school age (net enrolment and out-of-school children can
only be calculated if start-year data on pupils by age and grade
have been entered in the model). Absolute values, such as the
number of pupils, number of entrants and so forth, can also be
derived from the life-cycle computations.
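The box's enrolment definitions can be sketched as follows (a simplified, hypothetical single-year view; names are illustrative):

```python
def education_indicators(pupils_by_age, school_ages, population_by_age):
    """Gross/net enrolment ratios and out-of-school count from pupils
    tabulated by age (simplified sketch of the definitions in the box)."""
    school_age_pop = sum(population_by_age[a] for a in school_ages)
    all_pupils = sum(pupils_by_age.values())                        # GER numerator
    school_age_pupils = sum(pupils_by_age.get(a, 0) for a in school_ages)
    return {
        "GER": all_pupils / school_age_pop,
        "NER": school_age_pupils / school_age_pop,
        "out_of_school": school_age_pop - school_age_pupils,
    }

# Toy data: over-age pupils (age 13) raise GER above NER.
ind = education_indicators({6: 90, 7: 80, 13: 10}, [6, 7], {6: 100, 7: 100, 13: 50})
```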
MORES outcomes. SEE
explicitly includes all of the 10
determinants of the MORES
framework, including
indicators for: enabling
environment, supply, demand
and quality. Within the model,
it is possible to improve the
enabling environment and
supply directly with
interventions. Demand and
quality can be improved
indirectly with interventions
that increase children’s
utilization of schools, and how
much they learn in school.
The life-cycle approach may seem cumbersome – why not simply compute instantaneous changes to
education gaps based on the impacts of the interventions? – but it is useful and necessary for several
reasons. First, some outcome measures, such as completion and enrolment rates, are accumulations of
change over time; an instantaneous approach yields only approximations, not exact values. Second, the
life-cycle approach can accommodate different speeds of intervention implementation and show their
effects. Third, with the life-cycle approach all outcomes are consistent with each other. Fourth, it can
accommodate indirect impacts.
For example, a change in school availability (proportion of children with a school nearby) will cascade
through to an effect on the expected completion rate (the proportion of children who can be expected
to finish primary school) even if there is no change in dropout or repetition. Similarly, a reduction of
drop out rates will mean more completers, but will also indirectly lead to more children who reach the
learning benchmark.
3.5.4. Costs and cost-effectiveness of interventions
Costs of interventions
The costs of interventions can be divided into two basic categories: investment costs, which are incurred
once but have a longer lifetime – for example, building a school or training a teacher – and
recurrent costs, which must be paid annually – such as teacher salaries or cash transfers. Any given
intervention might include a combination of these different kinds of costs. For example, a programme to
provide ten kindergartens could include construction costs for the classrooms and furniture, training
costs for the teachers, plus recurrent costs for salaries and learning materials. Some interventions
might have only one kind of cost; for example, scholarships may have only recurrent costs. The SEE
model provides the option to include any of these three categories of costs: construction, training and
recurrent.
Recurrent costs are simply the annual cost per unit of the intervention:

    Y^r_{j,i,t} = I_{j,t} ∗ y^r

where Y^r_{j,i,t} is the recurrent cost of intervention i, provided to risk group j, in year t; I_{j,t} is
the number of units of the intervention (e.g. number of teachers, number of children provided with
meals); and y^r is the recurrent unit cost.
Investment (construction and training) costs. If the intervention
includes elements that last longer than one year, like construction or
the value of training, then it is useful to divide the cost calculation
into two parts: 1) how many new units of the intervention are
needed in any given year (e.g. new schools that need to be
constructed; new teachers who need to be trained) followed by 2)
the cost equation above, namely new units needed times unit costs.
The number of new units of an intervention needed is the total desired units in a particular year, I_t,
minus the number of units paid for earlier that are still active, or working. How many is that? Let n be
the lifetime (in years) of an intervention unit. In any given year, the number of still-existing units is
equal to the units from the previous year, plus the units from two years ago, and so on, up to the units
from n years ago:

    I_old = I_{t−1} + I_{t−2} + … + I_{t−n}

Example of computing new training needs.
Scenario: train 500 new preschool teachers per year for 2 years starting in 2014, and maintain the new
level of 1,000 trained preschool teachers. The training lifetime is 5 years. How many new teachers need
to be trained each year?
In 2014: 500 new trainees.
In 2015: 500 new trainees; we reach the desired level of 1,000 trained teachers.
In 2016: 0 trainees.
In 2017: 0 trainees.
In 2018: 0 trainees.
In 2019: the training of the 2014 trainees “runs out”, so we need to do 500 replacement trainings.
In 2020: same as 2019.
The new units needed and their costs are then:

    Y^{ct}_{j,i,t} = (I_{j,t} − I_{j,old}) ∗ y^{ct}

where Y^{ct}_{j,i,t} is the construction or training cost of intervention i, in year t, for group j, and
y^{ct} is the construction or training unit cost.
The total costs of an intervention in a given year are the sum of recurrent, training and construction
costs.
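The replacement logic in the boxed training example, and the resulting annual cost, can be sketched as follows (illustrative names; following the boxed example, units created exactly n years ago are treated as expired):

```python
def new_units_needed(yearly_targets, lifetime):
    """Units to create each year so that the units still active (those
    created within the last `lifetime` years, counting the creation year)
    meet each year's target."""
    created = []
    for t, target in enumerate(yearly_targets):
        still_active = sum(created[max(0, t - lifetime + 1):t])  # prior years only
        created.append(max(0, target - still_active))
    return created

def annual_cost(active_units, recurrent_unit_cost, new_units, investment_unit_cost):
    """Recurrent cost on all active units plus investment (construction or
    training) cost on newly created units."""
    return active_units * recurrent_unit_cost + new_units * investment_unit_cost

# The boxed scenario: 500 trainees/year for 2 years, then maintain 1,000
# trained teachers; training lifetime 5 years (years 2014..2020).
plan = new_units_needed([500, 1000, 1000, 1000, 1000, 1000, 1000], lifetime=5)
# → [500, 500, 0, 0, 0, 500, 500]: replacement trainings resume in 2019
```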
Attribution of each intervention’s contribution to overall change
At this point, we are almost able to compute cost-effectiveness, because we have the changes to
education outcomes, and the total costs of the scenarios. Cost-effectiveness is simply the total change
divided by the costs. But what if a scenario contains multiple interventions? In this case, we would like
to be able to attribute the changes contributed by each intervention and compute the cost-effectiveness
of each of those changes.
To make that computation, we need two things:
1. A way to attribute change to a particular intervention.
2. A way to include all the effects, direct and indirect, of interventions (e.g. building schools
contributes directly to more children with a school nearby, and indirectly to higher completion
rates via a larger number of entrants).
These two complications are best addressed separately.
Attribution.
Recall from the section on impacts that we compute the impact of a particular intervention on an
education gap as:

    m^v_{i,o,t} = v_{b,o} ∗ e^{ef}_{i,o} ∗ Δc_{i,j,0→t}

This is the potential impact; the actual impact may be somewhat reduced by substitution effects if there
are multiple interventions affecting the same education gap. The simplest way to attribute the
proportional contribution of intervention i to the overall impact is to ignore the substitution effects
and divide the potential impact of intervention i by the sum of the potential impacts of all the
interventions:

    α_{i,o} = m^v_{i,o} / Σ^n_{i=1} m^v_{i,o}

where the sum of all the relative contributions, α_{i,o}, is equal to 1, and n is the number of
interventions.
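A minimal sketch of the attribution computation (hypothetical names; substitution effects ignored, as in the text):

```python
def attribution_shares(potential_impacts):
    """alpha_i for each intervention: its share of the summed potential
    impacts on one outcome. Returns zeros if there is no impact at all."""
    total = sum(potential_impacts)
    if total == 0:
        return [0.0] * len(potential_impacts)
    return [m / total for m in potential_impacts]

# Three interventions with illustrative potential impacts on one gap.
alphas = attribution_shares([0.04, 0.01, 0.05])
# shares sum to 1; the largest potential impact gets the largest share
```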
Direct and indirect effects of interventions.
As mentioned, there are direct and indirect effects of interventions. In the example above, a rise in
completion rates can come from a reduction of dropout, but also from an increase in entry, which, in
turn, can be an outcome of more school availability or higher utilization of schools. To compute the
effects of an intervention on completion, we need to include its impacts on school availability, and
utilization, and dropout rates.
SEE computes cost-effectiveness for three important education outcomes: how many children enter
schools, how many finish school, and how many have reached targeted learning standards. Let’s take
these three in turn.
The number of children who enter primary school, as explained above, is the outcome of how many
children have a school nearby and the likelihood that children utilize the school if one is available.
School availability and school utilization are affected by different interventions: the former is a
supply-side constraint, while the latter is more influenced by demand-side elements. So the calculation
of how many new children will enter school has two parts:
1. the increase in entrants resulting from a rise in school availability, and
2. the increase in entrants resulting from a rise in the utilization rate.
The additional entrants expected from a rise in school availability caused by intervention i are equal to
the additional children with a school nearby times the utilization rate of schools. If the increased
school availability results from multiple interventions, then, to get the specific contribution of
intervention i, the number of additional entrants is multiplied by the attribution of intervention i to
school availability, α_{i,o}.
Second, the new entrants due to a rise in the utilization rate caused by intervention i are the product
of the change in the utilization rate of schools, the number of children with a school nearby and, if
applicable, the attribution of the utilization change to intervention i.
Formally, the equation with the two components is:

    ΔE_{i,j,t=0→t} = (α_{i,j,o=E} ∗ Δa_{j,t=0→t} ∗ u_{j,t} ∗ EPop_{j,t}) + (α_{i,j,o=U} ∗ a_0 ∗ Δu_{j,t=0→t} ∗ EPop_{j,t})

where ΔE_{i,j,t=0→t} is the intervention-caused increase in the number of entrants from time 0 to time t
(as distinct from population-caused increases); Δa_{j,t=0→t} is the change in the percentage of children
with a school within reach (availability); u_{j,t} is the utilization rate; EPop_{j,t} is the entry-age
population in time t; a_0 is the benchmark availability of schools; and Δu_{j,t=0→t} is the change in the
utilization rate, all for risk group j.
The total number of new entrants is equal to the sum of all the partial contributions from all the
interventions.
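A sketch of the two-part entrants equation, with illustrative numbers (all names and values hypothetical):

```python
def added_entrants(alpha_avail, d_availability, utilization,
                   alpha_util, availability0, d_utilization, entry_pop):
    """Intervention-caused new entrants, split into an availability-driven
    part and a utilization-driven part (sketch of the equation above)."""
    from_availability = alpha_avail * d_availability * utilization * entry_pop
    from_utilization = alpha_util * availability0 * d_utilization * entry_pop
    return from_availability + from_utilization

# Availability rises 10 points (utilization 80%), utilization rises 5 points
# (benchmark availability 70%), entry-age population of 1,000 children.
delta_e = added_entrants(1.0, 0.10, 0.80, 1.0, 0.70, 0.05, 1000)
# → 80 + 35 = 115 additional entrants
```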
One can similarly calculate the number of additional completers resulting from intervention i in two parts:
1. the increase in completers resulting from increased entrants, and
2. the increase in completers resulting from improved survival rates.
The formal equation looks similar to that for entrants:

    ΔC_{i,j,t=0→t} = (ΔE_{i,j,t=0→t} ∗ s_{j,t}) + (α_{i,j,o=S} ∗ e_{j,0} ∗ Δs_{j,t=0→t} ∗ EPop_{j,t})

where ΔC_{i,j,t=0→t} is the intervention-caused increase in the number of completers from time 0 to time
t; s_{j,t} is the survival rate in risk group j; e_{j,0} is the benchmark entry rate; and Δs_{j,t=0→t} is
the change in the survival rate.
Finally, the number of additional pupils who have reached the targeted learning outcomes (learners) is
equal to:
1. the increase in learners due to additional completers, and
2. the increase in learners due to better learning rates.
Formally:

    ΔL_{i,j,t=0→t} = (ΔC_{i,j,t=0→t} ∗ l_{j,t}) + (α_{i,j,o=L} ∗ c_{j,0} ∗ Δl_{j,t=0→t} ∗ EPop_{j,t})

where ΔL_{i,j,t=0→t} is the intervention-caused increase in the number of children who have reached
target learning outcomes from time 0 to time t; l_{j,t} is the percentage of pupils reaching the learning
targets (the pass rate); c_{j,0} is the benchmark percentage of children expected to complete primary;
and Δl_{j,t=0→t} is the change in the pass rate.
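The completers and learners equations can be sketched the same way, here with an illustrative intervention-caused gain of 115 entrants (all numbers and names hypothetical):

```python
def added_completers(d_entrants, survival, alpha_surv, entry_rate0, d_survival, entry_pop):
    """New completers: the extra entrants who survive to the last grade, plus
    the survival-rate gain applied to the benchmark entrant flow (sketch)."""
    return d_entrants * survival + alpha_surv * entry_rate0 * d_survival * entry_pop

def added_learners(d_completers, pass_rate, alpha_learn, completion0, d_pass, entry_pop):
    """New learners: the pass rate applied to the extra completers, plus the
    pass-rate gain applied to the benchmark completer flow (sketch)."""
    return d_completers * pass_rate + alpha_learn * completion0 * d_pass * entry_pop

d_e = 115.0  # illustrative intervention-caused entrants
d_c = added_completers(d_e, survival=0.66, alpha_surv=1.0,
                       entry_rate0=0.80, d_survival=0.05, entry_pop=1000)
d_l = added_learners(d_c, pass_rate=0.30, alpha_learn=1.0,
                     completion0=0.53, d_pass=0.04, entry_pop=1000)
```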
Computation of the cost-effectiveness of each intervention
The cost-effectiveness of each intervention – for different outcomes – can be found by putting together
the costs of each intervention and the change attributed to each intervention.
Because there are multiple outcomes, we compute three values of cost-effectiveness: for increasing the
number of entrants; for increasing the number of children who complete primary (or lower secondary);
and for increasing the number of children who attain the learning standards. Specifically, the cost-
effectiveness of a particular intervention i, for one of the outcomes O, is equal to the absolute change
in outcome O attributed to intervention i divided by the costs of intervention i – that is, additional
children achieving the outcome per dollar spent. The outcome O can be entry (E), completion (C) or
learning (L). Formally:

    CE_{i,j} = ΔO_{i,j,t=0→t} / Y_{i,j}
This equation ultimately allows us to compare:
1. the cost-effectiveness of different interventions, and
2. the cost-effectiveness of targeting different risk groups, as cost-effectiveness is calculated
separately for each risk group.
In addition, it is possible to test different scales of an intervention and measure how
cost-effectiveness changes as the intervention is scaled up.
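A sketch of the two equivalent views of cost-effectiveness (illustrative helper names; the per-$1,000 view is the one used in the Ghana tables later in this report):

```python
def children_per_1000_dollars(total_cost, delta_outcome):
    """Cost-effectiveness as additional children per $1,000 spent
    (higher is better)."""
    return delta_outcome / (total_cost / 1000.0)

def cost_per_additional_child(total_cost, delta_outcome):
    """The inverse view: dollars per additional child achieving the outcome
    (lower is better)."""
    return total_cost / delta_outcome

# Illustrative magnitudes: a US$209 million programme yielding 223,328
# additional learners works out to roughly one learner per $1,000.
ce = children_per_1000_dollars(209e6, 223328)
```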
Cost-effectiveness is one value to consider with interventions, but not the only one. Another
consideration is the limit of an intervention's impact. Some interventions are extremely cost-effective
but have only limited scope for causing change; deworming is a prime example – it is very inexpensive
and does increase attendance, but it leaves many education gaps unsolved. Interventions must also fit
within the other goals, explicit and implicit, being pursued in the country. Cost-effective interventions
that, for example, increase average outcomes at the cost of equity may not be appropriate for the overall
picture. The next chapter presents a case study of Ghana, which includes a cost-effectiveness analysis of
pro-equity strategies within a more comprehensive policy framework.
Chapter 4. Results from the Ghana pilot
case study
The Simulations for Equity in Education model was piloted in Ghana while it was under development.
The pilot served both as a test and to inform development of the tool as it progressed. Ghana was
chosen because the UNICEF country office had undertaken a bottleneck analysis in the first half of 2011
(UNICEF 2011). Therefore, it had both the data and the necessary technical capacity to facilitate the
pilot.
Once the model was completed, scenarios were developed for the country. The scenarios were guided
by the main questions of the project:
1. What programs and interventions are most efficient and cost-effective to ensure more education
success – school access and learning - for marginalized children?
2. Can targeting marginalized children be more cost-effective than business as usual?
In brief, the findings from the Ghana pilot provide the following insights on these questions:
1. Targeting special services to the marginalized, rather than the general student population, can greatly
enhance the cost-effectiveness of the interventions and improve equity at the same time. In an
exploration of kindergarten provision, targeting the interventions to reach the poorest children
yields a fourfold improvement in outcomes.
2. A pro-equity focus can lead to greater improvements in learning at lower cost compared with business
as usual. This result was found in a test of two scenarios on teachers. Ghana has a significant
shortage of trained teachers, and a logical policy is to replace the untrained with pre-service-trained
teachers. The business-as-usual approach is teacher upgrades (pre-service training)
distributed evenly throughout the country. The outcomes of this approach were compared to a
pro-equity scenario with remedial teachers for disadvantaged children in the northern regions.
Focusing efforts on remedial teaching for the lowest performers can result in much greater gains.
The main mechanism for this result is that the low-cost remedial teachers raise the
learning levels of the lowest achievers in the northern regions by more points than the high-cost
teachers with pre-service training raise the learning levels of the already better-performing students
in the rest of Ghana.
3. The findings hold even within wide margins of uncertainty. For some of the cost and
effectiveness indicators, the range of possible values is large, but the findings above hold
nonetheless. That said, a firmer grasp of intervention effectiveness and costs would enable
better refinement of the scenarios.
Applying the model further in Ghana, as well as in different contexts and countries, would test the
robustness of these initial findings. Further applications would also lead to other insights about
different pro-equity interventions, and what kinds of interventions are most effective in particular
contexts.
4.1 The Ghana context
The SEE pilot in Ghana was preceded by a bottleneck analysis, which identified three risk groups: girls in
the poor and sparsely populated regions in the north, boys in the same regions and the rest of Ghana
(UNICEF 2011). The analysis identified the children attaining learning standards as the target outcome
for primary education.
The analysis further identified six potential bottlenecks, or exclusion points, based on the Tanahashi model
also used in the Health MBB model. As shown in Figure 8, on the supply side the bottleneck measures are
the availability of books, availability of trained teachers and nearby schools; on the demand side the
bottleneck measures are entry into school, completion of primary school and reaching the required
minimum learning by Grade 6. The analysis also identified causes for exclusion, or barriers, and about 20
interventions to address them.
Figure 8. Six education bottlenecks in three Ghana risk groups, 2011
The bottleneck analysis in Ghana found that there were gaps in the following areas:
• Shortage of (trained) teachers and learning materials (e.g., textbooks).
• Low learning outcomes in all regions.
• Low school accessibility in the northern regions.
• Low primary completion in the northern regions.
In the northern regions, pupils have only 35 per cent of the trained teachers they need; about 30 per
cent of children have no school nearby; 20 per cent will not enter school; only two thirds of the entrants
reach the end of primary; and of those who finish primary, less than one third pass the final exit
examination.
Moreover, two major barriers disproportionally affect children in the northern regions:
• Poverty – 65 per cent of the children there are in the lowest income quintile, compared to 11 per
cent in the rest of Ghana.
• Language – 74 per cent speak a minority language, compared to 7 per cent in the rest of Ghana.
Clearly, an equity approach would focus on improving education outcomes in the northern regions.
Within the context of the pilot, many scenarios were tested. A training workshop in Accra in May 2012,
attended by about 25 education experts (mainly from UNICEF offices), led to the identification of
numerous alternatives. Drawing on this experience, two scenario sets were selected to illuminate the two
key policy questions on the effectiveness of the equity approach. The scenario sets focused on:
• Kindergartens for children in poverty – these have been shown to be particularly effective for poor
children. As mentioned in Chapter 2, among poor children in Nepal, kindergarten experience
reduced non-entry by 80 per cent. Kindergartens also reduced dropout by almost 50 per cent,
lowered repetition by 63 per cent and reduced failure to learn by up to 14 per cent. Other studies
found similar, if not quite as extreme, results, as discussed above.
• Better teachers – as in many countries, many teachers in Ghana are ineffective. Trained teachers
were found to be, on average, more effective than untrained teachers (Leherr 2009), although not in
the northern regions. In other contexts, providing untrained teachers with in-service training was
found to improve students’ learning outcomes (Angrist and Lavy 2001). A particularly effective
teacher intervention is providing remedial teachers for students who are falling behind. In
India, for example, it was found that when young women from the village helped failing students,
overall student achievement greatly increased (Banerjee et al. 2005 and 2010).
4.2 Targeting kindergartens to reach children in poverty
Targeting special services, such as kindergartens, to reach marginalized children, rather than the
general student population, can greatly enhance the cost-effectiveness of the interventions and
improve equity at the same time.
This is the insight from the scenarios on kindergartens in Ghana. Kindergartens have expanded
considerably in Ghana during the past decade, but some gaps remain. In this example, SEE is used to
quantify the effects of expanding kindergartens further, with and without pro-equity targeting. In the
scenarios, 500 kindergartens are built from 2014–2015, each with a capacity of 40 students. The
intervention will reach 20,000 children per year as of 2015 but will have an impact only on the education
outcomes of poor children.
Three targeting options are proposed:
1. No targeting – distribute the kindergartens evenly throughout Ghana to places without
kindergartens.
2. Regional distribution – distribute the kindergartens only in northern Ghana to places without
kindergartens.
3. Full targeting – distribute the kindergartens only in northern Ghana in communities with nearly 100
percent poor children (effective poverty targeting = 90 percent).
Table 7 shows a selection of the results, measured in terms of additional expected primary completers,
additional expected learners (children on track to meet minimum Math standards in the National
Education Assessment [NEA]), and the number of new completers and learners per $1,000 invested.
The table shows a clear increase in the expected numbers of completers and learners, both in absolute
terms and per dollar invested, as the intervention becomes more targeted towards the marginalized. In
the non-targeted case, there are 6,202 additional expected completers in the projected period
(2012–2022), while in the most-targeted scenario there are 27,327 additional expected completers – a
more than fourfold improvement for the same investment. The correlate of this result is that, per $1,000
invested, only 0.3 additional children complete primary in the non-targeted case, compared to 1.5 in the
most-targeted scenario.
Table 7. Selection of projected outcomes of an intervention to build kindergartens for 20,000 children
in Ghana, 2012–2015, according to three distribution scenarios.

|                                    | Even distribution in Ghana | Even distribution in north | Distribution in north communities with 90+ per cent poor children |
| Additional expected completers     | 6,202 | 20,283 | 27,327 |
| Additional expected Math learners  | 5,719 | 17,339 | 23,447 |
| New completers per $1000           | 0.3   | 1.1    | 1.5    |
| New learners per $1000             | 0.3   | 1.0    | 1.3    |
Equity is also improved in the targeted scenarios because all of the additional children learning are in
the highly disadvantaged northern regions, thus reducing the gap between those regions and the rest of
the country. One might argue, however, that kindergartens are an additional marginal expense for the
schooling of poor children and therefore, no matter how worthy it might be from the equity perspective,
education for the poor remains more costly than for the average child for whom kindergartens are not as
critical to basic learning in school.
Below we compare two scenarios in which interventions for the poor lead to greater improvement, at
lower cost, than upgrades for better-off children do.
4.3 A focus on better teachers
A pro-equity focus, with special teachers for disadvantaged children, can lead to more
improvements in learning and lower costs compared to business-as-usual teacher upgrades
distributed evenly throughout the country.
It is clear that trained teachers in Ghana are in short supply as only 65 per cent of teachers are
formally trained. At the same time, a recent study found that certified teachers in Ghana were 29
per cent more likely to be effective teachers than those who had not received training (Leherr 2009),
except in the northern regions, where trained teachers had no advantage (perhaps due to the language
barrier). The study may have overstated the advantage of certified teachers because it does not
control for other factors that might improve effectiveness, e.g., that trained teachers might be more
likely to teach relatively well-off children in urban areas. Nonetheless, the suggestion is strong that
pre-service training improves effectiveness.
Another way to improve the training level of teachers is by providing in-service training. Joshua D.
Angrist and Victor Lavy (2001) found an 18 percent improvement in learning among students whose
teachers received in-service training. A third teacher-focused intervention to improve learning is to
provide remedial teachers for those children who are falling behind, usually the marginalized children.
Abhijit Banerjee et al. (2006 and 2010) found that remedial teaching can dramatically improve overall
learning at a very low cost.
In the Ghana pilot, these three teacher interventions were included in two scenarios, both providing
20,000 teachers:
1. Business as usual – replace 20,000 untrained teachers with pre-service-trained teachers, with costs
of approximately US$5,000 for training each teacher, plus higher salaries for the trained teachers.
The program starts in 2014 and takes 3 years to deploy all teachers.
2. Equity-focused – provide 8,000 untrained teachers in the northern regions and 4,000 in the rest
of Ghana with in-service training, and hire 6,000 new remedial teachers to assist marginalized
children falling behind in class – 4,000 in the north and 2,000 in selected poor villages in the rest
of Ghana with at least 70 per cent poor children. The in-service training is assumed to cost US$2,000
per teacher, and the teachers receive a higher salary upon completing the training. Remedial
teachers are young women recruited from within the villages and are assumed to receive half the
salary of untrained teachers. The time frame is the same as in the scenario above.
The results are shown in Table 8.
Table 8. Selection of results from better teacher scenarios (business as usual and equity-focused)

|                                                                      | Starting values in 2012 | Business as usual: replace 20,000 untrained teachers with pre-service-trained teachers, evenly distributed | Equity-focused: provide in-service training to 12,000 untrained teachers and hire 6,000 remedial teachers |
| Costs of programme                                                   |  –  | Mln. US$314* | Mln. US$209* |
| Per cent trained teachers in 2020 – Rest of Ghana                    | 70% | 91%  | 75%  |
| Per cent trained teachers in 2020 – Northern                         | 38% | 60%  | 82%  |
| Per cent pupils on track to pass NEA English in 2020 – Rest of Ghana | 28% | 30%  | 29%  |
| Per cent pupils on track to pass NEA English in 2020 – Northern      | 12% | 16%  | 27%  |
| Additional pupils expected to pass NEA English 2012–2020             |  –  | 205,185 | 223,328 |
| Additional expected learners per $1000                               |  –  | 0.70 | 1.0  |

* includes 5 per cent inflation rate
We find that the equity-focused option leads to more additional children completing primary school and
reaching the NEA learning standards at lower cost. As Table 8 shows, the estimated cost of the
pre-service training programme is US$314 million over 10 years, compared to US$209 million for the
special teachers in the northern regions. As an absolute measure of better outcomes, 223 thousand
additional students are expected to pass the NEA English standards by 2022 with the special-teacher
programme, compared to 205 thousand with the pre-service training programme. Together, these results
mean that the cost-effectiveness of the equity programme is nearly 60 per cent higher than that of the
business-as-usual approach.
The main mechanism for this result is that the low-cost remedial teachers are able to raise the learning
levels of the lowest achievers in poor villages by more points than the high-cost pre-service teachers raise
the learning levels of the already better-performing students in the rest of Ghana.
4.4 Sensitivity Analysis
The exact results of the scenarios above have a margin of uncertainty because some of the numbers are
estimated and because the effectiveness of the interventions is uncertain. For example, pre-service
teacher training may cost more, or less, than the estimated $5,000, and as already mentioned, the
effectiveness of trained teachers may be overstated. A sensitivity analysis allows us to set lower
and upper bounds on the uncertain numbers and test how much the scenario results would change if our
model parameters took the lower or upper values. This section shows a sensitivity analysis of the
‘A focus on better teachers’ scenario.
The sensitivity analysis made the following assumptions:
• Costs of pre-service training: low = US$3,000, middle = US$5,000, high = US$10,000.
• Costs of in-service training: low = US$1,000, middle = US$2,000, high = US$4,000.
• Effectiveness of pre-service teacher training in reducing pupils’ off-track learning: low = 14%,
middle = 29%, high = 29% (same as middle; 29% is regarded as high).
• Effectiveness of in-service teacher training in reducing pupils’ off-track learning: low = 9%,
middle = 18%, high = 29% (same as pre-service training).
• Effectiveness of remedial teachers in reducing pupils’ off-track learning: low = 15%, middle = 31%,
high = 60%.
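A toy sketch of how such low/middle/high bounds can be propagated through a cost-effectiveness calculation (the model here is deliberately simplified and hypothetical; only the pre-service effectiveness and cost bounds are taken from the list above, and the class size and teacher count are invented for illustration):

```python
def learners_per_1000(effectiveness, training_unit_cost,
                      pupils_per_teacher=40, teachers=1000):
    """Toy model: each trained teacher's pupils improve in proportion to the
    training effectiveness; the only cost is training. Purely illustrative."""
    learners = teachers * pupils_per_teacher * effectiveness
    cost = teachers * training_unit_cost
    return learners / (cost / 1000.0)

# Pessimistic case pairs low effectiveness with high cost, and vice versa.
bounds = {case: learners_per_1000(eff, cost)
          for case, (eff, cost) in {
              "low": (0.14, 10_000),
              "middle": (0.29, 5_000),
              "high": (0.29, 3_000),
          }.items()}
```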
Figure 9 shows the selected results of the scenario, with results from the base scenario (the one
discussed in the previous section) represented by blue bars, and the range between the upper and lower
values shown by an orange line. The figure suggests that, with our present state of knowledge, the
range of possible outcomes can be quite large – as shown by the long orange lines. For example,
estimated outcomes as measured in additional children reaching the learning benchmarks, could vary by
a factor of two, as could the cost-effectiveness.
That said, the sensitivity analysis shows that, even taking large margins of uncertainty into
consideration, the equity-focused teacher scenario is still likely to be the most cost-effective
solution. The ranges in the results do not overlap: even the most optimistic outcomes of the
business-as-usual scenario do not come close to the base values of the equity-focused scenario. While
the figure points to the robustness of this particular finding on teachers, it also highlights the
extent to which our knowledge of costs and effectiveness needs to be improved.
Hopefully, further work with the Simulations for Equity in Education tool will reduce the margins of
uncertainty. It is also hoped that applying SEE to different countries will provide us with robust insights on
the equity approaches in education that will be cost-effective compared to more general interventions.
Figure 9. Selected results from two teacher-focused scenarios with margins of uncertainty. [Two bar
charts comparing the equity teacher focus and business-as-usual scenarios: programme costs in millions
of US$, and additional children to pass the English exam, each with uncertainty ranges.]
Chapter 5. User’s Guide
This chapter takes the reader through the basic mechanics of setting up and running scenarios. Readers
may want to follow along on their own computers, implementing the examples and steps provided here.
The views shown below use Ghana as an example.
The chapter is set up as follows:
5.1 First step: Is SEE for you? A set of questions to determine whether this tool is useful for the
questions facing you.
5.2 Getting started. This section helps you set up the interface of SEE, like selecting the language.
5.3 Setting, running and saving scenarios. This section takes you through the steps of selecting a new
intervention and setting its scale and time of implementation; it also shows you how to save and
retrieve scenarios.
5.4 Working with the results charts. This section shows you how to customize the eight different results
charts and guides you through what kinds of results to expect to see in each chart.
5.5 Practice activities for learning how to set better scenarios. Although it is relatively easy to learn to
add interventions, it takes a little practice to learn how to use the model to probe different scenarios
and use it to optimize and identify the most cost-effective alternatives. These practice activities take
you through from the very simple to more complex scenarios.
5.6 MoRES module. SEE can be a supportive tool for UNICEF Monitoring for Results analyses, helping
to visualize the determinants of effective programmes and identify bottlenecks.
5.7 Sensitivity analysis. Sensitivity analyses are often useful to test the robustness of particular scenario
insights.
5.8 Adding new data. The model is loaded with data from various pilot studies, but for implementation
in other countries, new data will need to be added. This section provides detailed guidance on
where to find data and exactly which indicators the model needs.
5.1 First step: Is SEE for you?
Before embarking on SEE, take the time to answer the question: Is SEE for you? The answer may be ‘yes’
if you have ever faced questions similar to these:
1. How can we most effectively allocate a multimillion-dollar education grant in our country over the
next 10 years to improve national learning outcomes? Which programmes will have the most impact
on vulnerable children?
2. How can my district best allocate a grant to improve girls’ school attendance?
3. Over the next 5 to 10 years, what is the expected improvement in the pass rate for the primary
graduation examination as a result of mother-tongue instruction in a remote province? How do I
convince the policymakers or funders that mother-tongue instruction will have an expected result?
4. How much might preschool in the slums or informal settlements of a large city in my country reduce
the number of children who stay out of school? Is preschool more cost-effective at reducing
these numbers than another programme, for example, school meals for poor children? Is a
combination of both programmes useful?
5. What are the greatest bottlenecks to increasing the number of children who pass the final
primary examination or assessment?
6. How can I implement UNICEF’s MoRES framework in the education sector in my country or region?
SEE can help you find answers to questions like these and others more effectively, in particular by
helping you think through alternative options within a consistent framework. So, if you need to think
about questions like these, please read on.
SEE will allow you to virtually test different programming strategies against each other and compare the
impacts on almost 40 education outcomes, ranging from resource availability to exam outcomes. With its
focus on equity, the model includes up to four risk groups and up to six barriers to test the benefits
of targeting interventions to help the most marginalized children.
SEE is designed to be used in policy dialogue and does not substitute for decision-making based on local
knowledge of special circumstances, preferences and priorities. It should support and improve dialogue
by providing consistent computations of the effects of alternative programmes, based on agreed-upon
data and evidence. It will provide insight on the benefits of programmes you may have overlooked or
interactions you had not thought of. In some ways, the best outcome of using SEE might be that you
don’t need to use the model any more because your understanding of how programmes work has
become so much better!
5.2 Getting started
The first task before starting with scenarios is to save the model to your own computer so that you
can make changes to it without affecting other users.
After entering data (see Section 5.8), the way SEE works is basically like a computer game:
1) You artificially add interventions to your country or region – e.g. build schools, hire teachers,
improve pedagogy, provide lunches, etcetera.
2) The SEE model immediately computes the impacts of your interventions on children’s access to
school, and on their promotion, repetition and learning rates over the next 10 years. The model
calculates how many students will be in which grades in each of the four risk groups and what their
learning scores will be based on the interventions you have provided.
3) You can see your results in graphs and tables, which you can adapt to show the results you are
most interested in.
4) You may make multiple scenarios, and compare them: the objective is to get the most improvement
to your education system for the least amount of costs.
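The four steps above amount to a simple projection loop: interventions shift entry and promotion rates, and the model rolls pupil numbers forward year by year. The following minimal Python sketch illustrates the idea – all names, rates and the grade structure are assumptions for illustration, not SEE's actual equations:

```python
# Illustrative sketch of the SEE "computer game" loop: interventions shift
# entry and promotion rates, and the model projects pupils per grade over
# 10 years. All names, rates and the grade structure are hypothetical.

def project(entrants, entry_rate, promotion_rate, grades=6, years=10):
    """Return total enrolment per year, given annual entrants and flow rates."""
    pupils = [0.0] * grades
    totals = []
    for _ in range(years):
        moved = [entrants * entry_rate]                   # grade 1 intake
        for g in range(1, grades):
            moved.append(pupils[g - 1] * promotion_rate)  # promoted from g-1
        pupils = moved
        totals.append(sum(pupils))
    return totals

# 'Business as usual' vs. a scenario whose interventions raise both rates.
baseline = project(100_000, entry_rate=0.80, promotion_rate=0.85)
scenario = project(100_000, entry_rate=0.90, promotion_rate=0.90)
```

Comparing the last entries of `scenario` and `baseline` mirrors step 4: the same school-age population, two intervention packages, two enrolment trajectories.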
Before you start adding interventions and making scenarios, there are a few helpful options for
setting up your work. These basic options are available on the menu bar at the left when you open SEE:
Select language
You can select the language you want to work with – English, French,
Spanish or Portuguese – with the ‘Select language’ drop-down menu.
‘About SEE’
Provides basic information about the developers of SEE.
‘Data summary’
Clicking on this button opens columns with basic information about the
country or region – number of pupils, teachers, classrooms and books
and the prevalence of different marginalization factors within the risk
groups, and about the effectiveness of interventions at reducing four
major gaps – non-entry, repetition, drop out, and failure to learn.
‘Show/Hide MoRES view’ and ‘Show/Hide sensitivity analysis
view’
These are two additional modules for special analysis that can be shown
or hidden with the buttons shown at left. (The MoRES module is
discussed further in Chapter 7; the sensitivity analysis module is
explained in Chapter 9.)
5.3 Setting, running and saving scenarios
Basically, the scenario setting happens in the table in the blue section, which looks like the diagram below.
You can add up to 10 different interventions in this table.9 The list of interventions comes from the data
page (more on entering data in section 5.8). In this diagram, one intervention has been added, ‘Wing
schools’, in the first row. Wing schools in Ghana are small village schools that have only the first three
grades; they are linked to a larger school with all grades of primary. Two hundred Wing schools will be
built in the northern regions; and 1,000 in the rest of Ghana.
Now let us add a second intervention here.
When you click on an empty cell in the ‘Interventions’ column of the scenario table (the second cell
for us in this example) a pop-up menu with all of the interventions appears. (On a PC, the box is
smaller than the one shown here, which is from a Mac.) Click on one of the interventions. This
automatically adds it to the scenario table. We are going to select the intervention ‘Mother-tongue
instruction’.
Once you have selected an intervention, you can find out more about it – in particular, the units used
for the intervention and how many children are affected by each unit – by clicking on the ‘info’ button
on the right side of the intervention matrix. The ‘info’ button for ‘Mother-tongue instruction’ shows that
the unit of this intervention is the teacher, and that for each teacher trained, 40 children benefit (per
year).
Next, set the parameters of the intervention.
The parameters of the interventions tell the model about the timing of the programme, the scale and
the targeting towards risk groups and barriers.
Timing. To the right of the intervention name are two cells where users can input when the
programme will start, and how many years are needed to reach the target level of the intervention (e.g. it
may take 4 years to build 200 schools). In our example (shown below in the next figure), the first
mother-tongue schools will start in 2013 and the roll out period is 2 years.
Distribution over risk groups. Next, you can input how many units will be provided for each of the risk groups. How many units will go to girls in the northern regions? To boys in the northern regions? The
rest of Ghana? In our example, 10,000 children in the north, evenly distributed between boys and girls,
will be reached by the programme.
Targeting barriers within risk groups. Within risk groups, how well is the intervention targeted to reduce
particular barriers faced by vulnerable children? In our example, we assume that 90% of the children
receiving mother-tongue instruction will be children who benefit from it, i.e., who speak the language
of mother-tongue instruction at home.
The diagram below shows you where to add all the settings.
1. Add interventions.
2. Add timing – the year when the intervention starts and the duration of the roll-out.
3. Provide the number of units (e.g., schools, meals, books, teachers) to go to each risk group. Hint: if the risk groups include boys/girls, the intervention will often be split 50/50.
4. Add how well the intervention will be targeted to reduce specific barriers.
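Taken together, the timing, scale and targeting settings determine how many children the intervention effectively reaches each year. A hypothetical sketch of that arithmetic – the 40 children per unit and 90% targeting echo the mother-tongue example above, while the function, the linear roll-out and the 250 units are assumptions for illustration:

```python
# Hypothetical sketch of how the parameter groups combine: start year,
# roll-out period, units provided, children per unit, and targeting share.
# Only the 40 children/unit and 90% targeting come from the example above.

def children_reached(year, start=2013, rollout_years=2, units=250,
                     children_per_unit=40, targeting=0.9):
    """Children effectively reached by the intervention in a given year."""
    if year < start:
        return 0.0
    # Linear roll-out: full scale is reached after rollout_years.
    elapsed = min(year - start + 1, rollout_years)
    scale = elapsed / rollout_years
    return units * scale * children_per_unit * targeting

# At full scale: 250 teachers x 40 children x 90% well-targeted = 9,000.
```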
To see what your scenario looks like – how your intervention is implemented over time, according to your
settings – click ‘Info’ button again. The graphs below show the scenario for ‘Mother-tongue instruction’.
There is one chart for each risk group. The red lines show the number of children in the risk group, and
the green lines show the number of children who will be reached by the intervention. We can see that
for the mother-tongue scenario, there is a slight over-shoot in 2014, but eventually, coverage is near complete.
[Charts: ‘Intervention 2 – Mother-tongue instruction; Unit of intervention = Other; Children reached per unit = 40’. One panel per risk group (Boys north, Girls north, Rest of Ghana), each plotting ‘# children in target group’ (red) against ‘Potential # children reached by program’ (green), 2012–2022.]
To hide the graph, click on the ‘Info’ button again.
These graphs can be used to visually calibrate the scale of programmes in scenarios – you can test what scale of programme leads to approximately complete coverage.
Keep adding interventions until you are satisfied with this scenario.
Saving scenarios
Once you are satisfied with your scenario, which can consist of one or
multiple interventions, you can save it. To the right of the scenario setting
table, you’ll find a box with a ‘Save scenario’ button next to it. To save a
scenario, give it a new name in this box, and click the button. Your scenario
will automatically be saved.
Retrieving saved scenarios
If, later, you would like to present your
scenario, or work with it again, you can
retrieve it. Below the “Save scenarios”
button, you will see another box with a
button ‘Get scenario’ next to it. To
retrieve your scenario, click on the box, to
open a drop-down box with all the saved
scenarios. Select your scenario, and click
on ‘Get scenario’. All your scenario
settings should be back in the table!
Deleting scenarios
If you save many scenarios, you may get to the point at which the list of scenarios is too long and
cumbersome, or you may want to get rid of scenarios for another reason. To remove saved scenarios
from SEE, go to the ‘Saved Scenarios’ worksheet. The scenarios are saved in column format with the name
of the scenario at the top of the column in row 1. Simply delete the columns with the scenarios that you
no longer need. The list of scenarios will be automatically updated.
Pre-entered, standard scenarios
There are four standard scenarios, which you cannot delete: ‘Blank’, ‘Pilots 1’, ‘Pilots 2’, and ‘Pilots 3’.
 ‘Blank’ = this scenario returns a blank intervention table and is useful for starting new work. It is like
erasing the white-board.
 ‘Pilots’ = these scenarios are useful to provide a starting point for cost-effectiveness and impact. The
assumption in these scenarios is that one unit of each intervention is provided to each risk-group (it is
like running many small pilots, hence the name). The scale of the interventions is small so that there
are negligible interaction effects. There are three ‘Pilot’ scenarios to cover the entire maximum list
of 30 interventions (each ‘Pilot’ scenario covers 10 interventions).
Results
The model calculates the following list of outputs and results:
Number of pupils
Number of teachers
Number of trained teachers
Number of untrained teachers
Student attendance rate
Teacher attendance rate
% desired books/materials (compared to national target)
% desired teachers
Number of classrooms
% qualified teachers
Pupil teacher ratio
% desired classrooms (compared to national norm)
% children with school nearby
School utilization rate (if school is available)
% children who will enter school
% children age 12 who ever attended school (only with OOS)
Gross intake rate
Pupil trained teacher ratio
Pupil classroom ratio
Books per pupil
Additional children who pass NEA Math
Additional children who pass NEA English
Additional completers due to interventions
Additional entrants due to interventions
Additional trained teachers due to interventions
Additional books due to interventions
% children entering overage (only with OOS)
Net enrolment rate (only with OOS)
Gross enrolment rate
% private pupils
Public gross enrolment rate
Social norms
Legislation / Policy
Survival rate to last grade in projection
Primary completion rate
Budget/Expenditures
Management / Coordination
Expected primary completion (% entry x survival rate)
Actual, percent who pass NEA Math
Actual, percent who pass NEA English
Expected, percent who pass NEA Math
Expected, percent who pass NEA English
% Expected children who pass NEA Math (% completers x pass rate)
% Expected children who pass NEA English (% completers x pass rate)
% children in school (incl. CBE)
% OOS children who will not enter school
% OOS children who will enter late
% OOS children who dropped out
% OOS children in preschool
TOTAL costs
Additional children with desired books per 1000 US$
Additional children with trained teachers per 1000 US$
Additional children with school nearby per 1000 US$
Additional children to enter school per 1000 US$
Additional children expected to complete primary per 1000 US$
Additional children expected to meet NEA Math per 1000 US$
Additional children expected to meet NEA English per 1000 US$
5.4 Customizing the results interface (green section)
The model shows the results of the projections in charts. As soon as you make a scenario – by adding an
intervention – these charts are updated, so you can always see the impacts of the interventions you have
assumed.
You can see two charts at a time by selecting from the two chart menus shown below:
Chart menu #1
Chart menu #2
You can customize and select which results to see, and can easily switch between different views. This
section discusses the different options.
Bottleneck chart, all risk groups, benchmark values. This bottleneck chart
summarizes the benchmark – initial – situation in your country or region,
for all of the risk groups. It is a graph that should be set up to track the
most important outcomes in your policy dialogue – whether the
discussion applies to an effort to reduce bottlenecks overall, a specific
initiative to improve learning outcomes, a UNICEF country office MoRES effort or an initiative to reduce
the number of out-of-school children.
What you can change:
Select which results you want
to show from the list of
indicators – when you click on
the cell, a dropdown box with
the list of indicators appears,
and you can select the desired
indicator from it.
Bottleneck Chart one risk group, two years – shows the values of six
selected high-level outcomes for the starting year and a selected
target year. This graph is useful to see high-level overall outcomes
of projections.
What you can change:
Select risk group
Select the target year
Select which results you want to
show out of list of indicators.
Cost chart -- This chart shows the costs of the scenario. Along the top of the chart, you
see the total costs of all the interventions over the entire projection period of 10 years.
The chart itself shows the costs per year, subdivided by intervention. In the chart
below, we see that the total costs of the 1400 Wing schools and 10,000 Mother-tongue
Instruction classrooms is $92 million over the course of the ten year projection. The
chart shows how these costs vary by year. Mother-tongue instruction (shown in the red portion of the
bars) has costs mainly for the 2 initial years (for training) and then again 5 years later (for re-training). The
Wing schools have costs in the initial three years – for the building of the schools; no costs have been
included yet for the additional teachers who will teach in these schools.
What you can change:
Select risk group
Select whether to show the
existing education expenditures
in the chart (start-year costs) or
only the costs of the scenarios’
interventions.
Compare scenarios chart Often, when you have made different scenarios, it is useful to
compare the results. This can be done with the ‘Compare scenarios chart’. Click on the
button to show this chart.
To set this chart up, first select the scenarios you want to compare, by clicking on the drop-down menus in
the boxes next to the y-axis of the chart. You can do this for up to six scenarios. In the chart below, we
compare two scenarios, ‘Ghana construction’ and ‘Ghana early childhood focus’. We have chosen to look
at ‘Expected, percent who pass NEA math”. With the first scenario, the percent of children who will pass
NEA math is 69 percent; in the second, it is slightly better, 70 percent.
What you can change:
Select outcome indicator to
compare
Select up to six scenarios to
compare. NOTE: be careful and
select only scenarios you have
saved for your country/region!
Impact chart – Allows you to compare the absolute impact of the
interventions. The impact chart shows the absolute number of additional
children who will reach six education benchmarks -- children with sufficient
books, teachers, and schools as well as children who enter school, complete
and learn – because of the interventions in the scenario setting.
What you can change:
Select outcome indicator to
compare.
Select whether to see national
average or result by risk group.
Set up whether to see only direct impacts
of the intervention on the selected result
or also indirect impacts. For example,
building schools contributes directly to
more children entering school. But,
because more entrants also (usually)
translate into more completers, an
indirect impact of building schools is more
completers. If we have selected the result
‘Additional children to complete primary
per $1000’, and we select ‘Yes’ to
including indirect impacts, the chart will
show the additional children who
complete as a result of higher entry. If we
select ‘No’, the chart shows only the
children who complete as a direct result
of the intervention, excluding the effect
of higher entry rates. We recommend
usually keeping the setting on ‘Yes’ as this
captures the full impacts of interventions.
Select whether to see impacts for a
particular year or summed for all years.
For example, you can select how many
additional children will complete
primary school in 2015. Or, you can
select ‘All years’ for a sum over all the
years of the projection.
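The direct/indirect toggle can be sketched in a few lines. A hypothetical example – the function and the 70% survival rate are illustrative assumptions, not SEE's internal formula:

```python
# Hypothetical sketch of the direct vs. indirect impact toggle. Building
# schools directly adds entrants; those entrants flow through to completion,
# so the schools also add completers indirectly.

def additional_completers(extra_entrants, survival_rate=0.70,
                          direct_extra_completers=0, include_indirect=True):
    indirect = extra_entrants * survival_rate if include_indirect else 0
    return direct_extra_completers + indirect

# 10,000 extra entrants at 70% survival -> roughly 7,000 extra completers.
```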
Cost efficiency chart – Allows you to compare how cost effective
each intervention is, measured in change per unit of cost, e.g.
additional children to pass the test per 1000 US$ invested (or
other currency unit as specified by the user). Cost-effectiveness is
an important criterion in selecting interventions. This chart shows
how many children reach six education benchmarks – children
with sufficient books, teachers, and schools as well as children
who enter school, complete and learn – per $1000 spent on each
intervention set in the scenario.
What you can change:
Same options as for the impact
chart (above), except for year
(cost-effectiveness is based on the
average of all years projected).
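The per-$1000 measure itself is straightforward division. A minimal sketch with illustrative numbers:

```python
# Minimal sketch of the per-1000-US$ cost-effectiveness measure.
def per_1000_usd(additional_children, total_cost_usd):
    return additional_children / (total_cost_usd / 1000)

# e.g. 4,500 additional passers for a US$1.5m programme:
cost_effectiveness = per_1000_usd(4_500, 1_500_000)  # 3 children per $1000
```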
Equity trend chart – Allows you to
compare the trends for each risk group,
over time, for one indicator at a time.
What you can change:
Select which outcome indicator
you want to see
Out of school children chart – Shows the distribution of out-of-school
children – a) percent who will never enter school, b) percent who will enter
late, c) percent who have dropped out, d) percent of primary age but in
preschool, and e) percent of children in school. This chart is only available if
you have included age-specific data in the data entry (which is optional).
What you can change:
Select whether you want to see
out of school children by single
year age-group in one selected
year, or the whole school age
population over time for all
years of the projection period.
Select risk group to illustrate.
5.5 Practice activities for learning how to set better scenarios
It is one thing to know how to enter the right data and click on the right cells to make SEE produce an
outcome. It is another to use the model effectively to probe the range of interventions and
distribution possibilities and determine the most appropriate cost-effective, high-impact proposal.
This section contains practice activities that take you from the simple to the complex, slowly building your
skills at finding effective intervention combinations. The activities are based on training workshops in Ghana,
Burkina Faso, Senegal, Bolivia and Pakistan.
NOTE: If you find other activities that would be useful for your colleagues, please send these ideas to the
SEE project manager at UNICEF, Matthieu Brossard, mbrossard@unicef.org.
The practice activities utilize data for Ghana. You can adapt the activities to your own country or region
as necessary. After you have completed the activities, you can refer to the end of the section to find out
how the SEE team did them.
Before-and-after experiment
Following good scientific practices, before you start on the activities you might set up this small
before-and-after experiment.
Once you know how to use the buttons in SEE, set yourself the following programming challenge:
Implement interventions to increase, by 50 percent, by 2020, the percentage of children expected to pass
the target learning measure (e.g. the NEA math assessment for Ghana).
Implement what you think is a good set of interventions and note the total budget for the scenario. Save
your scenario. Put it away, and then start the activities.
The last of the practice activities is to redo this exercise. Did you do better than your first try? Let us find
out.
Simulations for Equity in Education (SEE): Model description and user’s guide
Activity # 1. One intervention: provision of remedial teachers – Big Sister program 1
In this activity, we start by adding one intervention to a scenario, and working through an analysis of the
results. Numerous studies have shown that children who are falling behind in class (often marginalized
children) can be helped through a small amount of remedial teaching, for example the balsakhi program
in India, where young village women helped children, or volunteer after-school camps, also in India, that
brought young children up to a higher reading level (Banerjee et al. 2005, 2010). In this activity, we
implement a remedial teaching program in Ghana along the lines of the balsakhi program6. We will call it
the Big Sister program and it is targeted at pupils in the north regions.
We assume the program is rolled out over a two-year period from 2014-2015, and that enough Big Sisters
are trained to reach all children by 2015. The program assumes that one Big Sister can cover pupils in
multiple classes – not all the pupils in the classes, but only those who are falling behind, and only taking
small groups for a portion of the day. After that, the program is maintained at this level until 2022. If you
need help on how to implement an intervention, refer back to section 5.3.
1. Look at the situation without the intervention first. What percentage of North Girls and North Boys
passed the ‘NEA English pass’ standards in 2012? (hint: look at the ‘Bottleneck chart’ or the ‘Equity’
chart and select ‘Actual NEA English pass’). How are the North Girls and Boys faring compared to the
‘Rest of Ghana’?
2. Implement the program for Boys and Girls in the north starting in 2014 and with a 2-year period to
reach the full scale of the program. Click on the ‘Info’ graph, and keep adding more Big Sisters to
your scenario until the green line (children reached) and red line (children in group) overlap in 2015.
How many Big Sisters do you have in 2015?
3. How much did the program cost in total? (Hint: select ‘View cost results’ for ‘North Girls’ or ‘All
Ghana’ risk groups, and look at the title as well as the chart itself)
4. Impact: How many more children will reach the NEA English learning standards thanks to the
program? (Hint: look at the ‘Impact’ chart, ‘Additional children expected to meet NEA English’ for ‘all
years’. Obtain the values by hovering over the bars in the chart or, selecting the series in the chart
and then hovering over the bar.)
5. Cost-effectiveness: For each $1000 invested, how many additional girls are expected to reach the
NEA English pass standards? (Hint: look at the ‘Cost-effectiveness’ chart.)
6. Equity: Will this program close the learning gap between the North and the Rest of Ghana?
7. Save your scenario. (Hint: give it a name to help you remember the contents.)
6
We are going to use the effectiveness parameters of the balsakhi program because we could not find parameters from an
African example. The assumptions are: the program will be implemented in the same way as in India; Ghana and India pupils
have similar problems learning to read – e.g. can’t recognize letters, can’t put together words; there are young women with
moderate education levels in villages, who can be taught a reading program like the ones in India; the program can, to a large
extent, be transferred, because the skill being taught (reading) has many similarities across cultures and scripts. However,
even if you don’t agree with these assumptions, you can still do the exercise and learn how to use the SEE model from it.
Activity # 2. Pro-equity targeting of remedial teachers – Big Sister program 2.
Often, the children who are furthest behind in classes, are from marginalized groups – in particular,
children from poor, illiterate parents -- and remedial teachers (the big sisters) help to reduce the barriers
posed by their background to their academic learning. In the Ghana model, this is formalized by linking
remedial teachers to the barrier of poverty – this means that remedial teachers are most effective for
poor children. In this exercise, we continue to explore whether and how targeting the Big Sister program
at these groups of children can reduce inequity and whether it is cost-effective to do so.
1. Before you start, check the prevalence of the poverty barrier in the three risk-groups. What is the
poverty prevalence among ‘Girls North’, ‘Boys North’, and ‘Rest of Ghana’? (Hint: check ‘Data
summary’ for poverty rates)
2. Implement a Big Sister program in the Rest of Ghana also. First, implement 10,000 Big Sisters. How
many more children pass the NEA English standards only in the ‘Rest of Ghana’ over all the years in
the projection period? What is the cost-effectiveness? (Hint: look at the ‘Cost effectiveness’ and
‘Impact’ charts in the ‘equity’ view to see the results for the ‘Rest of Ghana’.)
3. Why do you think the cost-effectiveness is different in the ‘Rest of Ghana’ compared to the two
‘North’ regions?
4. Now the program is very expensive – because there are so many Big Sisters in the ‘Rest of Ghana’.
Let’s try to improve cost-effectiveness. You can improve the effectiveness of the Big Sisters program
in the ‘Rest of Ghana’ by targeting children with the poverty barrier. Try this: set the effective
targeting to 70%. This means that the Big Sister program is implemented in villages with at least 70%
poverty. First, look at the ‘Info’ graph. Note that the green line (children reached) is now much
higher than the red line (children in target group = 11 percent poor children in ‘Rest of Ghana’). With
the 70% targeting, the green line is much higher than before targeting because the program is much
more effective at actually reaching children who benefit. You can reduce the number of Big Sisters
(bring green line down to red line in the ‘Info’ graph, by reducing number of Big Sisters). How many
Big Sisters are needed to reach approximately 100 percent of the poor children in the Rest of Ghana
with the 70 percent targeting?
5. What do you think would be the best targeting and size for the Big Sister program?
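The targeting arithmetic behind step 4 can be checked by hand. A hypothetical sketch – the 11% poverty prevalence is quoted in the activity, but the pupil count, the 40 children per Big Sister and the function itself are illustrative assumptions:

```python
# Hypothetical check of the Activity 2 targeting arithmetic. Only the 11%
# poverty prevalence comes from the activity text; the rest is illustrative.

def big_sisters_needed(children_in_group, poverty_prevalence,
                       children_per_unit=40, targeting=0.7):
    """Units needed to reach roughly all poor children in the group."""
    poor_children = children_in_group * poverty_prevalence
    poor_reached_per_unit = children_per_unit * targeting  # 28 per Big Sister
    return poor_children / poor_reached_per_unit

# 2,000,000 pupils, 11% poor, 70% targeting -> about 7,857 Big Sisters.
```

The point of the exercise shows up in the denominator: raising the targeting share raises the number of poor children each unit reaches, so fewer units are needed for the same coverage.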
Activity # 3. Augmenting existing stock with an intervention: Wing schools
Sometimes, we may want to implement an intervention that augments an existing stock. For example, in
2012, only 70% of children in the north regions lived within easy walking distance of a school. A clear
policy imperative is to add to the stock of schools so the remaining 30% of children also have a school
nearby. Most of the children who did not have a school nearby live in small, remote villages; it therefore
makes sense to build small so-called Wing schools, which provide grades 1-3.
Implement a program to build 2,000 Wing schools in the north regions, distributed evenly over boys and
girls. The program starts in 2014, takes 2 years to implement, and the schools are maintained through
the entire projection period (at least 2022). The costs of the program include construction and
subsequent recurrent salaries and materials costs for two teachers.
1. Do the Wing schools remove the school shortage for children? What percentage of children has a
school nearby in 2015 as a result of the intervention? What about in 2020? (Hint: look at the Equity
chart, select “% of children with school nearby”).
2. One thing we have forgotten to do after constructing the Wing schools: add
teachers. Each wing school should have teachers. Recall from question 1 that the ‘Info’ button for wing
schools tells us that each wing school reaches 80 children. Approximately how many teachers should
each wing school have? Add enough trained, or qualified teachers, to provision each wing school
(multiply: your estimated # of teachers per wing school by the number of wing schools).
3. What is the effectiveness of a Wing school on dropout, repetition, and learning rates? What is the
effect of pre-service teachers on dropout, repetition, and learning rates? (Hint: view the ‘Data
Summary’ and look up Wing schools in the ‘Information about intervention effectiveness’ table)
4. Cost-effectiveness: How many more children will enter school per $1000 spent on Wing schools and
teachers? (Hint: look at the ‘Cost-effectiveness’ chart and select ‘Additional children to enter school
per $1000’. For the total effectiveness of the scenario, look at the bottom bar ‘Total’). How many
more will complete school per $1000 spent, and how many more will pass ‘NEA English’ standards
per $1000 in each risk-group? How can schools lead to more children passing ‘NEA English’
standards?
5. The percentage of children with a school nearby only reaches 91% and then drops to 89% because of
population growth. Make a scenario where 97% of children have a school nearby through 2022. This
takes 2 steps.
5a. Get to 97% in 2015, by adding Wing schools incrementally until you have enough schools. (Hint:
use one of the charts to help you)
5b. Supplement these schools after with a second ‘Wing schools’ intervention, implemented
gradually. Remember to add teachers as well! (Hint: start in 2016, implement over 6 years, and add
as many schools as necessary to maintain 97% of children with a school nearby until the last projection
year, then add teachers, 2 per wing school)
5c. Look at the ‘Info’ graphs for both steps of the ‘Wing school’ intervention. The charts look
identical because the model sees that you are implementing two steps with the same intervention
and combines them both to give you a total.
This kind of two-step intervention is sometimes useful when you have two different growth and
implementation phases.
6. Save your scenario. (Hint: use a name you’ll remember)
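The staffing estimate in step 2 of this activity is simple arithmetic: children per Wing school divided by a pupil-teacher norm. A sketch, using the 80 children per school quoted in the activity and the PTR target of 35 mentioned in Activity 4 (the function itself is an illustrative assumption):

```python
# Sketch of the Activity 3, step 2 staffing arithmetic. The 80 children per
# Wing school is from the activity; the PTR norm of 35 is Ghana's official
# target quoted in Activity 4. The function itself is illustrative.

def teachers_for_wing_schools(n_schools, children_per_school=80, ptr_norm=35):
    teachers_per_school = round(children_per_school / ptr_norm)  # 80/35 -> 2
    return n_schools * teachers_per_school

# 2,000 Wing schools x 2 teachers each = 4,000 trained teachers to add.
```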
Activity # 4 – Two paths to 100% certified teachers
The pupil teacher ratio in Ghana is 32, which looks quite good. However, many of these teachers are
untrained – the pupil trained teacher ratio is a much higher 55. You can see this in the ‘Equity’ chart by selecting
‘Pupil teacher ratio’ and ‘Pupil trained teacher ratio’. The official policy is towards 100% certified (or
trained) teachers and a PTR of 35. By these measures, Ghana has only 64% of the total desired trained
teachers (to verify, you can select ‘% desired trained teachers’ in the ‘Equity’ or ‘Bottleneck’ charts).
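The 64% figure follows directly from the two ratios just quoted, assuming ‘% desired trained teachers’ is the ratio of actual trained teachers (pupils ÷ 55) to desired trained teachers (pupils ÷ 35):

```python
# With P pupils, actual trained teachers = P/55 and desired trained teachers
# = P/35 (the PTR norm with 100% trained staff), so the percentage of desired
# trained teachers reduces to 35/55 regardless of P.
ptr_trained_actual = 55   # pupil trained-teacher ratio
ptr_norm = 35             # official pupil-teacher ratio target

pct_desired_trained = ptr_norm / ptr_trained_actual   # ~0.64
```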
In this activity, we will explore two paths to reaching a higher trained teacher percentage – a) by
replacing untrained teachers with trained teachers and b) by providing in-service training to untrained
teachers. First, get a new ‘Blank’ scenario.
1. In this step you will set an intervention to provide in-service training to all untrained teachers. First,
identify how many untrained teachers are in each risk-group. How many untrained teachers are in
North Girls, North Boys and Rest of Ghana? (Hint: you can look at ‘Number of trained teachers’ in the
‘Equity’ chart.)
2. Set up the intervention to provide in-service training to all of these teachers between 2014 and 2017
(Hint: start year 2014 with 4 years to reach desired level of intervention. The intervention is called ‘In
service training diploma for untrained’, check that ‘Number of untrained teachers’ goes to zero by
2017). How many teachers received in-service training in each risk-group? How much does the
program cost? The program costs include training and higher salaries after untrained teachers get a
diploma.
3. How much is this cost as a percentage of the overall costs of education in 2017? (hint: in ‘Bottleneck
chart’ click on ‘Yes’ in ‘Show start year costs’)
4. Save the scenario.
5. In this second set of activities, we will replace the untrained teachers with pre-service trained
teachers. This is a two-step process: 1- un-hire the untrained teachers and 2- hire trained teachers.
It will be implemented over four years from 2014-2017. First get a ‘Blank’ scenario.
5a. Un-hire the untrained teachers. Select intervention ‘Teachers: hire untrained’. To subtract
teachers put negative numbers in the number of units (i.e. -5531 for Girls North etc). There should
be zero untrained teachers.
5b. Hire trained teachers. Select intervention ‘Teachers: pre-service trained’ and hire as many
trained teachers as you fired untrained. How many trained teachers are there in 2017?
6. How much did the scenario cost?
7. Save the scenario.
8. Which scenario is more cost-effective? Let’s compare the two scenarios. Open the ‘Compare
scenarios’ chart. To the left of the plot area, select the two teacher scenarios from the drop-down
boxes. Select one of the learning outcome results. Which scenario results in higher learning? Is the
difference large?
9. Compare the ‘Additional children to pass NEA math’ with each scenario. How many more children
pass with in-service training as compared to pre-service training? (Hint: you need to look at one and
then the other scenario in the ‘Impacts’ chart).
10. Why is there this difference? (Hint: if you open ‘Data summary’ you will find the effectiveness of each
intervention on ‘fail learning targets’.)
11. Do you think that the differences in the effectiveness between pre-service trained and in-service
trained teachers are realistic?
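The percentage asked for in step 3 boils down to a simple ratio. A sketch with illustrative round figures (a US$60 million program against US$856 million of overall start-year costs – both assumptions for illustration, not model output):

```python
# Hedged sketch: program cost as a share of overall education costs.
overall_costs_2017 = 856_000_000   # illustrative overall costs (US$)
program_cost_2017 = 60_000_000     # illustrative in-service training cost (US$)
share = program_cost_2017 / overall_costs_2017
print(f"{share:.0%}")  # → 7%
```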
Activity # 5 – Which scenario is best?
In the previous activity, you compared the impact and the cost-effectiveness of two scenarios. In this
activity, you will compare all four of your scenarios based on different criteria – cost-effectiveness,
impact, and equity – and evaluate which one you would recommend and why. In the next activity, you
will use this knowledge to build a cost-effective scenario for increasing learning in Ghana.
1. Compare the cost-effectiveness of all four of your scenarios. To do this, select all four scenarios in
the ‘Compare scenarios’ chart.
2. Cost-effectiveness: Which scenario is the most cost-effective, in terms of ‘Additional children
expected to meet NEA math’ standards per $1000? (hint: look for the cost-effectiveness indicator in
the drop-down ‘Select result’ menu above the ‘Compare scenarios’ chart)
3. Impact: Which scenario has the most impact, in terms of ‘Additional children expected to meet NEA
math standards, in all years’?
4. Cost-effectiveness 2: Which scenario is the most cost-effective, in terms of ‘Additional children to
complete school per $1000’?
5. Equity: Which scenario is the most pro-equity, in terms of reducing the NEA scores differences
between the ‘North’ risk-groups and the ‘Rest of Ghana’? (Hint: look at a learning outcome measure
in the ‘Equity’ graph, for each scenario in turn)
6. Based on these results, which scenario would you recommend and for what reasons?
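The ranking in steps 2–4 amounts to dividing each scenario’s additional outcomes by its cost in thousands of dollars. A sketch with hypothetical scenario figures (the names and numbers are placeholders; real values come from the ‘Compare scenarios’ chart):

```python
# Hypothetical scenario results for illustration only.
scenarios = {
    "Big Sisters":  {"additional_passers": 95_000,  "cost_usd": 58_000_000},
    "Wing schools": {"additional_passers": 245_000, "cost_usd": 230_000_000},
}

def per_1000_usd(s: dict) -> float:
    # Additional children passing per $1000 spent
    return s["additional_passers"] / (s["cost_usd"] / 1000)

ranked = sorted(scenarios, key=lambda n: per_1000_usd(scenarios[n]), reverse=True)
print(ranked[0])  # → Big Sisters
```

Note how a scenario can have the biggest total impact (here ‘Wing schools’) yet not be the most cost-effective per $1000 – exactly the distinction steps 2 and 3 ask you to draw.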
Activity # 6 – More information on all the interventions – 25 quasi-pilots
In the previous five activities, we used only a fraction of the interventions available. Perhaps there are other, more
cost-effective, or pro-equity interventions? To find out, we prepared three sets of ‘Pilot’ scenarios. In the ‘Pilot’
scenarios, one unit of each intervention is implemented over all the risk-groups. You can then compare the impact
(per unit), the cost-effectiveness, and the pro-equity effects of all the interventions and start to make a short list of
the interventions that are most pro-equity, highest impact and most cost-effective. This will make it easier to
select which interventions you want to start with in the next, and last activity, to improve learning outcomes in
Ghana.
Retrieve the scenario ‘Pilots 1’. You should see ten interventions in the scenario setting matrix and one unit of each
intervention implemented for each risk group. In reality, if you were implementing pilot programs, you would have
more units to account for local variability, but in the model we can simulate with just one intervention unit and still
get all the information we need about cost-effectiveness, unit impact, unit costs and so forth.
1. Look at the ‘Cost effectiveness’ chart. What are the five most cost-effective interventions with regards to
‘Additional children to meet NEA math standards per $1000’? Write them down in a list and note the
effectiveness values by risk-group.
2. Retrieve the scenarios ‘Pilots 2’ and ‘Pilots 3’ and write down the most cost-effective interventions for each of
these as well.
3. Based on these results, which are the top interventions with regards to cost-effectiveness for additional math
learners?
4. We know from the bottleneck analysis that many children in Ghana do not complete primary school; cost-effective
interventions to increase the number of completers may be different from the list of cost-effective
interventions to increase learning. Repeat the exercises above to identify the most cost-effective interventions
to raise the number of children completing primary school.
5. Are there any interventions that are much more effective for some risk-groups? If so, which ones? (Hint: look
at the effectiveness chart in the “Equity” view, which shows the effectiveness by risk group.)
6. To reach 100% primary completion, all children must enter school. What are the two interventions that
increase school entry?
7. Based on these insights, sketch a couple of possibilities to remove the major bottleneck and inequity in Ghana –
the low learning levels for all risk-groups, and the learning inequity between ‘North Girls’ and ‘North Boys’
versus the ‘Rest of Ghana’. Sketch at least one strategy that is highly pro-equity, and another one that is purely
based on cost-effectiveness and impact considerations.
Activity # 7– Find a pro-equity and cost-effective program to reduce learning gaps
In this exercise you will put to work what you have learned in the previous exercises to identify a cost-effective and pro-equity strategy – which can consist of multiple interventions – for improving learning
outcomes in Ghana. Your case will be strengthened if you are able to contrast your scenario with another
less cost-effective, or less pro-equity strategy that is, at the same time, plausible. As a budget goal, try to
stay within 10 percent of the present budget.
Document your scenario(s) as if you were going to present them to an audience of stakeholders.
Completing the before-and-after experiment
At the beginning of the practice activities, we suggested this small experiment:
Implement interventions to increase, by 2020, the percentage of children expected to pass the NEA math
assessment by 50%.
Activity 7 was a repeat of this challenge to increase learning. Now that you have completed the
activities, compare your original results with the new ones. Did you do better after the activities? We
hope so!
Please let us know what you have learned, and if you have ideas for other activities!
Contact: Matthieu Brossard, mbrossard@unicef.org, or the SEE consultant, Annababette Wils,
babette@babettewils.net.
Answers for Activities 1–6. Note: answers are based on version SEE-Master-12-2014.
Answers Activity #1: 1) In 2012, 12% of girls passed the NEA English standards. In the ‘Rest of Ghana’ it was 28%; 2)
Approximately 4200 for Girls North, and 4600 for Boys North (note, each user may arrive at slightly more or less
depending on their visual assessment of the Info graph or other ways of calculating how many Big Sisters are
needed); 3) total cost US$58,367,000 (approximately, depending on exact number of Big Sisters); 4) 82,572 girls and
90,435 boys, for a total of 173,007; 5) 2.96 additional children pass NEA English standards per $1000 invested; 6) yes, in
2020, the pass rate for North Girls is 33% compared to 28% in the Rest of Ghana; 7) User determines name.
Answers Activity #2: 1) Poverty prevalence in ‘North Girls’ and ‘North Boys’ is 67%; in ‘Rest of Ghana’ it is 11%; 2)
41,529 additional pupils pass the NEA English standards in the ‘Rest of Ghana’; cost-effectiveness 0.63 additional
successful pass per $1000; 3) the Big Sisters program targets children in poverty; cost-effectiveness is lower in the
‘Rest of Ghana’ because a) the learning gap is smaller (recall: impact is proportional to the gap) and b) relatively
fewer children actually benefit from the tutoring per Big Sister because only the poor children benefit and the
poverty level is much lower in the ‘Rest of Ghana’. 5) Approximately 5200 Big Sisters in Rest of Ghana. 6) Open
response question.
Answers Activity #3: 1) No, this scenario reduces but does not eliminate the shortage of schools, in 2015 about 90
percent of children in North Ghana have a school nearby; in 2020 it is 89; 2) 91% in 2015 and 89% in 2020 for boys
and girls; 3) The user can set their own number of teachers per wing school. If we assume 2 teachers per school
(average PTR would be 40 in the new wing schools), then we need 4,000 new teachers. 4) Wing schools
effectiveness on ‘Repetition’ and ‘dropout’ is zero; for ‘Fail learning targets’ gap it is 4%; pre-service trained
teachers’ effect on learning is a 29% reduction of the learning gap; 5) 4.0 more enter school; 3.22 more complete
primary per $1000; 0.45 more reach NEA English pass standards per $1000; schools and more teachers lead to more
children passing the learning standards because there are more pupils due to higher entry rates and because of the
higher learning levels due to more trained teachers; 5a) 3,000 Wing schools by 2015, 1,500 each for boys and girls;
5b) 1600 Wing schools built from 2015 over 8 years, 800 each for boys and girls plus 3200 teachers split evenly over
boys and girls. 6) User determines name.
Answers Activity # 4: 1) Girls North 5,310, Boys North 6,006, Rest of Ghana 22,585; 2) train the same number of
teachers as there are untrained i.e. 5,310, 6,006 and 22,585 respectively; start year 2014, and years needed to
reach desired level of intervention 4; the costs are US$478,719,000; 3) Start year costs are $856 million, in 2017 the
teacher program costs US$ 60 mln, or 7% additional costs; 4) User sets scenario name; 5) 93,536 trained teachers in
2018; 6) cost $520,244,000; 7) User sets scenario name; 8) Pre-service training results in more learning; 9) in-service
training – 91,080 more meet NEA math standards, pre-service training – 140,741 more pass; 10) Pre-service trained
teachers are more effective at reducing learning gaps; that is why the pre-service scenario results in more children
passing the NEA math standards. 11) Open response question.
Answers Activity # 5: 2) The ‘Big Sisters’ scenario is the most cost-effective – per $1000 invested, 1.6 additional
children pass the NEA math standards; 3) the ‘Wing schools’ scenario has the biggest impact (245,506 additional
children expected to meet NEA math standards). 4) The ‘Wing school’ scenario results in the most additional
entrants; 5) Answer using actual NEA English pass rates (other measures of learning outcomes can be substituted
here) - the ‘Actual NEA English pass’ rates in 2022 for the ‘Big sister’ scenario in the risk-groups ‘North Girls’, ‘North
Boys’, ‘Rest of Ghana’ are 32, 32 and 31 respectively; in the ‘Wing schools’ scenario 21, 21, and 24; in the ‘In-service
training’ scenario 21, 21, and 31; and in the ‘Pre-service training’ scenario 25, 25, and 32. The ‘Big sister’ scenario is
the most pro-equity in terms of reducing learning outcome differences. 6) Open response question.
Answers Activity # 6: 1) 1- School management committee; 2- head teacher training; 3- Wing schools and public
schools equal; 5- Big Sister program; 2) Pilots 2: 1- Teachers EGRA training; 2- mother tongue instruction; 3- more
books; 4- kindergartens; 5- free uniforms; Pilots 3 1- scholarships; the others are less cost-effective than
interventions shown in Pilots 1 and Pilots 2; 3) 1- School management committee training; 2- head teacher training;
3- Teacher EGRA training; 4- school construction; 5- Mother-tongue instruction; 6- More books; 7- Big sister
remedial teachers; 8- Scholarships. 4) 1- construct schools; 2- mother-tongue instruction; 3- EGRA training; 4- Kindergarten; 5- free uniforms; 6- scholarships. 5) Yes; EGRA training, Mother-tongue instruction, Kindergartens,
Latrines, Fee removal, Free uniforms, are much more effective for north risk-groups; 6) Adding public and wing
schools; 7) Open response question.
5.6 Applying SEE within the MoRES framework
SEE can be set up for use with UNICEF’s Monitoring Results for Equity System framework. A separate
MoRES view is included in the model, which can be called up by clicking on the ‘Show/Hide MoRES View’
button in the main menu. (For more information about MoRES, see UNICEF 2012.)
The MoRES view is shown below. It consists of one chart and a table, where the indicators can be
selected. The domains and the determinants of the MoRES framework are provided and fixed, but each
user can select the indicators that will be used to track the determinants based on the relevance to the
country context. The indicators can be selected from a drop-down list, which includes all
the results tracked in SEE.
The target year and the region or risk-group can also be selected in the MoRES view.
To the right of the MoRES determinants selection table, a chart shows the values of those determinants
in the present year and projected to the target year. The chart is shown below for the selected
determinants for ‘North girls’ in Ghana. This chart can be used as a visualization tool in a MoRES
bottleneck analysis.
It can also help identify possible ways to reduce or eliminate the bottlenecks when used in conjunction
with scenarios. The example below shows the improvements to several MoRES determinants in Ghana in
a scenario with the following interventions: 3,000 wing schools and 6,000 new trained teachers for those
schools in north Ghana (first scenario in the practice exercises).
[Figure: ‘MoRES chart – Girls north’, comparing start values with projected 2018 values for the selected
determinants – % schools receiving capitation grant on time (budget); % schools with SMC meeting >1/term
(governance); % children with school nearby; % qualified teachers; % children who will enter school; survival
rate to last grade in projection; and expected percent who pass NEA Math – plotted against the percentage of
children covered.]
The advantages to using SEE with MoRES at the national/sub-national level are: (1) It provides a clear
tool in which diverse and available education data and relevant indicators are put into a consistent
framework; (2) It provides a simulation model to assess the efficacy of different intervention strategies so
planners can assess which interventions will best reduce the bottlenecks associated with the MoRES
determinants and by how much; (3) the simulation model will help in calculating costs of proposed
interventions; and (4) the model can be adapted to monitor the real gains in reducing bottlenecks against
the projected impact.
5.7 Sensitivity analysis
One of the key drivers of the impact of interventions is their effectiveness – that is, how much an
intervention will change a particular education outcome.
As discussed in Chapter 2, hundreds, if not thousands, of studies exist that ascertain the size of interventions’
impact and how cost-effective they are. Various authors have made compilations of these studies. As
mentioned, the World Bank recently launched IE², a database of effectiveness studies, and UNICEF
also compiled a database specifically for use with the SEE model. The UNICEF database is available
internally to UNICEF and can be requested by contacting the project managers.
Even for interventions that have been documented in this way, the exact effectiveness – and sometimes
other parameters, like unit costs – remains uncertain (as discussed in Chapter 2). In addition, there are
many interventions for which no accurate studies have been found. When this
kind of uncertainty arises, it may be useful to do a sensitivity analysis for parameters that we are
uncertain about.
The best way to do a sensitivity analysis is by running a few scenarios, where, instead of changing the
interventions, you change the values of the parameters in the ‘Data and Calculations’ page. The
following presents two examples, one sensitivity analysis of effectiveness, and a second sensitivity
analysis of unit costs.
NOTE FOR EARLY SEE USERS: In earlier versions of SEE, there was a separate module for the sensitivity
analysis. At this point, it is not recommended to use this feature as it is not maintained and updated;
instead, it is considered simpler to do the sensitivity analysis as described below using the new “Compare
scenarios” graph.
Sensitivity analysis example 1 – Sensitivity analysis of effectiveness.
In Activities 1 and 2 above, a scenario was implemented to provide remedial teachers for students falling
behind in class – called the Big Sister program in the exercise. The effectiveness of the intervention is
based on a study for a similar program from India7. According to that study, working with a remedial
teacher improved learning scores: the intervention’s effectiveness, in terms of reducing learning failure
rate, was 31 percent. This value is entered into the “Effectiveness Matrix” on the “Data and Calculations”
page (see the section on Data Entry below for more on the Effectiveness Matrix), as shown in the figure
below:
In a sensitivity analysis, we will want to see how sensitive the results are to higher or
lower values of the effectiveness. One might, for example, run a sensitivity analysis with the
effectiveness one standard deviation higher or lower than the average value. Standard deviations or
other statistical measures were not reported in this particular study, so in this case, we might choose to
test values that are 50 percent higher and lower than the reported value, or, 15.5 percent and 46.5
percent.
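The bounds can be checked with a one-line calculation; this sketch simply applies the ±50 percent band to the reported value:

```python
# ±50% sensitivity band around the reported 31% effectiveness.
reported = 0.31
low, high = reported * 0.5, reported * 1.5
print(f"{low:.1%}, {high:.1%}")  # → 15.5%, 46.5%
```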
These are the steps to running this sensitivity analysis:
1) On the Interface page of the model, open your Big Sisters scenario:
7 MIT Poverty Action Lab, 2005, Making Schools Work for Marginalized Children: Evidence from an inexpensive and effective program in India. Policy briefcase 2, MIT, Poverty Action Lab.
2) On the Data and Calculations page of the model, set the effectiveness to the low value, 15.5 percent,
by entering this value in the effectiveness matrix (make sure to remember the original value, for
example by writing the formula =15.5% + 0*31%, which records the original value without affecting
the value in the cell):
3) Turn back to the Interface page of the model and save the scenario with a name that reminds you
that it is the lower value of the sensitivity:
4) Back on the Data and Calculations page, set the
effectiveness to the high value, 46.5 percent:
5) On the Interface page, save the scenario again,
this time with the name indicating that the
effectiveness value is the high end of the
sensitivity range:
6) MAKE SURE to reset the effectiveness level back to the original value!
7) Now you can compare the three sensitivity scenarios by using the “Compare scenarios” feature on
the Interface page. Put the three Big Sisters scenarios into the “Compare scenarios” graph. You can
now go through the different model results and see how your high and low effectiveness values have
changed the results. The example below shows the number of additional children to pass the NEA
Math assessment, but, using the “select result” box, you can compare a large number of results.
Sensitivity analysis example 2 – Sensitivity analysis of costs.
Another uncertainty may be around costs. The program in India was very inexpensive – adding a mere 3
percent to the average per pupil costs, or, $2.25 per child compared to an annual per pupil expenditure
level of $78. Most of the program’s costs depend on the salaries paid to the remedial teachers. In India,
the program paid $10-$15 a month to young women from the villages with secondary education.
Whether it is possible to replicate this kind of low cost in Ghana may be open to question. One sensitivity
analysis may be to look at a maximum and a minimum salary for the Big Sisters. The value for Big Sister
salaries in the Data and Calculations page is $567 per year, equal to half of the salary of an unqualified
teacher. But, in India, the remedial teachers were paid even less than this. As an alternative, let’s set the
salary for the Big Sisters to one-third of the unqualified teacher salary, or, $374.
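The salary alternatives can be sketched as follows. The full unqualified-teacher salary is inferred from the text ($567 is stated to be half of it); the small gap between the computed one-third ($378) and the quoted $374 suggests the model rounds from a more precise salary figure elsewhere.

```python
# Hedged sketch of the two Big Sister salary levels compared here.
# unqualified_salary is inferred, not a value quoted in the text.
unqualified_salary = 2 * 567           # inferred annual salary, US$
half_salary = unqualified_salary / 2   # current Big Sister salary: 567
third_salary = unqualified_salary / 3  # alternative, ~378 (text quotes 374)
```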
The steps to running this sensitivity analysis are:
1) On the Data and Calculations page, set the Big Sister salary to
one-third of the unqualified teacher salary:
2) On the Interface page, save the scenario:
3) Compare this scenario with
others in the Compare Scenario
graph:
5.8 Adding data to SEE – Data and Calculations page
Adding data to SEE is an entirely separate activity from working with the model to make scenarios and
requires different expertise. Both require an insight into numbers; to make useful scenarios, a user
needs higher-level policy insight. To add data, a user needs deeper knowledge of a country’s statistics.
SEE requires data that are commonly available in most countries. If the basic data is not available, it will
not be possible to use this model. The data needed come from (a) the school administrative system or
the Education Management Information System (EMIS); (b) a recent household survey; (c) the education
budget plan; (d) population projections; and (e) information on intervention effectiveness, which
requires various data sources and will be addressed in more detail in this chapter. This section
describes exactly which data are needed for SEE and where to input them in the model, and gives
guidance on where to find them.
The better the quality of the data, the more robust the output of the model. If the data are out of
date, or if they are of poor quality, the output will be less certain. There is no country in the world where
the data is perfect; it is in the nature of working with numbers and projections that there is uncertainty.
We therefore suggest that users try to obtain the best data possible, always use common sense and field
knowledge to evaluate whether the data and the model output make sense, and conduct sensitivity
analyses to find out how much the results depend on the exactitude of the data.
All data for SEE are entered in the ‘orange section’, which is located in the first columns of the SEE ‘Data
and calculations’ page (see tabs on the bottom of the Excel workbook). The top of the data section looks
like the picture below:
[Screenshot: the top of the orange data-entry section for Ghana. It shows the basic information block
(start year 2012, first grade projected 1, official entry age 6, last grade projected 6), the names of the
three risk groups (Girls north, Boys north, Rest of Ghana), buttons to save data to and retrieve data from
the archive, pupil data by grade and risk group (including % non-public pupils and % of children who will
enter the first grade projected), and the optional learning-outcomes block, where up to two subjects or
tests can be entered – here NEA Math and NEA English, each tested in grade 6, with pass rates by risk
group.]
Specifically, SEE requires the data listed in Table 9. One way to get all the data together is to print out this
page and check the data items off one by one and then enter them into the appropriate cells in the
orange section. Another way is to go through the data entry step by step, as explained below. Experience
from the pilot countries suggests that compiling and entering all the data can take up to a week of
concentrated work with one or two experts.
For training, you may find it useful to empty the orange section in the model and follow the
information in this chapter, filling in the data as you go. (Please save the original data, for example, on the
‘Data archive’ sheet in the model.)
The remainder of this chapter can be used as a reference guide to look up what is needed on each data
piece. The indicators are arranged in the order they appear in the orange data section, going roughly
from top to bottom and from left to right.
Tip: As a general habit, insert comments on the source and the calculation of all data you add to SEE. If
you do not do this, you will likely not be able to reconstruct where you obtained the numbers, should
anyone ask you to verify them.
Getting started – importing from the Archive and saving data to the Archive
The SEE model has an Archive of data from different countries and
regions, where SEE has been applied. This allows users to pull up
data from other applications, and allows you to save your data so
that you can use it later.
You can save your data to the Archive by clicking on the “Save data”
button in the A column of the Data and Calculations page.
You can retrieve data by clicking on the drop-down menu in cell A12
(highlighted in the figure) and clicking on the name of the desired
country or region, and then clicking the “Retrieve data” button.
You can start with a blank data page by selecting “Blank” at the top
of the drop-down menu in cell A12, and clicking “Retrieve data”.
This action pastes blank values into all of the data cells.
Table 9. List of data needed for full SEE usage (some of these are optional) – typical data sources
denoted by letters: A = administrative data; H = household survey data; B = education budget data; P =
population estimates; E = intervention effectiveness
BASIC INFORMATION
Name of country or region
Names of risk groups
Start year
School entry age
Number of grades to be projected
First grade to be projected
Age at first grade to be projected
% private pupils (A)
% of children who will enter first grade projected (H)
Pupils by grade (A)
Repetition rates by grade (A or H)
Promotion rates by grade (A or H)
VULNERABILITY BARRIERS
Names of up to seven vulnerability barriers
Prevalence of vulnerability barriers by risk group (H)
Odds ratios of non-entry, dropout, repetition and poor
learning outcomes for children with vulnerability
barriers (H)
POPULATION
Population of age of first grade projected, by risk
group in start year
School-aged population (for all grades projected) by
risk group
Population official age of last grade projected by risk
group
Population growth rate (average)
LEARNING BENCHMARKS
Learning benchmark 1 (optional)
Learning benchmark 2 (optional)
RESOURCES FOR PUPILS
Pupil-teacher ratio (A)
% trained teachers (A)
Pupil-classroom ratio (A)
Books per pupil (A)
Total primary education expenditure for risk group
OTHER INFORMATION (optional)
Student attendance rate (A or H)
Teacher attendance rate (A or H)
MoRES ENABLING FACTORS (optional)
Enabling social norms
Enabling legislation and policy
Enabling budget/expenditure
Enabling management/coordination
INFORMATION FOR OUT-OF-SCHOOL PROJECTIONS
Pupils by age and grade (optional, for
OOS children computations only, H)
Children ever in school by age (optional, for OOS
children computations only, H)
School-aged population start year; projections to start
year + 10; past estimates
BASIC INFORMATION ABOUT INTERVENTIONS
List of interventions
Vulnerable group targeted by intervention
Age/pupil group targeted by intervention
Type of intervention (School, teacher, book, other)
Number of children reached per unit of intervention
Initial coverage of interventions (A and H)
Costs per unit of intervention (B)
Lifetime per unit of intervention (A)
SPECIFIC TECHNICAL INFORMATION ABOUT
INTERVENTIONS
Intervention effectiveness (E)
Basic information
Name of region (cell B1 in the ‘data and calculations’ page – required)
Provide the name of the country or region that the simulations apply to here. This name will be used to
update the ‘Results’ graphs. The name can be, for example, a country, a province or a district. In the
default model, the name is ‘Ghana’.
After providing the region name, there are a few basic pieces of information that SEE needs in the top
right set of boxes:
Start year (required)
The start year is the beginning point of the simulations. It should be as recent as possible, so the
simulations are up to date. It is not necessary to have all the data from the same year. For example, you
can combine administrative data from 2010 with a household survey from 2008 or 2009, but try to stay
within two or three years.
Meta data on grades included in the projection
The SEE model is set up to be flexible and allows the user to input information about the school system
and the grades that are to be projected:
1) The official entry age of the first grade projected (say, age 6 for grade 1 of primary; or age 11 for grade 1 of
secondary);
2) The number for the first grade projected (in the May 2015 version of the model, this grade must be
grade 1; but later versions will allow greater flexibility);
3) The last grade projected – or, the last grade to be included in the projections. This is the grade that is
used for various results related to last grade, such as the “completion rate”, “survival to last grade”.
Also the “gross enrollment rate” will include the grades from first to last grade projected as
designated here. The model can project up to nine grades, but a user can choose to project a smaller
number, say 4, 5, or 6.
The model can include grades for either primary/basic or secondary; it cannot include both.
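As a mental model, the basic-information block described above can be thought of as a small configuration. The field names here are illustrative, not the model's actual cell references:

```python
# Hedged sketch of the projection metadata SEE asks for.
basic_info = {
    "start_year": 2012,
    "first_grade_projected": 1,   # must be grade 1 in the May 2015 version
    "official_entry_age": 6,      # entry age of the first grade projected
    "last_grade_projected": 6,    # the model supports up to nine grades
}
assert 1 <= basic_info["last_grade_projected"] <= 9
print(basic_info["last_grade_projected"] - basic_info["first_grade_projected"] + 1)  # → 6
```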
To date, all applications of SEE have focused on primary school, or basic education through grade 9.
However, a user wishing to focus on secondary school can do so, simply by designating the secondary
grades and adding all further data for secondary school.
Risk groups (required)
SEE allows for up to four distinct risk groups, which are set by the user. The intervention scenarios and
projections are created for each risk group separately. There are no hard rules for identifying risk
groups. You might choose only one risk group, for example, when using SEE for a district planning
process. Or you might want to split the children into two groups, say boys and girls; this would also be
appropriate for a small-scale application. In the example above, three risk groups have been set: girls in the three northern regions of Ghana, boys in the same regions, and the rest of Ghana.
If you are using SEE to look at inequity, you should identify multiple risk groups. You might identify up to
three groups of children who are clearly more marginalized or clearly have worse education outcomes
than the average and set the fourth risk group as the comparison group. Local knowledge and spending
some time analyzing the data will be most useful here. A bottleneck analysis, out-of-school children analysis or any other analysis where education outcomes of different groups are compared will also be
helpful. The important criteria for selecting risk groups are:
• There should be clear education outcome differences between the risk groups.
• The risk groups should constitute a significant group of children, for example, 5%–20% of the school-aged population.
• Data should be available for the risk groups.
When defining risk groups, remember that another way to handle inequity is with the vulnerability
barriers and the odds ratios of negative school outcomes for children who face those vulnerability
barriers (see Section 8.5 on vulnerability barriers).
A useful risk group division comes from the Burkina Faso SEE training workshop (September 2014), shown in Table 10. The three higher-risk groups – children in urban Sahel, rural Sahel, and the rural rest of Burkina Faso – make up 2 percent, 4 percent and 61 percent of the school-aged population, respectively. Poverty rates in these three groups are markedly higher than in the comparison group (the urban rest of Burkina Faso).
Table 10. Risk group division used in Burkina Faso SEE training, September 2014

Risk group              % of all school-aged children    Poverty rate
Sahel Urbain                         2%                      31%
Sahel Rural                          4%                      95%
Reste du BF Rural                   61%                      72%
Reste du BF Urbain                  33%                      10%
Source: SEE archive, country saved as “Burkina Faso 9/14”
Data on pupils and flows
The next data table, on pupils and flows, holds the information on the distribution between public and private pupils, the school entry rate, the number of pupils by grade, and promotion and repetition rates. This table comprises cells B10:F40.
Percentage of non-public pupils (optional, A)
In cells B11:F11 you need to enter the percentage of non-public pupils in each risk group, regardless of
whether they are included in your pupil numbers or not. This will allow the SEE model to appropriately
account for education that is being provided outside of the public sphere. If this is left blank the model
will assume that all pupils are public pupils.
Percent of children who will enter first grade projected (optional, A or H)
The SEE model computes basic access to education – the likelihood that a child will start school (primary
or secondary). The benchmark data for this calculation is the percentage of children who will enter the
first grade projected. If these cells are left blank, SEE will compute the gross intake rate (based on pupil
numbers) as a proxy for the percent of children who will enter school.
Several kinds of data are possible for these values:
1) The preferred numbers come from household surveys, computed as the percentage of children who have ever entered school (or ever entered the first grade projected), usually for an age range slightly above the official age for the first projected grade, to account for over-age students. For example, for Ghana, the percentage of children who enter the first grade refers to the percent of 12-year-olds who had ever gone to school according to the 2008 Demographic and Health Survey.
2) If there is no household survey available, an alternative number is the “gross intake rate” (GIR) from administrative, or EMIS, data. Often, the GIR is unreliable because of large numbers of over- and under-age pupils, unreliable population data, and unreliable counts of repeaters. However, if no household survey data are available, the GIR can be used.
Do the data on pupils include non-public pupils?
Cell D13 is a simple Yes/No toggle that tells the model whether the pupil numbers you enter include private pupils or not. In the calculations, the model includes private pupils when measuring outcomes such as the gross enrolment rate and the completion rate, but excludes the costs of private pupils.
Pupils by grades (required, A)
Pupils are the basic building block of the education life cycle. The number of pupils should include
at least all public enrolment and can include private enrolment if you have answered “yes” to
“Includes private pupils” above. Whether you include private pupils in these numbers is a matter of choice and of which data are most readily available.
These data can usually be obtained from the administrative school data system (such as the Education
Management Information System), often for a recent year. In some cases, these data are very
unreliable or unpublished (or both). In this situation, it might be necessary to estimate the number
of pupils based on grade-specific enrolment in household surveys, multiplied by the school-aged
population.
Repetition and Promotion rates by grade (A or H)
Repetition and promotion rates are required to compute the flows through the education life cycle.
Often, these rates are available in Annual Education reports, but, if they are not, you may have to
compute repetition and promotion rates from data on pupils and repeaters, which can be found in the
EMIS data (see above on pupils).
To compute repetition rates, you need the number of pupils in the previous year and the number of
repeaters in the start year. The repetition rate is equal to:
Repetition rate = (# repeaters in grade N this year) / (# pupils in grade N last year)
To compute promotion rates, you can use this equation:
Promotion rate = (# pupils in grade N this year − # repeaters in grade N this year) / (# pupils in grade (N − 1) last year)
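The two flow-rate formulas can be checked with a short sketch (illustrative Python, not part of the Excel model; the counts are hypothetical):

```python
# Illustrative Python sketch of the two flow-rate formulas above,
# using hypothetical pupil and repeater counts.

def repetition_rate(repeaters_this_year, pupils_last_year):
    """Repeaters in grade N this year / pupils in grade N last year."""
    return repeaters_this_year / pupils_last_year

def promotion_rate(pupils_this_year, repeaters_this_year,
                   pupils_prev_grade_last_year):
    """(Pupils in grade N this year - repeaters in grade N this year)
    / pupils in grade N-1 last year."""
    return (pupils_this_year - repeaters_this_year) / pupils_prev_grade_last_year

# Hypothetical grade-2 flows: last year there were 1,000 pupils in grade 1
# and 900 in grade 2; this year grade 2 holds 880 pupils, of whom 90 repeat.
rep = repetition_rate(90, 900)        # 10% of last year's grade-2 pupils repeat
pro = promotion_rate(880, 90, 1000)   # 79% of last year's grade-1 pupils moved up
```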
Learning outcomes
The SEE model calculates learning outcomes based on the benchmark values and on changes in the learning environment as simulated by the interventions.
It is possible to provide two learning outcomes. These must be the percentage of children reaching a
particular learning benchmark as opposed to average outcomes. For example, the learning outcomes
could be: percent of children passing Swahili in the final primary examination, or, the percentage of
children scoring higher than level 4 in Math on a national assessment.
Only one measure is required for a projection of learning; alternatively, the learning values can be left blank if no learning projection is desired.
For each learning measure, three pieces of information are required:
1. Name of the assessment or exam with the measurement benchmark; this can be anything you
choose.
2. Grade in which the assessment or exam is taken.
3. Percentage of children who pass the learning benchmark by risk group.
Resources for pupils
The SEE model computes the availability of basic school amenities, namely schools or classrooms, teachers (qualified and unqualified) and books. The benchmark values for these resources are provided in the resources for pupils table.
This table also includes a row for the initial per pupil expenditures, which can be used to compare to the
costs of the scenarios and to assess the overall budget increases needed relative to the existing education
budget.
Note that all of these values are optional – if the table is left blank, the model will still make projections,
but will not include values for schools, teachers, books, or initial costs.
Teachers: Pupil-teacher ratio and % trained teachers (optional, A)
The pupil-to-teacher ratio provides information about the human resources for teaching available to
pupils. This indicator is among the most commonly published school statistics.
The ‘% trained teachers’ refers to any category of certification. This category includes teachers with pre-service training, but if there is a programme of in-service certification, teachers who completed this path can also be included.
Pupil-classroom ratio (optional, A)
The pupil-to-classroom ratio is often available from administrative sources. Information about school
infrastructure may also be presented as the number of schools, with schools possibly subdivided into
complete schools (those that provide all primary grades) and incomplete (only early grades). If this is the
case, you may need to make an estimate of the number of classrooms based on the average number of
classrooms in each school.
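The estimate just described can be sketched as follows (illustrative Python, not part of the Excel model; all figures are hypothetical):

```python
# Illustrative Python sketch of the estimate described above: when only
# school counts are published, approximate the number of classrooms as
# schools times the average classrooms per school. Figures are hypothetical.

def pupil_classroom_ratio(pupils, schools, avg_classrooms_per_school):
    estimated_classrooms = schools * avg_classrooms_per_school
    return pupils / estimated_classrooms

# Hypothetical district: 90,000 pupils, 500 schools, 4 classrooms per school.
ratio = pupil_classroom_ratio(90_000, 500, 4)  # 45 pupils per classroom
```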
In the present version of SEE, the pupil-to-classroom ratio, like the pupil-to-teacher and pupil-to-book ratios, is not dynamically linked to education outcomes, but it can be monitored in the results graphs. Some research indicates these ratios are important (see, for example, Glewwe et al., 2011), so they might be included in a future version of the SEE model.
Books per pupil (optional, A)
Information on the number of books per pupil is fairly commonly available in the administrative data on
schools and may be regularly published in the EMIS or the Annual Statistical Yearbook on Education.
Total initial public expenditures per pupil (optional, B)
This indicator includes only public expenditure on schools; it does not include private expenditures by
households. SEE projects only public expenditures in all of the scenarios. This indicator may be
available in the costs and budget section of an Annual Statistical Yearbook for education, possibly at the
sub-national level. If it is not, you can calculate an estimate by summing up all of the public
expenditures on primary education (or preschool or secondary, if you are using SEE for preschool or
secondary education).
If your risk groups are defined by geographical areas, you may be able to find regional public
expenditures. If these are not available, you may need to assume an equal average cost per pupil, and
distribute the initial public education expenditure in proportion to the number of pupils.
If you are using SEE at the district level, the average cost per pupil may be equal to the capitation grant,
or it may be possible to obtain the total district school budget. Alternatively, you can take the national-level expenditure, divide it by the number of pupils nationally to obtain an estimate of per-pupil expenditure, and then multiply that estimate by the number of pupils in the district.
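That fallback can be sketched as follows (illustrative Python, not part of the Excel model; all figures and the currency are hypothetical):

```python
# Illustrative Python sketch of the fallback described above: spread the
# national per-pupil average over the district's pupils.
# All figures are hypothetical.

def district_expenditure_estimate(national_expenditure, national_pupils,
                                  district_pupils):
    per_pupil = national_expenditure / national_pupils
    return per_pupil * district_pupils

# Hypothetical: 500 million (national currency) spent on 4 million pupils
# nationally, applied to a district with 25,000 pupils.
est = district_expenditure_estimate(500_000_000, 4_000_000, 25_000)
# per pupil = 125, so the district estimate is 3,125,000
```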
Student and teacher attendance rates (optional, A)
Student and teacher attendance rates (or their inverse, absenteeism) can be important determinants of learning. Many countries do not record attendance rates, but some do via School Report Cards. Adding these data is optional: in this version of SEE, attendance rates are not (yet) connected to the dynamics of the model, so they are neither linked to outcomes nor affected by interventions.
Vulnerability barriers (optional)
As mentioned under the information about risk groups, there is a second level of inequity included in SEE:
vulnerability barriers. These emerged because, in practice, we found that users tend to define risk groups
as geographical regions, but barriers to school are usually caused by factors other than location.
Vulnerability barriers are characteristics that can worsen school outcomes, for example, poverty,
gender, speaking a minority language, disability or nomadism. Adding vulnerability barriers is optional.
Adding them allows you to link certain interventions to just those barriers, thus, it can be a very useful
component of targeting inequity. You can read more about vulnerability barriers in the description of the
model in section 3.5. For each vulnerability barrier, you will need four pieces of information as shown
here and explained in more detail after the illustrations:
1. Names of the vulnerability barriers.
2. Prevalence of the vulnerability in each risk group (the percentage of children in each risk group who are affected by the vulnerability barrier).
3. Relative risk of worsened school outcomes for children with barriers, compared to average.
4. Vulnerability barrier links to interventions – set in the interventions matrix (see below).

[Illustration: the interventions list from the Ghana model – e.g. new public schools, wing schools, teacher pre-service and in-service training, head teacher training, school management committee training, Big Sister remedial assistants – each linked to a targeted barrier (‘All’ or a specific barrier such as poverty).]
1. Names of the vulnerability categories (barriers). Up to seven vulnerability barriers are possible. Commonly added vulnerability barriers are: poverty, rural location, minority language, disability and female gender.
You should add some information in a comment about how you define each vulnerability barrier. For example, poverty might be defined as ‘in the bottom income quintile’ or ‘living on less than $2 a day’.
2. Prevalence of the vulnerability in each risk group. This is the percentage of children in each risk group that is affected by the vulnerability barrier. Information about the prevalence of each barrier within risk groups will usually come from household surveys.
3. Odds ratios of worsened education outcomes. The odds ratios are used in four education outcomes that together affect progress through the school life cycle: non-entry, repetition, drop-out and off-track learning rates. The odds ratios express how much more likely a child with vulnerability barrier X is to not progress as desired. The model assumes that the odds ratios are the same for each risk group.
The odds ratios can be computed from household surveys, using a statistical software package (like Stata,
SAS, SPSS or AdePT). It is possible that there is a household survey report that already has the odds ratios
in it, but if not, they need to be computed. This requires some technical skill, so the task is best given to a person who has some experience with statistical software packages and household surveys. An example of a script for the odds ratio computation, using Stata and a DHS household survey, is provided in the box.
If it is not possible to get the actual odds ratios, you may do one of two things. You can enter the value 1 – in this case the model assumes that the vulnerability barrier does not worsen outcomes, but interventions can still be linked to the barriers, so that efforts can be focused on children facing them. A second option is to make an educated expert judgment. In that case, you may want to do a sensitivity analysis to see how sensitive the scenario outcomes are to your estimate.
4. Vulnerability barrier links to interventions. Many interventions are effective at reducing a particular barrier – for example, cash transfers can reduce the impacts of poverty, bicycles can reduce the impacts of remote location, and mobile schools or an adapted school year can reduce the impact of nomadism. In the interventions matrix (see below) there is a column called ‘Vulnerability barrier linked to intervention’ where you can link each intervention to one particular barrier.
If the intervention is of general benefit to all children or pupils (like teachers being present), then link
the intervention to ‘All’ children. To select a link, click on the cell next to the name of the
intervention. A drop-down menu will appear that lists all of the vulnerability barriers (it is linked to
your entries from earlier) and the option ‘All’ for interventions that are general. Select one barrier or ‘All’
with a click.
* this is a simple script that computes the prevalence of two vulnerability factors, poverty and girl sex.
* it also estimates the relative risk of never attendance, dropout, and repetition given the two vulnerability factors.
* you can add other risk factors and edit the file so it connects to the right variable names in the dataset.
* what the variables mean:
* hv106 = highest educational attainment (we're looking for primary)
* hv125 = attended school last year
* hv121 = attended school this year
* hv123 = grade attended this year
* hv127 = grade attended last year
* hv270 = wealth quintile
* hv104 = sex
* hv105 = age
* drop earlier versions of the generated variables, if any (capture avoids an error when they do not exist)
capture drop NAS primarydropout primaryrepetition poor girl
* FIRST generate your dummy variables:
* never attended school
gen NAS = (hv106==0)
replace NAS = . if hv106==8
* primary dropout (those who did not attend this year, but attended primary last year)
gen primarydropout = .
replace primarydropout = 0 if hv125==1 & hv121==2 & hv106==1
replace primarydropout = 1 if hv125==1 & hv121==0 & hv106==1
* primary repetition (those in grades 1-6, with primary attainment, who attended same grade this year as last year)
gen primaryrepetition = .
replace primaryrepetition = 0 if hv106==1 & hv123>=1 & hv123<=6 & hv123 == hv127+1
replace primaryrepetition = 1 if hv106==1 & hv123>=1 & hv123<=6 & hv123 == hv127
gen poor = (hv270==1)
gen girl = (hv104==2)
* SECOND tabulate prevalence of poor and girl in the school age population
sum poor if hv105>=6 & hv105<=17
sum girl if hv105>=6 & hv105<=17
* THIRD run a log-Poisson generalized linear model to estimate the relative risk of NAS, dropout, repetition by vulnerable group.
glm NAS poor girl if hv105>=10 & hv105<=15, fam(poisson) link(log) nolog robust eform
glm primarydropout poor girl, fam(poisson) link(log) nolog robust eform
glm primaryrepetition poor girl, fam(poisson) link(log) nolog robust eform
* LOOK at the confidence intervals and P statistics in the output table!!
* In words, this model says: run a generalized linear model (glm) with the dependent
* variable = NAS; independent variables are poor and girl (you can add more independent
* variables as controls, e.g. urban/rural, region)
* if: run only for a portion of the dataset (age 10-15 = if hv105>=10 & hv105<=15)
* the rest tells it what kind of statistics to run and how to present them; don't edit this.
* Source for code: Karla Lindquist, Senior Statistician in the Division of Geriatrics at UCSF.
* http://www.ats.ucla.edu/stat/stata/faq/relative_risk.htm
School-aged population projections (required)
SEE requires the population at the official age of the first grade projected, the population at the official ages of all the grades projected and the population at the official age of the last grade projected, as well as the projected average population growth rate. The model projects the school-aged population from these starting data and the growth rate. Population data are used to compute future pupil numbers, enrolment rates and completion rates.
Note that, often, the population growth rates of the different risk groups will not be constant over time, nor the same across groups. There are more complex, precise ways to project the population, and if you know how to apply these, you can do so and enter the projected population in the yellow section in cells starting at AM32. If you do not enter data directly here, SEE will make projections based on the initial population and the population growth rate provided.
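The default projection can be sketched as follows (illustrative Python, not part of the Excel model; the numbers are hypothetical):

```python
# Illustrative Python sketch of the default projection described above:
# a constant average growth rate applied to the starting school-aged
# population. Numbers are hypothetical.

def project_population(initial_pop, annual_growth_rate, years):
    """Geometric growth: P(t) = P(0) * (1 + r)**t for t = 0..years."""
    return [initial_pop * (1 + annual_growth_rate) ** t
            for t in range(years + 1)]

# Hypothetical: 100,000 school-aged children growing at 2.5% per year,
# projected over 5 years.
series = project_population(100_000, 0.025, 5)
```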
MoRES enabling factors (optional)
For users working with UNICEF who are also doing a Monitoring Results for Equity System (MoRES) analysis, the model has space to add four of the higher-level MoRES indicators – social norms, legislation and policy, budget and expenditures, and management and coordination. The user can add a name for a specific measure to be used for each of these indicators, and values for those measures, in the SEE model. These indicators do not influence the model results, but can be included in results and graphs as desired.
Adding data about interventions
You can input a list of any interventions (up to 30) relevant to your programme. These may include interventions that are of specific interest to your organization, interventions that have been piloted and whose wider implementation you want to simulate, or interventions already under discussion or being planned in your country. This is the largest and most complex of the data matrices.
The interventions are organized into six groups – construction of schools or classrooms; three categories of teachers; textbook provision; and all other interventions. The reason for separate school-resource categories is that in some countries there are different ways to provide these basic resources (e.g. standard school classrooms and community school classrooms), but within each category they fulfill the same function (e.g. providing a space to hold class). The model tracks the entire stock of each of these categories.
All the other interventions will typically include many that address the needs of marginalized children or
increase quality of schooling – e.g. transfers, remedial teaching, changes to the curriculum, WASH
facilities and so forth.
There is room at the very bottom of the matrix to add special interventions that are linked to particular grades. These interventions require special programming and should not be added unless you plan to make changes in the yellow calculations section.
For each intervention, you will need to provide some information so SEE can compute the projected
impacts of the intervention. These are:
• Name of the intervention
• Vulnerability barriers targeted (as described on pages 80–82).
• Targeted demographic group (i.e. age/pupil groups targeted by the intervention).
• How many children are reached by each unit of the intervention.
• Initial coverage (how many units of the intervention are already in place).
• Costs of interventions divided into construction costs, training costs and recurrent costs.
• Lifetime of the construction, training and recurrent units.
• Effectiveness of the interventions to reduce various intermediate education outcomes.
If you do not have this information, no projections can be made with that intervention.
To begin, some general information is required for the interventions:
• Currency unit for costs
• Assumed inflation rate
• Early grades – for interventions that are only proposed for a sub-set of early grades (e.g. reading programs for young learners), tell the model which early grades will be affected.
Name of the intervention (required)
Any name for the intervention is allowed. It is sensible to be as brief as possible (so that the entire name
is visible in the results graphs) while choosing a name that is clear and unambiguous. It is possible to
provide more explanation and background for each intervention by adding a comment with information.
Targeted barrier (required)
As mentioned above in the explanation of vulnerability barriers, each intervention has to be linked with a vulnerability barrier (including “For all children” if no specific group is targeted) – meaning that the intervention is especially (or only) effective for children with that vulnerability.
In column C, to the right of the name of the intervention, the user selects a targeted barrier from the drop-down box. Note that the list of available vulnerability barriers is equal to the list in the vulnerability barriers table. If the names of the vulnerability barriers in that table are changed, the links in the intervention table must be updated.
Targeted demographic group (required)
An intervention may be targeted to reach only the younger children or the earlier grades. In this column, you can specify which age group is affected by the intervention. To make this selection, click on the cell in this column; a menu will drop down with a list of five demographic groups, as shown in the drop-down menu to the right.
Number of children reached per unit (required)
The types of interventions added in SEE can be quite diverse – from schools to school meals, from deworming treatment to pedagogical interventions and community efforts. In this column, you will tell SEE how many children are reached by each unit of the intervention. Some examples:
• The unit of intervention is ‘a school’; the number of children per unit is how many pupils will (on average) go to each school.
• The intervention is a teacher; the number of children per unit is how many pupils each teacher (on average) will reach.
• The intervention is ‘school meals’; the number of children per unit is one child (note that in this case, the unit is a year of school meals, not one single meal).
Some of these units are obvious – a unit of school meals will benefit one child; but others, while
conceptually intuitive, may be hard to pin down to one number. Take the example of schools.
Around the country or district, the number of children per school varies. Some schools are crowded with
considerably more pupils than they were intended for. What number should be used? The average
number of children per school? Or the intended number of children per school? There is no scientific
answer to this. For the Ghana pilot, we used the average intended number of children per school.
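One way such a figure is typically used – our assumption about the mechanics, not a formula quoted from SEE – is to convert a coverage target into whole intervention units:

```python
# Illustrative Python sketch (our assumption, not a formula quoted from SEE):
# converting a coverage target into whole intervention units, given the
# children-reached-per-unit figure discussed above. Numbers are hypothetical.
import math

def units_needed(children_to_reach, children_per_unit):
    # Round up, since a fraction of a school or teacher cannot be built/hired.
    return math.ceil(children_to_reach / children_per_unit)

schools_needed = units_needed(10_000, 300)  # 34 schools for 10,000 pupils
meals_needed = units_needed(10_000, 1)      # school meals: one unit per child
```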
Initial number of units (required, A and H)
Some of the interventions are already in place for a certain portion of children – certainly this is true of
schools, teachers and books. It is important to enter the initial number of units so that SEE can compute
the marginal impact of additional units and also because initial coverage limits the scope for further
expansion. The initial number of units may come from a variety of sources including Annual Statistical
Yearbooks, other EMIS data, special reports on interventions or even household surveys.
Costs per unit of intervention (required, B)
The interventions included in the model may differ substantially in their cost structure. Building a school
may be very expensive, but the benefits accrue over a long time, typically a few decades. Likewise, training
a teacher is expensive, but that teacher will benefit many pupils for many years. On the other hand, a
school uniform may need to be replaced (on average) every year. We want to account for these
different lifetimes. Because of such differences, SEE recognizes three major categories of costs: construction, training and recurrent. For each category, you will provide the up-front costs and the expected lifetime of the intervention. Some examples:
• School construction. The main cost item for a school is “construction costs” (column K of the intervention table). Often, construction lifetimes are around 20 years (although you can make a different assumption). In the table to the right, the top intervention has a construction cost of 78,000 and a lifetime of 20 years (this is school construction, taken from the Ghana model). In the case of school construction, you may also want to include maintenance costs in the “Annual recurrent cost” column O.
• Teachers. There are two types of costs: first, the training of the teacher, and second, the annual salary costs. In the example to the right, the fourth intervention has a training cost of 5,000 (this is training a qualified teacher, taken from the Ghana model), a lifetime of 10 years (assuming that teachers remain on their posts an average of ten years), and an annual cost – the salary – of 2,024.
• Annually recurrent programs, such as school meals, uniforms and scholarships. All of these programs would have costs only in the “Annual recurrent cost/unit” column.
Although at first sight it appears cumbersome to have multiple types of costs, in practice it helps users be
more specific about costing interventions. This structure was developed at the request of early users of
SEE.
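One simple way to compare interventions with such different cost structures – our simplification, not necessarily SEE's internal formula – is to annualize each one-off cost over its lifetime and add the recurrent cost:

```python
# Illustrative Python sketch (our simplification, not SEE's internal formula):
# annualize each one-off cost over its lifetime, then add the recurrent cost,
# so interventions with different cost structures become comparable.

def annualized_cost(construction=0.0, construction_life=1,
                    training=0.0, training_life=1, recurrent=0.0):
    return (construction / construction_life
            + training / training_life
            + recurrent)

# The two Ghana-style examples above:
school = annualized_cost(construction=78_000, construction_life=20)
teacher = annualized_cost(training=5_000, training_life=10, recurrent=2_024)
# school -> 3,900 per year; teacher -> 2,524 per year
```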
The cost information for interventions is likely to come from multiple sources. Some costs data might
be available from the Ministry of Education or the Ministry of Planning. Sometimes, this information is
available in an Education Sector Plan (e.g. the salaries of teachers, the costs of constructing a school).
Often, special documents will need to be requested or estimated. This may be particularly true for
interventions that are new to the country or region. In Ghana and Burkina Faso, we found that the
costing information needed to be collected from a variety of sources, including planning personnel or
documents at the Ministry of Education, from donors or reports of programme pilots in the field. Overall,
we found that it was possible to compile reasonable country-specific numbers on costs in both countries.
When you think of what types of costs to include, try to be inclusive and sensible. New schools cost money
to construct (construction costs), but subsequently there are maintenance (recurrent) costs.
Intervention effectiveness (E)
Columns O-S in the intervention matrix are for the effectiveness of interventions. Recall from an earlier chapter that effectiveness is how much the intervention will change an education gap if the intervention reaches the children it is intended for and is well implemented. It is thus a measure of the maximum impact that an intervention can have, as discussed in the description of the model in Chapter 3. That chapter also provides more background about the literature (the evidence) on effectiveness and some efforts to compile that information. Here, we will focus on two very practical matters:
A. How to read the effectiveness matrix, so you know where to fill what.
B. How to get estimates of effectiveness.
[Illustration: the intervention effectiveness matrix, with the list of interventions (linked to the names in column B) alongside columns for the five intermediate outcome gaps that can be reduced.]
Reading the intervention effectiveness matrix
The image above shows the effectiveness values from the Ghana model. At the right is the list of
interventions (linked to the list provided by the user in column B). To the left of the intervention names
the matrix has columns headed by the names of gaps that can be reduced through interventions. The
numbers below the gap names show the effectiveness – these are the numbers we are after!
The list of gaps is fixed because the gaps are hard-wired dynamically to specific parts of the life cycle.
The gaps are:
1. Non-utilization of a nearby school
2. Late entry
3. Repetition
4. Dropout
5. Learning gaps
For each intervention and gap, SEE will look for a value to calculate the outcomes. Note that many of these
combinations are zero. This means there is no effect for that gap-intervention nexus. Some of the
interventions have positive values for multiple gaps. For example, kindergartens reduce non-entry, late
entry, repetition, dropout and off-track learning rates. The question is: How can you get estimates for
these effectiveness values?
How to get estimates of effectiveness
This is certainly the trickiest part of the whole data collection effort: the information is spread over diffuse sources, the values differ by source, and the information is seldom reported directly in the format needed by SEE.
In principle, you would want to find a pilot study or a research study that looked at the intervention,
measured its effects and provided results you can input to SEE. You may prefer to use studies from
your own country or from similar, neighbouring countries, but you will often need to include information
from farther afield. You should select only studies that carefully ensure that only real effects were
measured, and that outcomes are not the result of any other, confounding changes. The best kind of study
is a randomized controlled trial where a group with the intervention is compared to a group without.
But before-and-after studies may also be useful, and in a pinch, trend analyses might be all that you
have (e.g. studies that look at how much school intake increased after school fees were dropped). Where
to find these studies? Here are a few sources:

• World Bank database of intervention effectiveness (IE²), launched in December 2012. This
database contains a short synopsis of about 300 studies, organized by intervention type, and a
summary of the impacts of the interventions. The summaries are not standardized, so the user
will need to make additional computations to find effectiveness values for the model.

• UNICEF (2012b) also reviewed 300 studies and found 120 to have sufficient information for a
database of intervention effectiveness. The UNICEF database already includes computations of
effectiveness, and can be sorted by intervention and gap. The UNICEF database can be requested
from Matthieu Brossard, mbrossard@unicef.org.

• The Abdul Latif Jameel Poverty Action Lab has published summaries of studies of more than 70
interventions on its website, <www.povertyactionlab.org>. Often, the values for intervention
effectiveness can be found in these summaries.
If you are not using the UNICEF database, you will need to translate effects found in studies into the
percentage reduction of a gap because the SEE model interprets effectiveness numbers as reductions
of the gaps. Here is an example of how to compute the percentage reduction of a gap:
In Nepal, many poor children were not entering school. CARE and UNICEF wanted to see if
kindergartens could help. They provided kindergartens to one group, while a control group continued as
before, without kindergartens. After the study, the researchers found that 95% of the children with
kindergarten proceeded to primary school, compared to only 75% of the others. What was the
effectiveness? First, the non-entry gap in absence of the intervention was 25%. Second, the reduction
in non-entry was 20 percentage points. The effectiveness of the intervention was:

    effectiveness = gap reduction / original gap = 20% / 25% = 80%
In other words, the effectiveness equals the ratio of the gap reduction to the gap without
intervention.
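This calculation can be sketched in a few lines of Python (a hypothetical helper for your own data preparation, not part of SEE itself):

```python
# Hypothetical helper that turns study results into a SEE effectiveness value.
# Rates are fractions: 0.25 means a 25% gap.

def effectiveness(gap_without: float, gap_with: float) -> float:
    """Share of the original gap closed by the intervention."""
    if gap_without <= 0:
        raise ValueError("there is no gap to close without the intervention")
    return (gap_without - gap_with) / gap_without

# Nepal kindergarten example from the text: entry rose from 75% to 95%,
# so the non-entry gap fell from 25% to 5%.
print(effectiveness(gap_without=0.25, gap_with=0.05))  # 0.8, i.e. 80%
```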
Using the UNICEF intervention database
There may be multiple studies that provide information on one intervention-gap nexus. For example, the
UNICEF database contains many values for the preschool-dropout nexus (the reduction in dropout is a commonly
studied effect of preschools). If you open the UNICEF database, you will find columns B–K with information on
the studies. The columns I–S show the interventions and outcomes. You can filter here for ‘Preschool’ in the
‘Interventions’, and ‘Dropout’ in the ‘Outcomes’ columns. This will give you something like the table shown below. In
the purple column, you can find the effectiveness already computed (‘% of GAP closed’).
As you can see, the effectiveness of preschool on dropout varies from 13% to 80%, and the average is 43%. For
now, we have taken this average value and entered it into the effectiveness table – you will find this number if
you look at the effectiveness matrix for Ghana above, in the row for “Kindergarten construction & recurrent”,
under ‘Dropout’.
Obviously, taking the average value is a simplistic solution. It would be better if we could distinguish
interventions that are closer to those you intend to implement. You can do this by looking more closely at the
individual studies.
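As a sketch, here is how a simple average and a relevance-weighted average could be computed. The study values and weights below are illustrative placeholders, not the actual database entries:

```python
# Illustrative effectiveness values for one intervention-gap nexus
# (preschool-dropout); the real values come from the UNICEF database's
# '% of GAP closed' column.
studies = [0.13, 0.35, 0.45, 0.80]

simple_average = sum(studies) / len(studies)
print(f"simple average: {simple_average:.0%}")

# If some studies resemble your planned intervention more closely,
# weight them more heavily instead of averaging everything equally.
weights = [1, 2, 2, 1]  # hypothetical relevance weights
weighted = sum(w * e for w, e in zip(weights, studies)) / sum(weights)
print(f"weighted average: {weighted:.0%}")
```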
Another option is to test the SEE model’s sensitivity to the value you put in, so that you know how important it
is to get the value exactly right. See Section 9 on sensitivity analysis.
Information for out-of-school projections
Show/Hide OOS data entry columns
There are special data entry columns for entering age-specific student information for out-of-school
projections. To reveal these columns, click on the ‘Show/Hide OOS data entry columns’ button at
the top of the data entry section. The OOS data columns look like the picture below:
Includes private pupils (Yes/no)
As for pupils by grade, the model requires information on whether private pupils are included in the age-specific pupil numbers, or not. You should enter the same response here as for all pupils by grade.
Pupils by age and grade
Pupils by age and grade are necessary for the out-of-school children computations, because we need to
know how many children of school age are in school. The data are entered in the columns just to the right
of all the information described above.
Sometimes, information on pupils by age and grade is published in an Annual Statistical Yearbook for
Education, or can be obtained through a special request to the Ministry of Education.
If this is not possible, you can get the age distribution of pupils for each grade from household
surveys, using the following two common questions (the exact formulation varies):
• How old is X?
• Is X attending school this year? If so, in what grade?
Using a statistical package, extract the ages for all children attending school, by grade. Then, take this
age distribution (converted into percentages) and multiply it with the total numbers of pupils in each
grade.
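The two steps above can be sketched as follows (all numbers are hypothetical):

```python
# Step 1: age shares of grade 1 attenders, extracted from the household survey.
age_shares = {6: 0.40, 7: 0.35, 8: 0.15, 9: 0.10}
assert abs(sum(age_shares.values()) - 1.0) < 1e-9  # shares must total 100%

# Step 2: multiply the shares by the total number of grade 1 pupils
# (e.g. from the statistical yearbook).
grade1_total = 120_000
pupils_by_age = {age: round(share * grade1_total)
                 for age, share in age_shares.items()}
print(pupils_by_age)  # {6: 48000, 7: 42000, 8: 18000, 9: 12000}
```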
Children ever in school by age
DATA: Initial distributions of attainment, attendance - ONLY for OOS computations

For OOS children, SEE computes children who will never enter school, will enter late and will drop
out. To make the never-entry and late-entry calculations, we need to know the age pattern of school
entry, which can be estimated based on the percentage of children who were ever in school, by age.
For the dropout calculations, we need school attendance by age. Both indicators can typically be
extracted from household surveys. The data entry (here with the Ghana values) looks like this:

         Initial ever in school                    Initial age-specific attendance rate
         Girls north  Boys north  Rest of Ghana    Girls north  Boys north  Rest of Ghana
Age 6       48%          44%          48%             45%          40%          47%
Age 7       59%          62%          70%             56%          59%          69%
Age 8       70%          66%          87%             64%          61%          85%
Age 9       68%          78%          94%             67%          73%          91%
Age 10      76%          76%          95%             70%          70%          91%
Age 11      75%          79%          98%             71%          69%          94%
Age 12      81%          80%          98%             75%          73%          93%
Age 13      74%          79%          97%             66%          71%          93%
Age 14      78%          80%          99%             69%          68%          89%
Age 15      76%          76%          97%             66%          62%          83%
Age 16      76%          76%          97%             66%          62%          83%
Age 17      76%          76%          97%             66%          62%          83%
Age 18      76%          76%          97%             66%          62%          83%
The data for ‘ever in school’ rates can be computed using the following questions or their variants:
• How old is X?
• Did X ever attend school?

The data for the school attendance rates can be computed from:
• How old is X?
• Is X attending school this year? (Filter for primary school attendance.)
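A sketch of how the two indicators could be tabulated from survey microdata (the record layout and values below are hypothetical; real surveys use coded variables):

```python
# One record per child, built from the survey questions above.
records = [
    {"age": 8, "ever_attended": True,  "attending_now": True},
    {"age": 8, "ever_attended": True,  "attending_now": False},
    {"age": 8, "ever_attended": False, "attending_now": False},
    {"age": 9, "ever_attended": True,  "attending_now": True},
]

def rates_for_age(records, age):
    """Return (ever-in-school rate, current attendance rate) for one age."""
    cohort = [r for r in records if r["age"] == age]
    ever = sum(r["ever_attended"] for r in cohort) / len(cohort)
    attending = sum(r["attending_now"] for r in cohort) / len(cohort)
    return ever, attending

ever, attending = rates_for_age(records, age=8)
print(f"age 8: ever in school {ever:.0%}, attending now {attending:.0%}")
```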
Population by age in start year
The population by age can be obtained from single-year population estimates based on a recent
population census. Often, single-year populations are available in a census document, but sometimes a
special request will need to be made to the unit or institution in the country that deals with
population data.
If that single-year population is not available for your risk group, you may be able to estimate the
numbers using the national single-age population and the risk group's share of the national
population. To get the risk group single-age population, simply multiply the national numbers for the
single-age population by that percentage.
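A sketch of this fallback estimate (all numbers are hypothetical):

```python
# National single-age population and the risk group's share of the
# national population, e.g. from the census.
national_pop_by_age = {6: 800_000, 7: 790_000, 8: 780_000}
risk_group_share = 0.12

# Risk-group single-age population = national numbers x the share.
risk_group_pop_by_age = {age: round(pop * risk_group_share)
                         for age, pop in national_pop_by_age.items()}
print(risk_group_pop_by_age)  # {6: 96000, 7: 94800, 8: 93600}
```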
References
Abeberese, Ama Baafra, Todd Kumler and Leigh Linden, ‘Improving Reading Skills by Encouraging Children
to Read: A randomized evaluation of the Sa Aklat Sisikat Reading Program in the Philippines’,
Working paper no. 17185, National Bureau of Economic Research, Cambridge, Mass., June 2011.
Ahmed, Akhter and Carlo del Ninno, ‘The Food for Education Program in Bangladesh: An evaluation of its
impact on educational attainment and food security’, Discussion paper 138, International Food
Policy Research Institute, Washington, D.C., 2002.
Angrist, Joshua D., and Victor Lavy, ‘Does Teacher Training Affect Pupil Learning? Evidence from
matched comparisons in Jerusalem public schools’, Journal of Labor Economics, vol. 19, no. 2,
April 2001, pp. 343–369.
Attanasio, Orazio, Costas Meghir and Ana Santiago, ‘Education Choices in Mexico: Using a structural
model and a randomized experiment to evaluate PROGRESA’, The Review of Economic Studies
Limited, vol. 79, no. 1, 1 January 2012, pp. 37–66.
Banerjee, Abhijit, Rukmini Banerji, Esther Duflo, Rachel Glennerster and Stuti Khemani, ‘Pitfalls of
Participatory Programs: Evidence from a randomized evaluation in education in India’, American
Economic Journal: Economic Policy, vol. 2, no. 1, 2010, pp. 1–30.
Banerjee, Abhijit, Shawn Cole, Esther Duflo and Leigh Linden, ‘Remedying Education: Evidence from two
randomized experiments in India’, Working paper no. 11904, National Bureau of Economic
Research, Cambridge, Mass., 2005.
Leherr (2009)
Blok, H., R. Fukkink, E. Gebhardt and P. Leseman, ‘The Relevance of Delivery Mode and Other
Programme Characteristics for the Effectiveness of Early Childhood Intervention’, International
Journal of Behavioral Development, vol. 29, no. 1, 2005, pp. 35–47. (Cited in Kagitcibasi et al., 2009.)
Burde, Dana and Leigh Linden, ‘The Effect of Proximity on School Enrollment: Evidence from a
randomized controlled trial in Afghanistan’, Center for Global Development, Washington, D.C.,
2009.
Kagitcibasi, C., D. Sunar, S. Bekman, N. Baydar and Z. Cemalcilar, ‘Continuing Effects of Early
Enrichment in Adult Life: The Turkish Early Enrichment Project 22 years later’, Journal of Applied
Developmental Psychology, 2009.
Crouch, Luis A., ‘A Simplified Linear Programming Approach to the Estimation of Enrollment Transition
Rates: Estimating rates with minimal data availability’, Economics of Education Review, vol. 10,
no. 3, 1991, pp. 259–269.
DevInfo, ‘MBB Support’, website for the Marginal Budgeting for Bottlenecks tool, 2015,
<www.devinfolive.info/mbb/ mbbsupport/index.php>, accessed April 28, 2015.
Duflo, Esther and R. Hanna. ‘Monitoring Works: Getting teachers to come to school’, Working paper no.
11880, National Bureau of Economic Research, Cambridge, Mass., 2005.
Duflo, Esther, et al., ‘Education and HIV/AIDS Prevention: Evidence from a randomized evaluation in
Western Kenya’, Policy Research Working Paper 4024, World Bank, October 2006.
Finan, Timothy, et al., ‘Impact Evaluation of WFP School Feeding Programmes in Kenya (1999-2008): A
mixed-methods approach’, vol. 1, Full Evaluation Report, World Food Programme, Rome, March
2010.
Glewwe, Paul W., et al., ‘School Resources and Educational Outcomes In Developing Countries: A review
of the literature from 1990 to 2010’, Working paper no. 17554, National Bureau of Economic
Research, Cambridge, Mass., 2011.
Global Partnership for Education, ‘Results for Learning Report: Fostering evidence-based dialogue to
monitor access and quality in education’, Global Partnership for Education, Washington, DC,
2012.
Kagitcibasi, Cigdem, Diane Sunar, and Sevda Bekman, ‘Long-term Effects of Early Intervention: Turkish
low-income mothers and children’, Journal of Applied Developmental Psychology, vol. 22 (4),
2001, pp. 333-361.
Lal, Sunder, and Raj Wati, ‘Non-Formal Preschool Education–An Effort to Enhance School Enrollment’,
Paper presented for the National Conference on Research on Integrated Child Development
Services, National Institute for Public Cooperation in Child Development, New Delhi, February 25–29, 1986.
Leroy, J., P. Gadsen, M. Guijaro, 2011, The Impact Of Daycare Programs On Child Health, Nutrition And
Development In Developing Countries: A Systematic Review, International Initiative For Impact
Evaluation
Maluccio, John A., and Rafael Flores, ‘Impact Evaluation of a Conditional Cash Transfer Program: The
Nicaraguan red de proteccion social’, International Food Policy Research Institute, Washington,
D.C., July 2004.
Myers, Robert and Cassie Landers, ‘Preparing Children for Schools and Schools for Children’, discussion
paper prepared for the Fifth Meeting of the Consultative Group on Early Childhood Care and
Development, UNESCO, Paris, October, 1989.
National Population Commission (Nigeria) and RTI International, ‘Nigeria Demographic and Health Survey
(DHS) EdData Profile 1990, 2003, and 2008: Education data for decision-making’, National
Population Commission (Nigeria) and RTI International, Washington, D.C., 2011.
Nimnicht, G. and P. E. Posada, ‘The Intellectual Development of Children in Project Promesa’, Report
prepared for the Bernard van Leer Foundation, Centro Internacional de Educación y Desarrollo
Humano, Medellin, Colombia, 1986. Cited in Myers and Landers, 1989.
Petrosino, Anthony et al., ‘Interventions in Developing Nations for Improving Primary and Secondary
School Enrollment of Children: A systematic review’, Campbell Systematic Reviews, Woburn,
Mass., 2012.
Porta, Emilio et al., Assessing Sector Performance and Inequality in Education, World Bank, Washington
D.C., 2011.
Porta, Emilio, and Annababette Wils, ‘Review and Evaluation of Selected Education Projection Models in
Use in 2006’, WP-02- 07, Education Policy and Data Center (FHI 360), Washington, D.C., February
2007.
RTI International, ‘Nigeria Northern Education Initiative (NEI) - Results of the Early Grade Reading
Assessment (EGRA) in Hausa’, RTI, Research Triangle Park, N.C., 2011.
RTI International, ‘Task Order 7 NALAP Formative Evaluation Report, Ghana’, RTI, Research Triangle Park,
N.C., 2011.
Save the Children and United Nations Children’s Fund, ‘What’s the Difference?’, Save the Children,
Kathmandu, Nepal, 2003.
Schiefelbein, Ernesto, and Laurence Wolff, ‘Cost-Effectiveness of Primary School Interventions in English
Speaking East and West Africa: A survey of opinion by education planners and economists’,
Washington, D.C., and Santiago, Chile, 2007.
Steen Nielsen, Nicolai, et al., ‘WFP Cambodia School Feeding 2000-2010: A Mixed Method Impact
Evaluation’, World Food Programme, Rome, November 2010.
Tanahashi, 1978
UNESCO Institute for Statistics, ‘Reaching Out-of-School Children’, UIS, 2012
<www.uis.unesco.org/Education/Pages/ reaching-oosc.aspx>, accessed 14 November 2012.
United Nations Children’s Fund and World Bank, ‘Simulation for Equity in Education (SEE): Background,
methodology and pilot results’, UNICEF, New York, 2013.
United Nations Children’s Fund, ‘Evidence for Strategies to Reach Out-of-School Children and Increase
School Quality – Based on Review of 120 studies’, UNICEF, New York, 2012b. (Draft under
development, available electronically from UNICEF Headquarters Education Section; contact
Jordan Naidoo jnaidoo@unicef.org)
United Nations Children’s Fund, ‘Guide for Application of MoRES framework’, UNICEF, New York, 2012a.
(Draft under development, available from UNICEF Headquarters Education Section; contact Aarti
Saihjee, asaihjee@unicef.org)
United Nations Children’s Fund, ‘Note on Development and Application of Bottleneck Analysis Tool in
Education in Ghana’, UNICEF, 2011.
United Nations Children’s Fund, ‘The Investment Case for Education and Equity’. UNICEF, 2015
United Nations Educational, Scientific and Cultural Organization, EFA Global Monitoring Report 2010:
Reaching the marginalized, UNESCO, Paris, 2010.
United Nations Educational, Scientific and Cultural Organization, Education Policy and Strategy Simulation
Model (EPSSim) description and downloadable model, UNESCO, 2012,
<http://inesm.education.unesco.org/en/esm-library/esm/epssim>, accessed 14 November 2012.
World Bank (2007) Le Système Éducatif Tchadien: Eléments de Diagnostic Pour Une Politique Educative
Nouvelle et Une Meilleure Efficacité de la Dépense Publique. World Bank, Washington DC, USA.
World Bank, ‘Education Financial Simulation Model (EFSM)’, description and downloadable model, World
Bank, Washington, D.C., 2012a, <http://inesm.education.unesco.org/en/esm-library/esm/education-financial-simulation-model#tabs-tabset-2>, accessed 14 November 2012.
World Bank, IE² Impact Evaluations in Education Database, World Bank, Washington, D.C., 2012b,
<http://datatopics.worldbank.org/EdStatsApps/Edu%20Evaluation/evaluationHome.aspx?sD=E>,
accessed 18 January 2013.
World Bank/ UNICEF/UNFPA/Partnership for Maternal, Newborn and Child Health (2009). Health Systems
for the Millennium Development Goals: Country Needs and Funding Gap. Un-Edited Conference
Version 29-30 October 2009. UNICEF, New York, USA.
<http://www.unicef.org/health/files/MBB_Technical_Background_Global_Costing_HLTF_Final_Draft_30_July.pdf>,
accessed 18 November 2013.